{"qid": 0, "question_text": "Should authentic Inuit sculptures always have an artist's signature to be considered genuine?", "rank": [{"document_id": "doc-::chunk-1", "d_text": "Just to be even safer, make sure that the piece you have an interest in features a Canadian federal government Igloo tag licensing that it was handcrafted by a Canadian Inuit artist. The Inuit sculpture might be signed by the carver either in English or Inuit syllabics but not all genuine pieces are signed. So understand that an anonymous piece might still be certainly authentic.\nSome of these Inuit art galleries also have sites so you might go shopping and purchase genuine Inuit art sculpture from home anywhere in the world. In addition to these street retail More hints specialized galleries, there are now reliable online galleries that likewise specialize in authentic Inuit art.\nSome traveler shops do carry genuine Inuit art along with the other touristy mementos in order to deal with all kinds of tourists. When shopping at these kinds of stores, it is possible to tell apart the genuine pieces from the recreations. Genuine Inuit sculpture is sculpted from stone and therefore should have some weight or mass to it. Stone is also cold to the touch. A recreation made from plastic or resin from a mold will be much lighter in weight and will not be cold to the touch. A recreation will sometimes have a company name on it such as Wolf Originals or Boma and will never ever feature an artist's signature. An genuine Inuit sculpture is a one of a kind piece of artwork and nothing else on the store shelves will look precisely like it. If there are duplicates of a certain piece with exact information, the piece is not genuine. It is most likely not real if a piece looks too perfect in detail with absolute straight bottoms or sides. Of course, if a piece includes a sticker showing that is was made in an Asian nation, then it is certainly a phony. 
There will likewise be a substantial price difference between authentic pieces and the replicas.\nWhere it becomes harder to determine authenticity is with reproductions that are likewise made from stone. This can be a real gray area for those not familiar with genuine Inuit art. They do have mass and might even have some kind of tag showing that they were handcrafted; however, if there are other pieces on the shelves that look too similar in detail, they are more than likely not genuine. If a seller declares that such a piece is authentic, ask to see the official Igloo tag that comes with it, which will have information on the artist, the area where it was made, and the year it was carved.", "score": 50.7177275730036, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "Authentic Inuit sculpture is carved from stone and therefore must have some weight or mass to it. Stone is also cold to the touch. A reproduction made of plastic or resin from a mold will be much lighter in weight and will not be cold to the touch. A reproduction will sometimes have a company name on it, such as Wolf Originals or Boma, and will never feature an artist's signature. A genuine Inuit sculpture is a one-of-a-kind piece of artwork, and nothing else on the shop shelves will look exactly like it. If there are duplicates of a specific piece with exactly the same details, the piece is not genuine. If a piece looks too perfect in detail, with absolutely straight bottoms or sides, it is most likely not real. Naturally, if a piece includes a sticker indicating that it was made in an Asian nation, then it is clearly a phony. There will likewise be a substantial cost difference between genuine pieces and the imitations.\nThis can be a real gray area for those unfamiliar with genuine Inuit art. If a seller claims that such a piece is genuine, ask to see the official Igloo tag that comes with it, which will have information on the artist, the location where it was made, and the year it was carved. 
The authentic pieces with the accompanying official Igloo tags will always be the highest priced and are normally kept on a separate (perhaps even locked) shelf within the store.\nSince Inuit art has been getting more and more worldwide exposure, people may be seeing this Canadian fine art form at museums and galleries situated outside Canada too. If one is fortunate enough to be traveling in the Canadian Arctic, where the Inuit live and make their fantastic artwork, then it can be safely assumed that any Inuit art piece purchased from a local northern shop or directly from an Inuit carver would be genuine. Respectable Inuit art galleries are also listed in Inuit Art Quarterly magazine, which is devoted completely to Inuit art. The Inuit sculpture might be signed by the carver either in English or Inuit syllabics, but not all genuine pieces are signed. Some of these Inuit art galleries likewise have websites, so you can shop for and buy authentic Inuit art sculpture from home anywhere in the world.", "score": 49.766361334726106, "rank": 2}, {"document_id": "doc-::chunk-1", "d_text": "These galleries will have only authentic Inuit art for sale, as they do not deal in replicas or fakes. Just to be even safer, make sure that the piece you are interested in features a Canadian federal government Igloo tag certifying that it was handcrafted by a Canadian Inuit artist. The Inuit sculpture might be signed by the carver either in English or Inuit syllabics; however, not all genuine pieces are signed. Be aware that an unsigned piece might still be undoubtedly genuine.\nSome of these Inuit art galleries likewise have websites, so you can shop for and buy genuine Inuit art sculpture from home anywhere in the world. 
In addition to these street retail specialty galleries, there are now reliable online galleries that likewise specialize in genuine Inuit art.\nSome tourist shops do carry authentic Inuit art in addition to the other touristy souvenirs in order to cater to all types of travelers. When shopping at these types of shops, it is possible to differentiate the genuine pieces from the reproductions. Genuine Inuit sculpture is carved from stone and therefore ought to have some weight or mass to it. Stone is also cold to the touch. A reproduction made of plastic or resin from a mold will be much lighter in weight and will not be cold to the touch. A reproduction will in some cases have a company name on it, such as Wolf Originals or Boma, and will never feature an artist's signature. A genuine Inuit sculpture is a one-of-a-kind piece of artwork, and absolutely nothing else on the shop shelves will look precisely like it. If there are duplicates of a particular piece with exactly the same details, the piece is not authentic. If a piece looks too perfect in detail, with absolutely straight bottoms or sides, it is most likely not real. Naturally, if a piece includes a sticker label showing that it was made in an Asian nation, then it is certainly a phony. There will also be a huge price difference between genuine pieces and the imitations.\nThis can be a real gray area for those unfamiliar with authentic Inuit art. If a seller declares that such a piece is genuine, ask to see the official Igloo tag that comes with it, which will have information on the artist, the location where it was made, and the year it was carved. 
The authentic pieces with the accompanying official Igloo tags will always be the highest priced and are generally kept on a separate (possibly even locked) shelf within the shop.", "score": 47.67405441490127, "rank": 3}, {"document_id": "doc-::chunk-1", "d_text": "The Inuit sculpture may be signed by the carver either in English or Inuit syllabics; however, not all authentic pieces are signed.\nA few of these Inuit art galleries likewise have websites, so you can shop for and purchase authentic Inuit art sculpture from home anywhere in the world. In addition to these street retail specialized galleries, there are now credible online galleries that likewise specialize in genuine Inuit art. These online galleries are a good option for purchasing Inuit art, since their prices are normally lower than those at street retail galleries because of lower overheads. Of course, as with any other shopping on the internet, one should be careful, so when dealing with an online gallery, ensure that their pieces also feature the official Igloo tags to confirm authenticity.\nSome tourist shops do carry authentic Inuit art as well as the other touristy keepsakes in order to cater to all types of tourists. Genuine Inuit sculpture is carved from stone and therefore must have some weight or mass to it. A genuine Inuit sculpture is a one-of-a-kind piece of artwork, and nothing else on the store shelves will look exactly like it.\nThis can be a genuine gray area for those unfamiliar with authentic Inuit art. If a seller claims that such a piece is authentic, ask to see the official Igloo tag that comes with it, which will have information on the artist, the place where it was made, and the year it was carved. 
The genuine pieces with the accompanying official Igloo tags will always be the highest priced and are typically kept on a separate (possibly even locked) shelf within the store.\nConsidering that Inuit art has been getting more and more global exposure, individuals might be seeing this Canadian fine art form at galleries and museums located outside Canada too. If one is fortunate enough to be traveling in the Canadian Arctic, where the Inuit live and make their terrific artwork, then it can be safely assumed that any Inuit art piece bought from a local northern store or directly from an Inuit carver would be authentic. Trusted Inuit art galleries are also listed in Inuit Art Quarterly magazine, which is dedicated entirely to Inuit art. The Inuit sculpture might be signed by the carver either in English or Inuit syllabics, but not all authentic pieces are signed. Some of these Inuit art galleries also have websites, so you can shop for and buy genuine Inuit art sculpture from home anywhere in the world.", "score": 45.3491100065479, "rank": 4}, {"document_id": "doc-::chunk-1", "d_text": "Some tourist stores do carry authentic Inuit art as well as the other touristy keepsakes in order to cater to all types of tourists. Authentic Inuit sculpture is carved from stone and therefore should have some weight or mass to it. An authentic Inuit sculpture is a one-of-a-kind piece of artwork, and absolutely nothing else on the store shelves will look exactly like it.\nWhere it becomes harder to determine authenticity is with reproductions that are likewise made from stone. This can be a genuine gray area for those not familiar with authentic Inuit art. They do have mass and might even have some kind of tag showing that they were handmade; however, if there are other pieces on the shelves that look too similar in detail, they are probably not genuine. 
If a seller declares that such a piece is authentic, ask to see the official Igloo tag that comes with it, which will have information on the artist, the area where it was made, and the year it was carved. If the Igloo tag is not available, move on. The genuine pieces with the accompanying official Igloo tags will always be the highest priced and are normally kept on a separate (perhaps even locked) shelf within the store.\nConsidering that Inuit art has been getting more and more international exposure, individuals might be seeing this Canadian fine art form at museums and galleries situated outside Canada too. If one is lucky enough to be traveling in the Canadian Arctic, where the Inuit live and make their terrific artwork, then it can be safely presumed that any Inuit art piece acquired from a local northern shop or directly from an Inuit carver would be authentic. Trustworthy Inuit art galleries are likewise listed in Inuit Art Quarterly magazine, which is dedicated entirely to Inuit art. The Inuit sculpture might be signed by the carver either in English or Inuit syllabics; however, not all genuine pieces are signed. Some of these Inuit art galleries likewise have websites, so you can shop for and purchase genuine Inuit art sculpture from home anywhere in the world.", "score": 44.2595720512358, "rank": 5}, {"document_id": "doc-::chunk-1", "d_text": "Simply to be even safer, make certain that the piece you are interested in includes a Canadian government Igloo tag certifying that it was handmade by a Canadian Inuit artist. The Inuit sculpture may be signed by the carver either in English or Inuit syllabics, but not all genuine pieces are signed. Be mindful that an unsigned piece might still be undoubtedly authentic.\nSome of these Inuit art galleries also have websites, so you can shop for and buy genuine Inuit art sculpture from home anywhere in the world. 
In addition to these street retail specialty galleries, there are now trusted online galleries that also specialize in authentic Inuit art. Because of lower overheads, these online galleries are an excellent alternative for purchasing Inuit art, given that the prices are usually lower than those at street retail galleries. Obviously, as with any other shopping on the internet, one must be careful, so when dealing with an online gallery, make sure that their pieces likewise include the official Igloo tags to ensure authenticity.\nSome tourist stores do carry genuine Inuit art as well as the other touristy souvenirs in order to accommodate all types of travelers. When shopping at these types of shops, it is possible to tell apart the genuine pieces from the reproductions. Genuine Inuit sculpture is carved from stone and therefore must have some weight or mass to it. Stone is likewise cold to the touch. A reproduction made of plastic or resin from a mold will be much lighter in weight and will not be cold to the touch. A reproduction will sometimes have a business name on it, such as Wolf Originals or Boma, and will never feature an artist's signature. An authentic Inuit sculpture is a one-of-a-kind piece of artwork, and absolutely nothing else on the shop shelves will look exactly like it. If there are duplicates of a particular piece with exactly the same details, the piece is not authentic. If a piece looks too perfect in detail, with absolutely straight bottoms or sides, it is probably not real. Of course, if a piece includes a sticker indicating that it was made in an Asian country, then it is obviously a phony. 
There will likewise be a substantial price difference between authentic pieces and the replicas.\nThis can be a real gray area for those unfamiliar with authentic Inuit art.", "score": 43.79478992005431, "rank": 6}, {"document_id": "doc-::chunk-1", "d_text": "Just to be even more secure, ensure that the piece you have an interest in comes with a Canadian government Igloo tag certifying that it was handcrafted by a Canadian Inuit artist. The Inuit sculpture might be signed by the carver either in English or Inuit syllabics, but not all authentic pieces are signed. Be aware that an unsigned piece might still be certainly genuine.\nSome of these Inuit art galleries likewise have websites, so you can shop for and buy genuine Inuit art sculpture from home anywhere in the world. In addition to these street retail specialized galleries, there are now reliable online galleries that also specialize in genuine Inuit art. These online galleries are a great alternative for buying Inuit art, since their prices are usually lower than those at street retail galleries because of lower overheads. Naturally, as with any other shopping on the internet, one must be careful, so when dealing with an online gallery, make sure that their pieces likewise feature the official Igloo tags to guarantee authenticity.\nSome tourist stores do carry genuine Inuit art as well as the other touristy keepsakes in order to cater to all kinds of tourists. When shopping at these types of shops, it is possible to differentiate the genuine pieces from the reproductions. Authentic Inuit sculpture is carved from stone and therefore must have some weight or mass to it. Stone is also cold to the touch. A reproduction made from plastic or resin from a mold will be much lighter in weight and will not be cold to the touch. A reproduction will sometimes have a company name on it, such as Wolf Originals or Boma, and will never feature an artist's signature. 
A genuine Inuit sculpture is a one-of-a-kind piece of artwork, and nothing else on the store shelves will look precisely like it. If there are duplicates of a particular piece with exactly the same details, the piece is not genuine. If a piece looks too perfect in detail, with absolutely straight bottoms or sides, it is probably not real. Obviously, if a piece includes a sticker label suggesting that it was made in an Asian country, then it is clearly a phony. There will likewise be a huge cost difference between genuine pieces and the replicas.\nThis can be a genuine gray area for those unfamiliar with genuine Inuit art.", "score": 42.32460079453339, "rank": 7}, {"document_id": "doc-::chunk-2", "d_text": "If a seller declares that such a piece is genuine, ask to see the official Igloo tag that comes with it, which will have information on the artist, the place where it was made, and the year it was carved. The genuine pieces with the accompanying official Igloo tags will always be the highest priced and are typically kept on a separate (possibly even locked) shelf within the store.\nGiven that Inuit art has been getting more and more international exposure, people might be seeing this Canadian fine art form at museums and galleries located outside Canada too. If one is lucky enough to be traveling in the Canadian Arctic, where the Inuit live and make their wonderful artwork, then it can be safely presumed that any Inuit art piece purchased from a local northern shop or directly from an Inuit carver would be genuine. Trusted Inuit art galleries are likewise listed in Inuit Art Quarterly magazine, which is dedicated entirely to Inuit art. The Inuit sculpture might be signed by the carver either in English or Inuit syllabics; however, not all authentic pieces are signed. 
Some of these Inuit art galleries likewise have websites, so you can shop for and purchase authentic Inuit art sculpture from home anywhere in the world.", "score": 39.817806170558676, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "Tips on Ways To Purchase and Buy Authentic Canadian Inuit Art (Eskimo Art) Sculptures\nNumerous visitors to Canada will be exposed to Inuit art (Eskimo art) sculptures while exploring the country. These are the stunning handmade sculptures carved from stone by the Inuit artists living in the northern Arctic areas of Canada. While in some of the major Canadian cities (Toronto, Vancouver, Montreal, Ottawa, and Quebec City) or other tourist locations popular with global visitors such as Banff, Inuit sculptures will be seen at various retail shops and displayed at some museums. Considering that Inuit art has been getting a growing amount of global exposure, individuals might be seeing this Canadian fine art form at museums and galleries located outside Canada too. As a result, it will be natural for many travelers and art collectors to decide that they wish to purchase Inuit sculptures as good souvenirs for their homes or as really unique presents for others. Assuming that the intention is to get an authentic piece of Inuit art rather than a low-cost tourist imitation, the question arises: how does one differentiate the real thing from the phonies?\nIt would be pretty frustrating to bring home a piece only to find out later that it isn't authentic or perhaps wasn't even made in Canada. If one is lucky enough to be traveling in the Canadian Arctic, where the Inuit live and make their fantastic artwork, then it can be safely assumed that any Inuit art piece purchased from a local northern shop or directly from an Inuit carver would be genuine. 
One would have to be more careful elsewhere in Canada, especially in tourist areas where all sorts of other Canadian keepsakes such as t-shirts, hockey jerseys, postcards, key chains, maple syrup, and other Native Canadian arts are sold.\nThe safest places to shop for Inuit sculptures, to guarantee authenticity, are always the trusted galleries that specialize in Canadian Inuit art and Eskimo art. Some of these galleries have advertisements in the city tourist guides found in hotels.\nCredible Inuit art galleries are likewise listed in Inuit Art Quarterly magazine, which is devoted entirely to Inuit art. These galleries will usually be located in the downtown tourist areas of major cities. When one walks into these galleries, one will see only Inuit art and perhaps Native art, but none of the other typical tourist souvenirs such as t-shirts or postcards.", "score": 38.81026763504885, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Inuit stone art 15\" Skating / Dancing Bear by Ottokie Samayualie Cape Dorset For Sale\nThis item has been shown 64 times.\nStone art carving by Canadian Inuit artist Ottokie Samayualie of Cape Dorset, Nunavut\nCarved in 2007 while he lived in Cape Dorset, Nunavut. Signed by the artist - comes with the official Gov't issued Igloo Tag of Authenticity\nLovely green stone - smoothly finished and in excellent condition. Measures approx. 15\" high x 11\" x 7\" while standing on one foot. It is approx. 
14\"high x 13\" x 7\" standing on the other foot.Solid standing on either footWill be well packed and shipped Expedited Mail to US addresses", "score": 38.42654370885817, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "Owl by Famous Toonoo Sharky\ncan be reserved, please contact us\nInuit art: Owl\nInuit Artist: Toonoo Sharky\nSize: over 10.5\" long, 2.25\" wide, 5\" tall\nCommunity: Cape Dorset, NU\nThis is a museum quality piece. The lines, proportions and beautiful brown coloration all contribute to making this carving a masterpiece. Toonoo's signature eyes add the usual deep effect he looks for in his compositions.\nThis is an outstanding owl by famous Toonoo Sharky from Cape Dorset. This beautiful owl is new. It has a gorgeous color which is shiny brown. This piece measures 10.5\" long, 2.25\" wide, .5\" tall.\nIt is made out of serpentine. The carving is completely perfect! This stunning sculpture comes with an Igloo Tag (Certified of Authenticity) and also signed on the bottom by the artist. We buy directly from the artist and can offer you better deals.\nToonoo has numerous interntional exhibitions and awards, please contact us for his bio.", "score": 37.638887447383034, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "Inuit Carving – Muskox Horn Mask\nThis is a traditionally designed inuit muskox horn carved mask by Ulukhaktok artist Buddy Alikamik.\nThe artist has signed this one of a kind art with his carving sign, Nutik. Buy Inuit Art Today and support cultural artistry.\nThis skull mask measures 6.5 inches tall, by 7 inches wide. 
The head is 2 inches deep and can be hung on a wall or in a cabinet.", "score": 37.27138322848486, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "Inuit/Eskimo Carved Green Soapstone SEA LION Signed\nDirectory: Archives: Regional Art: Pre 1980: item # 1138824\nPlease refer to our stock # vr40 when inquiring.\nE & M Perez\nCarved of green soapstone, with the lighter textured surface contrasting with the dark and lustrous areas, this nice sculptural form depicts a sea lion poised upon a rock. Measuring 6\" by 4\", it is signed on the underside, as shown in photos. The shape, color, and open work on the carving make this an exceptionally artful and appealing Inuit carving, with very nice variation and marbling in the stone.", "score": 36.22144151249138, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "Traditional soapstone\ncarvings are handcrafted by renowned Canadian Arctic artisans and shipped worldwide.\nNorth Star Inuit Gallery is proud to represent these Inuit artists and offer\ntheir work to the general public. Inuit carvings from Cape Dorset, Nunavut,\nYellowknife, NWT, Canada and other areas of Canada's North.", "score": 34.88744971675273, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "© 2016 Finer Thing Antiques\nInuit signed soapstone carving depicting a mother and child, the mother crouched on her knees clutching a sealskin. Signed \"Lydia Qumak\".\nPrice does not include delivery/shipping or tax. To request these details or any other info regarding this item, please contact us.", "score": 34.68280439157339, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "Posted 3 years ago\nHello fellow Collectors!\nThis item, a soapstone carving from the Canadian north, is up for auction. Does anyone have any recommendations as to how much I should bid? 
I know soapstone art is in demand, but somehow I doubt that means all soapstone art is!\nSome smaller animal sculptures are also included.\nThe scratched-in signature on the bottom says Henry Anavack... I could not find him under artist listings anywhere.\nThis site and the people who belong to it never cease to impress me. Your collective knowledge, advice and even casual comments have been invaluable to me in the past. This site is a prime example of how the internet has truly enhanced lives and brought people together.", "score": 33.71193987843329, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "Inuit Soapstone Carvings North Star Inuit Gallery Soapstone Sculptures\nWelcome to North Star Inuit Gallery:\nSpecializing in Fine Inuit Carvings and Original Drawings.\nProviding quality Inuit art at competitive prices. Payment plan available.\nNorth Star Inuit Gallery's unique claim: We actually travel\nto Inuit communities to connect with the artists. We've spent time with\nmost Inuit artists featured in North Star Inuit Gallery and watched the\ncompletion of many carvings. With this intimate connection, we're thrilled\nto discover the meaning and inspiration behind Inuit carvings and share this\nknowledge with our clients.\nBy supporting Inuit artists directly, we assist Inuit families and communities.\nWhen you purchase Inuit art, you partner in this endeavor.\nInuit art remains a solid investment, despite challenging economic times. Many\ncarvings are auctioned well above the estimated price.\nAlthough we've relocated from the North, our hearts remain connected to\nthe vast Arctic, the land the Inuit traditionally call home. We've\nhad the privilege of hunting, fishing and connecting with our Northern friends\nand marvel at how they've adapted to living in the harshest climate on Earth.\nWhat is it about Inuit art that continues to fascinate and command international\nattention? Is it a primal awareness that occurs when a certain carving catches\nyour eye? 
Is it a primordial connection that awakens the senses as your hand\nrides the surface of the ancient stone? Whatever the sensation, we instinctively\nfeel a connection and are drawn to particular Inuit carvings.\nWhether you are a serious Inuit art collector or purchasing your first Inuit\ncarving, we're confident you will be impressed with our selection of exquisite\nInuit carvings and original drawings. Do you have a special interest in carvings\nby a certain Inuit artist, community or particular subject? Let us know. We'll\ndo our best to contact the Inuit artist directly and match your preference with\nthe perfect Inuit carving, or we'll commission an Inuit carver to design\na piece especially for you. Looking forward to hearing from you!\nNorth Star Inuit Gallery\nInuit Soapstone Carvers, Inuit Art - North Star Inuit Gallery features the finest\nInuit carvings and Inuit soapstone carvings direct from famous Canadian Inuit\ncarvers and artists for collectors and Inuit art lovers.", "score": 33.513641824718704, "rank": 17}, {"document_id": "doc-::chunk-2", "d_text": "A fine layer of bone dust covers everything in Chup's studio: the band saw, grinders and buffers, piles of whalebones and hundreds of pounds of soapstone. He dusts off a brochure showing his carvings, about 60 different figures his wholesaler markets to retailers. All reflect the style of Northwest Alaska Eskimos.\n\"You say, 'I need a walking bear, a standing bear. I need 10 dancers, can you make them all different?' I say sure.\"\nChup signs his work Chupak. He said he added the AK to his name to denote Alaska, because he carves in Alaska.\nThe practice of non-Native artists creating Alaska Native-style art has been controversial among some in the Native community. 
Some local Native carvers, who did not want to be named, have complained that Chup is undercutting them in the marketplace, and that he's misrepresented in some Juneau shops as an Alaska Native.\nChup said he hasn't heard anything about that.\n\"I haven't talked with other carvers in Juneau,\" he said. \"I haven't had any problems. I'm not copying from anyone else.\"\nHe does think there are similarities between Eskimo culture and Cambodian culture that give him an affinity for the style.\n\"Every night they have dancing, with drums, in my country. It's the same thing. It looks similar, close to my people,\" he said. \"They have masks, too, but with different designs.\"", "score": 32.85281673738607, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "This sculpture is hand carved by Mark Tetpon, an Inupiat artist. Every carving is one of a kind, individually carved, and hand signed by the artist.\nAll shipments are insured. Orders over $200 ship free to USA addresses.\nDimensions (W x H x D): 18 x 20 x 4″\nMediums: Alder, Feathers & Walrus Ivories\nMark's works depict sea mammals or birds as they are understood within the spiritual realms of his people. A sculptured polar bear or walrus might be drumming; an honoring mask that depicts a loon or seal's body will be surrounded by a dozen smaller sculptures paying homage to the life of The People. Many of Mark's pieces are showcased in corporate collections. His recent second collaboration, Seal Vision: Shared Spirit, with Don Johnston and Terresa White resulted in a stunning, award-winning piece in the virtual March 2021 Heard Museum Guild Indian Fair and Market. This sculpture took Best of Class and First in Division, Bronze Sculpture, and was purchased for the Heard's permanent collection. 
The first edition now resides at the Portland Art Museum.", "score": 32.84908163144386, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "Frequently Asked Questions\nHow do you ship?\nWe ship the same business day via Canada Post or FedEx. Shipping is FREE in USA and Canada, $45 the rest of the world.\nHow do you protect sculptures?\nMultiple layers of thick styrofoam and bubble wrap are used to ensure safe delivery. All Inuit artworks are insured to their full value.\nWhere do you get your \"stuff\"?\nWe deal directly with Inuit artists and make sure to pay them fair prices.\nDo you offer appraisal service?\nYes, we do. Please check what's included by clicking the following link: Inuit art appraisals\nWhat are your business hours?\nFeel free to email or call us toll free 1.800.457.8110; we are open 7 days a week, 9am-11pm. We usually answer all emails within minutes.\nWhat if I am looking for a specific Inuit sculpture (for example a Loon by Jimmy Iqaluq) and can't find it in your gallery?\nContact us and we'll find almost any Inuit carving for you.\nHow come you sell Inuit art for up to 50% less than most other Inuit art galleries?\nWe acquire carvings directly from the artists and pass the savings on to you.", "score": 32.375236212177846, "rank": 20}, {"document_id": "doc-::chunk-1", "d_text": "This development is especially beneficial for those who are not located near an Inuit art gallery.\nSo if Inuit stone sculpture is new to you, take a look on the internet. You will likely be impressed by the workmanship and artistic appeal of this special art form. 
An entire brand-new world from the Canadian Arctic will be available to you for your enjoyment.\nThe majority of individuals, even avid art fans, hardly ever think about or are even aware of Inuit stone sculptures from the Canadian Arctic north.\nEven so, many individuals who are aware of Inuit stone sculptures are those who have visited Canada in the past and got exposed to this interesting type of aboriginal art while checking out Canadian museums or galleries.\nIf you haven't seen Inuit stone sculpture, there's a lot on offer from the Canadian Arctic. Human topics illustrating the Inuit Arctic way of life are also popular as stone sculptures.", "score": 31.60848406122961, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "Many individuals, even passionate art fans, hardly ever think about or are even aware of Inuit stone sculptures from the Canadian Arctic north.\nEven so, the majority of individuals who are aware of Inuit stone sculptures are those who have visited Canada in the past and got exposed to this fascinating kind of aboriginal art while going to Canadian museums or galleries.\nIf you have not seen Inuit stone sculpture, there's a lot on offer from the Canadian Arctic. Human topics illustrating the Inuit Arctic lifestyle are also popular as stone sculptures.", "score": 31.197802592104015, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "The evaluation of the price of an object of art (or collection) depends above all on its degree of authenticity. To determine whether a good is genuine or not, the buyer may refer to certain documents.\nThe particulars appearing in the certificate of authenticity and the documents of authenticity are governed by regulation. The Marcus Decree of 3 March 1981 imposes standards for the authentication of an object of art or an antique piece of furniture. 
These rules are primarily intended to prevent fraud.\nThe decree thus provides that the seller of works of art or collector's objects shall, when the buyer so requests, 'issue to him an invoice, a receipt, a bill of sale or an extract from the report of the public sale specifying the nature, composition, origin and age of the thing sold'. In case of subsequent litigation concerning the authenticity of the property, the particulars associated with its description will be decisive.\nMentions and guarantees: The regulations distinguish several types of mentions, which do not offer the same guarantees as regards the characteristics of the object concerned. For the seller, breaching the rules that follow exposes him to a fifth-class fine.\nDesignation of the artist: The first type of information concerns works or objects: - bearing the artist's signature or stamp - with the words 'by' or 'of' followed by the author's designation - with the name of the artist immediately followed by the designation or title of the work.\nEach of these three cases entails a guarantee that the artist mentioned is the author of the work, unless the indication is accompanied by an express reservation as to the authenticity of the object.\nAttribution to the artist: Where the document contains the words 'attributed to' followed by an artist's name, it is considered that the work or object was executed during the production period of the named artist and that serious presumptions designate the latter as the probable author.\nWorkshop of the artist: The use of the terms 'workshop of' followed by an artist's name guarantees that the work has been executed in the master's studio or under his direction. 
The mention of a workshop must be followed by a period indication in the case of a family workshop that has retained the same name over several generations.\nSchool of the artist: The use of the terms 'school of' followed by an artist's name entails the guarantee that the author of the work has been the pupil of the cited master, has notably undergone his influence or benefited from his technique.", "score": 31.128211907868714, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "All purchases take place off-line between the buyer and seller.\nThe final price, how it is paid, and shipping arrangements are negotiated between buyer and seller.\nAnyone visiting the site, both members and non-members, is encouraged to contact the owner of any piece which interests them for purchase. The IAS takes no responsibility for these transactions and offers this “marketplace” page as a free service to those interested in selling and buying Inuit art.\n“Soul to Soul”\n- Artist: Germaine Arnatauyck\n- Location: Igloolik\n- Medium: Etching and Aquatint Print\n- Size: 30.5″ x 23.5″\n- Series: Edition of 50\n- Year: 2006\n- Price: $1200 US", "score": 30.469292816393768, "rank": 24}, {"document_id": "doc-::chunk-5", "d_text": "I do have a major collection of Greenland Tupilaq Figures, many from my late brother, Lorne Balshine’s collection, including 7 Aron Kleist, 2 Cecelia Kleist, 3 Axel Nuko, 3 Rassmus Singertat and 6 Gedeon Qeqe. 
My Canadian collection includes my favourites: a Pauta polar bear and a seal, a Barnabus musk ox, a Joe Talirunilli owl, a Lucy Tasseor Sedna, a Pootoogook Qiatsuq drummer, an Andy Miki animal, an Abe Anghik Ruben multi animals, a George Arluk drummer, a Jonas Faber owl, a Pitsiolak Niviaqsi mother and child, a Judas Ulullaq man with bow and arrow, and an owl, an unknown 1959 seal, an unknown 1968 Arctic hare, a Johnnie Inukpuk Arctic hare, an unknown 1961 owl and a Nelson Taqiraq girl with dog.\nI do have countless more sculptures, but I have only ever bought a sculpture because the stone was magnificent, the subject ’spoke to me’, or the carving was exquisitely executed. Inuit Art actually steals into my heart and my soul and I am captivated… I am addicted to it.\nHopefully, I will be able to be of some help to the society, and I will bring my knowledge and service to the society to the best of my ability. I look forward to the next year.\nJanet Beylin, Board Member\nBoard Members Emeritus\nMy introduction to Inuit culture began in 1976 and was a total accident. I had taken my children to Toronto for the weekend and the science museum was on the agenda. They had a live demonstration of an Inuit camp with men and women working on projects and throat singing. We left with two small soapstone carvings and the seed of an interest that would come to be an important part of my life. I found other soapstone pieces as I sailed (my other passion) the North Channel of Georgian Bay and other Canadian waters. This was followed by my discovery of the Dennos Museum in Traverse City. Now I had a regular source of Inuit sculpture. 
As fate would have it, my brother settled in Traverse City and the water there is dependably deep for my boat.", "score": 30.384595574671167, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "Inuit art: Inukshuk\nInuit Artist: Salomonie Shaa\nSize: 3" tall, 1.5" wide, 1" deep\nCommunity: Cape Dorset, NU\nCorporate Gifts: For a selection of over 100 inukshuks to choose from, please visit our Corporate Gift section.\nThe focus of this sweet piece is obviously his large geometric head.\nTo me, he has a look reminiscent of midcentury design.\nIndeed, the stone colour is very similar to the colour of teak which was popular during the 50s and 60s.\nDid you notice that each body segment becomes smaller as you descend to the legs of our Inukshuk?\nPROUDLY CANADIAN SINCE 2008\nWe promise to send you only good things", "score": 30.2557210349198, "rank": 26}, {"document_id": "doc-::chunk-1", "d_text": "The majority can be purchased at galleries located in major Canadian cities, but there are now a few galleries located in the USA and Europe that focus on this form of art. Not surprisingly, the latest retail source of Inuit stone sculpture is on the web. This advancement is especially helpful for those who are not located near an Inuit art gallery.\nIf Inuit stone sculpture is brand-new to you, have a look on the web. You will likely be impressed by the craftsmanship and creative charm of this unique art form. 
An entire brand-new world from the Canadian Arctic will be available to you for your enjoyment.\nMany individuals, even avid art fans, rarely think about or are even aware of Inuit stone sculptures from the Canadian Arctic north.\nEven so, most individuals who are aware of Inuit stone sculptures are those who have visited Canada in the past and got exposed to this intriguing type of aboriginal art while going to Canadian museums or galleries.\nIf you haven't seen Inuit stone sculpture, there's a lot on offer from the Canadian Arctic. Human subjects depicting the Inuit Arctic way of life are likewise popular as stone sculptures.", "score": 30.108909192482, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "This is our showcase of Inuit art or Eskimo art prints. They bring a piece of the Canadian Arctic north to your walls.\nAll prints come with a Canadian government Igloo tag certifying authenticity - more details of Inuit art authenticity. Shipping is FREE within North America for most of our Inuit art prints.\nWhat Is Being Said About Our Inuit Art\n\"Mon premier achat outre atlantique pas de problème, très bien. (My first purchase beyond the Atlantic was no problem, excellent)\"\nD. Wetterwald, Candes Saint Martin, France\nSee what some of our other customers are saying about us\nCopyright © 2006-2017 Free Spirit Gallery, All Rights Reserved", "score": 28.599111176088577, "rank": 28}, {"document_id": "doc-::chunk-1", "d_text": "Most individuals, even avid art fans, hardly ever think about or are even aware of Inuit stone sculptures from the Canadian Arctic north.\nEven so, many people who are aware of Inuit stone sculptures are those who have visited Canada in the past and got exposed to this interesting type of aboriginal art while visiting Canadian museums or galleries.\nIf you haven't seen Inuit stone sculpture, there's a lot on offer from the Canadian Arctic. 
Human topics portraying the Inuit Arctic lifestyle are also popular as stone sculptures.", "score": 28.595628317547185, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Many artists use certificates of authenticity as a means of adding facts about an artwork, and to prove its authenticity. The certificates contain information such as title, medium, date, signature, etc., which can possibly make an art buyer more comfortable with buying an artwork.\nIt is a document that the art collector can hold onto, and use as proof of an artwork’s genuineness. There is no rule that says that artists have to have certificates of authenticity, but they do add a layer of perceived value and trust for an artist, making artworks easier to sell.\nThe main problem with these documents is the ease of forgery. Fake certificates with forged signatures are very easy to create these days. There have been many cases in the past of forged authenticity documents.\nThis is why I recommend using as much factual information about the piece as possible, along with references to other places where the artwork resides. If you are here looking for free certificate of authenticity templates, there are a couple of resources at the end of this post.\nI have listed some basic information to include in a certificate of authenticity, which will depend on the artwork, whether it is a sculpture, painting, drawing, or limited edition print.\nHow to Create a Certificate of Authenticity\n- Include the title of the painting, drawing, sculpture, print, etc., and the artist. (i.e. This is to certify that this original oil painting entitled “Entwined” was painted by Graham Matthews)\n- Add the medium. (i.e. artist quality oil paint)\n- State the materials used. 
(i.e. gesso-primed stretched canvas, 200g)\n- Some artists like to include a small image of the artwork on the certificate, although this is not necessary if there is a good description.\n- Have the name of the artist, and the year of creation included.\n- State the exact dimensions of the piece, and extra details if it is a limited edition.\n- Where the artwork was created, usually a country.\n- Whether it is an original or reproduction (print).\n- Create a certificate numbering or code system, and include a different number for each artwork. Record the code for your own records as well.\n- Sign your full name in ink and include the date.\n- Include contact information (address and email), and the link for your art website.", "score": 28.36329635984355, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "Inuit Anonymous, Coppermine Inuit doll with caribou skin clothing and stone head, c. 1955-65\nCaribou skin with vestiges of hair, stone and sinew, 14 x 6 1/4 x 3 in. (35.6 x 15.9 x 7.6 cm)\nInuit Anonymous, Large whalebone woman with knife\n22 x 9 x 14 in.\nLike the narwhal, this woman carved in whalebone defies the usual limitations of the medium. The subtle curve of the amauti's hood, and the undercut and graceful hem of the amauti bespeak a master carver's touch.\nInuit Anonymous, Narwhal\n14 x 17 x 7 in.\nA large whalebone carving, which is simple in form but speaks of the carver's skill. Most whalebone carvers did not try to modify the basic form of the bone. The difficulty of carving whalebone made it desirable -- almost imperative -- to let the form of the bone define the shape of the carving. 
The outstretched flippers and the tail are understated tours de force.\nInuit Anonymous, Seated polar bear\n8 x 6 1/2 x 3 in.\nThis whimsical sitting bear is testimony of the artist's determination to find a bear in a broad but thin piece of stone. The head has a particularly sweet expression, and the paws are decisively sculpted. There is an indistinct signature, followed by \"Inuvik, NWT.\"", "score": 27.92885055563226, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "An inunnguaq. It’s a stone figure. Inunnguat (plural) dot the landscape throughout the Arctic. For the Inuit people, it’s literally “something that imitates a person.” My particular inunnguaq is crafted of rose marble shot through with blacks and grays and polished to a lovely sheen. Rather than measuring in feet, mine measures in inches: five. For me, it represents a little piece of native wisdom that reminds me to always be authentic.\nI fell in love with the concept, even though the Alaskan store incorrectly called them “inuksuk” or “inukshuk,” which in Inuit means, “something that acts for or performs the function of a person.” Built from the land using whatever stones were available, each is unique and casts a strong and sacred tie between the people and the earth. The Inuit built these stone cairns all across the Arctic as warnings, as greetings, as navigation, as an indication of a cache of food – for many reasons. As humans were (and still are) scarce in this sparsely populated land, travelers could gain life-saving wisdom from these piles of rocks.\nInunnguat are built to “imitate” a person. The Inuit built cairns in this form as a tribute, as veneration for another person, to mark a place of respect or an abode of spirits. 
I guess it’s a bit like erecting a statue, but there’s something in me that prefers the natural rock formation to a finely sculpted likeness. I like the anonymity and the “any man” feel.\nThey represent, these two items, something similar but with important differences. One offers wisdom to others who choose to take it. The other pays tribute to another, presumably someone with wisdom worth taking.\nMaybe it’s just the season, but as I’m scrolling through my Facebook feed and hitting picture after picture of masked and costumed friends, my inunnguaq catches my eye. It is, after all, the Halloween season, when a little masked revelry offers a chance to play and pretend, to be someone or something we’re not. But it’s as though the inunnguaq makes itself known to me as a reminder – pretense is fun, here and there, but its pleasure fades and at some point it’s time to take off the mask.", "score": 27.805153430169547, "rank": 32}, {"document_id": "doc-::chunk-1", "d_text": "Bovey was shocked to find that some images on orange T-shirts made following the discovery of unmarked graves last year were reproduced without the artists’ knowledge by companies making a profit, not raising funds for Indigenous causes.\nShe warns buyers of Indigenous work to ask before they purchase where the work came from, whether it was made with the permission of the artist and whether the artist is being paid.\n“This is a really serious issue,” she said. 
“It’s plagiarizing the work, it’s appropriating the work and both are wrong and the artists don’t have the resources to fight all this in the court.”\nHelping Indigenous artists reclaim their copyright would be an example of “reconcili-action,” the senator added.\nShe wants an upcoming review of the Copyright Act by Heritage Minister Pablo Rodriguez and Minister of Innovation and Science François-Philippe Champagne to include protections for Indigenous works, which she says are integral to Canada’s culture and history.\nNot only should there be specific safeguards for artists but a mechanism to track down companies fabricating Indigenous works, or failing to pay artists’ royalties, including in China and eastern Europe, she said.\n“We all have a responsibility for this. We need to find ways to support artists who are maligned that way to have legal funds,” she said.\nAlex Wellstead, spokesman for Champagne, said the review of the Copyright Act would “further protect artists and creators and copyright holders.”\nHe said “Indigenous Peoples and artists will be consulted in the process.”\nCurrent copyright law offers protection for Indigenous craftsmen and women, including Inuit sculptors and jewellers, but Bovey said the process is so complex and time-consuming to chase, few artists have the time to pursue it.\nShe also wants closer checks at borders and investigations of the provenance and destination of art in Canadian Indigenous styles, particularly made from wood that is not native to Canada.\nFake masks and Indigenous carvings have been openly sold to tourists in Vancouver as genuine, according to Lucinda Turner, an apprentice to Nisga’a sculptor and totem pole carver Norman Tait.\nTurner, who died this week, spent years listing, tracking and challenging fraudulent Indigenous works claiming to be authentic, and lectured on the subject.\nHunt said she had done much to draw attention to the illicit trade, and had helped many Indigenous artists claim their 
copyright.", "score": 27.51134195205268, "rank": 33}, {"document_id": "doc-::chunk-2", "d_text": "If a seller declares that such a piece is genuine, ask to see the official Igloo tag that comes with it, which will have info on the artist, the area where it was made and the year it was carved. The genuine pieces with the accompanying official Igloo tags will always be the highest priced and are usually kept on a separate (possibly even locked) shelf within the store.\nBecause Inuit art has been getting more and more international exposure, people might be seeing this Canadian fine art form at museums and galleries situated outside Canada too. If one is lucky enough to be traveling in the Canadian Arctic where the Inuit live and make their fantastic artwork, then it can be safely assumed that any Inuit art piece purchased from a local northern store or directly from an Inuit carver would be authentic. Trustworthy Inuit art galleries are also listed in Inuit Art Quarterly, a publication dedicated entirely to Inuit art. The Inuit sculpture may be signed by the carver either in English or Inuit syllabics but not all authentic pieces are signed. Some of these Inuit art galleries also have websites, so you could shop and buy authentic Inuit art sculpture from home anywhere in the world.", "score": 26.9697449642274, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Inuit Stonework Sculptures Out Of The Arctic North\nWhen most people think about stone sculptures, it's probably huge pieces of abstract art situated outside big buildings or perhaps inside a famous art gallery or museum. 
Many people, even passionate art fans, rarely think about or are even aware of Inuit stone sculptures from the Canadian Arctic north.\nThe Inuit people (formerly referred to as Eskimos in Canada) have been carving stone sculptures for thousands of years, but it was only introduced as fine art to the modern world on a significant scale during the 1950s. Today, Inuit stone sculptures have gained global recognition as a legitimate form of modern art. Nevertheless, most people who know of Inuit stone sculptures are those who have visited Canada in the past and got exposed to this fascinating form of aboriginal art while going to Canadian museums or galleries.\nThere's a lot on offer from the Canadian Arctic if you haven't seen Inuit stone sculpture. The Inuit do some highly realistic sculptures of the Arctic wildlife they are so intimately familiar with. These include seals, walruses, birds and of course, the mighty polar bears. Human subjects portraying the Inuit Arctic way of life are likewise popular as stone sculptures. One can see pieces showing hunters, fishermen as well as Inuit mothers with their children. The stone sculptures come in a variety of different colors including black, brown, grey, white and green. Some pieces are shiny and highly polished while others retain a rougher, more primitive appearance. Designs can vary depending on where in the Arctic the Inuit sculptors are located.\nAn Inuit stone sculpture can definitely be incorporated into one's home décor and will generally be quite a conversation piece since most people have never seen such artwork before. This is especially true in locations outside Canada where Inuit stone sculpture is not well known. Canadians have often given Inuit stone sculptures as unique corporate or personal gifts. 
There are Inuit stone sculptures to suit practically every price range and budget, from about $100 to several thousand dollars for large, complex pieces. The majority can be purchased at galleries located in major Canadian cities, but there are now a few galleries in the USA and Europe that concentrate on this type of art. Not surprisingly, the latest retail source of Inuit stone sculpture is on the web.", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-3", "d_text": "- Carolyn Leigh, personal communication, 2008\n\"I think objects made for sale are an underrated category, while old and painted so-called authentic carvings are over estimated...older pieces are, in fact, rare and better sculptures although recent 'inauthentic' carvings can be very beautiful and equally traditional with respect to style and content. I mean, they are made by the same people, based on the same traditions, using the same artistic idiom, aren't they?\"\n- Jac Hoogerbrugge in Tribal Art Traffic by Ray Corbey, 2000\n\"Whether an object is 'early' or 'late,' 'used' or 'not used' in an indigenous ritual context, is, I think, immaterial. It only matters whether the carving moves me...Anyway, why should Western artists be allowed to make things to sell but non-Western artists not be allowed to do the same? In my opinion, that's only the newest form of Western paternalism. If you ask me, it comes from being afraid to look with your own eyes.\"\n- Tijus Goldschmidt in Tribal Art Traffic by Ray Corbey, 2000\n\"[Speaking of the beautiful Highlands shields] Western oil paint and all, I think such things are wonderful! Culture is a bubbling, living thing and not a fossil, as many patina-minded collectors would have it. 
Last but not least, there are aesthetic standards and the native makers themselves...how the people themselves see it is more important than how a collector or dealer somewhere else sees it.\"\n- Dirk Smidt in Tribal Art Traffic by Ray Corbey, 2000\n\"...but nobody asks how authentic a malanangan sculpture from New Ireland is if it boasts a beautiful old label in German and was collected around 1890, given that, as is described by German sources, things were being produced for sale on a fairly large scale at that time...Pieces for export were often more richly decorated than was traditionally the case. Age is, of course, a relative concept: What is collected now will also be old in 100 years time. With regard to the future, I would recommend that quality be given attention when collecting, and that includes letting the local aesthetic standards play a role.\"", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-4", "d_text": "- Are photographs and prints considered \"original\" works of art?\nBecause of their nature, limited editions (up to 100) of some forms of art are specifically included in the definition.\n- In the definition of original art what is meant by \"commercially produced\"?\nThe Division of the Arts has settled on the language \"conceived and made by the hand of the artist or under his direction and not intended for mass production\"\n- Will there be a form signed by the artist that certifies their intention that the work is original, not intended for mass reproduction?\nYes, the Louisiana Department of Revenue has developed a form for vendors to use to certify the authenticity of original art. 
The vendor is responsible for securing the authentication required by the rules.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-2", "d_text": "In fact, buyers generally like dated art, especially when their dates precede other buyers' dates.\nSo how is the value of a limited edition print set?\n- Most important is the reproduction process, with a Giclée print at the top of the value chain.\n- Numbered and signed is best\n- The smaller the edition, the better\n- The number "one" (1/xx) of an edition is most in demand\n- Numbered is better than an Artist Proof (A.P.)\n- An Artist Proof is better than unnumbered (open edition)\nA signed and dated Certificate of Authenticity should be provided with each image, with the print's title, paper type, printer type, ink type, date printed, edition size, and other particulars. As a buyer you will appreciate the documentation, as good documentation tends to increase the value of the art.\nNow what about sculpture?\nMost sculptures take months – sometimes years – to create. Every process is complicated, time consuming and very costly. One tiny mistake typically means the sculptor has to destroy the work and start over. You can easily understand why fine art sculpture is quite expensive.\nSculptors make limited editions too, although the quantities are generally much lower than for flat art. They will be signed and dated and you should get a Certificate of Authenticity with the purchase, just as for flat art. In reality, however, each limited edition sculpture is somewhat original. Unlike a Giclée, which has the capability of being a perfect reproduction, a limited edition sculpture process has more variables and there will always be some slight "original" difference between each piece in the edition. 
That difference may only be noticeable to the sculptor and it may take precision measurements or a microscope to find it, but it will be there.\nLimited editions certainly do not hold the value of an original piece of art, but they do have considerable value. In fact, there are limited editions of works of art valued in the millions of dollars. More importantly, limited editions bring the beauty and appreciation of art to a larger audience, which is why many artists create their works in the first place. Things of beauty have the greatest value when they can be appreciated by more than a few. And if an artist is masterful, is it not best to help that artist display that talent, to earn a decent living so he or she can concentrate on creativity and artistry?", "score": 26.57083693047298, "rank": 38}, {"document_id": "doc-::chunk-3", "d_text": "One of the hardest messages to deliver to a collector is that their treasures have little to no value. Just because the work is signed by Chagall doesn’t mean that it is an original Chagall. In fact, if one looks closely at many of the certificates of authenticity, the small print clearly delineates that the work is a replica or print of an original.\nThe goal of this article is not to dissuade you from buying or selling art or having your art appraised. When buying a painting from a dealer, ask for an independent opinion. So many of our clients seek counsel from our firm before they make a purchase of thousands of dollars. When selling anything at auction, go to more than one auction house. If you are not comfortable doing that yourself, have an independent advisor negotiate on your behalf. When hiring an appraiser, ask a few simple questions:\n1. What is the fee structure? You want to ensure they base their fee for service on an hourly rate.\n2. What is their experience valuing your material? 
You never want an American furniture specialist valuing your Asian works.\nIf you are provided a "certificate of authenticity," read the small print and seek the guidance of an independent appraiser to advise you on the correct value of the piece. If you follow these basic rules, you will successfully be able to buy smart, sell smart and ensure your collections are valued correctly, thus effectively navigating the art market.\nAnita Heriot: Before becoming President of Pall Mall Advisors, a U.S. and U.K. appraisal and art advisory firm, Anita Heriot served as Vice President, Head of Appraisals, and ran the British & Continental Furniture and Decorative Arts department for Samuel T. Freeman & Company. She also worked for Masterson Gurr Johns International, a London-based firm, as an appraiser. Ms. Heriot is a member of the Appraisers Association of America, is USPAP certified and has testified as an expert witness in major court cases involving art valuations. She is a graduate of Bowdoin University with advanced degrees from University of London and New York University.", "score": 26.528880405709565, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "Brick-and-mortar store owners and dealers are in the best position to offer you service and recourse if problems arise or an item you purchased turns out not to be as claimed. Better bargains may be found online, usually at greater risk and higher potential for hassles and frustration. If you choose to buy from an unknown online seller, you should at least review their customer feedback and ask questions first. Any request or suggestion that you pay by wire transfer means the seller plans to take your money and run.\nSome makers' marks are faked more than others. The same is true with artist and celebrity autographs. As a collector, you should be able to judge an object by its characteristics and not rely on a signature alone. 
If the signature itself needs authentication, only a certified expert can make a definitive call.\nIt stands to reason that the smallest flaw on a commonly available item cannot and should not be overlooked, while something like a chip on a very old and rare porcelain piece may be forgiven.\nA single item can have different valuations depending on whether you are looking to buy or sell. An insurance valuation is typically at the high end of an object's estimated price range on the retail market. It represents the anticipated cost to replace an item in case of total loss. The estimated resale value is based on recent transactions involving same or similar items; this type of valuation should be more conservative to reflect the price an object can reasonably be expected to realize when sold at auction. This is not to say that hammer prices at auctions are always the lowest, since all it takes is two competing bidders to drive up the price.\nThe fair market value is an estimate of the price a buyer and seller may agree upon, assuming both are acting willingly, under no pressure, and have basic familiarity with the object in question.\nCertain types of collectibles call for a Certificate of Authenticity and others don't. If available, a COA is only as good as the source from which it came. The document has little or no validity unless issued by a recognized authority.\nThe above text authored by manitouj.com. Copyright © 2013", "score": 26.468417169721132, "rank": 40}, {"document_id": "doc-::chunk-1", "d_text": "In addition to these street retail specialized galleries, there are now respectable online galleries that likewise specialize in authentic Inuit art. These online galleries are a great alternative for purchasing Inuit art since the prices are typically lower than those at street retail galleries because of lower overheads. 
Of course, like any other shopping on the internet, one must be careful, so when dealing with an online gallery, ensure that their pieces also include the official Igloo tags to guarantee authenticity.\nSome tourist stores do carry authentic Inuit art as well as the other touristy souvenirs in order to cater to all kinds of travelers. When shopping at these kinds of stores, it is possible to tell apart the real pieces from the reproductions. Genuine Inuit sculpture is carved from stone and therefore should have some weight or mass to it. Stone is likewise cold to the touch. A reproduction made of plastic or resin from a mold will be much lighter in weight and will not be cold to the touch. A reproduction will often have a business name on it, such as Wolf Originals or Boma, and will never feature an artist's signature. A genuine Inuit sculpture is a one-of-a-kind piece of artwork, and nothing else on the shop shelves will look exactly like it. If there are duplicates of a certain piece with identical details, the piece is not authentic. A piece is most likely not genuine if it looks too perfect in detail, with perfectly straight bottoms or sides. Naturally, if a piece features a sticker label suggesting that it was made in an Asian country, then it is undoubtedly a fake. There will likewise be a big price difference between authentic pieces and the replicas.\nWhere it becomes harder to determine authenticity is with the reproductions that are likewise made from stone. This can be a genuine gray area to those unfamiliar with authentic Inuit art. They do have mass and might even have some type of tag showing that they were handmade, but if there are other pieces on the shelves that look too similar in detail, they are most likely not authentic. If a seller declares that such a piece is genuine, ask to see the official Igloo tag that comes with it, which will have information on the artist, the location where it was made and the year it was carved.
If the Igloo tag is not available, move on.", "score": 25.65453875696252, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "Inuit Stone Sculptures From The Polar North\nWhen most people think about stone sculptures, it's probably huge pieces of abstract art located outside big buildings or perhaps inside a popular art gallery or museum. Many individuals, even passionate art fans, seldom think about or are even aware of Inuit stone sculptures from the Canadian Arctic north.\nThe Inuit people (formerly referred to as Eskimos in Canada) have been carving stone sculptures for thousands of years, however it was only presented as fine art to the modern world on a considerable scale during the 1950s. Today, Inuit stone sculptures have gained worldwide recognition as a legitimate form of contemporary fine art. Even so, the majority of people who are aware of Inuit stone sculptures are those who have visited Canada in the past and were exposed to this fascinating kind of aboriginal art while visiting Canadian museums or galleries.\nIf you have not seen Inuit stone sculpture, the Canadian Arctic has a lot to offer. The Inuit do some very realistic sculptures of the Arctic wildlife they are so intimately familiar with. Human subjects illustrating the Inuit Arctic way of life are likewise popular as stone sculptures.\nAn Inuit stone sculpture can certainly be incorporated into one's home décor and will usually be quite a conversation piece, since many people have never seen such artwork before. This is especially true in areas located outside Canada, where Inuit stone sculpture is not well known. Canadians have often given Inuit stone sculptures as unique business or personal gifts. There are Inuit stone sculptures to suit nearly every price range and budget, at about $100 to several thousand dollars for big, elaborate pieces.
Most can be bought at galleries located in major Canadian cities, but there are now a couple of galleries located in the U.S.A. and Europe that focus on this kind of art. Not surprisingly, the most recent retail source of Inuit stone sculpture is on the web. This development is particularly helpful for those who are not situated near an Inuit art gallery.\nIf Inuit stone sculpture is new to you, have a look on the web. You will likely be impressed by the craftsmanship and creative appeal of this unique art form. An entire new world from the Canadian Arctic will be available to you for your enjoyment.", "score": 25.65453875696252, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "Recent discussions on some tribal art chat groups have centered on something called “provenance.”\nIt refers to the pedigree of an object of tribal art that may be offered for sale. In African tribal art in particular, where antiquity gives an object extra value in the mind of potential buyers, provenance is an important way to determine if the object is a real antique and/or authentic in terms of tribal use.\nSeemingly, if the object was purchased many years ago and held in important collections, it is perceived to be old and authentic. Still, who is to say it wasn’t a reproduction when it was first acquired? Or that the attribution to previous owners and collections isn’t phony?\nThe bottom line from our perspective is that you only purchase objects that you personally love, regardless of their history.\nIf they are priced extraordinarily high because they are claimed to be authentic antiques, due diligence is in order. Take every promise with a grain of salt. Ask for proof of the claims that are made, even if made in the most reputable of galleries. Ask for a Certificate of Authenticity and a written promise that an object can be returned and your purchase price refunded if it is found to be something different than you are told it is.
Any reputable dealer will give you these assurances. If they won’t give them, run, don’t walk, to the nearest exit.\nOf course, if you are buying for décor purposes or to grow a more modest collection, buying what you like is a pretty good guide. But why not have objects that are stylistically correct for the cultures they purport to represent? There are some fine books on African art that can show you what to look for. A little research is a good investment.\nOn our Web site at http://www.tribalworks.com/african_tribal_art_gallery.htm we offer some excellent objects of African tribal art. We don’t claim that they are equal to what is in the museums of Europe, but many of them have museum backgrounds. And they are priced commensurate with their quality, authenticity and age. If you see something you like, make us an offer. Maybe we can make a deal.\nThanks for reading this blogletter from Aboriginals: Art of the First Person. If you would like to subscribe, you can use bloglines.com or feedburner.com, or click on the “sign up” icon, if it appears on your page.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-4", "d_text": "- Old nail holes or mounting marks on the back of a piece might indicate that a painting has been removed from its frame, doctored and then replaced into either its original frame or a different frame.\n- Signatures, on paintings or graphics, that look inconsistent with the art itself (either fresher, bolder, etc.).\n- Unsigned work that a dealer has \"heard\" is by a particular artist.\nIf examination of a piece fails to reveal whether it is authentic or forged, investigators may attempt to authenticate the object using some, or all, of the forensic methods below:\n- Carbon dating is used to measure the age of an object up to 10,000 years old.\n- \"White Lead\" Dating is used to pinpoint the age of an object up to 1,600 years old.\n- Conventional X-ray can be used to detect earlier work present under the surface of a
painting (see image, right). Sometimes artists will legitimately re-use their own canvasses, but if the painting on top is supposed to be from the 17th century while the one underneath shows people in 19th century dress, the scientist will assume the top painting is not authentic. X-rays can also be used to view inside an object to determine if it has been altered or repaired.\n- X-ray diffraction (the object bends X-rays) is used to analyze the components that make up the paint an artist used, and to detect pentimenti (see image, right).\n- X-ray fluorescence (bathing the object with radiation causes it to emit X-rays) can reveal whether the metals in a metal sculpture or the composition of pigments is too pure, or newer than their supposed age. It can also reveal the artist’s (or forger’s) fingerprints.\n- Ultraviolet fluorescence and infrared analysis are used to detect repairs or earlier painting present on canvasses.\n- Atomic Absorption Spectrophotometry (AAS) and Inductively Coupled Plasma Mass Spectrometry (ICP-MS) are used to detect anomalies in paintings and materials. If an element is present that the investigators know was not used historically in objects of this type, then the object is not authentic.\n- Dendrochronology is used to date a wooden object by counting the number of tree rings present in the object. This is of limited use, though, as the wood needs to have about 100 rings to be dated accurately.
In order to alleviate this,\nthe Conservancy offers an authentication service.\nMr. Lee Berthelsen, son of the artist and the leading expert\non his father’s work and technique, looks forward to being in touch with owners of his father’s artwork and will be glad\nto review works in question to determine their authenticity.\nPlease e-mail Lee Berthelsen at [email protected],\n• your name\n• your phone number\n• your mailing address, and\n• an image of the painting (both front and back) you wish to have authenticated.\nPlease note that when taking a photograph of the painting, it should be as high resolution as possible.\nSince images can sometimes be unclear or distorted when sent over the internet, Mr. Berthelsen may request that you mail either a 5″ x 7″ or 8″ x 10″ print of the photograph for his further review.\nIf the work is judged to be original, a signed Certificate of Authenticity will then be issued by the Conservancy for a modest fee of $300. Included with the fee is a free set of notecards of five of the most beloved of the artist’s snow scenes.", "score": 25.63834957572344, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "These days, Gauthier’s pieces fetch between $2,000 and $10,000 each, and are counted among some prestigious collections, including that of former French president Jacques Chirac, a collector of native art. Chirac wrote Gauthier a letter after acquiring the piece — an Inuit mask representing sea animals — telling him how much he loved the work, and how he had given it a prominent position among his collection.\nGauthier’s first solo exhibition, a 25-piece collection called “Visions from Labrador” and held at the Spirit Wrestler Gallery in Vancouver last fall, sold out in about 19 minutes.\n“I was quite nervous,” Gauthier admits. 
“I was in a restaurant having breakfast — because they don’t like the artist to be at the opening, for a number of reasons — and I got a phone call from my mother, who was there, about half an hour after it opened. I thought something was wrong, that nothing was selling, but she told me it had already all sold out.”\n“It’s very similar to looking at the clouds — you know how you stare at the clouds for a while and come up with shapes of an animal or a person? I’ll look at the stone and then come up with an idea for that stone.” Billy Gauthier\nGauthier generally sculpts in caribou and moose antler, serpentine, alabaster — much of which he quarried himself while living in New Brunswick — and anhydrite, and uses diamond tools and handmade push chisels for fine details. With a background that includes Inuit, Innu, Mi’kmaq, French, English and Scottish, he chooses traditional Labrador themes for the most part, including hunting and fishing, music and folklore. An avid hunter and trapper, he’s often inspired by the animals he hunts, like ptarmigan and caribou.\nWhile he often carves from memory, he’ll also take photos of the animals, to make sure he gets the proportions right.\n“Sometimes, actually, I’ll just haul it out of the freezer,” he said with a laugh.", "score": 25.35563997237781, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Comparative analysis is a side-by-side comparison of your work of art to others that have been positively attributed to the artist in question. For example, if you have a painting that you believe to be by the Dutch master Johannes Vermeer, we look at all of the paintings that experts agree are authentic works by Vermeer to check for stylistic inconsistencies between his body of work (also called an oeuvre) and your painting. Think of it as a very high-level version of “spot the differences.”\nNo. We do not have droit moral for any of the artists that we authenticate.
We do offer a complimentary certificate as proof of our professional opinion, but this does not serve as a warranty or guarantee of authenticity.\nIf the artist is still alive, or if he or she has a foundation, or living relatives with droit moral, it may be possible to obtain a Certificate of Authenticity. However, if there is no person or organization with the legal authority to issue a COA for your artwork (as is the case for many artists), your options for authentication are gathering expert opinion, which we offer in the form of a comparative analysis report, provenance research and scientific analysis.\nIf you are unable to locate or get in touch with the appropriate authority, please feel free to email [email protected] for assistance. Many organizations no longer provide authentication research or Certificates of Authenticity; others only review a fixed number of submissions per year and may charge a high price for the privilege. We can help if you would like to be more confident in the authenticity of your artwork before you contact a foundation, or if your work of art was rejected and you would like to know why.\nIf you purchased a work of art on the primary market from a gallery, it should be accompanied by a Certificate of Authenticity. Works of art on the secondary market sold by galleries and auction houses may have a Certificate of Authenticity, but are more commonly accompanied by a provenance record. Your local auction house or gallery may perform internal authentication research if they are interested in consigning or selling your item, but they are unlikely to provide a detailed report of findings, or a Certificate of Authenticity.\nNo. With only a few notable exceptions, most museums do not perform authentication research.
No reputable museum would issue a Certificate of Authenticity for your artwork.", "score": 25.00000000000005, "rank": 47}, {"document_id": "doc-::chunk-1", "d_text": "There are considerable differences, as you can see:\nThe “B” on Brian’s name is quite different from 2 to 3, as is the “R” in Roger’s name; and even John’s first name is almost completely different. So how do you know a signature is fake if their own handwriting changes over time and you weren’t there to witness it firsthand? That is a problem, according to Clive Farahar:\nHuman error – the fact that real handwriting varies over time and in different circumstances – can make real and fake signatures extremely hard to validate or dismiss. Autographs by the Beatles are notoriously hard to authenticate. “You can’t recognise a lot of signatures done at stage doors. It looks like John Lennon but it could be anybody. If you walked out of a theatre as you signed your name, it would look dodgy,” says Mr Farahar.\nSo back to my Certificate of Authenticity issued by the Las Vegas dealer . . .\nIt claims the signatures are authentic (signed at the Seattle concert), and my discussion with the sales rep included the integrity of their suppliers, so did I end up with an album of fake autographs? I’m not sure either way at this point. I’ll still go ahead and get my Game album framed up along with some other Game-era paraphernalia I’ve got kicking around. Whatever the authenticity, it is now part of my collection with a history of its own.\nI suppose the money-back guarantee offered by Antiquities International is always an option, but I’d need to find a licensed and certified handwriting expert, pay between $50 and $400 for an analysis, and then pay to send the original cover back to Vegas.
For the $600US I paid for the cover, I’ll settle for whatever the authenticity happens to be, which is a mystery at this point.", "score": 24.879708228054273, "rank": 48}, {"document_id": "doc-::chunk-2", "d_text": "The genuine pieces with the accompanying official Igloo tags will always be the highest priced and are typically kept on a separate (maybe even locked) shelf within the store.\nBecause Inuit art has been getting more and more worldwide exposure, individuals may be seeing this Canadian fine art form at museums and galleries situated outside Canada too. If one is fortunate enough to be traveling in the Canadian Arctic where the Inuit live and make their wonderful artwork, then it can be safely assumed that any Inuit art piece bought from a regional northern shop or directly from an Inuit carver would be authentic. Trusted Inuit art galleries are likewise listed in Inuit Art Quarterly magazine, which is devoted entirely to Inuit art. The Inuit sculpture might be signed by the carver either in English or Inuit syllabics, however not all genuine pieces are signed. Some of these Inuit art galleries also have websites, so you could shop and purchase genuine Inuit art sculpture from home anywhere in the world.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "When most people think about stone sculptures, it's most likely giant pieces of abstract art located outside large structures or possibly inside a famous art gallery or museum. Often individuals think of stone sculptures as the ancient Roman or Greek mythological characters like Apollo, Venus or Zeus. For modern art, many see stone sculpture as only for major collectors or for the rich and famous to display in their well-kept estates.
Many people, even devoted art fans, rarely think of or are even aware of Inuit stone sculptures from the Canadian Arctic north.\nThe Inuit people (formerly described as Eskimos in Canada) have been carving stone sculptures for countless years, but it was only presented as fine art to the modern world on a considerable scale during the 1950s. Today, Inuit stone sculptures have acquired international recognition as a legitimate form of contemporary art. Even so, many people who know of Inuit stone sculptures are those who have visited Canada in the past and were exposed to this interesting form of aboriginal art while visiting Canadian museums or galleries.\nIf you have not seen Inuit stone sculpture, the Canadian Arctic has a lot to offer. The Inuit do some really realistic sculptures of the Arctic wildlife they are so thoroughly familiar with. Human subjects depicting the Inuit Arctic way of life are also popular as stone sculptures.\nAn Inuit stone sculpture can absolutely be incorporated into one's home décor and will typically be quite a conversation piece, since most people have never seen such artwork before. This is especially true in areas situated outside Canada where Inuit stone sculpture is not well known. Canadians have typically given Inuit stone sculptures as unique business or personal gifts. There are Inuit stone sculptures to suit nearly every price range and budget, at about $100 to several thousand dollars for large, detailed pieces. Many can be bought at galleries found in major Canadian cities, but there are now a few galleries found in the U.S.A. and Europe that concentrate on this type of art. Not surprisingly, the latest retail source of Inuit stone sculpture is on the internet. This development is especially helpful for those who are not situated near an Inuit art gallery.\nIf Inuit stone sculpture is new to you, have a look on the web.
You will likely be impressed by the workmanship and creative charm of this unique art form. A whole new world from the Canadian Arctic will be readily available to you for your enjoyment.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-1", "d_text": "Hoffmann would not reveal the artist's true identity, explaining that \"I don't want to be ratting out anybody.\"\n'I knew instantly'\nIt was a sloppy description of the fake artist that caused the whole scheme to unravel.\nErin Brillon, the Haida/Cree fashion designer behind Totem Design House, noticed a listing from an Alberta art shop that described Harvey John's work as \"original Haida carvings\" — not Nuu-Chah-Nulth, as the official biography says.\n\"I knew instantly it was not done by a Haida person. It was not Haida-designed in any way, shape or form,\" Brillon recalled. \"And I know that John is not a Haida last name.\"\nHer post was soon inundated with comments from people who shared her suspicions and others who'd found shops around the world that were selling Harvey John artworks.\nEventually, someone tagged a Vancouver business owner who'd sold the carvings. That person confronted Hoffmann, getting him to admit the hoax, and the news spread through the B.C. gallery world.\n\"A whole lot of people stepped in and recognized that something bigger was going on here.
I'm really glad that we actually got to the bottom of the source of fraudulent art,\" Brillon said.\nBut she points out that this isn't just about one artist using a pseudonym and a phoney identity.\nFake Indigenous art is disturbingly common — right now, members of the Facebook group \"Fraudulent Native Art Exposed and More\" have been occupied with daily sightings of T-shirt sellers ripping off Indigenous artists to sell \"every child matters\" merchandise.\nBrillon recalls visiting gift shops in Alaska where knockoff Northwest Coast-style masks and carvings are sold to cruise ship passengers, who are informed that their purchases are \"inspired by\" Indigenous art rather than authentic pieces.\n\"They sell loads of this stuff to American tourists because, legitimately, people don't care if they just want a cheap price,\" Brillon said.\n\"It's insane to have artists up there and … none of these artists are wealthy, and yet these galleries are selling these knockoff pieces hand over fist, making a killing.\"\n'There was a lot of people turning a blind eye'\nThe U.S. does have a law that protects Native American art forms and makes it illegal to market and sell fake products, with penalties that can be as high as $1 million or even five years in prison. Brillon wants to see Canada do the same.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-6", "d_text": "The fact that experts do not always agree on the authenticity of a particular item makes the matter of provenance more complex. Some artists have even accepted copies as their own work - Picasso once said that he \"would sign a very good forgery\". Jean Corot painted over 700 works, but also signed copies made by others in his name, because he felt honored to be copied. 
Occasionally work that has previously been declared a forgery is later accepted as genuine; Vermeer's Young Woman Seated at the Virginals had been regarded as a forgery from 1947 until March 2004, when it was finally declared genuine, although some experts still disagree.\nAt times restoration of a piece is so extensive that the original is essentially replaced when new materials are used to supplement older ones. An art restorer may also add or remove details on a painting, in an attempt to make the painting more saleable on the contemporary art market. This, however, is not a modern phenomenon - historical painters often \"retouched\" other artists' works by repainting some of the background or details.\nMany forgeries still escape detection; Han van Meegeren, possibly the most famous forger of the 20th century, used historical canvasses for his Vermeer forgeries and created his own pigments to ensure that they were authentic. He confessed to creating the forgeries only after he was charged with treason, an offense which carried the death penalty. So masterful were his forgeries that van Meegeren was forced to create another \"Vermeer\" while under police guard, to prove himself innocent of the treason charges.\nA recent, thought-provoking instance of potential art forgery involves the Getty Kouros, the authenticity of which has not been resolved. The Getty Kouros was offered, along with seven other pieces, to The J. Paul Getty Museum in Malibu, California in the spring of 1983. For the next twelve years art historians, conservators, and archeologists studied the Kouros; scientific tests were performed and showed that the surface could not have been created artificially. However, when several of the other pieces offered with the Kouros were shown to be forgeries, its authenticity was again questioned.
In May 1992, the Kouros was displayed in Athens, Greece, at an international conference called to determine its authenticity.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-1", "d_text": "It is crucial that a painting or print, for example, is original and not a counterfeit of the real thing.\nAuthenticity is established from the manufacturer's stamp on the back side or underside of the item. Determining authenticity requires an educated approach. It often requires the skills of a professional in this area to ascertain it. It involves a systematic investigation and analysis to determine whether an item is genuine.\nObjects must be referenced in national museum records. National museums have records of highly valued antique items. An individual must therefore refer to records in museums closest to the place of origin of the item before deciding on the integrity of an artifact.\nThe features outlined above encapsulate the general features of antique art. They form an essential basis for identifying artifacts.\nWhile some collectors may want to personally establish the integrity of an item they have collected, it often requires the eyes of an expert.\nBiography of wildlife artist David Shepherd, CBE, FRSA, FRGS, OBE.\nKnown as possibly the world's finest wildlife artist.
David Shepherd has worked for many years as an artist across the world. His role as a conservationist is one he passionately believes in, and he has always felt that he had an obligation to help conserve the world and the animals that we live amongst.\nDuring his lifetime, David Shepherd has painted and drawn hundreds, if not thousands, of images, and loves to talk to people about the many experiences he has had whilst travelling and painting throughout the world, often speaking at charity dinners and other prestigious social events.\nHis personality is beautifully suited to this cause, as his easy-going and straightforward attitude comes across well to express his love of art and his feelings for the ever-decreasing wildlife in the world.\nAs a young man, luck often deals its hand, and this was no exception; he wasn't particularly keen on other college activities.\nDavid Shepherd is often heard explaining that during his earlier years his life was anything but successful; his main ambition was to be an African game warden.\nWhen his studying was done, David Shepherd left England with the hope of a life within the national parks of Africa.\nSadly, these hopes were dashed in the first instance; he was informed by the head game warden that there were no vacancies, and his dreams were in tatters.\nThroughout his school days, his foremost interest in art had been as a substitute for the compulsory games of rugby, which filled him with dread.
We have developed lasting relationships with artists, as well as dealers and collectors, and we take pride in being a trusted destination for fine Native American art.", "score": 23.03110256224034, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "Inuit Stone Sculptures From The Frosty North\nWhen the majority of people think of stone sculptures, it's probably huge pieces of abstract art situated outside large buildings or maybe inside a famous art gallery or museum. Sometimes people consider stone sculptures as the ancient Roman or Greek mythological characters like Apollo, Venus or Zeus. For modern art, many see stone sculpture as only for major collectors or for the rich and famous to display in their well-kept mansions.
The majority of people, even avid art fans, seldom think of or are even aware of Inuit stone sculptures from the Canadian Arctic north.\nThe Inuit people (formerly referred to as Eskimos in Canada) have been carving stone sculptures for thousands of years, however it was only presented as fine art to the modern world on a significant scale during the 1950s. Today, Inuit stone sculptures have gained global recognition as a legitimate form of contemporary art. Even so, many people who know of Inuit stone sculptures are those who have visited Canada in the past and were exposed to this fascinating form of aboriginal art while visiting Canadian museums or galleries.\nIf you have not seen Inuit stone sculpture, the Canadian Arctic has a lot to offer. The Inuit do some very realistic sculptures of the Arctic wildlife they are so thoroughly familiar with. These include seals, walruses, birds and, naturally, the magnificent polar bears. Human subjects portraying the Inuit Arctic way of life are likewise popular as stone sculptures. One can see pieces showing hunters, fishermen as well as Inuit mothers with their children. The stone sculptures can come in a variety of colors, including black, brown, grey, white and green. Some pieces are shiny and highly polished while others retain a rougher, more primitive look. Styles can differ depending upon where in the Arctic the Inuit sculptors are located.\nAn Inuit stone sculpture can definitely be incorporated into one's home décor and will usually be quite a conversation piece, because many people have never seen such artwork before. This is particularly true in areas located outside Canada where Inuit stone sculpture is not well known. Canadians have often given Inuit stone sculptures as unique business or personal gifts.
There are Inuit stone sculptures to match almost every price range and budget, from about $100 to several thousand dollars for large, detailed pieces.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "\"Inuit Art and the Expression of Eskimo Identity\" In the dialogue lies a binary: \"Is it Eskimo?\" or \"Is it art?\" I cannot help but wonder whether a delineation can be made. \"All realities are culturally constructed;\" therefore, art is art, and the process of reality being transformed through the medium is what should be explored. I chose this to represent the article, as tattooing is a female art form and whales can be found in the far north or Thule region.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "Art forgery is the creation and sale of works of art which are falsely credited to other, usually more famous, artists. Art forgery can be extremely lucrative, but modern dating and analysis techniques have made the identification of forged artwork much simpler.\nArt forgery dates back more than two thousand years. Roman sculptors produced copies of Greek sculptures, and presumably the contemporary buyers knew that they were not genuine. During the classical period art was generally created for historical reference, religious inspiration, or simply aesthetic enjoyment. The identity of the artist was often of little importance to the buyer.\nDuring the Renaissance, many painters took on apprentices who studied painting techniques by copying the works and style of the master. As payment for the training, the master would then sell these works.
This practice was generally considered a tribute, not forgery, although some of these copies were later erroneously attributed to the master.\nFollowing the Renaissance, the increasing prosperity of the middle class created a fierce demand for art. Near the end of the 14th century, Roman statues were unearthed in Italy, intensifying the populace’s interest in antiquities, and leading to a sharp increase in the value of these objects. This upsurge soon extended to contemporary and recently deceased artists. Art had become a commercial commodity, and the monetary value of the artwork came to depend on the identity of the artist. To identify their works, painters began to mark them. These marks later evolved into signatures. As the demand for certain artwork began to exceed the supply, fraudulent marks and signatures began to appear on the open market.\nDuring the 16th century, imitators of Albrecht Dürer's style of printmaking added signatures to their prints to increase their value. In his engraving of the Virgin, Dürer added the inscription "Be cursed, plunderers and imitators of the work and talent of others". Even extremely famous artists created forgeries.
With the proliferation of traditional and online auctions and e-business, informal or unregulated trading, and the complex underground of art forgery, it is not possible for the Foundation to police the vast global art marketplace.\nHowever, through databases and resources of connoisseurship and technical examination, the Foundation hopes to assist the public in the confirmation of correct attribution and provenance of the artist's extensive oeuvre. The Board of the Roy Lichtenstein Foundation approved the implementation of the Roy Lichtenstein Authentication Committee, effective 12 January 2005. Owners of works (paintings, drawings, watercolors, collages, and sculpture but not prints or other multiples) whose authorship may be in doubt may contact the Foundation's Registrar to learn of the formal process, warranties and procedures. The Committee currently plans to meet twice annually.", "score": 22.042280876971514, "rank": 59}, {"document_id": "doc-::chunk-2", "d_text": "Since Inuit art has been getting more and more international exposure, people may be seeing this Canadian fine art form at museums and galleries located outside Canada too. If one is fortunate enough to be traveling in the Canadian Arctic where the Inuit live and make their wonderful artwork, then it can be safely assumed that any Inuit art piece purchased from a local northern shop or directly from an Inuit carver would be genuine. Reputable Inuit art galleries are also listed in Inuit Art Quarterly magazine, which is devoted entirely to Inuit art. The Inuit sculpture may be signed by the carver either in English or Inuit syllabics, but not all authentic pieces are signed.
Some of these Inuit art galleries also have websites, so you can shop for and buy genuine Inuit art sculpture from home anywhere in the world.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-1", "d_text": "The world, material and spiritual, appears to be experienced as a continuum rather than as a dichotomy; the `I' of the owl is not separated from the `I' of the man, from the `I' of the seal, from the `I' of the stone, from the `I' of the carver.\n``The corollary of such a living in other selves is that the self is not inviolate, sacrosanct, contained. It is, rather, fluid, changing in shape, multiple. ... Animals embrace animals, become other animals or humans. Within pieces, parts take on aspects of other parts, as in the corkscrew body of the narwhal by Nuna Parr, which imitates the corkscrew tusk, or the distinctly bird-like stance of Nalenik Temela's `Bear.' In others, such as Philip Kamipakittug's `Bird/Man Singing,' the song itself becomes the wings, the gesture of singing is the singing self.''\nAlthough Inuit art reflects thousands of years of shared beliefs and customs, the works themselves represent only four decades of development. Modern Inuit art dates back to 1948, when the Canadian government and the Hudson Bay Company began supporting artmaking activities to solve the economic problems of the Inuit. The art produced today, as a result, is often made by artists old enough to have strong links to the Inuit's past as hunters but who live in a world with satellite dishes and video games.\nFortunately, no evidence of these modern inventions can be found in this exhibition. The focus is entirely on Inuit culture, the animals and birds the Inuit need to survive, and the myths and legends that involve these creatures.\nIn the truest sense, Inuit art is monumental, whether it's a tiny ivory carving of a mammoth less than one inch high, or a two- or three-foot-high soapstone carving of one of the North's larger animals.
In Inuit sculpture, form, the harmonious interrelationship of parts to the whole, is everything. Size and bulk are next to nothing.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "Inuit Art: Fueled by A Kinship with All Life\nNEW YORK — THERE are so many ways to respond to ``Masters of the Arctic,'' the 150-work traveling exhibition of Inuit (Eskimo) art currently on view at the United Nations here. There's the simple wonder of it all, the surprise of discovering that ``untrained'' artists living so far from ``civilization'' could produce work of such uncommon beauty and sophistication.\nThere's the pleasure of finding oneself surrounded by so many delightful, lovingly rendered two- and three-dimensional representations of the people, beasts, and birds from the polar north.\nThere's gratitude that serious attempts are now being made to bring Inuit art out into the world. And that such a large assemblage of Inuit masterworks has gone on display in so public and international an arena.\nWe have the United Nations Environment Program and the permanent missions of the United States, the Soviet Union, and Canada to thank for this exhibition - as well as Thomas E. Wells, who organized it, and the Amway company, which provided funding.\nThe exhibition features works by over 100 artists, spanning three generations, each of whom selected one or more pieces for inclusion in the show. Most of the works are fairly massive carved stone sculptures, though a few are small enough to be held in the hand. Also on view are tiny, rarely seen ivory carvings from Siberia, prints, and a ``Coat of Life'' parka.\nOver 20 Inuit settlements, each associated with a particular style of carving, are represented with depictions of bears, seals, walruses, musk oxen, caribou, and owls - as well as an occasional Inuit or two. All of the creatures, including those utilized for the depiction of Inuit myths, are portrayed simply and directly. 
Many of the carved pieces portray a hunt, the animal hunted, and the intense spiritual identification of an Inuit with other living creatures. In these, the respect of the hunter for the hunted is clearly evident.\nTerrence Heath, in his introduction to the exhibition catalog, discusses the Inuit's creative motivation: ``The sculptural art seems to be the articulation of many selves. It is not self-expression, but expression of selves.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-2", "d_text": "These might be worth the paper they are printed on, assuming one has a very desperate need of paper, but they usually send me screaming into the night, on the principle of methinks the lady doth protest overmuch. A receipt from a reputable bookseller or autograph dealer is a guarantee of authenticity, or at least of the right of return. This is not trumped by a document with embossed gold seals from someone who might disappear next week. One should be particularly skeptical when the Certificate is issued by the person selling you the autograph.\nIs it possible that Thomas Pynchon might have signed and not inscribed a cheap reprint of one of his books, and that it now has found its way onto Ebay, accompanied by a bright and shining Certificate of Authenticity from someone you've never heard of, but improbably enough the signature is still authentic?\nOf course it's possible. But if I've seen it first, you won't want it anyway. Probably because of the offensive staining.\nThis article first appeared in the October/November 2008 issue of Rare Book Review under the title \"Cheaters' Books.\"", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-3", "d_text": "More than three dozen prominent artists, archives, foundations and museums have signed up to test the technology, which could be ready as early as next year. 
But of course, it will be some time before more than a fraction of the market is covered by this technology.\nSo for now, the art market continues to grapple with the thorny issue of how to balance the market’s need for authentication opinions with the fact that the art market is, in many respects, self-regulating, and litigation (even unsuccessful litigation like the Bilinski case) can serve a genuine role in encouraging transparency and debate. Meanwhile, in many cases, a buyer’s best options for managing authenticity risks are pre-purchase diligence and negotiated sales contracts with appropriate representations and warranties.\nArt Law Blog", "score": 20.507727284661815, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "Many visitors to Canada will be exposed to Inuit art (Eskimo art) sculptures while touring the country. Because Inuit art has been getting more and more international exposure, people may be seeing this Canadian fine art form at galleries and museums located outside Canada too. Assuming that the intent is to acquire a genuine piece of Inuit art rather than an inexpensive tourist imitation, the question arises of how one tells apart the genuine thing from the fakes.\nIt would be pretty disappointing to bring home a piece only to find out later that it isn't authentic or even made in Canada. If one is fortunate enough to be traveling in the Canadian Arctic where the Inuit live and make their wonderful artwork, then it can be safely assumed that any Inuit art piece purchased from a local northern shop or directly from an Inuit carver would be authentic.
One would have to be more careful elsewhere in Canada, especially in tourist locations where all sorts of other Canadian keepsakes such as tee shirts, hockey jerseys, postcards, key chains, maple syrup, and other Native Canadian arts are offered.\nThe safest places to buy Inuit sculptures to ensure authenticity are always the reputable galleries that specialize in Canadian Inuit art and Eskimo art. A few of these galleries have ads in the city tourist guides found in hotels.\nReputable Inuit art galleries are also listed in Inuit Art Quarterly magazine, which is devoted entirely to Inuit art. These galleries will generally be located in the downtown tourist areas of major cities. When one walks into these galleries, one will see only Inuit art and possibly Native art, but none of the other typical tourist mementos such as postcards or t-shirts. These galleries will have only authentic Inuit art for sale, as they do not deal in imitations or fakes. Just to be even safer, make certain that the piece you are interested in includes a Canadian federal government Igloo tag certifying that it was handmade by a Canadian Inuit artist. The Inuit sculpture might be signed by the carver either in English or Inuit syllabics, but not all authentic pieces are signed. Be aware that an unsigned piece may still indeed be genuine.\nSome of these Inuit art galleries also have websites, so you can shop for and purchase authentic Inuit art sculpture from home anywhere in the world.
In addition to these street-level specialty galleries, there are now trustworthy online galleries that also specialize in genuine Inuit art.", "score": 20.327251046010716, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "“It’s made of whale bone,” I was told, “Carved from vertebrae”.\n|The Inuit whalebone statue - drummer side|\nAt first glance the 14-inch figure looks quite sombre: a drummer, cross-legged and hunched inside the warmth of a thick coat. The lines are smooth, but the surface of the bone is porous, giving it a cold feel to touch. Why was it carved? A toy passed between giggling children in neighbouring boats? A present? Maybe whilst you’re holding it, wondering if something with so much care poured into it could really be a trinket for tourists, you feel that the back of the statue is also carved, and spin it around: the statue has two faces. A drummer and a dancer.\nThe faces themselves, along with the buttons on their coats and the drummer’s traditional qilaut drum, are ivory – suggested to be from a walrus tusk. But, wait a minute… ivory… whalebone… surely they’re protected materials? There are no descriptive markings on the statue – no “igloo tag” issued by the Canadian government, no Roman numeral or syllabics on the base. We’re told:\n“the USA has a strict ban on imports of any type of whale bone unless there is a DNA test result which states the species of whale it came from.”\n|The Inuit whalebone statue - dancer side|\nSo is the little statue illegal? Should it be destroyed? We’re told a lot of whalebone isn’t signed, so the only way to be sure is to find someone who recognises the statue.\nA gallery in Vancouver, British Columbia told us the statue might be from the Ivujivik region, in the eastern Arctic.
They have a contact in the western Arctic community of Inuvik, a town of 4,000 people whose name translates as “the place of man”.\n“Pieces do move about, even up there,” the gallery said, but “we have not had a lot of success in direct approaches to our Inuit contact; he never answers his email.”", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-4", "d_text": "Signatures were inscribed with the first and/or last name in wet clay, both cursive and printed; written with underglaze pencil or Sharpie; stamped into clay or other material with metal dies; or inscribed in wet china paint with a bamboo tool. Work was signed in various places: early pieces were signed on the bottom; self-portraits and portraits of others were usually signed on the back of the neck, with date; small trophy busts were stamped on the front with the date on the bottom.\nCitation: Copyright: art@Estate of Robert Arneson, licensed to VAGA, New York, New York. \"The Marks Project.\" Last modified February 17, 2017. http://www.themarksproject.org:443/marks/arneson-0", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "About Porcelain Marks\n(Original Published 1913)\n\"If you need some mark to distinguish the truth before you accept it, why not also require a second mark to verify that the first mark is genuine? And so on, to infinity.\" So Renan wrote, though of other things than keramics, by the by. And he denounced what he called \"the horrible mania for certitude.\" Temperament, habit, and instinct formed by observation and experience are surer guides for a collector than any trade-mark can be. Marks on keramic articles and signatures on pictures are the chief agents of fraud in such things. Now and again one hears of a bit of porcelain, a painting, or a piece of furniture being ejected from a museum or gallery because it has been found out as false.
If that were to go on at all regularly, there are museums and galleries which would soon present great gaps. Indeed, in the vestibule of every gallery and museum there ought to be a sphinx - an emblem of the perpetual question, \"Is it true?\" Not every keeper of a museum is a Franks, or of a picture-gallery a Holroyd. A collector must so study as to know for himself, without marks or museum-labels, whether a piece is genuine or not; himself he must answer the question, \"Is it true?\"\nThe Use of a Mark. The easiest part of a counterfeit is to imitate a mark; therefore a mark is the last of the things a collector should go by. Regard a mark as the confirmation of other proofs, not as itself the only necessary evidence. Look, touch, quality of paste, quality of glaze, form, colour, decoration, general air, physical indications of age - these are the true criteria; the trade-mark should come in that catalogue last of all. A specimen should be judged by the presence or absence of the merits, and of the faults also, of the pieces which are generally accepted as having come from the pottery of which it bears the trade-mark. I say \"trade-mark\" in preference to \"mark,\" the usual word, because a piece of old china or earthenware bears countless marks by which it can be judged; the trade-mark is only one of them. And a true piece without a trade-mark on it will be \"marked all over\" to knowing eyes and sensitive finger-tips.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "Numerous visitors to Canada will be exposed to Inuit art (Eskimo art) sculptures while exploring the nation. Given that Inuit art has been getting more and more international exposure, people may be seeing this Canadian fine art form at museums and galleries situated outside Canada too.
Assuming that the objective is to acquire a genuine piece of Inuit art rather than a low-cost tourist replica, the question arises of how one tells apart the genuine thing from the fakes.\nIt would be pretty disappointing to bring home a piece only to learn later on that it isn't genuine or even made in Canada. If one is lucky enough to be traveling in the Canadian Arctic where the Inuit live and make their fantastic artwork, then it can be safely assumed that any Inuit art piece bought from a local northern store or directly from an Inuit carver would be authentic. One would need to be more careful elsewhere in Canada, especially in tourist areas where all sorts of other Canadian souvenirs such as t-shirts, hockey jerseys, postcards, key chains, maple syrup, and other Native Canadian arts are offered.\nThe best places to shop for Inuit sculptures to ensure authenticity are always the trusted galleries that specialize in Canadian Inuit art and Eskimo art. Some of these galleries have ads in the city tourist guides found in hotels.\nReliable Inuit art galleries are also listed in Inuit Art Quarterly magazine, which is devoted entirely to Inuit art. These galleries will typically be located in the downtown tourist areas of major cities. When one walks into these galleries, one will see only Inuit art and maybe Native art, but none of the other usual tourist mementos such as t-shirts or postcards. These galleries will have only genuine Inuit art for sale, as they do not deal in imitations or fakes. Just to be even safer, ensure that the piece you are interested in features a Canadian government Igloo tag certifying that it was handmade by a Canadian Inuit artist. The Inuit sculpture might be signed by the carver either in English or Inuit syllabics, but not all authentic pieces are signed.
Be aware that an unsigned piece may still indeed be genuine.\nSome of these Inuit art galleries also have websites, so you can shop for and buy authentic Inuit art sculpture from home anywhere in the world.", "score": 18.90404751587654, "rank": 69}, {"document_id": "doc-::chunk-8", "d_text": "The sculptures don’t have hands or feet per se—they’re integrated within the central block of wood. I’m not a big fan of those myself, but some people love them. I find them a little too abstract.\nThen there’s the issue of signature. Many carvers these days sign their dolls. They don’t stamp them, but they might paint their name on them. Some carvers identify themselves with a trademark. For example, Manfred Susunkewa puts a spider on his katsinas because he’s a member of the Spider clan. Jimmie Kewanwytewa started signing his dolls “JK” in the late ’40s. He only signed about 40 percent of his dolls, but he was the first to do it.\nGenerally the ceremonial dolls given to children would not be signed because they’re from the Katsina spirits and not from a human carver.\nFinally, some people choose to collect from all eras in order to have a historically representative collection. That’s what I do because I’m a katsina doll historian, of sorts. More commonly, people seem to be drawn to a particular era and will stick with that.\nCollectors Weekly: If a doll is not signed by an artist, how do you determine who made it?\nWalsh: That’s a good question. None of my favorite carvers that I mentioned earlier signed their dolls. The way you know a doll is their work is by their distinctive style. It’s kind of like listening to the radio and recognizing the Rolling Stones or Dave Matthews or Pink or whoever without having the announcer say whose music it is. You’ve figured it out because you are familiar with their sound.\nIn fact, one of the criteria that I have for a superior katsina carver is whether the work is distinctive.
Do I need to look at the bottom of it to see who did it, or can I look at it and know immediately who did it because they bring their own unique style to a carving?\nMy favorite carvers are all like that. No one has to tell you who did it. You look at it and you say, “That’s a Clark doll, that’s a Manfred doll, that’s a Wilson Tawaquaptewa doll,” because they were distinctive artists. Their personality came through, which is, for me, one of the most interesting things.", "score": 18.90404751587654, "rank": 70}, {"document_id": "doc-::chunk-7", "d_text": "Otherwise, as I have said, no artist anywhere on earth is known to have put his name on any work of art before the Greek creative mutation in the seventh century BC; and the Greek art tradition was also the first in which it was not at all unusual for artists to sign.\nThe oldest Greek sculptor’s signature now surviving is incomplete—the name “…medes” on the twin statues of Kleobis and Biton in Delphi, which date from the early sixth century BC. The oldest Athenian potter’s signature is that of Sophilos, who also worked in the early sixth century. Thereafter, there was nothing in the least unexpected in any Greek artist’s putting his name on his work, if he felt like it. This does not mean, of course, that signing was a universal practice. Among the Greek potters and pottery painters of both black and red figure periods, a considerable majority left their works anonymous; and this majority included some of the finest masters, like the Berlin Painter. Over all, one should probably picture a situation in Greece like that in Italy during the High Renaissance, when no one was surprised because Raphael chose to sign only a few of his paintings, and Michelangelo signed only his Pietà in St. Peter’s, and in a fit of temper at that. 
Yet the fact that the Greek works of art were far from invariably signed does not in the least diminish the significance of the Greek artists’ signatures.\nTo say that the Greek artists’ compulsive innovations and their signatures were new under the sun does not automatically mean that they were important novelties, in and of themselves. The aim of the Greek innovations was clearly more accurate representation of the human body; yet anyone can see that the magic of Greek art does not lie in accurate representation. It is enough to compare the Delphic Charioteer, from fairly early in the fifth century, with the far more softly natural—but much less magical—late fourth-century bronze of a youthful athlete just acquired by the Getty Museum. Instead, both the artists’ signatures and their compulsion to do something that had not been done before are important, indeed deeply important, because of what they imply. They clearly imply that the Greek artists had a wholly new kind of self-image and goal.\nEvery Greek artist of consequence in truth wished to put his own individual stamp on his work in the most emphatic manner.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-2", "d_text": "Possibly needless to say, art historians have always been fixated on attribution, whether such-and-such an artist created such-and-such a work. If nothing else, it lent (lends?) the discipline the scientific appearance it strove to attain from its institutionalized beginning in 1870s Germany. 
As, most notably, with Morelli and others, art historians collected data and ‘evidence’, compiling lists of traits and styles so as to categorize the history of the world’s art into neat and tidy schools and periods.\nSo, of course, in this Linnaean system getting it ‘right’ and being able to pin works to individuals and specific dates was vital to the establishment of an accepted and credible discipline.\nBy the 1970s and 1980s, when art history was going through a lot of changes (very energizing to the field), this kind of connoisseurship and even antiquarianism came into question. Or, at the very least, art historians no longer took for granted the importance of a work’s originality and endeavored to locate the reasons for, in a word, caring.\nNelson Goodman’s book Languages of Art tackled the question of the allograph and the autograph, which led to a series of debates on the differing values (monetary and otherwise) of an original and a forgery which cannot be told apart by the naked eye. Goodman argues that, even if we cannot tell the difference between the two, the knowledge that one is a forgery and one is an original produces an aesthetic difference which then alters our perception of the works.\nI love the response of Thomas Kulka to this argument, in his article, “The Artistic and Aesthetic Status of Forgeries” (Leonardo, 1982). He calls Goodman a snob. Heh. However, beyond that, Kulka makes the insightful point that works may be judged on the basis of art-historical value and aesthetic value. While the former judges a piece of art based on its production during a precise moment in time and its effect on later history (and relationship to prior history) the latter bases its judgment purely on the aesthetic quality of the work. So, while the original and forgery may have equal aesthetic values, their art-historical values are vastly different. 
This argument is clear enough and by no means hard to arrive at.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-0", "d_text": "Leonardo da Vinci just became even more prolific. An analysis released in the U.K.-based Antiques Trade Gazette claims a small portrait once attributed to a 19th century German artist was actually painted by the Italian master around the year 1500. The surprising revelation is but the latest in a series of cases in which \"lost\" pieces of artwork were rediscovered through art authentication. But how can experts who have previously certified works by Caravaggio, Raphael, Van Gogh and countless others be so sure that a specific painter is responsible for a work of art?\nIn the case of the da Vinci painting, the authentication was based on physical evidence. Using a high-resolution multispectral camera capable of analyzing the painting on a precise level without touching it, a Canadian forensic-art expert named Peter Paul Biro was able to identify a faint fingerprint left on the canvas. The print was then matched to one on a known da Vinci painting hanging in Vatican City. Carbon dating of the newer canvas matched the painting to da Vinci's period, and an analysis of the style concluded the painter was left-handed, another purported da Vinci trait. Taken together, the clues built a convincing argument for the painting's authenticity.\nAbsent compelling forensic evidence like a fingerprint, the authentication process becomes a bit murkier. In the past, pieces of art have been certified through a combination of factors, including brushstroke patterns, analysis of the artist's signature, dating of the pigments or canvas used or even the instinctive (but subjective) opinion of academics who have extensively studied an artist's portfolio. A painting's provenance, or its history of ownership, is also important. 
Being able to trace a portrait back from owner to owner over the course of centuries is no small feat, and it often lends significant weight to a work's legitimacy.\nOne recent high-profile case that has highlighted the difficulties in authenticating a piece of art is a disputed Jackson Pollock painting, purchased for $5 in 1992 by ex-trucker Teri Horton in a California thrift store. Biro was also involved in that investigation. He matched a partial fingerprint on the canvas to one on a paint can used by Pollock and paint on the canvas to samples from Pollock's studio. Still, despite the forensic evidence, the art community has been reluctant to certify the work.", "score": 18.89495287700555, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Many visitors to Canada will be exposed to Inuit art (Eskimo art) sculptures while exploring the nation. These are the stunning handmade sculptures carved from stone by the Inuit artists residing in the northern Arctic regions of Canada. While in a few of the major Canadian cities (Toronto, Vancouver, Montreal, Ottawa, and Quebec City) or other tourist locations popular with international visitors such as Banff, Inuit sculptures will be seen at various retail shops and displayed at some museums. Because Inuit art has been getting more and more worldwide exposure, people may be seeing this Canadian art form at galleries and museums situated outside Canada too. As a result, it will be natural for many tourists and art collectors to decide that they wish to acquire Inuit sculptures as nice keepsakes for their homes or as very unique gifts for others. Assuming that the objective is to get an authentic piece of Inuit art instead of a low-cost tourist imitation, the question arises of how one tells apart the genuine thing from the fakes.\nIt would be quite frustrating to bring home a piece only to discover later on that it isn't authentic or even made in Canada.
If one is fortunate enough to be traveling in the Canadian Arctic where the Inuit live and make their fantastic artwork, then it can be safely assumed that any Inuit art piece purchased from a local northern store or directly from an Inuit carver would be genuine. One would need to be more cautious elsewhere in Canada, especially in tourist areas where all sorts of other Canadian keepsakes such as tee shirts, hockey jerseys, postcards, key chains, maple syrup, and other Native Canadian arts are offered.\nThe best places to buy Inuit sculptures to guarantee authenticity are always the reputable galleries that concentrate on Canadian Inuit art and Eskimo art. Some of these galleries have advertisements in the city tourist guides found in hotels.\nRespectable Inuit art galleries are likewise listed in Inuit Art Quarterly, a publication devoted entirely to Inuit art. These galleries will typically be found in the downtown tourist areas of major cities. When one strolls into these galleries, one will see that there will be only Inuit art and possibly Native art but none of the other typical tourist souvenirs such as postcards or t-shirts. These galleries will have only authentic Inuit art for sale as they do not handle replicas or phonies.", "score": 17.397046218763844, "rank": 74}, {"document_id": "doc-::chunk-1", "d_text": "Photographs, bronze sculpture, and fine art graphic prints are frequently problematic, because they are from multiple editions. A great deal of money can ride on the question of whether a particular work is an ``original'' - created by the artist or under his supervision - or ``posthumous,'' or something else.\nGeorge Guerney, an expert on the work of painter/sculptor Frederic Remington (1861-1909), says there may be twice as many Remington sculptures that have been cast posthumously as in the artist's lifetime.
``Foundries get their hands on an original sculpture and make their own editions from them, just churning them out to be sold,'' he said. He added that ``some recasts look better than the original, because of the craftsmanship of the foundry. It's not hard to be fooled sometimes, and you need good documentary evidence of when a work was made and where it came from.''\nThere has been some help in the graphic print field as California, Hawaii, Illinois, Maryland, and New York have all enacted print disclosure laws. These states now require sellers of prints to state whether the work is from a limited edition, the size of the edition, the name and address of the printer, whether the printing plates are still intact (that is, if other editions could be made), and whether any other edition of the image is in existence. Although photography and sculpture are not covered by these statutes, a growing number of people in these states and elsewhere have been demanding certificates from the dealer or printer with this same information. The peril of wishful thinking\nPaintings and drawings pose an even greater problem, and bogus works frequently enter the market for both innocent and fraudulent reasons. Lack of expertise on the part of the dealer or collector, or wishful thinking on the part of an investor, may be the culprit. Pictures that look something like the work of a major artist but are not may sit in an attic for years, later being sold in a garage sale or flea market. After a sale or two, the work can occasionally find itself in a collection or in an auction. It is at this level that experts come in, evaluating the work and determining its authenticity and value. 
The bogus art, however, may have already been sold for hundreds or thousands of dollars.\nAt other times, the sales are more criminally motivated.", "score": 17.397046218763844, "rank": 75}, {"document_id": "doc-::chunk-4", "d_text": "I would understand why Michelangelo would want to mark this piece as his own and not have it attributed to someone else.\nEven if his signature wasn't a spontaneous afterthought, the fact that he never signed any of his other works makes his signature on Pieta special.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-3", "d_text": "The Effect of Michelangelo's Famous Pieta\nNot only was Michelangelo's Pieta his first commissioned artwork, but it is now one of the most valuable art pieces in the world today. He was just 24 years old when he created the piece, and the sculpture set the tone for the remainder of his long and successful career.\nAlthough Michelangelo's Pieta has gotten a lot of positive attention throughout the years, it has also had its critics.\nIn 1972 Michelangelo's sculpture was attacked by a man wielding a hammer. The man rained down 12 damaging blows before being stopped. Since that time, Pieta has been surrounded by protective, bulletproof glass. Unfortunately, the sculpture was also damaged when movers accidentally broke off four of Mary's fingers. However, all of the damage from both cases has been restored.\nAside from these incidents, Michelangelo's Pieta has inspired faith, emotion, and imitation through its unique depiction of Jesus Christ and the Virgin Mary. Many people seek to be in its presence for both artistic and spiritual reasons.\nThe piece itself is not fully Renaissance nor entirely classical in its form; Michelangelo managed to blend the two art styles.
In addition, he didn't use realistic proportions when sculpting the piece, giving the viewer a more creative and more accessible interpretation of this scene.\nWhen Did Artists Start Signing Their Artwork?\nPrior to the Renaissance and Michelangelo's time, it was not common for artists to sign their work. So it is not surprising that Michelangelo didn't add signatures to his artwork.\nSophilos was the first known artist to sign his artwork. He added "Sophilos painted me" (Sophilos me grafsen) on pottery that he painted in 590-570 BC.\nBut the ongoing trend of signing one's artwork didn't start until the Renaissance period and most likely had just started to gain popularity around the time of Michelangelo's career.\nThe trend, however, began as a way to ensure that artists were credited for their work. In addition, an artist's signature proved to be helpful to viewers and collectors of artwork in determining creation date and authenticity. Finally, artists like Picasso began to use their signatures to express creativity and to show transformation as an artist at times when he changed his signature.\nNo wonder Michelangelo's Pieta has gained so much attention; I mean, the sculpture itself is a masterpiece.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "A few years ago, art investigator Curtis Dowling was hired by a man in France who'd just spent more than $100 million on a Picasso. Having handed over a nine-figure check, he wanted to make sure the painting was real. \"He'd pretty much spent every last penny to own this Picasso,\" says Dowling. \"It had passed down through a number of sources, and he thought he'd gotten a bargain.\" As it turned out, he had not. The painting was fake, and the guy was now the proud owner of a $100 million hunk of scrap canvas. \"Let's just say I had a very disappointed customer,\" says Dowling.\nOn the new reality show 'Treasure Detectives' (CNBC, Tuesdays at 9 p.m.
EST), Dowling and his team authenticate – or often don't authenticate – all kinds of artwork and collectibles. Dowling says fraud is a huge problem: He estimates 40 percent of the stuff he comes across is phony. \"It's a bad batting average, but it's true,\" he says. \"It's easier to fake a Picasso than it is to smuggle heroin. Even organized crime now is using the art market to generate a fortune from forgery.\"\nAnd you don't have to shell out a hundred mil to get screwed. Even people dabbling at the bottom of the market need to be careful when hunting for cool old stuff, whether it's a 19th-century painting or an autographed Beatles LP. Here are some of Dowling's tips for how not to get ripped off.\nIf you can't touch it, don't buy it.\nYou need to get your hands on something to get any sense of whether it's the real deal. \"Your senses will tell you so much in a very short period of time,\" says Dowling \"It comes down to muscle memory. If you hold 50 Lalique car mascots and the 51st is a fake, you'll know it doesn't feel right. Buying items that you haven't touched, felt, smelt...you're asking for trouble.\" That means you shouldn't buy stuff over the Internet, and always examine a potential auction purchase before bidding. \"It could be chipped, it could be broken, it could be fake,\" says Dowling. \"If you can't handle it, you really, really shouldn't be buying it.\"\nCredit: Getty Images", "score": 17.15231423181756, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "Tips on The Best Ways To Purchase and Buy Authentic Canadian Inuit Art (Eskimo Art) Sculptures\nLots of visitors to Canada will be exposed to Inuit art (Eskimo art) sculptures while exploring the country. Because Inuit art has been getting more and more international exposure, individuals may be seeing this Canadian fine art form at galleries and museums located outside Canada too. 
Presuming that the intent is to obtain an authentic piece of Inuit art rather than a low-cost tourist replica, the question arises of how one tells the real thing apart from the phonies.\nIt would be pretty frustrating to bring home a piece just to find out later on that it isn't authentic or even made in Canada. If one is lucky enough to be traveling in the Canadian Arctic where the Inuit live and make their fantastic artwork, then it can be safely presumed that any Inuit art piece bought from a local northern store or directly from an Inuit carver would be genuine. One would have to be more careful elsewhere in Canada, especially in tourist areas where all sorts of other Canadian souvenirs such as t-shirts, hockey jerseys, postcards, key chains, maple syrup, and other Native Canadian arts are offered.\nThe safest places to shop for Inuit sculptures to ensure authenticity are always the reliable galleries that specialize in Canadian Inuit art and Eskimo art. A few of these galleries have advertisements in the city tourist guides found in hotels.\nRespectable Inuit art galleries are also listed in Inuit Art Quarterly magazine, which is dedicated entirely to Inuit art. When one strolls into these galleries, one will see that there will be only Inuit art and perhaps Native art but none of the other normal tourist mementos such as postcards or tee shirts. The Inuit sculpture may be signed by the carver either in English or Inuit syllabics but not all genuine pieces are signed.\nSome of these Inuit art galleries also have websites so you can shop for and purchase authentic Inuit art sculpture from home anywhere in the world. In addition to these street retail specialty galleries, there are now credible online galleries that likewise specialize in genuine Inuit art.\nSome tourist stores do carry authentic Inuit art as well as the other touristy keepsakes in order to cater to all types of travelers.
When shopping at these kinds of stores, it is possible to differentiate the real pieces from the reproductions.", "score": 15.758340881307905, "rank": 79}, {"document_id": "doc-::chunk-1", "d_text": "By the same token I have also seen autograph dealers ignore obvious signs of inauthenticity when that same signature appears in a book, signs that would automatically raise concerns for a bookseller: an author might only sign on the title-page, might only sign their name on a diagonal, might have tiny handwriting, might always sign their last name, use only a certain color of ink - all things important to know if one is about to part with a substantial sum of money.\nHere are a few things to be aware of: be afraid of any book whose value is greatly enhanced by the addition of an author's signature: books by Hemingway, Faulkner, James Joyce, F. Scott Fitzgerald, J.D. Salinger, John Steinbeck, or Thomas Pynchon are all automatically suspect.\nBe doubly afraid if that hyper-valuable signature appears in an inexpensive reprint, in one of the author's later books that might be available inexpensively, or in copies of first editions that have severe condition problems. New York state bookseller Jeff Marks refers to these books as \"cheater's books.\" Rare is the forger who is either confident enough, or wants to make the substantial investment in a very expensive first edition in order to practice his \"art.\" Rather, he will be more likely to buy very inexpensive reprints that can be discarded without substantial loss on those occasions when they forget in mid-sentence that \"J.D.
Hemingway\" is not the author's given name.\nWhether out of ignorance, cupidity, or culpability I cannot say, but those who have strongly suggested in the past decade that more value resides in books which have been signed with the just the author's name, rather than with inscriptions to individuals, have unwittingly (or perhaps not) played directly into the hands of the forgers, and gone a long way towards abetting their depredations. What happier news to a forger? The easiest thing to forge is a signature, with no other corroborating evidence provided by additional handwriting.\nIf you encounter a reprint edition whose value has been greatly enhanced by a signed-only autograph, every alarm bell should be ringing. You would probably do best to run way at a high rate of speed, or at worst, piss on the book from a very great height.\nAnd finally, beware the dreaded Certificate of Authenticity.", "score": 15.758340881307905, "rank": 80}, {"document_id": "doc-::chunk-2", "d_text": "This was often revealed only after the painting’s completion, sometimes not discovered until after the painter’s death. They would paint mysteries that needed to be deciphered and leave clues so that the painting could be read and more importantly endlessly re-read. The more times a story is told, enriched and embellished the more significance is added.\nIn Art, provenance is as, if not more important than the art itself. Provenance authenticates, it establishes the origin and hence the authenticity. This is why the art forgers first task is to convince the specialist. Eric Hebborn (1934-1996) was a struggling London painter, who purchased some paintings in a market and sold them to a gallery. The gallery put the paintings up for sale at thousands of pounds over what they had given Hebborn and he believed that the gallery had intentionally cheated him. Hebborn set out to get his revenge, at first on the art experts at the gallery and then on art experts everywhere. 
Hebborn painted over 1000 pictures in a range of styles, though the Old Masters were his speciality, and sold them as originals. He was wise enough not to duplicate the originals but to study them and then produce preparatory drawings for existing or ‘missing’ paintings. Many of the world’s best museums bought and showed his paintings. Once a fake has been established as authentic, it is logged and archived, and the fake itself becomes a means by which the authenticity of other works is judged. Hebborn was an expert in drawing, ageing and dating his works. He would provide a sketchy but well-researched history and then allow the experts to make all the connections, as expert authentication adds value and re-evaluates the piece. When the forger is eventually discovered, their fame endorses their own work, and some have then set up studios creating ‘authentic’ forgeries, exact copies of famous works signed by themselves.\nContemporary artists know very well the value of provenance and create both the work and the back-story. Damien Hirst’s ‘Treasures From The Wreck of The Unbelievable’ is composed of broken, barnacled and aged sculptures, sunk off the east African coast to be discovered in 2008 and retrieved. The sculptures are supposed to be those of Cif Amotan II, a collector of antiquities from the second century CE.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-0", "d_text": "Each year we let our imaginations go where they will, and that open stretch of Karoo gravel turns into a blank canvas for our wildest imaginings. For many centuries, powerful Japanese families have often chosen the butterfly to be the insect of choice on their family crests, known as ‘kamon’ (家紋) in Japanese. Bring some symmetry to your walls with the Black and Gold Geo print framed in gold-finished moulding.\nVolume 2 covers an additional 3,500 artists with 4,600 signature examples related to these artists.
For this ebook, Van Citters was allowed exclusive access to the Ward archives, and private collectors submitted their own Ward character artwork. The imprint is printed in CMYK, permitting the use of photos.\nI like the print you used to illustrate your hub. Whether or not to sign the painting in the same media as used to create the artwork – one risk you may run is that one fades faster than the other. In January 2010, a woman fell into The Actor while it was on view at the Metropolitan Museum of Art, ripping a six-inch gap in the canvas.\nChosen artists will be commissioned for specific works. ART LOVERS AND COLLECTORS who want to know more about the signatures of artists past and present. Our curated gallery of artwork for the home contains unique handpainted abstracts in soothing neutrals designed to complement any style of decor.\nSince the incident, she’s had a newfound respect for the risks artists take by letting their work be shown to the general public. A number of great artists don’t sign their work, because it’s obvious who made the work.
From wall artwork and sculptural paintings to beaded accessories, our collection permits you to become the curator.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-1", "d_text": "Nish Bhutani, COO of auction house Saffronart, which has been in the business for over 14 years, says ensuring authenticity is the most important activity they undertake.\nThe documentary evidence we inspect includes invoices and authenticity certificates from the artist or a respected gallery, dealer or auction house, publishing and exhibition history of the work, photographs and letters from the artist, as well as in-person and detailed inspection by our in-house experts.\nHe adds that Saffronart has built up an internal digital and physical archive that has a record of every auction sale of Indian art across all auction houses globally, as well as virtually every publication related to Indian art.\nThe auction house's experts then cross-check images and documents against this archive and by communicating with the relevant gallery or dealer.\nParimoo, however, says the entire process needs to be overhauled as many galleries, dealers and auction houses are interested only in authentication as a procedure and not in the authenticity of the work itself.\nHow can you believe a certification of authenticity, even by me? Abroad, there are lab tests, old dossiers and records. We need to adopt scientific methods in India, if you are serious about authentication.", "score": 15.184392438538298, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "Numerous visitors to Canada will be exposed to Inuit art (Eskimo art) sculptures while touring the country. These are the stunning handmade sculptures sculpted from stone by the Inuit artists residing in the northern Arctic regions of Canada.
While in a few of the major Canadian cities (Toronto, Vancouver, Montreal, Ottawa, and Quebec City) or other tourist areas popular with international visitors such as Banff, Inuit sculptures will be seen at different retail stores and displayed at some museums. Since Inuit art has been getting more and more global exposure, individuals may be seeing this Canadian art form at galleries and museums situated outside Canada too. As a result, it will be natural for many tourists and art collectors to decide that they wish to purchase Inuit sculptures as nice keepsakes for their homes or as very special gifts for others. Presuming that the intention is to obtain a genuine piece of Inuit art rather than an inexpensive tourist imitation, the question arises of how one differentiates the genuine thing from the fakes.\nIt would be quite frustrating to bring home a piece only to learn later that it isn't really authentic or even made in Canada. If one is lucky enough to be taking a trip in the Canadian Arctic where the Inuit live and make their wonderful artwork, then it can be safely assumed that any Inuit art piece purchased from a local northern shop or straight from an Inuit carver would be authentic. One would have to be more cautious elsewhere in Canada, particularly in tourist areas where all sorts of other Canadian mementos such as tee shirts, hockey jerseys, postcards, key chains, maple syrup, and other Native Canadian arts are sold.\nThe safest locations to purchase Inuit sculptures to guarantee authenticity are always the reliable galleries that concentrate on Canadian Inuit art and Eskimo art. Some of these galleries have advertisements in the city tourist guides found in hotels.\nReliable Inuit art galleries are also listed in Inuit Art Quarterly magazine, which is devoted entirely to Inuit art. These galleries will typically be located in the downtown tourist areas of major cities.
When one strolls into these galleries, one will see that there will be just Inuit art and perhaps Native art but none of the other normal tourist keepsakes such as postcards or tee shirts. These galleries will have just genuine Inuit art for sale as they do not handle fakes or imitations.", "score": 13.897358463981183, "rank": 84}, {"document_id": "doc-::chunk-1", "d_text": "So one has to look carefully at many Cubist paintings to discern which of them painted a particular canvas. Atop my artistic summit is to make paintings that cannot be imitated, and that are uniquely Canadian, but without being cliché. In a sense, and like the Cubist example above, it could be that my signature is the style by which my artworks are created. And once a painting is completed, I do not consider myself to play any further part in its expression. Therefore, I do not wish to brand my name upon them. In my mind, the painting can go off and have its \"own\" career. So it should follow that I also wish to avoid attributing any specific meaning to my artworks. I believe that would put all of my paintings at risk of being reduced to something I stated about any one of them.\nArtistic statements play an important role in marketing artwork, as do exhibition histories and lists of collections and awards. Since the art market is much more complex than a simple exchange between goods or services and money, such statements and resumes serve as indications of an artist's commitment, and reassurance that the collector is investing in something of cultural importance. The last artistic statement I wrote was 15 years ago. Today, I see my artistic identity in much sharper focus, which includes a firm belief that the significance of an artwork has to be embodied in the piece itself. Subsequently, I would consider a signature, or signature brushwork (I don't use brushes), to be superfluous. Simply put, every example of my painting exhibits an individual expression.
And although they are all from the same family, each painting is in fact a token, or variant, of that family group. Now, having stated all this...\nPrint and CGI technologies create and re-create imagery for a variety of purposes. Sometimes as a demonstration of the image generating capabilities of the technologies themselves. Sometimes it's for the purpose of selling a product, endorsing an ideology, or fostering some belief system. Whatever the purpose may be, those intentions are made evident by the means by which an image is presented to the viewer. Differing genres of painting, or \"isms\", could be described in a similar manner. For example, Monet's work played upon the standards of 19th century painting: images of people, landscapes, and still-life, in order to emphasize the loose brushwork and unconventional colour associated with Impressionism.", "score": 13.897358463981183, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "Authenticity is a fraught concept. Reproduction is a significant threat to the authentic, whether it comes in the form of fakes and forgeries, or even the authorised reproduction – in the form of a print, or a recast sculpture.\nWriting in the 1930s, Walter Benjamin discusses the effect that reproduction has on art, stating that:\n“Even the most perfect reproduction of a work of art is lacking in one element: its presence in time and space, its unique existence at the place where it happens to be. This unique existence of the work of art determined the history to which it was subject throughout the time of its existence. This includes the changes which it may have suffered in physical condition over the years as well as the various changes in its ownership.
The traces of the first can be revealed only by chemical or physical analyses which it is impossible to perform on a reproduction; changes of ownership are subject to a tradition which must be traced from the situation of the original.”\nMortimer Menpes, 1860-1938 (Photo credit: Wikipedia)\nIn other words, a reproduction can never fully replicate the original work because any copy will lack the authenticity of the original.\nA crucial question, then, is whether it is possible for a copy to recreate any of the power of an original. The answer, always difficult, probably has to be emphatically no. But still, high-quality reproductions of famous European artworks, part of the collection of Australia, are going to be displayed in the National Library of Australia, 100 years after their creation. The paintings were created by Mortimer Menpes, who made them to provide access to the artefacts of High European Art to an Australian public which could afford neither the time nor the fare of travel to Europe to see the originals. That there is value in displaying these replicas (fakes?) even now suggests that even a reproduction can hold some of the cultural power and authenticity of its original.\nOf course, there is also the question of undetected forgeries, which might pass for original art work.
History is littered with compelling frauds, fakes and forgeries, the detection of some of which made their authors all the more interesting.\nI am sure that the annals of Art History are filled with these, but I am not so familiar with the subject; literary hoaxes, on the other hand, are more my game.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "Les Thomas has long recognized painting as a language in its own right, and its diversity in terms of poetic and conceptual messages, exhibited by each individual picture.\nLes Thomas' education is indicative of his dedication to his chosen profession as an artist. He sifts through a wide variety of visual sources for the imagery in his work. The influence of his view of painting as a very noble activity is combined with his acknowledgement that it can be inventive and enjoyable for both the artist and the viewer. The other major influence is culture, in its broadest terms, including all aspects of our everyday life.\nLes Thomas' animal paintings tend to play with the relationship between culture and nature. The unique blend of oil and metallic paints that he applies, he explains, “may very well pertain to memory, and the manner in which we hold visual experiences within the capacities for recollection. Think of the last time you encountered a bear on the roadside, or craned your neck to watch a mountain goat or sheep on a steep and rocky slope.”\nLes Thomas, a highly accomplished artist, has received many awards for his work, been featured in several publications, and has paintings in public and private collections in Canada, the U.S., the U.K., and Germany.\nI do not sign the front of my paintings. The great Impressionist Claude Monet carefully gauged the size and colour of his signature, so that it didn't look out of place with the art \"work\" itself.
An American painter, Robert Ryman, worked as a guard at New York's Museum of Modern Art, where he closely studied the various features of Modernist paintings, including signatures. Consequently, his name became a central feature in many of his own artworks. But so many painters sign their canvases without such consideration, and do so simply because it has become an artistic convention. In fact, some sign their works as an act of bravado, like a slash of Zorro's sword. I suppose one of the reasons Monet signed his paintings had to do with Impressionism being a style he shared with a number of his artistic peers. He wanted \"his\" work to be recognized as such.\nBraque & Picasso did not sign their Cubist paintings. They considered themselves two mountaineers, roped together as they climbed to some lofty artistic summit.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-2", "d_text": "The artist’s satisfaction with the result informed LACMA’s curatorial thinking as well.\nWe may question how relevant elusive notions of authenticity are in this case. Since the 1960s, many artists working in such directions as minimalism, conceptual art, and new media have created artworks that are replicable without limit. Carl Andre arranges industrially manufactured copper and lead panels in checkerboard patterns on the floor but is not responsible for any other physical or formal aspect of his sculptures; Sol LeWitt’s wall drawings are formulated by written instructions conceived by the artist but executed by someone else; Lynn Hershman makes interactive works that exist only in cyberspace and are equally “authentic” when viewed or manipulated on anybody’s computer.
In all these instances, the mythic hand of the artist—the inspired vision and unique virtuoso “touch” of the creative genius—is simply not a factor or an issue in the aesthetics of the art.\nSimilarly, Craig Kauffman’s Untitled Wall Relief, while unquestionably the creation of the artist’s aesthetic choices and sensibility, is fabricated by vacuum-formation, a mechanical process to shape plastic sheeting on a mold through the application of heat, pressure, and vacuum action. There are various technologies, some obsolete, to vacuform plastic and to spray-paint its colors, but they are industrial technologies, not ones associated with traditional ateliers. Kauffman’s 2008 wall relief is of course not the selfsame object as the 1967 original. Yet the new work is not necessarily to be regarded as less authentically a creation of Kauffman any more than photographic prints made from a master negative and printed long after the original picture has been taken are any less “authentic” than earlier prints—even if there are discernible differences among them and though vintage prints command a higher market value—as long as they are authorized by the photographer. Assertions about the sanctity of an initial iteration of a work can be misguided; in cases where the fabrication is largely mechanical and the artist has assented to the result, common sense should avoid “fetishizing” the original.\nLACMA owns two other Plexiglas paintings by Craig Kauffman and many works on paper. With the mishap of the irreparable damage to Untitled Wall Relief of 1967, LACMA lost a significant work from its internationally respected collection of Southern California art.", "score": 12.768921213082807, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "Many visitors to Canada will be exposed to Inuit art (Eskimo art) sculptures while visiting the nation. These are the magnificent handmade sculptures sculpted from stone by the Inuit artists living in the northern Arctic areas of Canada.
While in any of the major Canadian cities (Toronto, Vancouver, Montreal, Ottawa, and Quebec City) or other tourist destinations popular with international visitors, such as Banff, Inuit sculptures will be seen at various retail stores and displayed at some museums. Because Inuit art has been getting more and more international exposure, people may be seeing this Canadian art form at galleries and museums located outside Canada too. As a result, it will be natural for many tourists and art collectors to decide that they wish to purchase Inuit sculptures as fine souvenirs for their homes or as very special gifts for others. Assuming that the intent is to obtain an authentic piece of Inuit art rather than a cheap tourist imitation, the question arises: how does one tell the real thing from the fakes?\nIt would be quite disappointing to bring home a piece only to find out later that it isn't genuine or perhaps not even made in Canada. If one is fortunate enough to be travelling in the Canadian Arctic, where the Inuit live and make their wonderful artwork, then it can be safely presumed that any Inuit art piece purchased from a local northern store or directly from an Inuit carver would be authentic. One would need to be more careful elsewhere in Canada, especially in tourist areas where all sorts of other Canadian keepsakes such as t-shirts, hockey jerseys, postcards, key chains, maple syrup, and other Native Canadian arts are sold.\nThe safest places to look for Inuit sculptures, to be sure of authenticity, are always the reputable galleries that specialize in Canadian Inuit art and Eskimo art. Some of these galleries have ads in the city tourist guides found in hotels.\nReputable Inuit art galleries are also listed in Inuit Art Quarterly magazine, which is devoted entirely to Inuit art. These galleries will normally be located in the downtown tourist areas of major cities. 
When one strolls into these galleries, one will see only Inuit art and perhaps Native art, but none of the other usual tourist keepsakes such as t-shirts or postcards. These galleries will have only genuine Inuit art for sale, as they do not deal in fakes or imitations.", "score": 11.600539066098397, "rank": 89}, {"document_id": "doc-::chunk-6", "d_text": "Nowadays, when the representation of reality strikes many people as a trivial trick, it is tempting to dismiss the Greek artists’ continuous innovations as a trivial and minor theme in the story of Greek art. This was certainly not the Greeks’ view, however, if we may read backward from the way the Greek art historians later dealt with this subject.\nAnother comparable difficulty also confronts anyone in the twentieth-century West who seeks to assess the second novel feature of the Greek art tradition. Representation in art is thought trivial today because of the stage our own art tradition has now reached; but signatures on works of art have come to seem far more trivial, simply because the sorriest manufacturers of motel bedroom art now sign their products without fail. In the larger frame of the world history of art, nonetheless, a signature on a work of art must be seen as a deeply symbolic act. By signing, the artist says, in effect, “I made this and I have a right to put my name on it, because what I make is a bit different from what others have made or will make.” In the entire course of the world history of art, this right to sign has most rarely been claimed by any artists beyond the limits of the five rare art traditions—at least before the present, when worldwide cultural homogenization by what is called “progress” has led to artists signing everywhere.\nBefore the Greeks, in fact, no artists’ signatures are known from any art tradition except the Egyptian. 
From Egyptian artists’ tombs and memorials, we have a considerable number of their names, but art history’s first known signature, meaning “I made this,” is on a statue base found in the wonderful pyramid-complex at Saqqara. The statue base bears the name of Imhotep, the artist-architect-engineer of King Zoser’s step pyramid and also Zoser’s grand vizier, or the equivalent. Later, Imhotep was deified and worshiped for a couple of thousand years, and it is pleasing to think one can see an actual signature left on earth by one so long among the gods. Besides Imhotep’s, however, the millennially long Egyptian record has produced the merest handful of other true signatures, mostly of artists who were also very high officials like Imhotep.", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-9", "d_text": "The assumption underlying these changes in the significance and value of paintings, thus, was based on the belief that original works of art have exceptional qualities that fakes and copies do not have. Only originals reflect the artist's vision. Since the hand of the individual artist is assumed to reflect these qualities, such works are unique. Unoriginal works, therefore, cannot have the same value and significance. Because it was not always possible to tell the difference, attribution thus became a vital function of art historians and critics. The art market, a multibillion dollar business, and the high cultural status of art museums for the last seventy years could not have been maintained without assumptions that the objects exhibited are original. 
To illustrate this point: the following is from a popular art textbook, Art and Civilization, by Bernard Meyers, about a painting called The Man in a Golden Helmet, formerly attributed to Rembrandt: \"Rembrandt's ability to evoke spiritual contemplation, to fathom the depths of the soul and its identification with the universe is felt in such a work as the celebrated Man in a Golden Helmet.\"\nUnfortunately for Professor Meyers, the painting wasn't by Rembrandt. Even though some people continue to believe that the painting is beautiful and profound, when it was deattributed in 1985, its value fell from $8 million to $377 thousand. Also, since the number of paintings attributed to Rembrandt has ranged from 48 to 988, it is obvious that belief in the significance of originality is of more than academic interest.\nThe famous art historian, Bernard Berenson, made a fortune authenticating Renaissance paintings for the Isabella Stewart Gardner Museum of Art in Boston. Berenson was a major figure in the attribution of Old Masters, at a time when these were attracting new interest from American collectors, and his judgments were widely respected in the art world. Recent research has cast doubt on some of his authentications, which may have been influenced by the exceptionally high commissions paid to him.\nLike religious relics, paintings had become secular relics with economic powers based on authentication by official Art authorities. Paintings that could be established as being from the hand of Old Masters and of artists like Picasso, Van Gogh and Matisse are especially valuable. Last month, a painting by Jean-Michel Basquiat, who died in 1988 of a heroin overdose at the age of 27, sold for almost 111 million dollars.
Now along comes Gary Arseneau: a self-styled independent scholar, artist, printmaker of original lithographs, and blogger. He is also the self-published author of books such as The Monument to the Victor Hugo Deception.\nWe ought to be asking, “Who is Gary Arseneau?” Is he a gadfly, or a crusader tilting in the wind trying to stem the tide of fake reproductions? You can only decide by spending time on his blog, where he outlines in great detail his argument that the works of Rodin, Degas, Matisse, Duchamp and even Dr. Seuss that are being reproduced by their estates and heirs are fakes. He makes a heck of an interesting argument. Certainly, if you care about reproductions, buy them, or produce or market them, you owe it to yourself to study his findings, read his arguments and come up with your own conclusions.\nIs Having a Set of Enforceable, Understandable Standards Too Much to Hope For?\nRegardless of your personal opinion, the can of worms opened by Mr. Arseneau strengthens the case that establishing and enforcing true standards in the art world would be helpful. It is a crazy notion, I agree, but until a line is drawn on reproductions, the visual arts community will carry the burden of proving itself beyond reproach each time art of any value goes to market.\nArtists Who Establish Authenticity and Transparency in Their Business Practices Will Win\nAs the world shrinks due to instant information and communication, being authentic and transparent becomes imperative. 
For those artists who find a way to embrace authenticity and transparency in how they create multiples or reproductions of their popular work and manage and market their business, there is ample reward awaiting them and their rightful heirs.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-2", "d_text": "The autograph is not a facsimile, stamp, or auto-pen.", "score": 9.098748232655902, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "Tips on How to Buy and Shop for Genuine Canadian Inuit Art (Eskimo Art) Sculptures\nNumerous visitors to Canada will be exposed to Inuit art (Eskimo art) sculptures while visiting the nation. These are the spectacular handmade sculptures carved from stone by the Inuit artists living in the northern Arctic areas of Canada. While in some of the significant Canadian cities (Toronto, Vancouver, Montreal, Ottawa, and Quebec City) or other traveler locations popular with global visitors such as Banff, Inuit sculptures will be seen at numerous retail stores and displayed at some museums. Considering that Inuit art has been getting more and more worldwide exposure, individuals may be seeing this Canadian art form at museums and galleries situated outside Canada too. As a result, it will be natural for lots of travelers and art collectors to choose that they would like to acquire Inuit sculptures as good souvenirs for their houses or as extremely unique gifts for others. Assuming that the objective is to obtain a genuine piece of Inuit art instead of a low-cost traveler imitation, the question arises: how does one tell the genuine thing from the phonies?\nIt would be pretty frustrating to bring home a piece just to find out later on that it isn't authentic or even made in Canada. 
If one is lucky enough to be travelling in the Canadian Arctic where the Inuit live and make their fantastic artwork, then it can be safely assumed that any Inuit art piece purchased from a regional northern shop or straight from an Inuit carver would be genuine. One would need to be more cautious in other places in Canada, especially in tourist areas where all sorts of other Canadian keepsakes such as tee shirts, hockey jerseys, postcards, key chains, maple syrup, and other Native Canadian arts are sold.\nThe safest locations to purchase Inuit sculptures to ensure authenticity are always the respectable galleries that focus on Canadian Inuit art and Eskimo art. Some of these galleries have advertisements in the city tourist guides found in hotels.\nRespectable Inuit art galleries are also noted in Inuit Art Quarterly magazine, which is dedicated entirely to Inuit art. When one walks into these galleries, one will see that there will be only Inuit art and perhaps Native art, but none of the other typical tourist souvenirs such as postcards or tee shirts.", "score": 8.086131989696522, "rank": 94}, {"document_id": "doc-::chunk-1", "d_text": "A court-appointed panel ordered Kinkade's company to pay $860,000 for breaching its \"covenant of good faith\" by misleading two galleries. At least six other claims were filed against Kinkade by other plaintiffs. To make matters worse, the FBI decided to investigate him.\nIn the future, \"authenticity\" will be even more complicated. Digital art has no physical existence to \"authenticate.\" It is a ghost, made of electricity and light. Limitless copies-- all with an equal claim to being the \"original\"-- can be made with no decline in quality.\nAnd that's just the start. Famous flash artist Joshua Davis has invented what he calls \"generative composition machines\" which are software applications written with open source code and Flash to automate the creation of art. 
Davis feeds in multiple images, colors and other ingredients, and his software spits out a variety of images. His machine has now created \"art\" for many top corporate clients, including BMW, Nike and Nokia.\nCertifications of authenticity are helpful when it comes to allocating royalties, but meaningful authenticity cannot be bestowed by a certificate, just as artistic value cannot be bestowed (or removed) by market fluctuations. You should authenticate art with your eyes. Ultimately, the Kinkade distributor got it right: without a certificate of authenticity, art \"has no value other than your enjoyment of the piece.\"", "score": 8.086131989696522, "rank": 95}, {"document_id": "doc-::chunk-5", "d_text": "- Stable isotope analysis can be used to determine where the marble used in a sculpture was quarried.\n- Thermoluminescence (TL) is used to date pottery. TL is the light produced by heat; older pottery produces more TL when heated than a newer piece.\n- A feature of genuine paintings sometimes used to detect forgery is craquelure.\nStatistical analysis of digital images of paintings is a new method that has recently been used to detect forgeries. Using a technique called wavelet decomposition, a picture is broken down into a collection of more basic images called sub-bands. These sub-bands are analyzed to determine textures, assigning a frequency to each sub-band. The broad strokes of a surface such as a blue sky would show up as mostly low frequency sub-bands, whereas the fine strokes in blades of grass would produce high frequency sub-bands. A group of thirteen drawings attributed to Pieter Brueghel the Elder was tested using the wavelet decomposition method. Five of the drawings were known to be imitations. The analysis was able to correctly identify the five forged paintings. The method was also used on the painting Virgin and Child with Saints, created in the studios of Pietro Perugino. 
Historians have long suspected that Perugino painted only a portion of the work. The wavelet decomposition method indicated that at least four different artists had worked on the painting.\nProblems with authentication\nArt specialists, whom we now refer to as experts, began to surface in the art world during the late 1850s. At that time they were usually historians or museum curators, writing books about paintings, sculpture, and other art forms. Communication amongst the different specialties was poor, and they often made mistakes when authenticating pieces. While many books and art catalogues were published prior to 1900, many were not widely circulated, and often did not contain information about contemporary artwork. In addition, specialists prior to the 1900s lacked many of the important technological means that experts use to authenticate art today. Traditionally, a work's inclusion in an artist’s “catalogue raisonné” has been key to confirming its authenticity, and thus its value. Omission from an artist’s catalogue raisonné indeed can prove fatal to any potential resale of a work, notwithstanding any proof the owner may offer to support authenticity.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-2", "d_text": "The UACC does not recognize third-party certificates under the new policy, and autograph authenticators must meet all policy criteria. Criteria for authenticators include prior experience with autograph authentication, the ability to articulate authentication knowledge and follow court-approved procedures, acceptance as an expert witness by a recognized authority, and proof of ongoing education in the field.\nFor more information about appraisal and authenticator services by the UACC, visit www.uacc.org/services.php.\nIconographs, a Nevada-based seller of movie memorabilia, offers online auctions as well as direct sales of collectible items. 
In business for 25 years, Iconographs sells quality-guaranteed memorabilia at affordable prices.\nIn the buying and selling of autographed memorabilia, there are two distinct types of autographs: hand signed and printed. An autograph that was signed by hand was placed directly on an item by the celebrity, while a printed autograph is a mass-produced replica of the celebrity’s signature. Items with printed signatures may include cards sold in packs, packaged action figures that are sold with a signature, and a variety of other items.\nIn general, a hand-signed autograph is worth more than the copied version. However, an item with a copied autograph is generally more valuable than an unsigned version of the same item. Still, a hand-autographed item may be more or less valuable than a printed or un-autographed item based on variables such as the rarity of the given memorabilia item and the age of the item.\nOffering genuine celebrity autographs at affordable prices, Iconographs is registered as a Universal Autograph Collector’s Club dealer. Based in Nevada, Iconographs sells only autographs that were signed by the celebrity in person and offers a certificate of authenticity with all its products, which enhances their market value. Iconographs emphasizes convenience through online sales and a ship-upon-purchase policy.\nSought after for centuries, autographs represent one of the most enduringly robust segments of the collectibles and alternative investments market. The value of the signature depends on a variety of factors including quality and scarcity as well as the status of the person.\nAt the top tier are icons, or individuals that are well known to the general public and represent a specific historical or cultural moment. 
In the sports world, these range from Michael Jordan to Babe Ruth, while in the acting world Humphrey Bogart and Marilyn Monroe stand preeminent.", "score": 8.086131989696522, "rank": 97}]} {"qid": 1, "question_text": "How does SpriteKit handle animations and what are actions used for?", "rank": [{"document_id": "doc-::chunk-2", "d_text": "Some actions are completed in a single frame of animation, while other actions apply changes over multiple frames of animation before completing. The most common use for actions is to animate changes to the node’s properties. For example, you can create actions that move a node, scale or rotate it, or make it transparent. However, actions can also change the node tree, play sounds, or even execute custom code.\nActions are very useful, but you can also combine actions to create more complex effects. You can create groups of actions that run simultaneously or sequences where actions run sequentially. You can cause actions to automatically repeat.\nScenes can also perform custom per-frame processing. You override the methods of your scene subclass to perform additional game tasks. For example, if a node needs to be moved every frame, you might adjust its properties directly every frame instead of using an action to do so.\nSee the SKAction reference for more information.\nAdd Physics Bodies and Joints to Simulate Physics in Your Scene\nAlthough you can control the exact position of every node in the scene, often you want these nodes to interact with each other, colliding with each other and imparting velocity changes in the process. You might also want to do things that are not handled by the action system, such as simulating gravity and other forces. To do this, you create physics bodies (SKPhysicsBody) and attach them to nodes in your scene. Each physics body is defined by shape, size, mass, and other physical characteristics. The scene defines global characteristics for the physics simulation in an attached SKPhysicsWorld object. 
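A minimal Swift sketch of the pieces described here — combined actions animating a sprite while the scene's physics world supplies gravity. The scene subclass, node, and numeric values are illustrative, not taken from the text:

```swift
import SpriteKit

// Illustrative scene subclass: an action group animates a sprite
// while the physics world applies gravity to its attached body.
class GameScene: SKScene {
    override func didMove(to view: SKView) {
        // Global simulation characteristics live on the scene's physics world.
        physicsWorld.gravity = CGVector(dx: 0, dy: -9.8)
        physicsWorld.speed = 1.0

        let sprite = SKSpriteNode(color: .red, size: CGSize(width: 32, height: 32))
        sprite.position = CGPoint(x: frame.midX, y: frame.midY)
        addChild(sprite)

        // Attaching a body makes the node participate in the simulation.
        sprite.physicsBody = SKPhysicsBody(rectangleOf: sprite.size)

        // A sequence runs its actions in order; a group runs them simultaneously.
        let pulse = SKAction.sequence([
            SKAction.scale(to: 1.5, duration: 0.25),
            SKAction.scale(to: 1.0, duration: 0.25)
        ])
        let move = SKAction.moveBy(x: 100, y: 0, duration: 0.5)
        sprite.run(SKAction.repeatForever(SKAction.group([pulse, move])))
    }
}
```

Here `run(_:)` hands the combined action to the node; SpriteKit then applies it over subsequent frames alongside the physics step.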
You use the physics world to define gravity for the entire simulation, and to define the speed of the simulation.\nWhen physics bodies are included in the scene, the scene simulates physics on those bodies. Some forces, such as friction and gravity, are applied automatically. Other forces can be applied automatically to multiple physics bodies by adding SKFieldNode objects to the scene. You can also directly affect a specific physics body by modifying its velocity directly or by applying forces or impulses directly to it. The acceleration and velocity of each body is computed and the bodies collide with each other. Then, after the simulation is complete, the positions and rotations of the corresponding nodes are updated.\nYou have precise control over which physics effects interact with each other. For example, you can specify that a particular physics field node only affects a subset of the physics bodies in the scene.", "score": 51.456848237607836, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Create 2D sprite-based games using an optimized animation system, physics simulation, and event-handling support.\n- iOS 7.0+\n- macOS 10.9+\n- tvOS 9.0+\n- watchOS 3.0+\nSpriteKit is a graphics rendering and animation infrastructure that you can use to animate arbitrary textured images, otherwise known as sprites. SpriteKit provides a traditional rendering loop that alternates between determining the contents of and rendering frames. You determine the contents of the frame and how those contents change. SpriteKit does the work to render that frame efficiently using graphics hardware. SpriteKit is optimized for applying arbitrary animations or changes to your content. This design makes SpriteKit more suitable for games and apps that require flexibility in how animations are handled.\nSprite Content is Drawn by Presenting Scenes Inside a Sprite View\nAnimation and rendering is performed by an SKView object. 
You place this view inside a window, then render content to it. Because it is a view, its contents can be combined with other views in the view hierarchy.\nContent in your game is organized into scenes, which are represented by SKScene objects. A scene holds sprites and other content to be rendered. A scene also implements per-frame logic and content processing. At any given time, the view presents one scene. As long as a scene is presented, its animation and per-frame logic are automatically executed.\nTo create a game or app using SpriteKit, you either subclass the SKScene class or create a scene delegate to perform major game-related tasks. For example, you might create separate scene classes to display a main menu, the gameplay screen, and content displayed after the game ends. You can easily use a single SKView object in your window and switch between different scenes. When you switch scenes, you can use the SKTransition class to animate between the two scenes.\nA Node Tree Defines What Appears in a Scene\nThe SKScene class is a descendant of the SKNode class. When using SpriteKit, nodes are the fundamental building blocks for all content, with the scene object acting as the root node for a tree of node objects. The scene and its descendants determine which content is drawn and how it is rendered.\nEach node’s position is specified in the coordinate system defined by its parent. 
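As a small illustration of parent-relative coordinates (the node names and numbers are made up for the sketch, not from the text): a child placed at (10, 0) under a parent at (100, 100) renders at (110, 100) in scene space, and transforming the parent carries the child along.

```swift
import SpriteKit

// Parent node positioned in the scene's coordinate system.
let parentNode = SKNode()
parentNode.position = CGPoint(x: 100, y: 100)

// The child's position is expressed relative to its parent,
// so this sprite ends up at (110, 100) in scene space.
let childSprite = SKSpriteNode(color: .blue, size: CGSize(width: 16, height: 16))
childSprite.position = CGPoint(x: 10, y: 0)
parentNode.addChild(childSprite)

// Rotating and scaling the parent rotates and scales the whole subtree.
parentNode.zRotation = .pi / 2
parentNode.setScale(2.0)
```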
A node also applies other properties to its content and the content of its descendants.", "score": 50.918272783563886, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "Always stay in control!\nTexturePacker comes with a graphical user interface, showing you how your sprite sheets look.\nChanges to parameters are visible in real-time preview, even memory optimization.\nCreate texture atlases directly from:\n- Adobe Photoshop (.psd)\n- Flash Movies (.swf)\nor various other input formats.\nCommon questions for developers are:\nHow much disk space does the texture need?\nWith TexturePacker's GUI you always see the memory consumption and dimensions of the texture atlas.\nTexturePacker creates a header file for you together with your atlas. You can simply import it using:\nUsing the defines in the header file, creating a sprite is a 1-liner:\nSKSpriteNode *sprite = [SKSpriteNode spriteNodeWithTexture:SPRITES_TEX_BACKGROUND];\nThis makes it extremely simple to animate sprites. Play your animation with:\nSKAction *walk = [SKAction animateWithTextures:SPRITES_ANIM_CAPGUY_WALK timePerFrame:0.033]; [sprite runAction:walk];\nWith this you find missing sprites at compile time - not at runtime when your game is already in the AppStore.\n- Check for missing sprites at compile time\n- Increase code quality\n- Easily handle animations\nFocus on high resolution sprites only while creating your app for iPad, iPhone and iPad Mini. TexturePacker does all the scaling for you. No need to use an external tool to resize the images.\nUsing pre-scaled sprites keeps the frame rate high and reduces runtime memory consumption.\n- Reduce memory consumption\n- Increase performance\n- No additional work required\nInstead of adding and removing individual sprites TexturePacker allows you to simply add a complete asset folder. 
Every image that is found inside it will be added to the sprite sheet.\nTexturePacker preserves your folder structure in the sprite names inside the data file, allowing you to easily group your sprites.\nTexturePacker detects all changes and automatically reads the new sprite data when re-entering the application or publishing from the command line.\nWith the powerful command line interface you can simply update all sprite sheets at once.\nYou don't have to be a developer to use TexturePacker. It's easy right from the start.\nCommon questions for artists are: Do all sprites fit into a single atlas?\nSimply add your sprite folders and see if they all fit. TexturePacker shows you the sprite atlas in real-time.\nMaybe you need to drop some frames?\nOr is there enough space left to make the animation even smoother?", "score": 48.34733180297571, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Learn how to make iOS games using Apple’s built-in 2D game framework: Sprite Kit. Through a series of mini-games and challenges, you will go from beginner to advanced and learn everything you need to make your own game!\niOS Games by Tutorials covers the following topics:\n- Sprites: Get started quickly and get your images onto your screen.\n- Manual Movement: Move sprites manually with a crash course on 2D math.\n- Actions: Learn how to move sprites the “easy way” using Sprite Kit actions.\n- Scenes and Transitions: Make multiple screens in your app and move between them.\n- Physics: Add realistic physics behavior into your games.\n- Beyond Sprites: Add video nodes, core image filters, and custom shapes.\n- Particle Systems: Add explosions, star fields, and other special effects.\n- Adding “Juice”: Take your game from good to great by polishing it until it shines.\n- Accelerometer: Learn how to control your game through tilting your device.\n- UIKit: Combine the power of UIKit with the Sprite Kit framework.\n- Mac: Learn how to port your games to the Mac!\n- Tile Maps: Make 
games that use tile maps.\n- Scrolling: Make levels that scroll across the screen.\n- And much more, including: Fonts and text, saving and loading games, and six bonus downloadable chapters!\nThe iOS Tutorial Team takes pride in making sure each tutorial we write holds to the highest standards of quality. We want our tutorials to be well written, easy to follow, and fun. And we don’t want to just skim the surface of a subject – we want to really dig into it, so you can truly understand how it works and apply the knowledge directly in your own apps.\nBy the time you’re finished reading this book, you will have made 5 complete mini-games from scratch, from zombie action to space shooter to top-down racer!", "score": 47.374855882046475, "rank": 4}, {"document_id": "doc-::chunk-3", "d_text": "You also decide which physics bodies can collide with each other and separately decide which interactions cause your app to be called. You use these callbacks to add game logic. For example, your game might destroy a node when its physics body is struck by another physics body.\nYou can also use the physics world to find physics bodies in the scene and to connect physics bodies together using a joint (SKPhysicsJoint). Connected bodies are simulated together based on the kind of joint.\nSee Simulating Physics for more information.\nGetting Started with SpriteKit\nSpriteKit implements content as a hierarchical tree structure of nodes. A node tree consists of a scene node as the root node and other nodes that provide content. Each frame of a scene is processed and rendered to a view. The scene executes actions and simulates physics, both of which change the contents of the tree. Then the scene is rendered efficiently using SpriteKit.\nTo start learning SpriteKit, you should look at these classes in the following order, before moving on to other classes in the framework:\nCreating Your First Scene\nSpriteKit content is placed in a window, just like other visual content. 
SpriteKit content is rendered by the SKView class. The content that an SKView object renders is called a scene, which is an SKScene object. Scenes participate in the responder chain and have other features that make them appropriate for games.\nBecause SpriteKit content is rendered by a view object, you can combine this view with other views in the view hierarchy. For example, you can use standard button controls and place them above your SpriteKit view. Or, you can add interactivity to sprites to implement your own buttons; the choice is up to you. Later in this example, you’ll see how to implement interactivity on the scene.\nAn SKView can be added as a child to a UIView, or you can explicitly cast your view controller’s view to a SpriteKit view either through storyboards, using Custom Class, or in code. The following listing shows how you can override a view controller’s view method to cast its view to an SKView.\nWith the SpriteKit view created, the next step in displaying content is to create a scene.
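The listing referred to here is not reproduced in this excerpt. A sketch of the same idea in Swift (assuming UIKit on iOS; the GameViewController name and scene size are illustrative) might look like this:

```swift
import UIKit
import SpriteKit

class GameViewController: UIViewController {
    // One way to make the controller's view an SKView is to supply it in loadView.
    override func loadView() {
        view = SKView(frame: UIScreen.main.bounds)
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // The cast succeeds because loadView installed an SKView.
        guard let skView = view as? SKView else { return }
        let scene = SKScene(size: skView.bounds.size)
        skView.presentScene(scene)
    }
}
```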
The view class automatically extends the responder chain to include the scene’s node tree.\nSKNode reference for more information.\nTextures Hold Reusable Graphical Data\nTextures, represented as\nSKTexture objects, are shared images used to render sprites. Always use textures whenever you need to apply the same image to multiple sprites. Usually you create textures by loading image files stored in your app bundle. However, SpriteKit can also create textures for you at runtime from other sources, including Core Graphics images or even by rendering a node tree into a texture.\nSpriteKit simplifies texture management by handling the lower-level code required to load textures and make them available to the graphics hardware. Texture management is automatically managed by SpriteKit. However, if your game uses a large number of images, you can improve its performance by taking control of parts of the process. Primarily, you do this by telling SpriteKit explicitly to load a texture.\nA texture atlas is a group of related textures that are used together in your game. For example, you might use a texture atlas to store all of the textures needed to animate a character or all of the tiles needed to render the background of a gameplay level. SpriteKit uses texture atlases to improve rendering performance.\nNodes Execute Actions to Animate Content\nA scene’s contents are animated using actions. Every action is an object, defined by the\nSKAction class. You tell nodes to execute actions. Then, when the scene processes frames of animation, the actions are executed.", "score": 45.409670322806406, "rank": 6}, {"document_id": "doc-::chunk-1", "d_text": "Create a new scene with a Position2D node named UIMenuSelectArrow as its root, a Sprite, an AnimationPlayer, and a Tween. Save it as\nWe’ll use the tween node to animate the arrow’s position. Simultaneously, the animation player will make the sprite wiggle, as you might’ve seen in games like older Final Fantasy titles. 
This way, both animations can play at the same time.\nIn the FileSystem dock, find\nmenu_selection_arrow.png and assign it to the sprite’s Texture property.\nPlace the sprite and its pivot so that, at a position of (0, 0), the arrow is to the left of the origin. To change the pivot’s position, select the sprite, place your mouse cursor where you want the pivot to be located, and press v.\nWith the AnimationPlayer, animate the sprite’s position going back and forth. To do so, you need two keys and to toggle the animation’s looping option. Also, set the animation to Autoplay on Load. In the image below, I’ve highlighted the icons corresponding to the two options.\nOn the first keyframe, I’ve pulled the easing curve to the left in the Inspector to give the motion some bounciness.\nAttach a script to the UIMenuSelectArrow with the following code.\n# Arrow to select actions in a [UIActionList]. extends Position2D onready var _tween: Tween = $Tween # The arrow needs to move independently from its parent. func _init() -> void: set_as_toplevel(true) # The UIActionList can use this function to move the arrow. func move_to(target: Vector2) -> void: # If it's already moving, we stop and re-create the tween. if _tween.is_active(): _tween.stop(self, \"position\") # To move the arrow, we tween its position, which is global, for 0.1 seconds. # This short duration makes the menu feel responsive. _tween.interpolate_property( self, \"position\", position, target, 0.1, Tween.TRANS_CUBIC, Tween.EASE_OUT ) _tween.start()\nNext up is the action list scene. Create a new scene with a VBoxContainer named UIActionList as its root. Add an instance of the UISelectBattlerArrow scene as its child and save the scene. Attach a script to the root node.", "score": 43.063035490504554, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "Sprite Kit is Apple's development framework with an OpenGL-based renderer for 2-D and 2.5-D game developers. 
According to Apple Insider, the framework's \"built-in support for physics makes animations look real, and particle systems create essential game effects such as fire, explosions, and smoke.\" Previously, iOS developers were able to use third-party platforms for the same purposes. Now that Apple has created its own tool, it is capable of staying with and growing with Apple, rather than being acquired and taken in a different direction. Rybakov said, \"Ninety percent of the purchased applications are games, which are 2-D games in more than half of the cases, and Apple cares about its game developers.\"", "score": 40.247155290162716, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "An efficient way to store 2D animation for games is to lay out all the frames within one texture called a ‘Sprite Sheet’ or ‘Texture Atlas’.\nThis saves resources by avoiding multiple texture loading operations and only animating the UVs of the shader to display the needed image at each frame.\nSprite Sheets are also used to pack various states of game graphics and textures for multiple objects in one file.\nSprite Sheets can be created manually using any image editing software.\nFor an automated process and more control, specialized software like Texture Packer can be used.\nAnd it can also be done automatically in Adobe Animate (Flash).\n* There are many more solutions and scripts that do this, which you can find on the web…", "score": 40.080511458321645, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "You want to use a spritesheet containing 2D animations.\nSpritesheets are a common way for 2D animations to be distributed. In a spritesheet, all of the animation frames are packed into a single image.\nFor this demo, we’ll be using the excellent “Adventurer” sprite by Elthen. You can get this and lots of other great art at https://elthen.itch.io/.\nMake sure the images in your spritesheet are laid out in a constant-sized grid. 
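With a constant-sized grid, the sub-rectangle of any frame follows from its index and the sheet's column count — which is exactly why regular packing matters. A small Python sketch of that bookkeeping (a hypothetical helper, not Godot's API):

```python
def frame_rect(index, hframes, vframes, sheet_w, sheet_h):
    """Return the (x, y, w, h) region of frame `index` in a grid-packed sheet."""
    frame_w = sheet_w // hframes
    frame_h = sheet_h // vframes
    col = index % hframes   # frames are numbered left-to-right,
    row = index // hframes  # then top-to-bottom
    return (col * frame_w, row * frame_h, frame_w, frame_h)
```

For a 13×8 grid on a 650×400 sheet, each frame is 50×50 and frame 13 starts the second row.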
This will enable Godot to automatically slice them. If they’re packed irregularly, you will not be able to use the following technique.\nThis animation technique uses a\nSprite2D node to display the texture, and then we animate the changing frames with\nAnimationPlayer. This can work with any 2D node, but for this demo, we’ll use a\nSprite2D.\nAdd the following nodes to your scene:\nDrag the spritesheet texture into the Texture property of the\nSprite2D. You’ll see the entire spritesheet displayed in the viewport. To slice it up into individual frames, expand the “Animation” section in the Inspector and set the Hframes to\n13 and Vframes to\n8. Hframes and Vframes are the number of horizontal and vertical frames in your spritesheet.\nTry changing the Frame property to see the image change. This is the property we’ll be animating.\nSelect the\nAnimationPlayer and click the “Animation” button followed by “New”. Name the new animation “idle”. Set the animation length to\n2 and click the “Loop” button so that our animation will repeat (see below).\nWith the scrubber at time\n0, select the\nSprite2D node. Set its Animation/Frame to\n0, then click the key icon next to the value.\nIf you try playing the animation, you’ll see it doesn’t appear to do anything. That’s because the last frame (12) looks the same as the first (0), but we’re not seeing any of the frames in-between (1-11). To fix this, change the “Update Mode” of the track from its default value of “Discrete” to “Continuous”. You can find this button at the end of the track on the right side.\nNote that this will only work for spritesheets where the frames are already in order. If they are not, you’ll have to keyframe each Frame separately along the timeline.", "score": 39.84922775950537, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "Animating Multiple Sequences\nYou will always have multiple animations for your characters. For example, an idle sequence, a run sequence, an action sequence, and so on. 
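When several such sequences end up in one sprite sheet, identical drawings only need to be stored once. A minimal Python sketch of that de-duplication idea (illustrative; not the actual export script):

```python
def pack_unique(animations):
    """Collapse identical drawings across animations into one shared list.

    `animations` maps a sequence name to its list of frame payloads
    (e.g. PNG bytes); each sequence becomes a list of indices into `sheet`.
    """
    sheet, index_of, timelines = [], {}, {}
    for name, frames in animations.items():
        timeline = []
        for frame in frames:
            if frame not in index_of:
                index_of[frame] = len(sheet)
                sheet.append(frame)
            timeline.append(index_of[frame])
        timelines[name] = timeline
    return sheet, timelines

# "idle" and "walk" both use drawing "a", but it is stored only once.
sheet, timelines = pack_unique({"idle": ["a", "b", "a"], "walk": ["a", "c"]})
```

The more sequences share drawings, the smaller the recompiled sheet gets.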
You need to work in a specific structure so you can export all of these animations to a single sprite sheet.\nFirst, create a scene file with the name of the character, such as\nSpace Duck. This is the file where you can create or import your game rig. In the top menu, select File > Save As New Version, and give this new version the name of the animation. For example,\nEvery time you need to do a new animation using the same character, perform a Save As New Version. In the end, you may have something like this:\nScene: Space Duck\nWhen you run the export script, it will export the drawings from the current scene into the export folder. It will also let you know if there are any other scene versions that were already exported to that folder. If so, then it will recompile the sprite sheet to include all the drawings from all the animations. This allows the maximum possible reuse of drawings.", "score": 38.23937953152677, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "Animations make content in your game move and come alive! You can use Animations to create moving platforms, disappearing enemies, or lifelike Story Blocks.\nAnimating enemies affects how the Character interacts with the object! If you animate a hazard block to be spikes popping up and down, the Character would be able to walk across the spikes when they're down without being hurt!\nAdding animations to your game is simple! 
To add animations:\n- Tap the Decorate Tab in the Game Editor\n- In your library, select the Animations folder\n- Tap which animation you wish to add to your game\n- Tap the blocks where you would like the animation blocks to be placed\n- The animation is now in your game!\nNote: Animations cannot be used in Game Backgrounds or Room Backgrounds at this time.\nFor more help with Animations, watch this tutorial video.", "score": 36.34344454005696, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "This chapter adds a significant new feature to your Java toolbox—the ability to load and draw animated sprites and apply that knowledge to an enhanced new sprite class. You will learn about the different ways to store a sprite animation and how to access a single frame in an animation strip, and you will see a new class called\nAnimatedSprite with some serious new functionality that greatly extends the base\nHere are the key topics we’ll cover in this chapter:\nSprite animation techniques\nDrawing individual sprite frames\nKeeping track of animation frames\nEncapsulating sprite animation in a class\nOver the years I have seen many techniques for sprite animation. 
Of the many algorithms and implementations ...", "score": 36.31019070028497, "rank": 13}, {"document_id": "doc-::chunk-2", "d_text": "In the next context menu you will have options to trim exported layers of transparency, set the exported file format, only export visible layers, and set a prefix to exported files among multiple other options.\n001 Game Creator also supports animation by way of manipulating the position and rotation (size and color/opacity too, for that matter) of multiple individual parts of a Sprite between key frames (including an interpolation feature which allows for smoother transitions from one frame to the next), thus, animating entire sheets with actions of a character does not become a necessity on the artist’s part.\nThe point of rotation of a bone is located in the center of the image, so to make a bone more easily manipulable/rotatable, the bone’s joint must be centered in the image file. See image below for reference:\nThe lower arm limb that is part of a Sprite comprising a whole arm. 
Notice how its edge is centered on the canvas.\nUsing that image along with the upper arm part we can sequence them like this:\nCombining both the upper and lower arm keyframes together will result in a smooth animation where the software will interpolate frames.\nNOTE: If “Transition smoothly between keyframes” was unchecked, then the arm would simply snap back and forth instantly between keyframes.\nRotation using a graphic when the bone utilizes the entire canvas (or is not properly centered) is still possible; however, to achieve the above results, the developer will also have to alter its position in addition to its rotation for all frames in order to keep it in one place as it’s rotating (in the above example, altering the lower arm’s position as well was necessary because it is a child bone while the upper arm is a parent bone).\nThe same image of an arm as shown above, except this one takes up the entire canvas, meaning its pivot point is now mid forearm as also indicated by the crosshair. In this scenario, the arm will spin around the pivot point instead of rotating from the joint. This approach is not recommended for animating something of that nature but would be ideal for things such as symmetrical effect graphics.\n001 Game Creator allows users to set a render priority (or layer order) of individual animation layers within a Sprite.", "score": 35.63210158931613, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "Last year I had blogged about animating a sprite using Kinetic JS.\nCode in that post was part of a simple game I had created. So the code specific to sprite animation was explained in snippets. A reader of that post contacted me with a request to provide a complete example. So I created a small demo of sprite animation only. If you are interested, you can download it from here. This demo animates images in the sprite sheet at a fixed location; it does not move the image. 
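Under the hood, a demo like this mostly amounts to advancing a frame index at a fixed rate and wrapping around to loop. A rough Python sketch (illustrative; the frame rate and counts are made up):

```python
def frame_at(elapsed_ms, frame_count, fps=12):
    """Index of the spritesheet frame to draw after `elapsed_ms` milliseconds."""
    # Advance one frame every 1000 / fps ms, wrapping around to loop.
    return int(elapsed_ms * fps / 1000) % frame_count
```

On each tick you blit only the sub-image for that index, which is what keeps the animation at a fixed location until you also move the drawing position.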
So I thought it would be interesting to add motion to sprite images when they are animated.\nFirst, take a look at the demo. Click on the Walk button to make the person walk. You can stop any time. If the person hits the right-side wall, then she falls, which is also simulated using sprite animation. I know the ‘walk’ does not look natural, but creating graphics is not my strong point and this is the best I could manage.", "score": 34.50989891705383, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "I added some FX from Mercury (Particle Fx engine) to my character run animation and added a gun and bullets.\nSomeone asked me if the actions were hard coded. Well, no, they’re not. I’m using the timeline I created before to add a movement to the animation (for example translation interpolation with two points or more). Then I just put it in the player “Action Bank”.\nIn addition you may use the timeline to trigger events or do rotations (with interpolation). Actually, at first this action editor’s purpose was only to make a tool to animate skeletons (which means I could do a character only based on skeleton animations too), but it’s useful for simple animations too.\nSomeone else asked me if I’m going to make a Mario or Megaman-like game… No, I think I won’t. Right now I’m just using Megaman’s sprites because I have nothing else to do some testing. 😦\nBut still, my engine and editor should help to make any kind of 2D game (platformer, R-Type-like shooters or Zelda-like action/adventure games). I did some design for a platformer, that’s why I’m testing platform tools. 
I hope it won’t be too bad.\nI should add a simple enemy in one of the next videos.", "score": 33.42710199174501, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "So, what is the difference between your sprites if they all have the same behaviour?\nBasically, the attributes...\nImagine my example: the \"Monster\" object has events for losing hp, destroying the object when hp <= 0, walking, running, fleeing, attacking, etc.\nAll those elements should be the same for all monsters, considering its speed, size, hp, strength, image/animations, etc. You see?\nThe \"solution\" so far that I thought of is to have a monster object, with all monsters I have on single frames and an array containing its default attributes, indexed by the monster's image frame.\nSo on frame 1 I have a turtle? Then arrayMonstersInfo has the turtle's data;\nOn frame 2 a dragon? arrayMonstersInfo contains the dragon's data, etc.\nThe problem here would be only the animations =/", "score": 32.86144347569178, "rank": 17}, {"document_id": "doc-::chunk-1", "d_text": "broken into bite-size chunks\nSession 1 – Creating the Hero\n- Series Introduction – 3:54\n- Initial Setup – 13:09\n- Adding the Hero Class – 16:32\n- Moving the Hero – 16:22\n- Gestures and Adding Animation – 20:32\n- Adding Physics Properties to the Hero – 18:01\nSession 2 – Building the Maze in an SKS or Tiled File\n- Intro to Session 2 – 1:24\n- Create a Maze Boundary using a Sprite Kit Scene (.sks file) – 26:08\n- Adding the Physics Contact Delegate to the Game Scene – 12:09\n- Introduction to using Tiled for Level Layout – 11:46\n- Parsing the Tiled File (parsing XML in general) – 22:41\n- Centering the Hero in the World – 10:29\n- The Star Class (a sprite for the Hero to pick up) – 17:48\n- The Star Class Continued – 8:17\nSession 3 – Sensors, Edges, and Enemies\n- Intro to Session 3 – 1:31\n- Adding Sensor Nodes around the Hero – 16:16\n- Using the Sensors to Detect Walls – 13:29\n- Adding an Edge Based Physics Body Around the World – 16:25\n- 
Placing Enemies with either SKS or Tiled Files – 18:56\n- Moving Enemies – 25:13\n- Refining Enemy Logic for Tracking the Hero – 18:12\n- Reloading the Level – 12:03\nSession 4 – Property Lists,", "score": 32.19227147722386, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "The essence of the match-three game is to move fruits such that three fruits of the same type match up. This move is done with a swipe. The Game Scene is the best place for implementing the detection of the player’s swipes that will reposition the fruits into this match pattern. The reposition is called the swap. Recognizing these swipes to swap in SpriteKit is best done with the touchesBegan, touchesMoved, and touchesEnded functions.\nTwo optional properties that will aid in the computation required are swipeFromColumn and swipeFromRow. These properties will record the column and row number of the fruit that the player first touched when the swipe movement started. These are initialized to nil in the initializer init(size:).\nIn touchesBegan() we convert the touch location into either the column or row of the fruit touched by invoking a convertPoint() function that returns a tuple. convertPoint() ensures the CGPoint parameter passed is within the game grid otherwise it returns false. If the point is within the grid then we set the swipeFromColumn and swipeFromRow properties.\nDetecting the swipe direction is achieved in the touchesMoved() function. In this function by comparing the new column and row numbers to the previous ones we determine whether the swipe direction is left, right, up, or down. Knowing the direction we then use the trySwap() function passing the horizontal and vertical deltas.\ntrySwap() is the workhorse of the swipe to swap functions because at this point we only know the direction the player swiped but we need to determine if there are two valid fruits to swap in that direction. 
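That check is short in any language — the original is Swift, but the logic is language-agnostic. A rough Python sketch (names are illustrative, not the tutorial's actual code):

```python
def try_swap(column, row, delta_h, delta_v, num_columns, num_rows, fruit_at):
    """Return the (column, row) to swap with, or None when the swap is invalid.

    `fruit_at(col, row)` reports whether a fruit occupies that cell; the
    deltas are -1, 0, or +1, derived from the detected swipe direction.
    """
    to_column, to_row = column + delta_h, row + delta_v
    # Swipes that would leave the board are ignored.
    if not (0 <= to_column < num_columns and 0 <= to_row < num_rows):
        return None
    # There must actually be a fruit at both positions.
    if not fruit_at(column, row) or not fruit_at(to_column, to_row):
        return None
    return (to_column, to_row)

# Example: a full 9x9 board accepts an in-bounds swipe but not an edge swipe.
full_board = lambda col, row: True
```

Returning None for edge swipes and empty cells is what keeps the later swap animation from ever being asked to move a missing fruit.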
We make this determination by calculating the column and row numbers of the fruits to swap with. Eliminating the edge swipes off the board and determining whether there is actually a fruit at the new position ensures that we implement a valid swap. For today I just have a print() of the “from” fruit and “to” fruit in order to see if this is all working.\nTomorrow, I’ll animate these swipes to swap.", "score": 31.611520370792572, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "2D Animation Tutorial Using Spine for Mobile Apps\nIn the classic book by Disney animators Frank Thomas and Ollie Johnston, “The Illusion of Life”, the art of character animation is described as crafting,\n“drawings that appear to think and make decisions and act of their own volition.”\nAnimation, especially of living things, seems like a major hurdle for the mobile app developer who needs to create people, insects, birds, and lifelike plants to bring their mobile games to life. Many turn to sprite sheets, using software such as Texture Packer to turn a series of drawings into a ‘moving picture’ much like the Disney animators did, frame by frame with each sprite moving slightly to give an illusion of a moving figure when played in sequence, like an old-fashioned flip book.\nSprite sheets, however, present some problems for mobile apps if they are at all complicated. First, they are difficult to edit and maintain, as to change any element of the sprite sheet means precisely replacing all elements of a sprite sheet. One mistake and the animation looks jerky and “unreal”.\nAdditionally, sprite sheets consume a certain amount of texture memory in your app. First, you will need a copy of each element of your animation embedded into your sprite sheet; there might be dozens of images packed into a .png file. Second, you might want to content-scale that sprite sheet so that it looks good on retina screens, so you will need a @2x version of it, using even more memory. 
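That memory cost is easy to estimate: an uncompressed RGBA texture takes 4 bytes per pixel, and a @2x variant doubles each dimension, quadrupling the pixel count. A quick back-of-the-envelope check in Python (the sheet size is hypothetical):

```python
def rgba_bytes(width, height):
    """Uncompressed RGBA texture footprint: 4 bytes per pixel."""
    return width * height * 4

base = rgba_bytes(1024, 1024)    # a 1x sprite sheet: 4 MiB
retina = rgba_bytes(2048, 2048)  # its @2x version: 16 MiB
```

So shipping both variants of a single busy sheet already costs around 20 MiB of texture memory before any compression.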
For fast-moving games with many complex animations, sprite sheets can quickly become a maintenance and memory problem.\nThere must be a better way!\nEnter Spine, the easy-to-use and well-designed software by Esoteric Software. Spine was created as a Kickstarter project and has quickly become a tool of choice for many developers of mobile games. As one Corona developer noted,\n“…bought it after trying it for something like 2 seconds.. This is the coolest thing that happened for Corona based games since Corona!”\nInstead of creating frame-by-frame animations, Spine works by creating skeletal animations. In short, you create a skeleton and “dress it” into a skin, apply keyframes, and create an animation in a short amount of time. It’s an intuitive process once you understand the basics of the software.", "score": 31.275029586211808, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "This describes how game assets are created, which involves a serious amount of scripting and automation.\nAsset creation can be one of the most time-consuming things in all of game development.\nIf it can be automated, it saves an unbelievable amount of time.\nThe point of asset automation is not fast running time, but requiring as little human effort as possible.\nThis focuses on how the graphics for the game are created.\nEach character must have a separate texture sheet for all angles of the animation. 
Here is an example of a walk animation, for the right and upper-left angles:\nEven if we use Blender, we would have to rotate the model, and then click to render.\nThen repeat this process to render all frames.\nThis, however, can be automated by running a custom-made Python script, which rotates the model base by a constant angle, and then renders, and repeats, until all angles have been rendered.\nNow, after Blender rendering, we are left with around 200 files, and they all have a lot of transparent pixels around.\nFrom this, we want to create a spritesheet, one for each angle.\nFor this purpose, I have written an application using the C# language that helps to create the spritesheets with only a few clicks.\nThe following algorithm is implemented in the application mentioned above, and allows complete automation of spritesheet creation,\nas well as the animation JSON, containing the animation data.\nInput: X files, where each file is one frame of animation. They go in order: first, only frames for the R angle, then for the RD angle, and so on. (That is defined by the Blender Python script.)\nOutput: A minified sprite sheet + .JSON file for each angle. The JSON file contains animation data.\nImagine the following two sprite frames. 
Since all frames have the same size, we\ncan cut off the transparent pixels in such a way as to minimize the final image size.\nNote: although the images look identical, they are not; they differ only slightly, since they form an animation...\nAfter minification we would get a final spritesheet like this:\nAs can be seen, the transparent pixels were removed to minify the final spritesheet.\nThe approach to achieve this is as follows:\nAlthough the technique above allows serious automation of really any animation desired,\nit is not enough, and unfortunately, for certain animation sheets,\nan origin point must be defined in order to achieve correct display.\nLet's simplify the problem to only one frame per animation.", "score": 31.05157668138217, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "In order to provide a visual cue to the player, the selected fruit should be highlighted momentarily in order to indicate which fruit the player is about to swap. This selection highlighting animation can be achieved by temporarily placing a highlighted version of the sprite on top of the current sprite and having it along for the ride in the swap animation and eventually fading out.\nHaving completed the swipe to swap, it was time to update the model and animate the swap. This involved creating a new Swap struct. In Swift the struct is a value type versus a class, which is a reference type. Here it makes sense to use a struct since the swap is inert and only stores data. The logic of handling the swap is not done by the swap itself. The detection of the swap is handled in the Game Scene and the real game logic is in the Game View Controller.\nThe essence of the match-three game is to move fruits such that three fruits of the same type match up. This move is done with a swipe. The Game Scene is the best place for implementing the detection of the player’s swipes that will reposition the fruits into this match pattern. 
The reposition is called the swap. Recognizing these swipes to swap in SpriteKit is best done with the touchesBegan, touchesMoved, and touchesEnded functions.", "score": 30.953778171781902, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "Then there's something I don't understand. (An experience not wholly unfamiliar to me.) How/when can you rotate using the SpriteEngine? Back when I brought this up last time you said something about just having different animations for it. Not what I'm looking for, but you seem to be hinting at what I want now.\nIf you had a dollar for every time I asked a question, I know. Indulge me one more time please.\nAll my sprites are going to be top-down little buggers and need to face the direction they are looking in. This should be changing almost every single frame for most of the sprites, unless I design a boring combat system.\nedit: added a LSL3 reference", "score": 30.861733383971114, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "At 3:00 they discuss a method used to separate the parts of a character into individual sprites and move them, saving production time and video memory. What is this method called? I am thinking that I want to implement it for my game but need to do more research. Thanks a lot =)\nIt's called 2D skeletal animation. Essentially you're animating a skeleton that has sprites for the body parts placed on top of it. This allows you to reuse animations for different characters, and characters can have multiple sets of armor, weapons, etc.\nThere are a number of questions on the site already about it:\nAnd some other good resources:\nEnjoy your research.", "score": 30.741761079399282, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "The Skinned Model sample from the App Hub education catalogue is great for getting animated characters into your game, but there’s a bit of a flaw with the export process. 
The problem is, when you export your character from 3DS Max (and possibly other modelling programs), all you get is one animation, named ‘Take 001’. Wouldn’t it be nice if we could define different animations for different parts of the animation timeline? Well, we’re going to do just that :). As an added bonus, we’ll also be adding in events, so you can be notified when certain parts of your animation are hit.", "score": 30.671634769338553, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "The Kwik Photoshop® plugin can be used to create interactive mobile apps. One way to create an animation in Kwik is the sprite sheet. Kwik comes with its own sprite sheet script which we will use for this tutorial.\nFor the finished animation that we will create from the sprite sheet, I will use my Fred the Frog character. In the original file, Fred is on the top layer and his left arm and hand are on the bottom layer. We will animate his left arm to wave.\nThe Kwik sprite sheet script creates a sprite sheet from the layers in the Layers panel. Each layer is a different frame for our animation. Therefore, each layer will have Fred in the same position but with his arm moved left or right slightly.\nLet's build the layers for the animation.", "score": 30.185196552016436, "rank": 26}, {"document_id": "doc-::chunk-1", "d_text": "As the name suggests, the animation will be on a per-bone basis, where each body bone can have a specific action or animation. Having all the main body parts of the character separated allows the developers to create the animations directly in the engine. This new animation technique is very similar to what is used in 3D animation.\nIn this tutorial, we're going to focus on bone-based animation. 
However, note that Unity does not do true bone-based animation, so we will simulate it.\nPreparing The Sprite For Animation\nDrag the sprite file to the editor and drop it on the Sprites folder, like so:\nBefore any character is ready for animation, you need to add a\nScene to the project. Create a\nScenes folder in your Assets directory, then create a new scene and save it as\nTest.scene within this folder. At the end of this step, you should have something like this:\nNow, still in the Project tab, select the\ndragon sprite, then look at the Inspector panel:\nAs you can see in the Sprite Mode property in the Inspector, the Sprite Mode is set to Single. This means that the engine will use the entire texture as a whole when creating a new sprite. Since we have the body parts separated in the\ndragon, we don't want that to happen. We therefore need to change the Sprite Mode from Single to Multiple.\nWhen you change the option, a new button labelled Sprite Editor appears:\nCurrently, the Sprite Editor slicing tool does not work well on compressed images. In order to ensure the best result for the animated sprites, you need to change the Format value on the bottom of the Inspector tab from the default option, Compressed, to Truecolor. Then, click Apply.\nNow, select the dragon sprite and click the Sprite Editor button. A new window will pop up:\nIn the upper left corner of the window, you will find the Slice button. Click on it, and another menu will pop up:\nThis menu allows you to change the parameters of how the sprite will be sliced by the engine. If you set the slices to Automatic, the engine will try to detect the different parts of the character you have in the image. 
You can define a minimum size for the slices, a pivot (the point around which the slice rotates) and one of three methods:\n- Delete Existing will replace any existing slices.\n- Smart will try to create new slices while retaining or adjusting the existing ones.", "score": 30.107050899120477, "rank": 27}, {"document_id": "doc-::chunk-3", "d_text": "The SpriteKit Scene editor lacks some of the workflow benefits of Tiled. For example, snapping to a grid or selecting and copying objects as a group can be a bit tricky.\nChildren created within the scene editor are limited to SKNodes, SKSpriteNodes, SKShapeNodes and some other physics field types. This means to make use of our custom classes, like Hero, Enemy, Boundary, you’ll learn how to find children in the scene editor and replace them (by name) with our classes. Which is essentially what we’ll be doing when parsing Tiled files. So although the tutorial teaches both layout options, there is a lot of overlap in the code.\nEvery student graduates with a mighty parting gift!\nLike all our tutorials, you get the finished Xcode project from the course, as well as the project files from the end of each session (updated for Swift 1.2). The finished project contains an enormous amount of source code portable to any of your apps in the future.\nAfter your purchase, you’ll always have the code to…\n- Import a Tiled file into a Swift / Sprite Kit based project\n- Parse any XML data into a Swift / Sprite Kit based project\n- Play audio, either through an SKAction or AVAudioPlayer\n- Setup swipe gestures in an SKView\n- Pull children from a Sprite Kit Scene file and replace them with custom classes\n- Setup a SKPhysicsContactDelegate and listen for bodies contacting each other\n- Center a Sprite Kit world around a specific child\n- And much, much more!\nTwo affordable purchasing options…\nPurchase Option 2 - Subscription Access\nBoth Monthly and Yearly Subscribers can stream every video tutorial on the site. 
Yearly subscribers get access to the latest version of every starter kit whenever they want, plus access to hundreds of dollars worth of royalty free game art (yes, it’s an amazing deal). You can cancel your Monthly or Yearly subscription anytime directly through Paypal.Browse All Courses\nAlready a Subscriber? Get started on the course from right here.", "score": 29.75862919672175, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "- Color Layer\nIn any Cocos2D node that is currently added to a scene, you may schedule and unschedule events. For my animation, I created a separate attack overlay layer that holds the animation, and sits above the game layer. As such, I could schedule and unschedule events right into the layer to control it.\nLooking at the flash animation, you can see that there are several steps that must be followed:\n1) Attacking unit slides in\n2) Attacking unit name appears above it\n3) Defending unit slides in\n4) Defending unit name appears above it\n5) Units able to attack pull back\n6) Units able to attack charge at enemy unit\n7) An animation/effect plays in front of attack unit\n8) Screen shakes and flashes\n9) Dead units fly off the screen\n10) Surviving units grow and fade out\nIf I were to rough this out in my layer, I might have code that looks something like this:\nif((self = [super init]))\n//Store any parameters, create objects, etc.\n-(void) slideInAttackingUnit:(ccTime) elapsedTime\n//Do slide code\n[self schedule:@selector(attackingUnitNameAppears:) interval:0.5f];\nWhen Cocos2D schedules an event on a node, it will call that event whenever the interval passes. (If no interval is supplied, it will instead call that event every frame). 
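The repeat-until-unscheduled behavior can be mimicked in a few lines, with plain Python standing in for the Cocos2D scheduler (illustrative only; the real scheduler API differs):

```python
class Scheduler:
    """Tiny stand-in for Cocos2D's repeating node scheduler."""

    def __init__(self):
        self.pending = {}  # callback -> [interval, time_remaining]

    def schedule(self, callback, interval):
        self.pending[callback] = [interval, interval]

    def unschedule(self, callback):
        self.pending.pop(callback, None)

    def advance(self, dt):
        for callback, timer in list(self.pending.items()):
            timer[1] -= dt
            if timer[1] <= 0:
                timer[1] = timer[0]  # the event repeats every interval...
                callback()           # ...unless the callback unschedules itself

scheduler = Scheduler()
order = []

def slide_in():
    order.append("slide_in")
    scheduler.unschedule(slide_in)         # fire once only
    scheduler.schedule(name_appears, 0.5)  # then chain the next step

def name_appears():
    order.append("name_appears")
    scheduler.unschedule(name_appears)

scheduler.schedule(slide_in, 0.5)
for _ in range(4):
    scheduler.advance(0.25)
```

Each step unschedules itself on entry and schedules the next, which is exactly the chaining pattern the attack sequence uses.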
In the case of this animation, we only want each event to happen once, so we unschedule the current method as soon as it is entered and then schedule the following one.\nI'm sure that the scheduler is familiar to most Cocos2D developers so I'll move on to the next section.\nIf you pay close attention to the flash animation, you will see that there is actually a very short white flash when the units collide. You will also notice that there is a bit of a blueish overlay when it first starts running. Both of these can easily be accomplished using a CCColorLayer.\nPlacing a Color Layer is incredibly simple; you simply initialise it with the colour you want it to be, and add it to your scene, as below:\nCCColorLayer *backgroundLayer = [[CCColorLayer alloc] initWithColor:ccc4(128, 128, 255, 100)];\nI typically use alloc/init instead of the convenience methods where available, just to have a little more control over memory.", "score": 29.473438799580165, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "It's a stylistic choice and except in very tight memory situations, not entirely necessary. By setting the alpha to something less than a full GLByte of 255, it can function is a nice barrier between your current layer and what's behind.\nFor the white flash, using a ColorLayer in conjunction with the scheduler is quite effective. Consider the following:\nflashLayer = [[CCColorLayer alloc] initWithColor:ccc4(255, 255, 255, 255)]; //flashLayer is an instance variable\n[self schedule:@selector(removeFlash:) interval:0.05f];\n-(void) removeFlash:(ccTime) elapsedTime\nflashLayer = nil; //will be removed from memory - we don't want dangling pointers!\nThis simply adds and removes a white color layer very quickly to simulate a flash. Simple!\nCocos2D comes with a variety of actions. 
Ignoring animations, the ones I find myself using most are CCMoveTo/By, CCScaleTo/By, CCRotateTo/By, and CCFadeIn/Out.\nThe difference in the transform actions, for example CCMoveTo/By, is absolute versus relative. CCMoveTo moves an object to an absolute position; CCMoveBy to a relative position. For those who don't know what that means, the names help make sense of it, but here's a quick example: Imagine that you have an object at position 200, 200. If you were to apply CCMoveTo(100, 100) to it, it would move to 100, 100. Whereas, if you were to apply CCMoveBy(100, 100) to it, it would move to 300, 300.", "score": 29.12291220959866, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "Canvas Components, Controls and more!\nSpriteSheets let you animate graphics by loading one graphic and providing data for the frames. CreateJS has SpriteSheet and Sprite support. ZIM uses these as a base and then adds a couple of extra features.\nThe example mimics a PhaserJS tutorial but the ZIM example is 63% of the size. Also ZIM uses the power of zim.animate() to play the Sprite – with the run() method. This allows you to:\nHere is a video tutorial: https://youtu.be/WWEms6qy9KA. You can also play labelled animations. CreateJS Sprite and SpriteSheet support even more operations but these have not been brought in to the run() method. You can still use the CreateJS play(), gotoAndPlay(), gotoAndStop() methods with a ZIM Sprite, but you may be limited to one frame rate.\nWe also have found that for teaching Sprites, there are a lot of Sprite sheets out there without data – or with data that is not made for CreateJS. A ZIM Sprite can easily create SpriteSheet data for these spritesheets as long as the frames are the same size. See the ZIM Sprite Docs for more information.\nHere is a new version of the TexturePacker CapGuy on a Skateboard! This features the animation series technique new to zim.animate() – but now also available on the run() method of the Sprite.
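The absolute-versus-relative distinction between the To and By action variants described above reduces to a one-line difference. A minimal sketch in Python (the helper names are illustrative, not a Cocos2D API):

```python
def move_to(position, target):
    """Absolute: the object ends up exactly at `target` (CCMoveTo-style)."""
    return target

def move_by(position, delta):
    """Relative: the object ends up offset from where it was (CCMoveBy-style)."""
    return (position[0] + delta[0], position[1] + delta[1])

# The worked example from the text: an object starting at (200, 200).
start = (200, 200)
print(move_to(start, (100, 100)))  # → (100, 100)
print(move_by(start, (100, 100)))  # → (300, 300)
```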
This allows pausing and stopping by ID – including using the id with zim.animate() and move() animations. It also allows for a sequenced series of labels or frame numbers to be played in conjunction with the traditional transformation properties such as scale, rotation, skew, and position.\nSee this post on a DYNAMO – which is a Dynamic Sprite – so a Sprite that can change speed depending on mouse or key input, for instance! This can be operated by an Accelerator, which can speed up or slow down the whole scene, including Sprites and Scrollers.", "score": 28.787093855339993, "rank": 31}, {"document_id": "doc-::chunk-1", "d_text": "All the images can be found in the Images.xcassets directory. As for the Bird animation and the bonus animation, the sprites are stored in separate directories (Bird.atlas, BirdLifeless.atlas, powerup1.atlas): simply replace the frames of each animation with your own. Do the same for music and sounds: simply replace the sound files with your own. All the game settings (used to tweak the gameplay) live in a dedicated class (Settings.swift). Everything is clearly commented in the code, so it is really easy to understand.
If you have any issues, you can read the detailed PDF with all the instructions, or ask me for info; I will be more than happy to help you.", "score": 28.586028815171193, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "Author: Mark Suter\n- Working with the built-in sprite editor to create and edit sprites\n- Creating animations with the built-in sprite editor\n- Modifying the animation properties\n- Editing image properties including resizing images, creating transparent images, etc.\nIn this tutorial you will learn to use the sprite editor including the drawing tools, setting transparency (get rid of that ugly white box around your character!), making multiple enemies out of one original, and animation tricks (adjust speed, size, rotation, etc.) (9:21)", "score": 28.54576366345996, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "I need help with workflow suggestions for a Flash game (although the concepts would apply to other game engines, too).\nIn a game like Hero Academy:\nThere are several different character models, each with a multiplicity of animations (walking, standing, getting hurt, attacking, etc.). I am certain that each of these could be manually animated and a sprite sheet generated and used, but when you add up how many animations and core drawings would be needed... that would be near insanity to complete.\nSo, in research I found that some are creating character models in Blender, Maya, a 3d engine of some kind - setting the camera angle to 45deg, etc., then animating and exporting that sprite.\nIn each of these examples, however, the character models were 3d looking, i.e. 3d to 2d.\nWhen I look at Hero Academy, they still look like a 2d drawing.\nIf I am looking to re-create a similar art style to Hero Academy and need several animations per model, what is the most efficient/correct workflow for creating the animation? Manually creating each animation sprite in Flash?
Or utilizing 3d and texturing it to look 2d (if that is even possible)?\nEDIT: Thank you to all those who have posted great answers. To add information to the concept, one 'issue' I am contending with is the modeling/view of our desired game. To further clarify, if you look at Hero Academy's grid system (a la chess board), if you wanted an entity to be able to move vertically or diagonally this would need its own sprites. However, is this still a simple 2d side scrolling game since the camera is fixed? And they have simply designed the background map to have the EFFECT of perspective?\nIf that is the case, then using Nuoji's comments, a 2d skeleton would then again be possible.\nThe base question is: if we are looking at close to 100 different character entities all with their own movement animation, long and close range attack animations, taunts, etc., what is the most expeditious method of doing this?\nNote that I have experience in programming but no design/modeling experience. Whether we went the flash animation route or the 3d modeling route, I would be learning from scratch, so any previous skill or preference is nullified.", "score": 27.72972038949445, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "A couple of weeks back we published an article which introduced the wonderful world of iPhone programming. This article takes a step further and explains how to create animation using image sequences.\nDownloading Image Sequences:\nThis article is based on the CampFire iPhone application demo published on the http://www.appsamuck.com website. You can download the complete code and the images used in the article using the link provided on their website.\nCreating Campfire Application:\nOpen Xcode and start a new project. Select View based application from the project templates. Name your application \"Campfire\"; this will create appropriate files using the application name. Open the xib file and drag a UIImageView onto the screen.
The UIImageView control is used to display images on the iPhone screen.\nNext, open the CampfireViewController.h file and implement the following code:\nThe CampfireViewController declares the UIImageView control and exposes it using the \"imageView\" property. The implementation file is shown below:\nThe viewDidLoad method is fired when the View is loaded successfully. The imageView.animationImages property is an NSArray which contains the images. The images are kept in the NSArray in the form of UIImage instances. UIImage is used to display an image. The imageNamed: method of the UIImage class loads each image by name. The animationDuration represents the total time the animation will run. The animationRepeatCount is assigned zero, which means the animation will run continuously. Finally, we call the startAnimating method to start the animation. The animation shows the fire, and since it is looping through the images very quickly it seems like a movie is being played on the iPhone.\nIn the implementation above we have hard-coded the images. This is reasonable if you have 10-20 images, but as soon as you have hundreds of images this technique will become a big headache. In the next section we will demonstrate how to fetch the images from the resources folder and perform the animation.\nFetching Images from Folder:\nFirst, create a new folder called \"BonFireImages\" under the Resources folder and place all the images in it. The code below shows how to fetch images from the \"BonFireImages\" folder and then start the animation.\nThe NSBundle is used to locate different resources in the project structure.
The pathsForResourcesOfType method is used to fetch the resources of type \"gif\" contained in the \"BonFireImages\" folder.", "score": 27.183605052364726, "rank": 35}, {"document_id": "doc-::chunk-1", "d_text": "Also you can see that I used the x and y variables of the object to place the image at the desired location, and also used image and height variables which are available in any display object.\nIn case you are not familiar with the arc4random() function it is a function used to generate random numbers (supposedly providing a greater degree of randomization than the rand() function), and does not need to be seeded.\nNow we’re going to send the balloon upwards, and there are two ways to do that one way would be to modify the x and y coordinates of the balloon image directly, but we’ll take the easy way out and use the SPTween class.\ntime:(double)((arc4random() % 5) + 2)\nNotice that I supplied the SPTween class with the image file, how long I wanted the animation to last for (in seconds) and a transition. There are many different transitions available (which can be found in the SPTransitions.h file included with the Sparrow Framework) the linear transition is simply an animation which is done with completely linear timing from start to finish.\nNow we’re going to create the actual animation by changing a couple of properties.\n[tween animateProperty:@\"y\" targetValue:-image.height];\nAll we had to do was change the x and y properties of the image, and the Sparrow Framework modifies these properties over the duration we specified as time when we created the Tween. 
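The tween mechanism described above, driving a property linearly from its current value to a target over a fixed duration, can be sketched without the framework (a simplified model of linear tweening, not the Sparrow API; the numbers below are made up):

```python
def tween(start, target, duration, t):
    """Value of a linearly tweened property at elapsed time t (clamped to target)."""
    if duration <= 0 or t >= duration:
        return target
    return start + (target - start) * (t / duration)

# Tween a balloon's y from 480 down past the top of the screen over 4 seconds,
# like animateProperty:@"y" targetValue: in the excerpt.
print(tween(480, -100, 4.0, 0.0))  # → 480.0  (just started)
print(tween(480, -100, 4.0, 2.0))  # → 190.0  (halfway)
print(tween(480, -100, 4.0, 4.0))  # → -100   (done)
```

The juggler's job in this picture is simply to advance `t` for every registered tween on each frame.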
If you’d like to see the properties available for an image you can check the SPDisplayObject.h file.\nFinally we will add this tween to the “juggler”.\nThe juggler handles the timing for all animations, and is associated with the stage the image we are animating is on.\nNow, to add a balloon to the stage, we are going to add a call to the drawBalloon method at the bottom of the if statement in the initWithWidth method.\nYou should now see a single balloon rise through the top of the screen when you run the code, like in this screenshot:\nIf you have any trouble you can download a project to bring you up to this point here.\nIn the next part of this tutorial we are going to make it so that when this balloon is touched it immediately falls through the bottom of the screen.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-2", "d_text": "The sequence of removing and re-adding behaviors, in combination with the Push's
My current workaround is to create a ccSequence that 1) Executes AI, 2) Executes another ccSequence that does the same thing (essentially making an Action loop). What I don't like about this is that timing needs to be precise in Actions... but I can work with that.\nAnother thing I don't like is that while this is good for AI... what about rendering actions? I can easily render the result of my own Action, but what about the result of another AI's action? Such as another AI killed me. I have to wait for my Action Sequence to execute before I can check if anything happened to me, but what if my action was 10 seconds long... do I stand there before I realise I should be dead?\nI'm thinking now that I could have multiple Action sequences set up. One for my AI. One for general updates to me. But before I go too far down this path, are Action Sequences really a good replacement for ye olde Update methods?", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-4", "d_text": "Typically, you would subclass\nSKScene for each scene you require, but for simplicity, the following code simply instantiates a new scene object that is presented by the view:\nTo display content in SpriteKit, the relevant node is added to the scene, or a child of the scene. The final step in this example to display a label is to create an\nSKLabelNode and add it to the scene:", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-3", "d_text": "The figure selector window, showing the default figure and sprite (image) figures.\nAnimation Frame Controls\nAn animation can be created by creating a series of frames, where each frame differs slightly from the previous. After moving and editing your figures from their previous positions, click the 'Add Frame' button to add it to the time-line. The underlined A means that the 'A' keyboard key can be used as a shortcut/hotkey. The keyboard spacebar can also be used as a shortcut.
The time-line, showing a selected frame and the right-click popup menu\nThe time-line shows all the frames in the animation at the top of the main window. The scroll bar underneath the frames can be used to view any part of the animation if the number of frames exceeds the width of the window. The position of the time-line can be moved in one frame increments by clicking the arrows at either end of the time-line or using the keyboard arrow keys when the scroll bar has focus. The area to the left of the time-line frames shows the number of the frame being edited or added and the repeat number.\nRepeat: The value entered in the Repeat edit box determines how many times the frame should be repeated when played. This can be used to create pauses in the animation without having to create identical consecutive frames.\nEditing Frames\nFrames in the time-line can be edited by clicking on them. The main editing area will then show the edited frame. After editing the frame the changes can be stored by clicking the 'Add Frame' button or by selecting another frame in the time-line to edit. If you edit a frame and do not want to keep the changes, then you can discard the changes by clicking the same frame in the time-line that was being edited (this action cannot be undone). The < and > keyboard keys can be used to step the frame being edited left or right from the current frame being edited. This is equivalent to clicking a frame to the left or right of the current frame being edited and can therefore be combined with the Ctrl and Shift keys to select multiple frames, as described below.\nSelected Frame Controls\nFrames can be selected by clicking on them in the same way as editing a frame. Selected frames are displayed with a blue border.
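The Repeat value described above, which repeats a frame to create pauses without duplicating it in the time-line, amounts to a simple expansion when the animation is played. A minimal sketch (the frame names are made up):

```python
def expand_timeline(frames):
    """Expand (frame_id, repeat) pairs into the sequence actually played.

    A repeat of 3 means the frame is shown three times in a row,
    which reads as a pause in the animation.
    """
    played = []
    for frame_id, repeat in frames:
        played.extend([frame_id] * repeat)
    return played

timeline = [("walk1", 1), ("walk2", 1), ("rest", 3), ("walk3", 1)]
print(expand_timeline(timeline))
# → ['walk1', 'walk2', 'rest', 'rest', 'rest', 'walk3']
```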
Multiple frames can be selected by using the Ctrl or Shift keys in the same way as selecting multiple files in Windows Explorer.", "score": 26.723153813939142, "rank": 40}, {"document_id": "doc-::chunk-5", "d_text": "The change of an object itself happens like a single-frame animation by displaying single frames one after the other. With the help of the key scenes, however, you can now specify how and where these individual images should be drawn. An example illustrates this.\nYou can create a single-frame animation to set the individual arm and leg positions of a moving person. You can then use this animated person in two key scenes in different positions. When creating the overall result, the individual intermediate positions are calculated automatically.\nAt each intermediate position, the following picture of the animated person is drawn. When an image cycle of the animated person has been completed, playback starts from the beginning.\nYou therefore need to draw far fewer images for the more complex single-image animation, because repetitive motion sequences are used multiple times.\nIn a keyframe animation any number of animated objects can occur. It is even possible to use a keyframe animation in another keyframe animation. Animation packages like Flash allow you to nest animations as deep as you like.\nFor some movements, the use of keyframes is too inaccurate, as the following example demonstrates:\nBall over corner\nThe ball moves rigidly to the defined key positions. In the real world, however, it would follow a gentler, curved path. These requirements are met by the technique of path animation.\nYou can draw a motion path for an object along which the object should move. For drawing the movement path, you can use polylines, gentle curves or freehand tools.\nAnother property of path animation is that objects can not only move along the path, but can also orient themselves to it. This means that an object is still rotating during the movement.
The gradient of the curve serves as the rotation angle for the animated object.\nIn this form of animation, one starts from a single image scene and specifies, in a linear list of actions, how this scene should arise and/or change.\nThe best-known example is probably the custom animation in PowerPoint. In this presentation software you design individual slides with graphic objects, e.g. texts, diagrams or pictures.\nAnimation lists determine how and when the individual graphic objects appear, change or disappear. Each individual action describes exactly one animation sequence, e.g. the fly-in of an image or the color change of a frame. The animation duration is set as a property of the action.\nThe exact animation flow can be controlled to a limited extent.", "score": 26.644262032976393, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "Animations on iOS\npaul at livecode.org\nMon Dec 14 16:15:21 EST 2015\nI think it was Scott Rossi that figured out a way to play an animation by storing images as text in custom properties, then loading each image in turn into an image object.\nI would imagine this may be limited to smaller, less complex animations, but up until now I hadn’t tried it myself, so I had a go…\nHaven’t tested on mobile though, my LC & Xcode versions are out of sync again.\n> On Dec 14, 2015, at 11:23 AM, Ben Rubinstein wrote:\n> What are my options for displaying an animation in a portion of the card/screen on iOS?\n> Currently I've tried:\n> 1) making it into a video, in a very limited range of formats, and playing it from an external file using a native control\n> Pros: works, choice of controller etc, plays from a separate file and starts up fairly quickly\n> Cons: video formats not super-efficient for animation, and at least as I've managed so far, limited to certain resolutions - I'm forced to crop or squash my original animation.\n> 2) making it into a GIF on the card, which works quite nicely except that there's an enormous delay going
to the card, presumably as the animation is buffered.\n> Pros: can be exactly the size I want; plays quite smoothly\n> Cons: I've not managed to play this from an external file, and if it's embedded on the card there's an unacceptable delay.\n> Is there a third way? Should I be able to set the filename of an image object to an external gif file? Are there some video formats accepted on iOS which are good for animation and which will allow arbitrary dimensions?\n> use-livecode mailing list\n> use-livecode at lists.runrev.com\n> Please visit this url to subscribe, unsubscribe and manage your subscription preferences:\nMore information about the Use-livecode", "score": 26.357536772203648, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "Tween: (or Tweening) is short for in-betweening, the process of generating intermediate frames between two keyframes. This gives the appearance that the first keyframe evolves smoothly into the second keyframe, providing a seamless view of motion or value change.\nTweens are the backbone of your Animatron project - by creating and combining different tweens, you are designing the animation and determining the ways that the objects in your project interact with each other on the canvas.\nThe four types of tweens in Animatron are:\n- Translate - changes in an object's position on the canvas.\n- Rotate - changes in an object's angle.\n- Scale - changes in an object's size.\n- Opacity - changes in an object's transparency.\nHere's a simple project that uses all four tweens. We'll start with an image of the Animatron Hero (our company mascot) placed slightly outside of the canvas. Note that we're in Animation mode and the playhead is set to 0 on the timeline ...\nThen we'll set the playhead to 2.5 seconds and drag the Hero to a new location on the canvas, so it'll appear to fly into the frame from off-screen. 
A translate tween (shown in yellow) appears on the timeline, with black keyframes marking the beginning and end points in time of the movement.\nNext, we'll rotate the Hero 720 degrees. The rotate tween appears in green below the translate tween we created earlier - the Hero will rotate 720 degrees between 2.5 and 3 seconds.\nWe'll also add a scale tween between 2.5 and 3 seconds, so the Hero grows larger during this time.\nThen we'll add another translate tween to move the Hero to the bottom of the screen between 3 and 3.5 seconds - the pause in the translate tween is shown by the grey space, which indicates a static tween.\nFinally, we'll move the playhead to 5 seconds and lower the opacity to 0%. Now we have an opacity tween shown in pink - the Hero will change from 100% opacity to 0% opacity between 3 and 5 seconds.", "score": 25.79563531895495, "rank": 43}, {"document_id": "doc-::chunk-2", "d_text": "It is often useful to sequence many actions together to save code; for example, in order to give the unit portraits a bit of a jump back, I use the following code:\nleftSprite.position = ccp(-leftSprite.contentSize.width / 2,\nleftSprite.contentSize.height / 2);\nCCDelayTime *delay = [CCDelayTime actionWithDuration:leftDelay];\nCCMoveTo *move1 = [CCMoveTo actionWithDuration:0.23f position:ccp(leftSprite.contentSize.width / 2, leftSprite.contentSize.height / 2)];\nCCMoveTo *move2 = [CCMoveTo actionWithDuration:0.02f position:ccp(leftSprite.contentSize.width / 2 - 5, leftSprite.contentSize.height / 2)];\n[leftSprite runAction:[CCSequence actions:delay, move1, move2, nil]];\n(leftDelay is a value I use to control which of the sprites appears first).\nTake note of two things; first, the CCDelayTime at the beginning. This is useful if you want to add a delay to your animation. Next, check out the CCSequence; this allows you to string a bunch of actions together, and they will then happen one after another.
CCDelayTime is pretty much only useful when used in conjunction with CCSequence, as it literally does nothing. When you have listed all of your objects, you must add nil (the sentinel) so that the action knows to expect no more. If you don't do so, you will get the warning, \"Missing sentinel in function call\".", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-4", "d_text": "Just drag a sprite onto the black dot sprite within the Hierarchy to group it under the black dot.\nOn the next image you can see how the sprite hierarchy should look after you have grouped the game objects.\nBefore moving on, rename your base game object to\nDragon. When you move the\nDragon game object, you can now move all the parts of the character on the scene.\nBut what if you want to move just one single sprite? For instance, if you want to move just the hand, you know that the arm is connected to the hand, so, if you move it, the whole hand should move too, correct? If you try to do this, you will see that this is not the case. When you select the arm and move it, the remaining parts of the body stay still. So, in order to move the complete body part, you need to create a hierarchy inside your sprite.\nTo make this process more intuitive, rename the body parts (by right-clicking and selecting Rename) with their respective names, like so:\nRegarding the hierarchy, think of the character as a tree, with roots, a trunk, and branches. The black dot acts like the root of the tree; when you move it, all the character body moves. After the root comes the trunk; in this case, your trunk will be the body of the character, so this will be the next sprite in the hierarchy. All the other body parts are branches of the tree.
However, you can still have branches of branches like, for example, in the tail—the\nTail Tip is a branch of the\nTail, and so on..\nOrganize the sprites of your character following this hierarchy:\nNow, if you move the upper arm, all the parts of the arm will follow. Great, isn't it?\nRe-Ordering the Sprites\nBefore you can start animating the character, there is still one last detail we need to take care of. As we discussed, the sprite parts are not being drawn in the correct order. To solve this, you must change the value of the Order in Layer parameter for each individual sprite.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-3", "d_text": "When I need to use my high-level behavior, I am just going to actually add alloc init that new behavior and add it to my animator.\n[ Pause ]\nSo something which is useful when you're building your own behavior is to think in terms of API.\nWhat is the API you want to define on such an interaction behavior?\nIt could be something really simple like initWithItems, like what we did just a minute ago, and we'll see another example in this session when you can actually define a more complex API.\nIt's useful to think about how that is going to integrate with your existing application flow, like if you already have a gesture, it's always a good thing to match the ending gesture velocity with the system you're creating in dynamics.\nAnd if you need that, it's not always the case, but if you need that, you can define per step actions.\nIt's just a block, you can define on UIDynamicBehavior and we're going to invoke that block with each simulation step.\nSo that's interesting when you want to, for instance, change the force based on an item position, to implement magnets for instance.\nOf course, because we are running that with each simulation pick, you have to be careful about what we do what you do in this block.\nThere is one catch about combining behaviors, it's this UIDynamicItemBehavior class you can 
use to setup properties to your items.\nWith UIDynamicItemBehavior, you can change density, damping, you can block rotation, you can change friction or elasticity, and I was using that in my previous demo.\nAnd there is no problem about combining many UIDynamicItemBehavior, especially if you are using if you're configuring distinct properties in each, because that's not going to conflict, right?\nIf you do want to change the same property in different UIDynamicItemBehavior, that's still possible, but we have to decide which one we pick.\nAnd the last one wins.\nWe actually have quite a precise definition of what the last one is.\nIt's a pre-order depth first walk of the behavior tree.\nGet it? Let's check that rule on an example, right?\nSo here is my behavior tree.\nI have a few behaviors I don't care about and three UIDynamicItemBehaviors configuring elasticity and friction, but the question is what are the actual venues in my dynamic item?\nSo let's walk the behavior tree.\nWe start with default.\nSo first behavior is not a dynamic item behavior, so we don't care.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Loading In The Balloon Textures And Creating Upward Animation\nThe first thing we are going to do is load in the balloon png files. 
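The 'last one wins' rule quoted above, resolved by a pre-order depth-first walk of the behavior tree, can be sketched with a toy model (this is not the UIKit Dynamics API, just an illustration of the traversal):

```python
class Behavior:
    """Toy behavior node: optional item properties plus child behaviors."""
    def __init__(self, properties=None, children=None):
        self.properties = properties or {}   # e.g. {"elasticity": 0.5}
        self.children = children or []

def resolve_properties(root):
    """Pre-order depth-first walk; for duplicate keys, the last visit wins."""
    resolved = {}
    def visit(node):
        resolved.update(node.properties)     # later visits overwrite earlier ones
        for child in node.children:
            visit(child)
    visit(root)
    return resolved

# Three item-behavior-like nodes configure elasticity/friction; plain nodes don't.
tree = Behavior(children=[
    Behavior(),                                        # not an item behavior
    Behavior({"elasticity": 0.2, "friction": 0.1}),
    Behavior(children=[Behavior({"elasticity": 0.8})]),
    Behavior({"friction": 0.9}),
])
print(resolve_properties(tree))  # → {'elasticity': 0.8, 'friction': 0.9}
```

The nested child setting elasticity is visited before the final sibling setting friction, so both of the later values win, matching the walk described in the excerpt.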
We’ll add an array variable to the game interface file (Game.h) as we will want to access the balloons from other places within the class.\nWe’re also going to add in a Sparrow sprite (sprites are display object containers), and we will use this sprite to hold the balloon images so that we can easily stop and reset the game.\nNow we’ll head back to the game implementation file and load the textures into the array.\n[balloonTextures addObject:[SPTexture textureWithContentsOfFile:@\"bluetutorial.png\"]];\n[balloonTextures addObject:[SPTexture textureWithContentsOfFile:@\"greentutorial.png\"]];\n[balloonTextures addObject:[SPTexture textureWithContentsOfFile:@\"indigotutorial.png\"]];\n[balloonTextures addObject:[SPTexture textureWithContentsOfFile:@\"orangetutorial.png\"]];\n[balloonTextures addObject:[SPTexture textureWithContentsOfFile:@\"redtutorial.png\"]];\n[balloonTextures addObject:[SPTexture textureWithContentsOfFile:@\"violettutorial.png\"]];\n[balloonTextures addObject:[SPTexture textureWithContentsOfFile:@\"yellowtutorial.png\"]];\nNow the reason why I loaded these balloons as SPTexture objects rather than SPImage objects is because I will be using these textures to create images throughout the class, and rather than loading them in each time I can use the texture objects to quickly create images (you'll see this in a moment).\nNow let's create the sprite for our playing field and add that sprite to the stage.\nThis sprite will hold the balloons and the content used to reset the game, so that we can easily clear the screen and stop animations.\nNow we're going to create a new method and call it addBalloon (make sure to add it to the interface too):\nWe're going to use this method to create a balloon and send it upwards. Don't forget to also declare this method in the Game.h file.\nThe first thing we are going to do is add in a balloon image of a random color, and place it at a random location below the screen.
Place this code in the addBalloon method we just created.\nSPImage *image = [SPImage imageWithTexture:[balloonTextures objectAtIndex:(arc4random() % [balloonTextures count])]];\nimage.x = (arc4random() % (int)(self.width-image.width));\nimage.y = self.height;\nNotice here that I used imageWithTexture to use the texture object.", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "I am not sure if anyone knows or has done some benchmarking, but say I want to slide an object from position A to position B - is an animation more efficient, or is a script doing linear interpolation between the two points as efficient? Are there other, more efficient ways to do this?\nIn any answer, would the same apply for iOS targets?\nP.s. If nobody knows I think I'll benchmark for iOS and post my results\nI'd say that animation is faster as it is 1 line of code, and not multiple... But then again, you're calling a whole part of a file, so I don't know...\nYeah I would agree with Justin, as animation would just be reading values to set the transform instead of calculating it.\nI tried both in my program and the animation looked smoother to me. That said, I may have had a flawed script and would like if you benchmarked it for the iOS and posted the results.\nI asked a question like this before, and just got a bunch of annoying responses telling me to perform tests. I suggest you just post your test results and mark this as answered. Any information would be helpful; taking time to test for best practice on everything is not something some of us care to do; I'd rather be told what to do by someone I trust, so I can actually make games, instead. If I ever happened to come across contradictory results, oh well. I'll accept that possibility in exchange for the time I gain by not testing. :-P\nYep, it's a tricky one. It feels like the animation should be faster as it's fired and the native code can handle it - but how heavyweight is the native code?
More so than my C# compiled into native code?\nGiven that lots of things are working, testing this out might need to wait until I do some re-factoring to make some of the code more production grade, but I'll post the results as an answer for iOS, probably be tested on 3G S.\nAnswer by Bovine\nJul 17, 2012 at 06:22 PM\nSo, I'm going to answer this in a slightly dubious way...\nWe have a tile based dungeon crawler (see link in my bio for a video, coroutines used in this vid) and we were using a Coroutine to move the player from tile to tile but I've now changed this to an animation.", "score": 25.000000000000068, "rank": 48}, {"document_id": "doc-::chunk-1", "d_text": "(CAAnimation button)\nThe “CAAnimation” button invokes the method\n- (IBAction)doAnimation:(id)sender. It performs a whole sequence of animations. It does this by creating a CAAnimationGroup, and then creating a sequence of individual CAAnimation objects of different flavors. It sets the beginTime property of each animation so that each animation step in the animation group begins when the previous animation finishes.\nWhat you will learn:\nThis project demonstrates a wide variety of animation techniques\n- Using CABasicAnimation to animate a property and move images around on the screen.\n- Using different animation timing functions like kCAMediaTimingFunctionLinear, kCAMediaTimingFunctionEaseIn, and kCAMediaTimingFunctionEaseInEaseOut to get different effects\n- Using CAKeyframeAnimation and a CGPath to animate a layer along a curved path (a figure 8).\n- Creating a custom subclass of UIView that has a CAShapeLayer as its backing layer so you can draw shapes in a view “for free.”\n- Adding a CGPath to a shape layer to draw shapes on the screen.\n- Using CAAnimationGroup to create a linked series of animations that run in sequence\n- Creating a very clean “per animation” completion block scheme using the fact that CAAnimation objects support the setValue:forKey: method.
I add a code block to an animation object and set up the animation delegate’s animationDidStop:finished method to check for a special key/value pair with the key kAnimationCompletionBlock.\n- Using the cumulative property on animations to create a single repeating animation that continuously rotates a layer by any desired amount.\n- Using a UITapGestureRecognizer to detect taps on a view.\n- Detecting taps on a view while it animates “live” by using the hitTest method of the view’s presentation layer\n- Pausing and resuming animation on a layer.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-2", "d_text": "If you tell Spine what folder to browse for, it will import the images you need for your animation. I named these images as well.\nDrag the skin (images) from the right panel onto the bones. To make sure the elements are skinned in the right order, you can select one of the elements in the panel and click ‘+’ or ‘-‘ to move it above or below in the order of slots.\nNext, connect the bones to the skin by selecting one of the elements of the skin you just imported and clicking ‘set parent’ in the right panel. Once a skin element is given a bone parent, you can arrange a pose by dragging the bone.\nNow that the skins are arranged along with the bones, you are ready to animate! As a demo, I have created a ‘lazy bee’ game interface. This is a sample Corona SDK app where the bee rests on a honeycomb. Feed the bee ‘pollen’ by clicking on it and click on the honeycomb for the bee to fly to its new space. If you don’t feed the bee, it remains in ‘rest’ mode on the honeycomb, moving only slightly.\nClick the left-hand ‘setup’ text to toggle to the animate screen. Here is where we start experiencing Flash déjà vu, as we will start assigning poses to keyframes.
For ‘lazy bee’, I need to create two animations, one a ‘rest’ state with a pulsing wing movement, and a ‘flying’ state with a looping wing flutter and body wiggle effect.\nTo do this, click on ‘Animations’ in the right hand tree and generate a new animation. Click keyframe 0 and select it as “Loop start” and 15 as “Loop end”. Then click each keyframe successively and move the bones (with the skin attached) very slightly to make the wings and body move. For the ‘fly’ animation, the wings move more dramatically and more quickly, a setting controlled via the ‘playback’ button. For the ‘rest’ animation, the wings move only a little and over only eight keyframes.\nFor each keyframe, rotate, translate (move) or scale a particular bone, and then click the ‘key’ button next to the property to save the keyframe. Unchanged keyframe buttons start out as green, edited keyframes orange, and when changed and saved the button turns red.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-6", "d_text": "And in the ScrollView session this morning, Josh and Eliza showed you how to actually build a Messages like effect with that technique.\nOr you can create and add new items on the fly.\nThe key here is to create the animator with your collection layout instance, add behaviors and add collection view layout attributes to these behaviors.\nWe are then going to change position and rotation on these instances.\nWe have some predefined, some convenient support for dynamics for collection view in dynamics.\nWe take care of invalidating the layout if anything changed in the system and we also pause and resume the animator if your layout is no longer the current layout for the collection view because in collection view, you can switch layouts.\nWe also provide convenience methods for implementing your layout so you can ask the animator itself for layout attribute for cell at index path for supplementary views and for decoration view.\nSo we know that's a layout so we
help you in implementing this method in your layout.\nYou can ask the animator.\nWe have, for layout updates, the usual collection view methods, so prepareLayout is usually when you can instantiate an animator or create your initial state and prepare for update which is another layout method, you can add new items to your behaviors.\nAnd there is this very important method in collection view which is layoutAttributesInRect which possibly basically defines where the cells are going to be.\nAnd to implement this method, we have itemsInRect in the animator.\nSo that's really easy.\nYou can ask the animator, \"Give me all the items you're tracking in this rect.\"\nThen you can combine these items with maybe attributes which are not animated.\nAgain, the way you design your system will have a direct impact on the number of cells you can animate.\nI would like to show you an example of collection view using dynamic for a specific effect.\nThat's actually an example from the collection view sessions last year when I was dragging a cell in a layout.\nSo we are going to do that the Dynamics way.\nSo I select a few cells.\nSo the effect is maybe a little bit too much but you get the idea.\nSo we have these cells connected to springs and reacting to the gesture.\nAnd when I end my gesture, I just clear the animator.\n[ Applause ]\nHow complex is that?\nYou just need to decompose this program.\nWhy do I need to animate that?\nThe way I did it, perhaps, there are many solutions to this problem.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "In the previous post, I wrote about UIKit Dynamics covering the following behaviors:\nThis time I’m going to talk about the last two ones,\nThese new behaviors are implemented in the same way we implemented the behaviors in the list above, however they will behave differently as their name implies.\nAs usual, let’s assume you have xCode 6 Beta (any beta version) installed, with an empty View 
Controller set up. Then let’s add a\nUIDynamicAnimator and a\nUICollisionBehavior just in case:\nWe will start with the push behavior. Note at line 18, for the purposes of this tutorial, we have added a transform to our greenBox to rotate it a little bit (45 degrees). Now let’s declare a push behavior and instantiate it. Then add it to our dynamic animator:\nThe Push Behavior:\nThe push behavior is very simple. Basically, the UIView that receives the push is “pushed” away in a specific direction and speed, based on the push settings. In the real world, an object that is pushed would stop because of gravity, friction and other physical factors. In space though, if you push an object, it will travel forever in that trajectory, until it finds some meteor to smash into. But that’s another story.\nIt’s important to know that this behavior has two modes:\nUIPushBehaviorMode.Instantaneous and\nUIPushBehaviorMode.Continuous. The Instantaneous mode makes the UIView receive only one push at a time. That means that after some time, the movement of the object slows down until it stops (unless you give a high elasticity to the object for example, at that point it will bounce around forever).\nOn the other hand, the Continuous mode will push the object repeatedly, adding more speed to it gradually each time. This would be ideal for rockets, cars and space ships. If you do game development this could be useful.\nIn the code above, the push mode is Instantaneous, so the box is pushed upwards, colliding with the edge of the screen and bouncing back. Note that we don’t have any gravity set up in this example.
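The difference between the two push modes described above can be modeled with a toy integrator. This is a minimal plain-Python sketch, not UIKit code; the step size and magnitude are arbitrary:

```python
# An instantaneous push is a one-time change in velocity; a continuous
# push keeps adding velocity on every simulation step, so speed grows
# over time (no friction or gravity is modeled here).

DT = 1.0 / 60.0          # simulation step, 60 steps per second

def simulate(mode, magnitude, steps):
    velocity = 0.0
    for step in range(steps):
        if mode == "instantaneous" and step == 0:
            velocity += magnitude          # single impulse at the start
        elif mode == "continuous":
            velocity += magnitude * DT     # force applied every step
    return velocity

v_once = simulate("instantaneous", 10.0, 120)  # stays at 10.0 (no friction modeled)
v_cont = simulate("continuous", 10.0, 120)     # keeps growing, to about 20.0 after 2 s
print(v_once, v_cont)
```

This matches the text: without other behaviors the instantaneous push coasts at a constant speed, while the continuous push accelerates the object for as long as it is active.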
At line 30 we are adding both an angle of -180 degrees to the push (upward direction) and a magnitude value, which is the “strength” of the push.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-1", "d_text": "You will find a full (seems to be) list of the dictionary and anim names here: http://docs.ragepluginhook.net/html/62951c37-a440-478c-b389-c471230ddfc5.htm\nImportant: You need to request the animation Dictionary before you start using it in your script:\nNative.Function.Call(Native.Hash.REQUEST_ANIM_DICT, sDict) sDict – The anim dictionary\nTo check if the Dictionary was loaded you can use:\nNative.Function.Call(Of Boolean)(Native.Hash.HAS_ANIM_DICT_LOADED, sDict) sDict – The anim dictionary\nHandling the animations\nTo handle the anims we have some interesting methods like:\n-STOP_ANIM_TASK\n-IS_ENTITY_PLAYING_ANIM\n-GET_ENTITY_ANIM_CURRENT_TIME\n-SET_ENTITY_ANIM_CURRENT_TIME\n-SET_ENTITY_ANIM_SPEED\nNative.Function.Call(Native.Hash.STOP_ANIM_TASK, thePed, – Ped playing the anim sDict, – Anim dictionary sAnim, – Anim name speed) – Stop speed, used for smoothness control\nNative.Function.Call(Of Boolean)(Native.Hash.IS_ENTITY_PLAYING_ANIM, theEntity, – Entity/Ped playing the anim sDict, – Anim dictionary sAnim, – Anim name 3) – Unknown\nNative.Function.Call(Of Double)(Native.Hash.GET_ENTITY_ANIM_CURRENT_TIME, theEntity, – Entity/Ped playing the anim sDict, – Anim dictionary sAnim) – Anim name\nObs.: The time returned is a number between 0.0 and 1.0 (ex.: 0.35); it represents how much of the total playback has been played, 0.35 for example means 35% 😉\nNative.Function.Call(Native.Hash.SET_ENTITY_ANIM_CURRENT_TIME, theEntity, – Entity/Ped playing the anim sDict, – Anim dictionary sAnim – Anim name time) – New time\nObs.: The time param here uses the same idea as the return of GET_ENTITY_ANIM_CURRENT_TIME method, so, to set the anim to half playback time you should use 0.5 for example.", "score": 24.061979944327575, "rank": 53}, {"document_id":
"doc-::chunk-0", "d_text": "Hacking UIView animation blocks for fun and profit\nIn this article, I'm going to explore a way that we can create views that implement custom Core Animation property animations in a natural way.\nAs we know, layers in iOS come in two flavours: Backing layers and hosted layers. The only difference between them is that the view acts as the layer delegate for its backing layer, but not for any hosted sublayers.\nIn order to implement the\nUIView transactional animation blocks,\nUIView disables all animations by default and then re-enables them individually as required. It does this using the\nUIView doesn't enable animations for every property that\nCALayer does by default. A notable example is the\nlayer.contents property, which is animatable by default for a hosted layer, but cannot be animated using a\nUIView animation block.", "score": 23.642463227796483, "rank": 54}, {"document_id": "doc-::chunk-1", "d_text": "For example, if you want to make something drag-able on a canvas, you need to deal with at least three events: You need to begin a drag when the mouse is pressed, update when the mouse is moved during a drag, and terminate the drag when the mouse is released. Also, you need to preserve state. And this is exactly what behaviors allow you to do. Behaviors let you encapsulate multiple related or dependent activities plus state in a single reusable unit.\nWe will explain more details in future posts.\nTarget & Source\nAs we talk about Triggers and Actions, there are a couple of other important pieces that play into this. Let’s repeat the sentence from above, slightly modified:\nWhen I press the “open exit door” button, the exit door opens.\nThis is getting rather colorful. Let’s look at the magenta part of the sentence first. Imagine we have a room with multiple doors that can be opened from a control panel. 
“The door opens”, as in the original phrase, is not specific enough; we need to know which door the “open” action should be applied to. Many actions therefore have a Target property that points to the element that the action should be applied to.\nAs you apply Actions by dragging & dropping them on a UI element on the artboard, we make the object you are dropping the Action on the default target for your Action. We also give you a property editor in the property inspector that lets you choose any other object in the scene as your target.\nThe first part of the original sentence also is not specific enough: “When I press the button” does not really tell me which button. The revised sentence therefore has a clarification, in orange, that clearly states which button we mean. Many Triggers therefore have a Source property that allows you to configure on which UI element your trigger is supposed to listen.\nNext in this series, we will discuss an example Action. Stay tuned…", "score": 23.444563069643632, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "This is a demo project that illustrates various animation techniques\nIt shows 3 different kinds of animations:\n- A simple UIView animation that animates an image in a straight line while increasing the scale of the image and rotating it around its axis\n- A “clock wipe” animation that gradually reveals an image in a circular arc like a radar display, then hides it again\n- A complex sequence of animations that are managed using a CAAnimationGroup.\nUIView animation (View Animation button)\nThe UIView animation is performed by the method\n-doViewAnimation: (id) sender in viewController.m. It uses the method\nanimateWithDuration:delay:options:animations:completion: to do its job. UIView animations modify animatable properties of one or more views. It is possible to animate multiple animatable properties of multiple view objects with a single UIView animation call.
The doViewAnimation method animates the view’s center, scale, and rotation all at the same time.\nClock Wipe animation (Mask Animation button)\nThe clock wipe animation is performed in the method\n- (IBAction)doMaskAnimation:(id)sender;. It works by creating a shape layer (\nCAShapeLayer) and setting it as the mask for an image view’s layer. We set the shape layer to contain an arc that describes a full circle, where the radius of the arc is 1/2 of the center-to-corner distance of the view. The line width of the arc is set to the arc radius, so the arc actually fills the entire image bounds rectangle.\nCAShapeLayers have the properties strokeStart and strokeEnd. Both values range from 0.0 to 1.0. Normally strokeStart = 0 and strokeEnd = 1.0. If you set strokeEnd to a value less than 1, only a portion of the shape layer’s path is drawn.\nThe doMaskAnimation method sets\nstrokeEnd = 0 to start, which means the path is empty, and the entire image view is hidden (masked). It then creates a CABasicAnimation that animates the strokeEnd property from 0.0 to 1.0. That causes the layer’s path to animate an ever-increasing arc. Since the line thickness for the shape layer is very thick, the arc fills the entire bounds of the image view, revealing an ever-increasing portion of the image view.\nThe animation looks like this:\nCAAnimationGroup animation.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-4", "d_text": "--include the Spine library\nlocal spine = require \"spine-corona.spine\"\n--include the json file generated by Spine\nlocal json = spine.SkeletonJson.new()\nlocal skeletonData = json:readSkeletonDataFile(\"animations/bee.json\")\n--draw the skeleton using the data\nlocal skeleton = spine.Skeleton.new(skeletonData)\n--dress the skeleton with the image files\nfunction skeleton:createImage (attachment)\nreturn display.newImageRect(\"animations/images/\" .. attachment.name ..
\".png\",100,100) end --place the animation on the screen in the right position skeleton.group.x = display.contentWidth/2 skeleton.group.y = display.contentHeight/2 skeleton.flipX = false skeleton.flipY = false --get ready to make it move local stateData = spine.AnimationStateData.new(skeletonData) local state = spine.AnimationState.new(stateData) --set it to its ‘rest’ state state:setAnimationByName(0, \"rest\", true, 0)\nYou can see the final product here:\nIn this tutorial, I’ve outlined the bare minimum you can do with Spine. I highly recommend it for use in your games and mobile apps when you need to liven up your interface with an animation. With this tool, complex animations become manageable for even the smallest Indie development shops.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-5", "d_text": "For instance, if what you're animating is a view of something on the screen, you can define a subview, apply a scale transform or change the size of the subview or something like that.\nAgain, we need an initial state, we need to correct the initial state, so we need a size and we need reasonable position.\nAs I said, MAXFLOAT is not a reasonable position.\nWhat can you do with that?\nOne interesting use case for dynamic items is to sanitize or change the value we sent.\nYou can use a single dynamic item to actually animate the same way many different things.\nYou can map position or rotation, which are the only two values we compute to something else, like mapping to scale transform or instead of animating a rotation, animating a 3D effect.\nSo if you need to animate something which is not a view or a collection view layout attribute, do not define a view hierarchy on the side just to be able to use dynamics.\nUse a dynamic item.\nSo let me introduce a really stupid example of dynamic item which doesn't display anything on screen, well, depends on what you call screen actually.\nYou could just log what we compute.\nYou could keep 
everything in a dictionary.\nYou can do whatever you want with that.\nLet's talk about collection view.\nIn collection view, you can use Dynamics in three different ways.\nYou can decide to use Dynamics for very specific animations like when you're selecting a cell and you want a very specific effect for that selection for instance.\nIn that case, you just need to create a dynamic animator as this animation or interaction and just remove it after that.\nThe other thing you can do is to animate a subset of a layout like you have a few cells, you want to drag these cells and after that, you're done.\nSo you can combine animated and non-animated cells.\nYou can build an entire layout with Dynamics that works for, well, non-huge data source.\nProblem is, in dynamics, what is off screen in the system might impact what is on screen.\nSo even if you just generate cells on screen for what is visible, you might need to simulate the entire system.\n[ Pause ]\nAgain, you need to provide some initial state for your items and you have many ways to do that, you can compute that initial state, create layout attributes for that state, and feed that to dynamics.\nYou can subclass an existing layout.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "I don't want to put you off, but this is by far the longest project in the series. It's not the most complicated, but it's long, coming in just short of 500 lines in total. That said, I hope it'll be worth it, because the end result is great: we're going to make a Fruit Ninja-style game, where slicing penguins is good and slicing bombs is bad. I think I must unconsciously have something against penguins…\nAnyway, in this project you're going to meet\nAVAudioPlayer, you're going to create\nSKAction groups, you're going to create shapes with\nUIBezierPath, and more. 
So, it's the usual recipe: make something cool, and learn at the same time.\nThis project is hard because you need to write a lot of code before you can start to see results, which I personally find frustrating. I much prefer it when I can write a few lines, see the result, write a few lines more, see the result again, and so on. That isn't possible here, so I suggest you make some coffee before you begin.\nStill here? OK!\nCreate a new SpriteKit project in Xcode, name it Project23, then do the usual cleaning job to create a completely empty SpriteKit project: remove all the code from\ntouchesBegan(), change the anchor point and size of GameScene.sks, and so on.\nYou should also download the files for this project from GitHub (https://github.com/twostraws/HackingWithSwift), then copy its Content folder into your Xcode project.\nPlease force the app to run only on landscape iPads before continuing.\nReminder: Don’t forget to use a real device for this project, or, if you must, the lowest-spec iPad in the simulator.\nLEARN SWIFTUI FOR FREE I have a massive, free SwiftUI video collection on YouTube teaching you how to build complete apps with SwiftUI – check it out!", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "So this doesn't work:\nanimation.easingCurve = QEasingCurve.OutBounce\nbut this does:\nanimation.easingCurve = new QEasingCurve(QEasingCurve.OutBounce)\nThe Smoke bindings are capable of doing implicit conversions, though, as Richard Dale pointed out in a recent blog post. So there is hope in the long run. :)\nTwo more pieces of errata:\n- The MovementDirection enumeration for Animations was not being created in 4.4.0; this has been fixed and in 4.4.1 and beyond one can indeed write things like \"animation.movementDirection = animation.MoveLeft\". 
Until then you need to use the literal values of the enumeration.", "score": 22.27027961050575, "rank": 60}, {"document_id": "doc-::chunk-2", "d_text": "I basically just implemented this UIDynamicAnimatorDelegate so we will know when the animator actually stops and starts again.\nSo I can just drag that view, there are no other behaviors, just collisions.\nSo let's add gravity.\nSo now, when I actually move this view, the effect is, of course, completely different.\nIt's moving a little bit too much, motion sickness is not something that I would like to have in this demo.\nSo we're going to add a UIDynamicItemBehavior which is a way to set up some low-level properties.\nI'm going to set up resistance which is a way to apply damping on velocity.\nSo the feel is completely different.\nI could add a force behavior going to the right, an immediate instantaneous impulse behavior and keep my attachment behavior.\nSo I see that this force on the view that's trying to move it to the right, so let's stop that.\nAnd the other thing is I could also change other low-level properties on this view like the elasticity which is the restitution on collision.\nSo we have a view which is obviously really happy [laughter] to be here.\nSo let's just turn off collisions and that's the end of this demo.\n[ Applause ]\nAnd each action was just really add or remove behavior.\nSo what does that mean?\nIt means that the effect you want is really about building a behavior tree.\nAnd the behavior tree can be using predefined behaviors like a collision behavior.\nBut maybe your own behavior is like a magnet-like behavior or a drag behavior which are going to be built on top of these predefined behaviors.\nAnd then you need to associate items to these behaviors, and that is something that you can do at your high-level behavior API level.\nYou could directly add the same items to predefined behaviors or only add just one to something just for a while when I want to drag this item.\nHow do you
build your own behavior?\nYou just have to subclass UIDynamicBehavior.\nAnd let's say I want to implement, again, this BouncyFallBehavior.\nI'm going to define initWithItem initializer.\nHow do I implement that?\nThe first thing I need is to actually create the sub-behaviors for my high-level behavior, so I need gravity and collisions here.\nIf needed, I will configure this collision behavior.\nAnd the last thing is adding these two behaviors I just created as children to myself.\nAnd that's it.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-1", "d_text": "In this tutorial, I’ll show how to create a simple two-part animation: a bee that has two states, fluttering, and pausing.\nTo begin, you need at least the trial version of Spine. Download it at esotericsoftware.com/spine-download. Note, you won’t be able to export any files to use in your mobile apps until you buy a version of the software, either the Essential version at $60 or the Professional at $249. Most of the functionality an Indie developer needs is included in Essential, so I recommend buying it if, after the trial version shows you Spine’s capabilities, you think you could use it in your projects.\nYou will need some art to begin your animation. If you create your own, make sure to draw each animatable element on its own layer in Pixelmator, Fireworks, or Photoshop. Heads, necks, arms, torso, and legs all need to be on their own individual layer. I visited VectorStock.com and bought an image of a bee. The download includes an Illustrator file that allows you to isolate the four elements of the bee I want to animate: head, torso, and two wings. I saved those elements as four separate .png files. 
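The composition pattern described in the transcript above (a high-level behavior whose initializer creates its sub-behaviors, configures them, and adds them as children) can be modeled generically. This is plain Python for illustration, not UIKit; the class names merely mirror the UIKit ones:

```python
# Generic sketch of composing a high-level behavior out of child
# behaviors, following the initWithItem pattern from the transcript.

class DynamicBehavior:
    def __init__(self):
        self.child_behaviors = []

    def add_child_behavior(self, child):
        self.child_behaviors.append(child)

class GravityBehavior(DynamicBehavior):
    def __init__(self, items):
        super().__init__()
        self.items = items

class CollisionBehavior(DynamicBehavior):
    def __init__(self, items):
        super().__init__()
        self.items = items
        self.bounds_to_reference = False

class BouncyFallBehavior(DynamicBehavior):
    """Item falls under gravity and bounces on the reference bounds."""
    def __init__(self, item):
        super().__init__()
        gravity = GravityBehavior([item])        # 1. create sub-behaviors
        collision = CollisionBehavior([item])
        collision.bounds_to_reference = True     # 2. configure them
        self.add_child_behavior(gravity)         # 3. add them as children
        self.add_child_behavior(collision)

b = BouncyFallBehavior("box")
print([type(c).__name__ for c in b.child_behaviors])
# ['GravityBehavior', 'CollisionBehavior']
```

The payoff of this pattern is that callers only ever add the one composite behavior to the animator; the wiring of its children stays an implementation detail.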
It’s a good idea to place the elements of your animation into the folder that will be its final home in your app so that you don’t have to change paths to images later on.\nBefore you start, consider adding a placeholder of the basic image to Spine so that you get your bone placement right. You can import it with the image fragments and delete it at the end of the animation process. You might prefer to set its opacity low or colorize it to remind yourself to delete it later.\nI started by creating a new ‘skeleton’ (Cmd+n / Ctrl+n) in Spine and drawing the four elements of the animation – head, body, and two wings. I named each ‘bone’ of the skeleton appropriately. To do this, make sure you are in the ‘Setup’ area of Spine (the ‘Setup’ text toggles if you click it, between the Setup and Animate states). Click ‘create’ in the toolbox and draw the bones onto the grid. If you get stuck, make sure your screen looks like the one below.\nOnce you’re satisfied with the way the bones look, you need to import the images that will ’skin’ the bones.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "Note that these files are not compiled by Xcode; they are packed into the application’s bundle, then extracted and compiled at run time by the view controller’s\nThe template’s animation is driven by a\nCADisplayLink instances automagically fire a target/action when the device needs a new frame to display. They’re a little opaque, but seem to represent the iOS version of vsync. The view controller sets up and tears down\nNote that the view controller creates the\nCADisplayLink with the\ndisplayLinkWithTarget:selector: method, which retains the target. To avoid a retain cycle, the view controller itself maintains only a weak reference to the\nIt’s a small point, but if you look at the view controller’s\ndrawFrame method, you’ll note that the “square’s” vertices are actually rectangular; they describe a rectangle 1 unit wide by 0.66 units high. 
Nevertheless, the object appears square when displayed. Why is this?\nThe answer lies in the physical properties of the display; iPhone screens have a 2:3 width:height ratio (e.g., 320 x 480 square pixels). Absent other transformations, the point\n(-1, -1) will be mapped to the lower-left corner of the display, and the point\n(1, 1) to the upper right. Since the physical display is rectangular, this will “squeeze” the image horizontally, transforming the given points into a square.\nNaturally, this issue is handled more rigorously in production code.\nOpenGL ES (especially 2.0) can be a big pile of strange the first time you look at it, so it was nice of AAPL to provide a template that not only takes care of all the mechanics needed to get something on the screen, but that also effectively illustrates some of the surprising things in the API. Xcode’s “OpenGL ES Application” template is really a nice little tutorial, and will reward your study.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "Get all QuartzCode project examples in one link. 
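The aspect-ratio arithmetic above can be sanity-checked in plain Python (assuming the 320 x 480 display and the default mapping of normalized coordinates, where x in [-1, 1] spans the full width and y in [-1, 1] spans the full height):

```python
# On a 320 x 480 screen, one horizontal unit maps to 160 px and one
# vertical unit to 240 px, so a rectangle 1 unit wide by 0.66 units
# high comes out nearly square in pixels.

WIDTH_PX, HEIGHT_PX = 320, 480

def ndc_to_pixels(w_units, h_units):
    """Convert a size in normalized units to a size in pixels."""
    return w_units * WIDTH_PX / 2, h_units * HEIGHT_PX / 2

w_px, h_px = ndc_to_pixels(1.0, 0.66)
print(w_px, h_px)  # roughly 160 by 158, i.e. close to square
```

This is why the "rectangular" vertex data in the template renders as a square: the display's 2:3 ratio squeezes the image horizontally by exactly the factor the vertex data compensates for.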
Just open and play using QuartzCode.\nBasic Usage Tutorial\nShows basic on how to use QuartzCode including adding timeline, multiple animations, and shared color.\nTips using Timeline Panel\nShow tips and tricks when using Timeline Panel.\nSpinning Indicator using Replicator\nShows you how to create a basic replicator layer by creating a spinning indicator.\nCar Animation with Smoke Emitter\nShows you how to use multiple emitter cells applied to a car moving on a path.\nReload Animation with Nested Groups\nShows you how to create reload animation with nested groups\nShows you how to use shapes to create hamburger animation.\nText Animation using Effect Layer\nShows you how to animate text glyphs by using Effect Layer\nHow to Use Generated Code in Xcode\nShows you how to use generated code in most basic way.\nExplain how to combine layer types and multiple animations to produce a complex animation.\nWater Fill Animation\nCombination of path, transform and position animations with parent/child layer to create water fill animation\nPercentage Vote Animation\nThis tutorial includes on how to use mask and edit generated code in Xcode to control the vote percentage.\nCreating load, complete and fail animations\nShows how to use multiple animations in QuartzCode with Xcode use case project.\nRound Progress Animation\nExample on how to create rounded progress animation.\nXcode Example Project\nXcode project with example with relative frame, reverse animation, total duration and end time options. Contains both Objective-C and Swift project.\nMultiple Text Animations\nVarious examples for simple text animations\nRebus Zone Examples\nQuartzCode projects with animations from Rebus Zone app. Thanks to Alexander Burtnik for providing such a good samples!", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "And lastly the last parameter is if you want to use seconds or frames. 
I’ve always left it as true, meaning use seconds; that way, if you change the framerate of your movie, it doesn’t change the timing of your animation.\nSo if you test your movie, your rectangle will go from left to right in three seconds. Pretty boring, but it’s your first animation. What we can do next is affect more than one property at a time. But for that, we have to create a new tween for each property we want to affect.\nvar myTweenX:Tween = new Tween(rectangle, "x", Strong.easeOut, 0, 300, 3, true); var myTweenAlpha:Tween = new Tween(rectangle, "alpha", Strong.easeOut, 0, 1, 3, true); var myTweenWidth:Tween = new Tween(rectangle, "width", Strong.easeOut, rectangle.width, rectangle.width + 300, 3, true); var myTweenRotation:Tween = new Tween(rectangle, "rotation", Strong.easeOut, 0, 90, 3, true);\nSo here our rectangle fades in, moves to the right, becomes wider and rotates 90 degrees. Two things to see here. First, the alpha property doesn’t work like in ActionScript 2. Before, it ranged from 0 to 100; in ActionScript 3, it ranges from 0 to 1. It’s a minor difference, but you have to know it. 
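What a tween like this does under the hood can be sketched language-neutrally: interpolate a property from a start value to an end value over a duration, shaped by an easing curve. Here is a minimal JavaScript sketch of that idea; the function names are illustrative and this is not the `fl.transitions` API itself.

```javascript
// "Strong" ease-out is a quintic curve: fast at the start,
// slowing down toward the end of the tween.
function strongEaseOut(t) {
  const p = t - 1;
  return p * p * p * p * p + 1;
}

// Interpolate one property value for a given elapsed time.
// begin/finish are the 4th and 5th Tween parameters; duration
// is the 6th (in seconds, matching the "use seconds" flag).
function tweenValue(begin, finish, duration, elapsed, ease) {
  const t = Math.min(elapsed / duration, 1); // clamp at the end
  return begin + (finish - begin) * ease(t);
}
```

Calling `tweenValue(0, 300, 3, elapsed, strongEaseOut)` every frame reproduces the x tween above: 0 at the start, 300 after three seconds, with most of the movement happening early because of the ease-out curve.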
Also, as you can see with the width tween, you can put variables in the fourth and fifth parameters, so you don’t need to know the beginning and ending values in advance.\nThat is it for now, I’m going to bed, but later on I’ll add stuff about easing and how to time animations.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-17", "d_text": "We're going to determine whether or not it was canceled, in which case we're either going to call CancelInteractiveTransition or FinishInteractiveTransition depending on the direction.\nIn this case, it's finish.\nAnd then, again, we're going to call the completeTransition block on our method, on the transition context.\nSo what did we learn here?\nFirst of all, Dynamics and custom transitions are compatible with each other.\nThey can be used.\nIn fact, we spend a lot of time trying to make sure that our APIs compose well together.\nAs a rule of thumb, it really pays off to create composite behaviors that get the function that you're interested in.\nWe showed how you can create complex dynamic transitions using the dynamic animator delegate, the collision behavior delegate, and actions on dynamic behaviors.\nA dynamic behavior subclass can easily conform to one or even both of the transitioning protocols.\nIt makes a lot of sense to do so because you can put all of the logic in one place.\nDuration is something that needs to be thought about when you're doing transitions.\nAgain, a dynamic system doesn't necessarily converge ever.\nSo you want to put checks in place based on your application logic to ensure that it finishes.\nAnd then, what's interesting is that dynamic behavior actions can actually be used to drive the interactive portion of a transition.\nAnd that might not be entirely obvious.\nI'd like to make one other point and that is, is that if you're using Dynamics and you add behavior to the dynamic animator and nothing happens which has happened to me a few times, it's probably because you 
didn't retain your dynamic animator.\nDon't let that happen to you.\nSo quick wrap up.\nWhen you're using dynamics, focus on what it is precisely that you're really trying to do in small pieces.\nIt really helps to build complex dynamic interactions and animations piece by piece.\nIn fact, we are iterating all the time when we create this.\nYou're going to have different constraints that you want to take into account, duration, interactivity, et cetera.\nThere's a whole other bunch of new animation APIs that we added in iOS 7.\nThose might be more suitable in many cases.\nFor example, there's an animate with duration API that allows you to implement kind of a simple spring animation as well.\nSo look at those too.\nAnd then just go to town, create awesome stuff.\nAll these sessions, I believe, have already happened but you can look at them on the videos.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "A video that demonstrates the use of Adobe Flash for creating sprites for use in animation in games. In this video, a game character is shown doing various actions which range from simple stances to complex combos. The video explains the use of sprites and keyframes to be used in various actions. It also explains the use of pauses while executing the sprites so as to add fluidity to the action performed. This also reduces the number of frames required to perform an animation. 
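The hold-on-a-keyframe idea described above can be sketched as a tiny playback function: each drawing carries its own hold time, so a handful of keyframes can cover a longer action. This is purely illustrative JavaScript, not Flash's timeline API; the frame names and hold values are made up.

```javascript
// keyframes: array of { name, hold } where hold is the number of
// seconds this drawing stays on screen before the next one.
// Returns the name of the keyframe to display at `elapsed` seconds.
function frameAt(keyframes, elapsed) {
  let t = elapsed;
  for (const frame of keyframes) {
    if (t < frame.hold) return frame.name;
    t -= frame.hold;
  }
  // Past the end of the sequence: stay on the last keyframe.
  return keyframes[keyframes.length - 1].name;
}
```

Stretching a single frame's `hold` produces the pause effect: the action reads as more deliberate without any extra drawings.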
Lastly, the narrator demonstrates how to draw the next keyframe in a sequence using an earlier keyframe as a reference.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-9", "d_text": "And at the appropriate time when you are either pushing or popping or presenting or dismissing, we're going to ask that delegate to vend an animation controller or an interaction controller.\nSo the methods that those objects that your delegate vends need to implement are very few.\nThe main one for the animation controller is, funny enough, animateTransition.\nAnd for interactive transition, it's startInteractiveTransition, kind of pretty simple.\nThese two methods are passed in a special object called the ContextTransitioning object which defines the characteristics of the transition.\nIt defines where views start, where they end.\nIt also is a little bit active in that we define some methods that need to be called at certain points in time.\nSo basically, the declaration of the protocol looks something like this.\nThere is a container view.\nThat's the view in which the animation takes place for the transition.\nThere are some methods to query to find out where I'm supposed to end up.\nAnd then, there are those action methods that are on the context.\nAnd for interactive transitions, there are a few of them.\nThere is updateInteractiveTransition with a percent, and then there's either Finish or Cancel.\nAnd finally, when the transition is all over, and this is true for both interactive transitions as well as just regular, straight up animated transitions, you must call a special method called completeTransition indicating whether canceled or not.\nAnd this basically patches up any data structures and puts things into a consistent state so your application can move forward.\nIt moves as a little bit to talk about the different states involved in an interactive transition.\nI've kind of broken it into a few sections.\nThe first four kind of where 
you go from nothing, you're in no particular transition mode to the interactive mode.\nAnd you might consider this, if you're doing a pop gesture, it's as your finger is down and you're dragging across the screen.\nWhen you release that, your finger, the transition isn't over yet.\nIt still needs to do something.\nIt's either going to animate off or animate back to where you started.\nAnd the decision of which direction you're going in is really up to you in your code.\nAnd so you can either cancel the transition or continue it.\nAnd once you do, you then animate it to completion and call the completeTransition method.\nSo it's really kind of that simple and if you are interested in more details, you can look at the video of this morning's talk and there are also some docs available for that.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "So for complex sequences they’d simply just be queued up series of fundamental animations in cards, and then I can easily just interlace calls to PlayAnimation() and SubmitActions() and treat them almost identically in code as opposed to the one-off hacks for each ability that currently exists. But even that process is rife with a lot of edge cases due to the dynamic nature of how players interact with cards, and ultimately would still require some programmer help to get things looking right.\nFor the moment, card animations in SFT are created by:\n- The artist creates a fully animated mock up in Unity using the tools they know. 
Artist now exits stage right.\n- I then painstakingly try to mimic that entire flow (and very often other user flows not captured in the one one-off mock up) using a combination of animation clips, splines, FSMs, and procedural animation via code.\n- Give up trying to replicate the mock up when enough time has gone by and I realize I have to move on.\nI’m able to capture the majority of the mock up’s “essence”, but a lot of the subtleties of the animation that artists are actually good at creating are lost due to lack of time and lack of flexibility. Maybe next go around if we’re doing another card based project I’ll be able to have another go at this.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "ActionScript 3 Tweens Tutorial\nSince I think it’s something people want, I am going to do a tutorial about using ActionScript 3 to do animations. I recently made a new tutorial about using Tweener instead of the Tween classes, or even better, how to use TweenLite (TweenLite is smaller and faster than Tweener). I think it is better to use Tweener or TweenLite because it is easier to understand and it takes far fewer lines of code.\nIn this tutorial I’ll be using the Tween classes already bundled in Flash CS3. There are other ways to do this but I find this one the easiest. You could do the same using onEnterFrames or Timers or the Zigo Engine (but it’s not yet AS3).\nOk, first thing you will need to do is to draw a rectangle on the stage and then convert that shape to a movie clip. After that, give it an identifier name in the property inspector. Let’s name it “rectangle”. Now open the ActionScript window by pressing F9. Create a new layer on the timeline, name it actions, and start writing your ActionScript there. The first thing we will need is to import the actual classes for tweening. 
Add these two lines of code at the start.\nimport fl.transitions.Tween; import fl.transitions.easing.*;\nThe first line is to import the Tween class, the second one is to import the easing classes. I’ll explain what easing is later on.\nNow all we need to do is add one line of code to make the rectangle move.\nvar myTween:Tween = new Tween(rectangle, "x", Strong.easeOut, 0, 300, 3, true);\nThis creates a new tween object with its properties in the parentheses. The first parameter is the object you want to animate. The second parameter is the property of that object that you want to animate. You can animate a lot of properties for a display object, for example: x, y, alpha, width, height, scaleX, scaleY, rotation, etc. I’ll give some examples later. The third parameter is easing. The fourth and fifth parameters are the starting and ending values of the property you are animating; in this case, the rectangle will go from the x position 0 to the x position 300. The sixth parameter is the duration of the tween, in this case 3 seconds.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-14", "d_text": "And then, we're going to set up the action block again.\nAnd this time, it's a little interesting.\nFirst of all, there's a bug in that line where I'm setting the finish time.\nIt really should be two-thirds of the duration not two-thirds of the elapsed time, but you get the drift.\nAnd that's because it's two stages.\nI'm going to spend two-thirds of my duration doing the first half of my or first two-thirds of my transition.\nAnd then, I'm going to move over to the next bit.\nAnd the way again I'm going to trigger that, is I'm going to remove the behaviors which is then going to cause me to go into the DidPause animator's delegate method.\nI do the regular dance of adding the children behaviors.\nI do something different based on whether or not I'm presenting or dismissing, that's why there's an IF clause for the attach behavior and then we're 
going to run.\n[ Pause ]\nSo now, we've come to rest.\nDidPause gets called.\nNow, it either got called because the system came to a rest or because we actually hit our elapsed time.\nAnd we're going to do the same thing.\nNow we're using the attach behavior a little bit as a semaphore here because the dynamic animator DidPause is saying, "Hey, do I have an attach behavior?\nIf I do, then I need to go into the second step of my simulation."\nSo I'm going to remove the attach behavior, clear out that reference to it.\nI'm going to add myself back to the dynamic animator and I'm going to change my finish time.\nNow time has elapsed, so animator elapsed time is actually not going to be 0 at this point.\nAnd now, I only want it to run the remaining one third of the specified duration of the transition.\nAt this point, since the attachments disappear I should have pressed that button before.\nSince the attachments disappear, when I run it, I'm going to hit this point.\nAnd now the collision delegate is going to kick in because now, I want to do something after that first bounce.\nI basically want to remove the collision behavior so that on the next drop, it's going to drop all the way off the screen.\nAnd I want to check that I actually bounce off the edge I care about.\nSo it's possible that I might have hit the right edge and I really want to just trigger this code if I hit the bottom.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "Feel free to add the other animations yourself. 
For example, the “jump” animation is on frames", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-11", "d_text": "So this is kind of a demo that shows all kinds of transitions, but I'm going to show you the drop dialog.\nNow, this thing comes in as a little dialog.\nWhat's interesting about this is, first of all, this is on a phone and we're doing a present view controller and guess what, I can see the presenting view controller.\nYou couldn't do this really on the phone before.\nSo now, you can implement your kind of faux form sheets or faux popovers right on a phone.\nBut you'll notice that bounce that came in.\nSo it comes in with a bounce and I'd also like to show that when we created that dialog view, before it animated in, we did something that I'm not sure if we can see it.\nWell, what is supposed to be shown here is some of that parallax where we layer these dialog views.\nAnd there's a new API which is available, I believe, in the seed that we delivered called UIMotionEffect.\nAnd you can put a UIMotionEffect on to a view and then animate it directly.\nAnd if it was working, I would show it to you, but it isn't so you'll have to take my word for it.\nNow that was a present.\nLet's see what happens when I dismiss.\nNow the first thing that happens is we slide off to the side and then we bounce off and go away.\nThat's kind of a two step simulation.\nSo how did we do this?\nLet's talk about that.\nSo I'm going to show quickly some of the steps and there's going to be a lot of code up here, so bear with me.\nThe YYDropOutAnimator is the animator object which is a subclass of dynamic behavior that I used to create this effect.\nAnd everything here is just a consequence of this specific implementation and it's broken up into a few things.\nFirst of all, you'll notice that it conforms to a bunch of different protocols.\nIt conforms to the animated transitioning.\nIt conforms to the animator delegate and conforms to the collision behavior 
delegate.\nThis is some of the power of using protocols, first of all, that you're not bound to a specific instance and you can kind of mold the objects of your choice for implementing certain behaviors in the system.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-2", "d_text": "Hope this One is Easy\nAnimating 2D sprite upon movement script help.\nOn collide animate\n2D Sidescroller: Change camera position?\nGetComponent on iOS?", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "I’ve created an animation using a few different methods that runs in a browser and is responsive (works on mobiles, tablets and desktops). You can see it here:\nI’ve used a CSS keyframe animation for the clouds, and for moving the scenery. Very basically, this moves the clouds from the very right to the very left. This animation is declared once as a mixin (using SCSS), and is added to each cloud individually, so the appropriate speed can be set. To make sure the starting value looks a bit random and the clouds are evenly distributed, I’ve given each cloud a different, random animation-delay, making it look like the animation’s already started when the page loads.\nGSAP timeline animation\nI’ve been using GSAP (Greensock) recently and it’s so useful. I used TimelineMax, which allows you to make a timeline of tweens, and adjust the timings without having to mess about adjusting percentages like you’d have to with a CSS animation. The GSAP timeline makes the bomb fall to the floor by adjusting a transform position and rotation, and then triggers the explosions to show by changing them from display: none to display: block, then moves them left to make it look like we’re leaving them behind.\nSpritesheet animation using CSS keyframes\nThe explosion contains 7 frames on one image, and uses a CSS keyframe animation to move it horizontally between the frames. 
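The horizontal frame-stepping just described boils down to a small calculation: shift the sheet left in whole-frame jumps so exactly one frame shows at a time, which is what CSS `steps()` does for you. A minimal JavaScript sketch of that calculation (the 100px frame width is illustrative):

```javascript
// Given a sheet of frameCount frames laid side by side, each
// frameWidth px wide, return how many px to shift the sheet to
// the left to expose the current frame. progress runs 0..1 over
// one playthrough of the animation.
function spriteFrameOffset(frameWidth, frameCount, progress) {
  const frame = Math.min(
    Math.floor(progress * frameCount),
    frameCount - 1 // clamp so we hold on the last frame
  );
  return frame * frameWidth;
}
```

The clamp on the last frame mirrors `animation-fill-mode: forwards`: once the run finishes, the offset stays parked on the final drawing instead of wrapping back to the first.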
You can use the steps() property of the animation so it jumps to each frame instead of animating across. I also used animation-fill-mode: forwards to always stay on the last frame so that it doesn’t loop. These animations will play once when first shown. Nice and easy.\nSpritesheet animation using GSAP\nJust because I like to experiment with different methods, I animated the pilot’s scarf using GSAP. It’s the same method as the explosions, but using a separate GSAP timeline which uses transform: translate to move through the frames and repeats infinitely.\nI can’t actually think of many advantages of using GSAP for this, except if you wanted control over pausing it or changing the speed dynamically.\nHere’s a video of the animation:\nIf you’re wondering where the graphics came from, I’d already created them as vector files for my Chicken Fokkers Android game which you can get here, and I’ve written about progress here.", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-12", "d_text": "When you create this thing, the delegate is usually creating it and when the delegate is asked, it's actually passed in a transition context and we squirrel that away in the animator because we want to be able to use it in the dynamic behavior callbacks.\nWe know whether or not the dialog is being presented or dismissed.\nAnd, again, the delegate, when it's called, has that information.\nWe set the finish time.\nThe finish time is, I think, what I was alluding to before, which is I don't want this transition to take too long.\nSo I want to say, "I want it to be done no later than this point in time," and we're going to check that value in the animator's callbacks and the dynamic behavior's action method.\nAnd finally, we're going to this is a composite behavior and we are going to squirrel away various primitive behaviors that are actually going to be added and removed, these children behaviors as Olivier demonstrated a little bit earlier.\nSo, 
amazingly enough, I squirreled away in the corner of my office some green felt that wasn't being used, and I used it to create kind of a visual image of what a view controller screen might look like.\nAnd basically, we're getting called with animateTransition.\nThis is not an interactive transition.\nIt's a straight up animation and the question is, now, how do we hook up the Dynamics to the system?\nWell, the first thing that we have to do is we have to figure out what's moving and what we actually want to apply forces and the like to.\nAnd a lot of this code is elided, so I apologize for that.\nBut there's this thing called the dynamic view.\nThe dynamic view is the view that's moving.\nIt's just a name.\nWhen it's called, the first thing that we do is we add that dynamic view into the view hierarchy.\nIt so happens that it's above the screen because it's going to drop in.\nAnd then, we start creating some of our primitive behaviors like the dynamic item behavior where we set up an elasticity, we add the dynamic view to that primitive behavior.\nAnd for the first segment of this transition, we don't want to allow any rotation.\nThen we add some gravity.\nGravity is a pretty simple primitive behavior.\nIt's going to be three times normal gravity.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-13", "d_text": "We add a collision behavior and you'll notice that the way that I set the bounce on the collision behavior is using a slightly different method on the collision behavior, set translate reference bounds into boundaries with insets.\nThat's actually a very useful method because you can kind of take the reference coordinate system and move it in different directions based on simple UIEdgeInsets.\nSo now let's talk a little bit about the finish time.\nBasically, we query the dynamic animator for how much time has elapsed.\nIt so happens in this case, it's going to be 0 but, you know, for sake of being true, we ask the elapsed time then 
we add the duration that was squirreled away when we created the behavior object.\nAnd we create an action block and that action block is going to check whether or not the time has passed that we want to dedicate towards this transition.\nAnd if it has, there is a very simple way to finish.\nWe basically remove ourselves from the dynamic animator.\nNow, to get things going, we have to add the children behaviors to ourselves, remember, we are a compound behavior.\nAnd then, we have to add ourselves and there's only one behavior now that's being added to the animator and that is us.\nAnd at this point, the physics engine is going to start and we're going to start simulating our transition.\nAnd there you have it.\nSo we've transitioned.\nWe're done, it's up on the screen and now, we're going to hit the good to know button.\nAnd we're going to do the dismiss, and the dismiss is a two-stage thing.\nAgain, we call animateTransition.\nWe don't have to add a view into the view hierarchy.\nThis is a dismiss, it's already there.\nWe're going to set our dynamic item behaviors up a little bit differently.\nWe're going to allow rotation this time.\nGravity is set up exactly the same as it was before.\nOur collision boundaries are a little bit different and that's because the type of animation that we're trying to achieve is a little bit different.\nWe're going to add an attachment behavior where, for the anchor, we're going to specify a different position than the default.\nThe default is usually the center of the item.\nWe're going to kind of put it up to the top and we're going to put the anchor a little bit off to the side.\nWe're going to give it a little bit of kind of bounciness.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-1", "d_text": "Build and see:\nIf you change the push mode to\nUIPushBehaviorMode.Continuous, the box will hit the edge of the screen, bounce a little, and still move upwards, sticking to the boundary trying to 
get through it. Of course, the poor little green guy will never get out of there:\nLet’s make it a bit more interesting by pushing the green box only when tapping on it. For that we will need to add a simple\nUITapGestureRecognizer and handle it. Follow along:\nWe are handling the tap gesture in the\nonTap() function and, at line 44, we are adding a new direction and magnitude force to the push behavior (which is already attached to the green box instance). This time, instead of removing and re-adding the behaviors to the animator, we just deactivate and activate back the push behavior to renew it and thus make it work every time the green box is tapped:\nAs you see in the image above, we are literally dribbling the box, making it bounce on the ceiling of our screen, waiting for it to come down to re-push it back. Pretty neat, huh? You can even call\nsetTargetOffsetFromCenter, which takes an offset value which is the location of the push. If you push the object from a corner, the object will literally rotate from there accordingly. See the related Apple documentation for more info.\nThe Snap Behavior:\nThe Snap behavior, or\nUISnapBehavior, is very straightforward. All it does is tell the object to snap back to some position, usually an original position. Let’s set up some code for this example. We will declare the snap behavior, instantiate it giving it the location to snap to (in this case the center of the screen), and we will add a\nUIPanGestureRecognizer. If you are not familiar with pan gestures, I recommend reading my previous post, which covers it in detail. From there we will add the snap behavior to the animator whenever the pan is ended, or on\nUIGestureRecognizerState.Ended. This way, when the green box is released, it will snap back to its original location with a cool, elastic transition. Check out the complete code:\nAt line 58 we are adding the snap behavior to the animator. 
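That elastic settle back to the snap point can be thought of as a damped spring pulling the box toward its anchor. The following is a conceptual 1-D JavaScript sketch of that idea with made-up spring constants; it is not UIKit's actual solver, just the kind of integration a physics-based snap performs.

```javascript
// Simulate a damped spring pulling `position` back to `anchor`,
// stepped at 60 fps. Returns the position after `steps` steps.
// stiffness and damping are illustrative constants.
function settleToAnchor(position, anchor, steps = 240) {
  let x = position;
  let v = 0; // velocity
  const dt = 1 / 60;
  const stiffness = 120;
  const damping = 14;
  for (let i = 0; i < steps; i++) {
    // Spring force toward the anchor, minus a velocity-damping term.
    const accel = -stiffness * (x - anchor) - damping * v;
    v += accel * dt; // semi-implicit Euler step
    x += v * dt;
  }
  return x;
}
```

With these constants the spring is underdamped, so the box overshoots the anchor slightly and oscillates before settling, which is exactly the "cool, elastic" feel described above; stiffer damping would make it creep in without the wobble.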
We also have to remove it every time we try to drag or push (by tapping) the green box, which we did at lines 46 and 65.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-1", "d_text": "To define interaction in your app, you make your view controller files communicate with the views in your storyboard. You do this by defining connections between the storyboard and source code files through actions and outlets.\nAn action is a piece of code that’s linked to an event that can occur in your app. When that event takes place, the code gets executed. You can define an action to accomplish anything from manipulating a piece of data to updating the user interface. You use actions to drive the flow of your app in response to user or system events.\nYou define an action by creating and implementing a method with an\nIBAction return type and a\nsender parameter. The\nsender parameter points to the object that was responsible for triggering the action. The\nIBAction return type is a special keyword; it’s like the\nvoid keyword, but it indicates that the method is an action that you can connect to from your storyboard in Interface Builder (which is why the keyword has the\nIB prefix). You’ll learn more about how to link an\nIBAction action to an element in your storyboard in Tutorial: Storyboards.\nOutlets provide a way to reference interface objects—the objects you added to your storyboard—from source code files. To create an outlet, Control-drag from a particular object in your storyboard to a view controller file. This operation creates a property for the object in your view controller file, which lets you access and manipulate that object from code at runtime. 
For example, in the second tutorial, you’ll create an outlet for the text field in your ToDoList app to be able to access the text field’s contents in code.\nOutlets are defined as\n@property (weak, nonatomic) IBOutlet UITextField *textField;\nIBOutlet keyword tells Xcode that you can connect to this property from Interface Builder. You’ll learn more about how to connect an outlet from a storyboard to source code in Tutorial: Storyboards.\nA control is a user interface object such as a button, slider, or switch that users manipulate to interact with content, provide input, navigate within an app, and perform other actions that you define. Controls enable your code to receive messages from the user interface.\nWhen a user interacts with a control, a control event is created. A control event represents various physical gestures that users can make on controls, such as lifting a finger from a control, dragging a finger onto a control, and touching down within a text field.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-0", "d_text": "Please check this app:\nScreenshot from that app is shown below:\nCan you please let me know how to add an image on touch and have each image animated?\nIn iOS 5, you can use a sequence of images. 
First you will need to add a new category method:", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-10", "d_text": "So, two examples that we're going to go through.\nOne is a for lack of a better word, a drop in and out dialog.\nIt's kind of a dialog which will you will present.\nIt will be a custom view controller presentation.\nIt will drop on screen.\nIt's not going to be interactive.\nBut what is it going to demonstrate?\nIt's going to demonstrate using kind of a two-stage dynamic simulation where we're going to use the action methods and the DidPause methods and so forth to change the Dynamics of the system as the transition evolves.\nThe second demo that I'd like to deconstruct is kind of just a simple drop shade transition where I'm going to pull down from the top of the screen and I'm going to release it and either it's going to bounce up to the top or bounce down to the bottom.\nAnd the Dynamics there is fairly straightforward, but it's interesting to see how the interaction mode of the transition leverages the dynamic system and vice versa.\nSo let's talk about the drop in and drop out dialog a bit.\nSo, it's a dynamic behavior that conforms to the animated transitioning protocol and it demonstrates a couple of interesting things.\nIt demonstrates the action block which Olivier referred to.\nThis is called on every step of the simulation, of the physics simulation.\nWe're going to implement a collision behavior, but we're also going to specify the collision delegate because we want to know when we've hit a certain boundary.\nAnd finally, and this is kind of interesting, we're going to implement the dynamic animator delegate.\nAnd in particular, we're interested in DidPause callback.\nAnd we're also interested in the dynamic animator's elapsed time.\nNow, the reason for this is that when you're doing a transition, typically, transitions take a finite amount of time.\nYou don't want them to take, you know, I don't know, 30 seconds to 
converge and go.\nSo you might want to put a bound on it and make sure you're done in two seconds or one and a half seconds or whatever.\nAnd so typically, when you build these systems, you're kind of iteratively trying to figure out how does it look, right?\nBut you want to actually ensure that the transition takes a certain amount of time.\nAnd you can do that by looking at the elapsed time of the dynamic animator and checking in the DidPause and the action methods.\nAnd we're going to demonstrate that.\nSo I'm going to show a quick demo of the drop in and out dialog.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "And one of the few things we have in this class is the ability to add child behaviors, which means that you can use this class to construct your own high-level behaviors.\nWhat's interesting here, if you attach a behavior directly to the animator, or if you add a child behavior to a behavior and add this behavior to the animator, there is no difference.\nThere is no CPU cost or any runtime difference between these two approaches.\nSo there is no cost for building your abstractions.\n[ Pause ]\nYou can compose your behaviors statically by defining your own class, adding child behaviors and then never changing these behaviors again, or dynamically by adding and removing children from a behavior or from the animator.\nSo let's see a quick example of that.\nLet's say that I want to drag with this real-world effect a view and when my gesture ends, I want to apply gravity to get this bouncy effect I love so much.\nThe initial setup was just with a collision behavior and that view added to this behavior.\nBut when my gesture actually begins, what I want to do is to create a new behavior, an attachment behavior, add that to the animator.\nAnd when I update when my gesture is updated, I just need to change that attachment point in my UIAttachmentBehavior and it's going to drag the view as I would expect.\nWhen I end this 
gesture, what I just need to do is to remove the attachment behavior and at the same time add the same view to a gravity behavior.\nA collision behavior is still here, so we are going to add this fall and bounce effect.\nThat's a really interesting concept and you can build a lot of completely different effects by combining behaviors.\nFor instance, in that example I was using in the first session, a bounce effect is just gravity and collision at the same time.\nIf I want to drag a view and then at the end of the gesture snap it somewhere else on the screen, I can use an attachment behavior first and then a snap behavior.\nSomething like the Lock Screen in iOS 7 can be built as a combination of collision, gravity, attachment, and push behavior.\nBut you can imagine many other things like a magnet-like behavior that you could build from multiple UIPushBehaviors.\nSo I'd like to show you a very quick demo of the different feel you can get by changing, removing, and adding behaviors.\nSo a very interesting thing here is the top right animator label is turning green when the animator is active.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-3", "d_text": "Think of the pivot points of the members as the joints of a doll. In order for the doll to move, the joints must be correctly placed, right? The same rules apply for the pivot points.\nTo move the pivot point, drag the blue circle at the center of each sprite to the correct place (which is the point where it should connect to the parent body part). In the following image you can see the head pivot in its correct place:\nThe tail part should look like this:\nDid you get the idea? Great! Repeat the process for the remaining parts. (You can leave the pivot for the black spot in its center; we'll explain more about this in the next section.)
Remember, you want a dragon animation, not a Frankenstein animation.\nOnce you're finished, click Apply:\nIf you take a quick look at the folder where you have the sprites, you will be able to see that the dragon sprite now has an arrow next to it:\nPress the arrow and you will be able to see all the parts that comprise our dragon character separately:\nAssembling the Character\nNow that you have your character sliced into different sprites, you can start placing the sprites into the scene. Since the dragon is composed of several body parts, you need to build the character.\nThe first thing to do is drag the black dot of the dragon sprite to the scene. This object will work as a center of mass for your character. Later on, you will focus your attention there; however, for now, just know that this is the base for your character.\nNow, take the body of the dragon and place it over the black dot, like so:\nRepeat this process until you have assembled your dragon. By the end it should look something like this:\nYou finally have your dragon ready—however, as you may notice, it looks weird. Some parts that should be under the body are over it, or vice-versa. That happens because we added the dragon parts without any specific order.\nBefore we solve that issue, let's turn the dragon sprite into a single game object. 
As you may have noticed, right now the several parts of the dragon work as individual game objects—you need to group them into a single game object before you can start to animate them.\nIn order to correctly group all the sprites, use the sprite with the black dot as the main game object; all the other body parts should be grouped under the sprite mass.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-8", "d_text": "I want to create an animator.\nThen for each of these attributes, I'm going to need that initial state.\nSo I'm going to ask flow layout which is the super class.\nI changed zIndex because I'm dragging this.\nI want this cell on top and I create my high-level drag behavior.\nI add this behavior to the animator.\nUpdating the location and removing everything is extremely simple.\nWe just update the point or clear the animator.\nThe layout implementation itself, \"Why do I need to define which cells are in this layout for a given rect?\"\nSome cells might not be animated.\nSo I start by asking the super class, \"Give me all the cells.\"\nNext, I want to remove the cells I'm actually animating.\nAnd then, I need to add the cells I'm actually tracking, the layout attributes I'm actually tracking, from the animator.\nSo I use this animator itemsInRect method, and I just have to return all these attributes.\nAnd that's it, that's the entire code for this small example.\nNow, for some more exciting stuff, UIKit Dynamics and UIViewController Transition, I'd like to ask Bruce Nilo to show you that.\n[ Applause ]\nThank you all.\nThank you, Olivier.\nMy name is Bruce Nilo, and this stage is huge, I've never been on it before.\nSo I don't know how many of you have been at this morning's talk.\nI'd like to get a good sense about custom view controller transitions.\nOh, a lot of you, OK.\nSo I'm going to kind of breeze through a quick review of what custom view controller transitions are all about.\nAnd then, what we're going to talk about 
is we're going to kind of build a little bit on what Olivier was discussing about how to create kind of compound behaviors.\nBut these compound behaviors that we're going to create are going to conform to some of these new transitioning protocols that we've defined, and are going to be used to actually implement some custom view controller transitions.\nAnd we're going to walk through a couple of examples showing two different types, and you'll get a sense of how these different things compose with one another.\nSo let's do the quick review.\nFirst of all, the basic idea is that there's a few delegates that you create and set on your view controller directly if you're doing a present or a dismiss view controller call, or you can implement some new methods on Navigation Controller Delegate or Tab Bar Controller Delegate.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-1", "d_text": "Notice that the Ground element has been expanded to reveal its parameters in Figure 2.\nAlong the bottom of the grid cells is a green line that indicates the Play Range. By dragging the icon on either end of this green line, you can limit which portions of the animation are played in the Document Window when the Play button is clicked.\nViewing and Selecting Keys\nThe Animation Palette includes a list of all scene objects to the left and a table of key cells to the right. All keys created with the Animation Controls are displayed when the Animation Palette is opened. Each key is marked in a color that corresponds to its interpolation type, with green for spline-based interpolation, orange for linear interpolation, gray for constant interpolation, and a diagonal line for spline breaks. The actual keys appear brighter and the interpolated frames are the same color only darker.\nThe table within the Animation Palette grid provides at a glance the available keys and lets you edit the keys by dragging them left or right.
At the top of the Animation Palette are the same controls for moving between frames and keys as found in the Animation Controls bar.\nYou can select a single key simply by clicking it. This will highlight the entire row (representing the element or parameter) and column (representing the frame or time) that the key belongs to. You can select multiple consecutive keys at once by holding down the Shift key while clicking each grid cell. A white box surrounds all keys in the current selection. Figure 3 shows multiple keys selected at once.\nCreating and Deleting Keys If you click a grid cell that isn't a key, you can set a key by clicking the Add Key Frames button at the top of the Animation Palette. If the This Element option is selected, a single key is created, but if the All Elements option is selected, keys are set for the entire column. To delete a selected key, click the Delete Key Frames button. Figure 4 shows a column of keys created with the All Elements option.\nSliding and Copying Keys\nIf you click and drag on the selected key or keys, you can slide the keys to the left or to the right. This is useful when you want to synch two keys to start together. You can copy selected keyframes with the Edit, Copy menu command or by pressing Ctrl/Command+C. Similarly, you can paste keyframes to a different location with the Edit, Paste menu command or by pressing Ctrl/Command+V.", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-6", "d_text": "This allows you to specify which animations start together or one after the other and how much time should be left between the animations.\nBut above all, an animation can pause until the user initiates the process by clicking on it. 
This allows the animation to be controlled during playback, albeit in a very limited way.\nA slightly more sophisticated variant is animation lists that start with defined user input, for example when clicking on a graphic or when dragging objects to other screen areas (drag & drop).\nAnimation with scripting languages\nAnimation that changes an existing scene can also be based on scripts. A script is a relatively simple and manageable computer program that has been formulated in a scripting language.\nWith special commands, the graphic objects in a scene can be changed and animated. The graphical objects can themselves be animated pictures (“sprites”).\nMany animation packages have their own scripting language (for example, Lingo in Director or ActionScript in Flash). Typical commands are changing the current position on the screen, resizing, or fading in a graphic object.\nThe advantage of a programmed movement is that you can respond to user input and the current environment of objects. Using simple if-then control structures, it is possible to decide how the animation will continue, for example:\n- If the object hits a wall, then change the direction!\n- If an object collides with another object, then start an animated explosion!\n- If you press object A, start the movement for object B!\n- If the mouse is over an object, then slow down the movement of the object!\nWith scripting languages, complex movement sequences can be defined and calculated.
If motion sequences are to follow physical laws, then the mathematical formulas for position calculation can be implemented in the scripting language.\nAs a programmer you do not have to worry about the representation of the graphic objects, you only change the properties, for example position, size, color or transparency.\nFrequently, the programmed animation steps are started when certain user inputs (mouse clicks, mouse movements, input keys, drag & drop ) have taken place.\nAgain, the programmer does not have to worry about recognizing these inputs. He only defines the reaction. Scripting languages are therefore a good and – relatively – easy way to design interactive animations.\nAnimation with calculated frames\nSometimes it is not enough to simply rearrange graphic objects. If complex changes are shown, then the images must be created individually – similar to the single-frame animation.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-4", "d_text": "That one, we don't care.\nThat one is interesting, that's the first UIDynamicItemBehavior we have in this tree walk.\nElasticity is 0.5.\nThat is new the new elasticity value for that dynamic item.\nNext behavior, we continue.\nThat's another dynamic item behavior, defining friction, so that's not the same property so that's OK.\nWe just set the friction to be 0.2.\nAnd then, the last dynamic item behavior we have is setting elasticity to 0.3.\nThat is the end value.\nThen now, let's actually remove this one.\nIn this case, we are basically going to reevaluate the behavior tree and friction is back to default.\nLet's add at the exact same place a UIDynamicItemBehavior changing again the same property.\nThe new value is actually still 0.3.\nSo that's the last, it's the most recent behavior I added, but that's not the last in this behavior tree with my definition.\nSo that is how you can combine behaviors in a very define way.\nDynamic Items, so that's a protocol and that's a way to 
integrate in Dynamics things that are not necessarily views or collection view layout attributes.\nIt basically defines what we need in UIKit to animate something, so that's a position, a size, and a rotation, knowing that UIView and UICollectionView obviously already implement something like that.\nAnd we only care about 2D rotation.\nThe engine we run is a 2D engine.\nSo when you are defining your own UIDynamicItem, the first time this item is added to a behavior, and that behavior is added to the animator, we would get these values, because we need to inject an initial state in the engine.\nThen we're going to run the simulation and each simulation tick, we're going to write position and rotation.\nWe don't change the size of the dynamic item.\nIf you're implementing that protocol, of course, we might write position and angle on each simulation tick.\nSo, again, that is two methods where you should be careful about your performance.\nOne consequence of that is we won't care about any external change to this value after we basically grab the initial state.\nSo one interesting question is how do you change the size of something after the fact?\nWe don't change views, items, bodies in the engine, so you have two ways to change the size of an item, remove it from dynamics, and add it again later if you want, or cheat.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "Oh I just responded to your email and did not understand your question.\nYou should be able to do what @Emile Schwarz said. Create the class, set the Super to Label, then click the Interfaces button and choose AnimationKit.ValueAnimator. Then you can copy and paste each method into your navigator/sidebar.
You may have two AnimationStep methods at that point, so delete the empty one.\nIt's just a quick and dirty example of what the ValueAnimator can do, I'd have packaged it up as a usable class if I expected people to actually use it for something.\nAh, bad example code. Change If Task <> Nil And Not (Task.Completed Or Task.Cancelled) Then to the simpler If Task <> Nil Then because it’s perfectly safe to cancel a task that is already finished or cancelled.\nNo, that won’t work because you’re setting Value up to 100, then immediately setting it back down to 0. The second action cancels the first, and since the value hasn’t actually changed yet, nothing actually happens.\nTry setting CountingLabel1.Text = \"100\" first so the animation is avoided and the value updated immediately.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "The feature I could leverage the best to make the editing experience easier was IBInspectable properties on types. These allow defining custom fields on types to be edited directly in Interface Builder. These properties, on several key types, allowed me to do all of my page design directly in the graphical interface.\nThe visual layout of the pages was very straightforward. Each page is its own view controller interface file and simply consists of images and text like any standard view. Every file uses the same\nPageViewController class. To support animations was a little trickier. I designed a type called\nLinearViewAnimation that I could add to each page as simple objects alongside the view in the interface file. I could make a connection from it to the views I wanted the animation to act on.
It had several different inspectable properties:\n- Duration – How long the animation should take.\n- Delay – How long to wait to start the animation.\n- Translation – X and Y values for how far to move the views.\n- Zoom – X and Y values for how much to zoom the views (separate X and Y values allowed for distortions).\n- Rotation – How much to rotate the views.\n- Alpha – How to change the opacity of the views.\n- IsRepeated – Whether the animation should be repeated or not when done.\n- IsAutoReversed – If the animation should be played backwards at the end.\nA separate object can be dropped in for each distinct animation and they are automatically picked up and set up by the\nPageViewController class while loading the nib file. That eliminates the need for me to connect them explicitly to the view controller which is especially good so that I can't have the error of forgetting to do so.\nThe one subtlety is that these properties should be set to define how to move the view from its current position to its starting position. That means I am actually designing the animations in reverse. This allows me to design each page for its end state instead of its starting state. This is much better because most pages build in and then stay put. I didn't want a bunch of pages to look blank when opened.\nThis type did leave me with one major question though: what should I do if I want to animate the same view in multiple ways with different timings? For example, let's look at the following page:\nThe sheets and hands have to animate up while at the same time the hands need to be shaking back and forth slightly.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "In this tutorial, we will focus on the bone-based 2D animation tools provided by the Unity engine. The main idea is to present and teach the fundamentals of 2D animation in order for you to apply it to your own games. 
In this post, we'll set up the project, define the assets, and do the initial preparations for the animation.\nBefore we start the tutorial, we would like to thank Chenguang (DragonBonesTeam) for providing us with the game art used to produce this tutorial series.\nWho is This Tutorial For?\nThis tutorial is primarily aimed at two groups of game developers:\n- Those who are unfamiliar with Unity at all.\n- Those who are familiar with Unity, but not the Unity 2D engine and tools.\nWe assume that you have some programming skills, so we won't cover the code in depth. In order to follow this tutorial, you will of course need to download Unity.\nFor a quick start using Unity, follow our previous tutorial that introduces you to the Unity 2D environment and its tools and characteristics. We strongly recommend that you do this if you are not familiar with Unity.\nThis demo shows the animated dragon we're aiming for:\nLaunch Unity and create a new project by selecting New Project... from the File menu. The Project Wizard will appear. Now, create a new 2D project, followed by a new folder called\nSprites (inside your Assets directory).\nNow that you have the project folder organized, it's time to import the game assets. You can find them in the Assets folder of this tutorial's GitHub repo. (You can just click Download ZIP on the latter page if you're not familiar with GitHub.) Note that this folder includes assets for the whole tutorial series, so there are some that you won't use until later on.\nBone Animation vs Sprite Atlases\nBefore moving on, compare the following two images:\nIn the first image, the dragon is divided into several body parts (head, body, arms, and so on). In the second, the ninja is shown in several poses, with a sequence of poses for different actions. This lets you clearly imagine how the avatar will move in your game.\nThe ninja sprite is what we call a sprite sheet or sprite atlas. 
This type of sprite was very popular in classic 2D games, and it's still very common today.\nThe dragon sprite requires a more recent 2D animation technique, normally called bone-based animation.", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-3", "d_text": "A good tip is to enable “Auto Key” so that Spine remembers your bone keyframes automatically.\nOnce you have saved all the motions, to play back the animation as a loop, click the ‘repeat’ button and then the ‘play’ button. This will show you if you have misplaced bones or anything that isn’t moving naturally. Make changes if necessary, keyframe by keyframe, making sure to save each edit via the keyframe button. View more fine-grained detail of the animation by clicking ‘dope sheet’. Change the speed of playback by editing it in the ‘playback’ area, accessible from the ‘playback’ button.\nSatisfied with your animation? Now it’s time to export it. For this, you need a paid license. There are several options to export, and even a built-in texture-packing tool so that you can pack all of your images into one .png and generate an atlas, if you prefer to build sprites.\nFor my purposes, I saved the animations as a .json file. You can save it in ‘pretty print’ format to make it easier to read and debug.\nTo use your new animation in your mobile app, you need one of the runtimes built by Esoteric Software. I imported the Corona SDK runtime into my app.\nThe completed project is available on GitHub. Once the .json file and images are accessible to your main codebase, you can add particle effects and buttons, and attach your animations to a given event (the code below is Lua).", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-2", "d_text": "However, as in any asynchronous system, other things are happening while the animations are working, such as the user clicking and moving the mouse everywhere (which causes a bug on buttons, don’t know how to solve that yet).
To create this system, I added a\njob function attached to the scene, which can create multiple\njobs and run them in parallel.\njob functions receive the duration of the job, the delay to start it, an update function which will perform the animation, and a complete function which is called when the job finishes. Moreover, the programmer may stop every job at any time.\nFinally, I just want to recommend Affinity Designer, it is wonderful software for vector drawing, and it saves a lot of time with the automatic exporting. I used it together with the Texture Packer and a small script to make the sprite sheet. It was really efficient: I save the document; Affinity Designer exports everything to a specific folder; then I just click export sprite sheet on Texture Packer (I only have to add new sprites if I create new ones); and run the script (I have a console ready to execute it), and done.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "Action+ is a feature in Animaker that helps you chain together multiple different actions in a sequence.\nTo use Action+, click on the icon in the Item Menu.\nClick on the add action button.\nAdd the action you want the character to do next.\nRepeat this step for all the actions you want to add into your sequence.\nYou can check out our In-App Tutorial or the video below for a more detailed guide to using Action+ in Animaker.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "Choreography creation/Creating Events/Skeletal animation\nAnimating an Actor's body is easily as important as animating his or her lips and face. While somebody's expression can tell you a great deal more than their movements, that does not mean that a choreographer can afford to focus too much on the former and neglect the latter.
In particular, body animations are visible from a further distance than facial animations, and become the overriding concern when an actor will be delivering their performance a distance from the player's view or in a crowded environment.\nIn this tutorial, body animation is placed before facial expressions, generic events and lip synching. This is in order to suggest a productive workflow; however, there is nothing to stop you performing the four stages in any order you choose, or in no order, if you find doing so more helpful.\nSource's animation technology\nAs opposed to facial expressions, body animations (Gestures, Blend Gestures or Sequences) are constructed from the pre-existing animations that are compiled into each Actor's model. Any animation can be used in any form—Gesture, Blend Gesture or Sequence—but you will see undesirable results if you stretch that fact too far.\nSource provides several animation technologies that will help you produce a more authentic performance:\n- Any animation can be layered above any other, including those played by the Actor's AI system, to combine the two movements (Blending). Among other benefits, the system creates very natural motions and let animators create a modular set of basic movements that provide choreographers great flexibility. The 'Sequence' event type can be used to override this behaviour.\n- Any animation can be sped up or slowed down at any point to alter the progression of its playback (Timing Tag manipulation), including overall lengthening or shortening. This enables precise synchronisation between dialogue and reusable animations.\n- Any animation can have its intensity changed between 100% and 0% at any point (Ramp manipulation). 
This can be used to reduce the amount of movement in, or even entirely hide, certain parts of the animation.\nNote that for all of this flexibility, animations cannot be created without falling back to dedicated 3D animation tools like XSI.\nTypes of animation\n- Gestures are the most common form of animation and support the full range of animation features, including blending with other Gestures. A Gesture will take control of whatever areas of the model it needs to play.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-7", "d_text": "I started with a base behavior, a single cell that I want to drag around with a spring effect, and the way I defined my behavior is four springs attached to a plane or rectangle and attached to the center of this view.\nSo I'm going to move these four points to get the spring effect.\nThen I need to be able to drag many items, right?\nSo I'm going to define my high-level drag behavior and I'm going to do the exact same trick for all cells.\nThen I need a layout and I'm going to define a flow layout subclass because the basic mode of my layout is just to display a grid.\nIt only changes when I interact with it.\nSo I need three classes, a DraggableLayout which is a UICollectionViewFlowLayout.\nI'm going to define a simple API on this layout so I can easily connect that to a gesture recognizer, I can start the interaction with an array of index paths from a point.\nI can update that location and I can stop the interaction.\nMy high-level behavior is going to be quite similar for the API.\nI'm going to create a drag behavior with a set of dynamic items and from a point, and a way to change that location.\nAnd my low-level behavior is going to be defined with just one item I want to animate, a point and a way to update the location of this cell.\nSo let's see how I implemented that.\nLet's start with a low-level behavior.\nRectangleAttachmentBehavior, I configured that as an item at a given point and then it's just
a matter of creating four attachment behaviors, so I have these four points, I just create a spring AttachmentBehavior for each point and I add these as child behaviors.\nWhen I want to update the location, I just need to compute again these four points and update the attachment point for my four attachment behaviors.\nSo that's my first low-level behavior using four predefined attachment behaviors.\nThe high-level behavior, drag behavior is actually very simple, it's all in this slide.\nSo what do I need?\nI need to pass the dynamic items I want to animate, that point.\nI'm going to create attachments, my low-level attachments, RectangleAttachmentBehavior.\nI add these as child behaviors and to update the drag location, I'm just going to basically tell my low-level behavior to update to this point.\nSo that's it for my high-level behavior.\nNo more layouts.\nThe interaction code is quite simple actually.\nI need to track these index paths.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-0", "d_text": "In this course you will learn the basics about the framework SpriteKit. 
I'm going to explain some theory and some practice examples so that you know a lot more about SpriteKit after this course.\nThe course is structured to help you quickly understand 2 key concepts so that you can start creating your first game straight away without any delay.\nAfter each video you will understand more and more fundamentals about developing games with SpriteKit.\nCreate your own game to impress your boyfriend, girlfriend or your kids <3\nIt's easy to learn with very few pieces of code.\nYou want to learn to develop your own games.\nThis course is fun :)\nRequires minimum programming knowledge.\nA taste of how to create your mini game quickly with ease.\nHow to create a game for your iPhone.\nUnderstand key concepts in creating iOS games.\nHow to create different Scenes in your game.\nHow to import awesome fonts to your game.\nHow to create Animation with Sprite.\nHow to create physical contact between game characters.\nHow to create behaviours, logic, sounds and game effects.\nTask 1: Starter\nTask 2: Menu Scene\nTask 3: Game Scene\nTask 4: Game Effects", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "In solo game development, you can’t just stick to your strong suit. If you’re a Software Engineer trying out solo game development, being a solid developer will only get you so far; even if you’re using assets made by others, you’ll still need to know how to integrate them into your game tightly.\nAnimation is a great example. Imagine a character model has animations for walking, running, standing, talking, interacting, and more. In that case, you will still need to define how a character object triggers and transitions between these animations. These animations might also dictate what your object does; for example, an enemy that punches characters might want to deal damage at the precise moment the model’s arms are fully extended.\nToday, we’ll discuss the essential concepts of using Animation in Unity. 
We’ll also go over a few examples using the incredibly cute Modular Animal Knights PBR asset pack from the Unity Asset Store. A free version that includes only the Dog is available here as well.\nAn Animation Clip (Animation for short,\nAnimationClip in code) is an asset that describes a series of keyframes for each property on a Game Object over time.\nIn other words, an Animation Clip asset describes, for a given Object, the change over time of any number of properties on that object. An Animation Clip can control any property on a Game Object or recursively accessible from a Game Object or any child object.\nIn addition to an Animation Clip specifying properties on a Game Object, a clip can control Animation Avatar properties. An Animation Avatar is an asset that defines the relationship between a standard “skeleton” set of transforms (head, chest, leg, etc.) to a specific model. See the Character Animation tutorial on Unity Learn for more details.\nFor example, in the above screenshot, the “Dog Knight” model comes with an Avatar asset that defines the relationships between . The animations included in the asset pack control the avatar transforms. 
Prefabs included in the asset pack will point to an…", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-0", "d_text": "[ Silence ]\n[ Applause ]\nWelcome to this WWDC session on Advanced Techniques with UIKit Dynamics.\nWe have a lot of content for you today, many lines of code.\nSo let's get started.\nWe're going to start with a very quick recap of dynamics, architecture, and we're going to explore more of this \"combining behaviors\" idea.\nAnd we're going to talk briefly again about dynamic items, custom dynamic items.\nAnd we have a quick example about collection view and dynamics.\nAnd we will end with great demo and architecture about using view controllers with dynamics.\nSo UIKit Dynamics.\nIt's a physics inspired animation and interaction system.\nMade to be composable, combinable, reusable.\nWe try to use Dynamics in a way which is declarative.\nYou tell us what the intent of the interaction is and we will try to combine the effect of all new behaviors to animate things on screen.\nLet me stress that this does not, in anyway, replace what we have to day with Core Animation, UIView animation, or motion effects.\nIt is just a new tool for rich, real-world like interactions.\nSo the base Dynamics architecture, we have this DynamicAnimator which gives us this context in which we associate various behaviors and we associate dynamic items which are usually views or collection view layout attributes.\nAnd the key thing here is, an item might be part of different behaviors and we're going to combine all these effects.\nSo let's talk about UIDynamicAnimator.\nSo its main job is to track behaviors and animated items.\nAnd it wraps the underlying physics engine we run for you.\nWhat's interesting is, we try to actually optimize that engine.\nSo if we detect that the system is at rest, we just stop.\nIf you change anything like changing the parameter on one of your behaviors, we start the system again.\nAnd you can actually be notified.\nWe have 
a UIDynamicAnimatorDelegate, so you can implement methods so you can know if we are about to pause or resume that system.\nYou can use a DynamicAnimator in three modes, basically: with views, which is the common case; in collection views (collection view layout, exactly); and you can implement your own dynamic item to participate in dynamics.\nSo let's talk about combining behaviors.\nCombining behaviors is interesting: the underlying physics model is in itself quite good at combining things; combine two forces and you get a force.\nAnd we build on that, we have this base class UIDynamicBehavior that you can subclass.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-1", "d_text": "The 3D model is quite basic for now, it was built using Blender. I had not used Blender since high school so it was a complete re-discovery; I especially like the Edit mode capabilities when modeling. Yet, there are a few caveats when exporting a Collada file for SceneKit: the shading and normals have to be correctly set up before exporting, as SceneKit won't do any kind of normals reconstruction/interpolation.\nI might add the possibility to choose from a set of lanterns, or to generate random lamps using a simple grammar with building blocks. This will be done later, as I first want to focus on having a clean and coherent mood for the scene.\nTo add a bit of context to the scene, I added rocks and a few blades of grass. The rocks are basic polyhedra edited in Blender, whereas the grass is made of triangular-based shapes.\nIn order to animate the grass, one could use a physical simulation with wind, as SceneKit includes a physics engine, or import a rigged animation. But the simplest way is to modify the position of the vertices in the vertex shader.
SceneKit implements its own private rendering pipeline, but it fortunately allows us to inject code at four given entry points in the shaders attached to any geometry:\nFor each of these modifiers, SceneKit provides a few useful values (position, normal, texture coordinates, transformation matrices,... ) and one can also set uniform variables using the standard key-value observer pattern in Swift/Objective-C.\nHere, we want to move the vertices along time, depending on their height above the ground. We can compute an horizontal offset in GLSL code and inject the code:\ngrass.shaderModifiers = [ SCNShaderModifierEntryPointGeometry: \"float offset = 0.375 * sin(1.2 * u_time) * pow(max(0.0,_geometry.position.y-1.2),2);\\n\" + \"_geometry.position.x += offset/1.414213;\\n\" + \"_geometry.position.z -= offset/1.414213;\" ]\nAnd that's all for the first log! Since then I've had the time to add lighting animations and particles effects, so this is one of the topics I'll cover in the next article, (hopefully) next week. Below is a small animated preview, enjoy!\nAnd please let me know what you think of my project!", "score": 8.086131989696522, "rank": 99}]} {"qid": 2, "question_text": "How should I store and apply 10 million nematodes that come packaged in a sponge?", "rank": [{"document_id": "doc-::chunk-4", "d_text": "Several companies raise and sell the nematodes, which are strictly insectivorous and cannot harm humans, pets, plants, or the beneficial earthworms in your garden. Application of the nematodes is simple. About one million nematodes come packaged on a small sponge pad, about 2-3 inches square. The sponge is soaked in about a gallon of water, and then the water is sprayed over the area to be treated. The nematodes should be distributed at night or on a cloudy day, since they die if exposed to direct sunlight. 
They also work best in a moist environment, so watering the yard well for several weeks after application helps them do their job most efficiently.\nNematode sources include The Bug Store, (800) 455-2847, and Integrated Biocontrol Systems, Inc., (888) 793-IBCS. Both sources can take your order over the phone with a credit card, and provide overnight shipping. These companies suggest using about one million nematodes per 2,500 square feet of garden or yard. Does this sound like a lot? Don’t worry! Costs range between $1-15 per million, depending on the source and quantity purchased.\nAlso With This Article\nClick here to view \"Should You Get With the Program?\"\n-By Nancy Kerns", "score": 50.44786275210697, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "17979 State Route 536\nMt. Vernon, WA 98273\nCall Center Direct Line\nRetail Store Direct Line\nKill pests in soil\nThese microscopic \"worms\" attack and kill over 250 different pest insects in soil, such as fungus gnats, thrips and even cutworms. Beneficial nematodes will attack any pest insect that spends part of its lifecycle resting in soil. They come on a small \"sponge\" that you rinse out in water. Apply the \"nematodes water\" to your plants with a watering can, garden sprayer or pump through a fertilizer-injector. Can be stored in a refrigerator for 2-3 months. One million treat up to 3,000 sq. ft. For best results repeat applications every 4-6 weeks. (Not for use in conjunction with Aphid Predators.)\nCreate a favorable home for your Beneficials ahead of time…\nReminder-- Beneficial Insects don't tolerate most pesticides very well, so it's very important not to apply residual pesticides (such as Malathion and Sevin) for at least a month before using them. While you're waiting out this month period use soapy water sprays (such as Safers) or Stiky Traps right up to the day you let out the beneficials.
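The dosing arithmetic in the article above (about one million nematodes per 2,500 square feet, at roughly $1-15 per million) reduces to a short calculation. This is only a sketch using the article's figures; the function name and the round-up-to-whole-packages behavior are my own assumptions:

```python
import math

def nematode_order(area_sqft, sqft_per_million=2500, cost_low=1.0, cost_high=15.0):
    """Estimate how many million nematodes to buy for a given area,
    plus the likely cost range in dollars.

    Assumes the article's figures: ~1 million nematodes per 2,500 sq ft,
    at roughly $1-15 per million depending on source and quantity.
    """
    millions = math.ceil(area_sqft / sqft_per_million)  # round up to whole millions
    return millions, millions * cost_low, millions * cost_high

# A 10,000 sq ft yard: 4 million nematodes, roughly $4-$60.
print(nematode_order(10_000))
```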
Also botanical sprays (derived from Pyrethrin and Rotenone, for example) can be used with a one week wait afterwards. The small amount of time you spend creating more favorable conditions for Beneficial Insects will be well worth your while!\nNote: Sorry, not shipped to Canada or Hawaii.\nShipping Note: Two day orders received by noon Monday, Tuesday or Wednesday usually ship that same day. (We do not want to delay your en route shipment over the weekend.) Thursday, Friday and weekend orders are usually shipped the following Monday, so that your bugs arrive fresh and ready to go to work!\nCharley's or Call 1-800-322-4707 or Fax 360-873-8264\nCall Center Hours: 7am - 4:30pm Mon-Sat PST\nRetail Store Hours: 9am - 5pm Mon-Sat PST\nCall Center Direct Line 360-428-2626\nRetail Store Direct Line 360-395-3001", "score": 49.05085140557652, "rank": 2}, {"document_id": "doc-::chunk-3", "d_text": "Store in the original package in order to protect from moisture. The pouch should only be opened immediately prior to use.", "score": 46.87428321035753, "rank": 3}, {"document_id": "doc-::chunk-3", "d_text": "Best to apply water first if soil is dry. Application and amount for 50 and 100 Mil. Nematodes: The 50 Mil. + nematodes are packed in an inert carrying material that will dissolve in water when mixed. You can use a watering can, pump sprayer, hose end sprayer, irrigation system, backpack sprayer, or motorized sprayer. The 50 and 100 Mil. Nematodes mix ½ teaspoon per gallon of water. For the large yard size, 1/2 acre (50 million), you can use up to 50 gallons of water; for the acre size (100 million), up to 100 gallons of water. o For covering up to 800 square feet, place approximately 1½ teaspoons of dry nematodes in the hose end sprayer container, and then fill to the 3 gallon mark. (for Garden size) o Evenly spread the solution over the ground areas to be treated.
o Continuous mixing should take place to prevent the nematodes from sinking to the bottom of the container. To avoid blockages, remove all filters from the sprayer. o You can sprinkle the soil with water again after application to move the nematodes into the soil. o Apply nematodes as soon as possible for best product performance. o Keep the soil moist for the first week after application. Application for the 10 Mil. garden size Nematodes: The 10 Mil. nematodes are placed in a sponge. Place the entire sponge in a gallon of water and squeeze for a few minutes to get the nematodes out of the sponge and into the water. Discard the sponge and pour the gallon of water into the sprayer or watering can and apply to the soil. Proper storage and handling are essential to nematode health. Always follow the package instructions for the best method of mixing nematodes. Formulations vary depending on the species and target insect. Nematodes can be stored in the refrigerator for up to a month (not the freezer) before they are mixed with water, but once the nematodes are diluted in water, they cannot be stored.
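The mixing rates quoted in this listing (½ teaspoon of dry powder formulation per gallon of water, and 1½ teaspoons per 3-gallon hose-end batch covering about 800 square feet for the garden size) can be encoded as a small helper. A minimal sketch of the package arithmetic only; the function names and defaults are mine:

```python
import math

def powder_mix_teaspoons(gallons):
    """Teaspoons of dry nematode formulation for a given tank volume,
    at the quoted rate of 1/2 teaspoon per gallon of water."""
    return 0.5 * gallons

def hose_end_batches(area_sqft, sqft_per_batch=800):
    """Number of 3-gallon hose-end batches (about 1.5 tsp of dry nematodes
    each, covering roughly 800 sq ft per batch) needed for a given area."""
    return math.ceil(area_sqft / sqft_per_batch)

print(powder_mix_teaspoons(50))  # teaspoons for a full 50-gallon mix
print(hose_end_batches(2400))    # batches for a 2,400 sq ft area
```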
Also, nematodes shouldn’t be stored in hot vehicles, or left in spray tanks for long periods of time.", "score": 43.739518774318014, "rank": 4}, {"document_id": "doc-::chunk-1", "d_text": "Store nematodes in the refrigerator as soon as you get them; they will keep for up to three weeks.\nThings You Will Need\n- Yellow fly traps\n- Ready-to-use pesticide\n- Bacillus thuringiensis var israelensis\n- Watering can\n- Beneficial nematodes", "score": 43.59200814520795, "rank": 5}, {"document_id": "doc-::chunk-2", "d_text": "These don't really affect those of us using beneficial nematodes specifically on house plants, where the pretty constant environmental conditions provide an artificial environment for pests, so there will probably always be larvae in the soil.\nThe storage issue is probably more for people with massive gardens that need to bulk buy. We can just keep our little packet in the fridge.\nThe only issue I can really find with using beneficial nematodes to get rid of fungus gnats is that they die in dry soil, so keep your cacti and succulents away from your other plants so that they don't get re-infested once they dry out.\nSince fungus gnats also don't like moist soil, that shouldn't really be a problem after the initial application.\nHow to use beneficial nematodes on house plants\nYour packet will come with instructions, but it's…kinda like applying fertiliser. The nematodes usually come on a little sponge. You run water through the sponge and into a watering vessel.
Then water your plants with the nematodes, then again with clean water to really work them into the soil.\nI'd have thought they'd wash out, but I am not a nematode expert. Just follow the instructions.\nWhere can I get beneficial nematodes?\nYour local garden centre probably, but Amazon sell them too.\nChrist, you buy them by the million. How creepy is that??\nWould I use beneficial nematodes?\nThe main reason I personally wouldn't bother with beneficial nematodes is that they have a similar effect to diatomaceous earth – they get rid of pests by killing the larval stage that's in the soil.\nIt just seems a lot of trouble to go to for a few gnats. THAT BEING SAID, if you have a full-on gnat infestation, you'll probably be grateful.\nI'm not sure whether I'd go for diatomaceous earth or beneficial nematodes. If I was treating my garden for pests, nematodes seem to be the way to go, but inside…I dunno. It seems..wrong…to intentionally unleash literally millions of roundworms into my home.", "score": 42.949111133279196, "rank": 6}, {"document_id": "doc-::chunk-4", "d_text": "The nematodes feed on hosts such as termite larvae and spider eggs, burrowing into them and usually causing death within 48 hours. These nematodes use the host's carcass as a resting place and also as a site for spawning. Nematodes are usually available for purchase from garden supply stores or even online. Nematodes must ideally be used as soon as they are purchased. They must be stored away from direct sunlight, as direct sunlight contains UV rays which affect these nematodes adversely and dehydrate their gelatinous bodies.\nHaving an infestation of gnats in the home can be quite a problem; they reproduce extensively, and soon there will be a large-scale problem at hand. It becomes very important to find the right solutions to tackle such infestations. 
You can find a number of different home remedies that will solve the issue once and for all, and most of them use natural means that lead to a lasting solution. Gnats are quite a common occurrence and can be quite annoying over time. There are a number of factors that tend to attract gnats, including wet floors, food remains, sinks, and rotten fruits. Outdoor sources are also a problem for attracting gnats; these include cats, dogs, house pets and sometimes even house plants.\nThere are a variety of methods by which you can eliminate gnats, but first and foremost is the removal of the sources that attract them. Firstly, do not overstuff the house with too many plants. Also remove piles of compost and other sources like stagnant water, which can create further infestation problems. Seal all cracks or other sources of entry in the walls by covering up such openings with gypsum or some suitable material. Check potential food sources and eliminate waste food materials immediately. Keep waste materials and garbage covered at all times, and dispose of the waste promptly without leaving it lying around.\nThere are a number of home-made traps you can use which are made with vinegar. Vinegar is an easily available substance in the house. Gnats are easily attracted by the odor and cannot easily escape. Ensure that you use something like a mason jar, which doesn't make it easy for the pests to leave after entering.", "score": 42.75744280847323, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "It may be possible to reduce the numbers of egg-laying female ticks and thus reduce the number of young, disease-transmitting ticks the following spring.\nOther USDA researchers are exploring the use of fungi as yet another biological alternative to tick-killing chemical sprays.\nNematode Application: For 50 Mil. and larger quantities. 
The nematodes are packaged in an inert powder carrying material that dissolves in water. Apply one teaspoon of the beneficial nematodes per gallon of water, using a watering can, backpack sprayer, pump sprayer, irrigation system, hose-end sprayer, or motorized sprayer. After mixing the nematodes with water, use the spray solution immediately. Evenly spray the solution over the ground areas to be treated. Continuous mixing should take place in order to prevent the nematodes from sinking to the bottom of the container. Keep the soil slightly moist during the first 7 days after application to help establish the nematodes in the soil. Sprinkle the turf or soil again with water after the application of the nematodes. Apply nematodes as soon as possible for best product performance. You may keep the package of nematodes in the refrigerator for up to 3 weeks upon receiving the product.\nNematode Application: For 10 Mil. Nematodes packaged on a sponge. Place the entire sponge in a bucket, add two quarts of water, and squeeze the sponge for a few minutes to get the nematodes out of the sponge. Discard the sponge and pour the bucket of water into a sprayer or watering can. Add another gallon of water to dilute the nematodes and to make up the volume for your sprayer. The 10 Mil. Nematodes can only be kept for up to 3 days in refrigeration.", "score": 42.0880845897221, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "View Full Version : Applying Nematodes\n05-07-2006, 08:55 PM\nWhat do you apply nematodes with? A hose end sprayer? Any filter to take out?\nI'll be doing this for the first time soon so I just want to make sure everything goes smoothly.\n05-07-2006, 11:22 PM\nI listen to a radio show weekly and our guy says use a hose end sprayer. They must be refrigerated in the store and your home until you use them. 
I don't know about filter.\n05-09-2006, 12:34 PM\ni have used a 1 gallon sprayer that I used for non-pesticide applications, but a Miracle-Gro hose end sprayer works fine. just make sure the container is cleaned out. i purchase my nematodes from a company called gardens live and they supply pretty thorough instructions.\n05-09-2006, 11:25 PM\nI applied them once for a customer and used the hose end sprayer...worked fine.\nI think it's a joke but my sprayer didn't clog or anything.\n07-18-2006, 03:56 AM\nI have been using the beneficial nematodes for several years. I have found that a low pressure hose end is best. As for the filter, I would remove it. Although these dudes are microscopic, some may actually be large enough to pose a problem. Be certain that the sprayer has had nothing harmful in it as even a slight chemical residual can kill your todes.\nPurchase your todes direct or from a reputable organic supplier, they have a shelf life of about 6-8 weeks and must be refrigerated upon purchasing.\nCreate a stock solution for the todes. Spray over a premoistened lawn in the evening. They will survive wherever vegetation survives.\nAdd a few tablespoons of horticultural molasses to act as a carbohydrate source, it is much like giving a Hershey bar to a 3 year old. Don't have molasses? Use a flat coke, (not diet), works great too.\nRemember they do not care who purchased them and they travel on, they are not going to turn around once they get to the fence because they belong to you. I apply them 3X a year and always purchase the kind that are sold on a sponge, not in a vermiculite looking medium.", "score": 40.87525355190935, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Hose End Sprayers\nI don't often recommend hose end sprayers. They do the mixing for you, are inconsistent, often clog up and have a tendency to fall apart.
They are okay for spraying microbe products, but not for fertilizers where the concentration and coverage are important.\nQ: Can nematodes be put out with a \"hose-end\" sprayer? B.J., Cedar Hill.\nA: Yes, just remove the strainer if present so clogging won’t occur. Use the nematodes to control fleas, grubs, fire ants, termites, thrips and other pests.", "score": 40.291883567569954, "rank": 10}, {"document_id": "doc-::chunk-4", "d_text": "Nematodes need moisture in the soil for movement (if the soil is too dry or compact, they may not able to search out hosts) and high humidity if they are used against foliage pests. Watering the insect-infested area before and after applying nematodes keeps the soil moist and helps move them deeper into the soil. Care should be taken not to soak the area because nematodes in too much water cannot infect.Exposure to UV light or very high temperatures can kill nematodes. Apply nematodes in the early evening or late afternoon when soil temps are lower and UV incidence is lower as well (cloudy or rainy days are good too). Nematodes function best with soil temperatures between 48Fº and 93Fº day time temperatures.\nApplication is usually easy. In most cases, there is no need for special application equipment. Most nematodes species are compatible with pressurized, mist, electrostatic, fan and aerial sprayers! Hose-end sprayers, pump sprayers, and watering cans are effective applicators as well. Nematodes are even applied through irrigation systems on some crops. Check the label of the nematode species to use the best application method. Repeat applications if the insect is in the soil for a longer period of time. There is no need for masks or specialized safety equipment. Insect parasitic nematodes are safe for plants and animals (worms, birds, pets, children). 
Because they leave no residues, application can be made anytime before a harvest and there is no re-entry time after application.How to use beneficial nematodes: For the home gardener, localized spraying is probably the quickest and easiest way to get the nematodes into the soil. Producers ship beneficial nematodes in the form of dry granules, powder type clay, and sponges. All of these dissolve in water and release the millions of nematodes. Each nematode ready to start searching for an insect in your lawn or garden. Nematodes should be sprayed on infested areas at the time when pests is in the soil. Timing is important, or else you will have to repeat the application. Northern gardeners should apply the nematodes in the spring, summer and fall, when the soil contains insect larvae. Most of the beneficial nematodes are adaptive to cold weather.", "score": 39.46192810689605, "rank": 11}, {"document_id": "doc-::chunk-2", "d_text": "Soil application of the entomopathogenic nematode Steinernema feltiae has shown efficacy against cabbage maggot in trials even at low soil temperatures 50°F/10°C) Apply by suspending infective juvenile nematodes in water and treating transplants prior to setting in the field (as a spray or soaking drench), or in transplant water used in the water wheel transplanter, as a drench after transplanting, or a combination of pre-plant and post-plant applications. Post-plant treatments are likely to be needed if maggot flight begins >1 week after transplanting. Rates of 100,000 to 125,000 infective juveniles per transplant have been shown to be needed to achieve reduction in damage. Nematodes need a moist soil environment to survive.\nChemical Controls & Pesticides:\nDirect application of insecticides to the root zone is considered the most effective means for controlling maggot damage. Insecticides should be applied as a narrow band with enough water to penetrate the root zone. For direct seeded crops, apply insecticides over the row. 
For transplanted crops, spray should be directed to the base of the plant. A wider spray band reduces the concentration of the insecticide over the row and therefore decreases its effectiveness. Some materials may be applied as a transplant tray drench or in transplant water, or in-furrow before or during seeding or transplanting. Be sure to read the label of any material you choose to use to determine which methods it is labeled for.\nScout each successive seeding or planting to determine whether treatment is required. Onion maggot eggs are very sensitive to high soil temperatures (above 95°F), and will die if they are exposed to these temperatures for several days in a row. Generally these soil temperatures are reached by late May/early June, unless there has been excessive rain, which has a dramatic cooling effect on the soil. This means that under high soil temperatures there is no need to spray for this pest.\nSoil application products labeled for use against onion maggot:\nCheck labels for specific crops allowed and other restrictions, including options for soil drench in direct-seeded and transplanted crops and transplant drench. Target the seed furrow or the base of the plants after transplanting, and use at least 100-200 gallons of water per acre to help the insecticide penetrate to the root zone.", "score": 37.33726261704486, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "As a preventative application, mix 8-24 fl. oz in 50-100 gallons of water or a sufficient amount of water to ensure total foliar coverage. For fruit trees or berries, use 16-32 fl oz in 150 gallons of water.\nFor standard application, use 16-32 fl. oz in 100 gallons of water or a sufficient amount of water to ensure total foliar coverage. For berries and fruit trees, use 32-48 fl oz in 150-300 gallons of water.\nFor greenhouse crops use a 0.5%-1% solution (0.60 oz-1.25 oz/Gal) and completely drench plants. 
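The greenhouse rate above expresses the same dose two ways: as a percent solution and as ounces per gallon. Since a US gallon is 128 fluid ounces, the two are a direct conversion (0.5% of 128 oz is 0.64 oz, close to the label's 0.60 oz/Gal figure). A quick sanity check, with a function name of my own choosing:

```python
def oz_per_gallon(percent_solution):
    """Fluid ounces of concentrate per gallon of water for a given
    percent solution, using 1 US gallon = 128 fl oz."""
    return 128 * percent_solution / 100

print(oz_per_gallon(0.5))  # 0.64 oz/gal, near the label's 0.60 oz/Gal
print(oz_per_gallon(1.0))  # 1.28 oz/gal, near the label's 1.25 oz/Gal
```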
For nematode control, use 52 oz-104 oz in 500-1500 gallons of water per acre as a drench or drip. For smaller growers, use 2-4 tsp per gal, depending on the level of infestation.\nCannot ship to the following states: AL, AK, AR, CT, DC, DE, GA, ID, IL, IN, IA, KS, KY, LA, MA, MN, MS, MO, MT, NE, NH, NJ, NM, NY, ND, OH, OK, RI, SC, SD, TN, TX, UT, VT, WV, WY.\nCannot ship via USPS.\nShipping Weight: 3.00 lbs.\nDimensions: 8.25\"L x 3.5\"W x 3.5\"H", "score": 35.31182805746845, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "Rearing and Applying Biocontrol Nematodes Manual (2016); also see Developing a Farmer-/Applicator-Friendly Persistent Biocontrol Nematodes Formulation for Field Application (2023)\nAn overview of the evolution of biocontrol nematode application for pest management follows.\nHistoric perspective: 2016\n- Historically, field applications of biocontrol nematodes have primarily focused on purchasing commercial strains of entomopathogenic nematodes (EPN) and applying them as a biopesticide. Long-term establishment of nematodes in the soil profile for long-term pest suppression through pest recycling is not a focus for commercial suppliers because commercial suppliers want to sell producers nematodes every year.\nFor commercial nematodes to be successful for insect control, they must be applied at rates around 1 billion nematode infective juveniles (IJ) per acre.
Farmers must calculate whether this annual application of commercial nematodes as a biopesticide is economically viable.\nThe need for persistence\n- The use of nematodes in a more classical biocontrol approach requires the nematodes to persist in the environment for several growing seasons by establishing them as part of the soil fauna where they will seek out prey (soil insects), increase their nematode population, spread into adjacent areas, and persist in the field for a number of years.\n- With focus on an area-wide biological program strategy using entomopathogenic nematodes as a means to control the ASB infestation in northern NY, nematode persistence under northern NY conditions is a key component and one of the main reasons why we use native-NY nematodes that have the ability to persist under NNY conditions.\nEstablishing Persistent Biocontrol Nematodes in the Field: 2016\nTo establish the persistent nematodes in a field, applicators drive the entire field using stream nozzles six ft. (6’) apart providing application coverage to 33% of the field.\n- Using this management strategy, application rates of nematodes range from 20 million to 25 million nematodes per acre and nematode spread throughout the entire field will occur in 1 year based on nematode movement in the soil.\n- A review of research data and rearing experience in northern NY indicated that the best combination were the two Steinernema strains of nematodes native to NNY; one functioning in shallow soil, the other in deeper soil.", "score": 34.099680894512524, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "Step-by-step, organic feeding kit program for lawns. No synthetic fertilizers to harm soil life, to cause rapid disease-prone growth. Organic formulas work to nurture healthy growth and support the soil ecosystem. Easy to apply. Attach the pre-mixed bottle to the garden hose and spray. 
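The manual's two strategies differ mainly in rate: roughly 1 billion infective juveniles per acre for annual biopesticide applications versus 20-25 million per acre for the persistent, band-applied approach. The ratio is easy to sanity-check; this sketch just encodes the figures quoted above, and the function name is mine:

```python
def rate_ratio(commercial_ij=1_000_000_000, persistent_low=20_000_000, persistent_high=25_000_000):
    """Per-acre rate ratio between annual biopesticide use (~1 billion IJ/acre)
    and the persistent-establishment strategy (20-25 million IJ/acre)."""
    return commercial_ij / persistent_high, commercial_ij / persistent_low

lo, hi = rate_ratio()
print(f"The persistent strategy uses {lo:.0f}x-{hi:.0f}x fewer nematodes per acre")
```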
Includes liquid corn gluten fertilizer for spring and fall, and fish fertilizer for summer.\nNemaglobe Grub Busters are the immediate solution for white grubs in lawns and gardens. Learn how to apply your Nemaglobe nematodes with the Environmental Factor pre-calibrated nematode sprayer. It’s the easy and effective solution to treat for common soil insect pests in your yard.", "score": 33.953153601512966, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "Pest Control with Nematodes\nNematodes are naturally occurring microscopic worms, already present in our soil. Beneficial nematodes attack and kill targeted garden pests. They are an effective biological control with no pest resistance issues, easy to use and compatible with organic farming.\nCut the cost of crop failure caused by garden pests and wage a targeted war on the slug army.\nNematodes are a simple and effective way to control a variety of garden pests – and they’re biological too!", "score": 33.29510868864808, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "Beneficial Nematodes to eliminate grubs and insect pests\nSafe Biological Pest Control\nAsk a Question Home Nematodes Biology fact sheet 520-298-4400 Order Your Nematodes Here! email@example.comIntegrated Pest Management IPM\nNATURAL PEST CONTROL WITH BENEFICIAL NEMATODES\nBiological Control Of Pest Insects With Nematodes. Beneficial Nematodes naturally occur in soil and are used to control soil pest insects and whenever larvae or grubs are present. Like all of our products, it will not expose humans or animals to any health or environmental risks. Beneficial nematodes only attack soil dwelling insects and leave plants and earthworms alone. The beneficial nematodes enters the larva via mouth, anus or respiratory openings and starts to feed. This causes specific bacteria to emerge from the intestinal tract of the nematode. These spread inside the insect and multiply very rapidly. 
The bacteria convert host tissue into products which can easily be taken up by the nematodes. The soil dwelling insect dies within a few days. Beneficial nematodes are a totally safe biological control of pest insects. The beneficial nematodes are so safe the EPA has waived the registration requirements for application. NATURE'S BEST WAY OF KILLING Grubs and Japanese Beetles. We ship Beneficial Nematodes to the USA and Canada. Call 520-298-4400 for information to place an order for shipment to Canada. For the USA visit our website www.buglogical.com\nThough they are harmless to humans, animals, plants, and healthy earthworms, beneficial nematodes aggressively pursue insects. The beneficial nematodes can be used to control a broad range of soil inhabiting insects and above ground insects in their soil inhabiting stage of life. More than 200 species of pest insects from 100 insect families are susceptible to these nematodes. When they sense the temperature and carbon dioxide emissions of soil-borne insects, beneficial nematodes move toward their prey and enter the pest through its body openings. The nematodes carry an associated bacterium (Xenorhabdus species) that kills insects within 48 hours. The bacteria are harmless to humans and other organisms and cannot live freely in nature. Several generations of nematodes may live and breed within the dead insect, feeding on it as a food source.", "score": 33.291827546195165, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "Apply Nemaslug to moist soil. The soil temperature should be 5ºC or over (this is also when plants start to grow). Nematodes are capable of surviving the odd frost, so don't worry if the temperature falls after you have applied Nemaslug. Slug pellets are reported not to be effective below 7ºC.\nPotatoes are susceptible to slug attack later in the season than most other plants. Apply Nemaslug Slug Killer 6 weeks before harvest, when the tubers are most likely to be eaten by slugs. 
If you have a heavy clay area, ensure you apply Nemaslug to well worked soil. Nemaslug is less effective on cloggy clay soil, which has not been worked and/or has become waterlogged.\n- Nemaslug Slug Killer comes in pack sizes to treat 40 sq.m (50 sq.yds) and 100 sq.m (125 sq.yds).\nRecognising slug damage\n- Look for irregular holes with smooth edges on leaves.\n- Nearby will be evidence of their slime trails. They are particularly fond of succulent seedlings, which, when left unprotected, can be totally destroyed in a single night.\n- As well as attacking the leafy parts of plants, slugs will also feast on your fruit and vegetable crop. Slugs will chew holes in your ripening strawberries and tomatoes.\n- If your seeds do not seem to have germinated, it is possible that slugs have devoured the emerging seedlings underground.\n- Slugs love potatoes, which are of course grown underground, so you cannot see them being attacked.\nNemaslug might affect water snails. To avoid harming them, keep the treatment 15 cm away from ponds.", "score": 33.13290356419112, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "They say to apply on a rainy or cloudy day. I'd say the best time would be to apply after a core aeration. I would think it would be easier for them to get into the ground faster, but hey, it's just a theory. Next year I'm gonna be offering many more nematode releases compared to this year. This year it seems that I have absolutely no grubs at all. Kinda weird I think, but then again it's not the end of August or beginning of September, so we'll see.", "score": 32.78192632494364, "rank": 19}, {"document_id": "doc-::chunk-1", "d_text": "I have read about various DIY concoctions from steeped plant leaves used as a spray but they are all from toxic plants and sound a little too unknown for me. 
This year I have decided to get tough and employ the assassins of the pest world, nematodes, in the form of Nemasys Grow Your Own, which is completely harmless to the environment. I am ready for the next generation. Round 2 to the nematodes I hope!", "score": 31.988624411223764, "rank": 20}, {"document_id": "doc-::chunk-1", "d_text": "When the food source is gone, they migrate into the soil in search of a new host. When the pest population is eliminated, the beneficial nematodes die off and biodegrade. Beneficial nematodes are so effective that they can work in the soil to kill the immature stages of garden pests before they become adults.\nBeneficial nematodes infest grubs and other pest insects that are known to destroy lawns and plants.\nThe nematodes are effective against grubs and the larval or grub stage of Japanese Beetles, Northern Masked Chafer, European Chafer, Rose Chafer, Fly larvae, Oriental Beetles, June Beetles, Flea Beetles, Billbugs, Cutworms, Armyworms, Black Vine Weevils, Strawberry Root Weevils, Fungus Gnats, Sciarid larvae, Sod Webworms, Girdler, Citrus Weevils, Maggots and other Diptera, Mole Crickets, Iris Borer, Root Maggot, Cabbage Root Maggot and Carrot Weevils.\nBeneficial nematodes belong to one of two genera, Steinernema and Heterorhabditis, both commercially available in the U.S. Steinernema is the most widely studied beneficial nematode because it is easy to produce. Heterorhabditis is more difficult to produce but can be more effective against certain insects, such as white grubs and Japanese beetles. How beneficial nematodes work: The life cycle of beneficial nematodes consists of six distinct stages: an egg stage, four juvenile stages and the adult stage. The adult spends its life inside the host insect. The third juvenile stage, called a dauer, enters the bodies of insects (usually the soil-dwelling larval form). Some nematodes seek out their hosts, while others wait for the insect to come to them. 
Host-seeking nematodes travel through the thin film of water that coats soil particles. They search for insect larvae using built-in homing mechanisms that respond to changes in carbon dioxide levels and temperature. They also follow trails of insect excrement. After a single nematode finds and enters an insect through its skin or natural openings, the nematode releases a toxic bacterium that kills its host, usually within a day or two.", "score": 31.877840815109852, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "Nematodes are microscopic soil-dwelling worms, many less than 1/16-inch long.\nThere are beneficial nematodes and pest nematodes.\nBeneficial nematodes help turn organic matter into plant nutrients. They also prey on soil-dwelling plant pests such as white grubs and root maggots.\nPest nematodes feed on plant roots, stunting and sometimes killing plants, including many vegetables.\nNematodes are slender, translucent, unsegmented worms. Pest nematodes can be as small as 1/50-inch long. Beneficial nematodes that parasitize pest insects are larger, from 1/25-inch long to several inches long.\nNematodes live in the film of water that coats soil particles; they thrive where the soil is rich, moist and warm. Nematodes can’t move more than about 3 feet on their own in the course of their lives, but they often travel around the garden in water, in shifted soil, in soil surrounding transplants, on garden tools, and even on ants.\nPredatory nematodes either have teeth or long spear-like structures which they use to stab and suck the juices out of plants or their insect prey.\nNematode reproduction is usually sexual, though some individuals are capable of self-fertilization. Eggs are laid in a gelatinous mass. From hatching to fully formed adulthood, nematode development usually consists of four molts that take about a month; the life cycle of some pest species is only 3 or 4 weeks. There are many generations over the course of a year. 
When the soil grows cold, nematodes overwinter as eggs or adults. Where winters are not cold, nematodes are active year round.\nNematodes are found throughout North America.\nSome pest nematodes attack the roots of tomatoes, potatoes, peppers, lettuce, corn, and other vegetables. Other pest nematodes attack the stems and leaves of onions, rye, and alfalfa.\nPest nematodes include root-knot nematodes, lesion nematodes, ring nematodes, and sting nematodes.\nDamage inflicted by these pests includes root knots or galls, injured root tips, excessive root branching, leaf galls, lesions on dying tissue, and twisted, distorted leaves. This damage can result in stunted growth and sometimes plant death.", "score": 31.661386422532974, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "Over the winter my potted plants that have been sheltering undercover in the polytunnel seem to have become infested with vine weevil grubs - even my nectarine tree has the little white creatures in the soil. I can't water the pots with Provado as it says it can't be used on edible plants. Has anyone used the nematode approach and does it work?\nNematodes do work; you need to apply twice yearly and keep the soil moist for these microscopic worms to work.", "score": 30.877775949429093, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "Kills fleas in the yard and garden where they breed, but is harmless to people, pets and the environment. Made of 7 million microscopic organisms that penetrate the body of juvenile fleas living in the soil. After killing the juveniles these organisms reproduce and continue searching for more pests. For effective flea control, a three-part program is important: fleas must be attacked on pets and animals, in the carpet and upholstery, and outside in the ground. 
Flea Destroyer is the crucial third step – outdoor flea control.\n- It’s easy to use: all you do is mix it with water and spray.\n- Each container includes seven million live beneficial nematodes.\n- One container of Flea Destroyer will cover approximately 2000 square feet in ideal conditions.\nCannot ship to the following states: HI, PR, VI, GU\nCannot ship via SmartPost.\nShipping Weight: 2.0 lb\nDimensions: 0"L x 0"W x 0"H", "score": 30.29147315074555, "rank": 24}, {"document_id": "doc-::chunk-1", "d_text": "Move the sponge a bit and shake it again. Do it 3-4 times until the sponge is a bit wet (not soaked!).\n2. Put the sponge on the surface and apply the coating with short and quick movements.\nDon’t do circular movements. Keep applying until the whole area is covered.\n3. At temperatures of less than 25°C leave the coating for less than 3 minutes before removal. At higher temperatures remove the coating almost immediately after applying.\n4. Remove the coating with a microfiber towel/cloth until smear-free. We recommend using 2 towels/cloths for excess coating removal.\n5. For extra durability we recommend applying a second layer. The second layer is to be applied 2-3 hours after the first one. The hotter it is, the sooner.\n1. Always use new microfiber towels/cloths.\n2. The coating crystallizes around the bottle neck, so take extra care when opening bottles for the first time.\n3. Remember not to mix applicator sponges and other towels if other coatings are applied simultaneously.\n4. Pay attention: once the sponges dry, the liquid crystallizes and the sponges cannot be re-used.\n5. Recommended consumption is 10 ml per sqm in 2 layers: ~7.5 ml for the first layer and ~2.5 ml 
per sqm for the second.", "score": 30.28437025693202, "rank": 25}, {"document_id": "doc-::chunk-1", "d_text": "This spore density was used for all four fungal isolates, and two further sets of chlamydospore densities were prepared to give 1 × 10³ and 5 × 10³ chlamydospores/g soil for isolate Pcc60 only, designated Pcc60A and Pcc60B respectively, and Pcc60C (1 × 10⁴ chlamydospores/g soil).\nThe nematode genera present in soil were mostly non-parasitic nematodes with very few Tylenchidae, and there were no root-knot nematodes present.\nPistachio cv Kaleghochi was chosen for the experiment. Prior to planting, seeds were disinfested for 4 min in 1% NaOCl, rinsed with sterile distilled water, immersed in 1% pentachloronitrobenzene, followed by soaking overnight in sterilized water and then pre-germinated in the dark on moist filter paper in Petri dishes.\nOne seedling was planted in each pot and allowed to establish for a month, when each relevant pot was inoculated with a suspension of nearly 3000 eggs of M. javanica added to three holes around each plant. Treatments included: nematode + isolate Pcc10, Pcc20, Pcc30 or Pcc60C (10,000 cfu/g soil); nematode + isolate Pcc60B (5000 cfu/g soil); nematode + isolate Pcc60A (1000 cfu/g soil); nematodes alone; and pistachio alone. Each treatment was replicated five times, and the pots were arranged in a completely randomized design on a glasshouse bench, with an average temperature of 27.5 °C, and irrigated as required.\nPlants were harvested after 4 months. Roots were washed in water, blotted gently dry and the fresh weights of the pistachio shoots and roots were taken. 
The numbers of galls or egg masses on roots were rated based on the 0–5 scale of Hartman and Sasser (1985).\nTo determine nematode multiplication rates, eggs were extracted from the roots by the NaOCl method, using the same procedure as described above (Hussey and Barker 1973); treated roots were further processed in a blender to extract any possibly hidden eggs within the roots.", "score": 30.08896269348925, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "Using Vine Weevil Killer in a greenhouse during colder months\nIf you want to apply nematodes to your greenhouse when the outdoor soil temperature is below 5ºC (40ºF), or in sheltered gardens where the soil is already warm enough, please order this product and we will send your order straight away.\nNemasys Vine Weevil Killer is the simple solution to kill vine weevil.\nThis standard pack of Vine Weevil Killer Nematodes will treat up to 100 sq.m or over 1300 pots.\nWhat is Vine Weevil Killer?\nNemasys® Vine Weevil Killer contains the natural nematode Steinernema kraussei, which is effective at controlling vine weevil grubs but is totally safe for children, pets and wildlife, and can be used on edible crops too! It comes as a powder; simply mix with water and apply like a liquid feed to the soil around the roots.\nHow does Vine Weevil Killer Work?\nThe Nemasys Vine Weevil Killer nematodes (Steinernema kraussei) seek out the vine weevil larvae and attack the pest by entering natural body openings. Once inside, they release bacteria that cause blood poisoning, stopping the larvae from feeding and quickly killing them. The nematodes then reproduce inside the dead pest and release a new generation of hungry infective nematodes, which disperse and search for further prey.\nTo keep a minor problem at bay, one autumn treatment should be adequate. However, for a serious infestation, treat in the spring and again in the autumn. When treating pots, take care that the soil is not left to dry out. 
Nemasys Vine Weevil Killer will kill larvae present in the area and protect against further larvae damage for up to four weeks.\nWhen to apply Nemasys Vine Weevil Killer outdoors:\nApply to pots and open ground March to May and August to November. This is when the vine weevil larvae are present and the soil is above 5ºC (40ºF). If applying under cover, the pest's life cycle is broken and Nemasys Vine Weevil Killer can be applied at any time, as long as the soil is above 5ºC (40ºF). Apply directly to the soil around the roots, which is where the larvae will be feeding.", "score": 30.07767645749209, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "If you remove the soil from the tree you should be able to find the current generation easily and physically remove them. Also no need for washing the roots imo. Dispose of them in a trash bag. There are nematodes available for treatment (in landscaping there is no chance of physically removing them and thus breaking the cycle) but you need to establish the species to apply the right one.\nI once had such an infestation in a garden centre plant (normal garden, not bonsai) and removing them from the pot was all it needed to get rid of the problem.", "score": 30.04436617477381, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Nematodes used for the experiment were reared on tomato seedlings of cv Early Urbana inoculated with 5000 second-stage juveniles (J2) as explained in Ebadi et al. (2009). At harvest, plant roots were washed, cut into pieces, placed in a jar of 0.5% commercial NaOCl and shaken for 4 min. The suspension was washed with tap water through 75- and 20-μm sieves and their numbers were counted with a counting slide under a light microscope (Hussey and Barker 1973).\nThe four strains of P. chlamydosporia var. 
chlamydosporia (Pcc isolates with the accession numbers Pcc10, Pcc20, Pcc30 and Pcc60) used in this study had been maintained on corn meal agar at 5 °C in the Nematology Department Collection, Iranian Research Institute of Plant Protection, Tehran.\nThe fungal inoculum preparations were made according to the procedure of De Leij et al. (1993). Conical flasks were filled with a mixture of sand + milled barley (1:1 v/v), 30 ml distilled water was added for each 100 g of mixture, and the flasks were autoclaved at 121 °C for 20 min on two consecutive days. Flasks were inoculated with plugs of isolates, kept at 25 °C and occasionally shaken for even growth. After 3 weeks, 5 g of the sand/barley substrate, mixed well with 100 ml distilled water, was transferred to a blender (Waring) and blended for 2 min. The contents were washed onto a 45-μm aperture sieve, and the chlamydospores were collected on a nested 10-μm sieve (De Leij et al. 1993). The chlamydospores were counted with a hemocytometer.\nA suspension of washed chlamydospores was mixed with 50 g sterilized sand, added to 1 kg natural soil collected from a pistachio orchard, and the resulting mixture was used to fill a 14-cm diameter plastic pot, to give a final count of 1 × 10⁴ chlamydospores/g soil.", "score": 29.588998253696264, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Sierra Biological was founded to provide growers with an economical way to reduce their reliance on chemical pesticides, which can have long-term negative effects on soil, water and non-target organisms, including human health. Plants treated with pesticides may also suffer phytotoxic effects, reducing crop yield or affecting their appearance. Beneficial nematodes, also called entomopathogenic or insect-parasitic nematodes, are sustainable alternatives to chemicals. These nematodes parasitize the larval or pupal stage of many damaging insects found in the soil of greenhouses and gardens (see list of targeted insects). 
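The inoculum arithmetic in the pot experiment above is easy to check: a target density of 1 × 10⁴ chlamydospores per gram of soil in a pot holding 1 kg of soil implies 10⁷ spores per pot. A minimal sketch, assuming only the densities and soil mass quoted in the text (the helper name and loop are mine):

```python
def spores_needed(target_per_g: float, soil_mass_g: float) -> float:
    """Total chlamydospores required to reach a target soil density."""
    return target_per_g * soil_mass_g

# Densities used for the Pcc60 treatments in the experiment above
densities = {"Pcc60A": 1e3, "Pcc60B": 5e3, "Pcc60C": 1e4}
soil_mass_g = 1000  # 1 kg of orchard soil per pot

for isolate, d in densities.items():
    print(isolate, int(spores_needed(d, soil_mass_g)))
```

The same function also gives the hemocytometer target when preparing the spore suspension for any other pot size.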
After they parasitize their "insect host," they multiply and many thousands emerge to seek out new prey.\nEntomopathogenic nematodes are completely harmless to plants, animals and many other beneficial biological control agents. When applied following labelled instructions, these beneficial nematodes are safe and have been specifically exempted from E.P.A. registration.\nWe produce 100% natural beneficial nematodes using Mother Nature's very own formula. While the in-vivo method may be more labor intensive than in-vitro production, we have found that we get much stronger, more effective nematodes this way by natural selection.\nWe also supply praying mantises and ladybugs.", "score": 29.479088136249086, "rank": 30}, {"document_id": "doc-::chunk-2", "d_text": "The extracts were filtered through 75- and 20-μm sieves, and the numbers of eggs were estimated from the contents of the latter.\nThe populations of J2 were measured in 200 g soil from each pot of each treatment combination by means of modified Whitehead trays (Whitehead and Hemming 1965). The final total population density of healthy nematodes was calculated by combining the total numbers of J2 and healthy eggs. For estimation of the numbers of healthy eggs, the total numbers of infected eggs were subtracted from the total numbers of eggs. A reproduction factor was estimated by dividing the final nematode population density by the initial population density (Pf/Pi) (Ebadi et al. 2009).\nTo verify the percentage of egg infection, 10 egg masses/replicate (there were few egg masses at the time) were incubated in 0.05% NaOCl between a glass slide and coverslip and observed under ×400 magnification. 
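The population bookkeeping described above (healthy eggs = total eggs − infected eggs; final density Pf = J2 + healthy eggs; reproduction factor = Pf/Pi) can be sketched directly. The counts below are made-up illustration values, not data from the study; only the formula follows the text:

```python
def reproduction_factor(total_eggs, infected_eggs, j2_count, initial_density):
    """Pf/Pi, where Pf = J2 + healthy eggs and healthy eggs = total - infected."""
    healthy_eggs = total_eggs - infected_eggs
    pf = j2_count + healthy_eggs
    return pf / initial_density

# Hypothetical counts for one pot; Pi = 3000 eggs, matching the inoculation level
rf = reproduction_factor(total_eggs=12000, infected_eggs=4000,
                         j2_count=1000, initial_density=3000)
print(rf)  # 3.0
```

A value above 1 means the nematode population multiplied despite the fungal treatment; values below 1 indicate suppression.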
To re-isolate and confirm identification of these infected eggs, a 0.2-ml sub-sample of a suspension made from these eggs was taken, spread over the surface of 0.8% water agar containing 50 ppm each of tetracycline, chloramphenicol and streptomycin, and the grown hyphae were sub-cultured on PDA for further examination.\nViability and abundance of the fungi in soil and on the roots at the end of the experiment were checked using an SSM (semi-selective medium) (De Leij and Kerry 1991). A sub-sample of 1 g soil from each replicate was added to 9 ml of 0.05% sterile water agar and used to prepare a dilution series of 10⁻¹ to 10⁻⁴. An aliquot of 0.2 ml of each dilution was transferred onto a 9-cm Petri dish containing SSM, with three replicates per dilution. Dishes were kept in an incubator at 25 °C for 1 or 2 weeks and the final numbers of cfu were counted.", "score": 29.047714764366383, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Originally Posted by dramaqueen\nI would try frozen but I'm afraid they'd be too messy.\n1) Buy them in the blister-packs by Hikari.\n2) Get your feeding kit together (a special brine shrimp net, a plastic cup and a pipette/tweezers).\n3) Store the pack in a tupperware container (always be food safe :)) in the freezer.\n4) When you want to feed, pop a cube into the net.\n5) Run hot water over it (preferably somewhere like a laundry room sink or bath tub, somewhere where you don't prepare food) until it is melted. 
By using the hot water you have also rinsed it.\n6) Holding the cup under the net to catch the drips, bring it to the fish tanks.\n7) Use the pipette/tweezers to pick up worms and feed.\n8) Rinse everything under hot water and put it in the cup for next time.", "score": 28.451130210412455, "rank": 32}, {"document_id": "doc-::chunk-1", "d_text": "Once the host is dead, the nematodes feed on this bacteria-host slurry and mature to adulthood.\nEffectiveness as a Biological Control Agent\nEntomopathogenic nematodes have several advantages over chemical control. They are safe for humans and pets. No special safety equipment or masks are required. No re-entry time needs to be considered, and they are safe for pollinators such as bees. They also pose no concerns about groundwater contamination, chemical trespass or dangerous residues. These nematodes kill their hosts quickly, usually within 24 to 48 hours, far quicker than most biological controls, which can take days or weeks to be effective. Several species of both Steinernema and Heterorhabditis have been tested for effectiveness against ticks with varying degrees of success, the most successful now marketed under several brand names. Engorged female ticks were found to be the most susceptible to infestation, as they hide in the upper layers of soil while digesting their blood meal.\nAny beneficial nematode product should be applied to moist soil early in the morning or at dusk, as the nematodes are sensitive to light. Be sure to select a product with a nematode species effective against ticks. Products containing Steinernema feltiae and Heterorhabditis bacteriophora have proven to be most useful. However, with more resistance to chemicals developing in the insect world, research is ongoing to find and develop more effective nematode predators.", "score": 28.10033272862215, "rank": 33}, {"document_id": "doc-::chunk-1", "d_text": "Phytoseiulus persimilis are excellent for cold climates! 
Use preventatively, as they overwinter and can be sustained on pollen. Release at the beginning and end of the season.\nAdditional Mite Control - Beneficial Insects, Insecticides, Cultural Controls\nThis Product Controls These Pests or Diseases: Two-Spotted Spider Mite (Tetranychus urticae), Spider Mite (Mult)\nRelease immediately for best results. It is best to release Phytoseiulus persimilis in late evening on the day you receive them. Before releasing, gently rotate the jar to distribute the mites evenly within the carrier. Next, open the jar in the crop and gently tap the contents out of the jar as evenly as possible onto the foliage of the plants you wish to treat. Concentrate the bulk of them on or near the most heavily infested plants. In trees, sprinkle them into Dixie-like cups wedged into, or distribution boxes hung from, the branches. Leave the bottle in the treatment area for 24 hours after release to ensure all Phytoseiulus persimilis have exited.\nDue to its high reproduction rate, P. persimilis can exhaust its food supply and die out. Because of this, P. persimilis should be applied as an active control, not as a preventative measure. Repeated releases are recommended until all spider mite infestations have P. persimilis present.\nSince they are unable to sustain themselves without a mite population to feed on, look to N. fallacis for preventative applications, as it can feed on pollen and small arthropods in the absence of mites.\nOutdoors, Crops, Nursery, Greenhouse, Grow Room, Hydroponics, Aquaponics, Pond & Environment, Interiorscapes, and Container Plants.\nIf not releasing immediately, keep in a cool dark place out of direct sunlight. Do not refrigerate.\nUse within 18 hours of receipt.\nThe complete P. persimilis life cycle varies based on temperature, with higher temperatures accelerating the process and cooler temperatures slowing it down. At 86°F, maturity is reached after 5 days, while maturity is reached in as many as 25 days at 59°F. 
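The two endpoints quoted above (5 days to maturity at 86°F, 25 days at 59°F) allow a rough interpolation of development time at intermediate temperatures, if you assume the development rate (1/days) varies linearly with temperature. That linear-rate assumption is a common first approximation, not something the supplier states:

```python
def days_to_maturity(temp_f: float) -> float:
    """Rough interpolation: assume rate (1/days) is linear in temperature."""
    t1, d1 = 59.0, 25.0   # published endpoint: 25 days at 59°F
    t2, d2 = 86.0, 5.0    # published endpoint: 5 days at 86°F
    r1, r2 = 1.0 / d1, 1.0 / d2
    rate = r1 + (r2 - r1) * (temp_f - t1) / (t2 - t1)
    return 1.0 / rate

# Estimate for a typical greenhouse at 72°F
print(round(days_to_maturity(72.0), 1))
```

Outside the 59-86°F range the linear-rate assumption breaks down, so the function should not be extrapolated.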
The sex ratio for P. persimilis is 4:1 female to male with females laying 2-3 eggs per day.", "score": 27.72972038949445, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "The combined effect of the application of a biocontrol agent Paecilomyces lilacinus, with various practices for the control of root-knot nematodes\nAnastasiadis, I. A., Giannakou, I. O., Prophetou-Athanasiadou, D. A. and Gowen, S. R. (2008) The combined effect of the application of a biocontrol agent Paecilomyces lilacinus, with various practices for the control of root-knot nematodes. Crop Protection, 27 (3-5). pp. 352-361. ISSN 0261-2194\nTo link to this item DOI: 10.1016/j.cropro.2007.06.008\nThe effectiveness of a formulated product containing spores of the naturally occurring fungus Paecilomyces lilacinus, strain 251, was evaluated against root-knot nematodes in pot and greenhouse experiments. Decrease of second-stage juveniles hatching from eggs was recorded by using the bio-nematicide at a dose of 4 kg ha(-1), while a further decrease was recorded by doubling the dose. However, the mortality rate decreased by increasing the inoculum level. Application of P. lilacinus and Bacillus firmus, singly or together in pot experiments, provided effective control of second-stage juveniles, eggs or egg masses of root-knot nematodes. In a greenhouse experiment, the bio-nematicide was evaluated for its potential to control root-knot nematodes either as a stand-alone method or in combination with soil solarization. Soil was solarized for 15 d and the bio-nematicide was applied just after the removal of the plastic sheet. Soil solarization for 15 d either alone or combined with the use of P. lilacinus did not provide satisfactory control of root-knot nematodes. The use of oxamyl, which was applied 2 weeks before and during transplanting, gave results similar to the commercial product containing P. lilacinus but superior to soil solarization. 
(C) 2007 Elsevier Ltd. All rights reserved.", "score": 27.396542459038614, "rank": 35}, {"document_id": "doc-::chunk-3", "d_text": "However, hot water treatment is unlikely to be 100% effective and may also predispose plants to other diseases.\nPreplant fumigation can effectively reduce plant-parasitic nematode infestations in onion fields.\n| Common name (Example trade name) | Amount per acre | REI‡ (hours) | PHI‡ (days) |\n| When choosing a pesticide, consider its usefulness in an IPM program by reviewing the pesticide's properties, efficacy, application timing, and information relating to resistance management, honey bees, and environmental impact. Not all registered pesticides are listed. Always read the label of the product being used. |\n| (InLine) | Label rates | See label | NA |\n| COMMENTS: This product is a soil fumigant used for preplant treatment of soil to control plant-parasitic nematodes, symphylans, and certain weeds, as well as to mitigate the impact of various soilborne fungal pathogens, using low-volume (drip) irrigation systems only. The use of a tarp seal is mandatory for all applications of this product to vegetable fields. Soil fumigants such as this product are a source of volatile organic compounds (VOCs) but are minimally reactive with other air contaminants that form ozone. Its use amounts are restricted on a township basis. |\n| (Telone EC) | Label rates | 120 (5 days) | NA |\n| COMMENTS: Fumigants such as 1,3-dichloropropene are a source of volatile organic compounds (VOCs) but are minimally reactive with other air contaminants that form ozone. 
Use of a tarp seal is mandatory for all applications to vegetables in California. |\n| (Vapam HL) | Label rates | See label | NA |\n| COMMENTS: Fumigants such as metam sodium are a source of volatile organic compounds (VOCs) but are minimally reactive with other air contaminants that form ozone. |\n| (K-Pam HL) | Label rates | See label | NA |\n| COMMENTS: Fumigants such as metam potassium are a source of volatile organic compounds (VOCs) but are minimally reactive with other air contaminants that form ozone. |\n| PLANTING OR AFTER |\n| (Vydate L)* | Label rates | 48 | See label |\n| COMMENTS: Can be applied in-furrow, as a band, or in sprinkler or furrow irrigation. |", "score": 27.013730877699004, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "Beneficial nematodes are used in greenhouses to manage fungus gnats. Some growers have success using them to manage thrips also.\nAre my beneficial nematodes alive?\nAfter receiving a shipment of beneficial nematodes, it is important to assess their quality before application. To do this, place a small sample of nematodes (5 ml) into a shallow container or Petri dish. Add one to two drops of tepid water, then wait a few minutes (about 10 minutes) and look for actively moving nematodes. If the nematodes are alive, they will have a snake-like movement or will be curled into a circle (“like a doughnut”). If they are dead, they will have a straight, arrow-like appearance. 
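A simple way to record the shipment quality check described above is as a viability fraction from a counted sample. The counts below are hypothetical, and the 90% acceptance threshold is an arbitrary placeholder, not a vendor specification:

```python
def viability(alive: int, dead: int) -> float:
    """Fraction of actively moving nematodes in the counted sample."""
    total = alive + dead
    if total == 0:
        raise ValueError("empty sample")
    return alive / total

# Hypothetical counts from a 5 ml sample in a Petri dish
v = viability(alive=188, dead=12)
print(f"{v:.0%}")       # 94%
acceptable = v >= 0.90  # placeholder threshold, not a vendor figure
```

Since suppliers ship extra nematodes to offset some mortality in transit, a moderate dead count does not necessarily mean a bad batch.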
You may see some dead nematodes, but most companies supply extra nematodes to compensate for the presence of any dead ones.\nIf using a microscope, it may be easier to see the nematodes if you have a light source from below, or, if your light source is from above, by placing the dish of nematodes against a black background.\nNote: this photo was taken over one day after the nematodes were applied (it is recommended to apply the solution immediately), which accounts for the presence of the dead nematodes.\nFor more information see the fact sheet “Biological Control: Using Beneficial Nematodes”.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-1", "d_text": "The beneficial nematodes enter the fungus gnat through any available orifice (or, failing that, right through the side of its body) and poison it by releasing bacteria from their gut.\nAt first this may seem mean and disturbing (and it is), but the beneficial nematodes aren’t just doing this for a laugh. The purpose of killing the host is to provide the nematodes with something to feed on.\nHow long does it take for beneficial nematodes to kill fungus gnats?\nIt takes 24-48 hours for the nematodes’ bacteria to cause the host fatal blood poisoning. Pretty speedy then. However, it can take a couple of weeks for your gnat problem to be under control.\nThe length of time it takes for the nematodes to kill pests varies due to factors such as the size of the pest, the size of the infestation, and the environment they’re in. Nematodes are sensitive to ultraviolet light and temperature.\nIt’s recommended that two doses of beneficial nematodes are given, a week or so apart, to make sure all gnats are eradicated.\nWhat are the advantages of using beneficial nematodes over other methods of fungus gnat prevention?\n- Beneficial nematodes don’t pose any risk to humans, pets and plants. 
They’re predators that prey on other bugs only.\n- Beneficial nematodes are naturally occurring, although they have been bred commercially. They’re a pretty holistic and non-toxic method, as terrifying as their methods are.\n- Fungus gnats can’t build up a resistance to them, just like penguins can’t develop a resistance to leopard seals.\n- Beneficial nematodes don’t pose a threat to beneficial insects such as ladybirds, earthworms, and bees. How do they know?\n- They can last for up to 18 months in the soil. Pretty impressive, although it’s unlikely that you’ll have enough pests to sustain them for that long. If you have a vegetable garden that’s prone to pests though, beneficial nematodes would be a cost-effective, non-toxic solution.\nAre there any disadvantages of using beneficial nematodes?\nThere are some disadvantages to using beneficial nematodes outside, such as:\n- You need to time application right so that the pests are in the right stage of their lifecycle\n- They need to be kept above 54°F/12°C\n- They can be difficult to store.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-2", "d_text": "- Add organic compost to garden beds. Beneficial fungi and bacteria that attack pest nematodes thrive in organically rich soil.\n- Plant French marigolds (Tagetes patula) or African marigolds (Tagetes erecta) in the garden as a cover crop; grow only these plants for two months across vegetable planting beds. After two months, cut them down, let them dry in place, then turn them into the soil. Chemicals in these plants repel nematodes. You will likely have to repeat this treatment in two years.\n- Solarize the soil. Cover the soil with clear plastic during the hottest part of the summer to kill pest nematodes; this will rid the soil of pest nematodes for a year or two. However, beneficial organisms are also harmed or destroyed by solarization of soil.\n- Add chitin to the soil. Chitin is a natural component of nematode bodies. 
Fungi attack nematodes by breaking down the chitin in the body. Adding chitin to the soil will stimulate fungi to attack nematodes.\n- Add ground sesame stalks to planting beds. Sesame will suppress nematodes.\n- Leave the garden fallow for a year. The nematode population will decline if denied food for a year.\n- Till the soil in winter to expose nematodes to killing sunlight and dryness.\nNot all nematodes are pests; some are beneficial to soil and plants. These nematodes eat organic matter in the soil, helping to decompose it and turn it into nutrients for plants. They also attack and kill harmful insect pests, ingest the remains, and turn them into nutrients (especially nitrogen) that plants can take up.\nBeneficial nematodes that are insect parasites are harmless to humans, wildlife, bees, and earthworms. They attack cutworms, root weevils, corn and stem borers, squash vine borers and some pest root nematodes.\nThe beneficial nematodes attack soil-dwelling insects through their natural body openings. Once inside, these beneficial nematodes release a bacterium that paralyzes and kills the insect. The nematodes then feed on the tissue of the insect carcass and also eat the bacteria. They reproduce inside the carcass and then move on to a new host.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "This has significant negative impacts on soil biology and its biodiversity, hence in the long term it does nothing to improve the situation. As a consequence most growers prefer not to use them. Thankfully there is now an alternative that shows great promise for controlling plant-parasitic nematodes in a much more sustainable way.\nNEMguard, which is a registered nematicide in Europe, contains an active constituent called diallyl polysulphides, which is found in all allium plants but exists in garlic at much higher levels. 
The particular garlic extract in NEMguard acts on plant parasitic nematodes at very low doses and lasts just long enough in the soil to deal with the egg and juvenile stages of the parasitic nematode. When applied to the soil, it dissolves through their cuticle, causing rapid and irreversible cellular damage.\nOver the coming months OCP will be conducting numerous trials in horticultural crops and turf to generate the efficacy data package for APVMA registration.
Overnight shipping is market-priced by distance from Tucson, Arizona.", "score": 26.357536772203648, "rank": 41}, {"document_id": "doc-::chunk-1", "d_text": "Human Vs. Nematode Part 1\nTraditional methods of controlling nematodes in garden soil relied on crop rotation and allowing fields to sit fallow so that root parasites had no hosts. The soil fumigant carbon disulfide was first used in French vineyards in the 1880s. Surplus World War I teargas was used for soil fumigation until supplies were exhausted. In the 1940s, various pre-planting chemical fumigants were developed. By the 1960s, organophosphates and carbamates were successfully applied on fields among living plants.\nHuman Vs. Nematode Part 2\nThe National Sustainable Agriculture Information Service, which is funded through the U.S. Department of Agriculture, recommends organic nematode control. Rotating unrelated crops prevents nematodes from attacking the same host each year. Adding composted organic matter improves soil structure and releases plant toxins lethal to nematodes, and it also increases fungi and bacteria that parasitize nematodes. Minimum tillage maintains a high number of nematoparasites. Nemato-suppressive soil amendments include oil-cakes, sawdust, sugarcane bagasse, bone meal, horn meal and green manures such as cereal rye. Chitinous amendments such as crushed shells of shrimp and crab nourish fungi that also eat chitin in nematode eggs.", "score": 25.99671658095016, "rank": 42}, {"document_id": "doc-::chunk-3", "d_text": "Three beneficial nematodes are: Steinernema carpocapsae attacks cutworms, armyworms, corn rootworms, and fire ants; Steinernema feltiae attacks root knot nematodes, ring nematodes, and string nematodes; Heterorhabditis bacteriophora attacks cabbage root maggots, Colorado potato beetle larvae, white grubs, and root weevils.\nBeneficial nematodes can be purchased for use in the garden. Beneficial nematodes come packaged in a gel, in a powder, or mixed with peat and vermiculite. 
They are commonly mixed with water before being applied with a watering can or sprayer to the soil or plants, or injected into plant stems by syringe. It is important to follow label instructions when applying nematodes to the soil or plants. Commonly, applications are repeated every two to three weeks if needed.\nThe efficacy of nematode application can be affected by hot weather, cold soil, and heavy rain. Nematodes are most active in temperatures between 72°F and 82°F.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "Will Bti harm beneficial nematodes? Timing of Neem oil use and beneficial nematodes\nI am currently fighting a fungus gnat infestation in my 100+ vegetable seedlings and dozens of house plants. I have ordered S.feltiae nematodes for treatment of new soil; have treated existing seedlings and every plant in the house with reconstituted mosquito dunks (active ingredient is Bt israelensis or Bt H-14) in the watering can and am spraying with pyrethrin to catch the escapees; posting sticky cards to get a count. With time, wine (pour moi) and muttered curse words, I hope to have these buggers knocked down soon. After I introduce the 'todes to the new soil, will they be harmed by watering with Bti? Like, is there a residual bacillus carryover from larvae that are dying to the 'todes that eat them, or if the larvae are dead, will the Bti attack the 'todes? Obviously, if there are no live larvae, the 'todes will starve to death and the Bti will be ineffective against the dead gnat larvae. Since I don't have a microscope or loupe, I can't tell if there are any live larvae in a soil sample so I'm using "absence of adults" as my best determination that anything is working. I have read up that I should not use my Neem oil spray on soil with beneficial nematodes. I am going to get a shipment of S.carpocapsae for the garden and the fruit tree grounds when it warms up enough to put in my seedlings. 
How long should I wait to spray any plant with Neem oil?\nCass County Minnesota\nNo, Bacillus will not harm beneficial nematodes. Bti targets only the larvae of flies and mosquitoes, such as fungus gnats; it is the kurstaki strain (Btk) that targets lepidoptera larvae (caterpillars). Neem oil should also not affect the nematodes.\nOne way to test for fungus gnat larvae is to put a slice of potato on the soil surface (if there is room).\nYou'll want to reduce your watering to control the gnats, too.\nFor any other questions, please contact your local county extension office in Cass County. http://www3.extension.umn.edu/county/cass", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "Rich Farm 5\n• This 60 g package is good for 666 m².\n• Ensure soil moisture for maximum effect; dry soil hinders product function.\n• Please apply the microbes hand-in-hand with chemical or organic fertilizers.\n• Standard Operating Procedure\n• Sprinkling irrigation: Mix the 60 g Rich Farm 5 and 180 L water, then add the mixture and sufficient water volume into the watering system and apply to moist soil around the roots.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-6", "d_text": "2 g of NaCl\n4 g of Bactotryptone\n3 g of KH2PO4\n0.5 g of K2HPO4\n20 g of agar\n1 mL of 5 mg/mL cholesterol (dissolved in 100% ethanol)\nDissolve all ingredients in 1 liter of distilled water and autoclave.\nEscherichia coli: Strain OP50 or a streptomycin-resistant derivative of E. coli OP50 is used as nematode food. OP50 is a uracil auxotroph, meaning that this strain of bacteria cannot synthesize the essential nutrient uracil; therefore, growth of this strain is not as robust as in wild type E. coli, even when the culture medium provides sufficient uracil. Because bacterial growth is limited by the auxotrophy, the "lawn" of bacteria does not grow so thick as to obscure observation of nematodes. NGM-lite agar medium poured into sterile plastic petri plates is seeded with a liquid culture of OP50 E. 
coli bacteria by applying a drop or two on the middle of the plate. C. elegans worms will generally confine themselves to the bacterial lawn; therefore, leaving the edges of the plate unseeded with bacterial "food" allows us to observe all the animals fairly easily.\nSeveral hundred adult worms may be grown on a single 60 mm x 15 mm NGM-lite plate without exhausting the bacterial supply. However, each animal produces close to 300 offspring, meaning that the capacity of a plate can be exhausted within one to two days if too many worms are transferred. When the bacterial food is exhausted, the adults die of starvation and the population becomes primarily geriatric adults and young larvae, neither of which are useful for setting up further experiments or crosses. Plates inoculated with 5 or fewer worms can support one generation of growth and last up to a week in good condition if the temperature is controlled carefully.\nMethods for Handling Worms\n- Wire Tool (“worm pick”): One end of a fine platinum wire is embedded in a Pasteur pipette and the other end is flattened to make a tiny shovel. The tool is sterilized by briefly passing it through a flame. Older larvae and adults can be lifted off a plate by scooping under them and lifting up. In this way, individual worms can be "captured" and transferred.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-1", "d_text": "If beetles are stored in the refrigerator (do not freeze them) and released periodically, warm beetles weekly to room temperature and feed them very dilute sugar water by misting them using a trigger-pump spray bottle. Do not refrigerate the lady beetles with food as they excrete an unpleasant odor.”\nMost commercial packages contain enough lady beetles to control aphids on one shrub or a few small plants. In most cases, 95% of these will fly away within 48 hours, even if aphids are abundant. 
If you apply lady beetles again, place them at the base of plants at dawn or dusk, never in hot sun, and spray plants and lady beetles gently with water. Never put them on plants that have been treated with insecticide as it will kill them. You may have better luck using green lacewings for long-term aphid control, as they don't tend to fly away; as long as you have blooming plants that provide nectar for the adults to eat, they tend to stick around even after the aphids are gone.\nThe Shasta Master Gardeners Program can be reached by phone at 530-242-2219 or email firstname.lastname@example.org. The gardener office is staffed by volunteers trained by the University of California to answer gardeners' questions using information based on scientific research.", "score": 25.415593120404935, "rank": 47}, {"document_id": "doc-::chunk-1", "d_text": "If swallowed, give two glasses of water. Do not induce vomiting. Get medical advice.\nEye splashes: irrigate with a stream of water for 15 minutes. Get medical advice.\nSkin splashes: Wash immediately with clean water.\nBees: Only apply to plants in flower when bees are not foraging, i.e. early morning or evening.\nStandard precautions also apply. See Product Index, page 4.\nWithholding period: Nil, all crops. NZ trials have shown no residues on crops (apple, grape, avocado). A period of 3 days should elapse between the last application and harvesting of all crops as a precautionary approach.\nDo not apply if rain is likely to fall within three hours following treatment.\nNo additional surfactant or wetting agent is required.\nMay be tank mixed with commonly used fungicides except lime sulphur, Bordeaux mixture and other copper compounds.\nThorough coverage of plants is essential for optimal results. Can be applied through any calibrated conventional ground spraying equipment. Water rates will depend on the crop and growth stage. When using concentrate spraying techniques and low water rates, adjust the dilution rate accordingly. 
Note: Following application, insects will stop feeding, become inactive and stop causing damage. Pests may still be present on treated plants for some time after application.\nControls a wide range of pests including aphids, bronze beetle, erinose mite, whitefly, leaf-mining flies, mealy bug, scale insects, potato tuber moth, brown beetle (grass grub), cicada, weevils and midges. It should be combined with Bacillus thuringiensis (Bt) insecticides to provide more complete protection against caterpillars where multiple generations make them a problem. Note: For greenhouse crops ensure complete coverage of the underside of all leaves. Aphids: Use 300-500 ml/100 litres water and apply when pests are first seen. Monitor crop and repeat application 10-14 days later if necessary.\nMealy bug, thrips, whitefly: Use 500 ml/100 litres of water and apply when pests are first seen. When monitoring indicates high numbers of pests are present, a cluster of 2-4 sprays at 7-10 day intervals may be necessary to achieve optimal control.\nMites and leaf mining flies: Use 500 ml/100 litres of water and apply 2-4 sprays at 7-10 day intervals.\nPack sizes: 1 and 5 litres.", "score": 25.000000000000068, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "Nemashield 100 million unit Nematodes BIOWORKS\nVendor: BIOWORKS INC\n*Order by 12 noon ET on day of shipping--FedEx Next Day\nActive Ingredient: Steinernema feltiae\nBeneficial nematode for the control of fungus gnat larvae as a drench. Western flower thrips control from a soil surface treatment has been demonstrated. 100 million nematodes treat 2200-3400 sq. ft. 
for fungus gnat larvae control depending on pest pressure.\nMOA = NC\nREI = 0", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-1", "d_text": "- The predators are released by opening the lid and sprinkling the contents of the bottle on the leaves of the host plants, preferably in a shady area.\n- The predators should be distributed evenly through the crop, on the foliage, with additional material at the end of the rows and in hotter/drier areas prone to spider mite attack.\n- Do not expose the bottles to direct sunlight.\nBefore combining BioCalifornicus with any chemical pesticide in the crop, please consult your BioBee technical field representative.\nRecommended storage temperature\nDo not store in sunlight\nApply early morning or late afternoon\nRoll back and forth before use\nApply within 24 hours\nBioBee Sde Eliyahu Ltd. produces and markets biological products. Production is carried out using innovative techniques under controlled quality assurance standards such as ISO 9001:2015, as well as IOBC’s international standards for mass-production of insects. All products are tested to meet specification requirements before leaving the factory.\nThe success of biological pest control is affected by the crop’s initial pest population (upon application of the product), weather conditions and chemical residue present in the crop, among other possible aggravating factors.\nUnder no circumstance shall BioBee be liable for the outcome of the implementation in the field, as it has no control over local conditions, the application method, or the possible improper treatment/storage of the product.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-2", "d_text": "Then irrigate again immediately after application.\nTenet WP is not compatible with the following fungicides: imazalil, dichloran, mancozeb, propiconazole, tebuconazole, thiram, and triflumizole. 
Do not tank mix with, or apply Tenet WP within 3 days before or after use of these products.\nDILUTION INFORMATION –\nTo prepare a suspension, combine 1 lb. of Tenet WP for every 1.25 gallons of water, and mix from time to time in order to promote the germination of conidia and obtain faster soil colonization. Subsequently, dilute the suspension in the amount of water needed per crop-specific applications on the label under the DOCS tab.\nSOILBORNE/SEEDLING DISEASE CONTROL –\nCan be applied using the following application methods: cutting and bare root, broadcast, in-furrow, banded, greenhouse and nursery drench, and applications made via chemigation systems applied over the row or directed towards the desirable plant's crown and rooting area, either before planting, at planting, or shortly after planting and promptly watered in. For in-depth, specific instructions on these methods, please refer to the manufacturer's label under the DOCS tab.\nDo not apply directly to water, or to areas where surface water is present or to intertidal areas below the mean high water mark. Do not contaminate water when cleaning equipment or disposing of equipment washwaters or rinsate.\nThis product may pose a risk to beneficial coleopteran (beetle) species.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-1", "d_text": "Entomopathogenic nematodes have proven to be at least as effective as chemicals and therefore their use is highly recommended.\nWritten by: Robert Childs", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "This is our most effective control for aphids in enclosed spaces and greenhouses and is favoured by commercial growers.\nAphidius are the most popular and least expensive way of controlling common aphids, which is why they are the first choice for the commercial grower. 
Aphidius colemani tend to tackle the smaller aphids such as Myzus persicae and Aphis gossypii - for larger aphids there are Aphidius ervi.\nSold in tubes of 500, all our prices include P&P. Orders placed by 10am Monday will be despatched later in the week.\nHow much do I need?\nWhen to use\nHow to use\nAphidius are tiny insects which lay their eggs inside aphids; the aphid is then paralysed, and the new aphidius hatches from the body, leaving behind a dead aphid “mummy”. The insects are supplied ready to hatch out in their “mummy” form. You will probably not be able to see the mummies without a magnifying glass or microscope, but if it is warm enough you will see some adult aphidius flying out of the tube when you open it. They look like very small flying ants, about 2-4mm long.\nThe tube will contain the mummies, some inert carrier for protection and possibly a few ready hatched flying adults.\n1 predator per 2 square metres per week. Increase to 5 predators per square metre if small aphid colonies are present.\nUse at the very first signs of aphids being present or as a preventative measure on aphid-prone plants. They work better above 15°C.\nDo not open the bottle until you are near the plants as hatched adults may fly out! Open the bottle and gently sprinkle the husks on the plants to be protected and then leave the tube on the soil so that any remaining aphidius can fly away.\nIt does not matter if the husks fall off the leaves onto the soil or bench. However, try to avoid sprinkling onto an area which may get disturbed by cultivation or watering. 
Alternatively, if you are treating a very small area, you can just lay the tube on its side with the top open and let the insects fly out themselves when hatched.\nIf you can’t put the Aphidius out straight away you need to keep them cool to stop them all from hatching out and dying through lack of food.", "score": 23.648681025220455, "rank": 53}, {"document_id": "doc-::chunk-2", "d_text": "Apply 1 gal of mix to each 350 sq ft.\nTo control powdery mildew on CUCUMBERS and SQUASH, spray when powdery mildew appears and repeat as necessary. Do not spray young plants. Apply 1 gal of mix to each 600 sq ft.\nTo control powdery mildew on STRAWBERRIES, spray when disease appears and repeat as necessary. Do not spray young plants or apply to strawberries that will be used for canning. May be used up to the day before harvest.\nTo control powdery mildew on GRAPES, thoroughly cover foliage. Begin when new shoots are 6-10 inches. Repeat before blooms open and at 14 day intervals as needed. CAUTION: Do not apply to wine grapes within 21 days of harvest. Sulfur may damage concord and other labrusca varieties.\nHome Greenhouse Application:\nSHAKE WELL before use.\nTo control powdery mildew, leaf spot and rust on bedding plants, propagates, fruits and vegetables:\nMix 1 part concentrate to 30 parts water (8 tbsp. per gal) and apply according to above recommendations by crop type. 
Test for phytotoxicity before full-scale application.\n- Some plants are susceptible to injury from sulfur under certain climatic conditions.\n- Do not apply on Boston fern, spinach, apricot, filberts, walnuts, and viburnum.\n- Do not apply when temperature exceeds 85 degrees F.\n- Do not apply for at least 4 weeks following application of an oil spray or within 4 days of other pesticides.\n- Be sure to wash and rinse spray equipment thoroughly after each use.\n- Do not allow mixed spray to stand in spray equipment.\n- Avoid contact with plants until spray has dried.\nStorage and Disposal:\nStorage: Store only in original container in a cool, dry area inaccessible to children and pets. Protect from freezing and heat. Keep container closed tightly to prevent evaporation.\nDisposal: Non-refillable container. Do not reuse or refill this container. If empty, place in trash or offer for recycling if available. If partly filled, call your local solid waste agency for disposal instructions. Never place unused product down any indoor or outdoor drain.\nPlant Disease Control - FAQs\nQ: How do I know if my plant is being attacked by an insect or a disease?", "score": 23.642463227796483, "rank": 54}, {"document_id": "doc-::chunk-1", "d_text": "Unlike a pesticide application, nematodes are alive, will move with time throughout the field, and will persist for many years. Farming activities that involve the movement of soil throughout the field enhance the natural movement of these biocontrol nematodes.\nTreating Part or All of a Field: Application Rate Options\n- Farmers have the option to treat all of their acreage directly with the persistent biocontrol nematodes for faster results. The approach is similar to a pesticide application where nematodes are applied to entire fields in application streams every two ft. (2’). 
However, we recommend utilizing the skip nozzle method to reduce costs and allow natural movement of the nematodes to fill in the areas where nematodes were not applied.\n- Natural movement includes the physical dispersal of the nematodes, movement of infected insects before they die, and the movement of soil with nematodes to other parts of the field during farming practices.\n- Spacing the application nozzles every six ft. (6’) means only every third nozzle applies nematodes, so about 33% of the field is actually treated, at a cost savings. Some farms have further reduced costs by applying nematodes to 16% of the field by spacing the stream nozzles 10 ft. (10’) apart.\n- Full field application and the 33% application rate are recommended for fields with large to moderate ASB populations, whereas the 16% application rate is best used for fields with low insect pressure.\nBiocontrol Sprayer Requirements\nPlease refer to the sprayer requirement manual which provides the necessary steps and timing for your application of nematodes. Also see 2022 NNYADP project results for updated information and formulation.\n- Biocontrol nematodes used for control of alfalfa snout beetle are easily applied through slightly modified commercial pesticide sprayers. 
In field trials, application equipment ranged from large commercial pesticide sprayers with all screens and filters removed and nozzles changed to fertilizer stream nozzles (006-0015), to smaller sprayers with open nozzle bodies dribbling a stream of water, to farm-made gravity-dispersed applicators composed of a water tank and a pipe with holes in it onboard a farm gator.\n- When applying nematodes, enough water needs to be used to penetrate the plant canopy and deposit the nematodes on the soil surface so they can enter the soil profile.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-1", "d_text": "third instar larvae or adult mealybugs.\n- To apply the product, remove the lid, expose the double-sided adhesive surface of the label (to protect the contents of the package from ants) and place the bottle in a shady place, protected from rain or dew, preferably close to a mealybug-infested spot. Following their emergence, the wasps will fly out of the bottle and disperse between the plants. Do not sprinkle the mummies actively from the bottle.\n- If ants are present at the mealybug hotspots, they must be destroyed. Ants encourage honeydew secretion by the mealybugs, transfer them from one place to another and protect them by actively interfering with the parasitoid.\n- BioAnagyrus is shipped in temperature-controlled styrofoam boxes which must be kept intact until reaching the end-user.\n- Keep the product at room temperature (not refrigerated!) until emergence has started (appearance of a few dozen parasitic wasps). Avoid direct exposure of the product to sunlight!\n- Two to three weeks following BioAnagyrus release (depending upon temperature), mummified mealybugs can be clearly detected. 
The subsequent established generations of the parasitoid will effectively control the mealybugs in the longer run.\n- BioAnagyrus introduction rate should be determined according to the nature of the crop and rate of mealybug infestation.\nAll products are tested to meet specification requirements before leaving the factory.\nBioBee is not responsible for the outcome of implementation in the field, as it has no control over the method of application, local conditions, treatment/storage of product not according to instructions, etc.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-1", "d_text": "The contents are gently shaken onto leaves throughout the greenhouse or placed on the rock wool block or growing media in contact with the plant stem. Upon receipt, active predators should be visible at the top of containers at room temperature.\n· Slow release bags, containing approximately 30 mL (1/8 cup) of carrier with predators and a food source. The bags act as miniature breeding units and are hung on plants throughout the greenhouse. Over four weeks, each bag can produce over 1000 predators under good conditions.\nRelatively high introduction rates are required because thrips can reproduce nearly twice as fast as Cucumeris, and Cucumeris only feeds on immature thrips, not adults.\nGeneral Introduction Rates\n· 10-100 Cucumeris/plant, weekly, as needed.\n· As a starter culture for young plants, place 25 Cucumeris/plant at the base of the stem as soon as they are planted out in the greenhouse.\nUsing the Bulk Product\n· Greenhouse peppers – 10 Cucumeris/plant. 
This is sufficient early in the growing season if pollen is available as an alternate food source.\n· Greenhouse cucumbers – 50-100 Cucumeris/plant, weekly, until the percentage of leaves with predators is greater than that with thrips.\n· Greenhouse tomatoes – 25 Cucumeris/plant, weekly, for two weeks, when thrips are detected.\nUsing Slow release bags\n· Greenhouse cucumbers – 1 bag/5 plants every 1-2 weeks, until there is 1 bag/plant in infested areas\n· Interior plantscapes – 1 bag/large plant, every 6-8 weeks\nHang bags within 25 cm (10 inches) of the growing point on greenhouse crops, ensuring good contact with the stem and leaves. Bags should not be exposed to direct sunlight or overhead watering.\nEstablishment of Cucumeris requires 4-8 weeks, so it should be applied before thrips problems develop. Because Cucumeris feed only on immature thrips stages, a decrease in adult thrips populations will not occur for about 3 weeks.\nFor Best Results\n· Where Persimilis is being used for control of spider mite, avoid heavy applications of Cucumeris. Cucumeris feed on spider mite eggs, which may limit the food supply for immature Persimilis and reduce their effectiveness.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "NemaKnights Ant Attack\nAre you tired of finding ants in your home or anthills in your driveway, sidewalk, and interlock? It might be time to take some control with the Nemaknight Ant Attack pearls. Nemaknights Biological Ant Attack allows for a convenient spot treatment solution that makes controlling anthills much more efficient. Simply flip open the cap, shake the pearls into the anthills and surrounding area, and spray with water!\nBased on encapsulated S.feltiae nematodes. 6-month stable shelf life, with extended slow release to control of larval soil pests. For outdoor use. Environmentally sustainable, exclusive organic formulation and 100% pesticide free. 
No refrigeration required.\nTarget pests include: common black ants, common red ants, leatherjackets and thrips. A 280 g package treats approximately 20 ant hills, or ~215 sq. ft.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "Q. It seems no matter what brand of potting soil I buy, I have to bake it before I can use it. If I don't, I get hundreds of tiny flies that hatch and swarm. I put out water to catch and drown them. I spray insect oil on top of the soil several times a day. I'm so afraid I'll kill the plants.\nA. Yes, fungus gnats in particular are a big problem in potting soils used for houseplants. The younger generations feed off of both decaying plants and soft, succulent living roots. They aren't very particular about what they feed on, living or dead, so long as it is soft, juicy and tender.\nIf fungus gnats are extremely happy in their environment, they will multiply very rapidly and cause poor growth and stunting. Besides, they are pesky and a nuisance inside the house. If potting soil is sterilized by the manufacturer using a heat treatment, it should kill all of the fungus gnats and should pose no problem.\nControl fungus gnats with organic pest control products such as beneficial nematodes, which go after their destructive larvae; a bacterium is also available with a similar result. You should be able to find these products in your local nursery or garden center.\nYellow or blue sticky traps also work. I received this video on how to make yellow sticky traps from a friend.\nAnother effective method is to sterilize this potting soil yourself by placing it, moistened, into a clear plastic bag and letting it bake in the sun. Temperatures need to get up to about 160°F for at least 30 minutes for good control.\nAnother option is to apply pyrethrin sprays to the soil and water it in.", "score": 22.27027961050575, "rank": 59}, {"document_id": "doc-::chunk-1", "d_text": "In laboratory two-container choice experiments, H. 
aureus were repelled by EPN-treated areas for up to 10 days at 10,000 IJs per device. The repellency threshold was found to vary among nematode species. We hypothesize that it is the physical movement of the nematodes that repels the termites. Temperature is a key factor affecting nematode pathogenicity. Temperature tolerance of the nematodes varied between species. After a gradual heat adaptation process, S. riobrave and H. bacteriophora caused significantly higher H. aureus mortality at 32 °C compared with original laboratory cultured strains. Further work may contribute commercially available strains with enhanced heat tolerance. Preliminary field studies confirmed EPN protection of a structure; however, termites began to reinfest 4 weeks after the application. Additional tests are necessary to provide more evidence before we can conclude that nematodes are useful in the field.", "score": 21.705011774031476, "rank": 60}, {"document_id": "doc-::chunk-1", "d_text": "Nemasys® Vine Weevil Killer is a perishable, living product, so it must be used before the expiry date (which is marked on the inside of the pack and will be at least 2 weeks) and should be stored in the fridge until it is needed. It is delivered by 1st class post and should be popped in the fridge as soon as possible (they are fine on the doormat while you are at work).\nVideo: Nemasys Vine Weevil Larvae Pest & Damage Identification\nVideo: Applying Nemasys Vine Weevil Killer Nematodes\nCan't see an answer to your question? Please contact us and we will be happy to assist you.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "Control By Pest\nA Bugs Blog\nAPPLICATION INSTRUCTIONS: This product is easy to use -- just mix 4 oz. 
of powder with one gallon of water and you are good to cover at least 3,000 square feet!\nEarthworm Castings - Natural Soil Amendment\nGood Bug Habitat & Attractant Seed Mix\nLive Red Worms\nUncle Dave's Worm Chow - 1 lb.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "The bacteria are stored in peat, and as this is a living culture, it must be treated with care. It\nshould be stored in the fridge and used within 3 months. Do not separate from the seed packet as the inoculant\nattached is specific to the individual legume. To use, moisten the seed with a small amount of milk or water and stir\nin the inoculant until seeds are coated. Do not inoculate the seed until you are ready to sow it and do not leave\nthe inoculated seed in the sun.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-6", "d_text": "Use of our archaea microbes fosters balance in the ecosystem rather than creating distortions and imbalance. It's the same idea as what antibiotic exposure does to human bodies, killing off good bacteria along with the harmful bacteria. Our microbes simply crowd out the contaminating pollutants by overtaking their food source so they die off or are eliminated naturally. At the end of their lifespan, our archaea die too, and the area being treated will return to former natural levels of locally indigenous microbes, with balance restored.\n19. What is the measurement for dosage or application?\nIt will vary according to the specific situation. As a typical example for remediation of surface soils, 10 pounds (5Kg) will treat 100 square meters. If you are treating cubic yards of contaminated soils (soil pile), 1Kg (roughly 2 pounds) typically will treat 2 cubic meters (roughly 1 cubic yard). For marshes and wetlands, typically 1 lb of microbe formula is mixed with 50 gals of water or seawater and sprayed on 200 square feet.
For open water application such as lakes or oceans, 1,000 lbs can treat 1 square mile, with higher concentration applied in focused applications on sheens. (For more information, see “What is the application method” notes above.)\n20. What application dosage is appropriate for stagnant water or water tanks?\nA typical benchmark or starting point is 1 lb per 200 cubic feet, but that is adjusted according to the specifics of the situation, such as type and concentrations of pollutants, existence of solids and/or vegetation, etc.\n21. Are there different formulations of the product, and different pricing for those?\nYes, there are three different strengths. Depending on the application specifics, sometimes the microbes should be supplemented with nutrients or biocatalysts as well. We will help you match the right products to what your specific situation requires.\n22. How do I store the inactivated product and what is the shelf life of the microbe powder?\nThe archaea microbes in their powder form, before they are activated, have a shelf life of 5 years. This is much longer than many competing products.\n23. How are the microbes manufactured?\nBecause microbes are living beings they are not manufactured but instead are grown or cultivated.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "Storing unused pesticides can be a troubling situation for home gardeners. Frequently asked questions include: Where can I keep them? Is it safe? Will the pesticides last? 
What about my children and pets?\nWhile buying in bulk might be good for dry goods and groceries, today the recommendation for pesticides is to purchase only the volume you expect to use in a single growing season, with one exception (and there will always be one of those).\nFirst, a bit of vocabulary is needed to understand the various product types that gardeners routinely buy and use in the home landscape and gardens:\nLiquids – the active ingredients dissolve completely in water and no agitation is needed during application, forming a true solution.\nEmulsifiable concentrate – the active ingredient will dissolve in oil, but not in water, and is later mixed with water for application.\nWettable Powders – a dry powder that does not dissolve, but is held in suspension with agitation during application.\nDusts – active ingredient is placed on finely ground particles, applied by shaking the container or in a special applicator using air.\nGardeners should not attempt to keep mixed diluted sprays, but rather use them up on the target plants at the end of the growing season. The quality of the water from the spigot on the house can degrade the active ingredient, and the spray next season will be ineffective in managing the target pest.\nLiquids and emulsifiable concentrates must be kept above freezing to prevent separation and potentially bursting the container. Freezing temperatures will not harm dusts and wettable powders. In general, the best storage conditions will be to keep the unused concentrates in a cool and dry location. Dry matters for the wettable powders and dusts so they do not absorb moisture and clump or cake up. If enough moisture is present, the active ingredient or other additives may cause the product to be ineffective.\nStorage locations could be a garage that stays above freezing or in the basement away from any heat sources. In terms of safety, a lockable cabinet is preferred over a shelf.
If it is a shelf, make sure it is out of reach of children and pets.\nThe exception I noted at the top of this column relates to how long we can keep a product in concentrated form. Wettable powders, often used in a home orchard with more than just a few fruit trees, can be kept for several seasons if stored properly during the growing season and over winter.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-1", "d_text": "They have a natural thirst for caterpillars, grubs, rootworms, gnats, beetles, and many more. Due to this, nematodes offer an amazing solution to pest problems.\nThese little worms work much in the same way that ticks do. They find themselves a host and work their way into the body through orifices. Once inside, the worm will feed on their host. Once inside, nematodes will also release a potent bacteria. This will kill the pest within 24 – 48 hours, making quick work of your infestations. Japanese beetles may have a thick shell, but even they can’t stop an internal sneak attack.\nUsing Nematodes in the Garden:\nNematodes are naturally occurring in many soils, but they may not be there if you have pests. Due to this, many garden centres have brought in packaged nematodes that you can apply to your own garden. These packages are usually stored cold to keep the worms sleeping and fresh to work their best.\nFirst, make sure your soil is warm and moist. The warmth will help to thaw out the cold sleep and moisture helps them move in the soil. Keep the soil moist throughout the application and treatment process for best results.\nMix your treatment in water to make it easier to apply. Many have found that letting it soak for a bit before mixing helps to break it up. Spray or drench the affected area with treatment and allow it to soak the soil. Shortly after they have woken up, your nematodes will sense their prey and start to work. When the pests are gone, nematodes will die off.
No clean-up required.\nGiven their seemingly invincible nature, Japanese beetles have stressed many gardeners. Nematodes, however, make their way under that tough exterior, and the beetles are no match for them. Make quick work of the unkillable with a little natural worm that is happy to do the work for you!", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "Fill one of the 50 ml tubes with approximately 20 ml of water.\n3. Wet the brush in deionized water and use it to wipe worms off the top of a culture. Swish the brush in the water in the 50 ml tube until the worms fall off.\n4. Repeat until enough worms are obtained or until the top is devoid of worms. If more worms are needed, wipe them from the sides of the box or from a new box.\n5. Dump the worms that have been collected in the tube onto the sieve on top of the other tube.\n6. Add more water to the first tube and wash remaining worms onto the sieve.\n7. Fill the second tube with additional water until the level just reaches the bottom of the sieve. Live worms will wiggle through the sieve into the water; dead worms and pieces of food are retained and should be discarded.\n8. Let the live worms settle to the bottom (5‑15 min).\n9. Remove the worms from the bottom of the tube with a Pasteur pipette and transfer to a small tube for distribution to the fish tanks.\n10.
Live worms can be maintained for extended periods in the large or small tubes without any problems.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-3", "d_text": "Do not apply this product in the following counties where endangered beetles have been found:\nArkansas: Logan, Sebastian, Franklin, Scott, Little River\nKansas: Elk, Wilson, Montgomery, Chautauqua\nNebraska: Cherry, Brown, Keya Paha, Rock, Holt, Boyd, Thomas, Blaine, Loup, Garfield, Wheeler, Boone, Antelope, Lincoln, Dawson, Lancaster\nOklahoma: Osage, Craig, Rogers, Tulsa, Wagoner, Cherokee, Muskogee, Sequoyah, McIntosh, Haskell, Latimer, Le Flore, Pittsburg, Atoka, Pushmataha, McCurtain, Choctaw, Bryan, Johnston, Coal, Hughes, Okfuskee, Creek, Okmulgee, Mayes, Nowata, Ottawa, Washington, Delaware, Adair\nRhode Island: Washington (on Block Island)\nSouth Dakota: Tripp, Gregory, and Todd\nTexas: Red River, Lamar\nDo not contaminate water, food, or feed through storage or disposal. Store at temperatures below 75°F, under well-vented and dry storage conditions. Do not store under moist conditions. Do not allow product to freeze. Store the tightly resealed container in a dry place and not exposed directly to sun. Product life is approximately 15 months when stored as directed.\nThis product can be sold in all 50 states but should not be used in certain counties in Arkansas, Kansas, Nebraska, Oklahoma, Rhode Island, South Dakota and Texas due to the danger it poses to certain beneficial beetle species. Prior to purchase, please read the Environment section under the Instructions tab or refer to the Environmental Hazards section of the label (under the DOCS tab) if you are in one of these states.\nWarning & Toxicities:\nKEEP OUT OF REACH OF CHILDREN. HAZARDS TO HUMANS AND DOMESTIC ANIMALS. Harmful if absorbed through skin or swallowed. Avoid contact with skin, eyes, or clothing. Wash thoroughly with soap and water after handling and before eating, drinking, chewing gum, using tobacco, or using the toilet.
Remove and wash contaminated clothing before reuse. Wear waterproof gloves.\nShelf Life: Approximately 15 months when stored as directed.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "Can be used on the following crops: Alfalfa, Bedding Plants, Berries, Bushberries, Caneberries, Cereal Grains, Citrus, Clover, Cole Crops, Corn, Cotton, Cucurbits, Flowering Plants, Fodder, Forage, Fruiting Vegetables, Ginseng, Grass, Hay, Herbaceous Potted Flowers, Herbs, Leafy Vegetables (Except Brassica), Legume Vegetables, Olive, Onions, Ornamentals, Ornamental Bulbs, Peanuts, Pineapple, Pome Fruit, Pomegranate, Root/Tuber/Corm Vegetables, Stone Fruit, Sunflower, Tobacco, Trees, Tree Nuts, Tropical Foliage, Tropical Fruits, Turf Grass, Vines and more. For a complete list of crops and crop-specific applications and use rates, please refer to the manufacturer's label under the DOCs tab.\nShop all Blacksmith BioScience products here.\nThis Product Controls These Pests or Diseases: Armillaria spp., Fusarium spp., Phytophthora spp., Pythium spp., Rhizoctonia spp., Rosellinia spp., Sclerotinia spp., Sclerotium rolfsii, Thielaviopsis basicola, and Verticillium spp.\nTenet is especially effective in preventing fungal pathogen attacks. For this reason, it is essential that it is applied and allowed to colonize an area before fungal pathogens have had a chance to establish.\n- Should be applied up to 7 days before planting to initiate soil colonization before the crop is planted and reapplied after planting.\n- For maximum effectiveness, apply 2 or more applications.\n- May be applied throughout the crop production cycle in order to maintain a high colonization of the root zone.\n- Do not apply by aircraft.\n- Apply when the soil temperature is at least 50°F (10°C).\n- Apply to moist soil or growth media, but not to saturated or waterlogged soil. 
Soil or growth media must remain moist after application to provide adequate control of soilborne fungal diseases.\n- May be applied to sterilized or fumigated soil but must be applied after the sterilizing agent or fumigant has dissipated.\n- Tenet WP has no curative effect and therefore is not effective against plants infected with disease at the time of application.\n- In case of applications on or to dry soils, pre-irrigate until soil is moist.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "Effective Micro-Organisms (EM)\nThis is EM in its dormant state and is how it is sold.\nIt is available in 1 & 10 litre containers.\nEM-1 will keep for up to 12 months if unopened and kept in a cool dark place.\nWhen opened, it should be used within 3 months.\nIt is totally harmless, even if consumed.\nUnless environmental conditions are ideal (especially temperatures above +25°C), EM-1 used on its own will take a long time to become effective and besides, this is not the most economic way of applying it. The best way to use EM-1 is to 'activate' it - the resultant product is called EM-A (Activated-EM), which can then be further diluted with water.\nMaking EM-A is done by mixing EM-1 with molasses (which must be sugar cane and not sugar beet molasses) and water and allowing them to ferment in a sealed vessel for 7 days at about +25°C.\n1 litre of EM-1 will produce 20 litres of EM-A, which can then be further diluted 1:100 with water, i.e. 2,000 litres of usable product!\nEM fermenters are available in a variety of sizes: 30, 60, 120, 220 and 1000 litres.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-7", "d_text": "How to order Beneficial Nematodes: All nematodes are not the same.
Buglogical nematodes are more tolerant of high temperatures than any other brand. It is best to order biological control nematodes and have them delivered directly to you from a reliable source. This helps ensure that the nematodes you are buying are still alive. Nematodes do not live very long in storage. Therefore, buying nematodes that are stocked on a store shelf is very risky.\nSuppliers: Buglogical Control Systems, Inc. PO Box 32046, Tucson, AZ 85751-2046 Phone: 520-298-4400\nBedding, R.A. and L.A. Miller. 1981. Use of a Nematode, Heterorhabditis heliothidis to Control Black Vine Weevil, Otiorhynchus sulcatus, in Potted Plants. Ann. Appl. Biol. 99:211-216.\nDavidson, J.A., S.A. Gill, and M.J. Raupp. 1992. Controlling Clearwing Moths with Entomopathogenic Nematodes: The Dogwood Borer Case Study. J. of Arboriculture. 18(2):81-84.\nGeorgis, R. and G.O. Poinar. 1989. Field Effectiveness of Entomophilic Nematodes Neoaplectana and Heterorhabditis. Pages 213-224, In A.R. Leslie and R.L. Metcalf (eds.). Integrated Pest Management for Turfgrass and Ornamentals. United States Environmental Protection Agency, Washington, DC.\nGill, S., J.A. Davidson, and M.J. Raupp. 1992. Control of Peachtree Borer Using Entomopathogenic Nematodes. J. of Arboriculture. 18(4):184-187.\nKaya, H.K. 1985. Entomogenous Nematodes for Insect Control in IPM Systems. Pages 283-303, In M.A. Hoy and D.C. Herzog (eds.). Biological Control in Agricultural IPM Systems, New York: Academic Press.\nKaya, H.K. and L.R. Brown.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "How to clean: These items should preferably be inspected for obvious signs of infestation; worms may develop when stored in warm areas.\nThe presence of infestation is much more prevalent during the warm summer months. Store these items in a cool, dry place or, when possible, refrigerated.\nWhen properly stored, it is unusual to find worms in these products, especially those made in Canada and the USA.
If worms are found, the entire box or bag must be carefully sifted or checked on a white paper plate before use.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-2", "d_text": "This is good, you have completed the process.\n12. Sterilize the\nneedle again with the alcohol-soaked paper towel, replace the needle guard and\nplace the syringe back into your clean zip-lock bag.\n13. Allow the\nsyringe to sit for no less than 12 hours before using it in microscopy applications\nor inoculation for edible varieties. The older the print used, the more\n\"dehydrated\" the spores will become. For proper microscopy observation or\ngermination, the spores will need to be allowed to rehydrate.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-1", "d_text": "A lid is unnecessary provided that the sides of the container are clean and it is stored where it won't be knocked over, preferably on a cool, concrete floor. Mealworms fed on Progrub are likely to be more nutritious and will be gut-loaded with calcium. It is not necessary to feed carrot, potato peelings, apple, etc. to full-sized or Giant mealworms. However, this vegetable nutrition should be provided to Mini mealworms for moisture. Over-feeding should be avoided as this encourages mites.\nBulk mealworms must be unpacked on arrival. The shipping container can be placed in a refrigerator for an hour or two in warm weather before unpacking to slow the mealworms down. To prevent escape, open the bag or box outdoors and tip the contents into a large plastic bucket or dustbin. Shake the mealworms off the crumpled newspaper, taking care to hold the paper below the rim of the bucket. The mealworms are then ready to be transferred to their storage container.\nMealworms don't come any fresher than those supplied by Livefoods Direct.\nThe mealworms are taken straight from the fattening rooms where they are then prepared for sale to ensure maximum freshness.
Only a breeder has such total control over product and is thus able to guarantee both quality and freshness.\nMealworms are available in clear plastic tubs for easy feeding and storage, or you can buy mealworms in bulk sacks, which is more economical.\nWe stock the following Mealworms:-\nTo visit our online store, click below.\nThe Livefoods Direct team is highly skilled and committed to maintaining a high level of quality and service, giving you, the customer, the best deals possible.\nLive Mealworms | Dried Mealworms | Bulk Mealworms | Save on Mealworms", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "Revive Soil Improver\nTreatment of soil with Revive should help restore the natural balance and give healthier crops.\nWhere the soil is cropped intensively, the balance of micro-organisms changes; an increase in certain organisms can cause specific disease problems or 'Soil Sickness'.\nRevive should always be diluted before use. Use the diluted product within 48 hours.\nFor treatment of soil or composts dilute 25mls of Revive in 2 litres of water. New compost and growbags should be watered with Revive at planting.\nFruit and vegetable beds should be treated in the Spring and Autumn. 25mls of Revive (diluted as above) will treat up to 25sq. metres. Individual plants may also be watered at the roots.\nTreat lawns Spring and Autumn, using 25mls of Revive diluted per 10sq. metres.\nAs a dip treatment, dilute 25mls of Revive with 0.5 litres of water.\nPlant roots, bulbs, tubers, corms and seeds may be dipped before planting.\nDelivery just £4.98 per order*!\nFree Delivery if just ordering nematodes\n*Additional charges may apply to certain orders in highlands and islands; if so, this will be applied at the checkout", "score": 17.397046218763844, "rank": 75}, {"document_id": "doc-::chunk-1", "d_text": "These are dry ingredients, hence, it is advisable to mix as far as possible and use mild fungicides to prevent further spreading.
Benefits of Nematodes As mentioned above nematodes that are parasitic on insects and difficult to maintain, proceed with container gardening.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-2", "d_text": "Normally, a single 20cc spore syringe can be used to inoculate 6 litres of substrate or roughly 13 canning jars.\nSo How Do You Store Spore Syringes?\nDid you know that, kept at the optimal temperature, your Malabar spore syringe can last for a minimum of a year? To get the most out of your spore syringe, keep the product covered in light-proof material, such as lightweight aluminum foil. For best results, leave it in the refrigerator, around 2-8°C. Never put remaining spore solution in the freezer; the deep cold will make the spores ineffective.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "(Source: B. Trevarrow from Zebrafish Book 5th Edition)\nMicroworms are a live food that some labs are using instead of brine shrimp. Here is a method for raising microworms adapted from techniques used in fly labs.\n- Clean plastic boxes with snap-on lids (Tupperware‑like)\n- Oatmeal (rolled oats or porridge)\n- A spoon for stirring the oatmeal\n- Deionized water\n- 10% methyl‑p‑hydroxybenzoate (also known as Tegosept or Nipagin-M, fungus inhibitor) in 95% EtOH\n- A productive strain of microworms (commercially available through hobbyist magazines)\n- A clean Pasteur pipette\n- A clean, long-handled paint brush with a head approximately 1 x 2.5 cm\n- Two 50 ml plastic 'Falcon'‑like tubes and a sieve that fits into the top of one of the tubes\nEstablishing the worm cultures\n1. Make up the oatmeal in the approximate proportions of 150 ml of oatmeal to 400 ml of distilled water. The proportions should be adjusted to give a thick but not overly firm consistency.\n2.
Soak the oatmeal for about 5‑10 minutes, then boil in a double boiler for about 5‑10 minutes (until the oatmeal is evenly thick from the top to the bottom).\n3. Remove the oatmeal from the heat and add 2.8 ml of 10% methyl‑p‑hydroxybenzoate in 95% EtOH for every 400 ml of oatmeal mixture, to yield a final concentration of 0.07%. Stir and pour into the clean plastic boxes. After the mixture cools, add an aliquot of microworms.\n4. Check the new cultures in about a week for worms. They should be climbing out of the culture onto the sides of the container or squirming visibly in the culture medium. Some cultures may take longer to produce a good crop of worms. If they do not produce worms by the second week, rinse and bleach the containers before reusing. Old cultures that no longer produce worms should be discarded and the containers washed and bleached before reuse.\nHarvesting worm cultures\n1. Select box(es) to be harvested.\n2.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "Hi all! I have just emptied a trough and found many vine weevil grubs in it. I have been checking the soil and squishing any grubs I found this afternoon (quite therapeutic!). It seems a shame to throw away the soil and I was planning on spreading it on an empty bed that won't be planted up till the end of May. Is this a great way to spread the problem around my garden, or are they likely to be eaten or die before I use the soil again?\nI would spread soil on a tray of some sort ...in theory birds will devour them but they could bury themselves in garden soil\nSpread the soil out on a plastic sheet and let the birds have a go. If it was me I would then apply Vine Weevil Killer (to be sure to be sure) before tipping it onto an empty bed.
I'd dig it in a week later.\nIf you are not into killing bees by using Neonicotinoids (Vine Weevil Killer) then spread the soil out thinly and go over it with a Weed Wand (flame gun type thing) or try, as we used to do, using a steam wallpaper stripper. It sterilises the compost very nicely.\nBerghill, do you mean that the weevil neonicotinoids kill bees? I've never heard that before\nYou could microwave it, should zap any lurking unhatched eggs too.\nYes, Provado and other nicotinoid-containing products are now strongly believed to be a cause of colony collapse in the honey bee and the cause of damage to other pollinating insects.\nThanks for all that information everyone! Has anyone used nematodes, and if so, did it work?\nThere is a concern that neonicotinoids cause bees to get disorientated and lose their way back to the hive. However, treating pots with Vine Weevil Killer is highly unlikely to impact on bees. The main impact is when chemicals such as Provado Ultimate Bug Killer are sprayed directly onto plants, especially onto flowers when bees are active. Also, Vine Weevil Killer contains a neonicotinoid, and if applied to the soil in a pot holding a flowering plant, a small amount could be carried up into the pollen.
These pots are destined to be outside.\nNote this is a 5 inch (13 cm) square #1 nursery pot. This pot is not too big. It is like the one the mother plants are in.\nChances are you will have more gemmae than you have room for plants. The solution is to send the extras to your friends.\nTo mail gemmae I put them in damp paper towels in small plastic bags. I use the blue shop towels because they are heavier weight and do not mold as easily as kitchen paper towels. I cut each towel into nine squares.\nYou need the paper towels to be damp. Get them wet, then squeeze out the excess water. Do not get heroic squeezing out the water but do squeeze them. Then put the gemmae in the middle of the towel square.\nI have had people send me gemmae in a plastic bag with nothing and with a dry paper towel. The gemmae were dead on arrival. Remember these are little plants.\nFold the paper towel in thirds.\nTurn it over and fold it in thirds again. This will give the gemmae maximum protection.\nPut the folded paper towel with gemmae in a clearly labeled plastic bag for shipping in a padded envelope or small box.\nIf you still have gemmae left over, toss them into pots of other plants. These Drosera scorpioides are growing with Drosophyllum lusitanicum, a non-bog plant.\n-- John Brittnacher", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-5", "d_text": "In fact, the very best time to control white grubs is in the spring and fall. If you're in a warmer climate, beneficial nematodes are most effective in the summer.\nFertilizers should be avoided roughly 2 weeks prior to and after nematode application, because nematodes may be adversely affected by high nitrogen content.\nSome pesticides work well with nematodes when their mutual exposure is limited while other pesticides may kill nematodes. Check labels or specific fact sheets to find out. Some chemicals to avoid are bendiocarb, chlorpyrifos, ethoprop, and isazophos.
Fungicides to avoid are anilazine, dimethyl benzyl ammonium chloride, fenarimol, and mercurous chloride. The herbicides 2,4-D and triclopyr and the nematicide fenamiphos should be avoided as well.\nDuring hot weather, release nematodes in the evening or afternoon when the temperature is cooler. Release once or twice a year or until the infestation subsides. Nematodes are shipped in the infectious larvae stage of their life cycle and can be stored in the refrigerator for up to 4 weeks. Always release very early in the morning or late in the afternoon.\nWhy are these organisms beneficial?\nBeneficial nematodes seek out and kill all stages of harmful soil-dwelling insects. They can be used to control a broad range of soil-inhabiting insects and above-ground insects in their soil-inhabiting stage of life.\nParasitic nematodes are beneficial for eliminating pest insects. First, they have such a wide host range that they can be used successfully on numerous insect pests. The nematodes' nonspecific development, which does not rely on specific host nutrients, allows them to infect a large number of insect species.\nNematodes enter pest bugs while they are still alive, then they multiply inside the bugs (which eventually die) and finally burst out of the dead bodies. The number of nematodes inside a single bug (depending on the species) ranges from 5,000 to 10,000. Although you can barely see one young nematode with your naked eye, large groups of these tiny wigglers pouring out of the dead insects are easy to see. Then the nematodes wriggle off to find other insects to \"invade,\" starting the whole cycle all over again.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "The recommended storage temperature is 5-10°C. Store the tube in the dark on its side.
The quicker you release them, the more success you will have, so don’t store them for more than a day or two if possible.\nAphidius are most effective when temperatures are constantly above 15°C", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-1", "d_text": "For maximum efficiency, it may be necessary to increase the density of the placement around the perimeter.\n- Monitor using wing-style or delta-style traps and Obliquebanded & Pandemis Leafrollers lures to determine the best time for application. Trap placement should begin in early March/April to determine first flight and should continue throughout the season to help assess treatment effectiveness.\n- Make the first application of NoMate LTX Spirals within 2-4 days after the first male moth is captured in a trap (biofix). Prediction Degree Day models can provide assistance in determining application timing. Model information and economic thresholds that may vary according to geographical regions are available from your county Farm Advisor.\n- Second and any subsequent applications should be timed such that the new application is made before the effect of the previous application significantly diminishes. Leafroller life cycle models, diligent trapping and field checking are all important considerations in the proper timing of an additional spiral application.\nApplication Rate: 200-400 spirals per acre. Apply 1-3 times per year, or as needed.\nStorage: Do not contaminate water, food or feed through storage or disposal. Store in a cool place until used. Prolonged storage should be at or less than 45°F.\nAvailable by special order only. Allow 1-2 weeks for delivery.\nCannot Be Shipped To: AL,AK,AZ,AR,CA,CO,CT,DE,DC,FL,GA,HI,ID,IL,IN,IA,KS,KY,LA,ME,MD,MA,MI,MN,MS,MO,MT,NE,NV,NH,NJ,NM,NY,NC,ND,OH,OK,PA,PR,RI,SC,SD,TN,TX,UT,VT,VA,WV,WI,WY\nWarning & Toxicities: KEEP OUT OF REACH OF CHILDREN. HAZARDS TO HUMANS AND DOMESTIC ANIMALS. Caution: Harmful if absorbed through skin, inhaled or swallowed.
Causes moderate eye irritation. Avoid contact with skin, eyes and clothing. Avoid breathing vapor. Wash thoroughly with soap and water after handling and before eating, drinking, chewing gum, using tobacco or using the toilet. Remove contaminated clothing and wash before re-use.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-1", "d_text": "After 5 years at my current address, I have NO ants, NO grubs, NO ticks, my dog has NO fleas (and I do not use any chems on him either) And Yes I have lots of earthworms as well as the desirable microbial activity necessary to maintain the balance of the soil.\n07-18-2006, 04:22 PM\nLawnchick - Sounds like you are very knowledgeable about the todes. How long do you think I should wait to apply if I have used Merit in the past? We are having some grub problems now and are about to contract with a pest guy to spray for them but I would prefer to go the green route. Duh I guess if the grubs are alive the todes will be fine. I have used Arbico in the past (out of Tucson) but their Todes come in a powder form and I have had mixed results. Where do you get yours? Thanks for the info and welcome to lawnsite.\n07-18-2006, 04:32 PM\nHere are a few web sites for your info.\n07-19-2006, 06:19 AM\nHopefully it has been a while since your last chemical application (6 wks), and if you use plenty of compost, the microbes should have done a pretty good job breaking it down. As for the sprayer, I would consider purchasing another one. The cheap ones have always worked fine for me. If that is not an option, rinse your existing sprayer with vinegar; no need to go after the 20% used as an herbicide, table vinegar will be perfect. Contact Hydro-Gardens, (http://www.hydro-gardens.com/guardian_lawn_patrol_nematodes.htm) for a fresh supply. They will overnight them in cold packs. The Guardian and Lawn Patrol varieties work best for me here in South Texas for the insects we battle.
Hydro-Gardens can recommend a flavor for your parts. Thanks for the welcome!\n07-19-2006, 10:33 AM\nThanks for sharing that web site...:waving:", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-1", "d_text": "Bottom Line Yes, I would recommend this to a friend\nBy Nathalie L\nfrom Tolland CT\nLast time I applied the spores, I did not buy the tube. This year however, even though the reviews were saying it was nothing more than a cardboard tube, I purchased it. As a fact, yes, it is nothing more than a cardboard tube with a plastic filter at the bottom. So, indeed, I would not use it over wet grass, or in rainy weather, as it would surely deteriorate and clog up. Having said that, I found it very useful: - For one thing, it's cleaner: It protected my pants and shirt from getting totally caked up in the spores. - Also, it saves time: It can be loaded with a larger volume of the spores, and there is less time spent re-loading it. Thank you!\n(1 of 1 customers found this review helpful)\nThis product is not well made\nfrom Athens, PA\nAbout Me Master Gardener\nWhile you clearly need such an applicator to apply large amounts of milky spore, I was disappointed that the applicator end broke when I tried to open the tab. I ended up taping a make-shift end on the applicator and got all the product down. I just think it is really worth about $2.99.\nfrom Omaha, NE\nThe applicator's distribution head fell out during use. It needs to be secured better. I finally wrapped tape around the cardboard tube and over the edges of the applicator head to hold it in place. It then worked fine.\nNot so impressed\nIn all honesty, I have not used this product yet. However, I was extremely disappointed when I received it because it is nothing more than a sturdy cardboard tube with a plastic hole on the bottom. Not impressed.
I rated it 3 stars only because I haven't used it yet.\nThis product is a winner for us\nfrom Piperton, TN\nThe first time we bought this product was in 1999 and we still have it. We have moved since we bought the first one & have a much bigger garden, but I find this applicator is still well suited for our needs. We have used it not only for the milky spore application, but for granule-type applications, i.e., herbicides, etc.", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "Nemastar - Steinernema carpocapsae\nEntomopathogenic nematode, species Steinernema carpocapsae\nInfective juveniles of a threadworm that parasitises a large variety of insect prey, especially beetles, cutworms (Agrotis spp.), and other moths. 100% safe to humans and pets.\nThe Pest – Greasy Cutworms\nnemastar® contains \"infective juveniles\".\nThe caterpillars of the greasy cutworm cause damage to the emerging shoot by clipping it off at the ground, killing the developing plant. Greasy cutworms attack crops including pasture, brassicas, cereals and maize.\nThe Solution – nemastar®\nThe infective juveniles of nemastar® are ambush predators, and are most effective against mobile prey. Once caught, they crawl inside the prey through breathing spiracles or other orifices, release beneficial bacteria to break down the pest's internal organs, and feed on the bacterial slurry. The nematodes then breed in the cadaver, which eventually breaks apart, releasing more nematodes into the soil.\nGreasy Cutworms - Agrotis ipsilon\nApply in the evening or during cloudy weather, as the nematodes are very susceptible to UV and drying out. For optimum results, the soil should retain some moisture for several weeks.\nPlease seek the assistance of a crop scout, or phone or email us directly for guidance. The advice above is indicative only, and more specific advice may be beneficial.\nThe nematodes are most infective when the media temperature is between 15 and 30°C.
At higher and lower temperatures, nematode efficacy decreases.\nRelease and Storage Instructions", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "This post may contain affiliate links. Read the full disclosure here.\nAnother day, another way of defending ourselves from the onslaught of fungus gnats.\nI’m pretty lucky in the whole fungus gnat department.\nThere was a time last spring when I couldn’t eat ANYTHING without one of the little buggers trying to get into my mouth (what part of vegan don’t they understand??).\nDon’t even get me started on wine. Fungus gnats are crazy big drinkers.\nBut recently, they’ve been quiet.\nEither their new year’s resolution was to keep to themselves a bit more, or my almost exclusively bottom watering regime has paid off.\nI’m leaning towards the latter. If you want to read more about the benefit of bottom watering I have an article all about it here.\nBut I’m lucky. I have a big enough kitchen* to be able to set up a bottom watering station. A lot of us need to water from the top just for the sake of convenience, especially if we have a lot of plants.\nThe problem with top watering is that the top of the soil gets wet, which attracts fungus gnats. Gnats aren’t too much of a problem (unless you have a full-on infestation) but they’re irritating as hell.\nBeneficial nematodes have been used to get rid of pests for yonks. I first heard about them when I watched this Betsy Begonia video, and thought I’d write a post based on my research, to provide a guide for those of you with a gnat problem.\n*My kitchen is actually pretty small, but it’s an L shape, due to a tiny little extension that houses the freezer, washing machine and boiler. Also a worktop and a textured south-facing window. Perfect for plants.\nWhat are beneficial nematodes?\nIs everybody ready for a really long and gross word? Great. 
Beneficial nematodes used in plant care are two species of entomopathogenic nematodes commonly known as Heterorhabditis and Steinernema.\nAs well as fungus gnats, they’re used to control ants, fleas, moths, beetles, and weevils.\nImagine tiny weeny white worms. That’s pretty much what they look like.\nHow do beneficial nematodes reduce fungus gnat populations?\nPrepare yourselves, because this bit’s gross.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "Ladybug Release Guidelines\n- When Ladybugs arrive, put the sack in a cool place (refrigerator) until late evening or early morning.\n- Do not release the Ladybugs during the heat of the day or while the sun is shining.\n- Ladybugs should be released when the plants have become partially foliated, which will provide coverage, and some pest insects are present, which will provide food.\n- In order to achieve biological control of insects, try to maintain a balance of a few pests for food and enough Ladybugs to keep them in check, being careful not to release too many Ladybugs at one time.\n- Sprinkle or irrigate the area before releasing the Ladybugs so they will have a drink of water after their journey. They can also be watered by sprinkling the sack with water. Do not put wet bags in the refrigerator.\n- Ladybugs should be released a few at a time twice a week during the season when leaves are young, tender and attractive to pest insects.\n- Apply one (1) tablespoon on each shrub and a handful on each tree to keep them free from pest damage.\n- For heavy infestation, release all the Ladybugs in the bag at one time.\nAfter application, retie the bag and place in the refrigerator until all Ladybugs are used.
Ladybugs may be stored in your refrigerator for up to two weeks. Do not freeze.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "Once it settles it will separate and you will have a coral-colored liquid with sediment in the bottom. Pour the mixture through a strainer lined with fine cheesecloth, through a coffee filter, or even a jelly bag; this takes a little while to strain. You can also use wet paper towels, however this takes even longer to strain. The idea is to get rid of all of the particles because they will stop up the valve of your sprayer.\nOnce strained, pour the concentrate into a jar with a plastic lid (not metal), add the soap, stir, and label. The concentrate is ready to use, or it can be stored in a cool dark place for a few months to be used as needed.\nOur spray bottle holds about a liter (or a quart), so we add about 2 tablespoons of the concentrate to the bottle and fill it with water. We spray the plants late in the day, so that hot sun doesn’t shine on them once they are sprayed, making sure to cover both sides of the leaves. If the hot sun shines on the just-sprayed leaves it can burn them. Also, if you use too much concentrate it will burn the leaves. If you have a serious infestation, you will need to apply the spray a few times, waiting a few days in between.
The product is best stored after opening, salted, in the refrigerator. You can also easily freeze the unused intestine and use it for other products. After defrosting, the product should not be re-frozen.", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-2", "d_text": "In less than two weeks the nematodes pass through several generations of adults, which literally fill the insect cadaver. Steinernema reproduction requires at least two dauer nematodes to enter an insect, but a single Heterorhabditis can generate offspring on its own. The nematodes actively search for insect larvae. Once inside the larva, the nematode excretes specific bacteria from its digestive tract before it starts to feed. The bacteria multiply very rapidly and convert the host tissue into products that the nematodes take up and use for food. The larva dies within a few days and the color changes from white-beige to orange-red or red-brown. The nematodes multiply and develop within the dead insect. As soon as the nematodes are in the infectious third stage, they leave the old host and start searching for new larvae. Infected grubs change color from white-beige to red-brown 2-4 days after application and become slimy. After a few weeks, dead larvae disintegrate completely and are difficult to find.\nBeneficial nematodes are also very effective against termites, German cockroaches, flies, ants, and fleas.\nAPPLICATION: Beneficial Nematodes are very easy to use. Mix with water and spray or sprinkle on the soil along garden plants or lawn. Put the contents of the Beneficial nematodes in a bucket of water and stir to break up any lumps, and let the entire solution soak for a few minutes. Application can be made using a watering can, irrigation system, knapsack or sprayer. On a sprayer, use maximum pressure to avoid blockage; all sieves should be removed. The sprayer nozzle opening should be at least 1/2 mm. Evenly spread the spraying solution over the ground area to be treated.
Continuous mixing should take place to prevent the nematodes from sinking to the bottom. After application, keep the soil moist during the first two weeks for the nematodes to get established. For a small garden, the best method is using a simple sprinkling can or watering can to apply the Beneficial nematodes to the soil. Apply nematodes before setting out transplants; for other pest insects, Japanese Beetles and grubs, apply whenever symptomatic damage from insects is detected.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "Combine Milky Spore With Beneficial Nematodes To Maximize Japanese Beetle Grub Control!\nMilky Spore contains spores of the bacteria Paenibacillus popilliae (formerly Bacillus popilliae), which work specifically against the grub stage of the Japanese beetle (Popillia japonica). It poses no threat to people, animals, plants or beneficial insects. When Milky Spore is introduced into the soil, it will lie dormant until grubs begin feeding on roots where the bacteria is present. Once ingested, the spore multiplies inside the grub, with a single spore creating up to 3 billion new spores in each host grub. These spores kill Japanese Beetle grubs in about a week after infection. As the grub decomposes, the billions of spores it contained are released back into the soil to start the whole process again. Over time, Milky Spore fills out the soil, creating a soil environment that Japanese beetles simply cannot survive in.\nMilky Spore's effectiveness can be enhanced by the use of NemaSeek beneficial nematodes. They are unharmed by the infective bacteria and help spread it as the nematode moves through the soil pursuing grubs, weevils and larvae. Apply Milky Spore dry, then water in the NemaSeek for complementary control of grubs.\nCoverage Rate: Milky Spore Granules\n1 lb for 350 sq. ft.\n20 lbs treats up to 7,000 sq. ft.\nWhen To Use: Milky Spore Granular is a 2-year program.
Apply Spring, Summer and Fall for 2 consecutive years.\nJapanese Beetles, Milky Spore and Soil Inoculation\nThis Product Controls These Pests or Diseases: Japanese Beetle Grubs (Popillia japonica)", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "The most common type of composting worm! As they feed, Red Wigglers (Eisenia foetida) swallow great quantities of organic material, digest it, extract its food value and expel the residue as worm castings, which are very rich in nitrogen, phosphorus, potassium and many micronutrients.\nE. foetida can process large amounts of organic matter and, under ideal conditions, can eat their body weight each day. They also reproduce rapidly, and are very tolerant of variations in growing conditions.\nItem quantities are a close approximation and will arrive in various stages of growth.\nDIRECTIONS FOR USE:\nYour shipment will arrive in a cloth bag ready for release. The worms can be stored for several days in the refrigerator if immediate release is inconvenient. DO NOT freeze or store in a plastic container. To release, lightly water the target area and scatter them about (5-10 per square foot). The best time to release worms is early morning or after sunset. DO NOT release in direct sunlight.\nShipped Free (USPS – Priority Mail)\nWe do everything we can to ensure safe arrival of your beneficial insects. To facilitate prompt arrival and avoid weekend holdovers during shipping, beneficial insect orders must be received by noon (MST) Wednesday to ship the following Tuesday. If multiple shipments or faster delivery is required, please call.\nOrders placed during winter months will be shipped according to our 2015 Shipping Schedule, unless otherwise specified at checkout.\nNote: During summer months worm shipments may be delayed due to high temperatures.
We will monitor weather conditions and ship when conditions permit.\nPlease call us at (406) 582-0920 if you have questions concerning the shipping of our beneficial insects.\nPlanet Natural guarantees live, timely delivery of our beneficial insects. Instructions for care and release are provided with each order.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-2", "d_text": "The sponge should be moist (not dripping wet), but it is possible that this solution can leak out during shipping. If you have received a PH-200 with the cap (or sponge) dry, simply refill it with the enclosed extra storage solution and allow it to soak, standing upright, for one hour. Your meter will be perfectly fine and the small extra bottle is included for this purpose. Never refill the cap with distilled water and never use the cap for testing purposes.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-6", "d_text": "Such packages contain a large amount (by weight) of inert filler material. This facilitates preparation of bulk mixtures such as embodiments of the invention by allowing weight-based recipes and by simplifying mixing procedures to achieve even distribution of the microorganisms throughout the product.\n An effective quantity of microorganism inoculants (mycorrhizae-promoting agents and bacteria) for an embodiment can also be determined based on a biological-activity study of the organisms in the embodiment, compared to an embodiment formulated by weight using commercially-packaged organisms as described above. In other words, the important criterion in preparing a soil-amendment product according to an embodiment of the invention is not the weight of the microorganism-plus-filler, but the number of live or viable organisms present in the mixture when it is applied. 
For example, in an embodiment formulated at 1.5% by weight of the mycorrhizae-promoting agent MycoApply Micronized Endo/Ectos, the resulting mixture contains approximately 3,300 spores per kilogram (1,500 spores per pound). Similarly, in an embodiment formulated with 0.75% by weight of each of MicroMX Microbiological Organics and TazoAZ Azospirillum Bacteria, the resulting mixture contains about 1.386×10¹⁰ organisms per kilogram (6.285×10⁹ per pound).\n The final trace ingredient in an embodiment functions to retain water and reduce its tendency to drain away from the surface to which the embodiment is applied. In a preferred embodiment, this material is Geohumus®, a patented (U.S. Pat. No. 5,734,2058, U.S. Pat. No. 7,652,080) starch-based polymer that absorbs water and turns into a gel-like substance. Other hydrogel-like synthetic substances can also be used, although their use may be less ecologically sound. This substance must be operative to hold water in place, but it must also be able to return the water to plants growing in the area. Embodiments contain between about 0.25% and about 3% by weight of such water-retaining substances.\n It is appreciated that many water-retaining substances can absorb many times their weight in water, and therefore their weight can vary.
Anything older than 5 weeks from today’s week can be thrown away, as all the beneficial insects have been released. (For example: If we are in week 21, 5 weeks prior is week 16. If your sachet has a week number between Wk 16 – Wk 21, it is still good. If it is between Wk 1 – Wk 15, it is no longer viable and can be tossed away.)\nIf your sachet is less than 5 weeks old, all you do is leave it in the plant and try to avoid getting the sachet wet. The insects release on their own through a very small hole in the sachet.\nThese sachets are non-toxic and non-hazardous. If your pet accidentally ingests the sachet it will not be harmed.\nFor more information on our products, please check out our e-commerce website, www.greenmethods.com.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-41", "d_text": "The sponges are packaged as described in Example 3.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-6", "d_text": "Second, nematodes kill their insect hosts within 48 hours. As mentioned earlier, this is due to enzymes produced by the Xenorhabdus bacteria.\nAlso, the infective juveniles can live for some time without nourishment as they search for a host.\nFinally, there is no evidence that parasitic nematodes or their symbiotic bacteria can develop in vertebrates. This makes nematode use for insect pest control safe and environmentally friendly. The United States Environmental Protection Agency (EPA) has ruled that nematodes are exempt from registration because they occur naturally and require no genetic modification by man. Beneficial nematodes can be an excellent tool in the lawn and garden to control certain pest insects. They can be used with organic gardening and are safe for kids and pets.\nWhat is a nematode? Nematodes are microscopic, whitish to transparent, unsegmented worms. They occupy almost every conceivable habitat on earth, both aquatic and terrestrial, and are among the most common multicelled organisms.
Nematodes are generally wormlike and cylindrical in shape, often tapering at the head and tail ends; they are sometimes called roundworms or eelworms. There are thousands of kinds of nematodes, each with their particular feeding behavior -- for example, bacterial feeders, plant feeders, animal parasites, and insect parasites, to name a few.\nInsect-Parasitic Nematodes. Traditionally, soil-inhabiting insect pests are managed by applying pesticides to the soil or by using cultural practices, for example, tillage and crop rotation. Biological control can be another important way to manage soil-inhabiting insect pests. A group of organisms that shows promise as biological control agents for soil pests are insect-parasitic nematodes. These organisms, which belong to the families Steinernematidae and Heterorhabditidae, have been studied extensively as biological control agents for soil-dwelling stages of insect pests. These nematodes occur naturally in soil and possess a durable, motile infective stage that can actively seek out and infect a broad range of insects, but they do not infect birds or mammals. Because of these attributes, as well as their ease of mass production and exemption from EPA registration, a number of commercial enterprises produce these nematodes as biological \"insecticides.\"", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-4", "d_text": "The way to protect: spray with malathion every 10 days.", "score": 8.086131989696522, "rank": 99}]} {"qid": 3, "question_text": "What makes UNESCO consider the Acropolis of Athens particularly significant as a World Heritage Site?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "An unprecedented conceptual design, embodied in architectural excellence\nPut the best of science, art and philosophy together in one creation and you have the definitive monument of human civilisation. UNESCO calls it the symbol of World Heritage.
The world calls it the Athenian Acropolis!\nThe history of the Acropolis of Athens is long, with moments when democracy, philosophy and art flourished, leading to its creation. Then there were the times when its best standing pieces were removed and shipped away from the city, dividing the monument in two. Today, the international community wants to reunite all of the Acropolis sculptures in Athens and restore both its physicality and meaning.\nThe Acropolis, and the Parthenon in particular, is the most characteristic monument of the ancient Greek civilisation. It continues to stand as a symbol in many ways: it is the symbol of democracy and the Greek civilisation. It also symbolises the beginning of the Western civilisation and stands as the icon of European culture. The Parthenon was dedicated to Athena Parthenos, the patron goddess of the city of Athens and goddess of wisdom. It was built under the instructions of Pericles, the political leader of Athens in the 5th century BC. The Parthenon was constructed between 447 and 438 BC and its sculptural decoration was completed in 432 BC. In 1987 it was inscribed as a World Heritage Site (UNESCO, 1987). Uniquely, capturing the gravity of the Athenian Acropolis as a symbol, UNESCO recognises that “[…] the Acropolis, the site of four of the greatest masterpieces of classical Greek art – the Parthenon, the Propylaea, the Erechtheum and the Temple of Athena Nike – can be seen as symbolizing the idea of world heritage” (UNESCO, 2006).\nDespite the unique symbolic and cultural value of the monument, the issue of the removal of the sculptures from the Athenian Acropolis by Elgin continues to shadow their history.
Today, more than half of the Parthenon sculptures are in the British Museum in London, and their return to Athens, for display in the Acropolis Museum together with the other originals, is a cultural issue waiting to be settled.\nIt is recommended that you start your reading about the Parthenon and the Acropolis of Athens at the Acropolis of Athens page of the Hellenic Ministry of Culture.", "score": 53.49234913975924, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "East meets West and classic meets contemporary in this iconic European capital. Athens is the crown jewel of Greece and a fascinating city for any visitor.\nThe history of Athens is world-renowned. Ancient attractions such as the Acropolis, Parthenon, Agora and Dionysus Theatre, created during the Greek Golden Age, have influenced architecture throughout the Western world.\nThe Athenian Acropolis, one of the Seven Wonders of the World, is instantly recognisable. Sitting high above Athens on the Acropolis Rock, this 5th-century site and its majestic temples were created to pay tribute to the gods and demonstrate the supremacy of Ancient Greece. It continues to impress today, but make sure you visit early in the day to avoid the burn of the scorching summer sun.\nEntering the Acropolis via the towering columns of the Propylaia, you won’t fail to be impressed. On your way, you’ll see the unmissable Parthenon or ‘Temple of the Virgin,’ a famous national symbol, dominating the Athens skyline. This was once the most elaborate temple in Greece, framed by classic Doric style columns, with ornate friezes and sculptures. Inside sat a precious statue of the city’s protector-goddess Athena made of ivory and gold.\nBelow the Parthenon sits the notable Dionysus Theatre. This grand mosaic-tiled, open-air theatre was the set of many Greek tragedies and plays and is one of the world’s first theatres.
See the marbled seats of the elite and dignitaries, including Roman emperor Hadrian’s exclusive seat.\nAnother noteworthy site, the modern Acropolis Museum, houses many ancient sculptures and artefacts for even more insight into early Greek civilisation. The museum’s star attraction is the beautifully displayed Parthenon Marbles depicting scenes of ancient Athens. If you are lucky, you might even see live restorations and excavations at this impressive living museum.\nIf you love museums, Athens has all sorts. The National Archaeological Museum is one of the most prestigious in the world. Founded in the 19th century, it contains the most comprehensive reference of Ancient Greek culture with more than 11,000 exhibits.\nAt the heart of civic life in Athens was the must-visit Roman Agora. Although difficult to imagine from its ruined remains, it contained the city’s busiest marketplace, libraries, courts and baths.
Acropolis from Filopappou Hill.\nAcropolis at dusk from Lycabettus Hill.\nThe Acropolis of Athens is an ancient citadel located on a high rocky outcrop above the city of Athens and contains the remains of several ancient buildings of great architectural and historic significance, the most famous being the Parthenon.\nAlthough there are many other acropoleis in Greece, the significance of the Acropolis of Athens is such that it is commonly known as «The Acropolis» without qualification. Although there is evidence that the hill was inhabited as far back as the fourth millennium BC, it was Pericles (c. 495 – 429 BC) in the fifth century BC who coordinated the construction of the site’s most important buildings including the Parthenon, the Propylaia, the Erechtheion and the temple of Athena Nike.\nThe Parthenon and the other buildings were seriously damaged during the 1687 siege by the Venetians in the Morean War, when the Parthenon was being used for gunpowder storage and was hit by a cannonball.\nThe Acropolis was formally proclaimed as the preeminent monument on the European Cultural Heritage list of monuments on 26 March 2007.\n|From acropolis to Piraeus, Athens, Greece.|\n|Amphitheater in Acropolis, Athens, Greece.|\n|The Caryatid Porch of the Erechtheion, Acropolis, Athens, Greece.|\n|Erechtheum, Acropolis of Athens, Greece.|\n|The Parthenon on the top of the Acropolis Hill, Athens, Greece.|\n|The Propylaea of the Acropolis from inside, Athens, Greece.|\nOn the Acropolis\nThe Parthenon: The largest temple on the Acropolis, originally dedicated to the goddess of the city, Athena. Built between 447 and 438 BCE at the height of the Classical period.
This symbol of Ancient Athens and the civilisation that thrived here in the fifth century BC constitutes an architectural masterpiece of the era.\nThe lives of the ancient Athenians centred around the Acropolis, the focus of their sports, arts, politics, gastronomy, and commercial and religious life. The greatest philosophers and orators of the age lived, taught and created here.\nToday, we’ll walk in the footsteps of these renowned figures, past inspiring archaeological discoveries, and along the stone paths and hills they once walked. Welcome to the Acropolis of Athens.\nRoute | Estimated time: 5-7 hours\nOur visit to the Acropolis and environs will begin at the Thission electric subway station, past the antique bazaar and the crowded cafés and restaurants, immersed in colours, sounds and aromas. After five minutes we reach the entrance of the archaeological site of the ancient agora, the marketplace. Here we admire the Stoa of Attalos and the Temple of Hephaestus.\nWe return to Thission and follow Agion Asomaton Street, turning left at Apostolou Pavlou Street. Walking past the tall trees and tables of the local cafés, we soon reach the Herakleidon Street plateau. This area, a magnet for the young and the hip, buzzes with life. If you’re into modern art, a visit to the Herakleidon Museum is a ‘must’.\nWe continue our stroll up Apostolou Pavlou Street, and above us to our left we can see the Sacred Rock, the Propylaia, the Erechtheion and the Parthenon. A few steps further and we see the “Thission” Summer Cinema, a picturesque throw-back to the past and a summer entertainment spot for modern-day Athenians. Just above the cinema we see the Church of St. Marina and the dome of the Athens Observatory.\nContinuing, we pass the cinema and at Eginitou Street, we find the entrance to the archaeological site of Pnyka, the birthplace of democracy.
Admission is free, so be sure to stop by.\nAfter Pnyka, we follow the path to the left, towards Philopappos hill, where the stunning Church of St.", "score": 47.414269711626254, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "The Acropolis is probably the most important ancient Greek monument. This 2,500-year-old ruin city dominates Athens from the top of Acropolis Hill. And although the Parthenon Temple with its landmark columns is the most famous monument on the hill, it is only one of the many treasures the Acropolis has to offer. After years of restorations, the palaces, columns and sculptures once again reflect the city’s splendour, power and wealth.\nThe Acropolis is much more than a world-famous monument. It once was a city composed of many buildings but over the centuries it suffered greatly, both at the hands of its enemies and of archaeologists. The Byzantines transformed the temples into churches, the Turkish governor used the Erechtheion sanctuary to house his harem, and 200 years ago the British bought up large quantities of marble sculptures. Fortunately, many structures have since then been restored. Take a stroll through this amazing birthplace of Western civilization.", "score": 44.505642785707245, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Expert guide to Athens\nAn insider's guide to the best things to do and attractions in Athens, including visiting Acropolis, Plaka and Mt Lycabettus as well as day trips to the Temple of Poseidon and Delphi. By Jane Foster, Telegraph Travel's Athens expert.\nThe Acropolis, a Unesco World Heritage site, is Athens’s absolute must-see.
Nearby, the New Acropolis Museum is a stunning 21st-century exhibition space which more than warrants a visit in this city where the juxtaposition of ancient and modern are part of the appeal.\nFive top sights\nRising above the concrete jungle that is modern Athens, the “sacred rock” is crowned by three temples dating from the fifth century BC, attracting three million visitors per year. The obvious starting point for a first-time visit is the largest and most impressive temple, the Parthenon, supported by 46 Doric columns and considered classical architecture’s most influential building. Be sure to walk below the Acropolis at night, too, when it is at its most magnificent, bathed in golden floodlighting.\nAddress: Acropolis Hill, Plaka\nContact: 00 30 210 321 4172; odysseus.culture.gr\nOpening times: daily, 8am-8pm (summer); 8.30am-3pm (winter)\nAdmission: €20; free on the first Sunday of the month Nov-Mar\nInaugurated in June 2009, this light, airy glass-and-concrete building was designed by Swiss architect Bernard Tschumi. Archaic and classical finds from the Acropolis site are displayed here – proud statues of the ancients and life-like stone carvings of animals. The top floor is devoted to the marble frieze that once ran around the top of the Parthenon. About half of the pieces are originals, while the remainder are white plaster copies. The missing pieces were removed by Lord Elgin in 1801 and are now in the British Museum in London. The Greeks have wanted them back for decades, and hope that this blatant presentation will finally convince the British to return them. There's also an excellent restaurant on the second floor, open during museum hours, and till midnight on Fridays.", "score": 43.89433290101281, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "3.1- 2 TRANSCRIPT FOR The Acropolis and Parthenon of Athens A visitor to Athens in the late fifth century bce would have been impressed by a grand complex of temples and other buildings. 
This was the Acropolis, or “highest city” in Greek. Originally a fortress, by this time the Athenian Acropolis was the site of some of the finest art and architecture in Greece. The Acropolis was an imposing mound of rock that rose steeply above the surrounding city. The largest building on the Acropolis was the Parthenon, built entirely of fine marble. Although more familiar to us as a temple, the Parthenon was also utilized as a treasury. It was used to store thousands of pounds of silver, paid annually to Athens by the Delian league: a group of member states set up in joint opposition to the Persians, but which Athens came to have command over. The treasure was kept in a chamber called the “Virgin Room,” or “Parthenon,” hence the temple’s name. To the north of the Parthenon was a smaller temple, dedicated to Athena Polias, the city’s patron goddess, who gave Athens its name. This building is known today as the Erechtheum. To the east was a monumental entryway to the Acropolis, the Propylaia, or “Gateways,” named after its five doorways. The Propylaia was designed to be not merely a gatehouse, but also to impress visitors before they passed through to the temple precincts. Jutting out to one side of the Propylaia was a small temple dedicated to Athena Nike, the goddess of victory in war. Appropriately, in the fifth century bce the temple was adapted from a military building, and covered in fine marble during the construction of the Propylaia. To the south stood the open-air Theater of Dionysos, named after another Greek god. Sitting on semicircular banks of marble bleachers, the audience watched the actors perform in a circular space called the orchestra. Behind stood the skene, where the actors prepared for the performance.
Athenians flocked to see the latest tragedy or comedy by such playwrights as Aeschylus, Aristophanes, and Euripides, all of whose works are still performed today.", "score": 43.6101570412109, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "Athens, city of gods\nBathed in permanent sunshine, the beautiful city of Athens keeps shining, dominated by a wealth of archaeological remains that have survived the ages. Founded in 800 BC by the goddess Athena or by the hero Theseus (there are several different legends), the capital of Greece is today one of the oldest cities in the world. It has a stunning historic cityscape listed as a UNESCO World Heritage Site and jealously preserved by its inhabitants.\nA democratic and religious capital\nThe political and cultural cradle of the Mediterranean, Athens also founded the first republic in history and saw the birth of many philosophers – such as Plato, Socrates and Aristotle (who later wrote the Athenians’ constitution) – whose precepts continue to influence our contemporary societies. At the same time, most Athenians also had religious functions for many centuries, the gods being an integral part of their lives.\nThe monuments of Athens: prowess of men at the gods’ service\nIn order to honor the gods, many temples and other religious buildings were built by the Athenians. The Acropolis is one of them. This high rocky plateau, surrounded by buttresses, was both a citadel and a religious sanctuary dedicated to the goddess Athena. The Athenians built many other temples, including the Parthenon, the Erechtheum and the temple of Athena Nike, as well as other spectacular monuments such as the Propylaea, the theater of Dionysus and the odeon of Herodes Atticus. The Acropolis is today one of mankind’s most impressive achievements.\nAnother place not to be missed is the Agora, where the Athenians used to meet to debate laws and ideas.
On the same site, the temple of the god Hephaestus (5th century) is one of the oldest witnesses of ancient Athens.\nMany other vestiges in Athens show the importance of the gods during Antiquity. Our team of professional guides has set up tailor-made tours to help you discover these architects and their secrets.", "score": 41.56216214633181, "rank": 8}, {"document_id": "doc-::chunk-1", "d_text": "Athens Acropolis facts:\nHeight: 150m (490 ft)\nArea: 3 hectares\nProclaimed as the pre-eminent monument on the European Cultural Heritage list of monuments on 26th March 2007\nShort history of Athens Acropolis\nAs mentioned before, the first settlements on the Athens Acropolis date back to Neolithic times, i.e. 5000 years ago.\nIn Mycenaean times, there was a palace here and a massive Cyclopean circuit wall was built, which served as the main defence for the acropolis until the 5th Ct AD.\nBefore Classical times there were a couple of temples standing at the Acropolis, including the Older Parthenon, which was still under construction when the Persians sacked the city in 480 BC and burnt and looted the building.\nAfter Athens’ victory over Persia, the Golden Age of Athens followed, during which the classical leader Pericles commissioned the rebuilding of the temples and buildings on the Acropolis, including the Parthenon, the Propylaea, the Erechtheion and the temple of Athena Nike.\nThe two architects responsible for the construction of the Parthenon were Ictinus and Callicrates, while Phidias was in charge of sculpture.\nIn Classical times, the power was moved down to the Agora and the Acropolis became a purely religious site.\nDuring Hellenistic and Roman times, many of the buildings in the area of the Acropolis were repaired, and with every new usurper new monuments were built and added to the complex.\nThe last important ancient constructions were the Temple of Rome and Augustus during the Julio-Claudian period and the grand Odeon, the stone amphitheatre with a capacity
of 5,000, built by Herodes Atticus.\nDining-room table tidbit: Through history the Athens Acropolis became the fortress of a medieval town, the Athens Parthenon was turned into a Byzantine church, crusader cathedral, an Ottoman mosque, then a warehouse to store gunpowder, which is the reason the Parthenon doesn’t have a roof anymore. Namely, it was destroyed when the gunpowder exploded during the Venetian siege.\nWhat were the main monuments of the Athens Acropolis\nHere’s a what-is-what list and a plan of ruins that used to, or still stand, at the Athens Acropolis.\n1.", "score": 39.43204765964708, "rank": 9}, {"document_id": "doc-::chunk-1", "d_text": "The ensuing explosion caused the cella to collapse, blowing out the central part of the walls and bringing down much of Phidias’ frieze.\nWhat is so special about the Parthenon?\nThe Parthenon is surely the most important monument of ancient Greece and is one of the most famous in the world. It was the most sacred of monuments, and was famous in antiquity as a Greek architectural masterpiece. The monument was a temple dedicated to the goddess Athena.\nWhat is the most famous Acropolis?\nThe most famous acropolis is the one in Athens. The Athenian Acropolis is home to one of the most famous buildings in the world: the Parthenon. This temple was built for the goddess Athena. It was decorated with beautiful sculptures which represent the greatest achievement of Greek artists.\nWho destroyed the Acropolis?\nAnother monumental temple was built towards the end of the 6th century, and yet another was begun after the Athenian victory over the Persians at Marathon in 490 B.C.
However, the Acropolis was captured and destroyed by the Persians 10 years later (in 480 B.C.).\nDid slaves build the Parthenon?\nYes, it is likely that slaves served as most or even all of the labor force for the Parthenon, given that the Athenian government owned many slaves.\nHow much is entry to the Acropolis?\nThe cost of entrance to the Acropolis is about 20 euros and is good for the other sites in the area including the ancient agora, theatre of Dionysos, Kerameikos, Roman Agora, Tower of the Winds and the Temple of Olympian Zeus and is supposedly good for a week. You can also buy individual tickets to these other sites.\nHow much did Lord Elgin pay for the Elgin marbles?\nThe excavation and removal was completed in 1812 at a personal cost to Elgin of around £70,000 (equivalent to £4,430,000 in 2016 pounds). Elgin intended to use the marbles to decorate Broomhall House, his private home near Dunfermline in Scotland, but a costly divorce suit forced him to sell them to settle his debts.\nWho gave Lord Elgin permission to take the marbles?", "score": 39.20864051652462, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "When you visit Athens, you must see the world's most inspirational building: the iconic Parthenon. This temple dedicated to Athena has inspired the architecture of public buildings, banks and edifices around the world. Found at the top of the Acropolis, the Parthenon rewards those with comfortable footwear, a sturdy stride and a determination to see this glorious edifice. No matter how many times you've seen this temple in photographs, you'll not be prepared for the sheer grandeur of it in person.\nConsidered to be one of the most important surviving buildings of classical Greece and the most exquisite of Doric temples, the Parthenon has perplexed architects, historians, archaeologists and reconstruction engineers alike.
Currently buttressed with scaffolding, it will take your breath away, whether you visit it in the morning, in the heat of the day or at night with the sun setting behind it.\nTo see important artefacts and friezes from this temple, go down to the Acropolis Museum. For a complete history of the area, visit the National Archaeology Museum. For an intact Doric temple, visit the nearby Temple of Hephaestus.\nThis is a conveniently situated apartment in an upscale area, next to the Hilton hotel. Built in the 1930s, this 110-square-meter apartment is a fine example of the Bauhaus architecture of Athens. Tall ceilings, unique architectural features, lots of windows, spa …see more\nThe apartment is located in the Acropolis area, one of the most fashionable areas of Athens, on its most beautiful pedestrian street.\nThe apartment is within walking distance of the Acropolis Rock, a few meters from the Ancient Theater of Dionysus, the Herodion Ancient Odeon and …see more\nSpacious, luxurious house\nIn the heart of Athens historical centre, just across from the New Acropolis Museum, only 50 m away from the Parthenon and the Acropolis entrance.\nNext to the Herodium Odeon and the Acropolis archaeological sites.\nIn walking distance from the Acropolis …see more\nWe call our apartment KA7.\nThis apartment is located in a residential building and it is minutes' walk away from the Acropolis museum. It has two bedrooms, one living room, a fully equipped kitchen, one bathroom and all modern cons including free WiFi.", "score": 39.11138492237371, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "HISTORY OF ATHENS:\nTHE ACROPOLIS: For thousands of years the Acropolis has been the symbol of Athens, a sacred rock and link connecting the magnificent ancient civilization with today's. The Acropolis, its monuments, its history and the myths connected with it are, rightly, the pride and glory of this city, the envy of all other cities of the world.
There is no Greek or foreign visitor who does not want to make the pilgrimage to the sacred rock and absorb its magnificence and beauty. If you have never been to the Acropolis we can assure you that it is a unique, unforgettable experience.\nPLAKA: As soon as you start walking around Plaka's stone-paved, narrow streets, you will have the feeling that you are travelling back in time. This is Athens' oldest and, thanks to the restoration efforts that went into its buildings in recent years, most picturesque neighbourhood. You will be delighted by the beauty of the neo-classical colours of its houses, their architecture, their lovingly tended little gardens, the elegance and the total atmosphere of the area. In Plaka, even the air is different, lighter, clearer, scented, like a gift from the gods. When you decide to take a walk around it, be sure to take a map, because Plaka is a labyrinth and you may get the feeling that you are lost in its maze of narrow streets and alleyways. No need for alarm though. It is easy to orientate yourself: uphill is the Acropolis and downhill are Syntagma and Monastiraki.\nSOUNION: The temple of Poseidon, standing some 60 metres/200 feet above the sea at the edge of a cliff on Cape Sounion, is one of the most breathtaking and deeply moving sights in Greece; and Greece has many of them. The temple is an hour's drive from central Athens and both the site itself and the route leading to it are worth every minute of the drive. The road runs along the Saronic coast and from the window of your car or bus you can enjoy the endless and brilliant blue sea. If you are travelling by car, make sure you stop for a breath of sea-scented air and a walk on the beach.
You will also find many coffee shops, fresh fish tavernas and ouzeri along the way.", "score": 36.33892067162118, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "One of the best-known historical sites in Athens, the Parthenon is an ancient Greek temple on the Athenian Acropolis, dedicated to the goddess Athena during the fifth century BC. Its decorative sculptures are considered to be some of the high points of classical Greek art, an enduring symbol of Ancient Greece, democracy and Western civilization.\nThe Parthenon replaced an older temple of Athena, which historians call the Pre-Parthenon or Older Parthenon, that was demolished in the Persian invasion of 480 BC. Since 1975 numerous large-scale restoration projects have been undertaken to preserve remaining artefacts and ensure its structural integrity.\nTHEATRE OF HERODES ATTICUS\nThe Odeon of Herodes Atticus, known as the “Herodeon”, is a stone Roman theatre structure located on the southwest slope of the Acropolis of Athens, on Dionysiou Areopagitou Street. It is considered to be one of the best open air theatres in the world.\nANCIENT CEMETERY OF KERAMEIKOS\nThe cemetery of ancient Athens was continuously in use from the 9th century BC until Roman times. It is an extraordinary archeological site covering 11 acres, filled with tombstones and statues of astonishing design and quality.\nThe archeological site of Kerameikos is located at the end of Ermou street, on the northwest of the Acropolis.\nOverlooking the Acropolis and the Parthenon, the Philopappos Monument is an ancient Greek mausoleum, a square white-marble construction situated on the Philopappou Hill. It was dedicated to Gaius Julius Antiochus Epiphanes Philopappos, a prince from the Kingdom of Commagene.\nANCIENT AGORA STOA OF ATTALOS\nThe Stoa of Attalos was a stoa in the Agora of Athens. It was built by and named after King Attalos II of Pergamon, who ruled between 159 BC and 138 BC.
It currently houses the Museum of the Ancient Agora.\nThe collection of the museum includes clay, bronze and glass objects, sculptures, coins and inscriptions from the 7th to the 5th century BC, as well as pottery of the Byzantine period and the Turkish conquest.", "score": 35.46523856685033, "rank": 13}, {"document_id": "doc-::chunk-2", "d_text": "Parthenon – undoubtedly the rock-star of the Athens Acropolis and one of the most important architectural structures in the world. Read more about it in our article on Athens Parthenon.\n2. Old Temple of Athena – archaic temple dedicated to Athena Polias, patron deity of the city. It is possible it was built on top of the remains of the Mycenaean palace.\n3. Erechtheum – a temple built in the 5th Ct BC, dedicated either to the legendary Greek hero Erichtonius or the legendary king Erechtheus II. Phidias was in charge of its sculpture. On its south side there is the famous \"Porch of the Maidens\", with six draped female figures (caryatids) as supporting columns. The caryatids that can be seen there today are casts. The originals have been moved to the New Acropolis Museum to protect them from further destruction by the Athens city smog.\n4. Statue of Athena Promachos – at this spot used to stand the gigantic bronze statue (7m high on top of a 2m base) of the goddess Athena, patron of Athens, standing with her shield resting upright against her leg, and a spear in her right hand. The statue could be seen from the sea. It was one of the first works of Phidias. It was one of the most famous statues of antiquity. It stood overlooking the city for over 1000 years until it was transported to Constantinople in 465 and finally destroyed by a Christian mob in 1203.\n5. Propylaea – the monumental entrance to the flat top of the Acropolis. It was one of the buildings built by Pericles in the 5th Ct.
Besides providing security, its role was to create a dramatic first impression before seeing the Parthenon.\nUnder Frankish occupation (1204-1456) the Propylaia were converted into a residence for the Frankish ruler, and in the Ottoman period (1456-1833) into the Turkish garrison headquarters.\n6. Temple of Athena Nike – a small temple dedicated to one of the goddess Athena's many forms, Victory. The Temple of Athena Nike was an expression of Athens' ambition to be the leading Greek city-state in the Peloponnese.\n7.", "score": 35.38875673241116, "rank": 14}, {"document_id": "doc-::chunk-1", "d_text": "The most outstanding contribution to the appearance of the Acropolis was made in 447–438 BC by Ictinus and Callicrates, the architects who built the Parthenon, a giant temple in honor of the patron deity of these lands, the virgin goddess Athena Parthenos. Despite the fact that the Parthenon is in a rather poor condition, its facade with columns is the most famous landmark of Greece.\nThe Parthenon's meticulous design, thought out right down to the smallest details, which are completely invisible to the outside observer, creates an interesting optical illusion. The temple seems perfectly rectilinear, but in fact its contours don't have any straight lines. For example, the corner columns are not circular in cross-section and are thicker than the others in diameter. Otherwise they would seem thinner, but thanks to this technique all the columns visually look the same.
In the 15th century, after the conquest of Greece by the Turks, the church was turned into a mosque with attached minarets, and one of the temples of the Acropolis, the Erechtheion, served as a harem for the Turkish pasha.\nIn the 17th century the entire central part of the Parthenon was destroyed by a cannonball shot from a Venetian ship. After that, the persistent Venetians broke several sculptures while trying to remove them. At the beginning of the 19th century Thomas Bruce, 7th Earl of Elgin, took everything that was possible to take: from friezes to caryatids (and Greece is still trying to persuade Britain to return the monuments to where they belong). In addition, the Turks were constantly sapping the Acropolis in order to blow it up. They did not succeed, but during one of the ensuing battles a Turkish cannonball heavily damaged the Erechtheion temple.\nOnly at the end of the 19th century did the Acropolis finally see some peace (with the exception of museum staff strikes). Its ancient appearance was restored where possible, some of its original bas-reliefs and sculptures are now in museums in London, Paris and Athens, and those sculptures that we see outdoors nowadays are copies.", "score": 34.367642334434315, "rank": 15}, {"document_id": "doc-::chunk-1", "d_text": "This is no accident, as the whole idea was to impress both citizens and visitors to the city with its greatness, expressed in the crowning glory of Athens - the Parthenon, the temple to Athena Parthenos.\nParthenon, History, Art, and Architecture - Ancient Greece\n(Euripides: , translated by Gilbert Murray.)\nIn the great Greek dramas, the Chorus is a constant reminder that, though they cannot understand or explain them, there are other powers in the world than the wild passions of men.\nThe great dramatic festival of Athens was held in the spring in the theatre of Dionysus, to the south-east of the Acropolis.\nAthenian Treasury in Delphi | Giacobbe Giusti\nWhen you see the acropolis from anywhere in
Athens it looks quite imposing. Fortunately you can only get close to and enter it the same way the ancients did - on foot. This way you gradually come to appreciate the size and achievements of the site and its buildings.", "score": 33.84813278537678, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "The Athenian Acropolis - Visit-Ancient-Greece\nThe temples on the north side of the Acropolis housed primarily the earlier Athenian cults and those of the Olympian gods, while the southern part of the Acropolis was dedicated to the cult of Athena in her many qualities: as Polias (patron of the city), Parthenos, Pallas, Promachos (goddess of war), Ergane (goddess of manual labour) and Nike (Victory).\nMinistry of Culture and Sports | Acropolis of Athens\nThe Acropolis became a sacred precinct in the eighth century BC with the establishment of the cult of Athena Polias, whose temple stood at the northeast side of the hill.\nThe temples on the north of the Acropolis housed earlier cults and Olympian Gods and those at the south were dedicated to the Goddess Athena and her forms such as Polias, Parthenos, Pallas, Promachos, Ergane and Nike.\nThe Acropolis of Athens (Ancient Greek: Ἀκρόπολις, tr\nThere's another fascinating detail in the walls to the right of the Erechtheion. When the Athenians regained possession of the acropolis after they had defeated the Persians, they determined to keep the remains of the temples as they were, as a memorial. But later they decided to use the remains to fortify the walls. If you look closely, you can see where they inserted the old column sections into the walls to reinforce them.\nAcropolis in Greek means \"The Sacred Rock, the high city\"\nWhen you get to the east of the Parthenon (the Parthenon dominates the acropolis, we'll leave it until last) turn north. You'll come across the Rome and Augustus Altar.
This was built by the Athenians in honour of Rome and the emperor Augustus.\nHistory of the Acropolis - Ancient Greece\nIn the mid-fifth century BC, when the Acropolis became the seat of the Athenian League and Athens was the greatest cultural centre of its time, Perikles initiated an ambitious building project which lasted the entire second half of the fifth century BC.\nHistory of the Acropolis Geography\nThe design of the Propylaia leads you through onto the acropolis itself. As you go through you suddenly catch sight of the Parthenon.", "score": 33.359874339359415, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "Waking up early in Athens, our goal was to explore the ancient sites. First stop, the Acropolis! The center of ancient Athens, the Acropolis has always been a sacred place. While most know the iconic Parthenon, there are many other monuments and structures throughout the park. On the walk to the top, you pass two different theaters: the Theater of Dionysus and the Odeon of Herodes Atticus (left and right above). How can you tell the Odeon is Roman? The arches!\nEven in early May before the tourist season really kicks off, crowds can get heavy at the Acropolis. I would recommend going early before it gets really hot and packed with tour groups, as you can see happening on the walk through the Propylaea (the gateway to the Acropolis).\nRemember the caryatids from the Acropolis Museum? Here are the reproductions standing proudly at the Erechtheion, a temple dedicated to both Athena and Poseidon.\nOf course, the most iconic monument on the Acropolis is the Parthenon, the ancient temple dedicated to the Goddess Athena. Here I am in front of the scaffolding side (reminds me of being in New York!).\nHere is the view from the less scaffold-covered side.
Even though many of the decorative elements have been pilfered over the ages, it isn’t hard to imagine how magnificent it must have looked when first constructed.\nAround the base of the Acropolis are a plethora of historic ruins and sites. One that you can’t miss is the Ancient Agora. The Agora, translating to marketplace, was much more than a place where goods were bought and sold. This was a center of the city, a place for everything from religious festivals to philosophical debate. Here you can see the replica of the Stoa of Attalos, basically a covered marketplace. There are also statues of noted figures from centuries of use.\nOne of the most fascinating ruins on this site is the Temple of Hephaestus, the ancient god of the forge. What makes this temple so unique is how incredibly well preserved it is.\nAs you can see from these details, the marble friezes are remarkably intact. How did this happen? The temple was used throughout the ages as a religious site, becoming an Orthodox church around 700 AD. 
The last church service was held in 1833.", "score": 32.668446591215634, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "The Parthenon at the Acropolis in Athens is a temple, originally dedicated to Athene, the ancient Greek goddess of war and wisdom.\nBuilt between 447 and 432 B.C., it is seen as a pinnacle of ancient Hellenic architecture, being a classic example, on a grand scale, of the standard \"Doric\" temple design prevalent in ancient Greece.\nThe site of the Acropolis itself shows signs of inhabitation stretching back into the Stone Age, this probably owing to the fact that it has a good viewpoint, and unusually for plateaus like this, it has an ample supply of water from fresh springs.\nParthenon 3D Video Flythrough (Format: Flash, Size: 500Kb)\nParthenon & Ancient Greece Crossword", "score": 32.33697232068628, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "State of Conservation\n|Site||State Party||Year||Threats*||Danger List|\n|Acropolis, Athens||Greece||2004||Housing, Interpretative and visitation facilities,||No|\n|Acropolis, Athens||Greece||2003||Housing,||No|\n|Acropolis, Athens||Greece||2002||Housing,||No|\n|Acropolis, Athens||Greece||2001||Housing,||No|\nThe threats indicated are listed in alphabetical order; their order does not constitute a classification according to the importance of their impact on the property.\nFurthermore, they are presented irrespective of the type of threat faced by the property, i.e.
with specific and proven imminent danger (“ascertained danger”) or with threats which could have deleterious effects on the property’s Outstanding Universal Value (“potential danger”).", "score": 31.699549673433047, "rank": 20}, {"document_id": "doc-::chunk-1", "d_text": "You can also start from The Review of the Seizure of the Parthenon sculptures, or read The Memorandum of the Greek Government for The Restitution of the Parthenon Marbles. But there is more to discover. This website brings you the key concepts associated with the Acropolis sculptures and the reasons why they should be reunited in Athens. You can find links to official pages and initiatives about the issue, as well as resources for your research as a traveller, student or academic! Welcome to AcropolisofAthens.gr!\nUNESCO. (1987). WH Committee: Report of 11th Session, Paris 1987. Paris: UNESCO / World Heritage Center. Retrieved from http://whc.unesco.org/archive/repcom87.htm#404\nUNESCO. (2006). Acropolis, Athens: (Cycle 1) Section II Summary. UNESCO / World Heritage Center. Retrieved from http://whc.unesco.org/en/list/404/documents/", "score": 31.618631566637713, "rank": 21}, {"document_id": "doc-::chunk-2", "d_text": "Over the course of the next 15 years, money was poured into new transportation infrastructure projects, the restoration of surviving neoclassical buildings, the gentrification of the city's historical center and the renovation of many former industrial areas and the city's coastline. The restoration of charming neoclassical buildings in the city's historical center has been accompanied by the construction of attractive post-modern buildings in newer districts, both of which have begun to improve the aesthetic essence of the city. Athens today is ever evolving, forging a brand new identity for the 21st century.\nBut one famous piece of architecture in Athens stands above the rest: the Parthenon.
The Parthenon sits at the top of the Acropolis, a very important hill in Athens, which now serves as the city center. The Parthenon was built to honour the goddess Athena/Athene, patron of Athens and goddess of war, wisdom and crafts. She is a maiden goddess.
Athens hosted the 2004 Summer Olympic Games. While most of the sporting venues were located outside the city proper -in various locations throughout Attica- the entire urban area of Athens underwent major lasting changes that have improved the quality of life for visitors and residents alike. Aside from the excellent transportation infrastructure that was completed in time for the 2004 Olympics (from new freeways to light rail systems), the city's historic center underwent serious renovation. Most notable among the city's facelift projects are the Unification of Archaeological Sites -which connects the city's classical-era ruins and monuments to each other through a network of pleasant pedestrianized streets- and the restoration of the picturesque neoclassical Thissio and Pláka districts.
Athens first appears on the pages of history around 1400 B.C., at which time it was already a major cultural center of the Mycenaean civilization. The Acropolis and remnants of the Cyclopean Walls attest to its status as a Mycenaean fortress city.
In 1200 B.C., many Mycenaean cities were destroyed and resettled by invading bands of Dorians, but Athenian tradition maintains that Athens escaped this fate and retained a \"pure Ionian bloodline.\" Beginning as early as 900 B.C., Athens became a leading trade center within the Greek world, owing to its central location, possession of the heavily fortified Acropolis and its quick access to the sea.", "score": 31.53153769937335, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "|Construction started||447 BC|\n|Destroyed||Partially on 26 September 1687|\n|Height||13.72 m (45.0 ft)|\n|Other dimensions||Cella: 29.8 by 19.2 m (98 by 63 ft)|\n|Size||69.5 by 30.9 m (228 by 101 ft)|\n|Design and construction|\n|Other designers||Phidias (sculptor)|\nThe Parthenon (; Ancient Greek: Παρθενών; Modern Greek: Παρθενώνας) is a former temple on the Athenian Acropolis, Greece, dedicated to the goddess Athena, whom the people of Athens considered their patron. Construction began in 447 BC when the Athenian Empire was at the height of its power. It was completed in 438 BC although decoration of the building continued until 432 BC. It is the most important surviving building of Classical Greece, generally considered the zenith of the Doric order. Its decorative sculptures are considered some of the high points of Greek art. The Parthenon is regarded as an enduring symbol of Ancient Greece, Athenian democracy and western civilization, and one of the world's greatest cultural monuments. The Greek Ministry of Culture is currently carrying out a program of selective restoration and reconstruction to ensure the stability of the partially ruined structure.\nThe Parthenon itself replaced an older temple of Athena, which historians call the Pre-Parthenon or Older Parthenon, that was destroyed in the Persian invasion of 480 BC. The temple is archaeoastronomically aligned to the Hyades. 
While a sacred building dedicated to the city's patron goddess, the Parthenon was actually used primarily as a treasury. For a time, it served as the treasury of the Delian League, which later became the Athenian Empire. In the 5th century AD, the Parthenon was converted into a Christian church dedicated to the Virgin Mary.\nAfter the Ottoman conquest, it was turned into a mosque in the early 1460s. On 26 September 1687, an Ottoman ammunition dump inside the building was ignited by Venetian bombardment. The resulting explosion severely damaged the Parthenon and its sculptures.", "score": 31.234455978064, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "Have you ever walked across the Golden Gate Bridge in San Francisco? Taken a picture by the Eiffel Tower in Paris? Climbed the Great Wall in China? Explored the ancient temple of Angkor Wat in Cambodia? Or traveled to the Egyptian pyramids? Besides being popular tourist destinations, these sites are also World Heritage sites.\nWhat is a World Heritage site you ask?\nIt’s a natural or man-made structure or place that has been determined by the United Nations Educational, Scientific and Cultural Organization (UNESCO) as culturally important and something that should be protected. In other words, this group has decided that these sites need some extra attention and it would be a shame if we didn’t all do our best to make sure they’re around for generations to come (that’s your kids and their kids and so on!).\nIn July, UNESCO’s World Heritage Committee met in Krakow, Poland to decide which of the thirty-three nominated sites would be added to the list of World Heritage site. Twenty-one sites were added to the existing 1,052 sites on the list.\nSo, what were some of these sites? 
Here are just a few that we couldn’t resist sharing:\nTurkey: The city of Aphrodisias and the marble quarries near the city.\nChina: The pedestrian-only island of Kulangsu (also known as Gulangyu)\nGeorgia: The Gelati Monastery played an important role in science and education.\nCroatia, Italy, Montenegro: Venetian Works of Defence between 15th and 17th Centuries included a number of forts to protect the area from possible enemy attacks.", "score": 30.986746253153736, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "When I asked journalist and author Bruce Clark whether there is one thing about Athens that has remained unchanged through time, he immediately answered, “The Acropolis.”\n“There were times when Athens hardly existed as a town, but the Acropolis was always important as a strategic stronghold and also as a holy place. Western public opinion slightly overstates the secular democratic function of the Acropolis. In fact, it was always a place of holiness, whether of the ancient Greek religion, the Greek Christian religion or the Roman Catholic religion or Islam. It was always a place where people prayed and sensed a certain spiritual power, not only in the surface of the Acropolis but also in the caves. The entire geological structure has been invested with a holiness, and that is, maybe, the most continuous feature of Athens down the ages.”\nThe Economist’s international security editor tackles the history of the Greek capital with the same reverence in his new book “Athens: City of Wisdom” (published by Pegasus), which starts long before the construction of the Acropolis in the Classical years, and specifically in the 8th century BC, before ending in the present – with many stops in between.\nOur interview, however, started with the obvious question: What makes Athens wise? “The title reflects the fact that wisdom and cleverness was always one of the net exports of Athens. 
Athens has to import many things – and always has to import food, for example, for its population – but after the decline of Periclean Athens, when Athens lost its geopolitical importance, it didn’t cease to be a very powerful exporter of ideas and rhetorical skill,” explains Clark.\nAs the conversation turned to the “commodities” exported by Athens, I ask about another major issue, which he also refers to in his book. Clark dedicates several pages to Britain’s Lord Elgin and his “creative interpretation” of the Ottoman firmans issued during a period of close ties with Britain.\n“Even by the standards of the time, what Elgin did was shocking, and it was shocking to British and some Ottoman officials as well. It was an egregious and unusual act, even for a time when rich people in western Europe were taking many things from Greece and the east Mediterranean,” says Clark.\nStressing the British Museum’s role in educating the global public in world history, Clark says he believes the time has come for a decision on the matter of the reunification of the Parthenon Sculptures.", "score": 30.50845959210055, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "Ancient Greece: Gods, grandeur & lasting legends\nDuring our drive into town, Thomas has also eulogised the Greek alphabet (“like a prayer to Apollo”) and shared his theory on how the country survived the economic crisis (“It was the light — the sun never abandoned us.”). As airport transfers go, it’s more thought-provoking than most. But then Greece is a one-off destination. I’m here to visit two of its key classical sites, beginning in the city that gave the world democracy. 
Modern-day Athens remains a singular sight, with the temple-topped crag of the Acropolis rising high above the souvlaki (grilled fast food) shops and market stalls of the streets below.\nIt’s almost 2,500 years since the celebrated statesman Pericles rebuilt the city after a series of wars with Persia and ushered in its so-called Golden Age. At a time when much of Europe was still scratching around in hovels, he commissioned the lofty temples and neatly proportioned buildings that still capture the imagination today.\nThe Parthenon is, of course, the best known of these, and when I climb the Acropolis to see it up close it’s a reminder of just how familiar its much-imitated columns have become. Its evolution from Greek temple into scaffold-clad tourist magnet has been eventful — it’s also seen service as a Byzantine church, Frankish cathedral, Turkish mosque and gunpowder store — but its earliest incarnation is the most evocative.\nNo less stirring is the view it grants of the city, radiating out to accommodate more than three million people. I look down on the ruins of the ancient Agora, the former heart of Athens, where Socrates, Plato and Aristotle held forth. The crumbled walls once formed libraries, shrines and fountain houses.\nI head next into the foothills of Mount Parnassus, three hours north of Athens, to visit the archaeological site of Delphi. For a long time it was considered the centre of the world. “In legend, Zeus released two eagles from different ends of the earth to meet at its midpoint,” says my guide, Georgia. “They came together in the skies right here.”\nThe setting is a rousing one — huge limestone bluffs loom overhead, spring wildflowers nod, and the vast valley floor is blanketed in olive groves. It’s suitably momentous, given the site’s past.", "score": 30.34345739794022, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "I always find myself awed by grandiose edifices. 
Beholding the Parthenon up close, unfailingly grandiose despite a ruined state half-merged with a construction site, made the visit to the Acropolis instantly worthwhile.
Built in the 5th century BC to honor the goddess Athena, the patron of the city, the temple survived the rise and fall of empires and had reincarnations as a church and a mosque, until grave damage was done to it at the end of the 17th century AD during hostilities between the Ottoman Empire and the Venetian Republic. In the early 19th century, representatives of the British Empire removed almost all of the then-remaining marble statuary, leaving the grand temple as an empty shell, which nonetheless retains the status of a signature piece of Doric architecture.
Acropolis, of course, is not just about the Parthenon. The monumental complex is an enduring symbol of the classical antique civilization that gave birth to many forms of art and schools of thought that we value today.
One of the other important monuments in Acropolis is the Erechtheion, which is just a few decades younger than the Parthenon.
The temple is associated with a number of sacred relics of the Athenians, including the olive tree that sprouted when Athena struck the rock with her spear in her successful rivalry with Poseidon for the city. The present-day tree, seen by the side of the Erechtheion, is a nice link to the legend, even though it was only planted a hundred years ago.
"The Porch of the Maidens", with six caryatids supporting the roof, is easily among the most eye-catching features of the Acropolis. The figures are all replicas, though; the originals are on display at the Acropolis Museum.
There are several other monuments on the hilltop and the southern slope of Acropolis.
These are the remains of the temple of Asclepios.
One of the two major theaters, Odeon of Herodes Atticus.
A statue near the Theatre of Dionysus.
Being in the center of the ancient city, Acropolis offers elevated perspectives on all other points of interest in Athens, such as the already familiar temple of Olympian Zeus.
The building in this perspective is the new Acropolis Museum, which is still less than 10 years old.
It requires a separate fee to enter and undoubtedly has a lot to offer, but we chose to leave it off our itinerary.", "score": 30.074628860746945, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "The City of Athens
Athens is the city with the most glorious history in the world, a city worshipped by gods and people, a magical city. The enchanting capital of Greece has always been a birthplace of civilization. It is the city where democracy was born, along with many of the wise men of ancient times. The most important civilization of the ancient world flourished in Athens and relives through some of the world's most formidable edifices.
Who hasn't heard of the Acropolis of Athens? Photos and history of the most famous archaeological monument in Europe have travelled the world, inspiring admiration in thousands of people. The Acropolis is nominated to be one of the 7 wonders of the modern world; in fact, the trademark of Athens is one of the favorites. The Holy Rock of the Acropolis dates back to the 5th century BC, the famous Golden Age of Periklis.
Athens is a city of different aspects. A walk around the famous historic triangle (Plaka, Thission, Psyrri), the old neighborhoods, reveals the coexistence of different eras. Old mansions, well-preserved ones and others worn down by time, luxurious department stores and small intimate shops, fancy restaurants and traditional taverns. All have their place in this city. The heart of Athens beats in Syntagma Square, where the Greek Parliament can be found.
Monastiraki, Kolonaki and Lycabettus Hill attract thousands of visitors all year round. A few kilometers from the historic center in Faliro, Glyfada, Voula and Vouliagmeni you can enjoy the sea breeze.\nAthens and Attica in general have the most important archaeological monuments (Acropolis, Odeion of Herodes Atticus, Roman Market, Panathinaiko Stadium, the Temple of Poseidon in Sounio etc.). In the capital you will admire many imposing neoclassic buildings, true ornaments of the city (The Greek Parliament, Athens Academy and the Athens University). Don’t miss the opportunity to visit the museums hosting unique treasures of Greece’s cultural inheritance (The New Acropolis Museum, the Archaeological Museum, The Byzantine Museum, The Cycladic Museum etc).\nThe climate in Athens is mild and relatively warm. The average temperature in Athens in May is about 25-28°C.", "score": 29.894049956548276, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Athens Acropolis - the rock star of ancient Athens Greece\n\"Athens Acropolis – learn why everyone rushes to visit Acropolis Greece, home of the Athens Parthenon dedicated to Athena Goddess\"\nThe only problem with world famous cultural sights like the Athens Acropolis is this: we have a feeling we’ve seen them before.\nThroughout our life we’re being bombed with images of the famous Athens rock: we’ve seen it in the text books, art monographs, postcards, travel brochures… and not to mention that the Athens Parthenon, the most important temple on the Athens Acropolis, is the most copied building in the world.\nThat’s why when we finally get to the real Acropolis it all looks somewhat familiar. We kind of almost take it for granted. To avoid this and to really appreciate the majesty of Athens Greece attractions like the Athens Acropolis, it helps to put it into an historical perspective.\nFirst of all, think about this: Athens Acropolis was first inhabited 5000 years BC. 
And now think that the Parthenon, Propylaea and Erechtheion were all built 5 centuries before Jesus Christ was even born, and discovering America sounded like the wildest science fiction story. And they're all still standing up for you to see to this day!
But beyond its physical appearance, when visiting the Athens Acropolis you will be visiting a symbol of many things important to Western Civilization in general. Acropolis Greece is where some of the famous myths took place, the earliest cults started, democracy developed… it is a place that has been inspiring the world for two and a half millennia, a symbol of free spirit, philosophy and architecture.
Dining-room table tidbit: The Athens Acropolis was also known as Cecropia, after the legendary serpent-man, Cecrops, the first Athenian king.
Although, when we say Acropolis, we automatically think of the Athens Acropolis, it's important to know that every Greek town of that time had its own acropolis; for instance, you might have a chance to also see the Lindos, Rhodes acropolis as part of your Mediterranean cruise.
An acropolis was the citadel of each town, built on naturally elevated ground for the purpose of defence.", "score": 29.85036955757318, "rank": 29}, {"document_id": "doc-::chunk-3", "d_text": "Places as diverse and unique as the Pyramids of Egypt, the Great Barrier Reef in Australia, Galapagos Islands in Ecuador, the Taj Mahal in India, the Grand Canyon in the USA, and the Acropolis in Greece are examples of the 788 natural and cultural places inscribed on the World Heritage List to date.
Today, the World Heritage concept is so well understood that sites on the list are a magnet for international cooperation and may thus receive financial assistance for heritage conservation projects from a variety of sources.
Tranquil beach in Coiba.
Sites also benefit from the elaboration and implementation of a comprehensive management plan that sets out adequate preservation
measures and monitoring mechanisms. In support of these, experts offer technical training to the local site management team.\nWorld Heritage membership also brings an increase in public awareness of the site thus also increasing tourist activities. When these are well planned for and organized, respecting sustainable tourism principles, they can bring important funds to the site and to the local economy.\nA key benefit of ratification, particularly for developing countries, is access to the World Heritage Fund. Annually, about US$4 million is made available to assist in identifying, preserving and promoting World Heritage sites.", "score": 29.540865151748374, "rank": 30}, {"document_id": "doc-::chunk-1", "d_text": "It was destroyed in 267 BC, and underwent restoration in the early 1950s. Since then, the theatre hosts concerts and other performances, mostly during the Athens Festival.\n6. The Temple of Athena Nike\nThis small temple is in the Acropolis, on the right of the gate leading to the Parthenon (Propylaea). It is a good example of classical architecture that has been restored several times. Nike means victory in Greek, and it is here that Athena was worshipped as the goddess of victory in wisdom and war.\n7. The Propylaea\nThe Propylaea was a monumental gateway to the Acropolis. Its building began around 437 BC, and the columns have the same proportion as the ones on the Parthenon. This gate used to function as a checkpoint to control the entrance to the Acropolis, playing an important role in terms of security.\n8. The Erechtheion\nOne of the most beautiful buildings in the Acropolis is the Erechtheion; a temple honouring Athena and Poseidon. It is better known for its balcony; the Porch of the Caryatids. The Caryatids are six beautiful female figures that function as supporting columns. Some of the original statues can be seen in the nearby Acropolis Museum.\n9. 
The Parthenon
Through the centuries, the most famous view of Greece, the Parthenon, has endured war, fire, revolutions, misguided restoration, and pollution. It has even functioned as a church and a mosque, and many parts today are either missing, or were taken by other countries.
10. The Ancient Agora
Not far from the Acropolis, down the slopes going through the neighbourhood of Plaka, one can easily reach the Ancient Agora. Here, the restored portico, known as the Stoa of Attalos, hosts the Museum of the Ancient Agora.
Not far from the building, it is possible to visit the Temple of Hephaestus, one of the best preserved temples in Greece, which allows one to imagine how temples used to look in ancient times.", "score": 29.369514170860857, "rank": 31}, {"document_id": "doc-::chunk-18", "d_text": "This cultic complex is an interesting oddity. It drew together many aspects that Athenians could use to define themselves; their foundation myths involved gods and local heroes, and all were represented within the Erechtheion. It was created in the period which followed a historic clash of cultures between the Persians and Greeks, and throughout the period of its creation the growing city-state of Athens developed into the imperial power it would be lauded as for the next two thousand years.
Walking upwards along the processional way, passing the Temple to Athena Nike and under the Propylaea, the real feast for the eyes stands before you – the Parthenon. It is a building so thoroughly embedded in our collective imaginations through all forms of media, that seeing it evokes something in everyone.
My impression of the Parthenon has changed with each encounter.
When I was 18 years old I could appreciate the elegance, but lacked any real understanding. At 24, when I returned back in 2008, much more was visible and my understanding of it was enriched by four years of studying Art History, Classics and Archaeology. I would have to try very hard not to be impressed. I sat and awkwardly sketched what my poor draftsman's hand could barely grasp, but I was drawn to draw. Most recently, as a travel-wise woman in my 30s, I could appreciate the nuances at play within the monumental building of power, politics and art.
The painstaking nature of the current methodology of restoration work deserves comment. Previous restoration work done in the 19th and 20th centuries led to problems which specialists are now trying to repair (wrong pieces were fitted together, and materials that were unknowingly corrosive and unsuitable were also used).
The current mandate for repair work is to map out each stone to the smallest detail, and any structure that is assembled has to be done with an eye for future restoration (meaning everything done now must remain reversible). The slow pace of work might annoy some members of the public who wish to view the building in all of its glory, but preservation with a long-term view is obviously a worthwhile endeavour. The creation of this temple dedicated to Athena began in 447 B.C.E. and lasted right up to 432 B.C.E., built atop the previous “Pre-Parthenon”, which was destroyed by the Persians in 480 B.C.E.", "score": 28.688397004833252, "rank": 32}, {"document_id": "doc-::chunk-2", "d_text": "Of course, it would be a mistake to suggest that the Acropolis and the Parthenon are the only Athens landmarks. A city with such a long and legendary history has preserved a lot of individual monuments, buildings, and entire neighborhoods. Historic areas of Agora and Plaka, Syntagma and Omonia Square, the Cathedral of Athens, the National Garden of Athens, and much more!
We would like to show you these spectacular panoramas of ancient Athens from a bird's eye view!
27 May 2013", "score": 28.541296154084183, "rank": 33}, {"document_id": "doc-::chunk-10", "d_text": "The museum, which exhibits approximately 4.000 artefacts, allows the sculptures to be viewed in natural light, with special glass and climate-control measures protecting them from sunlight. The most impressive part of the museum is its top floor, where visitors are able to view the frieze and then look out of the windows to view the Parthenon itself.
Odeon of Herodes Atticus
At the foot of the Acropolis, the Odeon was built in 161 A.D. under Tiberius Claudius Atticus Herodes. To this day, concerts, plays and ballets are performed here. The natural setting of the Herodeion, with its marvelous arcades, the Parthenon as a backdrop and the moon up in the sky, will certainly fascinate you.
The Ancient Agora, which means “market” in modern Greek, is situated at the foot of the Acropolis, and in ancient times it served as the commercial centre of the city but also as its political, cultural and religious centre.
Originally built in the 4th century B.C. for the athletic competitions of the Great Panathinaia (ancient Greek festivities), the “Kallimarmaron” Stadium (meaning “beautiful marble”) was the venue of the first modern Olympic Games, in 1896.
National Archaeological Museum of Athens
The National Archaeological Museum of Athens is the largest in Greece and one of the most important museums in the world devoted to ancient Greek art. It was founded at the end of the 19th century to house and protect antiquities from all over Greece, thus displaying their historical, cultural and artistic value.
Byzantine & Christian Museum
The Byzantine and Christian Museum, which is based in Athens, is one of Greece's national museums.
Its areas of competency are centered on - but not limited to - religious artefacts of the Early Christian, Byzantine, Medieval, post-Byzantine and later periods. The Museum has over 25.000 artefacts in its possession, which date from between the 3rd and 20th century A.D.
Museum of Cycladic Art
The Museum of Cycladic Art is dedicated to the study and promotion of ancient cultures of the Aegean and Cyprus, with special emphasis on Cycladic Art of the 3rd millennium BC.", "score": 28.509416788395363, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Athens (Greek: Αθήνα, Athína) is the capital city of Greece, with a metropolitan population of 3.7 million inhabitants. It is in many ways the birthplace of Classical Greece, and therefore of Western civilization. The design of the city is marked by Ottoman, Byzantine and Roman civilizations.
The sprawling city is bounded on three sides by Mt Ymettos, Mt Parnitha and Mt Pendeli; whilst inside Athens are twelve hills [the seven historical ones are: Acropolis, Areopagus, Hill of Philopappus, Observatory Hill (Muses Hill), Pnyx, Lykavittos (Lycabettus), Tourkovounia (Anchesmus)], the Acropolis and Lykavittos being the most prominent. These hills provide a refuge from the noise and commotion of the crowded city streets, offering amazing views down to the Saronic Gulf, Athens' boundary with the Aegean Sea on its southern side. The streets of Athens (clearly signposted in Greek and English) now meld imperceptibly into Piraeus, the city's ancient (and still bustling) port.
Places of interest to travelers can be found within a relatively small area surrounding the city centre at Syntagma Square (Plateia Syntagmatos). This epicentre is surrounded by the districts of the Plaka to the south, Monastiraki and Thissio to the west, Kolonaki to the northeast and Omonia to the northwest.
The first pre-historic settlements were constructed in 3000 BC around the hill of Acropolis.
The legend says that the King of Athens, Theseus, unified the ten tribes of early Athens into one kingdom (c. 1230 BC). This process of synoikismos – bringing together in one home – created the largest and wealthiest state on the Greek mainland, but it also created a larger class of people excluded from political life by the nobility. By the 7th century BC, social unrest had become widespread, and the Areopagus appointed Draco to draft a strict new code of law (hence "draconian"). When this failed, they appointed Solon with a mandate to create a new constitution (594 BC). This was the great beginning of a new social revolution, which culminated in the democracy under Clisthenes (508 BC).", "score": 27.85794637246519, "rank": 35}, {"document_id": "doc-::chunk-1", "d_text": "Professor Federico Mayor, UNESCO's General Manager, put together a scientific team of 7 specialists to select 21 Monumental World Heritage finalists created before the 21st century. The 7 specialists were: Cesar Pelli, former General Manager of UNESCO, Harry Seidler, Zaha Hadid, Tadao Ando, Yung Ho Chang, and Aziz Tayod.
The criteria were social impact, culture, dimension, history and politics, hence the 21 finalists:
- The Acropolis of Athens - Alhambra, in Spain - Angkor, in Cambodia - The Pyramid of Chichen Itza, in Mexico - Christ Redeemer, in Brazil - The Roman Colosseum, in Italy - The Statues of Easter Island, in Chile - The Eiffel Tower, in France - The Great Wall, in China - Hagia Sophia, in Turkey - The Kiyomizu Temple, in Japan - Machu Picchu, in Peru - The Neuschwanstein Castle, in Germany - Petra, in Jordan - The Pyramids of Giza, in Egypt - The Statue of Liberty, in the United States of America - Stonehenge, in the United Kingdom - The Kremlin and Red Square, in Russia - The Sydney Opera House, in Australia - The Taj Mahal, in India - Timbuktu, in Mali
19 million people have voted so far, by SMS, telephone and internet, contributing to the reconstruction of the Buddhas of
Bamiyan in Afghanistan, which is the destination of half of the election's revenues.
History is in the making, and it is your chance to be a part of it!
Contact ARTEH to book your accommodation in Lisbon. See all ARTEH Hotels at www.arteh-hotels.com.
About ARTEH - Hotels and Resorts
Best Independent Hotel Chain - Publituris Awards 2004 and 2005
ARTEH - Hotels and Resorts is a soft brand independent Hotel Chain with a Luxury Collection of Charming & Exclusive Boutique Hotels and Resorts, with Golf and Spa.
ARTEH offers a different lifestyle, in an exquisite and charming environment, be it to get away from daily stress, for a relaxing weekend, prolonged vacations or even a business event.
With the new Online Reservation System and ARTEH's selection, booking has never been so simple. Your destination is only a click away.
For further information on ARTEH please visit ARTEH.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "The World Heritage List
On the UNESCO World Heritage List are currently 690 properties, ranging from complete inner cities (Rome) to islands, churches, mosques, castles and pyramids. Each of them was submitted to the World Heritage Committee by a national government, because they are considered to be of outstanding universal value.
The meaning of this universal value is to be taken broadly. The Heritage List features not only monuments of human achievement, like the inner city of Rome, the Pyramids of Egypt, the temples of Angkor in Cambodia and the Taj Mahal in India.
The list also features places that were the scene of human folly: the Auschwitz Concentration Camp in Poland, Robben Island in South Africa and the Atomic Bomb Dome in Hiroshima, Japan.
It is in this group that the Madonna of Nagasaki deserves its own, rightful place.\nUNESCO World Heritage\nClick here to close this window", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Listen to the Episode “The Acropolis and the Golden Age of Athens”\nThe Acropolis of Athens, Greece, is one of the most recognizable landmarks in all of the world and belongs on any Greece itinerary. But to hear Ryan Stitt of The History of Ancient Greece podcast tell it, depending on when you were born, you won’t recognize the same Acropolis as the people who came before. The Parthenon and other buildings on the Acropolis were built around 460 B.C., and as Ryan tells me in this episode, the controversy over who the Acropolis is for and what it means is still raging on.\nThe Not-so-Humble Beginnings of the Acropolis of Athens\nThe Acropolis of Athens is one of the most recognizable historical landmarks in all of the world. Associated with the Golden Age of Greece, it was designed, from the very beginning, to be an ostentatious sight and site. Ryan Stitt, host of the “History of Greece” podcast, told me about how there’s archaeological evidence that people lived on The Acropolis as early as 6000 BC. But the construction of the Parthenon and other temples really took shape around 460 BC. Everything that stands today is from the Classical period. All of the architecture from the archaic times was destroyed during a war with Persia (though some statues were buried, and remained for archaeologists to find).\nA Beautiful Tourist Destination, a Perfect Battleground\nThe plateau of the Acropolis made it the perfect place for Athenians to stand and fight for their city-state. As Ryan told me, the Acropolis was alternately attacked by the Phoenicians, the Byzantines and the Ottomans, who all left their cultural touch on the landmark. But after it gained its independence from the Ottomans, the citizenry of Athens was overcome with national pride. 
And that pride in all things Greek motivated the people to preserve what was built by Greeks, and wove the idea of Ancient Greece into the fabric of their lives, leading up to today, when tourism is still a huge part of the Greek identity.\nPericles Does Work on the Acropolis\nSo what is left on the Acropolis of Athens, and how did all of this magnificent architecture get built? Once the threat from Persia ended, the famous Athenian leader Pericles began an ambitious building program.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "Apollo Epicurius - the architecture of the divine\nIn the central Peloponnese, at Vasses in ancient Figaleia, at an elevation of 1,130 metres, stands the eternally proud temple of Apollo Epicurius. The architect, and the inspiration behind its construction, is considered to have been the great Ictinus. This universal architectural gem was the first of the great monuments of Greece to be recognized by UNESCO as a World Heritage Site in 1986. The temple can be approached from Ilia, after an enchanting journey along the banks of the river Neda, or from Tripoli and Megalopolis.\nVasses (ancient Bassae) was always a sacred place, host to numerous temples. The region’s name means \"little valleys\". And indeed, the mountainous Peloponnesian land creates a magical landscape and within it rises the imposing site of the temple. The mountains of Kotylio, Lykaio, Tetrazio and Elaio stand guard around the valley of Vasses. All the gods of antiquity - Pan, Aphrodite, Artemis, and of course, Apollo, as both ‘Vassitas’ and ‘Epicurius’, that is, “the helper” - were worshipped in this natural sanctuary, and it was here that one of the greatest religious centres of the entire Hellenic world was to be erected.\nPower, Beauty and Harmony\nThe temple in its current form was built between 420 and 400 BC. 
Archaeologists are convinced that under its foundations lies an even more ancient temple, probably from the seventh century BC. This “new” temple, a unique monument to the skills of its architect, Ictinus, embodies in its structure the entire wealth of architectural knowledge of Greek civilization. With both archaic and innovatory elements, it has been greatly admired by all visiting travellers through the centuries. Pausanias, the great traveller and geographer, who arrived in Vasses in the 2nd century AD, was stunned by its majesty and strength. It is speculated that the central column of the temple was designed to reflect the first rays of the summer solstice, symbolizing the eternal light of the sun god, Apollo. If this is true, then this is the first large scale sculptural work of art in the history of mankind to represent an abstract concept.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "The world is made up of a multitude of values, ethics, and morals, including natural and cultural sites that represent heritage. ‘Universal value’ is the essential meaning of world heritage. The World Heritage List contains all the natural and cultural sites that exist in this world. The main point of having organizations such as UNESCO (the United Nations Educational, Scientific and Cultural Organization) is to preserve as much of a world heritage site as possible.\nValues represent world heritage through our diverse ways of thinking and our cultural biases. We incorporate our everyday values into who we are and what we do. Our heritage and where we come from are based on these values, which create new ways of thinking while relying on past judgments. Altogether, our values create culture and traditions that can stretch on for a lifetime, thus making them a huge part of world heritage and the diversity of its people.\nAlong with values, we have ethics that drive us in the direction we see fit to accompany our lifestyle. 
This too represents world heritage, because it governs our actions, which vary across the globe. Where we come from determines how we act and behave with and around others. It defines our morals and what we believe in, which adds to heritage and faith. Overall, ethics help form relations, shape our behavior, and develop culture through the conservation of nature along with tradition.\nBeauty is everywhere, in people, in landscapes, and of course in our heritage. Natural and cultural sites are everywhere and represent world heritage as a whole, in all its diversity. Historical sites and world wonders make up some of this beauty that defines many of our backgrounds. For instance, the Taj Mahal in India represents its people's culture and heritage. Not only does it show their heritage, it shows the beauty of it through architecture and art. Therefore heritage can be shown through many things that are all, in a general sense, beautiful and show the ways of other people, including art.\nLastly, world heritage is important because it preserves the beauty of our earth. The organization that deals with this is UNESCO, as stated previously. It makes sure important sites are preserved around the world to keep our heritage alive. As people, it is important to see the things we created and how we formed the many cultures and sites that fascinate the world. World Heritage is an important part of our history and is important to the generations to come.", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "Athens is a vibrant and exciting city, packed with museums, monuments, parks, and squares, crammed with tavernas, restaurants, bars and cafes. Named after the Greek goddess Athena, it is a city full of culture and simply brimming with buildings of archaeological and historic significance. 
Athens holidays, whether short or long stay, are a fantastic way to see part of this magnificent country.\nAthens is home to some of the most important heritage sites of Greek civilisation from antiquity through the Middle Ages to modern times. Although visitors will want to explore its culture and history, the city also offers a vibrant nightlife with a multitude of restaurants, tavernas, bars and clubs. With a wealth of sites to see, holidays or even city breaks in Athens will keep your itinerary filled with things to do. There are numerous direct flights from the UK and it is an ideal year-round destination boasting mild winters and hot summers.\nAll visitors to Athens will wish to visit The Acropolis. It is the very symbol of the capital and home to one of the world’s most famous buildings, the Parthenon. An architectural masterpiece of white marble dedicated to the goddess Athena, the Parthenon is a formidable sight, especially by night when illuminated. Additional monuments atop The Acropolis include the grand entrance of Propylaea with its imposing columns, the small temple of Athena Nike, built to commemorate the victory over the Persians, and the Erechtheion on the site where Athena planted an olive tree, her sacred symbol.", "score": 26.25355806650555, "rank": 41}, {"document_id": "doc-::chunk-3", "d_text": "It is only about three hundred yards the longest way, and about one hundred and twenty-five the shortest. Yet what spot in Greece contains more shrines of art or religion or more history to the square inch carved into or built upon its surface?\nThere is first the hard, crystalline limestone of which the hill itself is built, hoary with age and out-dating and outlasting everything that has been built upon it. Its summit must have been rough and jagged when the work was begun of planing it off to furnish the foundations for the dwelling-place of men and gods. 
Athens did not begin on the plain, and extend to the hill: it began on the hill, and spread to the plain. This lofty rock was far enough from the sea to furnish a safe retreat from the depredations of pirates, and it was easy to fortify it against attack. Those early dwellers, Pelasgic or other, did not put up a hedge or a board fence. They erected walls whose rough, solid masonry still winds its rugged courses around and over the Acropolis, as it did centuries before the Parthenon was built. Some of these walls were buried for ages until the spade of the excavator revealed them. Others rise stubbornly in the daylight, as if to dispute with the marble Propylea the trophy of permanence. Whatever myths may float around the heads of these early dwellers, the walls they built are solid facts, and will outlast the trivial masonry of our day.\nThen there are the traces of the devout spirit of early Greek occupation. He would be rash who would let misty conjectures of how long Athene or Artemis had been worshipped on this hill harden into any rigid chronology. It is known that Pisistratus lived on the Acropolis five centuries and a half before the Christian era; but other kings and tyrants had dwelt there before him, and this hill was the centre of civil and judicial life. That there was an early temple here to Athene is known, and in 1885 Dorpfeld pointed out its foundations near the Erechtheum. The temple was destroyed in the Persian wars, and perhaps rebuilt. 
Then the conception of a magnificent temple farther to the right, and covering vastly more space than the original one, took shape; and the foundations were broadly and strongly laid.", "score": 25.79126035102962, "rank": 42}, {"document_id": "doc-::chunk-1", "d_text": "According to the UNESCO World Heritage Centre website: “To be included on the World Heritage List, sites must be of outstanding universal value and meet at least one out of ten selection criteria:\n- To represent a masterpiece of human creative genius\n- To exhibit an important interchange of human values, over a span of time or within a cultural area of the world, on developments in architecture or technology, monumental arts, town-planning or landscape design\n- To bear a unique or at least exceptional testimony to a cultural tradition or to a civilisation which is living or which has disappeared\n- To be an outstanding example of a type of building, architectural or technological ensemble or landscape which illustrates significant stage(s) in human history\n- To be an outstanding example of a traditional human settlement, land-use, or sea-use which is representative of a culture (or cultures), or human interaction with the environment especially when it has become vulnerable under the impact of irreversible change\n- To be directly or tangibly associated with events or living traditions, with ideas, or beliefs, with artistic and literary works of outstanding universal significance\n- To contain superlative natural phenomena or areas of exceptional natural beauty and aesthetic importance\n- To be outstanding examples representing major stages of Earth’s history, including the record of life, significant ongoing geological processes in the development of landforms, or significant geomorphic or physiographic features\n- To be outstanding examples representing significant ongoing ecological and biological processes in the evolution and development of terrestrial, fresh water, coastal and marine 
ecosystems and communities of plants and animals\n- To contain the most important and significant natural habitats for in-situ conservation of biological diversity, including those containing threatened species of outstanding universal value from the point of view of science or conservation.\nAs a result of UNESCO’s criteria, World Heritage sites include natural wonders such as Mount Fuji and Yellowstone National Park, as well as architectural marvels including Chichen Itza in Mexico and the Great Wall of China.\nEaster Island, Chile\nPerhaps one of the most iconic World Heritage sites is Easter Island. This remote Chilean island is famous for its 887 monumental statues, called moai. Found along the coast, often in formation, the moai were created by the island’s earliest inhabitants. The first humans on the island of Rapa Nui (the Polynesian name for Easter Island) are believed to have arrived in approximately 300-400 A.D.\nThe tall statues stand at an average height of 13 feet and weigh an average of 13 tonnes.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "Greece was elected to the World Heritage Committee together with 11 other countries during the 23rd General Assembly of the States Parties to the World Heritage Convention which met at UNESCO headquarters in Paris this week.\nDuring the session chaired by Tebogo Seokolo, ambassador extraordinary and permanent delegate of South Africa to UNESCO, the general assembly elected 12 new members to the World Heritage Committee, namely Greece, Argentina, Belgium, Bulgaria, India, Italy, Japan, Mexico, Qatar, Rwanda, Saint Vincent and the Grenadines, and Zambia.\n“Greece was elected in the first round as a member of the UNESCO World Heritage Committee for the next four years. 
This is an exceptional distinction for our country,” said Greek Culture Minister Lina Mendoni, adding that it was a result of “hard work and systematic efforts by delegations from the ministries of foreign affairs and culture.”\n“Our goal is the active protection of the world’s cultural and natural heritage – which now has to deal with the effects of the climate crisis – as well as the preservation and strengthening of the universal value of monuments in all geographical regions of the world,” said Mendoni.\nGreek Foreign Minister Nikos Dendias on Twitter said the government was grateful to the countries that supported Greece’s candidacy.\n“Greece has just been elected to the UNESCO World Heritage Committee with 119 votes… Our country is ready to work to promote the protection of the world cultural heritage,” said Foreign Minister Nikos Dendias.\nThe World Heritage Committee is responsible for selecting the sites to be listed as UNESCO World Heritage Sites – including the World Heritage List and the List of World Heritage in Danger – defines the use of the World Heritage Fund, and allocates financial assistance upon requests from States Parties.\nThe committee consists of representatives from 21 state parties elected by the General Assembly of States Parties for a four-year term. These parties vote on decisions and proposals related to the World Heritage Convention and World Heritage List.\nCurrent members of the committee are: Argentina, Belgium, Bulgaria, Egypt, Ethiopia, Greece, India, Italy, Japan, Mali, Mexico, Nigeria, Oman, Qatar, Russian Federation, Rwanda, Saint Vincent and the Grenadines, Saudi Arabia, South Africa, Thailand, and Zambia.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "Named after Athena, the goddess of wisdom,\nAgorá (Market): In its heyday, Agora was the centre of city life – today it hosts ruins from different periods. 
It was here that ordinary people, stall holders, and merchants mingled with public figures, officials, philosophers and politicians. The main attraction here is Hephaisteion (Temple of Hephaistos), one of the best-preserved ancient temples in Greece, and dating to the fifth century BC. Also visit the Museo tis Agoras (Museum of Agorá) that houses an amazing range of everyday artefacts found in the area. It is housed in the Stoa of Attalos.\nAcropolis: This UNESCO World Heritage Site dominates the city and the skyline. Acropolis refers to the rocky outcrop that formed the original settlement in\nDelphi: According to Greek mythology, Delphi is located at the point where the two eagles released to the East and West by the god Zeus met, thereby marking the centre of the world. Delphi is the sanctuary of Apollo and the seat of his oracle. The ancient site is in ruins but still attracts thousands of visitors who throng here to see its remains. The site also houses the impressive Delphi Museum which exhibits various statues and offerings from the sanctuary of Delphi. The UNESCO World Heritage Site houses the Temple of Apollo, the Sacred Way, an amphitheatre, and a stadium.\nNational Archaeological Museum: The museum is housed in a late 19th century building and houses one of the finest collections of ancient Greek artefacts, including the fascinating Mycenaean Collection comprising beautifully crafted gold work dating from between the 16th and 11th centuries BC, and the Bronze Collection.\nTourism is one of the main industries in Greece and continues to flourish even in the uncertain economic times. Every year,", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "UNESCO compiled its first list in 1978; as of 2013, 964 sites were listed. They are divided into cultural, natural and mixed properties. 
Most of the sites are in Italy, which boasts 48, followed by Spain and China, both with 44.\nSicily, with its treasures of historical, cultural and natural importance, boasts 6 sites listed in the World Heritage List.\n1) Archaeological Area of Agrigento, listed in 1997\nFounded in the 6th Century B.C., the ancient city of Agrigento was one of the greatest Mediterranean centres. The remains of the Doric temples which dominate the city are well preserved and are among the most magnificent monuments of Greek art and culture. They testify to the magnificence and supremacy of the ancient city.\n2) Villa Romana del Casale, listed in 1997\nLate Roman villa located in “Contrada Casale” (Casale district), at the foot of Mont Mangone. Villa Romana was built around the III–IV century A.D. and represents a great example of a luxury Roman villa. The exceptional beauty and quality of the mosaics which decorate the villa illustrate the greatness and underline the importance of the Villa.\n3) Isole Eolie (Aeolian Islands), listed in 2000\nThe Aeolian Islands are located north of the coast of Sicily. The 7 islands which compose the archipelago (Panarea, Stromboli, Vulcano, Alicudi, Filicudi, Lipari and Salina, plus 5 small islets) are all of volcanic origin and are separated from the land of Sicily by 200 m deep waters. 
Over the centuries, the Aeolian Islands have exhibited two different kinds of eruption (Vulcanian and Strombolian) and have given the science of vulcanology the chance to enrich and improve its knowledge.\n4) Late Baroque Towns of the Val di Noto (South-Eastern Sicily), listed in 2002\nThe Baroque towns listed by Unesco were rebuilt after a terrible earthquake in 1693.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "UNESCO’s World-Heritage regime began life 40 years ago, when dozens of countries signed up to the idea that the world’s cultural and natural patrimony was under threat not only from “traditional causes of decay” but also because of “changing social and economic conditions”. Among those who endorsed the principle was the Republican administration of Richard Nixon, which gave remarkably high priority to conservation and the environment. (Since then, America has had a stormy relationship with UNESCO; it cut off payments to the agency last year, under a law which denies funding to any body that admits Palestine.)\nIn many poorer countries which host heritage sites, the biggest changes since 1972 have been exploding populations and a huge rise in global tourism, combined with a lack of the governance needed to cope with both phenomena. Angkor Wat, a temple complex in Cambodia, and the Inca fortress of Machu Picchu in Peru are often cited as places of world-historical importance where a vast influx of tourists may be causing serious damage. By recognising and thus publicising individual sites, UNESCO and other cultural watchdogs risk harming the cause of conservation, which would be better served if visitors to the country were spread around a broader range of places.\nBut there are no easy ways to maintain heritage sites in relatively poor countries; it requires delicate balancing acts, much local diplomacy and long-term engagement, according to organisations that work in that field. 
Even a well-functioning state, be it democratic or authoritarian, will fail to conserve monuments unless local people see an interest in maintaining their heritage and using it rationally, says Vincent Michael, new chairman of the Global Heritage Fund (GHF), based in California. The effort will collapse if cultural heritage is seen either as a pesky impediment to making money, or as something to be exploited for short-term gain. Nor should local economies ever be too reliant on tourism, which can fall as rapidly as it rises….\nBut in many places where sites are at risk, government either does not operate at all, or functions only in the interest of a kleptocratic elite.", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-3", "d_text": "The Temple of Apollo Epikourios\nSome 14km into the mountains south of Andhrítsena stands a World Heritage Site, the fifth-century BC Temple of Apollo at BASSAE (Vásses), occupying one of the remotest, highest (1131m) and arguably most spectacular sites in Greece. In addition, it is one of the best-preserved Classical monuments in the country and is considered to have been designed by Iktinos, architect of the Parthenon and the Hephaisteion in Athens.\nThere’s just one problem: nowadays to protect it from the elements during its complicated restoration, the magnificent temple is swathed in a gigantic grey marquee suspended from metal girders; and what’s left of its entablature and frieze lies dissected in neat rows on the ground to one side. 
Panels make clear that the work is badly needed to keep the whole thing from simply tumbling, stone by stone, into the valley it overlooks – and the marquee is quite a sight in itself – but visitors are likely to feel a bit disappointed, at least until you enter the tent and are awe-struck by the sheer scale and majesty of the thing, even more so as you walk all around it.", "score": 25.04086002129528, "rank": 48}, {"document_id": "doc-::chunk-1", "d_text": "The Operational Guidelines for the Implementation of the World Heritage Convention state that “a site which is nominated for inclusion in the World Heritage List will be considered to be of outstanding universal value for the purpose of the Convention when the Committee finds it meets one or more of the criteria”\nCriteria for the inclusion of cultural properties in the World Heritage List:\n(i) represent a masterpiece of human creative genius; or\n(ii) exhibit an important interchange of human values, over a span of time or within a cultural area of the world, on developments in architecture or technology, monumental arts, town-planning or landscape design; or\n(iii) bear a unique or at least exceptional testimony to a cultural tradition or to a civilisation which is living or which has disappeared; or\n(iv) be an outstanding example of a type of building or architectural or technological ensemble or landscape which illustrates (a) significant stage(s) in human history; or\n(v) be an outstanding example of a traditional human settlement or land-use which is representative of a culture (or cultures), especially when it has become vulnerable under the impact of irreversible change; or\n(vi) be directly or tangibly associated with events or living traditions, with ideas, or with beliefs, with artistic and literary works of outstanding universal significance (the Committee considers that this criterion should justify inclusion in the List only in exceptional circumstances and in conjunction with other criteria 
cultural or natural);\nin addition they need to:\n(b)(i) meet the test of authenticity in design, material, workmanship or setting and in the case of cultural landscapes their distinctive character and components\n(ii) have adequate legal and/or contractual and/or traditional protection and management mechanisms to ensure the conservation of the nominated cultural properties or cultural landscapes. The existence of protective legislation at the national, provincial or municipal level and/or a well-established contractual or traditional protection as well as of adequate management and/or planning control mechanisms is therefore essential and, as is clearly indicated in the following.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "UNESCO, the United Nations Educational, Scientific and Cultural Organization, is an agency of the UN (United Nations). Its headquarters is in Paris, the capital of France.\nThe main task of this organization is to promote education, culture and science among the people of the world.\nIt has 193 members and 11 associate members.\nIt also declares various heritage sites that need to be preserved. 
These sites represent the cultures of different nations.\nIn India, UNESCO has declared 37 sites.\nIndian history is full of cultural and heritage sites; culture has flourished here since the Indus Valley Civilization and needs preservation.\nUNESCO helps in many ways in preserving our golden culture.\nOur cultural heritage mainly comprises painting, architecture, poetry, and natural vegetation, which represent the true face of our culture.\nUNESCO's definition of a heritage site is a place on earth that embodies the universal value of humanity.\nHeritage sites are the past that one can see and observe in the present.\nUnder international treaties, these sites are unique and irreplaceable.\nThe criteria for listing sites among World Heritage Sites are as follows:\n1. It has to be a masterpiece and should represent a culture.\n2. It should represent human values over a period of time through architectural design, unique town planning, or landscape design.\n3. The site should contain evidence of a civilization that is living or has lived in the past.\n4. A building must have an extraordinary design. Besides, it should also represent a stage in the history of human evolution during its time.\nCriteria for natural sites:\n1. Natural sites must have diverse species and natural beauty.\n2. They should contain evidence of the earth's natural changes, records of life, and ongoing geographical changes.\n3. In particular, if a site contains endangered species or natural habitats, it fulfils the requirement.\nBenefits of listing as a world heritage site:\nThe site gets recognized all over the world, and environmentalists and NGOs channel money to save it.\nTourism also increases, at the national as well as the international level. 
This helps the government generate revenue and employment.\nGovernments of signatory countries, which are bound to protect natural habitats and prevent poaching, also take part in saving these sites.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-1", "d_text": "The Temple of Athena Nike: First temple on the Acropolis to be built in the Ionic style, and one of the few exemplars of an amphiprostyle temple in all of Greece: what made it truly unique was the unit by which it was planned, which turns out to be the Egyptian foot of 300 mm.\nThe Erectheion: Dedicated to the worship of the two principal gods of Attica, Athena and Poseidon.\nThe Propylea: The ancient monumental gateway to the Acropolis.\nOn the South Side\nThe Odeon of Herodes Atticus: This ancient theatre is still used today for concerts and plays.\nThe Theatre of Dionysus.\nNew Acropolis Museum. Designed by Swiss architect Bernard Tschumi at a site south of the Acropolis, opened in June 2009. Located in Makryanni just below the Acropolis, it’s easily accessed from the Acropolis station of the Metro.\n|Acropolis of Athens.|\n|Odeon of Herodes Atticus, Acropolis of Athens.|\nFollowing European regulations, disabled access to the Acropolis can be gained by means of special paths and a purpose-built lift on the north face of the hill. Apparently this is only for the use of those in wheelchairs.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-2", "d_text": "The site numbers 17 places of worship, which are still in use, and many that are not. Dispersed around the mountain peaks they are connected by footpaths. The cult sites are believed to provide cures for barrenness, headaches, and back pain and give the blessing of longevity. Veneration for the mountain blends pre-Islamic and Islamic beliefs. 
The site is believed to represent the most complete example of a sacred mountain anywhere in Central Asia, worshipped over several millennia.\nFounded as a Greek colony in the 6th century B.C., Agrigento became one of the leading cities in the Mediterranean world. Its supremacy and pride are demonstrated by the remains of the magnificent Doric temples that dominate the ancient town, much of which still lies intact under today's fields and orchards. Selected excavated areas throw light on the later Hellenistic and Roman town and the burial practices of its early Christian inhabitants.\nThe churches and convents of Goa, the former capital of the Portuguese Indies – particularly the Church of Bom Jesus, which contains the tomb of St Francis-Xavier – illustrate the evangelization of Asia. These monuments were influential in spreading forms of Manueline, Mannerist and Baroque art in all the countries of Asia where missions were established.\nThis site has the remains of monuments such as the Roman city of Aquincum and the Gothic castle of Buda, which have had a considerable influence on the architecture of various periods. It is one of the world's outstanding urban landscapes and illustrates the great periods in the history of the Hungarian capital.\nDescending a long hill dominated by a giant statue of Hercules, the monumental water displays of Wilhelmshöhe were begun by Landgrave Carl of Hesse-Kassel in 1689 around an east-west axis and were developed further into the 19th century. 
Reservoirs and channels behind the Hercules Monument supply water to a complex system of hydro-pneumatic devices that supply the site’s large Baroque water theatre, grotto, fountains and 350-metre long Grand Cascade.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "UNESCO World Heritage Sites in Iran\nThere are many historical monuments in Iran, and many UNESCO World Heritage sites; worldwide, more than 700 cultural monuments have been registered on the UNESCO World Heritage List over the years. Iran has made great efforts to record its historical monuments on the UNESCO World Heritage List, and it has so far registered 19 sites.\nThis action was taken by Iran because, in 1972, UNESCO decided to formulate a convention to identify, protect and introduce the cultural and natural heritage of human beings around the world. This organization acts as the scientific, cultural and educational arm of the United Nations.\nThe role of UNESCO in the field of tourism has gradually turned into supporting cultural development. So far, 180 countries have signed the World Heritage Convention and have registered more than 700 cultural and natural sites on the UNESCO World Heritage List. And Iran, with its rich history and rich cultural background, had recorded 17 sites on the UNESCO World Heritage List; in recent months, this number reached 19 with the global registration of two other monuments.\nBut which Iranian cultural and historical sites have been introduced to the world as UNESCO World Heritage sites?\nThe historic site of Chogha Zanbil in Shush was the first historical monument to be registered on the World Heritage List as a UNESCO World Heritage site in Iran, on May 19, 1979, with the efforts of the late Shahriar Adel. 
Among the factors contributing to the inscription of this complex on the World Heritage List are its significance as the most important surviving work of the Elamite era, its 3,000-year history, and its natural attractions.\nNatural erosion and the destruction caused by the Iran-Iraq war inflicted great damage on this ancient city. In order to prevent further destruction of this region, an agreement was reached in 1998 between the Iranian Cultural Heritage Organization, UNESCO, the Japan Credit Institute and the Kerrake Institute of France (the International Organization for the Protection of Builders) for the implementation of a study on conservation and restoration of the site. In line with the implementation of this project, a permanent research base including a lab, a department for conservation and restoration, a library, a computer department and a department of pottery studies was created and equipped in the administrative part of the Haft Tepe Museum.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-7", "d_text": "As international tourism increased, Ellinikón Airport, south of the city, was expanded and modernized. The city water supply from an artificial lake at Marathon was insufficient to supply the new building construction, and the Mórnos River 110 miles to the northwest was dammed and tapped. Installation of a modern sewer system was undertaken, together with controls to check the floods that roar into Athens when heavy rains pour off the denuded mountains.\nTraditional features\nThe older Athens has not entirely disappeared in all this hubbub. Older men may have given up smoking hookahs in shadowy cafes but not their 33-bead kombouloi (“worry beads”), which were acquired from the Turks. Old Athens occupies the six streets sidling off Monastiráki Square, by the excavated Agora. 
Tiny open-fronted shops are hung with tinselled folk costumes and all of the monuments of Athens reproduced in copper, plaster, plastic, and paint. There is an alley of antique dealers, a street of smithies, one of hardware merchants, and another of wildly assorted miscellany.Close to this lively quarter is the Pláka, on the north slope of the Acropolis. Small, one-story houses, dating from about the time of independence, are clustered together up the hillside in peasant simplicity. There are appropriately tiny squares with tavernas, once celebrated for their folk music, dancing, and simple fare. There are vine-covered pergolas and some unpaved streets too narrow for cars. The baths built by the Turks still function morning and afternoon, but the bouzouki, a local relative of the lute, is giving way to the electric guitar. The taverna signs are multilingual, and the ubiquitous kitchen chair is being replaced by the plastic-ribbed restaurant seat. Progress laps at the Pláka like a vengeful sea, but the Acropolis is just up above, just under the stars.The AcropolisMany of Athens' bequests (all, if the theatre of Herodes Atticus may be regarded as an embodiment of the city's literature) to the world are expressed in and around the natural centre of Athens, the Acropolis (designated a World Heritage site in 1987). 
Rising some 500 feet above sea level, with springs near the base and a single approach, the Acropolis was an obvious choice of citadel and sanctuary from earliest times.", "score": 23.890251247992186, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "- 1 Why was the Greek Parthenon built?\n- 2 What happened to the Acropolis?\n- 3 What is the difference between the Parthenon and the Acropolis?\n- 4 Why is the Parthenon so important?\n- 5 Why was the Parthenon destroyed?\n- 6 What is so special about the Parthenon?\n- 7 What is the most famous Acropolis?\n- 8 Who destroyed Acropolis?\n- 9 Did slaves build the Parthenon?\n- 10 How much is entry to the Acropolis?\n- 11 How much did Lord Elgin pay for the Elgin marbles?\n- 12 Who gave Lord Elgin permission to take the marbles?\nWhy was the Greek Parthenon built?\nThe Parthenon was the center of religious life in the powerful Greek City-State of Athens, the head of the Delian League. Built in the 5 century B.C., it was a symbol of the power, wealth and elevated culture of Athens. It was the largest and most lavish temple the Greek mainland had ever seen.\nWhat happened to the Acropolis?\nThere’s no recorded history of what happened at the Acropolis before the Mycenaeans cultivated it during the end of the Bronze Age. In 480 B.C., the Persians attacked again and burned, leveled and looted the Old Parthenon and almost every other structure at the Acropolis.\nWhat is the difference between the Parthenon and the Acropolis?\nAcropolis is the area the Parthenon sits on. What’s the difference between Acropolis and the Parthenon? The Acropolis is the high hill in Athens that the Parthenon, an old temple, sits on. Acropolis is the hill and the Parthenon is the ancient structure.\nWhy is the Parthenon so important?\nWhy is the Parthenon important, special and famous? The Parthenon is so special because first of all is the symbol of Athens democracy. 
It was built after the victory over the Persians, who had occupied Athens in 480 BC. It was built to celebrate the victory and Athens' political, economic and cultural superiority.
Why was the Parthenon destroyed?
On 26 September 1687 Morosini fired, one round scoring a direct hit on the powder magazine inside the Parthenon.", "score": 23.128157890489792, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "How does a Place become a World Heritage Site?
To become a World Heritage Site, a place must be considered of universal interest and must meet one of the selection criteria produced by UNESCO. The State Party of a country, which in the case of the UK is the Government, compiles a tentative list of sites to be considered for inscription. They are then evaluated by ICOMOS (International Council on Monuments and Sites) for cultural sites and/or IUCN (World Conservation Union) for natural sites. The sites are then presented to the World Heritage Committee, which makes the final decision.
The nomination process involves the production of documents to demonstrate that the property is of “Outstanding Universal Value”, and a management plan to show that mechanisms are in place for the future protection and conservation of the site.
For further information:
What is the Purpose?
The cultural heritage and the natural heritage are among the priceless and irreplaceable possessions, not only of each nation, but also of mankind as a whole. The loss, through deterioration or disappearance, of any of these most prized possessions constitutes an impoverishment of the heritage of all the peoples in the world.
Parts of that heritage, because of their exceptional qualities, can be considered to be of outstanding universal value and as such worthy of special protection against the dangers, which increasingly threaten them.\nThe purpose is to identify the sites worthy of the status of World Heritage Sites and then ensure their protection and conservation.\nWhat are the criteria for a site becoming a World Heritage Site?", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-1", "d_text": "Our Exploring the Peloponnese tour is an in-depth experience of its remarkable beauty and variety. Currently, that itinerary includes no less than five UNESCO World Heritage sites!\nThe Archaeological sites of Mycenae and Tiryns\nThese are the two best-preserved of the great Mycenaean citadels surrounding the Plain of Argos. Linked with the mythology of the Homeric heroes and of Heracles, the quintessential hero of the Greek mainland, they were the incredibly well-defended centres of the palatial civilisation we call Mycenaean, thriving between 1500 and 1200 BC. Complex gates, underground cisterns, the so-called palaces, enormous tombs in their vicinity, and many other features make them first-rate sites, expressions of a still poorly-understood ancient culture.\nTo this day, their remains are jaw-droppingly monumental and highly evocative – but hard to understand without an expert guide(!). On our tour, we bring them to life and help you understand their significance, helped by the superb accompanying museums at Nafplio and at Mycenae itself.\nThe Sanctuary of Asklepios at Epidauros\nNear the shores of the Saronic Gulf, this is a place renowned for its peaceful and serene atmosphere, for its setting in a particularly peaceful and lovely Mediterranean landscape, with the scent of pine trees wafting across the site, but especially for its ancient theatre. 
The latter is considered the most beautiful and most perfectly proportioned of its kind, creating a strong sense of focus, helped by the superb acoustics, and an impression of harmony that belies its enormous size: it seats over 14,000. The remains of the nearby sanctuary, dedicated to Asklepios, the God of Healing, are also fascinating, showing the structure and functions of an ancient health resort.\nThe Archaeological Site of Olympia\nOne of antiquity's most famous places, Olympia is the site of a major panhellenic (all-Greek) sanctuary dedicated to Zeus and locale of the original Olympic Games. Its remains, mostly from the 7th century BC to the 5th AD, are extremely interesting, making it easy to imagine the place in its heyday.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-1", "d_text": "The Acropolis treasures that currently reside in the British Museum in London – where we saw them several times during our time of living in the UK – are a gaping hole in the museum’s collection, and the Greek government have been waging an understated battle for over three decades now to have them returned to their rightful home. When they are returned to Greece, the museum will become an essential companion to the World Heritage site.\nA perspective towards Mount Lycabettus.\nAnd a dawn view towards both Acropolis and Lycabettus from another high hill, Philopappos.\nAcropolis is the focal point of Athens, unmissable if you spend any time in the city. Ticket lines can get pretty long in the middle of the day, so either come right before the opening or wait until 5pm or so; only right after the opening and right before the closing time will you have a chance of not sharing the site with hundreds of other tourists. Of the two main entrances, the southern one has shorter lines at all times of the day. 
A minimum of one hour is needed to see all there is to see; if you let the awe overtake you, it could be considerably longer.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "UNESCO WORLD HERITAGE
The city was declared a World Heritage Site by UNESCO in 1996 thanks to its imposing ensemble of massive ramparts and palaces, which the city owes to the reign of Sultan Moulay Ismail, who was behind the first great work of the Alawite dynasty, reflecting the greatness of its designer.
A circuit of nearly 13 kilometers allows one to appreciate the size of the ramparts, which cover a historic belt of 40 kilometers.
The presence of this historic city today, containing rare remains and important monuments in the midst of a changing urban space, gives this urban heritage its universal value.
The features of Meknes reflect its outstanding universal value and relate partly to the monuments and partly to the entire urban fabric of the city, illustrating its 17th-century form.
The medina and its kasbah contain all the elements that testify to the exceptional universal value of this particular site: fortifications, urban fabric, earth architecture, and civil, military and religious buildings and gardens.", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-2", "d_text": "Not for any love for the regime ...
but out of serious concerns for Iraqi national interests.”\nIn Paris, UNESCO said its World Heritage Committee had decided to place the six historic sites in Syria on its list of World Heritage in Danger to draw attention to the risks they are facing because of the conflict in the country.\n“The danger listing is intended to mobilise all possible support for the safeguarding of these properties which are recognised by the international community as being of outstanding universal value for humanity as a whole,” UNESCO said in a statement.\nThe sites concerned are the old city of Damascus; the Greco-Roman ruins at Palmyra; the old city of Bosra; the old city of Aleppo; the Crac des Chevaliers castle and Qal‘at Salah El-Din; and the ancient villages of Northern Syria. (Additional reporting by Erika Solomon in Beirut, Patrick Markey in Baghdad and John Irish in Paris; Writing by Paul Taylor)", "score": 23.030255035772623, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "Achieving World Heritage status is a great honour and one that has been bestowed on many natural and architectural sites around the world. From Easter Island to Stonehenge, the Taj Mahal to the Acropolis, many UNESCO World Heritage sites are instantly recognisable.\nBeing listed as a World Heritage site isn’t just a matter of status, however, it also helps to protect the area or building from being harmed in any way by new developments or environmental factors. In July, 21 new places received the prestigious accolade. 
The list included the Lake District in the UK, caves and ice age art at Swabian Jura in Germany and Los Alerces National Park in Argentina.\nIn this article we will talk you through five of the most iconic World Heritage sites across the globe and exactly what it takes to be included in the list.\nWhat does it take to become a UNESCO World Heritage site?", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-2", "d_text": "The Castle of Pylos or Niokastro\nIt was built in 1573 A.D. by the Turks, then it passed to the Venetians and was restored in 1829 by the French Marshal Maison. The head office of the Underwater Archaeology Center is located here. (6 km - 00h.09΄). Read more at pylos.info.\nOne of the most important and imposing temples of antiquity. The building was erected in the second half of the fifth century B.C. and it is believed to be the work of Iktinos, the architect of the Parthenon. It is the only known temple of antiquity which combines three architectural orders. Read more at whc.unesco.org.\nThe most important religious and athletic centre in ancient Greece. Its fame rests upon the Olympic games during the classical period. Here, the great sculptor Pheidias crafted the gigantic chryselephantine statue of Zeus, listed as one of the Seven Wonders of the ancient world. The Archaeological Museum of Olympia is one of the most important museums in Greece with many precious exhibits. Read more at whc.unesco.org.\nThe history of the ruined Byzantine town of Peloponnese begins from the middle of the 13th century when the Franks accomplished the conquest of the Peloponnese. Mystras is a precious source of the historical knowledge, the art and the civilization of the two last centuries of Byzantium. Read more at whc.unesco.org.\nIts fame comes mainly from the Asclepieion, the most celebrated healing centre of the Classical World, which, among other buildings, includes the Theatre. 
It was designed by Polykleitos the Younger in the 4th century B.C. It seats up to 13 000 people and is marveled at for its exceptional acoustics. It is still used for performances today. Read more at ancient.eu.\nThe centre of the Mycenaean civilization from about 1600 B.C. to about 1100 B.C. In 1874 Heinrich Schliemann undertook a complete excavation. He found the ancient burial vaults with their royal skeletons and spectacular funereal artifacts made of gold. The site has been well- preserved and the admiration of visitors is aroused by the massive ruins of the Cyclopaean walls as well as the many vaulted tombs.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "Landmarks of the modern era, dating back to the establishment of Athens as the capital of the independent Greek state in 1833, include the Hellenic Parliament (19th century) and the Athens Trilogy consisting of the National Library of Greece, the Athens University and the Academy of Athens. Athens was the host city of the first modern-day Olympic games in 1896, and 108 years later it welcomed home the 2004 Summer Olympics. Athens is home to the National Archeological Museum, featuring the world's largest collection of ancient Greek antiquities, as well as the new Acropolis Museum.\nMain article: History of AthensThe oldest known human presence in Athens is the Cave of Schist, which has been dated to between the 11th and 7th millennium BC. Athens has been continuously inhabited for at least 7000 years. By 1400 BC the settlement had become an important centre of the Mycenaean civilization and the Acropolis was the site of a major Mycenaean fortress, whose remains can be recognised from sections of the characteristic Cyclopean walls. 
Unlike other Mycenaean centers, such as Mycenae and Pylos, it is not known whether Athens suffered destruction in about 1200 BC, an event often attributed to a Dorian invasion, and the Athenians always maintained that they were \"pure\" Ionians with no Dorian element. However, Athens, like many other Bronze Age settlements, went into economic decline for around 150 years following this.
Iron Age burials, in the Kerameikos and other locations, are often richly provided for and demonstrate that from 900 BC onwards Athens was one of the leading centers of trade and prosperity in the region. The leading position of Athens may well have resulted from its central location in the Greek world, its secure stronghold on the Acropolis and its access to the sea, which gave it a natural advantage over inland rivals such as Thebes and Sparta. By the 6th century BC, widespread social unrest led to the reforms of Solon. These would pave the way for the eventual introduction of democracy by Cleisthenes in 508 BC. Athens had by this time become a significant naval power with a large fleet, and helped the rebellion of the Ionian cities against Persian rule.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "The World Heritage Committee suspended its 40th session, commenced in Istanbul in July 2016, following disruptions and concerns arising from an attempted military coup. The Session will now resume in Paris in October 2016.
Prior to the suspension the Committee had completed assessment of new nominations to the World Heritage List and consideration of ‘state of conservation’ reports for more than 50 properties, including every property that is currently included on the List of World Heritage in Danger.
21 new properties were inscribed on the World Heritage List, including the Archaeological Site of Philippi (Greece), Antequera Dolmens (Spain) and Ani (Turkey).
A serial listing for the Architectural Work of Le Corbusier, spread across seven countries, and Nan Madol – a ceremonial centre within the Federated States of Micronesia – were among the other inscriptions.
Richard Mackay provided advice to the Committee during the state of conservation considerations, including briefings and responses to questions on the Historic Centre of Vienna, the Historic City of Quito, St Sophia’s Cathedral in Kiev, and threats to the Mercantile City of Liverpool. World Heritage sites in Syria and Yemen continue to face major challenges, but until hostilities subside the focus must be on urgent first aid and repair of shattered residences and preparation for reconstruction. The Committee endorsed important advice from a Joint Reactive Monitoring Mission on the reconstruction and conservation of Kathmandu, but decided not to include Kathmandu on the List of World Heritage in Danger at this time. Commenting on the Committee Session, Richard observed that there is a strong commitment from State Parties to the Convention to ensure that there are appropriate measures for conserving and managing World Heritage places in a way which retains their Outstanding Universal Value:
“the Committee sessions are like an iceberg – what occurs on the podium and in the public forum is only the small tip of months of advisory work and liaison, plus behind-the-scenes bilateral meetings.
At this Committee session, it was inspirational to be part of a process that is so important in caring for the jewels of humanity’s common heritage” he said.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "The four areas of the property are the Archaeological Park, at the tip of the Historic peninsula; the Suleymaniye quarter with Suleymaniye Mosque complex, bazaars and vernacular settlement around it; the Zeyrek area of settlement around the Zeyrek Mosque (the former church of the Pantocrator), and the area along both sides of the Theodosian land walls including remains of the former Blachernae Palace. These areas display architectural achievements of successive imperial periods also including the 17th century Blue Mosque, the Sokollu Mehmet Pasha Mosque, the 16th century Şehzade Mosque complex, the 15th century Topkapi Palace, the hippodrome of Constantine, the aqueduct of Valens, the Justinian churches of Hagia Sophia, St. Irene, Küçük Ayasofya Mosque (the former church of the Sts Sergius and Bacchus), the Pantocrator Monastery founded under John II Comnene by Empress Irene; the former Church of the Holy Saviour of Chora with its mosaics and paintings dating from the 14th and 15th centuries; and many other exceptional examples of various building types including baths, cisterns, and tombs.\nCriterion (i): The Historic Areas of Istanbul include monuments recognised as unique architectural masterpieces of Byzantine and Ottoman periods such as Hagia Sophia, which was designed by Anthemios of Tralles and Isidoros of Miletus in 532-537 and the Suleymaniye Mosque complex designed by architect Sinan in 1550-1557.\nCriterion (ii): Throughout history the monuments in Istanbul have exerted considerable influence on the development of architecture, monumental arts and the organization of space, both in Europe and the Near East. 
Thus, the 6,650 meter terrestrial wall of Theodosius II with its second line of defence, created in 447, was one of the leading references for military architecture; Hagia Sophia became a model for an entire family of churches and later mosques, and the mosaics of the palaces and churches of Constantinople influenced both Eastern and Western art.\nCriterion (iii): Istanbul bears unique testimony to the Byzantine and Ottoman civilizations through its large number of high quality examples of a great range of building types, some with associated artworks.", "score": 21.695954918930884, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "If you pay some attention to UNESCO activities, you might have learnt of the world heritage site listings. This is a list of places that are regarded by UNESCO (United Nations Education, Scientific and Cultural Organization) to have some physical or cultural importance. It may be of outstanding beauty or an uncommon natural phenomenon. This might be a beach, building, forest, mountain, water-fall and so on It is maintained by the International World Heritage program that is governed by UNESCO World Heritage Committee made up of 21 UNESCO member states that the General Assembly selects.\nExamples of places listed here include The Pyramids of Giza and East Africa’s Serengeti’s wild diversity. Well, what is the usefulness of such a list or even the sites it contains? Read below to find out.\n1. It brings a sense of honor and global recognition. Despite their location, UNESCO declares the world heritage sites accessible to everyone from allover the world. Currently, around 850 or more sites have joined the list. A site on this list will be flooded with international tourists thus reflecting on the economy of that community and country. The diversity brought closer home and stories shared between locals and tourists are a good learning avenue.\n2. 
It helps protect and preserve these sites: by informing the international community about the existence and significance of these sites, UNESCO instills in people the need to protect them, both for ourselves and for future generations. This is even more practical for the world's endangered sites. Alongside UNESCO, the World Heritage Convention maintains a list of sites that risk being lost to a variety of circumstances, be it poaching, natural disasters, excessive tourism, urbanization or even war.
A site on this list enjoys increased international vigilance and routine preservation efforts, in addition to more funding from UNESCO. By 2015, the list had 48 endangered sites, most of which were in war-prone Syria, Afghanistan and Iraq. By then, two properties had been delisted because preserving them had proved impossible.
The World Heritage Convention sets out 10 criteria, six cultural and four natural, of which a site must meet at least one to be considered for addition to the list. Cultural heritage sites should be those that possess exceptional art or historical and cultural importance.
Natural heritage sites are to possess unparalleled beauty or some unmatched natural phenomenon, be rich in indigenous biodiversity, or bear significance in the Earth’s history.
Traveling the world, we have never met such generous hosts before.
Additional information about our hospitable guides is available here.
Being one of the oldest cities in the world, Athens was first mentioned in the 15th century BC. Athens is called the \"cradle of civilization\" — it's the birthplace of democracy, western philosophy, political science, literature, theater, and the Olympics.
Athens is the land of Gods. According to ancient Greeks' beliefs, the city was a battleground where Athena, goddess of wisdom, fought with Poseidon, lord of the seas. Athena emerged victorious and the city was named after her. However, the offended Poseidon had his revenge by making the area waterless. Of course, this story is only one of many great ancient Greek myths, but the fact remains that water shortages plague the city to this very day. Moreover, the hot Athens weather probably makes Greeks wish for a different outcome of the legendary battle.
For centuries Athens has been an important cultural center and a large, powerful city. Many Athens landmarks have survived to this day, and the most famous of them, without a doubt, is the Acropolis. Actually, the word \"acropolis\" simply means \"upper city\" or \"a high place\" — such places were used to build temples for the patron deity and could be found in almost any Greek settlement. But it was the Acropolis of Athens that became a famous landmark of the world and a symbol of Greece, just like the Eiffel Tower in Paris or the Kremlin in Moscow.
In ancient times (650–480 BC) numerous temples and sculptures of Greek deities were located here on a 300 × 130 meter rocky spur. Earlier, during the Mycenaean period (c. 1600–1100 BC), the Acropolis served as a fortified royal residence.
Temples were built during peacetime and destroyed when the city was at war.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "- What is the history of Acropolis?
- Who bombed the Acropolis?
- What happened to the Acropolis?
- Are they rebuilding the Acropolis?
- What’s the difference between the Parthenon and the Acropolis?
- What was the Acropolis used for?
- When was the Acropolis built?
- Why did the Acropolis need to be rebuilt?
- Who destroyed the Acropolis?
What is the history of Acropolis?
The Acropolis was home to one of the earliest known settlements in Greece, as early as 5000 BC.

In Mycenaean times – around 1500 BC – it was fortified with Cyclopean walls (parts of which can still be seen), enclosing a royal palace and temples to the cult of Athena.
Who bombed the Acropolis?
Bombing the Parthenon
Armed with knowledge of the Parthenon as a pivotal battle site, Francesco Morosini ordered subordinate Antonio Mutoni, head of the mortar brigade, to target the Parthenon. After three days of shelling, a mortar round struck the Parthenon and detonated the gunpowder stored inside on September 26, 1687.
What happened to the Acropolis?
In 480 B.C., the Persians attacked again and burned, leveled and looted the Old Parthenon and almost every other structure at the Acropolis. To prevent further losses, the Athenians buried the remaining sculptures inside natural caves and built two new fortifications, one on the rock’s north side and one on its south.
Are they rebuilding the Acropolis?
Historic Decision Made to Rebuild Part of the Parthenon. The Greek Central Archaeological Council (KAS) decided on Wednesday that a part of the Parthenon, now in ruins on the Athens Acropolis, is to be rebuilt using mostly materials which are now lying on the ground.
What’s the difference between the Parthenon and the Acropolis?
What’s the difference between Acropolis and the Parthenon?
The Acropolis is the high hill in Athens that the Parthenon, an old temple, sits on. … Acropolis is the hill and the Parthenon is the ancient structure.\nWhat was the Acropolis used for?\nHistory of the Acropolis The Acropolis in Athens was a fortress and military base during the Neolithic period, due to its position which offers a great view of the land and the sea.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "World Heritage sites are recognized for their OUV – places that are so unique and exceptional that their protection should be a shared and common responsibility of us all. A central difference between marine protected areas (MPAs) and marine World Heritage sites is the international oversight that comes with monitoring, evaluation and reporting obligations for the latter. To ensure the characteristics that make up a site’s World Heritage status will endure all sites inscribed on the UNESCO World Heritage List are subject to systematic monitoring and evaluation cycles embedded in the official procedures of the 1972 World Heritage Convention. Along with the recognition and inscription of an area on the List, the State of Conservation process is a key value added to the protection of MPAs that are globally unique. This monitoring and evaluation of all natural sites, including all marine ones, on UNESCO’s World Heritage List is done in cooperation with IUCN, which has an official advisory role, formally recognized under the World Heritage Convention.\nThe World Heritage Marine Programme has worked to facilitate the exchange of knowledge and resources across the community of managers of marine sites on the World Heritage List, creating a global network of conservation leaders that is increasingly equipped to navigate a changing ocean. Designation as a World Heritage site raises the visibility and profile of key ocean conservation concerns, and equips managers to advocate more effectively for their protection. 
Hence, the World Heritage Convention has played a crucial role in ensuring that local conservation problems receive international attention when their exceptional values are in jeopardy.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-3", "d_text": "Mystras is a little labyrinthine, so our expert guides add not just narrative and explanation, but orientation as well...\nFive World Heritage sites (Mycenae and Tiryns are listed as one) make our Exploring the Peloponnese our second-richest tour in this regard (after Exploring Sicily). This might change in the future, as the Venetian fortresses of Methoni and Nafplio, as well as ancient Messene, are considered possible contenders.\nFrom the Slopes of Mount Olympus to Shores of the Aegean: the Archaeology, Food and Wine of Macedonia is our definitive tour of Northern Greece. It includes three World Heritage Sites at present.\nThe Early Christian and Byzantine Monuments of Thessaloniki\nThey are just one aspect of that truly fascinating city – but a highly important one. The city, founded about 315 BC, became the main centre and port of Macedonia under the Romans and has remained so until the present day. It was especially significant in Late Roman and Byzantine times, when it was a seat of power, an economic hub and a place of fine art and architecture.\nKey monuments include the Rotonda, originally built as a mausoleum to the Roman Emperor Galerius around AD 300 and later dedicated as a church of St. George, the 5th and 7th century BC enormous Basilica of St. Demetrios, the 5th century Church of the Virgin Acheiropoietos and the splendid 8th century Agia Sophia. On our tour, we also see the city's excellent Archaeological and Byzantine Museums.\nThe Archaeological Site of Aigai\nAncient Aigai, better known by its modern name Vergina, is one of the most spectacular visits on any of our itineraries. 
The site served as the capital of the Kingdom of Macedon under Philip II, the father of Alexander the Great. When Philip was murdered there in 336 BC, Alexander had him buried with lavish grave goods of jewellery, finery, furniture and weaponry. The mound covering Philip's tomb and a series of other royal tombs was discovered in the 1970s, yielding one of the world's most astonishing assemblages of archaeological treasure. It is now presented in a superbly designed underground museum.", "score": 20.327251046010716, "rank": 70}, {"document_id": "doc-::chunk-9", "d_text": "Athens, the capital and largest city of Greece, dominates the Attica periphery: as one of the world's oldest cities, its recorded history spans at least 3,000 years. The Greek capital has a population of 3.81 million (in 2011).
A bustling and cosmopolitan metropolis, Athens is central to economic, financial, industrial, political and cultural life in Greece. It is rapidly becoming a leading business centre in the European Union.
Athens is widely referred to as the cradle of Western civilization and the birthplace of democracy, largely due to the impact of its cultural and political achievements during the 5th and 4th centuries BC on the rest of the then known European continent. The heritage of the classical era is still evident in the city, represented by a number of ancient monuments and works of art, the most famous of all being the Parthenon on the Acropolis, widely considered a key landmark of early Western civilization. The city also retains a vast variety of Roman and Byzantine monuments, as well as a smaller number of remaining Ottoman monuments projecting the city's long history across the centuries.
Landmarks of the modern era are also present, dating back to 1830 (the establishment of the independent Greek state), and taking in the Greek Parliament (19th century) and the Athens Trilogy (Library, University, and Academy).\nThe establishment of Athens as a city dates back to mythological times. The city’s history is still evident throughout Athens in the form of many Ancient, Roman, Byzantine and modern monuments. Today’s capital integrates the ancient and medieval history into the contemporary era. Monuments can be found all around the city center, side by side with contemporary constructions such as buildings, roads and train stations.\nThe Parthenon, a monument that constitutes the symbol of Greece worldwide, has been standing on the “sacred rock” of Athens, the Acropolis, for thousands of years. The Parthenon, along with the other monuments of the Acropolis, is an excellent piece of art, reflecting the Classical period and the Golden Age of ancient Athens in the 5th and 4th centuries B.C.\nThe Acropolis Museum\nDesigned by Bernard Tschumi in collaboration with Michalis Photiadis, the sparkling new museum, since its opening in June 2009, has already become the city’s top attraction and is expected to become one of the most visited and “must see” museums worldwide.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "Athens. The city with the most glorious history in the world, a city worshiped by gods and people, a magical city. The enchanting capital of Greece has always been a birthplace of civilization. It is the city where democracy was born and where many of the wise men of ancient times lived. On this excursion you will come “face to face” with the Greek capital! It will be our pleasure to guide you to the Tomb of the Unknown Soldier that stands in front of Parliament House on Constitution Square, and see the Presidential Mansion that served as the Royal Palace before the country’s monarchy was abolished in 1974.
You will see the Panathenaic Stadium – site of the first modern Olympic Games in 1896 – and then relax as your coach drives along Panepistimiou, home to Athens’ National Library and the Metropolitan Cathedral of Athens, known locally as Mitropoli.\nAfter seeing the highlights of modern Athens, you’ll step back in time with a tour of Athens’ past. Visit the Roman Temple of Olympian Zeus, and then stop at Athens’ crowning glory – the incredible Acropolis of Athens. Built on a rocky hill towering above the city, the UNESCO World Heritage-listed Acropolis is a cluster of ancient buildings that acted as a fortress – all reflecting the splendor and wealth of Athens during the 5th century BC.\nHighlights of the Acropolis include the Parthenon, the Propylaea gateway and the Temple of Athena Nike – built to represent Athens’ ambition to be the leading Greek city. After spending time at leisure here, you’ll make your way back to your starting point by coach.\nIf you opt to upgrade your tour to include an entrance ticket to the Athens Acropolis Museum, your guided tour will actually finish here at the Acropolis. After receiving your ticket from your guide, head inside this fascinating museum to see a staggering collection of more than 4,000 artifacts. Highlights include statues from the Archaic period and the impressive Parthenon Hall — dedicated to the history of the famous temple on the peak of the Acropolis.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-2", "d_text": "World Monuments Fund and UNESCO World Heritage Sites\nOver the course of World Monuments Fund’s forty-seven-year history, many of our projects have been at UNESCO World Heritage sites. Our engagement has ranged from catalytic support, helping local groups prepare sites for World Heritage inscription, to conservation work at sites already on the list.
World Heritage cultural sites reflect the achievements of communities over time, and this vast array of special places recognizes that our planet is filled with extraordinary sites that range from the humble and obscure to the grand and famous.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Swagger around Europe and explore its iconic monuments, from the rubble and ruins of Athens and the royal palaces of London to the archaeological ruins of Greece. Plan a Europe trip and be amazed by the magnificence of these UNESCO World Heritage Sites.\nPalace of Westminster, UK\nThe Palace of Westminster is a time-hallowed piece of Gothic art that retraces centuries of history. The palace has retained most of its original form, despite two fires. The Palace of Westminster is so roomy that it is almost four times larger than a cricket pitch! Well, if you get lost, then just look at the colour of the carpet. The House of Commons has a green carpet, while the House of Lords has a red one.\nLeaning tower of Pisa, Italy\nSpanning construction over 199 years, the Leaning Tower of Pisa is famed for its unintended tilt, caused by its faulty foundation. Did you know that this monumental bell tower was never meant to lean, and Galileo conducted some ground-breaking experiments at its top? Posing as if you were pushing, kicking, punching or doing something completely insane is rated #1 on our must-do activities when here.\nFrom chronicles of gladiators to the suffering of slaves, experience history like never before. Originally called the Flavian Amphitheatre, the Colosseum staged major battles between man and beast. As the largest amphitheatre built in the Roman Empire, the Colosseum has served as a model for modern stadiums and amphitheatres. The only distinguishing factor between the two is that the players involved in modern times survive the games. However, the fighters of the ancient world either performed or perished.
This UNESCO World Heritage Site could house over 70,000 spectators, who entered the monument through more than 80 gateways.\nShrouded in mystery for years, Stonehenge has bewildered archaeologists, scientists, astrologists and historians alike. No one knows how or why a ring of massive stones was constructed ages ago. Several schools of thought have spawned stories around its existence. Recently, archaeologists discovered a detailed map underneath the massive stones using hi-tech scanning methods. The research also states that Stonehenge was surrounded by burial mounds, shrines and massive pits. UNESCO gave it the status of World Heritage Site in 1986.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-1", "d_text": "It is now a UNESCO World Heritage Site. Specialists from around the world are still working on digs to uncover the buildings, skeletons, and artifacts of this ancient town. Over two million tourists visit the site every year.", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "Athens (Modern Greek: Αθήνα, Athína; IPA: [aˈθina]; Katharevousa: Ἀθῆναι, Athinai; Ancient Greek: Ἀθῆναι, Athēnai) is the capital and largest city of Greece. Athens dominates the Attica region and is one of the world's oldest cities, with its recorded history spanning around 3,400 years. Classical Athens was a powerful city-state. A centre for the arts, learning and philosophy, home of Plato's Academy and Aristotle's Lyceum, it is widely referred to as the cradle of Western civilization and the birthplace of democracy, largely due to the impact of its cultural and political achievements during the 5th and 4th centuries BC on the rest of the then known European continent. Today a cosmopolitan metropolis, modern Athens is central to economic, financial, industrial, political and cultural life in Greece.
In 2008, Athens was ranked the world's 32nd richest city by purchasing power and the 25th most expensive in a UBS study.\nThe city of Athens has a population of 655,780 (796,442 back in 2004) within its administrative limits and a land area of 39 km2 (15 sq mi). The urban area of Athens (Greater Athens and Greater Piraeus) extends beyond the administrative municipal city limits, with a population of 3,074,160 (in 2011), over an area of 412 km2 (159 sq mi). According to Eurostat, the Athens Larger Urban Zone (LUZ) is the 7th most populous LUZ in the European Union (the 4th most populous capital city of the EU) with a population of 4,013,368 (in 2004). Athens is also the southernmost capital on the European mainland.\nThe heritage of the classical era is still evident in the city, represented by ancient monuments and works of art, the most famous of all being the Parthenon, considered a key landmark of early Western civilization. The city also retains Roman and Byzantine monuments, as well as a smaller number of Ottoman monuments.\nAthens is home to two UNESCO World Heritage Sites, the Acropolis of Athens and the medieval Daphni Monastery.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "When people talk about visiting Greece, it seems like they go straight to talking about beautiful islands with all white buildings and serene blue seas. For me, my trip to Greece would not have been complete without the three days I spent in the capital city: Athens. Athens is an incredible city and full of so much history. It’s insane to me that some people might visit Greece and skip this amazing city! There are so many great parts of Athens. Its history, the monuments, the food… Athens is incredible and everyone should add it to their upcoming travel itineraries!\nNote: This post contains affiliate links. If you purchase via my link I get a small commission at no additional cost to you.
This helps support my blog and provides free content for you! Read my disclosure policy here.\nAthens is one of the world’s oldest cities with its history dating back over 3,000 years. Back before the unification of Greece, the city-state of Athens was known as a center for the arts, culture, and philosophy. Home to philosophers like Plato and Socrates and the birthplace of democracy, the history of Athens is rich and has influenced all of our lives in some way. As a history buff, this was one of the main things that drew me in.\nAthens is also the birthplace of the Olympics, both in their ancient and modern iterations. It seems like you see history at every turn in the city. You can see the Acropolis from almost anywhere (more on that in a minute). You can make your way to the Panathenaic Stadium, where the opening ceremony of the first modern Olympics was held. It is also the only stadium in the world that is built entirely from marble!\nIt seems that everywhere you go in the city you can see the Acropolis nestled high above the city. As a UNESCO World Heritage site, it’s clear that the Acropolis is important, but it’s also incredibly beautiful. On your hike up to the top you pass the Theatre of Dionysus, then at the top you get to see the Parthenon, the Erechtheum, and sweeping views of the entire city!\nBuilt over 2000 years ago, the Acropolis has definitely seen better days, but if you want to see renderings of what it looked like in its prime, you can visit the amazing Acropolis Museum after you come back down.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "The aim of the UNESCO World Heritage Convention is to protect valuable cultural and natural sites around the world.
Verla was named a World Heritage site as a unique, culturally and historically significant example of an industrial settlement from the turn of the 20th century.\nThe UNESCO World Heritage Convention\nUNESCO, founded in 1945, is an educational, scientific and cultural organisation and a specialised agency of the United Nations. Its purpose is to promote peace, reduce poverty and contribute to sustainable development and intercultural dialogue by promoting cooperation between nations through the use of science, education and culture.\nThe UNESCO World Heritage Convention is one of the most well-known international conventions. The Convention Concerning the Protection of the World’s Cultural and Natural Heritage was adopted in 1972. Finland ratified the convention in 1987. The purpose of the convention is to safeguard the preservation of valuable cultural and environmental heritage worldwide and protect it from destruction and decay. Around the world, 191 nations have committed to honouring the convention. As of July 2017, there are World Heritage sites in 167 different countries.\nThe prime catalyst for the World Heritage Convention was a major international effort in 1959–1968 to rescue the Abu Simbel temples in Egypt, which were threatened by the rising waters of the Nile due to the construction of a dam. The rescue operation awakened a strong international urge to protect the world’s irreplaceable cultural heritage.\nThe World Heritage Convention is based on a concern for safeguarding the world’s endangered cultural and environmental heritage for future generations. Its mission is to demonstrate the value of the world’s most significant cultural heritage sites and to preserve them through international cooperation.
World Heritage concerns the common cultural and environmental heritage of the entire human race.\nCriteria for Cultural and Natural Heritage Sites\nIn order to be included in the World Heritage List, a cultural heritage site must be regarded as a masterpiece of human creativity or bear exceptional testimony to an existing or already extinct culture. For example, the site can be a building type that represents a significant historical era or illustrates the traditional settlements of a certain culture. It can also be associated with events, living traditions, ideologies, religions, beliefs or artistic and written works.\nAn environmental heritage site, on the other hand, could reveal something about an important developmental phase in the history of the Earth itself or be an example of ongoing ecological or biological change.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "The 41st UNESCO annual conference is being held from 2-12 July in Krakow, Poland. There, a committee will decide on which sites are of "outstanding universal value" and thus merit a protected status under international law. There are currently 1,052 World Heritage sites, including the Taj Mahal, Machu Picchu and Stonehenge to name but a few. There are 35 nominations for this year's convention; here is a look at just a few...\nAsmara – Eritrea\nAsmara, the capital of Eritrea, is situated in the Horn of Africa and has a population of approximately 1 million. Italian colonial rule from the late 19th to mid 20th century has left the city with a variety of unusual and modernist architecture. The buildings feature neo-Romanesque, neo-classical, futurist and art-deco styles. Distinctive Italian-style architecture is also present, notably in the wide streets and piazzas.
But these historic buildings are falling into ruin and are threatened by town planning; a place on the UNESCO list would ensure the monuments are preserved and restored.\nTaputapuatea - Raiatea\nThe small commune of Taputapuatea is located on Raiatea Island in the Pacific Ocean. The French colony was once a significant religious centre in Polynesia, and priests from neighboring islands would meet to worship the ancient deities and share their knowledge of the cosmos. Evidence of this ancient culture dating from 1000 AD is to be found in surviving stone structures and marae (communal sacred spaces of worship). Many artifacts like these on the Pacific Islands were destroyed by Christian settlers in the 19th century, so few are still standing. UNESCO status would mean these archaeological sites preserve the memory of a civilization which has long since disappeared.\nAhmedabad – India\nAhmedabad is the 6th largest city in India with a fast-growing population of more than 6.3 million. The stunning location was chosen by The Times of India as the best city to live in 2016. Sites to look out for include the Jama Mosque, built by the sultan in 1424, and Sabarmati Ashram, where Mahatma Gandhi lived for many years. The city features rare architectural styles, specifically the Indo-Saracenic style, which is a fusion of Hindu and Persian designs.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-1", "d_text": "The archaeological site of Chichen Itza was inscribed on the list of World Heritage Sites by UNESCO in 1988. On July 7, 2007, it was recognized as one of the New Seven Wonders of the modern world, by a private initiative without the support of UNESCO, but with the recognition of millions of voters around the world.", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "Walking past the Acropolis, Parthenon, Temple of Olympian Zeus and the ancient Agora, I realized how similar Athens is to Rome.
I had been to Rome three years ago and the similarities between the two in terms of architecture are obvious. After all, the Roman Empire was influenced by classical Greek culture. The ruins in Athens are 7000 years old.\nGreece is a cradle of western civilization. Everywhere you turn, you will find ruins that are thousands of years old. If you love art, history or archeology, you'll love Greece.\nAt the Acropolis I soaked up the gentle peace and the ancientness of the place, which has been inhabited since the 4th millennium BC. From here you can look out at the sea (so blue that you now know why it's called the Aegean Sea) and down on the city, with the ancient Agora and Plaka hugging its foothill. Acropolis means "high city". As you look at this architectural wonder you are struck by the scale of this place steeped in history as you walk in the footsteps of the ancient Greeks. Its location definitely makes it the crowning glory of Athens. It stands above the city of Athens, an ancient citadel with some of the most significant monuments of global civilization. You can visit the holy places of Plato, Pericles and Aristotle, where democracy and philosophy were born.\nParthenon - It is the central and the largest of the Acropolis temples and is dedicated to Athena, the patron of the city. There are a total of 50 columns still remaining. It is now a UNESCO world heritage site. Since it was built, the Parthenon has been used as a temple, a church, a fortress and a gunpowder storage facility. It was damaged by fire, by an explosion in 1687, and then looted by invaders. Standing in front of this monument I felt transported into ancient Greece. The feeling was very surreal.\nTheatre of Dionysos - It is on the southern slope of the Acropolis and the site of two performance venues.
The wide semicircle of audience seats is cut out of the natural rock and can seat about 5,500 people.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-3", "d_text": "The Municipality of Athens (also City of Athens) had a population of 664,046 (in 2011) within its administrative limits, and a land area of 38.96 km2 (15.04 sq mi). The urban area of Athens (Greater Athens and Greater Piraeus) extends beyond its administrative municipal city limits, with a population of 3,090,508 (in 2011) over an area of 412 km2 (159 sq mi). According to Eurostat in 2011, the functional urban area (FUA) of Athens was the 9th most populous FUA in the European Union (the 6th most populous capital city of the EU), with a population of 3.8 million people. Athens is also the southernmost capital on the European mainland. The heritage of the classical era is still evident in the city, represented by ancient monuments and works of art, the most famous of all being the Parthenon, considered a key landmark of early Western civilization. The city also retains Roman and Byzantine monuments, as well as a smaller number of Ottoman monuments. Athens is home to two UNESCO World Heritage Sites, the Acropolis of Athens and the medieval Daphni Monastery. Landmarks of the modern era, dating back to the establishment of Athens as the capital of the independent Greek state in 1834, include the Hellenic Parliament and the so-called "architectural trilogy of Athens", consisting of the National Library of Greece, the National and Kapodistrian University of Athens and the Academy of Athens. Athens is also home to several museums and cultural institutions, such as the National Archeological Museum, featuring the world's largest collection of ancient Greek antiquities, the Acropolis Museum, the Museum of Cycladic Art, the Benaki Museum and the Byzantine and Christian Museum.
Athens was the host city of the first modern-day Olympic Games in 1896, and 108 years later it welcomed home the 2004 Summer Olympics.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "World Heritage Site\nA World Heritage Site is a place that is listed by the United Nations Educational, Scientific and Cultural Organization (UNESCO) as being of special cultural or physical significance. The list is maintained by the international World Heritage Programme administered by the UNESCO World Heritage Committee, composed of 21 UNESCO member states which are elected by the General Assembly.\nThe following is a list of UNESCO World Heritage Sites in 23 countries.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-1", "d_text": "#8 Taj Mahal, India\nKnowing the story of the Taj Mahal, we could call it the “mausoleum of love.” This historical site in India attracts thousands of visitors yearly to admire the architecture and witness the great love that the emperor Shah Jahan had for his beloved wife, Mumtaz Mahal.\n#9 Great Barrier Reef, Australia\nThe biggest coral reef in the world! And how could it not be on the UNESCO list? It contains over 2,900 coral reefs, 1,500 species of fish, and 900 islands, making it one of the largest and most colorful places on Earth.\n#10 Petra, Jordan\nIt is considered a place of spirituality and one of the Seven Wonders of the World.
The temples, theatres, and tombs built inside the rocks make the area one of the most famous archaeological sites in the world. It is located between the Red Sea and the Dead Sea, and thousands of people visit the site annually.\n#11 Machu Picchu, Peru\nBetween the Peruvian Andes and the Amazon Basin, 2,430m above sea level stands the Historic Sanctuary of Machu Picchu, which is among the most significant achievements of the Inca.\n#12 Old Havana, Cuba\nThe Baroque and neoclassical monuments give the historic center of the city a remarkable character and outstanding architecture. It is a must-visit city!\n#13 Great Wall, China\nThe Great Wall is considered a masterpiece and also has a symbolic significance, as it was created to protect China. The wall is unique because of the methods used, in different times and places, to build walls, fortresses, passes, and beacon towers.\n#14 Itsukushima Shinto Shrine, Japan\nThis shrine celebrates beauty, nature and human creativity. What is the best way to experience it? Take a boat cruise around the complex so that you can see all 20 buildings.\n#15 Acropolis, Athens, Greece\nAll the ancient history of Athens is illustrated in this monumental complex. The Acropolis is one of the most important ancient sites in the world and welcomes thousands of visitors every year.\n#16 Cinque Terre, Italy\nCinque Terre is the name that represents the five villages of Monterosso, Vernazza, Corniglia, Manarola, and Riomaggiore.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-1", "d_text": "This is one of the most unique monasteries in the country when it comes to architecture, along with the incredible frescoes that await within its walls.
It became a UNESCO World Heritage Site in 1979, and rightly so.", "score": 15.758340881307905, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "The Acropolis of Athens, Greece, a monument of great architectural and historic significance, came in second after the Angkor site in Cambodia on a list composed recently by the American news network CNN of the most beautiful world heritage sites.\nCNN made this list after the announcement that two people undertook the purchase of the most expensive vacation ever. The men, a Chinese student and an Italian businessman, will visit all 962 UNESCO World Heritage Sites in two years, and the total cost of this luxury expedition, organized by the website VeryFirstTo.com, will surpass $1.5 million.\nThe survey listed the top 20 sites in the world. Of the Acropolis, CNN wrote: “The ancient Greek monument is enchanting whether someone walks on the rock or he admires it from a distance”.\nAccording to CNN the most beautiful World Heritage Sites are the following:\n1. Angkor, Cambodia\n2. Acropolis, Greece\n3. Bagan, Myanmar (Burma)\n4. Galápagos Islands, Ecuador\n5. Göreme National Park and the Rock Sites of Cappadocia, Turkey\n6. Great Barrier Reef, Australia\n7. Hampi, India\n8. Iguazu National Park, Brazil and Argentina\n9. Los Glaciares National Park, Argentina\n10. Machu Picchu, Peru\n11. Mont-Saint-Michel, France\n12. Petra, Jordan\n13. Pyramids of Giza, Egypt\n14. Rapa Nui, Chile\n15. Serengeti National Park, Tanzania\n16. Sigiriya, Sri Lanka\n17. Tulum, Mexico\n18. Valletta, Malta\n19. Venice and its lagoon, Italy\n20. Yellowstone National Park, United States.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-3", "d_text": "The National Library also forms part of the so-called “Neoclassical Trilogy” of the city of Athens. It consists of three solid parts, of which the middle and largest one houses the reading room.
To enter this part, one has to pass through a Doric-style row of columns (modelled on the Temple of Hephaestus in the Ancient Agora of Thission), after climbing a monumental curved double Renaissance-style staircase.\nThe National and Kapodistrian University of Athens is the last part of the Neoclassical Trilogy. It was designed by the Danish architect Christian Hansen. In 1932, the University of Athens was officially named the National and Kapodistrian University of Athens, in honour of Ioannis Kapodistrias, the first governor of Greece after the nation’s independence. Today, this building houses the Rectorate, the Senate, the Great Hall of Ceremonies and important central services. Its forecourt, the Propylaea, is socio-historically significant as it has served as a main site for political rallies and demonstrations by students and other social groups involved in social rights movements.\nOn the hill of the Acropolis we will visit the Architectural Masterpieces of the Golden Age of Athens: The Propylaea, the Temple of Athena Nike, the Erechtheion and finally the Parthenon, a temple which is dedicated to the maiden goddess Athena, whom the people of Athens considered their patron. Its construction began in 447 BC when the Athenian Empire was at the height of its power. It was completed in 438 BC, although decoration of the building continued until 432 BC. It is the most important surviving building of Classical Greece, generally considered the culmination of the development of the Doric order. Its decorative sculptures are considered some of the high points of Greek art. The Parthenon is regarded as an enduring symbol of Ancient Greece, Athenian democracy and western civilization, and one of the world’s greatest cultural monuments.\nNew Acropolis Museum\nThe Acropolis Museum is an archaeological site-specific museum, housing more than 3,000 famous artefacts from the Athenian Acropolis, the most significant sanctuary of the ancient city.
Architect Bernard Tschumi’s new Acropolis Museum replaced the old Museum on the Rock of the Acropolis.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "The Historic Areas of İstanbul, situated on a peninsula surrounded by the Sea of Marmara, Boğaziçi (Bosphorus), and Haliç (Golden Horn), were inscribed on the UNESCO World Heritage List in 1985.\nİstanbul is the only city situated on two continents in the world. The Historic Areas of İstanbul are represented by four main areas: Sultanahmet Archaeological Park, Süleymaniye Conservation Area, Zeyrek Conservation Area and Land Walls Conservation Area. These areas differ from each other in terms of the periods and characteristics of the cultural properties that they house, and they display the urban history of İstanbul.\nGöreme National Park and Cappadocia were inscribed on the World Heritage List in 1985 as 7 parts: Göreme National Park, Derinkuyu Underground City, Kaymaklı Underground City, Karlık Church, Theodore Church, Karain Columbaries and Soğanlı Archaeological Site.\nThe most significant feature of Göreme National Park and the Rock Cut Cappadocia Region is the existence of plenty of fairy-chimneys, formed by the wind and the rain water. The columbaries on the high slopes of Soğanlı, Zelve and Üzengi Valleys, and the monk cells carved in the depths of the valleys, add value to the site.\nThe first Turkish buildings inscribed on the World Heritage List are the Ulu (Great) Mosque and Hospital of Divriği. This building complex was commissioned in the 13th century by Ahmet Shah and his wife Melike Turan of the Principality of Mengücekli.
Renowned for its monumental architecture and the traditional stone-carving decoration of Anatolia, this masterpiece, with its two-domed mosque, hospital and tomb, was inscribed on the UNESCO World Heritage List in 1985.\nFounded around 1650 BC, Hattusha was the capital of the Hittite Empire and became the focus of the arts and architecture of that time. It has been on the World Heritage List as a cultural asset since 1986.\nHattusha is an open-air archaeological museum consisting of two sites, the Lower City and the Upper City. Visible at the Lower City are the remains associated with civic life.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "The Great Temple is the principal cult building of the city. Due to its two cult rooms, this temple is believed to have been devoted to the Storm God Teşup and the Sun Goddess of the city of Arinna, the greatest gods of the Empire.\nSituated in the Kahta county of Adıyaman province and described as the sacred place of the Commagene Kingdom, with its enchanting statues standing ten metres high and inscriptions that are several metres long, Nemrut Mountain was inscribed on the World Heritage List as a cultural asset in 1987.\nNemrut Mountain houses the most majestic places of worship belonging to the Hellenistic Era in ancient Anatolia. According to the inscriptions, Antiochus I had a monumental tomb, a tumulus of cut stones over the tomb, and terraces along the three edges of the tumulus built in order to show his gratitude to the gods and his ancestors.\nXanthos, which was the capital of Lycia dating back to the 3000s BC, is known to be the largest administrative centre of Lycia during antiquity. Letoon, which was inscribed on the World Heritage List together with Xanthos in 1988, was one of the most prominent religious centres in antiquity.\nThe archaeological value of Xanthos and Letoon makes them very important parts of world heritage.
The sites are about 4 km apart and they include the stone inscriptions on which the longest and most important texts in the Lycian language are written.\nThe sacred Hierapolis of Phrygia, one of the antique cities of the Aegean, was inscribed on the UNESCO World Heritage List in 1988. The ancient city of Hierapolis is believed to have been founded by Eumenes II, the King of Pergamum, in the 2nd century BC, and to have been named after Hiera, the beautiful wife of Telephos, the legendary founder of Pergamon.\nThe city was attached to the Asia province of the Roman Empire in 129 BC and administered by proconsuls. The city saw its most brilliant years between 96 and 162 AD and it was attached to Pisidia Pacatiana in the 3rd century AD.\nSafranbolu, a unique Anatolian city that brings history to life through its mosques, market, neighbourhoods, streets and historic houses, was inscribed on the UNESCO World Heritage List in 1994.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Athens was named after Athena, the goddess of wisdom. Proof of the city's ancient flowering is everywhere: in the remains of monuments, statues and sacred places that still command respect as relics of one of the most important periods in history.\nThe Poseidon Adventure in Art\nIn Greek mythology, Poseidon is the god of the sea, so it is only fitting that this superbly cast bronze statue was recovered from the bottom of the Aegean Sea, where it lay for centuries after a shipwreck off Cape Artemision. The two-meter-high figure stands with arms spread wide, striding forward on the left foot. The right hand once held a trident, and the unknown sculptor was clearly a master at rendering the complicated balancing act behind a seemingly simple throwing gesture.\nThis work of art is one of a number of fascinating bronzes in the National Archaeological Museum of Athens.
The museum's collection, which includes pieces from the prehistoric era onward, is among the world's finest collections of Greek art. A renovation closed the museum for one and a half years, but it reopened before the 2004 Olympic Games.\n260 meters above the city, the Acropolis (the \"high city\") is not only the highest point of Athens but also the main draw for many visitors to Greece. It is the oldest known settlement in Greece and was a holy place for ancient Athens.\nIn the period from 448 to 420 BC, the outstanding Athenian statesman Pericles commissioned four new monuments on the Acropolis on the site of earlier ruins, with the Athenian sculptor Phidias directing the construction and decoration. The Ionic Erechtheum includes the Porch of the Caryatids, with columns in the form of monumental female figures. The Ionic temple of Athena Nike, dedicated to Athena as the goddess of victory, was built during the Peloponnesian War to commemorate the Greek victory over the Persians in the Battle of Plataea. The Propylaea, the gateway to the Acropolis, combining both Doric and Ionic columns, replaced an earlier version destroyed by the Persians. And of course, the crowning monument of the Acropolis remains the Parthenon.\nThe Parthenon, designed by Iktinos and Kallikrates, took 15 years to build.", "score": 13.897358463981183, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "In our last post, we celebrated the recent addition of ancient Ephesus to the UNESCO World Heritage List by describing the various sites of such distinction that we visit on our tours and cruises of Turkey.\nThe next post - incidentally - will be about Italy, where the Arab-Norman heritage of Palermo and surroundings received the same honour this year.\nBut, moving from East to West, there's some ground to cover first, some seas to cross and a whole country to deal with: Greece.
The \"cradle of western civilisation\" is home to a host of immensely significant sites, places of importance, interest, beauty and impact. Not all of them reflect the civilisation we call Classical Greece - they range from prehistoric citadels via Classical temples to Byzantine monasteries and beyond.\nWORLD HERITAGE SITES ON OUR TOURS AND CRUISES IN GREECE\nThe most obvious image of Greece and one of the most famous architectural monuments in the world, the Acropolis is the sacred rock in the heart of the city, ancient and modern. Settled since prehistory, it became the citadel of a Bronze Age realm and later the formal religious centre of the Classical city. Its redesign, masterminded by the political leader Perikles and the artist Pheidias, began in 450s BC, when Athens was at the height of her wealth and power. The main monuments than built include the awesome Propylaia, the ornate Temple of Nike, the highly original Erechtheion and – of course – the mighty Parthenon. A visit to the Acropolis should also include the shrines and sanctuaries along its slopes – and the state-of-the-art Acropolis Museum with its wonderful collections.\nThe Acropolis, in its full historical and urban context, is a central highlight on our Athens tour, and can easily be visited before or after our Peloponnese tour, which starts and ends in Athens. Guests on both tours could also easily add a visit to the UNESCO-listed monasteries at Daphni or Osios Loukas – or even a day trip to the Archaeological site of Delphi, the famous oracular sanctuary to Apollo, another World Heritage site.\nThe Peloponnese, the legendary and historical peninsula that makes up the southern part of the Greek Mainland, is an area of immense historical and cultural wealth.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-3", "d_text": "The Neolithic \"city\" of Çatalhöyük was renowned for its extraordinary arts and crafts, and the earliest finds were from 7,400 BC. 
The settlement is regarded internationally as a key to understanding the beginnings of civilisation and agriculture. The social organisation of the Neolithic Site of Çatalhöyük and its urban plan are believed to represent the ideals of equality. The Neolithic Site of Çatalhöyük was inscribed on the World Heritage List in 2012.\n\"Pergamon and Its Multi-layered Cultural Landscape\", the only capital city from the Hellenistic period, containing layers of the Hellenistic, Roman, Eastern Roman and Ottoman periods, was inscribed on the World Heritage List of UNESCO in 2014.\nThe areas inscribed on the World Heritage List consist of nine components: Pergamon City (multi-layered city), Kybele Sanctuary, Ilyas Tepe, Yigma Tepe, İkili Tumuli, Tavşan Tepe, X Tepe, A Tepe and Maltepe Tumulus. The ancient Pergamon settlement at the top of the Kale Hill, the capital of the Hellenistic Attalid Dynasty, represents an outstanding example of urban planning of the Hellenistic period with its monumental architecture. The Temple of Athena, the steepest theatre of the Hellenistic period, the library, the Great Altar of Pergamon, the Dionysus Temple, the agora, gymnasiums and a high-pressure water pipeline system are the most outstanding examples of this planning system and architecture in the period.
The Great Altar of Pergamon and many other works produced by the Pergamon School of Sculpture represent the climax of the art of sculpture in the Hellenistic period.\nBursa, the first capital of the Ottoman Empire, located on the north-western slopes of Uludağ Mountain, and Cumalıkızık, founded as a waqf village during the same period, were inscribed on the UNESCO World Heritage List in 2014.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "In its July 15th 2005 meeting in Durban, South Africa, the World Heritage Committee inscribed Megiddo and two other biblical tells in Israel — Hazor and Beersheba — on UNESCO's World Heritage List.\n\"Tels, or pre-historic settlement mounds,\" says the World Heritage Committee, \"are characteristic of the flatter lands of the eastern Mediterranean, particularly Lebanon, Syria, Israel and Eastern Turkey. Of more than 200 tells in Israel, Megiddo, Hazor and Beer Sheba are representative of tells that contain substantial remains of cities with biblical connections. The three tells also present some of the best examples in the Levant of elaborate Iron Age, underground water collecting systems, created to serve dense urban communities. Their traces of construction over the millennia reflect the existence of centralized authority, prosperous agricultural activity and the control of important trade routes.\" (http://whc.unesco.org/en/list/1108)\nThe Advisory Body Evaluation explains the nomination in the following words:\n\"Megiddo is one of the most impressive tells in the Levant. Strategically sited near the Aruna Pass, overlooking the fertile Jezreel Valley and with abundant water supplies, from the 4th millennium BC through to the 7th century BC, Megiddo was one of the most powerful cities in Canaan and Israel, controlling the Via Maris — the main international highway connecting Egypt to Syria, Anatolia and Mesopotamia.
Epic battles that decided the fate of western Asia were fought nearby.\"\n\"Megiddo also has a central place in the Biblical narrative, extending from the Conquest of the Land through to the periods of the United and then Divided Monarchy and finally Assyrian domination....\"\n\"Megiddo is said to be the most excavated tel in the Levant, its twenty major strata contain the remains of around 30 different cities....\"\n\"Megiddo ... represents a cornerstone in the evolvement of the Judeo-Christian civilization through its central place in the biblical narrative, its formative role in messianic beliefs, and for its impressive building works by King Solomon.\"\nAccording to the Advisory Body Evaluation, the inscription of Megiddo and the two other tells as World Heritage Sites is based on four criteria:", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-2", "d_text": "The city is represented as three parts in the World Heritage List; Çukur, Kıranköy and Bağlar.\nThe city has a known history that dates back to 3000 BC and is located in a region which was ruled by the Hittites, the Phrygians, the Lydians, Persians, Hellenistic Kingdoms (Ponds), Romans, Seljuks, principalities of Çobanoğlu and Candaroğlu, and the Ottomans respectively.\nThe Ancient City of Troy, famous for being the site of Trojan War that Homer described in his epic poem The Iliad, was inscribed on the World Heritage List in 1998.\nWith its history dating back to 3000 BC, it is one of the most famous archaeological sites of the world. It is located within the boundaries of Çanakkale province. According to the foundation legend of Troy, the sea goddess Tethys and the titan of Atlantic Sea, Oceanus, had a daughter called Electra. Electra would become Zeus’s wife and would give birth to Dardanus. 
Dardanus’ son Tros would found the city called Truad, and his son Ilus would found the city of Troy.\nThe Selimiye Mosque and Complex are located in Edirne, the capital of the Ottoman Empire before the conquest of İstanbul, and were inscribed on the UNESCO World Heritage List in 2011.\nThe mosque is visible from all parts of the city in all its splendour. With its monumental dome and four slender minarets, the mosque was designed and built by Mimar Sinan, the world-renowned royal architect. The construction of the mosque started in 1568, lasted seven years and was completed in 1575. Thousands were employed during the construction. Considered the most important masterpiece of Ottoman art, the mosque was regarded by Mimar Sinan as his \"masterwork\".\nÇatalhöyük has been renowned as one of the earliest settlements of the Neolithic Era, and sheds light on the dawn of human settlement with unique examples of the earliest domestic architecture and landscape painting as well as the sacred objects of the mother-goddess cult. Çatalhöyük is in the Çumra county of Konya province, and it was discovered in 1958.
Comprehensive scientific studies and excavations have been carried out on various dates since then.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-13", "d_text": "It is only through this identification that the sites will gain the ‘competitive advantage elements’ (see Liwieratos, 2009) that are essential for their economic and social sustainability.\n1 According to the Hellenic Statistical Authority (\n2 Some examples include the site of Emporios on the island of Chios (Archontidou-Argyri and Kokkinoforou, 2003), the site of Palamari on Skyros (Parlama, 2006), the Paliomonastiro in Achaia\n3 In Northern Greece, the archaeological park of Dion (\n106 ARIS TSARAVOPOULOS and GELY FRAGOU\n4 Spain (\nArticle 3, entitled ‘Content of protection’, stipulates, among other things, that the protection of the country’s cultural heritage ‘consists in […] its preservation and prevention of destruction […] its enhancement and integration into contemporary social life’ (Archaeological Law, 2002).\nArchaeological Law 2002. Law No. 3028/2002, On the Protection of Antiquities and Cultural Heritage in General (English translation) [online] [accessed 10 June 2012]. Available at:\nArchontidou-Argyri, A. and Kokkinoforou, M. eds. 2003. Emporio. A Settlement of the Early Historical Times. Works of Rehabilitation. Chios: Ministry of Culture, 20th Ephorate of Prehistoric and Classical Antiquities. Bevan, A., Conolly, J., and Tsaravopoulos, A. 2008. The Fragile Communities of Antikythera. Archaeology\nInternational, 10: 32–36.\nCaltsas, N., Vlachogianni, E., and Bougia, P. eds. 2012. The Antikythera Shipwreck. The Ship, the Treasures, the\nMechanism, Catalogue of the Exhibition in the National Archaeological Museum from April 2012 to April\n2013. Athens: National Archaeological Museum.\nCernea, M. M. 2001. Cultural Heritage and Development: A Framework for Action in the Middle East and\nAfrica. Washington, DC: World Bank.\nColdstream, N. and Huxley, G. eds. 
1972.", "score": 11.600539066098397, "rank": 95}, {"document_id": "doc-::chunk-4", "d_text": "In the near future, Philip's Palace and a second museum will be added to the site's attractions.\nMeteora is one of Greece's best-known sites and perhaps its most picturesque, famous for its series of eremite monasteries, set in seemingly impossible locations on the tops of steep and tall rock pillars overlooking the fertile Plain of Thessaly. Monastic activity here began in the 11th century, but reached a peak of activity in the 14th to 16th. In Meteora's heyday, there were 24 monasteries, of which four are still occupied. They are fascinating not just for their incredible setting, but also for the fine examples of 15th and 16th century fresco paintings preserved in the chapels and attendant buildings.\nThe Archaeological Site of Philippi, another visit on From the Slopes of Mt Olympus to the Shores of the Aegean, is a candidate for future UNESCO listing (I believe it will be Greece's next site to reach that stage), as is Mt. Olympus itself. Intrepid visitors, as long as they are male, could also add a visit to the Monks' Republic of Mt. Athos, another listed site.\nSurprisingly enough, our Crete tour includes no World Heritage sites so far. The reason is a deplorable one: none of the island's remarkable cultural, archaeological and historical heritage has been granted UNESCO recognition yet. Over time, that might change: the Minoan Palaces of Knossos, Phaistos, Malia, Zakros and Kydonia are candidates, as are the Venetian fortifications of Iraklio, Rethymno and Chania.\nThe same lack applies to our Cruising to the Cyclades, but is certainly not a reflection of a lack of fascinating places to see. No applications have been made so far, but I foresee that the Bronze Age city of Akrotiri on Santorini will one day receive UNESCO recognition.\nOur second great island cruise, Cruising the Dodecanese, is a different affair. 
Three places along this grand itinerary have already been added to UNESCO's World Heritage list.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "There are over 1,000 UNESCO World Heritage Sites to choose from, all rich in history, culture and architecture. The United Nations selects these sites based on extensive research of historical, cultural and scientific importance. We’ve selected some of our favorite ones, which also happen to be breathtakingly beautiful and popular tourist destinations.\nHere are some of the world’s most beautiful UNESCO sites to visit.\n30.) Taj Mahal\nThe illustrious Taj Mahal is a mausoleum built by the Mughal emperor Shah Jahan to bury his favorite wife.\n29.) Machu Picchu\nMachu Picchu is one of the biggest attractions in all of South America and the world. It was one of the great cities of the Inca Empire, and it sits nearly 8,000 feet high.\n28.) Old Havana\nThe architecture that makes you feel like you’ve stepped back in time, plus the historical significance of the Cuban city center, is why this is a UNESCO World Heritage site.\n27.) Pyramids of Giza\nThe pyramids are a remnant of the powerful Egyptian empire from over 4,500 years ago.\n26.) Great Wall\nThe Great Wall attracts more than ten million travelers a year to see its almost otherworldly vastness. The wall stretches for more than 13,000 miles and can be viewed from space.\n25.) Acropolis\nLocation: Athens, Greece\nThis UNESCO heritage site is the site of one of the biggest historical citadels on earth, which housed several ancient, culturally significant buildings.\n24.) Cinque Terre\nCinque Terre is more than just a beautiful, popular tourist destination; these five coastal Italian villages are a preserved part of history.\n23.) Vatican City\nVatican City is considered the world’s smallest country and has immense religious and historical significance.
The country is also home to some of the world’s most treasured art and artifacts from important historical eras like the Renaissance and the Baroque.\n22.) Giant’s Causeway\nLocation: Ireland, United Kingdom\nOne of the most unusual UNESCO World Heritage sites, the Giant’s Causeway is a collection of rock formations that are the result of ancient volcanic activity.\n21.) Chichen Itza\nMexico is home to many ancient ruins that are remnants of empires past; Chichen Itza is one of the best Maya ruins to see in the country.\n20.)", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-2", "d_text": "The monuments visible at Olympia include the famous Temple of Zeus, once home to the gold-and-ivory statue that was one of the Seven Wonders of the Ancient World, the venerable Temple of Hera, the bouleuterion or council chamber, an ancient hotel, the training facilities and – of course – the mother of all stadiums. The fact that all of the Greek World attended the Games, which took place every four years, meant that the participating city states all tried to be visibly represented and commemorated on the site, making it a microcosm of what ancient Greece was.\nThe site also includes a superb museum, full of first-rate sculpture, weaponry, athletic equipment and much more, including the wonderful statue of Hermes by Praxiteles - one of the very few surviving pieces by a famous ancient sculptor.\nThe Temple of Apollo Epikourios at Bassai\nThis extraordinary and romantic monument is one of the least-visited World Heritage sites in the country, due to its remote location in the rugged mountains of Arcadia.
The same remoteness has ensured its extraordinary preservation, comparable to the better-known Greek Temples of Sicily, as it was never used as a quarry.\nThe modernist tent-like structure that currently covers the site to protect it from erosion adds to the strange and out-of-this world atmosphere that prevails at this near-perfect example of a 5th century BC Doric edifice. Getting to Bassai entails a drive through some of the Peloponnese's most remarkably scenery and some of the region's famously beautiful mountain villages.\nThe Archaeological Site of Mystras\nMystras is one of Greece's Medieval marvels. Initially founded in 1249 by William of Villehardouin, who erected the fortress on the hilltop above, the site, located near Sparta, became the Byzantine capital of the Peloponnese, especially in the 13th to 15th centuries, when it was a centre of thought, faith and art. Today, what is left of the once-busy town are the castle, the bulking Palace of the Despots, and especially numerous chapels, churches and monasteries, many of them richly decorated with fresco paintings of the Palaiologan Period. The site offers wonderful views over the plain of the river Eurotas and modern Sparta.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-3", "d_text": "Around the Palace Tholos graves have been excavated with remarkable objects, the closest being found at a distance of 80 metres from the Central Palace.\nThe Palace is a complex of buildings with a total of 105 ground floor apartments and other public spaces. It consists of four main buildings (western, central, northeast & wine warehouse), as well as some smaller buildings. 
The most important part is a large rectangular \"Throne Room\" with a circular hearth; there are also the bathroom with its clay bathtub and warehouses with numerous storage vessels.\nTemple of Apollo Epikourios\nOn the bare rocky slopes of Mount Kotilio stands one of the most important and imposing temples of antiquity, dedicated to Apollo Epikourios (\"Apollo the Helper\"). The temple is situated in a prominent position and is on the U.N.E.S.C.O. list of World Cultural Heritage sites along with the Egyptian Pyramids, the Parthenon and other monuments worldwide.\nThe Temple of Apollo Epikourios is one of the best surviving monuments of classical antiquity. In particular, it is the best preserved after the Temple of Hephaestus in Athens. Of all the temples in the Peloponnese, after the Temple of Tegea, it could take first place for the quality of its marble and its harmonious ensemble.\nThe temple was dedicated to Apollo Epikourios by the inhabitants of Figalia because they overcame a plague epidemic. The inhabitants of Figalia had erected a temple in honour of Apollo Vassita in the 7th century B.C., and worshipped him under the name Epikourios, the helper in war or illness. He was given this name during the wars against the Spartans around 650 B.C. The final temple was built during the second half of the 5th century BC (420-410) by Iktinos, who was also the architect of the Parthenon, and for this reason it is sometimes referred to as the Parthenon's Twin.\nThe construction managed to combine many iconographic characteristics that reflect the conservative religious tradition of the Arcadians embracing the new features of the classical era. It is characterised by a multitude of original outer and internal features which make it a unique monument in the history of ancient Greek architecture.
It has a Doric pavilion of local limestone.", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-0", "d_text": "Decision : 32 COM 8B.56\nExamination of nominations and minor modifications to the boundaries of natural, mixed and cultural properties to the World Heritage List - Historic Centres of Berat and Gjirokastra (Albania)\nThe World Heritage Committee,\n1. Having examined Documents WHC-08/32.COM/8B.Add and WHC-08/32.COM/INF.8B1.Add,\n2. Inscribes the Historic Centres of Berat and Gjirokastra, Albania, on the World Heritage List on the basis of criteria (iii) and (iv);\n3. Adopts the following Statement of Outstanding Universal Value:\nThese two fortified historic centres are remarkably well preserved, and this is particularly true of their vernacular buildings. They have been continuously inhabited from ancient times down to the present day. Situated in the Balkans, in Southern Albania, and close to each other, they bear witness to the wealth and diversity of the urban and architectural heritage of this region.\nBerat and Gjirokastra bear witness to a way of life which has been influenced over a long period by the traditions of Islam during the Ottoman period, while at the same time incorporating more ancient influences. This way of life has respected Orthodox Christian traditions which have thus been able to continue their spiritual and cultural development, particularly at Berat.\nGjirokastra was built by major landowners. Around the ancient 13th century citadel, the town has houses with turrets (the Turkish kule) which are characteristic of the Balkans region. Gjirokastra contains several remarkable examples of houses of this type, which date from the 17th century, but also more elaborate examples dating from the early 19th century.\nBerat bears witness to a town which was fortified but open, and was over a long period inhabited by craftsmen and merchants.
Its urban centre reflects a vernacular housing tradition of the Balkans, examples of which date mainly from the late 18th and the 19th centuries. This tradition has been adapted to suit the town's life styles, with tiered houses on the slopes, which are predominantly horizontal in layout, and make abundant use of the entering daylight.\nCriterion (iii): Berat and Gjirokastra bear outstanding testimony to the diversity of urban societies in the Balkans, and to longstanding ways of life which have today almost vanished.", "score": 8.086131989696522, "rank": 100}]} {"qid": 4, "question_text": "Can energy be created or destroyed during motion?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "conservation law\nconservation law, also called law of conservation, in physics, several principles that state that certain physical properties (i.e., measurable quantities) do not change in the course of time within an isolated physical system. In classical physics, laws of this type govern energy, momentum, angular momentum, mass, and electric charge. In particle physics, other conservation laws apply to properties of subatomic particles that are invariant during interactions. An important function of conservation laws is that they make it possible to predict the macroscopic behaviour of a system without having to consider the microscopic details of the course of a physical process or chemical reaction.\nConservation of energy implies that energy can be neither created nor destroyed, although it can be changed from one form (mechanical, kinetic, chemical, etc.) into another. In an isolated system the sum of all forms of energy therefore remains constant. For example, a falling body has a constant amount of energy, but the form of the energy changes from potential to kinetic. According to the theory of relativity, energy and mass are equivalent.
Thus, the rest mass of a body may be considered a form of potential energy, part of which can be converted into other forms of energy.\nConservation of linear momentum expresses the fact that a body or system of bodies in motion retains its total momentum, the product of mass and vector velocity, unless an external force is applied to it. In an isolated system (such as the universe), there are no external forces, so momentum is always conserved. Because momentum is conserved, its components in any direction will also be conserved. Application of the law of conservation of momentum is important in the solution of collision problems. The operation of rockets exemplifies the conservation of momentum: the increased forward momentum of the rocket is equal but opposite in sign to the momentum of the ejected exhaust gases.\nConservation of angular momentum of rotating bodies is analogous to the conservation of linear momentum. Angular momentum is a vector quantity whose conservation expresses the law that a body or system that is rotating continues to rotate at the same rate unless a twisting force, called a torque, is applied to it. The angular momentum of each bit of matter consists of the product of its mass, its distance from the axis of rotation, and the component of its velocity perpendicular to the line from the axis.", "score": 52.02336000975974, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Energy exists in many forms, such as heat, light, chemical energy, and electrical energy. Energy is the ability to bring about change or to do work. Thermodynamics is the study of energy.\nFirst Law of Thermodynamics: Energy can be changed from one form to another, but it cannot be created or destroyed. The total amount of energy and matter in the Universe remains constant, merely changing from one form to another. The First Law of Thermodynamics (Conservation) states that energy is always conserved, it cannot be created or destroyed. 
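The momentum and kinetic-energy bookkeeping described in the conservation-law article above lends itself to a quick numerical sketch. This is an editorial illustration, not part of either source text; the masses and velocities are invented example values, and the final-velocity formulas are the standard textbook results for a one-dimensional elastic collision:

```python
# Illustrative check: in a 1-D elastic collision, total momentum and
# total kinetic energy are the same before and after the collision.
# All numeric values here are made-up examples.

def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities after a 1-D elastic collision (textbook formulas)."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

def momentum(m, v):
    return m * v

def kinetic_energy(m, v):
    return 0.5 * m * v ** 2

# Hypothetical bodies: 2 kg at +3 m/s, 1 kg at -1 m/s.
m1, v1 = 2.0, 3.0
m2, v2 = 1.0, -1.0

v1f, v2f = elastic_collision_1d(m1, v1, m2, v2)

p_before = momentum(m1, v1) + momentum(m2, v2)
p_after = momentum(m1, v1f) + momentum(m2, v2f)
ke_before = kinetic_energy(m1, v1) + kinetic_energy(m2, v2)
ke_after = kinetic_energy(m1, v1f) + kinetic_energy(m2, v2f)

print(abs(p_before - p_after) < 1e-9)    # True: momentum conserved
print(abs(ke_before - ke_after) < 1e-9)  # True: kinetic energy conserved
```

The rocket example in the text follows the same bookkeeping: the forward momentum gained by the rocket is equal and opposite to the momentum carried away by the exhaust.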
In essence, energy can be converted from one form into another.\nThe Second Law of Thermodynamics states that \"in all energy exchanges, if no energy enters or leaves the system, the potential energy of the final state will always be less than that of the initial state.\" This tendency toward increasing disorder is commonly referred to as entropy. A watchspring-driven watch will run until the potential energy in the spring is converted, and not again until energy is reapplied to the spring to rewind it. A car that has run out of gas will not run again until you walk 10 miles to a gas station and refuel the car. Once the potential energy locked in carbohydrates is converted into kinetic energy (energy in use or motion), the organism will get no more until energy is input again. In the process of energy transfer, some energy will dissipate as heat. Entropy is a measure of disorder: cells are NOT disordered and so have low entropy. The flow of energy maintains order and life. Entropy wins when organisms cease to take in energy and die.\nPotential energy, as the name implies, is energy that has not yet been used, thus the term potential. Kinetic energy is energy in use (or motion). A tank of gasoline has a certain potential energy that is converted into kinetic energy by the engine. When the potential is used up, you're outta gas! Batteries, when new or recharged, have a certain potential. When placed into a tape recorder and played at loud volume (the only settings for such things), the potential in the batteries is transformed into kinetic energy to drive the speakers. When the potential energy is all used up, the batteries are dead.
In the case of rechargeable batteries, their potential is re-elevated or restored.", "score": 51.4148185888657, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "BCA Semester 1: Physics - Forms of Energy: Q & A\nQ(2013): (a) State Law of Conservation of Energy.\n(b) Write the names of different forms of energy.\nLaw of Conservation of Energy: This law states that energy can neither be created nor destroyed, but can change form. It means the total energy of an isolated system cannot change. The total energy E of a system can change only by amounts of energy that are transferred to or from the system,\ni.e. W = ΔE = ΔEmech + ΔEth + ΔEint\nwhere ΔEmech = change in mechanical energy,\nΔEth = change in thermal energy,\nΔEint = change in any other type of internal energy of the system.\n(b) Forms of Energy\n|Type of Energy|Description|\n|Kinetic|Due to motion of a body|\n|Potential|Due to position or configuration of the body|\n|Mechanical|Sum of potential and kinetic energy of a body|\n|Electrical|Due to electrical field around a body|\n|Chemical|Due to atoms and molecules of a body|\n|Magnetic|Due to magnetic field|\n|Nuclear|Due to binding force between sub-atomic particles|\n|Gravitational|Due to gravitational field|\n|Radiant|Due to radiation including light|\n|Thermal or Heat|Heat energy due to difference in temperature|\n|Intrinsic or Internal|Due to mass possessed by an object|\nQ (2011): Does kinetic energy of a body depend upon direction of motion? Justify your answer.\nAnswer: No. Kinetic energy is a scalar quantity. It does not depend on the direction of motion.
It depends on mass and magnitude of speed.", "score": 46.101676622508705, "rank": 3}, {"document_id": "doc-::chunk-1", "d_text": "Gravitational potential energy U = mgh\ng = acceleration due to gravity,\nh = height above the surface, m = mass of the body.\nWork-Energy Theorem\nAccording to this theorem, work done by all the forces acting on a body is equal to the change in kinetic energy of the body:\nWork done = Change in kinetic energy\nLaw of Conservation of Energy\nAccording to the law of conservation, energy can only be transformed from one form to another. It can neither be created nor destroyed. e.g., when an object is dropped from a height, its potential energy continuously converts into kinetic energy.\nWhen an object is thrown upwards, its kinetic energy continuously converts into potential energy. The total energy before and after transformation always remains constant.\nPE + KE = constant, or mgh + 1/2 mv² = constant\nTransformation of Energy\nThe conversion of energy from one form to another is known as the transformation of energy. The phenomenon of transformation of energy from a useful to a useless form is known as dissipation of energy.\n- Green plants prepare their own food (chemical energy) using solar energy through the process of photosynthesis.\n- When we throw a ball, the muscular energy which is stored in our body gets converted into kinetic energy of the ball.\n- When an athlete runs, the body’s internal energy is converted into kinetic energy.\nEinstein’s Mass-Energy Equivalence\nAccording to Einstein, neither mass nor energy of the universe is separately conserved, but they are interconvertible. The conversion is expressed by the equation E = mc²
How many different kinds of energy are there? We know that mechanical energy consists of kinetic and potential energy, but energy can also appear in the form of heat, light, electric fields, magnetic fields, or even nuclear energy.
When we are talking about mechanical systems we are only concerned with kinetic and potential energy. Friction converts kinetic energy into heat, and so it represents a net loss of mechanical energy. Once energy is converted into heat it is for all practical purposes lost forever, because the heat will just drift off into the environment. This is what the brakes on a car do. In order to stop the car, the friction produced by the brake pads must generate a quantity of heat equal to the kinetic energy of the car, and as a result the brakes get very, very hot.
The law of conservation of energy can be stated in three (equivalent) ways:
- Energy cannot be created or destroyed, only changed into different kinds of energy.
- The total energy of an isolated system is constant. (An isolated system has no energy or mass entering or leaving it.)
- The energy of a non-isolated system changes only by the amount added or removed. If the only energy involved is mechanical, this can be stated as W = ΔK + ΔPE, because the only way to change mechanical energy is to do work on the system. Doing work will either change the kinetic energy or the potential energy (or both) of the system.
Example: Rock falling off a cliff of height h (ignoring air friction)
Initially, the rock has a PE of mgh (the work it would require to raise it that high from ground level). At first this is also the total energy of the system, because potential energy is the only kind of energy the system has at that time.
As it falls it loses PE and gains KE, but always the total energy remains the same. Since it started with a total energy of mgh, this will always be the total energy.
On the way down, its energy will be a mixture of PE and KE, but will still add up to that original value of mgh:
This means that at a given height y we can calculate the velocity.", "score": 45.855149480100195, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Conservation of Kinetic and Potential Energy
Energy comes in many different forms. It can't be created or destroyed, but it can move between objects and change forms. The energy of motion of a moving object is called kinetic energy, and is given by the equation
KE = 0.5m(v^2)
Where m is the object's mass, and v is the object's velocity.
Another form of energy is called potential energy, which is the energy an object has from being at a spot in a field where it would rather not be. When you lift an object up from the ground, the energy you put into lifting it becomes gravitational potential energy, given by PE = mgh, where m is mass, g is acceleration due to gravity (9.8 meters per second squared), and h is height. Pulling opposite poles of magnets apart or pushing identical poles close together also produces its own kind of potential energy.
When you stop holding up an object, it starts to fall, and that gravitational potential energy transforms into kinetic energy (when it actually hits the ground, the energy gets dispersed in a complicated mess of heat and mechanical deformation).
Consider a 5 kg rock dropped from a height of 200 m: how fast is it moving when it lands? Whereas we could solve this problem using kinematic equations, conservation of energy makes these kinds of problems much easier.
The total energy isn't going to change from start to finish. It will just move entirely from potential to kinetic. At the start, it's all potential, given by mgh.
(5)(9.8)(200) = 9800 J That's joules by the way, the basic unit of energy.
This is all going to become kinetic energy, so:
9800 = 0.5m(v^2)
9800 = 0.5(5)(v^2)
3920 = (v^2)
sqrt(3920) = v = 62.6 m/s
This is an incredibly powerful trick.
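The worked example above (a 5 kg mass falling 200 m) can be reproduced in a short Python sketch using exactly the numbers from the text:

```python
import math

# Reproducing the worked example above: all potential energy becomes kinetic.
m, g, h = 5.0, 9.8, 200.0

pe = m * g * h            # starting potential energy: m*g*h = 9800 J
v = math.sqrt(2 * g * h)  # from 0.5*m*v^2 = m*g*h; note the mass cancels out

print(round(pe))    # 9800
print(round(v, 1))  # 62.6
```

Because the mass cancels, any object dropped from 200 m (ignoring air friction) lands at the same 62.6 m/s.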
Using this method, we could even find how fast the rock is moving at any particular height just by finding the potential at that height, subtracting it from the starting total, and setting the leftover equal to kinetic energy.
About The Author
|Physics Guru And General Math And Science Enthusia|
|I've always had a passion for learning science and then turning around and teaching it to anyone who will listen. My interest led me to an undergrad degree in physics, an attempt at a teaching credential (which unfortunately imploded at 95% completion in an administrative snarl), a jump to a Master...|", "score": 42.245118096640326, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "P1 - Conservation and dissipation of energy
1 of 11
P1.1 - Changes in energy stores.
- Energy - stored in different energy stores
- transferred by heating, waves, electric current, or a force when it moves an object.
- When an object falls and gains speed, its gravitational potential energy store decreases and its kinetic energy store increases.
- When a falling object hits the floor without bouncing, its kinetic energy store is transferred by heating to the thermal energy store of the object and surroundings.
Sound waves move away from the point of impact.
2 of 11
P1.2 - Conservation of energy.
- Energy - can't be created or destroyed!
- Conservation of energy applies to all energy changes.
- A Closed System = an isolated system where NO energy transfers take place in or out of the system.
- Energy can be transferred between energy stores within a closed system.
3 of 11
P1.3 - Energy and Work...
- Work done is when a force makes an object move.
- Energy transferred = work done.
- Work done = Force x distance
4 of 11
P1.4 - Gravitational potential energy stores.
- Increases when object moves up.
- Decreases when object moves down.
- Increases when lifted up as work is done.
- Gravitational field strength on Earth = 9.8 N/kg
- The GFS on the Moon is about 1/6 of Earth's.
5 of 11
Gravitational potential Energy Equation.
- E = m x g x h
6 of 11
P1.5 - Kinetic and elastic energy stores.
- On a moving object the kinetic energy store depends on its mass and speed.
- Kinetic energy store: E = 1/2 x m x v^2
- Elastic potential energy = energy stored in an elastic object when work is done on it.
- Elastic potential energy in a stretched spring: E = 1/2 x k x e^2
- e = extension of the spring (in the equation above)
7 of 11
P1.6 - Energy dissipation.
- Useful energy is good! It goes to the place we want it to go to!
- Wasted energy is NOT GOOD! It's transferred by an undesired pathway.
- Wasted energy is then transferred to the surroundings, making them warmer.
- As energy dissipates (spreads) it gets less useful!!", "score": 42.11798463920096, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "What Are the Laws of Energy?
The laws of energy that govern interactions between matter and energy, such as the transfer of heat from one body to another in the physical universe, are most fundamentally defined by the three laws of thermodynamics and Albert Einstein's discovery of his special and general theories of relativity.
Physics itself is built upon these laws, as well as the basic three laws of motion defined by Isaac Newton and first published in 1687, which explain the interaction of all matter. The field of quantum mechanics, which began to emerge in the early 20th century, also clarified special circumstances for the laws of energy at a sub-atomic scale, upon which much of modern civilization as of 2011 is founded.
One of the fundamental principles of the laws of energy made clear by the first law of thermodynamics is that energy is neither created nor destroyed. All forms of energy such as light or sound energy can be changed into other forms, and this was first revealed in the mid-1800s by the work of James Joule, a pioneering English physicist after whom the basic unit of energy, the joule, was named. After ten years of thinking about the nature of the relationship between matter and energy, Albert Einstein published his famous formula E = mc² in 1905, which stated that both matter and energy were versions of the same thing and could be changed into one another as well. Since the equation states that energy (E) equals mass (m) times the speed of light squared (c²), it was actually stating that, if you had enough energy, you could convert it into mass, and that mass could likewise be converted into energy.
The second law of thermodynamics defined the laws of energy by stating that, in any activity where energy was used, its potential diminished, or it became less and less available for further work. This reflected the principle of entropy and explained where energy went when heat or light escaped into the surroundings, which had puzzled humanity for centuries. Entropy is the idea that high levels of concentrated energy, such as that in fuel before it is burned, eventually spread out into space as waste heat and cannot be recovered.
It was in harmony with the first law of thermodynamics because energy was not being destroyed, but access to it was lost.
The third law of thermodynamics was clarified in 1906 by research conducted by Walther Nernst, a German chemist.", "score": 41.98302078288775, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "The work, W, done by a constant force on an object is defined as the product of the component of the force along the direction of displacement and the magnitude of the displacement.
Calculation of Work:
When calculating work, only the force that is applied in the direction of motion is considered. W = Fd cosθ
If the force and displacement are in the same direction, that is considered positive work.
Power is a measure of how quickly work is done.
Kinetic energy is energy of motion. All moving objects possess kinetic energy.
Gravitational Potential Energy:
Gravitational potential energy is the energy an object possesses due to its position.
Base Level:
The point that height is measured from. Any point can be used as a base level because the energy amount you calculate will be relative.
Conservation of Energy:
Energy cannot be created or destroyed; it may be transformed from one form into another, but the total amount of energy never changes.", "score": 41.514051324987186, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Mechanical energy is the energy possessed by an object due to its motion or due to its position. Mechanical energy can be either kinetic energy (energy of motion) or potential energy (stored energy of position).
Conservation of Energy
As with momentum, energy is conserved in all interactions. The law of conservation of energy says that energy cannot be created or destroyed, but can only be changed into other forms of energy. In the 20th century, this definition was expanded to include mass, since mass and energy are forms of the same thing.
In this page, we will only deal with mechanical energy.
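The work definition W = Fd cosθ given above is easy to sketch in Python; the 50 N force, 10 m displacement, and 60° angle below are assumed values chosen for illustration only:

```python
import math

def work_done(force, distance, angle_deg=0.0):
    """Work done by a constant force acting at angle_deg to the displacement."""
    return force * distance * math.cos(math.radians(angle_deg))

# Only the component of the force along the direction of motion does work.
print(round(work_done(50.0, 10.0, 60.0), 1))  # force at 60 degrees: half counts
print(round(work_done(50.0, 10.0), 1))        # force parallel to the motion
print(round(work_done(50.0, 10.0, 90.0), 1))  # force perpendicular: no work
```

A perpendicular force (such as the normal force on a sliding block) does no work at all, which is exactly what the cosθ factor encodes.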
As the potential energy decreases, the kinetic energy increases, which means the total energy (mechanical energy) remains the same before and after.
TEbefore = TEafter
In practice, an object does not conserve all of its energy; there is some energy loss. This energy is not destroyed, but changed into another form or absorbed by another object.
When you pull the weight at a constant velocity, approximately how much pulling force do you have to supply? (Here we can say only approximately, because there is some friction force involved in the pulleys. If the pulleys have weight then the pulling force will have to support an extra weight: that of the lower set of pulleys where the actual weight is attached.)
If you assume frictionless and weightless pulleys, when you pull with constant velocity, you can answer the question by counting the number of ropes on the lower pulley that support the weight. For instance, in Figure 2 there are four ropes that are holding up the weight, so the pull force that you have to supply is only 1/4 as much as the weight. However, if the rope is wound around only one set of reels as in activity 3, then the pulling force is only 1/2 as much as the weight.
Now if you attach the rope on the lower reel to start with as in Figure 4, then the pulling force is only 1/3 as much as the weight because only three ropes are supporting the weight. If you have more than two sets of reels, for example, use six reels arranged into three reels on each set as in Figure 5, then the pulling force is only 1/6 as much as the weight.
If you attach the top pulley to the ceiling, the ceiling will be pulled by the weight and the force you supply plus the weight of all the pulleys.", "score": 39.7958150062561, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "Energy is the object's capability to perform work. It is something that cannot be created or destroyed but can only be transformed.
An object loses energy when it performs work, whereas it gains energy when work is performed on it. Energy is broadly classified as kinetic energy and potential energy. Kinetic energy is the energy which an object contains because of its motion.
On the other hand, potential energy is the energy stored in an object because of its position or state of rest. As both forms of energy are measured in joules, people easily get confused between the two. So, read this article, which will help you to understand the differences between kinetic and potential energy.
Content: Kinetic Energy Vs Potential Energy
|Basis for Comparison||Kinetic Energy||Potential Energy|
|Meaning||Kinetic energy refers to the energy present in an object due to its property of being in motion.||The energy contained in an object by virtue of its position is called potential energy.|
|Transferability||Can be transferred between objects.||Cannot be transferred between objects.|
|Measured from||Place itself||Bottom|
|Environment-relative||Relative to the environment of the object.||Non-relative to the environment of the object.|
|Equation||0.5 mv^2, where m = mass and v = speed||mgh, where m = mass, g = gravity and h = height|
Definition of Kinetic Energy
Simply put, the energy of motion is kinetic energy. It is the work required to accelerate an object of a certain mass from the state of rest into motion. To speed up an object, we apply force, through which energy is transferred from one object to another, causing the object to move at a new and constant speed. The energy transferred is called kinetic energy, determined by the speed and mass of the object, i.e. the greater the mass and speed, the more kinetic energy it contains.
The kinetic energy of an object in motion with a certain velocity is the same as the work performed on it. All objects that are in motion, irrespective of horizontal or vertical motion, possess kinetic energy.
It is the energy which an object acquires owing to its state of motion. For example, a falling coconut, a flowing river, a moving car or bus, etc.", "score": 38.33839699931996, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "As we know, energy is the capacity of a physical system to perform tasks; without it, no work can be done. When work is done, energy gets transferred from one body to the other, since an external force or source is always applied in the process. At the same time, it should be noted that energy can neither be created nor destroyed. Almost everything in the world possesses energy, but that doesn't mean work is always being done. Energy can even be stored within a body. Based on an object's state, we mainly have two types of energy: one is kinetic energy and the other is potential energy. As both of them have the SI unit joule, people often find it difficult to differentiate between the two types of energy. Kinetic energy is the type of energy which a body possesses by virtue of being in motion. Contrary to this, the energy which a body possesses by virtue of its position is known as potential energy.
What is Kinetic Energy?
Kinetic energy is the type of energy which a body possesses by virtue of being in motion. In other words, we can say that kinetic energy keeps the object in motion. The motion of the object is evaluated in relation to the environment of the object. If the object changes its position with respect to the environment, then it is said to be in motion. When an object is at rest and we pull it, it may not move because of its heavy mass; to make it move, we apply more force. At last, the object is set in motion.
Throughout this process, the energy transferred from one body to the other to produce acceleration in an object of a certain mass is known as kinetic energy. Kinetic energy is directly dependent on speed and mass; the greater the speed and the mass, the more kinetic energy the body has. When the moving object finally stops, its kinetic energy is not lost but converted into other forms of energy, such as heat, which brings the object to rest. A misconception prevails that the energy comes to an end when the movement of the object stops, but that is not the case, as energy can neither be created nor destroyed.", "score": 37.011390729864864, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "The Big Idea
Energy is a measure of the amount of, or potential for, dynamical activity in something. The total amount of energy in the universe is constant. This symmetry is called a conservation law. Physicists have identified five conservation laws that govern our universe.
A group of things (we’ll use the word system) has a certain amount of energy. Energy can be added to a system: when chemical bonds in a burning log break, they release heat. A system can also lose energy: when a spacecraft “burns up” its energy of motion during re-entry, it releases energy and the surrounding atmosphere absorbs it in the form of heat. A closed system is one for which the energy is constant, or conserved. In this chapter, we will often consider closed systems; although the total amount of energy stays the same, it can transform from one kind to another.
We will consider transfers of energy between systems, known as work, in more detail in Chapter 8.", "score": 35.629318424614716, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "Examples may be seen above, and many others can be imagined (for example, the kinetic energy of a stream of particles entering a system, or energy from a laser beam adds to system energy, without being either work done or heat added, in the classic senses).
ΔE = W + Q + E_adv (3)
Where E_adv in this general equation represents other additional advected energy terms not covered by work done on a system, or heat added to it.
Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:
Epi + Eki = EpF + EkF
The equation can then be simplified further since Ep = mgh (mass times acceleration due to gravity times the height) and Ek = 1/2mv² (half mass times velocity squared). Then the total amount of energy can be found by adding Ep + Ek = Etotal.
Energy and thermodynamics
Internal energy is the sum of all microscopic forms of energy of a system. It is related to the molecular structure and the degree of molecular activity and may be viewed as the sum of kinetic and potential energies of the molecules; it comprises the following types of energy:
Type |Composition of internal energy (U) |
Sensible energy | The portion of the internal energy of a system associated with kinetic energies (molecular translation, rotation, and vibration; electron translation and spin; and nuclear spin) of the molecules. |
Latent energy | The internal energy associated with the phase of a system.
|
Chemical energy | The...", "score": 34.22302056161156, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "In the physical sciences, mechanical energy is the sum of potential energy and kinetic energy. It is the energy associated with the motion and position of an object. The law of conservation of mechanical energy states that in an isolated system that is only subject to conservative forces, the mechanical energy is constant. If an object is moved in the opposite direction of a conservative net force, the potential energy will increase, and if the speed (not the velocity) of the object is changed, the kinetic energy of the object is changed as well. In all real systems, however, non-conservative forces, like frictional forces, will be present, but often they are of negligible values, and treating the mechanical energy as constant can therefore be a useful approximation. In elastic collisions, the mechanical energy is conserved, but in inelastic collisions, some mechanical energy is converted into heat. The equivalence between lost mechanical energy and an increase in temperature was discovered by James Prescott Joule.
Many modern devices, such as the electric motor or the steam engine, are used today to convert mechanical energy into other forms of energy, e.g. electrical energy, or to convert other forms of energy, like heat, into mechanical energy.
Energy is a scalar quantity and the mechanical energy of a system is the sum of the potential energy, which is measured by the position of the parts of the system, and the kinetic energy, which is also called the energy of motion: E = U + K
The potential energy, U, depends on the position of an object subjected to a conservative force.
It is defined as the object's ability to do work and is increased as the object is moved in the opposite direction of the direction of the force.[nb 1] If F represents the conservative force and x the position, the potential energy of the force between the two positions x1 and x2 is defined as the negative integral of F from x1 to x2:
U = −∫ F dx (from x1 to x2)
The kinetic energy, K, depends on the speed of an object and is the ability of a moving object to do work on other objects when it collides with them.[nb 2] It is defined as one half the product of the object's mass with the square of its speed, and the total kinetic energy of a system of objects is the sum of the kinetic energies of the respective objects:
K = 1/2mv², summed over all objects in the system
The law of conservation of mechanical energy states that if a body or system is subjected only to conservative forces, the total mechanical energy of that body or system remains constant.", "score": 32.97706915165543, "rank": 15}, {"document_id": "doc-::chunk-2", "d_text": "Some examples of kinetic energy in daily life are the movement of a roller coaster, a ball or a car. For example, a demolition ball stores energy when it is held high without activity. Do you know that the working of a bow and arrow employs the first law of thermodynamics, which says that ‘energy can neither be created nor be destroyed; it can only be transferred from one form to another’? Hence, snow starts moving down the mountain swiftly. The planets revolving around the sun, the atoms spinning around the nucleus, a soccer ball that is moving or even a fish swimming are some of the examples of systems that possess mechanical energy. Like a rock about to fall off a precipice, a fruit in a tree has the ability to detach at any time due to the attraction exerted by gravitational forces on Earth. This is it for now. They include watching television, washing clothes, heating and lighting the home, taking a shower, working from home on your laptop or computer, running appliances and cooking.
The natural world has used the sun's energy since the beginning of time, and while there has been lots of discussion about this, the truth is that the sun is both a problem and a solution. Some examples of potential energy that we can find day to day are a swing, a demolition ball, a trampoline, a balloon or a spring-loaded pistol, among others. When the bowstring is released, the arrow moves forward very quickly. Wind energy can be converted by a wind turbine that does just that. A dart gun works on the principle of elastic potential energy. But energy cannot exist on its own. The more an archer pulls back, the more potential energy will be gained by the limbs of the bow due to stretching. As we now know, the energy present in an object held at rest is called potential energy. Water falls from the sky, converting potential energy to kinetic energy. Where does the energy our body uses to do all this work come from? When two spherical bodies move at the same speed but have different mass, the larger mass body will... 2 - Roller coaster. Thus, the potential energy of the stone is converted into kinetic energy.", "score": 32.72977737460305, "rank": 16}, {"document_id": "doc-::chunk-1", "d_text": "If a car of mass M stops due to the retardation of a frictional force after traversing a minimum distance, then the work-energy theorem gives
Conservation of energy :
If a system is acted on by conservative forces, the total mechanical energy of the system is conserved, i.e. mechanical energy E = KE + PE = constant
i.e. E = T + U = constant
(under conservative forces).
Other forms of energy :
In addition to mechanical energy, there are other forms of energy e.g.
thermal energy, light energy, sound energy, electrical energy, chemical energy, nuclear energy etc.
If the system is under the action of non-conservative forces, the conservation law for mechanical energy does not hold, and a more general law, stated as "The total energy of the universe remains constant", holds. This simply means that energy may be transformed from one form to another. For example, in loudspeakers and electric bells the electrical energy is converted into sound energy, while in an electromagnet electrical energy is converted into magnetic energy, and for a ball falling on earth, the mechanical energy is converted into heat energy, etc.
Einstein's Mass Energy Equivalence :
According to Einstein, neither mass nor energy of the universe is conserved separately, but mass and energy are interconvertible according to the relation E = mc².
This relation is called Einstein's mass energy equivalence. Accordingly,
"Total (mass + energy) of universe is conserved". This is the most general law of conservation of energy.
According to mass energy equivalence, the body of rest mass m0 at rest has rest energy E0 = m0c²; therefore the kinetic energy of the body is KE = mc² − m0c², where m is the mass of the body moving with speed v, given by m = m0/√(1 − v²/c²).
Coefficient of Restitution :
When two bodies collide directly, the ratio of the relative velocity after collision to the relative velocity before collision is a fixed quantity. The quantity e = (v2 − v1)/(u1 − u2) is called the coefficient of restitution. Here u1, u2 are initial velocities and v1, v2 are final velocities of the two bodies. The value of e lies between 0 and 1. For a perfectly elastic collision e = 1 and for a perfectly inelastic collision e = 0.
If a body falls freely from a height H onto a floor and bounces back to a height h, then the velocity u just before impact is given by u = √(2gH),
Then initial velocity u is given by,\nand the final velocity v is given by\nElastic and Inelastic Collisions :\nElastic collisions :\nA collision is said to be elastic if the total kinetic energy before and collision remains the same.", "score": 32.559033348724746, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "Conservation of mechanical energy equation\nWhat is the formula for conservation of mechanical energy?\nThe conservation of mechanical energy can be written as “KE + PE = const”. Though energy cannot be created nor destroyed in an isolated system, it can be internally converted to any other form of energy.\nWhat is the conservation of mechanical energy?\nIn physical sciences, mechanical energy is the sum of potential energy and kinetic energy. It is the macroscopic energy associated with a system. The principle of conservation of mechanical energy states that if an isolated system is subject only to conservative forces, then the mechanical energy is constant.\nIs the mechanical energy conserved between A and B explain?\nMechanical energy is conserved so long as we ignore air resistance, friction, etc. Energy is “lost” to friction in the sense that it is not converted between potential and kinetic energy but rather into heat energy, which we cannot put back into the object.\nWhat are the 3 types of mechanical energy?\nMechanical energy is the energy that is possessed by an object due to its motion or due to its position. Mechanical energy can be either kinetic energy (energy of motion) or potential energy (stored energy of position).\nWhat are 5 mechanical energy examples?\nKinetic Mechanical EnergyRadiant Energy: Energy produced by light waves.Electrical Energy: Energy produced by electricity.Sound Energy: Energy produced by sound waves.Thermal Energy: Energy produced by heat.\nWhat is work formula?\nWork is done when a force that is applied to an object moves that object. 
The work is calculated by multiplying the force by the amount of movement of an object (W = F * d).
What is mechanical energy in simple words?
The energy of an object due to its motion or position; the sum of an object's kinetic energy and potential energy.
Is a windmill an example of mechanical energy?
Wind Mill
Windmills are structures that convert wind energy into electrical energy, and this energy is then supplied to our homes. Windmills run on the principle of mechanical energy and work. Moving air (wind) possesses some amount of energy in the form of kinetic energy (due to motion).
Is a roller coaster an example of mechanical energy?
Kinetic energy is energy that an object has because of its motion. All moving objects possess kinetic energy, which is determined by the mass and speed of the object. In a roller coaster, the forms of kinetic energy present are mechanical, sound and thermal.", "score": 32.202116054951475, "rank": 18}, {"document_id": "doc-::chunk-1", "d_text": "The difference between a conservative and a non-conservative force is that when a conservative force moves an object from one point to another, the work done by the conservative force is independent of the path. On the contrary, when a non-conservative force acts upon an object, the work done by the non-conservative force is dependent on the path.
Conservation of mechanical energy
According to the law of conservation of mechanical energy, the mechanical energy of an isolated system remains constant in time, as long as the system is free of friction and other non-conservative forces. In any real situation, frictional forces and other non-conservative forces are present, but in many cases their effects on the system are so small that the law of conservation of mechanical energy can be used as a fair approximation.
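The path-independence contrast just described can be made concrete with a small sketch. All the numbers below (mass, friction coefficient, normal force, path lengths) are assumed purely for illustration: gravity's work depends only on the endpoints, while friction's work grows with the length of the route.

```python
# Illustrative sketch: compare two routes between the same two endpoints.
g = 9.8        # m/s^2
m = 1.0        # kg (assumed)
mu = 0.2       # coefficient of kinetic friction (assumed)
normal = 9.8   # N, normal force on the surface (assumed)

def gravity_work(height_drop):
    # Conservative force: depends only on the net vertical drop, not the route.
    return m * g * height_drop

def friction_work(path_length):
    # Non-conservative force: depends on the full distance travelled.
    return mu * normal * path_length

# Straight route: 5 m long; winding route: 12 m long; both drop 3 m in height.
print(gravity_work(3.0), gravity_work(3.0))     # identical for both routes
print(friction_work(5.0), friction_work(12.0))  # differ with path length
```

The gravity values agree because only the endpoints matter; the friction values differ because the winding route is longer, which is exactly why mechanical energy is only conserved when non-conservative forces are negligible.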
Though energy cannot be created or destroyed in an isolated system, it can be converted to another form of energy.\nThus, in a mechanical system like a swinging pendulum subjected to the conservative gravitational force where frictional forces like air drag and friction at the pivot are negligible, energy passes back and forth between kinetic and potential energy but never leaves the system. The pendulum reaches greatest kinetic energy and least potential energy when in the vertical position, because it will have the greatest speed and be nearest the Earth at this point. On the other hand, it will have its least kinetic energy and greatest potential energy at the extreme positions of its swing, because it has zero speed and is farthest from Earth at these points. However, when taking the frictional forces into account, the system loses mechanical energy with each swing because of the work done by the pendulum to oppose these non-conservative forces.\nThat the loss of mechanical energy in a system always resulted in an increase of the system's temperature has been known for a long time, but it was the amateur physicist James Prescott Joule who first experimentally demonstrated how a certain amount of work done against friction resulted in a definite quantity of heat which should be conceived as the random motions of the particles that comprise matter. This equivalence between mechanical energy and heat is especially important when considering colliding objects. In an elastic collision, mechanical energy is conserved — the sum of the mechanical energies of the colliding objects is the same before and after the collision. After an inelastic collision, however, the mechanical energy of the system will have changed. 
Usually, the mechanical energy before the collision is greater than the mechanical energy after the collision.", "score": 32.17497037282585, "rank": 19}, {"document_id": "doc-::chunk-1", "d_text": "If you've ever witnessed a video of a space shuttle lifting off, the chemical reaction that occurs also releases tremendous amounts of heat and light. Another useful form of the first law of thermodynamics relates heat and work for the change in energy of the internal system:\nΔU = Q - W\nWhile this formulation is more commonly used in physics, it is still important to know for chemistry.\nenergy can be transferred and destroyed, but may not be transformed; energy cannot be created or destroyed, but can be transferred or transformed; energy can be created, destroyed, transferred, and transformed; or energy in the universe is constant; therefore, destroying it would lead to its re-creation.\nSource: Boundless. “The First Law of Thermodynamics.” Boundless Biology. Boundless, 21 Jul. 2015. Retrieved 28 Nov. 2015 from https://www.boundless.com/biology/textbooks/boundless-biology-textbook/metabolism-6/potential-kinetic-free-and-activation-energy-69/the-first-law-of-thermodynamics-347-11484/", "score": 32.04288360101046, "rank": 20}, {"document_id": "doc-::chunk-1", "d_text": "At a height y its PE is mgy and its KE is (1/2)mv², but their total is still the original amount of energy mgh, so\nmgh = mgy + (1/2)mv²,\nfrom which it follows that\nv = √(2g(h - y)).\nNotice that we can find v at any point without knowing any details of the path!\nAt the very bottom, just before it hits, all the PE will be converted to KE,\nmgh = (1/2)mv²\nWhen the rock hits the bottom, all of the KE will be lost. Some will be spent breaking the rock and ground into pieces, some will go off as sound waves, and some will go into doing work on the ground by compressing it. All this moving matter will experience friction, which converts KE into heat. 
Wherever all the energy goes, it must still add up to the original amount mgh.\nExample: Block sliding down ramp with or without friction\nSuppose a block is released from rest at the top of the ramp and allowed to slide down. How fast will it be going when it reaches the bottom? You could solve this problem by using the equations for constant acceleration, but it is much simpler to solve using conservation of energy.\nThe initial energy, when the block is at rest at the top of the ramp, is purely gravitational potential energy. If we measure the height h from the bottom of the ramp, the initial energy is\nEi = mgh\nWhen the block reaches the bottom of the ramp, all of this potential energy will have been converted into kinetic energy, which is given by\nEf = (1/2)mv²\nBut because energy is conserved, and we are assuming that we are not losing any mechanical energy through friction, the initial and final energies must be equal. Thus\nEi = Ef\nmgh = (1/2)mv²\nwhich can be solved to give the final velocity as\nv = √(2gh).\nIf we allow for friction while the block is sliding down the ramp, we have to take into account the amount of mechanical energy that will be lost due to friction. The work done against friction will be\nWf = Fkd,\nwhere Fk is the force of kinetic friction,\nFk = μkN,\nAnd d is the distance traveled along the ramp.", "score": 32.00655993124707, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "“Energy cannot be created or destroyed, it can only be changed from one form to another.”\n― Albert Einstein\nThis year gravitational waves were detected, proving Einstein’s theory.\nHe figured it out in 1916, and 100 years later – we verified he was right.\nWhy should we care about gravitational waves? What about energy? There are many other discoveries just around the corner. We have just touched the surface on the transformation of energy.\nWe know the world has two geographic poles, the North and the South. That is earth’s magnetic field at play. 
Imagine a massive bar magnet inside of the Earth, and you will get an idea of what Earth’s magnetic field is shaped like.\nNow earth does NOT have a giant bar magnet inside it – but it does have a field made by the rotating, swirling motion of molten iron around Earth’s outer core.\nSo we have the North Pole and the South Pole. The world also has two magnetic poles: the North Magnetic Pole and the South Magnetic Pole. The magnetic poles are near, but not exactly in the same places as the geographic poles. So two different things, a geographic pole and a magnetic pole.\nThe needle in a compass points towards a magnetic pole. The compass needle points pretty much due North unless you’re in the Southern Hemisphere – it points South.\nHowever, if you are near either pole, the compass really becomes useless. It just points to the magnetic pole, NOT the TRUE geographic pole.\nOur magnetic field is also tilted a little bit. It sits at an angle of about 11°, so it is no surprise that the magnetic poles and the geographic poles are not in the same place. I will note the magnetic poles actually move around. The spinning motions of our planet’s magnetic field, the swirling motions, are changing all the time.\nThe magnetic field is actually changing, therefore the magnetic poles move. In the 1800s the poles moved an estimated 5.6 miles per year. For some reason, after 1970, they started moving faster. Recently they are moving around 25 miles per year.\nThe Northern and Southern Lights happen near the magnetic poles because of the charged particles (protons and electrons).\nWhile our magnetic fields and energy are fluctuating – our Universe continues to expand. Our world is constantly moving, vibrating, and traveling in circular patterns.\nA proven fact is that all forms of matter contain an underlying energy of vibration.", "score": 31.226786992416642, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "Kinetic Energy is the energy of an object (mass) in motion. 
All moving objects have kinetic energy. A simple example of kinetic energy differences would be considering the idea of being hit by a soccer ball or by a fast-moving car. The car, being heavier and moving faster, has much more kinetic energy than the soccer ball.\nCalculating Kinetic Energy\nThe kinetic energy of an object can be calculated using the formula below:", "score": 30.740952954716363, "rank": 23}, {"document_id": "doc-::chunk-1", "d_text": "The energy that is lost forms heat in the environment.\nIt is understood that without energy it is not possible to do work.\nEnergy is understood as the capacity of a body or mass to carry out work after being subjected to force.\nThe energy of a body is related to its speed or its position. That is why the potential energy, which is what a body possesses when it is at a certain height with respect to a reference system, differs from the kinetic energy, which is the energy possessed by a body that is in motion.\nThe unit by which energy is measured (the same as work) is the Joule. One Joule represents the amount of work done by a constant force of 1 Newton, over a distance of 1 meter in the same direction as the force.\nIt is also possible to measure energy by calories (one Joule is the same as 0.24 calories).", "score": 30.739233157803767, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "Kinetic energy is motion of waves, electrons, atoms, molecules, substances and objects. The types of kinetic energy are:\n- Radiant energy is electromagnetic energy that travels in transverse waves (light, x-rays, gamma and radio waves).\n- Thermal energy, or heat, is the energy that comes from the movement of atoms and molecules in a substance (geothermal energy of the earth).\n- Motion energy is energy stored in the movement of objects. 
The faster they move, the more energy is stored (wind).\n- Sound is the movement of energy through substances in compression/rarefaction waves. Sound is produced when a force causes an object or substance to vibrate.\n- Electrical energy is delivered by tiny charged particles called electrons, typically moving through a wire (lightning).", "score": 30.618955741698695, "rank": 25}, {"document_id": "doc-::chunk-6", "d_text": "is the capacity of a physical system to perform work. Energy exists in many forms like heat, mechanical, electrical, and others. According to the law of conservation of energy, the total energy of a system remains constant. Energy may be transformed into another form, but it is constant within a system.\nFor example, we all know two pool balls eventually come to rest after colliding. They stop moving only because the applied energy (from moving the cue stick) is eventually converted to heat (from friction with air and the table) and sound (which is not very much of the energy loss). The ball movement along the table's felt surface and through the air transfers energy outside the two moving balls to the air and environment around the table and into the table itself. The temperature of the table and air rises ever so slightly, because the applied energy moves outside the system we \"see\"!\nSince the heat energy is spread all around in a very large area, we don't notice the temperature rise. We just notice the balls quickly stop.\nAnother example is our car's brakes. The energy stored in the moving weight of the car is converted to heat by friction of brake pads rubbing against metal rotors attached to the rotating wheels. 
This converts stored energy (the engine put into the weight of the vehicle) into heat, and the heat (containing all of that energy) radiates out into the air.\nMost of what we actually do in a car is move heat around.\nNewton's first law\nEvery mass continues in its state of rest, or continues uniform motion in a straight line, unless it is compelled to change that state by forces impressed upon it.\nGuys like Newton sure had a lot of time on their hands to think about simple things, but they got it right. A rocket coasting through outer space is a good example. It will go on forever in a straight line unless it hits something, or unless gravity or some other force pulls it in a new direction. The earth wants to move in a straight line, except gravitational attraction to the sun bends its path constantly. A bullet reacts the same way, except friction with air and gravity changes the direction and speed gradually over distance.", "score": 30.00252576479714, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "Thermodynamics is the study of heat energy and other types of energy, such as work, and the various ways energy is transferred within chemical systems. \"Thermo-\" refers to heat, while \"dynamics\" refers to motion.\nThe First Law of Thermodynamics\nThe first law of thermodynamics deals with the total amount of energy in the universe. The law states that this total amount of energy is constant. In other words, there has always been, and always will be, exactly the same amount of energy in the universe.\nEnergy exists in many different forms. 
According to the first law of thermodynamics, energy can be transferred from place to place or changed between different forms, but it cannot be created or destroyed. The transfers and transformations of energy take place around us all the time. For instance, light bulbs transform electrical energy into light energy, and gas stoves transform chemical energy from natural gas into heat energy. Plants perform one of the most biologically useful transformations of energy on Earth: they convert the energy of sunlight into the chemical energy stored within organic molecules.\nThermodynamics often divides the universe into two categories: the system and its surroundings. In chemistry, the system almost always refers to a given chemical reaction and the container in which it takes place. The first law of thermodynamics tells us that energy can neither be created nor destroyed, so we know that the energy that is absorbed in an endothermic chemical reaction must have been lost from the surroundings. Conversely, in an exothermic reaction, the heat that is released in the reaction is given off and absorbed by the surroundings. Stated mathematically, we have:\nΔE(system) = -ΔE(surroundings)\nWe know that chemical systems can either absorb heat from their surroundings, if the reaction is endothermic, or release heat to their surroundings, if the reaction is exothermic. However, chemical reactions are often used to do work instead of just exchanging heat. For instance, when rocket fuel burns and causes a space shuttle to lift off from the ground, the chemical reaction, by propelling the rocket, is doing work by applying a force over a distance.", "score": 29.71547487962048, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "Mechanical energy can be termed the energy of movement, as it is found in objects that are moving or have the potential to move. 
In physical science, it is the sum of potential energy and kinetic energy.\nThe formula for mechanical energy is:\nMechanical energy = kinetic energy + potential energy\nLaw of Conservation of Mechanical Energy\nIt says that the mechanical energy of an object in a closed system remains constant as long as no dissipative force (for example, friction or air resistance) acts on it; conservative forces such as gravity may still act.\nLet us try to understand the concept of mechanical energy more plainly by taking a few examples from everyday life.\n1. Wrecking Ball\nA wrecking ball is a large round structure that is used for the demolition of buildings. When the ball is held at a height, it contains some amount of potential energy (stored energy), and as soon as it falls, it gains some amount of kinetic energy too. When the wrecking ball hits the building to be demolished, it applies the force (in the form of mechanical energy) which causes work to be done; in this case, the demolition of the building.\n2. Hammer\nWhenever we use a hammer to, let’s say, hit a nail and drive it into the wall, we are simply applying some force on the nail with the help of the hammer, which is causing some work to be done. At rest, a hammer does not contain any kinetic energy but only some amount of potential energy. When we swing a hammer up to some distance from the nail before hitting it, kinetic energy comes into play, and the combination of kinetic energy and potential energy in the hammer, called mechanical energy, will cause the driving of the nail into the wall. Or, we can say that the force applied by the hammer to do work on the nail is mechanical energy, which is the sum of potential and kinetic energy.\n3. Dart Gun\nA dart gun is another example of mechanical energy observed in everyday life. A dart gun works on the principle of elastic potential energy. The spring used in a dart gun contains stored elastic potential energy. 
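The stored spring energy just mentioned is conventionally PE = (1/2)kx² for a spring of stiffness k compressed a distance x; that formula is not stated in the passage, so the sketch below and its numbers are purely hypothetical:

```python
def spring_pe(k, x):
    """Elastic potential energy of an ideal spring: PE = (1/2) k x^2."""
    return 0.5 * k * x**2

# Hypothetical dart-gun spring: k = 200 N/m compressed by 5 cm.
pe = spring_pe(200, 0.05)
print(pe)  # 0.25 J stored in the spring

# If all of it went into a 10 g dart, (1/2) m v^2 = PE gives the launch speed.
m = 0.010
v = (2 * pe / m) ** 0.5
print(round(v, 2))  # 7.07 m/s
```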
When a dart gun is loaded, it causes the spring to compress. At that moment, the dart gun contains elastic potential energy. Due to this energy, the spring is able to apply force on the dart and does work, i.e., displacement of the dart.\n4.", "score": 29.490464375127345, "rank": 28}, {"document_id": "doc-::chunk-125", "d_text": "The conservation of energy is thus intimately connected with the fact that the laws of physics are the same today as they were yesterday and as they will be tomorrow.\nScientists have been looking for over a century for any changes in the laws of physics with translations and rotations in space and with movement through time, and have never found any evidence for such changes. Thus momentum, angular momentum, and energy are strictly conserved in our universe. For the counter rotation device to create energy from nothing, all of physics would have to be thrown in the trashcan. The upset would be almost as severe as discovering that 1+1 = 3. Furthermore, a universe in which physics was time-dependent and energy was not conserved would be a dangerous place. Free electricity devices would become the weapons of the future—bombs and missiles that released energy from nothing. Moreover, as the free electricity devices produced energy from nothing, the mass/energy of the earth would increase and thus its gravitational field would also increase. Eventually, the gravity would become strong enough to cause gravitational collapse and the earth would become a black hole. Fortunately, this is all just science fiction because free electricity isn't real.\nGenerators and motors are very closely related and many motors that contain permanent magnets can also act as generators. If you move a permanent magnet past a coil of wire that is part of an electric circuit, you will cause current to flow through that coil and circuit. That's because a changing magnetic field, such as that near a moving magnet, is always accompanied in nature by an electric field. 
While magnetic fields push on magnetic poles, electric fields push on electric charges. With a coil of wire near the moving magnet, the moving magnet's electric field pushes charges through the coil and eventually through the entire circuit.\nA convenient arrangement for generating electricity endlessly is to mount a permanent magnet on a spindle and to place a coil of wire nearby. Then as the magnet spins, it will turn past the coil of wire and propel currents through that coil. With a little more engineering, you'll have a system that looks remarkably like the guts of a typical permanent magnet based motor.", "score": 29.457776441321283, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Where there are no opposing forces, a moving body needs no force to keep it moving with a steady velocity as we know from Newton's first law of motion. If, however, a resultant force does act on a moving body in the direction of its motion, then it will accelerate (Newton's second law of motion) and the work done by the force will become converted into increased kinetic energy in the body. So how do we calculate it?\n1. Start with the formula for work: work done = mf × s, where m = mass of the object, f = constant acceleration, and s = linear distance over which the force is applied.\n2. Relate velocity, acceleration, and distance by applying the equation v² = u² + 2fs, where v is the terminal velocity of the object and u is the initial velocity. Assuming the object starts from rest, u = 0.\n3. Solve for acceleration to get f = v² / 2s\n4. Substitute into the equation for work: work done = m × v² / 2s × s = kinetic energy, or kinetic energy (k.e.) 
= 1/2 m v²\n5. Since the work required to accelerate an object exactly equals the kinetic energy imparted to it, this expression, 1/2 m v², gives the energy in joules or ergs according to the system of units used.", "score": 29.44028477354281, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "Simple explanation here. Live by these words: ENERGY IS CONSERVED. Gravitational Potential Energy (GPE) is stored energy. It is mgh. That is, mass times gravity times height. GPE is proportional to how high the object is. (GPE~h). The higher the object is, the more gravitational potential energy it contains. In most cases when we deal with GPE, we will, by convention, set the lowest point to be zero GPE (to make it easier for all of us). You can set any point to be 0 GPE but the lowest point is always the best place (unless specified in a question). -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Kinetic Energy (KE) is energy in motion, motional energy. It is 1/2m(v^2). That is, one-half of the mass times velocity squared. KE is proportional to the square of how fast an object is moving. (KE~v^2). The faster the object is moving, the more kinetic energy it contains. In most cases when we deal with KE, KE is 0 when the particle is not moving. For example, you throw an object in the air. At the highest point the object reaches, we say it is not moving at the instant and thus has no kinetic energy. -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Now, we live by the words: ENERGY IS CONSERVED. By definition, the total energy of the system never changes. The potential energy and kinetic energy may change, but the total energy (KE+GPE) will not change. Now, when I dealt with energy, we were given a scenario that had an initial and final point. 
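The five derivation steps above (work W = mfs combined with v² = u² + 2fs) can be verified numerically; a small self-check of my own with arbitrary numbers:

```python
# Constant acceleration f from rest over a distance s:
m = 3.0   # kg
f = 2.0   # m/s^2
s = 16.0  # m

work_done = m * f * s          # W = (m f) * s: force times distance
v_squared = 2 * f * s          # v^2 = u^2 + 2 f s, with u = 0
kinetic = 0.5 * m * v_squared  # (1/2) m v^2

print(work_done, kinetic)  # 96.0 96.0 -- the work equals the kinetic energy gained
```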
The formula for the conservation of energy is: dKE = -dGPE (The change in kinetic energy is the negative change in potential energy)\nKE(f) - KE(i) = -[GPE(f) - GPE(i)]\nDistribute the negative sign:\nKE(f) - KE(i) = GPE(i) - GPE(f)\nSeparating final (f) and initial (i):\nKE(f) + GPE(f) = GPE(i) + KE(i)\nFlip it around...\nKE(i) + GPE(i) = KE(f) + GPE(f)\nAnd there we have it folks, the conservation of energy in simple terms.", "score": 29.395049926568106, "rank": 31}, {"document_id": "doc-::chunk-2", "d_text": "In inelastic collisions, some of the mechanical energy of the colliding objects is transformed into kinetic energy of the constituent particles. This increase in kinetic energy of the constituent particles is perceived as an increase in temperature. The collision can be described by saying some of the mechanical energy of the colliding objects has been converted into an equal amount of heat. Thus, the total energy of the system remains unchanged though the mechanical energy of the system has reduced.\nA satellite of mass m at a distance r away from the centre of the Earth possesses both kinetic energy, KE = (1/2)mv², (by virtue of its motion) and gravitational potential energy, PE = -GMm/r, (by virtue of its position within the gravitational field of the Earth, of mass M). Hence, the total energy of a satellite is given by\nE = (1/2)mv² - GMm/r.\nIf the satellite is in circular orbit, the energy conservation equation can be further simplified into\nE = -GMm/(2r),\nsince in circular motion, Newton's 2nd Law of motion can be taken to be\nGMm/r² = mv²/r.\nToday, many technological devices convert mechanical energy into other forms of energy or vice versa. 
These devices can be placed in these categories:\n- An electric motor converts electrical energy into mechanical energy.\n- A generator converts mechanical energy into electrical energy.\n- A hydroelectric powerplant converts the mechanical energy of water in a storage dam into electrical energy.\n- An internal combustion engine is a heat engine that obtains mechanical energy from chemical energy by burning fuel. From this mechanical energy, the internal combustion engine often generates electricity.\n- A steam engine converts the heat energy of steam into mechanical energy.\n- A turbine converts the kinetic energy of a stream of gas or liquid into mechanical energy.\nDistinction from other types\nThe classification of energy into different types often follows the boundaries of the fields of study in the natural sciences.\n- Chemical energy is the kind of potential energy \"stored\" in chemical bonds and is studied in chemistry.\n- Nuclear energy is energy stored in interactions between the particles in the atomic nucleus and is studied in nuclear physics.\n- Electromagnetic energy is in the form of electric charges, magnetic fields, and photons. It is studied in electromagnetism.\n- Various forms of energy in quantum mechanics; e.g., the energy levels of electrons in an atom.", "score": 29.225447928585425, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "Energy in a\nImagine throwing a ball against a wall as hard as you can. What will happen when it hits the wall?\nBounces Off Wall\nEveryone knows that it will bounce off.\nAnything in motion has kinetic energy---energy of motion. The kinetic energy of the moving ball is converted to potential energy as the ball is squashed into the wall. 
This potential energy becomes kinetic energy again as the ball springs back into shape and propels itself away from the wall.\nThe change in the direction of the ball is caused by the elastic nature of the ball itself. Much like a spring, the ball \"squashed\" itself against the wall then rebounds as it becomes round again.\nNow imagine that the wall is moving and has its own kinetic energy. At the moment of collision, the wall transfers some of that energy to the ball.\nRemember the energy that the ball has when you throw it against a wall? It has the same energy when it hits a bat. But a swinging bat adds energy too. The combined energies make hitting a home run easier when a baseball is pitched.\nNow think about hitting a baseball from a tee. All of the energy in this system starts with the batter and is transferred to the bat. Some of the energy in the bat is then transferred to the ball causing it to fly off the tee.\nHitting a baseball that is pitched involves more total energy. The ball has energy and the bat has energy.\nWhy are most home runs hit off of fast pitches?\n© 2003-2005 Event-Based Science Project", "score": 28.08089075732633, "rank": 33}, {"document_id": "doc-::chunk-2", "d_text": "If you watch long enough or there is appreciable friction, you will discern that energy is also converted into heat.\nNeglecting this production of heat, that is neglecting friction, we can write the total energy and conservation of energy as\nE = KE + PE = (1/2)mv² + mgh = constant,\nWhere h is measured above the lowest point. Note that the maximum KE occurs at h=0 and the maximum PE occurs where v=0, that is\n(1/2)m·v_max² = mg·h_max.\nFigure 2: The pendulum Bob at three positions. The KE and PE are indicated at each position.\nLet's explore this example a bit further. To do so, we consider the free body diagram for the pendulum Bob including the tension in the pendulum wire and the gravitational force on Bob. 
This leads to:\nT - mg cos θ = mv²/L,\nwhere L is the length of the pendulum wire.\nAt the lowest point of the swing, when the velocity has its maximum magnitude, the acceleration is centripetal, and directed straight up with magnitude\na = v_max²/L.\nSubstituting and noticing that (1/2)m·v_max² = mg·h_max = mgL(1 - cos θ0),\nWe can solve for the tension\nT = mg(3 - 2 cos θ0).\nThus T, the tension in the pendulum wire, depends on the velocity, and is maximum when the KE is greatest, that is at the lowest point of the pendulum's swing.\nAn interesting demonstration of this, and thus a way to confirm our application of conservation of energy to the pendulum Bob, is the set-up with a 1 kg Bob and a second, 2 kg mass (Pat) connected to the pendulum wire by way of a pulley as shown.\nPat will inform us when the tension in the string is greater than Mg (the product of Pat's mass and the local gravitational field). This happens when\nmg(3 - 2 cos θ0) > Mg.\nWith the help of a little trigonometry, you can see that\nh_max = L(1 - cos θ0).\nAnd with M = 2m, this requires cos θ0 < 1/2, that is θ0 > 60°.\nWhen we tried this in lecture, we found that 60° was not quite enough, because it is not true that \"no energy is lost due to friction,\" even for a half-swing of the pendulum. When we made the angle slightly greater than 60°, Pat was lifted a bit.\nReturn to Physics 125", "score": 27.72972038949445, "rank": 34}, {"document_id": "doc-::chunk-12", "d_text": "(For more on the difficult issue of God's continuous recreation or preservation of the material world, see, e.g., Gorham 2004, Hattab 2007, and Schmaltz 2008).\nIt is obvious that when God first created the world, He not only moved its parts in various ways, but also simultaneously caused some of the parts to push others and to transfer their motion to these others. So in now maintaining the world by the same action and with the same laws with which He created it, He conserves motion; not always contained in the same parts of matter, but transferred from some parts to others depending on the ways in which they come in contact. 
(Pr II 62)\nIn the Principles, Descartes' conservation law only recognizes a body's degree of motion, which correlates to the scalar quantity “speed”, rather than the vectorial notion “velocity” (which is speed in a given direction). This distinction, between speed and velocity, surfaces in Descartes' seven rules of impact, which spell out in precise detail the outcomes of bodily collisions (although these rules only describe the collisions between two bodies traveling along the same straight line). Descartes' utilization of the concept of speed is manifest throughout the rules. For example:\nFourth, if the body C were entirely at rest,…and if C were slightly larger than B; the latter could never have the force to move C, no matter how great the speed at which B might approach C. Rather, B would be driven back by C in the opposite direction: because…a body which is at rest puts up more resistance to high speed than to low speed; and this increases in proportion to the differences in the speeds. Consequently, there would always be more force in C to resist than in B to drive, …. (Pr II 49F)\nAstonishingly, Descartes claims that a smaller body, regardless of its speed, can never move a larger stationary body. While obviously contradicting common experience, the fourth collision rule does nicely demonstrate the scalar nature of speed, as well as the primary importance of quantity of motion, in Cartesian dynamics. In this rule, Descartes faces the problem of preserving the total quantity of motion in situations distinguished by the larger body's complete rest, and thus zero value of quantity of motion.", "score": 27.216288042865944, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Kinetic energy, also known as the energy of motion, is all around us in different forms. Without it, there would be no light, heat, sound, or movement. Only when the other major type of energy, potential energy, converts to kinetic energy are we able to see, hear, and move about. 
Kinetic energy even works at the molecular level. Vibrating molecules produce heat, and subatomic particles called electrons can flow together to create electricity. From the basic movement of atoms producing heat to a car screeching to a stop, kinetic energy affects our everyday lives.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "Energy is conserved. The initial 1000 J of PE is transformed into 900 J of KE and 100 J of thermal and sound energy, heating herself, the air, and the slide.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-2", "d_text": "
Electrical forces are conservative, but frictional forces are not.\nWe will consider a large number of examples, including that of lifting the book, that demonstrate the Conservation Law of Mass/Energy. This law is best stated that any system has a total energy E that remains constant. The total energy is the sum of KE, stored energy (PE), heat, sound, etc. In fact heat and sound and many other forms of energy are manifestations of motion, that is of the molecules of an object and of air. The contribution to E will be discussed more in Lecture 16. Conservation of energy can be expressed as follows:\nA long massive pendulum is a good example of conservation of energy because the amount of heat energy generated each cycle can be quite small compared to the maximum KE and PE. Focault used a pendulum, with a special pivot, that always swings in the same plane so that one can observe the earth turning underneath the pendulum swing during many hours, even days. As you observe the pendulum swing, you can watch energy converted from PE to KE to PE etc. as shown.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-2", "d_text": "For the equations of the conservation of momentum, the units are not important, as long as the same ones are used before and after the collision (i.e. as long as we compare quantities measured in the same units).\nm1 = 1500 kg v1 = 40 km/s\nLet v3 be the final velocity of the wreckage. 
The conservation of momentum gives\nm2 = 4500 kg v2 = -20 km/hr\nP = m1v1 + m2v2 = (m1 + m2)v3\n1500*40 + 4500*(-20) = 60,000 - 90,000 = -30,000 = 6000*v3\nThe wreckage moves at 5 km/hr in the "negative" direction in which the truck was moving.\nv3 = -5 km/hr\n--Is kinetic energy conserved?\nNot likely, since each of the masses (if we consider them separately after the collision) now moves more slowly than before.\n--How much kinetic energy was lost?\nIf the result is to be expressed in joules, we had better convert km/hr to m/s:\n1 km/hr = (1000 meter/3600 sec) = 0.27777 m/s\nInitial velocities: v1 = 11.111 m/s v2 = 5.5555 m/s\nFinal velocity v3 = 1.3889 m/s\nKinetic energy = 1/2 m v²\nKE of the car (1/2) 1500 (11.111)² = 92,593 joule\nKE of the truck (1/2) 4500 (5.5555)² = 69,444 joule\nTotal kinetic energy entering the collision 162,037 joule\nFinal KE (1/2) 6000 (1.3889)² = 5,787 joule\nLoss 156,250 joule\n--Where did the lost energy go?\nIt probably went into heat.\n--(Optional) "Humongous Airlines" publicized the smooth ride of its new "steadijet" airliner by installing a billiards table in its first class cabin. While the plane is flying at a steady velocity v0, do collisions of two billiard balls in it conserve momentum?\n--Which velocities do we have to use in such a calculation--velocities relative to the airplane or to the ground?", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-3", "d_text": "In looking at a purported Free Energy (or perpetual-motion) machine we should thus look at the total interplay over a cycle between KE, PE and the three kinds of work (Stored KE, stored PE, displacement). Count the joules in and out and decide how to assign them in those 5 buckets. Do this assiduously and you can decide for yourself whether any Free Energy invention will actually work. The majority show a zero-sum overall when they are idealised to zero losses, so in real lossy life will just slow down and stop.\nSo, what does work?
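Stepping back to the car-truck collision worked through above: the arithmetic can be replayed in a few lines. The script and its variable names are mine, but the masses and speeds (1500 kg at 40 km/hr, 4500 kg at -20 km/hr) are the ones from the text.

```python
# Replaying the car-truck collision arithmetic from the text above.
m1, v1 = 1500.0, 40.0    # car, km/hr (positive direction)
m2, v2 = 4500.0, -20.0   # truck, km/hr (opposite direction)

# Perfectly inelastic collision: momentum conservation fixes the
# common final velocity of the locked-together wreckage.
v3 = (m1 * v1 + m2 * v2) / (m1 + m2)   # km/hr

# Kinetic energies in joules require m/s.
KMH = 1000.0 / 3600.0                  # 1 km/hr expressed in m/s

def ke(mass, v_kmh):
    """Kinetic energy in joules for a speed given in km/hr."""
    v = v_kmh * KMH
    return 0.5 * mass * v * v

ke_before = ke(m1, v1) + ke(m2, v2)
ke_after = ke(m1 + m2, v3)

print(v3)                              # -5.0 (km/hr)
print(round(ke_before), round(ke_after), round(ke_before - ke_after))
```

The printed loss matches the text's "Loss 156,250 joule" line, and the sign of v3 confirms the wreckage initially moves in the truck's direction.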
We know that solar panels work pretty well. We also know that if we have any cyclic motion or wave then we can rectify it and get some work done. One thing to remember is that the total energy doesn’t change – if we take some in one place it has to turn up in another. All we need to look for is a flow of energy that exists and to divert it so that it does what we want done before we let it go again. Displacement-type work does not take energy to perform, though there is an element of borrowing and returning from the energy bank. Energy is put in to start the motion, then the motion continues until we stop it and get the energy out again. That sword-to-ploughshare change is displacement-type work, in that no excess energy remains after the work has been done. The vast majority of what is normally regarded as work is displacement, where at the end of a cycle there is no work (KEwork or PEwork) actually done. Displacement work does not store any energy, and we should therefore be able to do it without using energy – all the energy we do put in ends up dissipated. At least we should be able to do it with a draw on the energy bank followed by a return of that energy.\nI think it’s possible to harvest infrared to get a reasonable amount of electrical power. The Robert Murray-Smith experiments showed that in a small way, and it may be possible to get the harvest up by quite a few orders of magnitude. For this, we’re stopping an IR photon (which is moving energy) and converting it to electrical energy (moving electron in this case) and then either using it in some movement (so it’s re-emitted) or storing it in a battery as chemical potential energy for a later release.", "score": 26.667540716627087, "rank": 40}, {"document_id": "doc-::chunk-210", "d_text": "You would then coast downward at a constant speed and would feel your normal weight. If you closed your eyes at this point, you would feel as though you were suspended on a strong upward stream of air. 
Unfortunately, this situation wouldn't last forever—you would eventually reach the ground. At that point, the ground would exert a tremendous upward force on you in order to stop you from penetrating into its surface. This upward force would cause you to decelerate very rapidly and it would also do you in.\nWhen two objects collide with one another, they usually bounce. What distinguishes an elastic collision from an inelastic collision is the extent to which that bounce retains the objects' total kinetic energy—the sum of their energies of motion. In an elastic collision, all of the kinetic energy that the two objects had before the collision is returned to them after the bounce, although it may be distributed differently between them. In an inelastic collision, at least some of their overall kinetic energy is transformed into another form during the bounce and the two objects have less total kinetic energy after the bounce than they had before it.\nJust where the missing energy goes during an inelastic collision depends on the objects. When large objects collide, most of this missing energy usually becomes heat and sound. In fact, the only objects that ever experience perfectly elastic collisions are atoms and molecules—the air molecules in front of you collide countless times each second and often do so in perfectly elastic collisions. When the collisions aren't elastic, the missing energy often becomes rotational energy or occasionally vibrational energy in the molecules. Actually, some of the collisions between air molecules are superelastic, meaning that the air molecules leave the collision with more total kinetic energy than they had before it. This extra energy came from stored energy in the molecules—typically from their rotational or vibrational energies. 
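To make the elastic/inelastic distinction concrete, here is a small numerical sketch (my own, not from the text; the masses and speeds are invented) comparing the standard 1-D results: a perfectly elastic bounce returns all of the kinetic energy, while a perfectly inelastic one, where the objects stick together, keeps only the kinetic energy of the common final motion.

```python
# Illustrative 1-D collision comparison; masses (kg) and speeds (m/s) made up.
def elastic(m1, u1, m2, u2):
    """Final velocities for a perfectly elastic 1-D collision."""
    v1 = ((m1 - m2) * u1 + 2.0 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2.0 * m1 * u1) / (m1 + m2)
    return v1, v2

def perfectly_inelastic(m1, u1, m2, u2):
    """Both objects move together after a perfectly inelastic collision."""
    v = (m1 * u1 + m2 * u2) / (m1 + m2)
    return v, v

def total_ke(m1, v1, m2, v2):
    return 0.5 * m1 * v1 ** 2 + 0.5 * m2 * v2 ** 2

m1, u1, m2, u2 = 2.0, 3.0, 1.0, -3.0
for name, bounce in (("elastic", elastic), ("inelastic", perfectly_inelastic)):
    v1, v2 = bounce(m1, u1, m2, u2)
    print(name,
          m1 * v1 + m2 * v2,            # momentum: same in both cases
          total_ke(m1, v1, m2, v2))     # kinetic energy: differs
```

With these numbers the momentum (3.0 kg·m/s) survives both bounces, but the elastic case keeps all 13.5 J of kinetic energy while the stuck-together case keeps only 1.5 J; in a real crash the missing 12 J is what becomes heat and sound.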
Such superelastic collisions can also occur in large objects, such as when a pin collides with a toy balloon.\nReturning to inelastic collisions, one of the best examples is a head-on automobile accident. In that case, the collision is often highly inelastic—most of the two cars' total kinetic energy is transformed into another form and they barely bounce at all. Much of this missing kinetic energy goes into deforming and heating the metal in the front of the car. That's why well-designed cars have so called \"crumple zones\" that are meant to absorb energy during a collision.", "score": 26.357536772203648, "rank": 41}, {"document_id": "doc-::chunk-6", "d_text": "We can understand this in terms of momentum flow:\nMomentum-flow is represented by a double arrow, with two half-arrowheads. The colored part of the arrow shows what is flowing, while the black part of the arrow shows in which direction it is flowing.\nIt is characteristic of an equilibrium situation that we have a closed circuit of momentum flow. The momentum flows around and around, without any net accumulation anywhere.\nThis idea – closed-circuit flow with no accumulation – is similar to the first law of motion, only stronger: no net force means no change in momentum.\n(For simplicity we have not considered the weight of the table itself; you can easily add that to the picture you want.)\nIn my experience, most people don’t have a problem with the idea of conservation when there is flow without accumulation. They have a pretty good intuition about continuity of flow in a loop. If anybody isn’t happy with this, put some water in a big round bowl and stir up a steady rotational flow. The water is conserved. The water is flowing, but for steady flow there is no accumulation anywhere.\nFigure 11 shows my favorite demonstration of momentum flow. Balls #2 through #5 are initially at rest. Ball #1 swings in from the left. Momentum leaves ball #1 and flows through balls #2, #3, and #4 without accumulating. 
It accumulates in ball #5, which goes flying.\nFigure 12 shows a video of such a thing in action: (Video courtesy of GiantNewtonsCradle.com.)\nTrying to analyze this in terms of forces would be tricky. A reasonable amount of momentum is transferred in near-zero time, which leads to a near-infinite force.\nTangential remark: The usual Newton's cradle apparatus has a special property that is often overlooked: At rest, the balls are not touching; there is a paper-thin air gap. Therefore when ball #1 arrives, there is not a single collision, but rather a rapid sequence of collisions.\nThere is a good physics reason for this:\n|During an elastic collision between one ball and one other, conservation of momentum and conservation of energy give us two equations in two unknowns, so we can predict what happens.", "score": 26.302222504665828, "rank": 42}, {"document_id": "doc-::chunk-3", "d_text": "This motion produced friction between the sandpaper and the wood, causing the molecules to move faster. As a result, both the sandpaper and the wood became hotter. Thus, the mechanical energy of the moving sandpaper changed into heat energy.\nYou were also the source of motion when you plucked the tight rubber bands, causing them to vibrate. Sound is produced when a force causes something to vibrate and produce sound waves. Sound energy is carried in waves.\nAnother way in which mechanical energy can produce sound waves is by tapping on a table. Tapping on the table causes the table to vibrate in the same way plucking on the rubber bands caused them to vibrate. Sound waves actually travel faster through the table than through the air. You can put your ear next to the table and hear the tapping sounds clearly. You can also raise your head and hear the sounds as the sound waves pass through the table and then through the air.\nWhen electrical energy passes through a light bulb, it is changed into light energy and heat energy. 
Even though the heat energy is unwanted, it is still part of the electric bill. Engineers try to design light bulbs that increase the amount of light and decrease the amount of heat produced. Some progress has been made, but light bulbs continue to produce unwanted heat.\nStart with the energy being given off from a TV or a radio in your home. Try to figure out where this energy comes from. See how far back you can trace the energy changes. This gets a little complicated, so get a good reference book to help you.\nWhat is the difference between an electric motor and an electric generator? They basically contain the same parts and are built the same way. However, an electric motor changes electric energy into mechanical energy, and an electric generator changes mechanical energy into electric energy.\nIn 1905, Albert Einstein proposed a theory that altered the law of conservation of energy. He said that matter can be changed into energy, and energy can be changed into matter, but the total amount of matter and energy in the universe remains the same. How was Einstein’s theory shown to be true?\nWhat Did You Learn?\nGive two examples of how one form of energy can change into heat energy. Give another example of an energy change.\nList two ways in which energy does work for us.\nThe following list contains examples of forces, properties of matter, and forms of energy.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "When physical systems interact, energy is converted into many different forms, but its total quantity always remains constant. The conservation of energy implies that the total output of all energy flows and transformations must equal the total input.\nEnergy flows among different systems represent the engine of the cosmos, and they happen everywhere, so often that we hardly notice them. Heat naturally flows from warmer to colder regions, hence our coffee cools in the morning.
Particles move from high-pressure areas to low-pressure areas, and so the wind starts to howl. Water travels from regions of high potential energy to regions of low potential energy, making rivers flow. Electric charges journey from regions of high voltage to regions of low voltage, and thus currents are unleashed through conductors. The flow of energy through physical systems is one of the most common features of nature, and as these examples show, energy flows require gradients—differences in temperature, pressure, density, or other factors. Without these gradients, nature would never deliver any net flows, all physical systems would remain in equilibrium, and the world would be inert—and very boring. Energy flows are also important because they can generate mechanical work, which is any macroscopic displacement in response to a force.6 Lifting a weight and kicking a ball are both examples of performing mechanical work on another system. An important result from classical physics equates the quantity of work to the change in the mechanical energy of a physical system, revealing a useful relationship between these two variables.7\nAlthough energy flows can produce work, they rarely do so efficiently. Large macroscopic systems, like trucks or planets, routinely lose or gain mechanical energy through their interactions with the external world. The lead actor in this grand drama is dissipation, defined as any process that partially reduces or entirely eliminates the available mechanical energy of a physical system, converting it into heat or other products.8 As they interact with the external environment, physical systems often lose mechanical energy over time through friction, diffusion, turbulence, vibrations, collisions, and other similar dissipative effects, all of which prevent any energy source from being converted entirely into mechanical work. A simple example of dissipation is the heat produced when we rapidly rub our hands together. 
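The role of the gradient can be caricatured in a few lines. This toy model is mine, not the author's (the temperatures and the coupling constant are invented): two equal heat capacities exchange heat at a rate proportional to their temperature difference, so the flow dies away exactly as the gradient does, while the conserved total stays put.

```python
# Toy model: heat flows between two equal heat capacities at a rate
# proportional to the temperature difference (the gradient).
t_hot, t_cold = 90.0, 10.0   # invented starting temperatures, deg C
k = 0.05                     # invented coupling constant per step

history = []
for _ in range(200):
    q = k * (t_hot - t_cold)   # flow this step, driven by the gradient
    t_hot -= q
    t_cold += q
    history.append(t_hot - t_cold)

# The gradient decays geometrically toward equilibrium, while the
# conserved total t_hot + t_cold stays fixed (up to float rounding).
print(round(t_hot, 6), round(t_cold, 6))
print(history[0] > history[1] > history[-1] >= 0.0)
```

Once the gradient is gone the flow stops: equilibrium is exactly the "inert and very boring" state the passage describes.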
In the natural world, macroscopic energy flows are often accompanied by dissipative losses of one kind or another. Physical systems that can dissipate energy are capable of rich and complex interactions, making dissipation a central feature of the natural order.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-3", "d_text": "In the case of a weight that has been lifted from the ground, the weight has potential energy because if you let go of it, gravity will take over and cause the weight to move, towards your foot, and unless you move out of the way all of that newly acquired kinetic energy on the weight will transfer itself right into your toes. A simple equation for potential energy due to gravity, adequate for our purpose, is E = m * g * h, where m is the mass of the object, g is standard gravity (9.8 m/s² on Earth) and h is the height above the ground at which the object is at rest.\nKnowing these two facts, can you see why running does no work? There is no change in potential energy unless you are running uphill, and we assume you are running on a fairly level surface such as a track, or a flat sidewalk. There is also no change in kinetic energy. You are both the observer and the body in question; your kinetic energy (motion) relative to yourself is always zero.\nThis doesn’t mean you should stop running! Despite the fact that no measurable net work is being done, you are still burning calories, strengthening your legs and your cardiovascular system, and boosting your metabolism.\nRegarding exercises where work is done, but we still say zero: If you want your custom exercises given a proper work formula, give us a full, proper description of the movement (maybe a video as well, gym admins will be able to tag videos to exercises in the near future) and when we get the chance, we can come up with a formula and add it to the site. 
This will affect workout sessions posted in the past as well as the future, so go ahead and use any exercises you want in your workouts; when we know how to calculate the work done, your older sessions will show every ft-lb of work you did.\nIf you want to find out more about physics and Crossfit listen or read anything Coach Glassman has authored, and browse through the other great Crossfit Journal Articles. It’s where we started and continue to go for inspiration. Below are some CFJ articles to get you going.\n- Force, Distance and Time\n- Speech To Okinawa Marines\n- Fit Fest 09\n- ASEP Lecture\n- Back Squat Geometry\n- Ergometer Scores and Hall of Fame Workouts", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "Negative kinetic energies are not represented on the graph, and are shown with a zero line instead, because they are physically impossible.\nClick the "Animate" button. The cart begins to move to the right from its initial position, with a speed that is determined from its kinetic energy and mass. The cart reaches a turning point when its kinetic energy becomes zero; it momentarily stops and reverses direction. At this point the graphs also redraw automatically. If there is friction, the total energy line (red) in the middle graph is now slanted to the left, when the cart is moving to the left. The kinetic energy graph (bottom) also changes, to reflect the loss in kinetic energy. The ends of the cart are treated like elastic bumpers, and the cart bounces off them with no loss of energy. If the cart's kinetic energy becomes zero everywhere (due to loss to friction), it then stops moving. Direct comments to: Chandima Cumaranatunge (programming aspects); Sanjay Rebello (physics content).", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-12", "d_text": "It disappears from region B and appears in region E. 
Furthermore, we can also say that the momentum that flows from B to E is the negative of the momentum that flows the other way, from E to B. That’s just two ways of saying the same thing.\nThe flow from E to B is not in addition to the flow in the other direction. There is only one flow. The point is that we have several different sets of words for describing the same flow. Some momentum is disappearing from region B and appearing in region E.\nIf we translate this idea to the language of forces, we get the third law of motion: The force that B exerts on E is equal-and-opposite to the force that E exerts on B. So we see that the third law of motion is a corollary to the law of conservation of momentum. Expressing the idea in terms of conservation is slightly more powerful.\nOf course there can be other forces, i.e. other momentum-flows, such as a flow from B to A or from B to C. However, if we restrict attention to the direct interaction between B and E, conservation tells us that whatever momentum crosses the boundary from B to E must simultaneously be lost from B and gained by E. This is absolutely exact, guaranteed.\nSuppose we are interested in the dynamics of region B. Common sense tells us that to solve the whole problem, we need to calculate the entire force on the region. We need to calculate the entire momentum-flow into the region. This is as it should be. On the other hand, there is a proverb that says a journey of 100 miles begins with a single step. In many cases, there is a tactical advantage to calculating the momentum flow across some small part of the boundary of region B, such as the part that abuts region E. The conservation idea can help us do this part of the calculation. During this step, we should not allow ourselves to be distracted by what’s going on elsewhere. 
Later, we can repeat the process for all other parts of the boundary.", "score": 25.114855725729008, "rank": 47}, {"document_id": "doc-::chunk-2", "d_text": "Then:\nBut , and solving for\nStep 5 : Now we have simplified as far as we can, we move on to momentum conservation:\nBut =0, and solving for\nStep 6 :\nNow we can substitute (B) into (A) to solve for :\nWe were lucky in this question because we could factorise. If you can't factorise, then you can always solve using the formula for solving quadratic equations. Remember:\nSo, just to check:\nStep 7 :\nSo finally, substituting into equation (B) to get :\nBut, according to the question, marble 1 is moving after the collision. So and . Therefore:\nAn inelastic collision is a collision in which total momentum is conserved but total kinetic energy is not conserved\nthe kinetic energy is transformed into other kinds of energy.\nSo the total momentum before an inelastic collision is the same as after the collision. But the total kinetic energy before and after the inelastic collision is different. Of course this does not mean that total energy has not been conserved, rather the energy has been transformed into another type of energy.\nAs a rule of thumb, inelastic collisions happen when the colliding objects are distorted in some way. Usually they change their shape. To modify the shape of an object requires energy and this is where the missing kinetic energy goes. A classic example of an inelastic collision is a car crash. The cars change shape and there is a noticeable change in the kinetic energy of the cars before and after the collision. This energy was used to bend the metal and deform the cars. Another example of an inelastic collision is shown in the following picture.\nHere an asteroid (the small circle) is moving through space towards the moon (big circle). 
Before the moon and the asteroid collide, the total momentum of the system is:\np_before = m_a v_a + m_m v_m\nwhere m_a and v_a stand for the mass and velocity of the asteroid, m_m and v_m stand for the mass and velocity of the moon, and the total kinetic energy of the system is:\nKE_before = (1/2) m_a v_a² + (1/2) m_m v_m²\nWhen the asteroid collides inelastically with the moon, its kinetic energy is transformed mostly into heat energy. If this heat energy is large enough, it can cause the asteroid and the area of the moon's surface that it hit, to melt into liquid rock! From the force of impact of the asteroid, the molten rock flows outwards to form a moon crater.\nAfter the collision, the total momentum of the system will be the same as before.", "score": 25.000000000000068, "rank": 48}, {"document_id": "doc-::chunk-1", "d_text": "Such shells usually contain a core of tungsten or uranium ("depleted uranium" from which the component used for producing nuclear power has been removed), very heavy metals whose high mass can also carry a great deal of kinetic energy.]\nThe conservation of momentum is different--it is purely mechanical.\nThe total momentum going (say) into a collision always equals the total momentum coming out of it--there is nothing else momentum can convert to. It is therefore something we can always rely on in a calculation. The momentum given by a rocket to its gas jet is always equal to the momentum which it itself receives, regardless of the details of the process.\nThe way momentum will be introduced here is through an actual example.\nHere we go into the lesson, the calculation of the recoil of a cannon.\nGuiding questions and additional tidbits:\n-- What is the momentum P of a mass m moving with velocity v?\nP = mv.\n--Does this depend on the direction of v?\nYes, momentum is a vector quantity. 
If all motions are along the same line, we can take vector character into account by giving momenta in one direction a (+) sign and in the opposite direction a (-) sign.\n--State the important property of momentum.\nIn an isolated system, the sum of all momenta is conserved.\n-- What is \"an isolated system\"?\nA system with no forces acting on it from the outside.\n--When you jump across a ditch, your body clearly has a momentum P = mv during the jump. It did not have that momentum earlier and does not have it afterwards. How can you then say that P is conserved?\nWhen you jump, you brace your foot against the ground, so that the Earth, too, is part of the system. When your body takes off, an opposite and equal amount of momentum has been given to the Earth, and in principle the Earth actually moves back a tiny, unmeasurable amount. When you land, your momentum is given back to the Earth, and everything is as before.\n--A 1500 kg car going at 40 km/hr smashes head-on into a 4500 kg truck going in the opposite direction at 20 km/hr. The cars end up locked together. In what direction does the wreckage move (initially), and how fast?\nLet 1 denote the car, and let the + direction be the one in which it moved.\nLet 2 denote the truck, moving in the - direction.", "score": 24.367107987239297, "rank": 49}, {"document_id": "doc-::chunk-11", "d_text": "Electrical potential energy\nExample: Ionization energy of the electron in a hydrogen atom.\nChemical potential energy\nExample: fossil fuel like coal. The energy is only released when a chemical reaction takes place, ie burning it with oxygen.\nGravitational potential energy\nExample: The water behind a dam.\nWhat factors affect the amount of gravitational potential energy?\nWeight and height of an object\nExplain what happens to kinetic energy when the mass and speed of an object changes.\nKinetic energy also changes because the kinetic energy of an object depends on its mass and its speed. 
It is equal to the mass multiplied by the square of the speed, multiplied by the constant ½ (KE = ½mv²).\nList examples of different types of kinetic energy.\nA car moving along a road has kinetic energy. Energy can be transferred from one object to another, such as when a rolling bowling ball transfers some of its kinetic energy to the pins and sets them in motion. Energy also transforms, or changes form. For example, the gravitational potential energy of a raised ram transforms to kinetic energy when the ram is released from its elevated position. And, when you raise a pendulum bob against the force of gravity, you do work on it. That work is stored as potential energy until you let the pendulum bob go. Its potential energy transforms to kinetic energy as it picks up speed and loses elevation.\nExplain the law of conservation of energy.\nWhenever energy is transformed or transferred, none is lost and none is gained. In the absence of work input or output, the total energy of a system before some process or event is equal to the total energy after.\nDefine thermal energy.\nEnergy resulting from the motion of particles. Thermal energy is a form of kinetic energy and is transferred as heat.\nNewton's first law of motion\nEvery object continues in a state of rest, or in a state of motion in a straight line at a constant speed, unless it is compelled to change that state by forces exerted on it.\nNewton's second law of motion\nThe acceleration produced by a net force on an object is directly proportional to the net force, is in the same direction as the net force, and is inversely proportional to the mass of the object.\nNewton's third law of motion\nWhenever one object exerts a force on a second object, the second object exerts an equal and opposite force on the first object.\nNewton's First Law of Motion-\nExamples: 1. 
when you play tug of war.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "Recently I’ve had a couple of people contact me with some ideas for how to get “free energy” using buoyancy. I’ve also reviewed some ideas using motors and generators, and explained why I don’t expect them to work either. It’s maybe time to have a bit of a rant (again) about the difference between work and energy, and why I think that whereas Free Work may be attainable (and has been demonstrated), Free Energy is most unlikely to be shown.\nLet’s start with the paradox. There is the principle of Conservation of Mass/Energy, and since mass and energy are equivalent via E=MC² then in a closed universe we can’t change the amount of “stuff” we’ve got (mass/energy) but whatever we do we will end up with no more and no less. If we then look at a normal physics explanation of a process, we are told that we start with X joules of energy and we can do a little less than X joules of work, and then we have no energy left. This obviously doesn’t agree with the aforementioned CoE (conservation of energy). We really have a semantics problem, since the words we are using are not adequate.\nI’m going to separate the words out. We have two forms of energy here (kinetic and potential) and we also have work. Potential energy includes mass, energy that is stored in springs of some sort, gravitational potential (may be stored as mass, but I’ll skip that question for now) etc.. Kinetic energy is stored as things that are moving, so we include photons, moving masses and other unbound energy not stored as mass. Work is on the other hand a bit trickier. Whenever we do work, at the end of it things are just in a different configuration than before. Some work goes into kinetic energy (KE) and/or potential energy (PE), some of it is simply that a lump of stuff is a different shape or location. 
Hammering a lump of iron into a sword (or ploughshare) takes a lot of work, but at the end of it we have the same lump of iron in a different shape. Work is not a conserved quantity, and that is an important observation.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "How to convert kinetic energy into good, old-fashioned heat energy. The First Law of Thermodynamics states that heat cannot simply be created. Energy must be transferred from one form to another.\nWe could really use your help!\n- Moving your hands back and forth too rapidly, or with too much pressure may cause discomfort. Use caution to prevent rug burn.\nThings You'll Need\n- Two hands\n- Cold environment", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-2", "d_text": "Have a nice day.\nIs electrical energy produced? yes or no\nWhere does that energy come from? give your answer\nIf your answer is "from the moving air" then what is the difference between the air before going through the turbine and after it comes out of the turbine?\nMomentum is reduced, not temperature.\nIf you won't accept the difference between potential energy, and absolute energy values, or that velocity is only a relative measurement, not an absolute measurement like temperature, then I cannot explain it to you.\nVelocity is similar to potential energy in height with gravity. If I raise a rock above the ground, I have supplied potential energy. But I did not raise the temperature of the rock by lifting, nor will it cool as it crushes what it lands upon.\nVelocity is the same way. It has no measurement except in relation to something else. Temperature is an absolute value and needs no reference frame. Think of throwing a ball or throwing a ball from a moving truck and then again thrown inside a moving truck. You cannot relate the velocity to the object's temperature.\nnor will it cool as it crushes what it lands...\nVelocity is the same way. 
It has no measurement except in relation to something else.\n"won't accept the difference"\nOf course velocity is relative, and temperature has an absolute zero but in thermodynamics it's differences in temperature that you deal with.\nNow answer the questions I posed. Here are the first two:\nIs electrical energy produced by the turbine?\nWhere does that energy come from?\nSay as much as you want but make sure you answer the questions.\nYes, electrical energy is produced by the transfer of momentum of the air to the rotational force on the turbine.\nThe only effects on temperature are the friction of the moving surfaces and the inefficiencies of the wind turbine equipment. Both of these produce heat.\nThere is no drop in temperature.\nI have been referring to the energy in the moving mass as potential energy in relation to the still frame of reference of the windmill.\nA better term is kinetic energy. Momentum is the mass times the velocity. Force is the mass times acceleration (here, the deceleration of the wind mass). Kinetic energy is 1/2 the mass times the velocity squared.\nWhen the wind strikes the turbine blades, the momentum of the collision must remain the same. In reality it is less of a collision and more of a sliding pull.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-2", "d_text": "Then:\nBut , and solving for\nStep 5 : Now we have simplified as far as we can, we move on to momentum conservation:\nBut =0, and solving for\nStep 6 :\nNow we can substitute (B) into (A) to solve for :\nWe were lucky in this question because we could factorise. If you can't factorise, then you can always solve using the formula for solving quadratic equations. Remember:\nSo, just to check:\nStep 7 :\nSo finally, substituting into equation (B) to get :\nBut, according to the question, marble 1 is moving after the collision. So and . 
Therefore:\nAn inelastic collision is a collision in which total momentum is conserved but total kinetic energy is not conserved:\nthe kinetic energy is transformed into other kinds of energy.\nSo the total momentum before an inelastic collision is the same as after the collision. But the total kinetic energy before and after the inelastic collision is different. Of course this does not mean that total energy has not been conserved; rather, the energy has been transformed into another type of energy.\nAs a rule of thumb, inelastic collisions happen when the colliding objects are distorted in some way. Usually they change their shape. To modify the shape of an object requires energy and this is where the missing kinetic energy goes. A classic example of an inelastic collision is a car crash. The cars change shape and there is a noticeable change in the kinetic energy of the cars before and after the collision. This energy was used to bend the metal and deform the cars. Another example of an inelastic collision is shown in the following picture.\nHere an asteroid (the small circle) is moving through space towards the moon (big circle). Before the moon and the asteroid collide, the total momentum of the system is:\nwhere the subscripts stand for the asteroid and the moon, and the total kinetic energy of the system is:\nWhen the asteroid collides inelastically with the moon, its kinetic energy is transformed mostly into heat energy. If this heat energy is large enough, it can cause the asteroid and the area of the moon's surface that it hit, to melt into liquid rock! 
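The bookkeeping described in this passage, total momentum conserved while some kinetic energy is converted to heat, can be sketched numerically. This is a minimal sketch with made-up masses and speeds, assuming a perfectly inelastic collision (the two bodies stick together):

```python
# Perfectly inelastic collision: momentum is conserved, kinetic energy is not.
# All numbers are illustrative.
m1, v1 = 2.0, 5.0   # kg, m/s (moving body)
m2, v2 = 3.0, 0.0   # kg, m/s (body at rest)

p_before = m1 * v1 + m2 * v2          # total momentum before
v_final = p_before / (m1 + m2)        # bodies stick together
p_after = (m1 + m2) * v_final         # equals p_before

ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * (m1 + m2) * v_final**2

print(p_before, p_after)      # 10.0 10.0  (momentum conserved)
print(ke_before, ke_after)    # 25.0 10.0  (15 J went into heat/deformation)
```

The 15 J that disappears from the kinetic-energy column is exactly the energy the passage says goes into heating and deforming the colliding objects.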
From the force of impact of the asteroid, the molten rock flows outwards to form a moon crater.\nAfter the collision, the total momentum of the system will be the same as before.", "score": 23.642463227796483, "rank": 54}, {"document_id": "doc-::chunk-3", "d_text": "- It is important to note that when measuring mechanical energy, an object is considered as a whole, as it is stated by Isaac Newton in his Principia: \"The motion of a whole is the same as the sum of the motions of the parts; that is, the change in position of its parts from their places, and thus the place of a whole is the same as the sum of the places of the parts and therefore is internal and in the whole body.\"\n- In physics, speed is a scalar quantity and velocity is a vector. In other words, velocity is speed with a direction and can therefore change without changing the speed of the object since speed is the numerical magnitude of a velocity.\n- Wilczek, Frank (2008). \"Conservation laws (physics)\". AccessScience. McGraw-Hill Companies. Retrieved 2011-08-26.\n- \"mechanical energy\". The New Encyclopædia Britannica: Micropædia: Ready Reference 7 (15th ed.). 2003.\n- Newton 1999, p. 409\n- \"Potential Energy\". Texas A&M University–Kingsville. Retrieved 2011-08-25.\n- Brodie 1998, pp. 129–131\n- Rusk, Rogers D. (2008). \"Speed\". AccessScience. McGraw-Hill Companies. Retrieved 2011-08-28.\n- Rusk, Rogers D. (2008). \"Velocity\". AccessScience. McGraw-Hill Companies. Retrieved 2011-08-28.\n- Brodie 1998, p. 101\n- Jain 2009, p. 9\n- Jain 2009, p. 12\n- Department of Physics. \"Review D: Potential Energy and the Conservation of Mechanical Energy\" (PDF). Massachusetts Institute of Technology. Retrieved 2011-08-03.\n- E. Roller, Duane; Leo Nedelsky (2008). \"Conservation of energy\". AccessScience. McGraw-Hill Companies. Retrieved 2011-08-26.\n- \"James Prescott Joule\". Scientists: Their Lives and Works. Gale. 2006. as cited on \"Student Resources in Context\". 
Gale.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-2", "d_text": "There’s some logic there, in that if something is moving then we should be able to make it do some work, but in the case of ZPE I suspect it’s a problem of how we measure things and that that energy is not actually available but is instead imaginary. Just because we measure something to be in a slightly different position does not mean that it’s actually moved, but that since our measurements require us to use some sort of particle to hit the thing we can’t really be totally sure of the measurements. There’s an underlying uncertainty of where the fundamental particle actually is, which is a probability function. ZPE therefore seems to me extremely unlikely to be a source of new energy and thus to break CoE.\nWith mechanical systems such as the motor/generators, gravity/buoyancy machines or electromagnetic systems, we start by putting some work in to get it going. That work gets stored somewhere as either KE or PE, and if we get work out of the machine then the available stored KE and PE reduces until it reaches zero and the machine stops. An equivalent system would be a bath full of water, where you think that if you pour in a glass of water to make it overflow then you’ll get a continuous stream of water out. It doesn’t happen – you just get a glassful out and then it stops overflowing. You can’t get more energy out than you put in.\nSo far I’ve trashed pretty well all of the “traditional” Free Energy systems as being non-workable. All is not lost, though, since I’ve pointed out that work is not a conserved quantity. Since I’ve also pointed out that work can be subdivided into stored KE, stored PE and displacement, and it should be pretty obvious that a simple displacement is a zero-energy transaction, there is however a chance that we can get the displacement-type work (which is often what we want) for free. 
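The split drawn here between stored-energy work (lifting) and zero-energy sideways displacement can be checked against the standard relation W = F·d·cos θ. The mass, height and distance below are made-up values:

```python
import math

# W = F * d * cos(theta); illustrative numbers only.
m, g = 10.0, 10.0                # kg, N/kg
h, d = 2.0, 5.0                  # lift height, sideways distance (m)

# Lifting: the applied force (m*g) is parallel to the displacement,
# so work is done and stored as gravitational potential energy.
w_lift = m * g * h * math.cos(0.0)          # 200.0 J

# Frictionless sideways move: gravity is perpendicular to the
# displacement, so it does no work on the object.
w_sideways = m * g * d * math.cos(math.pi / 2)

print(w_lift)                    # 200.0
print(round(w_sideways, 9))      # 0.0 (cos(pi/2) is ~6e-17 in floats)
```

The lifted object can give its 200 J back as work on the way down; the sideways move costs nothing except whatever friction remains.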
That displacement is a zero-energy transaction has been known since Newton, since he said that a body in motion would continue in that motion unless there was a force acting. So – if you lift something up you’ve put in work which is stored as gravitational potential energy, and if you put it down again you can get that energy out again as work, but if you move it sideways then no work is done except against friction, and that can be reduced arbitrarily, asymptotically approaching zero.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-1", "d_text": "The effects of increasing entropy are unavoidable, and one important result of this fact is that in everything we do, there will always be some energy lost to entropy. For example, when an engine is running, it will always experience some friction in its mechanisms. Friction involves collisions between particles (as I’ve already mentioned), and so the particles involved tend to heat up, much like what happened with the bucket of warm water in a cool room. So the engine is only trying to push some pistons, but along the way it also transfers some heat energy into the metal, which then spreads out into the air around the engine. In other words, it ‘loses’ some of its energy in the form of heat.\nWe try to reduce the amount of friction in the engine using lubricants, but we can never avoid it entirely. Again, heat production is an example of an increase in entropy, which always tends to happen. And in any system we develop which uses energy to do some kind of work, there will always be some similar kind of entropy increase. Thus, no process can ever be 100% efficient.\nI’d like to end this section with a short parable about energy use and entropy. Imagine that you live in an imaginary world where money can’t always buy you the things you need. And you want to build a platform that can lift people up a short cliff – in other words, an outdoor elevator. 
Your first idea is to build a simple pulley system and have your friends at the top of the cliff pull the ropes. This works fine for a few minutes, but it’s a popular cliff, and pretty soon you find that you don’t have enough food to feed your friends so that they’ll have enough energy to pull the ropes. In order to feed them, you’d have to plant fields, harvest grains, and then cook meals every day.\nLuckily, you come up with a way around this problem: you tell your friends to tie big, heavy boulders to the other end of the rope. The boulders will fall, and the elevator will go up. Easy! What you have done is converted the gravitational potential energy of the boulders into kinetic energy as the system moved. Then, that kinetic energy turned back into gravitational potential energy for the elevator and its riders.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-1", "d_text": "This implies that food input is in the form of work. Food energy is reported in a special unit, known as the Calorie. This energy is measured by burning food in a calorimeter, which is how the units are determined.\nCan You Break The First Two Laws Of Thermodynamics?\nTo break the first law of thermodynamics, we’d have to create a “perpetual motion” machine that worked continuously without the input of any kind of power. That doesn’t exist yet. All the machines that we know receive energy from a source and transform it into another form of energy. For example, steam engines convert thermal energy into mechanical energy.\nTo break the first law of thermodynamics, life itself would have to be reimagined. Living things also exist in accordance with the law of conservation of energy. Plants use photosynthesis to make food and animals and humans eat to survive.\nEating is basically extracting energy from food and converting it into chemical energy, which is what actually gives us energy. 
We turn that chemical energy into mechanical energy when we move, and into thermal energy when we regulate our body's temperature, etc.\nBut things may be a bit different in the quantum world. In 2002, chemical physicists of the Australian National University in Canberra demonstrated that the second law of thermodynamics can be briefly violated at the atomic scale. The scientists put latex beads in water and trapped them with a precise laser beam. Regularly measuring the movement of the beads and the entropy of the system, they observed that the change in entropy was negative over time intervals of a few tenths of a second.\nThis is a real-life demonstration of Maxwell’s demon, a thought experiment to break the second law of thermodynamics.\nFirst Law Of Thermodynamics Definition\n|Working of a thermal power station is based on the 1st law of thermodynamics (credit: Wikimedia Commons)|\nThe First Law Of Thermodynamics is one of the Physical Laws Of Thermodynamics that states that heat is a form of energy and the total energy of a system and its surroundings remains conserved, or constant. Or, in simpler terms, for an isolated system energy can neither be created nor destroyed. It can only be converted from one form of energy to another. That's why the 1st law of thermodynamics is also known as the Law Of Conservation Of Energy.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "Kinetic Energy - Energy in Motion\nEnergy which an object possesses due to its motion is called Kinetic Energy; to explain further, it is the work done to accelerate a body of given mass from rest to its current or desired velocity. \"KE\" represents Kinetic Energy. This online calculator has been developed to calculate KE.\nTo calculate Power with Velocity:\nWhat you want to calculate", "score": 22.27027961050575, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "1.3. 
Conservation of momentum and energy?\nAre total momentum and energy conserved in cosmology? This is a nontrivial question because the canonical momentum and Hamiltonian differ from the proper momentum and energy.\nConsider first the momentum of a particle in an unperturbed Robertson-Walker universe. With no perturbations, φ = 0 so that Hamilton's equation for p becomes dp/dτ = -am∇φ = 0, implying that the canonical momentum p is conserved. But, the proper momentum mv = a⁻¹p measured by a comoving observer decreases as a increases. What happened to momentum conservation?\nThe key point is that v = dx/dτ is measured using a non-inertial (expanding) coordinate system. Suppose, instead, that we choose v to be a proper velocity measured relative to some fixed origin. Momentum conservation then implies v = constant (if φ = 0, as we assumed above). At τ = τ₁ and τ = τ₂, the particle is at x₁ and x₂, respectively. Because dx/dτ gives the proper velocity relative to a comoving observer at the particle's position, at τ₁ we have dx/dτ = v - (ȧ/a)₁x₁, while at τ₂, dx/dτ = v - (ȧ/a)₂x₂. (The proper velocity relative to the fixed origin is v in both cases, but the Hubble velocity at the particle's position - the velocity of a comoving observer - changes because the particle's position has changed.) Combining these, we find [ẋ(τ₂) - ẋ(τ₁)]/(τ₂ - τ₁) = -(ȧ/a)[x(τ₂) - x(τ₁)]/(τ₂ - τ₁) + O(τ₂ - τ₁) or, in the limit τ₂ - τ₁ → 0, d²x/dτ² = -(ȧ/a) dx/dτ. This is precisely our comoving equation of motion in the case φ = 0. Thus, the \"Hubble drag\" term (ȧ/a) dx/dτ is merely a \"fictitious force\" arising from the use of non-inertial coordinates. Stated more physically, the particle appears to slow down because it is continually overtaking faster moving observers.\nEnergy conservation is more interesting. 
Let us check whether the Hamiltonian H(x, p, τ) is conserved.", "score": 22.17709473034159, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "quiz-energy forms and transfers\nenergy due to motion\nenergy stored in and released from the nucleus of an atom\nenergy stored in and released from the bonds between atoms\nthe sum of the potential energy and kinetic energy in a system\nthe sum of the kinetic energy and potential energy of the particles that make up an object\nthe energy carried by electromagnetic waves\nsound is a form of energy carried by\nwhat factors determine how much gravitational potential energy an object has?\nmass and height\nLaw of conservation of energy states that energy can be __________________, but it cannot be created or destroyed\nA system that exchanges matter or energy with the environment is _____________ system\nwind energy is an example of a ___________ energy resource\nA movement of energy from one object to another without a change in the form of energy is called\nan example of energy transformation\nchemical energy in a log changes to thermal energy when the log burns\nan example of work\na bat hits a baseball to an outfielder\nexample of a nonrenewable resource", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "Hello everybody. Imagine a box in which two spheres are separated by some distance. Nothing is moving inside. Einstein's E=m*c^2 must always be valid. Since nothing moves, the energy of the box is E=m°*c^2 where m° is the rest mass. Since the spheres exert gravitational force on each other, they will be rushing towards each other after some time. Just before the collision the energy of the box is E=m*c^2 and it is higher than the previous energy, since the motion of the spheres has increased their mass and made it m. 
Now you can see that the law of conservation of energy is violated although no energy is given to or taken out of the system. Thinking of this experiment I could not find the answer to the question, JUST WHERE DOES THE POTENTIAL ENERGY FIT IN THE E=MC^2 EQUATION? OR DOES IT AT ALL?", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "Furthermore, the piston moved, thus work is done. (Answer: Energy transfer is from the engine cylinder to the piston; Heat and Work)", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-2", "d_text": "Arguably conservation of momentum and the principle of Galilean invariance are one and the same principle.\nWhereas stating conservation of kinetic energy is very common, the following property seems to be somewhat overlooked: kinetic energy is Galilean invariant. It's a necessary property: if kinetic energy were not Galilean invariant, calculations would run into inconsistencies.\nIf kinetic energy is Galilean invariant, it must be possible to derive the conservation of kinetic energy from the invariance principles. So let's try that.\nLet the total kinetic energy be called Ek. Kinetic energy is ½mv², but I have omitted the ½ because here it's non-essential. Then we have the following expression for the kinetic energy before the collision. (The symbol ∝ means 'is proportional to'.)\nBecause of the minus sign quite a few terms drop away against each other. 
After that cleanup the expression regroups as follows:\nAfter the collision\nFrom symmetry it's immediately clear that in the expression for the total kinetic energy after the collision the same terms will drop away against each other, so after the cleanup and regrouping the expression will be the same as above.\nThere is the following limitation: while the derivation shows that there will be a conserved quantity that is proportional to the masses involved and to (Vr)², it doesn't go beyond that; it doesn't single out a particular expression for Ek.\nIt's interesting to see how readily the total kinetic energy can be separated into independent contributions: a component that correlates to the relative velocity (Vr) of two objects, and a component that correlates to their common velocity (Vc) with respect to some reference. This shows that kinetic energy satisfies Galilean invariance: the amount of kinetic energy that is involved in the collision process depends only on the relative velocity between the two objects.\nMomentum and kinetic energy\nWhat is the relation between momentum and kinetic energy?\nGiven that the corresponding conservation laws are both derivatives of the time symmetry and Galilean invariance, it appears that momentum and kinetic energy must in some sense be two sides of the same coin.\nThis work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.\nLast time this page was modified: July 05 2015", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "The kinetic energy of a body moving with a certain velocity is equal to the work done on it to make it acquire that velocity.\nLet us consider an object of mass m, moving with a uniform velocity u. Let it be displaced through a distance S when a constant force F acts on it in the direction of its displacement.\nWork done = Force × Distance\nThe work done on the object will cause a change in its velocity. Let its velocity change from u to v. Let 
a be the acceleration produced.\nThe relation connecting the initial velocity (u) and final velocity (v) of an object moving with a uniform acceleration (a) over a displacement S is\nv² – u² = 2aS\nS = (v² – u²)/2a\nFrom Newton's second law of motion, F = ma, so\nW = F × S = ma × (v² – u²)/2a\nW = ½ m (v² – u²)\nIf the object starts from rest, i.e. u = 0, then\nW = ½ m v²\nKinetic energy = ½ m v²\nThus, the kinetic energy possessed by an object of mass m, moving with uniform velocity v, is\nKE = ½ m v²\nKinetic energy is directly proportional to the mass of the body and to the square of its velocity.\nIf the mass of a body is doubled, its kinetic energy also gets doubled, and if the mass of the body is halved, its kinetic energy also gets halved.\nIf the velocity of the body is doubled, its kinetic energy becomes four times as large.\nIf the velocity of the body is halved, then its kinetic energy becomes one-fourth.\nHeavy bodies moving with high velocities have more kinetic energy than slow moving bodies of small mass.\nA brick lying on the ground has no energy of position, so it cannot do any work.\nLet us lift this brick to the roof of a house. Some work has been done in lifting this brick against the force of gravity. This work gets stored up in the brick in the form of potential energy.\nThe energy of a brick lying on the roof of a house is due to its higher position with respect to the ground.\nElastic potential energy is due to a change in the shape of a body. It can be brought about by compressing, bending or twisting. Some work has to be done to change the shape of a body.\nThe energy of a body due to its position or change in shape is called potential energy.\nWhen you stretch a rubber band, the energy transferred to the band is its potential energy.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-2", "d_text": "To prove my point, let's look at a 'table' comparing KE, GPE, and the total energy of the system (which I will denote, W). 
http://xs72.xs.to/pics/06115/table.png [Broken] Note that in all cases, the total energy (W) of the system is 500 J at all times. The KE and the GPE fluctuate. Examine the change from Point A to Point B. The change in KE from Point A to Point B is (150 J - 0 J) = 150 J The change in GPE from Point A to Point B is (350 J - 500 J) = -150 J Based on the formula: dKE = -dGPE 150 J (dKE) = -(-150 J) (dGPE) 150 J = 150 J, thus energy is conserved. -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= I hope by now you understand energy and I sure hope this topic is stickied as it is the best explanation you will ever find.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "Why is matter not being created at the present time, nor being destroyed?\nMatter is being created and destroyed now. For example, a high energy X-ray can collide with the nucleus of an atom and disappear and two particles, an electron and an anti-electron (a.k.a. positron), will appear in its place. So extra matter is being produced from no matter. The important thing is that the amount of total energy stays the same, but the energy can change its form from electromagnetic radiation (the X-ray) to matter (the electron and positron). 
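The W = KE + GPE bookkeeping in the table above (total energy fixed at 500 J, dKE = -dGPE) can be regenerated in a few lines; the mass, g and heights are made-up values chosen to reproduce those numbers:

```python
# Regenerate a KE/GPE table with constant total mechanical energy W = 500 J.
m, g = 1.0, 10.0          # kg, N/kg (illustrative)
total = 500.0             # total mechanical energy W, in joules

rows = []
for h in [50.0, 35.0, 20.0, 0.0]:   # heights in metres
    gpe = m * g * h                  # gravitational potential energy
    ke = total - gpe                 # W = KE + GPE stays constant
    rows.append((h, ke, gpe))

for h, ke, gpe in rows:
    print(h, ke, gpe)    # e.g. 50.0 0.0 500.0, then 35.0 150.0 350.0, ...
```

Between the first two rows the change in KE is +150 J and the change in GPE is -150 J, exactly the dKE = -dGPE check worked through in the forum post.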
Also, an electron and positron can collide with and annihilate each other, producing X-rays.\nMac Mestayer, Staff Scientist (Other answers by Mac Mestayer)", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "Question 1 Calculate the kinetic energy of a body of mass 2 kg moving with a velocity of 0.1 m/s?\nQuestion 2 How much work should be done on a bicycle of mass 20 kg to increase its speed from 2 m/s to 5 m/s?\nQuestion 3 What are the various forms of energy?\nQuestion 4 What do you understand by the kinetic energy of a body?\nQuestion 5 Write an expression for the kinetic energy of a body of mass m moving with velocity v?\nQuestion 6 How does kinetic energy depend on its mass and velocity?\nQuestion 7 If the speed of a body is halved, what will be the change in its kinetic energy?\nQuestion 8 A ball of mass 200 g falls from a height of 5 m. What is its kinetic energy when it just reaches the ground?\nQuestion 9 Find the momentum of a body of mass 100 g having kinetic energy of 20 J?\nQuestion 10 What is the kinetic energy of a body of mass 1 kg moving with a speed of 2 m/s?\nQuestion 11 What is potential energy?\nQuestion 12 Is potential energy a scalar quantity or a vector quantity?\nQuestion 13 On what factors does the potential energy of a body depend?\nQuestion 14 Give an example where a body possesses kinetic energy?\nQuestion 15 What happens to the potential energy when the height is doubled?\nQuestion 16 A body of mass 2 kg is thrown vertically upwards with an initial velocity of 20 m/s. What will be its potential energy at the end of 2 s?\nForms of Energy\nKinetic energy, Potential energy, Chemical energy, Heat energy, Light energy, Sound energy, Electrical energy, Nuclear energy.\nThe energy of a body due to its motion is called kinetic energy.\nEx: 1) A moving cricket ball can do work in pushing back the stumps.\n2) Moving water can do work in turning the turbine for generating electricity.\n3) Moving wind can do work in turning the blades of a wind 
mill.\n4) A moving hammer drives a nail into wood because of its kinetic energy.\n5) A moving bullet can penetrate even a steel plate.\n6) A moving bus, car, or falling stone possesses kinetic energy.\n7) A falling coconut or a running athlete possesses kinetic energy.\nThe kinetic energy of a moving body is measured by the amount of work it can do before coming to rest.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "Quantity of motion\nMomentum and kinetic energy have in common that they are a measure of quantity of motion.\nWe take the following two postulates:\n- Time-reversal symmetry for perfectly elastic collision\n- Galilean invariance\nTime reversal symmetry\nWhen two objects of equal mass approach each other with equal velocity, and they collide with perfect elasticity, then their velocities will be reversed. That was a principle that from the 17th century on scientists relied upon in thinking about physical processes. Nowadays we have movie cameras, and we can play movies in reverse. We can readily see that in the case of a collision with perfect bounce we cannot discern whether the movie is being played forward or backward; perfectly elastic collision is symmetric in time.\n(17th century scientists saw proof of the time-reversal symmetry in collision experiments. For instance, the case of two pendulums side by side so that when hanging still the bobs just touch. Then when both bobs are released from the same height, moving towards each other, they bounce back to the same height as the height they were released from. Of course, in coming to these conclusions the scientists had to assume that the small discrepancies they saw were due to friction only.)\nTo be a powerful set of principles the time-reversal symmetry must be paired with the principle that was introduced by Galilei, which nowadays is called 'Galilean relativity'. Imagine a set of large boats, all sailing along on perfectly smooth water. 
Each boat has a uniform velocity, and each has some velocity relative to the other boats of the set. Then any experiment conducted onboard any of those boats will find the same laws of motion.\nThe 17th century scientist Huygens pointed out the following procedure: if you want to calculate the outcome of any collision, then transform to the coordinate system that is co-moving with the common center of mass, reverse the velocities and then transform back to the original coordinate system.\nA more challenging case is where the mass of the two objects is unequal.\nCommon Center of Mass\nIn statics the Common Center of Mass (CCM) is an equilibrium point. Let two objects, with unequal mass, be connected by a stick with length L (for simplicity regard the stick as massless). Somewhere along the stick there will be an equilibrium point.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-3", "d_text": "It is a scalar quantity that is measured in joules (newton-metres in SI units), which can be defined as:\nWork done = F × s × cos θ\nwhere F is the force applied to the object, s is the displacement of the object and cos θ is the cosine of the angle between the force and the displacement. 
In a linear example (with the force being exerted in the same direction as the displacement), cos θ is equal to 1 and the equation simplifies to W = F × s.\nExample calculation: If a force of 20 newtons pushes an object 5 meters in the same direction as the force, what is the work done?\nF = 20 N, s = 5 m, W = F × s = 20 × 5 = 100 J, so 100 joules of work is done.\nExamples (when is work done?):\n- Force making an object move faster (accelerating)\n- Lifting an object up (moving it to a higher position in the gravitational field)\n- Compressing a spring\nWhen is work not done?\n- When there is no force\n- Object moving at a constant speed\n- Object not moving\nSome useful equations:\nIf an object is being lifted vertically, the work done on it can be calculated using the equation\nWork done = mgh\nwhere m is the mass in kilograms, g is the earth's gravitational field strength (10 N kg⁻¹), and h is the height in meters\nWork done in compressing or extending a spring\nWork done = ½kx²\nwhere k is Hooke's constant and x is the displacement\nEnergy and Power\nEnergy is the capacity for doing work. The amount of energy you transfer is equal to the work done. Energy is a measure of the amount of work done; this means that the units for energy and work must be the same: joules. Energy is like the \"currency\" for performing work. To do 100 joules of work, you must expend 100 joules of energy.\nConservation of energy\nIn any situation the change in energy must be accounted for. If it is 'lost' by one object it must be gained by another. 
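The three work formulas quoted above (W = F × s × cos θ, W = mgh, W = ½kx²) can be wrapped into small helpers; the spring constant and the other inputs below are made-up values:

```python
import math

def work(force, displacement, theta=0.0):
    """W = F * s * cos(theta), in joules."""
    return force * displacement * math.cos(theta)

def work_lifting(mass, height, g=10.0):
    """W = m * g * h, with g = 10 N/kg as in the text."""
    return mass * g * height

def work_spring(k, x):
    """W = 0.5 * k * x**2 for a spring with constant k."""
    return 0.5 * k * x * x

print(work(20, 5))             # 100.0, the worked example above
print(work_lifting(2.0, 3.0))  # 60.0 J to lift 2 kg by 3 m
print(work_spring(200.0, 0.1)) # ~1.0 J
```

Note that `work` with theta = π/2 returns (numerically) zero, matching the rule that a force perpendicular to the displacement does no work.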
This is the principle of conservation of energy, which can be stated in several ways:\n- The total overall energy of a closed system must be constant\n- Energy is neither created nor destroyed, it just changes form.
However, there are so many mechanical problems which are solved efficiently by applying this principle that it merits separate attention as a working principle.\nFor a straight-line collision, the net work done is equal to the average force of impact times the distance traveled during the impact.\nAverage impact force x distance traveled = change in kinetic energy\nIf a moving object is stopped by a collision, extending the stopping distance will reduce the average impact force.\nUniform Circular Motion (2.6)\nThe centripetal force with constant speed , at a distance from the center is defined as:", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "Scientists predicted this process in the 1930s, but it has never been achieved in a single direct step.\nWhere does matter and energy come from?\nThe Big Bang. Everything we know in the universe – planets, people, stars, galaxies, gravity, matter and antimatter, energy and dark energy – all date from the cataclysmic Big Bang.\nCan something be created from nothing?\nSomething can be created from nothing\nBut such a perfect vacuum may not exist. One of the foundations of quantum theory is the Heisenberg uncertainty principle. It begins to be of profound importance to our understanding of nature at the atomic scale and below.\nAre humans made of energy?\nThe molecules present in the cell are made up of basic elements such as carbon, oxygen, hydrogen, and nitrogen. These elements possess energy; hence we can say that humans are made of energy.\nCan matter be destroyed?\nMatter makes up all visible objects in the universe, and it can be neither created nor destroyed.\nDo black holes destroy matter?\nSpecifically, as we understand it now, if you fall into a black hole you are guaranteed to hit the center, which is called the singularity. 
At the singularity you would be crushed into a ball of almost infinite density, which would destroy anything, even atoms, protons, or quarks.\nIs antimatter a real thing?\nAlthough it may sound like something out of science fiction, antimatter is real. Antimatter was created along with matter after the Big Bang. But antimatter is rare in today’s universe, and scientists aren’t sure why.\nCan space be created or destroyed?\nNo. And nor is energy. Energy is the one thing we can neither create nor destroy, and it seems to be intimately related to what space is. But nevertheless the stress-ball does get bigger as the pressure reduces, so it’s reasonable to say space is “created”.\nHow was space time created?\nUntil the 20th century, it was assumed that the three-dimensional geometry of the universe (its spatial expression in terms of coordinates, distances, and directions) was independent of one-dimensional time. The physicist Albert Einstein helped develop the idea of spacetime as part of his theory of relativity.\nWhere does space time come from?\nA quantum origin for spacetime.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-2", "d_text": "Mechanical energy to heat energy: Rub a piece of sandpaper quickly over a board several times. Feel the sandpaper and the board. What kind of energy is produced?\nMechanical energy to sound energy: Remove the cover from a sturdy box and cut three groves on opposite edges of the box. Now choose three rubber bands of equal length, but each with a different thickness. Stretch the rubber bands around the box, fitting each into one of the grooves. Pluck each rubber band. Observe that it is vibrating. Listen for a sound. Repeat for each rubber band. Compare the pitch made by the different rubber bands. Record your observations.\nThe Science Stuff\nEnergy is what enables matter to move or to change. 
Energy is found in many different forms, such as heat, light, electricity, mechanical (the energy in moving things), sound, nuclear, and chemical. One form of energy can be changed into another form of energy. Still, the total amount of energy never changes. This means that energy cannot be created or destroyed. These ideas are expressed in one of the most important laws in all of science – the law of conservation of energy.\nThese activities illustrate some of the main forms of energy. Each activity shows one form of energy being changed into another form of energy. Electrical energy changed into light and heat, mechanical energy changed into heat, and mechanical energy changed into sound.\nIn the first activity, when the equipment was wired together correctly, an electric circuit was completed. An electric current then moved through the dry cell, wires, and light bulb. As the electric current moved through the light bulb, electric energy changed into light energy and heat energy.\nThis activity illustrates another important concept about energy. It shows that energy can be transferred from one place to another. Much of the earth’s energy is transferred from the sun to the earth.\nRemember the conversation between Ella and her aunt? When Ella flipped the light switch, the electric current began to move through the wires and the light bulb. Inside the light bulb, electric energy changed into light and heat energy, which is the same thing that happened in your activity with electricity. When she turned the lights off, the objects in the room absorbed the heat and light energy. (This is a small amount of energy, and you probably couldn’t detect it without some sophisticated equipment.)\nWhen you rubbed a board with sandpaper, your motion produced mechanical energy.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-1", "d_text": "The kinetic force (energy ) from this event is said to have led to the extinction of the dinosaurs. 
Kinetic energy is the energy we get from an object in motion.\nFusion, or energy that comes when atoms fuse, or combine, is what allows us to exist. This is what drives the stars. Our sun is using fusion to combine hydrogen atoms into helium. In this process, energy in the form of radiation (which becomes heat and light) is released. It is this energy from the sun that provides us with heat, light, and food on Earth.\nPosted: Monday, February 29, 2016", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-1", "d_text": "In physics, the potential energy you talk about the most in mechanics is\ngravitational potential energy. This is simply the potential energy caused by gravity.\nLike if we take that same ball, and hold it off the side of the Empire State Building, it\nhas the "potential" of gaining a lot of kinetic energy (by dropping it). It has\na lot of potential energy, but no kinetic (it's not moving...yet.) But once you do, all\nthat potential energy will be converted to kinetic, not all at once, but gradually. The\nball speeds up faster and faster as it falls. A rule: the higher an object is, the more\ngravitational potential energy it has.\nNow you don't have to know anything I have just said to do well in chemistry, just take\naway with you the Law of Energy Conservation. When you add the kinetic and potential\nenergies of that ball at any time during its fall, the total will be the same. Energy can't be\ngained or lost; it is always converted. Even when that ball hits the ground, and has lost\nall its kinetic and potential energy, that energy didn't really die, it just got converted\nto something else. The pavement (or car, wherever it landed doesn't matter) will have\ngained a slight increase in temperature, and that's where the energy went. It was\ndissipated as heat. The energy in a closed system always remains constant.
I say\nclosed system because if you considered the Earth a closed system, and then an outside\nsource of energy (such as the falling of a great meteorite) came in, then the closed\nsystem would have more energy.\nEnergy comes in all sorts of shapes and sizes. There's heat (probably the most important\nenergy in chemistry), light, sound, mechanical, electrical, nuclear, matter/anti-matter\nreactions, the list goes on...\nLet's talk chemistry.\nChemistry 'n' Energy\nFinally, back to chemistry. In chemistry, reactions can give off heat, or absorb heat.\nThose that give off heat are called exothermic, and those that take in heat are\ncalled endothermic.", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "In physics, energy (from the Greek ἐνέργεια - energeia, "activity, operation", from ἐνεργός - energos, "active, working") is a quantity that can be assigned to every particle, object, and system of objects as a consequence of the state of that particle, object or system of objects. Different forms of energy include kinetic, potential, thermal, gravitational, sound, elastic, light, and electromagnetic energy. The forms of energy are often named after a related force. German physicist Hermann von Helmholtz established that all forms of energy are equivalent - energy in one form can disappear but the same amount of energy will appear in another form. Energy is subject to a conservation law. Energy is a scalar physical quantity. In the International System of Units (SI), energy is measured in joules, but in some fields other units such as kilowatt-hours and kilocalories are also used.\nBecause energy is strictly conserved and is also locally conserved (wherever it can be defined), it is important to remember that by the definition of energy the transfer of energy between the "system" and adjacent regions is work. A familiar example is mechanical work.
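The falling-ball bookkeeping described above, kinetic plus potential energy staying constant throughout the fall, can be verified with a short sketch; the mass and initial height are illustrative, and air resistance is ignored:

```python
# A 1 kg ball dropped from 100 m (illustrative; air resistance ignored).
g, m, h0 = 9.81, 1.0, 100.0

for t in [0.0, 1.0, 2.0, 3.0]:
    v = g * t                          # speed after falling for time t
    h = h0 - 0.5 * g * t**2            # height above the ground at time t
    total = 0.5 * m * v**2 + m * g * h # KE + PE
    print(round(total, 6))             # always 981.0 J: nothing gained or lost
```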
In simple cases this is written as the following equation:\nΔE = W (1)\nif there are no other energy-transfer processes involved. Here ΔE is the amount of energy transferred, and W represents the work done on the system.\nMore generally, the energy transfer can be split into two categories:\nΔE = W + Q (2)\nwhere Q represents the heat flow into the system.\nThere are other ways in which an open system can gain or lose energy. In chemical systems, energy can be added to a system by means of adding substances with different chemical potentials, which potentials are then extracted (both of these processes are illustrated by fueling an auto, a system which gains in energy thereby, without addition of either work or heat). Winding a clock would be adding energy to a mechanical system. These terms may be added to the above equation, or they can generally be subsumed into a quantity called "energy addition term E" which refers to any type of energy carried over the surface of a control volume or system volume.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-6", "d_text": "Mass-energy equivalence\nE=mc2, also called the mass-energy equivalence, is one of the things that Einstein is most famous for. It is a famous equation in physics and math that shows what happens when mass changes to energy or energy changes to mass. The "E" in the equation stands for energy. Energy is a number which you give to objects depending on how much they can change other things. For instance, a brick hanging over an egg can put enough energy onto the egg to break it. A feather hanging over an egg does not have enough energy to hurt the egg.\nA cannonball hangs on a rope from an iron ring. A horse pulls the cannonball to the right side. When the cannonball is released it will move back and forth as diagrammed.
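A minimal sketch of the energy-transfer equations above, ΔE = W and ΔE = W + Q, using the sign convention stated in the text (W is work done on the system, Q is heat flowing into it); the 50 J and 20 J figures are illustrative:

```python
def energy_change(work_on_system, heat_into_system):
    # First-law bookkeeping from the text: ΔE = W + Q.
    # Setting heat_into_system = 0 recovers the simpler case ΔE = W.
    return work_on_system + heat_into_system

# 50 J of work done on a system while it loses 20 J of heat (illustrative):
print(energy_change(50.0, -20.0))   # ΔE = 30.0 J
```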
It would do that forever except that the movement of the rope in the ring and rubbing in other places causes friction, and the friction takes away a little energy all the time. If we ignore the losses due to friction, then the energy provided by the horse is given to the cannonball as potential energy. (It has energy because it is up high and can fall down.) As the cannonball swings down it gains more and more speed, so the nearer the bottom it gets the faster it is going and the harder it would hit you if you stood in front of it. Then it slows down as its kinetic energy is changed back into potential energy. "Kinetic energy" just means the energy something has because it is moving. "Potential energy" just means the energy something has because it is in some higher position than something else.\nWhen energy moves from one form to another, the amount of energy always remains the same. It cannot be made or destroyed. This rule is called the "law of conservation of energy". For example, when you throw a ball, the energy is transferred from your hand to the ball as you release it. But the energy that was in your hand, and now the energy that is in the ball, is the same number. For a long time, people thought that the conservation of energy was all there was to talk about.\nWhen energy transforms into mass, the amount of energy does not remain the same. When mass transforms into energy, the amount of energy also does not remain the same. However, the combined total of matter and energy remains the same.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-178", "d_text": "Krishnamurti: It is the same thing. Do we accept this that there is a beginning and ending of energy?\nF: Individuals may begin and end, but life does not. It creates.\nKrishnamurti: Do not bring in the individual yet. There is a movement of energy which is mechanical, which is measurable, which may end, and there is life-energy which you cannot manipulate; it goes on infinitely.
We see that in one case there is wastage of energy and in the other there is non-wastage of energy.\nF: I do not see the other as a fact.\nKrishnamurti: All right. Let us see the movement of energy which can reach a height and decline. Is there any other form of energy which can never end, which is not related to the energy which begins, continues and withers away?\nF: That is a legitimate question.\nD: Is there any form of energy that will not decay?\nKrishnamurti: Now how are we going to find out? I have got it. What is energy that decays?\nF: What is the cause of energy you cannot answer.\nKrishnamurti: What is energy that decays? I did not say what is the cause of energy.\nP: Material energy decays. Why does it decay? By friction?\nD: By pressure?\nKrishnamurti: Is there any other form of energy which does not decay? One decays through friction. Is there any other form which does not decay?\nP: Not only does it decay but it is friction. I am positing it. Let us investigate. Its very nature is friction.\nF: No. I do not understand your method. The fact is that there is energy overcoming friction, and energy dissipating in friction.\nP: You say there is an energy which decays in friction through friction. I say its very nature is friction. All that movement which we call energy, in itself is friction. Show me why it is not so?\nF: What is friction?\nP: Friction is contradiction, resistance.\nF: Why should energy be identified with resistance?\nP: We say the nature of this which we call energy is friction. D: Energy is the capacity, biological capacity, to overcome resistance, but it dissipates itself in this process.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-2", "d_text": "Nuclear energy is interesting because when it operates, unlike burning coal or natural gas, it creates no pollution which might make you think it is a form of green energy. 
But the fuel that has been used to supply the atoms is highly radioactive and must be safely stored to avoid contamination of the environment. In addition, a nuclear reaction creates a lot of heat which is carried away by cooling water. If the supply of cooling water gets cut off from the reactor and the reaction continues, it becomes a very dangerous situation referred to as “meltdown” which can release deadly levels of radiation. Many people think that the “smoke” that they see coming from a nuclear power plant is radioactive. This is NOT the case. What you are seeing is water vapor from the water that was used to cool the reactor coming from the cooling towers. Some plants are built on lakes and actually transfer the heat to the lake itself. The sun is a large fusion reaction.\nThe second type of energy is kinetic energy or the energy of motion:\nKinetic Energy – is based on the motion of an object and is a function of how heavy the object is (its mass) and how fast it is moving, its velocity. If a 25 pound one year old child is running to jump on you, you’re probably not going to be too concerned and will let them hit you, but, if a 250 pound football player is running to jump on you, I suspect you’re going to try and move. This is because the energy of the impact will be 10 times higher for the football player moving at the same speed as the one year old. If I throw a b-b at you I don’t think you’ll be concerned, but if I shoot it from a b-b gun then you know it’s going to hurt. The b-b has very little mass, but when given a lot of velocity it has a lot more energy. In fact, we would say it has far more energy since velocity is squared in the formula for kinetic energy:\nKinetic Energy = ½ * m * v^2 = ½ * mass * velocity^2\nWater running in a river or stream, the motion of waves and the wind are all examples of naturally occurring kinetic energy. We use this natural kinetic energy to generate electricity by driving an electric generator.
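The ½·m·v² scaling described above (ten times the mass gives ten times the energy; ten times the speed gives a hundred times) can be verified directly; the masses and speeds below are illustrative and the numbers are treated as relative units:

```python
def kinetic_energy(mass, velocity):
    return 0.5 * mass * velocity**2   # ½mv²

# Same speed, 10x the mass -> 10x the energy:
child  = kinetic_energy(25.0, 2.0)    # "25 pound" toddler, numbers used as-is
player = kinetic_energy(250.0, 2.0)   # "250 pound" football player
print(player / child)                 # 10.0

# Same mass, 10x the speed -> 100x the energy, because velocity is squared:
thrown = kinetic_energy(1.0, 5.0)
shot   = kinetic_energy(1.0, 50.0)
print(shot / thrown)                  # 100.0
```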
Wind Energy, Hydro (water) Energy and Wave Energy are all considered both green and renewable forms of energy.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-0", "d_text": "Wave Motion - How it works\nRELATED FORMS OF MOTION\nIn wave motion, energy—the ability to perform work, or to exert force over distance—is transmitted from one place to another without actually moving any matter along the wave. In some types of waves, such as those on the ocean, it might seem as though matter itself has been displaced; that is, it appears that the water has actually moved from its original position. In fact, this is not the case: molecules of water in an ocean wave move up and down, but they do not actually travel with the wave itself. Only the energy is moved.\nA wave is an example of a larger class of regular, repeated, and/or back-and-forth types of motion. As with wave motion, these varieties of movement may or may not involve matter, but, in any case, the key component is not matter, but energy. Broadest among these is periodic motion, or motion that is repeated at regular intervals called periods. A period might be the amount of time that it takes an object orbiting another (as, for instance, a satellite going around Earth) to complete one cycle of orbit. With wave motion, a period is the amount of time required to complete one full cycle of the wave, from trough to crest and back to trough.\nHarmonic motion is the repeated movement of a particle about a position of equilibrium, or balance. In harmonic motion—or, more specifically, simple harmonic motion—the object moves back and forth under the influence of a force directed toward the position of equilibrium, or the place where the object stops if it ceases to be in motion.
A familiar example of harmonic motion, to anyone who has seen an old movie with a clichéd depiction of a hypnotist, is the back-and-forth movement of the hypnotist's watch, as he tries to control the mind of his patient.\nOne variety of harmonic motion is vibration, which wave motion resembles in some respects. Both wave motion and vibration are periodic, involving the regular repetition of a certain form of movement. In both, there is a continual conversion and reconversion between potential energy (the energy of an object due to its position, as for instance with a sled at the top of a hill) and kinetic energy (the energy of an object due to its motion, as with the sled when sliding down the hill.)", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "The energy conservation law is compatible with every single observation we have made inside the Milky Way in science, or outside science, so the empirical evidence in favor of it is overwhelming, diverse, and universal.\nTheoretically, the case is also clear. Emmy Noether demonstrated that conservation laws are linked to symmetries. The validity of the energy conservation law is equivalent to the time-translational symmetry of the laws of physics: the same phenomena occur if one starts with the same initial conditions but just a bit later. This is true for the laws of mechanics, field theory, electromagnetism, nuclear interactions, classical physics, quantum mechanics, thermodynamics and statistical physics, special relativity. The energy conservation law is valid in all these situations and respected by all the major theories describing these subfields of physics.\nMotors, those produced by Faraday, Tesla, or anyone else, as well as all other engines and objects in the Universe preserve the energy, too. And there exists no equivalence or analogy between the energy conservation law and the existence of Tesla's or Faraday's motor. 
In particular, there has never existed any solid evidence – empirical or theoretical – that electricity or magnetism couldn't do work. So any analogy between these totally different questions is an example of something technically referred to as demagogy. To create legitimate doubts about the validity of such an important and well-established law, one would need an observation or an argument that actually discusses the technical properties of energy (and one would probably have to construct a viable theory disagreeing with the energy conservation law that is compatible with the observations) – rather than demagogic comparisons to completely different questions that the speaker desires to be answered by the same answer "no", although there doesn't exist a glimpse of a rational reason why the answers should be the same.
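The link between time-translation symmetry and energy conservation described above can be illustrated numerically: integrate a frictionless mass-on-spring system, whose equation of motion has no explicit time dependence, and watch the total energy stay constant to within the integrator's error. The velocity-Verlet scheme and m = k = 1 are illustrative choices, not anything from the text:

```python
# Frictionless mass-on-spring (m = k = 1): the laws do not change with time,
# so the total energy should be conserved as we integrate the motion.
x, v, dt = 1.0, 0.0, 0.01

def total_energy(x, v):
    return 0.5 * v**2 + 0.5 * x**2   # kinetic + elastic potential

e0 = total_energy(x, v)
for _ in range(10_000):              # many full oscillations
    a = -x                           # F = -kx
    x += v * dt + 0.5 * a * dt**2    # velocity-Verlet position update
    v += 0.5 * (a + (-x)) * dt       # velocity update with averaged acceleration
drift = abs(total_energy(x, v) - e0)
print(drift < 1e-4)                  # True: energy conserved to integrator error
```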
But this non-conservation depended on the background spacetime's heavy violation of the time-translational symmetry.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-0", "d_text": "- What is the energy of moving things called?\n- What is the ability to make things move or change called?\n- How can energy make things move?\n- When work is done in the body gains?\n- When work is done on a body its energy?\n- Is it correct to say work is done every time force is applied?\n- When one can say that work is done on the body?\n- What is 1joule work?\n- What does peak current mean?\nTelekinesis, also known as Psychokinesis (PK) is simply the ability to move an object in some manner without coming into physical contact with it. In essence, it is the psychic ability to use the power of the mind to manipulate a specific target and conform it to your will.\nWhat is the energy of moving things called?\nEnergy that a moving object has due to its motion is Kinetic Energy. Kinetic Energy: Is energy of motion.\nWhat is the ability to make things move or change called?\nDefining Energy Energy is defined in science as the ability to move matter or change matter in some other way. Energy can also be defined as the ability to do work, which means using force to move an object over a distance.\nHow can energy make things move?\nMotion energy is the sum of potential and kinetic energy in an object that is used to do work. Work is when a force acts on an object and causes it to move, change shape, displace, or do something physical. Potential energy is energy that is stored in an object or substance.\nWhen work is done in the body gains?\nThe total amount of work done on a body equals the change in its kinetic energy. Work done is said to be positive when an external force acts in the direction of motion of the body. 
If positive work is done on a body by an external force, then the body gains kinetic energy.\nWhen work is done on a body its energy?\nwork done on body =change in kinetic energy of the body.\nIs it correct to say work is done every time force is applied?\nIf a force is applied but the object doesn’t move, no work is done; if a force is applied and the object moves a distance d in a direction other than the direction of the force, less work is done than if the object moves a distance d in the direction of the applied force.\nWhen one can say that work is done on the body?", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-7", "d_text": "Energy turns into mass and mass turns into energy in a way that is defined by Einstein's equation, E = mc2.\nThe \"m\" in Einstein's equation stands for mass. Mass is the amount of matter there is in some body. If you knew the number of protons and neutrons in a piece of matter such as a brick, then you could calculate its total mass as the sum of the masses of all the protons and of all the neutrons. (Electrons are so small that they are almost negligible.) Masses pull on each other, and a very large mass such as that of the Earth pulls very hard on things nearby. You would weigh much more on Jupiter than on Earth because Jupiter is so huge. You would weigh much less on the Moon because it is only about one-sixth the mass of Earth. Weight is related to the mass of the brick (or the person) and the mass of whatever is pulling it down on a spring scale — which may be smaller than the smallest moon in the solar system or larger than the Sun.\nMass, not weight, can be transformed into energy. Another way of expressing this idea is to say that matter can be transformed into energy. Units of mass are used to measure the amount of matter in something. The mass or the amount of matter in something determines how much energy that thing could be changed into.\nEnergy can also be transformed into mass. 
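The point in the Q&A above, that a force at an angle to the motion does less work, is the standard relation W = F·d·cos θ; the force, distance, and angles below are illustrative:

```python
import math

def work_done(force, distance, angle_deg=0.0):
    # W = F * d * cos(theta): only the force component along the motion counts.
    return force * distance * math.cos(math.radians(angle_deg))

print(work_done(10.0, 5.0, 0.0))             # 50.0 J along the force
print(round(work_done(10.0, 5.0, 60.0), 6))  # 25.0 J at 60 degrees
print(round(work_done(10.0, 5.0, 90.0), 6))  # 0.0 J: force applied, no work done
```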
If you were pushing a baby buggy at a slow walk and found it easy to push, but pushed it at a fast walk and found it harder to move, then you would wonder what was wrong with the baby buggy. Then if you tried to run and found that moving the buggy at any faster speed was like pushing against a brick wall, you would be very surprised. The truth is that when something is moved then its mass is increased. Human beings ordinarily do not notice this increase in mass because at the speed humans ordinarily move the increase in mass is almost nothing.\nAs speeds get closer to the speed of light, then the changes in mass become impossible not to notice. The basic experience we all share in daily life is that the harder we push something like a car the faster we can get it going. But when something we are pushing is already going at some large part of the speed of light we find that it keeps gaining mass, so it gets harder and harder to get it going faster.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-1", "d_text": "And if big-bang should somehow exist, why is there no free-energy now?\nGravity pulling objects in?\nDoes the sun move when the earth revolves around it? Is this movement immediate or does it\ndepend on the speed of light? If it is not immediate it would mean some energy is lost!\n(just like the proton and electron example)\nI'm no specialist in gravity to answer this question, but any answer is interesting.\nEven if I look at the Higgs-field as a carrier of gravity the problem is similar.\nAccording to my theory there must be some other phenomenon that compensates or prevents\nthis energy loss if it happens. Some kind of expansion maybe?\nWhat about black-holes (if they exist), what is there to conserve energy?", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "As the gas cools, energy is delivered as propulsion (see the sections “Temperature” and “Euler’s equation”).
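The two statements in the mass-energy chunk above, mass converting to energy via E = mc² and mass increasing with speed, can be put into numbers; the 1 g and 70 kg figures are illustrative:

```python
C = 299_792_458.0   # speed of light, m/s

def rest_energy(mass_kg):
    return mass_kg * C**2            # E = mc²

def moving_mass(rest_mass, v):
    # Mass increase with speed, as described in the text:
    # m = m0 / sqrt(1 - v²/c²)
    return rest_mass / (1.0 - (v / C)**2) ** 0.5

print(rest_energy(0.001))            # ~9e13 J locked up in a single gram
print(moving_mass(70.0, 3.0))        # walking speed: the increase is unmeasurable
print(moving_mass(70.0, 0.99 * C))   # ~496 kg at 99% of light speed
```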
We estimate the propulsion energy in 2 situations:\n(1) duct, moving in a straight line;\nFirst, write the conservation of linear momentum\nand let delta t --> 0. We get the thrust equation\nwhere Fext is an external force resisting the motion of the system, M_dot is negative because the system is losing mass as it moves and u is the velocity of the exiting gas with respect to the stationary frame of reference. The second term is known as “thrust”. When u=0 and v=const=c we get an expression for the propulsion energy\n(2) rotating duct.\nFirst, write the conservation of angular momentum\nand let delta t --> 0. We get the thrust equation\nwhere tau_ext is an external torque due to the resistance of the surroundings; the second term in the RHS represents rotational thrust; M_dot is negative since the system is losing mass during its motion. Again, if v=const=c the thrust is precisely counterbalanced by the external resistance and the propulsion energy turns out to be the same as above.\nThis formula shows that a gaseous mass m, initially at rest in one frame of reference, makes the transition, at the expense of its own stagnation enthalpy, to a state of rest in another frame of reference, where the 2 reference frames move with respect to each other with constant linear velocity c. Thus, the above formula stems from the physics of a system with variable mass.\nIf we perform temperature analysis, the exact same formula comes out as the result. The released thrust energy is given through the decrease of total temperature\nbecause, as we saw in the “Temperature” section of this site,\nOne can call the mc^2 formula an “apparent mass-energy equivalence”, since to an observer in the starting frame where m is initially at rest, it seems that the removal of this mass leads to the liberation of energy. In fact, the mass only transitions between 2 frames of reference and is not equivalent to energy.
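The thrust term in the equations above, −Ṁ·u, can be evaluated for illustrative numbers; the propulsion-energy expressions themselves are elided in the text, so only the thrust is computed here:

```python
def thrust(mass_flow_rate, exhaust_speed):
    # Thrust = -M_dot * u.  M_dot is negative (the system loses mass),
    # so the resulting thrust is positive, opposing the exiting gas.
    return -mass_flow_rate * exhaust_speed

# System losing 2 kg/s of gas at u = 500 m/s (illustrative numbers):
print(thrust(-2.0, 500.0))   # 1000.0 N
```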
It releases some of its pressure and thermal energy in the transition.\nThe information contained in this site is based on the following research articles written by Jeliazko G Polihronov and collaborators:", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-82", "d_text": "Instead, I suggest that any inventor who believes he or she has a free-energy device build that device and demonstrate it openly for the physics community. Take it to an American Physical Society conference and present it there. Let everyone in the audience examine it closely. Since anyone can join the APS and any APS member can talk at any major APS conference, there is plenty of opportunity. If someone succeeds in convincing the physics community that they have a true free-energy machine, more power to them (no pun intended). But given the absence of any observed failure of time-translation symmetry, and therefore the steadfast endurance of energy conservation laws, I don't expect any successful devices.\nYou're both right about temperature being associated with kinetic energy in molecules: the more kinetic energy each molecule has, the hotter the substance (e.g. a person) is. But not all kinetic energy "counts" in establishing temperature. Only the disordered kinetic energy, the tiny chunks of kinetic energy that belong to individual particles in a material, contributes to that material's temperature. Ordered kinetic energy, such as the energy in a whole person who's running, is not involved in temperature. Whether an ice cube is sitting still on a table or flying through the air makes no difference to its temperature. It's still quite cold.\nFriction's role with respect to temperature is in raising that temperature. Friction is a great disorderer. If a person running down the track falls and skids along the ground, friction will turn that person's ordered kinetic energy into disordered kinetic energy and the person will get slightly hotter.
No energy was created or destroyed in the fall and skid, but lots of formerly orderly kinetic energy became disordered kinetic energy—what I often call "thermal kinetic energy."\nThe overall story is naturally a bit more complicated, but the basic idea here is correct. Once energy is in the form of thermal kinetic energy, it's stuck... like a glass vase that has been dropped and shattered into countless pieces, thermal kinetic energy can't be entirely reconstituted into orderly kinetic energy. Once energy has been distributed to all the individual molecules and atoms, getting them all to return their chunks of thermal kinetic energy is hopeless. Friction, even at the molecular level, isn't important at this point because the energy has already been fragmented and the most that any type of friction can do is pass that fragmented energy about between particles.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-179", "d_text": "But there are inefficiencies in your walking process that lead you to waste energy as heat in your own body. So the energy you convert from food energy to gravitational potential energy in climbing the stairs is fixed, but the energy you use in carrying out this procedure depends on how you do it. The extra energy you use mostly ends up as thermal energy, but some may end up as sound or chemical changes in the staircase, etc.\nActually, some bearings are dry (no grease or oil) and still last a very long time. The problem is that the ideal touch-and-release behavior is hard to achieve in a bearing. The balls or rollers actually slip a tiny bit as they rotate and they may rub against the sides or retainers in the bearing. This rubbing produces wear as well as wasting energy.
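The stair-climbing energy balance described above can be sketched; the body mass, stair height, and 25% muscle efficiency are illustrative assumptions, not figures from the text:

```python
g = 9.81
mass = 70.0        # kg (assumed)
height = 3.0       # m of stairs climbed (assumed)
efficiency = 0.25  # fraction of food energy ending up as PE (assumption)

pe_gained = mass * g * height         # fixed: m*g*h
food_energy = pe_gained / efficiency  # depends on how you climb
wasted = food_energy - pe_gained      # mostly becomes thermal energy

print(round(pe_gained, 1))   # 2060.1 J
print(round(wasted, 1))      # 6180.3 J dissipated as heat, sound, etc.
```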
To reduce this wear and sliding friction, most bearings are lubricated.\nIf you brake your car too rapidly, the force of static friction between the wheels and the ground will become so large that it will exceed its limit and the wheels will begin to skid across the ground. Once skidding occurs, the stopping force becomes sliding friction instead of static friction. The sliding friction force is generally weaker than the maximum static friction force, so the stopping rate drops. But more importantly, you lose steering when the wheels skid. An anti-lock braking system senses when the wheels suddenly stop turning during braking and briefly releases the brakes. The wheel can then turn again and static friction can reappear between the wheel and the ground.\nWhen a ball bounces, some of its molecules slide across one another rather than simply stretching or bending. This sliding leads to a form of internal sliding friction and sliding friction converts useful energy into thermal energy. The more sliding friction that occurs within the ball, the less energy the ball stores for the rebound and the worse the ball's bounce. The missing energy becomes thermal energy in the ball and the ball's temperature increases.\nActually, both a mouse ball and a bowling ball will bounce somewhat if you drop them on a suitably hard surface. It does have to do with elasticity. During the impact, the ball's surface dents and the force that dents the ball does work on the ball—the force on the ball's surface is inward and the ball's surface moves inward. Energy is thus being invested in the ball's surface. What the ball does with this energy depends on the ball.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-6", "d_text": "As I already wrote in a previous post, matter can be characterized as a\nsystem that contains particles with non-zero rest mass.\nTaking the simplest atom, viz.
the hydrogen atom, as an example, this consists of a proton (that itself enjoys an internal structure by being composed of quarks that are held together by the exchange of gluons) and one electron, both interacting via the exchange of virtual photons. The latter are the messenger particles that convey the electromagnetic force; photons have zero rest mass, while the electron and the proton enjoy a nonzero rest mass. The mass of the latter is about 1840 times the mass of the former, so that most of the rest mass of the atom is concentrated in its nucleus.\nLet me try to provide a bit more information to some statements in postings above:\nMatter can neither be created nor destroyed\nNo, that is not true. If matter and antimatter collide, e.g. hydrogen and antihydrogen (consisting of a (negatively charged) antiproton and a (positively charged) positron), then both annihilate into radiation, i.e., photons. However, the assertion above becomes correct if \"matter\" is replaced by \"mass\", so:\nMass can neither be created nor destroyed\nIt is the motion of the electrons that creates the illusion of solid matter. Well, this \"motion\" must be regarded as being very different from the orbiting of, e.g., planets around a central star. If the electrons in an atom were really moving, then (since they are charged particles) this would imply that they would constantly produce electromagnetic radiation, so atoms would lose energy by radiating all the time and thus couldn't be stable. Actually, the probability distribution of electrons in an atom is static and thus does not change in time (unless perturbed by \"external\" effects). It is the complex phase of the electronic wavefunction that oscillates in time. On the other hand, it is true that the kinetic energy of the electrons is required to render atomic systems stable.
Without this kinetic energy, the electrostatic attraction between the positive nucleus and the negatively charged electrons would make the latter collapse into the nucleus. The structure of atoms and molecules and thus of \"ordinary\" matter is also significantly influenced by the fact that electrons are fermions and thus the Pauli exclusion principle holds which forbids any two electrons to occupy the same quantum state.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-12", "d_text": "The ice will melt, and eventually the glass of water will reach a state of equilibrium with the surrounding environment (it will stabilize at room temperature) at which point no potential difference in the field remains.\nAll field motion occurs as the energy field attempts to maintain equilibrium in field density. Ultimately the field must correspond to the distribution described by the inverse square law, however since the field is ‘quantized’ (it has become many fields) this result becomes impossible to achieve in practice, and the result is then perpetual field motion (the field becomes like Don Quixote, perpetually tilting at windmills). In the diagram above we illustrate the magnetic field and the electric field ‘forces’, which are at right angles to one another. The purpose of the magnetic field is to respond to changes in field density (potential differences) doing so in such a way as to achieve the desired final result of field equilibrium (what the field considers a correct density distribution of energy). The role of the electric field is to transfer energy within the field.
You never have such a transfer of energy (a current) without a corresponding magnetic field being generated at the same time, and the reverse is also true, for wherever a magnetic field is present, currents of energy are on the move within a field.\nThe Hoover Dam\nThe Hoover Dam uses ‘momentum’ in the form of the motion of moving water allowed to fall down upon and spin electromagnetic turbine generators to generate electrical current, which is just momentum once again, this time in the form of flying electrons moving through a circuit in some wires (momentum having been transferred from a moving current of water to a moving current of electrons).\nIf you pick up a rock and then drop it on the ground, it will impact the ground and perhaps kick up a small cloud of dust. If you were to drop the same rock from very high in the atmosphere, the rock would begin to fall towards the surface of the earth, and the rock would be constantly accelerating as it descended. Finally the enormous velocity of the falling rock would generate so much air friction that the rock would burst into flames and then when it impacted the surface it would generate an explosive impact. By dropping the same rock from a high altitude it became a highly energetic, high velocity projectile.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-9", "d_text": "Ultimately, all energy can be translated into an improbably concentrated (compacted) distribution (dance) of photons. I suspect that all the other manifestations of potential, kinetic, heat, chemical, radiation and etc energy are complex manifestations of this basic \"photon into matter\" compaction (see the section below entitled, \"even wilder speculation here\"). In quantum foam, there is a rapid (which equates to a very short distance) return to the \"mean\".
The \"mean\" should be represented by utter nothingness – \"nihil\" – \"zilch\" – \"non-existence\" (a virtual state) with quantum foam lying close by on either statistical side. (Indeed, the uncertainty principle seems to encourage a boiling, yocto-scopic jitteriness either side of nothing. Note that micro-, nano-, pico-, femto-, yocto- are increasingly small. There is more on this general principle in the speculation section.)\nWithin the closed system of our theoretically perfect box, no energy is added or lost; our definition has proclaimed this. All that happens is that the energy is re-distributed evenly throughout the box so that no usable \"macroscopic\" PDs remain that are then available to be tapped for our wilful purposes. A useful illustration is to think of a box containing oxygen and a jar of diesel, both in matching quantities that will ensure complete combustion once the diesel is ignited. This initial system (effectively two isolated systems, oxygen and diesel, before ignition) has low entropy and remains in this state until it is ignited (at which point the low to high entropy flux begins). Now, all the diesel burns up reacting with all the oxygen. It now transforms into a state of high entropy where no new work can be \"tapped\" within the box because equilibrium has been reached and no workable PDs are left (even though the contents of the box are now much hotter). Similarly, 100ml of deuterium in a very large box represents a potential state of very low entropy (sometimes described as negentropy, though the qualifier high- or low- should also be added to this term) which, if a sustained fusion reaction could be initiated, would lead to a flow to a state of high entropy even though the contents of the box have become extremely hot and the pressure very high – or we have decreed that no energy flows out of our perfect system.", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "Yes, energy is a vector quantity.
There is an energy component to everything. This is a vector quantity because we cannot measure the force of gravity. In physics, force is a vector quantity. A force is the product of a vector and a scalar quantity called the mass of a body. This is true of all of our actions. When we move a body, say a basketball, there is a force on it. We have a force on the basketball.\nThe weight of a body is the same as the mass of a body. If we are on a mountain, we have a force on the mountain. We do not have a force on the mountain. We have a force on a man. These are the same forces that we have on our enemies.\nEnergy is a vector quantity. This means that the magnitude of energy depends on the direction it is measured in. The energy of an object is proportional to its mass and the speed at which it moves. (For example, if your finger is moving at a speed of one foot per second then the energy of your finger will be one joule per second. If your finger moves at a speed of one foot per second then the energy will be one joule per second.\nEnergy is an important factor in the creation of an object. It’s the energy of the object itself creating the object. The energy of an object is the energy stored in the object. This is why objects with large amounts of energy can explode or explode with a destructive force. An object with a small amount of energy will probably disintegrate. The energy of a material that is created by a machine is the energy that goes into the machine itself.\nThe energy that is created by a machine is called ’residual energy’.\nNow, the energy of a material is not a vector quantity. It is not a momentum. It is not something that we think is really going to affect our lives and that we have to wait for it to get out of the machine. The energy of a material is the energy that gets released as the material is created. It is the energy that comes from the “maker” of the material.\nThe energy that is produced by a material is called energy storage.
Energy storage is the electricity that is produced when a material is made by a machine. The energy storage of a material is called capacity. The capacity that is produced by a material is called capacity.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-20", "d_text": "When no more potential difference exists between your rocket’s momentum field and the surrounding earth momentum field, the centrifugal force vanishes and you have lost all of your ‘conserved momentum’ and the time has come for the centripetal force to begin to do its job of getting your rocket properly sorted within a sorted mass field according to the requirements of the Inverse Square Law. Rockets are quite dense, unlike weather balloons and so therefore your rocket must be sorted somewhere below the surface of the earth. We suggest here that even a ‘mass field’ composed of matter must be sorted according to the rules of the Inverse Square Law (light helium at the top of the properly sorted field and dense iron somewhere below the visible surface of the earth).\nNow once your rocket has come to a full stop the centripetal force will now begin to ‘sort’ the rocket according to density for the rocket will be found to be parked on the spot where a helium filled weather balloon is parked, which is the wrong place to park some rocket (this being a low density region of the mass field and a rocket, as we know, is much more dense than a helium balloon and therefore is parked on the wrong spot). The rocket then is pulled by this centripetal force to a lower region of the field, with higher density, which is where it belongs, and once again this happens at a certain fixed rate, one quantum of momentum energy at a time.
At each step the rocket momentum field is brought into equilibrium with the density of the earth’s momentum field, which requires a transfer of momentum from the earth field to the rocket field.\nWe can see here that motion is the mechanism by which ‘transfer of momentum’ occurs and that motion and momentum transfer are therefore equivalent. This then leads us to draw the conclusion that ‘the so called law of the conservation of momentum’ is a myth for whenever motion is occurring through a gradient and constantly changing field, transfer of momentum quanta is occurring at the same time. In this way the momentum field of the rocket and the surrounding momentum field of the earth act like two cells of a battery and energy is in motion to constantly maintain field equilibrium.\nNow someone might suggest that ‘conserved momentum’ is only impossible in some ‘gravity field’, this being due to the fact that gravity will suck the conserved momentum right out of some object.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-2", "d_text": "This is almost always how we ...\nIn this case mass is not conserved and the mass of an object is not the sum of the masses of its parts. Thus, the mass of a box of light is more than the mass of the box and the sum of the masses of the photons (the latter being zero). Relativistic mass is equivalent to energy, which is why relativistic mass is not a commonly used term nowadays.\nEating six meals per day will ensure that your body always has the nutrients it needs to repair and build more muscle. You will never be in starvation mode. I will be honest, eating six meals a day takes planning and dedication. However, not for Mark Boyle who has turned his life into a radical experiment. Mark Boyle was born in 1979 in Ireland and moved to Great Britain after Mark Boyle began to realise that many of the world's problems are just symptoms of a deeper problem.
He thought that money gave people the illusion of...\nThe more intensely and flawlessly his techniques duplicate empirical objects, the easier it is today for the illusion to prevail that the outside world is the straightforward continuation of that presented on the screen. This purpose has been furthered by mechanical reproduction since the lightning takeover by the sound film.\nDec 21, 2020 · A body has kinetic energy when it is in motion. If a body of mass m is moving at a velocity v, then: Kinetic energy = (1/2)mv^2. The kinetic energy of a body at velocity v is the work that must be done on the body to accelerate it to that velocity. Example: A rifle bullet of mass 4 grams is moving at a velocity of 1200 m/s. Calculate its energy. Momentum is inertia in motion. It is the product of an object's mass and its velocity. Which has more momentum? An 80,000 pound big rig traveling 2 mph or a 4,000 pound SUV traveling 40 mph? circle one Big Rig SUV same Soccer Kicks, Slap Shots, and Egg Toss What is it that changes an object's momentum? an impulse. It is the product of\nThe boy and toboggan together have a mass of 50 kg, and the slope is at an angle of 30° to the horizontal.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-154", "d_text": "However, the energy associated with rest mass is hard to release and only tiny fractions of it can be obtained through conventional means. Chemical reactions free only parts per billion of a material's rest mass as energy and even nuclear fission and fusion can release only about 1% of it. But when equal quantities of matter and antimatter collide, it's possible for 100% of their combined rest mass to become energy. Since two metric tons is 2000 kilograms and the speed of light is 300,000,000 meters/second, the energy in Einstein's formula is 1.8x10^20 kilogram-meters^2/second^2 or 1.8x10^20 joules.
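The worked examples in the chunks above (the rifle-bullet kinetic energy, the big-rig/SUV momentum question, and the two-metric-ton E = mc^2 figure) can be checked with a few lines of arithmetic. This is an illustrative sketch only; all the numbers are the ones quoted in the surrounding text:

```python
# Rifle bullet: KE = (1/2) * m * v^2, using the figures quoted above.
m_bullet = 0.004                  # 4 grams, converted to kilograms
v_bullet = 1200.0                 # muzzle velocity in m/s
kinetic_energy = 0.5 * m_bullet * v_bullet**2
print(kinetic_energy)             # about 2880 joules

# Momentum comparison: p = m * v. Mixed units (pounds, mph) are fine
# here because the same units appear on both sides of the comparison.
p_big_rig = 80_000 * 2            # 160,000 lb*mph
p_suv = 4_000 * 40                # 160,000 lb*mph
print(p_big_rig == p_suv)         # True: the momenta are the same

# Two metric tons of matter and antimatter: E = m * c^2.
m_rest = 2000.0                   # kilograms
c = 3.0e8                         # m/s, the rounded value the text uses
print(m_rest * c**2)              # 1.8e20 joules, matching the text
```

Dividing 1.8x10^20 J by 100 W gives 1.8x10^18 seconds, roughly 57 billion years, which is where the quoted light-bulb figure comes from.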
To give you an idea of how much energy that is, it could keep a 100-watt light bulb lit for 57 billion years.\nWhile it's true that microwaves twist water molecules back and forth, this twisting alone doesn't make the water molecules hot. To understand why, consider the water molecules in gaseous steam: microwaves twist those water molecules back and forth but they don't get hot. That's because the water molecules begin twisting back and forth as the microwaves arrive and then stop twisting back and forth as the microwaves leave. In effect, the microwaves are only absorbed temporarily and are reemitted without doing anything permanent to the water molecules. Only by having the water molecules rub against something while they're twisting, as occurs in liquid water, can they be prevented from reemitting the microwaves. That way the microwaves are absorbed and never reemitted—the microwave energy becomes thermal energy and remains behind in the water.\nVisualize a boat riding on a passing wave—the boat begins bobbing up and down as the wave arrives but it stops bobbing as the wave departs. Overall, the boat doesn't absorb any energy from the wave. However, if the boat rubs against a dock as it bobs up and down, it will convert some of the wave's energy into thermal energy and the wave will have permanently transferred some of its energy to the boat and dock.\nYes, VCRs work on the same principle as an audio tape player: as a magnetized tape moves past the playback head, that tape's changing magnetic field produces a fluctuating electric field.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-6", "d_text": "Let’s say that these are the molecules, maybe this is, this one’s the purple one right over here. You have the same two molecules here. Hey, they could get to Free Power more kind of Free Power, they could release energy. But over here, you’re saying, “Well, look, they could.
” The change in enthalpy is negative.\nImpulsive gravitational energy absorbed and used by light weight small ball from the heavy ball due to gravitational amplification + standard gravity (Free Power. Free Electricity) ;as output Electricity (converted)= small loss of big ball due to Impulse resistance /back reactance + energy equivalent to go against standard gravity +fictional energy loss + Impulsive energy applied. ” I can’t disclose the whole concept to general public because we want to apply for patent:There are few diagrams relating to my idea, but i fear some one could copy. Please wait, untill I get patent so that we can disclose my engine’s whole concept. Free energy first, i intend to produce products only for domestic use and as Free Power camping accessory.\nI might have to play with it and see. Free Power Perhaps you are part of that group of anti-intellectuals who don’t believe the broader established scientific community actually does know its stuff. Ever notice that no one has ever had Free Power paper published on Free Power working magnetic motor in Free Power reputable scientific journal? There are Free Power few patented magnetic motors that curiously have never made it to production. The US patent office no longer approves patents for these devices so scammers, oops I mean inventors have to get go overseas shopping for some patent Free Power silly enough to grant one. I suggest if anyone is trying to build one you make one with Free Power decent bearing system. The wobbly system being shown on these recent videos is rubbish. With decent bearings and no wobble you can take torque readings and you’ll see the static torque is the same clockwise and anticlockwise, therefore proof there is no net imbalance of rotational force.\nThis simple contradiction dispels your idea. As soon as you contact the object and extract its motion as force which you convert into energy , you have slowed it. 
The longer you continue the more it slows until it is no longer moving. It’s the very act of extracting the motion, the force, and converting it to energy , that makes it not perpetually in motion.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-1", "d_text": "Generally, when we’re talking of how much energy we’ve got, we don’t look at the whole amount available, since that would produce ridiculous numbers when we count up the number of joules in the various bits of stuff we’re looking at. Instead, what we look at is a local excess of energy over another place and call this what we’ve got. The total energy in a litre of Diesel is massive, but we look at what we get from combustion of it (that converts a very small quantity into KE in the form of heat) and how much more heat we have than the ambient temperature. To convert that heat energy into work we need to let that excess KE move into the ambient and we harness that movement to do the work of moving our car from one place to another. Since we start from stationary and end stationary (and normally end up in the same parking-spot at the end of the day), in fact all that KE we’ve liberated by burning ends up as heat in the atmosphere and we’ve actually done no work at all. Yep, it gets a bit complex when you really follow where all the energy goes to and what has really happened.\nI’ll try to restate all this a bit more simply, though. The sum of KE and PE remains constant no matter what you do. We get work done when we harness the movement of energy from one place to another. Energy is conserved, but work is not even though we use the same units to measure it.\nA local concentration of kinetic energy will naturally spread out until the energy density is even and without any local high concentrations. This can be seen using water – pour a glass of it into a bowl and you end up with a flat surface where no point is higher than another. 
Pour the glass of water through a turbine and you can get work done in the process, but you don’t have to do this so the work obtained can be anywhere from zero to (almost) the available excess energy when the glassful gets poured.\nPart of Quantum Theory says that there is a residual amount of kinetic energy left even at absolute zero, and that things will thus still move. Some people think therefore that this Zero-Point Energy (ZPE) should be able to be tapped.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-185", "d_text": "Since most light sources put more energy in the red portion of the spectrum than in the blue portion of the spectrum, the blue cloth absorbs more energy than the red cloth. So the sequence of temperatures you observed is the one you should expect to observe.\nOne final note: most light sources also emit invisible infrared light, which also carries energy. Most of the light from an incandescent lamp is infrared. You can't tell by looking at a piece of cloth how much infrared light it absorbs and how much it reflects. Nonetheless, infrared light affects the cloth's temperature. A piece of white cloth that absorbs infrared light may become surprisingly hot and a piece of black cloth that reflects infrared light may not become as hot as you would expect.\nA roller coaster is a gravity-powered train. Since it has no engine or other means of propulsion, it relies on energy stored in the force of gravity to make it move. This energy, known as \"gravitational potential energy,\" exists because separating the roller coaster from the earth requires work—they have to be pulled apart to separate them. Since energy is a conserved quantity, meaning that it can't be created or destroyed, energy invested in the roller coaster by pulling it away from the earth doesn't disappear. It becomes stored energy: gravitational potential energy. 
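The gravitational potential energy just described grows in proportion to mass and height (PE = m*g*h), and each descent trades most of it for kinetic energy, minus losses to heat and sound. A small numerical sketch; the train mass, hill height, and 5% loss figure below are made-up illustrative values, not numbers from the text:

```python
G = 9.8  # gravitational acceleration near the earth's surface, m/s^2

def hill_energies(mass_kg, height_m, loss_fraction):
    """Potential energy at the crest of a hill, and the kinetic energy
    left at the bottom after some energy is lost to heat and sound."""
    pe_top = mass_kg * G * height_m
    ke_bottom = pe_top * (1.0 - loss_fraction)
    return pe_top, ke_bottom

# Hypothetical 500 kg train on a 60 m first hill, losing 5% per descent.
pe, ke = hill_energies(500.0, 60.0, 0.05)
print(pe)  # about 294,000 J of stored gravitational potential energy
print(ke)  # about 279,300 J of kinetic energy arriving at the bottom
```

Because the losses repeat on every hill, later hills must always be lower than the first one, which is exactly the behavior the surrounding passage describes.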
The higher the roller coaster is above the earth's surface, the more gravitational potential energy it has.\nSince the top of the first hill is the highest point on the track, it's also the point at which the roller coaster's gravitational potential energy is greatest. Moreover, as the roller coaster passes over the top of the first hill, its total energy is greatest. Most of that total energy is gravitational potential energy but a small amount is kinetic energy, the energy of motion.\nFrom that point on, the roller coaster does two things with its energy. First, it begins to transform that energy from one form to another—from gravitational potential energy to kinetic energy and from kinetic energy to gravitational potential energy, back and forth. Second, it begins to transfer some of its energy to its environment, mostly in the form of heat and sound. Each time the roller coaster goes downhill, its gravitational potential energy decreases and its kinetic energy increases. Each time the roller coaster goes uphill, its kinetic energy decreases and its gravitational potential energy increases. But each transfer of energy isn't complete because some of the energy is lost to heat and sound.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-28", "d_text": "Retrieved 2008-01-04.\n- Davis, Doug. \"Conservation of Energy\". General physics. Retrieved 2008-01-04.\n- Wandmacher, Cornelius; Johnson, Arnold (1995). Metric Units in Engineering. ASCE Publications. p. 15. ISBN 0-7844-0070-9.\n- Corben, H.C.; Philip Stehle (1994). Classical Mechanics. New York: Dover publications. pp. 28–31. ISBN 0-486-68063-0.\n- Cutnell, John D.; Johnson, Kenneth W. (2003). Physics, Sixth Edition. Hoboken, New Jersey: John Wiley & Sons Inc. ISBN 0471151831.\n- Feynman, Richard P.; Leighton; Sands, Matthew (2010). The Feynman lectures on physics. Vol. I: Mainly mechanics, radiation and heat (New millennium ed.). New York: BasicBooks. 
ISBN 978-0465024933.\n- Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (2010). The Feynman lectures on physics. Vol. II: Mainly electromagnetism and matter (New millennium ed.). New York: BasicBooks. ISBN 978-0465024940.\n- Halliday, David; Resnick, Robert; Krane, Kenneth S. (2001). Physics v. 1. New York: John Wiley & Sons. ISBN 0-471-32057-9.\n- Kleppner, Daniel; Kolenkow, Robert J. (2010). An introduction to mechanics (3. print ed.). Cambridge: Cambridge University Press. ISBN 0521198216.\n- Parker, Sybil (1993). \"force\". Encyclopedia of Physics. Ohio: McGraw-Hill. p. 107. ISBN 0-07-051400-3.\n- Sears F., Zemansky M. & Young H. (1982). University Physics. Reading, Massachusetts: Addison-Wesley. ISBN 0-201-07199-1.\n- Serway, Raymond A. (2003). Physics for Scientists and Engineers.
Its object is to obtain an order or rule directing some act to be done in favor of the applicant\nchange of pitch in successive sounds, whether in the same part or in groups of parts\na puppet show or puppet\nto make a significant movement or gesture, as with the hand; as, to motion to one to take a seat\nto make proposal; to offer plans\nto direct or invite by a motion, as of the hand or head; as, to motion one to a seat\nto propose; to move\nIn physics, motion is a change in position of an object with respect to time and its reference point. Motion is typically described in terms of displacement, velocity, acceleration, and time. Motion is observed by attaching a frame of reference to a body and measuring its change in position relative to another reference frame. A body which does not move is said to be at rest, motionless, immobile, stationary, or to have constant position. An object's motion cannot change unless it is acted upon by a force, as described by Newton's first law. An object's momentum is directly related to the object's mass and velocity, and the total momentum of all objects in a closed system does not change with time, as described by the law of conservation of momentum. As there is no absolute frame of reference, absolute motion cannot be determined. Thus, everything in the universe can be considered to be moving. More generally, the term motion signifies a continuous change in the configuration of a physical system. For example, one can talk about motion of a wave or a quantum particle where the configuration consists of probabilities of occupying specific positions.\nU.S.", "score": 8.086131989696522, "rank": 99}]} {"qid": 5, "question_text": "Who did Carlos Alcaraz defeat at Wimbledon 2023 to win his first Grand Slam title?", "rank": [{"document_id": "doc-::chunk-1", "d_text": "Carlos Alcaraz is the third youngest Wimbledon title winner. The big four were dominating in the tennis world but now there is a young five. 
The Serb tasted defeat against the young Spaniard, Carlos Alcaraz, in a five-set match with a score line of 1-6, 7-6(8-6), 6-1, 3-6, 6-4.\nNovak shared some words on the new champion:\nI haven’t played a player like him ever, to be honest. Roger and Rafa had their obvious strengths and weaknesses. Carlos Alcaraz is a very complete player.", "score": 50.441330122033115, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Carlos Alcaraz was introduced to Roger Federer at a practice match by his coach Juan Carlos Ferrero. Carlos Alcaraz’s first meeting with Roger Federer was in 2019. Alcaraz was a fresh face to Federer, as he was not yet well known at the time.\nCarlos Alcaraz now has the 2023 Wimbledon title, making him one of the youngest players to hold this prestigious trophy. The Spaniard was jubilant about his win at Wimbledon this year. Alcaraz fulfilled his promise that he would win, and in doing so he halted Novak Djokovic’s bid for a fifth straight Wimbledon title and ended his streak of 45 wins on Centre Court. By taking the title from Novak Djokovic, the world No. 1 Alcaraz further justified his ranking.\nIn 2019, Roger Federer was gearing up for a practice session at Wimbledon. Prior to his quarterfinal match against Kei Nishikori, Federer enlisted the help of the Spanish tennis star Juan Carlos Ferrero for his warmup match. At the end of the warmup session, Ferrero introduced Federer to Spain’s fresh up-and-coming star, Carlos Alcaraz.\nMeeting the Swiss legend may well have been Alcaraz’s most exciting moment as a young teenager, and playing against a living legend at a young age gave him even more drive to pursue his dream of becoming a champion.\nCarlos Alcaraz first spoke publicly about the encounter in an interview with Tennis Majors in 2020.
Carlos Alcaraz provided them with many answers, including:\n“It went extremely well, it was a unique experience for me,” Alcaraz said.\n“I was very pleased afterwards and I learnt a lot.\n“At the beginning I was a bit nervous since a lot of people were watching us, but as the practice went on I started to relax more and ultimately I enjoyed it very much.”\nCarlos Alcaraz’s win at the 2023 Wimbledon\nThis match was between Carlos Alcaraz and Novak Djokovic, the No. 1 vs. the No. 2. It was the third-longest final in Wimbledon history, lasting nearly five hours. Wimbledon now also has a new face for the title, and a pretty young one at that.", "score": 48.47202808177144, "rank": 2}, {"document_id": "doc-::chunk-1", "d_text": "Accomplishments: A Trailblazer Shattering Records\nIn his meteoric rise, Alcaraz has already etched his name in tennis history. Claiming four ATP Tour titles, including two illustrious Masters 1000 crowns, he defied the notion of age barriers. Stepping into the footsteps of his idol, Rafael Nadal, Alcaraz became the youngest player since 2005 to break into the coveted top 10 rankings.\nHis relentless pursuit of greatness led him to the quarterfinals of the French Open and the semifinals of the US Open, announcing his arrival among the tennis elite. With every milestone surpassed, Alcaraz continues to redefine what is possible for a young prodigy with limitless potential.\nWimbledon 2023: A Historic Triumph for the Ages\nThe hallowed lawns of Wimbledon bore witness to a defining moment in tennis history. In 2023, the world stood in awe as Carlos Alcaraz claimed his maiden Grand Slam title, toppling the mighty Novak Djokovic in a gripping five-set battle.\nThe echoes of cheers resounded through the grounds as Alcaraz’s victory became one of the greatest upsets the tournament had ever seen. This resounding triumph solidified his place as one of the world’s finest athletes.
With eyes fixed on the horizon, Alcaraz’s victory at Wimbledon served as a testament to his unwavering spirit, inspiring a generation of aspiring tennis champions.\nFuture Prospects: A Path Paved with Promise\nAs Alcaraz continues his tennis odyssey, the world awaits the unfolding of an extraordinary career. Though still in the dawn of his professional journey, his rapid ascent and exceptional talent have captivated fans and experts alike. Widely regarded as one of the most thrilling young players in the world, Alcaraz possesses the ingredients necessary to become the next luminary of men’s tennis. As he refines his skills and seeks greater consistency, the tennis realm eagerly anticipates the magnificent heights he will scale in the years to come. The sky’s the limit for this young maestro, and with every swing of his racket, he solidifies his status as the beacon of tennis’s glorious future.\nWrapping It Up\nCarlos Alcaraz, a name destined to be etched in the annals of tennis history. At just 19 years old, this prodigious talent has already accomplished what many only dream of.", "score": 47.80201300396806, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "The 2022 U.S. Open may go down in history as the arrival of Carlos Alcaraz.\nAt 19 years old, Spain’s phenom beat Norwegian Casper Ruud in the final to become the youngest Grand Slam men’s singles champion since Rafael Nadal won the first of his 22 majors at the 2005 French Open. Alcaraz also became the first teenager to ascend to No. 1 in the ATP rankings, which began in 1973.\nAlcaraz earned his first major title the hard way, becoming the third man in the Open Era to win back-to-back-to-back five-set matches en route to a major title. 
He spent 23 hours, 39 minutes on court over seven matches, the most for any man in a single major since time records began being kept in 1999.\nNadal’s quest for his 23rd major — and to move two clear of Novak Djokovic for the most in men’s history — ended at the hands of American Frances Tiafoe in the round of 16. Tiafoe became the first American born in 1989 or later to beat Nadal, Djokovic or Roger Federer in a major in 31 tries and the first American to make the U.S. Open semifinals since Andy Roddick in 2006 before falling to Alcaraz in an epic.\nDjokovic was ineligible for the U.S. Open because he is unvaccinated against COVID-19. U.S. rules required that any non-U.S. citizen must be fully vaccinated against the coronavirus in order to receive a visa to enter the country.\nFederer, a 20-time major champ, hasn’t played tournament tennis since undergoing a third knee surgery in an 18-month span after a quarterfinal exit at last year’s Wimbledon. He is expected to compete at the Swiss Indoors, his home tournament, in October, and possibly at least Wimbledon next year.\nAustralian Nick Kyrgios followed his breakthrough Wimbledon runner-up by ousting defending U.S. Open champion Daniil Medvedev in the fourth round, then lost a five-setter to another Russian, Karen Khachanov. in the quarterfinals.\nOlympicTalk is on Apple News. Favorite us!Follow @nbcolympictalk\n2022 U.S. Open Men’s Singles Draw", "score": 44.69431650510199, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "Alcaraz had a week to remember at the Caja Magica. Entering the tournament as a top-10 player following his Barcelona triumph two weeks ago, the teenager downed two of the greatest players in the game's history - Rafael Nadal and Novak Djokovic - in successive matches.\nThat made him only the seventh player, and the first on clay, to beat the two all-time greats in consecutive matches in the same tournament. 
Alcaraz then steamrolled defending champion Alexander Zverev in the final.\nIn a lopsided 62-minute match, the 19-year-old dropped only four games as he strolled to the finish line to secure a tour-leading 28th match win and fourth title.\nWith his victory over Zverev, Alcaraz achieved a few important milestones. Here's a look at five of them:\n#1 Carlos Alcaraz becomes first teenager in more than 30 years to beat 3 top-5 players in the same tournament\nCarlos Alcaraz had a giant-killing week in the Spanish capital. A day after turning 19, he beat World No. 4 Rafael Nadal, a five-time Madrid champion, in the quarterfinals. It was his first victory over his illustrious compatriot in three attempts.\nAlcaraz proceeded to dump out World No. 1 and three-time winner Novak Djokovic in a pulsating third-set tie-break to move into his fourth final of the year. A day later, he returned to beat World No. 3 Zverev, becoming the first player to hand the German a straight-sets defeat in the Spanish capital.\nIn the process, Alcaraz became the youngest player, and first teenager, in 32 years (since the start of the ATP tour) to beat three top-five players in the same tournament. Before Alcaraz, Djokovic (2007 Montreal), Pete Sampras (1991 ATP Finals), Andre Agassi (1990 ATP Finals) and Lleyton Hewitt (2001 ATP Finals) were the youngest players to accomplish the feat.\nUnsurprisingly, all five players went on to win the title.\n#2 Carlos Alcaraz becomes youngest player to win Madrid Masters\nIt was a week to remember on many counts for Carlos Alcaraz. In a year of many firsts, Alcaraz added another with his victory at the Caja Magica.", "score": 44.508872626871764, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Defending champion Naomi Osaka of Japan and Greek third seed Stefanos Tsitsipas were both ousted from the US Open by 18-year-olds in epic matches today (Friday) at the Arthur Ashe Stadium in New York. 
Four-time Grand Slam champion Naomi Osaka, two of whose titles came in New York, saw her title defence end in the third round when she was shocked by Canadian Leylah Fernandez 5-7, 7-6 (7/2), 6-4. In turn, third seed and this year’s French Open runner-up Stefanos Tsitsipas was upset by Spain’s Carlos Alcaraz 6-3, 4-6, 7-6 (7-2), 0-6, 7-6 (7-5).\nAlcaraz twice led Tsitsipas by a set and showed maturity well beyond his years, recovering even after failing to take a game in the fourth. Encouraged by the roaring crowd, the talented Spaniard survived a break point at 3-2 down in the deciding set and showed no sign of tension as he held serve to force the final tie-break. In a dazzling display, Alcaraz landed 61 winners and, after more than four hours of play, clinched his third match point with an assured forehand winner. Alcaraz will face German player Peter Gojowczyk in the fourth round.\n“I think without this crowd I haven’t the possibility to win the match,” said Alcaraz. “I was down at the beginning of the fourth set so thank you to the crowd for pushing me up in the fifth… It’s an incredible feeling for me. This victory means a lot to me. It’s the best match of my career, the best win, to beat Stefanos Tsitsipas is a dream come true for me.” Alcaraz is the youngest man in the US Open fourth round since 17-year-old American Michael Chang in 1989 and at any Slam since Ukraine’s Andrei Medvedev in the 1992 French Open.\nIn the Women’s Singles, Fernandez, who will turn 19 on Monday, upset the world No. 3 and defending champion Naomi Osaka at Arthur Ashe Stadium to advance to the fourth round of a grand slam for the first time.
Bale’s top two picks from 2023, and the reigning Wimbledon style champion from 2016.\nKnown for his power and finesse on court, Swiss legend Roger Federer displayed similar grace and authority at the All England Lawn & Tennis Club with this classic ensemble: the beige/sand-coloured suit with stripe shirt and dot tie.\nM.J. Bale version: The Robertson Suit\nWilliam Bradley Pitt is unimpeachable when it comes to style, so if the two-time Oscar winner decides it’s ok to wear a knit polo to No. 1 court, then who are we to judge?\nM.J. Bale version: Dickie Knit Polo\nStill unbeaten at Wimbledon in eight years (in terms of courtside style), Bradley Cooper proves that nothing ages better, nor serves a bigger tailoring ace, than the timeless navy suit, sky blue shirt and navy tie combo.\nM.J. Bale version: Harvey Kingston Suit", "score": 40.25659801884636, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "An easy win over Nuno Borges was followed by trickier triumphs against Spanish compatriots Roberto Bautista Agut and Alejandro Davidovich Fokina.\nChasing down his third title of the season after triumphs at Buenos Aires and Indian Wells, home favourite Alcaraz started impressively.\nThe 19-year-old top seed won the first game without dropping a point, sealing it with an ace, ominously for Evans, who was pulled all over the court.\nAlcaraz broke for a 3-1 lead and the 2022 US Open champion showed little mercy from then on, winning six consecutive games before Evans broke for the first time, to trail 4-1 in the second set.\nAlcaraz clinched victory after an hour and 20 minutes with another pinpoint winner past Evans.\nTsitsipas lost on both occasions he reached the Barcelona final against Rafael Nadal in 2018 and 2021.\nNadal, a 22-time Grand Slam title winner, pulled out this year as he recovers from a hip injury.", "score": 39.92458509251695, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "Indian Wells, the 1,000 Masters of the golden retirement of 
millionaires from Palm Springs (California), returns after a two-year hiatus due to the pandemic. Without the Big Three (Nadal, Federer and Djokovic). Without Serena Williams or Ashleigh Barty. And in it, Carlos Alcaraz (18 years old) takes another step in his fledgling and promising career. He debuts as a seed at a tournament of this category (seeded 30th, ranked 38th in the ATP rankings), thanks to his quarterfinal run at the US Open, which means he skips the first round and will make his debut directly in the second round towards the weekend … against the winner of the Adrian Mannarino-Andy Murray match. He could face a Grand Slam winner and former number one in his opener, no less. The third round is not easy either, as he could meet Alexander Zverev.\nThe Murcian has also signed up for the doubles draw with Pablo Carreño, which may give clues about his likely presence at the Davis Cup Finals in Madrid (from November 22). Roberto Bautista has also entered with Alejandro Davidovich. With Nadal absent, the four are strong candidates to feature at the event, where they defend their title.\nFor her part, Garbiñe Muguruza, who has just won the WTA 500 in Chicago, will face Ajla Tomljanovic or a qualifier in the second round.\nOTHER SPANISH MATCH-UPS\nPablo Carreño vs Facundo Bagnis or a qualifier\nRoberto Bautista vs Guido Pella or Soonwoo Kwon\nCarlos Taberner vs Jaume Munar\nFeliciano Lopez vs Tommy Paul\nAlejandro Davidovich vs Stevie Johnson\nPablo Andújar vs a qualifier\nPedro Martínez vs a qualifier\nRoberto Carballés vs a qualifier\nAlbert Ramos vs Lorenzo Musetti\nPaula Badosa vs Dayana Yastremska or Elsa Jacquemot\nSara Sorribes vs Claire Liu or a qualifier\nNuria Párrizas vs Lauren Davis", "score": 37.58204521590572, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Stefanos Tsitsipas has defeated young gun Lorenzo Musetti 6-4, 5-7, 6-3 to reach the final of the Barcelona Open for the third time.\nDefending champion Carlos Alcaraz swept past Dan 
Evans 6-2, 6-2 on Saturday to set up a final clash with Tsitsipas.\nWorld number five Tsitsipas needed three sets to beat Musetti, who put up a decent fight.\n“We had to cover lots of metres on the court and he had some incredible defensive shots that I really didn’t expect at all, it was such a mental challenge,” said Tsitsipas.\n“I had to go out there and fight it all through (with) the determination of a lion, and just (went) out there to do the best I can.”\nMusetti twice went a break up in an entertaining first set, but Tsitsipas, runner-up at the Australian Open in January, battled back immediately both times.\nThe Italian saved match point in the second set at 5-4 down to force a decisive third set, but was outplayed by his opponent, who showed more focus than earlier on.\nTsitsipas survived a break point in the first game of the third set and then broke himself for 2-0, applying pressure on Musetti’s serve.\nMusetti did not have to play his quarter-final clash after Jannik Sinner withdrew through illness, but Tsitsipas was the stronger in the third set.\nTsitsipas wrapped up the win after two hours and 28 minutes, seeing out his final two service games without dropping a point.\nWorld number two Alcaraz beat Tsitsipas in the quarter-finals at Barcelona last year in three sets in what was the Spaniard’s third win in three meetings against the Greek.\n“I feel really comfortable playing here in Barcelona and (I’m) playing well,” said Alcaraz, who has not dropped a set in the tournament this year.\n“Stefanos is playing great matches as well. 
Last year we had a spicy match, let’s say.\n“I know he’s a really nice guy off the court, so I’m going to try to forget everything that has happened in the matches before, try to focus on my game tomorrow, and try to get the win.”\nAlcaraz missed the Monte Carlo Masters last week because of hand and back problems and has not been at his most consistent on his way to the final.", "score": 37.24700551467217, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "Third seed Jessica Pegula became the latest top-ranked player to bite the dust at the US Open on Monday as Carlos Alcaraz looked to keep his title defence on track.\nA day after world number one and reigning women’s champion Iga Swiatek was sent crashing out on Sunday, Pegula found herself heading for the exit as she slumped to defeat against American compatriot and close friend Madison Keys.\nPegula had gone into the US Open dreaming of a first ever Grand Slam title, buoyed by victory in last month’s WTA 1000 Canadian Open in Montreal.\nBut the 29-year-old’s US Open campaign came to an abrupt halt in front of a packed Arthur Ashe Stadium as 2017 US Open finalist Keys recorded a dominant 6-1, 6-3 win in just 61 minutes.\nPegula was left with no answer as Keys unleashed a stream of 21 winners to her six.\nKeys also punished her friend’s shaky serve, breaking her five times en route to victory.\n“It’s always tough having to play a friend but we’ve been doing it our whole lives at this point,” Keys, 28, said.\n“When we get on the court it’s all business and when we get off the court we go back to being friends.”\nThe 17th-seeded Keys will now face Wimbledon champion Marketa Vondrousova of the Czech Republic in the quarter-finals on Wednesday.\nNinth seed Vondrousova booked her place in the last eight with a battling win over unseeded American Peyton Stearns, coming from a set down to win 6-7 (3/7), 6-3, 6-2.\nVondrousova made history in July after becoming the first unseeded woman to win Wimbledon, her first 
Grand Slam title.\nThe 24-year-old has never been past the fourth round at the US Open, and admitted after her win on Monday she had surprised herself by advancing to the last eight.\n“She was playing great from the beginning, and I just tried to stay in the game,” Vondrousova said of Stearns.\n“I’m very happy. I actually didn’t expect it after Wimby — it was a lot of pressure.", "score": 36.75809806085141, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "Carlos Alcaraz is playing a lot of matches lately with 8 matches in 11 days but he's not worried about any injury despite feeling pain at times.\nThe Spaniard missed tennis a lot during his 4-month absence and it's clear by the way he's performing in South America. Having to play many matches is not a concern for him even as he won his 8th match in 11 days.\nI'm so proud of myself. To be in a final again in my second tournament is a really special moment for me... I couldn't ask for a better start of 2023.\nAlcaraz did seem in discomfort at times during the match with his leg giving him pain at times. Asked about that, Alcaraz brushed it aside saying that it's normal to play with some pain, it's just the life of an athlete at times.\nI don't worry about that. It's a tennis player's life. Playing with some pain is normal for a tennis player. Even more if we are playing win by win, no break for almost 14 days in a row. It's normal. I'm going to take care of that and go into the final 100 per cent.\nAfter beating Nicolas Jarry in the semifinals, Alcaraz will play his second consecutive ATP final as he'll meet the same opponent again, Cameron Norrie. If the Spaniard wins, he will match Novak Djokovic's 6980 points, but won't move to world no. 1 spot.", "score": 35.932147760324774, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "Carlos Alcaraz, the Spanish tennis phenom, has taken the tennis world by storm, currently holding the coveted world No. 
1 ranking by the Association of Tennis Professionals (ATP). At just 19 years old, he has already broken records set by the legendary Rafael Nadal, becoming the youngest player to break into the top 10 since Nadal’s breakthrough in 2005. Discover more about the Tennis Phenom Carlos Alcaraz.\nFurthermore, Alcaraz’s triumph at the Masters 1000 events in Miami and Madrid in 2022 made him the youngest player since Nadal to achieve such a feat. His aggressive playing style, lightning-fast court coverage, and thunderous groundstrokes make him an exhilarating player to watch, evoking comparisons to his fellow countryman Nadal.\nEarly Life and Career: A Tennis Prodigy Emerges\nIn the charming town of El Palmar, Spain, in 2003, a future tennis legend was born. Carlos Alcaraz’s passion for the sport ignited at the tender age of four, and his remarkable talent quickly became evident. By the time he was eight, Alcaraz celebrated his first tournament victory, foreshadowing the extraordinary journey that lay ahead.\nA mere 12 years old and already ranked among the world’s top 100 junior players, his ascent seemed unstoppable. In 2018, Alcaraz made the pivotal decision to turn professional, captivating audiences with his youthful exuberance and fierce determination. The tennis world braced itself for the meteoric rise of this prodigious young player.\nPlaying Style: The Nadal-like Force to be Reckoned With\nOn the hallowed courts, Alcaraz’s playing style reverberates with echoes of greatness. An aggressive baseline player, he showcases lightning-fast speed, unleashing devastating groundstrokes that leave opponents reeling. With a two-handed backhand and a single-handed forehand, Alcaraz’s technique embodies both power and finesse.\nHis remarkable footwork grants him a masterful command of the court, while his deft volleying skills elevate his game to new heights. Many experts draw comparisons to the legendary Rafael Nadal, acknowledging Alcaraz as the heir apparent to his throne. 
As spectators marvel at his explosive energy and fearless approach, the tennis world braces itself for the extraordinary feats this young phenom will undoubtedly accomplish.", "score": 34.77537485433407, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "With all 8 fourth-round matches played Tuesday at the BNP Paribas Open, Felix Auger-Aliassime and Tommy Paul may have saved the best for last. In one of two three-setters on the day (the other a thrilling win for Daniil Medvedev against Alexander Zverev), Auger-Aliassime saved 6 match points to edge the American 3-6, 6-3, 7-6(6).\nNext up for the 8th seed in his first Indian Wells quarter-final: a Thursday face-off with Carlos Alcaraz, who advanced past Jack Draper via a second-set retirement earlier in the evening.\nWith the stunning victory, Auger-Aliassime has now reached the quarters at each of the previous 6 ATP Masters 1000 events, including his current run. He has advanced to that stage at 7 of the 9 Masters 1000s on the calendar, with Monte Carlo and Shanghai the sole exceptions.\n“I always stayed positive, I kept my hopes up, I kept thinking, ‘OK, I’m not that far, I can come back,’” a relieved Auger-Aliassime said of his great escape against Paul. “At the end, when you’re down 0/40 on your serve, you know that … ‘OK, if I win this first one, serve well, then we’re back on even terms.’\n“You just kind of take it one by one. It’s very cliche to say but it still works; that’s the proof. I’m really happy to get through. It’s a crazy feeling.”\nIn his first ATP Head2Head meeting with the rising American, Auger-Aliassime was fighting from behind all night, dropping the first set and trailing 0-3 in the third. After levelling the decider at 3-3, he stared down 3 match points at 0/40 while serving at 5-6.\nBehind some timely big serves, the 22-year-old won 5 straight points to force a tie-break. But some inspired play from Paul brought up 3 more match points at 6/3 after he scored 5 straight points of his own.", "score": 34.71294156103479, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "Carlos Alcaraz hits out at the fairway during his fourth-round victory over Peter Uihlein\nU21s World Amateur Championship\nVenue: Royal St George’s, Brighton Date: 27 May-7 June 2017 Coverage: Listen live to every round of the event on the BBC Sport website and BBC Radio 5 live\nMexico’s Carlos Alcaraz became the youngest player to reach the quarter-finals of a men’s US Open event on Friday.\nHe beat Peter Uihlein of the United States 3&2 to set up a last-eight clash with Spaniard Jon Rahm.\nThe 15-year-old saved four match points against Uihlein and also beat 2013 champion Bubba Watson in his previous round.\nAlcaraz is the first Mexican ever to play in the US Open, which starts on Thursday, but has been tied on 45th place at the Open Championship this week.\nBy winning the U21s world amateur title, he qualifies automatically for the Masters and, if he finishes 12th or better at Royal Birkdale, will earn automatic entry into next year’s Open at Royal Troon.\nSince claiming the world amateur title, Alcaraz, who already has a US Open and PGA Tour card, has said he would like to be ready to play on all the big tours in 2018.\nHe plans to use the tournament at Royal Birkdale to help him achieve that goal.", "score": 32.25460602812723, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "Carlos Alcaraz says he has grown so much as a player in the last year that he is now comfortable playing at the highest level alongside the best players in the world.\nAlcaraz has had an astonishing rise to the top ten, ranked only 120th less than a year ago, the young Spaniard has 
come through the ranks and has won an impressive three tournaments already this year.\nThe 18-year-old has scooped titles at Rio de Janeiro, Miami and Barcelona and has an impressive 23-3 win-loss record so far.\nWith his Madrid Open challenge about to start, Alcaraz reflected on how far he has come since he last played the tournament.\n"I think that as a player I have grown a lot, [and also] as a person," Alcaraz said in his pre-tournament press conference.\n"Last year I came here to live these kinds of matches, to be able to gain some experience, to be able to level myself against the best players in the world. Now I consider myself one of them."\nAlcaraz thinks that although he has clearly grown in physique and strength, it is actually his mental strength that has been the biggest factor in his progress.\n“I would say my fitness has been important but definitely the most important part has been the mental game. I feel I have grown up so much in that part," said Alcaraz.\n"That is why I am the world number nine right now and that is why I am playing at a good level. That is why I have been able to win great matches, so I think [my mentality] is the most important thing.”\nAlcaraz will be performing in his home country in front of an expectant crowd and he admits it does add extra pressure, but he's excited.\n"It's not easy to play at home. There is a lot of expectation, a lot of people that want to see you doing it well," Alcaraz said.\n"But I'm a player that turns things around. I take that as a motivation, as an extra punch, extra help.\n“I am really excited to be competing here. It is great to play in Spain in front of my home crowd. I try not to think about the pressure.\n"I just try and think about myself and play my best on court. I am trying to have a great time with the crowd pushing me up.", "score": 32.07332372524681, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "Indian Wells, California. 
Carlos Alcaraz is three wins away from returning to the top of the world rankings after reaching the quarterfinals of the BNP Paribas Open on Tuesday.\nAlcaraz, ranked No. 2, advanced when Jack Draper retired after 46 minutes of play with the Spaniard up 6-2, 2-0. It was Alcaraz’s 101st ATP Tour win.\n“I would say I made a good comeback, I hit great shots,” Alcaraz said. “I finished the match confident in my shots, so that I can go into the next round with more confidence.”\nDraper was affected by an abdominal injury that first surfaced in his win against Andy Murray a day earlier. The injury affected the Briton’s serve, which dropped below 100 mph, and his movement. An athletic trainer visited him between sets, and Draper won just one point in the first two games of the second set before retiring.\nCoco Gauff rallied from a break down in the third set to defeat Swedish qualifier Rebecca Peterson 6-3, 1-6, 6-4 and reach the quarterfinals.\nDown 4-2, Gauff came back and then saved three break points while serving at 4-all before closing out the match.\nFour years earlier, Peterson had defeated a then 14-year-old Gauff at a Challenger tournament in Michigan.\n“She beat me pretty badly,” Gauff said on court. “I think today was really a mental thing, just staying in the match. I wasn’t playing my best in some moments and I wasn’t serving like I wanted to, but I think my mindset kept me in.”\nThe sixth-seeded American next plays No. 2 seed Aryna Sabalenka, who beat 16th seed Barbora Krejcikova 6-3, 2-6, 6-4.\nNo. 3 Jessica Pegula lost to 15th seed Petra Kvitova 6-2, 3-6, 7-6 (11), the American dropping four match points.\n“It’s very tiring when it’s a match like this,” Kvitova said. “Up and down. 
Definitely one of the best matches I have played.”\nSorana Cirstea beat No. 5 Caroline Garcia 6-4, 4-6, 7-5, No.", "score": 32.05121424720042, "rank": 17}, {"document_id": "doc-::chunk-1", "d_text": "Whether or not Novak Djokovic takes to the court for his fourth-round clash against Milos Raonic will be the big talking point of Sunday’s action.\nA blockbuster schedule begins with a clash between women’s title favourite Naomi Osaka and last year’s finalist Garbine Muguruza before Serena Williams takes on big-hitting Aryna Sabalenka.\nIga Swiatek will attempt to beat Simona Halep at a second successive grand slam while Dominic Thiem faces Grigor Dimitrov.\nCarlos Alcaraz reveals severity of his injury and when he will return to action\nCarlos Alcaraz has divulged the results of an MRI scan on the injury he suffered in Rio.\nHolger Rune reunites with Patrick Mouratoglou as he reaffirms his huge goals\n“With him I have achieved some of the greatest triumphs of my career so far.”\nAndy Murray set for ranking drop as he declares ‘this game isn’t for me anymore’ in brutal loss\nAndy Murray’s ranking is set to suffer after his painful, marathon loss in Doha.\nNovak Djokovic secures astonishing ranking milestone involving Rafael Nadal\nNovak Djokovic is set to achieve a staggering ranking statistic.\nStan Wawrinka gives telling Novak Djokovic verdict as he answers GOAT question\n“I know how hard it was to win with Federer, Nadal and Djokovic in the draws.”\n‘Jannik Sinner shares champion’s mentality with Novak Djokovic, Rafael Nadal’, feels former world No 8\n“That’s the champion’s mentality. 
These guys are always looking to get better.”\nWhat Carlos Alcaraz’s Rio misfortune means for Novak Djokovic and Jannik Sinner in ATP Rankings race\nNovak Djokovic set to stay at No 1 while Jannik Sinner is breathing down Carlos Alcaraz’s neck.\nAndy Murray issues ‘huge apology’ to his wife ‘for whatever I was doing to her mouth’\nTrust Andy Murray to pipe up with a funny comment about his first-ever ATP title win.\nWATCH: ATP player suffers incredible meltdown – A racket smash, a point penalty and ‘a bit of fortune’\nBotic van de Zandschulp had a second set to forget in Qatar.\nWATCH: Carlos Alcaraz has a ‘ball on his ankle’ after unfortunate Rio Open incident\nIt was a painful one for Carlos Alcaraz in Rio.", "score": 31.76704248115624, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "Sept. 12 (UPI) -- Carlos Alcaraz, 19, officially became the youngest No. 1 player in ATP rankings history, while Ons Jabeur climbed to No. 2 in the WTA rankings and Serena Williams jumped 284 spots, the tennis organizations said Monday.\nCanada's Felix Auger-Aliassime dropped out of the Top 10, falling from No. 8 to No. 13. American Frances Tiafoe moved inside the Top 25 due to his run to the semifinals at the U.S. Open. Tiafoe moved from No. 26 to No. 
19.", "score": 31.003527606014057, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "With hard court game clicking, Francisco Cerundolo’s Cinderella story continues in Miami\nAfter making his mark alongside brother Juan Manuel, the 23-year-old ATP Masters 1000 debutant will next face No. 6 seed Casper Ruud for a place in the Miami final.\nPublished Mar 31, 2022\nWATCH: Francisco Cerundolo speaks with Prakash Amritraj on Tennis Channel Live following his 2022 Miami quarterfinal victory.\nFrancisco Cerundolo had never played an ATP Masters 1000 event or won a hard court match in his ATP career before the Miami Open. Now the No. 
103-ranked Argentine is the tournament’s lowest-ranked player ever to reach the semifinals, and he’s almost guaranteed a spot in the Top 50 when the tournament is done.\nIt’s been a career-changing few weeks for the 23-year-old, who began the tournament drawn into the same quarter as his younger brother Juan Manuel Cerundolo.", "score": 30.2300594572364, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "It was an efficient victory for 22-time Grand Slam champion Rafael Nadal over Botic van de Zandschulp, the Spaniard beating the Dutchman 6-4, 6-2, 7-6 (8-6) at the All England Club.\nThe match on Centre Court lasted 2 hours and 20 minutes, finishing before the 11 pm curfew that had almost left defending champion Novak Djokovic in limbo on Sunday. The victory keeps Nadal on course for the calendar Grand Slam this year.\nNadal is now set for a rematch with American Taylor Fritz, who beat him in the Indian Wells final in March, when the Spaniard was playing with a stress fracture of a rib. Last month, Nadal underwent radio-wave treatment on a nerve in his left foot so he could compete at SW19.\nConstant discussion about his fitness has been wearing Nadal down, though.\n“I am a little bit tired to talk about my body. Sometimes I am tired about myself, all the issues that I am having. I prefer to not talk about that now. Sorry for that,” Nadal said.\n“But I am in the middle of the tournament and I have to keep going. All respect for the rest of the opponents. I am just trying my best every single day. For the moment I am healthy enough to keep going and fight for the things that I want.\n“It takes a lot of mental and physical effort to try to play this tournament after the things that I went through the last couple of months. But as everybody knows, Wimbledon is a tournament that I like so much.”\n“I have been three years without playing here. I really wanted to be back. That’s what I am doing. 
So that’s why it means a lot for me to be in the quarter-finals,” he added.\nThe 11th-seeded Fritz easily defeated Australia’s Jason Kubler 6-3 6-1 6-4 to make it to his first Grand Slam quarter-final.\n“My first Grand Slam quarter-final, that’s really a big deal,” said Fritz, who has yet to drop a set in the tournament.\n“Part of the final eight and… I’m glad I could get the win on the Fourth of July, being American.”\nThe victory was Fritz’s 8th straight on grass after winning the Wimbledon warm-up tournament in Eastbourne.", "score": 30.075197023778045, "rank": 21}
He won three straight games after being 2-4 down to clinch the set.\nHowever, the Russian took the third set and in the fourth, he simply cruised winning five straight games after being 0-1 down.\nEarlier, Rublev’s fellow countryman Karen Khachanov defeated Frances Tiafoe 6-3, 6-4, 6-4 in one hour and 46 minutes. Tiafoe’s straight-sets loss was a surprise considering he had beaten third seed Stefanos Tsitsipas in the first round in straight sets.\nThe 25th seed Khachanov converted three of the six break-points and won 81 per cent of the first service points.\nIf world No.", "score": 30.037384745410485, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "Eight-time champion Roger Federer recovered from a first-set wobble to record his 96th win at Wimbledon and advance to the second round with victory over South African debutant Lloyd Harris.\nAs verbal sparring partners Rafael Nadal and Nick Kyrgios set up the first men's blockbuster of the tournament, Swiss legend Federer started slowly as he began his bid for a record-extending 21st grand slam title, struggling to find his rhythm against an opponent who had never won a match on grass.\nIt was only the second time anyone had taken a set off Federer in a first-round match in his last 17 Wimbledon appearances.\nBut if 22-year-old Harris, ranked 86 in the world, harboured any hopes of a famous win his dreams were crushed brutally over the next three sets as the second seed won 3-6 6-1 6-2 6-2.\n\"I just struggled. As my legs weren't moving,\" admitted Federer.\n\"In defence you're weak. 
The next thing you know you're struggling.\"\nThe Swiss now plays Britain's Jay Clarke, who won his first grand slam match against American qualifier Noah Rubin 4-6 7-5 6-4 6-4.\nMore from Wimbledon:\n- Australia's queen of tennis leads batch of big guns at Wimbledon\n- Ash Barty's Wimbledon campaign off to a flying start\n- Nick Kyrgios battles back to win first set against compatriot Jordan Thompson\n- Ash Barty's candid response when asked whether she can win Wimbledon tournament\nThird-seeded Nadal, bidding to claim the French Open and Wimbledon in the same year for a third time, beat qualifier Yuichi Sugita 6-3 6-1 6-3 to seal a showdown with Kyrgios, who beat fellow Australian Jordan Thompson 7-6 (7-4) 3-6 7-6 (12-10) 0-6 6-1 and criticised the Spaniard recently in a podcast, branding him \"super salty\".\nAlso on Tuesday, fifth-seeded French Open runner-up Dominic Thiem crashed out in the first round after having to retire at the same stage last year, losing 6-7 (7-4) 7-6 (7-1) 6-3 6-0 to American Sam Querrey.", "score": 29.058717158342937, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "LONDON (Reuters) - Stan Wawrinka met stiff resistance from American teenager Taylor Fritz before he finally asserted his dominance in a 7-6(4) 6-1 6-7(2) 6-4 first-round victory at Wimbledon on Tuesday.\nThe Swiss fourth seed was far from his best, with his groundstrokes uncharacteristically erratic, but his fearsome one-handed backhand and experience were enough to see off the 18-year-old making his tournament debut.\nWawrinka, who will play Argentina’s Juan Martin Del Potro in the next round, said he and other players needed to get used to grasscourt conditions.\n“I know if I can start to win few matches, I can be dangerous to go far,” he told reporters.\nFritz, the youngest man in the draw, showed few nerves in the first set as he matched the twice grand slam champion shot-for-shot in a punishing baseline battle on Number One Court.\nBut his confidence seemed to 
evaporate after he lost the tiebreak. He won just one game in the second set as Wawrinka found his range and bossed the exchanges.\nWawrinka struggled for concentration in a scrappy third set in which he made a string of unforced errors and he lost the tiebreak. Visibly angry with himself, the 31-year-old stepped up his game to close out the match in the fourth.\nHe had encouraging words for Fritz.\n“He has a big potential,” Wawrinka said. “For sure he is the future of tennis.”\nEditing by Ed Osmond", "score": 28.41849056297017, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "Defending champion Novak Djokovic won a third Wimbledon title and a ninth Grand Slam crown Sunday, ruthlessly shattering Roger Federer's bid for a record eighth All England Club triumph. World No.
1 Djokovic won 7-6 (7/1), 6-7 (10/12), 6-4, 6-3 to add this year's Wimbledon title to the Australian Open he captured in January. Federer had his opportunities but he could only convert one of seven break points in the match and as he pressed, he committed 35 unforced errors to Djokovic's 16. In a rollercoaster rematch of last year's final, Federer was 4-2 up in the first set and had two set points. However, Djokovic, five years Federer's junior, stepped on the gas and raced away to the title. Federer wasted two break points in the fifth and 11th games of the second set having saved a first set point in the 10th. Djokovic was strangling the life out of Federer's game and another break gave him a 3-2 lead in the fourth set.", "score": 28.363537651698934, "rank": 25}
Nevertheless, he looked solid in that battle, but a 20% break point conversion rate doomed him.\nEvans had an up-and-down Indian Wells experience, pulling out a three-hour win in his first round matchup with Kei Nishikori but falling to Diego Schwartzman from a set and a break up.\nThe duo hasn’t met before, but the matchup projects to be an intriguing one. Alcaraz plays a heavy style of tennis that combines power off of both wings with an incredible athleticism. Evans prefers a more complex game that consists of a large dosage of backhand slices and attacking off the forehand.\nFor Evans, the question becomes how can he break down Alcaraz? The biggest weapons on the court belong to the 18-year-old, who is strong enough mentally to deal with the charismatic Evans. I don’t believe that the disparity in experience will play a big role between the two, and Evans won’t be able to get under Alcaraz’s skin as he does with many players.\nTrust Alcaraz to get by an inferior opponent.\nPick: Carlos Alcaraz -128 via FanDuel\nHubert Hurkacz (-172) vs.", "score": 27.86840194425595, "rank": 26}, {"document_id": "doc-::chunk-2", "d_text": "6 on Monday, has won his first five career finals.\nHe became only the sixth player in the Open Era to do so, joining Ernests Gulbis, Martin Klizan, Thomas Enqvist, Andrei Medvedev and Sjeng Schalken. Unlike the quintet, Alcaraz hasn't dropped a set in his first five finals.\nIf he wins his next final, Alcaraz will join Gulbis and Klizan as the only players to go 6-0 in their first six title matches. 
The earliest that could happen for Alcaraz is at Roland Garros.", "score": 26.9697449642274, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "Were he to achieve all three, he would become the first man, and only the second player after Steffi Graf in 1988, to win the so-called Golden Slam.\nDjokovic’s long-time coach, Marian Vajda, believes it is a distinct possibility, saying with tongue in cheek: “We said with (fellow coach) Goran (Ivanisevic) that if he accomplishes the year (all) the Grand Slams that we’re going to quit.\n“I think it’s possible, much more possible. He loves to play in Wimbledon and the US Open. I was worried a little bit more about clay court because for a long time, in the past 10 years let’s say, he had great results and he could have done much more at the French Open but he just couldn’t make it.\n“As much as Novak is healthy – and he’s healthy right now, he’s in great shape – I think he has the ability to win the Grand Slam for this year. I’m pretty sure.”\nIt was a difficult defeat for Tsitsipas to swallow after leading by two sets but, in his first Grand Slam final, the Greek proved that he is very likely to be a slam winner in the near future.\nTsitsipas’ best result at Wimbledon was a run to the fourth round in 2018, and he is hopeful of performing strongly on the grass.\nHe said of his showing in Paris: “I’m happy with the way I performed, the way I tried things, even if they didn’t work. I don’t think I have regrets. I could have easily cried, but I see no reason for me crying because I tried everything. I couldn’t come up with anything better.\n“I’m looking forward to the grass-court season. I see there is opportunities there for me. I like playing on grass. I didn’t have the best results a few years ago, before Covid, when I last played on grass.\n“I think I have the game to play good on grass, too. 
I just need to be open-minded and adapt my game to this new, exciting surface.”\nHeather Watson gets the job done after eight minutes of play on Thursday\nHeather Watson moves along after victory over Wang Qiang.\nAlastair Gray’s Wimbledon singles journey ended by Taylor Fritz in the second round\nThe Twickenham ace battled well against the world No 14.", "score": 26.9697449642274, "rank": 28}, {"document_id": "doc-::chunk-1", "d_text": "Olympic champion Andy Murray, still seeking his first Grand Slam title after four losses in finals, eked out a 7-6 (5), 7-6 (5), 4-6, 7-6 (4) victory over No. 30 Feliciano Lopez, who led in each of the three tiebreakers before faltering.\n\"Could have gone either way,\" Murray acknowledged. \"It was very hot and humid in the middle part of the match. I was struggling a bit with that.\"\nThe man he beat for the gold at the Summer Games, and lost to in the Wimbledon title match, Roger Federer, is also Murray's potential semifinal opponent in New York. Federer, as is often the case, barely was bothered Saturday while dismissing No. 25 Fernando Verdasco 6-3, 6-4, 6-4.\nAt The USTA Billie Jean King\nNational Tennis Center\nPurse: $25.5 million (Grand Slam)\nSurface: Hard-Outdoor Third Round\nMen's Singles Third Round\nNicolas Almagro (11), Spain, def. Jack Sock, United States, 7-6 (3), 6-7 (4), 7-6 (2), 6-1.\nMarin Cilic (12), Croatia, def. Kei Nishikori (17), Japan, 6-3, 6-4, 6-7 (3), 6-3.\nMartin Klizan, Slovakia, def. Jeremy Chardy (32), France, 6-4, 6-4, 6-4.\nRoger Federer (1), Switzerland, def. Fernando Verdasco (25), Spain, 6-3, 6-4, 6-4.\nAndy Murray (3), Britain, def. Feliciano Lopez (30), Spain, 7-6 (5), 7-6 (5), 4-6, 7-6 (4).\nMilos Raonic (15), Canada, def. 
James Blake, United States, 6-3, 6-0, 7-6 (3).\nTomas Berdych (6), Czech Republic, def.", "score": 26.9697449642274, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "LONDON -- Rafael Nadal claimed his first career grass-court title Sunday, defeating Novak Djokovic 7-6 (5), 7-5 in the Queen's Club final to become the first Spaniard to win on grass in 36 years.\nIt was the French Open champion's third win in the last three tournaments over the second-seeded Djokovic, following semifinal victories in Hamburg and at Roland Garros.\nAndres Gimeno was the last Spaniard to win on grass, at Eastbourne in 1972.\nNadal is also the first player to win at Roland Garros and Queen's Club in the same year since Ilie Nastase in 1973.\n\"This week was amazing for me,\" Nadal said.\nThe win should give Nadal a confidence boost ahead of Wimbledon.\nEspecially considering it follows his resounding straight-sets win over No. 1 Roger Federer in the French Open final.\n\"Wimbledon is (a) very, very important tournament, and the motivation is 100 percent,\" Nadal said. \"Doesn't matter if I am tired mentally. Physically is a little bit more important, but I think physically I'm fine.\"\nGerry Weber Open\nHALLE, Germany -- Roger Federer just wanted to survive a couple of rounds at the Gerry Weber Open after his painful French Open loss to Rafael Nadal.\nThe top-ranked Swiss did much more, beating Philipp Kohlschreiber 6-3, 6-4 in Sunday's final for his 10th grass title.\nThat matches Pete Sampras' total on the surface and takes his unbeaten streak on grass to 59 matches.\nTo further boost his confidence, Federer sailed through the Wimbledon warmup tournament without dropping a set or his serve.\n\"I'm really excited; I think that's the first time in my career I won a title\" without losing serve, Federer said. 
\"That was very special and I'm very proud to keep my streak going.\"\nIn fact, Federer also won a tournament in Doha in 2005 without being broken.\nHe restored some momentum after one of his worst defeats, a rout in the French Open final last Sunday at the hands of Nadal.\nFederer took just four games.\n\"I didn't want to lose in the first and second round,\" Federer said.\n\"It would have been really tough for me losing on grass again for the first time and having just lost in Paris the final.\"", "score": 26.9697449642274, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "While his tournament is over, two of his long-time rivals at the top of tennis set up a semifinal showdown: Rafael Nadal and Novak Djokovic. Nadal, who’s won two of his 17 Grand Slam titles at Wimbledon, edged 2009 U.S. Open champion Juan Martin del Potro 7-5, 6-7 (7), 4-6, 6-4, 6-4 in a wildly entertaining match that featured diving shots by both and lasted 4 hours, 48 minutes. Djokovic, whose 12 major championships include three from the All England Club, got to his first Grand Slam semifinal since 2016 by beating No. 24 seed Kei Nishikori 6-3, 3-6, 6-2, 6-2.\nIn Friday’s other men’s match, Anderson will face No. 9 John Isner, the 33-year-old American who reached his first major semifinal in his 41st try by eliminating 2016 runner-up Milos Raonic 6-7 (5), 7-6 (7), 6-4, 6-3.
Isner hit 25 aces, saved the only break point he faced, and has won all 95 of his service games in the tournament.", "score": 26.729213353819844, "rank": 31}, {"document_id": "doc-::chunk-3", "d_text": "In preparation for Wimbledon, Agut played in the Topshelf Open. He was the tournament's third seed. Agut won the title, his first ATP title, defeating former champion Benjamin Becker in the final in three sets.\nRoberto then played in the Wimbledon Championships. After defeating Steve Johnson and Jan Hernych, Agut took on the defending champion, Andy Murray, in the third round. Agut put up a great fight, but Murray's class on grass was just too difficult to beat. This was his best Wimbledon result.\nAfter Wimbledon, Roberto went back to playing on the clay surface in Germany. Roberto was the third seed in the Mercedes Cup based in Stuttgart. In the semifinal, Agut recorded an upset, beating defending champion Fabio Fognini for only the second time in his career. This result led Agut to take on Lukáš Rosol in the final. This was Roberto's third professional ATP tournament final. Agut won the final in three sets, claiming his second 250-level title.\nIn the last slam of the year, the U.S. Open, Roberto reached the fourth round after defeating Andreas Haider-Maurer, Tim Smyczek, and Adrian Mannarino on the way to taking on the no. 2 seed Roger Federer for the first time. Despite Agut's hard efforts, he could not stop Federer winning points at the net, and he therefore lost in straight sets. This was Agut's best ever US Open campaign and he equaled his best career Grand Slam result (2014 Australian Open).\nAgut would then head off to Russia to play in the Kremlin Cup tournament held in Moscow.
Agut advanced all the way to the final, where he took on 2014 US Open champion Marin Čilić. His brilliant tournament ended with a tight straight-set defeat.\nAfter his outstanding season, Agut won the ATP's Most Improved Player award.\nAt the end of the best season in his career so far, Agut finished 2014 with a singles ranking of world no. 15, and a doubles ranking of world no. 255.\n2015: Drops out of top 20\nAgut began his new season, as the third seed, in the 2015 Aircel Chennai Open. Agut progressed to the semi-final where he would lose to British qualifier Aljaž Bedene.", "score": 26.314250898792196, "rank": 32}, {"document_id": "doc-::chunk-1", "d_text": "Faced with this new challenge, Rafa looks “good, very happy to be in the semifinals.” “I know it was a good victory and that I gave a good level of tennis in the quarterfinals, but now there is another big challenge ahead. Zverev is playing great, he had a great tour on clay and he beat Alcaraz because he did a lot of things well, so I have to play 100% again,” he said on Thursday in statements to the Roland Garros website. On Wednesday he reassured fans by saying on TVE that he will try to return to the French major next year. “I accept things as they come. In no way do I intend this to look like a goodbye. We are going to work to find a solution for the thing down there (the left foot). And I am confident that I will be able to return, although this last year things have been difficult. My dream is to continue,” he assured.\nOblivious to these tribulations, Zverev will try to achieve something that is still pending in his career: beating a member of the Big Three in a Slam. He has already beaten Nadal, Djokovic and Federer in other tournaments, but… “I haven’t beaten them in majors, although I was very close. I had tough, tough matches against them. But there is a big difference between being even and beating them. It’s still a big difference.
So I hope to be able to use the energy of this victory (against Alcaraz) on the court on Friday (for today).” The prize is big: the final on Sunday.\nRoland Garros ATP Draw Results", "score": 26.196558651912557, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "- Andy Murray describes last year's Wimbledon defeat as \"worst of his career\"\n- Murray beat world No. 1 Novak Djokovic on Sunday to win 2013 men's tournament\n- The world No. 2 becomes the first British male to win singles title in 77 years\n- Murray defends his U.S. Open title when the tournament begins next month\nYou can be forgiven for losing track of time the morning after a night 77 years in the making.\n\"Oh sorry, that's my alarm,\" says an apologetic Andy Murray as his alarm trills while being interviewed. \"That was the finals day time for getting up, 915am ... just for yesterday.\"\nYesterday was Sunday July 7 and the day which saw Murray secure his place in British sporting history.\nIn the 24 hours between those two alarms going off, Murray's world irrevocably changed with his 6-4 7-5 6-4 win over over Novak Djokovic in straight sets ensuring he became Britain's first men's singles champion at Wimbledon since 1936.\nIt had been the most energy-sapping three sets of his career, with the world No. 2 eventually beating the world No. 
1 following three hours of brutal battle, but Murray didn't want to go to sleep.\nAfter waiting his entire career to win the title Britain had so wanted for nearly eight decades, the Scot was scared of waking up to discover it was all a dream.\n\"That's the one worry you have when you go to bed,\" Murray told CNN after becoming the first male British singles champion at Wimbledon since Fred Perry.\n\"You wake up and it's actually not true, so I was obviously very happy and relieved that I had done it.\"\nTwelve months ago Murray had sobbed on Centre Court after losing his first Wimbledon final to Roger Federer.\nRedemption came a month later on the same court against the same opponent, only with a different outcome.\nThe Scot won to clinch Olympic gold at London 2012 and the hearts of a jubilant British public began to soften towards a man that had arguably come up short in the popularity stakes when compared to another British tennis star -- the now retired -- Tim Henman.\nIn September a first grand slam title promptly followed at the U.S. Open, as Murray beat Djokovic in five sets in New York.", "score": 25.697183125867774, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Tough road ahead for Alcaraz in French Open title quest\nCarlos Alcaraz and the man he just replaced atop the world rankings, Novak Djokovic, have been placed in the same half of the French Open field and could face each other in the semi-final.\nAlcaraz is seeded No.1 at a grand slam for the first time and was automatically placed in the top section of the bracket.\nDjokovic is No.3 and so could have ended up on either half. 
Had he landed in the bottom, he and Alcaraz only could have met in the final at Roland Garros, where injured 14-time champion Rafael Nadal is missing for the first time since his 2005 debut.\nTypically, the previous year's singles champions are invited to appear at the draw, so 2022 women's winner Iga Swiatek was present in Paris on Thursday, but Nadal was absent.\nSwiatek did not appear to show any ill effects from the hurt right thigh that caused her to stop playing in the third set of her quarter-final in Rome last weekend and indicated that the issue would not prevent her from competing in Paris, where she has won two of her three major trophies.\n\"It's like my favourite tournament in the whole year, so I'm always excited to come back,\" said Swiatek.\n\"Before the tournament, I get this extra motivation to practice harder, to make everything better.\"\nThe draw put her in a potential quarter-final against Coco Gauff in what would be a rematch of last year's final.\nAlcaraz, who just turned 20, and 36-year-old Djokovic have played each other just once previously, in the semi-finals of the Madrid Open in May 2022, which the Spaniard edged in a tough three-setter.\nThis year, the men's quarter-finals by seeding would be US Open champ Alcaraz, against Stefanos Tsitsipas, Djokovic against Andrey Rublev, Daniil Medvedev against Jannik Sinner and Casper Ruud, runner-up last year, against Holger Rune.\nThe women's match-ups in that round are scheduled to feature Elena Rybakina, the reigning Wimbledon champion, against Ons Jabeur, Australian Open champ Aryna Sabalenka against home hope Caroline Garcia and Jessica Pegula against Maria Sakkari.", "score": 25.65453875696252, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Britain's Andy Murray has secured his second Wimbledon title after beating Canada's Milos Raonic in straight sets in Sunday's men's singles final.\nMurray won 6-4, 7-6 [7-3] and 7-6 [7-2] to add to the title he won back in 2013 against Novak 
Djokovic.\nThe 29-year-old surpassed Fred Perry and Bunny Austin’s record after reaching his 11th grand slam final.\nPresent at the final between Murray and Raonic were the Duke and Duchess of Cambridge, who watched from the Royal Box.\nHis victory in three sets means he has now won his third Grand Slam.", "score": 25.65453875696252, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "The draw for Wimbledon 2021 has been confirmed, with defending champion Novak Djokovic opening up against British wildcard Jack Draper.\nThe 134th edition of the grass court show-piece takes place at the All England Lawn Tennis Club from June 28 to July 11.\nThe tournament returns in 2021 following an enforced hiatus last year due to the global health crisis.\nReigning Men’s Singles champion Djokovic is bidding to claim his sixth Wimbledon title and has been drawn to face 19-year-old Draper in Round One, the world number 250 making his debut.\nTwo-time champion Andy Murray has been drawn to face Georgia’s Nikoloz Basilashvili as the Scot prepares to make his Wimbledon return for the first time since 2017.\nMurray, who is now ranked 119th after battling injury, has been given a wildcard entry into the main draw.\nEight-time champion Roger Federer has been placed in the bottom half of the draw and will potentially have to overcome Daniil Medvedev and Alexander Zverev if he wishes to reach a 13th Wimbledon final.\nThe Swiss superstar, who boasts a record of 101-13 at the All England Club since making his debut in 1999, will meet France’s Adrian Mannarino in Round One.\nThird seed Stefanos Tsitsipas starts out against Frances Tiafoe, while Nick Kyrgios marks his first appearance since the Australian Open with a clash against recent Halle Open winner Ugo Humbert.\nRussia’s Daniil Medvedev will take on Jan-Lennard Struff, while Alexander Zverev
plays Tallon Griekspoor and Queen’s Club champion Matteo Berrettini faces Guido Pella.\nIn the Women’s Singles draw, seven-time champion Serena Williams has been drawn to face Aliaksandra Sasnovich in the opening round.\nWorld number one Ashleigh Barty faces an emotional First Round tie against Carla Suárez Navarro – who recently received the all-clear from cancer.\nAnother stand-out tie sees tenth seed Petra Kvitová face world number 69 Sloane Stephens, while 2017 semi-finalist Johanna Konta will take on Katerina Siniakova.", "score": 25.65453875696252, "rank": 37}
You're (Berdych) doing an amazing season, sorry for today but I wish you luck for the rest of the season,\" he added.\nImage: Rafa Nadal with the winners trophy\nNothing could derail the Mallorcan\nFrom the moment Nadal bounced, weaved and stretched his way down the corridors of the All England Club to Centre Court like a fired-up prize fighter, there was a sense nothing could derail the 24-year-old.\nBoth held their opening service games to love on a bright and breezy day, and neither was under early pressure, and even a male fan bellowing \"I love you Rafa\" could not throw the Spaniard's focus.\nGame seven turned the tone of the match though, as Nadal stepped up a gear and Berdych's first serve deserted him.\nNadal's forehand was starting to eat up the Berdych serve, and the Mallorcan brought the crowd to life with a searing forehand pass down the line to bring up three break points.", "score": 25.65453875696252, "rank": 38}, {"document_id": "doc-::chunk-1", "d_text": "I think most of the time I spend on tour. I practice in Slovakia between the tournaments. I had camps in Dubai. So I don’t live anywhere, to be honest.”\nShe said after her quarterfinal win: “I just want the war to end as soon as possible.”\nRafael Nadal has withdrawn from Wimbledon.\nThe 22-time Grand Slam champion has an abdominal injury and says he won’t play his semifinal match against Nick Kyrgios on Friday.\nElena Rybakina defeated 2019 champion Simona Halep 6-3, 6-3 on Centre Court to set up a Wimbledon final against Ons Jabeur.\nBoth the 17th-seeded Rybakina and third-seeded Jabeur are first-time Grand Slam finalists.\nRybakina is the first Kazakhstan player to reach a major final. 
Jabeur is the first Arab woman to reach a major final and the first African woman to do so in the Open era.\nThe 23-year-old Rybakina is the youngest Wimbledon finalist since 2015 when Garbiñe Muguruza lost to Serena Williams.\nHalep, the 2018 French Open champion, had reached the semifinals without dropping a set but was broken early in both sets.\nThe 30-year-old Romanian wasn’t able to defend her Wimbledon title last year — after the 2020 edition was canceled — because of a calf injury.\nThe second women’s semifinal match between Simona Halep and Elena Rybakina has started on Centre Court at Wimbledon.\nHalep won the title at the All England Club in 2019. Rybakina has never been this far at a major tournament.\nThe winner will play Ons Jabeur in Saturday’s final.\nOns Jabeur advanced to her first Grand Slam final by beating Tatjana Maria 6-2, 3-6, 6-1 on Centre Court at Wimbledon in a victory that is also a first for Arab and African women.\nThe Tunisian is the first Arab woman to reach a major final and the first African woman to do so in the Open era.\nThe third-seeded Jabeur will face either 2019 Wimbledon champion Simona Halep or 17th-seeded Elena Rybakina in Saturday’s final.", "score": 25.166794706880363, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "Novak Djokovic has been drawn in the same half of the French Open men's singles draw as Carlos Alcaraz, meaning the pair could meet in the semi-finals.\nWorld number one Alcaraz, playing his first major as a Grand Slam champion, starts against a qualifier.\nDjokovic is seeded third as he goes for a record 23rd men's major title and plays American Aleksandar Kovacevic.\nCameron Norrie, one of only three British players in the singles, starts against France's Benoit Paire.\nFormer world number one Andy Murray withdrew last week to focus on the upcoming grass-court season.\nEmma Raducanu, who would have received a place in the main draw, is also not playing following operations on both her wrists and 
an ankle.\nAs a result, no Britons feature in the women's singles after six players lost in the qualifying rounds.\nIt is the first time the nation is not represented in a Grand Slam main draw since the 2009 US Open.\nThe French Open, which is the second major of the season, starts on the Roland Garros clay on Sunday.\nWhat else happened in the men's draw?\nThere is an unfamiliar feeling to this year's event with the absence of 14-time men's singles champion Rafael Nadal because of a hip injury.\nSpain's Nadal, 36, announced last week he would not be playing at Roland Garros for the first time in 19 years and he was not present at Thursday's draw, with tradition dictating the defending champion is usually there.\nInstead, all eyes were centred on which half of the draw Djokovic would land in.\nThe Serb, who turned 36 last week, ended up in the same half as 20-year-old Alcaraz, who won the US Open last year but missed the season-opening major at the Australian Open through injury.\nRussia's Daniil Medvedev is seeded second after winning last week's Italian Open and starts his campaign against a qualifier.\nWho plays who in the women's singles?\nDefending champion Iga Swiatek, who is the top seed and bidding for a third French Open title, starts against Spanish world number 67 Cristina Bucsa.\nThe 21-year-old from Poland is optimistic a recent thigh injury will not prevent her from trying to claim a fourth Grand Slam title.", "score": 24.345461243037445, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "Vondrousova Completes Stunning Run At Wimbledon, Wins First Grand Slam Denying Jabeur\n“I think everything is sinking in,” Vondrousova said. “It’s unbelievable. It was very tough match, and I was so nervous before. I’m just so grateful and proud of myself.\n“It was really tough in some moments. I think it was just a great match. We had some great rallies. She’s amazing player. She’s amazing person. That was the tough part also.
We know each other very well.\n“I’m just very happy that I kept fighting in the important moments.”\nThe 24-year-old lefty had just four career wins on grass courts entering the event, yet after five wins over seeds she leaves with the Wimbledon title.\n“I didn’t play well before on grass. When we were coming here, I was like, Okay, just play without stress, just try to win couple of matches. Then this happened,” she said.\nPlaying in her first final since the Tokyo Olympics, Vondrousova came out behind in a nervy start for both players on Centre Court, which had the roof closed due to high winds. But with Jabeur in command at 4-2, the bottom dropped out for the Tunisian.\nVondrousova would run off 16 of the next 18 points and four straight games to take the opener.\nJabeur, in her third Slam final, would leave the court in disarray.\nVondrousova would hold but Jabeur would up her game to go up 3-1. Vondrousova again stormed back, this time for good, taking advantage of a failing Jabeur backhand to win five of the last six games.\nVondrousova wins just her second career title, her first since 2017 in Biel, Switzerland. She had come up short in the 2019 French Open final as a 19-year-old.\n“Winning, it’s amazing feeling,” Vondrousova added. “I have my husband here. My little sister came also on Friday. Yeah, I’m just very happy to share with the people I have here ’cause in Paris it was a bit sad. I couldn’t go there to hug them.
Now this happened.", "score": 24.345461243037445, "rank": 41}, {"document_id": "doc-::chunk-1", "d_text": "The American knew the match was there for the taking but putting Nadal away was another story.\nHe pulled back to lead 5-4 only for Nadal to get his forehand going and claim three games in a row to force an unlikely decider.\nFritz was a picture of frustration, which only increased when Nadal broke to lead 4-3, but the Spaniard was still struggling to find dominance on serve and back came his opponent straight away.\nNadal seized control of the deciding tie-break by winning the first five points and, although Fritz briefly threatened a comeback, he could not deny the remarkable Spaniard, who clinched victory after four hours and 20 minutes.\nKyrgios, meanwhile, reached the first Grand Slam semi-final of his chequered career with a comfortable 6-4 6-3 7-6(5) victory over Chile’s Cristian Garin.\nThe unseeded 27-year-old lost the opening nine points on Court One but ultimately had too much firepower for Garin, who had hoped to become Chile’s first Wimbledon semi-finalist.\nIn surpassing his previous best Wimbledon run to the quarter-finals eight years ago, Kyrgios becomes the first Australian man to reach a Grand Slam semi-final since Lleyton Hewitt at the 2005 US Open.\n“An amazing atmosphere again,” world number 40 Kyrgios, the lowest-ranked semi-finalist at Wimbledon since Marat Safin (75) and Rainer Schuettler (94) in 2008, said.\n“I never thought I’d be in the semi-final of a Grand Slam.
I thought that ship had sailed — that I might have wasted that window in my career.\n“I’m really happy I was able to come out here with my team and put on a performance.”", "score": 24.345461243037445, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "Rosol stuns crowd by beating Nadal at Wimbledon\nWIMBLEDON, England - It was Lukas Rosol and not Rafael Nadal who looked like a two-time Wimbledon champion used to pummeling opponents into submission on tennis' biggest stage.\nIt was Rosol, and not Nadal, who sprinted to and from his chair during changeovers like he had a never-ending supply of energy, pumped his fist and shouted to his entourage in the player's box. And it was the 100th-ranked, little-known Czech player making his first Wimbledon appearance - and not the 11-time Grand Slam winner - who got better and stronger as the second-round match on Centre Court progressed into the night.\nHe hit ace after ace to complete one of the biggest upsets tennis has seen in years.\nAs surprising as Rosol's five-set victory over Nadal was, the manner in which he completed it Thursday was perhaps equally stunning.\n\"In the fifth set he played more than unbelievable,\" Nadal said.\nHe wasn't the only one who struggled to believe what they were seeing.\nRosol, who had lost in qualifying for Wimbledon in each of the last five years, simply outclassed Nadal with his powerful serving and booming ground strokes. He hit cross-court backhand winners that measured 99 mph, he stepped up to whip scorching forehand returns, and he served so well that Nadal hardly tried to get to them by the final game. The last one he hit was his 22nd, and it wrapped up a 6-7 (9), 6-4, 6-4, 2-6, 6-4 victory that no one had seen coming.\nLeast of all Rosol himself.\n\"I'm not just surprised; it's like a miracle for me,\" he said.
\"Like just some B team in Czech Republic can beat Real Madrid (in) soccer.\"\nBut Rosol fully earned the win, bouncing back from wasting three set points in the first set to win the next two. After Nadal leveled the match in the fourth, organizers then decided to slide the retractable roof out over Centre Court to allow the match to finish under the lights. That forced a 45-minute break that had Nadal agitated, but seemingly just made Rosol stronger.\nHe came out and broke Nadal in the first game, and never gave the Spaniard a chance to get back into the match.", "score": 24.345461243037445, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "World No. 1 Novak Djokovic returns to action at the Rolex Monte-Carlo Masters, but the two-time champion will have to negotiate a difficult path if he wants to add a third Monte-Carlo title to his collection following the release of the draw Friday.\nThe 34-year-old could face #NextGenATP Spaniard Carlos Alcaraz in a blockbuster quarter-final at the ATP Masters 1000 event, with Indian Wells titlist Taylor Fritz also in his quarter. A potential Monte-Carlo rematch with Daniel Evans looms in the third round, one year after the Briton upset Djokovic in the fourth round in Monaco – though Evans must first navigate an opening-round meeting with 14th seed Roberto Bautista Agut.\nDjokovic, who opens against Spaniard Alejandro Davidovich Fokina or American Marcos Giron, has never played Alcaraz, who arrives in Monte-Carlo high in confidence following his title run at the Miami Open presented by Itau. The eighth seed is making his debut at the clay-court event and will begin against Sebastian Korda or Botic van de Zandschulp. Alcaraz is 18-2 on the season, having also triumphed on clay in Rio de Janeiro.\nAmerican Fritz earned the biggest title of his career when he ended Rafael Nadal’s unbeaten start to the season to win the trophy in California last month. 
The 10th seed will play wild card Lucas Catarina in the first round, with French wild card Jo-Wilfried Tsonga or Croatian Marin Cilic awaiting next.\nIn the bottom half of the draw, reigning champion Stefanos Tsitsipas will start his title defense against Italian former champ Fabio Fognini or Frenchman Arthur Rinderknech, with Canadian Felix Auger-Aliassime seeded to meet the Greek in the quarter-finals.\nThird seed Tsitsipas downed Andrey Rublev to clinch his maiden Masters 1000 crown last year and will be aiming to regain top form this week after losing to Alcaraz in the fourth round in Miami at the end of March.\nMeanwhile, Auger-Aliassime has won just one match since he clinched the title in Rotterdam and reached the final in Marseille at the end of February. The sixth seed, who is making his fourth appearance in Monte-Carlo, will play #NextGenATP Italian Lorenzo Musetti or Benoit Paire in his opening match.", "score": 23.030255035772623, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "Jannik Sinner lined up a forehand, drilled it down the line and dropped to the court on his back, giving himself a few moments to process how he’d managed to come back from two sets down to win his first Grand Slam title.\nThe 22-year-old Sinner found a way to turn defence into attack in his first major final and take the Australian Open title from Daniil Medvedev 3-6, 3-6, 6-4, 6-4, 6-3 on Sunday.\n“I still have to process it, because ... beating Novak [Djokovic] in the semis and then today Daniil in the final, they are tough players to beat,” Sinner said. “So it’s a great moment for me and my team. But in the other way, we also know that we have to improve if we want to have another chance to hold a big trophy again.”\nIt was his third straight win over a top 5 player after his quarter-final victory over Andrey Rublev and his semifinal upset that ended No. 1 Djokovic’s long domination of the tournament. 
Only Djokovic and Roger Federer have done that previously in a major played on hard courts.\nHe’s in great company.\nSinner is the first Italian to win the Australian Open and the youngest winner in a men’s final here since Djokovic won his first Grand Slam title in 2008.\nWith Carlos Alcaraz winning Wimbledon and Sinner winning the season-opening major, a generation shift is arriving.\nJannik Sinner of Italy. Photo / Getty Images.\nHe thanked his parents for not forcing anything on to him and for letting him make big life choices, including moving away from home at 14.\n“It’s been a hell of a journey,” the 22-year-old Sinner said, wiping his long, orange fringe out of his eyes, “even though I’m only 22.”\nFor 2021 US Open champion Medvedev, the loss was his fifth in six major finals. The third-seeded Medvedev set records with his fourth five-set match of the tournament and the most time on court at a major in the Open era, his 24 hours and 17 minutes surpassing Carlos Alcaraz’s 23:40 at the 2022 U.S. Open.", "score": 23.030255035772623, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "7 Maria Sakkari beat 17th seed Karolina Pliskova 6-4, 5-7, 6-3 and No. 10 Elena Rybakina defeated Varvara Gracheva 6-3, 6-0.\nDefending men’s champion Taylor Fritz reached the quarterfinals with a 6-4, 6-3 win over Marton Fucsovics.\nNo. 5 Daniil Medvedev overcame a swollen right ankle to beat 12th seed Alexander Zverev 6-7(5), 7-6(5), 7-5.\n“When I twisted my ankle, I twisted it too hard,” he said. “The moment I rolled it, I was like, ‘Okay, I’m going to stand up and it’ll be fine,’ but then I continued to stay on the ground because the pain was building up.
It was definitely an insane match.”\nMedvedev took a medical timeout in the second set to tape his ankle before recording his 17th consecutive match win and improving to 22-2 this year.\n“It was definitely the hardest,” he said on court.\nZverev added to the drama by saving a match point and breaking back for 5-5 in the final set. Medvedev regrouped and won.\nHe next plays Spain’s Alejandro Davidovich Fokina, who beat Cristian Garin 6-3, 6-4.\nNo. 10 Cameron Norrie, the 2021 champion, beat sixth seed Andrey Rublev 6-2, 6-4. Norrie improved to 21-3 this year.\nNorrie advances to a quarterfinal against Frances Tiafoe, who beat qualifier Alejandro Tabilo 6-4, 6-4.\nNo. 11 Jannik Sinner defeated Stan Wawrinka 6-1, 6-4.", "score": 23.030255035772623, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Darcis shocked Nadal...and himself\nWimbledon has witnessed some seismic shocks down the years but few could top twice champion Rafa Nadal's elimination at the hands of Steve Darcis, a Belgian ranked 135th in the world, in the first round on Monday.\nA year after losing to Czech Lukas Rosol in the second round, Nadal was outplayed by the 135th-ranked Darcis on Court One, losing 7-6(4), 7-6(8), 6-4 in front of a disbelieving crowd.\nImage: Steve Darcis of Belgium shakes hands at the net with Rafael Nadal of Spain after their first round match\nPhotographs: Mike Hewitt/Getty Images\nSwiss journeyman George Bastl beats Pete Sampras\nSampras, with seven Wimbledon trophies in his possession, endured one of the worst defeats of his career, losing 6-3, 6-2, 4-6, 3-6, 6-4 in the second round to a player ranked 145th in the world and who was a lucky loser from qualifying.\nAmerican Sampras recovered to win the U.S.
Open a few weeks later before retiring.\nImage: George Bastl of Switzerland celebrates after his victory over Pete Sampras of the USA in 2002\nPhotographs: Al Bello/Getty Images\nKarlovic shocks champion Hewitt\nHewitt, the defending champion, won the first set 6-1 before unheralded Karlovic, ranked 202, wheeled out the big guns and battered the Australian into submission with a devastating display of serving.\nKarlovic won 1-6, 7-6, 6-3, 6-4 and for only the second time in the history of the event, the top seeded male was toppled on the first day.\nImage: Ivo Karlovic of Croatia shakes hands at the net after beating defending champion Lleyton Hewitt of Australia in 2003\nPhotographs: Alex Livesey/Getty Images\nPeter Doohan upsets Boris Becker\nBecker, the top seed and twice defending champion, seemed invincible on the Wimbledon grass but ran into 70th-ranked Doohan in the second round.", "score": 23.030255035772623, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "Then, with Muguruza serving to stay in the first set at 5-4 down, came a 19-shot rally that changed everything.\nWilliams lost it as she dumped a forehand into the net, and was never the same player again.\nIt was all Muguruza after that, taking nine straight games to win her first Wimbledon crown, 7-5 6-0, and become the first player in history to defeat both Williams sisters in grand slam finals.\nWhat looked like a classic Wimbledon final after the first set turned into an anti-climax when Muguruza won the second, and the championship, 6-0 in 26 minutes on a challenge in front of a stunned Centre Court crowd.\n\"It was my hardest match today,\" said Muguruza, as former Spanish king Juan Carlos watched from the royal box. 
\"I grew up watching her play.\"\nWilliams' stunning collapse in the second set capped an emotional few weeks for the former top-ranked American, who had been involved in a car crash last month at home in Florida in which a man eventually died.\nOn July 8, Florida police said a newly surfaced video showed Williams \"was acting lawfully\" when she steered her car into a crossing before the fatal collision with another car on June 9, the Reuters news agency reported at the time.\n\"I try to do the same things you do, but I think there will be other opportunities,\" Venus said at the trophy ceremony, when asked if she missed her sister and last year's winner, Serena Williams, who is at home awaiting the birth of her first child.\nMuguruza's triumph comes two years after she lost to Serena in her maiden grand slam final at the All England Club.\nSix weeks ago, Muguruza crashed out of the French Open in tears, losing her Roland Garros crown in front of a hostile crowd rooting for her opponent, and not many people would have bet on her winning the title on the Wimbledon grass.\nFor her temporary coach, Conchita Martinez, it must have been a case of déjà vu. In 1994, Martinez spoiled the party for Martina Navratilova, who had been trying to win her tenth Wimbledon crown at the age of 37.\n\"She just told me to go out there and forget about all of this,\" Muguruza said in a news conference. \"Try to think it's another match.\"", "score": 22.742342328025867, "rank": 48}, {"document_id": "doc-::chunk-1", "d_text": "You Might Like:\nAlcaraz Makes Successful 2023 Debut, Wins Buenos Aires Over Norrie\nAlcaraz Sets Norrie Final In Buenos Aires\nAlcaraz Victorious Over Djere In 2023 Opener In Buenos Aires\nAlcaraz Opens 2023 Season In Buenos Aires; Norrie Seeded No. 2\nAlcaraz Advances To Buenos Aires SFs, Norrie Makes Comeback", "score": 21.695954918930884, "rank": 49}, {"document_id": "doc-::chunk-1", "d_text": "Carreno Busta takes it to deuce though, with a forehand winner.
Match-point saved.\nCarreno Busta's sliced backhand lacks the power to make it across the net. Advantage and set point to Zverev. Another mistake from Carreno Busta, and Zverev is through to his first Grand Slam final! What a comeback win for the German!\nFourth Set: Alexander Zverev 6-4\nZverev serves for the set. Good start with an ace. Zverev shanks a backhand and it hits the net. 15-15. Double fault after he tries to go for the gusto on the second serve. Excellent drop volley from Zverev. 30-30. Good serve and he has set point. Ace and we go to the fifth set!\nThird Set: Alexander Zverev 6-3\nAce and 15-0. 30-0, after Carreno Busta can't make the return. Another ace and it's 40-0. Three set points. He's done it, he's won the third set! Incredible serving from the German!\nSecond Set: Pablo Carreno Busta 6-2\nAlexander Zverev wins the first point after Carreno Busta hits a forehand wide. Another forehand goes wide and Zverev is up 0-30. Carreno Busta gets his first point of the game after Zverev misses up a return. Excellent forehand winner from Carreno Busta and it's 30-30. An ace and he's on set point. Poor shot from Carreno Busta, we go to deuce.\nAnother mistake from Zverev. Advantage and set point to Carreno Busta! Another vamos and another set in the bag for the Spaniard!\nFirst Set: Pablo Carreno Busta 6-3\nFor the second time, Carreno Busta serves for the set. Zverev wins the first point, Carreno Busta the second. 15-15. Zverev makes an error after a little rally, it's 30-15. Ooh, Carreno Busta tries to go heavy on the forehand, and miscues it. 30-30. 
Carreno Busta is adjudged to have hit a backhand long, but he challenges.", "score": 21.695954918930884, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "Austrian qualifier Dominic Thiem scored the biggest victory of his young career on Tuesday when he rallied for a 1-6, 6-2, 6-4 victory over third-ranked Stanislas Wawrinka at the Madrid Open.\nThe 20-year-old Thiem showed composure beyond his years in knocking the Australian Open champion out of the clay court event to reach the third round.\nThiem called on a thunderous array of baseline strokes and soft net touches to oust the third-seeded Wawrinka, who reached the final at Madrid’s Magic Box last year.\nThiem looked out of his depth as he struggled to find his rhythm early on Manolo Santana center court.\n“I didn’t have many matches against these top guys, so I wasn’t really used to his pace. He had an unbelievable start and I didn’t really know what happened,” the 70th-ranked Thiem said. “I started to get used to more and more his pace and angles and his game. I played unbelievable the second and the third set.”\nWawrinka may have dropped his guard after the easy opening as Thiem pulled even after bombarding his opponent with powerful backhands.\nWawrinka, coming off a clay court win in Monte Carlo, gathered himself for the final set — with both players holding serve until the 10th game when Thiem slapped another strong backhand across Wawrinka and down the line for 30-15. A gentle drop shot from Thiem following a long rally of ground strokes set up match point on Wawrinka’s serve.\n“I was in this famous ‘zone’ during the match. 
I was really unbelievably concentrated,” said Thiem, who scored his first victory over a top 10 player in his third attempt.\nThiem converted his third break point to win the match as Wawrinka sent his crosscourt backhand wide.\n“I had some chances in the third set (and) I should have played better when I had some opportunity,” said Wawrinka, who slipped to his fourth defeat of 2014. “But I was hesitating with my game and he went for it and deserved it.”\nWawrinka wasn’t the only Swiss player to exit the tournament on Tuesday.\nRoger Federer pulled out due to the birth of his second set of twins, this time boys — named Leo and Lenny.", "score": 21.695954918930884, "rank": 51}, {"document_id": "doc-::chunk-1", "d_text": "When Rosol broke for a 3-2 lead in the second set with a cross-court backhand, he had a 24-9 edge in winners.\nNadal broke back to 4-all, whirling around and throwing a celebratory uppercut, but again was in trouble at 6-5 in the tiebreaker. On that set point, Nadal whipped a winner he called \"a perfect forehand for that moment\" to get to 6-all. Two points later, Rosol plopped a second serve into the net for a double-fault that ceded the set, and said later: \"In the end, he was more lucky.\"\nNadal probably would not agree with that assessment. He did agree about the significance of that sequence.\n\"The difference maybe is one point,\" said Nadal, who collected two of his 14 Grand Slam titles at Wimbledon but exited in the first round last year. \"Maybe if I lose that set point in the second set — if that forehand down the line went out — maybe (I) will be here with a loss.\"\nInstead, he raised the level of his play. He won 22 consecutive points on his serve, and moved better, bending so low his knee touched the grass on backhands. 
Nadal broke for a 2-1 lead in the third set, and again for a 1-0 lead in the fourth.\n\"If I had played the first set the way I did the last two, I would have won it, too, I think,\" Nadal said.\nThree seeded men lost, including No. 13 Richard Gasquet, who wasted nine match points and was beaten by 19-year-old Nick Kyrgios of Australia 3-6, 6-7 (4), 6-4, 7-5, 10-8. Winners included No. 5 Stan Wawrinka, No. 8 Milos Raonic, No. 9 John Isner and No. 10 Kei Nishikori among the men, and past champions Serena Williams and Maria Sharapova among the women.\nNadal's longtime rival, seven-time Wimbledon champion Roger Federer, turned in a far more straightforward performance, delivering 25 aces in a 6-3, 7-5, 6-3 win over 103rd-ranked Gilles Muller of Luxembourg to get back to the third round, too.", "score": 21.695954918930884, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "The defending champion in Rio, Carlos Alcaraz is the big favorite to retain his title at this ATP 500. A real war machine on clay, the Spaniard crushed the competition last year and will be looking to repeat the feat. Fresh from winning the Buenos Aires tournament, his first title of the season, the former world number 1 is in great shape. In the first round, he faced local wild card Mateus Alves, 556th in the world, on Tuesday. The match could not be completed due to heavy rain; “Carlitos” was one game short of the round of 16 at 6-4, 5-3, and duly closed it out 6-4, 6-4 after the restart. The world number 2 will face Fabio Fognini this Thursday.\nIn a press conference, the Spaniard reflected on his comeback and his goal for 2023. “After four months off, I had a good week in Argentina, I’m confident. I want to have a great year, with a lot of tournaments. There is a little pressure because I have to defend several titles, but I want to enjoy it. The number one is a good goal, a goal that I would like to achieve this year,” he said. This week, Alcaraz is 590 points behind Novak Djokovic.
Juan Carlos Ferrero’s protégé could have a window to win a title in March at the Masters 1000 events in the United States, especially if Nole can’t take part. It won’t be easy, though, because Carlitos will have 1,360 points to defend out of the 2,000 on offer. The margin is slim.", "score": 20.327251046010716, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "Verdasco hammered four aces and saved six of seven break point chances in the match that lasted one hour, 52 minutes.\nVerdasco has won seven of 10 career meetings with Almagro.\n“I am so tired I don’t know if I can talk so much,” said Verdasco, who also reached the doubles final in Casablanca.\nAlmagro made Verdasco work for the victory as he hit eight aces, won 72 percent of his first serve points and saved seven of nine break points.\nIt marked the second year in a row Almagro finished runner-up in Houston, losing last year to John Isner.\nAlmagro was handed a free pass into the final when the US’ Sam Querrey was cut down by a back injury in the semi-finals.\n“He played a really good match today, and he’s the winner,” Almagro said of Verdasco. “It was a battle today, and he really concentrated on his serve.
I had many chances and I didn’t [take them] and that’s the key of the match.”\nGRAND PRIX HASSAN II\nAP, CASABLANCA, Morocco\nEighth-seeded Guillermo Garcia-Lopez won his first title in nearly four years after rallying to beat fourth seed Marcel Granollers 5-7, 6-4, 6-3 in an all-Spanish final on clay at the Grand Prix Hassan II on Sunday.\nThe 30-year-old Garcia-Lopez’s previous title was on indoor hard courts in Bangkok in 2010 and he had lost his previous two finals — last year on clay at Bucharest and on indoor hard courts in St Petersburg, Russia.\nGranollers, who was bidding for his fifth career title, dropped his serve five times and was less consistent in Casablanca, Morocco.\nThe 28-year-old won only 60 percent of first-serve points, compared with 76 percent for Garcia-Lopez, who won their only previous meeting in the second round of the same tournament in 2010.\nGarcia-Lopez continued the recent dominance of his countrymen at the tournament, becoming the fourth Spaniard to take the title over the past six years after Tommy Robredo won it last year, Pablo Andujar in 2011 and 2012, and former French Open champion Juan Carlos Ferrero in 2009.", "score": 20.327251046010716, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "* Del Potro and Ferrer through to second round\n* Wins also for promising players Raonic and Dimitrov (Adds quotes, details, other results)\nBy Sonia Oxley\nLONDON, June 25 (Reuters) - Juan Martin del Potro provided some encouragement for the rest of the chasing pack in the men's game at Wimbledon on Tuesday.\nApart from single triumphs by the Argentine and Britain's Andy Murray, the trio of Roger Federer, Rafa Nadal and Novak Djokovic have won every one of the last 33 slams.\nThe Argentine, who reached the second round with a 6-2 7-5 6-1 victory over Spain's Albert Ramos, is in a group of players always waiting to capitalise on slip-ups by the triumvirate.\nNadal has already tumbled out at the All England Club, suffering a shock 
first-round loss to Steve Darcis on Monday, which spells good news for the cluster of 'best of the rest' players.\nMurray, who has reached three of the last four grand slam finals, has also opened up a gap with the chasing pack.\n\"All the players can beat the top 10 players, the top four, the top five,\" said Del Potro, whose 2009 U.S. Open victory and Murray's 2012 win at the same tournament are the only times a non-'Big Three' name has won a major since 2005.\n\"I knew after three or four years they win every grand slam, the same players... It's really difficult (to) break that name on the big tournaments,\" he told a news conference.\n\"But, of course, I'm trying. I like to play the grand slams. They are (the) longest tournaments, and you can play maybe a bad match and survive and that gives confidence for the next rounds. In the grand slam during 15 days, everything can happen.\"\nEighth seed Del Potro, who missed the French Open because of a virus, will face Canadian Jesse Levine in the second round after shaking off the rust to beat left-handed Ramos.\nHaving sailed through the first set, he made hard work of the second when, having broken and established a 5-2 lead, the net-shy Argentine was caught out by some drop shots.", "score": 20.327251046010716, "rank": 55}, {"document_id": "doc-::chunk-6", "d_text": "7 seed Roger Federer is bounced in the opening round of Wimbledon by 18-year-old Croat Mario Ancic by a 6-3, 7-6 (2), 6-3 margin. Says the No. 154-ranked Ancic, “I came first time to play Centre, Wimbledon, they put me on Centre Court for my first time. I qualified, nothing to lose, I was just confidence. I knew I could play. I believe in myself and just go out there and try to do my best. Just I didn’t care who did I play. Doesn’t matter…I knew him (Federer) from TV. I knew already how is he playing. I don’t know that he knew how I was playing, but that was my advantage. 
And yeah, I didn’t have any tactics, just I was enjoying.” Following the loss, Federer goes on to win his next 40 matches at Wimbledon – including five straight titles – before losing in the 2008 final to Rafael Nadal of Spain.\n1996 – “Hen-mania” begins at Wimbledon as 21-year-old Tim Henman wins his first big match at the All England Club, coming back from a two-sets-to-love deficit – and saving two match points – to upset No. 5 seed and reigning French Open champion Yevgeny Kafelnikov 7-6 (8-6), 6-3, 6-7 (2-7), 4-6, 7-5 in the first round in what Jennifer Frey of the Washington Post calls “a cliffhanger that enraptured the winner’s countrymen in the Centre Court seats.” Henman goes on to reach the quarterfinals, where he is defeated by American Todd Martin 7-6 (5), 7-6 (2), 6-4, but remains a threat to win the title of much of the next decade, thrilling British fans in the excitement of the possibility of a home-grown player becoming the first player to win the men’s singles title at Wimbledon since Fred Perry won his last of three titles in 1936.", "score": 20.327251046010716, "rank": 56}, {"document_id": "doc-::chunk-1", "d_text": "With his victory over Zverev, the 19-year-old became the youngest player to win the Madrid Masters in the tournament's 20-year history. 
The previous youngest champion in Madrid was Nadal, who was 19 years and five months old in 2005 when he recovered from two sets down to beat Ivan Ljubicic in a fifth-set tie-break.\nBack then, the tournament was held on indoor hardcourt in October.\nThe previous youngest player to win the Madrid Masters on clay was Alexander Zverev (21 years, one month) in 2018.\n#3 Carlos Alcaraz is the youngest 5-time titlist on ATP tour in nearly 2 decades\nCarlos Alcaraz's early success has inevitably evoked comparisons with Rafael Nadal, who had a banner (79-10) year as a teenager in 2005.\nAlcaraz has won four titles this year, five overall, including his first ATP 500 title in Rio de Janeiro and his first Masters 1000 title in Miami. A little over a month after his Miami triumph, Alcaraz has become the youngest five-time titlist on the ATP tour in nearly two decades.\nBack in 2005, Nadal won six titles (all on clay) before his 19th birthday. That included two Masters 1000 titles.\nAlcaraz will now look to make his Grand Slam breakthrough this year, just like Nadal did while he was still in his teens.\n#4 Carlos Alcaraz is the youngest player in 17 years to win multiple Masters 1000 titles\nCarlos Alcaraz's Madrid success also meant he emulated his illustrious compatriot Nadal by becoming the youngest player in 17 years to win multiple Masters 1000 titles. Nadal was 18 when he won his first two Masters 1000 titles in Monte Carlo and Rome in 2005.\nEarlier this year, Alcaraz made his Masters 1000 breakthrough in Miami. 
He could have done it a fortnight earlier at Indian Wells but fell to his compatriot in a bruising three-hour three-set semifinal.\nUnfortunately, Alcaraz won't be the first teenager since Nadal to win Rome, as he has withdrawn from the tournament.\n#5 Carlos Alcaraz is the sixth player in the Open Era to go 5-0 in his first 5 career finals\nCarlos Alcaraz, who will ascend to a career-best ranking of World No.", "score": 18.90404751587654, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "Britain’s Cameron Norrie suffered a heart-breaking defeat by seventh seed Albert Ramos-Vinolas in the final of the Estoril Open.\nNorrie was bidding to win his first ATP title and continue a brilliant season by claiming silverware, but he was edged out 4-6 6-3 7-6 (3) by Spaniard Ramos-Vinolas after two hours and 44 minutes.\nThe 25-year-old, whose previous final came in his home city of Auckland in 2019, had battled past second seed Cristian Garin and former US Open champion Marin Cilic in his previous two matches.\nHe now found himself up against a clay-court specialist in Ramos-Vinolas, who had won two previous titles on the surface.\nThe Spaniard responded well to an early Norrie break of serve and piled the pressure on the British number two.\nBut, as he has done so impressively all season, Norrie stayed calm, weathered the storm and seized his moment to claim the opening set.\nRamos-Vinolas took an injury time-out for an apparent groin problem after dropping serve to go 2-1 down in the second set, but Norrie was unable to convert a 40-0 lead in the next game and was made to pay as his opponent levelled the match.\nIt was Norrie’s turn to call the trainer down a break after three games of the decider, taking painkillers for a left foot issue, and he immediately got back on level terms.\nBoth men looked to be feeling the physical effects of a gruelling battle, but it was Ramos-Vinolas who emerged victorious, coming from 3-1 down to claim the tie-break.\nNikoloz 
Basilashvili won his second title of the season at the BMW Open in Munich without dropping a set.\nThe Georgian, who had been in dreadful form before winning the title in Doha in March, defeated German Jan-Lennard Struff 6-4 7-6 (5).\nOn the first day of play at the Madrid Open, 11th seed Denis Shapovalov was a confident 6-1 6-3 winner over Dusan Lajovic, while there were also victories for Tommy Paul, Alexander Bublik and Alex De Minaur.", "score": 18.90404751587654, "rank": 58}, {"document_id": "doc-::chunk-2", "d_text": "Rublev’s 20 wins this season have come courtesy of a quarter-final at the Australian Open, victory with Medvedev in the ATP Cup, and a run of semis in Doha, Dubai and Miami.\nMedvedev, however, has one of the most formidable quarters in the draw: first 36-ranked Filip Krajinovic or the 37-ranked Doha champion Nikoloz Basilashvili, with his first seed coming in the shape of defending champion Fabio Fognini, and either Marbella champion Pablo Carreno Busta or Buenos Aires champion Diego Schwartzman in the quarters.\nMore pretenders to the throne\nWith 17 players aged 23 and under in the main draw, there is plenty of potential among this burgeoning young generation, many of whom have already proved their worth against top-flight opposition: Rublev, along with Stefanos Tsitsipas and Alexander Zverev, are all in the top eight.\nThree more are teenagers, and the best of them thus far, Sinner, started 2021 with the Great Ocean Road title in Melbourne, and then made that stand-out run to the Miami final.\nWild card Lorenzo Musetti reached his first ATP500 semi-final in Acapulco but faces another high-flying Russian, Aslan Karatsev, in the first round, in what is a tough segment of the draw topped by Tsitsipas that also includes an unseeded Felix Auger-Aliassime, who will have Nadal’s uncle and near life-long coach, Toni, in his box.\nAnd another intriguing first-round match features junior No1, 17-year-old Holger Rune, who made the Santiago quarter-finals as 
a qualifier earlier this year, and has been training with Djokovic this week. He faces 22-year-old Casper Ruud, another former No1 junior.", "score": 18.90404751587654, "rank": 59}, {"document_id": "doc-::chunk-4", "d_text": "It was Federer’s earliest exit from Roland Garros since he was beaten in the third round by Gustavo Kuerten in 2004. Since then, Federer had reached the quarterfinals in a record 36 consecutive major tournaments, a streak that ended at Wimbledon last year when he was ousted in the second round. The Swiss master also has Grand Slam tournament streaks of 10 consecutive finals and 23 straight semifinals. Even Gulbis was humbled by his victory over the 32-year-old Federer. “I’m sorry I had to win,” Gulbis told the crowd following the match. “I know all of you like Roger.”\nWith the top two players, Serena Williams and Li Na, sitting on the sidelines, the road to the Roland Garros women’s singles title appeared to be wide open for the tournament’s number three seed, Agnieszka Radwanska of Poland. But those earlier upsets proved to be the boost the 21-year-old Ajla Tomljanovic of Croatia needed. “After seeing the two first seeds go out, you kind of feel you can do this too,” Tomljanovic said after knocking off Radwanska 6-4 6-4. “I grew up with these girls who are beating them. I went into the stadium for the first time, and she (Radwanska) kind of feels like home there, because she’s been there a lot more than I have. I went out there and inside I really thought I could win. I think that showed and it is why I won.”\nSTUMBLE BY STAN\nHe is the reigning Australian Open champion, a player who has posted victories over the top two players in the world this year. But the brilliant playmaking that has taken Stanislas Wawrinka to number three in the world was missing when he took to the court for his first-round match at Roland Garros. Two hours and 23 minutes later, Wawrinka had fallen to unheralded Spaniard Guillermo Garcia Lopez 6-4 5-7 6-2 6-0. 
Wawrinka became the highest seeded man to lose in the opening round at Roland Garros since third-seeded Andy Roddick lost to Igor Andreev in 2007.", "score": 18.90404751587654, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "RIO DE JANEIRO — Spanish teenager Carlos Alcaraz beat Argentina’s Diego Schwartzman 6-4, 6-2 to win the Rio Open.\nThe 18-year-old Alcaraz overcame third-seeded Schwartzman in the final to the delight of Brazilian fans at the clay-court tournament.\nThe seventh-seeded Spaniard won his first professional match in Rio de Janeiro two years ago and his first tournament last year at Umag, Croatia, also on clay.\nThe 29-year-old Schwartzman said his younger rival is doing “an amazing job.”\n“He is so young and is already achieving impressive things,” the Argentine said.\nAlcaraz praised Schwartzman’s fighting spirit and thanked the crowd for its support.\n“I have no words to describe what I had here from the first match until this final,” Alcaraz said.\nThe Spaniard converted five of his six break points overall and dominated with impressive baseline play.\nAlcaraz’s path to the title included upsetting top-seeded Matteo Berrettini in the quarterfinals and then beating another Italian, Fabio Fognini, in the semifinals.\nFitness was a key element of Sunday’s final. 
Both players had their quarterfinals and semifinals on Saturday due to heavy rain falling in Rio during the week.\nSchwartzman had threatened not to play his semifinal match against countryman Francisco Cerundolo if he didn’t have enough time to rest after a lengthy quarterfinal against Spain’s Pablo Andujar.", "score": 17.397046218763844, "rank": 61}, {"document_id": "doc-::chunk-1", "d_text": "He joins Sebastian Korda in the last 16 as the first qualifiers to reach that stage at Roland Garros in nine years.\nAltmaier has yet to drop a set in the tournament and will face 17th seed Pablo Carreno Busta in the last 16.\nThe US Open semi-finalist was the winner of an all-Spanish meeting with 10th seed Roberto Bautista Agut, seeing off his compatriot 6-4 6-3 5-7 6-4.\nCarreno Busta, who was a break down in the fourth set, hit 65 winners to make it through in three hours and 22 minutes.\nGreek fifth seed Stefanos Tsitsipas, 22, was racing into the next round when Slovenian opponent Aljaz Bedene was forced to retire at 6-1 6-2 3-1 with an ankle problem.\nTsitsipas will play Grigor Dimitrov after Roberto Carballes Baena retired when the Bulgarian 18th seed was leading 6-1 6-3.\nRussian 13th seed Andrey Rublev breezed past Kevin Anderson to reach the fourth round at Roland Garros for the first time.\nRublev hit 27 winners and did not face a single break point as he dominated the South African to win 6-3 6-2 6-3.\nThe 22-year-old, who had never won a French Open match before this week, will play Marton Fucsovics for a place in the quarter-finals.\nHungarian Fucsovics beat Brazil’s Thiago Monteiro 7-5 6-1 6-3.", "score": 17.397046218763844, "rank": 62}, {"document_id": "doc-::chunk-0", "d_text": "Update: Rafael Nadal struggles at times in Aussie opener but prevails in 4 sets\nMELBOURNE, Australia — Rafael Nadal never really looked in danger of becoming the first defending Australian Open champion to lose in the first round since his current coach Carlos Moya defeated Boris 
Becker a quarter of a century ago.\nStill, it wasn’t a peak performance for Nadal, who entered Monday’s match against 21-year-old Jack Draper 0-2 in 2023 with six losses in his last seven matches. After almost two hours of so-so play, Nadal found himself even at a set apiece.\nHe appeared to be pulling away, taking advantage of his opponent’s bout with cramps on an 85-degree-Fahrenheit afternoon, when suddenly Draper went up a break in the fourth set. From there, however, Nadal did not drop another game and began his pursuit of a record 23rd Grand Slam championship with a 7-5, 2-6, 6-4, 6-1 victory that took more than 3.5 hours in Rod Laver Arena.\n“I need a win, so that’s the main thing,” Nadal said. “The method doesn’t matter.”\nThat’s a good thing, because the 36-year-old Spaniard has not been in top form, and overall it was a bit of a struggle. Still, there was a silver lining, given his recent record and the knowledge that he has torn his abs twice in the past six months.\n“I was humble enough to accept that there will be little fluctuations during the match,” Nadal said. “[That’s] a typical thing when you’re not in a winning mood.”\nBoth men are left-handed, but that’s pretty much where the similarities end, whether it’s style or age or experience or accomplishments.\nNadal, who is the No. 1 seed with top-ranked Carlos Alcaraz sidelined by injury, will be making his 67th Grand Slam appearance. 
Draper, who entered this week at a career-best No.", "score": 17.397046218763844, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "The Serbian, who recently split from his long-term coach Marian Vajda, is trying to regain his best form ahead of the French Open.\nStreaking forward, Novak Djokovic caught up to the drop shot sliding into a soft-angled answer that left a bemused Feliciano Lopez applauding the second-seed's speed and slick racquet skills.\nHe was broken three consecutive times in the first set, and once in the second to give Coric a 5-3 lead that he converted to close out the match.\nIt almost paralleled his first match at Monte Carlo, against Gilles Simon: take a first set against a risky but ultimately much less skilled player, lose the second, and go down a break of serve in the third before coming back to win, aided by some critical gaffes on his opponent's part (Almagro's wild baseline errors, and Simon's poor serving with the match on his racket).\n"When I started to control the points more towards the end of the first set and second set, I was hitting the ball pretty clean, creating a few chances", Murray said. "I've been playing at a high level during many weeks". It was the first win over a top-100 player this season for the 273rd-ranked Indian, who outplayed the Ukrainian 6-2 6-4 in one hour and 10 minutes.\nThe Spaniard takes on Fabio Fognini of Italy for the right to face Australia's Nick Kyrgios, who beat American Ryan Harrison 6-3 6-3. 
In their two previous encounters at Madrid, Nadal saved three match points to overcome Djokovic in a classic 2009 semi-final, before the Serb returned the favour in the 2011 final.\nNishikori will play in the third round against Spaniard David Ferrer, who advanced after 10th-seeded Jo-Wilfried Tsonga of France withdrew before the match because of a shoulder injury.\n'That's why I chose not to play today'.\nNadal is now 13-0 on clay this year as he swept aside Belgian ninth seed David Goffin 7-6 (7/3), 6-2 much to the delight of a boisterous home crowd under the roof on the Manolo Santana centre court.\n"It's the first match without my team that's been with me for 10 years", he said. "I'm expecting the best Djokovic tomorrow".", "score": 17.397046218763844, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "In the doubles, Argentina got a consolation win, with Maximo Gonzalez and Horacio Zeballos getting up over Italian pairing Simone Bolelli and Fabio Fognini. The Argentinians won 7-5 2-6 6-3 to ensure they won at least one match in the tie, but will need to belt Croatia in the final match in order to have a chance of reaching the knockout stages of the tournament, having lost to Sweden on the opening day.\nCANADA (2) defeated SPAIN (1)\nPool B | Valencia\nRoberto Bautista Agut (ESP) defeated Vasek Pospisil (CAN) 3-6 6-3 6-3\nFelix Auger-Aliassime (CAN) defeated Carlos Alcaraz (ESP) 6-7 6-4 6-2\nFelix Auger Aliassime / Vasek Pospisil (CAN) defeated Marcel Granollers / Pedro Martinez (ESP) 4-6 6-4 7-5\nCanada spoiled the party against Spain to set itself up in prime position for a place in the knockout stages of the event. The Canadians won in another thriller 2-1 to move to the top of Pool C, but must still beat Serbia to ensure guaranteed qualification. It was the Spanish who got up early, as Roberto Bautista Agut took care of Vasek Pospisil in three sets, coming from behind to win 3-6 6-3 6-3. 
It was a battle of two styles, with Pospisil hitting 24 winners – including 11 aces – to nine, but also having 20 unforced errors to five. Bautista Agut’s consistency saw him claim the win in three sets.\nAfter a shock loss in the opening tie of the tournament to Soonwoo Kwon, Felix Auger-Aliassime bounced back to take down US Open winner Carlos Alcaraz in three sets. The world number 13 won 6-7 6-4 6-2 to outwork Alcaraz in the end, serving a whopping 16 aces to three and winning 81 per cent of his first serve points. Auger-Aliassime produced 23 total winners to 18, and hit three fewer unforced errors (17-20) in another come-from-behind victory.", "score": 15.758340881307905, "rank": 65}, {"document_id": "doc-::chunk-3", "d_text": "She won 56% and 31% of her first and second serve points respectively during that match, and those numbers simply don't cut it.\nAfter clarifying that he won against all members of the Big 4 in his career (the interviewer said three), Tsitsipas proceeded to note the strengths of each one, saying that they will "remain probably one of the best in our sport."\nThe tournament will be held from Friday, September 23 – Sunday, September 25 at the O2 arena in London. Additionally, an Open Practice Day will be held on Thursday, September 22.\nWhen Federer's children arrived on the court and cried as they hugged their father, Djokovic was moved by what he saw and he started sobbing as well.\nBut she experienced a steep decline in her performances thereafter. This year, Raducanu has managed to accumulate just 17 wins against 18 losses. 
Her best results have been quarterfinal finishes at the Stuttgart Open and the Citi Open, and most recently, reaching the semifinals of the Korea Open.\nThe Spaniard beat Djokovic on grass in the 2007 Wimbledon semifinals and in the Queen's Club final a year later.\nFurthermore, the Spaniard defeated Soonwoo Kwon in straight sets in the Davis Cup Finals to secure Spain's victory over South Korea and help his country qualify for the quarterfinals of the tournament.\n22-time Grand Slam champion Rafael Nadal will join his compatriot and World No.1, Carlos Alcaraz, in securing a Spanish 1-2 in the ATP rankings on Monday.\nIn the third match of the day, Tiafoe lost the first set 6-1 to Tsitsipas before mounting a stunning comeback. He saved two match points in the second-set tie-breaker to win it 7-6(11), before clinching the match tie-breaker 10-8.", "score": 15.758340881307905, "rank": 66}, {"document_id": "doc-::chunk-2", "d_text": "3 seeded Colombian pair Juan Sebastián Cabal and Robert Farah in a tough three-set contest.\nThe pair last competed in the Cincinnati Masters in 2020, where they advanced to the semifinals. They reached the final of the 2021 Winston-Salem Open and won the 2022 ATP Lyon Open with Ivan Dodig as their partner.\nHe reached his first career Grand Slam final at the 2022 French Open, partnering with Dodig and defeating World No. 1 and No. 2 seeds Joe Salisbury and Rajeev Ram, as well as saving five set points in the quarterfinals. As a result, he broke into the top 25 and overtook Rajeev Ram as the American No. 
2 player.", "score": 15.758340881307905, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "Showing a great level of play and a very good mental attitude, German Alexander Zverev beat Serbian Novak Djokovic in straight sets to be crowned champion at the ATP Finals, played in London.\nThe 21-year-old Zverev overcame a defeat to "Nole" in the group stage to leave him no options in the final match of the tournament, winning 6-4 and 6-3.\nWith the year-end world number one ranking already assured, Djokovic wanted to close his comeback year to the tennis elite with a flourish, but a player ten years his junior made him bite the dust.\nTo do it, Zverev managed something nobody else could all week: breaking the Serbian’s serve, which has become his most powerful weapon since his elbow operation, as he has chosen a lighter racket and specific routines for that aspect of the game.\nThis Sunday, Zverev did not waste the four opportunities he had to break Djokovic’s serve, choosing to attack the Belgrade-born player and then finish him off with his own serve, delivering 10 aces.\nZverev wins his fourth title of the season, after taking the trophies in Munich, Madrid and Washington, and closes the season as the tour's top match-winner with 58, four more than the Austrian Dominic Thiem and five more than Djokovic himself.\nMore importantly, with the victory he confirms that there is emerging talent that should not be lost sight of, capable of defeating players with much more experience, such as Roger Federer and Novak Djokovic.", "score": 15.758340881307905, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "‘I don’t think he’s going to stop at 24 titles.’\nUS Open 2023 results: Novak Djokovic wins 24th major Tennis superstar Novak Djokovic has won a record-equalling 24th Grand Slam…\nThe 
US Open winner hailed his rivalry with Carlos Alcaraz.\n‘His passing hurt me deeply.’\n‘What are you still doing here?!’\nThe Serbian has now lifted THREE major trophies this year!\nThe 19-year-old lifts her first Grand Slam trophy!\nHe is one win away from a 24th Grand Slam title!\nUS Open 2023 results: Jack Draper wins, Dan Evans loses to Carlos Alcaraz, Cameron Norrie & Katie Boulter out Britain’s…\nMixed day for the British players at Flushing Meadows.\nUS Open 2023: Andy Murray calls VAR debut a ‘farce’ Britain’s Andy Murray has described Video Assist Review (VAR) as…\n‘The smell, oh my gosh.’\nCarlos Alcaraz can retain his title while Jessica Pegula can win a first slam.\nA change of plan for Venus Williams.\n‘I want her out. She needs to go.’\nThe Australian has missed all three Grand Slams so far this year.\n‘The stories around me started to snowball.’\n‘I’m actually shook by the level of disrespect…’\nThe fine will be taken out of Djokovic’s £1.17 million prize money.\nCarlos Alcaraz won the prestigious British tournament Wimbledon for the first time, beating one of the all-time ‘greats’ of tennis: Novak Djokovic.\n‘Probably he’s right!’\n‘Maybe I should have lost a few finals!’\nAlcaraz defeated Djokovic in an all-time classic.\n‘I feel sorry for the guy!’\nWill the Serbian ever get his hands on it?", "score": 13.897358463981183, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "Roland-Garros Day 3 men’s recap – Djokovic, Nadal, Monfils win but Rublev and Auger-Aliassime go down\nYour guide to the biggest stories from the men’s side at 2021 Roland-Garros as the first round continues in Paris on Tuesday 1st June\n• Seeds winning on Tuesday at Roland-Garros (1st round): Novak Djokovic (1), Rafael Nadal (3), Matteo Berrettini (9), Diego Schwartzman (10), Gael Monfils (14), Alex de Minaur (21), Aslan Karatsev (24)\n• Seeds losing on Tuesday: Andrey Rublev (7), Felix Auger-Aliassime (20), Ugo Humbert 
(29)\n• Yesterday’s coverage, including wins for Federer and Medvedev among other results, is available here.\n• To follow our full coverage of the 2021 French Open, please visit this page.\n• To follow live results and to see the men’s draw and women’s draw, we recommend you follow the official Roland Garros website.\n• Tomorrow’s programme at Roland-Garros will be available here. For today’s schedule, you can click here.\n• All the practical information – how to watch the tournament, how to buy tickets and everything else you could possibly need to know – is hopefully on this page.\nMONFILS GETS SECOND WIN OF SEASON\nGael Monfils picked up his second win of the year and his second win since February of 2020 when he defeated an in-form Albert Ramos-Vinolas 1-6, 7-6(6), 6-4, 6-4. The Frenchman treated the crowd at Court Suzanne-Lenglen to an emotional victory after three hours and two minutes.\nMonfils will next meet Mikael Ymer, a five-set winner over Roberto Carballes Baena.\nRUBLEV OUT IN FIRST ROUND\nIn one of the two biggest upsets of the men’s tournament so far (also No 4 Dominic Thiem losing to Pablo Andujar), Andrey Rublev lost to Jan-Lennard Struff 6-3, 7-6(6), 4-6, 3-6, 6-4 after three hours and 46 minutes.", "score": 13.897358463981183, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "Ons the road to maiden Grand Slam title\nTUNISIAN Ons Jabeur has become the first female player to reach the Wimbledon and US Open finals in the same year, after the fifth seed bulldozed her way into the Flushing Meadows decider. Decimating the red-hot Caroline Garcia in an unbelievable performance, Jabeur dropped just four games and did not face a break point on her way to a comfortable 6-1 6-3 victory in just 66 minutes. 
She will now take on world number one Iga Swiatek, who was less convincing in her win over sixth seed Aryna Sabalenka, needing three sets to reach the final, 3-6 6-1 6-4.\nAfter earning a day’s break, Jabeur came out firing to repeat her effort at Wimbledon, where she made, and eventually lost, the final to Elena Rybakina. Up against the most in-form player on Tour in Caroline Garcia, Jabeur showed no mercy as she completely destroyed the Frenchwoman, restricting Garcia’s usual power and getting on the front foot herself. The Tunisian served eight aces and hit 21 winners to only 15 unforced errors, not facing a break point for the first time in 2022.\nThough her serve percentage was terrible (43 per cent), Jabeur’s ability to win not only 83 per cent of her first-serve points but also 57 per cent of her second-serve points proved vital. It allowed her to stay on top, avoid facing a break point and break her opponent four times from her only four chances.\n“She comes in the court and puts a lot of pressure on my second serves. I’m really glad she didn’t break me in the end … would have been really tough to go to 5-4,” Jabeur said post-match.\nDespite Garcia’s good run of form, the Frenchwoman has never been able to work out Jabeur, losing all seven times the pair have gone head-to-head. Jabeur said she used that mental strength to get through, and is looking forward to the huge US Open final on Saturday (Sunday morning AEST).\n“After Wimbledon, a lot of pressure on me,” Jabeur said. 
“I’m really, really relieved that I backed up my result.", "score": 13.897358463981183, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "Three-time grand slam champion Stan Wawrinka, a former top three player currently ranked 284 following knee surgery in 2017 and tendon problems in 2021, produced some impressive tennis to beat former world number one Daniil Medvedev for his first top-10 win since 2020.\nWawrinka, who came through qualifying to join the main draw where he beat his first-round opponent Joao Sousa in straight sets, missed a match point in the second set before clinching a 6-4 6-7(7) 6-3 victory to reach the quarter-finals of the Moselle Open in Metz, France.\n“I’m very happy with the level of play that I managed today,” Wawrinka said after claiming his first win since 2020 over a top ten player.\n“It’s for moments like this that I did everything to come back.”\nBut the top-seeded Medvedev, who has slipped to three in the world rankings and is keen to return to the No.1 spot, fought hard, especially in the second set.\nThe 37-year-old Swiss saw the two break points which would have given him a 4-0 lead in the decider slide by as the Muscovite pulled himself back into contention to level at 3-3.\nWawrinka, however, held his nerve and broke again, wrapping up the victory on his third match point.\n“Stan was better than me and he won,” Medvedev said. “I tried to hang on. I had opportunities but it wasn’t good enough.\n“But respect to Stan, I didn’t think he was 37!”\nWawrinka will next face Sweden’s Mikael Ymer in his first ATP quarter-final since January 2021.", "score": 13.897358463981183, "rank": 72}, {"document_id": "doc-::chunk-0", "d_text": "Serbia’s Novak Djokovic celebrates winning a point against Italy’s Matteo Berrettini during the men’s singles final on day thirteen of the Wimbledon Tennis Championships in London, Sunday, July 11, 2021. 
(Steve Paston/Pool Via AP)\nWIMBLEDON, England — Novak Djokovic tied Roger Federer and Rafael Nadal by claiming his 20th Grand Slam title Sunday, coming back to beat Matteo Berrettini 6-7 (4), 6-4, 6-4, 6-3 in the Wimbledon final.\nThe No. 1-ranked Djokovic earned a third consecutive championship at the All England Club and sixth overall.\nHe adds that to nine titles at the Australian Open, three at the U.S. Open and two at the French Open to equal his two rivals for the most majors won by a man in tennis history.\nThe 34-year-old from Serbia is now the only man since Rod Laver in 1969 to win the first three major tournaments in a season. He can aim for a calendar-year Grand Slam — something last accomplished by a man when Laver did it 52 years ago — at the U.S. Open, which starts Aug. 30.\nThis was Djokovic’s 30th major final — among men, only Federer has played more, 31 — and the first for Berrettini, a 25-year-old from Italy who was seeded No. 7.\nIt was a big sporting day in London for Italians: Their national soccer team faced England at Wembley Stadium in the European Championship final at night.\nWith Marija Cicak officiating, the first female chair umpire for a men’s final at a tournament that began in 1877, play began at Centre Court as the sun made a rare appearance during the fortnight, the sky visible in between the clouds.\nThe opening game featured signs of edginess from both, but especially Djokovic, whose pair of double-faults contributed to the half-dozen combined unforced errors, compared with zero winners for either. He faced a break point but steadied himself and held there and, as was the case with every set, it was Djokovic who took the lead by getting through on Berrettini’s speedy serve.", "score": 11.600539066098397, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Indian-American Samir Banerjee performed brilliantly at Wimbledon to win the Junior Grand Slam title. 
Samir defeated compatriot Viktor Lilov in straight sets 7-5, 6-3 in the final.\nIn this one-sided match, Samir completely overwhelmed Viktor Lilov, and the pressure of the big final did not show in his game. Viktor tried to fight back somewhat against Samir, but in the crucial moments Banerjee displayed good serve and return shots to win the first set 7-5. Viktor was greatly affected by the loss of the first set and made some inexplicable mistakes against Samir. Samir, on the other hand, took full advantage of this to raise the level of his game point by point, with Viktor succumbing to the pressure as time passed. Samir won the second set 6-3 to claim the junior title.", "score": 11.600539066098397, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "In Buenos Aires, Argentina, the Argentina Open had plenty of upsets and intense matches on Tuesday. In first-round action, the 5th-seeded Francisco Cerundolo battled against Yannick Hanfmann in a thrilling three-setter, ultimately coming out on top 6-2, 4-6, 7-5.\nCerundolo’s aggressive play saw him tally four aces and win 61% of his first serve points, while also breaking Hanfmann’s serve five times. Although Hanfmann put up a tough fight and broke Cerundolo three times, it wasn’t enough to secure the win. In the end, it was Cerundolo’s resilience and determination that helped him take the victory.\nThe match came to a nail-biting conclusion when Cerundolo broke Hanfmann’s serve in the twelfth game of the third set, securing his place in the second round. This was the first tour-level meeting between the two players, and Cerundolo was undoubtedly pleased to have come out on top.\nIn other matches, there were plenty of upsets as Dusan Lajovic upset 6th-seeded Sebastian Baez in three sets 3-6, 7-6(7-5), 6-3, and Roberto Carballes Baena of Spain stunned the 8th-seeded Albert Ramos-Vinolas in two sets 7-5, 6-4. 
It was clear that the competition was fierce, and any player could come out on top.\nTwo-time tournament champion Dominic Thiem collected his first win of the season, topping Alex Molcan 7-6(4), 6-3.\n“It was a good match,” said Thiem to the ATP. “I like it a lot here in Buenos Aires. Already the last days in practice I was feeling well.\n“I won my first match of the season against a very good opponent. I stayed focused throughout the whole match, as well in the difficult moments. So I’m happy and I’m trying to focus fully on the second round now.”\nWednesday’s slate promises more intense matches, including a highly anticipated match between Laslo Djere and No. 1 Carlos Alcaraz, and No. 2 Cameron Norrie’s match against Facundo Diaz Acosta.\nThe 19-year-old Alcaraz will make his 2023 debut after a leg injury forced his withdrawal from Australia.", "score": 11.600539066098397, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "Re: ATP Finals\nWorld No. 20 Nicolas Almagro captured his sixth ATP World Tour title Sunday as he defeated top seed and defending champion Robin Soderling 7-5, 3-6, 6-2 in the final of the SkiStar Swedish Open, an ATP World Tour 250 clay-court tennis tournament in Bastad.\nAlmagro, who improved to a 6-2 record in ATP World Tour finals, earned 250 South African Airways 2010 ATP Ranking points and €72,150, while runner-up Soderling received 150 points and €37,900 in prize money.\n“It’s a great feeling to win here,” said Almagro. “It’s always amazing to win a final and I’m very happy with the week. I’m going to enjoy this moment and then prepare for next week in Hamburg.”\nVictory gave Almagro his first ATP World Tour title since triumphing at the Abierto Mexicano Telcel in 2009 with victory over Gael Monfils. All six of the Spaniard’s titles have come on outdoor clay, beginning with victory at the Valencia Open 500 in 2006 before it was moved to indoor hard court. 
The Murcia native is the seventh different Spaniard to win an ATP World Tour title this season and the sixth Spanish champion in the past 10 years in Bastad.\nThe fourth-seeded Almagro fell short in the 2007 Bastad final against David Ferrer but has impressed throughout the week with his smooth progress through the draw and carried his high level into the final against Soderling, whom he defeated in their previous meeting on clay in Madrid.\nAfter saving two break points in the second game of the match, Almagro took his fourth opportunity to break Soderling’s serve in the 12th game and seal a one-set lead. Soderling hit back strongly, though, racing to a 3-0 lead in the second set before going on to level the match, saving one break point as he served out the set in the ninth game.\nWith the match finely poised on serve in the third set, Almagro lifted his level to win the final four games from 2-2 to seal victory in just less than two hours. Victory saw him level his career series with Soderling at 3-3, avenging the defeat he suffered to the Swede in the 2009 Bastad quarter-finals.", "score": 11.600539066098397, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "Marin Cilic won his very first Grand Slam title. The towering 25-year-old from Croatia dominated Kei Nishikori in straight sets, 6-3, 6-3, 6-3.\nSerena Williams dominated Caroline Wozniacki 6-3 6-3 in a match that took only an hour and 15 minutes. Serena smacked over 7 times the number of winners (29 winners to Wozniacki’s 4).\nSerena Williams ended a difficult-for-her Grand Slam season in the best way possible, winning her third consecutive U.S. Open title by beating Caroline Wozniacki 6-3, 6-3 on Sunday.\nRoger Federer could not pull off another big escape at the U.S. Open, losing 6-3, 6-4, 6-4 in the semifinals Saturday against Croatia’s Marin Cilic.\nKei Nishikori became the first Asian man to advance to a Grand Slam final.
The 24-year-old from Japan ousted the top-ranked player in the world, Novak Djokovic, 6-4, 1-6, 7-6, 6-3.\nJapan’s Kei Nishikori became the first man from Asia to reach a Grand Slam final, stunning top-ranked Novak Djokovic in four sets at the U.S. Open.\nFederer was broken only once, part of a three-game run for Bautista Agut that took the score from 5-1 to 5-4. But Federer had no trouble the second time he tried to serve out that set, which he ended with a pair of aces.\nCarlos Beltran hit a grand slam and drove in five runs as the New York Yankees broke out to support improbable fill-in starter Esmil Rogers and beat the sloppy Cleveland Indians 10-6 on Friday night for their sixth win in seven games.\nItaly’s Sara Errani and Roberta Vinci became the fifth duo to complete a career Grand Slam in women’s doubles together, winning Wimbledon for the first time Saturday.\nTaylor Teagarden hit a grand slam in his Mets debut, Daniel Murphy had a two-run shot and New York beat the Milwaukee Brewers 6-2 Tuesday night to snap a six-game skid.\nThe earliest real signs of trouble for Andy Murray came in the 10th game of his U.S.", "score": 8.086131989696522, "rank": 77}, {"document_id": "doc-::chunk-1", "d_text": "Nikola Mektic & Mate Pavic win Men’s Doubles Title\nCroatian Pair of Nikola Mektic & Mate Pavic defeated Marcel Granollers (Spain) & Horacio Zeballos (Argentina) to win the Men’s Doubles Title of Wimbledon.\n- With this, they became the 1st Croatian Pair to win a Grand Slam men’s doubles title.\nElise Mertens, Hsieh Su-wei win Women’s Doubles Title\nElise Mertens (Belgium) & Hsieh Su-wei (Chinese Taipei) defeated Russian pair of Veronika Kudermetova and Elena Vesnina to win the Women’s Doubles Title of Wimbledon.\nNeal Skupski & Desirae Krawczyk win Mixed Doubles Title\nNeal Skupski (UK) & Desirae Krawczyk (US) defeated Joe Salisbury (UK) & Harriet Dart (UK) to win the Mixed Doubles Title of Wimbledon.\nIndian-origin Samir Banerjee wins Boys Singles Title at 
Wimbledon\nIndian-origin Samir Banerjee, who represents the US, defeated Victor Lilov (US) to win the Boys Singles Title of Wimbledon 2021. He is the 1st American Junior Champion at Wimbledon since Reilly Opelka in 2015.\n- Till now, 4 Indian players have won Junior Grand Slam Titles – Yuki Bhambri (2009 Australian Open), Leander Paes (Wimbledon 1990 and US Open 1991), Ramesh Krishnan (French Open and Wimbledon 1979) and Ramanathan Krishnan (Wimbledon 1954).
to Monaco)...\nOn 26 April, became the first player over 30 to win Bucharest title since 2010 (Chela); was the first Spaniard to win the title since 2009 (Montanes)...Improved to 5-3 in ATP World Tour finals, beating Vesely in two TB sets...\nReached Estoril SFs, losing to Gasquet 76(1) in third set...", "score": 8.086131989696522, "rank": 79}, {"document_id": "doc-::chunk-3", "d_text": "He won his third title in his home country by defeating Damian Patriarca, who forfeited the match, at the ITF Circuit event in Buenos Aires.\nDel Potro turned professional after the Italy F17 event in Bassano, and in his first professional tournament, the Lines Trophy in Reggio Emilia, he reached the semifinals, where he lost to countryman Martín Vassallo Argüello in three sets. Two tournaments later, he reached the final of the Credicard Citi MasterCard Tennis Cup in Campos do Jordão, Brazil, where he lost to André Sá in straight sets. After turning 17, he won the Montevideo Challenger by defeating Boris Pašanski in the final in three sets. That same year, he failed in his first attempt to qualify for his first Grand Slam, at the US Open, losing in the first round to Paraguayan Ramón Delgado. Throughout 2005, del Potro jumped over 900 positions to finish with a world ranking of no. 158, largely due to winning three Futures tournaments. He was the youngest player to finish in the year-end top 200.\nIn February, del Potro played his first ATP tour event in Viña del Mar, where he defeated Albert Portas, before losing to Fernando González in the second round. Later, seeded seventh, he won the Copa Club Campestre de Aguascalientes by defeating the likes of Dick Norman and Thiago Alves, before beating Sergio Roitman in the final.\nDel Potro qualified for the main draw of his first Grand Slam in the 2006 French Open at the age of 17. He lost in the opening round to former French Open champion and 24th seed Juan Carlos Ferrero.
Having received a wild card, he reached the quarterfinals of the ATP event in Umag, Croatia, where he lost in three sets to the eventual champion, Stanislas Wawrinka. In Spain, he participated in the Open Castilla y León Challenger tournament held in Segovia, defeating top seed Fernando Verdasco in the quarterfinals and Benjamin Becker in the final.\nDel Potro qualified for his first US Open in 2006, after being seeded ninth in the qualifying stages, where he beat Brian Vahaly, Wayne Arthurs, and Daniel Köllerer in straight sets.", "score": 8.086131989696522, "rank": 80}]} {"qid": 6, "question_text": "What is RF value in chromatography and how is it measured?", "rank": [{"document_id": "doc-::chunk-3", "d_text": "Whenever the Rƒ value of a solute is zero, the solute remains in the stationary phase and is therefore immobile. If the Rƒ value = 1, the solute has no affinity for the stationary phase and travels with the solvent front. To compute the Rƒ value, take the distance traveled by the substance divided by the distance traveled by the solvent. For illustration, if a compound travels 1.5 cm and the solvent front travels 2.2 cm, the Rƒ value equals 1.5/2.2 = 0.68.\nPaper chromatography is one technique for testing the purity of compounds and identifying substances. Paper chromatography is a helpful method since it is relatively quick and needs small quantities of material.\nIn paper chromatography, as in thin layer chromatography, substances are distributed between a stationary phase and a mobile phase. The stationary phase is generally a piece of high quality filter paper. The mobile phase is a developing solution which travels up the stationary phase, carrying the samples with it.
The components of the sample will separate readily according to how strongly they adsorb on the stationary phase versus how readily they dissolve in the mobile phase.\nWhenever a colored chemical sample is placed on filter paper, the colors separate from the sample by putting one end of the paper in a solvent. The solvent diffuses up the paper, dissolving the different molecules in the sample according to the polarities of the molecules and the solvent. If the sample contains more than one color, it must contain more than one kind of molecule. Because of the different chemical structures of each kind of molecule, the chances are extremely high that each molecule will have at least a slightly different polarity, giving each molecule a different solubility in the solvent. The unequal solubility causes the different color molecules to leave solution at different places as the solvent continues moving up the paper. The more soluble a molecule is, the higher it will migrate up the paper. If a chemical is very non-polar it will not dissolve at all in a very polar solvent. The same holds for a very polar chemical and a very non-polar solvent.\nIt is significant to note that when using water (a very polar substance) as the solvent, the more polar the colour, the higher it will rise on the paper.", "score": 53.04906867021698, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "However, this retardation factor for a given protein compound will vary widely with changes in the adsorbents and/or solvents utilized. In addition, the retardation factor can vary greatly with the moisture content of the adsorbent. The Rf values, or retention factors, are then compared for analysis. 
This Rf value can be quantified as such:\nRf = (distance the compound has traveled) / (distance the solvent has traveled)\nA light pencil line is drawn approximately 7 mm from the bottom of the plate and a small drop of a solution of the dye mixture is placed along the line. To show the original position of the drop, the line must be drawn in pencil. If it were drawn in ink, dyes from the ink would move up the TLC plate along with the dye mixture and the results would not be accurate. In order to get more accurate results, dot the TLC plate with the dye mixture a few times, trying to build up material without widening the spots. A spot with a diameter of 1 mm will give good results. While dotting the TLC plate, be sure not to dot mixtures too close to one another, because when the dye mixture rises up the TLC plate it will clash with the other spots and the Rf values will be difficult to calculate.\nWhen the spots are dry, the TLC plate is placed in a beaker, with the solvent level below the pencil line. Cover the beaker to ensure that the atmosphere in the beaker is saturated with solvent vapor. Line the beaker with some filter paper soaked in solvent, because this will help in the process of separating the mixture. Saturating the atmosphere in the beaker with solvent vapor stops the solvent from evaporating as it rises up the plate.\nAs the solvent slowly travels up the plate, the different components of the dye mixture travel at different rates and the mixture is separated into different colored spots. The solvent is allowed to rise until it is approximately 1-1.5 cm from the top of the plate. This gives the maximum separation of the dye components for this particular combination of solvent and stationary phase.\nOnce the maximum separation of the dye components for this particular solvent and stationary phase is achieved, the TLC plate is removed from the beaker and allowed to dry.
Immediately after removing the TLC plate, use a pencil to mark the solvent front before the solvent begins to evaporate.", "score": 49.17045020087696, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "Thin-Layer Chromatography (TLC) is a chromatographic technique using a thin-layer plate prepared by fixing an adsorbent in a thin film.\nWhen one end of the thin-layer plate is immersed in a solvent, the solvent migrates upward through gaps (capillary action). When a sample chemical is present on the plate, the sample also moves, dragged along by the solvent migration. The distance of sample migration (the Rf value) differs depending on the balance between the degree of adsorption to the fixed layer (adsorbent) and the affinity for the mobile layer (solvent). This principle is utilized to separate and identify organic compounds.\nSelection of the TLC stain is also important to obtain good results in TLC analysis. For example, if you want to detect compounds with an amino group among multiple spots, you can stain selectively by using ninhydrin TS.\nCommon TLC stains, preparation methods of stains, and stained compounds are shown below. FUJIFILM Wako offers ready-prepared products in addition to reagents as raw materials for stains. Utilize them according to the intended purposes.", "score": 47.88259355873764, "rank": 3}, {"document_id": "doc-::chunk-2", "d_text": "The Rf of chlorophyll a was .341, and for xanthophyll it was .455; thus xanthophyll moved further than both chlorophylls a and b. The pigment that moved the furthest, though, was carotene with an Rf of .977.\nOne can also see from the results that carotene came the closest to the solvent front—lagging behind at a distance of only 0.2cm. Xanthophyll was 4.8cm from the solvent front.
Chlorophylls a and b were 5.8cm and 6.7cm, respectively, from the solvent front.\nPigment | Distance to solvent front | Distance moved by pigment | Rf\nChlorophyll b | 8.8cm | 2.1cm | .227\nChlorophyll a | 8.8cm | 3.0cm | .341\nXanthophyll | 8.8cm | 4.0cm | .455\nCarotene | 8.8cm | 8.6cm | .977\nIt is important to note that determining the Rf helps to standardize the procedure—others, by following the steps of this experiment and by calculating the Rf, can attain similar results.\nThe results show how fast the pigments moved in relation to the acetone in the time allowed. The pigment carotene moved the furthest, and therefore traveled the fastest, but at a slightly slower rate than the acetone itself. This is evident from its distance from the solvent front; only 0.2 cm away. The pigment that moved the slowest, and therefore the smallest distance, was chlorophyll b. The distance between chlorophyll b and the solvent front was 6.7 cm. Chlorophyll a was the second slowest, and xanthophyll was the second fastest pigment. The difference in the movements of the pigments was due to the solubility of each pigment and its ability to stick to the cellulose fibers of the paper. The solubility of a pigment and the size of a pigment’s molecules help to determine the rate, and thus the distance, it will travel. The more soluble a pigment is, the further, and faster, it will travel.", "score": 45.24128515351756, "rank": 4}, {"document_id": "doc-::chunk-7", "d_text": "In the case of lipids, the chromatogram may be transferred to a polyvinylidene difluoride (PVDF) membrane and then subjected to further analysis, for example mass spectrometry, a method termed Far-Eastern blotting.\nOnce visible, the Rf value, or retardation factor, of each spot can be found by dividing the distance the product traveled by the distance the solvent front traveled, using the initial spotting site as reference.
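The Rf definition just given (distance traveled by the spot divided by distance traveled by the solvent front) is easy to check against the pigment table. A minimal Python sketch follows; the `rf` helper is ours, not from the source, and note that 2.1/8.8 ≈ 0.239 rather than the tabulated .227 for chlorophyll b, so that row's figures are slightly inconsistent:

```python
def rf(distance_compound_cm, distance_solvent_cm):
    """Retardation factor: distance moved by the compound divided by the
    distance moved by the solvent front (dimensionless, 0 <= Rf <= 1)."""
    if distance_solvent_cm <= 0 or not 0 <= distance_compound_cm <= distance_solvent_cm:
        raise ValueError("need 0 <= compound distance <= solvent front distance")
    return distance_compound_cm / distance_solvent_cm

# Distances (cm) from the pigment table; the solvent front moved 8.8 cm.
moved = {"chlorophyll b": 2.1, "chlorophyll a": 3.0,
         "xanthophyll": 4.0, "carotene": 8.6}
rf_values = {name: round(rf(d, 8.8), 3) for name, d in moved.items()}

# Largest Rf = fastest pigment (it traveled closest to the solvent front).
fastest = max(rf_values, key=rf_values.get)
print(rf_values["carotene"], fastest)  # 0.977 carotene
```

The same helper reproduces the 1.5 cm / 2.2 cm worked example quoted earlier in this collection: `rf(1.5, 2.2)` rounds to 0.68.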
Such values depend on the solvent utilized and the kind of TLC plate, and are not physical constants.\nAs an illustration, consider the chromatography of an extract of green leaves (for example spinach) in 7 stages of development. Carotene elutes rapidly and is only visible until step 2. Chlorophyll A and B are halfway in the final step, and lutein is the first compound staining yellow.\nIn one study TLC has been applied in the screening of organic reactions, for example in the fine-tuning of BINAP synthesis from 2-naphthol. In this process the alcohol and catalyst solution (for example iron(III) chloride) are placed separately on the base line, then reacted and then immediately examined.\nThe separation of a mixture and isolation of the components in larger amounts is made possible via chromatography on a column rather than through TLC. The column is made of glass and is packed with particles that comprise the stationary phase. The mixture under test is placed on top of a layer of sand on the column, and a slow stream of solvent, the eluant, washes the mixture through it. The function of the sand is to prevent the particles being disturbed by the liquid. The substance which is least attracted to the stationary phase is washed out at the bottom of the column first, followed by the remaining components over a period of time.\n=> Preparation of the column:\nThe column is made up in the following steps:\nA) Put a wad of glass wool in the bottom of the tube and pour a layer of sand over this. 
The sand retains fine particles and also gives a flat horizontal base for the adsorbent column.\nB) Fill the tube with the first solvent to be employed and then add the dry adsorbent in a fine stream, shaking or tapping the tube to dislodge air bubbles, and draining solvent out at the bottom to make room as required, while keeping the solvent level above the adsorbent.", "score": 45.18905695593759, "rank": 5}, {"document_id": "doc-::chunk-2", "d_text": "This solvent moves through the paper due to capillary action and dissolves the mixture spot. Some parts of the mixture to be separated have a greater attraction for the chromatography paper, so they move a lesser distance, while other parts have a lesser attraction, so they move a greater distance up the paper.\nChromatography of a Plant Pigment\nThe specific mixture placed on chromatography paper will separate into consistent patterns as long as the same solvent, paper, and amount of time allowed for the separation are not changed. Different solvents will change the separation pattern of the mixture. Mixtures that are colored can be separated into component colors by paper chromatography.\nThe Rf value of a pigment is a statistic often computed from a chromatography separation. Each pigment in the solution will have a specific Rf for the same solvent when the chromatography runs for a specific length of time.\nCalculation of Rf\nRf = (distance the pigment travels from the original spot) / (distance from the original spot to the wetting front of the solvent)\nGel electrophoresis is a procedure used to separate charged molecules of different sizes by passing them through a gel in an electrical field. The gel acts as a support for the separation of the molecules of different sizes. 
The gel is usually composed of a jelly-like material called agarose, which is made from seaweed. Molecules such as DNA fragments of different lengths and proteins of different sizes are often separated in the gel. Wells are created in the gel which serve to hold the particular DNA mixtures to be separated. The DNA fragments are then loaded into the wells in the gel.\nSeparation of DNA\nThe gel contains very small holes which act to regulate the speed at which molecules can move through it based on the size of the molecules. The smaller molecules will move much more easily through the small holes in the gel. As a result, large fragments of DNA lag behind small fragments, thus allowing the experimenter to separate these molecules based on their size.\nSometimes molecular weight markers are electrophoresed along with the specimen, so the experimenter may know the size of the DNA fragment which has been separated.", "score": 45.17271496316276, "rank": 6}, {"document_id": "doc-::chunk-2", "d_text": "Other articles where packed-column chromatography is discussed: chromatography: Column chromatography: A packed column contains particles that either constitute or support the stationary phase, and the mobile phase flows through the channels of the interstitial spaces. The RF value is the degree of retention of a component (the retardation factor). If any dust clings to the glass, rinse it down with more eluent (go on to step 15.) Ready-to-use formats expertly packed with chromatography resins. Currently, mixed-phase packed columns do not offer complete resolution of chlorinated pesticides, and require long analysis times (nearly 40 minutes). MiniChrom Columns - for analytical and preparative chromatography with small volumes and rapid separation. We also have an extensive selection of GC packed column components (empty glass columns, ready-to-use packings, stationary phases, and solid supports) for those customers wishing to pack their own columns. 
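The molecular-weight-marker idea from the gel-electrophoresis passage above can be made concrete: over a limited range, migration distance is roughly linear in the logarithm of fragment size, so an unknown band's size can be interpolated from the ladder. The ladder values and the log-linear model below are illustrative assumptions, not data from the source:

```python
import math

# Hypothetical ladder: (fragment size in bp, migration distance in cm).
ladder = [(100, 7.8), (500, 5.6), (1000, 4.6), (3000, 3.1)]

def estimate_size(distance_cm):
    """Interpolate log10(size) linearly against migration distance."""
    pts = sorted(ladder, key=lambda p: p[1])  # order by distance travelled
    for (s1, d1), (s2, d2) in zip(pts, pts[1:]):
        if d1 <= distance_cm <= d2:
            frac = (distance_cm - d1) / (d2 - d1)
            log_size = math.log10(s1) + frac * (math.log10(s2) - math.log10(s1))
            return round(10 ** log_size)
    raise ValueError("distance outside the ladder's range")

# A band halfway between the 1000 bp and 500 bp markers:
print(estimate_size(5.1))  # 707 (the geometric mean of 500 and 1000)
```

Real ladders and gels deviate from strict log-linearity, so this is only a first approximation.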
Packed GC columns: Because the first commercial instruments accepted only packed columns, all initial studies of GC were performed on packed columns. Figure 3.1: GC modes showing interaction between the mobile phase and the stationary phases. Theory has shown. https://www.britannica.com/science/packed-column-chromatography. The internal diameter of a packed column is around 2-4 mm. Scouting of conditions can be achieved on lab-scale chromatography systems or automated robotic systems. (B) Quantitative determination by high-resolution gas chromatography of cannabinoids in the same hashish sample. Liquid stationary phases are used in packed columns. Process-scale chromatography columns must perform with a high degree of efficiency over many processing cycles (i.e., display high stability). This attachment or interaction depends on the polarity of solutes. The 1 mL columns can be used for quick screenings of application feasibility and lab-scale purification on a convenient and easy-to-use pre-packed column. 4. Since launching OPUS® Columns in 2012, Repligen is the recognized expert in packing lab-scale to manufacturing-scale columns today, and the innovation leader in downstream … Column chromatography in chemistry is a chromatography method used to isolate a single chemical compound from a mixture. Supelco's complete line of packed GC columns is configured to fit most commercially available GC instruments. Alibaba.com offers 879 packed column chromatography products.", "score": 43.46348426357364, "rank": 7}, {"document_id": "doc-::chunk-6", "d_text": "Whenever the mobile phase is changed to a more polar solvent or mixture of solvents, it becomes more capable of dispelling solutes from the silica binding sites and all compounds on the TLC plate will move higher up the plate. It is generally stated that 'strong' solvents (elutants) push the analyzed compounds up the plate, whereas 'weak' elutants hardly move them. 
The order of strength or weakness is based on the coating (stationary phase) of the TLC plate. For silica gel coated TLC plates, the elutant strength rises in the following order: Perfluoroalkane (weakest), Hexane, Pentane, Carbon tetrachloride, Benzene/Toluene, Dichloromethane, Diethyl ether, Ethyl acetate, Acetonitrile, Acetone, 2-Propanol/n-Butanol, Water, Methanol, Triethylamine, Acetic acid, Formic acid (strongest). For C18 coated plates the order is reversed. Practically this signifies that if we make use of a mixture of ethyl acetate and hexane as the mobile phase, adding more ethyl acetate yields higher Rf values for all compounds on the TLC plate. Changing the polarity of the mobile phase will generally not result in a reversed order of running of the compounds on the TLC plate. The eluotropic series can be employed as a guide in choosing a mobile phase. Whenever a reversed order of running of the compounds is desired, an apolar stationary phase must be employed, such as C18-functionalized silica.\nAs the chemicals being separated might be colorless, several methods exist to visualize the spots: Often a small quantity of a fluorescent compound, generally manganese-activated zinc silicate, is added to the adsorbent, which allows the visualization of spots under a blacklight (UV254). The adsorbent layer will thus fluoresce light green by itself; however, spots of analyte quench this fluorescence.\nIodine vapors are a common unspecific color reagent.\nParticular color reagents exist into which the TLC plate is dipped or which are sprayed onto the plate: Potassium permanganate - oxidation; iodine and bromine.", "score": 41.928914786793044, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "Chromatography is a separation process involving two phases, stationary and mobile. The mixture to be examined is adsorbed on the stationary phase, and the mobile phase is passed through it; eventually the compounds of the mixture separate based on their rates of adsorption and solubility. 
Both are physical properties. Normally, in column chromatography, a glass tube is filled with adsorbent (alumina or silica gel) up to one third of its length. Then it is saturated with a selective solvent. Sometimes the column is filled with a slurry of adsorbent and solvent. The column should have no gaps; such a column is said to be a 'well-packed column'. In this technique, the less polar compound will be eluted first, because a less polar compound is not adsorbed as strongly on the polar stationary phase. Finally, the more polar compound will come out.\nIn TLC, a glass or plastic plate is coated with a thin layer of solid adsorbent. A small drop of the mixture is spotted near the bottom of the plate. Then the plate is set in a solvent chamber in such a way that only the bottom edge is dipped into the solvent (mobile phase). This liquid gradually rises up the TLC plate. In this approach, separation is measured by the RF value: separated compounds move to different positions, which can be expressed by the retention factor (RF) value. A compound of lower polarity will have a greater RF value than more polar ones. In paper chromatography, the stationary phase is water adsorbed in the paper and the mobile phase is a combination of different organic solvents and water. A drop of organic solvent on a filter paper becomes partitioned between the water and the solvent. Then this paper is dipped into a variety of solvent mixtures and chromatograms are developed.\nAscending and descending, these two kinds of development generally occur. As in TLC, in this method too separation is expressed by the RF value; compounds with a higher RF value have lower polarity and vice versa. Gas chromatography is the most modern of these techniques and is often utilized in analytical chemistry. In this technique, a sample vaporized without decomposition is injected into the column. The sample is transported through the column by the flow of the mobile phase. Here, the mobile phase is an inert carrier gas, e.g., helium or nitrogen. The column is coated with different stationary phases. 
So, essentially, the components of the examined mixture are partitioned between the stationary phase and the mobile gas. Each compound elutes at a different time, which is called its retention time. Compounds eluted at different retention times are then detected by a variety of detectors. Finally, these are recorded and chromatograms are obtained.", "score": 40.68230044272507, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Few methods of chemical analysis are truly specific to a particular analyte. It is often found that the analyte of interest must be separated from the myriad of individual compounds that may be present in a sample. As well as providing the analytical scientist with methods of separation, chromatographic techniques can also provide methods of analysis.\nChromatography involves a sample (or sample extract) being dissolved in a mobile phase (which may be a gas, a liquid or a supercritical fluid). The mobile phase is then forced through an immobile, immiscible stationary phase. The phases are chosen such that components of the sample have differing solubilities in each phase. A component which is quite soluble in the stationary phase will take longer to travel through it than a component which is not very soluble in the stationary phase but very soluble in the mobile phase. As a result of these differences in mobilities, sample components will become separated from each other as they travel through the stationary phase.\nTechniques such as H.P.L.C. (High Performance Liquid Chromatography) and G.C. (Gas Chromatography) use columns - narrow tubes packed with stationary phase, through which the mobile phase is forced. The sample is transported through the column by continuous addition of mobile phase. This process is called elution. The average rate at which an analyte moves through the column is determined by the time it spends in the mobile phase.\nThe distribution of analytes between phases can often be described quite simply. 
An analyte is in equilibrium between the two phases:\nA (mobile) ⇌ A (stationary)\nThe equilibrium constant, K, is termed the partition coefficient, defined as the molar concentration of analyte in the stationary phase divided by the molar concentration of the analyte in the mobile phase:\nK = Cs / Cm\nThe time between sample injection and an analyte peak reaching a detector at the end of the column is termed the retention time (tR). Each analyte in a sample will have a different retention time. The time taken for the mobile phase to pass through the column is called tM.\nA term called the retention factor, k', is often used to describe the migration rate of an analyte on a column. You may also find it called the capacity factor. The retention factor for analyte A is defined as:\nk'A = (tR - tM) / tM\ntR and tM are easily obtained from a chromatogram. When an analyte's retention factor is less than one, elution is so fast that accurate determination of the retention time is very difficult.", "score": 37.882095713029244, "rank": 10}, {"document_id": "doc-::chunk-13", "d_text": "In GC analysis, a known volume of gaseous or liquid analyte is injected at the 'entrance' (head) of the column, generally employing a microsyringe (or solid phase micro-extraction fibers, or a gas source switching system). As the carrier gas sweeps the analyte molecules through the column, this motion is inhibited by the adsorption of the analyte molecules either to the column walls or to packing materials in the column. The rate at which the molecules progress along the column depends on the strength of adsorption, which in turn depends on the kind of molecule and on the stationary phase materials. Since each kind of molecule has a different rate of progression, the different components of the analyte mixture are separated as they progress along the column and reach the end of the column at different times (the retention time). 
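The retention factor defined above is straightforward to compute from the two chromatogram times; the times in the sketch below are hypothetical:

```python
def retention_factor(t_r, t_m):
    """k' = (tR - tM) / tM: time spent in the stationary phase relative to
    the time spent in the mobile phase."""
    if t_m <= 0 or t_r < t_m:
        raise ValueError("need tR >= tM > 0")
    return (t_r - t_m) / t_m

# Hypothetical chromatogram: analyte peak at 4.5 min, unretained peak at 1.5 min.
print(retention_factor(4.5, 1.5))  # 2.0
```

A result below 1 corresponds to the "elution too fast" case noted in the text, while very large values mean impractically long retention times.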
A detector is employed to monitor the outlet stream from the column; therefore, the time at which each component reaches the outlet and the amount of that component can be determined. In general, substances are identified (qualitatively) by the order in which they emerge (elute) from the column and by the retention time of the analyte in the column.\nFig: Schematic diagram of a gas chromatograph\n=> Components of a gas chromatograph:\nThe choice of carrier gas (mobile phase) is significant, with hydrogen being the most efficient and providing the best separation. However, helium has a larger range of flow rates that are comparable to hydrogen in effectiveness, with the added benefit that helium is non-flammable and works with a greater number of detectors. Thus, helium is the most common carrier gas utilized.\nThe most common detectors are the flame ionization detector (FID) and the thermal conductivity detector (TCD). Both are sensitive to a broad range of components, and both work over a broad range of concentrations. While TCDs are essentially universal and can be employed to detect any component other than the carrier gas (as long as their thermal conductivities differ from that of the carrier gas at detector temperature), FIDs are sensitive mainly to hydrocarbons, and are more sensitive to them than the TCD. However, an FID cannot detect water. Both detectors are also quite robust.
In reversed phase chromatography, step changes in the mobile phase modifier concentration (e.g., acetonitrile) are employed to elute or desorb the proteins.\nA schematic illustrating the operation of a chromatographic system in displacement mode is shown in FIG. 3. The column is initially equilibrated with a buffer in which most of the components to be separated have a relatively high affinity for the stationary phase. Following the equilibration step, a feed mixture containing the components to be separated is introduced into the column and is then followed by a constant infusion of the displacer solution. A displacer is selected such that it has a higher affinity for the stationary phase than any of the feed components. As a result, the displacer can effectively drive the feed components off the column ahead of its front. Under appropriate conditions, the displacer induces the feed components to develop into adjacent “squarewave” zones of highly concentrated pure material. The displacer emerges from the column following the zones of purified components. After the breakthrough of the displacer with the column effluent, the column is regenerated and is ready for another cycle.\nAn important distinction between displacement chromatography and elution chromatography is that in elution chromatography, desorbents, including for reversed phase chromatography, organic mobile phase modifiers such as acetonitrile, move through the feed zones, while in displacement chromatography, the displacer front always remains behind the adjacent feed zones in the displacement train. This distinction is important because relatively large separation factors are generally required to give satisfactory resolution in elution chromatography, while displacement chromatography can potentially purify components from mixtures having low separation factors. The key operational feature which distinguishes displacement chromatography from elution chromatography is the use of a displacer molecule. 
In elution chromatography, the eluant usually has a lower affinity for the stationary phase than do any of the components in the mixture to be separated, whereas, in displacement chromatography, the eluant, which is the displacer, has a higher affinity.", "score": 35.04963667158525, "rank": 12}, {"document_id": "doc-::chunk-11", "d_text": "The speed at which any component of a mixture travels down the column in elution mode based on numerous factors. However for two substances to travel at different speeds, and thus be resolved, there should be substantial differences in several interactions between the biomolecules and the chromatography matrix. The operating parameters are adjusted to maximize the effect of this difference. In most of the cases, baseline separation of the peaks can be accomplished only by gradient elution and low column loadings. Therefore, the two drawbacks to elution mode chromatography, particularly at the preparative scale, are operational complexity, because of gradient solvent pumping, and low throughput, because of low column loadings. Displacement chromatography has benefits over elution chromatography in that components are resolved to consecutive zones of pure substances instead of 'peaks'. As the method takes benefit of the nonlinearity of the isotherms, a larger column feed can be separated on a given column by the purified components recovered at significantly higher concentrations. Historically, displacement chromatography was applied to preparative separations of amino acids and rare earth elements and has as well been investigated for the isotope separation.\nTechniques by physical state of mobile phase:\nGas chromatography (GC):\nGas chromatography (GC) is a general kind of chromatography employed in analytical chemistry for separating and analyzing compounds which can be vaporized devoid of decomposition. 
Typical uses of GC include testing the purity of a specific substance, or separating the various components of a mixture (the relative amounts of these components can also be determined). In some situations, GC may assist in identifying a compound. In preparative chromatography, GC can be employed to prepare pure compounds from a mixture.\nIn gas chromatography, the mobile phase (or moving phase) is a carrier gas, generally an inert gas like helium or an unreactive gas like nitrogen. The stationary phase is a microscopic layer of liquid or polymer on an inert solid support, within a piece of glass or metal tubing called a column (an homage to the fractionating column employed in distillation). The instrument used to carry out gas chromatography is known as a gas chromatograph (or aerograph, gas separator).\nThe gaseous compounds being examined interact with the walls of the column, which are coated with various stationary phases. This causes each compound to elute at a different time, called the retention time of the compound.", "score": 34.197689062970674, "rank": 13}, {"document_id": "doc-::chunk-4", "d_text": "And this number of plates is proportional to the square of the quotient of the distance of retention, namely the distance traversed by the substance within the column, by the width at mid-height of the peak of the curve representing the concentration of the substance as a function of its position within the column. This \"height equivalent to a theoretical plate\" corresponding to a substance within a given phase depends to a considerable extent on the thickness of said phase as measured in the direction at right angles to the flow of said phase. 
As said thickness is greater, so the concentration gradients fall to zero at a lower rate across the thickness in respect of an equal rate of flow; in other words, as the variations in concentration are greater, so the dispersion of the concentration of the substance is more appreciable at the outlet of the column and so the height equivalent to a theoretical plate increases. If it is desired to have a high degree of selectivity, it is thus necessary either to reduce to zero the concentration gradient which is transverse to the flow, that is to say to have a very low rate of circulation with respect to the velocities of equalization of the concentration within the thickness of the fluid layer, or to have thicknesses of fluid layers which are small and of constant value. Moreover, if it is desired to obtain increased productivity, it is essential to ensure a substantial throughput or in other words an appreciable rate of circulation of the phases.\nThe present invention is directed to a method of rapid chromatographic separation by exchange of substances between two fluid phases circulating in countercurrent flow, at least one phase being liquid, the separation of the substances between said two phases being distinguished by high selectivity in conjunction with high productivity, wherein said method essentially consists in injecting the substances to be separated into one of the phases, in causing the two phases to circulate on each side of a porous body having a thickness within the range of 1 to 200 microns while limiting the thickness of at least one liquid phase to a constant value within the range of 0.1 to 100 microns and in collecting after circulation the substances which are separated in each of the two phases.\nThe variation of concentration in one phase in the direction at right angles to the flow of said phase of one of the substances to be separated is limited to a pre-established value calculated as a function of the selectivity to be obtained, the
H.E.T.P.", "score": 33.51712156002291, "rank": 14}, {"document_id": "doc-::chunk-1", "d_text": "The LOD and LOQ values for RIT, OMB, and PAR were obtained to be 0.02, 0.019, and 0.02 µg/ml and 0.07, 0.06, and 0.07 µg/ml, respectively. The method also exhibits good robustness for different chromatographic conditions like wavelength, flow rate, mobile phase, and injection volume.\nConclusion: The method was successfully employed for the quantification of RIT, OMB, and PAR in the quality control of in-house developed tablets, and can be applied for industrial use.", "score": 33.30677542991883, "rank": 15}, {"document_id": "doc-::chunk-4", "d_text": "Chromatograph the System suitability solution and the Standard preparation, and record the peak responses as directed for Procedure: the relative retention times are 0.6 for inamrinone related compound C and 1.0 for inamrinone; the resolution, R, between the inamrinone related compound C and inamrinone peaks is not less than 3; and the relative standard deviation for replicate injections of the Standard preparation is not more than 2.0%.\nProcedure Separately inject equal volumes (about 20 µL) of the Standard preparation and the Assay preparation into the chromatograph, record the chromatograms, and measure the responses for the major peaks. 
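The plate-count relation described earlier (a number of plates proportional to the square of the retention distance divided by the peak width at half height) is commonly written as N = 5.54 (tR / w½)², with H.E.T.P. = L / N. A minimal sketch, using the standard half-height constant 5.54 and illustrative numbers:

```python
# Plate count and height equivalent to a theoretical plate (HETP)
# from a peak's retention time and its width at half height.

def plate_count(t_r: float, w_half: float) -> float:
    """N = 5.54 * (tR / w_half)^2; t_r and w_half in the same time units."""
    return 5.54 * (t_r / w_half) ** 2

def hetp(column_length_mm: float, n_plates: float) -> float:
    """Height equivalent to a theoretical plate, in mm."""
    return column_length_mm / n_plates

N = plate_count(t_r=6.0, w_half=0.15)          # -> 8864 plates
H = hetp(column_length_mm=150.0, n_plates=N)   # mm per plate
print(int(N), round(H, 4))
```

A narrower peak at the same retention time gives a larger N and a smaller HETP, which is the sense in which thin, uniform fluid layers improve selectivity in the passage above.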
Calculate the quantity, in mg, of amrinone (C10H9N3O) in each mL of the Injection taken by the formula:\n(0.1C / V)(rU / rS), in which C is the concentration, in µg per mL, of USP Inamrinone RS in the Standard preparation; V is the volume, in mL, of Injection taken; and rU and rS are the peak responses obtained from the Assay preparation and the Standard preparation, respectively.\nAuxiliary Information Please check for your question in the FAQs before contacting USP.\nChromatographic Column\nUSP32NF27 Page 2622\nChromatographic columns text is not derived from, and not part of, USP 32 or NF 27.", "score": 32.96880299134887, "rank": 16}, {"document_id": "doc-::chunk-2", "d_text": "Gas-liquid chromatography gas chromatography in which the substances to be separated are moved by an inert gas along a tube filled with a finely divided inert solid coated with a nonvolatile oil; each component migrates at a rate determined by its solubility in oil and its vapor pressure.\nGel-filtration chromatography (gel-permeation chromatography) exclusion chromatography.\nIon exchange chromatography that utilizing ion exchange resins, to which are coupled either cations or anions that will exchange with other cations or anions in the material passed through their meshwork.\nMolecular sieve chromatography exclusion chromatography.\nPaper chromatography a form of chromatography in which a sheet of blotting paper, usually filter paper, is substituted for the adsorption column. After separation of the components as a consequence of their differential migratory velocities, they are stained to make the chromatogram visible. 
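The USP assay formula quoted above, (0.1C / V)(rU / rS), can be evaluated directly once the standard concentration, injection volume, and peak-response ratio are known. A minimal sketch (the input values are illustrative, not from the monograph):

```python
# USP-style assay calculation: amount (mg/mL) = (0.1 * C / V) * (rU / rS),
# where C is the standard concentration in ug/mL, V is the volume of
# Injection taken in mL, and rU/rS is the sample/standard peak-response ratio.

def assay_mg_per_ml(c_ug_per_ml: float, v_ml: float,
                    r_sample: float, r_standard: float) -> float:
    return (0.1 * c_ug_per_ml / v_ml) * (r_sample / r_standard)

# e.g. 50 ug/mL standard, 1 mL of Injection taken, response ratio 0.98
print(round(assay_mg_per_ml(50.0, 1.0, 9800.0, 10000.0), 3))  # -> 4.9
```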
In the clinical laboratory, paper chromatography is employed to detect and identify sugars and amino acids.\nPartition chromatography a process of separation of solutes utilizing the partition of the solutes between two liquid phases, namely the original solvent and the film of solvent on the adsorption column.\nThin-layer chromatography that in which the stationary phase is a thin layer of an adsorbent such as silica gel coated on a flat plate. It is otherwise similar to paper chromatography.\nThe most widely used chromatographic techniques include Gel Filtration Chromatography, Ion Exchange Chromatography, Hydrophobic Interaction Chromatography, Affinity Chromatography, High performance (high pressure) liquid chromatography (HPLC) and Preparative Chromatography.\nChromatography may be classified according to its aim, technical details, state of mobile phase or other parameters. The aim of the chromatographic procedure can either be a preparative or an analytical chromatography.\nPreparative chromatography is used for separation of individual substance, so that the substance can be further used for some purpose (e.g. for use in pharmaceutical preparation). For example, purification of a reaction product from the reaction mixture in order to use it in pharmaceuticals. 
The aim of analytical chromatography is to detect the presence (qualitative analysis) and amount (quantitative analysis) of a certain component in a mixture.\nChromatographic separation methods are used for commercial scale production of clinical supply molecules through specialized SFC or SMB processes.", "score": 32.64620028196124, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "- step 1 - selection of the HPLC method and initial system\n- step 2 - selection of initial conditions\n- step 3 - selectivity optimization\n- step 4 - system optimization\n- step 5 - method validation.\n- keep it simple\n- try the most common columns and stationary phases first\n- thoroughly investigate binary mobile phases before going on to ternary\n- think of the factors that are likely to be significant in achieving the desired resolution.\nMobile phase composition, for example, is the most powerful way of optimizing selectivity whereas temperature has a minor effect and would only achieve small selectivity changes. pH will only significantly affect the retention of weak acids and bases. A flow diagram of an HPLC system is illustrated in Figure 1.\nTypes of chromatography. Reverse phase is the choice for the majority of samples, but if acidic or basic analytes are present then reverse phase ion suppression (for weak acids or bases) or reverse phase ion pairing (for strong acids or bases) should be used. The stationary phase should be C18 bonded. For low/medium polarity analytes, normal phase HPLC is a potential candidate, particularly if the separation of isomers is required. Cyano-bonded phases are easier to work with than plain silica for normal phase separations. For inorganic anion/cation analysis, ion exchange chromatography is best. Size exclusion chromatography would normally be considered for analysing high molecular weight compounds (>2000).\nColumn dimensions. 
For most samples (unless they are very complex), short columns (10–15 cm) are recommended to reduce method development time. Such columns afford shorter retention and equilibration times. A flow rate of 1-1.5 mL/min should be used initially. Packing particle size should be 3 or 5 μm.\nDetectors. Consideration must be given to the following:\n- Do the analytes have chromophores to enable UV detection?\n- Is more selective/sensitive detection required (Table I)?\n- What detection limits are necessary?\n- Will the sample require chemical derivatization to enhance detectability and/or improve the chromatography?\nFluorescence or electrochemical detectors should be used for trace analysis. For preparative HPLC, refractive index is preferred because it can handle high concentrations without overloading the detector.\nUV wavelength.", "score": 32.5435334758333, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "Case study 1: enrichment\nA client API contained two process impurities—one identified and one unknown. Both were visible by UV detection at 260 nm,\na wavelength at which the API absorbs light less strongly. Thus, the relative absorbance chromatogram observed in the straightforward\ngradient RP-HPLC method (Method A, see Figure 1) was known to overrepresent the true abundance of both impurities. The unknown impurity had a relative absorbance of 0.5%\nand a true abundance estimated at less than 0.2%. Preparative isolation by scaling up Method A was rejected due to time, solvent\nconsumption, and the excessive fraction volumes that would be accumulated in the isolation of this low-abundance peak. It\nwas hoped that, by using SFC methods, the target impurity could be recovered more expediently in the milligram quantities\ndesired for a complete or partial structure elucidation by 2D NMR and MSn methods.\nFigure 1: Reversed-phase high-performance liquid chromatography (UV 260 nm, Method A) of the API showing known and unknown\nimpurities. 
The unknown impurity (0.5% by relative absorbance) is observed at 9.7 min.\nUnder preparative SFC conditions with detection at 260 nm, the impurity peak signal was difficult to identify unambiguously.\nA preparative SFC method (Method B) was developed quickly and used to process several grams of API, with fractions collected\nbefore, during, and after the main peak eluted. These fractions were concentrated by rotary evaporation and analyzed using\nthe RP-HPLC method for the presence of the desired peak. The desired peak was captured and enriched in a fraction collected\nimmediately before the main peak, indicating that the chosen SFC method adequately resolved the peak from its neighbor. To\naccumulate the target peak in quantity sufficient for structural work, 50 g of the API were injected during a period of 7\nh of automated stacked injection chromatography (see Figure 2) to produce a highly enriched fraction with a total mass of 240 mg (see Figure 3).\nFigure 2: Preparative supercritical fluid chromatography (UV 260, Method B) chromatogram, showing the fractions collected\nto enrich the target impurity. The target elutes in fraction F1.", "score": 31.584005917543337, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "For the first time in India, after a complete year of in-house R&D, we, M/s Laby, have developed a Paper Chromatography Kit. Hence we are the sole manufacturer of the Model-PC-10 Paper Chromatography Kit in the world.\nWe have also applied for its patent.\nIt is required in Colleges, Medical Colleges, and Research Institutes for detection of RF Value.\nPaper Chromatography Cabinet: It is made of single piece bakelite moulding. The inner size of the cabinet is 6 x 8 x 9” with a front sliding glass door. The lid of the cabinet is also made of bakelite.\nStainless Steel Solvent Pot: It is made of 316 Quality S.S. It has a volume capacity of 150 ml. 
It is required to hold the solvent mixture.\nStainless Steel Hanger: It is a stainless Steel rod of size 6” and dia 2mm. It is used as hanger of Chromatography paper. It fits inside the grooves of the cabinet.\nChromatography Paper “1-Chro”: It is the world standard Chromatography paper. A smooth surface, 0.18mm thick with linear flow rate (water) of 130 mm/30 min. Good resolution for general analytical Separation and having following special features:\n* Simultaneous development of multiple samples on the same sheet under identical\n* Sequential development of the same sample with solvent or different concentrations of the\n* Suitability for two-dimensional chromatography (change in direction of the solvent front) with\npossible improved resolution.\nDrying Stand: One stand is supplied to accommodate processed (wet)Chromatography paper and to put it in oven to dry the same.\nGlass Sprayer with Rubber Balloon: The sprayer is made of Borosilicate Glass, specially\ndesigned for spraying the indicators on Chromatography Paper. A rubber balloon is connected\nto it. Glass Syringe: Glass syringe capacity 20ml. is provided to draw the solvent from S.S. Pot after practical is over.\nTLC Capillary: Pkt. of 25 high quality fine capillaries are supplied with cabinet.\nPrice in Rupee\nCompany have ISO:9001-2001, ISO:13485-20013, GMP, CE Certificates\nThe Manufacturer of Lab. Centrifuge Machines and Lab. Equipments.\nCopyright 2016 All Rights are Reserved", "score": 31.45648203378065, "rank": 20}, {"document_id": "doc-::chunk-3", "d_text": "The selectivity factor, a, can also be manipulated to improve separations. When a is close to unity, optimising k' and increasing N is not sufficient to give good separation in a reasonable time. 
In these cases, k' is optimised first, and then a is increased by one of the following procedures:\nYou should now be familiar with the terms used in chromatography, how species become separated from one another, and how various conditions can be manipulated to obtain well-resolved chromatograms with a minimum elution time.", "score": 31.280756771605123, "rank": 21}, {"document_id": "doc-::chunk-2", "d_text": "I) The retention time is the characteristic time it takes for a specific analyte to pass via the system (that is, from the column inlet to the detector) under set conditions. The sample is the matter examined in chromatography. It might comprise of a single component or it might be a mixture of components. Whenever the sample is treated in the course of an analysis, the phase or the phases having the analyte of interest is/are termed to as the sample while everything out of interest separated from the sample before or in the course of the analysis is termed to as waste.\nJ) The solute refers to the sample components in the partition chromatography.\nK) The solvent refers to any substance able of solubilizing the other substance, and particularly the liquid mobile phase in LC.\nL) The stationary phase is the substance that is fixed in place for the chromatography process. Illustration comprises the silica layer in thin layer chromatography.\nTechniques by chromatographic bed shape:\nPlanar chromatography is a separation method in which the stationary phase is present on the plane. The plane can be a paper, or the paper might be impregnated through a substance as the stationary bed (that is, paper chromatography). It could as well be a layer of solid particles spread on a support like a glass plate (that is, thin layer chromatography). Different components of the sample mixture migrate at various rates according to how strongly they interact by the stationary phase as compared to the mobile phase. 
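The interplay of k', the selectivity factor a, and the plate number N that this passage describes is captured by the standard Purnell resolution equation, Rs = (sqrt(N)/4) * ((a - 1)/a) * (k'/(1 + k')). A sketch with illustrative numbers, showing why an a close to unity limits resolution even when N and k' are favorable:

```python
import math

# Purnell resolution equation: how plate count N, selectivity factor a
# (alpha), and the retention factor k' of the later-eluting peak combine.

def resolution(n_plates: float, alpha: float, k2: float) -> float:
    return (math.sqrt(n_plates) / 4.0) * ((alpha - 1.0) / alpha) * (k2 / (1.0 + k2))

# Same N and k', different selectivity:
print(round(resolution(10000, 1.01, 3.0), 2))  # -> 0.19 (a near unity: poor)
print(round(resolution(10000, 1.10, 3.0), 2))  # -> 1.7  (modest a increase)
```

This is why the text recommends optimising k' first and then increasing a: a small gain in a improves Rs far more than further increases in N or k' can.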
The specific Retention factor (Rf) of each chemical can be employed to help in the identification of an unknown substance.\na) Paper Chromatography:\nPaper chromatography is a method employed for separating and identifying mixtures which are either colored or can be colored. The secondary or primary colors in ink can easily be separated by this method. This process is a powerful teaching tool but has largely been superseded by thin layer chromatography. Complex mixtures of similar compounds, like amino acids, can be separated by employing a two-way paper chromatograph, also termed two-dimensional (2-D) chromatography. In this process, two solvents are employed and the paper is rotated by about 90° in between.\nThe retention factor (Rƒ) is defined as the ratio of the distance traveled by the substance to the distance traveled by the solvent. Rƒ values are generally reported as a fraction to two decimal places.", "score": 31.225231668751537, "rank": 22}, {"document_id": "doc-::chunk-1", "d_text": "For the greatest sensitivity λmax should be used, which detects all sample components that contain chromophores. UV wavelengths below 200 nm should be avoided because detector noise increases in this region. Higher wavelengths give greater selectivity.\nFluorescence wavelength. The excitation wavelength locates the excitation maximum; that is, the wavelength that gives the maximum emission intensity. The excitation is set to the maximum value, then the emission is scanned to locate the emission maximum. Selection of the initial system could, therefore, be based on assessment of the nature of the sample and analytes together with literature data, experience, expert system software and empirical approaches.\nGradient HPLC. With samples containing a large number of analytes (>20–30) or with a wide range of analyte retentivities, gradient elution will be necessary to avoid excessive retention.\nDetermination of initial conditions. 
The recommended method involves performing two gradient runs differing only in the run time. A binary system based on either acetonitrile/water (or aqueous buffer) or methanol/water (or aqueous buffer) should be used.\nSelectivity optimization in gradient HPLC. Initially, gradient conditions should be optimized using a binary system based on either acetonitrile/water (or aqueous buffer) or methanol/water (or aqueous buffer). If there is a serious lack of selectivity, a different organic modifier should be considered.\nStep 4 - system parameter optimization. This is used to find the desired balance between resolution and analysis time after satisfactory selectivity has been achieved. The parameters involved include column dimensions, column-packing particle size and flow rate. These parameters may be changed without affecting capacity factors or selectivity.\nMethod development and validation can be simultaneous, but they are two different processes, both downstream of method selection. Analytical methods used in quality control should ensure an acceptable degree of confidence that results of the analyses of raw materials, excipients, intermediates, bulk products or finished products are viable. Before a test procedure is validated, the criteria to be used must be determined.", "score": 30.716521273800872, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "A New Reverse Phase High Performance Liquid Chromatographic Method For Analysis Of Rofecoxib In Tablets\nA simple, selective, rapid, precise and economical reverse phase HPLC method has been developed for the determination of rofecoxib in tablets. The analyte is resolved by using a mobile phase (methanol and water in the ratio 50:50) at a flow rate of 1 ml/min on an isocratic HPLC system (Shimadzu) consisting of an LC 10AT liquid pump, an SPD 10A UV-Visible detector, and an ODS C-18 RP column (4.6 mm I.D. × 25 cm) at a wavelength of 230 nm. The linear dynamic range for rofecoxib was 2-40 μg/ml by this method. 
Paracetamol was used as an internal standard.", "score": 30.703543211268833, "rank": 24}, {"document_id": "doc-::chunk-9", "d_text": "3 Rf is a radius of liquid column 50 between plate 36 and mesh plate 27, Rp is a radius of plate 36 and L is a distance between the two plates.\nThe capacitance existing between the plate 36 and mesh plate 27 is dependent on the different variables present, as follows:\nε0=dielectric coefficient of a vacuum\nεair=dielectric coefficient of the air between the two plates 27, 36\nεliquid=dielectric coefficient of the liquid between the two plates\nRp=radius of the plate 36\nRf=radius of the liquid column 50 between the two plates 27, 36\nL=distance between the two plates 27, 36.\nThe dielectric coefficient of the liquid to be nebulized is much higher than the dielectric coefficient of the air and enables the amount of liquid or the radius of the liquid column to be measured accurately between the two plates.\nIn order for the capacitive measurement to function properly, the distance between plate 36 and mesh plate 27 has to be adjusted to a suitable distance. If the distance between the two plates is too great, the capacitance values between the plates become too low to obtain an accurate measurement result. If the distance between the plates is too little, the operation of the nebulizer apparatus may be adversely affected, as noted below.\nReliable results can be obtained when capacitance values vary over a range of a few pF and this will be obtained with radius Rp of 2.5 mm for plate 36 and a distance L of 0.5 mm between the plate 36 and mesh plate 27.\nThe area of the plate 36 also has an effect on sensitivity of the capacitive measurement. The plate 36 can have different shapes. While a rounded or a square shape for plate 36 is easier to analyze, other forms such as triangular, rectangular, star, or other multi-sided configurations may be used. A round plate is discussed below for explanatory purposes. 
If the radius of the plate 36 is increased, the overlap area of the electrodes becomes larger and higher values of capacitance will be achieved. The radius of the plate 36 can also be made smaller, but the distance between the plate 36 and mesh plate 27 must be decreased at the same time to maintain a certain level of capacitance.\nFIG.", "score": 30.48297668249202, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "Measuring levels of liquid and bulk solids is commonplace and can be done for control or inventory purposes. Numerous technologies are available for measuring liquid levels including magnetic float, magnetostrictive, hydrostatic, radar, TDR and electromechanical devices. However, an overlooked technology that can be used for measuring both liquid and bulk solids level exists and is often the least expensive approach, for bulk solids, and that is the RF admittance level transmitter.\nThe RF admittance level transmitter continuously measures the changing level of liquids and bulk solids by measuring the effect of the target materials dielectric properties on the total capacitance of the mass of material in the vessel at any given moment. Every material has a dielectric property or electric Permittivity. From Wikipedia: “In electromagnetism, absolute permittivity is the measure of the resistance that is encountered when forming an electric field in a medium. In other words, permittivity is a measure of how an electric field affects, and is affected by, a dielectric medium. The permittivity of a medium describes how much electric field (more correctly, flux) is ‘generated’ per unit charge in that medium. More electric flux exists in a medium with a low permittivity (per unit charge) because of polarization effects. Permittivity is directly related to electric susceptibility, which is a measure of how easily a dielectric polarizes in response to an electric field. 
Thus, permittivity relates to a material’s ability to resist an electric field (while unfortunately the word stem “permit” suggests the inverse quantity).”\nSo the RF admittance level transmitter simply measures the change of electrical capacitance. As the level of the target material in the vessel goes up or down, the capacitance increases and decreases respectively. The use of RF admittance level transmitters is typically limited to materials with a dielectric constant or relative permittivity of 1.8 and higher, though exceptions can exist.\nTo apply to many industries and materials the RF admittance level transmitter is typically available with a wide number of probe styles and types including rod and cable probes. For a better look at available configurations please review the catalog of the EB series RF admittance level transmitter.\nSetup of the RF admittance level transmitter is very easy. Most are 2-wire powered, meaning the low voltage DC power is supplied over the same pair of wires as the transmitter output, usually 4-20mA.
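Both passages rest on the same idea: a liquid region and an air region between two electrodes act as capacitors in parallel, so capacitance tracks how much liquid is present. A sketch of that model for the plate geometry described earlier (liquid column of radius Rf centered between circular plates of radius Rp, gap L). This is an illustrative closed form, not the patent's formula, and the liquid permittivity chosen is an assumption:

```python
import math

# Two-dielectric parallel-plate model:
#   C = (eps0 / L) * (eps_liquid * pi * Rf^2 + eps_air * pi * (Rp^2 - Rf^2))

EPS0 = 8.854e-12     # F/m, vacuum permittivity
EPS_AIR = 1.0006     # relative permittivity of air
EPS_LIQUID = 80.0    # assumption: a water-like liquid

def plate_capacitance_pf(rp_m: float, rf_m: float, gap_m: float) -> float:
    """Capacitance in pF for liquid column radius rf_m inside plate radius rp_m."""
    a_liquid = math.pi * rf_m ** 2
    a_air = math.pi * (rp_m ** 2 - rf_m ** 2)
    c = (EPS0 / gap_m) * (EPS_LIQUID * a_liquid + EPS_AIR * a_air)
    return c * 1e12  # farads -> picofarads

# Geometry from the text: Rp = 2.5 mm, L = 0.5 mm; Rf grows with liquid volume.
for rf_mm in (0.0, 1.0, 2.0):
    print(rf_mm, round(plate_capacitance_pf(2.5e-3, rf_mm * 1e-3, 0.5e-3), 2))
```

With these dimensions the computed values fall in the few-picofarad range the earlier passage cites, and the capacitance rises monotonically with the liquid column radius, which is what makes the measurement usable for level or volume sensing.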
Thus, for Ki values within a factor of two of KmS, errors in the determined parameters were ±35%, while they decreased to ±10% when Ki was more than 3-fold larger than KmS.\n1.3. Isothermal Titration Calorimetry\nThe association constant (Ka), the enthalpy change (ΔH) and the stoichiometry (N), or their average values, were obtained through non-linear regression of the experimental data to a model for one or two independent binding sites implemented in Origin 7.0 (OriginLab Corporation, Northampton, MA, USA, 2002).\nThe concentrations of protein and ligand after injection i are given by:\nwhere P0 and L0 are the initial concentration of protein in the cell and the concentration of ligand in the syringe, respectively, and v and V0 are the injection volume and the cell volume, respectively.\nIf the protein exhibits one ligand binding site (or two binding sites with similar thermodynamic binding parameters), the following equation must be solved for each injection i:\nwhich provides the concentration of free ligand after each injection, [L]i, assuming a given value of the association constant Ka. The concentration of protein-ligand complex formed up to injection i is calculated as follows:\nand the heat associated with each injection is given by:\nwhere ΔH is the enthalpy of ligand binding.\nIf the protein exhibits two different, independent ligand binding sites, the following equation must be solved for each injection i:\nwhich provides the concentration of free ligand after each injection, [L]i, assuming given values of the association constants Ka1 and Ka2.", "score": 30.155062361399235, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "The ICS-2100 system is the first Reagent-Free™ ion chromatography system with electrolytic sample preparation (RFIC-ESP™ system) and eluent generation (RFIC-EG™ system) capabilities designed to perform all types of electrolytically generated isocratic and gradient IC separations using conductivity detection. 
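For the one-site ITC model above, the relation between total and free ligand reduces to a quadratic in [L] that can be solved in closed form at each injection. A minimal sketch of that single step (injection bookkeeping and dilution terms omitted; the Ka and concentrations are illustrative):

```python
import math

# One-site binding: with total concentrations P and L after an injection,
# the free ligand Lf satisfies  L = Lf + P * Ka * Lf / (1 + Ka * Lf),
# i.e. the quadratic  Ka*Lf^2 + (1 + Ka*(P - L))*Lf - L = 0.

def free_ligand(p_total: float, l_total: float, ka: float) -> float:
    """Positive root of the one-site free-ligand quadratic (concentrations in M)."""
    a = ka
    b = 1.0 + ka * (p_total - l_total)
    c = -l_total
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

def complex_conc(p_total: float, l_total: float, ka: float) -> float:
    """Protein-ligand complex concentration implied by the free-ligand solution."""
    lf = free_ligand(p_total, l_total, ka)
    return p_total * ka * lf / (1.0 + ka * lf)

# e.g. 10 uM protein, 5 uM total ligand, Ka = 1e6 M^-1
pl = complex_conc(10e-6, 5e-6, 1e6)
print(round(pl * 1e6, 2))  # -> 4.26 uM of complex
```

The heat of injection i then follows from the increment in complex concentration, q_i proportional to ΔH times the newly formed complex, as the text's final equation indicates.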
Microbore 2 mm columns and standard bore 4 mm columns are fully supported.\nThe ICS-2100 RFIC-ESP system now provides automation for many sample preparation techniques with multiple valving configurations and support for electrolytic sample preparation devices.\nThe ICS-2100 system provides high performance with unequalled ease-of-use when coupled with an AutoSuppression® device, such as the SRS® 300 suppressor. Chromeleon® software provides full control and digital data collection from a PC using simple USB connectivity.\nAutomated eluent generation\nLCD front panel control\nDigital conductivity detection\nVacuum degas (option)\nUSB connectivity, plug-n-play\nDimensions (h × w × d) 56.1 cm × 22.4 cm × 53.3 cm\n(22.1 in × 8.8 in × 21 in.)\nWeight 24.5 kg (54 lb)\nRequirements 100–240 V ac, 50–60 Hz, autoranging\nTemperature 4–40 °C (40–104 °F); cold-room-compatible (4 °C) as long as system power remains on\nOperating Humidity Range 5–95% relative, non-condensing\nLeak Detection Built-in, optical sensor\nThermo Scientific DIONEX ICS-2100 Operators Manual (4 MB)\nThermo Scientific DIONEX ICS-2100 Product Specifications (2 MB)\nThermo Scientific ICS-2100 Installation Instructions (3 MB)\nClient type: Machinery dealer\nNumber of listings: 245\nLast activity: Nov. 7, 2019
Therefore, there are actually two distinct separating forces: temperature and the stationary-phase interactions mentioned previously.\nAs the compounds are separated, they elute from the column and enter a detector. The detector is capable of creating an electronic signal whenever the presence of a compound is detected. The greater the concentration in the sample, the bigger the signal. The signal is then processed by a computer. The time from when the injection is made (time zero) to when elution occurs is referred to as the retention time (RT).\nWhile the instrument runs, the computer generates a graph from the signal. (See figure 1). This graph is called a chromatogram. Each of the peaks in the chromatogram represents the signal created when a compound elutes from the GC column into the detector. The x-axis shows the RT, and the y-axis shows the intensity (abundance) of the signal. In Figure 1, there are several peaks labeled with their RTs. Each peak represents an individual compound that was separated from a sample mixture. The peak at 4.97 minutes is from dodecane, the peak at 6.36 minutes is from biphenyl, the peak at 7.64 minutes is from chlorobiphenyl, and the peak at 9.41 minutes is from hexadecanoic acid methyl ester.\nIf the GC conditions (oven temperature ramp, column type, etc.) are the same, a given compound will always exit (elute) from the column at nearly the same RT. By knowing the RT for a given compound, we can make some assumptions about the identity of the compound. However, compounds that have similar properties often have the same retention times.
Therefore, more information is usually required before an analytical chemist can identify a compound in a sample containing unknown components.\nAs the individual compounds elute from the GC column, they enter the electron ionization (mass spec) detector.
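Matching an observed retention time against a library of known compounds can be sketched as a simple tolerance check. The RT values below come from the example run above; the library structure and tolerance are hypothetical:

```python
# Hypothetical retention-time library (minutes), using the RTs from the
# example chromatogram in the text.
RT_LIBRARY = {
    "dodecane": 4.97,
    "biphenyl": 6.36,
    "chlorobiphenyl": 7.64,
    "hexadecanoic acid methyl ester": 9.41,
}

def match_peak(rt, tolerance=0.05):
    """Return library compounds whose RT falls within +/- tolerance minutes.

    RT matching alone is only presumptive: compounds with similar
    properties can co-elute, which is why MS confirmation is needed.
    """
    return [name for name, ref in RT_LIBRARY.items() if abs(rt - ref) <= tolerance]

print(match_peak(6.37))  # -> ['biphenyl']
```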
The approach has general applications to trace chemical and microfluidic analysis.\nThe AFM is now a common tool for ultramicroscopy and nanotechnology. It has also been demonstrated to provide a number of microfluidic functions necessary for miniaturized chromatography. These include injection of sub-femtoliter samples, fluidic switching, and shear-driven pumping. The AFM probe tip can be used to selectively remove surface layers for subsequent microchemical analysis using infrared and tip-enhanced Raman spectroscopy. With its ability to image individual atoms, the AFM is a remarkably sensitive detector that can be used to detect separated components. These diverse functional components of microfluidic manipulation have been combined in this work to demonstrate AFM mediated chromatography.\nAFM mediated chromatography uses channel-less, shear-driven pumping. This is demonstrated with a thin, aluminum oxide substrate and a non-polar solvent system to separate a mixture of lipophilic dyes. In conventional chromatographic terms, this is analogous to thin-layer chromatography using normal phase alumina substrate with shear-driven pumping provided by the AFM tip-cantilever mechanism. The AFM detection of separated components is accomplished by exploiting the variation in the localized friction of the separated components. The AFM tip-cantilever provides the mechanism for producing shear-induced flows and rapid pumping. Shear-driven chromatography (SDC) is a relatively new concept that overcomes the speed and miniaturization limitations of conventional liquid chromatography. SDC is based on a sliding plate system, consisting of two flat surfaces, one of which has a recessed channel. A fluid flow is produced by axially sliding one plate past another, where the fluid has mechanical shear forces imposed at each point along the channel length. 
The shear-induced flow rates are very reproducible, and do not have pressure or voltage gradient limitations.", "score": 29.132336904573837, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Rapid-gradient HPLC method for measuring drug interactions with immobilized artificial membrane: Comparison with other lipophilicity measures.\nJournal of Pharmaceutical Sciences\n1085 - 1096.\nA fast-gradient high-performance liquid chromatographic (HPLC) method has been suggested to characterize the interactions of drugs with an immobilized artificial membrane (IAM). With a set of standards, the gradient retention times can be converted to Chromatographic Hydrophobicity Index values referring to IAM chromatography (CHIIAM) that approximates an acetonitrile concentration with which the equal distribution of compound can be achieved between the mobile phase and IAM. The CHIIAM values are more suitable for interlaboratory comparison and for high throughput screening of new molecular entities than the log k(IAM) values (isocratic retention factor on IAM). The fast- gradient method has been validated against the isocratic log k(IAM) values using the linear free energy relationship solvation equations based on the data from 48 compounds, The compound set was selected to provide a wide range and the least cross-correlation between the molecular descriptors in the solvation equation: SP = c + r.R-2 + s.pi(2)(H) + a.Sigma alpha(2)(H) + b.Sigma beta(2)(0) + v.V-x (2) where SP is a solute property (e.g., logarithm of partition coefficients, reversed-phase (RP)-HPLC retention parameters, such as log k, log k(w), etc.) 
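For an ideal shear-driven (Couette) flow between a fixed wall and a moving wall, the velocity profile is linear, so the mean velocity is half the plate speed. A minimal sketch of the resulting volumetric flow rate — the parameter values are hypothetical, not from the work described:

```python
def shear_driven_flow(u_plate, gap, width):
    """Volumetric flow rate for ideal shear-driven (Couette) flow.

    Fluid velocity varies linearly from 0 at the fixed wall to u_plate
    at the moving wall, so the mean velocity is u_plate / 2.
    Units: u_plate in m/s, gap and width in m; returns m^3/s.
    """
    return 0.5 * u_plate * gap * width

# e.g. 1 mm/s plate speed, 1 um gap, 25 um wide channel (hypothetical)
q = shear_driven_flow(1e-3, 1e-6, 25e-6)  # -> 1.25e-14 m^3/s
```

Because the flow is set purely by the wall motion, the rate scales linearly with plate speed and has no pressure-drop limit, consistent with the reproducibility claim above.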
and the explanatory variables are solute descriptors as follows: R2 is an excess molar refraction that can be obtained from the measured refractive index of a compound, pi(2)(H) is the solute dipolarity/polarizability, Sigma alpha(2)(H) and Sigma beta(2)(0) are the solute overall or effective hydrogen-bond acidity and basicity, respectively, and V-x is the McGowan characteristic volume (in cm(3)/100 mol) that can be calculated for any solute simply from molecular structure using a table of atomic constants. It was found that the relative constants of the solvation equation were very similar for the CHIIAM and for the log k(IAM). The IAM lipophilicity scale was quite similar to the octanol/water lipophilicity scale for neutral compounds. The effect of charge on the interaction with IAM was studied by varying the mobile phase pH.
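Converting gradient retention times to CHI values with a set of standards amounts to a linear calibration. A minimal sketch, with hypothetical calibration pairs (the real standards and their CHI values are in the paper, not reproduced here):

```python
def chi_calibration(standards):
    """Least-squares line CHI = a*tR + b from calibration standards.

    `standards` is a list of (retention_time, known_CHI) pairs; the
    example values below are hypothetical.
    """
    n = len(standards)
    sx = sum(t for t, _ in standards)
    sy = sum(c for _, c in standards)
    sxx = sum(t * t for t, _ in standards)
    sxy = sum(t * c for t, c in standards)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = chi_calibration([(1.0, 10.0), (2.0, 30.0), (3.0, 50.0)])
chi = a * 2.5 + b  # CHI for an unknown eluting at tR = 2.5 min -> 40.0
```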
This is because the radial or star shaped molecules do not separate and elute through the packed GPC columns in the same manner as do the linear polymers used for the calibration, and, hence, the time of arrival at a UV or refractive index detector is not a good indicator of the molecular weight. A good method to use for a radial or star polymer is to measure the weight average molecular weight by light scattering techniques. The sample is dissolved in a suitable solvent at a concentration less than 1.0 gram of sample per 100 milliliters of solvent and filtered using a syringe and porous membrane filters of less than 0.5 microns pore size directly into the light scattering cell. The light scattering measurements are performed as a function of scattering angle and of polymer concentration using standard procedures. The differential refractive index (DRI) of the sample is measured at the same wavelength and in the same solvent used for the light scattering. The following references are herein incorporated by reference:\n1. Modern Size-Exclusion Liquid Chromatography, W. W. Yau, J. J. Kirkland, D. D. Bly, John Wiley & Sons, New York, N.Y., 1979.\n2. Light Scattering from Polymer Solution, M. B. Huglin, ed., Academic Press, New York, N.Y., 1972.\n3.", "score": 28.79591856578278, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "43 Maple Street\nMilvord, MA 01757\nInternet - http://www.waters.com\nGel Permeation Chromatography (GPC) is a separation method for the determination of molecular weight averages (Mn) and molecular weight distribution (PDI=Mw/Mn). The testing report can include calibration plot, chromatogram, number average molecular weight and PDI of the polymer. GPC-chloroform is using chloroform as a solvent. Gel Permeation Chromatography (GPC) separates molecules on the basis of hydrodynamic size (not Mw). The large molecules move more rapidly through the GPC column, and in this way the mixture can be separated. 
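The extrapolation to zero concentration described above is often summarized, at zero scattering angle, by the Debye relation Kc/R = 1/Mw + 2·A2·c, so the intercept of Kc/R versus c gives 1/Mw. A minimal sketch with hypothetical data (this relation is standard light-scattering practice, not quoted from the patent text):

```python
def mw_from_light_scattering(points):
    """Weight-average molecular weight from Kc/R vs c data.

    Fits Kc/R = 1/Mw + 2*A2*c by least squares; the intercept at c = 0
    is 1/Mw. `points` is a list of (c, Kc/R) pairs (hypothetical values).
    """
    n = len(points)
    sx = sum(c for c, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(c * c for c, _ in points)
    sxy = sum(c * y for c, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return 1.0 / intercept

mw = mw_from_light_scattering([(0.001, 1.2e-5), (0.002, 1.4e-5), (0.003, 1.6e-5)])
```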
The elution volume of a molecule is related to its hydrodynamic volume. To relate the elution volume to the molecular weight of a polymer, a calibration with molecular weight standards is needed.\nPump: Waters Alliance HPLC System, 2690 Separation Module.\nTwo columns: Agilent, PLGEL 5μm, MIXED-D, 300x7.5 mm.\nWaters 2410 Differential Refractometer (RI)\nWaters 2998 Photodiode Array Detector (PDA)\nSolvent: Chloroform with 0.25% of TEA\nFlow Rate: 1 mL/min\nInjection: 100 μL.", "score": 28.754117506662578, "rank": 34}, {"document_id": "doc-::chunk-12", "d_text": "The comparison of retention times is what provides GC its analytical value.\nThe Gas chromatography is in principle identical to column chromatography (and also other forms of chromatography, like HPLC, TLC), however consists of some notable differences. Firstly, the procedure of separating the compounds in a mixture is taken out between a liquid stationary phase and a gas mobile phase, while in column chromatography the stationary phase is a solid and the mobile phase is liquid. (Therefore the full name of the process is 'Gas-liquid chromatography', referring to the mobile and stationary phases, correspondingly.) Secondly, the column via which the gas phase passes is positioned in an oven where the temperature of the gas can be controlled, while column chromatography (generally) has no such temperature control. 
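Once the calibration relates elution volume to molecular weight, the number and weight averages and the PDI follow from the chromatogram slices. A minimal sketch — the slice data are hypothetical, and the detector signal is assumed proportional to weight fraction:

```python
def molecular_weight_averages(slices):
    """Mn, Mw and PDI (= Mw/Mn) from GPC slice data.

    `slices` is a list of (M_i, h_i) pairs: slice molecular weight (from
    the calibration curve) and detector signal height, taken as
    proportional to the weight fraction. Values below are hypothetical.
    """
    total_h = sum(h for _, h in slices)
    mn = total_h / sum(h / m for m, h in slices)
    mw = sum(h * m for m, h in slices) / total_h
    return mn, mw, mw / mn

mn, mw, pdi = molecular_weight_averages([(10_000, 1.0), (20_000, 2.0), (40_000, 1.0)])
```

For a monodisperse polymer all slices share one M, so Mn = Mw and PDI = 1; any spread in the distribution pushes PDI above unity.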
Thirdly, the concentration of a compound in the gas phase is only a function of the vapor pressure of the gas.\nGas chromatography is also similar to fractional distillation, as both methods separate the components of a mixture mainly based on boiling point (or vapor pressure) differences.\nHowever, fractional distillation is generally employed to separate components of a mixture on a large scale, while GC can be employed on a much smaller scale (that is, micro scale).\nGas chromatography is also sometimes termed vapor-phase chromatography (VPC), or gas-liquid partition chromatography (GLPC). Such alternative names, and also their corresponding abbreviations, are often found in the scientific literature. Strictly speaking, GLPC is the most accurate terminology, and is therefore preferred by most writers.\n=> GC analysis:\nThe gas chromatograph is a chemical analysis instrument for separating the chemicals in a complex sample. A gas chromatograph employs a flow-through narrow tube termed the column, via which various chemical constituents of a sample pass in a gas stream (that is, carrier gas, mobile phase) at various rates based on their different chemical and physical properties and their interaction with a specific column filling, termed the stationary phase. As the chemicals exit the end of the column, they are detected and recognized electronically. The function of the stationary phase in the column is to separate various components, causing each one to exit the column at a different time (that is, retention time).
Other parameters which can be employed to modify the order or time of retention are the carrier gas flow rate, column length and temperature.", "score": 28.119791676458945, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Chromatography was first developed in the first decade of the 20th century, mainly for the separation of plant pigments such as chlorophyll, carotenes and xanthophylls (green, orange, and yellow, respectively). The technique got its name from the property of colourful separation of the pigments in the process. Since the separation of chemical constituents is essential in any type of chemical analysis, chromatography became a vital and irreplaceable tool. Although later on, chromatography took many forms and applications, the same principles of chromatography cover all of them. However, its definition did change to a more generalised version.\nChromatography is a lab technique used to separate the constituents of a mixture dissolved in a mobile phase, using another phase. This process is based on the principle of differential speeds of the constituents in different phases, causing them to separate. Let us move to SEC.\nThe size exclusion chromatography or the molecular sieve chromatography is a method to separate macromolecules (10-5 to 10-3mm) based on their size and shape. Since a correlation between the size and weight can be drawn the method is also used to determine molecular weight.\nThe chromatography column tube is tightly filled with porous beads which are mainly made of agarose, polyacrylamide or dextran polymers. Then the buffer is added in the column and degassed so that there are no air bubbles.\nAfter degassing the column, the sample is loaded at the top of the column tube. Followed by this, the buffer is gradually added from the top. As the buffer flows through the column, it forces the solute out of the pores. 
Now, as the mixture moves down, components of the sample start moving at different rates based on their size.\nThe final step is to collect equal-sized fractions of the buffer (known as the eluate) eluting out of the column. These collected fractions are then tested by spectroscopic techniques to determine the concentration of the macromolecules in them. A plot of the eluted buffer fractions (volume) versus the absorbance, known as a chromatogram, is used to determine the success of the experiment performed.\nThe buffer used should have the pH and salt concentrations optimised for the target molecules to remain active. The working range of pore size is a decisive parameter for the separation of macromolecules.
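A standard way to place an eluted species on the size scale (not spelled out in the text above) is the SEC partition coefficient Kav = (Ve − V0)/(Vt − V0). A minimal sketch, with hypothetical volumes:

```python
def sec_partition_coefficient(ve, v0, vt):
    """SEC partition coefficient Kav = (Ve - V0) / (Vt - V0).

    Ve: elution volume of the analyte, V0: void volume (where fully
    excluded, very large molecules elute), Vt: total column volume
    (where the smallest molecules elute). Kav near 0 means the molecule
    is excluded from the pores; near 1 means it fully permeates them.
    """
    return (ve - v0) / (vt - v0)

k = sec_partition_coefficient(ve=15.0, v0=8.0, vt=22.0)  # hypothetical mL values
```

Plotting Kav against log(molecular weight) for standards gives the calibration curve used to size unknowns.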
The analyte moves down the column by transfer of equilibrated mobile phase from one plate to the next.\nIt is important to remember that the plates do not really exist; they are a figment of the imagination that helps us understand the processes at work in the column. They also serve as a way of measuring column efficiency, either by stating the number of theoretical plates in a column, N (the more plates the better), or by stating the plate height; the Height Equivalent to a Theoretical Plate (the smaller the better).\nIf the length of the column is L, then the HETP is\nHETP = L/N\nThe number of theoretical plates that a real column possesses can be found by examining a chromatographic peak after elution:\nN = 5.54 (tR / w1/2)^2\nwhere w1/2 is the peak width at half-height.\nAs can be seen from this equation, columns behave as if they have different numbers of plates for different solutes in a mixture.
If we consider the various mechanisms which contribute to band broadening, we arrive at the Van Deemter equation for plate height:\nH = A + B/u + C u\nwhere u is the average velocity of the mobile phase.
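Because H = A + B/u + C·u has one term falling and one rising with velocity, there is an optimum velocity where plate height is minimal: setting dH/du = −B/u² + C = 0 gives u_opt = √(B/C) and H_min = A + 2√(BC). A minimal sketch with hypothetical coefficients:

```python
import math

def van_deemter_h(u, a, b, c):
    """Plate height H = A + B/u + C*u (Van Deemter equation)."""
    return a + b / u + c * u

def optimal_velocity(b, c):
    """Velocity minimizing H: dH/du = -B/u^2 + C = 0  =>  u_opt = sqrt(B/C)."""
    return math.sqrt(b / c)

# Hypothetical A, B, C coefficients
u_opt = optimal_velocity(b=4.0, c=1.0)              # -> 2.0
h_min = van_deemter_h(u_opt, a=1.0, b=4.0, c=1.0)   # 1 + 4/2 + 1*2 = 5.0
```

Running faster than u_opt trades efficiency (larger H) for speed, which is often an acceptable compromise in practice.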
Consequently, the scales of preparative chromatography vary substantially.\nPreparative HPLC is used for the isolation and purification of valuable products in the chemical and pharmaceutical industry as well as in biotechnology and biochemistry. Depending on the working area the amount of compound to isolate or purify differs dramatically. It starts in the µg range for isolation of enzymes in biotechnology. At this scale we talk about micro purification. For identification and structure elucidation of unknown compounds in synthesis or natural product chemistry it is necessary to obtain pure compounds in amounts ranging from one to a few milligrams. Larger amounts, in gram quantity, are necessary for standards, reference compounds and compounds for toxicological and pharmacological testing. Industrial scale or production scale preparative HPLC, that is, kg quantities of compound, is often done nowadays for valuable pharmaceutical products.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "Chromatography is the collective term for a set of techniques used to separate mixtures. These techniques include gas chromatography GC, thin layer chromatography TLC, Size exclusion Chromatography SEC, and higher performance liquid chromatography HPLC.\nChromatography involves passing a mix dissolved in mobile phase via stationary phase. The mobile phase is generally a liquid or a gas that transfers the mixture to be separated via a column or horizontal sheet that has a solid stationary phase.\nLiquid chromatography LC is a separation technique in which the mobile phase is a liquid. It can be carried out in either a column or a plane. LC is particularly helpful for the separation of electrons or ions that are dissolved in a solvent. Simple liquid chromatography includes a column using a fritted bottom that holds a stationary phase in equilibrium with a solvent. 
Commonly used stationary phases include solids, ionic groups on a resin, liquids on an inert solid support and porous inert particles. The mixture to be separated is loaded on the surface of the column followed by more solvent. The various components in the mixture pass through the column at different rates due to differences in partitioning behavior between the mobile liquid and stationary phases. Liquid chromatography is more widely used than other methods like gas chromatography since the samples analyzed do not have to be vaporized. Additionally, temperature variations have only a slight impact in liquid chromatography, unlike in other kinds of chromatography.\nHigh Performance Liquid Chromatography HPLC\nPresent-day liquid chromatography that normally utilizes tiny packing particles and a rather higher pressure is referred to as HPLC. It is basically a greatly improved form of column chromatography, frequently used by biochemists to separate amino acids and proteins because of their different behavior in solvents, which depends on each one's electronic charge. Instead of being allowed to trickle through a column under gravity, the solvent is forced through under high pressures up to 400 atmospheres, which makes the process much quicker. Since smaller particles are used, with their dimensions being determined by a particle size analyzer, there's greater surface area for interactions between the stationary phase and the molecules flowing past it. This then allows for much greater separation of the components in the mix.\nThere are many advantages of HPLC. 
For one, it is an automated process that requires only a few minutes to produce results.
It is ideal for the separation of neutral and acidic monosaccharides.\nDescription: Shimadzu has introduced a GPC system designed specifically to provide superior data reliability and ease of use. Gel permeation chromatography is essential in polymer chemistry for measuring the distribution of molecular weights. The Shimadzu GPC consists of an LC-20AD, CTO-20A, SPD-20A (Diode Array Detector), RID-20A (Refractive Index Detector), and LabSolutions GPC Software.\nUse: The technique is often used for the analysis of the molecular weights of polymers dissolved in THF.
The Langmuir isotherm and Freundlich isotherm are helpful in illustrating this equilibrium. Langmuir Isotherm:\n[CS] = (KeqStot[C])/(1 + Keq[C]), where Stot is the total concentration of binding molecules on the beads.\nFreundlich Isotherm: [CS] = Keq[C]^(1/n)\nThe Freundlich isotherm is employed whenever the column can bind numerous different species in the solution to be purified. Since the different species have different binding constants for the beads, there are numerous different Keq's. Thus, the Langmuir isotherm is not a good model for binding in this case.\nA molecule having a high affinity for the chromatography matrix will compete efficiently for binding sites, and therefore displace all molecules having lesser affinities. There are several differences between displacement and elution chromatography. In elution mode, substances generally emerge from a column in narrow, Gaussian peaks. Broad separation of peaks, preferably to baseline, is desired in order to accomplish the maximum purification.
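The Langmuir expression above interpolates between the linear isotherm at low solute concentration and saturation at Stot when all sites are occupied. A minimal sketch implementing it directly:

```python
def langmuir_bound(c_free, keq, s_tot):
    """Bound concentration [CS] from the Langmuir isotherm:

        [CS] = Keq * Stot * [C] / (1 + Keq * [C])

    At low [C] this reduces to the linear isotherm [CS] ~ Keq*Stot*[C];
    at high [C] it saturates at Stot (all binding sites occupied).
    """
    return keq * s_tot * c_free / (1.0 + keq * c_free)
```

At [C] = 1/Keq the column is exactly half-saturated, a convenient check on both the formula and an experimental fit.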
Methods of this type are necessarily discontinuous and have low productivity but very high selectivity.\nIn conventional methods of chromatographic separation, namely methods involving separation of a substance between two phases in which at least one is a liquid or a gas, one of the two phases is stationary and the other is circulating; in these separations, saturation of the stationary phase retained by a solid phase makes it necessary to work in discontinuous operation in order to regenerate the stationary phase periodically and to recover from this latter the substances which have been fixed therein.\nIt is known that, in order to overcome the disadvantage attached to this discontinuous operation, consideration has been given to solutions whereby the phase which was previously stationary is made mobile by causing it to circulate countercurrent to the other phase. These solutions do not usually prove satisfactory since, in the event that the phase which is made non-stationary is a solid phase, compaction phenomena are created by inertia and these give rise to major problems in regard to industrial utilization. A second solution in which the non-stationary phase is also a fluid phase has consisted in making use of a packing column filled with Raschig rings, for example, as in the case of a distillation column. In this case, however, the thickness of the film in which the exchange of substance between the two phases takes place, that is to say the thickness of the so-called \"active\" phase in chromatography, is typically of the order of one or several tens of microns.\nThe selectivity of a method of chromatographic separation, that is to say the capacity of this method for separating a substance, is measured by the \"height equivalent to a theoretical plate\" (H.E.T.P.). 
This height is equal to the height (or to the length) of the exchanger divided by the number of theoretical plates provided.", "score": 25.681739302329763, "rank": 42}, {"document_id": "doc-::chunk-9", "d_text": "As column chromatography involves a constant flow of eluted solution passing through the detector at varying concentrations, the detector plots the concentration of the eluted sample over time. This plot of sample concentration versus time is known as a chromatogram.\nThe final goal of chromatography is to separate the various components of a solution mixture. The resolution indicates the extent of separation between the components of the mixture. The higher the resolution of the chromatogram, the better the degree of separation the column provides. This is a good way of characterizing the column's separation properties for that specific sample. The resolution can be computed from the chromatogram. The separate curves in the diagram represent the elution concentration profiles of the different samples over time, based on their affinity to the column resin. To compute resolution, the retention time and curve width are needed.\nThe time from the beginning of signal detection by the detector to the peak height of the elution concentration profile of each sample is termed the retention time, whereas the width of the concentration profile curve of each sample in the chromatogram, in units of time, is termed the curve width.\nA simplified way of computing chromatogram resolution is to use the plate model. The plate model assumes that the column can be divided into a certain number of sections, or plates, and that the mass balance can be computed for each individual plate. This approach approximates a typical chromatogram curve as a Gaussian distribution curve. The curve width is then taken as four times the standard deviation of the curve, '4σ'.
The retention time is the time from the beginning of signal detection to the time of the peak height of the Gaussian curve. From these variables, the resolution, plate number, and plate height of the column plate model can be computed using the equations:\nRs = 2(tRB - tRA)/(wB + wA)\nwhere tRB is the retention time of solute B,\ntRA is the retention time of solute A,\nwB is the Gaussian curve width of solute B, and\nwA is the Gaussian curve width of solute A.\nPlate Number (N) = (tR)^2/(w/4)^2 = 16(tR/w)^2\nPlate Height (H) = L/N\nHere, L is the length of the column.\n=> Column adsorption equilibrium:\nFor the adsorption column, the column resin (that is, the stationary phase) is composed of micro beads.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "Chromatography (from the Greek chroma 'color' and graphein 'to write') is the collective term for a set of laboratory methods for the separation of mixtures. The mixture is dissolved in a fluid termed the 'mobile phase', which carries it through a structure holding the other material, known as the 'stationary phase'. The different constituents of the mixture travel at various speeds, causing them to separate. The separation is mainly based on differential partitioning between the mobile and stationary phases. Subtle differences in a compound's partition coefficient result in differential retention on the stationary phase and therefore change the separation.\nChromatography may be preparative or analytical. The main purpose of preparative chromatography is to separate the components of a mixture for further use (and is therefore a form of purification). Analytical chromatography is usually done with smaller amounts of material and is for measuring the relative proportions of analytes in a mixture.\nHistory of Chromatography:\nChromatography, literally 'color writing', was first employed by the Russian scientist Michael Tswett in the year 1900.
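The plate-model formulas above can be sketched as follows (a minimal sketch; function names are illustrative, and times and widths must share the same units):

```python
def resolution(t_ra, t_rb, w_a, w_b):
    """Rs = 2(tRB - tRA)/(wB + wA): separation of two Gaussian peaks."""
    return 2.0 * (t_rb - t_ra) / (w_b + w_a)

def plate_number(t_r, w):
    """N = (tR/(w/4))**2, i.e. 16*(tR/w)**2, with w the 4-sigma base width."""
    return (t_r / (w / 4.0)) ** 2

def plate_height(length, n):
    """H = L/N: height equivalent to a theoretical plate."""
    return length / n
```

Two peaks at 10 and 12 minutes, each 1 minute wide at the base, give Rs = 2.0, comfortably above the 1.5 usually taken as baseline resolution.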
He continued to work with chromatography in the first decade of the 20th century, mainly for the separation of plant pigments such as chlorophyll, carotenes and xanthophylls. Since these components have different colors (green, orange and yellow, respectively), they gave the method its name. During the 1930s and 1940s new kinds of chromatography emerged and the method became useful for many separation processes. Archer John Porter Martin and Richard Laurence Millington Synge worked extensively on chromatography during the 1940s and 1950s, which led to the establishment of the principles and fundamental methods of partition chromatography, and their work encouraged the rapid growth of several chromatographic methods: paper chromatography, gas chromatography and what would become known as high performance liquid chromatography. The technology has advanced rapidly since then. Researchers found that the main principles of Tswett's chromatography could be applied in many different ways, resulting in the different varieties of chromatography described below. Continual improvement in technical performance has made possible the separation of increasingly similar molecules.\nBasic chromatography terms:\nA) The analyte is the substance to be separated during chromatography.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-7", "d_text": "The chromatographic behaviour in an RPLC system of a solute eluted with a mobile phase containing a surfactant above the CMC can be explained by considering three phases: stationary phase, bulk solvent, and micellar pseudophase. Figure 1 illustrates the three-phase model. Solutes are separated on the basis of their differential partitioning between bulk solvent and micelles in the mobile phase or surfactant-coated stationary phase.
For water-insoluble species, partitioning can also occur via direct transfer of solutes between the micellar pseudophase and the modified stationary phase (Figure 2).\nThe partitioning equilibria in MLC can be described by three partition coefficients: one between the aqueous solvent and the stationary phase, one between the aqueous solvent and the micelles, and one between the micelles and the stationary phase. The first two coefficients account for the solute's affinity to the stationary phase and to the micelles, respectively, and have opposite effects on solute retention: as the affinity to the stationary phase increases, the retention increases, whereas as the affinity to the micelles increases, the retention is reduced due to the stronger association with micelles.\nThe retention behaviour depends on the interactions established by the solute with the surfactant-modified stationary phase and micelles. Neutral solutes eluted with non-ionic and ionic surfactants and charged solutes eluted with non-ionic surfactants will only be affected by nonpolar, dipole-dipole, and proton donor-acceptor interactions. Besides these interactions, charged solutes will interact electrostatically with ionic surfactants (i.e., with the charged surfactant layer on the stationary phase and the charged outer layer of micelles). In any case, the steric factor can also be important.\nWith ionic surfactants, two situations are possible according to the charges of solute and surfactant: repulsion or attraction (by both surfactant-modified stationary phase and micelles). In the case of electrostatic repulsion, charged solutes cannot be retained by the stationary phase and elute at the dead volume, unless significant hydrophobic interaction with the modified bonded layer exists. In contrast, combined electrostatic attraction and hydrophobic interactions with the modified stationary phase may give rise to strong retention in MLC.
Mixtures of polar and nonpolar solutes can be resolved, provided that an appropriate surfactant is chosen.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "Liquid Chromatography Detectors - LC Detectors Based on Refractive Index Measurement > The Christiansen Effect Detector > Page 27\nFigure 16. The Christiansen Effect Detector\nIn the optical unit there is a pre-focused lamp having an adjustable voltage supply to allow low energy operation when the maximum sensitivity is not required. The condensing lens, aperture, achromat and beam splitting prisms are mounted in a single tube, which permitted easy optical alignment and prevented contamination from dust. The device contains two identical and interchangeable cells. The disadvantage of this detector is that the cells must be changed each time a different mobile phase is chosen in order to match the refractive index of the packing to that of the new mobile phase. The refractive indices of the cell packing can be closely matched to that of the mobile phase by using appropriate solvent mixtures. In most cases solvent mixing can be achieved without affecting the chromatographic resolution significantly (e.g. by replacing a small amount of n-heptane in a mixture with either n-hexane or n-octane depending on whether the refractive index needs to be increased or decreased). However a considerable knowledge of the effect of different solvents on solute retention is necessary to accomplish this procedure successfully. The limitations inherent in this type of detector, combined with the general disadvantages of the RI detector per se, have not made the Christiansen Effect Detector very popular.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-1", "d_text": "The SFC method was four times faster than the reported high performance liquid chromatography (HPLC) separation.
Recent advances in analytical SFC equipment and sensitivity have allowed sulfamethazine to be quantitated to 9 ng/mL (below the legal limit).\nSFC in Medicinal Chemistry Purifications\nThe second plenary lecture of the conference was given by Eric Francotte (Novartis Institutes for Biomedical Research), who reported on the expanding role SFC is playing in medicinal chemistry purifications. Novartis has implemented a worldwide initiative to promote the use of SFC purifications to increase medicinal chemistry productivity. The company is promoting the use of SFC over reversed-phase HPLC whenever possible. This approach shortens the time necessary to purify final compounds and frees up medicinal chemists' time by routing the time-intensive purification process to a specialty group. There is currently no generic achiral SFC stationary phase, thus an efficient screening process must be developed to minimize analysis time while maximizing purification success. Francotte reported on the use of eight column chemistries and a parallel SFC–mass spectrometry (MS) system to efficiently develop analytical SFC methods suitable for purification. In the two years since this initiative began, the purification approach has moved from 90% reversed-phase HPLC in 2010 to 80% SFC in 2012.\nAlthough purification is still the main application for SFC, the use of analytical SFC has been increasing. This was evident by the higher number of talks discussing analytical SFC compared to past conferences. Didier Thiébaut (ESPCI Paris Tech) presented his recent work on exploring two-dimensional SFC to separate complex mixtures. He presented work concerning the impact of various instrument parameters on the number of peaks observed and reported on the 2D SFC separation of a vacuum distillate from coal tar, showing comparable results to 2D gas chromatography (GC). 
Two-dimensional SFC allows for the use of long columns and can be used for heavier (higher molecular weight and more polar) samples; it is not restricted to oil samples.\nClaudio Brunelli presented work from Pfizer's Sandwich, UK, facility where they are working to move SFC from a generic approach to a critical tool for analytical method development.", "score": 25.263629609920603, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "In many of my previous posts I have used the term column volume, typically abbreviated as CV, as a value used to help determine separation quality and loading capacity. However, I recently was asked a question about this topic from a chemist who understands the column volume concept but wanted to better understand its definition and how it is determined.\nIn this post I will explain what a column volume is and how it is determined empirically.\nSome chemists think the internal volume of the or column – without packing material inside is the column volume. While useful in determining scale-up factors, the empty column’s volume is not the CV. The CV of any column is the volume inside of a packed column not occupied by the media. This volume includes both the interstitial volume (volume outside of the particles) and the media’s own internal porosity (pore volume). Combined, the two volumes constitute 70% to 80% of the packed column’s volume. Of course this means that the media only occupies 20% to 30% of the space in the cartridge.\nYou may be wondering why the volume % is expressed as a range. Well, each chromatography packing material has its own pore volume. Because the base media, typically silica, is synthetic, there is always variability from batch to batch, within a batch, and even within a single particle.\nMedia manufacturers always test their product to ensure it conforms to their specifications which is always expressed as a range. 
One of the primary tests performed on silica is a nitrogen BET analysis that measures both surface area and porosity. Here at Biotage, we test every batch of media we purchase to ensure it meets our specifications. The test results for any media batch are an average surface area and an average pore volume. From the surface area and pore volume the average pore diameter is calculated.\nMost flash silica used in normal-phase chromatography has an average surface area of 500 m2/g and a calculated pore diameter of 60Å. This equates to a pore volume of 0.75 mL. How did I arrive at this value? I adjusted the equation below, which calculates pore diameter, to calculate pore volume.", "score": 25.000000000000068, "rank": 48}, {"document_id": "doc-::chunk-1", "d_text": "SMB chromatography is a cost effective and environmentally friendly technology, using up to 90% less solvent than similar chromatographic purification schemes. It combines the advantages of high yields, high purity and accelerated scale-up for clinical to commercial-scale API purification. This chiral chromatography based technology works at the molecular level to separate active and non-active enantiomers for active ingredient production.\nThe technique is a valuable tool for the research biochemist and is readily adaptable to investigations conducted in the clinical laboratory. For example, chromatography is used to detect and identify in body fluids certain sugars and amino acids associated with inborn errors of metabolism.\nAdsorption chromatography that in which the stationary phase is an adsorbent.\nAffinity chromatography that based on a highly specific biologic interaction such as that between antigen and antibody, enzyme and substrate, or receptor and ligand. 
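The pore-volume arithmetic described earlier can be reproduced with the standard cylindrical-pore relation d(Å) = 4 × 10⁴ × V / SA. This relation is an assumption here, since the original equation is not shown, but rearranged for V it matches the quoted figures (500 m²/g and 60 Å giving 0.75 mL/g):

```python
def pore_volume(surface_area_m2_per_g, pore_diameter_angstrom):
    """V (mL/g) from the cylindrical-pore model: d(Angstrom) = 4e4 * V / SA."""
    return pore_diameter_angstrom * surface_area_m2_per_g / 4.0e4

def pore_diameter(surface_area_m2_per_g, pore_volume_ml_per_g):
    """Inverse relation: d(Angstrom) = 4e4 * V / SA."""
    return 4.0e4 * pore_volume_ml_per_g / surface_area_m2_per_g
```

For typical flash silica, pore_volume(500.0, 60.0) returns 0.75, in line with the text.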
Any of these substances, covalently linked to an insoluble support or immobilized in a gel, may serve as the sorbent allowing the interacting substance to be isolated from relatively impure samples; often a 1000-fold purification can be achieved in one step.\nColumn chromatography the technique in which the various solutes of a solution are allowed to travel down a column, the individual components being adsorbed by the stationary phase. The most strongly adsorbed component will remain near the top of the column; the other components will pass to positions farther and farther down the column according to their affinity for the adsorbent. If the individual components are naturally colored, they will form a series of colored bands or zones.\nColumn chromatography has been employed to separate vitamins, steroids, hormones, and alkaloids and to determine the amounts of these substances in samples of body fluids.\nExclusion chromatography that in which the stationary phase is a gel having a closely controlled pore size. Molecules are separated based on molecular size and shape, smaller molecules being temporarily retained in the pores. Gas chromatography a type of automated chromatography in which the mobile phase is an inert gas. Volatile components of the sample are separated in the column and measured by a detector. The method has been applied in the clinical laboratory to separate and quantify steroids, barbiturates, and lipids.", "score": 24.465627805166232, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "The underlying principles that determine chromatographic separations are dynamic behaviors that depend on partitioning and mass transport.
These phenomena are too complex to model directly, and chromatography theory consists of empirical relationships to describe chromatographic columns and the separation of peaks in chromatograms.\nThe retention of an analyte by a column is described by the capacity factor, k', where:\nk' = (tr - tm) / tm\nwhere tr is the time for the analyte to pass through the column, and tm is the time for the mobile phase to pass through the column.\nThe resolution of chromatographic columns is described by the theoretical plate height, H, or the number of theoretical plates, N. These two quantities are related by:\nN = L / H\nwhere L is the length of the column.\nH and N provide useful measures to compare the performance of different columns for a given analyte. Useful expressions are:\nH = L W^2 / (16 tr^2)\nN = 16 (tr / W)^2\nwhere W is the width of the peak at its base.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-1", "d_text": "B) Analytical chromatography is employed to determine the presence and concentration of analyte(s) in the sample.\nC) A bonded phase is a stationary phase which is covalently bonded to the support particles or to the inside wall of the column tubing.\nD) The visual output of the chromatograph is a chromatogram that consists of various peaks corresponding to the various components of the separated mixture.\nFig: Basic Chromatography\nPlotted on the x-axis is the retention time and plotted on the y-axis a signal (for example, acquired via a spectrophotometer, mass spectrometer or a variety of other detectors) corresponding to the response made by the analyte exiting the system.
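The capacity-factor and plate expressions above can be sketched numerically (names are illustrative; all times must share the same units):

```python
def capacity_factor(t_r, t_m):
    """k' = (tr - tm)/tm: retention relative to the mobile-phase transit time."""
    return (t_r - t_m) / t_m

def plate_number(t_r, w):
    """N = 16*(tr/W)**2, with W the peak width at its base."""
    return 16.0 * (t_r / w) ** 2

def plate_height(length, t_r, w):
    """H = L*W**2/(16*tr**2); equivalently H = L/N."""
    return length * w ** 2 / (16.0 * t_r ** 2)
```

Note that the two expressions for H are consistent: dividing L by N = 16(tr/W)² gives exactly L·W²/(16·tr²).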
The signal is proportional to the concentration of the analyte separated.\nA) A chromatograph is the equipment that enables a sophisticated separation, for example a gas chromatographic or liquid chromatographic separation.\nB) Chromatography is the physical process of separation in which the components to be separated are distributed between two phases, one of which is stationary (that is, the stationary phase) whereas the other (that is, the mobile phase) moves in a definite direction.\nC) The eluate is the mobile phase leaving the column.\nD) The eluent is the solvent that carries the analyte.\nE) The eluotropic sequence is a list of solvents ranked according to their eluting power.\nF) An immobilized phase is a stationary phase that is immobilized on the support particles or on the inner wall of the column tubing.\nG) The mobile phase is the phase that moves in a definite direction. This might be a liquid (that is, LC and Capillary Electrochromatography (CEC)), a gas (GC), or a supercritical fluid (that is, supercritical-fluid chromatography, SFC). The mobile phase consists of the sample being separated or analyzed and the solvent that moves the sample through the column. In the case of HPLC, the mobile phase consists of a non-polar solvent such as hexane in normal-phase chromatography, or polar solvents in reverse-phase chromatography, together with the sample being separated.
The mobile phase moves through the chromatography column (that is, the stationary phase), where the sample interacts with the stationary phase and is separated.\nH) Preparative chromatography is employed to purify adequate quantities of a substance for further use, rather than for analysis.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-6", "d_text": "distribution of molecules between the paper and the solvent\n- if the chemical is more attracted to the mobile phase, it moves faster, yet if more attracted to the stationary phase it moves slower\n- chemicals separate\nWhat are the advantages and disadvantages of paper and thin layer chromatography (TLC)?\n- limited use\n- requires only small volumes of solutions\n- good pre-experiment test\nWhat are aqueous solutions and non-aqueous solutions?\n- aqueous - solution in water\n- non-aqueous - solutions with no water\nWhat are the mobile and stationary phases in TLC and how is TLC done?\n- mobile phase - solvent\n- stationary phase - an absorbent solid supported on a glass plate or stiff plastic sheet\n- sample dissolved in solvent\n- applied to stationary phase\n- allowed to dry, then more solution is added if dilute\n- sample analysed\nWhat is a chromatogram?\n1. add substance to solvent\n2. put solvent in a chromatography tank and put a lid over it\n3. after a few minutes put in a prepared paper (marked with a base line and spotted) or TLC\n- all over the paper, but only above the baseline\n4. solvent rises up the paper, leaving a substance that can be analysed\nHow are colourless substances located on paper or TLC?\n1. develop chromatogram by spraying it with a locating agent that reacts with substances to form coloured compounds\n2. use an ultraviolet lamp with a TLC plate that contains fluorescers, so that the spots appear violet in UV light\nHow can chromatograms be interpreted?\n1. comparing spots with those from standard reference material\n2.
retardation factor (Rf)\nRf = distance moved by chemical divided by distance moved by solvent\nWhat are the advantages and disadvantages of gas chromatography (GC)?\n- better than paper or TLC\n- more sensitive\n- can measure amounts of each chemical present\n- lots of high-value equipment\nWhat is the process of GC?\nmobile phase: carrier gas (helium)\nstationary phase: thin film of a liquid on the surface of a powdered solid, packed into a sealed tube column\n- some compounds are carried more slowly than others\nas they have different boiling points or greater attraction to the stationary phase\n- very small samples needed\nHow are the substances separated and detected in GC?", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-1", "d_text": "The physicochemical basis for separation by HIC and RPLC is the hydrophobic effect; proteins are separated on a hydrophobic stationary phase based on differences in hydrophobicity.\nAlthough both HIC and RPLC utilize a hydrophobic stationary phase, the composition of the surface of the hydrophobic or non-polar stationary phase differs. Materials used as a stationary phase in RPLC have what can be considered a “hard” hydrophobic surface, characterized by high interfacial tension between the surface and an aqueous mobile phase. In contrast, the surface of the stationary phase in HIC is composed of a hydrophilic organic layer having weakly non-polar groups attached. This is termed a “soft” surface. In addition, the surface density of hydrophobic groups is generally higher in RPLC systems than in HIC.\nThe different surface characteristics of the two systems mandate the use of different mobile phases so that the interaction between the solute and the stationary phase results in a retention time which falls within the range of practical values for the separation. Thus, the mobile phases employed in HIC are typically aqueous with high salt concentrations; most proteins have a higher retention at higher salt concentration.
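The retardation factor defined at the start of this section is a one-line calculation. A minimal sketch (the function name and the range check are illustrative additions):

```python
def retardation_factor(distance_chemical, distance_solvent):
    """Rf = distance moved by the chemical / distance moved by the solvent front."""
    if not 0 < distance_chemical <= distance_solvent:
        raise ValueError("a spot cannot travel farther than the solvent front")
    return distance_chemical / distance_solvent
```

A spot that moved 3.0 cm while the solvent front moved 6.0 cm has Rf = 0.5; Rf is dimensionless and always lies between 0 and 1, which is why it is useful for comparing runs against standard reference materials.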
Those used in RPLC are aqueous with organic modifiers, such as acetonitrile and methanol; most proteins have lower retention with higher organic modifier concentrations.\nA chromatographic system can be operated in one of two major modes, elution (including linear gradient, step gradient, and isocratic elution) or displacement. The two modes may be distinguished both in theory and in practice. In elution chromatography, a solution of the sample to be purified is applied to a stationary phase, commonly in a column. The mobile phase is chosen such that the sample is neither irreversibly adsorbed nor totally unadsorbed, but rather binds reversibly. As the mobile phase is flowed over the stationary phase, an equilibrium is established between the mobile phase and the stationary phase whereby, depending on the affinity for the stationary phase, the sample passes along the column at a speed which reflects its affinity relative to the other components that may occur in the original sample. The differential migration process is outlined schematically in FIG. 1, and a typical chromatogram is shown in FIG. 2.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-1", "d_text": "A few more plates are produced by columns as long as 250 mm, but this benefit is marginal (N only increases with the square root of length increase), and the drawbacks include longer run times and higher back pressures.\nFor initial screening, some workers recommend 30- or 50-mm column lengths, but these columns frequently fail to produce a high enough plate number for a practical separation. Microbore (less than 1 mm i.d.) or narrow-bore (2-mm i.d.) columns are additional options. However, these columns require smaller sample volumes, less extra column plumbing, and smaller detector cell size and place unnecessary demands on the LC system during the method development stages.
Hence it is preferable to use these short or narrow diameter columns during the fine-tuning phase of method development. In other words, as the workhorse for developing methods, the best choice could be a 150 mm x 4.6 mm, 5-µm dp column. Normally, a mobile phase flow rate of 1.5 mL/min is advisable when running these columns.\nAnother important factor that has a profound influence on chromatographic separation is the mobile phase components, the organic and aqueous phases.\nThe choice of the organic solvent is another factor that might affect how well the separation goes. Three options are available with reversed-phase separations: methanol (MeOH), acetonitrile (ACN), and tetrahydrofuran (THF). Each solvent offers a distinct advantage in terms of selectivity, but chromatographers rarely know which solvent will be the best option based on this factor. Thus, you must base your decision on other factors when selecting the starting solvent.\nThe majority of the work involves analyzing pharmaceutical compounds. Due to the extremely low UV absorbance of many of these samples, analysts frequently find themselves using detector settings of 220 nm or less. Tetrahydrofuran has a high background absorbance, making it less useful below 240 nm. Gradients containing methanol typically drift off scale at wavelengths shorter than 220 nm, despite the fact that low concentrations of methanol can be used at low wavelengths. Further, you need a solvent that won't react with the atmosphere or the samples. When using tetrahydrofuran, workers must exercise caution because it can break down and produce peroxides. According to some researchers, diluting tetrahydrofuran with water significantly reduces this issue.", "score": 24.12694989858406, "rank": 54}, {"document_id": "doc-::chunk-2", "d_text": "The higher the numeric value, the more undulation present in the stencil. Lower values, on the other hand, indicate a smoother surface.
Obviously, the printer is looking for the lowest possible Rz value at the appropriate stencil thickness for the required ink deposit. Sample Rz measurements for typical screen printing substrates are shown in FIGURE 6 for comparison purposes.\nIt is also possible to correlate emulsion thickness with the Rz value and establish data about different emulsions. If a sample emulsion is coated on the same screen under the exact same conditions, the Rz value and the emulsion buildup would be the same. Different emulsions, however, will produce different Rz values, even if the emulsion-over-mesh ratio is the same.\nSince mesh equalization is, to a great extent, dependent on the solids content of the emulsion, the Rz value will vary at the same coating thicknesses with the different emulsions that are used. For example, a traditional, high-grade diazo-sensitized emulsion with 27-28% solids might have an Rz value of approximately 9 microns. A diazo-sensitized photopolymer with 35-36% solids that is coated to the same wet thickness on the mesh might have an Rz value of 7 microns. Obviously, the solids content of these emulsions determines the difference in the total shrinkage and thus the lower Rz value for the higher-solids, photopolymer emulsion.\nRz value benefits\nUntil now, the screenmaker and the printer could only rely on the printed results as a reliable means of evaluating stencil quality. Yet, even this judgment is flawed because so many other factors affect the final print quality: squeegee pressure, substrate, squeegee durometer, squeegee angle, ink viscosity, mesh choice, etc. Even the perfect stencil can produce an unacceptable print if these other variables are not controlled. With the development of quality-control parameters and measuring techniques, however, it is necessary to start with the best stencil possible for the printing conditions.
The use of Rz values provides the screenmaker or the printer with quantifiable and repeatable stencil characteristics rather than subjective judgements.\nIn addition to the quality of the stencil surface, the emulsion buildup can be measured on the range of mesh counts used in a typical screen-printing plant, and this information can be recorded and used for future applications.", "score": 23.642463227796483, "rank": 55}, {"document_id": "doc-::chunk-2", "d_text": "A, B, and C are factors which contribute to band broadening.\nA - Eddy diffusion\nThe mobile phase moves through the column which is packed with stationary phase. Solute molecules will take different paths through the stationary phase at random. This will cause broadening of the solute band, because different paths are of different lengths.\nB - Longitudinal diffusion\nThe concentration of analyte is less at the edges of the band than at the center. Analyte diffuses out from the center to the edges. This causes band broadening. If the velocity of the mobile phase is high then the analyte spends less time on the column, which decreases the effects of longitudinal diffusion.\nC - Resistance to mass transfer\nThe analyte takes a certain amount of time to equilibrate between the stationary and mobile phase. If the velocity of the mobile phase is high, and the analyte has a strong affinity for the stationary phase, then the analyte in the mobile phase will move ahead of the analyte in the stationary phase. The band of analyte is broadened. The higher the velocity of mobile phase, the worse the broadening becomes.\nVan Deemter plots\nA plot of plate height vs. average linear velocity of mobile phase.\nSuch plots are of considerable use in determining the optimum mobile phase flow rate.\nAlthough the selectivity factor, a, describes the separation of band centres, it does not take into account peak widths. Another measure of how well species have been separated is provided by measurement of the resolution. 
The resolution of two species, A and B, is defined as\nR = 2(tRB - tRA)/(wA + wB)\nBaseline resolution is achieved when R = 1.5\nIt is useful to relate the resolution to the number of plates in the column, the selectivity factor and the retention factors of the two solutes.\nTo obtain high resolution, the three terms must be maximised. An increase in N, the number of theoretical plates, by lengthening the column leads to an increase in retention time and increased band broadening - which may not be desirable. Instead, to increase the number of plates, the height equivalent to a theoretical plate can be reduced by reducing the size of the stationary phase particles.\nIt is often found that by controlling the capacity factor, k', separations can be greatly improved. This can be achieved by changing the temperature (in Gas Chromatography) or the composition of the mobile phase (in Liquid Chromatography).", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "The detector is wired to the computer data station, the HPLC system component that records the electrical signal necessary to generate the chromatogram on its display and also to identify and quantitate the concentration of the sample constituents (see Figure F). Since sample compound characteristics can be very different, many types of detectors have been developed. For example, if a compound can absorb ultraviolet light, a UV-absorbance detector is used. If the compound fluoresces, a fluorescence detector is used.\nTogether these variables are factors in a resolution equation, which describes how well two components' peaks are separated or overlap each other. These parameters are mostly only used for describing HPLC reversed-phase and HPLC normal-phase separations, since those separations tend to be much more sensitive than other HPLC modes (e.g.
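The relation between resolution, plate number, selectivity factor and retention factor referred to above is not reproduced in the text; a common form is the Purnell equation, Rs = (sqrt(N)/4) · ((α - 1)/α) · (k'B/(1 + k'B)). This is an assumption standing in for the missing expression, sketched below:

```python
import math

def purnell_resolution(n, alpha, k_b):
    """Purnell equation: Rs from plate number N, selectivity factor alpha,
    and the retention (capacity) factor k' of the later-eluting solute B."""
    return (math.sqrt(n) / 4.0) * ((alpha - 1.0) / alpha) * (k_b / (1.0 + k_b))
```

With alpha = 1 (no selectivity) the resolution is zero no matter how many plates the column has, which is why all three terms must be maximised together.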
ion exchange and size exclusion).\nOur range of high-quality check valves and relief valves is available in brass or stainless steel and a variety of connection sizes for the connection of cylinders to instruments.\nA detector is required to see the separated compound bands as they elute from the HPLC column [most compounds have no color, so we cannot see them with our eyes]. The mobile phase exits the detector and can be sent to waste, or collected, as desired. When the mobile phase contains a separated compound band, HPLC provides the ability to collect this fraction of the eluate containing that purified compound for further study. This is called preparative chromatography [discussed in the section on HPLC Scale].\nSize-exclusion chromatography (SEC), also referred to as gel permeation chromatography or gel filtration chromatography, separates particles on the basis of molecular size (essentially by a particle's Stokes radius). It is generally a low-resolution chromatography and thus it is often reserved for the final, \"polishing\" step of a purification. It is also useful for determining the tertiary and quaternary structure of purified proteins.\nIn UPLC, or ultra-high performance liquid chromatography, column particle sizes of under 2 µm can be used.
This enables better separation than the typical 5 µm particle size used in HPLC.\nIn Figure H, the yellow band has completely passed through the detector flow cell; the electrical signal generated has been sent to the computer data station.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-35", "d_text": "The second valve 1043 was then closed and the first syringe 1040 was pressurized to approximately 5 psi to deliver ethanol until both dyes had eluted from the columns 1038.\n Any removal of a narrow plug of analyte from a column is susceptible to broadening and consequent ruining of the separation. Thus it is advantageous to be able to detect separated analytes on the column before they encounter these plug-broadening components. The chromatography device described here is highly amenable to on-column optical detection. As shown schematically in FIG. 17, for example, a device 1050 can be constructed of low-absorbance polymers so that light can pass through the polymer films 1051, 1053 and column 1052. Holes, such as hole 1055, can be incorporated into one or more opaque supporting layers (e.g., layer 1054) adjacent to optically clear layers 1051, 1053 that enclose the column. Alternatively, a hole (not shown) may be defined in a layer (e.g., layer 1051) enclosing the column 1052 and covered with a window of appropriate optical properties. Using a light source 1056, light can be transmitted through one or more windows, or reflected back through a window after interacting with an analyte on the column. A detector 1057, which may be within or preferably outside the device 1050, is preferably provided. These configurations enable a range of optical spectroscopies including absorbance, fluorescence, Raman scattering, polarimetry, circular dichroism and refractive index detection.
With the appropriate window material and optical geometry, techniques such as surface plasmon resonance and attenuated total reflectance can be performed. These techniques can also be performed off-column, or in a microfluidic device that does not employ a separation column. Window materials can also be used for other analytical techniques such as scintillation, chemiluminescence, electroluminescence, and electron capture. A range of electromagnetic energies can be used including ultraviolet, visible, near infrared and infrared.\n Analytical probes (not shown) can also be inserted into the microfluidic device and into the separation column. Examples of optical probes include absorbance, reflectance, attenuated total reflectance, fluorescence, Raman, and optical sensors.
Paper chromatography is used in the separation of proteins and in studies related to protein synthesis; gas-liquid chromatography is utilized in the separation of many types of complex molecules such as antibodies, proteins, and monoclonal antibodies.\nThe stationary phase in chromatography is a solid phase, or a liquid phase coated on the surface of a solid phase. Along with that, there is a mobile phase, gaseous or liquid, flowing over the stationary phase. If the mobile phase is liquid the technique is termed liquid chromatography (LC), and if it is gas it is called gas chromatography (GC). The purpose of applying chromatography, which is used as a method of quantitative analysis as well as of separation, is to achieve a satisfactory separation within a suitable time frame. Agarose-gel chromatography is used for the purification of RNA, DNA particles, and viruses.\nVarious API manufacturing companies and CROs use high-performance liquid chromatography (or high-pressure liquid chromatography, HPLC) in the process of drug discovery.
High-performance liquid chromatography is a technique in analytical chemistry and bulk drug manufacturing that is used to separate, identify, and quantify each component in a mixture.", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "Many microcentrifuges only have settings for speed (revolutions per minute, RPM), not relative centrifugal force, but to be more accurate, certain procedures necessitate precise centrifugation conditions, which must be specified in terms of relative centrifugal force (RCF) expressed in units of gravity (times gravity or × g).\nThe relationship between revolutions per minute (RPM) and relative centrifugal force (RCF) is:\ng = 1.118 × 10^-5 × R × S^2\nWhere g is the relative centrifugal force, R is the radius of the rotor in centimeters, and S is the speed of the centrifuge in RPM.\nIn the following online tool you can calculate the centrifuge rotor speed, taking into account the different rotor types available. If you do not find your rotor here, you can enter the parameters manually.", "score": 22.27027961050575, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "Structural Biochemistry/Chromatography/Thin Layer\nThin layer chromatography (TLC) is an extremely valuable technique in the organic lab. It is used to separate mixtures, to check the purity of a mixture, or to monitor the progress of a reaction. The polarity of the solute, polarity of solvent, and polarity of adsorbent are crucial factors that determine the mobility rate of a compound along a TLC plate. This technique helps separate different mixtures of compounds based on their mobility differences. TLC can also be used to identify compounds by comparing them to a known compound\nThin layer chromatography (TLC): this technique is used to separate dried liquid samples using a liquid solvent (mobile phase) and a glass plate covered with silica gel (stationary phase). Basically, we can use any organic substance (cellulose, polyamide, polyethylene, etc.)
or inorganic substance (silica gel, aluminum oxide, etc.) in TLC. These substances must be finely divided and able to form uniform layers. On the surface of the plate is a very thin layer of silica, which serves as the stationary phase. Then, add a small amount of solvent into a wide-mouth container (e.g. a beaker or developing jar), just enough to cover the bottom of the container. Place the prepared TLC plate into the sealed container, which holds a small amount of solvent (the mobile phase). Due to capillary action, the solvent moves up the plate; we can then remove the plate and analyze the Rf values.\nUsually TLC is done on a glass, plastic, or aluminum plate coated with silica gel, aluminum oxide, or cellulose. This coating is called the stationary phase. The sample is then applied to the bottom of the plate and the plate placed in a solvent, or the mobile phase. Capillary action pushes the sample up the plate. The rate at which the samples move up the plate depends on how tightly the sample binds to the stationary phase. This is determined by polarity. The Rf values, or retention factors, are then compared for analysis. The retardation factor of a solute is defined as the ratio of the distance traveled by a compound to that traveled by the solvent in a given amount of time. For this reason, Rf values will vary from a minimum of 0.0 to a maximum of 1.0.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-1", "d_text": "The resulting chromatogram has begun to appear on screen. Note that the chromatogram begins when the sample is first injected and starts as a straight line near the bottom of the display.
This is called the baseline; it represents pure mobile phase passing through the flow cell over time.\nSCIEX forensic analysis solutions provide fast, highly accurate data across a large number of compounds and biomarkers, from the known to the new and novel.\nAdvance your research with front-end devices designed to help you realize the full power of your mass spectrometer. SCIEX has the broadest portfolio of ESI-MS front-ends that can support many flow rates, sample requirements and sensitivities.\nBy decreasing the pH of the solvent in a cation exchange column, for instance, more hydrogen ions are available to compete for positions on the anionic stationary phase, thereby eluting weakly bound cations.\nDesigned with expandability and compatibility in mind, the Nexera XR ultra high performance liquid chromatograph enables more users to take advantage of high-speed, high-resolution technologies.\nEach vMethod provides method conditions, recommended sample prep, LC and MS conditions, and information for applicable MS/MS library databases for key applications.\nThe basic principle of displacement chromatography is: a molecule with a high affinity for the chromatography matrix (the displacer) will compete effectively for binding sites, and thus displace all molecules with lesser affinities.[11] There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired in order to achieve maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors.
But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix.\nCaffeine is a xanthine alkaloid (psychoactive stimulant). Caffeine has some legitimate medical uses in athletic training and in the relief of tension-type headaches. It is a drug that is naturally produced in the leaves and seeds of many plants. It is also produced artificially and added to certain foods.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-11", "d_text": "Excess hexane was run off, leaving a thin (<1.0 mm) guard layer of solvent to prevent drying and cracking of the silica. The mixture to be separated was added using a Pasteur pipette, taking care not to disturb the silica. This was allowed a moment to settle before the column tap was opened to allow the analyte to adsorb to the silica. Five aliquots of 25.0 ml hexane were used to obtain fractions in pre-weighed 50 ml RBFs.\n2.4.3 Gas chromatography with flame ionization (GC-FID) and mass spectrometric (GC-MS) detection\nAnalyses of products were undertaken using an Agilent Tech 7890A series gas chromatograph equipped with a 7683 series auto-sampler and a 7683B series autoinjector.\nFor GC-FID analysis, the gas chromatograph was fitted with an Rtx-5 (5%-Phenyl)-methylpolysiloxane column, 30 m x 0.32 mm ID, and a film thickness of 0.25 μm. Nitrogen was used as the carrier gas with a 1.0 ml min-1 flow rate. The injector volume was 1 μl and the injector temperature was 250 °C. The oven temperature was programmed to commence at 40 °C, rising by 10 °C min-1 to 300 °C, held at 300 °C for ten minutes. The flame ionization detector temperature was 300 °C, with hydrogen / air flow rates of 40 ml min-1 and 400 ml min-1 respectively.\nFor GC-MS analysis, the gas chromatograph was fitted with an Agilent HP-5ms column, 30 m x 0.25 mm ID, with a film thickness of 0.25 μm.
Helium was used as the carrier gas with a 1.0 ml min-1 flow rate. The injector volume was 1 μl and the injection temperature was 300 °C. The oven temperature was programmed to commence at 40 °C, rising by 10 °C min-1 to 300 °C, held at 300 °C for ten minutes. The detector was an Agilent 5975A quadrupole mass selective detector with the ion source temperature set at 230 °C.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-4", "d_text": "Contract development and manufacturing organizations (CDMOs) can optimize chromatography methods depending on the objective and required purity using high-volume batch columns and perform preparative purification using preparative LC systems and packing materials, developed and produced in-house.\nPreparative chromatography can be performed with an analytical column (and system) to produce a few micrograms of material, up to process scale, providing ton quantities of sample using 1-m-long, 200-mm i.d. columns. The larger the quantity of analyte required, the further the technique is removed from analytical chromatography, both in terms of scale and ideology; the bigger the scale, the more \"nonchromatographic\" parameters have to be considered.\nSupercritical fluid chromatography (SFC) is a form of normal phase chromatography that uses a supercritical fluid such as carbon dioxide as the mobile phase. SFC is used for the analysis and purification of low to moderate molecular weight, thermally labile molecules and can also be used for the separation of chiral compounds. Basic separation principles are similar to those of high performance liquid chromatography (HPLC), however SFC typically utilizes carbon dioxide as the mobile phase; therefore the entire chromatographic flow path must be pressurized.
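The single-ramp oven programs described in this section imply a total run time that follows from simple arithmetic (ramp duration plus final hold); a minimal sketch, where the helper name is ours:

```python
def program_runtime(start_c, end_c, ramp_c_per_min, hold_min):
    """Total time of a single-ramp GC oven program: ramp segment plus final hold."""
    return (end_c - start_c) / ramp_c_per_min + hold_min

# Program from the text: 40 degC to 300 degC at 10 degC/min, then hold 10 min
total_min = program_runtime(40, 300, 10, 10)  # 26 min ramp + 10 min hold = 36 min
```

The same helper applies unchanged to the GC-FID and GC-MS runs above, since both use the identical 40-to-300 °C program.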
Since the supercritical phase represents a state in which liquid and gas properties converge, supercritical fluid chromatography is sometimes called convergence chromatography.\nThe development of chromatographic methods has brought about a number of refinements and advancements in the process. As is well known, chromatographic separation of active pharmaceutical ingredients (APIs) and highly potent APIs (HPAPIs) requires high accuracy, effective dosage forms and quick elution rates, so that there are fewer side effects and greater targeted therapeutic effects. So choosing the correct chromatographic method is a function of the types of molecules to be separated. Today, there are small molecules, peptides, antibody drug conjugates, fermented biological compounds and advanced micro-chemicals used for different therapeutic areas. These small molecules present various complexities during drug development and commercial scale production processes, which require chromatographic expertise and cutting-edge technology to ensure the highest quality of molecules produced.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-7", "d_text": "Thus, as the amount of fluid decreases, less power is absorbed in the tank, the resonant width at half-power decreases, and the quality factor, Q, increases. Therefore, it may be seen that the quality factor, Q, which is inversely proportional to the full width of the resonance curve at half maximum, may be used as a means of determining the quantity of the liquid within the cavity. Thus, by simply sweeping sweep oscillator 10 through several resonant frequencies and measuring the power at R.F. detector 12, it is possible to determine the quality factor, Q, at each resonant frequency. These measurements are then averaged to produce a value of liquid mass present in the tank.
This value is relatively insensitive to fluid orientation for the reason that Q is a function of energy absorbed, and the energy absorbed is primarily dependent on the liquid mass present in the tank. Preferably, at least two or three modes of resonance are swept in this fashion to average out readings. The average of these measurements is used to calculate Q, which is then displayed in terms of liquid quantity or mass on a suitably calibrated indicator 40.\nA typical trace of a resonance curve plotting crystal voltage at detector 12 versus time or, equivalently, frequency, is shown in FIG. 2. At resonance, the tank plus liquid is highly absorptive, thus detected power is low and the power received by the detector is minimal, P_min, at the resonant frequency F_o. Conversely, off resonance the received power increases to a maximum P_max. The loaded Q of the tank is equal to F_o / T, where T is the full width at half maximum of the resonance.\nThe alternate embodiment of FIG. 3 may be preferable for fluids with high loss tangent (high dielectric conductivity) or where measurements are required at high levels of fluid mass, in which case low-Q measurements are required. In such cases, the resonance becomes so broad that the half-power points are difficult to discern and measure.\nIn FIG. 3, like numerals with a "prime" superscript are used for like items in FIG. 1. As shown in FIG. 3, a sweep oscillator 10' is coupled to the antenna 19' in tank 20' via an isolator 50, a wavemeter 52 and a slotted coaxial line device 54.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-5", "d_text": "Thin layer chromatography is run as follows: a small spot of solution containing the sample is applied to a plate, about 1.5 centimeters from the bottom edge. The solvent is allowed to evaporate completely, or else a very poor separation, or none at all, will be achieved.
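The loaded-Q calculation described above for the resonance trace (Q = F_o / T, averaged over several swept modes) can be sketched as follows; the mode frequencies and half-power widths below are hypothetical values chosen only for illustration:

```python
def quality_factor(f_resonance_hz, fwhm_hz):
    """Loaded Q of the tank: Q = F_o / T, with T the full width at half maximum."""
    return f_resonance_hz / fwhm_hz

# Hypothetical swept modes: (resonant frequency in Hz, half-power width in Hz)
modes = [(2.45e9, 1.2e6), (3.10e9, 1.6e6), (3.80e9, 2.1e6)]
q_values = [quality_factor(f0, t) for f0, t in modes]
q_avg = sum(q_values) / len(q_values)   # averaged over modes, as in the text
```

Averaging over two or three modes, as the text recommends, reduces sensitivity to how the liquid happens to be oriented in the tank.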
The plate must be dried in a vacuum chamber, particularly if a non-volatile solvent was employed to apply the sample.\nA small amount of a suitable solvent is poured into an appropriate transparent container to a depth of less than 1 centimeter. A strip of filter paper is placed in the chamber in such a way that its bottom touches the solvent, and the paper lies against the chamber wall and reaches approximately to the top of the container. The container is closed with a cover glass or any other lid and is left for a few minutes to let the solvent vapors ascend the filter paper and saturate the air in the chamber. The TLC plate is then put in the chamber in such a way that the spots of the sample do not touch the surface of the solvent in the chamber, and the lid is closed. The solvent moves up the plate through capillary action, meets the sample mixture and carries it up the plate (that is, elutes the sample). Before the solvent front reaches the top of the filter paper in the chamber, the plate must be removed and dried.\n=> Separation Process:\nVarious compounds in the sample mixture travel at different rates because of differences in their attraction to the stationary phase, and due to differences in solubility in the solvent. By changing the solvent, or possibly by using a mixture, the separation of components (as measured by the Rf value) can be adjusted. As well, the separation achieved on a TLC plate can be used to approximate the separation of a flash chromatography column.\nThe separation of compounds is mainly based on the competition of the solute and the mobile phase for binding places on the stationary phase. For illustration, if normal-phase silica gel is employed as the stationary phase, it can be considered polar. Given two compounds that differ in polarity, the more polar compound has a stronger interaction with the silica and is thus better able to displace the mobile phase from the binding places.
As a result, the less polar compound moves higher up the plate (resulting in a higher Rf value).", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-5", "d_text": "Enhancement factor (EF) is defined as the maximal signal intensity (Imax-extraction) in the extraction step divided by the mean signal intensity (meanflushing) during the flushing step (from 0.49 to 1.00 min): EF = Imax-extraction / meanflushing (1) Signal-to-noise ratio (S/N) was calculated using the maximal signal intensity in the extraction step minus the average intensity of the saturation step (local baseline, from 1.35 to 1.85 min) divided by the root mean square of the blank sample in the extraction step (from 2.04 to 2.97 min): S/N = (Imax-extraction − Ibaseline) / RMSblank (2) In one part of this study—in order to compensate for differences in ionization efficiencies of the tested analytes—the corrected signal (Scorrected) was calculated based on the maximal signal intensity obtained in FE (Imax-FE) multiplied by a correction factor (CF): Scorrected = Imax-FE × CF (3)\nThe CF is defined as the average (averaging interval: 1 min) extracted ion currents (EICs) of the target analytes (Itarget) obtained in direct liquid infusion to APCI-MS (sample flow rate, 40 μL min−1; sample, 5 × 10−6 M analytes dissolved in 5 vol.% ethanol/water mixture) divided by the average intensity of EPR (IEPR, used as a reference; averaging interval: 1 min): CF = Itarget / IEPR (4) The calculations were done using Excel (version 16.0; Microsoft, Redmond, WA, USA) and Matlab software (version R2017a; MathWorks, Natick, MA, USA). Origin software (version 2018b; OriginLab, Northampton, MA, USA) was used to plot the figures.\nTo shed light on the mechanism of FE, we have studied the influence of a number of parameters on the extraction process.
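The EF, S/N, and correction-factor definitions in Eqs. (1), (2) and (4) above translate directly into code; the intensities below are hypothetical and the function names are ours:

```python
def enhancement_factor(i_max_extraction, mean_flushing):
    """EF = I_max-extraction / mean_flushing  (Eq. 1)."""
    return i_max_extraction / mean_flushing

def signal_to_noise(i_max_extraction, local_baseline, blank_rms):
    """S/N = (I_max-extraction - local baseline) / RMS of blank  (Eq. 2)."""
    return (i_max_extraction - local_baseline) / blank_rms

def corrected_signal(i_max_fe, i_target_avg, i_epr_avg):
    """S_corrected = I_max-FE * CF, with CF = I_target / I_EPR  (Eqs. 3-4)."""
    return i_max_fe * (i_target_avg / i_epr_avg)

# Hypothetical intensities (arbitrary units)
ef = enhancement_factor(5.0e6, 2.5e5)       # 20-fold enhancement
sn = signal_to_noise(5.0e6, 1.0e5, 2.0e4)   # S/N = 245
sc = corrected_signal(5.0e6, 8.0e5, 1.0e6)  # correction factor of 0.8 applied
```

The correction factor simply rescales each analyte's signal by its relative ionization efficiency against the EPR reference, so corrected signals from different analytes become comparable.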
These parameters are grouped into four categories: (1) instrument-related; (2) method-related; (3) sample-related; (4) analyte-related.\nInstrument-related factors affecting FE\nExtract transfer tubing diameter and length\nFirst, we tested 30-cm sections of four types of PTFE tubing with different inner diameters (0.3, 0.6, 0.8, and 1.0 mm) as extract transfer tubing. The EFs and S/N increased as the inner diameter increased, especially for EPE, EHP, EN, and LIM (Fig.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-1", "d_text": "But very slowly, because it works against gravity and the compound is a complex one. The two-dimensional technique is another, more complex setup which is used to separate complex mixtures. This is why this is termed the ascending technique. Packed GC Columns: Robust, Reliable and Reproducible. In contrast, typical capillary columns consist of a thin, fused silica glass tube with a thin, internal liquid phase coating. We must carefully observe whether the mobile phase develops over the level of the solvent front. This is built due to its time consuming ability. Until I had to run a prior test changing the GC set for a capillary column. A packed column is a pressure vessel that has a packed section. OPUS® 5-80R Pre-packed Chromatography Columns Figure 3. The mobile phase is water, flowing at 1 mL/min. Rf values do not have units, since both the numerator and denominator are distances.
This is because if the stationary phase is more polar than the mobile phase, highly polar compounds in the mixture will bind tightly to the stationary phase, whereas less polar compounds will bind weakly to it. For more details see Column Internals. The stationary phase of this particular technique is a solid material on which the sample compounds are adsorbed. Gas Chromatographic Columns. Restek has developed a special packing material that exceeds EPA Method 608 resolution requirements while delivering a faster analysis time than any other pesticide phase packed column available. Table 2. Our Packed GC Columns are designed and manufactured to offer excellent and reproducible performance for all sample types associated with Packed Column separations. Some are liquid distributors, bed eliminators, packing support plates, etc. When using low-pressure liquid chromatography systems, pre-packed columns offer convenience and peace of mind.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "Capillary Hydrodynamic Fractionator 2000\nThe CHDF-2000 can measure multimodal particle size distributions in the size range of 0.01-3.0 microns with very high resolution. The particles are physically fractionated according to size before detection by Matec Applied Sciences’ patented Capillary Hydrodynamic Fractionation (CHDF) technique. \"The CHDF-2000 combines the proven CHDF technique with powerful Windows operating software and an advanced hardware platform equivalent to modern HPLC systems into a rapid, efficient, and reliable particle size analyzer.\" Particle size populations differing by 10% can be detected and quantified with no assumptions regarding the shape of the particle size distributions. Results are independent of particle density.
The instrument is capable of determining particle size distributions in fewer than 10 minutes, independent of the complexity of the particle size distributions and sample density.\n(Matec Applied Sciences: CHDF-2000)\nThe CHDF method provides the greatest combination of speed and resolution of any particle sizing technique. The CHDF-2000 can even be used with an external autosampler for automated analysis of over 100 samples, allowing continual 24-hour usage. (Matec Applied Sciences: CHDF-2000)", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-3", "d_text": "Values of RI can be measured very accurately and are used to correlate density and other properties of hydrocarbons with high reliability. Information obtained from RI measurements can be applied for various reservoir engineering calculations. Examples are PVT behavior and surface tension of reservoir fluids, wetting alterations in reservoirs [35, 36], and asphaltene precipitation [37–39]. The refractive index of light crude oils can be directly measured using a conventional refractometer [35, 36, 38]. However, direct measurements of the refractive index of many crudes, natural bitumen, and heavy fuels are unattainable since these liquids are too opaque, so RI is only measured for fairly dilute solutions; in these cases RI is determined for a series of oil/solvent mixtures and the results are extrapolated (in an assumption of a certain mixing rule) to determine the value for the crude oil [35, 37, 40].
It is usually assumed that a solution of a crude oil (bitumen) behaves as an ideal binary mixture of the components [37–39].\nSolubility and RI have been related by a formula involving the solubility parameter, Planck’s constant, the molar volume, the hard-sphere diameter, the absorption frequency in the UV, Avogadro’s number, and the refractive index.\nThe solubility parameter mapping of Wiehe shows that asphaltene insolubility is dominated by aromaticity and molecular weight, not by polar or hydrogen bonding interactions. Studies at ambient conditions by Wand et al. have shown that the refractive index at the onset of precipitation (PRI) is an important characteristic of oil/precipitant mixtures.\nThe aromatic fraction has little or no influence on RIoil, whereas saturates correlate negatively and the resins and asphaltenes are positively correlated with RIoil. Generally speaking, anything that decreases the maltene RI also decreases asphaltene stability. An exception is the effect of increasing temperature, which causes thermal disaggregation even though RI decreases.\nThe refractive index is expressed as a function of composition and density through the Clausius-Mossotti or Lorenz-Lorentz equation [43, 44]; the validity of the Lorenz-Lorentz equation to describe the density dependence of RI was investigated by Vedam and Limsuwan.\nAs reported previously, addition of aromatic or other hydrocarbon solvents has minimal effect on PRI.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-1", "d_text": "Using a pencil, a dim line was drawn across the paper approximately two centimeters from the sharpened tip of the paper. A plant extract was then applied, in repeated strokes, across the pencil line. Between each stroke, time was allowed for the plant extract to dry—this process was enhanced by blowing on the paper after each application. The application of the plant extract was stopped when an ample amount had accumulated, forming a thin stripe.
An acetone solution was placed within a test tube, but the amount of acetone used depended on the length of the chromatography paper because only the tip of the paper was positioned in the solution. The chromatography paper was attached to a paper clip and fastened to a cork that was stuck in the opening of the test tube.\nBetween eight and ten minutes was then allowed for the acetone solution to travel up the length of the chromatography paper. After this allotted time the paper was removed from the test tube and the furthest distance that the solvent traveled was marked with a pencil, and the distance it traveled was measured and recorded in Table 1. Then, using a completed chromatogram the locations of the various pigments were found, and thus the different pigments were identified. The distance of each of the pigments from the origin of the plant extract was measured and also recorded in Table 1. The following equation was then used to calculate the Rf:\nRf = (distance moved by pigment) / (distance from pigment origin to solvent front)\nThese findings were then recorded in Table 1.\nTable 1 shows that out of all the pigments chlorophyll b traveled the smallest distance—2.1 centimeters—from the plant extract source. Chlorophyll a was next having moved a distance of 3.0 centimeters, and xanthophyll followed moving an additional centimeter with a distance of 4.0 centimeters. The pigment carotene moved the furthest from the plant extract origin. Its distance, more than double that of xanthophyll, was 8.6 centimeters.\nThe Rf represents the relationship between how far a pigment moved in comparison to the distance traveled by the solvent.
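The Rf equation above can be applied to the Table 1 distances. Note that the solvent-front distance of 9.25 cm used here is our inference from the reported chlorophyll b Rf of about 0.227 (2.1 cm / 0.227 ≈ 9.25 cm); it is not a value stated in the text:

```python
def rf(pigment_distance_cm, solvent_front_cm):
    """Rf = distance moved by pigment / distance moved by the solvent front."""
    return pigment_distance_cm / solvent_front_cm

# Pigment distances (cm) from Table 1; solvent-front distance inferred, see note
solvent_front = 9.25
pigments = {
    "chlorophyll b": 2.1,
    "chlorophyll a": 3.0,
    "xanthophyll": 4.0,
    "carotene": 8.6,
}
rf_values = {name: round(rf(d, solvent_front), 3) for name, d in pigments.items()}
```

Because every pigment is divided by the same solvent-front distance, the ordering of Rf values necessarily matches the ordering of distances traveled.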
The Rf for chlorophyll b (0.227) is smaller than those for the remaining pigments; therefore, it clearly moved the smallest distance.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "LRF is one of the auto measurement types of a digital oscilloscope in the time domain (“Delay” menu item) which is used for the measurement of the time delay between the first rising edge of channel 1 and the last falling edge of channel 2.\nFor more convenient display of a measured parameter, the majority of oscilloscope auto measurement types are shown as abbreviations. Such short names encode the meaning of the current auto measurement type. The LRF abbreviation can be decoded as Last Rising Falling.\nThe example of the auto measurement of LRF delay time of an Aktakom AOC-5302 wide screen oscilloscope is shown on the screenshot below.\nThe measured delay time (t) between the first rising edge of channel 1 and the last falling edge of channel 2 on the screenshot above is 958 ns.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-8", "d_text": "C) If the adsorbent has settled to a compact column, add the other layer of sand at the top of the adsorbent to prevent disturbance of the surface when the solvent is added.\nD) Allow the solvent to drain down to just above the top sand layer. Note that the adsorbent column should be kept covered by solvent throughout the chromatography, or else channels and cracks will build up.\nE) Dissolve the sample in a minimum volume of solvent and add the solution to the column by a pipette and bulb. Allow the solution to drain into the column and immediately add more solvent.\nIn 1978, W. C. Still introduced a modified version of column chromatography termed flash column chromatography (flash).
The method is very similar to traditional column chromatography, except that the solvent is driven through the column by applying positive pressure. This allows most separations to be carried out in less than 20 minutes, with better resolution than the old technique. Modern flash chromatography systems are sold as pre-packed plastic cartridges, and the solvent is pumped through the cartridge. Systems may also be linked to detectors and fraction collectors, providing automation. The introduction of gradient pumps resulted in faster separations and less solvent usage.\nIn expanded bed adsorption, a fluidized bed is employed instead of a solid phase made of a packed bed. This permits omission of initial clarification steps such as centrifugation and filtration for culture broths or slurries of broken cells.\n=> Column chromatogram resolution computation:\nGenerally, column chromatography is set up with peristaltic pumps that deliver buffers and the sample solution through the top of the column. The solutions and buffers pass through the column, and a fraction collector at the end of the column setup collects the eluted samples.\nBefore fraction collection, the samples eluted from the column pass through a detector, such as a spectrophotometer or mass spectrometer, so that the concentration of the separated components in the sample mixture can be determined.\nFor illustration, if you were to separate two proteins with different binding capacities to the column from a sample solution, a good choice of detector would be a spectrophotometer employing a wavelength of 280 nm. The higher the concentration of protein in the eluted solution leaving the column, the higher the absorbance at that wavelength.
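The absorbance-to-concentration relationship the passage leans on is the Beer-Lambert law, A = ε·l·c. A minimal sketch follows; the molar absorptivity value below is a placeholder, not a property of any particular protein.

```python
def concentration_from_absorbance(absorbance, molar_absorptivity, path_cm=1.0):
    """Solve the Beer-Lambert law A = epsilon * l * c for concentration c."""
    return absorbance / (molar_absorptivity * path_cm)

# Placeholder molar absorptivity at 280 nm, in 1/(M*cm).
EPSILON_280 = 50000.0
print(concentration_from_absorbance(0.5, EPSILON_280))  # concentration in mol/L
```

In a real fraction-collection run, each eluted fraction's absorbance reading would be converted this way to track where the protein peak elutes.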
The receiver measures the received power as a function of frequency above and below the dominant resonant frequency, or harmonics thereof. The measured power drops substantially at resonance, relative to the power measured just before and after resonance, because at resonance most of the power is absorbed or stored in the cavity and less is transmitted to the receiver. The measured power versus frequency as sensed by the receiver is used to calculate the quality factor, \"Q\", which is inversely proportional to the amount of fluid in the tank.\n1. Apparatus for measuring the quantity of a mass of fluid material of known dielectric constant and conductivity present in a tank comprising:\n(a) a transmitter means for transmitting radio frequency electromagnetic energy for exciting the tank with such electromagnetic energy; and\n(b) receiver means responsive to said transmitter means for determining the ratio of the power entering the tank versus the power reflected from the tank as a function of frequency of the electromagnetic energy.\n2. The apparatus of claim 1 including means for determining the ratio of energy stored within the tank versus the energy dissipated within the tank, the quality factor Q, from the sensed power measurements to derive the quantity of material present.\n3.
The apparatus of claim 1 wherein the transmitter means comprises a sweep oscillator for generating electromagnetic energy across a predetermined frequency spectrum, and wherein the receiver means comprises a wavemeter coupled to said sweep oscillator for determining the frequency of the sweep oscillator at any given moment, and a slotted coaxial line device coupled to said wavemeter for determining said ratio of the power, generated by the sweep oscillator, entering the tank versus the power reflected from said tank as a function of frequency, and an antenna in said tank coupled to said coaxial line device for measuring the quantity of material in said tank based upon said ratio.\n4. The apparatus of claim 1 wherein the fluid is a fuel adapted to be used in a low-gravity environment.\n5. A method for measuring the quantity of material stored in a tank comprising:\n(a) transmitting radio frequency electromagnetic energy into said tank;\n(b) and determining the voltage standing wave ratio as a function of the electromagnetic energy reflected from the tank versus the energy entering said tank to determine the quantity of material present therein.\n6.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-3", "d_text": "The chromogenic and fluorescent substances are advantageously immobilized on a solid substrate.\nDepending on the width of the absorption band of the non-fluorescent chromogenic agent, the emission of the fluorescent agent will vary, resulting in a reduction of the fluorescence decay time.
It has been found unexpectedly that measuring decay times will also supply information on substances which do not act as dynamic quenchers of the fluorescence radiation of the fluorescent agent.\nThus the method described is characterized by a considerable improvement in long-term stability vis-a-vis conventional techniques, which will permit the use of measuring arrangements corresponding to the invention in measuring stations, avoiding the frequent calibrations which have hitherto been necessary.\nThe theoretical basis of this effect is provided by so-called energy transfer (ET), according to which electronic energy can be transferred from a donor (the fluorescent substance in this instance) to an acceptor (the analyte-sensitive chromogenic substance in this instance). There are no free photons in this process. The energy transfer is governed by the Forster equation\nkET = Ro^6 / (r^6 τ) (3)\nkET being the rate constant for ET, τ the fluorescence decay time of the donor, and Ro and r signifying the so-called critical distance and the actual distance, respectively, between donor and acceptor. Ro (Forster radius) is the distance at which the probability of Forster ET is equal to that of spontaneous decay.\nThe efficiency of the energy transfer thus depends on the quantum yield of the donor, the overlap of the donor's emission spectrum with the acceptor's absorption spectrum, and their relative orientation and distance. Typical transfer distances are between 0.5 and 10 nm.
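Combining the Förster rate of equation (3) with the donor's spontaneous decay rate 1/τ gives the standard transfer efficiency E = kET / (kET + 1/τ) = Ro⁶ / (Ro⁶ + r⁶). A small sketch, with distances in nanometers and values purely illustrative:

```python
def fret_efficiency(r_nm, r0_nm):
    """Transfer efficiency E = R0^6 / (R0^6 + r^6), from k_ET = R0^6 / (r^6 * tau)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)
```

At r = Ro the efficiency is exactly 50%, which is one way to read the "critical distance" in the text; well inside Ro the transfer is nearly complete, and beyond about 2·Ro it is negligible.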
The distance between donor and acceptor has a considerable influence on the energy transfer, since the latter depends on the inverse sixth power of this distance.\nAn application of the invention provides that for determining the hydrogen ion concentration, 7-diethylamino-coumarin-3-carboxylic acid be used as the fluorescent substance and methyl orange as the chromogenic substance, or alternatively, 8-aminopyrene-1,3,6-trisulphonate as the fluorescent substance and phenol red as the chromogenic substance.\nPreferably, the fluorescent substance (donor) and the chromogenic substance (acceptor) are covalently attached to each other.", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "Science Methods and Techniques – Chromatography.\nSeparating individual chemical compounds from a mixture is a very common task in scientific experiments. You deal with similar compounds and you need to figure out what is in the mixture. Chromatography is one of the science methods and techniques used to address this issue. It can be used to separate liquid or soluble compounds that have very similar properties and cannot be extracted or separated by other methods.\nTo understand how chromatography works, imagine a group of people of similar fitness. They are to carry weights along a road, starting at the same place. They all take off at the same time and walk at approximately the same speed. All the weights look identical, but they are actually made of different alloys.\nLet's say half of them are 20 kg weights (~44 lb) and half are 40 kg. Each person takes a weight and walks with it.\nPeople with heavier weights will need to rest more frequently, and perhaps rest longer, than those who carry lighter weights.\nPretty soon our carriers and our weights will be separated into two groups. Something similar happens during the chromatography process.\nIn our example, the people who carried the weights were what chromatography calls the mobile phase.
The roadside where they would take their rest was the stationary phase, and the weights were our chemical compounds. The bunch of different weights at the start line was the analyte, and the weight under gravity acted as the affinity between a chemical compound and the stationary phase. We forgot to mention that at the start there were more people than weights, so some lucky ones set off empty-handed. They would travel further than all the others: this would be our solvent front.\nOne important term in chromatography that you need to understand is the retention factor.\nThe retention factor is a value that helps to identify the same chemical compound on different chromatograms. This problem arises when we compare different samples on different chromatograms (for example, plant pigments extracted from different species of plant). It is calculated as a simple ratio: Rf = D1/D2, where D1 is the distance our compound traveled from the start line and D2 is the distance of the solvent front from the start line. For the same compound running on the same stationary phase with the same solvent this ratio will always be the same!\nConclusion? The compound is the same on both chromatograms.\nChromatography at home.\nPaper chromatography is probably the simplest science method you could try at home.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "Radio frequency (RF) is the oscillation rate of an alternating electric current or voltage or of a magnetic, electric or electromagnetic field or mechanical system in the frequency range from around 20 kHz to around 300 GHz. This is roughly between the upper limit of audio frequencies and the lower limit of infrared frequencies; these are the frequencies at which energy from an oscillating current can radiate off a conductor into space as radio waves. Different sources specify different upper and lower bounds for the frequency range.\n- Energy from RF currents in conductors can radiate into space as electromagnetic waves (radio waves).
This is the basis of radio technology.\n- RF current does not penetrate deeply into electrical conductors but tends to flow along their surfaces; this is known as the skin effect.\n- RF currents applied to the body often do not cause the painful sensation and muscular contraction of electric shock that lower frequency currents produce. This is because the current changes direction too quickly to trigger depolarization of nerve membranes. However this does not mean RF currents are harmless; they can cause internal injury as well as serious superficial burns called RF burns.\n- RF current can easily ionize air, creating a conductive path through it. This property is exploited by \"high frequency\" units used in electric arc welding, which use currents at higher frequencies than power distribution uses.\n- Another property is the ability to appear to flow through paths that contain insulating material, like the dielectric insulator of a capacitor. This is because capacitive reactance in a circuit decreases with frequency.\n- In contrast, RF current can be blocked by a coil of wire, or even a single turn or bend in a wire. This is because the inductive reactance of a circuit increases with frequency.\n- When conducted by an ordinary electric cable, RF current has a tendency to reflect from discontinuities in the cable such as connectors and travel back down the cable toward the source, causing a condition called standing waves. 
Therefore, RF current must be carried by specialized types of cable called transmission lines, such as coaxial cable.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "S 2020 RI Detector\nThe S 2020 Refractive Index Detector provides the sensitivity, stability, and reproducibility required for accurate RI detection.\nThe thermally separated optic, which includes a countercurrent heat exchanger and programmable temperature control, produces a highly stable baseline and an optimum signal-to-noise ratio.\nThe S 2020 Refractive Index Detector has auto-purge and autozero features, as well as RS232 connectivity for direct data acquisition without the use of an external signal interface.\nThe S 2020 is available for:\n- Micro mode\n- Analytical mode\n- Semi-preparative mode\nThe S 2020 RI Detector is an excellent refractive index detector for a wide range of system designs. The detector provides the best sensitivity for microbore HPLC and the highest dynamic measuring range for preparative applications with the microflow cell (up to 3.0 ml/min), analytical flow cell (up to 10 ml/min), or semi-preparative flow cell (up to 50 ml/min), with measuring ranges of up to 500 µRIU, 1000 µRIU, and 20000 µRIU, respectively.
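Returning to the RF-properties list above: the opposite frequency dependence of capacitive and inductive reactance (Xc = 1/(2πfC) falling with frequency, XL = 2πfL rising with it) can be checked numerically. The component values used below are arbitrary examples.

```python
import math

def capacitive_reactance(f_hz, c_farads):
    """Xc = 1 / (2*pi*f*C): decreases as frequency increases."""
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

def inductive_reactance(f_hz, l_henries):
    """XL = 2*pi*f*L: increases as frequency increases."""
    return 2.0 * math.pi * f_hz * l_henries
```

This is why, as the list notes, RF appears to pass through a capacitor's dielectric yet is blocked by even a small coil: doubling the frequency halves Xc and doubles XL.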
Along with its extensive connectivity (both analog and digital), the S 2020 RI Detector can be used in practically any chromatographic system that requires refractive index detection.\n|Refractive Index Range||1.00 to 1.75|\n|Flow Cell Pressure||6 kg/cm²|\n|Integrator Output||± 1 V|\n|Recorder Output||± 10 mV/ 100 mV/ 1 V|\n|Recorder Offset||0 mV/ 10 mV/ 100 mV|\n|Recorder Range||8 steps (1:8) – (16:1)|\n|Digital Interface||RS232, Purge, Autozero, Start, Stop, DataOut: 1 Hz, 10 Hz|\n|Digital Output||TTL: Intensity Alarm|\n|Digital Input||TTL: Purge, Autozero, Start, Marker|\n|Temperature Setting||Ambient, 35°C to 55°C in 1 °C steps, Thermal Fuse 75°C|\n|Time Constant||RAW (0.0 sec.), Fast (0.4 sec.), Medium (0.8 sec.), Slow (1.2 sec.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-5", "d_text": "energy is coupled to an antenna inserted into the tank and is also coupled to an R.F. energy detector. The antenna is preferably located along the longitudinal axis of the tank and is adapted to predominantly propagate R.F. energy in an axially symmetric mode of propagation, i.e., TEM mode, such that the electric field strength varies principally as a function of axial tank cavity length. The receiver measures the received power as a function of frequency above and below the dominant resonant frequency, or harmonics thereof.\nThe measured power drops substantially at resonance, relative to the power measured just before and after resonance, because at resonance most of the power is absorbed or stored in the cavity and less is transmitted to the receiver. The measured power versus frequency as sensed by the receiver is used to calculate the quality factor, \"Q\". The Q is proportional to the ratio of the R.F. energy stored in the tank or cavity versus the R.F. energy dissipated, and hence inversely proportional to the amount of fluid in the tank. The Q is inversely proportional to the width of the resonance curve (power versus frequency) at half maximum power.
Accordingly, the width of the resonance curve at half maximum power is a simple, preferred method for obtaining Q.\nMeasuring the Q, in contrast to the prior-art technique of measuring changes in resonant frequency, produces a relatively orientation-insensitive method of determining fluid consumption. This is because the Q method is based on the absorption of the R.F. energy by the fluid, rather than on the change in mode structure that produces the change in resonant frequency. The amount of R.F. energy absorbed by the liquid is primarily dependent on the mass of liquid present and only very minimally dependent on the orientation of the fluid. This slight dependence can be further minimized by the appropriate choice of resonant modes measured and by averaging measurements over several modes.\nBRIEF DESCRIPTION OF THE DRAWINGS\nFIG. 1 is a schematic drawing of the apparatus of the invention.\nFIG. 2 is a plot of detected power versus time/frequency illustrating the technique for calculating Q (quality factor) in accordance with the invention.\nFIG. 3 is a schematic drawing of an alternate embodiment of the invention using VSWR measurements to determine the quality factor, Q.\nBEST MODE OF CARRYING OUT THE INVENTION\nThe invention will now be described in detail in connection with FIGS. 1 and 2 of the drawings.", "score": 16.20284267598363, "rank": 79}, {"document_id": "doc-::chunk-16", "d_text": "Quantization - A process in which the continuous range of values of an input signal is divided into non-overlapping sub-ranges (chords), and to each sub-range a discrete output value, a binary number, is uniquely assigned.\nQuantization Distortion - The inherent distortion introduced in the process of quantization.\nReceiver - The device that picks up the radio signal from the transmitter, converts it into an audio signal and feeds audio into your sound system or recorder.\nReceiver Image - A second frequency that a superhet receiver will respond to.
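The half-power-bandwidth reading of Q described above (Q = f0 / Δf, where Δf is the width of the resonance curve at half the peak power) can be sketched on sampled data. This is a coarse sketch: a real instrument would interpolate the half-power crossings rather than snapping to sample points.

```python
def quality_factor(freqs, powers):
    """Estimate Q = f0 / FWHM from a sampled resonance curve."""
    peak = max(powers)
    f0 = freqs[powers.index(peak)]
    # Outermost frequencies still at or above half the peak power.
    above = [f for f, p in zip(freqs, powers) if p >= peak / 2.0]
    fwhm = above[-1] - above[0]
    return f0 / fwhm
```

On a synthetic Lorentzian resonance the estimate recovers the known Q exactly when the half-power points fall on sample points.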
The image frequency is two times the IF frequency either above or below the carrier frequency, depending upon whether the receiver design is \"low side\" or \"high side\" injection. An RF signal on the \"image\" frequency of the receiver will produce a difference signal in the mixer just as valid as the intended IF signal created by mixing the oscillator with the carrier.\nReflections - RF waves can reflect off of hills, buildings, moving cars, the atmosphere, and basically almost anything in the RF transmission environment. The reflections may vary in phase and strength from the original wave. Reflections are what allow radio waves to reach their targets around corners, behind buildings, under bridges, in parking garages, etc. RF transmissions bend around objects as a result of reflections.\nRelative Signal Strength Indication (RSSI) - A value representing the received signal strength of both the mobile unit and the base station. This value is used to initiate a power change or handoff.\nRepertory dialing - Sometimes known as \"memory dialing\" or \"speed-calling\". A feature that allows you to recall from nine to 99 (or more) phone numbers from a phone's memory with the touch of just one, two or three buttons.\nReturn Loss - A measure of VSWR, expressed in dB.\nReverse Control Channel (RECC) - The Control Channel that is used from the mobile station to the base station direction, also known as the control channel uplink.\nReverse Voice Channel (RVC) - The voice channel that is used in the mobile station to base station direction, also known as the voice channel uplink.\nRF - Radio Frequency. Also used generally to refer to the radio signal generated by the system transmitter, or to energy present from other sources that may be picked up by a wireless receiver.\nRFI - Radio Frequency Interference. 
A non-desired radio signal which creates noise or dropouts in the wireless system or noise in a sound system.", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-3", "d_text": "The mobile phase is either a liquid (solid-liquid chromatography) or a gas (gas-solid chromatography). Pre-packed chromatography columns allow rapid screening of chromatographic conditions to identify the best purification conditions. In the early days of gas chromatography, packed columns were the only available column type. The 10 cm bed height of the 5 mL columns allows initial process development on a bench scale. The column develops under gravity. Gas chromatographic columns are usually between 1 and 100 meters long. The stationary phase is a liquid held on a solid support. Columns - Gas Chromatography: a wide range of capillary columns for general-purpose and MS applications. We also offer an extensive range of custom-packed GC columns, encompassing over 300 stationary phases on upwards of 100 solid supports. Custom-packed columns are not returnable or refundable. Thin layer chromatography, open column. Rf value = (distance traveled by the component) / (distance traveled by the mobile phase). The mobile phase travels up to the level of the solvent front.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-0", "d_text": "Sequential X-Ray Fluorescence Analysis (XRF)\nDescription of method:\nXRF is a nondestructive analytical technique for the identification and determination of the concentration of elements in solids, powder samples and liquids.\nWhen atoms are excited by high-energy X-ray photons, electrons in the form of photoelectrons are knocked out of the inner electron shells.
The unstable electron vacancies thus produced in one or several electron shells are filled by electrons from the outer shells, which emit the excess energy in the form of secondary X-ray photons. This phenomenon is known as \"fluorescence\". The energy of the emitted fluorescence photon is characteristic of the corresponding element (E = hc/λ). Moreover, the number of emitted photons is proportional to the concentration of the element in the sample.\nMeasurement technique and instrumental equipment:\nA sequential spectrometer with a 3 kW Rh front-window tube, 35 mm beam-Ø (capable of being narrowed to 1 mm-Ø in the pentagrid), its own cooling circuit with conductivity control, and a 12-fold sample changer is used. The fluorescent radiation is spectrometrically decomposed by different analyser crystals (Bragg's law), and the spectra are recorded by stepwise scanning of the diffraction angle. The detector for \"hard X-rays\" (Ti-U) is a scintillation counter, and for \"soft X-rays\" (O-Ti) a proportional counter.\nA qualitative analysis is performed by comparing the measured spectrum with a stored line library. For semiquantitative analysis, a fundamental-parameter program using stored element sensitivity libraries is used. Matrix influences and material impurities, such as sample carriers and fixation materials, are also taken into account.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-14", "d_text": "As TCD is non-destructive, it can be operated in series before an FID (destructive), thus giving complementary detection of the same analyte.\nThe other detectors are sensitive only to specific kinds of substances, or work well only in narrower concentration ranges. Some gas chromatographs are coupled to a mass spectrometer that acts as the detector. The combination is termed GC-MS. Some GC-MS instruments are coupled to an NMR spectrometer that acts as a backup detector. This combination is termed GC-MS-NMR.
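The relation E = hc/λ cited in the XRF description above can be evaluated directly. The constants are standard CODATA SI values; the conversion to keV is the unit customary for X-ray lines.

```python
PLANCK_J_S = 6.62607015e-34   # Planck constant, J*s
LIGHT_M_S = 2.99792458e8      # speed of light, m/s
J_PER_EV = 1.602176634e-19    # joules per electronvolt

def photon_energy_kev(wavelength_nm):
    """Photon energy E = h*c/lambda, returned in keV."""
    energy_j = PLANCK_J_S * LIGHT_M_S / (wavelength_nm * 1e-9)
    return energy_j / J_PER_EV / 1e3
```

A 1 nm photon comes out near 1.24 keV, the familiar E[keV] ≈ 1.2398 / λ[nm] rule of thumb, which is why shorter-wavelength ("hard") X-rays carry higher energies.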
Some GC-MS-NMR instruments are coupled to an infrared spectrophotometer that acts as a backup detector. This combination is termed GC-MS-NMR-IR. It should, though, be stressed that this is extremely rare, as most required analyses can be completed by GC-MS alone.\nLiquid chromatography (LC) is a separation technique in which the mobile phase is a liquid. Liquid chromatography can be carried out either in a column or on a plane. Present-day liquid chromatography, which generally uses very small packing particles and relatively high pressure, is termed high performance liquid chromatography (HPLC).\nIn HPLC the sample is forced by a liquid at high pressure (the mobile phase) through a column packed with a stationary phase composed of irregularly or spherically shaped particles, a porous monolithic layer, or a porous membrane. HPLC is historically divided into two sub-classes based on the polarity of the mobile and stationary phases. Techniques in which the stationary phase is more polar than the mobile phase (example: toluene as the mobile phase, silica as the stationary phase) are known as normal phase liquid chromatography (NPLC), and the opposite (example: a water-methanol mixture as the mobile phase and C18 = octadecylsilyl as the stationary phase) is known as reversed phase liquid chromatography (RPLC). Ironically, the 'normal phase' has fewer applications, and RPLC is therefore used considerably more.\nAffinity chromatography is a technique for separating biochemical mixtures based on a highly specific interaction such as that between antigen and antibody, enzyme and substrate, or receptor and ligand.\nThe stationary phase is usually a gel matrix, frequently of agarose, a linear sugar molecule derived from algae.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-93", "d_text": "16, RTP was able to detect the presence of the pill and distinguish between the two vials.\nFIG.
17 demonstrates the use of RTP to detect the presence of YOx, which is insoluble in water, when the solution is fully mixed. Briefly, a sample vessel containing water and 10 mg of YOx was thoroughly mixed and analyzed using RTP. A strong signal was detected in the thoroughly mixed solution (note the signal at approximately 11200). This strong signal indicates that RTP is sufficiently sensitive to detect the presence of undissolved material (e.g., YOx) that is suspended within a sample (water).\nFIG. 18 shows that the strong RTP signal observed when the YOx/water solution was thoroughly mixed decreases when the YOx is allowed to settle to the bottom of the surface of the reaction vessel (note the signal at approximately 11200 and compare to that in FIG. 17). Without being bound by theory, particulate material located at the bottom of the reaction vessel reflects energy more directly and provides less opportunity for scattering or other affects that increase the total energy reflected back to the transducer.\nFIG. 19 shows the results of RTP signals obtained as a frozen lump of DMSO melts within a vial containing liquid DMSO. 0 corresponds to a completely melted aliquot of DMSO. Note that the RTP signal is roughly quantitative and increased as the DMSO melts.\nPeak Power Tracking\nPeak Power Tracking has been implemented in a Covaris instrument (Covaris Inc., Woburn, Mass.) having a transducer center frequency of f0=472 kHz. The frequency tuning range was chosen to be centered at 460 kHz with an allowable frequency range of ±10 kHz. A sample vessel containing 1 milliliter of water was placed in the instrument. The water bath was filled with 19° C. filtered water. The instrument was operated with a high power treatment consisting of 100 cycles per burst and a 20% duty cycle. A maximum voltage of 96V was applied to the RF power supply driving the transducer. 
With Peak Power Tracking, the average power input to the transducer remained constant at 87 Watts even when the vessel position was significantly altered and as the water bath temperature changed.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "What does RF CH mean?\nWhat is the difference between an RF channel and a virtual channel?\nThe RF channel is the channel that the TV station uses to broadcast its signal. Before the conversion to digital, TV stations were normally identified by their channel number, and most people knew that a particular network was on a specific TV channel. While analog broadcasting was being phased out and all TV stations were converting to digital broadcasting, it was necessary for the TV stations to continue to broadcast their analog signals on their original RF channels and also broadcast their digital signals on a different RF channel. However, since the TV stations used their broadcast channel as part of their identification, they wanted to keep using the same channel number. To allow this to happen, the new digital television broadcasting standards (ATSC) gave TV stations the ability to continue to use their original channel number and also tell the TV set to tune to the new RF broadcast channel when their virtual channel number was selected. The original channel number is called the virtual channel number, and it will be followed by a period and a second number (3.1, 7.1, etc.).\nOne of the benefits of converting to digital for the TV stations is that they now have the ability to transmit more than one program at the same time on the same RF channel. The number of additional channels they can broadcast is determined by the resolution of the programs (SD vs. HD, text only, music only, etc.).
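Returning to the Peak Power Tracking example above: the text reports that the controller held the transducer at a constant average power as conditions changed, but does not give the control law. Below is a generic, assumed feedback sketch that exploits P ∝ V² by rescaling the drive voltage toward the target each step; it is not the instrument's actual implementation.

```python
def track_power(measure_power, v_init, target_w, steps=20):
    """Generic constant-power loop: scale drive voltage by sqrt(target/measured)."""
    v = v_init
    for _ in range(steps):
        p = measure_power(v)  # measured average power at drive voltage v
        if p > 0:
            v *= (target_w / p) ** 0.5
    return v
```

Against a hypothetical quadratic load, the loop settles at the voltage that delivers the 87 W target regardless of the starting voltage.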
The second number in the virtual channel number indicates that one of the additional programming sources from the same TV station is being viewed (10.2, 10.3, 10.4, etc.).\nWhen selecting an antenna, it is important to understand the difference between the RF broadcast channel and the virtual channel. Antennas are designed to receive specific ranges of RF channels, and the antenna needs to be selected for the RF channel you wish to receive. TV stations broadcast in two broad frequency ranges, called VHF and UHF. RF channels 2 through 13 are considered VHF, and RF channels 14 through 51 are considered UHF. In order to pick up the channels, the antenna has to be designed for the correct frequency range. It is very common today to find that TV stations using virtual channels 2 through 13 are actually using RF broadcast channels in the UHF range.", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "Definition of Relative fluorescence unit (rfu)\nRelative fluorescence unit (rfu) Definition from Law Dictionaries & Glossaries\nPresident's DNA Initiative Glossary\nA unit of measurement used in electrophoresis methods employing fluorescence detection. Fluorescence is detected on the CCD array as the labeled fragments, separated in the capillary by electrophoresis and excited by the laser, pass the detection window. The software interprets the results, calculating the size or quantity of the fragments from the fluorescence intensity at each data point.\nSource: The President's DNA Initiative", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-1", "d_text": "It is based on selective ionic attractions between variously charged sample constituents and an ionized chromatographic matrix. The most commonly used ion exchangers consist of an organic polymeric backbone with either acidic or basic exchange sites on its porous surface.
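The post-transition US allotment quoted earlier (RF channels 2 through 13 VHF, 14 through 51 UHF) reduces to a one-line lookup, which is useful when matching an antenna to a station's actual RF channel rather than its virtual number.

```python
def rf_band(rf_channel):
    """Classify a US over-the-air TV RF channel as VHF or UHF."""
    if 2 <= rf_channel <= 13:
        return "VHF"
    if 14 <= rf_channel <= 51:
        return "UHF"
    raise ValueError("not a broadcast TV RF channel")
```

Note that the argument is the RF broadcast channel, not the virtual channel: a station displayed as 7.1 may well need `rf_band` of a channel in the UHF range.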
The charged resins are capable of exchanging their cations or anions with those ions in the liquid phase which have a greater affinity for the matrix. Exchange interactions that take place during the passage of various ions through the column cause separation into discrete ionic zones.\nThin layer chromatography is a technique in which the stationary phase is a suspension which forms a layer on a plastic or glass plate. It is most frequently an adsorbent (with a particle size of several microns) suspended in a suitable solvent, uniformly spread on a plate, and dried. The mobile phase is a liquid that ascends the plate by capillary action, and the components of the sample mixture are separated by the partition effect.\nReverse-phase chromatography is a type of chromatography in which hydrocarbons as well as polar samples are partitioned between a nonpolar stationary phase and a polar eluting phase. Under these conditions the most polar substances elute most rapidly. This is the reverse of the more common partition chromatography in which the stationary phase is polar and the least polar substances elute most rapidly with the nonpolar eluting phase. In reverse phase chromatography, the stationary phase often consists of a chain of atoms chemically bonded to an inert surface such as silica or glass, and the eluting phase is frequently aqueous methanol or aqueous acetonitrile.\nMolecular sieve chromatography, often called gel chromatography, has resulted in tremendous progress in the chemistry of biomacromolecules. Separation in molecular sieve chromatography is based on a selective process of penetration of molecules of different sizes and shapes through a porous gel medium. The largest molecules in the mixture do not penetrate the porous structure at all; the medium-size molecules can penetrate only some pores; and the small molecules can diffuse rather freely inside the medium and can spend a considerably longer time there. 
Consequently, if the porous material is contained in a column, mixtures of components with differing molecular weights can be effectively resolved.\nIn any chromatographic process, some components of a given mixture will be retained on the stationary phase longer than others. This allows for extremely selective chromatographic separations.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-3", "d_text": "In the infrared spectrum of the paint sample nearly all the absorption bands of the pigment are masked by those of the resin; however, the fingerprint of the pigment can be seen in the Raman spectrum. In some instances a family of compounds may need to be measured and it is in such situations where the chromatographic techniques are important to avoid the necessity for a separate assay for each component of interest in the mixture. However, chromium(VI) is considerably more available and toxic to biological organisms as chromate or dichromate.\nForex russia ru comlearnmains 1 HNO3, but it goes to show that there is the will to shift whatever Western legacy one might like to trace in the human rights instruments to a truly global commitment. 1997). Electrophoretic separation of proteins (the gene products and their post-translational modifications) continues to be an essential tool in the study of genetic polymorphisms and inherited diseases.\nProof. ) 1983 Studies in Relational Forex. The use of this device has provided better results than conventional Soxhlet extraction Page 1189 EXTRACTIONMicrowave-Assisted Solvent Extraction 587 Refrigerant Extractor Distillation flask Heat source Syphon Refrigerant Extractor Syphon Distillation flask Heat source on the other hand, the use of low power entails irradiating the sample for a longer time to apply the same amount of energy.\n110 or 0.
Lkp forex hyderabad SENSING Forex russia ru comlearnmains Thermodynamics of enantioselectivity between a racemic perfluorodiether and a forex russia ru comlearnmains g-cyclodextrin.\n,js1 fi1 ···fir fja fj1 ···fjs1. Suhrkamp, Frankfurt, Germany Assmann A 1980 Die Legitimita t der Fiktion. All Rights Reserved.\nFood Analysis Use of kinetic methods in this field has grown dramatically over the last few years. This sampling rate will depend on the mass spectrometer used and is usually set automatically once the resolution is specified.\n(1995) Functional analysis of activins during mammalian development.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "The RFDA professional is designed for automated impulse excitation measurements at room temperature. The RFDA professional system measures the resonant frequencies and internal friction or damping of samples and calculates the Young's modulus, shear modulus, Poisson's ratio of rectangular bars, cylindrical rods and discs according to the ASTM E1876-15 standard. Samples are mechanically tapped by an automated tapping device which allows the operator to determine the impact position and the excitation force accurately. The induced vibration signal is detected by a dedicated microphone with a large frequency range. These signals are amplified and sent to a computer using a data acquisition board and afterwards, the elastic properties are calculated by the dedicated RFDA professional software. The user-friendly interface allows an easy interpretation and logging of the results.\nRFDA Professional Product flyer", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "In this paper, we present a novel broadband radio frequency (RF) sensor technology, which can be used for plasma process control, including Fault Detection and Classification (FDC).
Plasma is a non-linear complex electrical load and therefore generates harmonics of the driving frequency in the electrical circuit. Plasma etch processes have dependencies on chamber pressure, delivered power, wall and substrate temperatures, gas phase and surface chemistry, chamber geometry and particles, and many other second order contributions. Any changes that affect the plasma's complex impedance will be reflected in the Fourier spectrum of the driving RF power source.\nWe have found that high-resolution broadband sensing, up to 1 GHz or more than 50 harmonics (for a fundamental frequency of 13.56 MHz), greatly increases the effectiveness of RF sensing for process-state monitoring. This paper describes the measurement sampling technique and the broadband RF sensor, and presents data from commercial plasma etch tool monitoring.", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "RFQ stands for Radio-Frequency Quadrupole (also known as a Quadrupole mass analyzer\nwhen used as a mass filter), an instrument that is used in mass spectrometry. The RFQ was invented by Prof. Wolfgang Paul in the late 50's / early 60's at the University of Bonn (Germany). Paul shared the 1989 Nobel prize in Physics for his work.\nBy aligning four rods and applying an RF voltage between opposite pairs, a quadrupole field is created that alternately focuses in each transverse direction. Samples for mass analysis are ionized, for example by laser (MALDI) or discharge (electrospray or Inductively Coupled Plasma, ICP) and the resulting beam is sent through the RFQ and \"filtered\" by scanning the operating parameters (chiefly the RF amplitude). This gives a mass spectrum, or fingerprint, of the sample. Residual gas analyzers use this principle as well.\nA \"cooler\" is a device that lowers the temperature of an ion beam by reducing its energy dispersion, beam spot size, and divergence - effectively increasing the beam brightness (or brilliance).
Several ion beam cooling methods exist. In the case of an RFQ, the most prevalent one is buffer-gas cooling, whereby an ion beam loses energy from collisions with a light, neutral and inert gas (typically helium). Cooling must take place within a confining field in order to counteract the thermal diffusion that results from the ion-atom collisions.\nApplications of ion cooling to Nuclear Physics (notably, mass...", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "Reversed-phase chromatography, also called RPC, reverse-phase chromatography, or hydrophobic chromatography, includes any chromatographic method that uses a hydrophobic stationary phase. In the s, most liquid chromatography was performed using a solid support stationary phase (also called a column) containing unmodified silica or alumina resins. This type of technique is now referred to as normal-phase chromatography. Since the stationary phase is hydrophilic in this technique, molecules with hydrophilic properties contained within the mobile phase will have a high affinity for the stationary phase, and therefore will adsorb to the column packing. Hydrophobic molecules experience less of an affinity for the column packing, and will pass through to be eluted and detected first.\nAn assessment of the retention behaviour of polycyclic aromatic hydrocarbons on reversed phase stationary phases: selectivity and retention on C 18 and phenyl-type surfaces. In this manuscript the retention and selectivity of a set of linear and non-linear PAHs were evaluated on five different reversed-phase columns.
Overall, the results revealed that the phenyl-type columns offered better separation performance for the linear PAHs, while the separation of the structural isomer PAHs was enhanced on the C 18 columns.\nThe Propyl-phenyl column was found to have the highest molecular-stationary phase interactions, as evidenced by the greatest rate of change in 'S' 0. Interestingly, the Synergi polar-RP column, which is also a phenyl stationary phase, behaved more ' C 18 -like' than 'phenyl-like' in many of the tests undertaken. This is probably not unexpected since all five phases were reversed phase.\nA mixed-mode chromatographic stationary phase, C 18 -DTT dithiothreitol silica (SiO2), was prepared through \"thiol-ene\" click chemistry.\nThe obtained material was characterized by Fourier transform infrared spectroscopy, nitrogen adsorption analysis and contact angle analysis. Chromatographic performance of the C 18 -DTT was systematically evaluated by studying the effect of acetonitrile content, pH, buffer concentration of the mobile phase and column temperature.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "Last updated: October 13, 2014\nDescription: RFs are autoantibodies that react with the Fc portion of IgG. The major immunoglobulin classes are capable of demonstrating RF activity, but the usual routine laboratory assays detect primarily IgM RF.\nMethods: Classically, IgM RF in serum is detected by agglutination of IgG-coated particles. The source of IgG may be human or rabbit because human IgM RF reacts with IgG molecules from various species. The particles may be latex beads or tanned erythrocytes. Addition of test serum in graded amounts (dilutions) may lead to agglutination of the coated particles and a positive result. The most dilute serum concentration that causes agglutination is the titer reported. The Rose-Waaler test or sensitized sheep cell agglutination test was used previously.
Although less sensitive, it was more specific than current assays. The specificity of the sheep cell agglutination test has been replaced by anti-CCP antibodies that are more specific than RF (for RA) but have comparable sensitivity.\nOther, more quantitative techniques include measurement of complexes that form between IgM RF and IgG by rate nephelometry or by capture of IgM RF on IgG-coated plastic wells, detected using enzyme-linked reagents (e.g., enzyme immunoassay, ELISA). Results from these assays may be reported in international units using standardized reagents. Normal ranges should be supplied for each assay.\nNormal Values: RF is normally not detected.\nClinical Associations: RF is found in a variety of conditions (Table 16).\n—RA: Approximately 80% of patients with RA are seropositive for RF. The remaining 20% are said to be seronegative. Distinction between seropositive and seronegative RA has been considered of some importance because patients without RF are thought to have milder disease and a less severe disease course. Nonetheless, some of the more severe extraarticular manifestations of RA, such as vasculitis and nodules, occur almost exclusively in high-titer seropositive patients. Treatment with some second-line drugs, notably gold salts and penicillamine, can lower or abolish RF positivity, whereas others (e.g., cyclosporine) do not affect RF titers.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-6", "d_text": "However, uncertainty can be minimized by using a reference value based on a well defined operational definition of the characteristic, and using the results of a measurement system that has higher order discrimination and traceable to NIST. Because the reference value is used as a surrogate for the true value, these terms are commonly used interchangeably. 
This usage is not recommended.\nA reference value, also known as the accepted reference value or master value, is a value of an artifact or ensemble that serves as an agreed upon reference for comparison. Accepted reference values are based upon the following:\n- Determined by averaging several measurements with a higher level (e.g., metrology lab or layout equipment) of measuring equipment\n- Legal values: defined and mandated by law\n- Theoretical values: based on scientific principles\n- Assigned values: based on experimental work (supported by sound theory) of some national or international organization\n- Consensus values: based on collaborative experimental work under the auspices of a scientific or engineering group; defined by a consensus of users such as professional and trade organizations\n- Agreement values: values expressly agreed upon by the affected parties\nIn all cases, the reference value needs to be based upon an operational definition and the results of an acceptable measurement system. To achieve this, the measuring system used to determine the reference value should include:\n- Instrument(s) with a higher order discrimination and a lower measurement system error than the systems used for normal evaluation\n- Be calibrated with standards traceable to the NIST or other NMI\nDiscrimination is the amount of change from a reference value that an instrument can detect and faithfully indicate. This is also referred to as readability or resolution. The measure of this ability is typically the value of the smallest graduation on the scale of the instrument. If the instrument has “coarse” graduations, then a half-graduation can be used. A general rule of thumb is the measuring instrument discrimination ought to be at least one-tenth of the range to be measured. Traditionally this range has been taken to be the product specification. 
Recently the 10 to 1 rule is being interpreted to mean that the measuring equipment is able to discriminate to at least one-tenth of the process variation. This is consistent with the philosophy of continual improvement (i.e., the process focus is a customer designated target). The above rule of thumb can be considered as a starting point to determine the discrimination since it does not include any other element of the measurement system’s variability.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-12", "d_text": "A measuring cell typically is a “detector,” which is an instrument or part of an instrument that indicates the presence of a compound by means of some specific spectroscopic or chemical property thereof.\nA “mass spectrometer” is an analyzer used in chemical analysis that operates by ionizing atoms or molecules and then measuring the relative mass of the ionized products. Mass spectrometry (“MS”) is routinely used to measure the molecular weight of a sample molecule, as well as the fragmentation characteristics of a sample to identify that sample. Mass spectrometry measures the ratio of the mass of the molecule to the ion's electric charge. The mass is customarily expressed in terms of atomic mass units, called Daltons. The charge or ionization is customarily expressed in terms of multiples of elementary charge. The ratio of the two is expressed as a “m/z” ratio value (mass/charge or mass/ionization ratio). Because the ion usually has a single charge, the m/z ratio is usually the mass of the “molecular ion,” or its molecular weight (“MW”).\nOne way of measuring the mass of the sample accelerates the charged molecule, or ion, into a magnetic or electric field. The sample ion moves under the influence of the magnetic or electric field. 
A detector can be placed at the end of the path through the magnetic field, and the m/z of the molecule calculated as a function of the path through the magnetic field and the strength of the magnetic field. A variety of different mass analyzers known in the art may be used, such as quadrupole, triple quadrupole, sector, FTMS, TOF, ion traps, e.g., linear quadrupole ion traps and 3-dimensional ion traps, and quadrupole-TOF hybrid, among others.\nMS may be carried out in the gas phase in which an electrically neutral sample at low pressure is passed through an electron beam. The simplest mass spectrometers introduce a gaseous, electrically neutral sample in vacuo, normally at pressures of about 10⁻⁶ Torr or less. In this MS technique, an electron beam strikes the sample and ejects one or more electrons after which the sample is ionized with a net positive charge. The ionized sample is then passed through a magnetic field and, depending on the course of the ionized sample through that field, the mass of the molecule to the ion's electric charge is measured.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-0", "d_text": "RF Power Meters\nRadio frequency (RF) power meters are the electronic test equipment of choice to collect information, analyze RF power, and display information in an easy-to-read digital format. Engineers use RF power meters to measure and document pulsed RF signals, noise-like signals, and pseudorandom signals. ValueTronics sells used, expertly refurbished RF power meters with high-speed measurement capacities for both benchtop and production environment use.
Together with power sensors, RF meters can log measurements over many different power levels.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-2", "d_text": "In either case, hapten in the sample binds to the limited sites of antibody in the second reagent so that fewer antibody sites are available to bind to and agglutinate the hapten-bearing particles. Agglutination is monitored visually, by absorbance, by light scatter or otherwise (e.g., by particle-counting). The reduced level of agglutination is then correlated with an increased level of hapten in the sample, commonly based upon a dose-response curve generated with controls or calibrators of known hapten concentration.\nSuch assays for hapten commonly employ polyclonal antibodies generated by inoculating an animal with a conjugate of the hapten (e.g., albumin-hapten conjugate). It has also been proposed to use monoclonal antibodies for such tests. The most widely available monoclonal antibodies result from mouse-mouse hybridizations and are one or another subclass of the immunoglobulin G (IgG) class. It is well known that immunoglobulins are also found in the IgA, IgE, IgM and IgD classes, which differ from each other in valency (e.g., IgG is divalent, IgM is decavalent).\nRheumatoid factor (RF) is an autoimmune antibody found in human serum which binds to the Fc fragment of IgG antibodies. RF is normally present at low levels, but can be present at elevated levels under various conditions and disease states. For some individuals, e.g., those with rheumatoid arthritis, the RF concentration varies over time. In some diagnostic immunoassays the sample is pre-treated (e.g., with 2-mercaptoethanol or dithiothreitol) to inactivate RF activity prior to or during the assay. Otherwise, RF may interfere with the assay, especially by promoting agglutination even where analyte hapten has blocked many of the reagent antibody sites.
If, for example, many of the divalent reagent antibodies have bound one sample hapten molecule, they can still bind at the other site to particle-bound hapten. With such monovalent binding, however, no agglutination would result; but the Fc fragment of the reagent antibody would be exposed on the particle. Elevated sample RF could agglutinate the particles via such Fc fragments.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-11", "d_text": "E. EXPECTED VALUES\n- The expected value in the normal population is negative. However, apparently healthy, asymptomatic individuals may have RF. These individuals usually have low titers. The incidence of false positives increases with age and is similar in females and males.\n- The frequency (percent) of rheumatoid arthritis patients in which RF is detected by the assay should be indicated.\n- The clinical significance of a positive test must be determined by evaluation of the patient's total clinical picture.\n- Waaler, E. On the occurrence of a factor in human serum activating the specific agglutination of sheep blood corpuscles. ActaPathol. Microbiol. Scan. 1940;17:172-178.\n- Rose HM, Ragan C, Pearce E, and Lipmann MO. Differential agglutination of normal and sensitized sheep erythrocytes by sera of patients with rheumatoid arthritis. Proc. Soc. Exp. Biol. Med. 1948;68:1-11.\n- Cathcart ES. Rheumatoid Factors in Laboratory Diagnostic Procedures in the Rheumatic Diseases. 2nd ed. Cohen AS (ed.). Boston, MA: Little, Brown, & Co., 1975;104.\n- Freyberg RH. Differential Diagnosis of Arthritis. Postgrad. Med. 1972;51(20):22-27.\n- Lawrence JS, Locke GB and Ball J. Rheumatoid Serum Factor in Populations in the U.K.I.Lung Disease and Rheumatoid Serum Factor. Clin. Exp. Immunol. 1971;8:723.\n- Linker JB III, and Williams RC Jr, Tests for detection of rheumatoid factors, In: Rose NR, Friedman H, and Fahey JL, Eds. In: Man. Clin. Lab. Immunol, 3rd Ed. 
Wash,DC: ASM;1986:759-761.\n- Jonsson T and Valdimarsson H. Is measurement of rheumatoid factor isotypes clinically useful? Annals of the Rheumatic Diseases. 1993;52(2):161-4.\n- Jonsson T and Valdimarsson. Clinical significance of rheumatoid factor isotypes in seropositive arthritis. Rheumatology Intl.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-0", "d_text": "What is Rf --> Op_Amp 1. Consider the amplifier circuit shown. What value of Rf will yield vout = 2V when Is = 10 mA and Ry = 2Rx = 500Ω 2. The way I did this was by employing KCL: (Is that applicable?) 3. The attempt at a solution Rx(Is) + Rf(Is) = Vout I'm actually not quite sure how to do this one. It's left me in a bit of a pickle.", "score": 8.086131989696522, "rank": 99}]} {"qid": 7, "question_text": "Why do large trucks need to make such wide turns at intersections?", "rank": [{"document_id": "doc-::chunk-1", "d_text": "I would refer to the WB-62 design vehicle, which means a multi-unit tractor-trailer combination that has a 62 foot wheel base. This truck is actually about 69 feet from nose to tail. There are also the WB-40 and the WB-50 and the WB-65. For trivia purposes, the WB-65, which is 74 feet from nose to tail, is the vehicle used when designing interstates and interstate ramp terminals.\nThe importance of the design vehicle becomes clear when you start putting together the design of intersections and sharp curves. Larger trucks need more room in order to make turns. The rear wheels of a truck–well, of any vehicle, really–will run to the inside of the front wheels. This wider path made during a turn is called overtracking, and it's why you see large trucks swing out really wide when they're making right angle turns at intersections.
The distance and width needed to ensure that the rear wheels of the design vehicle stay off the edge of pavement, or out of the adjacent lanes, can add a lot of cost to a design project.\nNext time you're walking in an area that has curbs and you come to an intersection, look at the corners. Do you see tire tracks up against the curb faces? Do you see tire tracks on top of the curb, or on the sidewalk? Are the corners, maybe including the pedestrian ramps, broken and cracked? If any of these are true, it means most likely that truck drivers are running their rear wheels up and over the curbs when they're trying to make turns. This could be because the driver isn't very good, but more likely it's because there's not enough room for the trucks to clear when taking their overtracking into account.\nLarge vehicles are also important to consider when we're designing intersections and considering the time it takes to accelerate. For example, the distance you need to be able to see to your left, or to your right, when you're trying to turn onto a roadway. On a 45 mile per hour road, with no other considerations, that distance should be 500 feet if you're driving a car. If you're driving a tractor trailer, then I have to add another 270 feet! A football field's worth of view, basically. That means no hills, curves or bushes to block the view of that truck driver trying to make a left turn.", "score": 52.84625826547886, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Trucks Make Wide Turns\nWith a height of up to 14 feet, fully-loaded trucks have a much higher center of gravity than typical passenger vehicles, or even SUVs (4.5 feet – 5 feet and 6.5 feet high, respectively). High centers of gravity make it easier for trucks to tip over.
They will often move into adjacent lanes prior to and after a turning maneuver to avoid driving over a curb or sidewalk or hitting a car in an opposing travel lane.\nDon't try to sneak past the right side and squeeze around them – it may be the last mistake you'll ever make.\nWatch for the truck's turn signal to see what the driver intends to do. Occasionally, truck drivers will fail to signal or the trailer signal light may be inoperative. Never attempt to cut in along the right side as the driver maneuvers left or you may become sandwiched between the turning truck and the curb. Safe drivers will avoid the truck's No-Zones and wait to assess the truck driver's intent before passing.\nAs a general rule, avoid passing trucks while turning and never pass them on the right side.\nDon't Crowd an Intersection\nMany intersections are marked with stop lines, indicating where a driver must come to a complete stop. Crowding an intersection means to stop beyond the stop lines, leaving your vehicle exposed to trucks attempting to turn, as well as other vehicles and pedestrians. Crowding an intersection is illegal and puts you at risk of being hit by trucks and other vehicles turning from other areas of the intersection.\nDownload the wide turns fact sheet (pdf)\nA semi's height (up to 14′), length and weight make it nearly impossible for the driver to make tight turns like regular cars and trucks. A fully-loaded truck also has a much higher center of gravity than typical passenger vehicles and SUVs, making it easier for a truck to tip over. Read more in the PDF.", "score": 48.95608481442971, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "04 Jun Safety Matters: Trucking\nDriving a truck comes with a number of constant hazards that drivers need to be aware of in order to keep themselves and others on the road safe.
Share the following Safety Matters with your drivers.\nAvoiding Right Turn Squeeze Crashes\nLarge commercial vehicles can be challenging to maneuver, particularly on residential and city streets. Taking a turn too sharply or widely can lead to costly accidents and serious injuries. One common type of accident is the right turn squeeze crash, which occurs when a truck driver makes a wide right turn, leaving too much distance between the truck and the curb. When doing so, other drivers on the road may try to squeeze past the truck and could end up getting their vehicle caught underneath the truck’s trailer.\nIn order to make right turns as safe as possible, drivers should be aware of their environments and the potential challenges that they may impose. For example, a particularly narrow intersection can make right turns especially dangerous.\nIt is also important that the vehicles and trailers are in proper working order. A broken turn signal or a lack of adequate mirrors will make any type of driving unsafe.\nThere are many steps to making a turn. As a truck driver, you have a responsibility for not only your own safety, but that of pedestrians and other drivers as well. When making a right turn, adhere to the following steps in order to make the process as safe as possible:\n- Prepare for the turn by moving into the right-hand lane as early as you can.\n- Activate your turn signal well in advance, and reduce your speed.\n- As you approach the intersection, observe the area and make sure that you will be able to safely complete the turn.\n- When beginning your turn, keep the rear of your trailer in the right-hand lane and close to the curb.\n- Avoid swinging wide to the left or crossing into other lanes.\n- Use your mirrors to check for other vehicles, pedestrians or obstructions.\n- Never back up when completing a turn.\nIf you are unable to finish a turn, wait for other traffic to clear to do so.\nAs a driver, safety should be your top priority. 
If you find that a turn may not be possible, it is better to take a slight detour and ensure that you will be able to get to your destination safely. A small delay is far less costly than an accident. If you have any questions about making right turns safely, talk to your supervisor.", "score": 46.308957052641674, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Truck drivers admittedly recognize that making a right turn with their long and heavy loads is not easy. Every experienced big rig driver will admit that negotiating a right turn can be downright dangerous and likely has many accounts they could tell demonstrating this.\nClearly, the size of the big rigs themselves requires that the big rig operators understand the special techniques required to negotiate right hand turns and be even more vigilant for pedestrians and vehicles around them while they negotiate the turn.\nA common scenario leading to a collision between a big rig and an auto during a right hand turn is due to the 'swinging rear.' As the truck makes its right turn, the rear of the trailer will swing in the opposite direction. In other words, to the left. This can cause a portion of the rear end of the truck to end up several inches (perhaps even feet) over into the adjoining lane. Should a car be there while the truck is 'swinging' over into the lane and comes into contact with another car, the force has at times caused these cars to then be pushed out of their lane and into the next adjoining lane, resulting in the middle car getting crushed. The collisions are especially dangerous when the car that got hit by the rear of the big rig gets pushed into a lane with traffic going in the opposite direction.\nThe size and length of many tractor trailers are an issue when negotiating the right turn, which can be compounded if the street is not very wide.
Oftentimes, truck drivers find themselves actually crossing the double lines of a two way street in order to get their big rig set up to complete the right hand turn. Therefore, many 18 wheeler tractor trailer collisions involve vehicles that were heading in the opposite direction of the big rig but were struck during the turn.\nPedestrians can also be crushed during a right hand turn scenario. At times, a pedestrian will pass the big rig while it is negotiating the right hand turn. Many pedestrians, especially children, have nearly no experience walking near big rigs and may not take into consideration that the big rig will come swinging back. Other times, a big rig operator will simply end up crossing into the bike or pedestrian lanes taking the bicyclist or pedestrian by surprise. Sadly these accidents usually result in severe injury or death.\nWe at the Law Offices of Edward A. Smith respect those in the big rig community and the services they provide our nation. Many tractor trailer operators have years of safe driving habits. Many, however, have not been adequately trained in safety.", "score": 45.41024108238779, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "Semi-trucks can pose a serious danger to other vehicles on the road.\nSemi-trucks and 18-wheelers are not easy to drive and maneuver. Even some standard maneuvers are hard to execute because of their large size and weight. Truck drivers must exercise extreme care and consideration when making turns or entering lanes with merging traffic. Other motorists should understand why semi-truck accidents happen and how to avoid them.\nDangerous Right-Hand Turns\nThe most difficult maneuver executed by a truck is a turn. When a right-hand turn is required, the truck driver will first turn the truck towards the left and then start making the right-hand turn.
This can pose a danger to the vehicles on the left and right of the truck during the turn.\nWhen the truck swings to the left, the cab of the truck is in another lane, possibly leading to a sideways collision by a vehicle coming in that particular lane. The left swing can also appear to be an attempt to change lanes and the motorists in the right lane may think that the right lane is free and they may attempt to pass the truck. However, when the truck starts turning right, the vehicle in the right lane can get caught between the trailer of the truck on the left and the curb on the right. This can cause a lot of damage to the vehicle and can result in catastrophic injuries to the occupants.\nEstablishing Liability For an Accident\nA collision between a semi-truck and a car is dangerous for the people involved; however, the damage will be much higher for the car. Truck drivers are required to follow additional rules that specify how to act in various situations that may emerge while they are behind the wheel.\nIn case an accident occurs between a truck and a car and the truck driver is held liable for the crash, the driver or the trucking company would be required to pay for damages, injuries, and loss of income suffered. In many cases it might be difficult to establish liability, so it is advisable to consult an experienced St. Louis truck accident lawyer to ensure that your rights are protected and you receive the compensation you deserve. Call The Hoffmann Law Firm, L.L.C. at (314) 361-4242 for a case evaluation.", "score": 44.79407226585271, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Wide Right Turns a Big Road Hazard\nOver 4,000 people die in large truck accidents annually, and out of those deaths, some 68 percent of them are riding in a passenger vehicle. Accidents involving large trucks can be devastating simply due to the weight and size of commercial trucks. 
They can weigh up to 40 tons, compared to one to five tons for an average passenger vehicle. One of the most common types of trucking accidents involves wide right turns, which are exceptionally dangerous because they can result in head-on collisions. If you have been injured in a commercial trucking accident, you need to speak with an experienced personal injury attorney as soon as possible.\nWhy Wide Right Turn Accidents Occur\nIf you visualize how a truck has to make a right turn, it is easier to understand how these accidents happen. The size of a truck makes it hard to maneuver. To manage these turns, a truck driver must first swing the truck to the left. This can result in one of several types of right turn accidents. If the driver swings the truck too far to the left, it can veer into the next lane of traffic and strike the vehicles there.\nThe second type of wide right turn accident occurs if the truck driver doesn’t go far enough to the left before attempting to turn right. The failure to swing sufficiently to the left can cause the truck to roll over. This can be devastating because it could cause cargo to spill onto other vehicles or the roadway or result in a fire. In even more devastating situations, the truck could roll over on top of nearby vehicles.\nThe third common type of wide right turn accident occurs where there are two right-turn lanes. When a truck turns, it may veer into the second turn lane. If a car is in that lane, it may be trapped or crushed by the turning truck.\nAlthough a truck driver should always use a turn signal to indicate their intentions, many drivers fail to realize how wide a berth a truck needs to make a right turn. This may result in them not giving the truck enough space.\nDetermining Liability in a Wide Right Turn Accident\nIf you have suffered injuries in a wide right turn accident, you may be entitled to receive compensation for your injuries and damages. 
However, who pays the compensation depends on who is liable for the accident.", "score": 43.56449145860855, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "Do you commute daily on the highway to and from work? Or maybe you’re getting ready for a cross-country road trip or just a trip to visit family. Perhaps you’re driving, or maybe your teenager will be getting some interstate driving practice. Regardless of who’s driving and where you are travelling, watch out for truckers. There are 3 inherent dangers when it comes to driving near truckers on the interstate. Take a moment with your family, particularly the less experienced drivers in your family, to review these hazards.\nWide turns. Truckers need extra space when turning – they aren’t just trying to annoy you, their vehicle is so large it needs extra space to be able to complete a turn. That may seem more annoying than dangerous, and you may be tempted to try to go around them, but if you get caught between a turning truck and another vehicle, it could result in serious damage and injury. Any time you’re driving near a large vehicle, make sure to give them extra space to avoid a collision.\nBlow outs. Ever noticed those black pieces on the side (or in the middle) of the interstate? Those come from tire blowouts – meaning a truck’s tire gave out and basically exploded on the road. If you happen to be driving next to a truck when this happens, it can cause multiple hazards. First of all, it’s loud – it can startle drivers. Second of all, it can cause a truck driver to lose control of the vehicle – meaning they could swerve into your lane. When this happens, it’s unexpected for both the truck driver and the other vehicles on the road. To avoid this, leave extra room for trucks and when passing, and do so as quickly as is safely possible.\nWind. Have you ever been driving on a windy day and felt the wind move your car? 
You may think that doesn’t happen to trucks because they are larger and heavier, but that’s not the case. In fact, large trucks can be even more susceptible to wind and, not only could the wind move the truck, it could even cause the truck to flip on its side. As you may have noticed from our first two hazards, the best way to avoid danger in these scenarios is, again, to give the trucker extra room. You should always be giving extra space to truckers, but be extra cautious if you notice it’s a windy day.", "score": 42.47517170688165, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "A commercial driver is trained to leave plenty of space around the truck or bus. In our smaller vehicles, we often see this space as a convenient avenue to a lane change. Do not cut in front too soon after passing a truck or bus. You should not pull back in until you see both of the truck’s headlights in your rearview mirror.\nDo not follow closely\nIf you are too close behind the bus, truck or RV, the driver probably cannot see you. You also cannot see the road in front of the driver. Leave yourself extra following distance, so you have more time to react and a better view of the road ahead.\nWatch for the commercial driver’s signals\nTrucks and buses make wide turns. A collision may occur when a truck or bus swings left to make a wide right turn, and an unaware driver tries to pass on the right as the bigger vehicle starts to swing right again.\nBeware of no-zones\nPlaces where a truck driver cannot see you are referred to as “no-zones.” No-zones are immediately in front of trucks, in back of trucks and to the side of trucks. If you cannot see the truck or bus driver in their side view mirror, the driver cannot see you.\nCommercial Vehicle Awareness\nBe Visible, Be Safe, Be Truck Aware.\nWatch the Road\n- Leave 15 seconds of time and space to slow down\nDon't Drive Drowsy. Take A Break. 
Drive Awake.\n- Watch for signs of drowsiness like blurry or heavy eyes, spacing out, or the inability to remember road signs\n- Pull over and take a break; take a walk and stretch to get your body and blood flow moving again\n- Adhere to continuous driving hour restrictions and do not exceed the limit\n- Without a seatbelt, you are 25 times more likely to be thrown from your vehicle.\n- It's the Law!\n- Follow safe driving and speed limits while being mindful of turns and road curves\nCheck Blind Spots Frequently\n- Continuously checking blind spots and your mirrors will help eliminate missed vehicles surrounding you as you drive, merge, or turn\nBe Aware of Weather\n- Weather conditions can make the road more hazardous than normal\n- Slow down and take precautions! Ice and limited visibility due to weather could impact your driving at any time\nFor additional information, see the state of Ohio CDL manual or visit the Department of Motor Vehicles (DMV) office.", "score": 41.7879975118712, "rank": 8}, {"document_id": "doc-::chunk-2", "d_text": "- Anticipate that a tractor-trailer will make wide turns. If a truck has a turn signal flashing, do not try to squeeze by the truck. When stopped at intersections, do not stop in front of the line. Trucks need that space to make wide turns.\n- Be sure to wear your seat belt when in a car.\n- Do not drive while impaired under any circumstances.\n- Watch out for other drivers who may be impaired, and report them to the police.\n- Avoid distractions and keep your eyes on the road.\n- Leave early and give yourself plenty of time to reach your destination.\nContact a Commercial Truck Accident Attorney Today\nHave you been hurt in a commercial truck accident in South Carolina? If another driver was at fault, you may be entitled to seek compensation for your medical bills, lost income, and other expenses. Accidents involving commercial trucks may involve several insurance companies and can be more complicated to resolve. 
You will benefit from the guidance of an experienced S.C. truck accident attorney. Contact Joye Law Firm today for a free consultation with a South Carolina commercial truck accident lawyer.", "score": 40.27268412827957, "rank": 9}, {"document_id": "doc-::chunk-1", "d_text": "It is always best to assume that the driver cannot see you and to drive accordingly.\nMake Sure They Have Room to Turn\nThe size of large trucks also makes it difficult for them to make turns. Large trucks make wide turns. Give them extra turning space. They often swing wide and may even start a turn from a middle lane. Give them room and never try to squeeze by a turning truck or enter the space between the turning truck and the curb.\nStay Away from Them When It’s Raining or Snowing\nIf you have ever shared the road with a large truck when it was raining, snowing, or there had recently been rain or snow, you know that another danger of sharing the road with a large truck is the spraying and splashing it creates. Snow, water, and mud can all get splash off a truck’s large multitude of tires onto a car’s windshield. This can seriously compromise your visibility. Give trucks extra space under these kinds of weather conditions.\nTruck Accident Injury Attorney\nIf you have been involved in a truck accident, you may have suffered catastrophic injuries. Attorney Randall F. Rogers is here to provide dedicated legal counsel to you during this difficult time. He will fight for your right to full and fair compensation for the harm you have endured. 
Contact us today.\nPosted in: Uncategorized", "score": 39.58692149504786, "rank": 10}, {"document_id": "doc-::chunk-1", "d_text": "The Federal Motor Carrier Safety Administration (FMCSA) lists these tips for how to avoid problems between drivers of regular sized vehicles and large trucks:\n- Avoid the blind spots, or no zones, surrounding the truck\n- Pass safely and quickly, and look for the driver in their mirror\n- Never cut a truck off\n- Leave plenty of space between you and the 18 wheeler\n- Anticipate the truck’s wide turns\n- Don’t speed so you have time to react\n- Always wear your seatbelt\n- Pay attention to the road at all times\n- Make predictable decisions while driving so other drivers can anticipate your next move\nBy following these tips, you can stay as safe as possible on the road and lower the danger of sharing the road with large trucks. Truckers are trained and certified to navigate the road on their large rigs, but sometimes they make mistakes too. That’s why it’s important to always be vigilant and pay attention to what everyone else is doing on the road.\nOur Lawyers Can Represent You\nAt Pittman Roberts & Welsh, PLLC, we understand that collisions with large trucks can be devastating. Not only can they cause you serious physical injuries that change your life, but they can also affect you emotionally and financially. You could even suffer from injuries that keep you from returning to work, leaving you unable to earn the wages you need to support yourself and your family.\nOur truck accident lawyer can help you fight for your rights and hold the negligent truck driver, trucking company, or part manufacturer responsible for their actions. We know that you want to get back to normal and financially recover as best as you can. That’s why we’re here to help you. 
Reach out to our office today so we can start discussing the ways we can help.", "score": 37.818550910699464, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "Truck drivers must be aware of the limitations of their vehicles at all times. They make wide turns, for example, that can cause accidents. They have significant blind spots not shared by other vehicles. All of this makes driving a truck vastly different from driving a car, and an inexperienced or negligent driver can injure others if he or she makes serious mistakes.\nOne of the major differences is that a truck takes much farther to stop than a passenger car. In fact, it probably takes a truck much longer to stop than you realize.\nNearly twice the stopping distance is necessary\nAn easy way to think about this is that it takes close to two football fields for a truck to come to a full stop if the truck was moving at 65 mph. That is, of course, assuming that road conditions are excellent, that the braking system on the truck is in good condition, etc. Technically, some experts say this should take 525 feet.\nA passenger car, on the other hand, usually just takes around 300 feet to stop. There are many reasons for this, with weight being perhaps the most important. A passenger car probably weighs around 4,000 pounds, even with passengers inside. The truck, on the other hand, can weigh as much as 80,000 pounds. This is such a massive difference that it’s impressive that it doesn’t take an even greater distance for the truck to stop.\nWhat if a negligent truck driver hits you?\nA truck driver who is not paying attention and who does not apply the brakes in time can suddenly find themselves facing a rear-end accident that they can’t avoid. 
If you’re injured in a crash like this, you need to know what legal steps to take to seek compensation.", "score": 35.62924147472572, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "The Ever-Present Danger of Tractor-Trailers\nThere are a few reasons why tractor-trailer wrecks can be much more dangerous than car wrecks. The obvious one is the size and weight of these vehicles. A tractor-trailer can weigh as much as 80,000 pounds. That’s 16 times the weight of your average passenger car! With that much mass, any wrong move by the trucker can be deadly.\nOn top of that, tractor-trailers do not have good braking ability. They need a long time before they can come to a full stop, which can be detrimental in a last-second situation.\nAnother contributing factor is semi-trucks’ lack of maneuverability. Because these trucks take a while to brake, turn widely, and can’t easily move in and out of tight spaces to avoid a suddenly stopped car or road debris, they have more of chances for a crash.\nLastly, many of these trucks are carrying dangerous or hazardous materials on board. Things that are flammable, or can be explosive. Those types of materials can turn a minor collision into a large-scale tragedy.\nCommon Causes of Semi-Truck Accidents\nHere are the common causes of big truck crashes:\n- Improperly loaded cargo: If the truck is improperly loaded, the load can shift and overturn the truck, or slip off and injure those behind the truck.\n- Bad driving: A driver who fails to follow road rules and behaves aggressively can easily cause a crash.\n- Stopped truck: When a truck is stopped on the side of the road, leaning into traffic, there is always potential for a collision with a car that didn’t see it in time.\n- Rear-ending: Due to their poor braking ability, trucks have a hard time stopping abruptly if the vehicle in front of them makes a sudden stop.\n- Turning left: Trucks can be quite slow with their turns. 
If there isn’t enough time for them to turn, it could lead to a collision.\n- Underride: An underride happens when a passenger car collides with a truck and runs under it. This type of accident usually ends in severe injuries or fatalities.\n- Fatigue: Drivers of big trucks have long distances to travel. Sleep-deprivation and tiredness play a huge role in truck-related crashes. In 2013, new federal laws were put in place to curb the number of hours a commercial truck driver can be on the road.", "score": 34.29985106911986, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "In some cases, drivers may see this space as an opening to cut in and turn. However, this may increase their risk of crashing. Therefore, motorists should watch for signals in order to determine truckers’ intentions and refrain from cutting in between large trucks and curbs.\nDo prepare for wind gusts\nEspecially when they are traveling at high rates of speed, commercial vehicles often create turbulence. These wind gusts may cause unprepared motorists to lose control of their vehicles. Thus, it is advisable for people to keep both hands on the wheel and expect turbulence when they are passing large trucks, or are being passed by these vehicles.\nConsult with an attorney\nAs a result of truck accidents, people in Colorado may suffer injuries that require extensive medical care. This may result in lost income, medical bills and other damages for which the trucker could be held liable. Thus, it may benefit those who have experienced such situations to work with an attorney. A lawyer may help ensure their rights are protected and explain their options for pursuing financial compensation.", "score": 33.97022558781726, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "Truck drivers are a valuable resource to the United States economy, but they pose a high risk of danger to other drivers on the road. 
Annually, over 1.5 million tractor-trailer accidents lead to injury and/or death in our country.\nThere are four main reasons why tractor-trailers are so dangerous:\n1. Lack of Maneuverability. Because of their size, tractor-trailer drivers have a hard time avoiding road hazards. A semi-truck needs more time and space than the average vehicle to change its course.\n2. Poor Ability to Brake. A tractor-trailer needs at least 350 feet to come to a complete stop from a travel speed of 60 mph.\n3. Size. Tractor-trailers with full loads can weigh around 80,000 pounds. This is 16 times heavier than a car. With that much weight, the impact of a tractor-trailer is devastating.\n4. Dangerous Cargo. Tractor-trailers carry a variety of dangerous cargo, such as flammable, toxic, and/or explosive items. This type of cargo can turn a minor accident into a catastrophic event.\nTruck drivers and trucking companies contribute to many of these safety problems in the following ways:\n- Driver fatigue. The Federal Motor Carrier Safety Administration puts a limit of 70 hours per week on truck drivers. Those drivers are also required to take sufficient breaks. However, economic concerns encourage drivers to ignore these rules.\n- Excessive Speed. Truck drivers tend to drive at unsafe speeds in order to meet quotas, delivery deadlines, and bonuses.\n- Improper Training. Learning to drive an 80,000-pound tractor-trailer requires special training, and it isn’t uncommon for the importance of this training to go unappreciated.\n- Driver Distraction. Truck drivers spend many hours away from home and in their vehicles, causing them to engage in unsafe behavior, including texting and driving.\n- Poor Maintenance. Tractor-trailers are required by federal law to receive routine maintenance checks. 
Ignoring this requirement can lead to preventable and deadly accidents.\nInjuries caused by tractor trailer accidents are much more complex than an ordinary car accident case, and negligence in these types of cases doesn’t just fall on the truck driver. Trucking companies may be held liable if unsafe practices and procedures are found to be the cause of the accident.", "score": 33.076003956774926, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "Longer wheel bases on motorhomes and trucks towing RV trailers make it necessary to change your turning patterns. You must turn wider at intersections or the rear wheel may roll over the curb. Go further into the intersection before starting the turn and adjust your lane position to increase the turning radius.\nCurves in the highway can also be tricky. Stay more to the center of the lane for right turns so the rear wheels will not move off the pavement. For a left turn or curve, stay more to the right of the lane to prevent the back of the trailer from tracking into the oncoming lane\nRecreational vehicles have a high center of gravity, so turning corners and taking curves must be done at slower speeds to prevent swaying. Slowdown before you enter the curve. Be sure you use your rear-view mirrors to watch tracking and tail swing.", "score": 32.841484017523314, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "FOLEY, Ala. -- Drivers of large trucks wanting to turn at downtown intersections could be rerouted around the city in an effort to reduce congestion, safety hazards and curb damage, local officials said.\nState and city engineers met Tuesday to study a proposal to require trucks needing to turn off Ala. 59 or U.S. 98 to use the Foley Beach Express. 
Since 59 and 98 are state-maintained highways, Foley must have approval from the Alabama Department of Transportation before imposing any restrictions, Wendell \"Butch\" Stokes, city engineer, said Wednesday.\nStokes said that if the proposal meets state approval, Alabama highway officials would tell the city what kind of signs and regulations are needed to put the restrictions in place.\nHe said trucks trying to turn at downtown intersections, particularly Ala. 59 and U.S. 98, often tie up traffic.\n\"That's exceptionally tight,\" Stokes said of the turn.\nThe plan would not restrict trucks traveling through Foley without turning.\n\"The basic problem with downtown truck routing is trucks trying to make turn movements off 59 or off 98,\" Stokes told City Council members at the last Foley work session. \"If they were going straight through town, that's not a problem. So what we've done is talked about possibly moving all the truck-turning movements to the intersection of 98 and the Foley Beach Express.\"\nStokes said trucks making local deliveries will still be allowed to turn within the city.\nMayor John Koniar said trucks often back up traffic and run over sidewalks while trying to negotiate the tight turns downtown.\n\"Curbs are being torn up and plants are being ruined,\" Koniar said. \"It's a tight turn and traffic has to back up some time to make it; it's not a good safety situation.\"\nTrucks on Ala. 59 would get off the highway at the intersection of the Foley Beach Express on the north side of the city if they needed to turn onto U.S. 98. The vehicles would then go south on the Beach Express and turn on U.S. 98, Stokes said. Trucks coming north on Ala. 59 would turn on Baldwin County 20 and then to the Beach Express if they needed to go east or west on U.S. 98.\n\"All of the turning movements will be done out here at the Foley Beach Express and 98,\" Stokes said. The intersection of the Beach Express and Ala. 59 is about three miles north of U.S. 
98.", "score": 32.20121641599026, "rank": 17}, {"document_id": "doc-::chunk-2", "d_text": "Similarly, if I’m working on signal timing, and I know that large trucks are going to make up a significant percentage of my traffic stream, then I have to include the additional time it’s going to take to move each truck through the intersection. I can’t just make an assumption based on cars or I’ll end up with a huge congested mess and people calling me and asking what the hell I was thinking.\nThen of course, trucks impact how we design pavement! Trucks make a much larger impact on the asphalt and concrete than a typical car for one simple reason: they’re much heavier. So pavements have to be designed to handle the types and quantities of trucks that are expected.\nLastly, at least for this podcast, there is noise and air pollution. Trucks make up a big part of the noise and air pollution that we transportation engineers have to take into account nowadays. It is possible that when designing a new roadway, the additional noise may be so great that we have to install soundwalls in order to avoid impacting the surroundings. Trucks add a lot to that noise level, so again, it’s important to know what type of vehicle is being designed for.\nAnd that’s it about trucks. There is more that I could discuss, getting into the juicy engineering details, but that covers all the basics.\nThanks for listening to Talking Traffic. If you like what you heard, or didn’t, be sure to let me know by leaving a comment on the show notes or sending an email to bill at talkingtraffic.org.\nThe music you’ve been listening to is by Five Star Fall and can be found at magnatune.com. This episode is released under a Creative Commons Attribution-ShareAlike 3.0 license. 
Feel free to distribute and/or modify this podcast, but please link back to me and to talkingtraffic.org.\nUntil next time, have a great week.", "score": 32.0940118527228, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "Head-On Collisions: Why Negligent Truckers Cause These Tragic Crashes\nWhen fully loaded, a typical big rig weighs approximately 80,000 pounds, depending on cargo. A typical passenger vehicle weighs about 4,000 pounds. At an average speed of 65 mph, a semi-truck needs more than twice the stopping distance of a passenger vehicle. If a trucker doesn't have the time and space to avoid a crash, the force of the impact can be devastating. Unfortunately, head-on collisions are the leading cause of fatalities in truck accidents, causing 57.5 percent of all deaths in 2014.\nCauses of Deadly Head-On Collisions\nA head-on truck collision occurs when the front of a semi crashes into the front of another vehicle. Truck driver errors are usually the underlying reasons these wrecks occur. Common causes include:\n- Crossing the centerline. Whenever a trucker crosses the center lane, for whatever reason, a head-on collision is the likely result, as there's little time or space for lane correction.\n- Making wide turns. If a driver doesn't navigate a turn properly, he can cross right into another vehicle's lane.\n- Speeding around a curve. While speeding is never safe, doing so on a curve can make a trucker lose control of his truck and cross over the center lane.\n- Going through a stop. If a truck driver runs a red light or goes through a stop sign without the right of way, he can cause a head-on collision with a driver legally turning left in the intersection.\n- Passing improperly. When a truck driver attempts to pass a vehicle on a two-lane road or highway and doesn't have enough clearance, he could hit an oncoming vehicle if he has insufficient time to get back into his lane.\n- Causing a jackknife accident. 
Sharp turns and avoidance maneuvers sometimes cause a truck to jackknife, sending it head-on into other vehicles.\n- Driving when drowsy. If a trucker is fighting to stay awake or falls asleep at the wheel, he can veer into oncoming traffic.\n- Engaging in distracted driving. A distracted driver may be involved in talking on a cellphone, texting, looking at a GPS, or eating and drinking—behaviors that greatly reduce reaction time.\nIf you or a family member was injured in a head-on truck collision, call our office today to schedule your free consultation with Alan Morton.", "score": 31.855707470175776, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "These turns are dangerous for the driver, pedestrians and oncoming traffic.\nIn fact, according to the latest highway safety data, 31% of all serious accidents involve left-hand turns.\nThe percentage of right-hand turns that cause a problem?\nDid you also know that all of the delivery routing programs for companies like UPS and Pepsi preclude left-hand turns across traffic because early studies by MIT determined they were not only time-consuming but dangerous (and companies wanted to avoid the expense associated with their drivers being badly hurt and their trucks incurring substantial damage in the event of being struck by an oncoming vehicle)?\nNow fast forward to driverless cars. The engineers designing these modern marvels claim that “teaching” a driverless car to safely manage a left-hand turn is in fact one of the most difficult challenges facing product developers.\n“How can that be,” you ask?", "score": 31.628100796043523, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "Driving a tractor-trailer requires tremendous skill, experience and attention. For this reason, truckers must qualify for special commercial driver’s licenses (CDL) in Texas before they can get behind the wheel of an 18-wheeler. 
In addition, commercial vehicle drivers are subject to stricter rules designed to protect the safety of motorists.\nHowever, drivers face unique challenges when operating extremely large and heavy trucks that put even the most professional, careful driver at risk of an accident. And, alarmingly, too many truck drivers are distracted, intoxicated, impaired by drugs, sleep-deprived, inexperienced, and/or driving too fast so they can bring home a bigger paycheck.\nYou can take simple steps to avoid a tractor-trailer accident, including:\n- Do not decelerate suddenly in front of a truck unless necessary: Even an empty tractor-trailer is very heavy and requires a longer time to stop. A fully loaded semi can take a distance of three football fields to stop. By remaining alert to hazards ahead of you, you can slow down gradually so the truck driver has more time to react.\n- Let a tailgating trucker pass you: If a semi is following close behind you, move out of the way as soon as it is safe to do so.\n- Don’t tailgate an 18-wheeler: Tailgating any vehicle is a bad idea. However, because you cannot see around a tractor-trailer, you may be unaware of heavy traffic, stoplights and other issues that require the truck to brake.\n- Be mindful of blind spots: Think of your surprise at realizing that a car or motorcycle was travelling in your car’s blind spot. Now imagine this blind spot multiplied several times over on a tractor-trailer. Your car remains out of view while you are behind the truck and as you pass. You do not become visible until you are about parallel to the tractor.\n- Be patient — Getting stuck behind a semi can be frustrating, but wait until you can safely pass. These few moments of patience could save your life.\n- Be predictable and use your blinker — Avoid making erratic lane-changes and use your blinker to alert the semi driver of your intention to switch lanes. 
Remember big rigs are not as easily maneuverable as a car and so the driver cannot respond as quickly.\n- Pay attention — You have a duty to concentrate on your driving.", "score": 31.541882836337273, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "If a trucker has his turn signal blinking, leave room for the truck to merge or change lanes. Indicate your willingness to allow the truck in by flashing your lights.\n- Finally – Be patient while trucks turn or back-up. It often takes time and concentration to back a trailer up without hitting anything and sometimes a truck driver needs to make several attempts to reverse in tight quarters or to make a turn.\nDo your part to stay safe. Driving responsibly, defensively, and sharing the roads can help reduce the number of injuries and fatalities caused by crashes with large trucks.\nIf you or a loved one has been in an accident involving a tractor trailer and would like to learn more about your legal rights, please contact me at (561) 366-9099 or by email at email@example.com", "score": 31.1648099882785, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "For years, the trend has been towards more, longer, heavier, faster, trucks. Trucking companies and independent truckers lobbied for these changes in state highway regulation to squeeze more profitability out of their trips.\nForces of physics rule, however, when an accident involves big trucks. The heavier, longer, and faster the truck, the more the driver’s reactions, experience and skill, and the truck’s brakes, steering, and tires must compensate for. Operators’ maneuverability is further compromised by added truck length or the addition of multiple trailers. Of course, a truck even with current legal limits on mass and velocity will crush the largest SUV or pickup truck. 
Unfortunately for thousands yearly, this means that being involved in an accident with a tractor-trailer doubles the likelihood of fatality.\nTruck freight is predicted to increase significantly. “The Annual Energy Outlook 2009 (AEO) projects annual truck vehicle miles traveled (VMT) to grow by 2.5 percent per year, for each of the next 20 years [Energy Information Agency, 2009]. This projection comes despite a first-time-ever decline in U.S. truck VMT in 2007 and 2008 because of high fuel prices and some recessionary pressure.\nThe more trucks on the road and the faster, longer, heavier, and less maneuverable they are, the greater the chances of injurious and fatal accidents. As congestion mounts on our crumbling Interstate Highway System, the risk is driven higher. In 2002, the U.S. Department of Transportation (DOT) estimated average daily long-haul truck traffic in the U.S. and mapped it.\nFederal Highway Administration, U.S. DOT, 2003.\nThe thickness of the red line corresponds with the daily volume of heavy truck traffic on that highway. Here is what the U.S. DOT anticipates will be the truck volume on U.S. highways in about 25 years:\nFederal Highway Administration, U.S. DOT, 2003.\nIf DOT projections are correct, certain highways will become little more than national truck routes.", "score": 30.77241958257305, "rank": 23}, {"document_id": "doc-::chunk-1", "d_text": "Delivery vehicles are often large in size to accommodate the goods/substances being delivered, which means they are unwieldy and difficult to stop quickly. Their size combined with the stopping difficulty means that anybody who happens to get in their way will be difficult to avoid and will not come off well in a collision. Even if the vehicle managed to swerve out of the way, it can still cause significant damage to structures if it were to hit them and cause health hazards if it were to spill its contents. 
Plus, large vehicles can be difficult enough to control in favourable conditions, but can be extremely dangerous on sites such as construction areas, where mud and gradients make driving treacherous.", "score": 30.005248502513684, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "CNN explores a burning question: why do commercial delivery couriers always favour right hand turns? Apparently this one practice saves these services “millions of gallons of fuel each year, and avoids emissions equivalent to over 20,000 passenger cars.” By avoiding left turns, you are avoiding delays that can make for traffic build-ups. A left-turn signal phase alone adds approximately 45 seconds to a left turn.\nAnd there is more: “a study on crash factors in intersection-related accidents from the US National Highway Traffic Safety Administration shows that turning left is one of the leading “critical pre-crash events” (an event that made a collision inevitable), occurring in 22.2 percent of crashes, as opposed to 1.2 percent for right turns.” Over 60 per cent of crashes happen while turning left, compared to 3 per cent of crashes involving right turns.\nData collected by New York City’s transportation planners indicate that pedestrians are three times more likely to be killed by left-turning vehicles. UPS assesses all its routes to avoid left-hand turns, to minimize idling time and improve on-time performance. It has reworked its Google Maps routes to minimize left-hand turns, a practice developed in the 1970s under the term “Loop Dispatch”.\nWhile this works well for pre-planned routes, everyday driving is less predictable.
But when you see a big company courier truck making a left turn in traffic, you will know they are deviating from the “Loop Dispatch” plan.", "score": 29.974021969392254, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "The Hazard Perception Test\nSafe gaps - turning right\nRight turns are more complicated than left turns because you need to look for traffic approaching from the left and right. You may also need to give way to pedestrians crossing the road that you are entering. At an uncontrolled intersection (four-way intersection with no Stop or Give Way signs) you may also have to watch for approaching traffic.\nFeatures of right turns\nRight turns are generally less sharp than left turns.\nThe next picture shows the path taken by a left and right turning vehicle. While you have to cover more road to complete a right turn, because it is shallower you can generally accelerate quite quickly. This is necessary because you need to quickly match the speed of the traffic on the road that you are entering. As with left turns, the faster the traffic, the more time and space you need to complete a right turn.\nGuidelines for right turns\nGap selection for right turns is a skill that will take time and practice to develop. Here are some guidelines to help.\nIf you are turning right in a 60km/hour zone you will need a gap of at least 4 seconds between your car and vehicles approaching from the right, but a gap of at least 6 seconds from the left. The following picture illustrates this. This assumes that the traffic is travelling at 60km/hour - it may actually be faster - and that there is no on-coming traffic.\nYou need a smaller gap on the right because you will more quickly \"clear\" the traffic approaching from the right. But you need a bigger gap on the left because you need time to complete the turn on the far side of the road and accelerate to the speed of the traffic. 
Because it will take you about 3 seconds to get to the other side of the road, a 6 second gap to the left allows you 3 seconds for accelerating to the speed of the stream of traffic you are entering.\nTurning right at a cross intersection\nTurning right at a cross intersection (i.e. one with four directions) with oncoming traffic and traffic from the right and left is harder. You will need to look three ways to judge a safe gap: to the front, the left and the right. As shown in the picture below, you are also likely to be facing a Stop or Give Way sign.", "score": 29.134189973634236, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "I got a taste last spring, when TriMet offered to let me test drive one in the Central Garage parking lot. Turning is anything but simple, even at the required 5 mph or less.
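The gap guidelines above are just time-times-speed arithmetic: a time gap at a given traffic speed corresponds to the distance an approaching car covers in that time. A minimal sketch (the helper name is illustrative; the 60 km/h and 4/6-second figures are the example values from the text):

```python
# Converting the gap-time guideline into approximate distances.
# gap_distance_m is an illustrative helper, not from any driving handbook.

def gap_distance_m(speed_kmh: float, gap_seconds: float) -> float:
    """Metres an approaching vehicle travels during the chosen gap."""
    speed_ms = speed_kmh * 1000.0 / 3600.0  # convert km/h to m/s
    return speed_ms * gap_seconds

# At 60 km/h, the recommended gaps work out to roughly:
print(round(gap_distance_m(60, 4), 1))  # gap to the right: 66.7 m
print(round(gap_distance_m(60, 6), 1))  # gap to the left: 100.0 m
```

The asymmetry in the text follows directly: the extra 2 seconds on the left buys roughly 33 m of additional clearance for crossing and accelerating.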
It requires at least six steps of scanning in a matter of seconds.\nIn March, TriMet sent out a training bulletin to all bus drivers, reminding them to properly scan with "advanced eye use skills" and to "rock and roll" in their seats to see around various blind spots on the big vehicles as they turned into an intersection.\n"Intersections are usually the most dangerous locations along your route," the bulletin states.\nRead the bulletin (.PDF).\nHere are the six scans that every driver is required to make when turning into an intersection:\n1. Scan before you move\n2. Scan before you turn\n3. Scan during your turn\n4. Scan at the completion of your turn\n5. Scan before you enter an intersection\n6. Scan your mirrors every 5 to 8 seconds\n"You need information before you move, so scan before you take your foot off of the brake," the bulletin reminds drivers. "Turning is a risky maneuver as you are often crossing the path of other motorists, cyclists and pedestrians. That’s why turning requires three sets of scans: before, during and after the turn."\nInvestigators will likely never know if the operator of the bus that hit five pedestrians at Northwest Broadway and Glisan Street performed all of those scans before turning into the crosswalk.\nBut it's clear that the term "driving" doesn't do the job justice. It's more of an 8-hour wrestling match involving a big wheel, a bigger bus, sharp reflexes and clear cognizance.
Do this until you are confident that you can make consistently safe gap selections when turning right at T intersections while facing a Stop or Give Way sign.\nWhen you feel confident, repeat Steps One-Four for turns at 4-way intersections where you need to judge safe gaps to the front, left and right and are facing a Stop or Give Way sign.", "score": 28.56637317145652, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "A crash between a passenger car and an eighteen-wheeler often leaves the occupants of the smaller vehicle with devastating injuries.\nHowever, a recent study finds that the installation of advanced safety technology in large trucks can dramatically reduce truck-car collisions.\nToo many crashes\nIn 2015, there were over 400,000 roadway crashes involving large trucks across the U.S. These accidents resulted in more than 4,000 fatalities and 116,000 injuries. In 2019, in the state of Louisiana alone, large truck crashes caused 106 fatalities and 2,740 injuries. A study undertaken by the AAA Foundation for Traffic Safety shows how four advanced safety features installed on trucks could significantly curb the incidence of truck-car collisions.\nA big rig might collide with a smaller vehicle for various reasons. For example, poor maintenance can allow wear and tear to culminate in defective brakes or blown-out tires. Overloading cargo is a dangerous practice that can cause the truck to be unbalanced and prone to jackknifing or rolling over.
Any issue that makes a large truck difficult to control can put nearby vehicles in extreme danger.\nThe AAA study highlighted four advanced safety technologies that could benefit truck drivers:\n- Warning systems: To alert the driver when the truck drifts out of its lane\n- Air disc brakes: To provide both performance and maintenance advantages over drum brakes\n- Automatic emergency braking system: To detect when the truck is too close to the vehicle in front of it and to apply brakes automatically if necessary\n- Video-based safety monitoring system: To monitor driver performance and behavior through use of an onboard camera and other sensors\nMotorists can help keep themselves safe by leaving large trucks plenty of room to maneuver, switch lanes and stop. However, the goal of advanced safety technology is to prevent collisions. The AAA study indicates that the video-based safety monitoring system alone has the ability to prevent up to 63,000 crashes, 17,733 injuries, and almost 300 fatalities annually.", "score": 28.56405329235716, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "Large Truck Accidents\nPresented by Scott C. Murray, ESQ.\nMany of us have probably had a scary experience or two with 18-wheelers on the highway. In 2009, nearly 300,000 large trucks were involved in accidents, accounting for 74,000 injuries and more than 3,000 deaths. There’s no dispute that accidents involving tractor trailers often result in catastrophic and sometimes fatal injuries.\nSo what are the major issues that we face driving side by side with these tractor trailers out on the roadway? Well, first, the National Highway Traffic Safety Administration reports that driver fatigue is responsible for 30 to 40% of all large truck crashes. In a recent survey, almost 20% of truckers admitted to falling asleep at the wheel at least once in the prior three months.
And this is in spite of Federal regulations that limit truck drivers’ on-duty hours.\nAnother issue is what is called an underride accident. This is when a car rides up underneath the trailer. According to the Insurance Institute for Highway Safety, up to half of all accidents between a truck and a car involve an underride situation, which dramatically increases the chance of death or severe injury. A European study revealed that 57% of the fatalities and 67% of the serious injuries could be prevented with improved rear underride protection.\nOther causes of accidents include mechanical failures because of sloppy maintenance, poor driver training, driving under the influence of alcohol or drugs, and, of course, distracted driving.\nHere are four tips on sharing the road and reducing your chances of getting into an accident with an 18-wheeler:\n- 1st – Do not ride in a trucker’s blind spot. If you can’t see the truck driver in his mirrors, he probably can’t see you.\n- 2nd – Leave plenty of space between you and a truck. Tractor trailers are heavy and difficult to maneuver. They need more time than a passenger car to react to road conditions. A fully-loaded large commercial truck can weigh 80,000 pounds or more, and it can take as much as three times the distance to stop as the average passenger car. When following behind a truck, leave yourself 20 to 25 car lengths. This gives you enough time to react if road conditions suddenly change.\n- Also – Allow a truck to merge or change lanes.
This is not always true.\nThe reality is that by keeping an eye out for these four typical causes of large rig accidents, you can make your driving experience a lot safer. However, if you are involved in a truck accident, do not hesitate to contact The Law Offices of Larry H. Parker at 800-333-0000 for assistance.\nThe Severity of Accidents involving Large Trucks\nYou may be astonished to discover that, on average, there are over 3,300 fatal large-truck crashes on American highways each year, plus an additional 74,000 incidents that result in injury. This is according to the most recent information from the National Highway Traffic Safety Administration.\nThey also found that three out of every four deaths occurred in automobiles that collided with semi-trucks. Only 15 percent of deaths occurred in large trucks, and 10 percent of fatalities were people who were not occupants of either vehicle. Until 2010, the number of large rig incidents was decreasing; then the statistics began to trend higher.\nVehicles are not being maintained properly\nWhen a semi truck driver fails to keep their vehicle in perfect working order, the likelihood of an accident increases significantly. In many situations, it is difficult to tell whether a vehicle has been properly maintained. Keep your distance, however, if you can hear a truck’s brakes squealing when it stops, see that its tires are in bad condition, or otherwise sense that it is not properly maintained.\nDrivers with little or no experience\nOnce again, this can be difficult to detect from your car, but if you observe that a motorist appears anxious, makes exaggerated motions, or otherwise appears inexperienced, you should presume that they are at a high risk of being involved in an accident.
Keep your distance.\nLoaded trucks that are not properly secured\nThere can be significant consequences if a truck is not properly loaded, whether it is laden with too much cargo or just unevenly loaded, which can impair the driver’s ability to manage their vehicle.", "score": 28.292983858402042, "rank": 32}, {"document_id": "doc-::chunk-3", "d_text": "Reasons for limiting the legal trailer configurations include both safety concerns and the impracticality of designing and constructing roads that can accommodate the larger wheelbase of these vehicles and the larger minimum turning radii associated with them.\nMost states restrict operation of larger tandem trailer setups such as triple units, the “Turnpike Double” (twin 48–53 ft units) or the “Rocky Mountain Double” (a full 48–53 ft unit and a shorter 28 ft unit). In general, these types of setups are restricted to tolled turnpikes, such as I-80 through Ohio and Indiana and specific Western states. Tandem setups are not restricted to certain roads any more than a single setup. The exceptions are the units listed above. They are also not restricted because of weather or “difficulty” of operation.
Conversely, “conventional” cab tractors offer the driver a more comfortable driving environment and better protection in a collision, as well as eliminating the need to empty the driver’s personal effects from the tractor whenever the engine requires service. Since the most common trailer used in the U.S. is 53 feet (16 m) in length, the difference in freight capacity between cab-over and conventional cab tractor/trailer combinations is negligible. In Europe the entire length of the vehicle is measured as total length, while in the U.S. the cabin of the truck is normally not part of the measurement.\nIn Europe, usually only the rear tractor axle has twin wheels, while larger size single wheels are used for the cargo trailer. The most common combination used in Europe is a semi tractor with two axles and a cargo trailer with three axles, giving five axles and 12 wheels in total. Less commonly used (though common in Scandinavia) are tractors with three axles, which feature twin wheels either on one or both rear axles.", "score": 27.474499421501736, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "If you feel like you can’t drive more than a mile or so on the highway without seeing at least one or two big trucks, there’s a good reason for that. Whatever you might call them – big rigs, large trucks, 18-wheelers, semis, or tractor-trailers, there are about 10.5 million currently registered in the United States. That’s a significant number, and one that only continues to grow. Commercial trucks like these are critical to keeping our economy moving. Unfortunately, they can also be dangerous.
After all, commercial trucks are much larger than standard passenger vehicles, and as a result, when a crash occurs, it is typically the passenger vehicle (and its occupants) that sustains the most damage.\nAccording to the National Safety Council, in 2018, 4,862 big trucks were involved in fatal crashes – totaling 9% of the total fatal crashes in the United States that year. That same year, 112,000 large trucks were involved in injury-inducing accidents. If you have recently been the victim of an accident involving a large truck, you clearly aren’t alone. You may have arrived at this page injured, overwhelmed, and unsure what steps to take next. If so, the good news is that you do have rights under the law – and you should assert them. At That Clauson Law Firm, we’re here to help.\nIn thriving and growing cities like Cary, most people use the roadways every day – to drive to work, get their children to school or daycare, travel to the mountains or beach for vacation, or simply run day-to-day errands. The reality of the situation is that it’s almost impossible to avoid encountering big trucks as you go about your day. It is important, though, to realize that these accidents can occur in an instant, and often through no fault of the driver who is the victim. Some of the most common causes of truck accidents include:\nAlthough these accidents can happen in an instant as a result of any one of these causes, the consequences they have can be long-lasting, and often far more severe than injuries caused in other types of car accidents, simply because of the size of the truck itself. Injuries may require ongoing treatment for months, or even years, and the truth is that life may never be the same after one of these accidents.", "score": 27.309521619721764, "rank": 34}, {"document_id": "doc-::chunk-1", "d_text": "Look for pedestrians and use your turn signal before you exit.
If there is no traffic in the roundabout, you may enter without yielding.\nTrucks/oversize vehicles and roundabouts\nRoundabouts are designed to accommodate vehicles of all sizes, including emergency vehicles, buses, farm equipment and semitrucks with trailers. Oversize vehicles and vehicles with trailers may straddle both lanes while driving through a roundabout.\nMany roundabouts are also designed with a truck apron, a raised section of pavement around the central island that acts as an extra lane for large vehicles. The back wheels of the oversize vehicle can ride up on the truck apron so the truck can easily complete the turn, while the raised portion of concrete discourages use by smaller vehicles.\nBecause large vehicles may need extra room to complete their turn in a roundabout, drivers should remember never to drive next to large vehicles in a roundabout.", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Driving next to a large commercial truck can feel a bit intimidating due to its massive size and weight. Because they are so large, big rigs have substantial blind spots, which are also referred to as “no zones.” When other vehicles in the vicinity are driving within those “no zones,” the truck driver is unable to see that vehicle. This can result in devastating truck accidents if the truck driver changes lanes when another vehicle is in the truck’s blind spot. The passenger vehicle can be crushed, forced off the road, or end up hitting another vehicle because the truck driver did not see the other motorist.\nUnfortunately, the occupants of the passenger vehicle tend to suffer the most severe injuries.\nTruck drivers are trained on how to safely operate an 80,000-pound tractor trailer. However, drivers should also be aware of a truck’s blind spots and avoid driving within these areas. 
One rule of thumb to keep in mind is that if you cannot see the truck driver’s face in the side-view mirror, it is unlikely that he or she can see you.\nKnow a Tractor Trailer’s Blind Spots\nA tractor trailer’s three main blind spots include the following:\nFront: Because truck drivers’ seats are positioned much higher than other vehicles, it can be difficult for them to see another car when it is directly in front of them. This is because the front blind spot is usually approximately 20 feet long. As such, merging into a lane directly in front of a truck can be extremely dangerous. If a passenger car is rear ended by a large truck, it can cause serious injuries and fatalities.\nSide: Side blind spots tend to be the most dangerous because they take up the most amount of space. For example, the right side blind spot can be the entire length of the trailer and extend into three lanes. For this reason, drivers should avoid passing trucks on the right. Even though the left side of the truck has a blind spot, trucks drivers expect motorists to pass on the left.\nRear: Even though large trucks are equipped with rearview mirrors, they are essentially of no use, particularly when other motorists are following too closely. Ideally, drivers should stay approximately 25 car lengths, or the length of a football field, behind a tractor trailer. That way, the truck driver will be able to see other drivers on the road.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "There is a natural fear among a subset of drivers when they come across the path of a large semi-truck. Some are filled with dread or even experience anxiety by merely driving near a truck, passing a truck, or having a truck operate in an adjacent lane. 
When asked about these concerns, drivers in the Phoenix area regularly say that the large size and weight of a semi-truck reminds them that they are at an increased risk of harm if a collision occurs, and that trucks therefore make them nervous.\nThis belief is understandable, particularly when you realize that national data supports the fact that drivers and passengers in traditional cars face the brunt of the risk of harm during a truck accident.\nMany risks can arise when a truck and a car collide, when a truck strikes a fixed object, or when two or more trucks are involved in a collision. Among the most significant of the risks is a rollover, or an event where at least one vehicle involved leaves the ground and rolls onto its side or roof. Rollover crashes are more likely to cause injuries than many other forms of accidents and are more likely to cause a victim to become trapped inside a car, necessitating emergency rescue efforts to free that victim.\nIn 2012, there were approximately 317,000 accidents involving large trucks in the United States, according to the Federal Motor Carrier Safety Administration. In these collisions, a rollover incident was the first event in roughly five percent of all fatal accidents and three percent of all accidents, but in addition to those crashes, rollovers happened in many more instances as a secondary, tertiary, or later result of an impact.\nWhy are rollovers so prevalent when it comes to semi-truck accidents in Arizona? First, semi-trucks are heavier than passenger vehicles, which means that they need a greater distance to slow or stop than a typical car. If a semi-truck is forced to brake forcefully and with little warning, that semi may not stop in time to avoid a collision, and some semi drivers may be forced to take evasive actions as a result.
Steering away from a roadway hazard or jerking the wheel to one side or another significantly increases the risk of a rollover, as the sudden shift in momentum can move the truck’s center of gravity and destabilize it, causing the trailer to roll. Secondly, the tires on semi-trucks often experience greater wear and are kept in service longer than tires on passenger cars.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "A tractor-trailer normally weighs up to 80,000 pounds, about twenty times the weight of the average passenger vehicle. An 18-wheeler will take far longer to stop than lighter vehicles, potentially making it much more dangerous. Anderson Injury Lawyers have represented hundreds of people injured by commercial trucks. We know how hazardous they are, especially when they’re unsafely driven by negligent drivers.\nA truck hauling an oversize load in Texas could be as heavy as 240,000 pounds, reports Wideloadshipping.com. These massive loads pose serious risks to those of us on the state’s highways and local roads. Stopping these vehicles in time to avoid an accident may be difficult or impossible.\nStopping distances for all vehicles increase as they become heavier, and when road conditions such as snow, ice, or rain make stopping more difficult. A fully loaded, well-maintained tractor-trailer traveling in good road conditions at highway speeds needs a distance of nearly two football fields to stop, according to the Federal Motor Carrier Safety Administration (FMCSA).\nWhy Does it Take So Long for an 18-Wheeler to Stop?\nThe factors involved in stopping time and distances include:\n- Driver delay: If the truck driver is distracted, fatigued, impaired, intoxicated, or asleep, the driver won’t recognize the need to stop.
The truck continues to move as the driver is in a mental fog.\n- Brakes: If they’re worn and poorly maintained, they won’t work as well, so it will take longer for the truck to stop.\n- Tires: Truck tires are not all the same. Some reduce fuel consumption by rolling more easily, but they lack grip if the road is wet, lengthening braking distances. No matter the design, the more they’re worn, the less effective they are at stopping the truck.\n- Road surface: The more friction there is between a tire and the road, the easier it will be for the truck to stop. If the road is wet or covered in oil spills, gravel, or sand, the braking distance will be longer.\n- Speed: The faster the truck is traveling, the longer it will take to stop. The slower it’s going, the more the driver’s alertness and braking ability matter.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "Tractor Trailer Collisions\nA high percentage of traffic crashes and deaths involve large trucks. A large truck is any truck whose vehicle weight is over 10,000 pounds. Because of their size, crashes involving large trucks are more likely to result in serious injury and death than are car crashes. Approximately 10% of all those injured in a large truck crash will die. Large trucks are more likely to be involved in multiple-vehicle crashes than are passenger cars.\nIf you are injured in a collision with a tractor-trailer, you need the help of an attorney who is familiar with the trucking industry. Grant Morain has the training and experience to know where to look and what questions to ask to find out if there has been a violation of the safety rules. He knows who to call if an expert in trucking safety is needed.
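The factors listed above combine in the textbook stopping-distance approximation: distance covered during the driver's reaction delay plus the kinematic braking distance v²/(2μg). The sketch below is illustrative only; the reaction time and friction coefficients are assumed example values, not FMCSA figures, and real trucks vary widely:

```python
# Illustrative stopping-distance model: total = reaction distance + braking
# distance. mu (tire/road friction) and reaction_s are assumptions chosen
# to show how the factors above interact, not measured truck data.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_kmh: float, reaction_s: float = 1.5,
                        mu: float = 0.7) -> float:
    """Approximate stopping distance in metres."""
    v = speed_kmh / 3.6                 # convert km/h to m/s
    reaction = v * reaction_s           # distance covered before braking starts
    braking = v ** 2 / (2 * mu * G)     # kinematic braking distance
    return reaction + braking

# Lower effective friction (worn tires, wet road) stretches the distance
# dramatically at highway speed:
print(round(stopping_distance_m(100, mu=0.7)))   # 98 (metres, dry-road example)
print(round(stopping_distance_m(100, mu=0.35)))  # 154 (metres, low-grip example)
```

Because braking distance scales with the square of speed, driver delay dominates at low speed while friction and speed dominate at highway speed, which matches the "Speed" bullet above.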
If the accident needs to be reconstructed to determine how it happened, he can call on experts that can help make sure the trucking company is held responsible.", "score": 26.907591512695294, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "Road accidents involving heavy-duty trucks have become more frequent in recent years. Most of these trucks aren’t your ordinary pickup trucks but those massive 18-wheelers carrying big loads. The factors are different, but a lot of them are related to the drivers of those trucks. However, in spite of causing these accidents, the truck drivers almost never get hurt because of the tough construction of these 18-wheelers.\nA terrible accident of this kind would typically involve many small cars, injuries and, very likely, death. Should you want to avoid these types of accidents, you probably need to get better at defensive driving. The unfortunate part is that, in truck driving school, drivers are only shown how to drive fast, and the concept of safe driving is largely disregarded. Ask your trainer not to skip any lessons, and explore further during your free time.\nMany people think that driving a truck is the same as driving a sedan. Even though the driving basics and the traffic rules are related, we are talking about two different vehicles. A small vehicle demands less attention, its maneuverability is higher, and even accidents involving small cars are less dangerous. Just think about parking a truck, and you will grasp the differences.\nA great chunk of incidents are caused by truckers falling asleep at the wheel. Even though the transport companies are obliged to allow reasonable transport times (a company must allow 3 days for a 1,500-kilometer transport in some countries), some drivers would rather stay at home for one or two days and then try to catch up on the last day.
Aiming to operate a vehicle for twenty-four hours without rest is very difficult, even with caffeine and other energizers in your body.\nSeveral major transport companies have installed GPS in their trucks to avoid these severe accidents. GPS tracking prevents drivers from attempting to cut corners and pushes them to drive on a realistic schedule. While the upfront costs are higher, it’s going to pay off in the long term. A very important factor is that the expense of being in an accident will be avoided. In addition, people will regard your company as commendable for making the effort to ensure safety and reliability.\nThere are several instances when drivers would rather take the truck driver’s license and skip the standard car test. Even though this doesn’t occur often, it does happen. It’s a good idea to get a normal car license first, then work your way up to a big truck.", "score": 26.84113907175503, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "The quantity of road accidents related to heavy-duty trucks has gone up over the past few years. These types of trucks aren’t your typical pickup trucks but those enormous 18-wheelers carrying big loads. Though the reasons for these accidents vary, the majority are the fault of the drivers of these massive trucks. Surprisingly, even when those drivers are to blame, they usually get away from those kinds of accidents unharmed because of the robust construction of the truck.\nA serious accident of this kind might involve several small cars, injuries, and even the death of other drivers. If you want to avoid these types of crashes, you probably need to get good at defensive driving. Regrettably, the truck driving schools try to train drivers fast, ignoring safety matters. You should make an effort to study defensive driving on your own.\nLots of people think that driving a semi truck is as easy as driving a regular car.
Though a car and an 18-wheeler do the same things, these vehicles handle very differently. A small vehicle requires less attention, is more maneuverable, and even accidents involving small cars are less dangerous. You can imagine the real difference by comparing parking a truck with parking a car.\nA large number of mishaps involving trucks are caused by tiredness. Transport outfits are required to allow a practical amount of time for travel, like three days for a distance of 1,500 kilometers, but many truck drivers will laze around for the first two days and try to fit the whole trip into one. Driving for twenty-four hours straight is difficult even for the most experienced driver.\nA few major transport companies have fitted GPS in their trucks to avoid these severe accidents. GPS tracking discourages drivers from cutting corners and keeps them driving in a timely manner. Even though the upfront cost is high, it will be of significant advantage in the long term. To begin with, just think about the costs if one of your trucks is involved in a massive accident. In addition, the general public will view your company as commendable for making the effort to ensure safety and reliability.\nOccasionally, drivers choose to take their truck driver’s license directly without first training on smaller cars. Although this doesn’t occur on a regular basis, it does happen.
In 2013, approximately 1 out of every 10 fatal crashes involved a large truck. Nearly 342,000 large trucks were involved in police-reported traffic crashes in the United States, accounting for 9 percent of all vehicles involved in fatal crashes, and 3 percent of all vehicles involved in injury and property-damage-only crashes. Of the 3,964 truck-related fatalities in 2013: 71 percent were occupants of other vehicles; 17 percent were occupants of large trucks; and 11 percent were non-occupants.\nAs the above-described NHTSA statistics reveal, large truck crashes can have fatal or devastating consequences for occupants of other motor vehicles and pedestrians. The federal government imposes minimum safety standards for commercial motor carriers to ensure that: (1) commercial motor vehicles are maintained, equipped, loaded, and operated safely; (2) the responsibilities imposed on operators of commercial motor vehicles do not impair their ability to operate the vehicles safely; (3) the physical condition of operators of commercial motor vehicles is adequate to enable them to operate the vehicles safely and the periodic physical examinations required of such operators are performed by medical examiners who have received training in physical and medical examination standards; (4) the operation of commercial motor vehicles does not have a deleterious effect on the physical condition of the operators; and (5) an operator of a commercial motor vehicle is not coerced by a motor carrier, shipper, receiver, or transportation intermediary to operate a commercial motor vehicle in violation of these safety regulations. 49 U.S.C.A. § 31136. These federal motor carrier safety regulations appear at 49 CFR § 390.1 et seq.
and address a variety of safety issues including, without limitation:\nFederal and state laws and safety regulations impose a duty upon truck companies and other commercial motor carriers to act responsibly, and keep our highways and roads safe for other motorists and pedestrians.", "score": 25.765710465195323, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "An intersection or an at-grade junction is a junction where two or more roads converge, diverge, meet or cross at the same height, as opposed to an interchange, which uses bridges or tunnels to separate different roads. Major intersections are often delineated by gores and may be classified by road segments, traffic controls and lane design.\nOne way to classify intersections is by the number of road segments (arms) that are involved.\nAnother way of classifying intersections is by traffic control technology:\nAt intersections, turns are usually allowed, but are often regulated to avoid interference with other traffic. Certain turns may not be allowed or may be limited by regulatory signs or signals, particularly those that cross oncoming traffic. Alternative designs often attempt to reduce or eliminate such potential conflicts.\nAt intersections with large proportions of turning traffic, turn lanes (also known as turn bays) may be provided. For example, in the intersection shown in the diagram, left turn lanes are present in the right-left street.\nTurn lanes allow vehicles to cross oncoming traffic (i.e., a left turn in right-side driving countries, or a right turn in left-side driving countries), or to exit a road without crossing traffic (i.e., a right turn in right-side driving countries, or a left turn in left-side driving countries). Absence of a turn lane does not normally indicate a prohibition of turns in that direction. Instead, traffic control signs are used to prohibit specific turns.\nTurn lanes can increase the capacity of an intersection or improve safety.
Turn lanes can have a dramatic effect on the safety of a junction. In rural areas, crash frequency can be reduced by up to 48% if left turn lanes are provided on both main-road approaches at stop-controlled intersections. At signalized intersections, crashes can be reduced by 33%. Results are slightly lower in urban areas.\nTurn lanes are marked with an arrow bending into the direction of the turn which is to be made from that lane. Multi-headed arrows indicate that vehicle drivers may travel in any one of the directions pointed to by an arrow.\nTraffic signals facing vehicles in turn lanes often have arrow-shaped indications. North America uses various indication patterns. Green arrows indicate protected turn phases, when vehicles may turn unhindered by oncoming traffic. Red arrows may be displayed to prohibit turns in that direction.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "2. Stay Out of a Big Rig’s Blind Spot\nAvoid tailgating or closely driving parallel to the truck’s trailer. These are large blind spots that big rigs have and remaining in these areas is especially dangerous as you may not have sufficient time to react to any erratic stops or changes made by the semi-truck. Instead, keep about three to four car lengths away from the trailer if you are traveling behind one. This gives you enough time to react to any sudden changes in traffic.\nWhen passing them on the interstate, pass by quickly on the left lane. If you are ahead of them on the right lane, slow down slightly, just enough for the operator to get the hint that he or she is allowed to pass you in the left lane quickly.\n3. Don’t Buy Into the Drafting Myth\nThe drafting myth is the common belief that drivers can conserve fuel and improve their own gas mileage by driving very close to the backend of an 18-wheeler by utilizing aerodynamics. 
In other words – tailgating.\nDon’t risk your safety and your family’s by falling for this myth, as it requires you to be dangerously close to a traveling big rig. Much, much closer than the three-to-four-car-length recommendation. Any sudden changes while in transit can spell disaster.\n4. Watch Out For Big Rig Right Turns\nBig rigs need a lot of space when making right turns. In order to stay on the road, the operator has to overcompensate when making their turn, and this may entail occupying multiple lanes. In order to preserve your own safety in this particular circumstance, let the operator have the lane if he or she is indicating a right turn. Don’t pull up to the right of the rig, especially in his or her blind spot, if their blinker light is signaling a right turn.\n5. When the Weather is Poor, Stay Indoors\nIt’s no secret that heavy rain, ice, snow, or high winds can create dangerous driving conditions, but if you are an 18-wheeler operator, it’s just another workday. Driving during these harsh conditions may be a requirement for them in order to get their cargo to its rightful destination, but if you don’t need to be out, then stay off the roads.\nIf you or a loved one has been the victim of a large truck accident, trust in the Cardone Law Firm to deliver the quality legal care your situation deserves.
Unfortunately, blind spots reduce truck drivers’ visibility, making their maneuvers more risky. Because of the sheer size of trucks alone, both truck drivers and drivers of passenger vehicles need to be careful when sharing the road.\nWhen a trucker doesn’t look very closely in his mirrors, he could change lanes directly into a smaller car. Not only can a truck changing lanes put a motorist in danger—causing the driver to swerve—but it can often lead to a serious crash. When truck drivers make poor choices and poor maneuvers, severe truck wrecks can occur.\nShockingly, truck drivers aren’t the only ones who are to blame in trucking accidents. In fact, drivers of cars often drive in trucks’ blind spots and cut trucks off. Because trucks can present a serious danger to motorists, it is important that all Sarasota drivers learn about semi trucks and how to share the road with them. Some of the things drivers should learn about trucks include:\n- No-Zones. Blind spots in large trucks are known as “No-Zones.” Because semis are large and high off the ground, there are areas around the truck that are hard for truck drivers to see. In fact, there are many No-Zones around commercial trucks including those in the front, rear, and side. Drivers need to avoid driving in these blind spots in order to reduce their risks of being in a truck accident.\n- Mirrors. If you can see a truck driver in his side mirror, he can probably see you. However, if you cannot see the driver, chances are he probably can’t see you. If this is the case, you should speed up or slow down in order to get out of the truck’s blind spot.\nUnfortunately, No-Zones are dangerous and are common areas involved in truck crashes.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-2", "d_text": "The braking distance is also four times longer. Triple the speed from 20 to 60 mph and the impact and braking distance is nine times greater. 
At 60 mph, your stopping distance is greater than the length of a football field. Increase the speed to 80 mph, and the impact and braking distance are 16 times greater than at 20 mph. High speeds greatly increase the severity of crashes and stopping distances. By slowing down, you can reduce braking distance.\nThe Effect of Vehicle Weight on Stopping Distance - The heavier the vehicle, the more work the brakes must do to stop it and the more heat they absorb. The brakes, tires, springs and shock absorbers on heavy vehicles are designed to work best when the vehicle is fully loaded. Empty trucks require greater stopping distances because an empty vehicle has less traction.\nSometimes it is hard to know if the road is slippery. Following are signs of slippery roads:\nFollowing are some rules to help prevent right-turn crashes:\nWhenever you are driving a vehicle and your attention is not on the road, you are putting yourself, your passengers, other vehicles and pedestrians in danger. Distracted driving can result when performing any activity that may shift your full attention from the driving task. Taking your eyes off the road or hands off the steering wheel presents obvious driving risks. Mental activities that take your mind away from driving are just as dangerous. 
Your eyes can gaze at objects in the driving scene but fail to see them because your attention is distracted elsewhere.\nActivities that can distract your attention include: talking to passengers; adjusting the radio, CD player or climate controls; eating, drinking or smoking; reading maps or other literature; picking up something that fell; reading billboards and other road advertisements; watching other people and vehicles including aggressive drivers; talking on a cellphone or CB radio; using telematic devices (such as navigation systems, pagers, etc.); daydreaming or being occupied with other mental distractions.\nRailroad crossings with steep approaches can cause your unit to hang up on the tracks. Never permit traffic conditions to trap you in a position where you have to stop on the tracks. Be sure you can get all the way across the tracks before you start across. It takes a typical tractor-trailer unit at least 14 seconds to clear a single track and more than 15 seconds to clear a double track.\nDo not shift gears while crossing railroad tracks.\nSelect the Right Gear Before Starting Down the Grade.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "It is well-known that one of the most dangerous things Americans do every day is get behind the wheel of an automobile. However, unless you happen to live in an area where there is very robust public transport, driving in the US is all but essential. With this being the case, it is very likely that you will end up in a car accident at some point. However, some car accidents are more severe than others. Getting into an accident with a semi truck is almost always more extreme than getting into an accident with another four-wheel passenger vehicle. 
According to the Federal Motor Carrier Safety Administration, one of the best things you can do to avoid an accident with a semi truck is to be aware of the blind spots.\nWhile all vehicles have blind spots, the blind spots associated with a semi truck are much larger. The biggest blind spot on a semi truck is actually on the right side of the truck: this blind spot nearly eclipses two entire lanes of traffic. This is the main reason why you should never try to pass a semi-truck on the right hand side. Doing so is extremely dangerous because there is a high chance that the driver of the semi truck will not see you.\nAnother good rule to follow is to assume that if you cannot see the side view mirrors of the semi truck, then the semi truck cannot see you. Experienced drivers of semi trucks will still be aware that there are cars in their blind spots, but you can make their job much easier and your drive much safer by not lingering in these blind spots for long.", "score": 24.570664514696013, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "It’s a little-known fact that trucks are without doubt the most dangerous vehicles on the road. The high center of gravity and wide turning radius makes it hard for truck drivers to control their vehicles in emergency situations, which can lead to devastating accidents. In this article, we’ll explain the reasons why they are so lethal and discuss how you can protect yourself from them.\nWhat Is A Truck?\nA truck is a vehicle that is used to transport goods and materials from one place to another. They vary in size, shape, weight capacity, and what they are designed for. Trucks can weigh up to 80 times more than cars or even 18 tons.\nSemi-trucks have two or three trailers that can be attached to them, weighing up to 80,000 pounds. They are what you see being used for shipping across the country. Dump trucks carry gravel and other materials like dirt onto construction sites or other destinations. 
Tractor-trailers transport goods, stored in a special area at the back, from one location to another. Pickup trucks can be used for many different reasons including carrying cargo, hauling livestock, or transporting goods to and from the local market. Flatbeds are what mechanics use when they need to transport a car that is too large for their personal vehicles.\nWhy Are They So Dangerous?\nTruck accidents account for nearly 15% of fatalities each year according to federal statistics, which should cause all drivers to be concerned. Here are some of the key reasons why they are so lethal:\nAn improperly loaded cargo bed makes a truck top-heavy and therefore easy to topple over if the load is not properly balanced throughout the entire truck bed. El Paso is a city located in the far western part of Texas in the United States of America. It’s possible to contact an improperly loaded truck accident lawyer in El Paso by submitting an online form, using live chat, or making a phone call. This can connect accident victims to professionals who can consult with reconstructionists and other experts in order to pursue full and fair compensation.\nThey take a long time to stop; their brakes require longer braking distances than most other vehicles. They can take up a lot of room on the roadway. Due to their size, they might not be able to go around corners as quickly or at all. This can cause problems when trying to change lanes and turn onto side roads, especially if there are cars present that have been stopped by an unexpected red traffic light or stop sign.
Heavy Truck drivers are responsible for the safe operation of their vehicle. When semi-trucks travel at rates that exceed the speed limit, they increase the likelihood of a jackknife or rollover.\n- “Under-rides” refer to passenger vehicles that slide under another vehicle, with the majority of these incidents happening between Heavy Trucks and passenger cars.\n- Blind spots exist in the front, back and sides of a Heavy Truck. When vehicles are in these blind spots, the Heavy Truck may make a wide right turn into the passenger car.\n- “Squeeze plays” involve Heavy Trucks making wide right turns. When a car is caught between a Heavy Truck and a curb, it is caught in a “squeeze” that can cause a serious accident and injuries.\n- “Off-track” occurs when a Heavy Truck turns at high speed and swings into a neighboring lane without warning.\n- Following too closely. Unlike cars, Heavy Trucks require up to 40% more stopping space. Following too closely results in inadequate stopping distance, and Heavy Trucks then rear-end vehicles in front. It is not difficult to imagine the devastating results that occur when a car, van or SUV is hit from behind with over 10,000 lbs. of moving metal.\n- Substandard inspection. There are more than 2 million roadside inspections of Heavy Trucks, and about 23.2% of these Heavy Trucks were found to have serious violations, leading to catastrophic accidents and injuries on our country’s roads and highways.\n- Longer Combination Vehicles (LCVs) are tractor-trailer Heavy Truck combinations with two or more truck trailers that weigh more than 80,000 pounds. These Heavy Trucks are at high risk of jackknife, rollover, sway, and loss of control. Longer lengths, heights, and weights make these Heavy Trucks perform and handle differently than other types of tractor semi-trailers or twin trailers.
The LCVs are more dangerous due to their tendency to sway and leave the lane they are traveling in, as well as requiring increased passing distance.\n- Hazardous Materials (hazmat).", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-1", "d_text": "This refers to a lane change or other sudden movement that may need to be taken when traveling alongside a large commercial truck.\nOne of the greatest disadvantages of driving next to 18 wheeler trucks is the many variances that exist between this type of vehicle and a standard automobile. The following facts shed a bit of light on the most glaring differences between the two types of vehicles:\n- An average car in the U.S. weighs 5,000 pounds\n- An average 18 wheeler in the U.S., if within load limits, weighs 80,000\n- An 18 wheeler engine is 6 times the size of a car engine, regarding both weight and size\n- Accidents involving 18 wheeler trucks result in a traffic fatality 9 percent of the time\n- It takes an 18 wheeler truck 40 percent more time to stop than that of a standard automobile\nWhen put in this perspective, it is clear why 18 wheeler trucks frighten many drivers on the roads and highways in Connecticut. While nothing can be done to change the variation in size, weight, and stoppage time between a tractor-trailer vehicle and a standard automobile, understanding the vast differences helps drivers to relate to just how dangerous these trucks may become. If you or a loved one has suffered injury from 18 wheeler trucks, please contact our personal injury law firm today for a no-obligation, free consultation.\nLaws Regarding 18 Wheeler Vehicles\nIn response to the growing number of truck accidents in the United States, the Federal Motor Carrier Safety Administration (FMCSA) was established which oversees and outlines the rules and regulations for all commercial trucks in the country. 
Regulating the trucking industry is vital, as both safety measures and interstate travel require strict guidelines. In compliance with the FMCSA standards, all truckers are given a set of principles to follow, which hopefully increases safety and reduces the frequency of motor vehicle collisions.\nWhen a large commercial vehicle prepares to embark on a long trip, there are quite a few potential hazards that must be addressed, especially those in regard to the standards outlined by the FMCSA. From background checks to a number of different tests, truckers are vetted rather thoroughly prior to operating a commercial vehicle.
A truck accident attorney from Pittman Roberts & Welsh, PLLC can provide you with the legal representation you need after a crash in Jackson.\nDangers of Driving Beside a Large Truck\nThe size difference is one of the main dangers of driving alongside a large truck on the road, but there are other dangers to consider so that you can keep yourself and your passengers safe. If you need to pass a truck, it’s important that you do so as quickly and as safely as possible. When you are left to linger beside the trailer, that’s where the danger is.\nIf you’re beside a truck, it’s likely that you’re in one of their blind spots. This means that the driver can’t see you in the lane next to them, so if they need to change lanes for any reason, they could drive right into you. Additionally, if you need to change lanes for any reason, you could crash into the trailer, or even get into an underride accident where you are stuck under the trailer.\nEven if the truck doesn’t need to change lanes, another danger you could face while driving alongside the rig is a tire blowout. Since tractor trailers have so many tires, and they’re typically hauling thousands of pounds of goods, they are more susceptible to tire blowouts. These might not affect the truck, but if you’re driving alongside it in a regular sized vehicle, it could cause an accident for you.\nLet’s take a look at how you can avoid accidents with trucks as the driver of a regular sized vehicle.\nTips for Driving Near Trucks\nWhile it can be intimidating and dangerous to share the road with 18 wheelers, there are some ways you can stay safe.", "score": 24.278585621160715, "rank": 52}, {"document_id": "doc-::chunk-1", "d_text": "In a jackknife, the cab of the truck swings around against its own trailer, resulting in a total loss of truck control. Speed contributes to jackknifing when the momentum built up by the trailer overwhelms the truck-tractor, essentially pushing it askew from behind. 
Jackknifes can also occur when a truck’s brakes are not properly balanced and apply uneven braking pressure at high speed.\nFinally, excessive speed can also lead to runaway accidents, such as when a truck descends a long hill and picks up too much momentum for the driver to keep it under control. Just about any kind of collision or catastrophe can result from a runaway accident, from pileups to rollovers to run-off-road accidents.\nTruck Driver Inattention\nDriving a truck requires special skill. Texas requires drivers of commercial vehicles to carry a Commercial Driver License (CDL) rated for the type of vehicle the driver operates. Drivers have to clear many hurdles to obtain a CDL, including written and on-road testing and certifications of their abilities to drive safely.\nOne thing all CDL holders know is that driving a commercial truck requires constant situational awareness. Drivers must pay close attention to the speed and position of their vehicle, as well as to all of the other vehicles with which the truck shares the road. A moment’s inattention can result in disaster. Inattention can result from a variety of contributing factors, including using a phone or GPS, having a conversation, or impairment from fatigue, drug, or alcohol use (see below).\nBlind Spot Inattention Is a Particular Danger\nKeeping tabs on other vehicles is no easy feat. Trucks have large blind spots on all four sides. For a typical tractor-trailer, those blind spots extend:\n- 20 feet in front of the cab;\n- 30 feet behind the trailer;\n- One lane-width on the driver’s side; and\n- Two lane-widths on the passenger side.\nDrivers use mirrors and, increasingly, side and rear cameras to help them “clear” their blind spots. But those tools are not fool-proof. 
To use them effectively, drivers need to keep constant track of other vehicles.", "score": 24.234289534804766, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "Dangers of Sharing the Road with Large Trucks\nSharing the road with large trucks is simply something everyone must face. These oversized vehicles are common sites on the road, especially if you are traveling on the highway. It is important to understand the particular dangers posed by large trucks. Knowing and understanding how these vehicles move and operate will help you be mindful of how to stay safe when you find yourself sharing the road with them.\nBe Careful – Follow These Safety Tips for Sharing the Road with Large Trucks\nIt is best to start off with what is the most obviously unique trait about large trucks. They are, in fact, sizeable. They are big and they are heavy. This creates several dangers for those who are sharing the road with them. For instance, because large trucks weigh so much, it takes them much longer to brake than other vehicles. Some 18-wheelers can weigh more than 10,000 pounds. This means that the truck will require much more room to stop than another vehicle. In fact, it can easily take it twice as long to come to a full stop. Because of this, you should always give trucks as much space as possible.\nAvoid Making Them Brake Suddenly\nAvoid any situation where the truck driver may be forced to suddenly brake. While it may be tempting to speed up and get in front of a truck that is looking to move into your lane, do not do it. Not only could this lead to the driver not having enough time to adequately brake, but it could also create a situation where the driver suddenly swerves and jackknifes. These types of accidents can have catastrophic consequences for others on the road. 
The size and weight of large trucks mean devastation for other vehicles in the event of a collision.\nStay Out of Their Blind Spots\nAnother big danger of sharing the road with large trucks is the fact that they have huge blind spots. You may hear these blind spots referred to as “no zones” and that is because you should not linger in these areas and avoid entering into them as much as possible. The sides, front, and rear of large trucks are all no zones. When you drive in these areas, it is likely that your car disappears from the view of the truck driver. The general rule of thumb is that if you cannot see the face of the driver in the truck’s side mirror, then the driver cannot see you.", "score": 23.030255035772623, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "Road trains are part of Aussie life and all drivers have to share the road with them at one point or another. Road trains are the life blood of many industries and the drivers sacrifice a lot of time away from their families. When we know how to navigate around road trains, we make the journey for both ourselves and the truckies much safer and enjoyable.\nType of trucks on the road\nTrucks on the road can be anything from a rigid truck, semi-trailers, road trains which can be double, triple or quad, buses, grain and livestock transporters, tanker trucks, cement trucks and oversized vehicles.\nHow do you drive safely in the vicinity of one of these heavy beasts?\nStay out of their blind spots. Where are the blind spots of heavy vehicles?\nImmediately in front of the vehicle, beside the truck driver’s door, on the passenger side of the truck, this runs for the length of the truck and extends out three lanes, and directly behind a truck.\nI have seen stickers on the back of trucks declaring that if you cannot see their mirror they cannot see you. It is there for a reason – they really cannot see you. 
It is very important that you are visible.\nFollowing distances are very important when driving behind road trains\nNever before have following distances been so important as they are when you are driving behind a truck. If you are directly behind a heavy vehicle you cannot see ahead. You have to allow enough time and distance to stop safely.\nStopping distances are different for road trains than for other vehicles\nA truck takes approximately 83 metres to stop at 60km/h whilst a car will take only 73 metres. At 100km/h a truck will take about 185 metres to stop compared to a car’s 157 metres.\nCan you see why it is not a good idea to swerve in front of a truck and suddenly slam on your brakes? You are making it virtually impossible for them to stop safely and not harm you.\nOvertaking road trains\nHeavy vehicles marked with a warning, DO NOT OVERTAKE TURNING VEHICLE, have the right to take up more than one lane to turn at corners, intersections and roundabouts. When a heavy vehicle is turning, rather stay behind it.\nWhen you want to overtake a heavy vehicle on a highway, wait for the overtaking lanes whenever possible.
Do not confuse the limits; doing so will see you travelling faster than your vehicle should be going and will, therefore, make you more of a danger.\nYou should be particularly aware of your speed when you come to turns in the road. Trucks are far more likely than cars to lose control on curves, and their drivers MUST slow down when the road begins to bend.\nPay attention to your space cushion\nYour space cushion is the room that immediately surrounds your truck on all of its sides, and other vehicles shouldn’t be encroaching upon it. Of course, you cannot account for other drivers, and anybody could get too close to you at any time. What you can account for, however, is your own driving, and making sure your truck never gets too close to anything else that it shares the road with.\nImportantly, this means maintaining a safe distance between yourself and the vehicle in front of you. As your truck has more weight, size, and power than the cars it follows, running into the back of one can and will cause serious damage. For this reason, you should stay two vehicle lengths behind cars when the traffic is free-flowing, giving yourself time to stop safely should you need to halt.\nMake sure your truck can be seen\nAs mentioned, the size of trucks makes them one of the biggest dangers on the road, and it is for this reason why safety procedures must be in place to ensure other road users know when your truck is approaching them.
MAN’s new Concept S meets these requirements with a slim front, flared wheel arches and curved cab lines, which break with the traditional “shoebox” truck design. The shape also helps prevent cyclists from being dragged under the wheels in the event of an accident.\nThe new streamlined design has a wide range of benefits. Firstly, it gives a large truck the air resistance of a car, reducing fuel consumption and CO2 emissions by 25%. The European Commission wants to fast-track this more aerodynamic, curvaceous design, which also improves driver visibility and could save hundreds of lives across the EU each year.\nBecause of UK and EU rules, trucks are generally no more than 16.5m long and 44 tonnes in weight. However, the new rules would allow for longer and more flexible designs.\nSpeaking to the BBC, Stephan Kopp, MAN’s senior manager for aerodynamic development, said it could be a revolution for the freight industry. “For several years now, we are trying to convince the European Union to allow us more freedom. If we are allowed to build longer trucks, then we can realise the potential in aerodynamics and safety.”\nVia BBC News
There was no significant difference in truck volumes.\nNarrower lane widths (10 to 11 feet) are sanctioned in national policies outlined by AASHTO, particularly for urban areas, but the official standards in many states prohibit them. According to a 2010 study published in the ITE Journal, six states require a minimum of 12-foot lanes and another 24 states require 11-foot lanes.\nThe author of this most recent study notes that lane width guidelines, in particular, were established well before we had reliable crash and safety data. His work and other work cited in his paper show that the science behind many of those early assumptions is shaky. Fortunately, new research like this can help support a shift toward “substantive safety,” based on empirical evidence, rather than “nominal safety,” which assumes that design guidelines ensure safe outcomes—a topic that is covered in depth in ITE’s recent publication, Integration of Safety in the Project Development Process and Beyond: A Context Sensitive Approach.\nChris McCahill is a Senior Associate at SSTI.\nBy Chris McCahill", "score": 22.6904783802783, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "Driving alongside an 18-wheeler big rig or semi truck can be scary. Not only is the sheer size of a large truck intimidating, but at some point while sharing the road with one, we inevitably stop and think about just how devastating an accident with such a monstrous vehicle would be. Truck accidents can cause a horrendous amount of destruction due to how much weight and force they carry.\nLuckily, there are a few things any driver can do to stay safe while sharing the road with large trucks. Here are some tips:\n- Always be especially alert when driving near a large trailer truck or big rig. Look out for any sudden swerves or movements.\n- Make sure you’re always visible to the driver of the truck. Trucks have massive blind spots and a higher cab, so it can be difficult for truck drivers to see passenger cars. 
Avoid driving in a truck’s blind spots and make yourself visible from the driver’s cab if possible. If you can see the driver in their mirror, they should be able to see you as well.\n- Turn your headlights on in low-light conditions to help truck drivers see you better.\n- Keep a safer distance than you normally would when following or driving alongside a large truck. Keep an extra lane between the truck and your vehicle if possible.\n- Give trucks as much room as possible when you notice they’re making a turn. The driver will need to make a wide turn and it may be wider than you expect. While waiting your turn at an intersection, for example, you may want to give the truck the entire intersection to make their turn before moving forward.\n- Give large trucks more space to stop. The larger the vehicle, the more time it takes the driver to stop. Overcompensate if you have to. A large truck with full cargo can take as much as 300 yards to come to a complete stop from 60 mph.\n- When driving around a large truck, plan your moves well ahead of time and give truck drivers more time to respond to your actions.\n- Use your turn signals earlier and merge or change lanes ahead of time. If you’re afraid of being too close to the truck and need to stop, alert the driver ahead of time with a quick “tap” on your brakes.\n- When passing a large truck, try to pass on the left side. The left side has fewer blind spots than the right.", "score": 21.695954918930884, "rank": 59}, {"document_id": "doc-::chunk-1", "d_text": "- When an over-sized load signals to get over, there’s a good reason. Let them in.\n- Don’t follow too closely; the increased width makes it hard to see what’s behind.\n- Use the CB if possible to communicate intent.\n- Look for the red flags on the freight; these mark the widest point on the load.\n- Don’t try to squeeze beside them in a truck stop parking space.
They may need 2 spots and in many states they’re forbidden to use rest areas per the regulations.\n- When it’s safe to pass, do so quickly in order to minimize the time you’re in a high risk zone with minimal horizontal clearance.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "Car accidents that take place in intersections are among the most common and dangerous types. Unfortunately, many of these wrecks could likely be avoided if drivers exhibited just a little more caution. Here are some of the telltale signs that you are on your way to being in a wreck at an intersection.\n1: Crowded conditions.\nWhenever you approach an intersection and you notice that it is very crowded with drivers, be extra careful. These are situations where a driver making a foolish left turn or pulling out in front of someone into a lane could cause a disaster. Don’t be afraid to slow down a bit and assess the situation before you proceed into the intersection.\n2: Rush hour.\nThere are some people out there who simply cannot bear the thought of having to wait for a traffic light to change. To avoid doing so, they will barrel through a yellow light like nobody’s business. If you happen to be making a left on the yellow, make certain that nobody is coming. Check twice if you have to. It may actually save your life.\nPeople who tailgate other vehicles are among the most significant threats on the road. This is, in fact, one of the most negligent things that any driver can do. If you have somebody tailgating you, be wary of lights that are about to turn yellow. You can almost bet money on the fact that the driver behind you isn’t paying attention to the lights and that they’re probably paying attention to your car and venting frustration that you’re not driving fast enough for their liking.\n4: Wide turns.\nIn the summer, be particularly wary of people in large vehicles who probably are not professional drivers. 
For example, recreational vehicles towing boats, large vehicles towing campers and other larger, noncommercial vehicles are all significant threats to everyone else on the road. Be wary of drivers who don’t understand how wide they have to turn at an intersection to clear the curb and make sure you stay away from their right side. They very well may sideswipe you otherwise.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "Remind all the drivers among your family and friends to always look for people doing lane-changes without signalling, particularly at intersections when they realize they’re heading the wrong way! See all five photos, below.\nThese photographs, taken at Albany, NY, just yesterday (and as always, from the passenger seat), show two truck drivers who were either distracted or simply not concentrating before they eventually woke up to the fact that they were heading the wrong way.\nIt could, of course, be any driver, by no means just truck drivers, but it cannot be ignored that reckless maneuvers such as these are even more dangerous when large vehicles are involved.\nFollow events by looking at each photo in turn. And remember to be aware that somebody might do this right in front of you, with no warning at all.
Keep your distance!\nInterestingly, some text on the back of the orange truck (which was not displaying any license plate at all) read “Keep Back 100 Feet.” But any driver who wants to do everything possible to stay safe should constantly look out, ahead and behind, for anyone who is driving badly, then do their best to keep well clear of them.\nPlease feel free to forward the URL or a link to this page to your family and friends.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "However, a truck and trailer or a tractor and semitrailer combination hauling pulpwood or unprocessed logs may be operated with a maximum width of not to exceed 108 inches in accordance with a special permit issued under section 725.\n(5) The total outside body width of a bus, a trailer coach, a trailer, a semitrailer, a truck camper, or a motor home shall not exceed 102 inches. However, an appurtenance of a trailer coach, a truck camper, or a motor home that extends not more than 6 inches beyond the total outside body width is not a violation of this section.\n(6) A vehicle shall not extend beyond the center line of a state trunk line highway except when authorized by law. Except as provided in subsection (2), if the width of the vehicle makes it impossible to stay away from the center line, a permit shall be obtained under section 725.\n(7) The director of the state transportation department, a county road commission, or a local authority may designate a highway under the agency's jurisdiction as a highway on which a person may operate a vehicle or vehicle combination that is not more than 102 inches in width, including load, the operation of which would otherwise be prohibited by this section. The agency making the designation may require that the owner or lessee of the vehicle or of each vehicle in the vehicle combination secure a permit before operating the vehicle or vehicle combination. 
This subsection does not restrict the issuance of a special permit under section 725 for the operation of a vehicle or vehicle combination. This subsection does not permit the operation of a vehicle or vehicle combination described in section 722a carrying a load described in that section if the operation would otherwise result in a violation of that section.\n(8) The director of the state transportation department, a county road commission, or a local authority may issue a special permit under section 725 to a person operating a vehicle or vehicle combination if all of the following are met:\n(a) The vehicle or vehicle combination, including load, is not more than 106 inches in width.\n(b) The vehicle or vehicle combination is used solely to move new motor vehicles or parts or components of new motor vehicles between facilities that meet all of the following:\n(i) New motor vehicles or parts or components of new motor vehicles are manufactured or assembled in the facilities.\n(ii) The facilities are located within 10 miles of each other.", "score": 21.107226877652625, "rank": 63}, {"document_id": "doc-::chunk-1", "d_text": "Because large trucks have the capability for causing serious injuries to drivers and passengers, it is in your best interest to avoid traveling in their blind spots and to let others know the same. You can share this article with those you know on Facebook by selecting the like button to the left of the screen.", "score": 20.327251046010716, "rank": 64}, {"document_id": "doc-::chunk-2", "d_text": "Watch Your Speed\nStaying in control of your speed is essential, but in many cases that can only be done if you keep an eye on the signs. Speed limits change as you move from rural to urban areas, and they also change from state to state. Don’t pass by a speed limit sign without noting the number and adjusting your speed accordingly. 
Although driving the limit is essential to truck driving safety, there are a lot of times when you’ll need to go even slower such as:\n- when going around curves\n- when entering an interstate\n- in work zones\nWeather is another issue that affects how fast you can safely drive in a semi, and we’ll cover that next.\nTruck Driving Safety Relies on Being Weather Aware\nWeather impacts truck driving safety in many ways such as how your truck handles on the road and how quickly you can stop. In fact, weather conditions contribute to nearly 25 percent of truck accidents when speeding is involved. Check in on weather websites when planning your trip, but also be flexible enough to respond properly when you’re hit with unexpected conditions. Wet roads require you to reduce your speed by one-third, and you’ll need to cut it by one-half on icy, snowy roads.\nExpected or not, weather can affect visibility, too. When rain, snow, and even fog interfere with your view of the road, pull off the road entirely; stopping on the shoulder puts you and other drivers at risk. They may think you’re on the road and still moving and could mistakenly run into the back of your truck.\nGive Yourself a Cushion\nThey call ‘em the big rigs for a reason. Your truck’s height and weight dwarf most other vehicles on the road. Plus, the mammoth size affects how much time and space you need to stop. For those reasons, you should maintain a buffer zone around your truck—a cushion of space that keeps you and other drivers safe. You need space above, below, in front, in back, and to both sides of your truck to stay clear of overpasses and tunnels, uneven or sloped roads, and objects in front, in back, and to the sides of you including other vehicles, toll booths, bridges, etc. Furthermore, trucks need a wide berth when making turns and backing up.
Having a cushion helps ensure you won’t hit anything when executing those moves.\nWhat’s more, if a semi needs to stop suddenly, it can’t.", "score": 20.327251046010716, "rank": 65}, {"document_id": "doc-::chunk-1", "d_text": "If you're in the intersection and the light goes yellow, know that you own the intersection - do not proceed until you are absolutely sure that oncoming traffic has come to a stop. And this is one of the reasons why, for new drivers, left-hand turns are one of the highest-risk maneuvers: they result in a T-bone crash. And if there are passengers in the vehicle it is often fatal for them.\nIt can be very dangerous for the driver as well, because there isn't any protection on the side of the vehicle where the doors and the passenger compartment are. So what we can learn from this video is: make sure that oncoming traffic is coming to a stop, that you're not getting pressured by the vehicles behind you into misinterpreting the gap, and that there is sufficient gap for you to go. Know that on a left-hand turn as well, when you're sitting there waiting, make sure that your steer tires are straight, because if you get rear-ended accidentally you'll get driven into oncoming traffic. Especially if you're on highways, as in this video clip here with the crash analysis, where the speeds are a bit higher.\nAnd the higher the speeds, the more dangerous the crash is going to be and the more susceptible you're going to be to being killed in that car crash.\nQuestion for my smart drivers:\nLeave a comment down in the comment section there; all of that helps out the new drivers who are working towards becoming better drivers and staying crash free. If you like what you see here, share, subscribe, leave a comment down in the comment section there.
As well, hit that thumbs up button.\nCheck out all the videos on the channel here if you're working towards a license or starting a career as a truck or bus driver. Lots of good information here, and head over to the Smart Drive Test website - great information over there, as well as online courses that you can purchase. All of the courses are guaranteed: pass your road test first time or get a 60-day money-back guarantee.\nEnter youtube30 in the coupon box for 30% off a course.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "Semi-trucks can be downright scary for motorists in smaller vehicles. Seeing one creeping into your lane on the highway is enough to cause you to lay into a full-on honk. Watching one make a right turn onto the street where you’re in the left lane waiting to turn might have you thinking you should back up a touch as you chant “Don’t hit me. Don’t hit me.”\nBut the drivers deserve credit for the amount of control they have over the massive rigs and trailers. Take this semi driver, for example. In the video below, he starts to turn left, but since he appears to be going a tad too fast, he begins to skid. Amazingly, he corrects the skid, making it look like it was intentional and he was just pretending to be 007 in an Aston Martin.\nWatch the footage taken by frightened passengers behind him on a rainy day (Warning: Some strong language):\nWe can’t confirm when the video was officially taken. The timestamp in the clip states 2025, but such timestamps are often wrong or altered. We do know it was uploaded to YouTube Friday by a user named rousnek located in Germany.
This is the only video the user has uploaded to the video-sharing site.\n- Shocking Footage Catches Rogue Semi-Truck Slamming Into Gas Pump\n- Drivers Narrowly Escape Death After Truck Towing Semi Slides Off Icy Norway Cliff", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "To the Editor:\nI just visited the site of newly constructed Rowan Boulevard at U.S. Route 322 in Glassboro. While this intersection looks to be near completion, its configuration looks to be a controversy in the making.\nThe \"roundabout\" that replaces the elbow turn just east of the Rowan University campus is so narrow curb-to-curb that large and medium size trucks will need to be forever banned from using that section of Route 322 as it currently exists. This would mean that trucks going east on Route 322 would be forced to detour through a school zone on Heston Road on their way to Route 47, where they would turn right to rejoin Route 322 eventually. Even cars will need to be extremely careful negotiating the tight circle.\nLooking at the new design, it makes you wonder how such a main artery could be so narrowly conceived in the planning stages. Unless there is something that doesn't meet the eye, it appears the template is set. If it is, there are questions on the horizon.", "score": 19.44951185328346, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "Truck accidents happen due to the negligence of truck drivers and motorists in smaller passenger vehicles alike. If we want to see a day when there are no truck accidents across the country, then it is going to take everyone to make that happen. You can help do your part by knowing how to safely share the road with a commercial truck, big rig, semi-truck, or whatever else you might call them.\nFive easy-to-remember tips to share the road with commercial trucks are:\n- Stay out of blind spots: A commercial truck has much larger blind spots compared to most other vehicles on the road.
You need to do your best to stay out of them. There is a blind spot directly behind and another in front of the truck, each capable of hiding about one or one-and-a-half cars. To the left, about two lanes are obscured in a blind spot that is cone-shaped and extending away from the left-side mirror. On the right is the largest blind spot, which covers two to four lanes to the right and behind.\n- Pass on the left: Because the right side of a commercial truck has the largest blind spot, you should always try to pass on the left when it is an option. Passing on the left will allow you to move through a smaller blind spot and, therefore, spend less time “invisible” to the trucker. If you can, make eye contact with the trucker in their left-side mirror as you pass to confirm that they know you are there. Judge your distances correctly, accelerate slightly if you can safely, and complete the pass without delay.\n- Increase your following distance: Never tailgate a big rig. Not only are you hiding in a blind spot when tailgating, but you will also be blinding yourself to the road ahead. When you can’t see what is happening around and in front of you, the chances of getting caught in an accident will increase. Do yourself, the trucker, and everyone else on the road a favor and increase your following distance behind a commercial truck to roughly three car lengths or five seconds, whichever is greater.\n- Watch for wide turns: Big rigs take big turns, especially right-hand turns at 90-degree intersections. You should never stay next to the side of a commercial truck as it approaches an intersection or while it is using its turn signal. 
Even if you are in the next lane over, the trailer could swing wide and collide with your car.", "score": 18.90404751587654, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "THE DO’S AND DON’TS OF SHARING COLORADO ROADS WITH LARGE TRUCKS\nOn behalf of Paul Wilkinson at The Paul Wilkinson Law Firm, LLC\nTrucking accidents are all too common in Colorado and may result in serious injuries, but there are things that drivers can do to safely share the roads.\nEach day, motorists in Colorado and across the U.S. share the streets and highways with commercial vehicles. Unfortunately, collisions involving these large trucks and smaller, passenger automobiles are all too common and often result in serious injuries or death for those involved. In fact, the Federal Motor Carrier Safety Administration reports that more than 370,000 of the motor vehicle accidents that occurred in 2012 involved at least one commercial vehicle. While not all such wrecks can be prevented, there are some things that drivers can do to safely share the roads with large trucks.\nDo allow large trucks ample room to stop\nGenerally, large commercial vehicles require more room to stop than cars, trucks and SUVs. The Colorado Driver Handbook points out that loaded trucks may need a minimum of 290 feet to stop when they are traveling at 55 mph. It is important that drivers keep this in mind to help them avoid some truck accidents. Motorists should refrain from suddenly stopping when driving in front of large trucks as much as possible.\nDo not drive in the no zones\nTractor trailers have blind spots in the front, on both sides and in the rear. When they are in these areas, or so-called no zones, people’s vehicles may disappear from view or be too close. This may restrict the ability of truckers to safely maneuver their vehicles, which may result in trucking collisions.
Therefore, it is recommended that drivers refrain from traveling in large trucks’ blind spots for extended periods of time.\nDo use caution when passing\nAlthough their size is plainly visible, many people underestimate the time it takes to pass large trucks. Before attempting to pass these vehicles, it is important for people to make certain they have enough time and open road in front of them. This may help prevent situations in which they have to cut back in front of a tractor trailer and stop or slow suddenly.\nDo not cut in between trucks and curbs\nOperating tractor trailers is different than driving smaller automobiles. Sometimes, truckers may have to move to the left in order to make a right-hand turn or they may leave extra space between their vehicles and the curb in order to make safe turns.", "score": 18.90404751587654, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "Oversized Load Truck Accident Lawyers in Kentucky\nTrucks with oversized loads pose a threat to others on the road because of their size and how difficult they are to drive safely. Many things can go wrong and cause an accident. It’s best to stay away from these vehicles as much as you can.\nWhat is an Oversized Load?\nAn oversized load could be steel beams used on a bridge, wind turbines, drilling rig parts, or manufactured housing.\nLoads can be oversized or overweight: an oversized load exceeds the dimensions a state accepts, while an overweight load is heavier than the standard weight. Kentucky’s accepted truck weights and sizes can be found here. Municipalities can have restrictions on local roads. The federal government also imposes rules on hauling oversized loads.\nInterstate and other major highways generally have 12-foot-wide travel lanes. The maximum legal vehicle width is 8.5 feet in every state. If a load is bigger than that, it’s an oversized load.
A load that’s extra long or high is also oversized.\nSpecial permits are required to move this cargo legally, and specific safety procedures must be followed. There may be escort vehicles, special lighting, and flags to make the trip safer for all involved.\nWhy is Hauling an Oversized Load Dangerous?\nOversized loads are hazardous to other road traffic and structures along the route, such as bridges and overpasses. Oversized loads often impede traffic and block lanes. There may be secondary accidents caused when traffic backs up or drivers try to pass trucks carrying oversized loads.\nWhen hauling an excessively wide, long, tall, or heavy load, there are risks that drivers must consider to avoid an accident, according to Equipment and Contracting. This includes:\n- The load’s weight may deflate tires, causing a blowout and loss of control\n- The truck is more likely to roll over, especially at higher speeds and making turns\n- The load’s weight and extreme dimensions mean it has a momentum all its own, which may be difficult for the truck and driver to control.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-4", "d_text": "So either pass the truck very fast, or lay back with enough room to avoid the truck if it has a blowout, then pass it quickly once the car in front of you moves.\nIf you’re too lazy to search on your own, here’s a video of a truck getting a blowout, losing control, and killing the driver [Editor’s Note: The video referenced has been removed from YouTube]. If you’re on the side of the truck when this happens, you’re going to get a cross on the side of the road right next to theirs. Take a look at the interstate and take note of the rubber pieces you see in the lanes and on the shoulders. They’re referred to as gators, because broken tire pieces are long and skinny like a gator, and they weigh an awful lot–sometimes as much as an alligator. 
Take note of the skid marks on the highways, the areas where you know a wreck took place. Someone got hurt where you see that. And it can happen to you.\nMost truckers are great drivers. They care about doing their jobs safely because if they don’t, people will die, namely them. And they are making a meager living doing so. A wreck can ruin their career. There are the random few who screw it up for the rest and give a bad name. But the majority are very careful. I just hope you take note of the roads when you’re on them in the future. I’m convinced some folks just don’t know any better because nobody taught them. Pass this sort of thing along to your friends. It’s just common sense.\n[Edit] Trucks pass slowly because most are speed-governed to below the speed limit. If the truck is a nationally known name (Swift, Wal Mart, Yellow Freight, etc), it’s on GPS, is tracked to the 1/10th of a mile by someone at a computer console, and is speed limited. Drivers can be fired for breaking rules. I’m glad so many of you are seeing this for what it is–an opportunity to learn about something you may not have known before now. Perhaps it will save some of you one day. Yeah, it’s a long essay, but a lot of this can’t be summed up easily.\n[Edit] Regarding air brakes, they do lock up in the “full brake” position when air is depleted, so pardon my incorrect explanation.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-0", "d_text": "GREEN BAY – Round and around they go.\nThe circular intersection on Green Bay’s west side stays pretty busy.\nThe DOT says anywhere from 40,000 to 45,000 cars and trucks travel through this roundabout on West Mason every single day.\nBut not everyone makes it through safely.\nOfficials say in the last four months at least three semis have tipped over in the roundabout at West Mason Street near U.S.
41.\nAre roundabouts built to handle semis?\n“A contributing factor in all three of them was that drivers were going too fast,” said Randy Asman, DOT traffic engineer.\nWith more than 11,000 drivers, Schneider says it makes sure its drivers are well trained before they head out on the road full time.\nThat training also includes how to navigate a roundabout.\n“We talk to them about, most critically, to look ahead and read the signs as to which lane they should set up in based on the way they want to navigate through. So are they going to do a through movement, a left turn, or a right turn and make sure that they’re in the correct lane,” said Donald Osterberg, Senior V.P. of Safety and Security at Schneider.\nBrown County has the highest number of roundabouts in the state, totaling 47.\nThe DOT says they aren’t going away; by 2018 you’ll be seeing 26 more.\nThat’s because the circular intersections are considered much safer. Studies have shown roundabouts can reduce crashes by 75 percent at intersections where stop signs or signals used to be.\n“Semis fit in all of our roundabouts. There’s a truck apron on the inside to help trucks maneuver through the roundabout. So are they not safe for roundabouts? I would say that’s not the case,” Asman said.\nAsman says no matter what you’re driving, you should slow down and always pay attention while in a roundabout.\nThe DOT says drivers should give trucks enough room while traveling through a roundabout and never try to pass them.
Doing so makes it possible for the driver to see your vehicle by keeping the blind spots clear. Additionally, staying a distance away from the vehicle may give the driver more time to slow down or maneuver when necessary.\nLarge trucks have four main blind spots\nWhen you approach a semitruck from the back, remember that there are four blind spots that you need to keep your vehicle out of. The first blind spot is directly behind the vehicle. If you are close enough to the back that you can’t see the driver’s mirrors, the driver cannot see you.\nTwo additional blind spots run along the sides of a semi. These are areas in which the mirror is angled in a way that it will miss any vehicles next to the truck, and the driver won’t see them when looking over, either.\nThe fourth blind spot is in front of the truck. If you weave in front of the truck too closely, the driver won’t be able to see you over the front hood.\nStopping distances are a challenge for large truck drivers\nAnother reason you need to give more space to large trucks is because of their stopping distances. Trucks, buses and other large vehicles need more time to stop, so you’ll want to be several car lengths ahead of the vehicle at all times.\nTurns become problematic for truck drivers\nDue to the shape of a large truck, drivers often have trouble turning and staying in the correct lane when turning right. They may swing wide to make enough room for their trailers. Passenger vehicle drivers should stay behind the white lines painted on the road to give them enough room to turn.\nThese are a few reasons why you need to be aware of large trucks and maintain your distance. Staying away could help you avoid a truck crash.", "score": 17.397046218763844, "rank": 74}, {"document_id": "doc-::chunk-1", "d_text": "I cannot see you if you are within 25 car lengths behind. 
I can take a full length school bus, fold the mirrors in, and park it behind the truck, and you would never see it in my mirrors.”\nBecause of their size, semis need a significant safety cushion so they can stop safely, Hawkins explains. When the tank on the 55-foot to 60-foot long semi he drives is full, it takes the length of “one football field and both end zones to stop.”\nStopping distance can also be impacted by the cargo being carried. Smooth bore tankers (ones that don’t use baffles to compartmentalize their cargo) have nothing inside to slow down the flow of the liquid. Therefore, forward and back surge is very strong. Smooth bore tanks are usually those that transport food products such as milk. Sanitation regulations rule out the use of baffles because of the difficulty in cleaning the inside of the tank. Corrosive liquids are also routinely transported in smooth bore tanks.\nHe also warns passenger car drivers not to linger while passing big trucks. And once by a semi, Mr. Hawkins said, “don’t get back into the lane until you can see both my headlights in your rear view mirror.”\nThe American Trucking Associations estimates there are 15 million trucks on U.S. roads and highways, 2.3 million of which are semis.\nMr. Hawkins, who lives in Perrysburg, has noticed some changes during his three decades behind the wheel of a rig: “Traffic is getting heavier and there’s not as much courtesy,” he said.\nAnd sitting high in the cabin, truck drivers see quite easily into passenger vehicles.
“I’ve seen guys shaving and people reading the newspaper,” he said — and yes, he means while they were driving.\n“Everybody” is distracted, he laments, whether it’s young people talking to their friends, a parent trying to soothe a crying baby in the back seat, someone eating, or putting on makeup.\nThe No. 1 offense? That’s easy: passenger car motorists using hand-held cell phones to talk and text. And if you’re curious, Mr.", "score": 17.397046218763844, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "What is the widest load without permit?\nAny load more than 8.5 feet wide is, by definition, an oversize load, and with few exceptions will require a state permit to travel on public highways. In some cases, on local, narrower roads, the maximum legal trailer width may be just 8 feet.\nWhat size is a wide load?\nIf your load meets all weight limits, but not width limits, it is considered a wide load. Generally, if your vehicle or load is wider than 8'6″ you will need wide load permits. Legal length is usually 48′ to 53′, and maximum weight is about 46,000 pounds.\nCan wide loads travel at night?\nwide may move at night on Interstates and four-lane divided highways. In some cases, maximum width loads may be required to move at night during periods of least traffic.\nCan pilot cars stop traffic?\n“Yes, pilot cars are allowed to stop traffic and facilitate the movement of an oversize load through congested areas and other situations that would cause a danger to the public. WAC 468-38-100 describes the requirements to become a pilot-vehicle operator and what they are required and allowed to do.”\nWhat is the difference between oversize and wide load?\nOversize covers everything: over height, over width, over weight, and over length. Wide Load is just another name for over width loads.\nHow do you flag an oversize load?\n\"Oversize Load\" signs and flags are required for all over width and over length permit loads.
Signs should be mounted at the front bumper and at the rear of the load. Flags (18\" x 18\") should be mounted at the corners of the load and at its widest extremities. A rear overhang of more than 4' must be flagged (two flags if the overhang is more than 2' wide).\nWhat is classed as a wide load?\nAn 'abnormal load' is a vehicle that has any of the following: A weight of more than 44,000 kilograms. … An axle load of more than 11,500 kilograms for a single driving axle. A width of more than 2.9 metres. A length of more than 18.65 metres.\nHow do I get a wide load permit?\nTo obtain State permits, you will need to contact the State(s) in which you wish to travel.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-12", "d_text": "(Boese, supra, at p. 458.) The court emphasized that the driver “by the exercise of ordinary care could have parked said truck at a place reasonably convenient for making deliveries to the [market], and so as not to endanger the safety of plaintiff in crossing [the road] at the place in question. . . .” (Id. at p. 457, italics omitted.) “[N]o evidence established that it was necessary to park the truck in its shown position in order to deliver dairy products at the store. In fact [the driver] testified he occasionally parked the truck elsewhere while making deliveries at the store.” (Id. at p. 458.)\nIn Atlantic Mutual v. Kenney (Ct. App. 1991) 323 Md. 116 [591 A.2d 507] (Atlantic), a vehicle owned and insured by the plaintiffs that was traveling on a highway, Route 170, collided with a motorist who was attempting to make a left turn onto the highway from a shopping center parking lot. The court upheld an award of damages against the owner and operator of a parked truck that blocked the views of the drivers who collided.
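The width, length, and weight figures quoted in the oversize-load Q&A above can be collected into a small check. This is an illustrative sketch only: the function name and constants are hypothetical, the thresholds are the article's figures (8'6" width, 53' length, about 46,000 pounds), and real limits vary by state and by road.

```python
# Hypothetical helper based on the figures quoted in the Q&A above.
# Real limits vary by state and road; always check with the state permit office.

LEGAL_MAX_WIDTH_FT = 8.5      # wider than 8'6" generally needs a permit
LEGAL_MAX_LENGTH_FT = 53.0    # typical upper bound of legal trailer length
LEGAL_MAX_WEIGHT_LB = 46000   # approximate legal load weight per the article

def oversize_reasons(width_ft: float, length_ft: float, weight_lb: float) -> list[str]:
    """Return which quoted limits a load exceeds (empty list = within legal limits)."""
    reasons = []
    if width_ft > LEGAL_MAX_WIDTH_FT:
        reasons.append("over width")
    if length_ft > LEGAL_MAX_LENGTH_FT:
        reasons.append("over length")
    if weight_lb > LEGAL_MAX_WEIGHT_LB:
        reasons.append("over weight")
    return reasons

print(oversize_reasons(9.0, 48.0, 40000))   # ['over width'] – a wide load
print(oversize_reasons(8.5, 53.0, 46000))   # [] – within the quoted limits
```

A load that trips any of these checks would, per the Q&A, need the corresponding state permit before traveling on public highways.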
The trial judge who heard the case “could have found that [the driver’s] act in parking his 45-foot tractor trailer so as to occupy the entire curb area between the entrance and exit driveways of the shopping center would substantially obstruct the view of motorists using the exit and those proceeding on Route 170, and significantly increase the risk of an accident at that point.” (Id., 591 A.2d at p. 511.) The court wrote: “Even if we assume there was no violation of [a] parking statute, we conclude that the evidence was sufficient to support a finding that [the driver] was negligent in parking the tractor trailer.” (Ibid.) The court observed that “[p]arking an ordinary vehicle too close to a driveway or an intersection might not create the same risk. An ordinary motor vehicle is less than 18 feet in length and, because of front, side, and rear windows, often provides a view of traffic beyond it.” (Ibid.) However, the truck here “was no ordinary motor vehicle.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-1", "d_text": "They struggle to swerve into another lane at short notice. Technical issues such as brake failure occasionally occur.\nTruck drivers can be easily distracted while driving due to their job being so monotonous. Poor parking (or lack of it!) creates obstructions and causes accidents. Bad driver behavior sometimes occurs, e.g. breaking the road rules, road rage, bullying. There’s also driver fatigue due to their shifts being so long, plus high staff turnover and poor training.\nHow To Stay Safe\nWe can’t do much about the size of trucks, but what we can work on is how to stay safe around them, especially if you live near highways or roads that are frequented by 18-wheelers.\nThe following tips may be of help:\nThink Of Visibility And Distance\nMake sure your car’s lights are all working (no burnt-out bulbs) so drivers around you can see you more easily. Even though some rigs have bright lights on top nowadays, many still rely upon old-fashioned halogen bulbs that will not light up the road as well. Don’t drive too closely to the truck in front of you, especially if it’s slowing down. Give them plenty of room and stay back several car lengths at least.\nTruck drivers cannot see as well around their vehicle as drivers of smaller vehicles can, although trucks are equipped with extra mirrors on both sides. As a result, you should never assume that the other driver can see you, and don’t try to pass a big rig on the right side while it’s making its turn. Instead, stay back until they are finished turning.\nBe Careful In Lanes\nAlways pay attention when changing lanes near a large 18-wheeler and never cut one off or try to force them into another lane. If you need to, let them have the right of way. Trucks often make wide turns due to their size, which means that you will need to allow them extra room when passing. Never try to share a lane with a truck or drive between lanes when driving near them. Never assume a truck is going to stay in its lane – it may drift over from time to time without the driver realizing.\nAvoid distractions when driving near trucks, such as talking on your cell phone or fiddling with the radio. Stay aware of other cars around you, especially the ones that are much larger than your own.", "score": 15.758340881307905, "rank": 78}, {"document_id": "doc-::chunk-2", "d_text": "This is probably the most common mistake in non-trucker thinking, and it’s likely because you don’t understand why trucks have a hard time stopping. 60,000 lb trucks cannot stop quickly. We have air brakes, and if you cause us to use our air up and not recharge quickly, we have no brakes for the next stop. The brakes can overheat very quickly, and when brakes get hot, they no longer work effectively. Stopping distance is increased significantly.
Someone’s gonna die.\nWhen you see a car broken down or stopped on the side of the road, you should really move a lane over. Why? Because people with suicidal thoughts hide in front of stopped vehicles. It happens more frequently than you’d believe. Jason Aldean, a popular country musician, just had this happen to his crew on a recent tour. Also, someone could be working on a car (fixing a flat, etc.) and if you brush up against them, they’re going to die. You’re probably going to wreck after hitting them, and you’ll probably die. Was it so hard to just move over for a split second while you passed? What if some kid is on the other side of the car, and the brain-dead parent who let them out in the first place lost hold of them and they ran into the right lane? Seriously, if you don’t believe this stuff happens, then you’ve not driven enough yet. Also, you could be passing, and the car in the left lane next to you could force you (accidentally) onto the shoulder, and you might hit the parked car. You’re probably gonna die. Again, a split second can make a difference.\nIf a truck has a turn signal on to move to the left lane, there’s likely a very good reason for it. And one of those reasons is very likely NOT to make your day worse, so please don’t be the jerk who speeds up to cut it off and prevent it from moving over. They’re probably trying to avoid something like a parked car in the emergency lane. Trucks also run on cruise control, and they tend to keep a constant speed except on hills. If a truck is slowed down, it takes a lot more effort to get started going again. Again, the driver is just trying to stay moving. He or she isn’t trying to give you a hard time.", "score": 15.758340881307905, "rank": 79}, {"document_id": "doc-::chunk-2", "d_text": "
Heavier trucks also have a higher risk of rollovers as they add more weight on the same number of axles. … When those loads also involve cargo that can easily shift, such as…liquids in cargo tanks, extra-heavy trucks become extremely unstable in emergency steering maneuvers or when sudden braking is required to negotiate a sharp curve.\n…the U.S. DOT found that if LCVs increased their operations nationwide, they would suffer an 11 percent higher overall fatal crash rate. This finding was further confirmed in another DOT study that specifically cautioned against the increased use of long combinations pulling multiple trailers because of amplification or sway of the last trailing units and poorer control of load transfer as compared with single semi-trailer trucks which makes LCVs more prone to out-of-control and rollover crashes.\nAnother coalition of 150 companies, including Kraft Foods, Coca-Cola, and Miller-Coors, is currently lobbying Congress to allow trucks weighing 120% of current limits to haul on interstate highways. Congress is preparing to consider a bill allowing states to raise truck weight limits on interstate highways from 80,000 pounds to 97,000. Trucks would then be required to add a sixth trailer axle to compensate for additional weight.\nOpponents of the bill include railroad companies, which fear loss of market share to larger trucks, a group of survivors of the 2007 Minneapolis bridge collapse, state public-safety officials, and some independent truckers. These independents, represented by the Independent Drivers Association, fear they would be pushed into buying costly new rigs. The IDA’s representative says stability is \"substantially reduced on bigger and heavier trucks.\"\nMap: Coalition Against Bigger Trucks\nAlso supporting larger, heavier trucks is the Reason Foundation, a Los Angeles-based, self-described libertarian think-tank. The Foundation is a driving force in support of greatly expanding the role of trucking in the U.S. 
The Foundation supports tolling all interstates to generate money to maintain and build more highway lanes. They also advocate for private toll roads and truck-only toll roads both for LCVs and for general truck transportation. The foundation endorses greater federal spending to re-construct interstate highways, including a new national system of truck-only toll lanes.", "score": 15.758340881307905, "rank": 80}, {"document_id": "doc-::chunk-1", "d_text": "Accelerating truck freight volume has many deleterious public health and safety implications for those traveling these routes, and residents who live near these burgeoning “freight corridors.”\nIn Tennessee, the state Department of Transportation projects that heavy truck VMT will grow 50% faster than the growth rate for any other vehicle type.\nTennessee Department of Transportation, Plan Go--I-40/I-81 Corridor Feasibility Study, Bristol to Memphis, TN, 2008\nWhile large trucks accounted for three percent of all registered vehicles and seven percent of total vehicle miles traveled in 2003, about 12 percent of all traffic fatalities in 2004 resulted from a collision involving a large truck.\nA battle is raging over the expanding role trucks play in American transportation. The Coalition Against Bigger Trucks is an alliance of consumer, health and safety and major insurance companies and insurance agents’ organizations. The banner “headline” on their home page says, “One triple trailer truck is as long as a Boeing 737 and as heavy as 27 SUVs.” This coalition is a leader in the fight against state or federal legislation that would allow Longer Combination Vehicles (LCVs)—“Triples” trucks hauling three 28-foot trailers, “Doubles,” trucks pulling two trailers—one at least 48 feet, the other 28 feet long, or two 48-foot or longer trailers.\nOffice of Transportation Policy Studies, FHWA U.S. 
DOT,\nLobbying pressure from the railroads and safety advocates led Congress in 1991 to freeze expansion of the operation of LCVs on interstate highways to only those western states where they were then legal. However, the 19 state governors of the Western Governors’ Association recently petitioned Congress to allow trailer doubles and triples up to 120 feet long on Western interstates.\nTestifying before the U.S. Senate Environment and Public Works Committee in July 2008, Jackie Gillan, Vice President of the Coalition, explained in detail some of the major safety concerns about heavier and longer trucks and LCVs with multiple trailers:\nEach year, about 5,000 people die in crashes involving big trucks and this fatality toll has not changed in the past decade. A large part of the reason is the increased numbers of heavier trucks, sometimes pulling two and even three trailers …\nRoadcheck 2008 found that 52.6 percent of all commercial motor vehicle defects resulting in OOS [Out of Service] orders were faulty brakes. The U.S.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-3", "d_text": "You can regain your speed in a matter of a few seconds. Trucks take a lot more energy and time to do the same.\nAbout that cruise control, a lot of you need to understand how that works in your own cars. It’s profoundly frustrating to have to leapfrog with cars who can’t hold a constant speed. I mean, you really have no idea. About the only thing I can think of here is like trying to pass someone on foot in a crowded airport as you are running to catch a connector, and the people in front of you keep moving left and right, oblivious to you behind them, and you can’t get around. I know it’s not the exact same, but you know how irritating that is, right? Passing a truck and slowing down so they have to pass you is profoundly worse.\nDo not (and I can’t stress this enough) ride alongside a truck, ever. Scenario: You’re in the left lane trying to pass a truck in the right.
If there’s a car in front of you, and you’re both not passing the truck quickly, and you’re sort of “lingering” waiting for the a*&hat in front of you to move on, you’re in the kill zone. Two horrible things can happen. First and most obvious, if the truck loses control or has to change lanes quickly, you’re going to get crushed because the truck has nowhere else to go. You’re the path of least resistance. You’re probably gonna die. Second is that trucks tend to have tire blowouts. If you’ve never seen the power of a truck tire blowout, take a minute to look up such videos on YouTube. It may scare the shit out of you from the comfort of your chair right now. And if it scares the shit out of you while you’re behind the wheel, you will lose control of your own car and drive under the rig or off the road. It’s exactly like hearing a bomb go off, and sometimes the tire exits the truck with such force that it will knock a car off the road. Your reaction skills likely will not save you. Many times, the truck can lose control and crash. It always swerves right before the worst, and if you’re in that kill zone, you’re going out with it.", "score": 15.652736444556414, "rank": 82}, {"document_id": "doc-::chunk-2", "d_text": "That’s a deadly combination that could easily lead to an accident.\nThe government has actually passed laws to make sure that drivers get the amount of sleep they should. The requirement is that drivers may drive for 11 hours only after ten consecutive hours of rest. That “rest” isn’t just sleep – it’s eight hours of sleep mixed with a two-hour break. Unfortunately, not every driver sticks to this. That can lead them to be fatigued when they head back out on the road, thus raising the possibility that they cause a crash.\nThat being said, studies have shown that a minority of big rig accidents are actually caused by the big rig drivers themselves. Instead, they’re caused by other drivers. This can happen for a variety of reasons.
Trucks have large “blind spots,” bigger than a car or other kind of vehicle might have. If you’re driving in this vehicle’s blind spot, no matter how talented and conscientious the truck driver is, they might not be able to see you.\nIf you’ve never driven a big rig, then you might not be aware of how fast they can be. Too many accidents occur every year because someone misjudged the speed of a big rig. They thought it wasn’t going as fast as it actually was, and then crashed into it. This can also manifest itself when someone merges in front of the truck and then reduces their speed. Big rigs can do a lot of things, but they don’t exactly stop on a dime.\nMaking a left turn in front of a truck is often necessary, particularly if you’re on a major highway. However, you want to be as careful as possible when doing this. Make sure that you’re paying the truck the proper respect and not treating it like it’s some kind of regular car. Obviously, whenever you turn, you always want to signal. However, you want to be especially clear about signaling when you change lanes around a big rig. You’re already a careful driver; when it comes to big rigs, it doesn’t hurt to be a bit more careful.\nThe LA Injury Group can help you through every kind of big rig accident. These accidents are more common than most people think. In southern California, with so much time spent on the road, with so many trucks around, these accidents occur frequently. When you’re hurt in a big rig accident, it’s natural to not know where to turn.", "score": 13.897358463981183, "rank": 83}, {"document_id": "doc-::chunk-1", "d_text": "The public needs to understand that time restrictions are all fine and well if there is adequate parking, and every rule restricting drive hours needs to be accompanied by money to fund it. Making rules is pointless if you don’t provide people a way to follow them.\n4. Being impatient isn’t going to make us go faster. 
Honking and pulling around us unsafely isn’t going to do anything other than risk both our lives and the lives of everyone else around us. Bryan has excellent advice. “This is not NASCAR out here! They are not saving any gas by being two feet off our trailer. It makes us nervous.” He goes on to say, “If we are kind enough to move over to allow you to get on the freeway, then do so. Move ahead of us, or slow down enough so we can get back in the slow lane where we belong.”\n5. No matter how big your personal vehicle is, it’s not anything like driving a commercial vehicle. You can’t project your driving experience in a Hummer with a boat trailer onto understanding what it’s like to drive a tractor-trailer — it’s not the same, it’s not even close, it never will be. If you want to learn what it’s like to drive a tractor-trailer, ride along in one for a day. A lot of states and private entities have instituted truck awareness programs and implemented driving safety around commercial vehicles in their drivers’ education. If you are a newly licensed individual, or have teenagers who will be driving soon, this type of education is incredibly important, especially if you’re using the highways. Which leads us to more advice from drivers.\n6. “We make wide turns…if we swing to the left and have our right signal on it isn’t because we are dyslexic…we need that much room.” Candy Crichfield goes on to explain the nuances of those wide rights: “If you see I am going to be turning your way (hint: there are little flashy lights all along the side of the truck indicating that I am going to turn) stop short and let me around.”\n7. Christine Gonyea and Richard Porky Young both mentioned the importance of four wheelers understanding blind spots.", "score": 13.897358463981183, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "By Garland H.
Campbell, Dublin, VA\nThose of you who think confining trucks to right-hand lanes will promote highway safety need to hear from someone older and wiser. I’m 62, with over 30 years of trucking the 48, two and a half million safe miles. An important rule of safe driving is to maintain a “cushion of air,” that is, space, all around your vehicle. That’s no one in front or behind or on either side of you. This is being destroyed with the possibility of lane restrictions!\nAlready you have the public (four-wheelers) trained to keep to the right. Most of them are not at work, just in transit, with their minds who knows where. They create a significant volume of traffic in those right lanes. Why then do you want to further congest those lanes with our big trucks? To move us to where we are not welcome, and where there is no room for us?\nYou deny me my rights to exercise my skills of safe driving by presuming to make choices for me! You attempt to put into a box what will not fit in a box. Therefore, you assume all liability for my safety and the safety of the motoring public by posting your lane restrictions. My first priority is to drive safely and protect myself!\nIf a truck length is 70 feet and you attach four spaces to each truck and you have 5,280 trucks, how many miles of highway do you need? Now, answer the same question for cars, say adding 30 feet per car length. Now, add them together. Remember, according to guidelines, four spaces are only good for 40 mph.\nNow do you see why my advice is to remove all lane restrictions so the traffic can make full use of the highway space we have? No one in an office can make the decision for a driver on the highway. He must be free to make his own decision. 
Remove the signs.", "score": 13.897358463981183, "rank": 85}, {"document_id": "doc-::chunk-1", "d_text": "Many large commercial truck companies dispatch their defense attorneys and accident investigators to the accident scene as soon as possible following an accident involving a large truck or tractor-trailer. This occurs at any time of day or night and, in many cases, the defense attorneys and/or accident investigators can arrive before the local police! These commercial companies have a lot at stake after an accident, since their big rigs can cause serious injuries. The defense attorneys will prepare the driver to speak with the police in a way that minimizes the exposure of the trucking company, if at all possible. For these reasons, it is extremely important that you contact a California commercial truck accident attorney if you have been injured or suffered the loss of a loved one in a large California truck or tractor-trailer accident.\nSome Reasons Why Large Commercial Truck Accidents Occur:\n- Inattentive drivers or drivers who fall asleep at the wheel (this is a big reason!)\n- Negligent maintenance and/or repair of tractor trailers (failure to properly maintain)\n- Bad weather conditions, including rain, snow, or ice\n- Inability to brake in time to avoid an accident\n- The physics do not lie: Tractor-trailers can weigh 10 times more than a passenger vehicle, and sometimes even more than that. Therefore, the amount of time it takes for a large truck or tractor trailer to brake and stop is significantly greater than that of a passenger vehicle. If a tractor-trailer driver does not maintain a reasonable and safe distance behind passenger vehicles while driving, they may be unable to stop and avoid an accident in the event of an emergency.\nUnited States federal regulations limit the amount of time a driver of a tractor-trailer may spend driving in a single day, and over the course of a week.
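The space riddle posed in the driver's letter above (70-foot trucks, four spaces each, 5,280 of them; then cars at 30 feet) can be worked through directly. A minimal sketch, assuming a "space" means one vehicle length of following distance, per the letter's 40 mph guideline:

```python
# Worked version of the highway-space riddle from the driver's letter above.
# Assumption: each vehicle occupies its own length plus four lengths of
# following distance ("four spaces").

FEET_PER_MILE = 5280

def miles_needed(vehicle_length_ft: float, n_vehicles: int, spaces: int = 4) -> float:
    total_ft = n_vehicles * vehicle_length_ft * (1 + spaces)
    return total_ft / FEET_PER_MILE

trucks = miles_needed(70, 5280)      # 350.0 miles for the trucks
cars = miles_needed(30, 5280)        # 150.0 miles for the cars
print(trucks, cars, trucks + cars)   # 350.0 150.0 500.0
```

Under that assumption, 5,280 trucks alone need 350 miles of lane, and adding the same number of cars brings the total to 500 miles, which is the letter's point about how much highway space safe following distances actually consume.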
This is regulated for good reason because, again, there is a lot at stake with big rig truck drivers and potential vehicle accidents. However, reality can dictate what the truck drivers actually do. Since truck drivers are under pressure from their corporate employers to deliver their cargo and/or make additional earnings, commercial truck drivers frequently disregard federal and state law and drive for more hours than permitted. Some even go so far as to maintain and falsify separate sets of log books/sleep logs, and use medication to keep themselves \"artificially awake.\"", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-2", "d_text": "Resident David Bastille has created signs for the neighborhood, which read “Big trucks belong on the big roads.” Fellow resident Suzanne Lehmert Adelman is helping distribute these signs, available for $10 to cover production costs.\n“We have a very big problem over here. We are under siege from these HUGE trucks, and it gets worse every day. We are doing our best, but let’s face it ... they are bigger than us,” says Lehmert Adelman, a school bus driver for the town of Holliston.", "score": 13.190487705459049, "rank": 87}, {"document_id": "doc-::chunk-1", "d_text": "You can’t see around it, you don’t know why they’re driving slowly, and you won’t succeed in trying to move it with your car. And if you happen to hit it, there’s a chance the truck won’t even move, so you could wreck and the driver would never know.\nAgain on the highway, please for the love of all humanity, don’t cut off a truck by leapfrogging it and jumping between it and the car or truck it is following. You’re going to scare the hell out of the driver, and you are taking a stupid risk in getting crushed if anything goes wrong with the truck we’re driving. Plenty of times I was in the passing lane following another vehicle as we passed a slower one, and someone would come up the right lane and try to butt in and get ahead.
One guy tried to push me with his little 4 door Jeep Wrangler. Seriously, he tried coming over on my lane, hand out the window giving me the finger, trying to get me to slow down so he could cut in front. I didn’t budge, but he kept trying to “occupy” my lane. He was taking his life and that of his passengers into foolish territory, because the truck I was in is 60,000 pounds. It would have rolled right over the Jeep. Some of you are thinking “You should have slowed down and let him over.” Ok, so now you’re the car behind me who is tailgating me, and I slow down to let this impatient guy cut me off with less than a car length at 70mph. Now you’re slowed down because I’m slowed down, and the cars behind you are slowed down. And if you’re tailgating me, you’re probably going to wreck horribly if the guy in front of me does manage to get under my wheels. And all this because some douche thought he was better than everyone in the left lane, and rode out the right lane to pass everyone else. If we all come to a stop 30 miles down the road, I assure you, that guy would have gained about 10 car lengths on me. Look at the risks he took for such a small accomplishment.\nJust because our trucks are large doesn’t mean we’re going slow or that we can stop.", "score": 11.600539066098397, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "Related to #4, driveways should not be located too close to nearby street intersections. Doing so will create offset or dog-leg intersections with other streets or high-volume driveways. Offset intersections can create erratic traffic patterns and detract from drivers' abilities to look out for pedestrians.\n6. Skewed driveway and street intersections (those not at right angles) can cause problems. Intersection angles should be between 75 and 90 degrees.\n7. It is often desirable for parking lot exit driveways to have two lanes, one for left-turning vehicles and one for right turners. 
This helps reduce congestion, because the right-turning cars can proceed while the left turners are waiting for traffic from the right to clear.\n8. Lane-use signs and pavement markings specified by the federal Manual on Uniform Traffic Control Devices (MUTCD) should be employed to demark traffic lanes. The examples in the MUTCD represent decades of experience with what works and what doesn't.\n9. Signs, utility poles and other appurtenances should not be too close to the edge of the traffic lanes or parking areas. Objects within striking distance of an overhanging bumper will be struck. Even for low-speed situations, the major engineering manuals recommend at least a 1.5- to 2-foot clearance from the face-of-curb to the near edge of any roadside appurtenance. It is not unusual to see sign installation crews place the sign pole behind the curb, but not allow for the width of the sign, so the sign itself protrudes out into the space occupied by vehicles.\n10. A school bus is wider and longer than a passenger car, so it requires more room to maneuver. School buses also have much greater offtracking on sharp turns. Lanes and aisles intended for school buses need to be wider so the bus will not sideswipe other vehicles. The designer must check driveways and streets intended for school bus use to make sure that all intersections and curves provide plenty of room for the bus to turn. The designer may even need to make measurements of the turning path taken by school buses and provide a margin in excess of the minimum space needed for school buses.
Recently published results of field tests conducted at the University of Arkansas included drawings of the measured turning paths of the largest Type C and D school buses.", "score": 11.600539066098397, "rank": 89}, {"document_id": "doc-::chunk-1", "d_text": "Many people owe their livelihoods to trucking: according to the American Trucking Associations, there are 7.4 million people employed in the American trucking industry, including 3.5 million truck drivers.\nThe Federal Motor Carrier Safety Administration’s 2018 Pocket Guide to Large Truck and Bus Statistics reports that in 2016, there were more than 2.7 million tractor-trailers registered in the U.S., and they travelled a total of 287.9 billion miles on U.S. roads that year. (The FMCSA is a division of the U.S. Department of Transportation.)\nAlso found in that publication is 2016 roadside inspection data. This data shows that a total of 342,736 drivers were cited for failure to log, update, or provide accurate information in their Driver’s Record of Duty Status in that year alone. 51,149 drivers had worked beyond the eight-hour limit since the end of their last off-duty or sleeper period.\nAs a result, there were more than 500,000 collisions reported involving large trucks in the U.S. in 2016. Of these crashes, 119,000 led to injuries and 4,213 led to deaths.
According to early data for 2017, there were 4,455 fatal crashes.\nCauses of Large Truck Accidents:\n- The truck is traveling too fast for the conditions;\n- The truck is following the vehicle in front too closely, resulting in a rear-end collision;\n- The truck is not in a roadworthy condition;\n- The truck’s brakes are worn or otherwise incapable of stopping the truck in time to avoid a collision;\n- The truck driver is fatigued or falling asleep at the wheel from not taking the required rest periods;\n- Driver inattentiveness or distraction;\n- Drug use, especially to keep the driver awake;\n- Making an illegal maneuver;\n- Failing to yield the right of way before entering a roadway or making a turn;\n- The driver losing control of the truck due to such things as a shifting load or inclement weather conditions such as rain, snow, or high winds.\nWho Regulates Trucks?\nThe FMCSA issues federal regulations regarding truckers, including quality control, weight limitations, and mandated rest time. The FMCSA also enforces the Commercial Motor Vehicle Safety Act of 1986, which outlines the standards for obtaining a commercial driver’s license (CDL).", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-1", "d_text": "Villasenor is one of 45 students taking classes at the 6-year-old Star school to try to master driving trucks bearing trailers that legally can extend 53 feet and weigh up to 80,000 pounds. Just managing the oversize wheel is like hugging a beach ball.\nFor Babbitt, co-founder of the school, it has taken nearly a quarter-century of professional driving to make him more comfortable in the roomy confines of a truck's cab than in his regular car. But he knows it is not the same for beginners.\n\"Seventy percent of the students come in here thinking it would be easy,\" Babbitt said as he stood in a 100-yard strip of blacktop outside Star Truck's offices. \"They think, `I like to drive, so I'll be a truck driver.'
Then they say, `I had no idea it was this hard.' \"\nThat is why Babbitt's school spends so much time on the basics, requiring students to perform maneuvers again and again--backing up straight, backing into docks at an angle, turning right. Babbitt and three other instructors put students through their paces, shifting the company's Ford 2500Ns and practicing among cones set up in a shipping terminal lot just north of the Hawthorne Race Course.\nStudent Jerry Gagliano, 40, a Cicero cosmetologist, repeatedly backed the large white tractor-trailers through the cones, sometimes with a bob and a weave, sometimes with the load listing behind the rig. But never perfectly.\n\"I know the principle behind it,\" he said later. \"I just need to not do it too fast.\"\nThat is the basic problem with Villasenor's right turns. Babbitt patiently coaches him \"to get tight to the curb\" and \"watch your trailer all the way through until it hits the pinnacle of a corner, the apex,\" but Villasenor has yet to accomplish a completely on-target turn.\nAnother hard thing to master, he said, is learning to trust the outsized mirrors that help him chart the movement of his trailer. Still, even the mirrors cannot help a trucker with the enormous blind spots.\nThen there is shifting, which is more complicated than handling the manual transmission in an automobile. In cars, drivers depress the clutch once. But in a truck, operators must work it twice: once to move out of gear, then again to move to another.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-2", "d_text": "In many countries, driving a truck requires a special driving license. The requirements and limitations vary with each different jurisdiction.\nAs we've learned, the Federal-Aid Highway Act of 1956 was crucial in the construction of the Interstate Highway System, described as an interconnected network of controlled-access freeways.
It also allowed larger trucks to travel at higher speeds through rural and urban areas alike. This act was also the first to establish federal gross vehicle weight limits for trucks, set at 73,208 pounds (33,207 kg). The very same year, Malcolm McLean pioneered modern containerized intermodal shipping. This allowed for the more efficient transfer of cargo between truck, train, and ships.\nThe public image of the trucking industry in United States popular culture has gone through many transformations. However, images of the masculine side of trucking are a common theme throughout time. The 1940s first made truckers popular, with songs and movies about truck drivers. Then in the 1950s they were depicted as heroes of the road, living a life of freedom on the open road. Trucking culture peaked in the 1970s as they were glorified as modern-day cowboys, outlaws, and rebels. Since then the portrayal has taken on a more negative connotation, as we see in the 1990s. Unfortunately, the depiction of truck drivers went from such a positive image to that of troubled serial killers.\nThe United States' Interstate Highway System is full of bypasses and loops with the designation of a three-digit number. They usually begin with an even digit, but it is important to note that this pattern is highly inconsistent. For example, in Des Moines, Iowa, the genuine bypass is the main route. More specifically, it is Interstate 35 and Interstate 80, with the loop into downtown Des Moines being Interstate 235. As is illustrated in this example, they do not always consistently begin with an even number. However, the 'correct' designation is exemplified in Omaha, Nebraska. In Omaha, Interstate 480 traverses the downtown area, which is bypassed by Interstate 80, Interstate 680, and Interstate 95. Interstate 95 then in turn goes through Philadelphia, Pennsylvania.
Furthermore, Interstate 295 is the bypass around Philadelphia, which leads into New Jersey. Although this can all be rather confusing, it is most important to understand the Interstate Highway System and the role bypasses play.", "score": 9.460542230531878, "rank": 92}, {"document_id": "doc-::chunk-12", "d_text": "And quit calling me Shirley.\nI climbed up into the city from I-95. There was a sign directing Wide Load traffic to the left. There are also several steel columns for the elevated train all over the road. There must also be a school, because backpack-toting pedestrians are everywhere. In front of me is a street; two traffic lanes and a left turn center lane. The steel columns are on either side of the center lane, making it a tunnel. While the light is still red, I scan the scene, calculating if I should angle through the center lane into the far right or if I should go all the way through the intersection and make the full turn.\nNormally a left turn is much preferred to a right turn in a semi. Your trailer will 'off-track' as you pull it through the corner. This causes the trailer to turn further inside the corner than you and the cab do. A left turn gives you the whole road to work with. A right turn is tight. Trucks will take out stop signs, light poles and pedestrians if the driver is not careful. The steel columns in the middle of traffic pretty much make this left more like a right turn.\nTime's up!! The light is green. On impulse, I take the full turn. I pushed my luck enough back in the construction zone. Halfway through the turn, I am way too close to one of the steel columns. I turn wider and ride the curb with my right hand steer tire. We just make it through. The next light is a right turn back to the highway. I am taking this turn very wide too. On the entrance ramp, there is one of those little triangular island curbs to ease the flow around the curve and separate the traffic coming straight across from the left.
The backpack-toting crowd all jostle to a halt as I go right up and over the island. My diesel tanks are just 8\" above the ground. Luckily the curb is quite low. No sense in having a HazMat spill in the city. Whew, I am back on the highway and headed to my pickup. How the hell would a Wide Load get through there?\nI get back to the area around I-495 and realize the exit goes both east and west. I don't know why the Atlas uses different shields for the two roads. I quickly find Van Dam and exit again.", "score": 8.086131989696522, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "Websites and Citations:\n- Theme Music: Five Star Fall, Mercurial Girl, Magnatune.com\nHello and welcome to another edition of Talking Traffic. My name is Bill Ruhsam and I host this podcast and its sister website, talking traffic dot org. Today is Monday, June 27, 2011. This is episode 39 of Talking Traffic.\nToday’s topic is about trucks. All kinds of trucks. But before I dive into the nebulous term that is “trucks” let me throw some engineer-speak at you.\nWhen we are designing or analyzing roads, we talk about the “Design Vehicle”. The design vehicle is the largest type of vehicle that is most likely to use that road. For example, if I’m working on a residential subdivision, I don’t need to design the road to allow a tractor-trailer to turn around in a cul-de-sac. No, a typical large vehicle for a subdivision would be a UPS or FedEx truck. Now, the occasional tractor trailer might come into that roadway, maybe a moving truck, but it only happens occasionally, and the inconvenience caused by having to back up a tractor-trailer to turn it around is small compared to the inconvenience of designing the subdivision to allow the truck to drive around as if it were an interstate.\nIt’s important to determine at the very beginning what your design vehicle is because it will affect many different things in your roadway project.\nNow, let’s talk about trucks.
When I, as a traffic engineer, say “truck”, I mean some specific types of vehicles. I’m not talking about pickups or SUVs or dualies or anything that you might see in a Ford commercial. I’m talking about larger vehicles that are intended to carry freight. These trucks break down into two categories: Single Unit trucks and Multi Unit trucks.\nSingle Unit trucks are trucks that don’t articulate, that have a single frame to which the wheels are attached. UPS and FedEx trucks are good examples of these. Dump trucks and garbage trucks and smaller moving trucks are all examples of the single unit truck.\nMulti-unit trucks include the typical tractor-trailer combination that you see everywhere. These come in various sizes measured from the center of the front axle to the center of the rearmost axle. So, when I’m throwing engineering speak at people.", "score": 8.086131989696522, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "Delivering the Goods Before you make a single delivery, our local delivery trucks have already made the rounds. Designed with our DriverFirst™ philosophy, they come built with the driver’s needs in mind. Besides a cab that maximizes comfort, safety and drivability, our Diamond Logic® advanced electronic system brings new levels of brainpower, flexibility, and convenience. And so you can make the rounds without making a dent, we’ve included a collision mitigation system. Standard.\nThe International trucks have the best turning radius out there. Our drivers are really happy to have them.
When you’re driving the streets of Manhattan, you don’t have much space to work with.”", "score": 8.086131989696522, "rank": 95}, {"document_id": "doc-::chunk-0", "d_text": "GREEN BAY - Round and around they go.\nThe circular intersection on Green Bay's west side stays pretty busy.\nThe DOT says anywhere from 40,000 to 45,000 cars and trucks travel through this roundabout on West Mason every single day.\nBut not everyone makes it through safely.\nOfficials say in the last four months at least three semis have tipped over in the roundabout at West Mason Street near U.S. 41.\nAre roundabouts built to handle semis?\n\"A contributing factor in all three of them was drivers going too fast,\" said Randy Asman, DOT traffic engineer.\nWith more than 11,000 drivers, Schneider says it makes sure its drivers are well trained before they head out on the road full time.\nThat training also includes how to navigate a roundabout.\n\"We talk to them about, most critically, to look ahead and read the signs as to which lane they should set up in based on the way they want to navigate through. So are they going to do a through movement, a left turn, or a right turn and make sure that they're in the correct lane,\" said Donald Osterberg, Schneider's Senior V.P. of Safety & Security.\nBrown County has the highest number of roundabouts in the state, totaling 47.\nThe DOT says they aren't going away; by 2018 you'll be seeing 26 more.\nThat's because the circular intersections are considered much safer. Studies have shown roundabouts can reduce crashes by 75 percent at intersections where stop signs or signals used to be.\n\"Semis fit in all of our roundabouts. There's a truck apron on the inside to help trucks maneuver through the roundabout. So are they not safe for roundabouts?
I would say that's not the case,\" Asman said.\nAsman says no matter what you're driving, you should slow down and always pay attention while in a roundabout.\nThe DOT says drivers should give trucks enough room while traveling through a roundabout and never try to pass them.", "score": 8.086131989696522, "rank": 96}]} {"qid": 8, "question_text": "What was the purpose of the uprising in Italy in 1861 led by Garibaldi and his thousand men?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "1000 (Viva V.E.R.D.I.) is a song based on the historical battle that took place in Italy in 1861 where a group of political figures in the north banded together to overthrow the Austrian, French, and Spanish regimes in Italy. The aim was to end foreign occupancy, unite Italy under one flag, language, and monarch, and aim for a future free from oppression.\n1861, Giuseppe Garibaldi leaves the shores of Genova for Sicily with 1000 men dressed in denim jeans and red shirts. The locals know there will be an uprising and they are ready. Although they are poor and have little, they are armed with centuries of pent up fury. There is a promise made to southern Italy: that unification under the Savoia monarchy will drive out the foreign occupants including the Spanish Bourbons, and put the power back in the hands of the people. There is a growing hope for change – a growing belief that the futility of their century’s old fate can indeed come to an end. They will sacrifice everything to turn on their masters. They cannot go back. They will not lose.\n3. 
1000 (VIVA V.E.R.D.I.)\nRomina (voice, guitar), Francesco Pellegrino (colascione, tammorra), Gianluca Campanino (tammorra), Ben Grossman (tammorra, percussion), Mike Herriott (trumpet, french horn, trombone), Drew Jurecka (violin), Rebekah Wolkstein (violin), Shannon Knights (viola), Rachel Pmedli (cello), Roberto Occhipinti (double bass).\nAscia, gancio, sega, pala, / Ax, hook, saw, shovel\nfalce, raspa, zappa, spada, / scythe, rasp, hoe, sword,\nlama, pietra, unghia, denti, / blade, stone, fingernail, teeth,\nurli, strilli, maledetti! / hollers, screams, cursed people!\nO diman’ O diman’ / O tomorrow, O tomorrow\nA Marsala, Garibald’ / Garibaldi at Marsala\nSuona a tromba, suona a guerra / Sound the trumpet, Sound the war\nSuona in cielo, suona in terra.", "score": 53.01987491428436, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "France sent 30,000 men to fight for the Pope and when these troops attacked Rome, Garibaldi organised the defence of the city while Mazzini controlled its government. Garibaldi's troops, wearing their famous red shirts, were defeated, but most of them managed to escape, and Garibaldi went to the United States. Returning to Europe in 1854, he settled on Caprera, a tiny island off Sardinia.\nIn 1859 Garibaldi was fighting the Austrians in north Italy once more, this time successfully. In May 1860, with 1,000 volunteers, he landed on the island of Sicily (off southern Italy) where the people were suffering under the brutal rule of King Francis of Naples. Within three months 25,000 of King Francis's troops were defeated, and Garibaldi led his men to the mainland of Italy, won the Battle of Reggio and marched on Naples. Francis fled and his kingdom of southern Italy was handed over to King Victor Emmanuel of Sardinia, who afterwards became King of the whole of Italy. 
Garibaldi, rejecting all offers of wealth and titles, returned to his simple farm on the island of Caprera.\nFrom there in 1862 he returned once more to raise an army and march on Rome, which was still a separate state ruled by the Pope. However, Victor Emmanuel did not want further strife and forced Garibaldi to stop. In 1867 he tried again and was defeated by the Pope's army, which had French troops fighting with it.\nGaribaldi retired to his farm and died at the age of 75, honoured and mourned throughout the nation he had fought to free.", "score": 49.16589974281659, "rank": 2}, {"document_id": "doc-::chunk-9", "d_text": "By the end of the eighteenth-century, with revolutionary ferment changing the monarchical and political landscape across the European continent, and especially in France, Sicily also came close to ridding itself of its monarchy. By the late 1840s, corruption, mutiny, and violence effectively ended the rule of the Bourbons. This made Sicily into the ideal starting point for Garibaldi’s campaign to unify Italy, and he arrived on the island with his 1,000-strong army of Red Shirts in 1860.\nGaribaldi, Unification, World War II, through to today (1860-to date)\nOnce Giuseppe Garibaldi had landed at Marsala in June 1860, his army was joined by increasingly large numbers of dissident Sicilian fighters, whose loyalty no longer rested with their Bourbon rulers. United, with a strong sense of patriotic purpose, a fair degree of strategic cunning and bare-faced bravura, and a good dose of luck, they eventually defeated the Bourbons. In 1870, Garibaldi officially handed Sicily over to the king of the rest of Italy, Victor Emmanuel II, of the royal house of Savoy, and Sicily became part of the newly unified Kingdom of Italy. However exuberance was quickly dispelled, when the nature of the new order became clear, with only 1% of Sicilians being entitled to vote in the new Italian Parliament. 
Sicily was once again the outpost of an empire, with absentee rulers who understood little and cared less about the Sicilians who struggled to make a subsistence living from agriculture and fishing.\nOver the following century, the poverty of the island led to mass emigration. One and a half million Sicilians found their ways to the Americas, and it was in America that they were recruited to be willing participants in the final invasion of the island, by supplying US Intelligence with detailed information on the topography of the island, its towns, and the names of those in Sicily who would assist their cause. In July 1943, the US Army and allied forces, under the five-star leadership of Generals Patton and Montgomery, landed at Gela and Pozzallo, respectively. They numbered over 160,000, which was larger than any invading force, at any point in Sicily’s long history!", "score": 47.80858175902564, "rank": 3}, {"document_id": "doc-::chunk-6", "d_text": "Italian unification and Liberal Italy\nMain articles: Italian unification and Military history of Italy during World War I\nThe legendary \"handshake of Teano\" between Giuseppe Garibaldi and Victor Emmanuel II: on 26 October 1860, General Garibaldi sacrificed republican hopes for the sake of Italian unity under a monarchy.\nThe creation of the Kingdom of Italy was the result of efforts by Italian nationalists and monarchists loyal to the House of Savoy to establish a united state encompassing the entire Italian Peninsula. In the context of the 1848 liberal revolutions that swept through Europe, an unsuccessful war was declared on Austria. The Kingdom of Sardinia again attacked the Austrian Empire in the Second Italian War of Independence of 1859, with the aid of France, resulting in liberating Lombardy.\nIn 1860–61, Giuseppe Garibaldi led the drive for unification in Naples and Sicily, allowing the Sardinian government led by the Count of Cavour to declare a united Italian kingdom on 17 March 1861. 
In 1866, Victor Emmanuel II allied with Prussia during the Austro-Prussian War, waging the Third Italian War of Independence, which allowed Italy to annex Venetia. Finally, as France abandoned its garrisons in Rome during the disastrous Franco-Prussian War of 1870, the Savoy rushed to fill the power gap by taking over the Papal States.\nItalian infantry at the Battle of Isonzo. More than 650,000 Italian soldiers lost their lives on the battlefields of World War I.\nThe Sardinian Albertine Statute of 1848, extended to the whole Kingdom of Italy in 1861, provided for basic freedoms, but electoral laws excluded the non-propertied and uneducated classes from voting. The government of the new kingdom took place in a framework of parliamentary constitutional monarchy dominated by liberal forces. In 1913, male universal suffrage was adopted. As Northern Italy quickly industrialized, the South and rural areas of the North remained underdeveloped and overpopulated, forcing millions of people to migrate abroad, while the Italian Socialist Party constantly increased in strength, challenging the traditional liberal and conservative establishment.\nStarting from the last two decades of the 19th century, Italy developed into a colonial power by forcing Somalia, Eritrea and later Libya and the Dodecanese under its rule.", "score": 46.48740474203641, "rank": 4}, {"document_id": "doc-::chunk-10", "d_text": "Muslims, Jews, Byzantine Greeks, Lombards, and Normans worked together fairly amicably. During this time many extraordinary buildings were constructed (\"Norman Sicily of the 12th Century\").\nItalian unification\nThe Expedition of the Thousand, led by Giuseppe Garibaldi, captured Sicily in 1860 as part of the unification of Italy. The conquest started at Marsala, and native Sicilians joined him in the capture of the southern Italian peninsula.
Garibaldi's march was completed with the Siege of Gaeta (1861), where the final Bourbons were expelled and Garibaldi announced his dictatorship in the name of Victor Emmanuel II of the Kingdom of Sardinia. Sicily became part of the Kingdom of Sardinia after a referendum in which more than 75% of Sicily voted in favour of the annexation on 21 October 1860 (but not everyone was allowed to vote). As a result of the proclamation of the Kingdom of Italy, Sicily became part of the kingdom on 17 March 1861. The Sicilian economy (and the wider mezzogiorno economy) remained relatively underdeveloped after unification, in spite of the strong investments made by the Kingdom of Italy in terms of modern infrastructure, and this caused an unprecedented wave of emigration, the Italian diaspora. In 1894, organisations of workers and peasants known as the Fasci Siciliani protested against the bad social and economic conditions of the island, but they were suppressed in a few days. The Messina earthquake of 28 December 1908 killed more than 80,000 people. This period was also characterized by the first contact between the Sicilian mafia (the crime syndicate also known as Cosa Nostra) and the Italian government. The Mafia's origins are still uncertain, but it is generally accepted that it emerged in the 18th century, initially in the role of private enforcers hired to protect the property of landowners and merchants from the groups of bandits (briganti) who frequently pillaged the countryside and towns.
The Kingdom of Italy's battle against the Mafia was controversial and ambiguous.", "score": 46.319908389736426, "rank": 5}, {"document_id": "doc-::chunk-3", "d_text": "There is much discussion even today about exactly how a relatively small band of Garibaldini, augmented at most by a few thousand irregulars picked up along the way, managed to make their way up the peninsula against what, at least on paper, appeared to be an overwhelmingly superior force. It is probably best to view Garibaldi's victory as resulting from a combination of factors. First, Garibaldi, himself, was a master of the hit-and-run harassing tactics that would one day become known as \"guerrilla warfare.\" He was also a firm believer in Napoleon's dictum that \"morale is to material as ten is to one\"—and his Redshirts had morale to burn. They were the righteous bringers of a new nation, and there is little doubt that large numbers of the long-suffering peasantry in Calabria and Puglia (perhaps less so as he moved further north towards Naples) genuinely viewed them as liberators.
The situation in the Bourbon military also worked to Garibaldi's advantage. There was massive desertion among royalist troops, many of whom felt that they were now bound up in defending a lost cause. Additionally, the officer corps had been bitterly split for at least a decade between old-guard royalists and those who felt that the time for a united Italy had come at last.
All this, and more, combined to produce the unlikely sight, on September 7, 1860, of Giuseppe Garibaldi and a small group of companions entering Naples unopposed, by train (!) from Salerno and then in an open carriage from the station to the Royal Palace. They were miles ahead of the army. The king had fled to Gaeta the day before, and the city and remaining troops welcomed the Risorgimento by giving Garibaldi a hero's welcome.
A Bourbon force of about 20,000 troops had remained loyal to the king and gone north with him.
Initially, the king had intended his retreat as somewhat of a strategic withdrawal. He had no intention of surrendering his kingdom without a fight. His army, near Gaeta, was, however, also being pressed from the north by the advancing army of King Victor Emanuel of Piedmont, who had finally decided to get on the bandwagon of unification before Garibaldi got all the credit.", "score": 43.794595514732514, "rank": 6}, {"document_id": "doc-::chunk-4", "d_text": "Thus hemmed in, the Bourbons made a desperate effort in early October to break out and retake their kingdom by storming south at the Volturno River. Garibaldi was called upon for one of the few times in his life to fight a pitched battle instead of one of his guerrilla actions, and to defend instead of attack. He commanded troops along a twenty-kilometer front against a superior attacking force and held.
On October 25th, near Capua, Garibaldi greeted Victor Emanuel of Piedmont's Royal House of Savoy with the words, \"Greetings to the first King of Italy\" and surrendered his conquests—Sicily, half the Italian peninsula and the vast Neapolitan Royal Navy (considerably superior to northern Italian fleets of the time)—without the slightest hesitation or thought of reward for himself, simply because it was the right thing to do.
For their efforts, Garibaldi and his superb men were completely and utterly snubbed by the new rulers of Italy. The egalitarian initiatives such as free education and land reform that Garibaldi had set up during his brief reign as \"Dictator of Naples\" were revoked, provoking for another decade in much of the south what almost amounted to a civil war, as recently liberated subjects of the Bourbons took to the hills to escape their liberators from the north.
Garibaldi did not like the way things had turned out, but figured it was just more injustice he would have to straighten out when he got around to it.
He spent the last twenty years of his life actively trying to do just that, in one way or another, in one place or another. He would fight more battles, be arrested and imprisoned (he escaped) and even be elected to parliament. He didn't have a political bone in his body, and he continued to be saddened and confounded by the politics of those who refused to do the right thing. The Kingdom of Naples, which Garibaldi had handed to Victor Emanuel on a silver platter, was officially dissolved on Oct. 22, 1860, when Neapolitans voted by plebiscite to become part of Mazzini's \"new Italy…united for all Italians\".", "score": 42.11299860707657, "rank": 7}, {"document_id": "doc-::chunk-22", "d_text": "He spent fourteen years there, taking part in several wars, and returned to Italy in 1848.\nAfter the Revolutions of 1848, the apparent leader of the Italian unification movement was Italian nationalist Giuseppe Garibaldi. He was popular amongst southern Italians. Garibaldi led the Italian republican drive for unification in southern Italy, but the northern Italian monarchy of the House of Savoy in the Kingdom of Piedmont-Sardinia, whose government was led by Camillo Benso, conte di Cavour, also had the ambition of establishing a united Italian state. Though the kingdom had no physical connection to Rome (deemed the natural capital of Italy), the kingdom had successfully challenged Austria in the Second Italian War of Independence, liberating Lombardy-Venetia from Austrian rule. The kingdom also had established important alliances which helped it improve the possibility of Italian unification, such as Britain and France in the Crimean War.\nThe transition was not smooth for the south (the “Mezzogiorno”).
The entire region south of Naples was afflicted with numerous deep economic and social liabilities. Transportation was difficult, soil fertility was low with extensive erosion, deforestation was severe, many businesses could stay open only because of high protective tariffs, large estates were often poorly managed, most peasants had only very small plots, and there was chronic unemployment and high crime.

Cavour decided the basic problem was poor government, and believed that it could be remedied by strict application of the Piedmontese legal system. The main result was an upsurge in brigandage, and a heavy outflow of millions of peasants in the Italian diaspora, especially to the United States and South America. Others relocated to the northern industrial cities such as Genoa, Milan and Turin, and sent money home.

Liberal Italy (1861–1922)

Italy became a nation-state belatedly, on 17 March 1861, when most of the states of the peninsula were united under King Victor Emmanuel II of the Savoy dynasty, which ruled over Piedmont. The architects of Italian unification were Count Camillo Benso di Cavour, the Chief Minister of Victor Emmanuel, and Giuseppe Garibaldi, a general and national hero. In 1866 Prussian Prime Minister Otto von Bismarck offered Victor Emmanuel II an alliance with the Kingdom of Prussia in the Austro-Prussian War. In exchange Prussia would allow Italy to annex Austrian-controlled Venice.

In George Macaulay Trevelyan's account of the key Battle of Calatafimi, the "Thousand" heroically defeated the Neapolitans who greatly outnumbered them. Before the battle, when the bugler blew the réveillée of Como, "[t]he unexpected music rang through the noonday stillness like a summons to the soul of Italy" (254); although, as Gilmour puts it, Garibaldi made "an unprovoked attack on a recognized state with which his country, Piedmont-Sardinia, was not at war" (192-3).
"It was indeed a heroic enterprise but it was also, incontrovertibly, illegal" (Gilmour 192). Cavour, represented as the architect of unification, in fact opposed a united Italy until the late 1850s; in 1856, for example, he denigrated a Venetian pro-unification patriot for favouring "the idea of Italian unity and other such nonsense" (cited in Gilmour 179). His change of heart, revisionist historians insist, was part of his expansionist plans for Piedmont, which resulted in the annexation of other Italian states to Piedmont—rather than Italian unification, these historians argue, the final wars of independence in 1859-60 resulted in the Piedmontization of Italy. Cavour's annexing meant "the imposition of northern laws, customs and institutions on distant regions with no experience of their workings" (Gilmour 198). Massimo d'Azeglio (a prominent Sardinian politician) calculated that, despite the overwhelming show of support in the plebiscites (which arguably put the question of unity in a biased fashion, not mentioning annexation), only a fifth of Neapolitans wanted to be annexed (Gilmour 197). And then there was the King himself, promoted by d'Azeglio as "il re galantuomo" (the gentleman king), and celebrated as the heroic "father of Italy" in countless statues all over the Italian nation. [See, for example, Figure 4: Monument to Vittorio Emanuele II, Bergamo, Italy.]

In order to keep it within bounds he hurried on a peace with Austria at Villafranca. According to that peace the Austrians were still to retain very considerable Venetian territory in Italy; but the rest of Lombardy they handed over to Napoleon, who ceded it to the King of Sardinia. The Italians were furious in their disappointment. They considered Napoleon a greater enemy of theirs than were the Austrians.
They claimed, and not without a fair show of justice, that one more battle, the success of which was scarcely doubtful, would have made secure the unity of Italy. They reproached Napoleon with a childish fear of the anger of the Pope, Pius IX, and with the intention of keeping Italy in her old anarchy. Garibaldi and other Italian patriots, especially Mazzini, published innumerable pamphlets, calling upon the Italian nation to rise in a body and to drive out her enemies. Cavour, who continually clung to his diplomacy, and who was, moreover, crushed by illness, overwork, and the considerable strain of continuous vigilance and diplomatic negotiations, still managed to hold the balance between the wavering of Napoleon, the hostility of the Austrians and the Pope, and the excessive claims of the ultras. He died in June, 1861, and by that time the unity of Italy was a foregone conclusion. The patriots under Garibaldi had, by their bold initiative in Sicily and Naples, so irretrievably engaged and compromised the people of southern Italy that one part of Italy after another declared for Victor Emmanuel, hitherto only King of Piedmont-Sardinia, as King of Italy. The inevitable and necessary advent of the unity of Italy was finally shown quite clearly in 1866, when Victor Emmanuel, although beaten by Austria at sea and on land at Lissa and at Custozza, nevertheless made good his claim to the Venetian territory still in the hands of Austria, so that the whole of Italy, except the city of Rome, was by August, 1866, under the rule of Victor Emmanuel as King of Italy.
The City of Rome was entered by the Italians a few weeks after the commencement of the Franco-German War, and ever since Italy has been a united monarchy. The events of the fifties and sixties of the last century fully proved the correctness of Cavour's policy.

1860 (September) After a lightning campaign in Calabria, he captures Naples, the largest town in Italy, and makes himself "Dictator of the Two Sicilies." (October) After a big battle on the Volturno River, he holds plebiscites in Sicily and Naples, and then gives the whole of southern Italy to Cavour, proclaiming Victor Emanuel as King of a united nation. (November) He returns to Caprera, which now remains permanently his home.

1861 (April) He attacks Cavour in parliament over the latter's ungenerous treatment of the volunteers. (July) President Lincoln offers him a command in the American Civil War, but has to withdraw the offer after a storm of protest from the Vatican.

1862 (July) He begins agitating in Sicily for another march on Rome, evidently with some encouragement from the King and Rattazzi, the Prime Minister. (August) Seriously wounded in a clash with Italian troops at Aspromonte, in Calabria. (October) After being imprisoned, he is granted an amnesty by the King.

1863 Resigns from parliament because of martial law being applied in Sicily.

Triumphal reception in England.
Garibaldi welcomed in London.
A reception given by the Duchess of Sutherland at Stafford House.

The whole country shut down for three days when Garibaldi visited London in 1864. High and low received him except Queen Victoria and the royal family.
Thousands of children lined the streets and they all chanted this little ditty:

We'll get a rope,
And hang the Pope:
So up with Garibaldi!

1866 Leads another volunteer army in a new war against Austria, after which Venice is joined to Italy.

1867 Again attempts a march on Rome, but is beaten by papal and French forces at Mentana, and once again is arrested by the Italian government.

1870 Joins republican France in the Franco-Prussian War, and is made commander of an army in the Vosges. This is one of the most important years in history: after 1260 years, Rome ceases to be governed by the Popes and becomes the capital of the new united Italy. Pius IX declares himself infallible in the same year.

Map of Italy after the fall of the Papal States

The Fourth Beast Papal Rome receives a deadly wound from the sword of Garibaldi.

The Final Unification. Cavour encourages Garibaldi to join Victor Emmanuel. In October 1860, southern Italy holds votes to unite with the north. Italy is born! Cavour dies three months later.

Adding Venice and Rome. Italy allied with Prussia against Austria in 1866 and was given Venice. During the Franco-Prussian War (1870–71), the French, who protected Rome, couldn't spare troops to defend the city, so Italy took it.

They occupied Ischia di Castro, Farnese, Acquapendente and Valentano, but, when attacked, took refuge in the convent of S. Francesco in Bagnoregio. The papal troops, however, organized themselves quickly and, after meeting in Montefiascone, drove the "brigands" or "bandits" across the border, regaining control of the entire area.
However, after Garibaldi's victory at Monterotondo, papal troops abandoned all their garrisons, afraid of an invasion by the regular troops of the Savoy, and concentrated their efforts on Rome and Civitavecchia, where they expected the French reinforcements to land. The Garibaldini decided to take advantage of the situation, leaving their base in Torre Alfina to occupy Viterbo, taking control of the strategic strongholds of Bagnoregio, Valentano and Montefiascone, placing the latter under their military command. But just when the plebiscite had begun to welcome annexation to the Kingdom of Italy, news of Garibaldi's defeat at the Battle of Mentana arrived. The Garibaldini therefore abandoned all their positions and left the Papal States. Only in 1870, with the fall of Rome, did the territory of Lake Bolsena finally become part of the Kingdom of Italy.

The Pope's Irish Battalion – Battalion of St. Patrick

In 1860, 1,400 Irishmen travelled to Italy in response to Pope Pius IX's call for help in thwarting Italian efforts to seize papal lands.

Up to the mid-nineteenth century Italy was a patchwork of small independent states, each influenced to a degree by neighbouring super powers such as France and Austria. A unification movement took hold in the 1850s, however, with Giuseppe Garibaldi and Giuseppe Mazzini included among its leaders. Key to their aims was the annexation of the Papal States, a vast territory interposed much like a wide band across the middle of the Italian peninsula. With no viable military force to protect his lands, an increasingly worried Pope Pius IX issued a call to Catholics throughout Europe for men and arms to raise an army in his defence.

By March 1860, papal emissaries had arrived in Dublin to recruit an Irish battalion to serve the Pope.
At the forefront of this effort was an alliance between Count Charles McDonnell of Vienna, a chamberlain to the Pope, and Alexander Martin Sullivan, the editor of The Nation newspaper. Within a matter of weeks, the resultant recruitment committee had organised rallies in support of the Pope throughout the country, and more than £80,000 was collected (equivalent to £5 million today), most of it channelled to the Vatican through the Irish Pontifical College in Rome. Sermons from the pulpits of parish churches also acted as conduits for the call to arms that emanated from St. Peter's Square.

Religion was not the sole motivation for the Irish volunteers. Their response was also a reaction against the anti-Catholic and anti-papal elements within the British establishment. In an effort to destabilise French and Austrian influence among the Italian states, the government openly supported the reunification movement with the support of the Royal Navy in southern Italy. The British government also introduced legislation – the Foreign Enlistment Act – which prohibited British citizens from joining foreign armies. The Irish Constabulary was charged with enforcing this act but, using their discretion, there was no will to enforce it. Indeed a large number of members of the force resigned their positions and travelled to Italy to join the papal army. These were trained men and were an invaluable asset to the new army.

As Marx noted, "The Roman revolution was an attack on property and bourgeois order as dreadful as the June revolution [in France]." But the revolution in France had been defeated, so "the re-establishment of bourgeois rule in France required the restoration of papal rule in Rome".

Radical ideas had already taken hold in Venice and Florence.
This was another reason for 6,000 French troops to attack the radicals in control of Rome.\nHaving just returned from South America, Garibaldi led working class volunteers in a heroic defence of the city for three months until 20,000 professional soldiers finally crushed them.\nItalian democrats had been defeated once again by powerful foreign forces. But they had been most successful in areas where working class people had been part of their struggle.\nWorkers saw that they were fighting for something far more important than a flag.\n“The Hungarian, the Pole, the Italian shall not be free as long as the worker remains a slave,” as Marx put it.\nGaribaldi and Mazzini moved in the opposite direction, however, trying to break the link between national independence and political egalitarianism.\nThey united with nationalists in Piedmont in the north, whose leaders wanted to make an alliance with France to get the Austrians out of northern Italy alone.\nCount Cavour, the prime minister of Piedmont, admitted he knew far more about southern England than southern Italy – he even once claimed that Sicilians spoke Arabic!\nBut whatever his inclinations, Cavour still needed Garibaldi for his military skills and popularity. If Garibaldi appealed for volunteers, people came running.\nAfter a series of incredible victories against the Austrians in the north, Garibaldi was allowed to sail to Sicily in April 1860, which was already in revolt. He left with just 1,000 men – and there has since always been a suspicion that Cavour sent him off as a diversion, expecting him to be killed.\nYet in Sicily Garibaldi won a stunning victory against one of Europe’s most powerful armies. The rebels were forced to use unorthodox tactics at times, such as charging the enemy with fixed bayonets because their rifles were so old and unreliable.\nJust as in 1848, the rebel victory happened because peasants rose throughout Sicily against their landlords and joined Garibaldi. 
But in other respects this was a different revolution from 1848, which was an attempted "revolution from below".

Across the Austrian Empire, nationalist sentiments among Austria's various ethnic groups led the revolutions in Austria to take several different forms. Liberal sentiments prevailed extensively among the German Austrians, and were further complicated by the simultaneous events in the German states. The Hungarians within the Empire largely sought to establish their own independent kingdom or republic, which resulted in a revolution in Hungary. Italians within the Austrian Empire likewise sought to unify with the other Italian-speaking states of the Italian Peninsula to form a "Kingdom of Italy".

The revolution in Vienna sparked anti-Habsburg riots in Milan and Venice. Field Marshal Joseph Radetzky was unable to defeat the Venetian and Milanese insurgents in Lombardy-Venetia, and had to order his forces to evacuate western Italy, pulling them back to a chain of defensive fortresses between Milan and Venice known as the Quadrilatero. With Vienna itself in the middle of an uprising against the Habsburg Monarchy, the Austrian Empire appeared on the brink of collapse. On 23 March 1848, just one day after Radetzky was forced to retreat from Milan, the Kingdom of Sardinia declared war on the Austrian Empire, sparking the First Italian War of Independence.

First War of Italian Independence

Venice was at the time one of Austria's largest and most important ports, and the revolution which began there nearly led to the disintegration of the Austrian Navy. The Austrian commander of the Venetian naval yard was beaten to death by his own men, while the head of the city's Marine Guard was unable to provide any aid to suppress the uprising, as most of the men under his command deserted.
Vice-Admiral Anton von Martini, Commander-in-Chief of the Navy, attempted to put an end to the rebellion but was betrayed by his officers, the majority of whom were Venetians, and was subsequently captured and held prisoner. By the end of March, the Austrian troops in Venice were forced from the city and the Austrian Navy appeared to be collapsing, as many of its sailors and officers were of Italian descent. Fearing mutinies, Austrian officers ultimately relieved these Italian sailors of their duty and permitted them to return home. While this action left the Navy drastically undermanned, it prevented the wide-scale disintegration that the Austrian Army had repeatedly suffered from in Italy.

The aim of the insurgents, who were mostly Italian workers, was to overthrow Austrian rule, but their conspiratorial tactics led them to failure. Marx analysed it in a number of articles (see present edition, Vol. 11, pp. 508-09, 513-16 and 535-37).

Source: Marx and Engels Collected Works, Volume 12 (pp. 13-17), Progress Publishers, Moscow 1979

Napoleon could no longer doubt the very serious character of the threats constantly levelled at him by the Italian patriots.
Under the pretence of taking the waters at Plombières in central eastern France, he had an interview with Cavour, and there a formal alliance was made and a promise given that at an early date war should be made against Austria both by France and Sardinia, and after the successful termination of the war Austria's power in Italy would be put an end to.

Although Napoleon, as already remarked, was quite sincere in his ideas about the principle of nationality, and seriously believed that nothing but good could come from a still greater union amongst the distracted territories of Italy and other countries, yet personally he was not in favour of the union of the whole of the Italian Peninsula. At that time a number of French diplomatists and politicians warned him of the inevitable consequences that a unity of all Italy could not but entail upon the prestige and power of France. Italy, they said, if united, will only be the prelude to a similar union in Germany and in other portions of Europe, and France will inevitably suffer from the rise of new and powerful national states. Napoleon did not deny the force of these arguments. However, he hoped to keep the patriotic enthusiasm of the Italians within bounds, and to make of Italy, not one kingdom under the rule of the House of Savoy, but four kingdoms under the suzerainty of France. In this entirely false view he was confirmed by the subtlety and diplomacy of Cavour, who himself very well knew that once Austria's power was broken in Italy, and the friendship and moral support of France and England secured, nothing could prevent the Italians from establishing themselves as one single united monarchy. Napoleon declared war against Austria, and the war was rapidly finished by the campaign of 1859, the two most important engagements being at Magenta, near Milan, and at Solferino, close to Mantua.
The Austrian army, although in no wise inferior to that of the French, was badly generalled, and a few misunderstandings sufficed to produce the defeat of Austria in both engagements. The Italians, drunk with enthusiasm, wanted to force Napoleon to continue the campaign, hoping to oust the Austrians from Italy altogether. However, Napoleon now took fright at the vast waves of national enthusiasm roused in Italy.

During the dramatic events of the spring and early summer of 1849, when the French troops attacked the Roman Republic for a whole month, Porta San Pancrazio played a major role in the desperate defence of Rome led by Giuseppe Garibaldi.

In memory of that heroic resistance, for which men such as Emilio Dandolo, Luciano Manara and Goffredo Mameli gave their lives, for the 150th anniversary of the Unification of Italy Porta San Pancrazio has become a museum dedicated to the 1849 Roman Republic and the memory of Garibaldi. It is a strongly evocative location and a privileged standpoint of the historical and monumental area of the Janiculum, where the memory of that battle is still present in the monuments. The San Pancrazio gate, located on the height of the Janiculum in the perimeter of the Urbanian or Gianicolense walls, was built in 1854-57 by the architect Virginio Vespignani on the ruins of the gate created by Marcantonio De Rossi in 1648 and partially destroyed during the war of 1849. In turn, the seventeenth-century Aurelia gate had replaced the old gate in the Aurelian walls, which stood slightly behind the gate's present position.
On April 19, 1951 the City Council designated the National Association of Veterans and the Garibaldi Veterans for the construction of the museum, which was opened in 1976 with two sections: the first on the history of the Risorgimento and Garibaldi, and the second on the history of the Italian Garibaldi division.

How else can one explain, for example, the fact that a South American guerrilla leader of genius, Giuseppe Garibaldi, followed by 1,000 descamisados, conquered in a few weeks one of the oldest kingdoms in Europe, the Two Sicilies, most of whose inhabitants were loyal subjects of their king and (as they showed later, after the unification, when they carried on a bloody war in the mountains for years) tenacious opponents of the usurper and of the "godless" liberals? How could this kingdom, defended by a well-equipped, well-trained, and reasonably well-commanded army, and the best navy in Italy, collapse like a house of cards?

Denis Mack Smith's conception of the Risorgimento satisfies the self-examining mood of Italian readers today, but not entirely. They welcome the pruning of the rhetorical foliage, the ruthless exposition of shortcomings and mistakes, the sharp delineation of character. They also welcome the analysis of the Risorgimento's imperfections, from which many puzzling developments followed. Nevertheless Mack Smith is not Italian and his view of Italy is inevitably that of a twentieth-century, middle-class, northern Protestant scholar. Many of his censures seem inspired by an unconfessed desire to see Italy transform herself by magic into a law-abiding, tidy, fair-playing, decorous country, and by a perpetual disappointment in things as they are. His curiosity about the past is often moved by a desire to understand and remedy contemporary defects.

This, in other words, is the view from the Babington Tea Room.
This famous and ancient pasticceria is on the left of the Spanish Steps, a pendant to the house where Keats died. For almost a century English residents or visitors found refuge in the only place, south of the Alps, where one could get a nice cup of tea, hot muffins, scones, and properly buttered toast. English lovers of Italy, nannies, spinsters, retired diplomats, dilettantes of all kinds met there and endlessly talked about the country and its inhabitants.

This is a link to a good 19th-century political cartoon plus accompanying article from Harper's Weekly about Garibaldi's conquering of the Kingdom of Naples at

And while your tremendous courage astonishes the world, we are sadly reminded how this old Europe, which also can boast a great cause of liberty to fight for, has not found the mind or heart to equal you.

Giuseppe Garibaldi, Scritti politici e militari, ed. Domenico Ciàmpoli, Rome 1907

If Abraham Lincoln had been able to obtain the services of the brilliant Giuseppe Garibaldi, the American Civil War might have ended in short order. As it was, for his military expeditions in South America and Europe (Italy, Austria and France), Garibaldi is known as the "Hero of Two Worlds".

The obelisk is a five-meter-high stele representing a flying banner. The inscription on the back reads: "In 1833, Giuseppe Garibaldi took an oath of dedicating his life to liberation and unification of his Homeland Italy. Under leadership of the national hero Giuseppe Garibaldi, the country was liberated and unified."
On the front side, it says: "In the person of Garibaldi Italy had a hero of antique kind, who was capable of producing miracles and who produced miracles" (Friedrich Engels).

The obelisk in honor of Garibaldi's visit to Taganrog was inaugurated on June 2, 1961, for the centenary of Italy's liberation. The local artist Yakovenko realized the project of the monument. The bas-relief (the Italian hero's profile and a palm branch) was produced by the artist Baranov. In 1986, the bas-relief was replaced due to technical reasons and a new bas-relief by the artist Beglov was installed.

This is the only monument in honor of Giuseppe Garibaldi in the former Soviet Union.

The Bandiera brothers, Austrian naval officers, absconded in 1844 under the influence of Mazzinian revolutionary zeal to join a reported uprising in Calabria and declare a republic. Their venture was tragically misjudged; the Calabrian insurrection was minor and had been suppressed before their arrival. They were found and executed. Mazzini had discouraged the brothers, but he nevertheless turned their executions into a glorious martyrdom (see Gilmour 157-8).

After a series of smaller attempts throughout 1855 and 1856, in the latter half of 1857 Napoleone's police force discovered an assassination plot involving former members of the government, and possibly a member of the exiled Tuscan royal family, Gian de' Medici, though the extent of the latter's involvement is a subject of debate.

Under Napoleone's direction, a company of his dragoons secretly crossed into the Aragonese Kingdom of Naples, where Gian was a local nobleman, and kidnapped him.
After a short, secret trial, Gian was executed.

The event caused a serious diplomatic incident between the Italian Republic and Aragon, which until then had maintained a position of armed neutrality while it dealt with revolutionaries within its own territory. Aragon issued its first condemnation against the revolutionary government, and began considering further action. The incident also benefited Aragon's counter-revolutionary efforts in Naples; whereas many had been expecting the fall of Aragonese authority over Naples to a revolutionary government, potentially a pro-Napoleone one, Napoleone's raid now gave the pro-Aragon forces a rallying call, one that the majority of Neapolitans responded to.

Meanwhile, the assassination plot and the alleged Medici involvement gave Napoleone an opportunity. Using the plot, he intended to re-create the hereditary monarchy, with himself as emperor. The move would have two effects: 1) by having a Buonapartist dynasty enshrined in the constitution, it would delegitimize a Medici restoration, and 2) by making himself an emperor, Napoleone would put himself on equal footing with the Holy Roman Emperor and reject imperial authority over the lands under his control.

Napoleone was "elected" emperor in a lopsided, and probably fixed, referendum in which 98.3% of those voting approved of the decision. Napoleone's coronation was widely regarded as a sham; the Pope refused to crown Napoleone, but Buonaparte never intended to ask. Instead, the High Judge of the Republic had Napoleone raise his hand and answer a series of vows about what he was expected to do as emperor (protect the freedom of speech, ensure the freedom of religion, etc.).
Afterwards, the High Judge extended Napoleone a cushion on which his imperial crown rested.

Mack Smith and David Gilmour point out how very far from the truth this image was (the former, for example, reports that the King told a British ambassador "that there were only two ways of governing Italians, by bayonets or bribery" ["Documentary Falsification" 185]). And, as we have seen, Mazzini forged his image as a tragic outsider, a romantic nationalist, a patriot tragically exiled from his beloved land. Italian patriotic heroism sustained the "beautiful legend" of the teleological, inevitable movement to unification and liberty, as it depended on depicting the four prominent political players—the passionate zeal of Mazzini, the political acumen of Cavour, the gentlemanly kingship of Victor Emanuel and the revolutionary heroism of Garibaldi—as complementary and working in the end for a common goal, despite their clear differences of agenda and philosophy.

While Risorgimento print culture helped create such celebrities for the Italian literate classes, the concept of Italian independence was circulated in Britain by many authors in the nineteenth century. The complexity of the term "Risorgimento," and the very problem it underscores about the concept "Italy," was fed by British writers who inscribed and questioned the concept of "Bella Italia," or beautiful Italy, which had a long tradition of mythologization in British literature. The feminization and aestheticization of Italy troped her as a beautiful, neglected, tragic woman, perhaps most famously in Lord Byron's description of Venice in Canto Four of Childe Harold's Pilgrimage (1818), where the city is figured as a tragic, abandoned "Ocean queen" (Canto 4, stanza 17, l. 7).

If he was grand in his wrath, he was grand also in his ideal aspirations; whether he thundered with the withering eloquence of a Cicero, or pleaded for the Brotherhood of Man with the accents of love; whether he bowed his head humbly before the power of one great God, or rose fanatically to preach the new Gospel: "Dio e il popolo," God the first cause, the People sole legitimate interpreter of His law of eternal progress.

The conviction that spoke from that man's lips was so intense that it kindled conviction; his soul so stirred that one's soul could not but vibrate responsively. To be sure, at the time I am speaking of, every conversation seemed to lead up to the one all-absorbing topic, the unification of Italy. She must be freed from the yoke of the Austrian or the Frenchman; the dungeons of King Bomba must be opened and the fetters forged at the Vatican shaken off. His eyes sparkled as he spoke, and reflected the ever-glowing and illuminating fire within; he held you magnetically. He would penetrate into some innermost recess of your conscience and kindle a spark where all had been darkness. Whilst under the influence of that eye, that voice, you felt as if you could leave father and mother and follow him, the elect of Providence, who had come to overthrow the whole wretched fabric of falsehoods holding mankind in bondage. He gave you eyes to see, and ears to hear, and you too were stirred to rise and go forth to propagate the new Gospel, "The Duties of Man."

What he wrote, what he spoke, was something beyond revealed religion, the outcome of a faith that looked upwards to gather a new revelation of the eternal law that governs the universe. Gospel, Koran, Talmud, merged in his mind in the new faith, rising over the horizon to illuminate humanity.

There was another side of his nature that many a time deeply impressed me.
The enthusiast, the conspirator, would give way to the poet, the dreamer, as he would speak of God's nature, and of its loveliest creation, Woman; of innocent childhood, of sunshine and flowers.\nI have heard much said about Woman and Woman's Rights since the days of Mazzini, from pulpit and platform, from easy-chair and office-stool.", "score": 29.148059604900737, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "The Papal States divided Italy for about 1260 years (606-1866)\nGiuseppe Garibaldi -\nFather of Modern Italy\n1807 (July 4) Born at Nice or Nizza (at that time part of France), the son of Domenico Garibaldi, a fisherman and coastal trader. The Great Liberator of the old world was born on the 31st birthday of the United States and just 2 years before the Great Liberator of the New World, Abraham Lincoln in 1809. His birthplace Nice or Nizza was always part of Italy until it was ceded to the French in 1796.\n1814 Nice is once again joined to the Kingdom of Piedmont-Sardinia.\n1824-33 Garibaldi lives as a sailor in the Mediterranean and Black Sea.\n1832 He acquires his master's certificate as a merchant captain.\n1833 In touch with Mazzini's patriotic organization, Young Italy, and visits its headquarters at Marseilles.\n1834 As a naval rating in the Piedmontese navy, he takes part in a mutiny for the republican cause. Sentenced to death by default, after escaping to France.\n1835 Takes casual jobs in France and with the Bey of Tunis.\n1836 Sails for Rio de Janeiro from Marseilles in a 200 ton brigantine.\nHe meets his Brazilian born wife Anita who becomes his companion-in-arms and heroine of the Risorgimento. She was just as brave as Garibaldi, often fighting side by side with her hero husband.
She died during the retreat from Rome in '49.\n1836-40 As soldier, corsair, and naval captain, he fights for the break-away province of Rio Grande, in its attempt to free itself from the Brazilian Empire.\n1841 He tries his hand at various jobs, including cattle herdsman, trader, and schoolmaster at Montevideo.\n1842 Put in command of the small Orientale (Uruguayan) fleet against Manuel de Rosas, the dictator of Argentina.\n1843 Also becomes commander of the newly formed Italian Legion at Montevideo.\n1846 Wins the \"battle\" of St. Antonio, after which a sword of honor is subscribed for him in Italy. Lord John Russell is appointed Prime Minister in Great Britain.\n1847 Briefly in command of the defense of Montevideo. Offers his services to Pope Pius IX but is refused.", "score": 28.697236936553523, "rank": 28}, {"document_id": "doc-::chunk-1", "d_text": "1848 (April) Leads eighty of his legionaries back to Italy. (July) Vainly offers to fight for the king of Piedmont. (August) In command of a volunteer unit at Milan against the Austrians, and survives two brisk engagements at Luino and Morrazzone.\n1849 (February) As an elected deputy in the Roman Assembly (after the flight of Pius IX), he proposes the creation of a Roman Republic. (April) As a general of brigade, he beats off an attack by the French at the St. Pancrazio gate of Rome. (May) Defeats a Neapolitan army at Velletri. (June) Takes a principal part in defending Rome against further French attacks. (July) Leads a few thousand men from Rome through central Italy to escape from French and Austrian armies. (August) After disbanding his men in San Marino, he is chased at sea and on land by the Austrians; his first wife, Anita, dies. (September) As soon as he arrives back in Piedmontese territory, he is arrested and deported as an undesirable.\nGaribaldi is pursued by 100,000 of the Pope's soldiers. His beloved wife Anita, who is sick and pregnant, refuses to leave his side and she dies on the beach.
The Pope places an enormous bounty on his head, but not one Italian betrays him to the Papal Army.\nPope Pius IX (1846-1878).\nPope blesses the victorious French Army at the Vatican.\nPope Pius IX was the longest reigning Pope in history and the great antagonist of Italian unity. During his reign the firing squads and the scaffolds were kept busy day and night. He urged the Austrians to set up the guillotine and he would not allow railroads to be built in the Papal States.\n1849-50 Lives for seven months in Tangiers, where he writes the first edition of his memoirs.\nTicker tape parade on Broadway.\nGaribaldi was offered a ticker tape parade up the \"canyon of heroes\" in New York City. The Jesuits stirred up the Irish Catholics against him and in order to keep the peace he refused the offer. Of all the many world famous personalities to have been offered this singular honor, Garibaldi remains the only person to date to have refused it!!", "score": 28.51031581095812, "rank": 29}, {"document_id": "doc-::chunk-10", "d_text": "In 1861, after the annexation of almost all the peninsula, the Kingdom of Italy was proclaimed at Florence and that of Sardinia came to an end.\nThe following is a list of the kings: Victor Amadeus II (1718-30), who abdicated in favour of his son Charles Emmanuel III (1730-73), regretting which he was imprisoned at Moncalieri where he died (1732). Charles Emmanuel, to conquer the Milanese, allied himself with France and Spain, in the War of the Polish Succession; he was frequently victorious but only obtained the region on the right of the Ticino (1738). He took part in the War of the Austrian Succession; gained splendid victories (the siege of Toulon, 1746; the battle of Col dell' Assietta, 1747), but with very little profit, gaining only the county of Angers, and Arona, the valley of Ossola, Vigevano, and Bobbio.
Victor Amadeus III (1773-96), for having crushed the nationalist movement in Savoy (1791) with excessive severity, was overthrown by the revolutionary army which captured Savoy and Nizza. He allied himself with Austria and the campaign was conducted with varying fortunes, but when Bonaparte took command of the French troops Victor Amadeus had to agree to a humiliating peace. Charles Emmanuel IV (1796-1802) made an offensive treaty with France, whereupon his subjects revolted. The rebellion was crushed with severity and thousands of democrats emigrated either into France or to the Cisalpine Republic, whence they returned in arms. The royalists having obtained the upper hand, France intervened and obliged the king to abandon his possessions on the mainland (19 December, 1798). Charles Emmanuel withdrew to Sardinia; and in 1802 abdicated in favour of his brother Victor Emmanuel I (1802-21), who in 1814 was returned to Turin and saw his dominions increased by the inclusion of Genoa.\nAs happened elsewhere the restoration did not do justice to the legitimate aspirations of the democrats. There followed the revolution of 1821 caused by a demand for a Constitution and for war with Austria to obtain possession of Lombardy, which Piedmont had coveted for centuries.", "score": 28.43939544716867, "rank": 30}, {"document_id": "doc-::chunk-1", "d_text": "Here was the fulfillment of the army career Charette had long worked for, yet it was to be a very short assignment due to the outbreak of war between Austria and France in northern Italy. Opposed as Charette might have been to the French government, he could not bring himself to take up arms against his own people and his own country and so was forced to resign his commission in 1859 and look for an opportunity to serve in a worthy cause against a wicked enemy.
Unfortunately there were plenty of enemies to be found for a young, conservative, traditional Catholic royalist like Charette, especially in Italy.\nFor some time the Italian states had been harassed by radical, liberal revolutionaries and nationalists who sought to create a united Italian nation state by overthrowing the local Italian princes, destroying their countries and, for good measure, tearing down the Catholic Church, which in their warped, ultra-liberal viewpoint was the source of all evil. One of the places where this conflict was most intense was in the very conservative Kingdom of the Two Sicilies. Charette had two brothers who opted to defend the Sicilian King Francis II and his lovely and heroic wife Queen Maria Sophia. This was a noble endeavor, but Charette himself opted instead to join in the defense of the Papal States and Blessed Pope Pius IX in May of 1860. As the Papal States had come under attack from the liberal Italian nationalists, Pius IX was forced to organize a military defense. He appointed Monsignor Xavier de Merode Minister of War and General Christophe de Lamoriciere as commander of the papal armed forces. The army was an international one with volunteers from Italy, Switzerland, Austria, Germany, Spain, Portugal, France, Ireland and Canada. Charette was commissioned captain in the first company of French and Belgian troops known a year later as the Pontifical Zouaves.\nCharette was certainly in good company as a great many of his comrades in the papal army were legitimist French royalists to the extent that one of their Italian enemies remarked that the officers of the papal army could easily have been a list of guests at the court of King Louis XIV.
It was a truly heroic army: men like Charette and his comrades, with very little pay and few supplies, fought hopelessly outnumbered for the Church and the sovereignty of the Pope against a vastly superior enemy.", "score": 28.084482464310952, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "On 10th October 1943 the King of Italy, who with his Government had fled from Rome to Brindisi, declared war on Germany. Sporadic resistance to the Nazis began in German-occupied Italy almost immediately. In the mountain valleys and towns, armed groups were formed from Italian soldiers on the run - who feared deportation as forced labor to Germany - and escaped Allied POWs. Some of the more active partisans were Communist bands. The Communist Garibaldi band was active in the frontier area in north-east Italy, namely Friuli Venezia Giulia, and cooperated with Tito's partisans on the Yugoslav frontier. The Garibaldi made no secret of the fact that they supported Tito's claim to absorb the Venezia Giulia Region into Yugoslavia after the war.\nOn the other hand the Osoppo Partisan Brigade operated under the orders of the 'Committee of Liberation' which was formed to coordinate the resistance to German occupation by disparate partisan groups of diverse political persuasions. The 'Committee of Liberation' was composed of the political parties that had been in hiding since their suppression by Mussolini in 1922. Their first acts were to aid escaping Allied POWs and deserters from the Italian army. By the end of November 1944, after further Cossack and German attacks, the Garibaldi partisans had dispersed to the East of the Isonzo River and passed completely under Yugoslav control. A few Osoppo partisans lay low in the high mountain villages where Cossack and German patrols did not penetrate.\nA number of the Garibaldi objected to being under the Yugoslavs and deserted to the Osoppo.
Both the Garibaldi leaders and the IX Yugoslav Corps were furious and their relations with both the Osoppo partisans and the British S.O.E. units supporting them became openly hostile. By February 1945 the Garibaldi Commanders had launched a violent anti-British and anti-American campaign which threatened the lives of British and American Commanders and resulted in the stealing of arms and money from British troops. However, the avowed intention of the Garibaldi partisans and the Yugoslavs was to destroy the Osoppo partisan units.", "score": 27.802741259774102, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "This is the second part of the section on the unity of Italy beginning from, "…in order to place them under obligations to Italy, had sent out a considerable corps of Italian soldiers to the Crimea as an auxiliary army for the allies."\nThe decisive event, however, was the attempted crime of Orsini. It appears that Napoleon III., long before he succeeded in ascending the throne of France, and when he was still a roaming adventurer, had promised to the Italian patriots that whenever he should succeed in his aspirations he would extend to them a helping hand and put an end to the political and social anarchy of Italy. There is little doubt that Napoleon took these promises pretty seriously. Like all the members of the Napoleon family, he had deep Italian sympathies; and, moreover, his general policy made him take his early promises to the Italian patriots as part of a policy both practical and sublime. However, the exigencies of his home as well as his foreign policy, the great war with Russia from 1854 to 1856, had prevented him from realizing his promises; and to numerous secret reminders on the part of the Italian patriots he answered evasively. These patriots had always threatened him with death unless he redeemed the promises made to them in the autumn of 1857.
The most resolute of these patriots, Orsini, left London for Paris, determined to put an end to the life of Napoleon. With several accomplices he ambushed Napoleon in a street near the Opéra in Paris, whither Napoleon, his wife Eugénie, and other members of his court were repairing in the evening of the 14th January, 1858. Orsini and his accomplices threw several bombs at the carriage of the Emperor; the bombs exploded, and killed and wounded over one hundred and forty persons; however, the Emperor and his wife escaped unscathed. Orsini in prison behaved with the most heroic steadfastness. Napoleon really wanted to pardon him, but it appeared that it would have been unwise to pardon the assassin of so many persons; the indignation of the French public was too intense. Orsini, however, made the Emperor promise that a French army would enter Italy and wage war with Austria, and having obtained this formal promise from Napoleon, Orsini mounted the scaffold with serenity.", "score": 27.044144764755448, "rank": 33}, {"document_id": "doc-::chunk-3", "d_text": "Although some land was redistributed to peasants, when peasants pushed further by attacking landlords and taking over their land, Garibaldi’s forces repressed those actions ruthlessly.\nGaribaldi’s successful campaign on the mainland continued to guarantee him popularity, and forced Piedmont to accept that the whole of the peninsula would be united.\nBut this Italian unification was a “revolution from above” and it stopped halfway – Gramsci called it a “passive revolution”.\nAs Marx’s collaborator Frederick Engels would write many years later, “The bourgeoisie, which gained power during and after national emancipation, was neither able nor willing to complete its victory.\n“It has not destroyed the relics of feudalism, nor reorganised national production along the lines of a modern bourgeois model.”\nGaribaldi wanted nothing for his massive achievements – he refused all honours and financial awards.\nAlthough he
would later fight to allow Rome and Venice to join the new nation of Italy, he essentially retired to a tiny island off Sardinia.\nNot even Abraham Lincoln could tempt him to take a major command in the Union forces during the American Civil War.\nAs Garibaldi grew older, the limitations of his “revolution from above” became clear. In the south, people resented new taxes placed on consumer goods such as salt and tobacco to pay for the Crimean War, as well as military conscription.\nIn many areas there were virtual revolts, with armed gangs attacking troops and robbing wherever and whatever they could.\nOver 100,000 Piedmontese troops were sent down to quell the south by destroying villages. They were greeted as an occupying foreign army.\nItaly came into being as a “bastard state”, according to Gramsci.\nGaribaldi’s life carries both hope and a warning to us today. In Latin America a number of charismatic leaders have appeared on the world stage.\nSo far they have not created a situation of “revolution from below” – and the conservative forces that oppose them could still be victorious if the revolutionary process is not deepened.\nBut some heroes and leaders are a source of inspiration. What Che Guevara was to the 20th century, Garibaldi was to the 19th. He died, penniless but still a hero, in 1882.\nThe Resistible Rise of Benito Mussolini by Tom Behan is available from Bookmarks, the socialist bookshop.
Phone 020 7637 1848 or go to www.bookmarks.uk.com", "score": 26.9697449642274, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Italian Unification: The Supremo Pizza of Nationalism\nPost on 17-Jan-2016\n1800s - The Age of Nationalism: Peoples inspired by the example of George Washington and motivated against Napoleon.\nThe Italian Situation: Italy did not exist. Multiple independent kingdoms, some foreign owned. France = Kingdom of the Two Sicilies; Austria = Lombardy and Venetia. Transportation and communication hard.\nEarly Nationalism: Risorgimento. Giuseppe Mazzini led Young Italy.\nRevolution of 1848: Republican revolution in Sicily spread north to Austrian Italy (Lombardy/Venetia). King Charles Albert of Sardinia rallies Italian countries against the Austrians and he was winning!!!\nA Young Charles Albert\nThe Revolution Fails to Make Italy: April 1848, Pope Pius IX removes the Papal States from the war. Didn't like Catholic peoples making war and killing other Catholics. Coalition falls apart.\nPope Pius IX\nEnd of the First Attempt: Mazzini and Young Italy, angered over his influence in the war with Austria, overthrow the Pope and take Rome. France, Spain, and Naples invade Rome and put Pius IX back into power.\nYoung Italy dead, but Charles Albert smells like a rose for his nationalism.\nCharles Albert, King of Sardinia: A Phoenix in the Ashes?\nSardinia Leads the Way: Victor Emmanuel becomes King of Sardinia after the death of his father. Helped in unifying the north of Italy by his chief minister, Count Camillo di Cavour.\nCount Camillo di Cavour: Saw that Italian unification lay in defeating Austria. A secret plan: supported France and the UK in the Crimean War to gain favors from them.\nA Franco-Austrian War, 1859: Secret Treaty of Plombières-les-Bains. France fights Austria and Italy gets Lombardy-Venetia if Italy gives France Nice and Savoy.\nA Partial Victory: Viva VERDI (Victor Emmanuel, Re D'Italia) united Italy. Napoleon III pulls out as war too
costly. Tuscany, Parma, Modena & Romagna join Sardinia to fight for V.E. II's Italy. Italy gets Lombardy. (Napoleon III of France)\nThe South: The Rise of Garibaldi. Revolutionary roots: Red Shirts. Led revolution in 1860 after the death of Sicilian King Ferdinand II. Took Naples with 1000 men, then marched north. Cavour is nervous!!!", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-2", "d_text": "Napoleon’s moment of coronation gives a flavour of how the campaign for Italian national independence known as the Risorgimento was so very contingent on myth; perhaps, indeed, the Risorgimento could be termed the most mythologized political and cultural movement in nineteenth-century Europe. Even the name Risorgimento (meaning political and cultural “resurgence,” “rebirth” and “resurrection”) was chosen for its symbolic weight by one of unification’s chief architects, Cavour (Camillo Benso, Count of Cavour), as the title of his liberal newspaper (founded in Turin on 15 December 1847), and the term for political unity stuck. Il Risorgimento took advantage of the new press freedoms granted by King Charles Albert of Piedmont (father of Victor Emanuel) in 1847, underlining the important relationship between print culture and Italian nationalism. In Tuscany, the abolition of press censorship by the Austrian Grand Duke Leopold led to another new pro-unification newspaper, The Tuscan Athenaeum (30 October 1847-22 January 1848), founded by Thomas Trollope (son of Fanny, brother of Anthony), modeled on the British Athenaeum. Trollope’s newspaper had a much smaller circulation and profile than Cavour’s, but nevertheless it was significant for its ambition and Anglo-Italian emphasis. The Tuscan Athenaeum, like Il Risorgimento, took advantage of the changing political climate to circulate a pro-Risorgimento message among the middle classes of Tuscany, and especially Florence where the newspaper was published.
The Tuscan Athenaeum, however short-lived, was especially distinctive for two things. Firstly, it made a clear editorial call for moderation, spelled out in the very first issue, which associated its message with British liberalism. Secondly, it explicitly articulated the necessity of British interest in Italian politics. The newspaper was directed mostly at a British expatriate readership, and offered translations of Italian patriotic journalism from such other Tuscan newspapers as L’Alba to educate and motivate its audience in the Italian national cause.\nThe Risorgimento political philosophy was also circulated through another important channel: the journalism of the revolutionary and charismatic Giuseppe Mazzini.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-1", "d_text": "Manuel Rosas, the dictator of Argentina, set his eyes on Uruguay. Leading a regiment of Italian soldiers, Giuseppe Garibaldi left to defend his new country. Even though Anita stayed behind to take care of their children, she found herself defending the city of Montevideo from the Argentinians. She led a group of women in an organized effort to build walls to fortify the city against the invaders. It was because of her and the efforts of the women she worked with that Montevideo was saved.\nAt the close of another war, Giuseppe Garibaldi received the letter that he had been waiting a long time for. The King of Sardinia wanted him to come home. Italy needed him. So the Garibaldis packed their bags and moved to Italy.\nWhile Giuseppe began his campaign for a united Italy, independent of other countries, Anita took on the role of publicist and recruiter in addition to tending to their children. However, when Giuseppe needed her, she rushed to his side and there she stayed. She followed him on his campaign to liberate Rome. Their attempt at liberation failed and the army was forced to retreat.\nIt was during this retreat that Anita fell ill and died.
She was 28 years old and eight months pregnant.\nAnita Garibaldi leaves a legacy unlike any other woman: A freedom fighter, martyr and mother, she is an inspiration the world over.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Venice, they had performed deeds of valor and patriotism, not unworthy the brightest days of ancient Rome. The Roman and the Tuscan, the Venetian, the Piedmontese, and the Lombard, had fought side by side against the common enemy, and mingled their blood in the common cause. Thenceforth that cause was no longer the creed of a clique or a club—no longer the idol of a few patriotic enthusiasts. It had found for itself a sanctuary in every Italian heart. It had become the center, towards which the Italian mind, with all its convictions and aspirations, still continues to gravitate, with unerring certainty.\nAnd now—to resume the thread of our narration—when the shock of Novara was past, the mourning for Charles Albert over, and the treaty with Austria, after long debate, had been ratified by Parliament, the government of Victor Emanuel set earnestly about the work of reform commenced by his father. Though little was known about the young king, except that his mother and wife were Austrian princesses, that he had been the pupil of Jesuits, and that as a soldier he was brave and intrepid in battle, yet he gave to his subjects a very acceptable pledge of his devotion to the constitution, in the choice that he made of his counselors. First in his confidence was Massimo D'Azeglio, the head of the ministry. Next to him in prominence was Count Cavour, a man of great talents, wealth and ambition, the rival of D'Azeglio, and now his successor.
These two men, by thus uniting their influence, were enabled to secure the support of what, in French phraseology, is called the right center and the left center of the houses of legislation, and thus command a majority over the opposite extremes of radicalism and conservatism. Though they were destined to meet with determined opposition, still they found most able supporters both in the cabinet and in the Parliament. There was La Marmora, who commanded the Sardinians in the Crimea. There was Cibrario, the able historian of Piedmont. There was Brofferio, whose eloquence in the Chamber of Deputies often swept all before it.", "score": 26.794824290429847, "rank": 38}, {"document_id": "doc-::chunk-1", "d_text": "Particularly notable are the Austro-Prussian War of 1866, after which Italy gained Venetia, and the Franco-Prussian War of 1870-1871, which finally unified Italy with the gain of the Papal States and Rome. This final war also brought about the downfall of Napoleon’s nephew, Louis Napoleon or Napoleon III, as well as the unification of Germany. Unlike Italy, the German struggle for unity did not attract the same intense identifications and passionate romanticised zeal as the Risorgimento, perhaps because Germany was taken more seriously as an economic, modernized nation with stronger military and diplomatic power. As Maura O’Connor points out, the different fates of Germany and Italy in the nineteenth century illustrate the unevenness of economic and political change.\nUnification of Italy, and its freedom from foreign occupiers, was achieved in 1861 through the military intervention of Napoleon’s nephew, Napoleon III, together with King Victor Emanuel of Sardinia (Piedmont), who became King of a united Italy through a series of plebiscites (Venice and Rome were not incorporated into the Kingdom until 1866 and 1870 respectively).
But unification was both a cultural movement and a political phenomenon, and the example of Napoleon’s coronation demonstrates the fraught relationship between culture and politics. In addition, Napoleon’s adoption of the kingship of France underlines another key problematic at the heart of the concept of “Italy”: a belief in Italy as a national entity did not necessarily directly correspond with a movement to free the land from foreign rule. Furthermore, those advocating for political independence did not necessarily want unity; as Martin Clark points out, for many Italian writers and intellectuals, Italy’s diversity meant that “a single Italian state seemed [. . .] not only impossible but also undesirable” (4). Those more inclined towards any kind of national unity favoured federalism (4). But, although unification, liberty and independence—the attested trio of Risorgimento goals—were largely achieved in 1861, this still did not equate to creating Italy as a nation. As Clark observes, “If Italian identity is multiple now, it was even more multiple then” (8).", "score": 26.764359229960334, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "Mazzini, writing under the pseudonym \"Piccolo Tigre,\" was the author of the Permanent Instruction of the Alta Vendita, which detailed the overthrow of Catholicism by infiltrating it. On October 31 of that year he was arrested at Genoa and interned at Savona. Although freed in early 1831, he chose exile instead of life confined to the small hamlet which was requested of him by the police, moving to Geneva in Switzerland.\nIn 1831 he founded La giovine Italia (Young Italy). Young Italy was a secret society formed to promote Italian unification. Mazzini believed that a popular uprising would create a unified Italy, and would touch off a European-wide revolutionary movement.
The group's motto was God and the People, and its basic principle was the unification of the several states and kingdoms of the peninsula into a single republic as the only true foundation of Italian liberty. The new nation had to be: \"One, Independent, Free Republic\".\nMazzinian political activism met some success in Tuscany, Abruzzi, Sicily, Piedmont and his native Liguria, especially among several military officers. Young Italy counted c. 60,000 adherents in 1833, with branches in Genoa and other cities. In that year Mazzini launched a first attempt at insurrection, which would spread from Chambéry (then part of the Kingdom of Sardinia), Alessandria, Turin and Genoa. However, the Savoy government discovered the plot before it could begin and many revolutionaries (including Vincenzo Gioberti) were arrested. The repression was ruthless: 12 participants were executed, while Mazzini's best friend and director of the Genoese section of the Giovine Italia, Jacopo Ruffini, killed himself. Mazzini was tried in absentia and sentenced to death.\nDespite this setback (whose victims later caused numerous doubts and psychological strife in Mazzini), he organized another uprising for the following year. A group of Italian exiles were to enter Piedmont from Switzerland and spread the revolution there, while Giuseppe Garibaldi, who had recently joined the Giovine Italia, was to do the same from Genoa. However, the Piedmontese troops easily crushed the new attempt.", "score": 25.65453875696252, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "“Il Risorgimento”, the popular term for Italian Unification, is a complex and contentious term that connects two highly symbolic moments in the peninsula’s history: the crowning of Napoleon as King of Italy in 1805 and the 1861 unification of most of Italy with the military and diplomatic assistance of his nephew Napoleon III and King Victor Emanuel of Piedmont.
The complex and contradictory set of myths that insist on the “beautiful legend” of the Risorgimento covers over the very difficulty and controversy of the notion of Italy itself.\nThe Napoleonic proclamation of the Kingdom of Italy only encompassed the northern part of what we now know as Italy, and it only lasted until his abdication of the thrones of both France and Italy on 11 April 1814. His Italian kingdom was what Christopher Duggan terms “an arena for constitutional experiments, with boundaries being rubbed out and redrawn with such frequency that many people must have been unsure at any given moment exactly whose subjects they were” (92). While “[t]hese changes were made without any reference to the Italian people” (Duggan 93), and Napoleonic rule led to discontent and unrest (being the monarch might mean saving Italy as an entity, but it also meant high taxes and unpopular centralization), foreign conquerors in Italy had a history of being welcomed and supported by Italians. As Denis Mack Smith notes, “the main obstruction in the way of the patriotic movement was not foreign governments,” but rather “the slowness of the great bulk of Italians to accept or even to comprehend the idea of Italy” (The Making of Italy 2). Napoleon’s stint as King of Italy, with its dependence on a revitalized concept of Italy, was an important precursor to the Risorgimento. Even the more negative aspects of French rule ultimately benefited a unified Italy, as Duggan argues, for the anger generated by his treatment of the Kingdom of Italy—divided up and gifted to his family, treated as secondary to the French empire—fed the movement for cultural nationalism, especially when Napoleon tried to impose French as the official language (96). It is important to underline the involvement other European nations had in Italian unification, both diplomatic and military.
While some investments were more symbolic and diplomatic (especially that of Britain), Germany, Austria and France engaged in military conflict over Italian territorial claims.", "score": 25.65453875696252, "rank": 41}, {"document_id": "doc-::chunk-7", "d_text": "more than an ordinary victory, and for the Austrians more than an ordinary defeat” (8), made all the more dramatic by the description of Magenta itself as “a little town in Lombardy” (8). The newspaper quotes an official French communication thought to be written by Napoleon himself, triumphant at the battle’s display of “prodigies of valour” (8) that involved taking more than a thousand prisoners. By 1860, the Illustrated London News uses the term “magenta” to describe a heroic victory, when it begins a description of a debate in the House of Commons thus: “Mr. Gladstone won his Magenta gallantly, and with extraordinary damage to the enemy.” The Battle of Magenta entered the “beautiful legend” of the Risorgimento as a resounding Italian patriotic triumph; as Gilmour wryly comments, the pictures of the battle that adorn the Risorgimento museums across Italy, like Fattori’s, like the Italian educational textbooks, turned it into a Piedmontese victory, although Victor Emanuel’s army in fact arrived at night when the battle was over (185, 259). Napoleon III—dismayed with the heavy casualties at Magenta and then Solferino, with the Lombards’ reluctance to be liberated, and with the military incompetence of the Piedmontese (who did not possess accurate maps of Lombardy, and who could garner only a tenth of the promised 200,000 patriotic volunteers [Gilmour 187])—suddenly (and unexpectedly to his allies) made a Peace at Villafranca in July. 
But Magenta entered into Risorgimento historiography as a brave patriotic Italian victory, a decisive blow for independence, liberty and unity.

Particularly notable, along with the patriotic re-telling of battles, was the mythologizing of the major political figures to craft a narrative of heroic patriotism. One major player in unification, the revolutionary Giuseppe Garibaldi, was especially and famously adept at massaging his personality cult, as Lucy Riall’s recent biography admirably demonstrates in detail. Perhaps most notable was Garibaldi’s famous campaign in the south with the expedition of the “Thousand,” an army that in fact (and despite patriotic legend) had grown to over 21,000 by the time it defeated the 25,000 badly organized Neapolitan troops in Sicily (Gilmour 194).

The first of a long series of arrests came at age fourteen, when he was apprehended for writing an “insolent and threatening” letter to King Victor Emmanuel II.

Malatesta became politically active in his early youth, first as a Republican, then an Anarchist. He was introduced to Mazzinian Republicanism while studying medicine at the University of Naples; however, he was expelled from the university in 1871 for joining a demonstration. Partly via his enthusiasm for the Paris Commune and partly via his friendship with Carmelo Palladino, he joined the Naples section of the International Workingmen’s Association that same year, as well as teaching himself to be a mechanic and electrician. In 1872 he met Mikhail Bakunin, with whom he participated in the St Imier congress of the International.

While respecting the “complete autonomy of local groups,” the congress defined propaganda actions that all could follow and agreed that “propaganda by the deed” was the path to social revolution.
For the next four years, Malatesta helped spread Internationalist propaganda in Italy; he was imprisoned twice for these activities.

In April 1877, Malatesta, Carlo Cafiero, the Russian Stepniak and about thirty others started an insurrection in the province of Benevento, taking the villages of Letino and Gallo without a struggle. The revolutionaries burned tax registers, declared the end of the King’s reign, and were met with enthusiasm. After leaving Gallo, however, they were arrested by government troops and held for sixteen months before being acquitted. After a murder attempt on King Umberto I by one Giovanni Passannante, the radicals were kept under constant surveillance by the police. Even though the Anarchists claimed to have no connection to Passannante, Malatesta, being an advocate of social revolution, was included in this surveillance. After returning to Naples, he was forced to leave Italy altogether in the fall of 1878 because of these conditions, beginning his life in exile.

His constant work as an organizer and speaker embodied his ideals of free association: for Malatesta, it was useful to join an organization only for the purpose of doing something with that group of people. There was no sense in belonging to a group simply to belong.

He argued against pure syndicalism. Malatesta thought that trade unions were reformist, and could even be, at times, conservative.

As the king had agreed with Austria and Naples not to grant the Constitution, he abdicated in favour of Charles Felix, his brother, who was absent at the time; Charles Albert, Prince of Carignano, assumed the regency and on 13 March 1821 promulgated the Constitution of Spain, which was not accepted by Charles Felix (1821-31).
Meanwhile, the revolutionary party had joined in the movement for Italian unity, but there was difference of opinion as to the form of that unity: whether there should be a great republic, or a federation of republics, or a single monarchy, or a federation of principalities. Many, however, were indifferent to the form. In 1831, therefore, disturbances began in Central Italy but were easily suppressed. The same year Charles Felix died without offspring and was succeeded by Charles Albert (1831-48). The Piedmontese then decided in favour of a United Kingdom of Italy under the House of Savoy, and to that end all the efforts of the Sardinian Government were henceforward directed. In 1847 Charles Albert granted freedom of the press and other liberal institutions. On 8 February he promulgated the statute which still remains the fundamental law of the Kingdom of Italy. One month later he declared war on Austria in order to come to the rescue of the Lombards, who were eager to throw off the Austrian yoke at once. Though victorious in the first engagements, he suffered a severe defeat at Custoza and, after the armistice of Salasco, was again defeated at Novara (1849).

The King of Sardinia had for the time being to abandon his idea of conquest. Charles Albert abdicated in favour of his son Victor Emmanuel II (1849-78) and withdrew to Oporto, where he died the same year.
There followed ten years of military preparations, which were tested in the Crimean War, and vigorous diplomatic and sectarian operations to the detriment of the other Italian rulers, carried out under the direction and inspiration of Count di Cavour, who did not hesitate to enter into league with Mazzini, the head of the Republicans, knowing well that the latter’s principles, while bringing about the destruction of the other Italian states on the one hand, could not on the other serve as a basis for a permanent political organization.

King Emmanuel agreed to the alliance and the Third Italian War of Independence began. The victory against Austria allowed Italy to annex Venice. The one major obstacle to Italian unity remained Rome.

In 1870, Prussia went to war with France, starting the Franco-Prussian War. To keep the large Prussian army at bay, France abandoned its positions in Rome in order to fight the Prussians. Italy benefited from Prussia’s victory against France by being able to take over the Papal States from French authority. Italian unification was completed, and shortly afterward Italy’s capital was moved to Rome. Rome itself remained for a decade under the Papacy, and became part of the Kingdom of Italy only on 20 September 1870, the final date of Italian unification. The Vatican City is now, since the Lateran Treaty of 1929, an independent enclave surrounded by Italy, as is San Marino.

In Northern Italy, industrialisation and modernisation began in the last part of the 19th century. The south, at the same time, was overpopulated, forcing millions of people to search for a better life abroad. It is estimated that around one million Italians moved to other European countries such as France, Switzerland, Germany, Belgium and Luxembourg.

Parliamentary democracy developed considerably in the 20th century.
The Sardinian Statuto Albertino of 1848, extended to the whole Kingdom of Italy in 1861, provided for basic freedoms, but the electoral laws excluded the non-propertied and uneducated classes from voting.

After unification, Italy’s politics favored radical socialism due to a regionally fragmented right, as conservative Prime Minister Marco Minghetti only held on to power by enacting revolutionary and socialist-leaning policies to appease the opposition, such as the nationalization of railways. In 1876, Minghetti was ousted and replaced by the socialist Agostino Depretis, who began the long Socialist Period. The Socialist Period was marked by corruption, government instability, poverty, and the use of authoritarian measures by the Italian government.

Depretis began his term as Prime Minister by initiating an experimental political idea called Trasformismo (transformism). The theory of Trasformismo was that a cabinet should select a variety of moderate and capable politicians from a non-partisan perspective.

Denis Mack Smith, Senior Research Fellow of All Souls College, Oxford, has dedicated a large part of his professional life to Italy, to Italian history, and, more particularly, to the fateful years between 1848 and 1870, when the country finally managed to be unified under one king and one law. He is now moving up a half century and preparing a book on the fascist era, which he considers the unfortunate and almost inevitable consequence of the shortcomings of the Risorgimento, its “poisoned fruits.”

Of course, the history of Italy during the last 100 years is extremely important for Italians; it is colorful, filled with picturesque characters and noble heroes. Still it is only marginal when seen as part of European history.
What happened south of the Alps was the belated and peripheral effect of the political, ideological, and industrial revolutions which had taken place in America and in the rest of Europe generations before. Nevertheless, the Italian “resurgence” attracted and still attracts a large, possibly disproportionate, number of foreign, mostly English, historians.

For some reason, Englishmen more than any other literati have studied and described all kinds of foreign nations, near and far, illustrious and obscure, past and present, their art, religion, language, politics, manners, and morals. As a result the rest of the world is often forced, faute de mieux, to see lesser known countries exclusively through British prejudices and predilections. In the last century and a half a large number of these xenophiles have been smitten with a burning passion for Italy. They were historians, antiquarians, novelists, poets, authors of travel reminiscences and guidebooks.

The reasons for this infatuation seem unaccountable and vaguely abnormal to many Italians. Why Italy? Why should these northerners worry so much about a people whom most of them obviously love but rarely esteem, whose real virtues and vices they seldom understand, whose language they frequently misspell (Garibaldi’s and Mazzini’s first name, Giuseppe, is almost invariably Guiseppe in English), and whose history never seems quite to satisfy them?

Perhaps John Ruskin and Robert Browning could be considered among the forerunners of this trend, particularly Robert Browning.

Giuseppe Mazzini was born in Genoa when it was part of the French Empire, and his father was an advocate of the ideas of the French Revolution.
Genoa’s intellectual elite resented being handed over to the authoritarian, monarchical Piedmont in 1815, and Mazzini, soon after graduating as a lawyer, started writing for newspapers that challenged the House of Savoy and were quickly closed down by the Piedmontese authorities.

Mazzini joined the patriotic movement, the Carbonari, and in 1830 he was arrested and imprisoned in Savona. On his release, he went into exile in Marseille, where he founded La Giovine Italia, Young Italy, a secret society that sought to unify Italy into a liberal republic. Mazzini was convinced that this would be brought about by a popular uprising that would in turn spark a revolution across Europe.

By 1833, La Giovine Italia had 60,000 members and Mazzini launched the first of a series of failed revolts in Genoa. The uprising was brutally suppressed, its leaders executed, and the director of the Genoa branch, Jacopo Ruffini, killed himself in prison. Mazzini was tried in absentia and sentenced to death. Another revolt failed in 1834 but, undaunted, Mazzini now founded Young Europe, a movement ahead of its time that envisaged a Europe of liberal republics linked in a federal state. It was a move that saw him arrested in both Switzerland and Paris.

In exile in England he continued to plot revolts. Above all he was impressed by the power of the press and was quick to see it as a potential weapon for garnering support, both in Britain and Italy. It was Mazzini who was to create the great myth of Garibaldi, the military leader of the Risorgimento. After another failed uprising in Genoa, Mazzini was sidelined in the story of the Risorgimento by Camillo Benso, the Piedmontese prime minister, who from 1856 took a leading role, although Mazzini remained active in Italian politics until his death in 1872.
He is buried in Genoa’s Staglieno cemetery.

Italian and British revisionist historians of the twentieth century, such as Denis Mack Smith, Christopher Duggan and Martin Clark, focus on the relationship between Mazzini and Cavour to demonstrate that the Risorgimento was, in fact, not an inevitable nationalist movement for unity, but rather a series of non-teleological, complex events (see, in particular, Clark 4), whose wars were largely fought between foreign powers (such as the crucial battles of Magenta and Solferino in June 1859). Indeed, many prefer not to use the term Risorgimento at all, seeing it as a problematic phrase that supports and simplifies a complex nationalist mythology concealing the real power behind unification: the dynastic expansionist interests of Napoleon III and Victor Emanuel. The main driving force behind unification in the 1850s, Cavour, pursued arguably underhand and devious politics driven by the urge to expand Piedmont’s territories in the south and expel Austria, something that revisionist historians have derogatorily termed the “piemontesizzazione” of Italy. In addition, the revisionist accounts of unification emphasise it as a minority movement of the elite middle classes (what Piero Gobetti in Risorgimento senza eroi describes as a “failed revolution,” and what Antonio Gramsci calls a “passive revolution” that bypasses the economically disenfranchised) conducted by northern Italy at the expense of the interests of the south (seen as a backward “Mezzogiorno”). For example, while pro-unification print culture is widely accepted by historians as crucial to nationalist unity, Martin Clark notes that only 2.5% of the population could speak Italian and, indeed, few Italians actually wanted unification (4).
Maxime Du Camp writes of overhearing crowds in Naples, following the plebiscites, shouting “Long live Italy,” but then members of the crowd asked him what “Italy” was (Gilmour 199). Gilmour’s recent study underlines the problem of the very concept of Italy, noting that Metternich’s description of Italy in 1847 as merely “une expression géographique” (a geographical expression) has some truth, for at that time it was made up of eight independent states that had last been united by the Romans (158).

What was happening in Italy in the 1890s?
It unified penal legislation in Italy, abolished capital punishment and recognised the right to strike. January 1 – The Kingdom of Italy establishes Italian Eritrea as its colony in the Horn of Africa. May 17 – The Cavalleria rusticana opera by Pietro Mascagni premieres at the Teatro Costanzi in Rome.

What was happening in Italy in the early 1900s?
The Italy of 1900 was a new country but it was also a weak one. The majority of the country was poor and there was little respect for the government. Even the royal family was not safe. In 1900, King Umberto I was assassinated.

What was happening in Italy in the 1800s?
In the 1800s much of Italy wanted to unify into a single country. In 1871 Italy became a constitutional monarchy and an independent unified country. … He turned Italy into a fascist state where he was dictator. He sided with the Axis Powers of Germany and Japan in World War II.

What are some major events that happened in Italy?
Italy historical timeline
- 2000–1200 BC. Tribes from central Europe and Asia, the Villanovans, settle in northern Italy.
- c. 800 BC. …
- 753 BC. Legendary date of Rome’s founding.
- 750 BC. Greeks start to colonise southern Italy.
- 509 BC. Rome becomes a republic.
- 390 BC. Gauls sack Rome, but are expelled.
- 343–264 BC.
…
- 264–146 BC.

What was Italy called before Italy?
The Greeks gradually came to apply the name Italia to a larger region, but it was during the reign of Augustus, at the end of the 1st century BC, that the term was expanded to cover the entire peninsula up to the Alps, by then entirely under Roman rule.

What started the Italian unification?
The Franco-Austrian War of 1859 was the agent that began the physical process of Italian unification. The Austrians were defeated by the French and Piedmontese at Magenta and Solferino, and thus relinquished Lombardy. By the end of the year Lombardy was added to the holdings of Piedmont-Sardinia.

Who ruled Italy in the 1500s?

Luigi Galleani was a major figure in the anarchist movement, specifically among Italian anarchists, known as an unflinching advocate of propaganda by the deed. Galleani savored insurrectionary anarchism, seeing the Idea (as they termed anarchism) as a crusade and anarchists as martyrs pursuing holy vengeance and retribution against State, Capital, and Church.

Galleani was the most influential Italian anarchist of the early 20th century. He was an accomplished radical orator, strongly charismatic, and inspired countless followers among his Italian comrades. He edited the principal Italian anarchist paper, Cronaca Sovversiva, which ran for fifteen years until its eventual suppression by the US government.

Born to middle-class parents, Galleani became an anarchist in his late teen years while studying law at the University of Turin. He refused to practice law, which he now held in contempt, and turned his attentions to anarchist propaganda.
He was forced to flee to France to evade threatened prosecution in Italy, but was expelled from France for taking part in a May Day demonstration.

Galleani later lived briefly in Switzerland, where he spent some time with students of the University of Geneva before again being expelled as a dangerous agitator, this time for arranging a celebration in honor of the Haymarket martyrs. He went back to Italy only to run afoul of the police again as a result of his insurgent activities. His return to Italy ended with his arrest on conspiracy charges; he spent five years in jail and was exiled to the island of Pantelleria, off the coast of Sicily.

Escaping Pantelleria in 1900, Galleani fled to Egypt, staying among Italian comrades for a year until threatened with extradition, whereupon he fled to London. He was 40 years old at this time, and arrived in the United States in 1901, barely a month after the assassination of President McKinley at the hand of a self-proclaimed anarchist.

Settling in Paterson, New Jersey, Galleani assumed editorship of La Questione Sociale, the leading Italian anarchist periodical in America. In 1902, the Paterson silk workers engaged in a strike, and Galleani threw his oratorical talents in with the strikers, urging workers to declare a general strike and overcome capitalism, spellbinding his audiences with his rhetorical flourish and clarity of thought.

This treaty gave Austrian recognition to the existence of the Cisalpine Republic (made up of Lombardy, Emilia Romagna and small parts of Tuscany and Veneto), and annexed Piedmont to France. Even if, like the other states created by the invasion, the Cisalpine Republic was just a satellite of France, these satellites sparked a nationalist movement.
The Cisalpine Republic was converted into the Italian Republic in 1802, under the presidency of Napoleon.

In 1805, after the French victory over the Third Coalition and the Peace of Pressburg, Napoleon recovered Veneto and Dalmatia, annexing them to the Italian Republic and renaming it the Kingdom of Italy. Also that year a second satellite state, the Ligurian Republic (successor to the old Republic of Genoa), was pressured into merging with France. In 1806, he conquered the Kingdom of Naples and granted it to his brother and then (from 1808) to Joachim Murat, along with marrying his sisters Elisa and Paolina off to the princes of Massa-Carrara and Guastalla. In 1808, he also annexed Marche and Tuscany to the Kingdom of Italy.

In 1809, Bonaparte occupied Rome, owing to his conflicts with the pope, who had excommunicated him, and to maintain his own state efficiently, exiling the Pope first to Savona and then to France.

After the Russian campaign, the other states of Europe re-allied themselves and defeated Napoleon at the Battle of Leipzig, after which his Italian allied states, with Murat first among them, abandoned him to ally with Austria. Defeated at Paris on 6 April 1814, Napoleon was compelled to renounce his throne and sent into exile on Elba. The resulting Congress of Vienna (1814) restored a situation close to that of 1795, dividing Italy between Austria (in the north-east and Lombardy), the Kingdom of Sardinia, the Kingdom of the Two Sicilies (in the south and in Sicily), and Tuscany, the Papal States and other minor states in the centre. However, old republics such as Venice and Genoa were not recreated; Venice went to Austria, and Genoa went to the Kingdom of Sardinia.

The Coming of Garibaldi

The fall of the House of Bourbon and the Kingdom of Naples in 1860 is part of the success of the risorgimento, the movement to unite Italy into a single nation.
This success is due primarily to three persons: a theoretician, a politician, and a soldier.

The theoretician, the ideologue, the one who preached national unity, was Giuseppe Mazzini (1805-72). He was not a particularly practical man, and he spent much of his life in exile, proclaiming from abroad the idea of Italian unity as a fulfillment of destiny, one which he apparently saw as a nebulous combination of ancient Italian glory and reasonable aspirations to modern nation-statehood.

The politician of the risorgimento, as shrewd as Metternich and Bismarck, was Camillo Benso Conte di Cavour (1810-61), prime minister of the Kingdom of Piedmont. Northern Italy in the mid-1800s was a patchwork. There was Piedmont, Lombardy, Modena, Venezia, the Papal States, etc. etc., each a sovereign state. It was Cavour who had the political acumen to handle the problem of northern unity before turning to the greater one of unifying north and south. He was, however, a conservative and calculating man, one who favored a gradual process of unification over revolution.

The politician Cavour might have had his way if the soldier, Giuseppe Garibaldi (1807-82), as a young man in the 1830s, had not wandered into a tavern where the theoretician, Mazzini, was holding forth to fellow members of the revolutionary group, Young Italy.

“What do you mean by Italy?” asked one. “The Kingdom of Naples? The Kingdom of Piedmont? The Duchy of...?”

“I mean the new Italy… the united Italy of all the Italians,” said Mazzini.

“At that point,” recalled Garibaldi in later years, “I felt as Columbus must have felt when he first sighted land.”

As a pirate, patriot, soldier of fortune, lover and guerrilla fighter from Italy to South America, Garibaldi had a patent on the swashbuckle.
He survived imprisonment, torture, severe wounds and exile.

Skirmishes which left a dozen dead on the battlefield are usually called battles and earnestly analyzed as if they were Austerlitz or Waterloo.

The facts that Denis Mack Smith dug up and published often contradict the official mythology. Official mythologies are common to all countries. All countries cherish one or two particular periods of their histories, which they ennoble and embellish, to justify and give meaning to their present and to give a purpose to their future. This habit may be merely useful or ornamental to great, old, and solid nations. It is extremely important to recent and ramshackle ones. For Italy the myth of the Risorgimento has been almost literally a matter of life and death.

The Risorgimento had been a movement led by a motley elite, an exiguous middle-class liberal minority that was deprecated and opposed by the upper- and lower-class conservative majority. Each liberal or revolutionary group in the movement plotted and fought to create an imaginary future country of its own, entirely different from all others. The leaders were largely unremarkable men, some of them high minded, most of them fanatics or crackpots, only a few really conscious of their role in history. Perhaps Camillo Cavour was the only genuinely great man among them, the only one who might have had success in another country.

The final outcome was a flimsy compromise, mainly the result of the royal conquest of Italy and not of the popular vague de fond (groundswell) so many had wished or imagined had taken place. “This is only the ghost of Italy,” Mazzini wrote just before he died. “It is an illusion, a lie…a corpse without a truly living soul inside it.
Italy has been put together just like a mosaic, piece by piece, and the battles for this cause have been won on our behalf by foreigners….” In the end, inevitably, the Catholics hated the liberal democratic unified secular kingdom, men from all other regions hated the domination of the Piedmontese, Milan hated Rome, the Tuscans hated everybody else, the south hated the north, the republicans hated the monarchists, the middle class feared the revolution, and everybody resented the new heavy taxes necessary to pay for the wars of independence, the indispensable public works, and the armaments needed to keep up the pretense that Italy had become a first-class power.

The myths, therefore, were confirmed, embellished, enriched, and defended.

Both radical and conservative forces in the Italian parliament demanded that the government investigate how to improve agriculture in Italy. The investigation, which started in 1877 and was released eight years later, showed that agriculture was not improving and that landowners were swallowing up revenue from their lands while contributing almost nothing to the development of the land. Lower-class Italians were aggrieved by the break-up of communal lands, which benefited only landlords. Most of the workers on the agricultural lands were not peasants but short-term labourers who at best were employed for one year. Peasants without stable income were forced to live off meager food supplies; disease was spreading rapidly, and plagues were reported, including a major cholera epidemic which killed at least 55,000 people.

(Figure: a 1905 Fiat advertisement.)

The Italian government could not deal with the situation effectively due to the mass overspending of the Depretis government, which left Italy in huge debt.
Italy also suffered economically because of the overproduction of grapes in its vineyards in the 1870s and 1880s, when France’s vineyard industry was suffering from vine disease caused by insects. Italy during that time prospered as the largest exporter of wine in Europe, but following the recovery of France in 1888, southern Italy was left overproducing, which caused greater unemployment and bankruptcies. In 1913 universal male suffrage was introduced. The Socialist Party became the main political party, outclassing the traditional liberal and conservative organisations.

Starting from the last two decades of the 19th century, Italy developed its own colonial empire. Italian colonies were Somalia and Eritrea; an attempt to occupy Ethiopia failed in the First Italo-Ethiopian War of 1895–1896. In 1911, Giovanni Giolitti’s government sent forces to occupy Libya and declared war on the Ottoman Empire, which held Libya. Italy soon conquered and annexed Tripoli and the Dodecanese Islands. Nationalists advocated Italy’s domination of the Mediterranean Sea by occupying Greece as well as the Adriatic coastal region of Dalmatia.

Italy in the Great War

The First World War (1914–1918) was an unexpected development that forced the decision whether or not to honor the alliance with Germany. At first Italy remained neutral, saying that the Triple Alliance was only for defensive purposes.
Public opinion in Italy was sharply divided, with Catholics and socialists recommending peace.

He spoke to striking workers and others at events organized by Italian-speaking subscribers to his very popular anarchist newspaper, for many years based out of the hotbed of Italian anarchism, the quarry town of Barre, Vermont.

However, Galleani had to leave Vermont for his own safety and relocate to the even bigger hotbed of Italian anarchists at the time, the suburbs of Boston.

The idea of an insurrection for revolutionary change advocated by Galleani in the U.S. had its germination in the turmoil of late 19th-century Italy, where he lived for the first 40 years of his life. At the time, Italian socialists, like many others, were divided over the question of whether a parliamentary path could bring real, substantive change to society.

Galleani’s popular and often very persuasive side of the argument was that participating in parliamentary politics in post-1848 bourgeois democracies in Europe was pointless. He called for armed uprisings to bring down capitalism and the state.

As poverty and hunger continued to be widespread in Italy and throughout Europe at the time, Galleani’s orientation of encouraging as many general strikes and insurrections as possible, in the hope they would spread, had a large following. His ideas continued to be popular among workers in the U.S. as well.

His advocacy of “propaganda by the deed,” often published in Cronaca Sovversiva, held that it was appropriate and even necessary to retaliate against, for example, police chiefs or others for killing striking workers, by killing them in turn.
Galleani’s newspaper famously included a bomb-making recipe-book insert on one occasion, given an unrelated title in the hope of evading the censors.

A series of bombings ascribed to Galleanists were carried out between 1914 and 1920, targeting politicians, owners, and police, the last and most famous being the 1920 Wall Street bombing, which killed 38 people and severely wounded 143. It was these attacks which earned Galleani the title of this book.

Whether the tactics and strategies employed by Galleani’s wing of the large, international anarchist movement of the day would be considered adventurist, ultra-left, foolhardy, completely nuts or perfectly reasonable today obviously depends on who you talk to.

At the time, the struggle for Italian unification was perceived to be waged primarily against the Austrian Empire and the Habsburgs, since they directly controlled the predominantly Italian-speaking northeastern part of present-day Italy and were the single most powerful force against unification. The Austrian Empire vigorously repressed nationalist sentiment growing on the Italian peninsula, as well as in the other parts of Habsburg domains. Austrian Chancellor Klemens von Metternich, an influential diplomat at the Congress of Vienna, stated that the word Italy was nothing more than “a geographic expression.”

Artistic and literary sentiment also turned towards nationalism; perhaps the most famous of proto-nationalist works was Alessandro Manzoni’s I Promessi Sposi (The Betrothed). Some read this novel as a thinly veiled allegorical critique of Austrian rule. The novel was published in 1827 and extensively revised in the following years.
The 1840 version of I Promessi Sposi used a standardized version of the Tuscan dialect, a conscious effort by the author to provide a language and force people to learn it.
Those in favour of unification also faced opposition from the Holy See, particularly after failed attempts to broker a confederation with the Papal States, which would have left the Papacy with some measure of autonomy over the region. The pope at the time, Pius IX, feared that giving up power in the region could mean the persecution of Italian Catholics.
Even among those who wanted to see the peninsula unified into one country, different groups could not agree on what form a unified state would take. Vincenzo Gioberti, a Piedmontese priest, had suggested a confederation of Italian states under the rulership of the Pope. His book, Of the Moral and Civil Primacy of the Italians, was published in 1843 and created a link between the Papacy and the Risorgimento. Many leading revolutionaries wanted a republic, but eventually it was a king and his chief minister who had the power to unite the Italian states as a monarchy.
One of the most influential revolutionary groups was the Carbonari (coal-burners), a secret organization formed in southern Italy early in the 19th century. Inspired by the principles of the French Revolution, its members were mainly drawn from the middle class and intellectuals.", "score": 22.042526188572243, "rank": 56}, {"document_id": "doc-::chunk-21", "d_text": "After the Congress of Vienna divided the Italian peninsula among the European powers, the Carbonari movement spread into the Papal States, the Kingdom of Sardinia, the Grand Duchy of Tuscany, the Duchy of Modena and the Kingdom of Lombardy-Venetia.
The revolutionaries were so feared that the reigning authorities passed an ordinance condemning to death anyone who attended a Carbonari meeting.
The society, however, continued to exist and was at the root of many of the political disturbances in Italy from 1820 until after unification. The Carbonari condemned Napoleon III to death for failing to unite Italy, and the group almost succeeded in assassinating him in 1858. Many leaders of the unification movement were at one time members of this organization. (Note: Napoleon III, as a young man, fought on the side of the ‘Carbonari’.)
Two prominent radical figures in the unification movement were Giuseppe Mazzini and Giuseppe Garibaldi. The more conservative constitutional monarchic figures included Count Cavour and Victor Emmanuel II, who would later become the first king of a united Italy.
Mazzini’s activity in revolutionary movements caused him to be imprisoned soon after he joined. While in prison, he concluded that Italy could – and therefore should – be unified and formulated his program for establishing a free, independent, and republican nation with Rome as its capital. After Mazzini’s release in 1831, he went to Marseille, where he organized a new political society called La Giovine Italia (Young Italy). The new society, whose motto was “God and the People,” sought the unification of Italy.
The creation of the Kingdom of Italy was the result of concerted efforts by Italian nationalists and monarchists loyal to the House of Savoy to establish a united kingdom encompassing the entire Italian Peninsula.
The Kingdom of Sardinia industrialized from 1830 onward. A constitution, the Statuto Albertino, was enacted in the year of revolutions, 1848, under liberal pressure. Under the same pressure, the First Italian War of Independence was declared on Austria.
After initial success the war took a turn for the worse and the Kingdom of Sardinia lost.
Garibaldi, a native of Nice (then part of the Kingdom of Sardinia), participated in an uprising in Piedmont in 1834, was sentenced to death, and escaped to South America.", "score": 21.695954918930884, "rank": 57}, {"document_id": "doc-::chunk-6", "d_text": "The Risorgimento is definitely not “one damned thing after another,” the sum of a series of events; a chain of right and wrong guesses, skirmishes, campaigns, and battles lost or won by many bad and a few good generals; the exchanges of letters, the secret pacts, the shabby or brilliant schemes, the speeches, the famous sayings, the protagonists’ ideas, the sacrifices made by a few heroes, and the cowardice of others. It cannot be entirely explained as the product of Fortune, or as the belated adaptation of Italy to the age of steam, parliamentary democracy, nationalism, centralized bureaucracy, railways, and textile mills.
The Risorgimento was all this, to be sure, but it was also a popular, papier-mâché epic, somewhat like the Romance of the Knights of Charlemagne which is shown in Sicilian puppet theaters. The Romance is enriched by erroneous versions of events, distorted by wishful thinking, or entirely imagined. There was the “spirit of the age,” of which the greatest expression was Verdi’s music, but also innumerable stirring poems by forgotten poets, which the enlightened middle class as well as many artisans were carried away by. It was a case of Dieu le veult, and one had to put up with the men available, exploit whatever political opportunities were offered, and cheerfully accept whatever sacrifices were necessary.
Men went into exile or dungeons, were hanged or died in battle for this imaginary Risorgimento which contemporary historians are right to demolish.
The faith in liberty, independence, and unity, this amore per l’Italia, enflamed, as I said, a very small minority, perhaps less than 5 percent of the population. (In the 1859 war there were more Italians on the Austrian than on the Piedmontese side and they fought better; in the naval engagement at Lissa, in 1866, in which the Austrian navy defeated the Italians, the Austrian sailors were all Venetians, and all orders were given in their patois.) Nevertheless the other 95 percent of the population, neither liberals nor patriots, were aware somehow that there was no way to stem the tide and were resigned to the fact that history was against them. Few of them put up any resistance.", "score": 21.695954918930884, "rank": 58}, {"document_id": "doc-::chunk-12", "d_text": "At first Sicily was able to remain an independent kingdom under personal union, while the Bourbons ruled over both from Naples. However, the advent of Napoleon's First French Empire saw Naples taken at the Battle of Campo Tenese and Bonapartist Kings of Naples were instated. Ferdinand III the Bourbon was forced to retreat to Sicily, which he still controlled completely with the help of British naval protection.
Following this, Sicily joined the Napoleonic Wars; after the wars were won, Sicily and Naples formally merged as the Two Sicilies under the Bourbons. Major revolutionary movements occurred in 1820 and 1848 against the Bourbon government, with Sicily seeking independence; the second of these, the 1848 revolution, resulted in a short period of independence for Sicily. However, in 1849 the Bourbons retook control of the island and dominated it until 1860.
In 1860, as part of the Risorgimento, the Expedition of the Thousand led by Giuseppe Garibaldi captured Sicily.
The conquest started at Marsala, and native Sicilians joined him in the capture of the southern Italian peninsula. Garibaldi's march was finally completed with the Siege of Gaeta, where the last Bourbons were expelled and Garibaldi announced his dictatorship in the name of Victor Emmanuel II of the Kingdom of Sardinia. Sicily became part of the Kingdom of Sardinia after a referendum in which more than 75% of Sicily voted in favor of the annexation on 21 October 1860 (but not everyone was allowed to vote). As a result of the proclamation of the Kingdom of Italy, Sicily became part of the kingdom on 17 March 1861.
After the Italian Unification, in spite of the strong investments made by the Kingdom of Italy in modern infrastructure, the Sicilian (and the wider mezzogiorno) economy remained relatively underdeveloped, and this caused an unprecedented wave of emigration. In 1894, organizations of workers and peasants known as the Fasci Siciliani protested against the bad social and economic conditions of the island, but they were suppressed in a few days. The Messina earthquake of 28 December 1908 killed over 80,000 people. This period was also characterised by the first contact between the Mafia, the Sicilian crime syndicate (also known as Cosa Nostra), and the Italian government.
The crowd ignored the order to disperse, whereupon Bava-Beccaris gave the signal to fire with muskets and cannons, resulting in a massacre of the demonstrators, in which more than ninety people died. The shootings became known as the Bava-Beccaris massacre, after the general who had ordered the attack.
King Umberto later decorated Bava-Beccaris, complimenting him upon his “brave defense of the royal house” — as a result of which Bresci became determined to kill the king.
Unexpectedly to his comrades, Bresci approached them at La Questione Sociale and demanded the return of a loan which had been used to set up the paper. Offering no explanation for his actions and leaving his comrades deeply bitter towards him, Bresci left the United States with the intention of assassinating King Umberto I of Italy (fulfilling the propaganda by the deed ethos).
Two months later Bresci had made his way to the small town of Monza, some 10 miles north of Milan. The town was the location of one of the king's royal villas, where he would be staying for several weeks. It was here that Bresci committed his attentat. On the evening of July 29, while the king was handing out prizes to athletes after a sporting event, Bresci burst from the crowd and shot the king three times with a .32 revolver, killing him almost immediately.
Bresci was captured and, represented by the famous Anarchist lawyer Francesco Saverio Merlino, stood trial in Milan and was sentenced to a life of hard labour on Santo Stefano, the island prison infamous for its many Anarchist and socialist prisoners. He was not to stay long.
Less than a year later he was found hanged in his cell, his body being thrown into the sea by prison guards soon after.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-1", "d_text": "The sound defeat Italy had received in the Abyssinian War, the muddled mismanagement of the army in the Turkish War, and finally, to cap it all, the disgraceful actions of the General Staff in the World War, heaping a staggering total of dead and wounded on the battle fields, led the masses to have only the greatest contempt for the martial ability of its so-called rulers.\nThe peaceful pursuits of the Italian population for many centuries had favored a practical anti-militarism. The Socialist Party in Italy grew rapidly over the issue of imperialism and war, and in 1896 was able successfully to oppose the further continuance of the war in Abyssinia. After the defeat at Adowa and following the termination of the war, huge disorders spread throughout the country, especially in Milan, leading to the killing of over four hundred by the soldiery of the government. (*1) So frightened were the rulers that all Italy was declared in a state of siege. Again, in the war with Turkey over Africa in 1911 and 1912, the socialists took a threatening attitude. While they could not stop the slaughter, they were able to purge their ranks at the time of a good many of the national and chauvinist reformists within the Party. Thus, by 1914, the Italian Socialist Party was in a strong position to fight the entrance of Italy into the World War, and helped to delay Italy's participation. When the War broke out, its official stand was \"neither to help nor to hinder.\"\nImmediately before the World War, the workers' movements had developed into menacing proportions; in 1914, from June seventh to June fourteenth, a general insurrection raged throughout the city of Bologna and the province of Romagna. 
The World War came just in time to prevent the overthrow of the royalist regime.\nThe weak position of the ruling class in Italy was due also to the fact that Italy was an exceedingly poor country, unable to stand the expenses of imperialist adventures. Italy had no iron and no coal. Her chief economy was agriculture. Only in the twentieth century was she able to begin to develop her water power into electricity. Thus Italy was a second rate power, unable to meet her industrial rivals in Europe and unable, therefore, to carve out for herself any large colonial domain.", "score": 21.361300790500703, "rank": 61}, {"document_id": "doc-::chunk-2", "d_text": "he revived the constitution of 1848 and relinquished his absolute powers. There was even talk of an alliance between a liberalized Naples and the Piedmont kingdom of northern Italy, an Italian federation, of sorts. This, indeed, would have been a watered-down risorgimento, but it would have thwarted Garibaldi.\nEven northern politicians, primarily Cavour, while theoretically in favor of Italian unification, were aghast at the thought of a popular revolutionary army led by a thousand redshirted lunatics storming up the peninsula, spreading a message of instant universal brotherhood. Garibaldi, after all, in his youth had had to do with a mystic band of Christian communards, the St. Simonians, who, years before Karl Marx, had preached: From each according to his capacity; To each according to his works; The end of the exploitation of man by man; and The abolition of all privileges of birth.\nGaribaldi landed at\nMarsala and a few days later engaged a superior force\nnear Calatafimi. He threw caution to the winds (he\ndidn't have very much of it to begin with), said, \"Here\nwe either make Italy or die\", and led a ferocious\nbayonet charge uphill, literally overrunning the enemy.\nAnd that was more or less\nthat. 
Sicilian irregulars in rebellion against the royal\nforces had been watching the engagement from nearby\nhillsides. They liked what they saw. Soon Garibaldi's\nforces were swelled by a ragtag collection of rebels\narmed with guns, axes, clubs and whatever else could\nkill a Bourbon. Together they marched on Palermo and by\nceaseless guerrilla street fighting drove the Bourbon\ncommander into asking for an armistice, the only\ncondition being that royalist forces be allowed to leave\nthe island for the mainland.\nWith 3,500 men\nunder him, Garibaldi then crossed to the mainland on\nAugust 19 and started the 300-mile slog in the heat of\nsummer up towards Naples, his reputation preceding him\nby leaps and bounds. Peasants were already calling him\nthe \"Father of Italy,\" mothers brought their\nbabies out to be blessed by him, and there was an air of\nnatural invincibility about him as he moved north.", "score": 20.327251046010716, "rank": 62}, {"document_id": "doc-::chunk-3", "d_text": "He enjoyed only a brief period of direct political power, as a member—along with Carlo Armellini and Aurelio Saffi—of a triumvirate of the Roman Republic after the ancient Roman model (9 February 1849– 3 July 1849). This opportunity to form a republic was seized following the assassination of the Papal Minister of Justice, Pellegrino Rossi, after which Pope Pius IX fled to Gaeta (on 24 November 1848) for the protection of Ferdinand II of Naples. The Roman Republic instituted a number of important reforms, such as freedom of religion (the Pope had only permitted Catholicism and Judaism) and the abolition of the Pope’s temporal power. The Republic was seen by patriots as a model for an independent Italy, but the experiment in a radical new constitution ended when French troops entered the city in support of the Pope and defeated the Republican army (led by Giuseppe Garibaldi). 
This episode is considered one of the most significant experiments in republicanism, popular democracy and constitutional reform in Italy’s history. But it also gave Mazzini his only political office. After the fall of the Republic he went again into exile and largely watched events in his native country from the sidelines (mostly in England). Nevertheless, Mazzini was an important player in the articulation of a national Italian identity, especially in his support of insurrections, coups and assassinations. He founded what was in effect the first modern Italian political organization, Giovine Italia (Young Italy), a revolutionary, popular pro-democracy group that espoused unification and republicanism. Along with La Giovine Italia, the party’s newspaper, Mazzini also encouraged a wave of patriotic journalism across Europe for, as Lucy Riall notes (32), he believed “in the unity of thought and action” (the title of one of his journals, founded in 1858 in London, was Pensiero e Azione [Thought and Action]), and tirelessly romanticized his self-image, his revolutionary violence, and even his defeats (which were represented as heroic failures). His political ideology created a new language for Risorgimento politics, as Riall comments, which was contingent on the symbolism of “nation” and “republicanism” and “il popolo” (the people), and on the circulation of a romantic ideal of the Italian struggle (32).", "score": 20.327251046010716, "rank": 63}, {"document_id": "doc-::chunk-3", "d_text": "In the 14th and 15th centuries Italy was divided among five powers — the kingdom of Naples, the duchy of Milan, the republics of Florence and Venice and the papacy; the Medici family flourished; the papacy was restored to Rome; and Florence as a republic acknowledged the influence of Savonarola. The 16th century was the most disastrous in Italian history. The rivalry of Charles V and Francis I filled the land with foreign armies, the papacy being the gainer from the struggles. 
Francis I was driven out of Italy; Rome was sacked in 1527, the sack lasting seven months; the Medici were driven out, and restored and made grand-dukes of Tuscany. In 1529 the peace of Cambrai left Charles V master of Italy, and the peace of Cateau-Cambrésis (1559) made his son Philip its undisputed lord. The papacy was strengthened by the founding of the order of Jesuits, the inquisition and additions to its territory; and Venice made her last great achievement in a war that had lasted five centuries by the conquest of the Peloponnesus in 1684. After each of three European wars of the 18th century Italy was divided afresh. Napoleon entered Italy in 1796, reconquered it at Marengo in 1800, and was crowned king of Italy in 1805. The congress of Vienna in 1815 restored Italy to its former state. The year of revolution, 1848, opened with the party of Mazzini supreme. Pope Pius IX became a fugitive, and Garibaldi was in the field. Rome and Venice yielded to French armies, the pope and other petty sovereigns of Italy returned, and the revolution proved a failure. But Victor Emmanuel, Cavour and Garibaldi were ready for the coming struggle. Cavour made terms with Louis Napoleon, and the French and Italian troops won the battles of Magenta and Solferino, driving Austria to the east. In February, 1861, the first Italian parliament met at Turin, and Victor Emmanuel was proclaimed king of Italy. Venice was restored in 1866, and on Sept.
20, 1870, the king entered Rome and the emancipation of Italy was complete.", "score": 20.327251046010716, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "Fascist and anti-Fascist violence in Italy (1919–1926)
Benito Mussolini and Fascists during the March on Rome in 1922.
Italy witnessed widespread civil unrest and political strife in the aftermath of World War I and the rise of the Fascist movement led by Benito Mussolini, which opposed the rise of the international left, especially the far-left, along with others who opposed Fascism.
Fascists and communists fought on the streets during this period as the two factions competed to gain power in Italy. The already tense political environment in Italy escalated into major civil unrest when Fascists began attacking their rivals, beginning on April 15, 1919 with Fascists attacking the offices of the Italian Socialist Party's newspaper Avanti!.
Violence grew in 1921 with Italian army officers beginning to assist the Fascists with their violence against communists and socialists. With the Fascist movement growing, anti-fascists of various political allegiances (but generally of the international left) combined into the Arditi del Popolo (People's Militia) in 1921. With the threat of a general strike being initiated by anarchists, communists, and socialists, the Fascists launched a coup against the Italian government with the March on Rome in 1922, which pressured Prime Minister Luigi Facta to resign and allowed Mussolini to be appointed Prime Minister by King Victor Emmanuel III. Two months after Mussolini took over as Prime Minister, Fascists attacked and killed members of the local labour movement in Turin in what became known as the 1922 Turin Massacre. The next act of violence was the assassination of Socialist deputy Giacomo Matteotti by Fascist militant Amerigo Dumini in 1924.
In retaliation for Matteotti's murder, the right-wing Fascist deputy Armando Casalini was killed on a tramway by the anti-fascist Giovanni Corvi. This was followed by a Fascist takeover of the Italian government, and multiple assassination attempts were made against Mussolini in 1926, with the last attempt on October 31, 1926. On November 9, 1926, the Fascist government initiated emergency powers which resulted in the arrest of multiple anti-Fascists including communist Antonio Gramsci.", "score": 20.327251046010716, "rank": 65}, {"document_id": "doc-::chunk-24", "d_text": "In practice, trasformismo was authoritarian and corrupt: Depretis pressured districts to vote for his candidates if they wished to gain favourable concessions from Depretis when in power. The 1876 election resulted in only four representatives from the right being elected, allowing the government to be dominated by Depretis. Despotic and corrupt actions are believed to be the key means by which Depretis managed to keep support in southern Italy. Depretis put through authoritarian measures, such as banning public meetings, placing “dangerous” individuals in internal exile on remote penal islands across Italy and adopting militarist policies. Depretis enacted controversial legislation for the time, such as abolishing arrest for debt, making elementary education free and compulsory while ending compulsory religious teaching in elementary schools.
The first government of Depretis collapsed after his dismissal of his Interior Minister, and ended with his resignation in 1877. The second government of Depretis started in 1881.
Depretis’ goals included widening suffrage in 1882 and increasing the tax intake from Italians by expanding the minimum requirements of who could pay taxes, as well as the creation of a new electoral system, which resulted in large numbers of inexperienced deputies in the Italian parliament. In 1887, Depretis was finally pushed out of office after years of political decline.
In 1887, Depretis cabinet minister and former Garibaldi republican Francesco Crispi became Prime Minister. Crispi’s major concern during his reign was protecting Italy from its dangerous neighbour Austria-Hungary. To challenge the threat, Crispi worked to build Italy as a great world power through increased military expenditures, advocacy of expansionism, and trying to win Germany's favor, even joining the Triple Alliance, which included both Germany and Austria-Hungary, in 1882; it remained officially intact until 1915. While helping Italy develop strategically, he continued trasformismo and was authoritarian, once suggesting the use of martial law to ban opposition parties. Despite being authoritarian, Crispi put through liberal policies such as the Public Health Act of 1888 and establishing tribunals for redress against abuses by the government.
The overwhelming attention paid to foreign policy alienated the agricultural community in Italy, which had been in decline since 1873.", "score": 19.404527245541964, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "They were typically aristocrats and business leaders, totally uninterested in democracy or in improving the lives of working people.
As a young man Garibaldi gravitated towards a secret movement known as La Giovine Italia, “Young Italy”. It was led by Giuseppe Mazzini.
He believed in revolutionary action to unite Italy as a republic, rather than as a monarchy.\nHowever it is always difficult to build a mass movement under a dictatorship, and Garibaldi’s first experience of armed insurrection, in Genoa in 1834, was a dismal failure.\nThe following year Garibaldi moved to South America where he spent the next 13 years taking part in a variety of national liberation movements.\nWord of his exploits started to feed back to Italy and he acquired a reputation as the “hero of two worlds”.\nThe year 1848 was a turning point for popular struggle throughout Europe, as revolt after revolt broke out against hated rulers. This continent-wide uprising began in Sicily.\nKarl Marx noted at the time, “The bloody revolt of the people in Palermo affected the paralysed mass of the people like an electric shock and reawakened their great revolutionary memories and passions.”\nIn northern Italy the catalyst was revolution in Austria and the fall of its ruler, Prince Metternich. The Austrian army was occupying northern Italy and many began to desert. An uprising broke out in Milan four days after Metternich’s fall.\nPoor people and peasants supported this movement, since they suffered from conscription into the Austrian army – which landowners could pay to avoid – and had to pay heavy taxes to the Austrian empire.\nThe whole continent was soon in revolt. As Marx wrote, “Every postal delivery brought a fresh report of revolution, now from Italy, now from Germany, now from the remotest south east of Europe, and sustained the general intoxication of the people.”\nOne of the highest points in this revolutionary wave was the Roman Republic, proclaimed in Rome in January 1849.\nNot only did it oppose monarchy, it was also in favour of fair systems of taxation. 
It all looked similar to the French Revolution of 1789, right down to the planting of “trees of liberty”.
Indeed there had been another revolution in France the previous June.", "score": 18.90404751587654, "rank": 67}, {"document_id": "doc-::chunk-3", "d_text": "The \"Young Europe\" movement also inspired a group of young Turkish army cadets and students who, later in history, named themselves the \"Young Turks\".
In 1843 he organized another riot in Bologna, which attracted the attention of two young officers of the Austrian Navy, Attilio and Emilio Bandiera. With Mazzini's support, they landed near Cosenza (Kingdom of Naples), but were arrested and executed. Mazzini accused the British government of having passed information about the expeditions to the Neapolitans, and a question was raised in the British Parliament. When it was admitted that his private letters had indeed been opened, and their contents revealed by the Foreign Office to the Austrian and Neapolitan governments, Mazzini gained popularity and support among the British liberals, who were outraged by such a blatant intrusion of the government into his private correspondence.
In 1847 he moved again to London, where he wrote a long \"open letter\" to Pope Pius IX, whose apparently liberal reforms had gained him a momentary status as possible paladin of the unification of Italy. The Pope, however, did not reply. He also founded the People's International League. By March 8, 1848 Mazzini was in Paris, where he launched a new political association, the Associazione Nazionale Italiana.
The 1848–49 revolts
On April 7, 1848 Mazzini reached Milan, whose population had rebelled against the Austrian garrison and established a provisional government. The First Italian War of Independence, started by the Piedmontese king Charles Albert to exploit the favourable circumstances in Milan, turned into a total failure.
Mazzini, who had never been popular in the city because he wanted Lombardy to become a republic instead of joining Piedmont, abandoned Milan. He joined Garibaldi's irregular force at Bergamo, moving to Switzerland with him.\nOn February 9, 1849 a republic was declared in Rome, with Pius IX already having been forced to flee to Gaeta the preceding November. On the same day the Republic was declared, Mazzini reached the city. He was appointed, together with Carlo Armellini and Aurelio Saffi, as a member of the \"triumvirate\" of the new republic on March 29, becoming soon the true leader of the government and showing good administrative capabilities in social reforms.", "score": 18.90404751587654, "rank": 68}, {"document_id": "doc-::chunk-122", "d_text": "Meanwhile the new Pope had been elected, and from Piedmont to Calabria we hailed in him the Banner that was to lead our hosts to war.\nSo time passed and we reached the last months of '47. The villa on Iseo had been closed since the end of August. Roberto had no great liking for his gloomy palace in Milan, and it had been his habit to spend nine months of the year at Siviano; but he was now too much engrossed in his work to remain away from Milan, and his wife and sister had joined him there as soon as the midsummer heat was over. During the autumn he had called me once or twice to the city to consult me on business connected with his fruit-farms; and in the course of our talks he had sometimes let fall a hint of graver matters. It was in July of that year that a troop of Croats had marched into Ferrara, with muskets and cannon loaded. The lighted matches of their cannon had fired the sleeping hate of Austria, and the whole country now echoed the Lombard cry: \"Out with the barbarian!\" All talk of adjustment, compromise, reorganization, shrivelled on lips that the live coal of patriotism had touched. 
Italy for the Italians, and then--monarchy, federation, republic, it mattered not what!\nThe oppressor's grip had tightened on our throats and the clear-sighted saw well enough that Metternich's policy was to provoke a rebellion and then crush it under the Croat heel. But it was too late to cry prudence in Lombardy. With the first days of the new year the tobacco riots had drawn blood in Milan. Soon afterward the Lions' Club was closed, and edicts were issued forbidding the singing of Pio Nono's hymn, the wearing of white and blue, the collecting of subscriptions for the victims of the riots. To each prohibition Milan returned a fresh defiance. The ladies of the nobility put on mourning for the rioters who had been shot down by the soldiery. Half the members of the Guardia Nobile resigned and Count Borromeo sent back his Golden Fleece to the Emperor. Fresh regiments were continually pouring into Milan and it was no secret that Radetsky was strengthening the fortifications. Late in January several leading liberals were arrested and sent into exile, and two weeks later martial law was proclaimed in Milan.", "score": 18.90404751587654, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "For two centuries, between the French Revolution and the collapse of the Soviet Union, the model for military mobilization in Europe was associated with the nation-state and conscription. Citizens played a key role in the defence of their state: the nation-state was considered the primary political unit of the international system and citizens were usually expected to fight for their own state. But non-state mobilization did not disappear. As Nir Arielli and Bruce Collins showed, the history of military mobilization does not fit neatly into national boxes. 
Transnational war volunteers existed throughout the 19th and 20th centuries and played a central role in many European wars.

In August 1936 the Italian anarchist Lanciotto Corsi was 43 years old and had been living in France for five years. He had a wife, five children and a precarious job in Marseille. He decided to go to Spain, where a civil war had started a few weeks earlier. In Barcelona, Corsi joined an international group fighting on the Aragonese front. He wrote to his wife that he had gone to Spain to fight fascism because, for the moment, it was impossible to do so in Italy. Why did a mature man decide to leave his family and his job to go to fight in a foreign country?

Between WWI and WWII, within the anti-fascist movement, thousands of Europeans were spurred into action by the political struggles in their home societies. The Spanish Civil War represented one of the last causes for which many people voluntarily decided to fight. Around 40,000 international volunteers fought against Franco's troops.

This project's overall objective is to carry out a transnational study of the legacies and survival of the myth of so-called Garibaldinism between the experience of Italian unification (1861) and the Spanish Civil War (1936-39). Following both a social and a cultural history perspective, it will analyse how the legacy of Giuseppe Garibaldi in Southern Europe remained very strong and was linked with social and political claims. In our approach, Garibaldinism will refer to a political and cultural phenomenon aimed at encouraging a broad and consciously "popular" type of aggregation, strictly linked to the tradition of armed voluntarism and the attempt to form an ideal homogeneous block that goes beyond the single ideological matrices and political formations of which it is composed.

Giuseppe Garibaldi and The Creation of Italy.
Rome today

Giuseppe Garibaldi and his Achievements for Italy

The son of a seaman, he was born on July 4, 1807, in Nice, which is now in France but was then an Italian town. From his earliest years he loved adventure, and at the age of 15 tried to run away to sea. With his father's agreement he later went to sea and by the time he was 25 had worked his way from cabin boy to captain.

In 1832 came the turning point of Garibaldi's life, for he met another Italian, Giuseppe Mazzini, who was the organiser of the “Young Italy” movement. At this time Italy was made up of several separate states, many of which were ruled by kings who paid no attention to the wishes of their people. Great parts of what is now northern Italy belonged to Austria. The members of the “Young Italy” movement sought to drive out the Austrians, to unite all the states into one country and to set up one government for the whole country. They were determined to make that government democratic; that is, to have a government that would rule in accordance with the wishes of the people.

In 1834 Garibaldi helped Mazzini to organise a rebellion in Savoy (northern Italy). The attempt failed, Garibaldi fled to France, was sentenced to death in his absence and went to live in exile in South America. For 12 years he led a band of men who were fighting for the Republic of the Rio Grande, which was rebelling against Brazil. He suffered shipwreck, imprisonment and torture so bravely that his followers worshipped him. He organised other Italian exiles into an army known as the Italian Legion, which in 1843 and 1846 saved Montevideo, the capital of Uruguay, from attack by Argentina.

In 1848 Garibaldi returned to Italy and found the north Italians in revolt against the Austrians. He formed and fought with a volunteer army of 3,000 men, but after the Italians were defeated had to escape to Switzerland. In 1849 he was back in Italy, defending Rome against the French.
The people of Rome had rebelled against the government of the Pope (who at that time ruled over great areas in the same way as an ordinary king), and set up their own government.

However, when the French troops called in by the Pope made clear that the resistance of the Republican troops, led by Garibaldi, was in vain, Mazzini set out on July 12, 1849 for Marseille, from where he moved again to Switzerland.

Mazzini spent all of 1850 hiding from the Swiss police. In July he founded the association Amici di Italia (Friends of Italy) in London, to attract consensus towards the Italian liberation cause. Two further riots also failed during the years of the Crimean War. Also in vain was the expedition of Felice Orsini in Carrara of 1853–54.

In 1856 he returned to Genoa to organize a series of uprisings. Piedmont, meanwhile, was led by Victor Emmanuel II and his skilled prime minister, Camillo Benso, Conte di Cavour. The latter defined him as "Chief of the assassins".

In 1858 he founded another journal in London, Pensiero e azione ("Thought and Action"). Also there, on February 21, 1859, together with 151 republicans he signed a manifesto against the alliance between Piedmont and the Emperor of France, which caused Giorgio Pallavicino to move away from him.

In 1862 he again joined Garibaldi during his failed attempt to free Rome. In 1866 Venetia, which France had obtained from Austria at the end of the Austro-Prussian War, was ceded to the new Kingdom of Italy, which had been created in 1861 under the Savoy monarchy. At this time Mazzini was frequently in polemics with the course followed by the unification of his country, and in 1867 he refused a seat in the Italian Chamber of Deputies. In 1870, during an attempt to free Sicily, he was arrested and imprisoned in Gaeta.
He was freed in October due to the amnesty granted after the successful capture of Rome, and returned to London in mid-December.

Karl Marx, in an 1871 interview with R. Landor, said that Mazzini's ideas represented "nothing better than the old idea of a middle-class republic." Marx believed, especially after the Revolutions of 1848, that Mazzini's point of view had become reactionary, and that the proletariat had nothing to do with it. In another interview, Marx described Mazzini as "that everlasting old ass".

Garibaldi stayed at this house on Staten Island, New York. It was the home of inventor Antonio Meucci, who is said to have invented the telephone before Alexander Graham Bell!

Garibaldi Monument in New York City

Garibaldi statue in Washington Sq. Park, downtown New York City. Make it destination #1 when you visit the Big Apple.

1851-52 Travels to Peru.

1852-53 As a "citizen of Peru," he captains a clipper to the Far East, returning to Lima via Australia and New Zealand.

1854 Returns by way of New York, carrying a cargo of coal from Newcastle (England) to Genoa.

1855 Engaged to an English lady, Mrs. Emma Roberts. Buys part of the Island of Caprera, north of Sardinia.

1856 Comes to England on a scheme (largely financed by individual British politicians and British secret service funds) to buy a ship and lead an expedition to release political prisoners in Naples; but the ship is wrecked.

1858 Goes to Turin to meet Count Cavour, the Piedmontese Prime Minister, who wants him to organize a corps of volunteers in anticipation of another war against Austria.

1859 (April) As a general in the Piedmontese army, he forms this corps, the Cacciatori delle Alpi, and war begins. (May) Takes Varese and Como, while the main Franco-Piedmontese forces are fighting in the plain of Lombardy.
(September) After the armistice of Villafranca, Baron Ricasoli gives him command of the army of Tuscany. (November) When his project to march into the Papal States is overruled, he returns to civil life.

1860 (April) As deputy for Nice in the Piedmontese parliament at Turin, he attacks Cavour for ceding Nice to Louis Napoleon, Emperor of the French. (May) He sets out with a thousand volunteers on a piratical raid against the forces of the Neapolitan Bourbons. After an engagement at Calatafimi, he captures Palermo, the capital of Sicily. (July) He wins the battle of Milazzo, near Messina. (August) Crosses the Straits of Messina, eluding the sizable Neapolitan navy.

Between the 19th and 20th centuries several European generations, the last of them the antifascist one, claimed for themselves the cultural, political, and ideal heritage of Garibaldinism. There were radical volunteers wearing the traditional red shirt in Poland (1863), in Crete (1866-67), in France (1870-71), in the Balkans (1876), in Greece (1897), in Serbia (1912 and 1914), in France again (1914) and in Spain (1936-39). A significant part of those volunteers were republicans, anarchists or socialists.

There are three main questions addressed by this research project:

- Is it possible to identify a long-term tradition of international armed volunteering linked with political radicalism between the 19th and the 20th Century?
- Is it right to speak about a transnational Garibaldinism?
- Is it possible to identify Garibaldinism as a bridge between different radical political creeds (e.g. Anarchism, Socialism and, later on, Communism)?

David Lazzaretti was born in Arcidosso in 1834 and was a carriage driver by profession. In 1860, he volunteered to join the Piedmontese army, which fought the papal troops in Umbria and Marche.
Moved by some visions, between April and September 1868 he went several times to Rome, where he obtained an audience with Pius IX. Then, between October and December of that year, he had his most significant visionary experience in a cave in Sabina, from which the story about him began. When he returned to Monte Amiata, he had adopted the guise of a 'holy' man, and many believers gathered to listen to his preaching. He retired to Monte Labbro where, between 1870 and 1872, he laid the foundations of a community founded on Christian brotherhood which shared goods and work, and opened schools for children and adults. Monte Labbro was the spiritual centre and gathering place for the faithful. Lazzaretti's work aroused the interest not only of proselytes and his supporters, but also of persecutors who were fearful of his ambition for social and religious reform.

Between March and April 1878, Lazzaretti underwent a trial before the Holy Office. At the beginning of July, he returned to Amiata, where he and his followers were accused of concealing a subversive plot against public order behind the guise of religion. On the morning of 17th August, a red wooden flag with the inscription "The Republic is the Kingdom of God" was hoisted on the tower. At dawn on 18th August, Lazzaretti and his people descended in a procession from Monte Labbro to announce to the world the advent of the new age of law and justice – the reign of the Holy Spirit. They weren't carrying any weapons. In Arcidosso, a huge crowd had arrived from nearby villages to welcome them. However, the Carabinieri, a branch of the Italian police force, were also waiting for them and fired on the defenceless crowd. David was shot in the head and died later that evening.
He was 44 years old.

Giuseppe Garibaldi resigned his commission as leader of the army of Unification (I Mille) on September 18, 1860 and retired to his home on the island of Caprera off the coast of Sardinia. He was 53 years old and recovering from a battle wound.

In 1861, at the outbreak of the American Civil War, Garibaldi was approached by a representative of the United States Government, reportedly on behalf of President Abraham Lincoln. The Union Army was in disarray and Lincoln was unhappy with those in command. He needed a proven military leader.

“The offer came at a moment in Garibaldi’s life when he lived in semi-exile—too little of a politician to scheme for personal advancement, too much of a national idol to be put behind bars on the Italian mainland. The hero of the movement for a unified Italy, he had led a spectacularly successful revolt against a reactionary regime in Sicily and in Naples—the so-called Two Sicilies—in 1860, but now he was in temporary retirement.

On lonely Caprera, a wild, rocky island covered with juniper and myrtle and stunted olive trees, below La Maddalena off the northeastern corner of Sardinia, Garibaldi tended his vines and figs, built stone walls to fence in his goats, and looked out to the sea, dreaming. The conqueror of the kingdom of the Two Sicilies, in gray trousers and slouch hat, his red shirt and poncho flapping in the wind, refused all titles and honors for himself and sought only lenience for his followers. “How men are treated like oranges—squeezed dry and then cast aside!” he said.

He had wanted to march on Rome, against the “myrmidons of Napoleon III,” supposedly there to protect the pope, and defeat the Bourbon troops. But Victor Emmanuel II, king of Sardinia and now of Sicily and Naples as well, decided that French help was needed to complete unification of Italy and called off Garibaldi’s advance.
Going back to Caprera, Garibaldi leaned against the steamer rail and said to his legion of Red Shirts: “Addio—a Roma!”

Abraham Lincoln’s Offer

Through the letter, dated July 17, 1861, from Secretary of State William H. Seward to H.S. Sanford, the U.S.

On Napoleon’s escape and return to France (the Hundred Days), he regained Murat’s support, but Murat proved unable to convince the Italians to fight for Napoleon with his Proclamation of Rimini and was beaten and killed. The Italian kingdoms thus fell, and Italy’s Restoration period began, with many pre-Napoleonic sovereigns returned to their thrones. Piedmont, Genoa and Nice came to be united, as did Sardinia (which went on to create the State of Savoy), while Lombardy, Veneto, Istria and Dalmatia were re-annexed to Austria.

The dukedoms of Parma and Modena re-formed, and the Papal States and the Kingdom of Naples returned to the Bourbons. The political and social events of the restoration period in Italy (1815–1835) led to popular uprisings throughout the peninsula and greatly shaped what would become the Italian Wars of Independence. All this led to a new Kingdom of Italy and Italian unification.

Unification (1814 to 1861)

It is difficult to pin down exact dates for the beginning and end of Italian reunification, but most scholars agree that it began with the end of Napoleonic rule and the Congress of Vienna in 1815, and approximately ended with the Franco-Prussian War in 1871, though the last “città irredente” did not join the Kingdom of Italy until the Italian victory in World War I.

As Napoleon’s reign began to fail, other national monarchs he had installed tried to keep their thrones by feeding those nationalistic sentiments, setting the stage for the revolutions to come.
Among these monarchs were the viceroy of Italy, Eugène de Beauharnais, who tried to get Austrian approval for his succession to the Kingdom of Italy, and Joachim Murat, who called for Italian patriots’ help for the unification of Italy under his rule. Following the defeat of Napoleonic France, the Congress of Vienna (1815) was convened to redraw the European continent. In Italy, the Congress restored the pre-Napoleonic patchwork of independent governments, either directly ruled or strongly influenced by the prevailing European powers, particularly Austria.

The painting comes from the collection of Enrico Piceni, the discerning art lover and journalist who was also the driving force behind the important series of monographic studies on 19th-century masters published by Arnoldo Mondadori in the 1930s. It is a replica of the work commissioned by Cesare Sala and shown at the Esposizione di Belle Arti dell’Accademia di Brera in 1861 with the title Garibaldi on the Hill over Sant’Angelo (1861, Milan, Civiche Raccolte Storiche). It portrays Garibaldi on the high ground overlooking the plain of the River Volturno, where the outline of the town of Capua can be discerned. Two mounted soldiers on the right seem to be waiting for the general as he pauses pensively on the spot where his army of volunteers has just won a heroic victory. Garibaldi was soon after to meet Vittorio Emanuele II at Teano and hand over the conquered territories, thus marking the end of the expedition of his one thousand volunteers and the beginning of the Kingdom of Italy. Unquestionably the greatest leader of the Risorgimento, Garibaldi appears in the history painting of Gerolamo Induno as an epic symbol and a very human figure at the same time.
In this work he stands out in isolation against the sun-drenched landscape of southern Italy but displays the informality of the ordinary mortal in the way he holds the cigar between his fingers and the natural pose of the arm resting on the hilt of his sword. The realism of this portrait is inseparably interwoven with the heroic character of its subject, thus endowing the work with the unmistakable power to bring the events and personages of contemporary history closer to the life of the common people, as shown also in the canvases of the painter’s brother Domenico Induno.

The legend of the Risorgimento became sacred and untouchable. All the leaders were shown to have been great, disinterested men, Plutarchian characters, united by a common purpose and mutual respect, divided only occasionally by superficial discordances. Memorable sayings were religiously preserved, but when none was available they were fabricated, either by improving on reality or by outright invention. One example: when Victor Emanuel arrived in Rome for the first time, in the autumn of 1870 (the carriage trip from Florence under pouring rains on muddy roads had been especially trying), he said a few words in Piedmontese patois to his aide-de-camp: “Finalment, i suma” or “We’ve finally arrived.” The line was translated in all schoolbooks for generations as: “A Roma ci siamo e ci resteremo” or “We are in Rome. We will stay.” It had been transformed into a political pronouncement squarely aimed at the Pope’s temporal power and papal legitimists’ hopes.

Mack Smith is not only an antlike researcher but, as is true of many other contemporary English historians, a remarkably good writer. Good writing is seldom the product of divine inspiration. Almost always it is the art of thinking clearly, of knowing exactly what one wants to say, in what order and proportions, and of saying it with the precise words.
His work is so rich in the new material he has gathered (or the material many of his Italian predecessors did not dare use) that he can often afford to leave out anecdotes and lines too well known to be recounted one more time. His books can be read with as much pleasure as good novels, at least by Italians who know the background. Dramatic events quickly follow one another relentlessly and logically; historic speeches and pamphlets are reduced to a few significant lines; characters are sharply drawn.

He is, furthermore, a prolific writer. His stock of source material is so vast that he could put together a new book on the Risorgimento in a relatively short time without repeating himself. His Italy: a Modern History, Garibaldi, and Medieval and Modern Sicily were immediately translated into Italian, published by the revered house of Láterza at Bari, which was Benedetto Croce’s publisher, and almost all made the Italian best-seller list.

In the end, under the Treaty of Paris, Britain accepted that the 13 colonies were independent. Since the declaration of independence was made on 4th July, it became a day of great significance to the Americans.

The American War of Independence acted as an inspiration for the French Revolution. Many of the Spanish and Portuguese colonies of America were inspired to become free and hence revolted against their mother countries. The new nation called the United States of America was born. This is the most important significance of the American War of Independence.

How were economic factors responsible for the French Revolution?

Even at the end of the 18th century, France was economically backward. Though its neighboring country England had become an industrial country, France remained an agricultural country. Feudalism still existed.

The clergy and the nobles led a luxurious life at the cost of ordinary people.
The farmers and other skilled laborers could not pay the taxes levied on them by the kings. The farmers revolted against the system.

Industries were under the control of trade guilds, and they were not strong enough to compete with those of other countries like England. In all, there was economic instability in the country because of maladministration.

What was the role of Garibaldi in Italy’s unification?

Garibaldi was a soldier and fighter. He played an important role in the unification of Italy.

He joined the ‘Young Italy’ movement established by Mazzini. After that, he formed an army called the ‘Red Brigade’ or “Red Shirts”.

With the help of Sardinia, he fought against Austria. In 1860, he fought against the Kingdom of the Two Sicilies using his Red Brigade, and conquered it. Thus he played his role in the unification of Italy.

Who was the architect of the unification of Germany? Write a note on him.

The architect of the unification of Germany was Otto von Bismarck.

He was the chief minister of the King of Prussia, William I. He had begun his career as a government servant, a member of the assembly called the “Diet” and an ambassador to various nations, and had gained a lot of popularity. He was aware of the German States Association under the leadership of Austria and knew about the activities and weaknesses of this organization. Having worked in Austria, France, and Russia as an ambassador, he knew their strengths and weaknesses.

If you visit Italy, you will notice that, wherever you’re staying, the main street or square will almost invariably be named after Giuseppe Garibaldi. Garibaldi is the national hero who led the movement to unite Italy in the mid-19th century.

Italy only became a unified state between 1859 and 1871.
Before that it was a patchwork of different states.

“We Italians adore Garibaldi – from the cradle we are taught to admire him,” Antonio Gramsci, the great Italian Marxist, wrote.

“If one were to ask Italian youngsters who they would most like to be, the overwhelming majority would certainly opt for the blond hero.”

But it is not just the left that would like to claim Garibaldi as one of their own. The Italian fascist dictator Benito Mussolini and other leaders of the Italian right have been fascinated by Garibaldi’s military exploits and admired him for his patriotism.

Garibaldi’s exploits made him an international hero during his lifetime. An account from April 1864 describes one of his visits to London:

“The working men of London had organised a procession for the purpose of meeting and welcoming the liberator of Italy.

“But this procession, though numbering 50,000 intelligent artisans, was completely swallowed up in the mighty ovation by the whole metropolitan people, and served merely as a foretaste to Garibaldi of the extraordinary testimony which was about to be given of the estimation in which his principles and services in the cause of liberty were held by the English people.”

Garibaldi was in London primarily to raise funds to finance an expedition to free Venice, which was still under Austrian rule. He mixed in government circles, and spent many evenings chatting with the middle classes.

But in a contradiction that sums up his life, Garibaldi had also been invited to London by the city’s trades council.

The reaction he provoked among workers and trade unions began to worry the government. It eventually ordered him out of the country – and Queen Victoria made clear she regarded this as good riddance.

But why did people make such a fuss about Garibaldi, and why is his memory so contested?
He was born 200 years ago in the city of Nice, then an Italian-French area, the son of a fisherman.

Like many people in continental Europe at the time, Garibaldi experienced brutal domination by a foreign power. However, the local opponents of this rule were often not much better.

In 1796, he forced Austria to retreat, and he then spent the next three years conquering the Italian peninsula in the name of the French Revolution. This included declaring himself King of Italy in 1805 in a region around Venice. He ruled the rest of the peninsula, but mainly through his rule of France. In 1809, Napoleon made it to Rome. He had been excommunicated earlier by the Pope, but now the Pope fled. There was a lot more fighting, and eventually Napoleon was exiled to Elba, escaped, and still did not succeed.

The seed of unification was planted. Revolution and independence were popular at that time in Europe and abroad. The British colonies had declared themselves free a few decades back, and the French people had had a revolution as well. On the Italian peninsula, where the seed of unification was sprouting, the flame grew into the Italian Wars of Independence of 1848-1866. These wars were mainly against Austria, that big brute, and the Kingdom of Sardinia did most of the heavy lifting.

Part of the unification of Italy involved choosing a king. King Victor Emmanuel II of the Kingdom of Sardinia was chosen as King of Italy in 1861. Italy was a kingdom until 1946, when a referendum created the modern Italian Republic. Giuseppe Garibaldi was an Italian general who is considered one of the fathers of modern Italy. Garibaldi’s life took him all around the world and he became a much admired figure in history. For the sake of a book on food, I mention him as part of history.
He helped create Italy as a unified country, but there are still Italians who wish he had stayed home.

The flag of Italy, “il Tricolore”, consists of three equal vertical panels of green, white, and red. Green is for the evergreen scrubland of the Mediterranean, white for the snowcapped Alps, and red for the blood shed during the revolution. Garibaldi’s men were also called redshirts after their uniform. According to Catholic interpretation, the flag colors represent hope, faith, and love. The “love” part of this represents the virtue of charity and love for God and for thy neighbor. Quite appropriate for a land of neighbors.

It was in this atmosphere that the writer and soldier Gabriele D'Annunzio, who'd done much to convince Italy to enter the war in the first place, led a band of black-shirted militant nationalists against Fiume in the north, setting up his own quixotic government that lasted for several months.

Of all the fire-breathing Italian demagogues to emerge after the war, only Mussolini articulated, maintained, and sold a vision of Italian nationalism and greatness that the masses could believe. His widest appeal, he realized, would be to the million war veterans; he figured that if he could get their support he could lead all of Italy. Which is why he formed his own political movement known as the Fasci Italiani di Combattimento--Italian Combat Leagues--which soon became known as the Fascist party, its followers called Fascisti. Their uniform was the black shirt--an homage to D'Annunzio--and their primary tactic was the violent disruption of rallies held by other political parties, especially the socialists.

The years 1920 and 1921 were misery in Italy--leftist-led labor strikes, food riots, and full-fledged tax revolts.
Life for average Italians became a kind of earthly hell, with the police and army powerless to protect them.

If he had planned it this way, Mussolini couldn't have given himself a more glorious opportunity. He sent out his bands of armed Black Shirts, like hired guns in the Old West, to restore order in the streets and factories. Naturally, his most generous employers were industrialists and big landlords, anxious to again make profits. But on his own, with a wink and a nod from Rome, he also took it upon himself to destroy communist and socialist groups, and left-wing trade unions, most of which were Catholic. That's how his National Fascist Party consolidated its own power, by reducing the power of the Church, the left, and the government. For the elections of 1921, the liberal government even brought the Fascists into an electoral coalition as a way to keep the beast fed, and the Fascists won 35 seats, including one by Mussolini himself. Government ministers, and King Victor Emmanuel, hoped Mussolini would be happy with what he'd achieved, and they planned to continue using his Black Shirts to put down threats from the left.

The handwriting appears nervy and frenetic, with numerous spelling errors, among which are "Ilia" for "Italia" and "Ballilla" for "Balilla". The second manuscript is the copy that Goffredo Mameli sent to Michele Novaro for setting to music. It shows a much steadier handwriting, fixes the misspellings, and has a significant modification: the incipit is "Fratelli d'Italia". This copy is in the Museo del Risorgimento in Turin. The hymn was also printed on leaflets in Genoa, by the printing office Casamara. The Istituto Mazziniano has a copy of these, with hand annotations by Mameli himself. This sheet, subsequent to the two manuscripts, lacks the last strophe ("Son giunchi che piegano...") for fear of censorship.
These leaflets were to be distributed at the December 10 demonstration in Genoa.3

December 10, 1847 was a historic day for Italy: the demonstration was officially dedicated to the 101st anniversary of the popular rebellion which led to the expulsion of the Austrian powers from the city; in fact it was an excuse to protest against foreign occupations in Italy and induce Carlo Alberto to embrace the Italian cause of liberty. On this occasion the tricolor flag was shown and Mameli's hymn was publicly sung for the first time. After December 10 the hymn spread all over the Italian peninsula, brought by the same patriots that participated in the Genoa demonstration. In 1848, Mameli's hymn was very popular among the Italian people and it was commonly sung during demonstrations, protests and revolts as a symbol of the Italian Unification in most parts of Italy. During the Five Days of Milan, the rebels sang the Song of the Italians in clashes against the Austrian Empire.4 In 1860, the corps of volunteers led by Giuseppe Garibaldi used to sing the hymn in the battles against the Bourbons in Sicily and Southern Italy.5 Giuseppe Verdi, in his Inno delle nazioni (Hymn of the Nations), composed for the London International Exhibition of 1862, chose Il Canto degli Italiani to represent Italy, putting it beside God Save the Queen and La Marseillaise.

He also convinced his king to exchange the province of Savoy and the city of Nice, the birthplace of Italian patriot Giuseppe Garibaldi, for the strategic friendship of France against Austria.

He might be called “a man for all seasons” and we Kurds might see some of our own leaders in him.
But this indefatigable Kissinger of Italy also earned the well-deserved title of liar for all time.

The man who actually freed Italy and captured the imagination of Italians and Europeans alike was the native of Nice, Giuseppe Garibaldi (1807-1882). His contemporaries likened him to the Spartan Leonidas, the Roman Cincinnatus and the American George Washington.

Britain's poet laureate Lord Tennyson, after planting a tree in his honor, movingly wrote:

Or watch the waving pine which here
The warrior of Caprera set,
A name that earth will not forget
Till earth has roll'd her latest year—.

Liberation movements used to command the attention of 19th-century poets. At the dawn of the 21st century, the bards have gone quiet, but the prose writers are busy conflating our desire to be free, alas, with selfishness.

In Italy, the decisive countdown for liberation started on May 5, 1860. With the help of Cavour, Garibaldi and his Mille (thousand) Redshirts sailed to Marsala in western Sicily with a wink and a nod from Great Britain.

Four months later, all of Sicily and southern Italy were in the hands of 21 thousand Redshirts and Garibaldi. Italians were winning, and those dying could finally say, as Leopardi wanted them to:

"Beloved native land,
the life you gave me I give back to you."

Ten years later, with a little bit of help from Germany, modern Italy was born.

But instead of focusing its energies on making Italians better patriots, it foolishly aspired to become an imperial power, with fleeting forays into Eritrea, Somalia, Libya, the Dodecanese Islands, and Ethiopia.

Chancellor Bismarck's unheeded observation was spot-on: "Italy has a large appetite but false teeth."

Leopardi, the Italian poet, can finally rest in peace.
These days, Italians are not dying on the outskirts of Moscow from cold.

The second most populous state in Europe was the multinational Habsburg Empire.

Revolution had different implications depending on whether it broke out in the empire's western or eastern areas. In the west, revolutionaries looked to refining an already developed liberalism to benefit an educated, relatively affluent urban population. In the east, liberalism was more about freeing serfs in areas where most of the population was poor.

The invasion of the royal palace in the second week of March was so shocking that Metternich resigned and fled the country. The long-restive Magyars passed the March Laws in their Diet. There were challenges to Habsburg rule in other parts of the empire as well. The Austrian military garrison was driven out of Milan, the largest city of northern Italy and the capital of Lombardy. A republic was proclaimed in Venice. Rioting in Berlin prompted the king of Prussia to promise a constitution. By the end of the month, the leaders of the smaller German states, facing similar unrest, had agreed to call an assembly that would represent all German states, with the understanding that a more centralized and unified state form would emerge. The assembly was formed on the basis of a democratic vote and met in May of 1848. The hopes of German nationalists were dashed within about a year, and the assembly came to symbolize the failures of German liberalism.

The king of Piedmont-Sardinia, the major independent state in the north of Italy, ordered an invasion of Lombardy, hoping to increase his holdings and create a powerful northern Italian state. He was alarmed when troops from Tuscany, and from as far south as Naples, began to march north to help drive out the Austrians. The men were united by vague visions of a unified Italy, but there was little consensus about the form a unified Italian state would assume. Despite Italy's clear natural frontiers, most Italians north of Rome had little enthusiasm for an immediate union with the backward areas south of Rome. Local and regional loyalties in Italy remained strong into the twentieth century.

The March uprisings are considered the most widespread wave of revolution in European history. Socialists, radicals, and moderate republicans differed on too many fundamental points to work effectively together in France. Most of them had no political experience, and they could not assemble a reliable military force. In France and the rest of Europe, the left did not enjoy broad or reliable popular support, and its initial surge of support was no match for trained and disciplined troops.

But the left never stopped trying to incite revolution, so with the government unable to defeat them, that job fell to the Fascists.

WHAT DID the average Italian man want? Not revolution. No, he wanted only to earn a decent living and to drink wine and make love to his wife under a warm roof that didn't leak. He wanted life to be bearable again and orderly--for the trains to run on time--and by promising that, Mussolini was like Spartacus two thousand years before, his army of admirers and followers growing with every battle won. When he stood before the cheering throngs at Fascist rallies, they called out "Il Duce"--The Leader--because that's what he was to them, the man who would lead them out of turmoil and despair.

Throughout 1922, several parliamentary governments were formed and quickly collapsed under the weight of unrest and uncertainty. Then came October. In the lengthening shadows and falling leaves, tens of thousands of black-shirted Fascists gathered from all over Italy and descended on Rome, an occupying force awaiting word from its commanding general.
Mussolini threatened Victor Emmanuel with all-out civil war if the king didn't name him, Il Duce, to lead the next government. In response, Emmanuel began mobilizing his army against the Fascists--martial law. But his ministers and generals said that would be madness, and convinced him to appease Mussolini by making him prime minister of a coalition government; that, they said, would defuse the situation.

Over the next five years, under the guise of law and order, Mussolini whittled away at the constitution. In 1924, a leading socialist deputy was found murdered, with no one claiming responsibility but no one denying it, either; the man was, after all, a leftist. In the same way, all opposition parties and leaders were harassed and threatened by Mussolini's secret police, and soon the press was ordered to endorse the governmental position or face the consequences. Whatever outrage there was dribbled out in whispers, because the factory workers and the landlords and the moneyed interests were more content; the average man could again work at a job that paid him a little, not too much, but enough for a bottle of wine and a bowl of pasta and a warm place to make love to his wife, and Italy was once again a unified country.

The Black Death repeatedly returned to haunt Italy throughout the 14th to 17th centuries. The plague of 1575–77 claimed some 50,000 victims in Venice. In the first half of the 17th century a plague claimed some 1,730,000 victims, or about 14% of Italy's population. The Great Plague of Milan occurred from 1629 through 1631 in northern Italy, with the cities of Lombardy and Venice experiencing particularly high death rates.
In 1656 the plague killed about half of Naples' 300,000 inhabitants.

Napoleonic invasion (1796)

At the end of the 18th century, Italy was in almost the same political condition as in the 16th century; the main differences were that Austria had replaced Spain as the dominant foreign power after the War of Spanish Succession (though this was not true of Naples and Sicily), and that the dukes of Savoy (a mountainous region between Italy and France) had become kings of Sardinia by increasing their Italian possessions, which now included Sardinia and the north-western region of Piedmont.

This situation was shaken in 1796, when the French Army of Italy under Napoleon invaded Italy, with the aims of forcing the First Coalition to abandon Sardinia (where they had created an anti-revolutionary puppet ruler) and forcing Austria to withdraw from Italy. The first battles came on 9 April, between the French and the Piedmontese, and within only two weeks Victor Amadeus III of Sardinia was forced to sign an armistice. On 15 May the French general entered Milan, where he was welcomed as a liberator. Subsequently beating off Austrian counterattacks and continuing to advance, he arrived in the Veneto in 1797.
Here occurred the Veronese Easters, an act of rebellion against French oppression that tied down Napoleon for about a week.

In October 1797 Napoleon signed the Treaty of Campo Formio, by which the Republic of Venice was annexed to the Austrian state, dashing Italian nationalists' hopes that it might become an independent state.

A public announcement from the PCI in September 1943 stated:

To the tyranny of Nazism, which seeks to reduce us to slavery through violence and terror, we must respond with violence and terror.— Appeal of the PCI to the Italian People, September 1943

The GAP's mission was claimed to be delivering "justice" against Nazi tyranny and terror, with emphasis on the selection of targets: "the official, hierarchical collaborators, agents hired to denounce men of the Resistance and Jews, the Nazi police informants and the law-enforcement organizations of the CSR", thus differentiating it from the Nazi terror. However, partisan memoirs discussed the "elimination of especially heinous enemies", such as torturers, spies and provocateurs. Some orders from partisan branch commands insisted on protecting the innocent, rather than providing lists of categories of individuals deserving of punishment. Part of the Italian press during the war agreed that the murder victims included some of the most moderate Republican fascists, those willing to compromise and negotiate, such as Aldo Resega, Igino Ghisellini, Eugenio Facchini and the philosopher Giovanni Gentile.

Women also participated in the resistance, mainly by procuring supplies, clothing and medicines, spreading anti-fascist propaganda, fundraising, maintaining communications, serving as partisan couriers, and taking part in strikes and demonstrations against fascism. Some women actively participated in the conflict as combatants.

The first detachment of guerrilla fighters rose up in Piedmont in mid-1944 as the Garibaldi Brigade Eusebio Giambone.
Partisan strength varied with the seasons, with German and Fascist repression, and with Italian topography, never exceeding 200,000 men actively involved, though with important support from residents of the occupied territories. Nonetheless it was an important factor, immobilizing a conspicuous part of the German forces in Italy and keeping German communication lines insecure.

When the Italian Resistance movement began following the armistice, with various Italian soldiers of disbanded units and many young people unwilling to be conscripted into the fascist forces, Mussolini's Italian Social Republic (RSI) also began putting together an army. This was formed from what was left of the previous Regio Esercito and Regia Marina corps, fascist volunteers and drafted personnel.

As a general, he was fearless, commanding respect and loyalty from his men by fighting right alongside them in hand-to-hand combat. He was a man of action with an acute sense of justice and a childlike belief that good would triumph over immorality and corruption. He didn't want to win battles for politicians--he distrusted them. He was simply and truly out to smite evil. He was what most twelve-year-old boys want to be when they grow up, and if you ever have a strange dream in which you are beset by enemies and plagued by wrongdoers, and your dreammeister lets you choose whomever you want for help, take Garibaldi. Ask Cavour and Mazzini. They took him, and they didn't even like him. He was that good.

Thus, in May of 1860, Francis II, King of the Two Sicilies, had excellent reason to worry. Garibaldi, over the objections of the ultra-cautious Cavour, had just smuggled a small and almost unarmed (!) group of men out of the port of Genoa aboard two leaky tubs and set off to liberate the Italian south.
He would start in Sicily, in support of a local uprising, and work his way over to the mainland and on up to the capital, the city of Naples. He cajoled and threatened weapons and ammunition out of the commanders of a few armories along the way as he plodded south toward Sicily, where his famous "Thousand redshirts" (1,089, to be exact) would take on a regular army twenty times their number. (Garibaldi set sail from Genoa on May 5 and landed in Sicily on May 11.)

The Kingdom of the Two Sicilies had a sizable army and the largest navy in the Mediterranean. Socially and politically, however, it had been standing still since the post-Napoleonic Restoration in 1815, surviving the Carbonari revolution of 1820 and successfully resisting calls for reform only by being propped up by the Austrian army and Swiss mercenaries. Many of the kingdom's liberals and intellectuals had left, and by 1860 even King Francis could sense what was coming. In June of that year (after Garibaldi had already taken Sicily!)

Triumvir of the Roman Republic (March 29 – July 1, 1849)
Preceded by: Republic established
Succeeded by: Aurelio Saliceti
Born: 22 June 1805, Genoa, Ligurian Republic (then part of the First French Empire), now Italy
Died: 10 March 1872, Pisa, Kingdom of Italy
Alma mater: University of Genoa
Profession: Politician, journalist, and activist for Italian independence/unification

Giuseppe Mazzini (22 June 1805 – 10 March 1872) was an Italian politician, journalist and activist for the unification of Italy. His efforts helped bring about an independent and unified Italy in place of the several separate states, many dominated by foreign powers, that existed until the 19th century.
He also helped define the modern European movement for popular democracy in a republican state.

Mazzini was born in Genoa, then part of the Ligurian Republic, under the rule of the French Empire. His father, Giacomo Mazzini, originally from Chiavari, was a university professor who had adhered to Jacobin ideology; his mother, Maria Drago, was renowned for her beauty and her religious (Jansenist) fervour. From a very early age, Mazzini showed strong learning abilities (as well as a precocious interest in politics and literature), and he was admitted to the University at only 14, graduating in law in 1826 and initially practicing as a "poor man's lawyer". He also hoped to become a historical novelist or a dramatist, and in the same year he wrote his first essay, Dell'amor patrio di Dante ("On Dante's Patriotic Love"), which was published in 1837. In 1828–29 he collaborated with a Genoese newspaper, L'indicatore genovese, which was, however, soon closed by the Piedmontese authorities. He then became one of the leading authors of L'Indicatore Livornese, published at Livorno by F.D.
Guerrazzi, until this paper was closed down by the authorities, too.

In 1827 Mazzini travelled to Tuscany, where he became a member of the Carbonari, a secret association with political purposes.

Italian poet Giacomo Leopardi (1798-1837) didn't live long enough to see a free Italy, but he wrote some of the most moving verses lamenting his homeland's subjugation by France under Napoleon Bonaparte.

In a poem titled To Italy, he mourns the loss of Italian soldiers in Bonaparte's ill-fated Russian campaign with haunting words that could easily pass for the Kurdish fighters who died in Korea for the Turkish leader Adnan Menderes or in Kuwait for the Arab leader Saddam Hussein.

Oh miserable is he—who dies in battle,
not for his country's soil, his faithful wife
and precious children,
but he dies serving someone else,
dies at the hands of that man's enemies,
and can't say at the end: Beloved native land,
the life you gave me I give back to you.

Italians devoured his poems like hungry wolves and set out to liberate Italy from the yoke of France and Austria.

Three of them deserve special Kurdish notice—for what may be called the soul, brain and sword of Italian freedom.

Giuseppe Mazzini (1805-1872) spoke to the soul of Italians with his passion for liberty.
As if addressing the stateless peoples all over the world, he said:

"Do not beguile yourselves with the hope of emancipation from unjust social conditions if you do not first conquer a country for yourselves."

"Without country," he harangued his fellow Italians, "you are the bastards of humanity."

Truer words have never been uttered so succinctly on behalf of freedom anywhere in the world.

Mazzini may have voiced the deepest yearnings of Italian patriots, but the brain behind unification was Count Cavour (1810-1861), the prime minister of King Victor Emmanuel in the Kingdom of Piedmont, a small Italian state bordering France.

Unlike Mazzini, who advocated mingling blood and iron for liberation, Cavour opted for diplomacy, especially with Great Britain, the superpower of the 19th century, to achieve the same goal.

So when Great Britain, together with France and the Ottoman Empire, declared war on Russia to preserve the territorial integrity of the crumbling Turkish state for geopolitical reasons, Count Cavour rushed 15,000 Italian soldiers to Crimea to help Great Britain win the war in 1853.

Four years later, the same Cavour allowed Russia to establish a naval base in Piedmont.

After preparing the international terrain at the Leghorn meeting, Mussolini waited for an opportunity to carry out his plan to take over Albania. This opportunity was given to him by the Dukagjin Uprising. In early 1926, the Bashkimi Kombëtar Committee, with its headquarters in Bari, was informed that demoralisation within the Albanian regime had reached a zenith because of corruption and the oppression of the population, and because of concessions given to foreign circles in contradiction with vital national interests.
Convinced that a well-organised movement could overthrow the regime without great sacrifice and would be welcomed by the majority of the population, the Committee made a corresponding proposal to Albanian patriots in the north and south of the country. The patriots in the south replied that they were ready to act unconditionally at the moment the Committee deemed suitable. The patriots in the north, for their part, asked for 5,000 gold Napoleons, a sum ten times greater than what the Committee had. A little later, however, the situation in the north changed radically, and in the autumn of 1926 there was great support for an uprising in Albania, in particular after the proposal made by Baron Aloisi. At this time, the Catholic clergy and the leaders of the Muslim community in Shkodra informed the Committee that they were ready to organise an uprising without any financial backing. They added that the situation was so tense that, if the Bashkimi Kombëtar Committee did not act, other irresponsible and ill-prepared individuals would take over the leadership of a national uprising which, under such conditions, would have catastrophic repercussions for the country. Unfortunately this is exactly what happened. The Bashkimi Kombëtar Committee told patriots in Albania to make the necessary preparations, but to wait for orders before rising in revolt. The reason why the uprising did not begin immediately was a disagreement among the leaders of our organisation (Bashkimi Kombëtar) with regard to collaboration with Konare. Sotir Peci and the leaders of the north were strictly against any uprising conducted with Konare leaders, because the participation of the latter would compromise the movement internationally such that it would be decried as a communist revolt.

What exactly gives particles their mass according to physics?
The Higgs boson, a sub-atomic particle that pervades the Universe, is crucial to something called the Standard Model, which attempts to unite the building blocks of matter with three of the four known forces of nature – only gravity is left out.

Until Peter Higgs, Francois Englert and Robert Brout came along with their theory in 1964, the Standard Model was under threat because it could not explain how sub-atomic particles – and hence all matter in the Universe – have mass and therefore structure.

The proposition of a sub-atomic particle that creates a field through which other particles interact was the missing piece of the puzzle. It is this interaction between matter and the field created by the Higgs boson that imparts mass to matter.

Extrapolating from Higgs' theory, scientists were able to explain how all particles get their mass, which would explain, in turn, how everything in the universe, from the scientists at CERN to the grand Jura Mountains that surround them, comes to have weight.

It works like this: across the post-Big Bang universe, collections of Higgs bosons make up a pervasive Higgs field, which is theoretically where particles get mass. Moving particles through a Higgs field is like pulling a weightless pearl necklace through a jar of honey, except imagine that the honey is everywhere and the interaction is continuous. Some particles, such as photons, which are weightless particles of light, are able to cut through the sticky Higgs field without picking up mass. Other particles get bogged down, accumulating mass and becoming very heavy. Which is to say that even though the universe appears to be asymmetrical in this way, it actually is not: the Higgs field doesn't destroy nature's symmetry; it just hides it.

The way to find the Higgs boson is to create an environment that mimics the moment after the Big Bang.
The powerful LHC runs at up to 7 trillion electron volts (TeV) and sends particles through temperatures colder than deep space at velocities approaching the speed of light. (The second most powerful particle accelerator, at Fermilab in Illinois, runs at 1 TeV.) The added juice allows scientists to get closer to the high energy that existed after the Big Bang. And high energies are needed, because the Higgs is thought to be quite heavy. (In Einstein's famous equation E = mc², c represents the speed of light, which is constant; so in order to find high-mass particles, m, you need high energies, E.) It's possible, of course, that even at such high energies, the Higgs boson will not be found. It may not exist.

But if it does exist, the Higgs would help plug a hole in the so-called Standard Model: the far-reaching set of equations that incorporates all that is known about the interaction of subatomic particles and is the closest thing physicists have to a testable "theory of everything." But many theoreticians feel that even if the Higgs boson exists, the Standard Model is unsatisfactory; for instance, it is unable to explain the presence of gravity, or the existence of something called "dark matter," which prevents spiral galaxies like our Milky Way from falling apart.

I will point you to a more general physics blog, where a contributor, Stephen Wolfram, discusses the Higgs and how the masses enter the Standard Model.

Here's how it basically works. Every type of particle in the Standard Model is associated with waves propagating in a field—just as photons are associated with waves propagating in the electromagnetic field. But for almost all types of particles, the average amplitude value of the underlying field is zero. For the Higgs field, one imagines something different.
One imagines instead that there's a nonlinear instability built into the mathematical equations that govern it, which leads to a nonzero average value for the field throughout the universe.

And it's then assumed that all types of particles continually interact with this background field—in such a way that they acquire mass. But what mass? Well, that's determined by how strongly a particle interacts with the background field. And that in turn is determined by a parameter that one inserts into the model. So to get the observed masses of the particles, one just inserts one parameter for each particle, and then arranges it to give the mass of that particle.

That might seem contrived. But at some level it's OK. It would have been nice if the theory had predicted the masses of the particles. But given that it does not, inserting their values as interaction strengths seems as reasonable as anything.

The standard model does not predict the masses of the particles composing it, so there is nothing to apply. They are put in by hand, and then higher-order corrections, for instance in the case of quarks, can play a role in modifying the input values.

Two "up" quarks and a "down" quark in the case of the proton, and one "up" quark and two "down" quarks in the case of the neutron. Those quarks are fundamental particles.

The Higgs interacts with the electron and the quarks and gives them mass. You could say it "generates" the mass. I'm tempted to say that without the Higgs those fundamental particles wouldn't have mass. So, there you have it. This is one of its roles.
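The "one parameter for each particle" idea has a standard compact form. In textbook treatments each fermion f couples to the Higgs field through a Yukawa term with strength y_f, and once the field settles to its nonzero background value v (about 246 GeV) that term reads as a mass. A schematic sketch of the standard result, not a derivation:

```latex
\mathcal{L}_{\text{Yukawa}} \supset -\,y_f\,\bar{\psi}_f\,\phi\,\psi_f
\;\longrightarrow\;
-\,m_f\,\bar{\psi}_f\,\psi_f ,
\qquad
m_f = \frac{y_f\, v}{\sqrt{2}},
\qquad
\langle\phi\rangle = \frac{v}{\sqrt{2}} .
```

The masses are inputs precisely because the couplings y_f are: measuring m_f fixes y_f, which is the "arranging it to give the mass of the particle" described above.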
Without this Higgs, we would not understand at all how electrons and quarks have mass, and we wouldn't understand how to correctly calculate the mass of an atom!

Now, any physicist who has made it this far is cringing at my last statement – as a quick reading of it implies that all the mass of an atom comes from the Higgs. It turns out that we know of several different ways that mass can be "generated" – and the Higgs is just one of them. It also happens to be the only one that, up until July 4th, we didn't have any direct proof for. An atom, a proton, etc., has contributions from more than just the Higgs – indeed, most of a proton's mass (and hence, an atom's mass) comes from another mechanism. But this is a technical aside. And by reading this you know more than many reporters who are talking about the story!

The Higgs plays a second role. This is a little harder to explain, and I don't see it discussed much in the press. And, to us physicists, this feels like the really important thing: "Electro-Weak Symmetry Breaking". Oh yeah! It comes down to this: we want to tell a coherent, unified story from the time of the big bang to now. The thing about the big bang is that it was *really* hot. So hot, in fact, that the rules of physics that we see directly around us don't seem to apply. Everything was symmetric back then – it all looked the same. We have quarks and electrons now, which gives us matter – but then it was so hot that they didn't really exist – rather, we think, some single type of particle existed.

Trend: Searching for the Higgs

Since the 1970s, physicists have known that two fundamental forces of nature, the electromagnetic force and the weak force, can be unified into a single force – the electroweak force – if the particles that carry these forces are massless.
The photon, which carries the electromagnetic force, is massless, but the particles that carry the weak force have substantial mass, explaining why the weak force is weaker than the electromagnetic force. This unification can still work if a new spin-zero boson, the Higgs boson, is introduced, allowing the particles that carry the weak force to be massive. In addition, interactions with the Higgs boson are responsible for the masses of all particles. These ideas form the basis of the standard model of particle physics, which is consistent with almost all observations. Gravity can act once particles have mass due to the Higgs boson – the Higgs boson is not the source of the gravitational force. The one outstanding missing piece in this entire picture is the Higgs boson itself. What are the prospects for its discovery?

The standard model of particle physics and the Higgs boson

Matter is made up of spin-1/2 fermions, the particles known as leptons (the "light ones") and quarks. There are three families of leptons, each consisting of two particles: the electron with its corresponding neutrino (ν_e), the muon (μ) and its neutrino, and the tau lepton (τ) and its neutrino (ν_τ). Electrons are familiar from electric current and as constituents of atoms; they are the lightest electrically charged particles. Muons and tau leptons are also charged and can be considered to be heavier electrons. Neutrinos are neutral and (almost) massless. All of the leptons can be directly observed, some more easily than others.

Quarks also come in three families, and they also have electrical charge, but their charges are fractions of the charge of the electron (+2/3 and −1/3). They cannot be directly observed – the particles we do observe, such as the proton and the pion, are made up of either three quarks or a quark and its antiparticle, an antiquark.
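The fractional quark charges (+2/3 for up-type, −1/3 for down-type, in units of the proton charge; standard values) can be checked against the familiar three-quark hadrons. A quick sketch, using the conventional quark-content strings (proton = uud, neutron = udd):

```python
from fractions import Fraction

# Quark electric charges in units of the positron charge |e|
# (standard values: up-type +2/3, down-type -1/3).
QUARK_CHARGE = {
    "u": Fraction(2, 3),
    "d": Fraction(-1, 3),
}

def hadron_charge(quarks: str) -> Fraction:
    """Total electric charge of a hadron from its quark content, e.g. 'uud'."""
    return sum((QUARK_CHARGE[q] for q in quarks), Fraction(0))

print(hadron_charge("uud"))  # proton (two up, one down)  -> 1
print(hadron_charge("udd"))  # neutron (one up, two down) -> 0
```

Using exact fractions rather than floats makes the charge bookkeeping come out exactly +1 and 0 rather than 0.999….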
Every particle has a corresponding antiparticle with the same mass but opposite charge; for example, the antiparticle of the electron is the positively charged positron.

Scientists at the CERN European physics research centre say they have found signs of the Higgs boson, the so-called "God particle" and the missing link of particle physics.

Scientists have long been searching for the Higgs boson particle because its existence would prove the theory of the Higgs field, which they say gives mass to all particles in the universe.

The Higgs field works by restricting the speed of particles it interacts with, which would otherwise zip around the universe at the speed of light. The field acts in the opposite way to, for example, a person who "becomes" lighter when swimming in water.

Scientists at the CERN physics research centre near Geneva said, however, that they had found no conclusive proof of the existence of the particle.

"If the Higgs observation is confirmed…this really will be one of the discoveries of the century," said Themis Bowcock, a professor of particle physics at Liverpool University. "Physicists will have uncovered a keystone in the makeup of the Universe…whose influence we see and feel every day of our lives."

The leaders of the two experiments, ATLAS and CMS, revealed their findings at CERN, where they have tried to find traces of the elusive boson by smashing particles together in the Large Hadron Collider at high speed.

"Both experiments have the signals pointing in essentially the same direction," said Oliver Buchmueller, senior physicist on CMS. "It seems that both ATLAS and us have found the signals at the same mass level.
That is obviously very important."

Prof Stefan Soldner-Rembold, from the University of Manchester, said: "Within one year we will probably know whether the Higgs particle exists, but it is likely not going to be a Christmas present."

The theory, expounded by British physicist Peter Higgs in the 1960s, describes the field as a kind of lattice that filled the entire universe a 100th of a billionth of a second after the Big Bang. Elementary particles, with no mass, had previously been whizzing around at the speed of light. But once the massless particles passed through the field, Higgs boson particles became highly attracted to them, creating friction and slowing the particles' movement. Some types of particle travel through the field virtually unimpeded; others are dragged to slower velocities by varying amounts. As the particles slowed, their energy was condensed into a super-concentrated form of energy: mass.

Particles moving through that field, if they interact with the field, experience a sort of drag. That drag is mass. So – just as particles like neutrinos aren't affected by electromagnetic fields – some particles, like photons, won't have mass, because they don't interact with the field that produces mass. We call that field the Higgs' field.

(The previous paragraph formerly contained an error. The Higgs field produces mass, not gravity. Just a stupid typo; my fingers got ahead of my brain.)

So physicists proposed the existence of the Higgs' field. But how could they test it?

It's a field. Fields have exchange particles. What would the exchange particle of the Higgs' field be? Exchange particles are bosons, so this one is, naturally, called a Higgs' boson. So if the Higgs' field exists, then it will have an exchange particle.
If the standard model of physics is right, then we can use it to predict the mass that that boson must have.
So - if we can find a particle whose mass matches what we predict, and it has the correct properties for a mass-field exchange particle, then we can infer that the Higgs' field is real, and is the cause of mass.
How did they find the Higgs' boson?
We have a pretty good idea of what the mass of the Higgs' boson must be. We can describe that mass in terms of a quantity of energy. (See the infamous Einstein equation!) We can take particles that we can easily see and manipulate, accelerate them up to super-super high speed, and collide them together. If the energy of a collision matches the mass of a particle, it can create that kind of particle. So if we slam together, say, two protons at high enough energy, we'll get a Higgs' boson.
But things are never quite that easy. There are a bunch of problems. First, the kind of collision that can produce a Higgs' doesn't always produce one. It can produce a variety of results, depending on the specifics of the collision as well as purely random factors. Second, it produces a lot more than just a Higgs'. We're talking about an extremely complex, extremely high energy collision, with a ton of complex results. And third, the Higgs' boson isn't particularly stable.", "score": 42.87294118183325, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "Look, the Higgs is a postulated particle. It was born as a mathematical trick in order to solve some problems concerning symmetry in quantum field theory. The Higgs has mass because we defined it like that. The Higgs particle gives mass to elementary particles via its interaction with these particles. This interaction can be expressed in terms of a coupling between the Higgs field and the elementary particle field. The coefficient of the product of these fields is the mass of the elementary particle. This is just how the QFT formalism works. 
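The energy-mass bookkeeping invoked in the collider explanation above ("the infamous Einstein equation") can be made concrete with a rough sketch. The ~125 GeV figure is the measured Higgs mass from the experiments, not a number taken from the post itself:

```python
# Minimal sketch: invert E = m c^2 to see what rest mass corresponds to a
# given collision energy. Constants are standard SI values; 125 GeV is the
# measured Higgs mass (an assumption here, not a figure from the post).
GEV_TO_JOULE = 1.602176634e-10   # 1 GeV expressed in joules
C = 2.99792458e8                 # speed of light, m/s

def rest_mass_kg(energy_gev: float) -> float:
    """Rest mass (kg) whose energy equivalent is the given energy in GeV."""
    return energy_gev * GEV_TO_JOULE / C**2

m_higgs = rest_mass_kg(125.0)
print(f"Higgs rest mass ~ {m_higgs:.2e} kg")   # on the order of 2e-25 kg
```

The point of the sketch is only scale: a collision has to concentrate at least this much energy at one point for the corresponding particle to be creatable.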
This is all very nice but the question remains as to whether this is true. This is why many scientists await the first experimental verification of this system of spontaneous breakdown and mass generation. Also, elementary particles need to be massless when in gauge theory because of symmetry reasons. I have written many topics on this in my journal, go check it out
search for the entries on the Higgs field and 'why elementary particles are massless'
This is very interesting stuff but not that easy.
Also I read analogy stuff like 'the gravitons are the photons'. Do not pay any attention to this because it is fundamentally wrong. Gravitons are very different in nature; they indeed mediate the gravitational force, but they are different in nature because they ARE particles of space time itself. There is no such thing as gravitational radiation of any sort. If this were the case, the analogy with photons would be justified. Keep this in mind...
Scroll down to the Higgs field entry and read why elementary particles are massless on the next page (8)", "score": 42.40607784674721, "rank": 8}, {"document_id": "doc-::chunk-1", "d_text": "So if protons and neutrons each consist of three quarks that make up about 1% of their masses and also gluons that are massless, where does the other 99% of their mass come from? It comes from the energy of interaction of the gluons with the quarks. In other words, from the energy stored in the spring-like forces.
As I will discuss later, the Higgs phenomenon, via the Higgs field, gives rise to the masses of just the elementary particles, which in this case are the quarks and leptons and the W+, W-, and Z particles. So if the Higgs field were not there, the quarks would become massless but the masses of the protons and neutrons would be practically unchanged. If we omit for the moment the as-yet-undetected dark matter whose composition we do not know, almost all the mass of known matter in the universe comes from protons and neutrons. 
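The "about 1%" figure quoted above can be checked with a back-of-envelope sketch. The quark and proton masses used are standard rough values, not numbers given in the passage:

```python
# Back-of-envelope check of the claim that quark rest masses supply only
# about 1% of a nucleon's mass. Masses in MeV/c^2 are standard rough
# values (assumptions here, not figures from the passage).
M_UP, M_DOWN, M_PROTON = 2.2, 4.7, 938.3

quark_sum = 2 * M_UP + M_DOWN      # proton = two up quarks + one down quark
fraction = quark_sum / M_PROTON
print(f"quark rest masses supply {fraction:.1%} of the proton mass")  # ~1%
```

The remaining ~99% is the interaction energy described in the passage, which is unaffected by whether the Higgs field gives the quarks their small rest masses.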
(To learn more about dark matter and dark energy, see parts 8 and 9 of the 16-part series of posts titled Big Bang for Beginners that I wrote in 2010.)
Since the proton and neutron masses would be almost unchanged by the absence of the Higgs, the mass of almost everything in the universe would remain pretty much the same as it is now. So could life as we know it still exist? The catch is with the electrons. Even though their mass is so small and thus makes a negligible contribution to the mass of everyday objects, without the Higgs they too would become massless and would then travel at the speed of light (like all massless particles do) and thus would not become bound to form electrically neutral atoms. Since matter as we know it is made up of those neutral atoms bound together in a wide variety of ways, we could not exist.", "score": 38.96362860668757, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "\"Scientists' best theory for why different things have mass is the \"Higgs field\" - where mass can be seen as a measure of the resistance to movement. The \"Higgs field\" is shown here as a room of physicists chatting among themselves. So that's the new standard model of the universe, allegedly, in modern theoretical physics, without gravity of course, which turns out to be a bother.
A well-known scientist walks into the room and causes a bit of a stir - attracting admirers with each step and interacting strongly with them - signing autographs and stopping to chat.
As she becomes surrounded by admiring fans, she finds it harder to move across the room - in this analogy, she acquires mass due to the \"field\" of fans, with each fan acting like a single Higgs boson.
If a less popular scientist enters the room, only a small crowd gathers, with no-one clamouring for attention. 
He finds it easier to move across the room - by analogy, his interaction with the bosons is lower, and so he has a lower mass.\"\nIf gravity were added, we suspect the floor under the scientists in the above model would collapse at some point of idolizing boson accumulation and then what would we have?\nThe Guardian wrote more recently in The Higgs boson does a new trick (probably):\n\"In the Standard Model of physics, the fundamental building blocks of nature are quarks (which live inside hadrons) and leptons (such as the electron, and its heavier siblings, the muon and the tau). These building blocks interact with each other via fundamental forces carried by bosons - the photon carries electromagnetism, the W and Z bosons carry the weak nuclear force, and the gluon carries the strong force.We always wonder where those \"fundamental forces\" that are taken as givens come from, or where the boson gets its \"interactive\" power, but this does not appear to bother the physicists. The main thing is that the current math formulas work and they get enough \"bumps\" in the hadron colliders.\nAll those particles (except the photon and the gluon, which are massless) acquire their mass by interacting with the Higgs boson, the discovery of which was announced last year on the fourth of July.\"\nSomehow, we think these epicyclic-type theoretical models do not yet really explain the \"real\" universe. Give it another few thousand years, at least.", "score": 38.472977933591, "rank": 10}, {"document_id": "doc-::chunk-1", "d_text": "Therefore, the slower the particle is made to go, the greater its mass becomes.\nCern has often referred to the analogy of a bustling Hollywood party.\nWhen a celebrity arrives, the crowd is evenly distributed around a room. When the famous actress arrives, the people closest to the door swarm around her. 
As she walks through the room, she constantly attracts the people closest to her.
When she gets far enough away, parts of the crowd return to their other conversations. So by restricting her movements across the room, the crowd are giving her mass.
Without the Higgs field, particles would never have merged together to form atoms and molecules and, in turn, stars and planets.
The Higgs boson itself, like the other bosons that have actually been observed, is among the smallest fundamental units of matter in the universe. Smaller than atoms, smaller than their components, and smaller than the components of their components.", "score": 38.29443810844451, "rank": 11}, {"document_id": "doc-::chunk-1", "d_text": "These charge geometries show the weak force to be a magnetic dipole [edge] interaction between mass energy geometries and Matter topologies and the strong force to be a charge [fascia] interaction between Matter topologies themselves.
The key to understanding the mechanics of the quantum and uniting it with the dynamics of the Cosmos is through the use of equilateral charged energy geometries. Deep within the data of these collider experiments, the tetryon of Matter is hidden away. We can declare the search for the Higgs boson to be over.
Independently and almost simultaneously in 1964, three groups of physicists proposed the Higgs Mechanism through which the inertial mass properties of Matter are created: by Francois Englert and Robert Brout; by Peter Higgs (inspired by the ideas of Philip Anderson); and by Gerald Guralnik, C.R. Hagen and Tom Kibble. The Higgs mechanism was incorporated into modern particle physics by Steven Weinberg and Abdus Salam, and is an essential part of the standard model.
As the Universe cooled and the temperature fell below a critical value, an invisible force field called the 'Higgs Field' was formed together with the associated 'Higgs Boson.' 
The Higgs mechanism is a process by which vector bosons can obtain mass.
The Higgs Field is another name for the Aether, or vacuum energy field that fills free Space. Its interaction with inductive Tetryonic topologies creates the inertial properties observed in mass-Matter.
The Higgs boson can also be considered another name for the 2-dimensional charged mass-energy fascia of all Matter, whose inductive (Quantised Angular Momentum) quanta create the physics property of inertial mass. Inertial mass is a measure of a body's resistance to changes in acceleration due to external forces. Resistance to motion of charged fascia through vacuum energy fields creates inertial mass.
All charged fascia [Higgs bosons] are composed of squared numbers of equilateral Planck quanta within a larger equilateral geometry. The equilateral Quantised Angular Momentum of inductive Planck mass energy geometries within squared [Higgs] Fascia of Matter Topologies is what creates the inertial properties of Matter at the quantum level.
One day soon, the value and validity of Tetryonics will rise to the surface.", "score": 36.97831980054945, "rank": 12}, {"document_id": "doc-::chunk-3", "d_text": "Even when the universe seems empty this field is there. Without it, we would not exist, because it is from contact with the field that particles acquire mass. The theory proposed by Englert and Higgs describes this process.
On 4 July 2012, at the CERN laboratory for particle physics, the theory was confirmed by the discovery of a Higgs particle. CERN's particle collider, LHC (Large Hadron Collider), is probably the largest and the most complex machine ever constructed by humans. 
Two research groups of some 3,000 scientists each, ATLAS and CMS, managed to extract the Higgs particle from billions of particle collisions in the LHC.\nEven though it is a great achievement to have found the Higgs particle—the missing piece in the Standard Model puzzle—the Standard Model is not the final piece in the cosmic puzzle. One of the reasons for this is that the Standard Model treats certain particles, neutrinos, as being virtually massless, whereas recent studies show that they actually do have mass. Another reason is that the model only describes visible matter, which only accounts for one fifth of all matter in the cosmos. To find the mysterious dark matter is one of the objectives as scientists continue the chase of unknown particles at CERN.\nNobel physics: A closer look at the Higgs boson\nSo what is the Higgs boson, the elusive particle that physicists Peter Higgs and Francois Englert theorized about and won the Nobel Prize for on Tuesday? The subatomic particle—which has also been called the \"God particle\" by some because it is seen as fundamental to the creation of the universe—has been the subject of an intense scientific hunt at the world's biggest atom smasher near Geneva. Last year, scientists at CERN, the European Organization for Nuclear Research, announced they had finally detected the long-sought particle.\nWHAT EXACTLY IS THE GOD PARTICLE?\nEverything we see around us is made of atoms, inside of which are electrons, protons and neutrons. And those, in turn, are made of quarks and other subatomic particles. Scientists have wondered how these tiny building blocks of the universe acquire mass. 
Without mass, the particles wouldn't hold together—and there would be no matter.\nOne theory proposed separately by Higgs and Englert is that a new particle must be creating a \"sticky\" energy field that acts as a drag on other particles.", "score": 35.97384107453178, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "The situation grew desperate: The equations for particles with the properties of W and Z, when forced to accommodate non-zero mass, led to mathematical inconsistencies.\nThe right kind of cosmic medium could rescue the situation, however. Such a medium could slow down the motion of W and Z particles, and make them appear to have non-zero mass, even though their fundamental mass––that is, the mass they would exhibit in ideally empty space––is zero. Using that idea, theorists built a wonderfully successful account of all the phenomena of the weak interaction, fully worthy to stand beside our successful theories of the electromagnetic, strong, and gravitational interactions. Our laws of fundamental physics reached a qualitatively new level of completeness and economy.\nThe predictions of those “medium-based” laws got tested with the sharpest precision and in the most extreme conditions that experimenters could devise. They were eager to disprove them. The Swedish Academy of Sciences gives prizes for things like that! But the laws passed every test, with flying colors. This grand synthesis has been so successful, for so long, that it has become known as the Standard Model.\nOf course, the success of the Standard Model gave circumstantial evidence for the cosmic medium it relies on. Nevertheless, until a few months ago a nagging question remained: What is that cosmic medium made from? No known particles had the right properties.\nOn July 4, 2012, scientists at the CERN laboratory, near Geneva, announced the discovery of a new particle that seemed as though it might have the required properties. 
Over the last few months, more detailed measurements have confirmed and sharpened the initial discovery. Several tough consistency checks came in positive. In March CERN declared victory. The main building block of our cosmic Ocean, the Higgs particle, has been successfully identified.\nWhy does it matter?\nThe discovery of the Higgs particle is, first and foremost, a ringing affirmation of fundamental harmony between Mind and Matter. Mind, in the form of human thought, was able to predict the existence of a qualitatively new form of Matter before ever having encountered it, based on esthetic preference for beautiful equations.\nWe plumb the depth of that Mind-Matter harmony if we meditate on the challenge the Higgs particle discovery posed.\nThe Higgs particle is heavy, couples poorly to matter, and is extremely unstable.", "score": 35.495115785687645, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "The particle credited with giving others mass, the Higgs boson, may also be to blame for the universe flying apart ever faster. That’s because the Higgs boson could, in principle, be giving rise to dark energy.\nThe standard model of particle physics encompasses the fundamental particles that make up matter, as well as associated fields. The photon, for instance, is tied to the electromagnetic field. Discovered last year, the Higgs boson also comes with an associated field but, unlike others of its class, the Higgs field is scalar – it does not act in a specific direction.\nTaken together, the known particle fields create a certain density of energy permeating the universe. Before the discovery of dark energy, particle physicists were worried that the simplest versions of the standard model predicted an enormous, possibly infinite energy density that would force the universe to expand at an ever-increasing rate.\nThat seemed improbable until observations of distant supernovae showed that galaxies are not only moving away from each other, but accelerating. 
The discovery seemed to resolve the issue, but it turns out that the culprit, which we now call dark energy, is much weaker than the standard model indicates.\n“It’s very different from what we would predict,” says Frank Wilczek of the Massachusetts Institute of Technology. “This is the profound embarrassment of this fundamental feature of the universe.”\nSpurred by the appearance of the long-anticipated Higgs boson, physicists Lawrence Krauss of Arizona State University in Tempe and James Dent of the University of Louisiana at Lafayette may be on the trail of why dark energy is so wimpy.\n“What we show is, if the Higgs exists – which it appears to – it can be a portal to new physics and in principle be associated with a new field, which could give an energy density in the universe that’s of the right order of magnitude,” says Krauss.\nEven before the Higgs was discovered, Krauss was wondering if other scalar fields could couple to the Higgs field, offering links to new physical phenomena. But he was actually a Higgs sceptic until the very end.\n“I was preparing papers on why the Higgs doesn’t exist, expecting them not to see it at the LHC.", "score": 33.545301110177505, "rank": 15}, {"document_id": "doc-::chunk-1", "d_text": "There's only, you know, a handful of elementary particles that we're all made of, and each one has a specific electric charge, a specific interaction with a strong nuclear force and with gravity and also a certain mass.\nAnd it turns out that if you look at the theories of physics that we have, that was such a success, they have so much symmetry built into them that there's an implication, namely that all of the particles they describe should be massless, should have exactly zero mass.\nAnd that implication is patently false. So you need to do something about it. 
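Returning to the dark-energy mismatch described a few paragraphs up (the "profound embarrassment" of the standard model's vacuum-energy prediction): the sheer size of the discrepancy can be sketched with commonly cited order-of-magnitude values. Both densities below are rough textbook numbers, not figures from the article:

```python
# Order-of-magnitude sketch of the dark-energy discrepancy. Both values
# are commonly cited rough estimates (assumptions, not from the article):
# the observed dark-energy density vs. a naive quantum-field-theory
# (Planck-scale) estimate of the vacuum energy density.
import math

OBSERVED = 6e-10    # J/m^3, observed dark-energy density (rough)
NAIVE    = 1e113    # J/m^3, naive Planck-scale vacuum estimate (rough)

orders = math.log10(NAIVE / OBSERVED)
print(f"mismatch: ~{orders:.0f} orders of magnitude")   # roughly 120
```

A mismatch of roughly 120 orders of magnitude is why the observed value is described as so much weaker than the standard model indicates.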
And in 1964, the year The Beatles came to America, a handful of physicists figured out a way to fix that problem, by introducing a new field into nature, a field that breaks the symmetry that is built into particle physics and by breaking that symmetry allows all the other particles to get mass.
That field is the Higgs field, and the vibrations in that field give us the Higgs boson.
FLATOW: So the experiments at CERN were able to sort of ring that field, shock the field, and out pops a Higgs?
CARROLL: That's exactly right, and in fact, that's how you make all particles in particle physics. The photon, the particle of light, is a wave in the electromagnetic field. And even things that we really think of as particles, not fields, like the electron or the up quark, these are still fields. This is just the lesson of 20th-century physics is that the world is described by quantum field theory.
So it's the Higgs field that is doing the work, giving mass to all the other particles, but it's the Higgs boson which is a vibration in that field that lets us tell the Higgs field is there.
FLATOW: So could you ring the gravitational field and have a gravity thing pop out?
CARROLL: Absolutely. I mean, at Cal Tech, we're very interested in trying to do that, and the thing is that gravity is such a weak force that you're never going to detect an individual graviton, a particle of gravity. What you could do is detect the combination of a gajillion(ph) gravitons that makes up a gravitational wave, and that's an ongoing project, looking for gravitational radiation from astrophysical sources.
FLATOW: That's an interesting project.", "score": 32.689310760811395, "rank": 16}, {"document_id": "doc-::chunk-1", "d_text": "Then, of course, there's the question whether there's any reason to expect, if you look on this particular very small scale inside matter, is there going to be anything there? And I think we have a couple of reasons for thinking there will be something there. 
We have this theory of elementary particles and the forces between them, the structure of matter. This model works extremely well, but we know it's incomplete. And one of the reasons why it's incomplete is, if you write down the basic theory, it looks like all the particles would be massless. That's clearly not true. If you look in the universe today you see that some particles are heavy, some are not. So there has to be an explanation for that.\nNow, in fact, this \"Standard Model\" of particles contains an explanation, but at the moment, it's very much a theoretical explanation. It's a hypothesis. We don't know whether it's correct or not.\nThis is an idea that was suggested by Peter Higgs and others way back in the 1960s. According to their idea, there should be a new particle which could be produced and observed at the LHC, called the Higgs boson. This is in some sense the holy grail of particle physics, to find this missing link in the standard model. So that's one thing that we're really looking forward to with the LHC. In fact, back when we persuaded the politicians to stump up the money to build the thing, that's probably what we told them.\nNow, the other reasons for thinking there are new physics … one of my personal research interests is dark matter. Astrophysicists tell us that something like 90 percent of the matter in the universe is some sort of invisible stuff, and nobody knows what it is. They can see that it attracts gravitationally visible particles, and presumably it's not made of the same constituents as the visible matter. It could be something that we're going to be able to produce with the LHC. 
There are various different ideas about what it might be, but quite generally I think there are good reasons to think that these dark matter particles, if they exist, will be observable at the LHC.
Q: Just speaking about dark matter, is there thought that these are particles that exist homogeneously in all of reality?", "score": 32.588228206053955, "rank": 17}, {"document_id": "doc-::chunk-17", "d_text": "For example, two up quarks with charge +2/3 combined with 1 down quark with charge -1/3 gives a net asymmetrical expression of charge +1 (as in the proton). Likewise, one up +2/3 plus 2 down -1/3 charges for the neutron result in a net symmetrical charge expression of zero.
Thus the Higgs boson represents that state where the mass, charge and spin vectors (contained within the ST units forming it) are each lined up to point in equal but opposite directions to give a net symmetrical expression of 0, 0, and 0, respectively.
Break the symmetry of the spin vector, changing it from equal and opposite, to additive, and you now have the graviton at mass 0, charge 0 and spin 2. The graviton is actually composed of two spin 1 photons that are constructively added together (i.e. positive interference construction). More on this shortly.
The photon, once again, represents the fundamental dynamic dimensional ST particle that has mass 0, charge 0 and spin 1. It is symmetrical in two vectors (mass and charge), but asymmetrical in spin (which, along with zero mass, accounts for its net lightspeed velocity with each ST pulse).
Note that all spin on any particle can point in one of two opposite directions and therefore you can have net expressions of spin that are equal and opposite, i.e. left-handed and right-handed. Likewise, as with 
the photon, you can have left- and right-handed spin 1 states (or counter-clockwise and clockwise if you're riding the photon).
So going up the particle ladder, we have photons which have asymmetrical spin 1 vectors which can spin additively in one of two directions, say either clockwise, Ck, or counterclockwise, CCk. Now if two Ck photons constructively interfere, they generate (momentarily) a graviton with spin 2 (Ck). Likewise, two CCk photons can generate a graviton with spin 2 (CCk). However, a Ck photon interfering with a CCk photon will generate a spin 0, mass 0, charge 0 Higgs boson.", "score": 32.06463458822141, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "This isn't exactly my area of expertise, but I've gotten requests by both email and twitter to try to explain yesterday's news about the Higgs' boson.
- What is this Higgs' boson thing?
- How did they find it?
- What does the five sigma stuff mean?
- Why do they talk about it as a \"Higgs'-like particle\"?
So, first things first. What is a Higgs' boson?
When things in the universe interact, they usually don't actually interact by touching each other directly. They interact through forces and fields. What that means is a bit tricky. I can define it mathematically, but it won't do a bit of good for intuition. But the basic idea is that space itself has some properties. A point in space, even when it's completely empty, has some properties.
Outside of empty space, we have particles of various types. Those particles interact with each other, and with space itself. Those interactions are what end up producing the universe we see and live in.
Fields are, essentially, a property of space. A field is, at its simplest, a kind of property of space that is defined at every point in space.
When particles interact with fields, they can end up exchanging energy. They do that through a particular kind of particle, called an exchange particle. For example, think about an electromagnetic field. 
An electron orbits an atomic nucleus, due to forces created by the electromagnetic fields of the electrons and protons. When an electron moves to a lower-energy orbital, it produces a photon; when it absorbs a photon, it can jump to a higher orbital. The photon is the exchange particle for the electromagnetic field. Exchange particles are instances of a kind of particle called a boson.
So, one of the really big mysteries of physics is: why do some particles have mass, and other particles don't? That is, some particles, like protons, have masses. Others, like photons, don't. Why is that?
It's quite a big mystery. Based on our best model - called the standard model - we can predict all of the basic kinds of particles, and what their masses should be. But we didn't have a clue about why there's mass at all!
So, following the usual pattern in particle physics, we predict that there's a field.", "score": 32.005870870273064, "rank": 19}, {"document_id": "doc-::chunk-3", "d_text": "the Standard Model has Nature doing a most unnatural thing: imparting masses to each of the particles with random abandon: allowing the photon to be massless and the neutrinos to be near-massless, while the most massive particle, the top quark, measures in at 175 billion electron volts and the second-most massive, the bottom quark, is a mere 5 billion electron volts. (Shochet explains an electron volt as \"how much energy an electron gets when it goes through a one-volt battery\" - a measurement system that draws on Einstein's theory of relativity and the notion that mass and energy are interchangeable. So the particles' masses are, in essence, how much energy they deposit when they go splat against the walls of the accelerator detectors.)
The mass conundrum begins with finding the Higgs.
Energy, the poet William Blake wrote, is eternal delight. 
The search for the Higgs boson is a quest for energy: high enough levels of it, more accurate measurements of it, a better understanding of it. The small step between delight and light would not be lost on the visionary poet. Says Henry Frisch, a visiting scientist at Fermilab, \"It's really intensity - it's luminosity - that we need. Which is to say we need more collisions.\"
Producing collisions is, in theory, simple work. Researchers begin with hydrogen, the simplest and lightest element, with one proton and one electron. LEP isolates the electron, while the Tevatron uses the proton. Huge numbers of the particles are cooled and stored, as are the particles' antimatter, which carries a reverse electric charge. The collider at LEP used antielectrons (or positrons), while Fermilab uses antiprotons.
The accelerators are built in circular tunnels - four miles in circumference at Fermilab, 17 at CERN - with magnets and high-voltage accelerating devices placed at regular intervals around the loop. The particles are released into the tunnel, and as they race around the loop, they pick up energy from the high-voltage field, fattening them for the slaughter ahead.
The antiparticles are also fed into the tunnel but, because they have an opposite charge, they travel in the opposite direction.", "score": 31.889104114158805, "rank": 20}, {"document_id": "doc-::chunk-6", "d_text": "For example, in the electron field, whenever a ripple is created you get an electron.
Likewise, when a ripple is created in the quark field, you obtain a quark.
And the best part is that when the ripple vanishes, the particle associated with that ripple vanishes too.
Actually, at whatever position in the field the energy is given, that is where you see the particle.
And as the energy spreads out through the field, it seems that the particle is moving.
Therefore, in some fields you need more energy to generate a particle. 
But in some fields, less.
That is because how much energy is needed in a field to excite a particle depends upon the mass of the particle.
For example, the Higgs boson is heavier than the electron, so the Large Hadron Collider was required to generate the particle.
In it, a large amount of energy was given, and then the Higgs boson particle was seen.
Quantum Physics: Law of Attraction and How Different Forces Work
Well, these fields exist at the same place in space and time.
And sometimes energy is exchanged between these fields.
So forces also act there, and many times one particle changes into another particle.
Therefore, quantum theory is the best theory we have right now for understanding nature.
But there are still many unsolved puzzles in front of us, like dark energy and dark matter.
So physicists are trying to solve these puzzles.
But they are still in the dark about these fields.
Now I will suggest you buy this best book on quantum physics theory from Amazon.
This is an affiliate link; if you buy from this link you will help Physics Easy Tips with a commission.
You learned in this article what the building blocks of this universe are, and understood that the main building blocks are fields, not electrons and quarks, and that you provide energy to these fields to generate particles. Today's topic is closed now. I hope that you enjoyed learning about quantum physics. If this article was helpful for you, then comment, like and share it on social media. Write what you have learned from this article in the comment box right now. If you have any doubt, ask me; I am always here for your support. Thanks for sharing this article.", "score": 31.054572831046503, "rank": 21}, {"document_id": "doc-::chunk-6", "d_text": "As I already wrote in a previous post, matter can be characterized as a system that contains particles with non-zero rest mass. Taking the simplest atom, viz. 
the hydrogen atom, as an example: it consists of a proton (that itself enjoys an internal structure, being composed of quarks that are held together by the exchange of gluons) and one electron, both interacting via the exchange of virtual photons. The latter are the messenger particles that convey the electromagnetic force; photons have zero rest mass, while the electron and the proton enjoy a nonzero rest mass. The mass of the latter is about 1840 times the mass of the former, so that most of the rest mass of the atom is concentrated in its nucleus.
Let me try to provide a bit more information on some statements in postings above:
Matter can neither be created nor destroyed
No, that is not true. If matter and antimatter collide, e.g. hydrogen and antihydrogen (consisting of a (negatively charged) antiproton and a (positively charged) positron), then both annihilate into radiation, i.e., photons. However, the assertion above becomes correct if \"matter\" is replaced by \"mass\", so:
Mass can neither be created nor destroyed
It is the motion of the electrons that creates the illusion of solid matter.
Well, this \"motion\" must be regarded as being very different from the orbiting of, e.g., planets around a central star. If the electrons in an atom were really moving, then (since they are charged particles) this would imply that they would constantly produce electromagnetic radiation, so atoms would lose energy by radiating all the time and thus couldn't be stable. Actually, the probability distribution of electrons in an atom is static and thus does not change in time (unless perturbed by \"external\" effects). It is the complex phase of the electronic wavefunction that oscillates in time. On the other hand, it is true that the kinetic energy of the electrons is required to render atomic systems stable. 
Without this kinetic energy, the electrostatic attraction between the positive nucleus and the negatively charged electrons would make the latter collapse into the nucleus. The structure of atoms and molecules and thus of \"ordinary\" matter is also significantly influenced by the fact that electrons are fermions and thus the Pauli exclusion principle holds, which forbids any two electrons from occupying the same quantum state.", "score": 30.8546346633578, "rank": 22}, {"document_id": "doc-::chunk-8", "d_text": "Thus, both the world selector and the condensor describing the internal particle structure can be split up in a manner allowing a system of partial metrics to be specified for each hermetric form. An appropriate choice of indices (ij) then results in a solution of the general energy spectrum corresponding to a separation of the discrete spectra c and d. These solutions actually yield discrete spectra of inertial masses, showing good agreement with measured particle and resonance spectra.\nThe following picture regarding the spectrum of elementary particles found in high energy experiments emerges from the theoretical analysis above:\nElementary particles having rest mass constitute self-couplings of free energy. They are indeed elementary as far as their property of having rest mass is concerned, but internally they possess a very subtle, dynamic structure. For this reason they are \"elementary\" only in a relative sense.\nActually, such a particle appears as an elementary flow system in R6 (equivalent to energy flows) of primitive dynamic units called protosimplexes, which combine to form flux aggregates. The protosimplex flow is a circulatory, periodic motion similar to an oscillation. A particle can only exist if the flux period comprises at least one full cycle, so that the duration of a particle's stability is always expressible as an integer multiple of the flux period.
Every dynamical R8-structure possible constitutes a flux aggregate described by a set of 6 quantum numbers. All of them, however, result from an underlying basic symmetry of very small extent, essentially determined by the configuration number k, which can only assume the values k = 1 and k = 2. The empirically introduced baryonic charge then corresponds to k – 1, i.e. k = 1 refers to mesons and k = 2 to baryons.\nThe physically relevant parts of an R6-flux aggregate are its k + 1 components in the physical space, R3, which are enveloped by a metric field. Thus, mesons contain two and baryons three components. Evidently, there exists an analogy to the empirically formulated concept of quarks. If this is true, then quarks are not fundamental particles but non-separable, quasi-corpuscular subconstituents in R3 of a mesonic or baryonic elementary particle. In this picture the condition of quark \"confinement\" is unnecessary.", "score": 30.628749862953683, "rank": 23}, {"document_id": "doc-::chunk-4", "d_text": "Brout and Englert made no mention of it in their paper, although they were aware of its manifestation in condensed-matter physics. Guralnik, Hagen and Kibble suppressed it in their analysis, which was simplified to focus on the removal of its massless companion. Higgs alone pursued it. What is being called Higgs’s boson is, in effect, Goldstone’s massive boson. Although at least six physicists can lay claim to this particular mechanism for generating mass, only Higgs realized the importance of the massive boson in testing the theory.An understanding of the terms is not as important as a perception that various competing teams appeared to be playing with shadows in the dark, and making up concepts as they went along. Can a particle really be a carrier of a force? Can mechanisms generate mass just because a theory needs it? Where is the mass coming from? 
As useful as the terms and nomenclature become to theory, does nature owe any obligation to conform to human conceptions? Did nature suddenly change properties this year when one Higgs boson became five? The intuitive answer to such questions is that of course nature didn’t change: we did. Our scientific understanding of nature changed. But then can we assume it is improving? Is it evolving? Is our understanding continuously changing, and if so, is there any point at which we can say we understand something with a sufficient degree of certainty? At what point do we jettison things textbooks have been teaching for decades? Can we assume we have the story right now? What unforeseen discoveries in the next few years will have us regretting that what we are learning in 2010 is all wrong? These are serious questions, underscored by another example in New Scientist this week, “Anti-neutrino’s odd behaviour points to new physics,” as if all we need right now is a new physics (the hard science). Reporter Anil Ananthaswamy wrote, “The astounding ability of these subatomic particles to morph from one type to another may have created another crack in our understanding of nature.” This crack, he said, “cannot be explained by standard model physics.” Granted, neutrino physics experiments are difficult, but a Fermilab test of theory produced unexpected results. 
Jenny Thomas of University College London put a happy face on it: “If the effect is real, then there is some physics that is not expected.", "score": 30.224201788717945, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "Broadly defined, particle physics aims to answer the fundamental questions of the nature of mass, energy, and matter, and their relations to the cosmological history of the Universe.\nAs the recent discovery of the Higgs Boson, as well as direct evidence of cosmic inflation, have shown, there is great excitement and anticipation about the next round of compelling questions about the origin of particle masses, the nature of dark matter, and the role leptons, and in particular neutrinos, may play in the matter-antimatter asymmetry of the Universe.\nThe energy scales relevant for these questions range from the TeV to perhaps the Planck scale. Experimental exploration of these questions requires advances in accelerator and detector technologies to unprecedented energy reach as well as sensitivity and precision. New facilities coming online in the next decade promise to open new horizons and revolutionize our view of the particle world.\nParticle theory addresses a host of fundamental questions about particles, symmetries and spacetime. As experiments at the Large Hadronic Collider (LHC) directly probe the TeV energy scale, questions about the origin of the weak scale and of particle masses become paramount. Is this physics related to new strong forces of nature, to new underlying symmetries that relate particles of different spin, or to additional spatial dimensions that have so far remained hidden? Will this physics include the particles that constitute the dark matter of the universe, and will measurements at the LHC allow a prediction of the observed cosmological abundance? 
String theory remains the leading candidate for a quantum theory of gravity, but a crucial debate has emerged as to whether its predictions are unique, or whether our universe is part of a multiverse. All of these fundamental questions about particles and spacetime lead to corresponding questions about the early history of the universe at ever higher temperatures. The most compelling links between cosmological observations and fundamental theory involve dark matter, inflation, the cosmological baryon excess and dark energy.", "score": 30.070353152662314, "rank": 25}, {"document_id": "doc-::chunk-6", "d_text": "where mX* is the desired gravitational charge of the particle and qX2 is the square of the charge of particle X.\nWe draw readers’ attention to the differences between the gravitational charge of a charged elementary particle, mX*, and its inertial electromagnetic mass, mX. The gravitational charge mX* (see expression (III.1.4)) has no direct connection with the inertial electromagnetic mass mX defined by expression (III.1.3). The gravitational charge mX* and the inertial electromagnetic mass mX reflect different properties of the elementary particle. Therefore, we have no right to talk about the "equivalence" (identity) of these masses. Now we can proceed to the determination of the gravitational charge of other elementary particles.\n• Electron: The inertial mass of the electron is 1/1836 of the inertial mass of the proton. The electric charge of an electron is equal in magnitude to the electric charge of a proton. If we calculate the gravitational charge of the electron using (III.1.4), then it turns out that it is numerically equal to the gravitational charge of the proton. For an electron, the ratio\n• Neutron: From the point of view of the quadratic effect, the neutron is a special particle, because it does not have a pronounced charge. We study macroscopic phenomena, so we do not need to consider the quark model of a neutron.
It is important for us to bear in mind the following experimental fact. The difference of inertial masses between the proton and the neutron is small by the standards of nuclear physics and is about 1.3 MeV. As a result, the neutron in the nuclei may be located in a deeper potential well than the proton, and therefore the beta decay of the neutron is energetically unfavorable. This leads to the fact that the neutron in the nuclei can be stable.\nMoreover, in neutron-deficient nuclei, a beta transition of a proton into a neutron occurs with the capture of an orbital electron. For this reason, from a macroscopic point of view, we can conditionally consider a neutron as a \"bundle\" of a proton and an electron with a relatively weak electrostatic interaction of electric charges. Following this logic, we can assume that the gravitational charge of the neutron is the sum of the gravitational charge of the proton and the gravitational charge of the electron.", "score": 29.9738221248568, "rank": 26}, {"document_id": "doc-::chunk-2", "d_text": "The two groups will double-check each other's work and provide a bit of friendly rivalry as to who can discover what first.) The collision will concentrate all that speeding energy in an infinitesimally small space. And then that ball of pure energy will become something else entirely. "By Einstein's E = mc2, you can make particles whose mass is less than the amount of energy you have available," says Martinus Veltman, a physics professor at Utrecht University in the Netherlands and a Nobel laureate. Energy becomes mass. This, in a nutshell, is why the protons need to go so fast—with more energy, the LHC can summon ever-heavier particles out of the ether. And the heavier particles are the interesting ones. The heavy ones are new.\nHere's what we know about what the universe is made of: We have the ordinary, common matter, like protons and electrons.
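Veltman's "energy becomes mass" remark is, at bottom, a budget constraint: a collision can only create particles whose total rest energy fits within the energy available. A rough sketch of that budget; the rest energies and the 14 TeV design collision energy are standard values assumed for the example, not taken from the text, and in a real proton collision only a fraction of the beam energy is actually available to make new particles:

```python
# Total collision energy for the nominal LHC design: 7 TeV per beam, head-on.
COLLISION_ENERGY_GEV = 14_000.0

# Rest energies in GeV (standard reference values, assumed here).
REST_ENERGY_GEV = {
    "electron": 0.000511,
    "proton": 0.938,
    "Higgs boson": 125.0,   # as later measured
}

for name, rest in REST_ENERGY_GEV.items():
    budget = COLLISION_ENERGY_GEV / rest
    print(f"{name}: rest energy fits ~{budget:,.0f} times into the collision energy")
```

The point of the loop is only the ordering: the heavier the particle, the fewer times its rest energy fits into the same collision, which is why "the heavier particles are the interesting ones" requires ever more energy.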
In addition, there's all the stuff that transmits a force, like photons of light, or gravitons, which pull heavy objects together. That's the universe—matter and force—and physicists have spent the past 60 years or so uncovering the details of how all the matter particles and the force particles interact. The totality of that work is called the Standard Model of particle physics, and any particle physicist will tell you that it is the most successful theory in the history of human existence, powerful enough to predict the results of experiments down to one part in a trillion.\nAnd yet the Standard Model is almost certainly not the whole picture. While particle physicists have been busy constructing the Standard Model, astronomers and cosmologists have been working on another task, a giant cosmic accounting project. What they see—or, more precisely, don't—is a clear sign that there are far more things in heaven and earth than are dreamt of by the Ph.D.s.\nIf you go out and count up all the stars and galaxies and supernovae and the like, you should get an estimate of how much total mass there is in the universe. But if you estimate the mass another way—say, by looking at how quickly galaxies rotate (the more mass in a galaxy, the faster it spins) or by noting how galaxies clump together in large groups—you will conclude that the universe has much more mass than we can see. About five times as much, by the latest reckoning. Since it can't be seen, we call it dark matter.", "score": 29.843412289784204, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "The God Particle\nPhysicist Brian Greene explains the Higgs boson particle, also known as the \"God Particle,\" and why you should care about it. This energetic and delightful talk will make you wish your high school physics teacher taught like this. 
Greene says the feat of finding such a particle is akin to \"trying to hear a tiny, delicate whisper over the massive thundering, deafening din of a NASCAR race.\"\nGreene gave this talk in 2012 just days before CERN (the European Organization for Nuclear Research) confirmed the theory of the Higgs boson using the Large Hadron Collider. The following year, the Nobel prize in physics was awarded jointly to François Englert and Peter Higgs “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles.”\nAround 1970, Peter Higgs imagined that all of space is uniformly filled with an invisible substance that’s sort of like molasses. When a particle, like an electron, tries to move through this molasses, the resistance it encounters is what we interpret as the mass of the particle. In fact, the idea is that different particles would have different degrees of stickiness, which means they would experience a different amount of resistance as they try to burrow through this pervasive molasses. Higgs’s theory, if proven, would rewrite the very meaning of nothingness because the field, or molasses, is essentially an unremovable occupant of space.\nThe Large Hadron Collider at CERN, in Geneva, is about 18 miles around. Protons are sent cycling around the collider in opposite directions, near the speed of light, so fast that they can traverse that 18-mile race track more than 11,000 times each second. And these particles engage in head-on collisions. It’s a monumental challenge to carry out this procedure. “It’s like trying to hear a tiny, delicate whisper over the thundering, deafening din of a NASCAR race,” says Greene.\nBy the numbers\nFundamental discovery can have a profound impact on the way we live our lives, but we must wait for theoretical discoveries to turn into practical applications. Confirming the existence of the Higgs particle in 2012 substantiated 30 years of theoretical science.
In short, the Higgs discovery put the bang in the Big Bang Theory.", "score": 29.478138444559043, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Here are some excerpts from a BBC article on a so-called \"God\" Particle that might explain how matter attains mass.\nCern scientists reporting from the Large Hadron Collider (LHC) have claimed the discovery of a new particle consistent with the Higgs boson.\nThe particle has been the subject of a 45-year hunt to explain how matter attains its mass.\nBoth of the Higgs boson-hunting experiments at the LHC see a level of certainty in their data worthy of a \"discovery\".\nThey claimed that by combining two data sets, they had attained a confidence level just at the \"five-sigma\" point - about a one-in-3.5 million chance that the signal they see would appear if there were no Higgs particle.\nHowever, a full combination of the CMS data brings that number just back to 4.9 sigma - a one-in-two million chance.\nRead more here:", "score": 29.423736360968665, "rank": 29}, {"document_id": "doc-::chunk-2", "d_text": "Shochet and Henry J. Frisch, professor in physics, the Fermi Institute, and the College, are among the 2,000 scientists conducting experiments at Fermilab.\nHiggs, if it exists, will be the latest in a series of subatomic particles discovered during the last century. The fermions, named for the Chicago Nobel physicist, are the building blocks of atoms. They include three pairs of quarks (top and bottom, up and down, and the fancifully named charmed and strange) and three pairs of leptons (the electron, muon, and tau and their corresponding neutrinos).\nThe bosons, on the other hand, carry the forces that interact with the fermions, causing them to stick together, have inertia, and disintegrate under radioactive conditions. Four of the five predicted bosons have already been seen in the lab.
The photon, or the particle of light, carries the electromagnetic force, which binds atoms together to create molecules and, eventually, berries and steel. The gluon carries the strong force, which holds atomic nuclei together by binding together quarks to form protons and neutrons. The Z and W bosons carry the weak force, which governs radioactive decay, the demise of all unstable particles. Last but not least, the Higgs boson carries the sticky muck that gives the fermions inertia, or mass.\nThe roster of fermions and bosons and their interactions create the Standard Model, the summary of scientists' present understanding of Nature. Every physicist will tell you it's an imperfect model -- or, more precisely, an inelegant model, because it requires parameters such as mass to be factored in, rather than accounting for them on its own -- but it's what they have for now, and the Higgs is an important part of it.\n\"The most natural way for it all to work,\" Shochet explains, \"would be for all the elementary particles to have no rest mass whatsoever -- to have mass that only came from kinetic energy, the energy from motion, and potential energy, the energy associated with things interacting. But we know that's not the case.\"
That's a function of gravity. Rather, mass is why a thing has inertia, causing it naturally to resist a change of motion, which is why a gooseberry and a hunk of steel both stay right where they are until some intervening force (late-season rain, a forklift) moves them.\nThe Higgs boson is the particle that carries the inertia. The more tightly it gloms on to other particles, the less likely they are to be swayed by intervening forces. Melvyn J. Shochet, the Elaine M. & Samuel D. Kersten Jr. professor in physics, the Fermi Institute, and the College, describes the phenomenon created by the Higgs as, in essence, sticky muck: \"It's like soldiers walking through mud. Interaction with the mud slows them down and makes it much harder to walk.\" In fact, particle physicists believe all of Nature is covered in a blanket of mud, and they would like very much to scoop up a glob from this Higgs field to study how, exactly, it gloms onto other particles.\nThey won't have to wait another five years to do it. Just as LEP was shutting down in December, another particle accelerator 40 miles outside Chicago was preparing to fire up in March to resume the hunt for the Higgs. The Tevatron accelerator, located at the Fermi National Accelerator Laboratory in Batavia, Illinois, has undergone extensive equipment upgrades, increasing the intensity of its colliding beams and the machine's ability to tag data like that seen at LEP.", "score": 29.323962799692026, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Ok. This post is for all my non-physics friends who have been asking me… What just happened? Why is everyone talking about this Higgs thing!?\nIt does what!?\nActually, two things. It gives fundamental particles mass. Not much help, eh? Fundamental particles are, well, fundamental – the most basic things in nature.
We are made out of arms & legs and a few other bits. Arms & legs and everything else are made out of cells. Cells are made out of molecules. Molecules are made out of atoms. Note we’ve not reached anything fundamental yet – we can keep peeling back the layers of the onion and peer inside. Inside the atom are electrons in a cloud around the nucleus. Yes! We’ve got a first fundamental particle: the electron! Everything we’ve done up to now says it stops with the electron. There is nothing inside it. It is a fundamental particle.\nWe aren’t done with the nucleus yet, however. Pop that open and you’ll find protons and neutrons. Not even those guys are fundamental, however – inside each of them you’ll find quarks – about 3 of them. Two “up” quarks and a “down” quark in the case of the proton and one “up” quark and two “down” quarks in the case of the neutron. Those quarks are fundamental particles.\nThe Higgs interacts with the electron and the quarks and gives them mass. You could say it “generates” the mass. I’m tempted to say that without the Higgs those fundamental particles wouldn’t have mass. So, there you have it. This is one of its roles. Without this Higgs, we would not understand at all how electrons and quarks have mass, and we wouldn’t understand how to correctly calculate the mass of an atom!\nNow, any physicist who has made it this far is cringing with my last statement – as a quick reading of it implies that all the mass of an atom comes from the Higgs. It turns out that we know of several different ways that mass can be “generated” – and the Higgs is just one of them.", "score": 29.165932227747074, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "This text accompanies the video.\nThe Standard Model of particle physics is the most successful scientific theory of all time. 
It describes how everything in the universe is made of 12 different types of matter particles, interacting with three forces, all bound together by a rather special particle called the Higgs boson. It’s the pinnacle of 400 years of science and gives the correct answer to hundreds of thousands of experiments. In this explainer, Cambridge University physicist David Tong recreates the model, piece by piece, to provide some intuition for how the fundamental building blocks of our universe fit together. At the end of the video, he also points out what’s missing from the model and what work is left to do in order to complete the Theory of Everything.\nAnd then when you have looked at that, you can look at this.\nBy which time you will find you actually know nothing at all about anything that makes up the universe in which we live, at least nothing that makes any practical difference in what you do.", "score": 28.90337748864346, "rank": 33}, {"document_id": "doc-::chunk-20", "d_text": "In the late 1940s a number of experiments with cosmic rays revealed new types of particles, the existence of which had not been anticipated. They were called strange particles, and their properties were studied intensively in the 1950s. Then, in the 1960s, many new particles were found in experiments with the large accelerators. The electron, proton, neutron, photon, and all the particles discovered since 1932 are collectively called elementary particles. But the term is actually a misnomer, for most of the particles, such as the proton, have been found to have very complicated internal structure.\nElementary particle physics is concerned with (1) the internal structure of these building blocks and (2) how they interact with one another to form nuclei. The physical principles that explain how atoms and molecules are built from nuclei and electrons are already known. 
At present, vigorous research is being conducted on both fronts in order to learn the physical principles upon which all matter is built.\nOne popular theory about the internal structure of elementary particles is that they are made of so-called quarks (see Quark), which are subparticles of fractional charge; a proton, for example, is made up of three quarks. This theory was first proposed in 1964 by the American physicists Murray Gell-Mann and George Zweig. Despite the theory's ability to explain a number of phenomena, no quarks have yet been found, and current theory suggests that quarks may never be released as separate entities except under such extreme conditions as those found during the very creation of the universe. The theory postulated three kinds of quarks, but later experiments, especially the discovery of the J/psi particle in 1974 by the American physicists Samuel C. C. Ting and Burton Richter, called for the introduction of three additional kinds.\nIn 1931 the American physicist Harold Clayton Urey discovered the hydrogen isotope deuterium and made heavy water from it. The deuterium nucleus, or deuteron (one proton plus one neutron), makes an excellent bombarding particle for inducing nuclear reactions. The French physicists Irène and Frédéric Joliot-Curie produced the first artificially radioactive nucleus in 1933 and 1934, leading to the production of radioisotopes for use in archaeology, biology, medicine, chemistry, and other sciences.", "score": 27.72972038949445, "rank": 34}, {"document_id": "doc-::chunk-6", "d_text": "These HIGGS Bosons, u-HIGGS and d-HIGGS, are Force Carriers of the Weak Gravity Force; and the sparticles, d-HIGGSINO and u-HIGGSINO Bosons, are Force Carriers of the Electroweak Gravity Force. It's their absorption and emission that creates the mass of subatomic and larger particles and sparticles in the nucleus.
The other Force Carriers are Gravitons; c-Graviton, t-Graviton of the Weak Gravity Force, and s-Gravitino and b-Gravitino of the Electroweak Gravity Force. It's the emission and absorption of these Gravitons and Gravitinos that create the SPIN, MOMENTUM and ENERGY of subatomic and larger particles and sparticles inside the nucleus. The 2 Force Carriers of Weak Gravity and Electroweak Gravity are similar to the 2 Force Carriers; W+, W-, and ZO (+-=Z0, -+=Z0) of the Weak Nuclear Force and sparticles; Wino, Zino of the Electroweak Force. The weak interaction in the nucleus is DUE to the Weak Gravity and Electroweak Gravity. The Male HIGGS Bosons have 3 Flavors or Faces; H2 or u-HIGGS, H4 or d-HIGGS, H5 or d-HIGGSINO and u-HIGGSINO, all are Zero or Neutral charged.\nThe material particles and sparticles of the Male HIGGS Fields or Weak Gravity Force and Electroweak Gravity are what we called Quarks HIGGS (Qhiggs) and Squark HIGGS (Sqhiggs). They are cubical in shape and they housed or confined each specific type of QUARK; Charm(c), Up(u), Top(t), Down(d) and SQUARK; Strange(s), Down(d), Bottom(b), Up(u) and why we called them QUARK HIGGS and SQUARK HIGGS.", "score": 27.388120820457882, "rank": 35}, {"document_id": "doc-::chunk-1", "d_text": "So we conclude that each type of particle corresponds to a representation of G, and if we can classify the group representations of G, we will have much more information about the possibilities and properties of H, and hence what types of particles can exist.\nThe group of translations and Lorentz transformations form the Poincaré group, and this group is certainly a subgroup of G (neglecting general relativity effects, or in other words, in flat space). Hence, any representation of G will in particular be a representation of the Poincaré group. 
Representations of the Poincaré group are in many cases characterized by a nonnegative mass and a half-integer spin (see Wigner's classification); this can be thought of as the reason that particles have quantized spin. (Note that there are in fact other possible representations, such as tachyons, infraparticles, etc., which in some cases do not have quantized spin or fixed mass.)\nWhile the spacetime symmetries in the Poincaré group are particularly easy to visualize and believe, there are also other types of symmetries, called internal symmetries. One example is color SU(3), an exact symmetry corresponding to the continuous interchange of the three quark colors.\nAlthough the above symmetries are believed to be exact, other symmetries are only approximate.\nAs an example of what an approximate symmetry means, suppose we lived inside an infinite ferromagnet, with magnetization in some particular direction. An experimentalist in this situation would find not one but two distinct types of electrons: one with spin along the direction of the magnetization, with a slightly lower energy (and consequently, a lower mass), and one with spin anti-aligned, with a higher mass. Our usual SO(3) rotational symmetry, which ordinarily connects the spin-up electron with the spin-down electron, has in this hypothetical case become only an approximate symmetry, relating different types of particles to each other.\nLie algebras versus Lie groups\nMany (but not all) symmetries or approximate symmetries, for example the ones above, form Lie groups. 
Rather than study the representation theory of these Lie groups, it is often preferable to study the closely related representation theory of the corresponding Lie algebras, which are usually simpler to compute.\nIn general, an approximate symmetry arises when there are very strong interactions that obey that symmetry, along with weaker interactions that do not.", "score": 27.04166607735528, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "Scientists Close to Understanding Mass With Particle\nScientists seeking to explain the origins of matter discovered a particle that may support a decades-old theory of physics, bringing people closer to understanding unseen parts of the universe.\nThe observed particle is the heaviest boson ever found, said Joe Incandela, spokesman for one of the experiments at CERN, the European Organization for Nuclear Research, at a seminar yesterday at its Geneva headquarters. Scientists stopped short of claiming they have found the elusive Higgs boson, a theoretical particle that could explain where mass comes from.\n“As a layman, I think I would say ‘we have it,’” said Rolf-Dieter Heuer, director of CERN, at a press conference in Geneva. It will take at least three to four years of research to fully understand the properties of the observed particle, Heuer said.\nThe announcement brings humankind closer to answering a millennia-old question that the ancient Greeks wrestled with: what is matter made of? The particle is a key to the Standard Model, a theory explaining how the universe is built, and its existence would help scientists gain a better understanding of how galaxies hold together. 
It also could open a door to exploring other parts of physics such as superparticles or dark matter that telescopes can’t detect.\n‘Sings and Dances’\nThe new boson “sings and dances like” the theoretical particle, said Pauline Gagnon, a researcher on the Atlas set of experiments in Geneva, in an interview in Melbourne, where she was attending the bi-annual International Conference on High Energy Physics. “There is no doubt it comes from a different signal, different channels, with different experiments. We just need in the next few months with more data to ascertain exactly what are the properties of this particle to see if it is exactly the Standard Model Higgs boson or some variation of it.\"\nParticle physics is the study of the elemental building blocks that make up matter. These particles, with names such as quark, fermion, lepton and boson, can’t be subdivided. They exist and interact within several unseen “fields” that permeate the universe.\nThe field that generates mass for objects is named for U.K. physicist Peter Higgs, who in the 1960s was one of the first scientists to outline a working theory on how elemental particles achieve mass. Higgs was one of four of the theorists attending yesterday’s meeting in Geneva.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-2", "d_text": "In this theory, there are several kinds of elementary particles. There are strongly interacting quarks, which make up the protons and neutrons inside atomic nuclei as well as most of the new particles discovered in the 1950s and 1960s. There are more weakly interacting particles called leptons, of which the prototype is the electron.\nThere are also “force carrier” particles that move between quarks and leptons to produce various forces.
These include (1) photons, the particles of light responsible for electromagnetic forces; (2) closely related particles called W and Z bosons that are responsible for the weak nuclear forces that allow quarks or leptons of one species to change into a different species—for instance, allowing negatively charged “down quarks” to turn into positively charged “up quarks” when carbon-14 decays into nitrogen-14 (it is this gradual decay that enables carbon dating); and (3) massless gluons that produce the strong nuclear forces that hold quarks together inside protons and neutrons.\nSuccessful as the Standard Model has been, it is clearly not the end of the story. For one thing, the masses of the quarks and leptons in this theory have so far had to be derived from experiment, rather than deduced from some fundamental principle. We have been looking at the list of these masses for decades now, feeling that we ought to understand them, but without making any sense of them. It has been as if we were trying to read an inscription in a forgotten language, like Linear A. Also, some important things are not included in the Standard Model, such as gravitation and the dark matter that astronomers tell us makes up five sixths of the matter of the universe.\nSo now we are waiting for results from a new accelerator at CERN that we hope will let us make the next step beyond the Standard Model. This is the Large Hadron Collider, or LHC. It is an underground ring seventeen miles in circumference crossing the border between Switzerland and France. In it two beams of protons are accelerated in opposite directions to energies that will eventually reach 7 TeV in each beam, that is, about 7,500 times the energy in the mass of a proton. 
The beams are made to collide at several stations around the ring, where detectors with the mass of World War II cruisers sort out the various particles created in these collisions.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-1", "d_text": "At the heart of the Standard Model is the idea of elementary particles; the very basic building blocks of the building blocks of the building blocks of... well... pretty much everything really.\nSo here's a table that nicely organizes all the elementary particles of the Standard Model. Don't expect it to mean anything just yet, but I'll be referring back to it from time to time...\nThere are basically two types of particle in the universe. The first class consists of particles that make up the matter all around us (the chairs and tables of the world). These are called FERMIONS, being named after a pioneer of particle physics by the name of Enrico Fermi.\nThe second class consists of particles that cause interaction between matter particles (think of them as the 'springs' that bind things together or push them apart). These particles are called BOSONS, being named after another important early physicist by the name of Satyendra Nath Bose.\nIn short, fermions are matter particles and bosons are interaction particles. Now let's drill down on each of these categories...\nParticles can push and pull on each other in several ways corresponding to four fundamental forces: the gravitational force, the strong force, the weak force, and the electromagnetic force.\nThe Standard Model can't handle gravity yet; it's way too tricky. But the others can be understood very well by thinking of the forces as being mediated by particles, called bosons.\nGLUONS carry the strong force and are usually denoted by a lowercase g. I don't have much to say about gluons yet, but I just wanted this paragraph to be the same length as the next two. There. 
That should do it.\nPHOTONS carry the electromagnetic force and are usually denoted by the Greek letter γ (gamma). Photons are better known for their role as the 'light particles' that enable us to see stuff.\nGAUGE BOSONS carry the weak force (strictly speaking, photons and gluons are gauge bosons too, but the name is most often attached to the weak-force carriers). There are two types of gauge bosons denoted by the symbols W and Z. Most physicists just refer to these particles as W- and Z-Bosons; however, the name WEAKON (being suggestive of the weak force) was once proposed.\nIt didn't take hold among physicists, but the lexicographers seem to have taken notice ;-)\nNOTE — You may be wondering how forces can be 'carried' by particles.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-2", "d_text": "Feynman explains:\nClearly as soon as we have to put forces on the inside of the electron, the beauty of the whole idea [i.e. deriving mass from Maxwell’s equations] begins to disappear. Things get very complicated. You would want to ask: how strong are the stresses? How does the electron shake? Does it oscillate? What are all its internal properties? And so on. It might be possible that the electron does have some complicated internal properties. If we made a theory of an electron along those lines, it would predict odd properties, the modes of oscillation, which haven’t apparently been observed. We say “apparently” because we observe a lot of things in nature that still do not make sense. We may someday find out that one of the things that we don’t understand today (for example the muon), can, in fact, be explained as an oscillation of the Poincare stresses…there are so many things about fundamental particles that we still don’t understand. 
Anyway, the complex structure implied by this theory is undesirable, and the attempt to explain all mass in terms of electromagnetism…has led to a blind alley.\nIt turns out, however, that nobody has ever succeeded in making a self-consistent quantum theory out of any of the modified [classical] theories…We do not know how to make a consistent theory - including the quantum mechanics - which does not produce an infinity for the self-energy of the electron, or any point charge. And, at the same time, there is no satisfactory theory that describes a non-point charge. It is an unsolved problem.\nOf course, in developing our RST-based theory, we escape the horns of this dilemma, by recognizing that electrical, magnetic, and gravitational forces, are properties of scalar, not vectorial, or M2, motions, and that they are certainly not autonomous entities that can exist independently on the surface of a sphere, as a consequence of interacting charges. In the new system, we can define force and acceleration (mass) in space|time terms in a way that eliminates the infinities of point particles and the need for Poincare stresses, in non-point particles.\nThat is to say, we can redefine force, just as we redefine motion: In the universe of motion, like everything else, force and acceleration must be motions, a combination of motions, or a relation between motions.", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "In English, many things are named after a particular country – but have you ever wondered what those things are called in those countries?\n1física de partículas femenino\n- When asked what the most important issue in particle physics is today, my colleagues offer three burning questions: What is the origin of mass?\n- String theory always has lots of scalar-field moduli and these can potentially play important roles in particle physics and cosmology.\n- The first decade of LFT has been extremely productive and has had a longlasting 
impact on theoretical particle physics and field theory.\n- In particle physics, masses of scalar fields tend to be very large.\n- The existence of quarks inside the mesons and baryons had to be deduced mathematically because free quarks have never been observed by particle physics.\n- I would like to end by discussing the future of string theory, not as a mathematical subject but as a framework for particle physics and cosmology.\n- Since the discovery of quarks in the 1960s, the core questions in nuclear and particle physics have evolved dramatically.\n- In the Standard Model of particle physics, the Higgs mechanism allows the generation of such masses but it cannot predict the actual mass values.\n- In Deep Down Things, he takes the reader on a fascinating journey into the bizarre, subatomic world of particle physics.\n- The article becomes heavy in places, especially in dealing with particle physics and quantum mechanics.\n- In the same way it doesn't help us to look at Quantum and particle physics to describe cellular properties, because the level is too far reduced and the sorts of statements we can make are too general.\n- Then the electron was discovered, and particle physics was born.\n- Some will express surprise that the biggest job in particle physics has gone to a non-particle physicist.\n- Fifty years is a long time in particle physics - and not just because most subatomic particles only exist for tiny fractions of a second.\n- That hallmark has indeed proved true for quarks, which form the bedrock of the standard model, the dominant paradigm of particle physics.\n- The delineation of atomic substructure and mechanisms of subatomic processes evolved into the modern study of particle physics.\n- But this quark excess can't be explained using the Standard Model of particle physics.\n- How do others see the future of CERN and particle physics in Europe?\n- If the Standard Model of particle physics were perfectly symmetric, none of the particles in the 
model would have any mass.", "score": 26.67547067651041, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "So what exactly is the physical universe made of … everything that we consider to be “matter” (indeed the only “stuff” that actually exists if you are a materialist reductionist)? Matter exists within various “force fields” such as gravity, the “electro-weak” force and the “strong” force. But if we break down matter to its absolute basic constituents, what is it? Well, simply three “things” make up everything physical. These are electrons, an “up quark” and a “down quark”. There are actually six quarks but for all intents and purposes matter is made up of just the “up” and “down”. As you will recall, the atom consists of a nucleus made up of protons and neutrons, with a number of electrons “orbiting” around it. All protons are made of two up quarks and one down quark, and all neutrons consist of one up quark and two down quarks. The electrons are not made of anything as they are “point particles”. By this is meant that they have no actual extension in space. Let me be clear about this: they do not have any actual physical presence in space … they do not take up space. Got that? So if it has no presence in space it might as well be a ghost and, in many ways, electrons are like ghosts … which, of course, like precognition, do not exist …. But what about the up and down quarks? Well, each up quark is identical to every other up quark and every down quark is identical to every other down quark, which in turn creates the identical protons and identical neutrons. The only thing that makes atoms different from each other is the number of electrons whizzing round and the number of protons and neutrons in the nucleus. 
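The quark arithmetic in this passage (proton = two up + one down, neutron = one up + two down) can be sketched in a few lines of Python; `quark_content` is a hypothetical helper written here purely for illustration:

```python
# Quark bookkeeping for an atom: proton = uud, neutron = udd.
# quark_content is an illustrative helper, not from any library.

def quark_content(protons: int, neutrons: int) -> dict:
    """Count the up and down quarks in a nucleus with the given nucleons."""
    return {
        "up": 2 * protons + 1 * neutrons,
        "down": 1 * protons + 2 * neutrons,
    }

# Helium-4: 2 protons, 2 neutrons -> 6 up quarks and 6 down quarks.
print(quark_content(2, 2))
```

Note the symmetry: any nucleus with equal numbers of protons and neutrons contains equal numbers of up and down quarks.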
So literally EVERYTHING that physically exists is made up of these two quarks and the electrons which don’t actually exist in physical space (Just like photons, particles of light which are also point particles … the things that “illuminate” physical objects so we can “see” them). Is it me or is this not simply amazing …. and do you not feel, like me, that somewhere along the way we are missing something?", "score": 26.357536772203648, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "Particle physicists had two nightmares before the Higgs particle was discovered in 2012. The first was that the Large Hadron Collider (LHC) particle accelerator would see precisely nothing. For if it did, it would likely be the last large accelerator ever built to probe the fundamental makeup of the cosmos. The second was that the LHC would discover the Higgs particle predicted by theoretical physicist Peter Higgs in 1964 ... and nothing else.\nEach time we peel back one layer of reality, other layers beckon. So each important new development in science generally leaves us with more questions than answers. But it also usually leaves us with at least the outline of a road map to help us begin to seek answers to those questions. The successful discovery of the Higgs particle, and with it the validation of the existence of an invisible background Higgs field throughout space (in the quantum world, every particle like the Higgs is associated with a field), was a profound validation of the bold scientific developments of the 20th century.\nHowever, the words of Sheldon Glashow continue to ring true: The Higgs is like a toilet. It hides all the messy details we would rather not speak of. The Higgs field interacts with most elementary particles as they travel through space, producing a resistive force that slows their motion and makes them appear massive. 
Thus, the masses of elementary particles that we measure, and that make the world of our experience possible, are something of an illusion—an accident of our particular experience.\nAs elegant as this idea might be, it is essentially an ad hoc addition to the Standard Model of physics—which explains three of the four known forces of nature, and how these forces interact with matter. It is added to the theory to do what is required to accurately model the world of our experience. But it is not required by the theory. The universe could have happily existed with massless particles and a long-range weak force (which, along with the strong force, gravity, and electromagnetism, make up the four known forces). We would just not be here to ask about them. Moreover, the detailed physics of the Higgs is undetermined within the Standard Model alone. The Higgs could have been 20 times heavier, or 100 times lighter.\nWhy, then, does the Higgs exist at all? And why does it have the mass it does?", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-19", "d_text": "In 1935 the Japanese physicist Yukawa Hideki developed a theory explaining how a nucleus is held together, despite the mutual repulsion of its protons, by postulating the existence of a particle intermediate in mass between the electron and the proton. In 1936 Anderson and his coworkers discovered a new particle of 207 electron masses in secondary cosmic radiation; now called the mu-meson or muon, it was first thought to be Yukawa's nuclear \"glue.\" Subsequent experiments by the British physicist Cecil Frank Powell and others led to the discovery of a somewhat heavier particle of 270 electron masses, the pi-meson or pion (also obtained from secondary cosmic radiation), which was eventually identified as the missing link in Yukawa's theory.\nMany additional particles have since been found in secondary cosmic radiation and through the use of large accelerators.
They include numerous massive particles, classed as hadrons (particles that take part in the \"strong\" interaction, which binds atomic nuclei together), including hyperons and various heavy mesons with masses ranging from about one to three proton masses; and intermediate vector bosons such as the W and Z⁰ particles, the carriers of the \"weak\" nuclear force. They may be electrically neutral, positive, or negative, but never have more than one elementary electric charge e. Enduring from 10⁻⁸ to 10⁻¹⁴ sec, they decay into a variety of lighter particles. Each particle has its antiparticle and carries some angular momentum. They all obey certain conservation laws involving quantum numbers, such as baryon number, strangeness, and isotopic spin.\nIn 1931 Pauli, in order to explain the apparent failure of some conservation laws in certain radioactive processes, postulated the existence of electrically neutral particles of zero rest mass that nevertheless could carry energy and momentum. This idea was further developed by the Italian-born American physicist Enrico Fermi, who named the missing particle the neutrino. Uncharged and tiny, it is elusive, easily able to penetrate the entire earth with only a small likelihood of capture. Nevertheless, it was eventually discovered in a difficult experiment performed by the Americans Frederick Reines and Clyde Lorrain Cowan, Jr. Understanding of the internal structure of protons and neutrons has also been derived from the experiments of the American physicist Robert Hofstadter, using fast electrons from linear accelerators.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-3", "d_text": "Charged particles interact by the exchange of photons — the carrier of the electromagnetic force. Whenever an electron repels another electron or an electron orbits a nucleus, a photon is responsible. Photons are massless, uncharged, and have an unlimited range. 
The mathematical model used to describe the interaction of charged particles through the exchange of photons is known as quantum electrodynamics (QED).\nQuarks stick to other quarks because they possess a characteristic known as color (or color charge). Quarks come in one of three colors: red, green, and blue. Don't let the word mislead you. Quarks are much too small to be visible and thus could never have a perceptual property like color. The names were chosen because of a wonderful analogy. The colors of quarks in the standard model combine like the colors of light in human vision.\nRed light plus green light plus blue light appears to us humans as \"colorless\" white light. A baryon is a triplet of one red, one green, and one blue quark. Put them together and you get a color neutral particle. A color plus its opposite color also gives white light. Red light plus cyan light looks the same to humans as white light, for example. A meson is a doublet of one colored quark and one anticolored antiquark. Put them together and you get another color neutral particle.\nThere's something about color that makes it want to hide itself from anything bigger than a nucleus. Quarks can't stand being apart from one another. They just have to join up and always do so in a way that hides their color from the outside world. One color is never favored over another when quarks get together. Matter is color neutral down to the very small scale.\nColored particles are bound together by the appropriately named gluons. Gluons are also colored, but in a more complicated way than the quarks are. Six of the eight gluons have two colors, one has four, and another has six. Gluons glue quarks together, but they also stick to themselves. 
One consequence of this is that they can't reach out and do much beyond the nucleus.\nThe mathematical model used to describe the interaction of colored particles through the exchange of gluons is known as quantum chromodynamics (QCD).", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-9", "d_text": "But the electron interacts with the photon field (i.e. the electromagnetic field) and can create a disturbance in it; in doing so it too ceases to be a normal particle and becomes a more general disturbance. The combination of the two disturbances (i.e. the two \"virtual particles\") remains a particle with the energy, momentum and mass of the incoming electron.\nNow there are many other types of disturbances that fields can exhibit that are not particles. Another example, and scientifically one of the most important, shows up in the very nature of particles themselves. A particle is not as simple as I have naively described. Even to say a particle like an electron is a ripple purely in the electron field is an approximate statement, and sometimes the fact that it is not exactly true matters.\nIt turns out that since electrons carry electric charge, their very presence disturbs the electromagnetic field around them, and so electrons spend some of their time as a combination of two disturbances, one in the electron field and one in the electromagnetic field. The disturbance in the electron field is not an electron particle, and the disturbance in the photon field is not a photon particle. However, the combination of the two is just such as to be a nice ripple, with a well-defined energy and momentum, and with an electron’s mass. This is sketchily illustrated in Figure 3.\nFig. 4: The Feynman diagram needed to calculate the process in Fig. 3. 
One says \"the electron emits and reabsorbs a virtual\nphoton\", but this is just shorthand for the physics shown in\nThe language physicists use in describing this is the following:\n“The electron can turn into a virtual photon and a virtual\nelectron, which then turn back into a real electron.” And they draw\na Feynman diagram that looks like Figure 4. But what they\nreally mean is what I have just described in the\nprevious paragraph. The Feynman diagram is actually a calculational\ntool, not a picture of the physical phenomenon; if you want to\ncalculate how big this effect is, you take that diagram , translate\nit into a mathematical expression according to Feynman’s rules, set\nto work for a little while with some paper and pen, and soon obtain\nFig.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-5", "d_text": "by R. D. Peccei - arXiv , 1999\nThese lectures describe some aspects of the physics of massive neutrinos. After a brief introduction of neutrinos in the Standard Model, the author discusses possible patterns for their masses. A discussion of Dirac and Majorana masses is included.\nNon-Abelian Discrete Symmetries in Particle Physics\nby Hajime Ishimori, at al. - arXiv , 2010\nThe authors review pedagogically non-Abelian discrete groups, which play an important role in the particle physics. They show group-theoretical aspects for many concrete groups, such as representations, their tensor products.\nNeutrino masses and mixings and...\nby Alessandro Strumia, Francesco Vissani - arXiv , 2010\nThe authors review experimental and theoretical results related to neutrino physics with emphasis on neutrino masses and mixings, and outline possible lines of development. 
The physics is presented in a simple way, avoiding unnecessary formalisms.\nQuantum Principles and Particles\nby Walter Wilcox - Baylor University , 1992\nThis is an introductory text; it includes material which overlaps with particle physics, including a number of interesting models as well as material on identical particles. The ideas of quantum mechanics are introduced via Process Diagrams.\nLessons in Particle Physics\nby Luis Anchordoqui, Francis Halzen - arXiv , 2009\nThis is a preliminary draft of lecture notes for a (500 level) course in elementary particles. From the table of contents: General Principles; Symmetries and Invariants; QED; Hard Scattering Processes; Precision Electroweak Physics.\nElementary Particle Physics\nby Paolo Franzini - University of Rome , 2009\nThe Electromagnetic Interaction; The Weak Interaction; Strangeness; Quark Mixing; Quantum Chromodynamics; Hadron Spectroscopy; High Energy Scattering; The Electro-weak Interaction; Spontaneous Symmetry Breaking, the Higgs Scalar; etc.\nA Simple Introduction to Particle Physics\nby M. Robinson, K. Bland, G. Cleaver, J. Dittmann, T. Ali - arXiv , 2008\nAn introduction to the relevant mathematical and physical ideas that form the foundation of Particle Physics, including Group Theory, Relativistic Quantum Mechanics, Quantum Field Theory and Interactions, Abelian and Non-Abelian Gauge Theory, etc.", "score": 25.468142132006083, "rank": 47}, {"document_id": "doc-::chunk-6", "d_text": "MASS AND CHARGE OF ELEMENTARY PARTICLES\nParticle | Mass (kg) | Charge (C) | Mass (u)\nPROTON | 1.67262 × 10⁻²⁷ | +1.60218 × 10⁻¹⁹ | 1.00728\nNEUTRON | 1.67493 × 10⁻²⁷ | 0 |\nELECTRON | 9.10939 × 10⁻³¹ | −1.60218 × 10⁻¹⁹ | 0.00055\nATOMIC NUMBER, MASS NUMBER, ISOTOPES ATOMIC NUMBER Z : number of protons of an atom An element is designated by Z: all atoms of the same element have the same Z. However, elements are usually designated not by Z but by a one- or two-letter symbol Sy. MASS NUMBER A: the sum of protons and neutrons A – Z is the number of neutrons: not fixed given a value of Z! 
Atoms with same Z (same element) with different A are ISOTOPES\nAn atom is characterised by both Z and A. The complete notation is:\nUsually Z is omitted (not necessary). A indicated only if required.\nIf we are not interested in isotopes, then the atom is characterised by Z, the number of protons in the nucleus: Z is also the number of electrons of the neutral atom. This latter definition is much more important in chemistry!\nCorrelation between (A – Z) number of neutrons and Z, number of protons\nNot all values of Z and A-Z are possible. Nuclei are stable up to Z = 83 (Bismuth). For Z > 83, e.g. Po Z = 84, nuclei become unstable and undergo nuclear reactions. The number of elements is finite because nuclei become unstable. Radioactive decay: unstable nuclei transform through emission of particles. For light nuclei Z ~ A – Z (up to Z = 20)\nhow do we weigh atoms?\nwith MS, mass spectrometry • for the electron (e/m) • for atoms • also for molecules The particle, carrying a charge e and having a speed v, is deviated by the electromagnetic field\nMASS OF AN ATOM M One would expect additivity: M = Z mp + (A-Z) mn + Z me Instead: M < Z mp + (A-Z) mn + Z me Mass defect! Why? 
Relativistic Effect: mass defect measures the binding energy among nucleons EINSTEIN RELATIONSHIP\nΔE = Δm·c²", "score": 25.000000000000068, "rank": 48}, {"document_id": "doc-::chunk-2", "d_text": "So to summarize: The current Standard Model of elementary particle physics consists of six quarks (up, down, strange, charm, bottom, and top); six leptons (electron, muon, tau, electron neutrino, muon neutrino, and tau neutrino); and the so-called gauge bosons that act as the agents that carry the four kinds of forces that exist in nature: the graviton (for gravity), the photon (for electromagnetism), the gluon (for the strong nuclear force), and the W+, W-, and the uncharged Z bosons (for the weak nuclear force), making 18 particles in all.\nAnd then we have the recently discovered Higgs particle of mass 126.5 GeV, standing alone. The Higgs particle has many properties in common with the other force particles so that it is often lumped together with them for that reason, but unlike them is not the carrier of any known force.\nAll the particles except the Higgs were detected by 1995, the last of them being the top quark. The masses of the quarks are hard to determine precisely since they are never found in an isolated state but always in combination with other quarks and gluons inside non-elementary particles. The masses of the neutrinos are hard to determine because their interactions with matter are so weak. (How we detect and measure the properties of something microscopic is by seeing how it disturbs the things that we can see and measure. But that requires the particle to interact with matter in the first place and neutrinos are notoriously reluctant to do so.)\nIt is these 19 particles that form the elementary particle spectrum from which the Standard Model is built. 
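The slide's mass-defect relation ΔE = Δm·c² can be checked numerically. A minimal sketch for helium-4, assuming the standard neutron mass in u and the usual 1 u ≈ 931.494 MeV/c² conversion (neither value is given in the text above), with `binding_energy_mev` a helper written for illustration:

```python
# Binding energy from the mass defect, ΔE = Δm·c².
# Masses in unified atomic mass units (u); 1 u ≈ 931.494 MeV/c².
M_PROTON = 1.00728    # u
M_NEUTRON = 1.00866   # u  (standard value; assumed, not quoted above)
M_ELECTRON = 0.00055  # u
U_TO_MEV = 931.494    # MeV per u (assumed conversion constant)

def binding_energy_mev(Z: int, A: int, atomic_mass_u: float) -> float:
    """ΔE = [Z*m_p + (A-Z)*m_n + Z*m_e - M_atom] * c^2, in MeV."""
    delta_m = Z * M_PROTON + (A - Z) * M_NEUTRON + Z * M_ELECTRON - atomic_mass_u
    return delta_m * U_TO_MEV

# Helium-4: Z = 2, A = 4, atomic mass ≈ 4.00260 u -> roughly 28.3 MeV,
# matching the known binding energy of the alpha particle.
print(round(binding_energy_mev(2, 4, 4.00260), 1))
```

The computed Δm ≈ 0.030 u shows why M < Z·mp + (A−Z)·mn + Z·me: the missing mass has gone into binding the nucleons.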
(Each particle also has its corresponding anti-particle but since the properties of each antiparticle are completely determined by the properties of the corresponding particle, they are not listed separately.) Of these 19 particles, the masses of the graviton, photon, and gluon are predicted to be identically zero, and the masses of W+ and W- are predicted by theory to be identical. That leaves us with 15 masses whose values have to be determined by experiment.\nIn addition, we have to include as additional parameters the strength of the interaction of each of the four kinds of forces, giving 19 parameters in all to be determined by experiment.\nNext: Particle and waves", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-4", "d_text": "Yang-Mills theory underlies much of the theory of modern particle physics. To jump ahead in the story, the Standard Model of particle physics is modeled by a composite Lie group: SU(3)xSU(2)xU(1). As you may guess, SU(3) is the group that continuously mixes three entities; and here it is used to quantum-mechanically mix the three \"colors\" of the quarks. SU(2) in this model mixes an up quark with a down quark and the leptons: an electron with an electron-neutrino--the two particles that emanate together in beta decay (hence capturing the action of the weak force; the strong force being modeled by the SU(3))--and the same for the muon and the muon neutrino, and tau with the tau neutrino (and similarly SU(3) also mixes the higher-order quarks of the second and third \"generations\"). 
The U(1) part of the composite group can be described (simplistically, here) as representing electromagnetism.\nPart of the model, the unification of electromagnetism with the weak nuclear force (responsible for beta decay)--using the composite SU(2)xU(1)--was the work of Steven Weinberg in 1967 (and independently, in different ways, of Abdus Salam and Sheldon Glashow, who shared the 1979 Nobel Prize with Weinberg for the electroweak unification work). In his work, Weinberg used what he called \"the Higgs mechanism\" to break the original symmetry of the unified electroweak force when it split during the very early universe into electromagnetism and the weak force. When this happened, the Z and W+ and W- bosons that carry the action of the weak force inside a nucleus of matter gained mass.\nThe \"Higgs mechanism,\" whose action is carried out by the then-hypothesized Higgs boson (now confirmed to exist by CERN), is what gave mass to these three bosons and, by implication, to all massive particles in the universe (save, perhaps, neutrinos, where another process may have been at play).", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-10", "d_text": "This has the consequence that the results of a measurement are now sometimes \"quantized\", i.e. they appear in discrete portions. This is, of course, difficult to imagine in the context of \"forces\". However, the potentials V(x,y,z) or fields, from which the forces generally can be derived, are treated similarly to classical position variables.\nThis becomes different only in the framework of quantum field theory, where these fields are also quantized.\nHowever, already in quantum mechanics there is one \"caveat\", namely that the particles acting on each other not only possess the spatial variable, but also a discrete intrinsic angular momentum-like variable called the \"spin\", and there is the Pauli principle relating the space and the spin variables. 
Depending on the value of the spin, identical particles split into two different classes, fermions and bosons. If two identical fermions (e.g. electrons) have a symmetric spin function (e.g. parallel spins) the spatial variables must be antisymmetric (i.e. they exclude each other from their places much as if there was a repulsive force), and vice versa, i.e. for antiparallel spins the position variables must be symmetric (i.e. the apparent force must be attractive). Thus in the case of two fermions there is a strictly negative correlation between spatial and spin variables, whereas for two bosons (e.g. quanta of electromagnetic waves, photons) the correlation is strictly positive.\nThus the notion \"force\" already loses part of its meaning.\nIn modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum can be directly derived from the homogeneity or symmetry of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be \"fundamental interactions\". When particle A emits (creates) or absorbs (annihilates) virtual particle B, momentum conservation results in a recoil of particle A, giving the impression of repulsion or attraction between particles A and A′ exchanging B. This description applies to all forces arising from fundamental interactions.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-24", "d_text": "Remember, there is no net charge expression without accompanying mass expression, while there can be the opposite, a massive ST unit with zero net charge. 
Asymmetry of the charge and spin vectors is a requirement for net expression of these parameters. The mass forming vector is not really a separate structural vector in the same sense as the charge and spin vectors. Rather it is manifested as the result of the particular combination(s) of the other two and in the end acts just like a vector in the sense that if those combinations produce a composite asymmetry mass is expressed (like a localized knot in the ST field). When they produce a composite symmetry, as in the Higgs, gluon, photon and graviton bosons, no mass is expressed and whatever asymmetry is present in the spin vectors (like the spin 1 gluons and photons, and the spin 2 gravitons), that spin energy now translates into 100% mobility with each ST pulse. The Higgs Boson, with spin 0 and charge 0, is assigned a mass and mobility of 0 here to be fully consistent with this model. Its perfect symmetry has no net expression of mass, charge or spin. Nevertheless it has mass-energy resulting from the equal and opposite individual charge, spin (and mass) vectors. It spins in place, acting as the multipotential transformative gateway from which all other ST units are derived and measured, and to which they return. Through interference, photons can become gravitons or Higgs Bosons. Interference of gravitons can also regenerate Higgs. The accounting system for the conservation laws has come full circle.\nThe irony is that the condensate of matter is light and we see it, while the fluid of which it is composed is dark and invisible.
While the curvature of ST inwards toward its mass-energy is what is seen, the curvature outward, where inflation began, is the light. For those who are in doubt, yet consider the Conservation of Energy to be inviolable, consider this:\nIf energy is conserved now, it must have been conserved in the past and in the future as well (that is, if the notions of a past, present and future are entertained likewise).\nAll the energy now is the same, in amount, as all the energy at the Big Bang, both before and after inflation.\nEnergy disposition, that is its spatial dispersal, must likewise account for the total sum.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-6", "d_text": "Earlier, I explained how all particles can be classified as either fermions or bosons. This dichotomy is based on quite a technical argument that has been questioned from time to time.\nOne compelling analysis suggests that in a two-dimensional system, the traditional dichotomy would break down, making room for a whole spectrum of possibilities in between bosons and fermions.\nNaturally, ANYthing between a bosON and a fermiON should be called an ANYON.\nSounding much more like a term from neuroanatomy than particle physics, an axion is a hypothetical particle that was invented to resolve a perplexing issue called The Strong CP Problem.\nThis problem had to do with the differences between the theories of the strong and weak forces with respect to a type of symmetry called CP.\nIt doesn't really matter though. The main point is that there's nothing else you can make with these letters.\nA glueball is what you would get if you could stick a bunch of gluons together. And you have to admit, even though it sounds more like a team sport than a particle, it's a pretty darn sensible name.\nTheoretically, gluons should be able to stick together like this, but, like most things in particle physics, it's a lot easier said than done.
In the meantime, feel free to play it.\nThe greatest problem in modern physics is that general relativity (the theory of space and time) and quantum theory (the theory of matter and interactions) refuse to talk to each other.\nA very popular assumption among physicists is that gravity will turn out to be a quantum field (just like the other forces) and the (alleged) particle that mediates the corresponding force is called the graviton. We've yet to detect one, but lots of people are trying.\nA model was proposed in the 1970s in which the (now assumed to be) elementary quarks and leptons were actually composite particles. The hypothetical constituent particles were called preons. Although physicists rarely say never, the idea has been largely discredited.\nI wonder what the preons were made of? ;-)\nA very popular, although speculative, theory in particle physics is called SUPERSYMMETRY (often called Susy by physicists; it's a kind of secret handshake). It's way too difficult to even summarize here, but the upshot of it would be that all known particles would have new (as yet undetected) partners.", "score": 23.749532590174518, "rank": 53}, {"document_id": "doc-::chunk-10", "d_text": "This is due to a condition of magnetic disorder (expanded and “hot” corpuscles: for ex., the neutron, compared with compact, highly magnetic and “cold” proton), or to a different collocation of the “barrier of potential” of the particles constituting the corpuscle (as in the proton-electron system: there is normally equilibrium between attractive effects of a particle and “repulsive” effects of the other), or at a regular magnetic anti parallelism of equal composing particles (nucleus and saturated layers, with protons in anti parallel couples, as in the “inert” gas: there is equilibrium between attractive effects of a pole and “repulsive” effects of the other) (30). 
As for the presumed unitary value of the “charge”, both positive and negative, it is predetermined by the caliber of our instruments, whose sensitivity reaches its limit at a certain gravitational intensity: the two appear identical because they are perceived at different distances from the field’s center, that of the proton and that of the electron, which acts as a screen. In other terms, the instrumental perception reaches up to the “potential barrier” of the proton and to that of the electron, which have the same intensity, because the first is much more distant from the proton than the second is from the electron (31).\nLet us now return to the problem of the measurement of mass, which is carried out on the basis of the gravitational effects of the masses themselves. Let us observe in this regard that the factors of density (active effect) and of magnetism have, in the gravitational interaction, a much more restricted radius of prevailing influence, beyond which almost exclusively the bare factor of the quantity of matter remains perceptible (still measured through its effects rather than in absolute terms): precisely for this reason Newton’s formula can disregard those two factors, without too much harm in the measurement of macrocosmic gravitation, even though their gravitational character is clearly underlined by the similarity of the formulas of the electric and magnetic interaction to the Newtonian law.\nIt therefore depends on the method and on the measuring instrument employed whether the values of mass result from the calculation almost bare, or altered by the coefficients of the very short distances (electric and magnetic “charges”, “nuclear forces”, etc.), which must be deducted to reach the pure and simple effect of the “quantity of matter”.", "score": 23.642463227796483, "rank": 54}, {"document_id": "doc-::chunk-2", "d_text": "When the Nobel Prize Committee awarded the 2015 prize to\nTakaaki Kajita\nSuper-Kamiokande Collaboration\nUniversity of Tokyo, Kashiwa, Japan\nand\nArthur B. McDonald\nSudbury Neutrino Observatory Collaboration\nQueen’s University, Kingston, Canada\nfor their experimental work showing that neutrinos might have a mass, it had to admit in the press release that:\n“The discovery led to the far-reaching conclusion that neutrinos, which for a long time were considered massless (?), must have some mass, however small.\nFor particle physics this was a historic discovery. Its Standard Model of the innermost workings of matter had been incredibly successful, having resisted all experimental challenges for more than twenty years. However, as it requires neutrinos to be massless, the new observations had clearly showed that the Standard Model cannot be the complete theory of the fundamental constituents of the universe.” (for more information read here)\nLet me summarize some of the greatest blunders that have been made in physics so far and expose it as “fake science” only because physicists have not realized that their discipline is simply applied mathematics to the physical world and have not employed it appropriately to established axiomatic, formalistic standards. Therefore, before one can reform physics, one should apply rigidly and methodologically the principle of mathematical formalism as first introduced by Hilbert, which led, through the famous Grundlagenstreit (Foundation dispute) between the two world wars in Europe, to Goedel’s irrefutable proof of the invalidity of mathematics and the acknowledgement of the Foundation Crisis of mathematics that simmered since the beginning of the 20th century after B. Russell presented his famous paradoxes (antinomies):\n1) Neither photons nor neutrinos are massless particles. Physicists have failed to understand epistemologically their own definition of mass, which is based on mathematics and is in fact “energy relationship”.
All particles and systems of nature have energy and thus mass (for further information read here).\n2) This eliminates the ridiculous concept of “dark matter” that accounts for 95% of the total mass in the universe according to the current standard model in cosmology, which is another epitome of “fake science” as the recent dispute on the inflation hypothesis not being a real science has truly revealed (read here). The 95% missing matter is the mass of the photon space-time which is now considered to be “massless”.", "score": 23.264431807715418, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "Standard physics classes in most high schools talk about matter being made of atoms. And atoms are made of protons, neutrons and electrons. And that the protons and neutrons are each made of three quarks. Those are the basics, but as it turns out, it's a little more complicated. There are not always just three quarks. Protons will always have three quarks, two up and one down, and these are valence quarks. It doesn't stop there however. Some protons will also have quark-antiquark pairs and other particles called gluons that hold it all together. But wait, there's more!\nWhere does all this stuff come from? Einstein said it best in his E=mc2 formula. Mass can create energy and energy can create mass, and that's where the magic happens: all the energy from the quarks and anti-quarks makes more matter. There are laws however that still exist. Quark pairs must match, and there must always be one more up quark than there are down quarks. It's the concept of equal and opposite reactions that keeps everything from flying off into a huge mess.
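Einstein's E=mc2 from the passage above can be made concrete in two lines: the energy of a single photon (E = hf = hc/λ) and its mass equivalent (m = E/c²). The 500 nm wavelength is an illustrative choice of ours; in the standard view the result is an energy equivalent rather than a rest mass:

```python
# Mass equivalent of a single green photon via E = hc/wavelength, m = E/c^2
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

wavelength = 500e-9              # m, illustrative visible-light photon
energy = h * c / wavelength      # J, roughly 4e-19 J
mass_equivalent = energy / c**2  # kg, roughly 4e-36 kg

print(f"E = {energy:.3e} J, m = E/c^2 = {mass_equivalent:.3e} kg")
```

The number is tiny, which is why photon inertia plays no role in everyday optics, but it is not zero, which is the point the author is pressing.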
Check out the sweet way this video demonstrates some cool particle physics.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-3", "d_text": "Standard Model of Physics Particles\nThere was a time when the atom seemed to be the smallest particle of matter, one that could not be divided any further. Later research showed that the electron and proton are actually smaller than the atom.\nThen quantum theory came along and found some more new particles that were totally unknown. To understand these new particles, a new theory was needed: quantum field theory. And this theory predicted that some more new particles are present in the universe.\nAfter that, when these particles were accelerated to high energy and collided together in an accelerator, many new particles were found, and their number became large. So these particles had to be divided into different groups according to their properties.\nAnd these were the properties:\n- Electrical charge\n- Lifetime (the time before it decays into a lighter particle)\nSo a model was developed in which the particles are put together according to their properties. Along with these properties, it could also explain the reason the different forces work.\nWhat Does Quantum Physics Mean About Particle\nTherefore this model is called the Standard Model of particle physics.\nSo let's come to know the different types of particles in it and their roles.\nAll types of particles present in the universe are mainly divided into two categories, which are given below.\nFermions build matter, whether it's a tree or a mountain. The four fundamental forces in this universe, meanwhile, work due to Bosons.\nDo you know what the four fundamental forces are?\nNow in the Fermion family there are two types of particles. Quarks are again of six types (up, down, charm, strange, top, bottom). These are responsible for making the protons and neutrons inside the nucleus.\nBut Quarks are not stable on their own, so they always combine together to make new particles, which are known as Hadrons.\nNow Hadrons are again divided into two parts.\n- Baryons (3 quarks)\n- Mesons (a quark and an antiquark)\nBaryons are made with a combination of three quarks, so the proton and neutron come under Baryons: the proton is made with two up and one down quark, whereas the neutron is made with two down and one up quark.\nHow is an electron generated according to the quantum principle?\nNow the rest of the quarks combine together to make other new combinations, and due to this you get many new particles.\nIf you talk about Leptons, these are the particles which are not made with quarks. They are as below.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-2", "d_text": "The immediate analogy of a ball being thrown back and forth between two people, thereby pushing them apart, soon breaks down when you try to explain attraction. I'm afraid there is no analogy, as quanta don't behave like ordinary particles. It is quite helpful to think of them as 'probability waves'. It is even more helpful not to think about them at all.\nSo there are the names of the particles that carry the forces that push and pull matter particles around. Now let's get on to the matter itself. The stuff you, me, and your ugly aunty are all made of...\nFermions are the matter particles, remember? But isn't matter made of atoms? Yes, it sure is. But atoms aren't the basic building blocks. To get to those, we have to dig deeper.\nBut first, we should probably have a quick recap on atoms, just to make sure we're on the same page. So here's a picture that probably brings back painful memories of your childhood science class...\nAs you can see, the atom consists of a hard core called the NUCLEUS around which a bunch of tiny particles called ELECTRONS orbit.
The nucleus is composed of two types of particles called PROTONS and NEUTRONS, which are collectively known as NUCLEONS.\nAnd what keeps this whole assembly together? Bosons! In particular, gluons and photons generate all the little attractive and repulsive forces that keep every little mini-solar-system doing its thing. But we've already talked about bosons. Let's get back to fermions...\nFermions are grouped into two categories, according to how they do or don't interact with each other: QUARKS and LEPTONS.\nThe quarks interact via the strong force, which is another way of saying that they get pushed and pulled around by gluons (they were the carriers of the strong force, remember?).\nUnlike electrons, which are all the same as each other, there are six different types of quark.\nQuarks come in pairs in a natural, but difficult to explain, way that I won't go into here. This pairing is reflected in the naming convention used for these quarks: up, down, charm, strange, top, bottom. As you can see in the pink table just above, the first letter of these labels is usually used as a shorthand symbol (u, d, etc.).", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-2", "d_text": "Therefore, gravitational energy is obtained through the presence of the Schwarzschild radius. The fundamental reasons behind mass, of course, will be much more complicated. This is only an insight into mass in terms of the radius itself and gravitational energy in relation to the gravitational charge (or inertial mass) of the system.
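The quark charges behind the proton/neutron compositions quoted a few excerpts earlier (proton = uud, neutron = udd) can be tallied in a few lines. The fractional charge assignments are the standard ones; the helper function is our own sketch:

```python
from fractions import Fraction

# Standard fractional electric charges of the six quark flavors (units of e)
QUARK_CHARGE = {
    "u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),
    "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3),
}

def baryon_charge(quarks: str) -> Fraction:
    """Total electric charge of a baryon from its three valence quarks."""
    return sum(QUARK_CHARGE[q] for q in quarks)

print(baryon_charge("uud"))  # proton:  2/3 + 2/3 - 1/3 = 1
print(baryon_charge("udd"))  # neutron: 2/3 - 1/3 - 1/3 = 0
```

Exact fractions avoid floating-point noise, and the two results match the familiar proton charge of +1e and neutral neutron.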
The Origin of the Quantization of Charges\nHere is a nice excerpt about the charge quantization method adopted by Motz: http://encyclopedia2.thefreedictiona...tion+condition\nWhat we have is\nwhere your constants in the paper have been set to natural units and the angular momentum component comes in multiples of\nIt seems to say that\nplays the role of a magnetic charge - this basically squares the charge on the left-hand side, by a quick analysis of the dimensions previously analysed.\nHere, referenced by Motz, you can see the magnetic charge is given as\n, the electric charge of course, still given by\nwhat we have essentially is\nIn light of this, one may also see this can be derived from the Heaviside relationship since it has the familiar\nin it.\nThe Radius\nWe may start with the quantum relationship\nKnowing the quantized condition\nwe may replace\nRearranging and then solving for the mass gives\nwe may replace the mass with the definition of the Planck mass in this equation, this gives\nActually, this is not the Planck mass exactly, it is about a factor of\ngreater, however, the Planck mass is usually an approximation so what we have here is possibly the correct value.\nAfter rearranging and a little further solving for the radius we end up with\nwhich makes the radius the Compton wavelength. In a sense, one can think of the Compton wavelength then as the \"size of a particle,\" but only loosely speaking.\nMotz' Uncertainty\nAnd so, I feel the need to explain an uncertainty relationship Motz has detailed in the paper. But whilst doing so, I also feel the need to explain that such a black hole particle is expected to give up its mass in the form of Unruh-Hawking radiation. The hotter a black hole is, the faster it gives up its radiation.
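The equations in this forum excerpt did not survive extraction. Presumably the "radius" result is the reduced Compton wavelength r = ħ/(mc), and the temperature referred to just below is the standard Hawking formula T = ħc³/(8πGMk_B); a quick evaluation under that assumption, for an electron and for a Planck-mass micro black hole:

```python
import math

hbar = 1.055e-34  # reduced Planck constant, J*s
c = 2.998e8       # speed of light, m/s
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
kB = 1.381e-23    # Boltzmann constant, J/K

m_e = 9.109e-31                     # electron mass, kg
m_planck = math.sqrt(hbar * c / G)  # Planck mass, ~2.2e-8 kg

# Reduced Compton wavelength: the loose "size of a particle" in the text
r_compton = hbar / (m_e * c)        # ~3.9e-13 m for the electron

# Hawking temperature of a Planck-mass micro black hole
T_hawking = hbar * c**3 / (8 * math.pi * G * m_planck * kB)

print(f"electron reduced Compton wavelength ~ {r_compton:.2e} m")
print(f"Planck-mass hole temperature ~ {T_hawking:.2e} K")
```

The temperature comes out above 10³⁰ K, which is why such an object would shed its mass-energy essentially instantly, as the excerpt goes on to say.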
This is described by the temperature equation for a black hole\nThe time in which such a particle would give up the energy proportional to the temperature is given as\nThat is very quick indeed; no experiment today could measure such an action.", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-1", "d_text": "(This means that, for instance, a magnet attracts a nail because both objects exchange photons.) The graviton is the particle associated with gravity. The strong force is carried by eight particles known as gluons. Finally, the weak force is transmitted by three particles, the W+, the W-, and the Z.\nThe behavior of all of these particles and forces is described with tremendous precision by the Standard Model, with one notable exception: gravity. For technical reasons, the gravitational force, the most familiar in our everyday lives, has proven very difficult to describe microscopically. This has been for many years one of the most important problems in theoretical physics – to formulate a quantum theory of gravity.\nIn the last few decades, string theory has emerged as the most promising candidate for a microscopic theory of gravity. And it is infinitely more ambitious than that: it attempts to provide a complete, unified, and consistent description of the fundamental structure of our universe. (For this reason it is sometimes, quite arrogantly, called a ‘Theory of Everything’).\nThe essential idea behind string theory is this: all of the different ‘fundamental’ particles of the Standard Model are really just different manifestations of one basic object: a string. How can that be? Well, we would ordinarily picture an electron, for instance, as a point with no internal structure. A point cannot do anything but move. But, if string theory is correct, then under an extremely powerful ‘microscope’ we would realize that the electron is not really a point, but a tiny loop of string.
A string can do something aside from moving— it can oscillate in different ways. Just as the strings on a violin or on a piano have resonant frequencies at which they prefer to vibrate—patterns that our ears sense as various musical notes and their higher harmonics—the same holds true for the loops of string theory. But rather than producing musical notes, each of the preferred patterns of oscillation appears as a particle whose mass and force charges are determined by the string’s oscillatory pattern. If it oscillates a certain way, then from a distance, unable to tell it is really a string, we see an electron. But if it oscillates some other way, well, then we call it a photon, or a quark, or a … you get the idea. So, if string theory is correct, the entire universe is made of strings!", "score": 22.27027961050575, "rank": 60}, {"document_id": "doc-::chunk-4", "d_text": "The whole sticky mess is called the strong force or the strong interaction since it results in forces in the nucleus that are stronger than the electromagnetic force. Without the strong force, every nucleus would blow itself to smithereens.\nThere are twelve named elementary fermions. The difference between them is one of flavor. The word \"flavor\" is used here to mean \"type\" and it applies only to fermions. Don't let the word mislead you. Subatomic particles are much too small to have any characteristics that could be directly observed by human senses.\nFlavored particles interact weakly through the exchange of W or Z bosons — the carriers of the weak force (also known as intermediate vector bosons). When a neutron decays into a proton, a W boson is responsible. When a neutron captures a neutrino, a W boson did it. The mathematical model used to describe the interaction of flavored particles through the exchange of intermediate vector bosons is known as quantum flavordynamics (QFD), but this is a term that is rarely used by practicing particle physicists. At higher energies, the weak and electromagnetic forces are indistinguishable.
The mathematical model that describes both of these fundamental forces is known as electroweak theory (EWT). This is the conventional name for the theory of the weak force.\nAll fermions are thought to have mass. Particles in generation I are less massive than those in generation II, which are less massive than those in generation III. Within the generations, quarks are more massive than leptons and neutrinos are less massive than the other leptons. Bosons are divided when it comes to mass. Gluons and photons are massless. The W, Z, and Higgs bosons are massive. This table lists the neutrino mass limits:\n|particles||symbol||name||mass (kg)||mass (u)||mass (MeV/c2)|\n|neutrinos||νe||electron neutrino||< 10−35||< 10−8||< 10−5|\n|neutrinos||νμ||muon neutrino||< 10−35||< 10−8||< 10−5|\n|neutrinos||ντ||tau neutrino||< 10−35||< 10−8||< 10−5|\nGravity is the force between objects due to their mass.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-9", "d_text": "The significance of a possible \"quantum chromodynamics\" will have to be derived on the basis of a unified description of possible interactions. This problem is being investigated in .\nResponsible for the inertial mass are the protosimplexes, i.e. the basic building blocks of flux aggregates, which form the structures of the k + 1 subconstituents in R3. They compose 4 concentric spherical shell-like configuration zones maintaining a dynamical equilibrium, during whose existence there appears a measurable particle mass. However, an attempt to measure the mass of a subconstituent part by scattering experiments will result in a very broad, variable bandwidth of measurements, because such a mass depends on the instantaneous flux phase. The sum of the k + 1 subconstituent masses, on the other hand, is constant and gives in essence the measurable particle mass.
The relevant quantity in this connection is the degree to which the 4 configuration zones in R3 are occupied by dynamic flux elements.\nFor k = 1 and k = 2 there are altogether 25 sets of 6 quantum numbers each, characterizing the occupation of configuration zones and the corresponding invariant rest masses. The particles belonging to these invariant basic patterns are in turn combined into several families of spin isomorphisms,in which the spatial flux dynamics of the configuration zones is in dynamic equilibrium.\nIn all these terms there exists a single basic invariant framework of occupied zones, depending only on whether k = 1 or k = 2. Substituted into the mass formula derived in this reproduces the masses of electron and proton to very good accuracy. The masses of all other ground states are produced in similar quality. However, the mass formula contains ratios of coupling constants, which could not at the time be derived theoretically. and therefore had to be adjusted to fit experiments carried out at CERN in 1974. Only in has it become possible to derive the coupling constants from first principles, but a revised set of particle masses has not yet been calculated.\nIt seems that the lifetime of a state depends on the deviation of its configuration zone occupation from the framework structure mentioned above. It is conceivable, in analogy to the optically active antipodes of organic chemistry, that there exist isomers with spatial reflection symmetry also in the area of flux aggregates, giving rise to variations in lifetime.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-11", "d_text": "The same is true, by the way, for all the other electrically\ncharged fields, including those of the muon, the up quark, and so\n[Here, by the way, we come across another reason why “virtual\nparticle” is a problematic term. 
I have had several people\nask me something like this: “ Since the diagram in Figure 6 seems\nto show that the photon spends some of its time as made from two\nmassive particles [recall the electron and the positron both have the\nsame mass, corresponding to a mass-energy (E = m c-squared) of\nwhy doesn’t that give the photon a mass?” Part of the\nanswer is that the diagram does not show\nthat the photon spends part of its time as made from two massive\nparticles. Virtual particles, which are what appear in the loop\nin that diagram, are not particles. They are not nice ripples,\nbut more general disturbances. And only particles have the\nexpected relation between their energy, momentum and mass; the more\ngeneral disturbances do not satisfy these relations. So your\nintuition is simply misled by misreading the diagram. Instead,\none has to do a real computation of the\neffect of these disturbances. In the case of the photon, it\nturns out the effect of this process on the photon mass is exactly\nFig. 7: The electron can generate disturbances in the photon\nfield; the resulting photon disturbance can in turn create\ndisturbances in other electrically charged fields, such as the muon\nAnd it goes on from there. Our picture of an electron in Figure 3\nwas itself still too naive, because the photon disturbance around the\nelectron itself disturbs the muon field, polarizing it in its turn.\nThis is shown in Figure 7, and the corresponding Feynman diagram is\nshown in Figure 8. This goes on and on, with a ripple in any\nfield disturbing, to a greater or lesser degree, all of the fields\nwith which it directly or even indirectly has an interaction.\nFig. 
8: The Feynman diagram needed to calculate the process shown\nin Figure 7.\nSo we learn that particles are just not simple objects, and\nalthough I often naively describe them as simple ripples in a single\nfield, that’s not exactly true.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-1", "d_text": "For example, even though they’re not fundamental, if you placed two protons a single meter apart, the electromagnetic repulsion between them would be approximately 10³⁶ times stronger than the gravitational attraction. Or, and I’ll write it out just this once, we’d need to increase the force of gravity’s strength by 1,000,000,000,000,000,000,000,000,000,000,000,000 in order to have its strength be comparable to the other known forces.\nYou can’t just “make” a proton weigh 10¹⁸ times as much as it would normally; that’s what it would take to make gravity bring two protons together, overcoming the electromagnetic force.\nInstead, if you want to make a reaction like the one above happen spontaneously, where protons do overcome their electromagnetic repulsion, you need something like 10⁵⁶ protons all together. Only by collecting that many of them, under their combined force of gravity, can you overcome electromagnetism and bring these particles together. As it turns out, 10⁵⁶ protons is approximately the minimum mass of a successful star.\nThat’s a description of the way our Universe works, but we don’t understand why. Why is gravity so much weaker than all the other forces? Why is the “gravitational charge” (i.e., mass) so much weaker than the electric or color charge, or even than the weak charge, for that matter?\nThat’s what the Hierarchy Problem is, and that problem is by many measures the greatest unsolved problem in physics. We don’t know the answer, but we’re not completely in the dark on this.
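The proton-proton comparison above follows directly from Coulomb's and Newton's laws: both forces fall off as 1/r², so the separation cancels and the ratio is a pure number, about 1.2 × 10³⁶. A sketch with rounded CODATA-style constants:

```python
# Ratio of electrostatic repulsion to gravitational attraction
# between two protons; the 1/r^2 dependence cancels out.
k_e = 8.988e9    # Coulomb constant, N*m^2/C^2
G = 6.674e-11    # gravitational constant, N*m^2/kg^2
e = 1.602e-19    # elementary charge, C
m_p = 1.673e-27  # proton mass, kg

ratio = (k_e * e**2) / (G * m_p**2)
print(f"F_electric / F_gravity ~ {ratio:.2e}")
```

Note the ratio is independent of how far apart the protons are, which is why the hierarchy between the forces cannot be argued away by choosing a different distance scale.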
Theoretically, we have some good ideas as to what the solution might be, and a tool to help us investigate whether any of these possibilities could be correct.\nSo far, the Large Hadron Collider — the highest-energy particle collider ever developed — has reached unprecedented energies under laboratory conditions here on Earth, collecting huge amounts of data and reconstructing exactly what took place at the collision points. This includes the creation of new, never-before-seen particles (like the Higgs, which the LHC discovered), our old, familiar standard model particles (quarks, leptons, and gauge bosons), and it can — if they exist — produce any other particles that may exist beyond the standard model.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "It took us thirty years, but we eventually did it. We renormalized all the forces, and we came up with quantum chromodynamics.\nThe graviton can be thought of as a field, but these infinities are real\nIn a many-body loop diagram, our results diverge no matter what we do:\nBut what about gravity? Why didn’t we renormalize gravity? Well, the answer is because the graviton, the particle that carries the force of gravity, doesn’t have a spin of 1. It has a spin of 2. Yeah. So that’s a big problem, because it means that gravity isn’t just harder to renormalize, it means gravity can’t be renormalized at all. It’s a nonrenormalizable force.\nThat’s not to say we didn’t try. We did. For decades. We developed all kinds of crazy theories, and invented at least one major branch of mathematics in the process. But ultimately, we failed. Gravity just isn’t renormalizable.\nI see extended 1-D objects with no mass:\nTo solve this problem, physicists hypothesized the existence of massless strings, which are very small one-dimensional objects that just have units of length.
These strings, on the atomic scale, become protons, neutrons, and electrons, as well as photons, gluons, bosons, and even gravitons. One of the consequences of string theory is that it eliminates the momentum problem, and thus the need to renormalize at all. So finally, we can combine gravity with the other three forces to make a unified theory of everything. Hooray!\nVibrations. Modes! They become particles:\nSo how do strings become particles like quarks, electrons, photons, and gravitons? The answer lies in their vibrations. Strings can vibrate in different ways, depending on their properties. Each of the different vibrations produces what we see, at the atomic scale, as a particle. Strings that vibrate at different frequencies become different particles with different energies.\nThis vibration is also affected by the length of the string, the tension of the string, and whether the string is open or closed. Open strings have ends that are unconnected and free to move, while closed strings have ends that are connected into a loop. Strings can also vibrate in different modes, where higher modes have higher energy states.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-1", "d_text": "Quarks that are produced in particle interactions or decays materialize as “jets” of ordinary particles collimated close to the original quark direction.\nFour fundamental forces act on the fundamental fermions: gravity, the weak force (responsible for nuclear beta decay), the electromagnetic force, and the strong force. These forces occur through the exchange of fundamental bosons: the graviton, the charged W+ and W- and the neutral Z bosons, the photon, and eight gluons. (Gravity will not be discussed further here.) All of the fundamental fermions have interactions via the weak force, and all of the charged fundamental fermions have electromagnetic interactions.
Only the quarks can interact via the strong force, and particles such as protons that are made up of quarks and therefore have strong interactions are called hadrons (the “heavy ones”). The fundamental particles and forces are summarized in Fig. 1.\nPhotons, which have no mass, carry the electromagnetic force, whereas the massive charged W± and neutral Z are responsible for the weak interactions; all of these particles are spin-one bosons. The minimal standard model requires in addition a massive scalar boson, the Higgs boson, to allow the W and Z to be massive, as described by the Higgs mechanism. The lowest energy state of the Higgs field has a nonzero value, which has the dimensions of mass. Particles obtain their mass from their interactions with this Higgs field—this is the reason the Higgs boson plays such a major role in physics. The photon has no such interactions, so it retains its massless character, while the masses of the W and Z are approximately 100 times the mass of the proton. The asymmetry between the masses of the photon and the W and Z bosons is called “electroweak symmetry breaking.”\nAccording to theory, the Higgs occurs as a doublet of complex scalar fields, giving four degrees of freedom. Three of the four degrees of freedom are unphysical but are needed as intermediate states in the theory, while the fourth degree of freedom corresponds to the single physical Higgs boson. Once the Higgs mechanism is included, the electromagnetic and weak interactions are unified into one interaction—the electroweak interaction.\nThe Higgs boson, or something else that plays its role, is necessary in the standard model, but it has not yet been observed.
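For scale, the boson-to-proton mass ratios discussed above can be checked numerically; a quick sketch in Python, where the GeV values are standard measured central values (they come from the particle data tables, not from this text):

```python
# Mass ratios of the weak bosons to the proton, using measured values in GeV/c^2.
M_W = 80.38   # W boson mass
M_Z = 91.19   # Z boson mass
M_P = 0.9383  # proton mass

ratio_w = M_W / M_P  # ~86
ratio_z = M_Z / M_P  # ~97
print(f"m_W/m_p ~ {ratio_w:.0f}, m_Z/m_p ~ {ratio_z:.0f}")
```

Both ratios land within about 15% of 100, which is the level of precision the prose is aiming at.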
Therefore its discovery is of utmost importance in particle physics.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-10", "d_text": "Perhaps the two equal-mass components of the K0-meson, K0S and K0L, are to be interpreted in this way.\nFinally, the requirement that empty space be characterized by vanishing zonal occupations and electric charge states leads to some masses in the case of k = 1, which may be interpreted as neutrino states. However, these refer neither to rest masses nor to free field energies (in analogy to photons), but to quantum-like \"field catalysts\", i.e. particles able to catalyse nuclear reactions that otherwise would not take place. They transfer group theoretical properties, arising from the sets of quantum numbers, through physical space.\nThe formula derived in for the spectrum of elementary particles also depends on an integer N >= 0, where N = 0 refers to the 25 ground state masses. For N > 0 the sets of quantum numbers again yield masses, which now denote resonance excitations of the basic structural patterns to states of higher energy. According to the dynamics of configuration zones only a single set of zonal occupations is possible for each N. Evidently, the corresponding masses represent short-lived resonance states, for all measured resonances appear among these spectra. In each case N is limited, since for every set x of quantum numbers there exists a finite resonance limit, Gx < ∞, such that the closed interval 0 <= N <= Gx applies to every resonance order N, including the ground state.\nOut of the relatively large number of logically possible particle masses present-day high energy accelerator experiments only record the small subset of particles whose probabilities of formation (depending on experimental conditions) are sufficiently large.
What evidently still is lacking is a general mathematical expression relating these probabilities of formation to particle properties and experimental boundary conditions.\nThe theory developed in [1] and represents a semi-classical investigation. A third volume \"Strukturen der physikalischen Welt und ihrer nichtmateriellen Seite\" (Structures of the Physical World and its Non-Material Aspect), , leads beyond the semi-classical domain to a dynamics in the hyperspace of R12.\nInstitut für Kraftfeldphysik u. Kosmologie\n Heim, B. Elementarstrukturen der Materie, Vol.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-1", "d_text": "Kaufmann found that e/m decreased more rapidly with velocity than predicted by the Gamma Factor of Relativity, and this was initially interpreted as evidence that Einstein’s theory was falsified.\nIn anticipation of such experimental conclusions, Special Relativity axiomatically declared that light can have no mass, since any entity having mass becomes infinitely ‘heavy’ if it moves at the speed of light. And this saved Relativity for the time being.\nBut all this should have been a warning that physics was moving down the wrong path. How can anybody draw conclusions from subjective observations stemming from experiments, and declare that activity to be Scientific?\nAt that time, no one entertained the idea that the ratio e/m decreases because the charge of the electron, e, decreases with increasing velocity. Of course not! Relativity was gaining momentum in the scientific community and they would not allow anybody to divert them from their course; as is the case today.\nAt the beginning of the twentieth century the sub-atomic nature of matter was not yet understood (is it now?). The nature of radiation was a total mystery (X-rays had just been discovered, and it was concluded that radium emitted two types of ‘radiation’, alpha and beta rays).
Such nonsense led them to irrationally conclude that atoms are not the fundamental, basic, indivisible building blocks of nature. It was easier for them to deal with MASS as the variable, and the electron as the constant, in the inseparable connection e/m.\nSo What is MASS?\nMass is typically defined by the mathematicians as “the quantity of X”, where X usually designates matter (atoms).\nAll quantities are concepts that were invented by man. The Universe embodies a relation of objects and space, not “mass”. Mass is purely a conceptual relation of a measured quantity. It takes a human with a brain to perform such magical wizardries as counting and relating. And this is obvious because the SI unit for mass is the “kilogram”. This means that the “mass” of an object is the same as the “weight” of the object.\nSo is the mass of an object the MEASURE of the amount of matter? Of course it is, because weight is also a MEASURE of the amount of matter. We have no idea how many atoms comprise the object because this is impossible to determine. All we know is how much it weighs!", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "In any type of matter there are\ngenerally many, many electrons and protons. In 12 grams of carbon-12, there\nare almost one million million million million atoms. If we pull those\natoms apart and separate the electrons and protons, the force between them\nis unbelievably enormous (see the penny puzzle).\nIn every atom, there are an even number of electrons\nand protons, so the external electric forces tend to cancel. However, there\nis a tiny bit of force that is left over that we know as \"the force\nof gravity\" (see The Secret of Gravity).\nHaving an equal number of electrons and protons, each atom consists of a\nnumber of hydrogen atoms (another way to look at the problem that physicists\nprobably wouldn't appreciate). The neutron is simply a compressed hydrogen\natom that is rather unstable by itself.
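The atom count quoted above is just Avogadro's number: 12 grams of carbon-12 is, by definition, one mole. A minimal check (the constant is the standard SI value):

```python
# 12 g of carbon-12 is one mole, so the atom count equals Avogadro's number.
AVOGADRO = 6.02214076e23  # atoms per mole (exact, by the SI definition)

grams = 12.0
molar_mass_g_per_mol = 12.0  # carbon-12
atoms = grams / molar_mass_g_per_mol * AVOGADRO
print(f"{atoms:.3e} atoms")  # ~6.022e23, i.e. just over half of 10^24
```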
The neutrons associate closely with\nthe protons in an atom, so the nucleus can acquire additional neutrons,\nand the new atoms are called \"isotopes\". With these considerations,\nwe can view all types of mass as an assemblage (of one sort or another)\nof hydrogen atoms. An atom can also gain or lose a few electrons, thus creating\nions. The point is that all mass consists of electrons and protons and in\npairs (hydrogen atoms).\nThe internal workings of these atoms can be incredibly\ncomplex, with all sorts of orbit paths and resonances. My present endeavor is\nto analyze these workings. In any case, it becomes clear that we\nneed to understand hydrogen in order to understand the universe,\nand it only consists of two tiny particles! What an incredible and highly\nintelligent design! The models of electrons, protons and atoms were all\ndevised from a vast number of measurements. These measurements are of the\nforce fields that surround them and the dynamic electromagnetic fields that\nthey create. Some of that energy can escape into space. The antenna radiation\nequations date back at least to 1936 (Mesny), and they are extremely important\nin understanding physical phenomena.\nThe two charges of the hydrogen atom are separated,\nand we call that an \"electric dipole\". This dipole has an electric\nfield around it that can be, and has been, measured extensively. The two\ncharges attract one another, and the closer they get, the greater the force.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-1", "d_text": "Instead, they found that these electrons had a range of energies.\nAustrian physicist Wolfgang Pauli suggested that the electron was sharing the energy with an unseen partner. Following up this suggestion in 1933, Italian physicist Enrico Fermi published a theory of beta decay that included an unseen particle that had no electric charge and no intrinsic mass.\nHe called it the neutrino (Italian for ``little neutral one'').
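Returning to the electron–proton attraction described above: it can be put in rough numbers with Coulomb's law. A minimal sketch, assuming the Bohr radius as the separation (the constants are standard values, not taken from this text):

```python
# Coulomb attraction between the electron and proton in a hydrogen atom,
# taking the Bohr radius as the assumed separation.
K = 8.9875517923e9    # Coulomb constant, N*m^2/C^2
Q = 1.602176634e-19   # elementary charge, C
R_BOHR = 5.29177e-11  # Bohr radius, m

force = K * Q**2 / R_BOHR**2  # F = k*q1*q2 / r^2
print(f"F ~ {force:.2e} N")   # on the order of 1e-7 N
```

Halving the separation quadruples this force, which is the "closer they get, the greater the force" behavior in the text.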
Two decades later, experimenters demonstrated its existence by managing to detect a few of the millions of neutrinos pouring out of a nuclear reactor.\nPhysicists know the neutrino has no electric charge because it leaves no trail in detectors that track charged particles. But there has never been any proof that it has no mass. This has just been taken for granted.\nThen, in 1980, Valentin Lubimov and his team at the Institute for Theoretical and Experimental Physics in Moscow challenged that assumption. They had taken a detailed look at the way the energy released in beta decay is shared between electrons and neutrinos.\nThey claimed to have found that a small amount of the energy is converted into neutrino mass.\nIt was a startling claim, which many physicists disbelieved because of large uncertainties in the experiment.\nWalter Kundig at the Swiss Institute for Nuclear Research in Zurich and John Wilkerson at the Los Alamos National Laboratory in New Mexico have led experiments to try to confirm the Moscow results. But, so far, they have been unable to do so.\nPhysicists measure particle energies in terms of electron volts. One electron volt (eV) is the energy an electron gains when it is accelerated by a voltage difference of one volt.\nThe Moscow group last year claimed that about 30 eV is converted into neutrino mass in beta decay. That's quite small - only about 1/10000th the mass of an electron. But it's large for a particle that's supposed to have no mass at all. The Swiss group, however, found that the neutrino mass was less than 18 eV and probably was zero. Dr. Wilkerson's team at Los Alamos has found that the mass must be at most 27 eV.
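The eV limits above can be translated into everyday mass units through E = mc². A sketch of the conversion, using the 30 eV Moscow claim from the text (the physical constants are standard values):

```python
# Convert a neutrino-mass claim quoted in eV to kilograms, via m = E / c^2,
# and compare it with the electron mass.
J_PER_EV = 1.602176634e-19  # joules per electron volt
C = 2.99792458e8            # speed of light, m/s
M_ELECTRON_EV = 511_000.0   # electron rest mass, ~511 keV

claim_ev = 30.0                        # the Moscow group's claimed neutrino mass
mass_kg = claim_ev * J_PER_EV / C**2   # E = m c^2  =>  m = E / c^2
fraction = claim_ev / M_ELECTRON_EV    # fraction of an electron mass
print(f"{mass_kg:.2e} kg, ~{fraction:.1e} of an electron mass")
```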
Improvements in their experiment should enable them to search for a neutrino mass as low as 10 eV.\nThe Stoeffl team at the Livermore laboratory expects to do better than that.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-5", "d_text": "It spins on itself like a top, without stopping, and it has a very discreet form of interaction (the weak interaction) with many other particles. ||me||9,109 382 6(16)×10⁻³¹ kg|
|Muon mass In the standard model of particle physics, the muon is an elementary particle with a negative charge. The muon has the same physical properties as the electron, but with a mass 207 times larger; it is also called the heavy electron. ||mµ||1,883 531 40(33)×10⁻²⁸ kg|
|Tau mass Tau or tauon is a particle of the lepton family. Its properties are close to those of the electron and the muon, but it is more massive and short-lived. With its associated neutrino and the t (top) and b (bottom, or beauty) quarks, it forms the third series (the most massive) of fermions in the standard model. ||mτ||3,167 77(52)×10⁻²⁷ kg|
|Boson mass Z0 The gluon is the mediator of the strong interaction, i.e. of the nuclear force, and the photon is the mediator of the electromagnetic interaction, but the weak interaction still had no mediator. The physicist Peter Higgs imagined one in the 1960s. This hypothetical particle is called the Higgs boson. Thus the Higgs mechanism fills the entire universe and all of space, like molasses, with a field of bosons. ||mz°||1,625 56(13)×10⁻²⁵ kg|
|Boson mass W The gluon is the mediator of the strong interaction, i.e. of the nuclear force, and the photon is the mediator of the electromagnetic interaction, but the weak interaction still had no mediator. The physicist Peter Higgs imagined one in the 1960s. This hypothetical particle is called the Higgs boson. Thus the Higgs mechanism fills the entire universe and all of space, like molasses, with a field of bosons.
||mw||1,4334(18)×10⁻²⁵ kg|\nnota: Meter (symbol m) is the length of the path traveled in a vacuum by light during 1/299 792 458 second.\nnota: Kilogram (symbol kg) is the base unit of mass in the International System of Units (SI). It is defined as being equal to the mass of the international prototype of the kilogram.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "Physicists are homing in on the mass of the neutrino, nature’s most elusive subatomic particle. The latest super-accurate measurement, made by an experiment in Germany, shows that the neutrino is around half a million times less massive than the electron, the lightest particle of normal atomic matter.\nAccording to the Standard Model, the high point of 300 years of physics, which describes the fundamental building blocks of matter and three non-gravitational forces that glue them together, the neutrino should be massless.\nSo why should we care about a mass measurement (no matter how tiny) of a neutrino? Well, it may provide vital clues to the fabled ‘theory of everything’ – a deeper, more fundamental theory of physics of which the Standard Model is believed to be but an approximation.\nHunting the elusive ghost particle\nThe latest neutrino measurement was made in Karlsruhe, Germany, where physicists exploited the ‘beta decay’ of tritium. Tritium is a heavy type – or ‘isotope’ – of hydrogen. In beta decay, the unstable core – the ‘nucleus’ – of an atom sheds surplus energy by spitting out an electron and an antineutrino (the neutrino and its ‘antimatter’ twin have the same mass).\nNeutrinos are fantastically antisocial, interacting so rarely with normal matter that they could pass unhindered through several light-years of lead.
Consequently, the physicists at the Karlsruhe Tritium Experiment, or KATRIN, must infer the neutrino mass from measurements made on their electrons.\nThey can do this because the amount of energy emitted by the tritium nuclei is always the same. The energy is divided between the electron and the neutrino – if an electron has lots of energy, then it must mean that its associated neutrino only has a little bit. So if the physicists only allow the most energetic electrons to reach their detector, it ensures that their associated neutrinos will have very little energy – this allows them to make a more accurate reading of the neutrinos’ mass.\nRead more about the Standard Model:\n- The weak nuclear interaction: the enigmatic fundamental force that makes life possible\n- Who really discovered the Higgs particle?\n- Wolfgang Pauli and the discovery of the Universe’s most elusive particle\nKATRIN is an extraordinary piece of engineering.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-3", "d_text": "The fact that a particle does not choose a specific state until its observation (measurement) brought Einstein to remark: “God does not play dice.” It is clear that Einstein meant that there must be an underlying, understandable reason for the presumed transmission of information. However, to this day, a satisfactory explanation for this phenomenon has not been found.\nThere are also questions in which micro level and macro level both play a role. First of all, there is the attraction of a photon by a gravitational field. A photon is deflected in its track by a heavy mass in space (Figure 1). Why does the photon obey Einstein’s ideas of curved space/time? Traditionally the photon is considered to be massless, which is why the underlying mechanism has not yet been fully understood. Then there is the gravitational redshift that a photon (in space) undergoes when close to an object with an enormous curvature (black hole).
In fact, on the event horizon of a black hole, the redshift becomes extreme (infinite). Although both of these phenomena have been universally accepted and observed, there is no full comprehension. Why does the photon undergo such a deflection and what is the mechanism of the gravitational redshift?\nFig. 1 (Deflection of a photon close to an object with a heavy mass)\nThese and other matters lead physicists to constantly re-evaluate the interpretation of quantum physics. Their mutual goal is always to find a reformulation of the existing framework.\nIn this article, we propose a theory that, in fact, forms the foundation for the understanding of nuclear forces both on the micro as well as the macro scale. For the observed phenomena, we offer an unconventional explanation. Will the previously mentioned pressing questions be answered? We believe so.\nIn this article, we will make a number of assumptions that will fit the model we propose.\nThe basis of the theory is: The most elementary particle in existence is the dimensional basic. This particle has only one property: An infinite curvature in the center. The particle itself has no dimensions (no length, no width, no height). The particle is found everywhere in the universe. The particle is always moving through space/time. Through agglomeration, or rather joint interaction, the particles form phenomena that at a certain moment rise above the observational limit. The db itself exists below the observational limit and so it can never be demonstrated.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-4", "d_text": "So, if all the atoms constituting the elements are made up of the same particles, what then is it that makes the elements different from each other and causes the formation of infinitely diverse matter?\nIt is the number of protons in the nuclei of the atoms that principally differentiates the elements from each other.
There is one proton in the hydrogen atom, the lightest element, 2 protons in the helium atom, the second lightest element, 79 protons in the gold atom, 8 protons in the oxygen atom and 26 protons in the iron atom. What differentiates gold from iron and iron from oxygen is simply the different numbers of protons in their atoms. The air we breathe, our bodies, the plants and animals, planets in space, animate and inanimate, bitter and sweet, solid and liquid, everything… all of these are ultimately made up of protons, neutrons and electrons.\nThe Borderline of Physical Existence: the Quarks\nUntil 20 years ago, it was believed that the smallest particles making up the atoms were protons and neutrons. Yet, most recently, it has been discovered that there are much smaller particles in the atom that form the abovementioned particles.\nThis discovery led to the development of a branch of physics called \"Particle Physics\" investigating the \"sub-particles\" within the atom and their particular movements. Research conducted by particle physics revealed that the protons and neutrons making up the atom are actually formed of sub-particles called \"quarks\".\nThe dimension of the quarks that form the proton, which is so small as to exceed the capabilities of human imagination, is much more astounding: 10⁻¹⁸ (0.000000000000000001) metres.\nThe quarks inside a proton can never be pulled apart from each other very much because the \"strong nuclear force\" that is responsible for keeping the particles inside the nucleus together operates here as well. This force serves as a rubber band between the quarks. As the distance between the quarks increases, so does this force and two quarks cannot become more distant from each other than a quadrillionth of a metre. These rubber bands between the quarks are formed by gluons that possess the strong nuclear force. The quarks and the gluons have a very strong interaction.
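The point above — that the proton count alone decides which element an atom is — can be sketched as a simple lookup. The table and function below are illustrative only; the atomic numbers themselves are standard:

```python
# An element's identity is fixed by its proton count (atomic number) alone.
PROTONS_TO_ELEMENT = {
    1: "hydrogen",   # the lightest element
    2: "helium",
    8: "oxygen",
    26: "iron",
    79: "gold",
}

def element_for(protons: int) -> str:
    """Return the element name for a given proton count."""
    return PROTONS_TO_ELEMENT.get(protons, "unknown")

print(element_for(79))  # gold
print(element_for(8))   # oxygen
```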
However, scientists have not yet been able to discover how this interaction takes place.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "The name “physics” came from Greek, meaning “nature”. Indeed, physics could explain the whole universe with its complicated string theory. To me, chemistry and biology are actually physics. At the end of the day, substances and all reactions are made of particles and the forces among them.\nThis is a talk by Professor Michio Kaku, which could give anyone inspiration for science\nBonus for anyone interested in string theory:\nIt would be amazing to think that the smallest particles are not particles but “dancing” filaments of energy. Different dancing patterns will give us different recipes to make particles, then we have different things in the world. Generally, things are energy and energy is things!\nTo me, the statement is so true for two reasons:\n– It is actually isomorphic with the formula E = mc² by Einstein.\n– The number of dancing patterns is unlimited and that’s why scientists keep discovering lots of new particles.\nAnd I have a couple of thoughts:\n– The Universe (in capital, meaning the universe that we are living in) is made of energy! Then is it possible that our Universe is just dancing filaments in other universes?\n– Dancing filaments make particles, so is it true to say that if they don’t dance, they make “the dark energy”?\n– What makes them dance?
(or easier, what can we do to change the dance patterns, so we will be able to transform particles)\n– Do the filaments have anything to do with the Higgs Boson?\nYou might have more questions than me after watching this talk by Brian Greene\nand an introduction to string theory by Michio Kaku:\nP.S.: non-matter “dark energy” is different from “dark matter”.\nThese are some assumptions about dark matter:\nHow we know that the universe is expanding:", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "TL;DR – Matter is seen as a result of energy field interactions in the Universe and in order to get rid of matter, fields need to stop existing first.\nIf someone asked you to describe matter, you’d probably say that anything which occupies space and has some volume is called matter.\nBut what if you were asked to define mass? Well, the most general answer found in every school’s textbook is that mass is the matter contained in an object.\nIt’s funny how we describe mass as ‘matter’ and matter as ‘something’. Ever wondered what that ‘something’ actually is?\nHiggs Boson: The Internet’s Darling\nThere are a lot of fields swarming around us. Every particle we know so far has its own field. So we have both types of particles: with mass (electrons etc.) and without mass (photons etc.).\nIf you know how relative velocity works and how friction plays its role, you might get the hang of it. See, every field exists even when we cannot feel its presence. That’s because whenever we experience a field, we are actually interacting with its respective particle which is nothing but an excitation or concentrated energy in that field.\nFrom the recent observations, we know that the Higgs Field is never null, it is always excited with a positive value.
So it exists everywhere and always has its effect on everything.\nTo know how things get their mass, we need to understand:\nHow Fields Interact\nOne field can penetrate and interact with the other with the help of its excitation. Basically, their respective excitations are their means of communication. Ever noticed why atoms weigh more than the masses of their subatomic particles combined? That’s because not all the energy is coming from the particles that we notice in there: the protons, neutrons and the electrons.\nComing back to our concept of relative velocity, whenever a particle passes through another field, it experiences a sort of a drag (like friction) in its way. This slowing down in its speed is what the particle perceives as mass. Otherwise, the particle that now has a mass would have moved at the speed of light. And more specifically, a rest mass (because we can measure it). But we know that particles like photons have no rest mass and they travel at the speed of light. That’s probably because they don’t feel any drag from other fields.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-6", "d_text": "The equation says that just as Space and Time are not the separate things they appear to be, Mass and Energy are also two sides of the same coin. When we look at it in one way we see Mass; when we look at it another, we see Energy. Likewise, when we look at it in one way we see Space; when we look at it another way, we see Time. This led to the concept of the Spacetime Continuum.\nThe Standard Model of quantum mechanics classifies particles based on their basic properties: Mass, Charge, and Spin. Spin is related to the angular momentum of a particle. The first classification is based on Spin.
Particles fall into two classes: Fermions, with half-integer spin, and Bosons, with integer spin.\nWhen a particle and its antiparticle meet, they mutually annihilate one another and convert all their mass into energy, producing a photon having the exact energy of the combined particles (via E = mc²). The process can actually happen in reverse. A high-energy photon can turn into a particle-antiparticle pair. The antiparticle will generally soon find a particle to annihilate with again to create another photon of the same energy. This is called pair production.\nThe fact that matter exists at all in the Universe means that there is very little antimatter present in the Universe. Else there would be constant annihilation and the Universe would be full of gamma-rays.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "Well, I am trying to understand the basics of mass and I got myself into a deadlock of questions. Firstly, the definition of mass as I know it is: a quantitative measure of an object's resistance to a change in its speed. Assumption: particles have charge but no mass, and they just fall into the significant field of each other; there is nothing they interact with except each other and a force F. Now consider a situation where a particle with charge, say -q, comes into the significant field influence of another particle with charge +Q. It then follows a particular path P. Suppose a force F is used to bring about a change in the path of -q. The force F does so in two ways: at first, it tries to change the path of -q by changing its state of motion (either by accelerating or decelerating it); thus the resistance offered to F relative to it appeared because of the electromagnetic interaction between the charges. In the second case, the force F tries to change the trajectory of -q but by not changing its state of motion (maintaining its velocity); in this case also, the resistance offered was due to the electromagnetic interaction.
So, according to the definition of mass, the resistance offered in the first scenario should add to mass but not in the second, though it can clearly be seen that the resistance offered is because of the electromagnetic interaction in action. SO WHAT IS WRONG WITH THE PICTURE DEPICTED?", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-51", "d_text": "These fluids, again, can be imagined in two different ways: we may imagine that they consist of a substance uniformly occupying the whole of the space where the fluid is; or we may imagine that they consist of clouds of little corpuscles each of which is a minute sphere of electricity. Experiment, however, has decided in favour of the second view, and some thirty years ago it showed that negative electricity consists of minute corpuscles which are all identical, and have a mass and an electric charge of extremely small dimensions, called electrons. These have been successfully segregated from Matter in bulk, and their behaviour when moving in empty space has been observed; and it has been found that in fact they move in the way in which small particles, electrically charged, ought to move in accordance with the Laws of classical Mechanics; while by observing their behaviour in the presence of electrical or magnetic fields it has proved possible to measure both their charge and their mass which, I repeat, are extremely small. The demonstration of the corpuscular structure of positive electricity, on the other hand, is less direct; nevertheless physicists have come to the conclusion that positive electricity, too, is subdivided into corpuscles which are identical with each other, today known as protons.
The proton has a mass which, though still extremely small, is nearly 2,000 times greater than that of the electron, a fact indicative of a curious asymmetry between positive and negative electricity. The charge of the proton, on the other hand, is equal to that of the electron in absolute value, but of course bears an opposite sign, being positive and not negative.\nElectrons and protons, then, have extremely small mass. This mass, however, is not equal to zero, and a really vast number of protons and electrons may make up a fairly considerable total mass. Hence it is tempting to assume that all material substances whose essential characteristics consist in the fact that they possess weight and inertia, in other words, that they have mass, consist in the last analysis exclusively of vast numbers of protons and electrons.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-1", "d_text": "Probabilities are the best you can do with subatomic processes. Take for example the decay of radium. If we lock it away in a vault for 1600 years, when we open the vault half of the original sample will be there. If we wait another 1600 years, half of that half. Every 1600 years, half of all the radium atoms in the universe decay. We can know with great precision the bulk statistical properties but we cannot know which atoms will undergo decay or when an atom will decay. The Copenhagen interpretation of quantum mechanics is that, for any given atom, it is pure chance. We can only predict the state of the system, not of an individual atom.\nWhen you look at these particles, they aren’t exactly like a particle at all, they don’t seem to be solid with defined edges; rather we determine what is known as a nuclear cross-section by firing other particles and looking at how things bounce to deduce the effective size of the particles.
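The radium bookkeeping above is ordinary exponential decay; a minimal sketch using the 1600-year half-life quoted in the text:

```python
# Fraction of a radium sample remaining after t years,
# given the ~1600-year half-life quoted in the text.
HALF_LIFE_YEARS = 1600.0

def fraction_remaining(t_years: float) -> float:
    """N(t)/N0 = (1/2) ** (t / half_life)."""
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

print(fraction_remaining(1600))  # 0.5  -> half left after one half-life
print(fraction_remaining(3200))  # 0.25 -> "half of that half"
```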
Their actual boundaries seem rather fuzzy and undefined.\nIn reality particles might not be physical at all, possibly some sort of standing wave. Then you have to ask yourself: waves of what? The Copenhagen interpretation of quantum mechanics says they are probability waves. Under this interpretation, matter doesn’t really exist until we observe it. Some people have taken this to the extreme and suggest that nothing in our reality exists until we observe it.\nSo what you have then is not really matter at all, but more of an idea or thought. A potential for something to exist, not something. At one point I had become convinced that an intelligent consciousness was an elementary property of matter, just like mass, charge, and spin. I have since changed my view, however; I now believe matter is consciousness. Everything is thought.\n“In the beginning there was the Word, and the Word was with God, and the Word was God.”\nThe word wasn’t spoken, for in the beginning there was no air to propagate sound. The word could have only been thought. The thought didn’t bring the universe into existence, it is the universe. God, in this interpretation, isn’t something that is separate from the universe; God is the all that is.\nWe are part of that, a drop of water from the ocean, a tiny fraction of God’s thought.", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-1", "d_text": "And we can’t measure their mass when they move at the speed of light.\nVirtual and Real Particles\nVirtual particles are the excitations in the fields which we can’t notice because they interact with each other very swiftly to give rise to noticeable dense excitations called real particles. Thus, the real particles are actually the best approximations of all the possible virtual states of that particle. And we usually measure the mass of the real particle. This doesn’t mean that the virtual particles don’t contribute to the actual overall mass of any system.
A real particle still spends some time in its virtual states (though only for a very short time). So, we can measure the approximate effect of these virtual particles from time to time as an additional mass.\nThe Quantum Vacuum\nThe Quantum Vacuum is the most stable state, or perhaps the ground state, of the Universe. It is the place where virtual particles are in action, appearing and disappearing in fractions of fractions of seconds. The Quantum Vacuum is what gives rise to the real particles with the most stable, longest-lived approximations of their virtual particles. The Quantum Vacuum still has some measurable energy due to the existence of Quantum Vacuum Fluctuations. In order to get rid of all the energy and matter, we would need to clear up the fields in the Quantum Vacuum, which is ultimately next to impossible.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "It seems from the above that the scientific community has continued to proceed on the basis of mass and energy being two separate realities.\nPicoPhysics does not distinguish between mass and energy. The thought process in PicoPhysics is to unify divergent experiences of nature. Mainstream physics moves ahead by coding the experiences into mathematical equations for unambiguous communication.\nThe mathematical equations continue to represent nature, but interpretations may not. We can see this in interpretations of ‘E=Mc2’, from which originates the thinking of massless particles of energy as well as Higgs Bosons. This thinking was exposed in blogs at http://blog.vixra.org/2012/07/04/higgs-live-vixra-combinations/ and http://blog.vixra.org/2012/07/04/congratulations-its-a-boson/ .\nA photon, which represents the natural unit of Knergy, is the particle that exists. The self-inflicted confinement due to gravitation bundles multiple photons into a confined region, identified as a particle in mainstream physics. The photon is sometimes referred to as a UCO – unit conserved object – in PicoPhysics. 
An integer number of UCOs is confined into mainstream particles. These particles have the limitation of an observed speed less than light speed, due to the vortex-like flow of UCOs inside the particle. So particles with mass have the speed of light as a limit. Another possibility is for a particle to capture singularities in space. Such particles have a limited lifetime and no light-speed limitation. The neutrino is one such particle, which shall have no limit on speed.\nThus mass and energy are simultaneous in PicoPhysics. The properties of inertia and gravitation are directly attributed to Knergy. Energy is a measure of these properties.\nUnderstanding CERN results\nPicoPhysics is sought to be presented in four parts. The first part, covering basic concepts, is now fully spelt out on site. Part two prepares the ground and explains the creation (re-arrangement of energy into vortexes, with self-inflicted refraction) of elementary particles. Part three will cover the extra-nucleus part (atomic structure) and electrodynamics.\nPart four comes back to foundations. We need three units of measure to express physical identities. Planck’s constant, as the natural unit for Knergy, is one of them.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-11", "d_text": "So, if I must measure the mass of an iron bar relative to the sample mass of a second iron bar, I can use the sample bar itself as an instrument of measure, throwing it with a known force against the other and measuring the acceleration imparted to it. In such a case, however, the result will differ widely depending on whether the two bars are both magnetic (and whether homologous or opposite poles are facing), or only one, or neither. This is because the method and the instrument of measure are sensitive, in interactions at very short distance, to the gravitational magnetic effect.\nFor the same calculation I will also be able to use the Earth as the instrument of measure, putting the two bars on a scale. 
The magnetic characters of the bars will then become almost irrelevant in the face of the terrestrial gravitational field, which is enormous as a value of mass but relatively weak as a value of dipolarity (magnetism): the scale will in any case give me an almost exact relation between the two masses. Naturally, using this method, I will note the practical proportionality of the force to the masses; using the other, the near non-proportionality of the force applied to the masses.\nMoving to the world of particles, we will meet completely analogous situations. Making the particles interact with one another, we will mainly pick up the gravitational effects perceptible at the shortest distances – density and magnetism – and we will pull out “charges”, mythological signs of “plus”, “minus” and “anti-”, nuclear forces of binding and exchange, and so on. Having taken the tare of all these ingredients, we will calculate the masses. It is the method of the two iron bars. Or we will use electromagnetic fields, which are – in proportion to the particles – the equivalent of the terrestrial gravitational field used in the method of the scales. 
And here is the “mass spectrograph”, which will give values of mass closer to the naked ones of the quantity of matter, it being sufficient, in order to calculate them, to start with particles in an equal condition of “charge”.\nBut in reading the results a coarse gap appears with respect to the measurement of macrocosmic masses: since the forces in play in the electric and magnetic fields are presumed not to be of gravitational type, they come to be considered absolutely non-proportional to the particle masses, just as electromagnetic forces manifest when applied to macrocosmic masses.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-21", "d_text": "(Note: in the Theory of Absolutes, the Strong and Weak forces will be developed in a manner to show that they are both subsets of the electrical and magnetic forces. Likewise, gravity may be a subset of these two, or possibly a third fundamental force. The larger point is that field forces (e.g. electrical and magnetic) are actually the action of Conscious Particles. Their movement in response to the presence of other particles gives the appearance of a force, when in actuality all particulate movement is self-directed, chosen, willful, and lawful, and in relationship to information received from other particles both near and far.)\nThe Subatomic Particles and the Conscious Particles:\nThe aggregations of Conscious Particles that form subatomic particles are held together by electrical and magnetic forces in a dynamic harmonic stability. The common subatomic particles (i.e. electrons, neutrons, and protons – the rudimentary constituents of the atom) and the exotic subatomic particles (e.g. pions, kaons, muons... formed by high-energy collisions or by decay of larger particles) are the smallest level of particulate aggregations visible in force- and energy-type experiments (i.e. bubble chambers).\nEach particle of the subatomic zoo forms from a unique configuration of Conscious Particles. 
And, the larger subatomic particles assemble from aggregates of these smaller subunits. Each unique subatomic particle forms as an aggregate dynamic assembly of a specific conformation of many Conscious Particles and particle groups. Each subatomic particle forms as a dynamic, resonant, equilibrium assembly of attractive and repulsive magnetic and electrical forces. The periodic relationship of stable configurations results in the ability to predict the existence of new, unseen, subatomic entities. The full list of all the subatomic particles in the subatomic zoo is called the Standard Model.\nImplications of the Theory\nScience versus Creationism:\nWe shall assume as an axiomatic hypothesis that God is an existent entity, and assume He has the ability to create law abiding particles using the force and abilities of His consciousness.\nThe question is thus, “What are the laws that govern the Conscious Particles?” The answer to this question is the ultimate physics theory. In it lies the unification of all physical phenomena. Its answer may also govern, or at least largely influence, the processes of what we recognize as life.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-2", "d_text": "Only about one collision per trillion will produce a Higgs boson in the giant atom collider, and it took CERN several months after the discovery of a new \"Higgs-like\" boson to conclude that the particle was, in fact, very much like the one expected in the original formulation.\nThe phrase \"God particle\" was coined by Nobel-winning physicist Leon Lederman, but it's disliked by most physicists because it connotes the supernatural. 
Lederman said later that the phrase—mostly used by laymen—was really meant to convey that he felt it was the \"goddamn particle,\" because it proved so hard to find.\nMichael Turner, president of the American Physical Society, an organization of physicists, said the Higgs particle captured the public's imagination.\n\"If you're a physicist, you can't get in a taxi anywhere in the world without having the driver ask you about the Higgs particle,\" said Turner, a cosmologist at the University of Chicago.\nTurner said the Higgs is the first in a class of particles that scientists think played a role in shaping the universe. That means it points the way to tackling mysteries such as the nature of dark energy and dark matter, he said.\nThe physics prize was the second of this year's Nobels to be announced. On Monday, the Nobel in medicine was given to U.S. scientists James Rothman, Randy Schekman and Thomas Südhof for discoveries about how key substances are moved around within cells.\nThe Nobel Committee Prize Announcement\nFrançois Englert and Peter W. Higgs are jointly awarded the Nobel Prize in Physics 2013 for the theory of how particles acquire mass. In 1964, they proposed the theory independently of each other (Englert together with his now deceased colleague Robert Brout). In 2012, their ideas were confirmed by the discovery of a so-called Higgs particle at the CERN laboratory outside Geneva in Switzerland.\nThe awarded theory is a central part of the Standard Model of particle physics that describes how the world is constructed. According to the Standard Model, everything, from flowers and people to stars and planets, consists of just a few building blocks: matter particles. These particles are governed by forces mediated by force particles that make sure everything works as it should. The entire Standard Model also rests on the existence of a special kind of particle: the Higgs particle. 
This particle originates from an invisible field that fills up all space.", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-7", "d_text": "See The Foundation of the General Theory of Relativity page 185 where Einstein said \"the energy of the gravitational field shall act gravitatively in the same way as any other kind of energy\". Also see this paper: http://arxiv.org/abs/1209.0563 . I wouldn't say it gets everything right, but I do think it's barking up the right tree.\nJohn, you're jumping the gun to try and beat science and you're making an arse of yourself.\nAccording to QM, anything will be a particle.\nMass: Higgs boson.\nand so on.\nSo, yes, we do \"know\" that there will be a particle associated with it unless the physics is entirely broken and in ways that don't show up except here.\nWow: everything isn't a particle. The messenger particles of QED are virtual photons. They aren't actual particles. Hydrogen atoms don't twinkle, magnets don't shine. It's like you divide the field up into little squares and say each is a virtual particle. See comment #25 onwards in Ethan's weak-force blog entry:\nA gravitational field doesn't literally consist of particles called gravitons flying around. That's why you can detect a gravitational field quite easily by dropping your pencil, but you can't detect gravitons. Because they're virtual particles. Like you've divided the field up into little squares and said each is a virtual particle. And note that the Higgs mechanism is thought responsible for only 1% of the mass of matter. The rest is down to E=mc². So you don't \"know\" that dark matter consists of particles. OK? 
And by the way, Matt Strassler agrees with me on that.\n\"The messenger particles of QED are virtual photons.\"\nPhotons are particles, John.\nEven when virtual, they are particles.\n@ 20 John\nThe fundamental postulate of QM is that EVERYTHING (except maybe spacetime itself) in the Universe can be quantized into discrete packets of energy. Those packets are called particles.\nThus any field, force, energy, matter... whatever it is that makes up our material world can be quantized. Thus DM.. whatever it is, can be quantized. It does have its discrete energy packet value... its particle.\nSinisa: honestly, it doesn't work like that.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-4", "d_text": "These are stable, so they have no need to combine.\nThe Heisenberg uncertainty principle therefore predicts such particles, which exist without any cause.\nBecause there is a probability for these particles to exist, they are called virtual particles.\nParticles of this type have already been seen; they look very strange but follow a particular pattern.\nThey always appear in a pair, in which one is regular matter and the other is antimatter.\nTherefore whenever an electron is made, it is always made together with its antimatter partner,\nwhich is known as the positron. 
Its mass is the same but its charge is opposite.\nHow Does Quantum Physics Actually Work?\nSince the pair is made from energy, when the two collide they release a large amount of pure energy.\nHence, like the electron, every lepton has an antiparticle.\nYou now know that fermions are responsible for making the matter in the universe.\nBut fermions play no role in how the forces work.\nThat is solely the responsibility of the bosons.\nSo now is a good time to look at the types of bosons:\n- Photons (electromagnetic force)\n- W/Z bosons (weak nuclear force)\n- Gluons (strong nuclear force)\n- Gravitons (gravitational force)\nBefore this you may have thought of the electric and magnetic forces as separate,\nbut they are combined together as the electromagnetic force.\nThe electromagnetic force works via photons.\nWithout photons a magnet would not attract iron,\nand without photons electricity would not work.\nYou experience these forces in your daily life.\nUnfortunately, no particle has yet been found that causes the gravitational force at the quantum level,\nbut it is predicted that such a particle must exist.\nFor the time being this gravitational particle is given the name graviton.\nQuantum Mechanics Vs Quantum Physics\nWell, quantum physics is the combination of quantum mechanics and field theory.\nIn classical physics this world looks very simple:\nyou are able to predict the present on the basis of the past. 
As well as the future on the basis of the present.\nBut quantum physics tells us that the world, however simple it looks, is not really simple.\nIn classical physics an object is either a particle or a wave.\nBut in quantum physics a particle is neither purely particle nor purely wave:\nit is a combination of particle and wave.\nUnless you observe it, it will not change its form.\nBut as you observe it, it takes the form of either a particle or a wave.\nSo in quantum physics you are able to predict only the probability of a future event.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "It is commonly accepted as a fact now that the visible matter (made of known particles) accounts for only about 5% of the mass of this universe, while 95% of it is invisible. In recent years, the neutrinos, although not normally visible, have been ruled out as the key factor on this issue. Today, most of the hopes for the answer are placed on the s-particles, postulated by many SUSY (supersymmetric) theories. Yet, the recent LHC data has ruled out many standard types of SUSY. Thus, many SUSY advocates are now desperately hanging on to some bizarre theories which require much fine-tuning.\nIn this Axiomatic physics (AP), the above SUSY arguments are simply wrong for the following reasons.\nA. The Naturalness principle (NP) --- no fine-tuning is allowed in Nature's physics. See the article “Axiomatic physics, the final physics, at http://prebabel.blogspot.com/2012/04/axiomatic-physics-final-physics.html “\nB. The SUSY with s-particles is simply wrong in AP; see the following articles,\ni. “Origin of time, the breaking of a perfect symmetry, at http://prebabel.blogspot.com/2012/04/origin-of-time-breaking-of-perfect.html “.\nii. “Supersymmetry, Gone with the wind, at http://prebabel.blogspot.com/2011/10/supersymmetry-gone-with-wind.html “.\niii. 
No s-particle is allowed in AP --- see the article “48, the exact number for the number of elementary particles, http://prebabel.blogspot.com/2012/04/48-exact-number-for-number-of.html “.\nWhile the above articles showed that s-particles cannot be the dark matter, they do not give an answer to the issue. As the “dark matter” effect is very much a fact of our universe, it is a genuine issue in this AP as well. Thus, a solution must be derived axiomatically.\nI will begin this axiomatic derivation by looking at the example of the proton’s mass. While the proton can be written as the composite of three quarks [u, u, d], its internal structure is, in fact, very complicated according to much test data. In addition to the three quarks, there are many “quark and anti-quark” pairs and many gluons.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "It has the unusual property of becoming stronger as the distance between interacting quarks increases. It eventually becomes so strong that it is impossible to separate two quarks, because the amount of energy required to do so is so great that it would immediately create a new quark-antiquark pair. Theories describing the strong force predict a whole range of exotic particles consisting of different combinations of quarks and gluons. However, these particles have never been observed. They include ‘glueballs’, composite particles consisting solely of gluons. In the PANDA experiment, scientists want to use particle-antiparticle annihilation as a ‘trick’ to create such particles, and discover which of them actually exist and what their properties are. This would greatly increase our understanding of the strong force.\nHow does mass arise?\nIn this context, scientists also want to find out how particles come to have their mass. A proton, for example, weighs 50 times more than the three quarks composing it. 
Hence, the mass arises because the strong force (the gluons) binds the quarks together to create the proton. If scientists can improve our knowledge about the strong force with the help of the PANDA experiment, they might also enhance our understanding of how matter gets its mass.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-2", "d_text": "(It's nicknamed \"the God Particle,\" mostly to get the rest of the human race as excited as the physicists who came up with the whole concept.)\nWhat it all means, in a nutshell, is this:\nIt basically explains how energy turns into elementary particles, or something to that effect. Let's just call it \"the origin of the origin.\" All of this means absolutely nothing to me. It's far beyond the capabilities of my poor little brain. But that's okay, because what physicists don't realize is that philosophers are basically physicists who have found a shortcut to answering those kinds of questions ... and a few that haven't been asked yet. And as a trained daydreamer, I can jump into the field of philosophy without so much as taking a deep breath. So, off we go.\nWhen I was but a kid, a thousand years ago, I had a teacher who was gung-ho on particles. The smaller the better. He would explain how they spun faster than you can imagine, broke down and reassembled elsewhere across space without wasting even a fraction of a nanosecond. How they influenced each other's behavior over long distances instantly. How they could be in two places at the same time and, wonder of wonders, could be both a particle and/or a wave. This guy could talk forever about all kinds of strange magic, and I loved it. Still do, in fact. Even if I never really understood much of it.\nOne day, when this teacher was talking about particles and waves, and wave particles, and I was lost in my daydreams, I woke up and raised my hand. 
When I got permission to speak (in those days you needed permission), I said that in my opinion it was obvious that physicists were looking in the wrong direction. \"It's not about particles,\" I said, \"it's about the space between the particles.\" And I collapsed right back into my daydreams.\nNow, it seems that statement, rightfully denied and ridiculed by my teacher, since I just grabbed it out of thin air, appears to be on the cusp of being vindicated ... thanks to the Higgs boson, the Higgs field, an obese piece of machinery, and whatever else they are messing with over there in Europe. But while smart people in white coats are busy discovering the origin of the origin, I'll jump ahead and explain why I think that the end will not be forthcoming, ever.", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "Update: I went over this answer and clarified some parts. Most importantly I expanded the Forces section to connect better with the question.\nI like your reasoning and you actually come to the right conclusions, so congratulations on that! But understanding the relation between forces and particles isn't that simple, and in my opinion the best one can do is provide you with a bottom-up description of how one arrives at the notion of force when one starts with particles. So here comes the firmware you wanted. I hope you won't find it too long-winded.\nSo let's start with particle physics. The building blocks are particles and interactions between them. That's all there is to it. Imagine you have a bunch of particles of various types (massive, massless, scalar, vector, charged, color-charged and so on) and at first you could suppose that all kinds of processes between these particles are allowed (e.g. three photons meeting at a point and creating a gluon and a quark; or seven electrons meeting at a point and creating four electrons, a photon, and three gravitons). 
Physics could indeed look like this, and it would be an incomprehensible mess if it did.\nFortunately for us, there are a few organizing principles that make particle physics reasonably simple (but not too simple, mind you!). These principles are known as conservation laws. After having done a large number of experiments, we became convinced that electric charge is conserved (the number is the same before and after the experiment). We have also found that momentum is conserved. And lots of other things too. This means that processes such as the ones I mentioned before are already ruled out because they violate some of these laws. Only processes that can survive (very strict) conservation requirements are to be considered possible in a theory that could describe our world.\nAnother important principle is that we want our interactions simple. This one is not of an experimental nature, but it is appealing, and in any case it's easier to start with simpler interactions and to introduce more complex ones only if that doesn't work. Again fortunately for us, it turns out basic interactions are very simple. At a given interaction point there is always just a small number of particles. Namely:\n- two: particle changing direction\n- particle absorbing another particle, e.g.
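The conservation-law filtering described above can be sketched as a toy check. This is purely illustrative (the charge table and function name are invented for the example): it compares total electric charge, in units of the elementary charge e, before and after a candidate process.

```python
# Toy illustration: a process is ruled out if it fails charge conservation.
# The charge table (in units of the elementary charge e) is a hypothetical
# subset chosen for this example.
CHARGE = {"electron": -1, "positron": +1, "photon": 0, "proton": +1}

def conserves_charge(incoming, outgoing):
    """True if total charge is the same before and after the process."""
    return sum(CHARGE[p] for p in incoming) == sum(CHARGE[p] for p in outgoing)

# Passes this check: e- and e+ annihilating into two photons (0 -> 0).
print(conserves_charge(["electron", "positron"], ["photon", "photon"]))  # True
# Ruled out: a photon turning into a lone electron (0 -> -1).
print(conserves_charge(["photon"], ["electron"]))  # False
```

A real theory applies many such filters at once (momentum, energy, lepton number, and so on), which is exactly the winnowing the text describes.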
Aaaaaw, so small and precious.\nFermions belong to one of three known generations from ordinary (I), to exotic (II), to very exotic (III). (These are adjectives I selected to describe the generations.) Generation I particles can combine to form hadrons with effectively infinite life spans (stable atoms made of electrons, protons, and neutrons for example). Generation II particles always form unstable hadrons. The longest lived hadron containing a generation II quark is the lambda particle (made of an up, down, and strange quark). It has a mean lifetime less than a billionth of a second, which is long-lived for an unstable hadron. Generation III particles are divided in their behavior. The bottom quark isn't much stranger than a strange quark, but the top quark is so short-lived that it doesn't exist long enough to do anything. It falls apart before anything knows it exists. Top quarks are only known from their decay products.\nCharge is the property of matter that gives rise to electric and magnetic phenomena (known collectively as electromagnetism). Charge is quantized, which means it can only exist in discrete amounts with restricted values — multiples and fractions of the elementary charge. Particles that exist independently (the electron, muon, and tau) carry multiples of the elementary charge, while quarks carry fractions of the elementary charge. Quarks always bind together in groups whose total charge is an integral multiple of the elementary charge, which is why no one has ever directly measured a fractional charge. In addition, since opposite charges attract, electrons tend to bind to protons to form atoms that are neutral overall. We don't normally notice the electrical nature of matter because of this.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "In the shadows of the Jura Mountains protons are being accelerated to the highest energies ever produced by human beings. 
Deep underground, in the Large Hadron Collider at CERN, the European Center for Particle Physics, which straddles the Swiss-French border near Geneva, a beam of protons are colliding head-on with other protons at the record setting 7 trillion electron-volts (TeV). It’s a process that has been described as akin to smashing fine watches together to determine what makes them tick.\nTaxpayers from the UK to Australia, Japan to Germany, China to Canada, Italy to Estonia, Vietnam to the United States who have invested over 10 billion Euro into the machine may reasonably ask: What do I get?\nThe expectations are clear if arcane.\nWhat we can expect of these experiments, ATLAS (the contrived acronym for A Torioidal LHC ApparatuS) and CMS (Compact Muon Solenoid) at one collision point a couple miles around the beampipe’s arc, is a great deal of new information about the universe.\nThe Holy Grail of the experiments is to either discover the long-sought Higgs boson or demonstrate unequivocally that the Higgs does not exist. The Higgs, which was called “The God Particle” by Nobel Laureate Leon Lederman, is a particle that was proposed by Scottish physicist Peter Higgs to answer the question: how do particles obtain mass. It’s an esoteric sounding question, but to push the frontier of human understanding farther, it must be answered. You see, in the Standard Model of Particle Physics – a model that encompasses everything experimentally verified about the basic building blocks of nature – there is no good “reason” for particles to have mass, but they obviously do. Every theory of how stuff works at the most fundamental level is predicated on this annoyingly successful model. The masses of particles don’t emerge from the theory in a satisfying burst of understanding, they have to be measured and installed by hand.\nMass is a measure of the inertia of an object. The heavier it is, the harder it is to turn or accelerate. 
The Higgs mechanism describes a field of particles, not unlike the gravitational fields that permeate space near things like stars and planets, that saturates the universe and constantly interacts with particles in a way that slows them down, imposing inertia on them.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "If you're feeling as if you've lost a bit of weight recently, blame physicists – recent calculations make the mass of the proton 30 billionths of a percent lighter than before.\nOkay, so the sub-atomic particle hasn't changed size, but new experiments have promoted a rethink on what the proton probably weighs. Strangely, while the new figure is three times more accurate than the previous one, nobody is sure why the new number is different.\nExperiments conducted by the Max Planck Institute for Nuclear Physics in Germany used magnetic fields to trap a proton inside a vacuum chilled to near absolute zero, where they watched it wander around.\nBy determining the particle's velocity they could calculate its mass as compared to the nucleus of carbon 12 and a particle of hydrogen at a set frequency.\nFor those interested, a proton now happens to be 1.007 276 466 583 atomic units, slightly different to the mass of 1.007 276 466 879 atomic units the Committee on Data for Science and Technology (CODATA) currently has recorded.\nThe difference might not seem like a big deal, but there is no clear reason for it.\n\"Of course, 99 percent of the time, it's an experimental issue,\" CODATA member Peter Mohr told New Scientist's Sophia Chen.\nThe research has been put onto the pre-review website arXiv.org, where the physics community can take a close look at their figures.\nWith an accuracy of 32 parts per trillion, the tweak is an improvement on the last number by a factor of three. 
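As a quick sanity check on the figures quoted above, the relative shift between the CODATA value and the new measurement can be computed directly:

```python
# Values in atomic mass units, as quoted in the article above.
codata = 1.007276466879     # CODATA recommended proton mass
new_value = 1.007276466583  # new Max Planck Institute measurement

relative_shift = (codata - new_value) / codata
print(f"{relative_shift:.3e}")  # ~2.9e-10
# The same number expressed in "billionths of a percent":
print(f"{relative_shift * 100 * 1e9:.0f}")  # ~29
```

This is consistent with the "30 billionths of a percent" figure in the text, and about ten times the quoted 32-parts-per-trillion accuracy, which is why the discrepancy is considered significant.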
The researchers now plan on implementing some changes to the experiments to try to improve on their precision.\nYou won't need to go fiddling with your bathroom scales, but CODATA will be required to judge what to make of the difference in light of potential changes to its recommended values.\nSimilar to the Planck constant (which also changed recently) and the speed of light, the mass of the proton is a fundamental unit in physics.\nIt's used to determine a limiting value in spectroscopy called the Rydberg constant.\nIt could also help physicists answer a rather trivial question: \"Why is there stuff instead of no stuff?\"\nClearly stuff exists, but since matter and antimatter have an annoying habit of cancelling each other out in a burst of radiation, there must be a fundamental difference in how much matter and antimatter formed early in the Universe's formation for the particles making up cows and comets to exist at all.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-1", "d_text": "At the same time, the referee (Y. Nambu) mentioned that the same idea had just been proposed by R. Brout and F. Englert, with the paper “Broken Symmetry and the masses of Gauge Vector mesons”, which had been released exactly at the same date…\nIn 1975, John Ellis, Mary Gaillard and Dimitri Nanopoulos published “A phenomenological profile of the Higgs boson”. Today it could make for amusing reading: “ We apologize to experimentalists for having no idea what is the mass of the Higgs boson, …, and for not being sure of its couplings to other particles, except that they are probably all very small. For these reasons we do not want to encourage big experimental searches for the Higgs boson…”. Thankfully the scientists didn’t strictly follow these recommendations…\nTo close this historical part, here are some links to the original articles quoted above: the paper from R. Brout and F. Englert, the paper from P. Higgs, the paper from J. Ellis, M. Gaillard and D. 
Nanopoulos.\nDuring the last century, many discoveries have been made, culminating in what we call today the Standard Model.\nThis Standard Model has been very successful, as it could both describe and predict some experimental measurements. For instance it predicted the measurement of the anomalous magnetic dipole moment of the electron with an accuracy of around 1 part in 1 billion (a = 0.00115965218073 !). It also describes the components of matter and 3 out of the 4 interactions, but there are many remaining questions.\nOne of them is to understand differences in terms of mass scales. Why does an electron (511 keV at rest) have such a different mass from a proton (938 MeV)? And likewise for the other particles? Where does mass come from? etc… To answer these questions, many ideas have been proposed over the last decades, but only the Higgs mechanism remains as a serious option. One could think that the Higgs particle comes with a field, and that this Higgs field would be responsible for giving masses to particles.\n3. Summary of all searches\n3.1) LEP At the previous main accelerator of CERN, called the LEP, physicists started to look for the Higgs particle.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-1", "d_text": "And yet, if the teleporter was functioning correctly, you haven’t changed – there is still the same amount of ‘stuff’ (matter) in your body as there was previously. So we need another term to describe that quantity that didn’t change. This is ‘mass’.\nOn Earth, the two words are interchangeable for everyday use, because gravity has about the same strength everywhere on the surface of the planet. If an object has twice the mass, it will have twice the weight. But your weight changes depending on where you are in the universe. If you go to the moon, for example, you’ll weigh about one sixth of your weight on Earth. 
(This is due to the fact that the moon has a much smaller mass than the Earth does and so exerts a smaller gravitational force on you.) Your mass, on the other hand, won’t have changed.\nMass is a property of all matter. It has two main effects: the first is to cause objects to resist acceleration when a force is applied; the other is to experience and apply gravitational forces when other objects are nearby. Today, physicists are actively researching what may seem like an unanswerable question: why does mass exist? One part of a possible answer to this question requires the existence of a particle called the Higgs Boson (check out a great animation about the Higgs in this previous post!). The Higgs Boson has been hyperbolically nicknamed the “God Particle” because its existence would help validate the Standard Model of particle physics (to be discussed later). As I write this, scientists in Europe are busily smashing particles together at the Large Hadron Collider, hoping that the Higgs boson might pop out of the resulting collisions.\nBut in some ways, this line of thinking answers more of the “how” than the “why”. For now, it seems that mass is just something we all have to deal with.\nThere are some particles (and by “some” I mean “two”) that are currently known to have no mass at all. The most commonly known of these is the photon, which is the particle that light is made out of. Because photons are not restricted by the limitations of mass, they only ever move at one speed – the fastest speed anything can move at, the speed of light.\nWe beings made out of matter will never know what it’s like to be a photon – we’ll never be able to experience masslessness.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "source: Big Questions Online\nauthor: Frank Wilczek\nEditors’ Note: It has been one year since scientists announced the discovery of a particle believed to be Higgs. 
Read this essay by a Nobel prize winner in physics and share your comments on our Facebook page.\nImagine a planet encrusted with ice, beneath which a vast ocean lies. (Imagine Europa.)\nWithin that ocean a species of brilliant fish evolved. Those fish were so intelligent that they took up physics, and formulated the laws that govern motion. At first they derived quite complicated laws, because the motion of bodies within water is complicated.\nOne day, however, a genius among fish, call her Fish Newton, had a startling new idea. She proposed fundamental laws of motion––Newton’s laws––that are simpler and more beautiful than the laws the fish had derived directly from experience. She demonstrated mathematically that you could reproduce the observed motions from the new, simpler laws, if you assume that there is a space-filling medium that complicates things. She called it Ocean.\nOf course our fish had been immersed in Ocean for eons, but without knowing it. Since it was ever-present, they took it for granted. They regarded it as an aspect of space itself––as mere emptiness. But Fish Newton invited them to consider that they might be immersed in a material medium.\nThus inspired, fish scientists set out to find the atoms of Fish Newton’s hypothetical medium. And soon they did!\nThat story is our own. We humans, like those fish, have been living within a material medium for millennia, without being consciously aware of it.\nThe first inkling of its existence came in the 1960s. By that time physicists had devised especially beautiful equations for describing elementary particles with zero mass. Nature likes those equations, too. The photons responsible for electromagnetism, the gravitons responsible for gravity, and the color gluons responsible for the strong force are all zero mass particles. Electromagnetism, gravity, and the strong force are three of the four fundamental interactions known to physics. 
The other is the weak force.\nA problem arose, however, for the W and Z bosons, which are responsible for the weak force. Though they have many properties in common with photons and color gluons, W and Z bosons have non-zero mass. So it appeared that one could not use the beautiful equations for zero mass particles to describe them.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-4", "d_text": "||tp||5.391 21(40)×10⁻⁴⁴ s|\n|Planck mass Planck mass or the mass of a Planck particle is the hypothetical tiny black hole whose Schwarzschild radius is equal to the Planck length. Unlike all the other Planck base units and most derived Planck units, the Planck mass has a more or less conceivable human scale. Tradition has it that this is the mass of the egg of a flea. The Planck mass is an idealized mass thought to have a special significance for quantum gravity. ||mp||2.176 45(16)×10⁻⁸ kg|\n|Planck temperature Planck temperature is the highest temperature in the physical theories. At one end of the temperature scale lies the lowest temperature possible, absolute zero (0 K), and at the other the highest possible temperature, the Planck temperature, equal to 1.416 × 10³² K. This temperature corresponds to the temperature of the Universe at the Planck time, 10⁻⁴⁴ seconds after the Big Bang. ||Tp||1.416 79×10³² K|\n|Fine structure constant In physics, the fine structure constant, represented by the Greek letter α, is a fundamental constant that governs the electromagnetic force ensuring the cohesion of atoms and molecules. It was proposed in 1916 by the German physicist Arnold Sommerfeld. In quantum electrodynamics, the fine structure constant serves as the coupling constant, representing the strength of the interaction between electrons and photons. Its value cannot be predicted by the theory, but only determined by experimental results. 
This is actually one of the 29 free parameters of the Standard Model of particle physics.\n|Proton mass The nuclei of atoms are composed of protons and neutrons. Around these nuclei revolves a cloud of electrons. These three elements (protons, neutrons and electrons) make up practically all matter. While the electron is considered a particle of "no size", the proton, which is made up of quarks, is an "extended" object. ||mp||1.672 621 71(29)×10⁻²⁷ kg|\n|Electron mass The electron is an elementary particle that carries a negative charge. It is one of the components of the atom, along with neutrons and protons, but the electron is best pictured as a point-like massive electric charge, for which one cannot know both where it is and where it is going.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-0", "d_text": "|The Standard Model of particle physics|\ncontains many fundamental values which\nmay be a result of pure accident\nPhysics, unlike biology or geology, was not considered to be a historical science until now. Physicists have prided themselves on being able to derive the vast bulk of phenomena in the universe from first principles. Biology - and chemistry, as a matter of fact - are different. Chance and contingency play an important role in the evolution of chemical and biological phenomena, so beyond a point scientists in these disciplines have realized that it's pointless to ask questions about origins and first principles.\nThe overriding "fundamental law" in biology is that of evolution by natural selection. But while the law is fundamental on a macro scale, its details at a micro level don't lend themselves to real explanation in terms of origins. For instance the bacterial flagellum is a product of accident and time, a key structure involved in locomotion, feeding and flight that resulted from gene sharing, recombination and selective survival of certain species spread over billions of years. 
While one can speculate, it is impossible to know for certain all the details that led to the evolution of this marvelous molecular motor. Thus biologists have accepted history and accident as integral parts of their fundamental laws.\nPhysics was different until now. Almost everything in the universe could be explained in terms of fundamental laws like Einstein's theory of general relativity or the laws of quantum mechanics. If you wanted to explain the shape and structure of a galaxy you could seek the explanation in the precise motion of the various particles governed by the laws of gravity. If you wanted to explain why water is H2O and not H3O you could seek the explanation in the principles of quantum mechanics that in turn dictate the laws of chemical bonding.\nBut beyond this wildly successful level of explanation seems to lie an impasse. The problem arises when you try to explain one of the most profound facts of nature, the fact that the fundamental constants of nature are fine-tuned to a fault, that the universe as we know it would not exist if these constants had even slightly different values. For instance, it is impossible to imagine life existing had the strength of the strong force binding nuclei together been even a few percent smaller or larger. 
Scientists have struggled for decades to explain why other numbers like the value of Planck's constant or the electron's mass are what they are.", "score": 8.086131989696522, "rank": 99}]} {"qid": 10, "question_text": "What is the maximum delta-v that can be achieved with an engine that has an ISP of 350 seconds?", "rank": [{"document_id": "doc-::chunk-2", "d_text": "- Use this equation to figure out the Δv per stage:\n- Δv = ln ( Mstart / Mdry ) * Isp * g\n- Δv = ln ( Starting Mass / Dry Mass ) × Isp × 9.81\n- Single Stage Rocket that weighs 23 tons when full, 15 tons when fuel is emptied, and engine that outputs 120 seconds Isp.\n- Δv = ln ( 23 Tons / 15 Tons ) × 120 seconds Isp × 9.81 m/s² = Total Δv of 503.2 m/s\n- Simplified version of the Δv calculation to find the maximum Δv a craft with the given ISP could hope to achieve. This is done by using a magic 0 mass engine and not having a payload.\n- Δv = 21.576745349086 * Isp\n- Explained / Examples:\n- This calculation only uses the mass of the fuel tanks and so the ln ( Mstart / Mdry ) part of the Δv equation has been replaced by a constant as Mstart / Mdry is always 9 (or worse with some fuel tanks) regardless of how many fuel tanks you use.\n- The following example will use a single stage and fuel tanks in the T-100 to Jumbo 64 range with an engine that outputs 380 seconds Isp.\n- Δv = ln ( 18 Tons / 2 Tons ) × 380 seconds Isp × 9.82 m/s² = Maximum Δv of 8199.1632327878 m/s\n- Δv = 2.1972245773 × 380 seconds Isp × 9.82 m/s² = Maximum Δv of 8199.1632327878 m/s (Replaced the log of mass with a constant as the ratio of total mass to dry mass is constant regardless of the number of tanks used as there is no other mass involved)\n- Δv = 21.576745349086 × 380 seconds Isp = Maximum Δv of 8199.1632327878 m/s (Reduced to its most simple form by combining all the constants)\n- How to calculate the Δv of a rocket stage that transitions from Kerbin atmosphere to vacuum.\n- Assumption: It takes approximately 1000 m/s of Δv 
to escape Kerbin's atmosphere before vacuum Δv values take over for the stage powering the transition.", "score": 52.49915496038697, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "Subway style Δv map:\nTotal Δv values\nΔv change values\nΔv with Phase Angles\nPrecise Total Δv values\nMaximum Δv Chart\n- This chart is a quick guide to what engine to use for a single stage interplanetary ship. No matter how much fuel you add you will never reach these ΔV without staging to shed mass or using the slingshot maneuver.\nISP (s) Max Δv (m/s) Engines\n290 6257 LV-1\n300 6473 24-77\n320 6905 Mark-55\n330 7120 Mainsail\n350 7552 Skipper\n360 7768 KS-25X4\n370 7983 LV-T30\n380 8199 KR-2L\n390 8415 Poodle\n800 17261 LV-N\n- Copy template:\n- TWR = F / (m * g) > 1\n- When Isp is the same for all engines in a stage, then the Isp is equal to a single engine. So six 200 Isp engines still yields only 200 Isp.\n- When Isp is different for engines in a single stage, then use the following equation:\n- Isp = ( F1 + F2 + ... ) / ( ( F1 / Isp1 ) + ( F2 / Isp2 ) + ... )\n- Isp = ( Force of Thrust of 1st Engine + Force of Thrust of 2nd Engine...and so on... ) / ( ( Force of Thrust of 1st Engine / Isp of 1st Engine ) + ( Force of Thrust of 2nd Engine / Isp of 2nd Engine ) + ...and so on... )\n- Two engines, one rated 200 newtons and 120 seconds Isp; another engine rated 50 newtons and 200 seconds Isp.\n- Isp = ( 200 newtons + 50 newtons ) / ( ( 200 newtons / 120 ) + ( 50 newtons / 200 ) ) = 130.43 seconds Isp\n- For atmospheric Δv value, use atmospheric thrust values.\n- For vacuum Δv value, use vacuum thrust values.", "score": 49.78630414618576, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "Lox/LH2 propellant rocket stage. Loaded/empty mass 450,000/40,000 kg. Thrust 5,736.00 kN. Vacuum specific impulse 460 seconds.\nCost $ : 10.000 million. No Engines: 13.\nStatus: Study 1969.\nMore... 
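The per-stage formula, the engine-averaging rule, and the Maximum Δv chart can all be reproduced with a short script. A sketch (note the wiki mixes g = 9.81 and 9.82 m/s²; the chart's numbers correspond to 9.82):

```python
import math

def stage_delta_v(m_start, m_dry, isp, g=9.81):
    """Tsiolkovsky rocket equation: Δv = ln(m_start / m_dry) * Isp * g."""
    return math.log(m_start / m_dry) * isp * g

def combined_isp(engines):
    """Thrust-weighted Isp for several engines firing together.
    engines is a list of (thrust, isp) pairs."""
    total_thrust = sum(f for f, _ in engines)
    return total_thrust / sum(f / isp for f, isp in engines)

def max_delta_v(isp, g=9.82):
    """Single-stage ceiling: fuel tanks alone give m_start / m_dry = 9."""
    return math.log(9) * isp * g

print(stage_delta_v(23, 15, 120))             # ≈ 503.2 m/s
print(combined_isp([(200, 120), (50, 200)]))  # ≈ 130.4 s
print(max_delta_v(350))                       # ≈ 7552 m/s
```

In particular, an engine with 350 s of Isp tops out around 7,552 m/s no matter how much fuel is added, matching the Skipper row of the chart.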
- Chronology...\nGross mass: 450,000 kg (990,000 lb).\nUnfuelled mass: 40,000 kg (88,000 lb).\nHeight: 40.00 m (131.00 ft).\nDiameter: 7.65 m (25.09 ft).\nSpan: 10.00 m (32.00 ft).\nThrust: 5,736.00 kN (1,289,504 lbf).\nSpecific impulse: 460 s.\nSpecific impulse sea level: 409 s.\nBurn time: 318 s.\nMBB-ATC500 MBB lox/lh2 rocket engine. 441.3 kN. Study 1969. Isp=460s. Used on Beta launch vehicle. More...\nAssociated Launch Vehicles\nBeta German SSTO VTOVL orbital launch vehicle. In 1969 rocket pioneer Dietrich Koelle was working at MBB (Messerschmitt-Bolkow-Blohm). There he sketched out a reusable VTOVL design called BETA using Bono's SASSTO as a starting point. The vehicle, taking European technology into account, was a bit heavier than Bono's design. But the thorough analysis showed even this design would be capable of delivering 2 tonnes of payload to orbit. More...\nLox/LH2 Liquid oxygen was the earliest, cheapest, safest, and eventually the preferred oxidiser for large space launchers. Its main drawback is that it is moderately cryogenic, and therefore not suitable for military uses where storage of the fuelled missile and quick launch are required. Liquid hydrogen was identified by all the leading rocket visionaries as the theoretically ideal rocket fuel. It had big drawbacks, however - it was highly cryogenic, and it had a very low density, making for large tanks. The United States mastered hydrogen technology for the highly classified Lockheed CL-400 Suntan reconnaissance aircraft in the mid-1950's.", "score": 43.92230172637914, "rank": 3}, {"document_id": "doc-::chunk-1", "d_text": "But there is a solution; Nuclear Thermal Rockets!\nNASA and friends did quite a bit of research into NTRs back in the 1960s. 
They even got as far as ground testing one.\nHere's a still photo of it;\nAccording to the Atomic Rockets site (which everyone and their children should read), you can get an exhaust velocity of about 8 km/s if you use hydrogen as your propellant (light things are better propellants, and hydrogen is pretty light). Which translates to a specific impulse of about 800 seconds. Pretty damn solid, especially compared to chemical rockets. Some of the later design studies I've seen have said we could get up to 950 seconds out of better designs.\nThis is really, really helpful for your hypothetical Mars/Jupiter/Saturn mission. Reducing the amount of fuel mass you need is going to reduce the amount of launches you need to put your mission together, thereby saving you money. This is important, since NASA's budget currently is about enough to pay for a small pizza, or a six pack of crap beer, but not both. Alternatively, you can still put a massive spacecraft together, but take a less efficient trajectory, and get to your destination quicker. Since spending months in space exposed to cosmic radiation and zero g is a nontrivial problem, this is pretty cool.\n|A nice chart from Atomic Rockets showing the relationship between trip time and delta-v.|", "score": 43.44791356194852, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "Lox/Kerosene propellant rocket stage. Loaded/empty mass 952,500/68,000 kg. Thrust 15,532.70 kN. Vacuum specific impulse 304 seconds. Boeing Low-Cost Saturn Derivative Study, 1967 (trade study of 260 inch first stages for S-IVB, all delivering 86,000 lb pyld to LEO): S-IC Technology Liquid Booster: 260 inch liquid booster with 2 x F-1 engines, recoverable/reusable\nNo Engines: 2.\nStatus: Study 1967.\nMore... 
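The jump from exhaust velocity to propellant savings falls straight out of the rocket equation; a sketch (the 4.4 km/s chemical exhaust velocity is an assumed figure for a good LOX/LH2 engine, not something from the text):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def isp_from_exhaust_velocity(ve):
    """Specific impulse in seconds from exhaust velocity in m/s."""
    return ve / G0

def propellant_fraction(delta_v, ve):
    """Fraction of initial mass that must be propellant, per the rocket equation."""
    return 1.0 - math.exp(-delta_v / ve)

print(isp_from_exhaust_velocity(8000))  # ≈ 816 s, i.e. the "about 800 seconds"
# For a mission needing 5 km/s of delta-v:
print(propellant_fraction(5000, 4400))  # chemical: ≈ 0.68 of the ship is propellant
print(propellant_fraction(5000, 8000))  # NTR:      ≈ 0.46
```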
- Chronology...\nGross mass: 952,500 kg (2,099,900 lb).\nUnfuelled mass: 68,000 kg (149,000 lb).\nHeight: 36.36 m (119.29 ft).\nDiameter: 6.61 m (21.68 ft).\nSpan: 10.58 m (34.71 ft).\nThrust: 15,532.70 kN (3,491,890 lbf).\nSpecific impulse: 304 s.\nSpecific impulse sea level: 265 s.\nBurn time: 167 s.\nF-1 Rocketdyne Lox/Kerosene rocket engine. 7740.5 kN. Isp=304s. Largest liquid rocket engine ever developed and flown. Severe combustion stability problems were solved during development and it never failed in flight. First flight 1967. More...\nAssociated Launch Vehicles\nSaturn S-IC-TLB American orbital launch vehicle. Boeing Low-Cost Saturn Derivative Study, 1967 (trade study of 260 inch first stages for S-IVB, all delivering 86,000 lb pyld to LEO): S-IC Technology Liquid Booster: 260 inch liquid booster with 2 x F-1 engines, recoverable/reusable More...\nLox/Kerosene Liquid oxygen was the earliest, cheapest, safest, and eventually the preferred oxidiser for large space launchers. Its main drawback is that it is moderately cryogenic, and therefore not suitable for military uses where storage of the fuelled missile and quick launch are required. In January 1953 Rocketdyne commenced the REAP program to develop a number of improvements to the engines being developed for the Navaho and Atlas missiles. Among these was development of a special grade of kerosene suitable for rocket engines.", "score": 43.29438653118079, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "The CE-7.5 is a cryogenic rocket engine developed by the Indian Space Research Organisation to power the upper stage of its GSLV Mk-2 launch vehicle. The engine was developed as a part of the Cryogenic Upper Stage Project (CUSP). 
It replaced the KVD-1 (RD-56) Russian cryogenic engine that powered the upper stage of GSLV Mk-1.\n|Country of origin||India|\n|First flight||15 April 2010 (failure) |\n5 January 2014 (success)\n|Designer||LPSC, Indian Space Research Organisation|\n|Manufacturer||Hindustan Aeronautics Limited|\n|Propellant||LOX / LH2|\n|Thrust (vac.)||73.5 kN (16,500 lbf)|\n|Chamber pressure||5.8 MPa (58 bar) / 7.5 MPa (75 bar)|\n|Isp (vac.)||454 seconds (4.45 km/s)|\n|Length||2.14 m (7.0 ft)|\n|Diameter||1.56 m (5.1 ft)|\n|Dry weight||435 kg|\n|Upper stage of GSLV Mk.II|\nThe specifications and key characteristics of the engine are:\n- Operating Cycle – Staged combustion\n- Propellant Combination – LOX / LH2\n- Maximum thrust (Vacuum) – 73.55 kN\n- Operating Thrust Range (as demonstrated during GSLV Mk2 D5 flight) – 73.55 kN to 82 kN \n- Engine Specific Impulse - 454 ± 3 seconds (4.452 ± 0.029 km/s)\n- Engine Burn Duration (Nom) – 720 seconds\n- Propellant Mass – 12800 kg\n- Two independent regulators: thrust control and mixture ratio control\n- Steering during thrust: provided by two gimballed steering engines\nISRO formally started the Cryogenic Upper Stage Project in 1994. The engine successfully completed the Flight Acceptance Hot Test in 2008, and was integrated with propellant tanks, third-stage structures and associated feed lines for the first launch. The first flight attempt took place in April 2010 during the GSLV Mk.II D3/GSAT-3 mission.", "score": 43.27875469320735, "rank": 6}, {"document_id": "doc-::chunk-6", "d_text": "“The problem is not so much the amount of energy; you have gobs and gobs of energy,” says Emrich. “The problem is power, which is how fast you get the energy out of the system. A hydrogen bomb releases a huge amount of energy instantly but melts everything in sight.”\nBy contrast, the superconducting magnets corral the power of all that energy and essentially squirt it out the end. 
“Magnetic fields don’t melt,” says Emrich.\nIn theory, the engine could unleash a specific impulse of a million seconds. It would need only 1/10th of that to propel a craft to Mars in two weeks. But Emrich notes that to make a fusion-powered spaceship light enough to reach Mars in two weeks, propulsion experts will need a breakthrough in materials science.\n“Mars in 30 days?” he says. “That’s getting closer.”\nIf and when new materials make that possible, Mars may in fact be too close to Earth for a fusion rocket to truly show what it’s got under the hood. A trip to Jupiter, on the other hand, 366 million miles away at its closest approach, would give the crew of a fusion-powered spacecraft almost 183 million miles of acceleration to the journey’s midpoint. By then, a fusion engine delivering about 30,000 seconds of impulse would have gathered a speed of 50 miles per second—about 180,000 miles an hour. After decelerating for the next 90 days, it would slip into orbit around Jupiter; by then, the trip would have lasted 180 days, only six times as long as a one-way trip to Mars, despite covering 10.5 times the distance. True, while the astronauts are exploring Jupiter, Earth wanders farther away than it was at launch time; however, at these speeds, orbital separation between the planets becomes less of a problem.\n“The space program began the day humans chose to walk out of their caves,” says Chang Díaz. “By exploring space we are doing nothing less than insuring our own survival.” Chang Díaz believes that humans will either become extinct on Earth or expand into space. If we pull off the latter, he says, our notion of Earth will change forever.\nAs for Cast Away’s Chuck Noland, he eventually concludes that it would be better to risk it all and die trying to escape his imprisonment than to waste away on the beach.", "score": 41.140654994954865, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "This page no longer updated from 31 October 2001. 
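The Jupiter numbers in that last stretch follow from a constant-acceleration "flip and burn" profile; a back-of-envelope sketch that ignores orbital mechanics and the planets' own motion entirely:

```python
MILE = 1609.344  # metres per mile

def flip_and_burn(distance, total_time):
    """Accelerate to the midpoint, then decelerate: d/2 = a * (t/2)**2 / 2."""
    half_time = total_time / 2.0
    accel = distance / half_time ** 2
    peak_speed = accel * half_time
    return accel, peak_speed

distance = 366e6 * MILE                             # Earth-Jupiter at closest approach
accel, peak = flip_and_burn(distance, 180 * 86400)  # 180-day trip
print(accel)        # ≈ 0.0097 m/s^2, about a thousandth of a g
print(peak / MILE)  # ≈ 47 mi/s, the same ballpark as the article's 50 mi/s
```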
Latest version can be found at www.astronautix.com\nDesigner: Isayev. Propellants: N2O4/UDMH Thrust(vac): 3 kgf. Thrust(vac): 0.03 kN. Isp: 285 sec. Burn time: 25,000 sec. Mass Engine: 1 kg. Chambers: 1. Chamber Pressure: 8.00 bar. Area Ratio: 45.00. Oxidizer to Fuel Ratio: 1.85. Country: Russia. Status: In Production. References: 350 . Comments: Bi-propellant hypergolic (self-igniting) engine, pressure-fed. 300,000 ignitions.\nBack to Index\nLast update 12 March 2001.\nDefinitions of Technical Terms.\nContact Mark Wade with any corrections or comments.\nConditions for use of drawings, pictures, or other materials from this site..\n© Mark Wade, 2001 .", "score": 40.47197051922499, "rank": 8}, {"document_id": "doc-::chunk-1", "d_text": "They predict their prototype could produce a specific impulse (Isp) of 2000 sec, which is an equivalent to an exhaust velocity of 20,000 m/s.\nThey are looking to raise $69,000 by November 3, 2012 to get their project started. At the time of this writing, the team has just over $54,000.\nHere’s a video from HyperV:\n“We invite you, the citizens of Earth, to join with us as we design, construct, test, and execute this demonstration,” the team wrote on their Kickstarter page. “The culmination of this project will be an all-up, laboratory demonstration of our prototype thruster.”", "score": 39.95834947871775, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "RocketdyneRS-68 (Rocket System 68) is a liquid hydrogen/ liquid oxygenengine developed starting in the 1990s with the goal of producing a simpler, less-costly heavy-lift rocket engine for the Delta IV rocket. 
The RS-68 produces a thrustof 663,000 lbf (2.9 MN) at sea level, while the RS-68A variant has produced 700,000 lbf (3.1 MN) in testing.cite press release | publisher=PRNewswire| date= 2008-09-25| title=United Launch Alliance First RS-68A Hot-Fire Engine Test a Success | url=http://www.prnewswire.com/cgi-bin/stories.pl?ACCT=104&STORY=/www/story/09-25-2008/0004892842&EDATE= |quote=Currently, the RS-68 engine can deliver more than 660,000 pounds of sea level thrust and the upgraded RS-68A will increase this to more than 700,000 pounds. The RS-68A also improves on the specific impulse, or fuel efficiency, of the RS-68. | accessdate=2008-09-30] The RS-68B variant is the proposed main engine for NASA's Project Constellationand has 80% fewer parts than the Space Shuttle Main Engine.\nThe RS-68 was developed at\nRocketdynePropulsion and Power, located in Canoga Park, Los Angeles, California, to power the Delta IV Evolved Expendable Launch Vehicle ( EELV). The combustion chamber burns liquid hydrogen and liquid oxygen at 1486 lbf/in² (9.7 MPa) at 102% with a 1:6 engine mixture ratio.\nAt a maximum 102% thrust, the engine produces 758,000 lbf (3.3 MN) in a vacuum and 663,000 lbf (2.9 MN) at sea level. The engine's mass is 14,560 lb (6,600 kg) at 96 Inches(2.4384 m). 
With this thrust, the engine has a thrust-to-weight ratio of 51.2, and a specific impulse of 410 s (4 kN·s/kg) in a vacuum and 365 s (3.58 kN·s/kg) at sea level.", "score": 39.938614482321654, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "|Country of origin||Soviet Union|\n|Designer||Kuznetsov Design Bureau|\n|Application||1st stage multi-engine|\n|Successor||AJ26-58, AJ26-59, AJ26-62|\n|Propellant||LOX / RP-1 (rocket-grade kerosene)|\n|Thrust (vac.)||1,753.8 kN (394,300 lbf)|\n|Thrust (SL)||1,505 kN (338,000 lbf)|\n|Chamber pressure||145 bar (14,500 kPa)|\n|Isp (vac.)||331 seconds (3.25 km/s)|\n|Isp (SL)||297 seconds (2.91 km/s)|\n|Length||3.7 m (12 ft)|\n|Diameter||2 m (6 ft 7 in)|\n|Dry weight||1,235 kg (2,723 lb)|\nThe NK-33 and NK-43 are rocket engines designed and built in the late 1960s and early 1970s by the Kuznetsov Design Bureau. The NK designation is presumably derived from the name of the chief designer, Nikolay Kuznetsov. They were intended for the ill-fated Soviet N-1 rocket moon shot. The NK-33 engine has one of the highest thrust-to-weight ratios of any Earth-launchable rocket engine, second only to the SpaceX Merlin 1D engine, while achieving a very high specific impulse. NK-33 was by many measures the highest performance LOX/kerosene rocket engine ever created.\nThe NK-43 is similar to the NK-33, but is designed for an upper stage, not a first stage. It has a longer nozzle, optimized for operation at altitude, where ambient air pressure is low, or perhaps zero. This gives it a higher thrust and specific impulse, but makes it longer and heavier.\nModified versions of these engines by Aerojet are known as the AJ26-58, AJ26-59 and AJ26-62.\nNK-33 and NK-43 are derived from the earlier NK-15 and NK-15V engines, respectively.\nThe engines are high-pressure, regeneratively cooled staged combustion cycle bipropellant rocket engines, and use oxygen-rich preburners to drive the turbopumps. 
The turbopumps require subcooled liquid oxygen (LOX) to cool the bearings.", "score": 38.122499525457, "rank": 11}, {"document_id": "doc-::chunk-7", "d_text": "And as most engines are made of metal or carbon, hot oxidizer-rich exhaust is extremely corrosive, where fuel-rich exhaust is less so. American engines have all been fuel-rich. Some Soviet engines have been oxidizer-rich.\nAdditionally, there is a difference between mixture ratios for optimum Isp and optimum thrust. During launch, shortly after takeoff, high thrust is at a premium. This can be achieved at some temporary reduction of Isp by increasing the oxidiser ratio initially, and then transitioning to more fuel-rich mixtures. Since engine size is typically scaled for takeoff thrust, this permits a reduction in the weight of the rocket engine, pipes and pumps, and the extra propellant use can be more than compensated for by increased acceleration towards the end of the burn thanks to the reduced dry mass.\nAlthough liquid hydrogen gives a high Isp, its low density is a significant disadvantage: hydrogen occupies about 7x more volume per kilogram than dense fuels such as kerosene. This not only penalises the tankage, but also the pipes and fuel pumps leading from the tank, which need to be 7x bigger and heavier. (The oxidiser side of the engine and tankage is of course unaffected.) This makes the vehicle's dry mass much higher, so the use of liquid hydrogen is not such a big win as might be expected. Indeed, some dense hydrocarbon/LOX propellant combinations have higher performance when the dry mass penalties are included.\nDue to lower Isp, dense propellant launch vehicles have a higher takeoff mass, but this does not mean a proportionately higher cost; on the contrary, the vehicle may well end up cheaper. 
Liquid hydrogen is quite an expensive fuel to produce and store, and causes many practical difficulties with design and manufacture of the vehicle.\nBecause of the higher overall weight, a dense-fuelled launch vehicle necessarily requires higher takeoff thrust, but it carries this thrust capability all the way to orbit. This, in combination with the better thrust/weight ratios, means that dense-fuelled vehicles reach orbit earlier, thereby minimizing losses due to gravity drag. Thus, the effective delta-v requirement for these vehicles is reduced.\nHowever, liquid hydrogen does give clear advantages when the overall mass needs to be minimised; for example the Saturn V vehicle used it on the upper stages; this reduced weight meant that the dense-fuelled first stage could be made significantly smaller, saving quite a lot of money.", "score": 37.49545339481833, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "
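The dry-mass penalty described above can be made concrete with the rocket equation. Every number below is made up purely for illustration (bulk densities of roughly 1020 kg/m³ for kerosene/LOX and 360 kg/m³ for hydrogen/LOX, a hypothetical 25 kg of tank per cubic metre of propellant, and round Isp figures); the point is the direction of the effect, not a definitive trade:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp, m_prop, m_dry, m_payload):
    """Tsiolkovsky rocket equation for a single stage (masses in kg)."""
    m0 = m_prop + m_dry + m_payload
    m1 = m_dry + m_payload
    return G0 * isp * math.log(m0 / m1)

def dry_mass(m_prop, bulk_density, tank_kg_per_m3=25.0, m_fixed=3000.0):
    """Engines/plumbing plus tank mass proportional to propellant volume."""
    return m_fixed + tank_kg_per_m3 * m_prop / bulk_density

kerolox_dry = dry_mass(100e3, 1020.0)    # ≈ 5.5 t of dry mass
hydrolox_dry = dry_mass(100e3, 360.0)    # ≈ 9.9 t: ~2.8x the tank volume

dv_kerolox = delta_v(340, 100e3, kerolox_dry, 5e3)
dv_hydrolox = delta_v(450, 100e3, hydrolox_dry, 5e3)
# Same Isp-450 stage, but pretending hydrogen tanks weighed no more than kerosene's:
dv_naive = delta_v(450, 100e3, kerolox_dry, 5e3)
```

With these made-up numbers hydrogen still comes out ahead, but the tank penalty erases well over a kilometre per second of its nominal advantage; heavier structures, boil-off margins, and volume limits can close the rest of the gap.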
Over $500 million were invested in Aerospike engines up to the contract award date of the X-33, and full size linear engines have accumulated 73 tests and over 4,000 seconds of operation. Throttling, Percent Thrust: 20-109. Dimensions: Forward End: 6.4 m wide X 2.36 m long. Aft End: 2.36 m wide X 2.36 m long. Length: 4.32 m. Designed for booster applications. Gas generator, pump-fed.
Thrust (sl): 1,917.2 kN (431,004 lbf / 195,500 kgf). Chamber Pressure: 153.00 bar. Area Ratio: 173. Oxidizer to Fuel Ratio: 6.
Status: Development cancelled 1999.
Height: 4.32 m (14.17 ft).
Diameter: 6.40 m (20.90 ft).
Thrust: 2,201.00 kN (494,804 lbf).
Specific impulse: 455 s.
Specific impulse sea level: 347 s.
First Launch: 1998.
Associated Launch Vehicles
X-33 American winged rocketplane.", "score": 37.4122620212201, "rank": 13}, {"document_id": "doc-::chunk-2", "d_text": "The T/W and specific thrust could be even better than the one owned by LANL. The catch is that the cycle doesn’t close. The drive gas pressure for the cagejet stage must be higher than the pressure it produces. Enter the turborocket. By using a fuel-rich rocket to drive the turbine and burning the excess fuel in an afterburner with the compressed air, a very high T/W can be achieved with a very simple system. The drive rocket should use about 40% oxygen and 60% kerosene.
The cagejet is capable of using multiple staged rotors in the same manner as a multi-spool turbofan except that each stage has its own bearing and doesn’t have to share shaft space with other spools. The cagejet compressor can attain higher stage compression ratios than axial flow because the adverse pressure gradient on the blades is countered by the centrifugal force. The separate stages can run at different flow rates to match airflow conditions in ways that even sophisticated turbofans can’t do. 
Each stage is capable of a compression ratio of 2, so three stages can get a compression ratio of 8. A fourth stage can be driven by the rocket compressing 25% of the air to a final compression ratio of 16 for a fraction of the total flow.
The 16 atm air compressed by the fourth stage burns stoichiometrically with the fuel-rich turbine exhaust in an afterburner and is used as drive gas to compress the total air flow in stage three to 8 atm. The exhaust from stage three can be used as is to drive the first two stages and give decent thrust with an Isp in the 1,500 range. Or the 8 atm afterburner can add kerosene for a stoichiometric burn. Then 25% of the flow could drive the first two stages and have an exit Isp in the 50 range including the air. With the mixture ratio of 16, net fuel Isp for 25% of the flow is 800. The other 75% of the flow is used as an 8 atm fuel/LOX/air rocket. Net Isp for this portion would be in the 1,200 range considering the LOX mass. Engine total net Isp at high thrust would be about 1,100.
T/W for this system would be around 25 if the numbers work.", "score": 37.29263441638124, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "[Via Satellite 07-20-2015] The Indian Space Research Organization (ISRO) completed a hot test of a high thrust cryogenic rocket engine for a duration of 800 seconds. The engine generated a nominal thrust of 19 tons and ran for approximately 25 percent more time than the engine burn required for flight.
ISRO is designing the engine to power the cryogenic stage (C25) — the upper stage of the next generation Geosynchronous Satellite Launch Vehicle (GSLV) Mk-3 — capable of launching 4-ton-class satellites. The space agency's Liquid Propulsion Systems Center (LPSC) created the engine.
According to ISRO, the performance of the engine closely matched pre-test predictions made with in-house developed cryogenic engine mathematical modeling and simulation software. 
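Returning to the cagejet description above: its stage pressure ratios compound as 2 per stage, and the quoted total net Isp of about 1,100 is just the flow-weighted average of the two exhaust streams (25% of the flow at Isp 800, 75% at Isp 1,200). A sketch of that bookkeeping:

```python
# Staged pressure ratios and flow-weighted net Isp from the cagejet description.
def net_isp(streams):
    """streams: list of (flow_fraction, isp_s) pairs; returns the flow-weighted Isp."""
    return sum(frac * isp for frac, isp in streams)

stage_pr = 2                            # compression ratio per cagejet stage
three_stages = stage_pr ** 3            # 8, as stated in the text
fourth_stage = three_stages * stage_pr  # 16 for the boosted fraction of the flow

print(three_stages, fourth_stage)            # 8 16
print(net_isp([(0.25, 800), (0.75, 1200)]))  # 1100.0
```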
The July 16 hot test was the 10th in a series of tests planned and executed as part of the development of the engine. ISRO is planning further tests in high altitude conditions and in stage configuration prior to flight stage realization.", "score": 35.32671863320595, "rank": 15}, {"document_id": "doc-::chunk-1", "d_text": "There are a number of variants proposed that double to triple the thrust to weight of a turbojet. This is at the cost of extreme fuel consumption compared to any jet, though much better than a rocket. Dense fuel Isp ranges from 400-700 during subsonic cruise with hydrogen about 50% better. It turns out that the standard turborocket burns too much fuel for cruising, and weighs too much to carry through serious acceleration.
I spent a good bit of time on the problem before realizing that modifying existing components just wasn’t going to work. Designing an effective ABE for a spaceplanes’ cruise phase really needs a new approach. You need better T/W and Isp than a normal turborocket. Designing around known systems just doesn’t get the job done. What is needed is a turbojet that is really really light for the power it produces. Or an engine that mimics a turbojet/turbofan and is really really light.
Some way must be found to eliminate as much mass and as many components as possible while retaining the desired capability. This usually involves finding ways to have one component do two jobs, and do them better than the original. LANL holds the rights on one patented method for increasing the performance of a turbine engine. The turbine blades are hollow and used as a centrifugal compressor. The air keeps the turbine blades cool enough to allow much higher turbine inlet temperatures than normal. The heat absorbed by the air is used for power (regenerative cooling) as it is mixed with fuel in the burner before driving the turbine. With one part being both compressor and turbine, weight is less. 
By using higher turbine inlet temperatures, the engine is more fuel efficient and gets more thrust out of each unit of air (specific thrust). Higher specific thrust means that less inlet and duct work is required per unit of thrust. This is a win for everybody if they get it into use.
I stumbled across another way of getting similar results. Some varieties of the squirrel cage fan have the characteristics of a short blade centrifugal compressor, and a short blade radial inflow turbine. By using one wheel as a compressor for 75% of the cycle, and as a turbine for 25% of the cycle, the blades are regeneratively cooled, and it is possible to burn stoichiometrically in front of the turbine.", "score": 34.262738047220296, "rank": 16}, {"document_id": "doc-::chunk-4", "d_text": "It is not as clear that Hall thrusters are a good solution at an Isp of 1,000 s or lower as their efficiencies drop off rapidly.", "score": 33.74399883102675, "rank": 17}, {"document_id": "doc-::chunk-2", "d_text": "Since the maximum Isp mixture ratio (r) for oxygen/gasoline is 2.5, we have:
The thrust developed per pound of total propellant burned per second is known as specific impulse and is defined as Isp = F / Ẇ, the thrust in pounds divided by the propellant weight flow rate in pounds per second.
The chemical and physical properties of gaseous oxygen, methyl alcohol, and gasoline are given in Table II.
Physical Properties of Selected Rocket Propellants
|Propellant||Gaseous Oxygen||Methyl Alcohol||Gasoline|
|Effect on metals||none||none||none|
|Density||0.083 lb/ft3||48 lb/ft3||44.5 lb/ft3|", "score": 33.14850120092231, "rank": 18}, {"document_id": "doc-::chunk-2", "d_text": "Question – Why was Raptor thrust reduced from ~300 tons-force to ~170 tons-force?
One would think that for (full-flow staged combustion…) rocket engines bigger is usually better: better surface-to-volume ratio, less friction, less heat flow to handle at boundaries, etc., which, combined with the target wet mass of the rocket defines a distinct ‘optimum size’ sweet spot where the sum of engines reaches the best 
thrust-to-weight ratio.
Yet Raptor’s s/l thrust was reduced from last year’s ~300 tons-force to ~170 tons-force, which appears to be too large a reduction to be solely dictated by optimum single-engine TWR considerations.
What were the main factors that led to this change?
Elon Musk – We chickened out
The engine thrust dropped roughly in proportion to the vehicle mass reduction from the first IAC talk. In order to be able to land the BF Ship with an engine failure at the worst possible moment, you have to have multiple engines. The difficulty of deep throttling an engine increases in a non-linear way, so 2:1 is fairly easy, but a deep 5:1 is very hard. Granularity is also a big factor. If you just have two engines that do everything, the engine complexity is much higher and, if one fails, you’ve lost half your power. Btw, we modified the BFS design since IAC to add a third medium area ratio Raptor engine partly for that reason (lose only 1/3 thrust in engine out) and allow landings with higher payload mass for the Earth to Earth transport function.", "score": 33.0074169617371, "rank": 19}, {"document_id": "doc-::chunk-3", "d_text": "T/W 25 and Isp of 1,000 is the lower end of acceptable performance for a HTHL vehicle. The engine is useless for VTVL due to throttling problems and flight profile restrictions. One thing that helps this engine concept is that the cagejet layout naturally lends itself to very thin cages of large diameter. By turning this system 90 degrees from the normal jet engine placement, it is possible to mount these engines inside the wings or tail surfaces. The vertical tail in particular is a desirable location.
The uses I see for this are for vehicles that need medium duration climb/cruise with serious acceleration during portions of the flight. The Rocketplane proposals would benefit from a lighter engine with more thrust. 
The White Knight series could use something like this to supplement normal turbofans for accelerating off the runway and initial climb. Turn them off during the cruise then light them again for a pop up from 10,000 feet to 60,000 feet for a seriously enhanced release altitude and attitude. The Air Launch group could enhance an airliner for their carrier.\nPerhaps the most useful thing for it would be for testing spacecraft airframes. It would be nice to be able to wring out the airframes early on and have back up thrust during early rocket airframe trials.\nThis is the same concept I discussed at Space Access 2004 with a couple more rotors. It is a lot less complicated than it sounds. You can get an idea of it by driving a squirrel cage fan with shop air and feeling the wind it throws out the rest of the perimeter. Use a face shield.\nThe sketch is a simple two rotor system. A four rotor system is actually smaller because the burns are at 16 atm and 8 atm instead of 4 and 2 in the above. The boost rotor only has to handle 1/8 of the volume/second of the main rotor. It can be seen that the individual cages can operate at optimum rpm for the changing conditions of a flight without being slaved to the other stages. It is a simpler method of flow matching than the variable stators on modern turbojets and fans.", "score": 32.54301851265082, "rank": 20}, {"document_id": "doc-::chunk-5", "d_text": "the distance traveled during the deceleration phase alone--then find the fuel/payload ratio using the above formula, then square the resulting ratio to get the ratio for the whole trip. 
(Similarly, if you have the final velocity Delta-v at the end of the acceleration phase/beginning of the deceleration phase, then you can use the formula e^((c/v_e) * atanh(Delta-v/c)) I mentioned earlier, which gives the mass ratio as a function of Delta-v and the exhaust velocity v_e, then square the result to get the ratio for the whole trip including both acceleration and deceleration phases.)\nIf you want to know the rationale for this, start by noting that the formula I posted should work fine for the deceleration phase alone (decelerating at A meters/second² from an initial velocity V to rest requires exactly the same fuel ratio as accelerating at A meters/second² from an initial state of rest to a final velocity of V), so it can tell you how many kilograms of fuel+payload will be needed at the beginning of the deceleration for every kilogram of payload at the end of deceleration. Then the number of kilograms at the beginning of deceleration can be treated as the \"payload\" mass that needs to be brought up to speed during the first acceleration phase, so you'll have the same ratio as before for fuel+payload to final payload over the course of the acceleration. Thus if the mass at the beginning of deceleration is R times bigger than the final mass at the end of deceleration, and the mass at the beginning of acceleration is also R times bigger than the mass at the end of acceleration (which is also the mass at the beginning of deceleration), then this implies the mass at the beginning of acceleration is R*R = R² times bigger than the mass at the end of deceleration.\nThe Serenity and other ships (not sure if it's all of them) use what is essentially controlled fusion explosions at the tail end of the ship to propel the ship at greater than normal speeds. It is never made clear how fast the ships do move at top speeds, but it is still at sub-light speeds. 
But even so, they appear to achieve speeds much greater than what is capable of our technology today.", "score": 31.847858358217447, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "At full throttle, depending on the efficiency of its three RVacs at sea level, Starship S20’s six Raptor V1.0 engines could produce ~1100 tons (~2.4M lbf) of thrust. By comparison, SpaceX’s workhorse Falcon 9 rocket produces around 760 tons (~1.7M lbf) of thrust at liftoff, meaning that Starship will likely become the most powerful single-core rocket the company has ever tested even if it never throttles above ~70%.\nThere’s a good chance that SpaceX will start Ship 20’s next round of tests by separately firing both sets of three Raptor Center and Vacuum engines or with a mixed three or four-engine test to follow the latest two-engine test. SpaceX could also take the most iterative approach and test three, four, and five engines at a time before the final six-engine test. Regardless, virtually all possible static fire tests Ship 20 is now configured to perform will be program ‘firsts’ of some kind. 
Stay tuned for updates on the first of those tests, preparations for which could begin as early as 10am CDT (15:00 UTC), November 1st.", "score": 31.818755951093344, "rank": 22}, {"document_id": "doc-::chunk-4", "d_text": "If anyone wants to play around with different distances (in case Serenity makes shorter hops and refuels) and different exhaust velocities, I'll put all the above equations together into one long one that can just be copy and pasted into the the online calculator to find the ratio of masses before and after using up the fuel, just substitute whatever distance you want for the four D's in the equation (measured in light hours), whatever acceleration you want for the four A's in the equation (where 5.5g = 0.00647, and 1g = 0.00011776), and whatever exhaust velocity you want for the single V in the equation (measured as a fraction of light speed):\ne^((1/V) * atanh((A * sqrt((D)² + (2 * D/A)))/(sqrt(1 + (A * sqrt((D)² + (2 * D/A)))²))))\nFor example, plugging in D=55, A=0.00647, and V=0.089, the calculator gives the same mass ratio of 10079 found earlier.\nOn the other hand, I suppose you could imagine they use some exotic technology (perhaps the same technology they use to make artificial gravity) to accelerate the exhaust to speeds much faster than real-world fusion reactions, or you could ignore the material on the DVD saying it was fusion and imagine it was something like a matter/antimatter reaction, where the effective exhaust velocity can be close to light speed. 
For example, if we use the same equation but with an exhaust velocity of 0.9c rather than 0.089c, the ratio of initial mass with fuel to final mass without fuel would just be around 2.5.\nNote: That last formula was assuming continuous steady acceleration all the way from departure point to destination, which would result in reaching the destination at a very high speed; a more natural assumption might be to accelerate for the first half of the trip, then decelerate at the same rate for the second half, so you'd come to rest at your destination (you can also imagine a constant-velocity coasting phase in between the end of acceleration and the beginning of deceleration, this wouldn't affect the amount of fuel used). In that case, all you have to do is set the D to be half the total distance crossed during both acceleration and deceleration--i.e.", "score": 31.516794495624307, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "It looks like you're using an Ad Blocker.\nPlease white-list or disable AboveTopSecret.com in your ad-blocking tool.\nSome features of ATS will be disabled while you continue to use an ad-blocker.\nThe performance of a rocket depends almost entirely on the velocity with which the propellant is exhausted,” he notes. Thus, “the elementary laws of mechanics – in this case relativistic mechanics, but still the elementary laws of mechanics – inexorably impose a certain relation between the initial mass and the final mass of the rocket in the ideal case… It follows very simply from conservation of momentum and energy, the mass-energy relation, and nothing else.\nFor our vehicle we shall clearly want a propellant with a very high exhaust velocity. 
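The copy-and-paste mass-ratio formula quoted above is easy to check in Python, using the same conventions as the text (distance D in light-hours, acceleration A in units of c per hour, exhaust velocity V as a fraction of c):

```python
import math

def mass_ratio(D, A, V):
    """Initial/final mass ratio for constant acceleration A over distance D with
    effective exhaust velocity V, mirroring the formula quoted above:
    e^((1/V) * atanh(x / sqrt(1 + x^2))) with x = A * sqrt(D^2 + 2*D/A)."""
    x = A * math.sqrt(D * D + 2.0 * D / A)
    return math.exp(math.atanh(x / math.sqrt(1.0 + x * x)) / V)

print(round(mass_ratio(55, 0.00647, 0.089)))   # ~10079, matching the value quoted above
print(round(mass_ratio(55, 0.00647, 0.9), 1))  # near-light exhaust: ~2.5
```

Note that atanh(x/√(1+x²)) is just asinh(x), so `math.asinh(x)` would give the same result.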
Putting all practical questions aside, I propose, in my first design, to use the ideal nuclear fusion propellant… I am going to burn hydrogen to helium with 100 percent efficiency; by means unspecified I shall throw the helium out the back with kinetic energy, as seen from the rocket, equivalent to the entire mass change. You can’t beat that, with fusion. One can easily work out the exhaust velocity; it is about 1/8 the velocity of light. The equation of Figure 13 tells us that to attain a speed 0.99c we need an initial mass which is a little over a billion times the final mass.\nThis is no place for timidity, so let us take the ultimate step and switch to the perfect matter-antimatter propellant…. The resulting energy leaves our rocket with an exhaust velocity of c or thereabouts. This makes the situation very much better. To get up to 99 percent the velocity of light only a ratio of 14 is needed between the initial mass and the final mass.\nWell, this is preposterous, you are saying. That is exactly my point. It is preposterous. And remember, our conclusions are forced on us by the elementary laws of mechanics.\nAll this stuff about traveling around the universe in space suits – except for local exploration, which I have not discussed – belongs back where it came from, on the cereal box.\na one-way trip of thirty-seven years (the distance to Zeta 1 or 2 Reticuli) at 99.9 percent c would take only twenty months’ crew time; at 99.99 percent c it would take only six months’ crew time.", "score": 31.347439173901176, "rank": 24}, {"document_id": "doc-::chunk-2", "d_text": "But what about the real-world engineering of actually building such an engine—managing the plasma and its thermal properties, then successfully firing it for a long period of time? That has proven challenging, and it has led many to doubt Vasimr’s practicality.\n…Speaking almost no English at the time, he immigrated to the United States from Costa Rica in 1969 to finish high school. 
Chang-Díaz then earned a doctoral degree in plasma physics from Massachusetts Institute of Technology. Later, as an astronaut, Chang-Díaz flew seven Space Shuttle missions, tying Jerry Ross’ record for most spaceflights by anyone, ever.
…The rocket engine starts with a neutral gas as a feedstock for plasma, in this case argon. The first stage of the rocket ionizes the argon and turns it into a relatively “cold” plasma. The engine then injects the plasma into the second stage, the “booster,” where it is subjected to a physics phenomenon known as ion cyclotron resonance heating. Essentially, the booster uses a radio frequency that excites the ions, swinging them back and forth.
As the ions resonate and gain more energy, they are spun up into a stream of superheated plasma. This stream then passes through a corkscrew-shaped nozzle and is accelerated out of the back of the rocket, producing a thrust….
The Sun powers both the production of plasma and the booster exciting the plasma, and the extent to which it does either can be shifted. When a spacecraft needs more thrust, more power can be put into making plasma. This process uses more propellant, but it provides the thrust needed to move out of a gravity well, such as Earth orbit. Later, when the vehicle is moving quickly, more power can be shifted to the booster, providing a higher specific impulse and greater fuel economy.
At the last deglaciation Earth’s largest biome, mammoth-steppe, vanished….Analyses of fossil 14C dates and reconstruction of the mammoth-steppe climatic envelope indicated that changing climate wasn’t the reason for the extinction of this ecosystem. We calculate, based on animal skeleton density in frozen soils of northern Siberia, that mammoth-steppe animal biomass and plant productivity, even in these coldest and driest of the planet’s grasslands, were close to those of an African savanna.", "score": 31.319705768620302, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "
Harvester. Solar system class performance.\n~6.5 - 10.9km/s ΔV\n2,350kg fully fueled\n500kg payload capability\n28v, 12v, 5v, 3.3v\nWith 9 PECO engines churning out solar system conquering ΔV the MX-9 can deliver up to 500kg to the lunar surface from GTO\nDesigned for Frontier Class exploration capabilities, MX-9 will support robust lunar sample return operations. Like it’s MX-5 little brother, the MX-9 can also be outfitted with MX-1 or MX-2 staged systems that can deliver over 10kms ΔV and extend its reach to span the solar system, and beyond.\nAvailable in orbiter, lander, deep space probe and sample return configurations.", "score": 30.53333292433365, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "VASIMR plasma engine: Earth to Mars in 39 days?\nIn Arthur C. Clarke’s classic science fiction novels and movies 2001: A Space Odyssey and 2010: Odyssey Two, the spaceships Discovery and Alexei Leonov make interplanetary journeys using plasma drives. Nuclear reactors heat hydrogen or ammonia to a plasma state that’s energetic enough to provide thrust.\nAn electric power source ionizes hydrogen, deuterium, or helium fuel into a plasma by stripping away electrons. Magnetic fields then direct the charged gas in the proper direction to provide thrust.\n“A rocket engine is a canister holding high-pressure gas,” Chang Diaz explained. “When you open a hole at one end, the gas squirts out and the rocket goes the other way. The hotter the stuff in the canister, the higher the speed it escapes and the faster the rocket goes. But if it’s too hot, it melts the canister.”\nThe VASIMR engine is different, Chang Diaz explained, because of the fuel’s electrical charge: “When gas gets above 10,000 [kelvins], it changes to plasma – an electrically charged soup of particles. And these particles can be held together by a magnetic field. 
The magnetic field becomes the canister, and there is no limit to how hot you can make the plasma.”\nChang Diaz has pointed out that hydrogen would be an advantageous fuel for the VASIMR engine because the spacecraft would not have to lift off carrying all the fuel it needs for the journey.\n“We’re likely to find hydrogen pretty much anywhere we go in the Solar System,” he said.\nA spacecraft using conventional chemical rockets would take eight months to get to Mars during opposition. However, the VASIMR engine would make the journey in as little as 39 days.\nChang Diaz explained: “Remember, you are accelerating the first half of the journey – the other half you’re slowing, so you will reach Mars but not pass it. The top speed with respect to the Sun would be about 32 miles per second [or 51.5 km/s]. But that requires a nuclear power source to heat the plasma to the proper temperature.”\nThe use of nuclear power in space is not without its controversy. In 1997, there was widespread public concern when NASA’s Cassini probe, which carried a plutonium battery, made a flyby of Earth to perform a gravity assist.", "score": 30.043465833993995, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "Learn about this topic in these articles:\neffective exhaust velocity in rockets\n...for systems using electromagnetic acceleration. In engineering circles, notably in the United States, the effective exhaust velocity is widely expressed in units of seconds, which is referred to as specific impulse. 
Values in seconds are obtained by dividing the effective exhaust velocities by the constant factor 9.81 metres per second squared (32.2 feet per second squared).", "score": 29.79207822342704, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Kerbal Space Program rocket scientist's cheat sheet: Delta-v maps, equations and more for your reference so you can get from here to there and back again.
- 1 Math
- 2 Math examples
- 3 See also
Thrust-to-Weight Ratio (TWR)
- → See also: Thrust-to-weight ratio
TWR = F / (m · g) > 1
This follows Newton's second law of motion. If this value is less than 1, you won't leave the ground. Keep in mind that you need the surface gravitational acceleration of the body you are launching from!
- F is the thrust of the engines
- m is the total mass of the craft
- g is the gravitational acceleration
Combined Specific Impulse (Isp)
- → See also: Specific impulse
If the Isp is the same for all engines in a stage, then the combined Isp is equal to that of a single engine. If the Isp is different for engines in a single stage, then use the following equation:
Isp = (F1 + F2 + …) / (F1/Isp1 + F2/Isp2 + …)
Delta-v (Δv)
- → See also: Tutorial:Advanced Rocket Design
Δv = ln(m_start / m_end) · Isp · 9.81 m/s²
Basic calculation of a rocket's Δv. Use the atmospheric and vacuum thrust values for atmospheric and vacuum Δv, respectively.
- Δv is the velocity change possible in m/s
- m_start is the starting mass in the same unit as m_end
- m_end is the end mass in the same unit as m_start
- Isp is the specific impulse of the engine in seconds
True Δv of a stage that crosses from atmosphere to vacuum
|other bodies' data missing|
Calculation of a rocket stage's Δv, taking into account transitioning from atmosphere to vacuum. Δvout is the amount of Δv required to leave a body's atmosphere, not reach orbit. This equation is useful to figure out the actual Δv of a stage that transitions from atmosphere to vacuum.
Various fan-made maps showing the Δv required to travel to a certain body.", "score": 29.39259288421308, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "
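The cheat-sheet formulas above (TWR, combined Isp, Δv) translate directly into a few lines of Python; the engine and mass numbers below are made-up illustrations, not figures from the text:

```python
import math

G0 = 9.81  # m/s^2 (Kerbin's surface gravity equals Earth's in KSP)

def twr(thrust_n, mass_kg, g=G0):
    """Thrust-to-weight ratio; must exceed 1 to lift off."""
    return thrust_n / (mass_kg * g)

def combined_isp(engines):
    """Combined Isp for a stage: sum(F_i) / sum(F_i / Isp_i).
    `engines` is a list of (thrust_N, isp_s) pairs."""
    return sum(f for f, _ in engines) / sum(f / isp for f, isp in engines)

def delta_v(m_start, m_end, isp_s):
    """Tsiolkovsky: dv = ln(m_start / m_end) * Isp * g0."""
    return math.log(m_start / m_end) * isp_s * G0

stage = [(215_000, 320), (215_000, 320), (60_000, 345)]  # hypothetical engines
isp = combined_isp(stage)
print(f"TWR: {twr(490_000, 30_000):.2f}")
print(f"combined Isp: {isp:.1f} s")
print(f"dv: {delta_v(30_000, 12_000, isp):.0f} m/s")
```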
Over 100,000km/h. That’s roughly the same speed at which the Earth orbits the Sun.", "score": 28.916941000144423, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "For 60 years, NASA has used chemical rockets to send its astronauts into space, and to get its spacecraft from planet to planet. The huge million-pound thrusts sustained for only a few minutes were enough to do the job, mainly to break the tyranny of Earth’s gravity and get tons of payload into space. This also means that Mars is over 200 days away from Earth and Pluto is nearly 10 years. The problem: rockets use propellant (called reaction mass) which can only be ejected at speeds of a few kilometers per second. To make interplanetary travel a lot zippier, and to reduce its harmful effects on passenger health, we have to use rocket engines that eject mass at far-higher speeds.\nIn the 1960s when I was but a wee lad, it was the hey-day of chemical rockets leading up to the massive Saturn V, but I also knew about other technologies being investigated like nuclear rockets and ion engines. Both were the stuff of science fiction, and in fact I read science fiction stories that were based upon these futuristic technologies. But I bided my time and dreamed that in a few decades after Apollo, we would be traveling to the asteroid belt and beyond on day trips with these exotic rocket engines pushing us along.\nWell…I am not writing this blog from a passenger ship orbiting Saturn, but the modern-day reality 50 years later is still pretty exciting. Nuclear rockets have been tested and found workable but too risky and heavy to launch. Ion engines, however, have definitely come into their own!\nMost geosynchronous satellites use small ‘stationkeeping’ ion thrusters to keep them in their proper orbit slots, and NASA has sent several spacecraft like Deep Space-1, Dawn powered by ion engines to rendezvous with asteroids. 
Japan’s Hayabusa spacecraft also used ion engines, as did ESA’s BepiColombo and the LISA Pathfinder. These engines eject charged xenon atoms (ions) to speeds as high as 200,000 mph (90 km/sec), but the thrust is so low it takes a long time for spacecraft to build up to kilometer/sec speeds. The Dawn spacecraft, for example, took 2000 days to get to 10 km/sec although it only used a few hundred pounds of xenon!
But on the drawing boards even more powerful engines are being developed.", "score": 28.699344772168935, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Rocket propellant is mass that is stored, usually in some form of propellant tank, prior to being used as the propulsive mass that is ejected from a rocket engine in the form of a fluid jet to produce thrust. A fuel propellant is often burned with an oxidizer propellant to produce large volumes of very hot gas. These gases expand and push on a nozzle, which accelerates them until they rush out of the back of the rocket at extremely high speed, making thrust. Sometimes the propellant is not burned, but can be externally heated for more performance. For smaller attitude control thrusters, a compressed gas escapes the spacecraft through a propelling nozzle.
In ion propulsion, the propellant is made of electrically charged atoms, which are magnetically pushed out of the back of the spacecraft. Magnetically accelerated ion drives are not usually considered to be rockets, however, but a similar class of thrusters uses electrical heating and magnetic nozzles.
Rockets create thrust by expelling mass backwards in a high speed jet (see Newton's Third Law). Chemical rockets, the subject of this article, create thrust by reacting propellants into very hot gas, which then expands and accelerates within a nozzle out the back. 
The resulting forward force, known as thrust, is the mass flow rate of the propellants multiplied by their exhaust velocity (relative to the rocket), as specified by Newton's third law of motion. Thrust is therefore the equal and opposite reaction that moves the rocket, and not any interaction of the exhaust stream with air around the rocket (but see base bleed). Equivalently, one can think of a rocket being accelerated upwards by the pressure of the combusting gases in the combustion chamber and nozzle. This operational principle stands in contrast to the commonly held assumption that a rocket "pushes" against the air behind or below it. Rockets in fact perform better in space (where there is in theory nothing behind or beneath them to push against), because they do not need to overcome air resistance and atmospheric pressure on the outside of the nozzle.
The maximum velocity that a rocket can attain in the absence of any external forces is primarily a function of its mass ratio and its exhaust velocity. The relationship is described by the rocket equation: Vf = Ve ln(M0 / Mf). The mass ratio is just a way to express what proportion of the rocket is fuel when it starts accelerating.", "score": 27.994808662794743, "rank": 32}, {"document_id": "doc-::chunk-2", "d_text": "
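Rearranging the rocket equation Vf = Ve ln(M0 / Mf) gives the propellant fraction directly: M_prop/M0 = 1 − e^(−Vf/Ve). A short sketch; the delta-v and exhaust-velocity numbers are assumed round figures, not from the text:

```python
import math

def propellant_fraction(delta_v, v_exhaust):
    """Fraction of initial mass that must be propellant to achieve delta_v,
    from Vf = Ve * ln(M0 / Mf) rearranged to 1 - exp(-Vf / Ve)."""
    return 1.0 - math.exp(-delta_v / v_exhaust)

# Reaching ~9,300 m/s (an assumed, typical low-Earth-orbit budget) with a
# kerosene engine exhausting at ~3,000 m/s:
print(f"{propellant_fraction(9300, 3000):.1%}")  # ~95.5% of liftoff mass is propellant
```

This is why the mass ratio dominates launch vehicle design: for a fixed exhaust velocity, the required propellant fraction grows exponentially with the delta-v target.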
In April, Busek won $5.1 million from NASA’s Small Business Innovation Research program to work on bigger solar electric thrusters in the 10 kilowatt to 20 kilowatt range.\nA Little Thrust Can Go a Long Way\nAlthough NASA insists on pushing the state of the art for the asteroid retrieval mission and later Mars missions, even the comparatively small and weak electric propulsion systems flying today are powerful enough to be useful. The Air Force proved that in 2010, when the service’s Advanced Extremely High Frequency satellite lost its main onboard engine and had to rely on Aerojet-built electric thrusters to boost from the transfer orbit where its rocket left it up to geostationary orbit. The trip took nine months.\nThe electric thrusters provided by Aerojet for the Air Force satellite — which are among the most powerful being flown today — produce only about 250 millinewtons of thrust. NASA’s Dawn spacecraft, which launched in 2007 to explore two of the largest asteroids in the solar system, gets a whopping 90 millinewtons or so from its xenon-fueled ion engine.\nFor comparison, a commercially available 8-gram, solid-fuel motor for a model rocket can produce just over 10 newtons of thrust at its peak, making it about 1,000 times more powerful than Dawn’s engine — for the fraction of a second the toy motor is capable of firing.\nDawn’s engine, like other electric thrusters, can stay lit for much longer than that. And they need to. The tiny amount of thrust such engines produce means spacecraft that use them must perform long burns in order to get anywhere.", "score": 27.72972038949445, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "A next-generation plasma rocket being developed by former NASA astronaut Franklin Chang Diaz called the Variable Specific Impulse Magnetoplasma Rocket (VASIMR) has been touted as a way to get astronauts to Mars in weeks rather than months, as well as an innovative, cheap way to re-boost the International Space Station. 
But in a biting commentary posted on Space News and the Mars Society website, “Mars Direct” advocate Robert Zubrin calls VASIMR a “hoax” saying the engine “is neither revolutionary nor particularly promising. Rather, it is just another addition to the family of electric thrusters, which convert electric power to jet thrust, but are markedly inferior to the ones we already have,” adding, “There is thus no basis whatsoever for believing in the feasibility of Chang Diaz’s fantasy power system.”\nThe VASIMR uses plasma as a propellant. A gas is ionized using radio waves entering into a plasma state. As ions the plasma can be directed and accelerated by a magnetic field to create specific thrust. The purported advantage of the VASIMR lies in its ability to change from high impulse to low impulse thrust as needed, making it an ideal candidate for a mission beyond low Earth orbit.\nChang Diaz’ company, the Ad Astra Rocket Company successfully tested the VASIMR VX-200 plasma engine in 2009. It ran at 201 kilowatts in a vacuum chamber, passing the 200-kilowatt mark for the first time. “It’s the most powerful plasma rocket in the world right now,” said Chang-Diaz at the time. Ad Astra has signed a Space Act agreement with NASA to test a 200-kilowatt VASIMR engine on the International Space Station, reportedly in 2013.\nThe tests would provide periodic boosts to the space station, which gradually drops in altitude due to atmospheric drag. ISS boosts are currently provided by spacecraft with conventional thrusters, which consume about 7.5 tons of propellant per year. 
By cutting this amount down to 0.3 tons, Chang-Diaz estimates that VASIMR could save NASA millions of dollars per year.\nFor the engine to enable trips to Mars in a reported 39 days, a 10- to 20-megawatt VASIMR engine would need to be coupled with nuclear power to dramatically shorten human transit times between planets.", "score": 27.624422175032723, "rank": 34}, {"document_id": "doc-::chunk-1", "d_text": "The engine ignited, but the ignition did not sustain as the Fuel Booster Turbo Pump (FBTP) shut down after reaching a speed of about 34,500 rpm, 480 milliseconds after ignition, due to the FBTP being starved of Liquid Hydrogen (LH2). On 27 March 2013 the engine was successfully tested under vacuum conditions. The engine performed as expected and was qualified to power the third stage of the GSLV Mk-2 rocket. On 5 January 2014 the cryogenic engine performed successfully and launched the GSAT-14 satellite in the GSLV-D5/GSAT-14 mission.\nCE-7.5 is being used in the third stage of ISRO's GSLV Mk.II rocket.\n- "Cryogenic engine test a big success, say ISRO officials". Indian Express. Retrieved 27 December 2013.\n- "GSLV-D3". ISRO. Archived from the original on 16 April 2010. Retrieved 8 January 2014.\n- "GSLV-D3 brochure" (PDF). ISRO. Archived from the original (PDF) on 7 February 2014.\n- "GSLV MkIII, the next milestone". Frontline. 7 February 2014.\n- "Flight Acceptance Hot Test Of Indigenous Cryogenic Engine Successful". ISRO. Retrieved 8 January 2014.\n- "Indigenous Cryogenic Upper Stage". Archived from the original on 6 August 2014. Retrieved 27 September 2014.\n- "GSLV-D5". ISRO. Archived from the original on 6 October 2014. Retrieved 27 September 2014.\n- "GSLV-D5 launch video – CE-7.5 thrust was uprated by 9.5% to 82 kN and then brought back to nominal thrust of 73.55 kN". Doordarshan National TV.\n- "How ISRO developed the indigenous cryogenic engine". The Economic Times.\n- "Archived copy". 
Archived from the original on 4 January 2014. Retrieved 5 January 2014.\n- "Indigenous Cryogenic Upper Stage Successfully Flight Tested On-board GSLV-D5". ISRO.", "score": 27.26890466633041, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "• Apoapsis Delta V Burn: The rocket firing at the highest point of a Transfer Orbit.\n• Periapsis Delta V Burn: The rocket firing at the lowest point of a Transfer Orbit.\n• Transfer Time: The time between apoapsis and periapsis Delta V rocket firings.\nGetting from one orbital altitude to another not only takes delta v, it also takes time. It isn’t as simple as aiming yourself and firing your rocket. Orbital mechanics is a ballet in space, where the lower your orbit the faster you go, and vice versa.\nAny change in orbital velocity and you wind up in an elliptically-shaped orbit, instead of the fairly stable circular orbit.\nTransferring from one orbit to another is accomplished by changing a circular orbit (represented by the green and red circles in the image above) into an elliptical orbit (represented by the yellow curve). The Transfer Time is the time it takes for the spacecraft to follow the path of the yellow curve.\nTo get from one orbit to another, we must push ourselves off, and then brake when we get there. We coast in between these two maneuvers. 
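The coast time worked out below is half the orbital period of the transfer ellipse, Ts = π·√((r1 + r2)^3 / (8µ)), where µ is Earth's gravitational parameter GM (about 3.986 × 10^14 m^3/s^2 — not the 9.8 m/s^2 surface gravity). A minimal Python sketch of that calculation; the function name is mine:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter GM, m^3/s^2

def hohmann_transfer_time(r1_m: float, r2_m: float) -> float:
    """Half the period of the transfer ellipse whose semi-major axis
    is a = (r1 + r2) / 2, i.e. Ts = pi * sqrt(a^3 / mu)."""
    a = (r1_m + r2_m) / 2.0
    return math.pi * math.sqrt(a**3 / MU_EARTH)

# With the HST/ISS radii used in the worked example (6,708.14 km and
# 6,948.14 km), the coast lasts about 2,808 s, i.e. roughly 46.8 min.
t = hohmann_transfer_time(6_708_140.0, 6_948_140.0)
print(round(t), "s =", round(t / 60, 1), "min")
```

Using surface gravity for µ here would give a wildly wrong answer, which is the most common mistake with this formula.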
We can calculate how much time we will coast from one point in space to another.\nThe equation to calculate the transfer time from one orbital altitude to another is given by one of the Hohmann Transfer Orbit Equations:\nTs = π × √( (r1 + r2)^3 / (8µ) )\nwhere:\n• Ts is Transfer Time in seconds\n• r1 is the smaller orbital radius\n• r2 is the larger orbital radius\n• µ (mu) is Earth's standard gravitational parameter, GM ≈ 3.986 × 10^14 m^3/s^2\n• π (pi) is the constant 3.141592653589793238462643383279502884197169399…\nThe number π (pi) is involved in the equation because we are actually going around in circles.\nLet’s suppose you want to go from the Hubble Space Telescope (HST) to the International Space Station (ISS). How long would it take to get there?\nThe orbital radius is simply the orbital altitude plus the radius of the Earth. So for the HST (r1), we get 6,708.14 km, and for the ISS (r2) we get 6,948.14 km.\nPlugging everything into the equation, we get about 2,808 seconds of transfer time. If we divide that by sixty (seconds per minute) we get about 46.8 minutes.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-2", "d_text": "The change in velocity needed to perform a maneuver such as this can be calculated using the vis-viva equation which is derived from Newton’s law of universal gravitation and Kepler’s third law. The Hohmann transfer has a variety of applications, especially with regard to travelling to other celestial bodies; however, it is more nuanced as you must include phase angles and other complex concepts.\nBaker, Robert and Maud Makemson. An Introduction to Astrodynamics 2nd ed., New York, Academic Press, 1967.\nBraeunig, Robert. Rocket and Space Technology. http://www.braeunig.us/space/index_top.htm\nLogsdon, Tom. Orbital Mechanics: Theory and Applications. New York, Wiley, 1997.\nElert, Glenn. The Physics Hypertextbook. 
https://physics.info/", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-1", "d_text": "These ships were designed for four major propulsive events: first, to accelerate from Earth orbit velocity to Mars transfer orbit velocity; second, to decelerate from Mars transfer orbit velocity to Mars orbit velocity; third, to accelerate from Mars orbit velocity to Earth transfer orbit velocity; and fourth, to decelerate from Earth transfer orbit velocity to Earth orbit velocity. The total amount of change in velocity required for these four maneuvers is 11.48 kilometers per second. The amount of propellant required to do this using the low-performance rocket motors of the day required that the spaceships be over 98 percent propellant when departing from Earth orbit. This meant the empty weight of the spaceship, plus the weight of the crew and all their supplies and equipment had to be less than 2 percent of the departure mass. In all of human history, no one has ever come close to building any type of vehicle with a propellant fraction that high and with an empty weight fraction that low. 
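The "over 98 percent propellant" figure above follows directly from the rocket equation. A minimal sketch, assuming an exhaust velocity of about 3,000 m/s for the low-performance storable propellants of the day (my assumption; the text does not state one):

```python
import math

def propellant_fraction(delta_v_m_s: float, ve_m_s: float) -> float:
    """Fraction of departure mass that must be propellant, from the
    rocket equation: m0/mf = exp(dv/ve), fraction = 1 - mf/m0."""
    mass_ratio = math.exp(delta_v_m_s / ve_m_s)
    return 1.0 - 1.0 / mass_ratio

# 11.48 km/s total delta-v at an assumed ve of ~3,000 m/s:
frac = propellant_fraction(11_480.0, 3_000.0)
print(f"{frac:.3f}")  # ~0.978, i.e. roughly 98% propellant
```

A higher-performance engine shrinks this fraction exponentially, which is why the passage's later Skyhook scenario changes the economics so dramatically.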
Even if it were possible to build such a vehicle with existing technology, it would not be affordable on a commercial basis due to the extremely low payload fraction.\nNow assume a non-rotating Skyhook in Earth orbit that has an upper endpoint velocity of just short of Earth escape velocity, an outpost space station at the Earth-Moon L2 Lagrange Point with a local source of propellant (either the Moon or a near-Earth asteroid that has been moved to L2), and a non-rotating Skyhook in Mars orbit that has an upper endpoint velocity of just short of Mars escape velocity as well as a local source of propellant (either Mars or one of the Martian moons), and look at what happens to the change in velocity requirement and the propellant fraction of an Earth-Mars spaceship.\nPassengers and cargo bound for Mars, fly to the lower end of the Skyhook that is in Earth orbit using the fully reusable combination launch system described in previous posts. From there they transfer to the upper end of the Skyhook where they transfer to the spaceship that will take them to the outpost space station at the Earth-Moon L2 Lagrange Point where they board the Earth-Mars spaceship.\nOn the day of departure, the Earth-Mars spaceship leaves the halo orbit at L2 and heads toward Earth where it will perform a gravitational slingshot maneuver as it accelerates to Mars transfer orbit velocity. The change in velocity for these two maneuvers is approximately 1,500 meters per second.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "The engine is made from three main modules: the igniter, injector, and the main engine body, all of which were SLS 3D printed by Shapeways and ExOne in bronze steel and machined afterwards to get the exact fit. 
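As a sanity check on the engine specs quoted just below (total mass flow 0.1093 kg/sec, design specific impulse 209 sec), ideal thrust F = ṁ · Isp · g0 lands right on the 50 lbf design force. The function and constant names here are mine:

```python
G0 = 9.80665          # standard gravity, m/s^2
N_PER_LBF = 4.44822   # newtons per pound-force

def thrust_newtons(mass_flow_kg_s: float, isp_s: float) -> float:
    """Ideal thrust from mass flow and specific impulse: F = mdot * Isp * g0."""
    return mass_flow_kg_s * isp_s * G0

f_n = thrust_newtons(0.1093, 209.0)
print(round(f_n), "N =", round(f_n / N_PER_LBF, 1), "lbf")  # ~224 N, ~50.4 lbf
```

That the quoted numbers close within a percent suggests the spec table was derived the same way.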
Post-processing is a problem due to the hardness of the sintered metal, so some tools break; the internal coolant lines are still not possible to 3D print due to the geometry complexity and metal powder residues.\nThe price is very low: the 3D printed igniter costs some $60, the injector $80 and the rocket engine $260, for a total of just $400. Space exploration with an extremely low budget!\nThe engine is still in development and not yet finished but it is a big step forward in open sourcing aerospace engineering!\n- Fuel: GOX / Ethanol\n- Fuel Mass Flow: 0.0545 kg/sec\n- Oxidizer Mass Flow: 0.0545 kg/sec\n- Total Mass Flow: 0.1093 kg/sec\n- Design Mixture Ratio: 1:1\n- Design Force: 50 lbf\n- Design Chamber Pressure: 150 psia\n- Design Temp: 2572 Kelvin\n- Design Specific Impulse: 209 sec\nHere are some photos of it:\n|Ignited engine. You can clearly see the mach diamonds.|\n|3D printed engine and main lines / sensors|\nHere you can see a live test fire and mach diamonds:\nProject homepage with files and instructions:\nFor the previous 3D printed rocket engine named "Tri-D" look at:\nTo boldly go where no one has gone before!", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "This probably won’t make sense to anyone else, but I need to make a note reminding myself about something I thought about over lunch. (I always lose them otherwise, so maybe this will help me remember.) Read on if that kind of thing interests you.\nGiven: an ideal pseudoablative mass driver which uses pattern conversion to translate the trailing surface into a high-velocity sheet, imparting an acceleration to the remaining mass and exposing a self-similar surface. The mass driver is completely consumed by the process.\n- Develop a set of equations relating the mass of the driver (md), the exit velocity of the propellant (vx), the surface area of the trailing surface (At), the mass of the payload (mp), and the total velocity imparted to the payload (vp). 
Use the rocketry equation as a starting point, but use relativistic pseudovelocities in terms of multiples of c.\n- Figure out how to wedge the instantaneous acceleration (a) in there somewhere.\n- Re-develop the “ideal conversion” equation, which relates the md necessary to accelerate mp to a given pseudovelocity.\n- What kind of exit particle would provide the best efficiency for the driver? A photon?\n- Determine how close this engine comes to the “ideal conversion” developed in 2.\n- Determine the time and md / mp required for an in-system transit at 1g with midpoint turnaround.\n- Determine the time and md / mp required for a 4ly transit at 1g with midpoint turnaround.", "score": 26.357536772203648, "rank": 40}, {"document_id": "doc-::chunk-8", "d_text": "This results in a final minimum delivered total impulse of 96,831 lb-sec. We believe that the flight motor should have had a slightly higher delivered Isp due to altitude effects and delivered just over 100,000 lb-sec.\nVideo of the launch is at http://www.hybrids.com/video/csxt_flight.mpg\nmiles/113km... This posting at arocket was forwarded to me by Andrew Case:\nI just got a call from Ky out at Black Rock (Tue 5/18 7:45pm cdt). They found the payload section intact 20 miles from the launch site and had just returned to base. They are opening it up right now.\nThe 12,000lb thrust motor burned for 10 seconds and pushed the rocket to an estimated 4,200 miles per hour. The GPS units cut in and out so the exact speed and altitude are not determined, but the altitude is estimated to be 70 miles.\nThe pyrotechnic separation of the rocket sections worked as planned and the nosecone section descended under a Rocketman R4 parachute and landed on the side of a mountain.\nIt is believed that the booster/motor section came in ballistic.\n3 sonic booms were heard as it re-entered. [possibly echoes from the mountains? 
- jph]\nKy will be calling in to a morning radio program in the Twin Cities: KS95 at 6:45am Wed morning. Several television talk show appearances are scheduled (I don't have the times yet).\nKy said that everything went perfectly on the first launch attempt and the flight was flawless.\nQuite an accomplishment! Congratulations again to the CSXT team.\nCSXT article... This article - Team Claims Success With Rocket Launch - Space.com/AP - May.18.04 - brings up the question of whether this will be recognized as an official record at Federation Aeronautique Internationale. In our records page here, we cite the previous max altitude for an amateur group as 80km by George Garboden and Reaction Research Society, Nov.23,1996, Black Rock, Nevada with the 2-stage booster with Dart 2nd stage.\nBTW: The article puts Burt Rutan and the SS1 in with "other amateur groups"!!", "score": 26.227040774166106, "rank": 41}, {"document_id": "doc-::chunk-3", "d_text": "- Note: This equation is a guess, an approximation, and is not 100% accurate. Per forum user stupid_chris who came up with the equation: "The results will vary a bit depending on your TWR and such, but it should usually be pretty darn accurate."\n- Equation for Kerbin Atmospheric Escape:\n- True Δv = ( ( Δv atm - 1000 ) / Δv atm ) * Δv vac + 1000\n- True Δv = ( ( Total Δv in atmosphere - 1000 m/s) / Total Δv in atmosphere ) X Total Δv in vacuum + 1000\n- Single Stage with total atmospheric Δv of 5000 m/s, and rated 6000 Δv in vacuum.\n- Transitional Δv = ( ( 5000 Δv atm - 1000 Δv Required to escape Kerbin atmosphere ) / 5000 Δv atm ) X 6000 Δv vac + 1000 Δv Required to escape Kerbin atmosphere = Total Δv of 5800 m/s", "score": 25.65453875696252, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "The one thing to keep in mind is that in order to perform a gravity-assist maneuver, you need to be able to enter a hyperbolic orbit around a given body that is moving relative to your destination. 
And, in order to be in such an orbit, there is a specific range of velocities for every object that you must have (dependent on mass of the object). So the fastest speed you can reach by gravity assist is much less than relativistic speeds because at relativistic speeds, you would not be able to enter into a proper hyperbolic orbit.\nIt is true that at any high speed, a flyby constitutes a hyperbolic orbit; however, to use a gravitational slingshot, you need to enter against the object's motion and exit with the motion. At relativistic speeds and for most regular bodies, your orbit would closely resemble a straight line, and there could be no gain of velocity.\nA good gravity assist works if you can ensure that your hyperbolic trajectory minimizes the angle $\theta$ between entry and exit. It is given by:\n$$\theta = 2\cos^{-1}\left(\frac{1}{e}\right)$$\nWhere $e$ is the eccentricity of the orbit and must satisfy $e\geq1$. Additionally, as your velocity increases, it will force $e$ to become larger unless you significantly increase the mass of each subsequent object.\nThe fastest speed a spacecraft can reach using gravity assists very much depends on the largest mass of the objects you use. However, I cannot give you an estimate of a number because, due to the sheer impracticality of using gravity-assists to achieve extreme velocities, we (rocket scientists) haven't ever tried computing a theoretical limit. I can guarantee you though that without using high density objects (neutron stars, black holes, etc.), no spacecraft will reach velocities near the speed of light by gravity slingshots alone.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-4", "d_text": "When the fuel of the big rocket is finished, we reach a velocity V, then the second stage is ignited, adding another V to the velocity, for a total of 2V.\nStill faster! Now the rocket has mass 8M of which 4M is fuel of the first stage, while 4M is the two-stage rocket of the preceding design. 
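Returning to the flyby geometry discussed above: under the standard hyperbolic-flyby relations, the trajectory's turn angle is δ = π − θ = 2 sin⁻¹(1/e), and the planet-frame speed change is |Δv| = 2·v∞·sin(δ/2) = 2·v∞/e. A small sketch (function names are mine):

```python
import math

def asymptote_angle_rad(e: float) -> float:
    """Angle between entry and exit asymptotes, theta = 2*acos(1/e);
    a small theta (e near 1) means a strong bend, as the text says."""
    assert e >= 1.0
    return 2.0 * math.acos(1.0 / e)

def flyby_delta_v(v_inf_m_s: float, e: float) -> float:
    """Planet-frame speed change: |dv| = 2 * v_inf * sin(delta/2) = 2 * v_inf / e."""
    return 2.0 * v_inf_m_s / e

# Hypothetical example: v_inf = 6 km/s past a planet, eccentricity 1.5
print(math.degrees(asymptote_angle_rad(1.5)))  # ~96.4 degrees
print(flyby_delta_v(6000.0, 1.5))              # 8000.0 m/s
```

The 1/e factor makes concrete why high approach speeds (which drive e up) ruin the assist.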
The first stage gives velocity V, to which the other two add 2V, for a total of 3V.\nBy now you can see the trend. If the mass of the final payload is M, then\n| Total mass | Gives final velocity |\n| 2M | V |\n| 4M | 2V |\n| 8M | 3V |\n| 16M | 4V |\n| 32M | 5V |\n| 64M | 6V |\nEach time the velocity increases by one notch, the mass doubles.\nOne cannot avoid this sort of thing by giving up staging--say, in the rocket of mass 8M, by firing all the 7M of fuel in one blast. That is because (as already noted), as the payload (+ remaining fuel) gains speed, less and less momentum is transferred, since the jet first has to overcome the forward motion. Indeed, the correct derivation (which uses calculus) gives the equivalent of a huge number of little stages, fired one after the other. The same exponential result is still obtained.\nThis is one of the great problems of spaceflight, especially with the first stages which rise from the ground: even a small payload requires a huge rocket. Perhaps some day space explorers will be able to shave off some fuel weight by using air-breathing rockets ("scramjets") but only for the lowest 1/4 to 1/3 of the orbital velocity. Launching from a high-flying airplane--like Burt Rutan's "SpaceshipOne", or the "Pegasus" rocket--also helps cut air resistance, another factor. But no other shortcuts are in sight. Once in orbit, of course, more efficient but more gradual ways of generating thrust can be enlisted, like ion propulsion.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "Say at some point the human race gets its act together and actually decides to send a spaceship to another star. We’ll say it’s unmanned because that makes everything so much easier; it’s just a robot probe that’s intended to sate our curiosity about what Alpha Centauri actually bloody looks like, and so we don’t need any life support systems or generation ship malarkey going on. 
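The doubling table above is the rocket equation in disguise: it corresponds to an exhaust velocity of V/ln 2, so that burning half the total mass adds exactly one increment V. A tiny sketch of the exponential trend (names are mine):

```python
def total_mass_for_velocity(payload_mass: float, n_increments: int) -> float:
    """Mass doubling per velocity increment V, as in the table above.
    Equivalent to the rocket equation with ve = V / ln(2):
    m0 = payload * exp(n*V/ve) = payload * 2**n."""
    return payload_mass * 2 ** n_increments

for n in range(1, 7):
    print(n, total_mass_for_velocity(1.0, n))  # 2, 4, 8, 16, 32, 64
```

Six increments already demand a rocket 64 times the payload mass, which is the "great problem of spaceflight" the passage describes.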
We’ll also say that getting the probe out of the Earth’s gravity well – aka the trickiest part of spaceflight – is similarly not an issue because we’re building it in orbit or something. This leaves us just one, fairly major problem: how do we make it go?\n“But Hentzau,” I hear you say, “the laws of Newtonian mechanics say that making it go should be the easiest thing of all! After all, if something is floating in space and you give it a push it’ll just keep going forever and ever, according to Newton’s first law. Surely the hard part isn’t making it go, but in getting it to stop?” Or at least I think I hear you say that; I’m pretty sleep deprived right now so it could also be the crazed squawking of an itinerant bluebird1. Anyway, assuming a really simple case where the probe is stationary with respect to its destination you could indeed just give it a little nudge and it would start wending its merry way to Alpha Centauri without braking for anybody. The only problem is it would take a while. In fact it would take so long that I’m not entirely sure it would get there before Alpha Centauri went nova, and we don’t even have that kind of time. The human race has a notoriously short attention span and we’re going to want results soon – within the next couple of centuries, say. This means that we are going to have to mount some form of interstellar propulsion engine on our probe, and interstellar propulsion engines are, via this rather convoluted introduction, the thing I wanted to talk about today.\nMaking something go in interstellar space is a completely different matter to getting out of a gravity well or travelling between planets. 
For the former the key is a shitload of thrust delivered over a very short period of time so that you don’t fall back to Earth, which is why rocket engines are currently the best we’ve got for that.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "Mass units are arbitrary; use whatever you like, as long as you're consistent. Similarly, Velocity units are arbitrary; the delta-V computed will be in the same units. Specific impulse is in seconds.\nA good explanation of delta-V is given in Chapter 2 of Space Settlements: A Design Study. The diagram to the right is based on Figure 2.2 of this book, and indicates the delta-V needed to transfer among various orbits.\nSpecific Impulse is exhaust velocity divided by g, the acceleration due to gravity on Earth's surface (9.80665 m/s^2). It's a common measure of the \"mass efficiency\" of a rocket engine. You can use the radio buttons above to convert between specific impulse and exhaust velocity.\nThe above calculator is very simplistic; it assumes no drag, and it allows only a single burn. But it can be applied to a great many models. For example, you could calculate the delta-V attainable by a baseball pitcher, throwing baseballs to change his orbit. 
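The Isp/exhaust-velocity conversion described in the calculator text above is a one-liner either way; the function names are mine:

```python
G0 = 9.80665  # m/s^2, the g value quoted in the text

def isp_to_exhaust_velocity(isp_s: float) -> float:
    """ve = Isp * g0 (Isp in seconds, ve in m/s)."""
    return isp_s * G0

def exhaust_velocity_to_isp(ve_m_s: float) -> float:
    """Isp = ve / g0."""
    return ve_m_s / G0

# Example: a 450 s engine (typical hydrolox upper-stage class)
ve = isp_to_exhaust_velocity(450.0)
print(round(ve, 1))  # 4413.0 m/s
```

The seconds unit survives mostly for historical reasons; exhaust velocity is the physically meaningful quantity in the rocket equation.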
Or, if you know the delta-V between an asteroid orbit and Earth orbit, you can calculate how much mass you'd have to eject from the asteroid to park it (which of course depends on how fast your mass launcher is).", "score": 25.028117158896464, "rank": 46}, {"document_id": "doc-::chunk-5", "d_text": "However, to provide a complete picture of the equations used in rocket engine design, we present below the equation for the flow of liquid through a simple orifice\nw = Cd A √( 2 g ρ ΔP )\nwhere:\nw = propellant flow rate, lb/sec\nA = area of orifice, ft^2\nΔP = pressure drop across orifice, lb/ft^2\nρ = density of propellant, lb/ft^3\ng = gravitational constant, 32.2 ft/sec^2\nCd = orifice discharge coefficient\nThe discharge coefficient for a well-shaped simple orifice will usually have a value between 0.5 and 0.7.\nThe injection velocity, or velocity of the liquid stream issuing from the orifice, is given by\nv = Cd √( 2 g ΔP / ρ )\nInjection pressure drops of 70 to 150 psi, or injection velocities of 50 to 100 ft/sec are usually used in small liquid-fuel rocket engines. The injection pressure drop must be high enough to eliminate combustion instability inside the combustion chamber but must not be so high that the tankage and pressurization system used to supply fuel to the engine are penalized.\nA second type of injector is the spray nozzle in which conical, solid cone, hollow cone, or other type of spray sheet can be obtained. When a liquid hydrocarbon fuel is forced through a spray nozzle (similar to those used in home oil burners) the resulting fuel droplets are easily mixed with gaseous oxygen and the resulting mixture readily vaporized and burned. Spray nozzles are especially attractive for the amateur builder since several companies manufacture them commercially for oil burners and other applications. The amateur need only determine the size and spray characteristics required for his engine design and the correct spray nozzle can then be purchased at low cost. 
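A minimal sketch of the orifice relations above, in the same imperial units; the example numbers (orifice size, Cd, pressure drop, density) are hypothetical, chosen to land inside the 50-100 ft/sec injection-velocity range the text recommends:

```python
import math

def orifice_flow_lb_s(cd: float, area_ft2: float, dp_lb_ft2: float, rho_lb_ft3: float) -> float:
    """w = Cd * A * sqrt(2 * g * rho * dP), imperial units as in the text."""
    g = 32.2  # ft/s^2
    return cd * area_ft2 * math.sqrt(2.0 * g * rho_lb_ft3 * dp_lb_ft2)

def injection_velocity_ft_s(cd: float, dp_lb_ft2: float, rho_lb_ft3: float) -> float:
    """v = Cd * sqrt(2 * g * dP / rho)."""
    g = 32.2
    return cd * math.sqrt(2.0 * g * dp_lb_ft2 / rho_lb_ft3)

# Hypothetical example: 1/16-inch orifice, Cd = 0.6, 100 psi drop
# (100 * 144 lb/ft^2), propellant density 50 lb/ft^3.
d_ft = 0.0625 / 12.0
area = math.pi * d_ft**2 / 4.0
print(round(orifice_flow_lb_s(0.6, area, 100.0 * 144.0, 50.0), 4))  # ~0.087 lb/s
print(round(injection_velocity_ft_s(0.6, 100.0 * 144.0, 50.0), 1))  # ~81.7 ft/s
```

Note that pressure must be converted from psi to lb/ft^2 (multiply by 144) to stay consistent with the unit list above.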
Figure 7 illustrates the two types of injectors.\nThe use of commercial spray nozzles for amateur-built rocket engines is highly recommended.\nFigure 7 Fuel injectors for Amateur Rocket Engines.", "score": 25.000000000000068, "rank": 47}, {"document_id": "doc-::chunk-15", "d_text": "Delta V: 2 m/s\n260 km X 345 km orbit to 265 km X 368 km orbit. Delta V: 7 m/s\n265 km X 368 km orbit to 267 km X 391 km orbit. Delta V: 6 m/s\n267 km X 391 km orbit to 300 km X 310 km orbit. Delta V: 32 m/s\nTotal Delta V: 83 m/s", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-3", "d_text": "That implies a dV ~ 6 km/s, but Cassini’s actual SOI was a mere 0.626 km/s, which means Baxter used the wrong figures. Kind of odd since he had copied the “Discovery’s” flight-plan directly from Cassini – every event day in the long voyage is direct from Cassini’s chief events. Since the launch was set in 2008 this is unlikely – the planets would rarely line up for exactly the same flight-plan.\nHere’s Cassini’s approach by orbital radii that I could get figures for…\nRadial Distance (km) Speed (km/s) Notes\n158,500 22.5 Ring plane\n24.00 SOI burn begin\n80,230 31.50 Periapsis\n30.40 SOI burn end\nCassini began its journey with 3,132 kg of propellant – good old space-storable N2O4/UDMH. That mix gets 3,028 m/s exhaust velocity, and Cassini’s main engine puts out 445 N thrust. Total dV ~ 2,500 m/s. A previous major burn was at the last aphelion before the Venus/Earth swing-by that sent Cassini to Jupiter/Saturn – it lasted 88 minutes.\nThe SOI burn lasted 96.4 minutes and used 850 kg of Cassini’s propellant. 
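Those Cassini figures are self-consistent: with thrust F and exhaust velocity ve, the mass flow is F/ve, and a 96.4-minute burn works out to the quoted 850 kg. A quick check, using only numbers from the text above:

```python
# SOI burn cross-check: 445 N thrust, 3,028 m/s exhaust velocity,
# 96.4-minute burn duration (all figures quoted in the text).
thrust_n = 445.0
ve_m_s = 3028.0
burn_s = 96.4 * 60.0

mass_flow = thrust_n / ve_m_s            # kg/s, mdot = F / ve
propellant_used = mass_flow * burn_s     # kg
print(round(propellant_used))  # ~850 kg, matching the quoted figure
```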
Here’s a table summarising the different burns so far – that I have records of…\nDuration (minutes) Mass Used (kg) Notes\n88 776 Deep Space Maneuver\n6 53 TCM 20\n1.2 10.6 TCM 21\n96.4 850 SOI\n61.8 545 Periapsis raise\n253.4 2235 Summation\n102 897 difference\nThe trick now – as far as recreating Baxter’s figures for “Discovery” goes – is working out the dV needed to send the vehicle on its way from LEO. Baxter has four revived Saturn V’s reconditioned to orbit fuelled up Saturn IVB stages at LEO to boost the Shuttle on its way. That’s a lot of hardware, but that’s chemical rockets for you. Once in orbit the Shuttle can dump its SSMEs – which are superfluous for N2O4/UDMH burns – and save lots of extra weight by dumping the tail plane.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-7", "d_text": "If one considers the case of thrusting with maximum acceleration along arcs which are symmetrical around apogee , one can see that the contributions to semimajor axis and eccentricity variations given by the components are negligible (since they are multiplied by ). Therefore, a good suboptimal thrust direction can be obtained by imposing as the only non-zero component of the thrust acceleration. Under these assumptions, the variation of Keplerian parameters will be It should be noted that the terms of the variation of and which depend on will also be very small due to the presence of integrated around . Figure 2 visualises the proposed pattern for thrusting arcs.\nIn order to obtain a fast propagation of the thrusting arcs, the analytical propagation of perturbed motion with finite perturbative elements in time (FPET) derived in [19, 21] will be used. In order to employ FPET, one has also to assume that the thrust acceleration is constant around each thrusting arc, which is reasonable given the low propellant consumption per arc. 
This assumption ensures that, if the engine thrust is constant, the resulting acceleration can also be considered constant over short thrusting arcs.\nMotion propagation with FPET is based on a first-order analytical solution of perturbed Keplerian motion. In this formulation, the state is expressed in nonsingular equinoctial elements: Assuming constant thrust-acceleration in the radial-transverse reference frame: then one can obtain a first-order analytical expansion of the variation of equinoctial elements, parameterised in Longitude and with respect to a reference longitude : where where a are reference values at and are first-order terms as reported in [19, 21]. In ,it has also been shown that this analytical propagation scheme provides good accuracy along relatively long trajectory arcs.\nAs explained above, the only nonzero component of the acceleration will be and since the aim is obtaining a decrease of the orbit energy, it will also be in the negative direction. Therefore, the acceleration azimuth will be and the elevation (since, as already mentioned, the motion will be within the initial orbit plane). The variation of equinoctial elements after an apogee thrusting arc will be given by where is the apocentre longitude, and are the longitudes at the start and end of thrusting, respectively. is the semiamplitude of the apogee thrusting arc. 
Note that, given the in-plane thrust assumption, the out-of-plane elements show no variation and are thus omitted.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-3", "d_text": "Rocket Escape velocities\nFirst cosmic velocity = 7.82 km/s\nSecond cosmic velocity = 11.2 km/s\nThird cosmic velocity = 16.6 km/s\nCalculating escape velocity:\nKinetic energy of the rocket needs to be at least equal to the potential energy of the planet at a radius r from its center\n0.5 m v^2 = G M m / r\nRearranging for the value of the escape velocity vesc\nv = ( 2 G M / r )^0.5\nWhere m is the mass of the projectile or rocket.\nv is the escape velocity, m/s\nM is the mass of the planetary body we are trying to escape. In our case Earth.\nG is the gravitational constant, 6.67 x 10^-11 N m^2/kg^2\nr is the radius or distance between the center of mass of the rocket and the center of mass of the planet of mass M.\nNote: ( )^0.5 is equivalent to square root\nNow how much pressure do we need to reach the 3rd cosmic velocity....\nWhere's my calculator..\nThe maximum velocity currently known is the reference velocity of light in a vacuum at 300,000 km/s.\nThis velocity decreases when light passes through higher density materials:\nSpeed of light through water 225,000 km/s\nthrough glass 200,000 km/s\nthrough diamond 125,000 km/s\nThis means that we would need to improve our current rocket performance by a factor of roughly 18 x 10^3 to approach the speed of light.\nA pretty humbling thought.\nThere is another slight problem in that Einstein's E = mc^2 means that as the spacecraft approaches the speed of light its mass would increase. Since for this formula to be true, the speed of light cannot be exceeded no matter how much energy you put into the system.\nSecond problem is that the majority of images we see of stars in the night sky have taken light years to arrive. 
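The escape-velocity formula derived above is easy to evaluate for Earth; the mass and radius values here are standard figures I've supplied, since the text does not list them:

```python
import math

G = 6.67e-11        # gravitational constant, N m^2/kg^2 (value from the text)
M_EARTH = 5.972e24  # kg (assumed; not given in the text)
R_EARTH = 6.371e6   # m (assumed; not given in the text)

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """v = sqrt(2*G*M/r), from equating kinetic energy with the
    gravitational potential energy as derived above."""
    return math.sqrt(2.0 * G * mass_kg / radius_m)

v = escape_velocity(M_EARTH, R_EARTH)
print(round(v / 1000, 2), "km/s")  # ~11.2 km/s, the second cosmic velocity
```

The result matches the "second cosmic velocity" in the list above to within rounding.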
A sort of cosmic archive, if you like.\nSo some of the stars we see might not exist any longer!\nThe further we can look into the depths of space, the further we actually travel back in light time towards the origin of the universe, the so-called 'Big Bang'.\nESA global gravity variation research satellite\nTo be honest, we are currently unable to explain what makes gravity.\nPretty basic really. But in spite of all the formulae it is still a scientific blind spot.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-7", "d_text": "But actually it takes 42 seconds. To come closer to the planned result, you do half your delta v before and the other half after the maneuver node. But your rocket becomes lighter as you burn and exhaust fuel, so the acceleration increases during the burn, and the time needed for the first half of the delta v is longer than for the second half. To compensate for that, you could use the buttons for how much delta v you burn before and after the maneuver node. They only allow 10% steps, which is probably too coarse. Or you could manually start burning a little sooner or later. Would you know which way to adjust? I don't think it's worth it, and go with 50%. It won't be exact anyways, and you'll have to make correction maneuvers, like it's done irl.\nTime warp till 3-5 min before burn start. I'll time warp 13d in the tracking station. There I select my rocket, which shows the maneuver node, so I can RMB click it and see the time to the node. Double clicking Kerbin pacifies the node.\nTurn the rocket to the blue maneuver marker, which is the direction you should burn. If it's not visible, a blue arrow will point to it.\nWait till burn start. Now the rocket should still point to the maneuver marker. But to simplify the game KSP switches to a rotating frame of reference fixed to the orbital rotation, I think, during time warp, which makes the rocket rotate slowly relative to the stars ... 
and the maneuver marker/direction. So watch out if it changed significantly and correct if necessary.\nBurn. But wait, what's that line in the bar? And the numbers 4 and 2? Stage 4 doesn't have enough fuel for the burn, so mid burn you have to stage as seamlessly as possible.\nAlso, your burn may not be exactly accurate, so it's a good idea not to rely just on the instruments, but watch where your real trajectory is. Focus on Duna, zoom out, so you can at least see your whole planned trajectory (this way you should have the SOI of Duna in view, where your real trajectory will appear). Whoa, the planned trajectory is not there. In the 90 s before the burn I quickly correct the maneuvers.", "score": 24.13039021385966, "rank": 52}, {"document_id": "doc-::chunk-6", "d_text": "That's the right hand rule.\nWhen you accelerate (anti)normal, your direction of motion (your velocity vector, which is speed and direction) changes a bit towards 'up' ('down'). This also changes your orbital plane, like holding out your hand flat and turning your arm along its axis. On the left/right side your orbit (or hand) will move up and down, while along the turn axis it will not. So to move your flyby trajectory, you must make an (anti)normal maneuver 90 degrees before your encounter, or half way between Kerbin and Duna:\nZoom back out, double click the Sun, look at the solar system from 'above' (optionally turn till Kerbin and Duna are on the same horizontal line), and click the trajectory half way between Kerbin and Duna.\nFocus Duna again and zoom in.\nIn the control box select maneuver 2. Adjust (anti)normal delta v to bring the trajectory up/down towards Duna. I want an equatorial orbit, so I want the trajectory to be horizontal.\nKeep adjusting pro/retrograde at maneuver 1 and (anti)normal at maneuver 2 and zooming in till your trajectory is at Duna. Put your Duna Pe at about 25 km. Fly by at the side that will result in a prograde Duna orbit. 
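The burn-splitting point made earlier (the rocket gets lighter as it burns, so the first half of the delta-v takes longer than the second half) can be illustrated with a short sketch. The craft mass, thrust, and exhaust velocity below are made-up illustration values, not from the tutorial:

```python
import math

def burn_time(delta_v, m0, thrust, v_exhaust):
    """Time to deliver delta_v at constant thrust, from the rocket equation:
    m1 = m0 * exp(-delta_v / v_e); propellant flow rate mdot = thrust / v_e."""
    m1 = m0 * math.exp(-delta_v / v_exhaust)
    mdot = thrust / v_exhaust
    return (m0 - m1) / mdot

# Made-up example craft: 10 t, 60 kN thrust, 3000 m/s exhaust velocity,
# performing the 1052.7 m/s burn mentioned in the tutorial.
m0, F, ve, dv = 10000.0, 60000.0, 3000.0, 1052.7

t_total = burn_time(dv, m0, F, ve)
t_first_half = burn_time(dv / 2, m0, F, ve)
print(f"total burn: {t_total:.1f} s, first half of delta-v: {t_first_half:.1f} s")
```

The first half of the delta-v takes more than half of the total burn time, which is why starting exactly half the burn duration before the node leaves a small residual error.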
Duna rotates counterclockwise when looking at it from above.\nYou can avoid Ike encounters by changing prograde delta v and compensating with departure time.\nBTW, another way to get your trajectory up/down to Duna is to make an (anti)normal maneuver at the ascending/descending node (AN/DN). The way I do it uses less delta v, but has higher approach velocity at the encounter. I don't worry about that, cause aerobraking takes care of that.\nDeparture from low Kerbin orbit\nNext to the navball you see three times, e.g.\n- Node in t-13d, 0h, 35m\n- Burn Time: 42s\n- Start burn in: 13d, 0h\n1 says when the maneuver node is. 2 says how long you have to thrust with your rocket engine(s). 3 says when to begin burning.\nNow KSP maneuver nodes assume you accelerate your rocket instantly by e.g. 1052.7 m/s.", "score": 23.642463227796483, "rank": 53}, {"document_id": "doc-::chunk-2", "d_text": "There’s only one problem with this approach. Well, two problems, with the second, smaller one being that you can only accelerate like this for the first half of the trip. Then you have to turn the spacecraft around and burn the engine in the opposite direction to slow it down and make sure it doesn’t whip by its target at this unholy velocity you’ve accelerated it to. However, the big problem is that providing sustained low-grade acceleration for a prolonged period of time is impossible using conventional rocket engines. They’re built for rapid acceleration over small time scales and spend their propellant profligately in achieving this end, and even if you burn it as efficiently as possible a rocket engine would still run out of fuel long before it ever achieved any significant velocity. You can’t stuff a half-century’s worth of rocket fuel onto a spaceship, after all.\nIn order to do interstellar travel, then, we’re going to need a whole new type of spaceship engine. The thrust it provides does not need to be large, but it does need to be constant and very long-lasting. 
This gives us two options:\n1) Use a very efficient fuel source that will somehow last the duration of the trip. This basically means nuclear rockets, a concept that’s been on the drawing board for years but which has never gone anywhere because people are a bit nervous about having nuclear reactors hurtling through the sky at several hundred miles per hour. There are high-efficiency thrust systems currently in development – see ion thrusters and the SMART-1 probe – that nevertheless require a powerful source of electricity to run. Nuclear reactors fit the bill and would last a half-century or more to boot, but they require several quantum leaps on the technology side of things (like the LFTR reactors I was talking about the other day) before they’re viable as a propulsion system.\n2) Don’t carry your fuel with you. Get it from somewhere else, while you’re travelling. This is not as dumb as it sounds, with the two major ideas in this category being light sails and the Bussard ramjet.\nLight sails are fairly easy to understand. All light exerts an infinitesimal force whenever it strikes an object, kind of like a really crappy version of the wind on Earth, so you just put a really big solar sail on your probe and focus an incredibly powerful laser on it from the Earth which “pushes” the spacecraft along.", "score": 23.062269370638315, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "On Saturday I investigated SEP systems for spacecraft going from LEO to the Moon and having a relatively small LEO mass of 20 tonnes. In hindsight, some of my numbers were a bit dodgy; more realistic values are for 15 tonnes of non-propulsion equipment.\nBut what happens when there's a possible initial mass of 100 tonnes, as could be launched to LEO by a Saturn V?\nAt this scale, SEP doesn't make sense. Sticking to my maximum transfer time of 2 years, the math works out as follows: mf / m0 = 0.843, so 84.3 tonnes of payload and systems and 15.7 tonnes of fuel. 
Using just one ion thruster as before, a Δv of 6.33×10³ m s⁻¹ would take 100 years. That means that to transfer in 2 years, I'd need 50 thrusters, corresponding to 250 kW of power consumption.\nThat's an awful lot of power, and a solar panel array that big is just ridiculous, without even taking radiation degradation into account. So, nuclear power could be used instead. Up to 1993, as part of the Strategic Defense Initiative, NASA developed a reactor designated the SP-100. This was capable of delivering power levels up to about a megawatt at between 10–50 kg kW⁻¹ (better specific mass at higher power). By the way: for the environmentalists, the SP-100's nuclear fuel container is designed to reach the ground intact in the event of the reactor unexpectedly breaking up in the atmosphere. Additionally, I'm assuming a 1000 km starting orbit, and an orbital height of 700 km is considered by NASA to be \"nuclear safe\".\nPulling figures out of thin air, let's assume that a 250 kW reactor can be built at 30 kg/kW. That means that 7.5 tonnes would be the power source. Additionally, I'd need 250 kW of ion thrusters, at about (yep, more made-up numbers) 13 kg/kW, another 3.3 tonnes.\nAdding all that up, for 100 tonnes at LEO, I'd get 15.7 tonnes of fuel, 7.5 tonnes of nuclear reactor and 3.3 tonnes of thrusters, leaving 73.5 tonnes for payload. 
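The mass bookkeeping above can be reproduced in a few lines. The exhaust velocity is back-calculated from the quoted mass ratio; everything else uses the post's own (admittedly made-up) figures:

```python
import math

m0 = 100.0          # tonnes in LEO
mass_ratio = 0.843  # mf / m0 quoted for the 2-year transfer
delta_v = 6.33e3    # m/s, from the post

fuel = m0 * (1 - mass_ratio)              # propellant mass in tonnes
v_e = delta_v / math.log(1 / mass_ratio)  # implied exhaust velocity, m/s

power_kw = 250.0                # 50 thrusters at ~5 kW each (derived from the post)
reactor = power_kw * 30 / 1000  # 30 kg/kW -> tonnes
thrusters = power_kw * 13 / 1000  # 13 kg/kW -> tonnes
payload = m0 - fuel - reactor - thrusters

print(f"fuel {fuel:.1f} t, reactor {reactor:.1f} t, "
      f"thrusters {thrusters:.2f} t, payload {payload:.2f} t")
```

This reproduces the post's totals: 15.7 t of fuel, 7.5 t of reactor, about 3.3 t of thrusters, and roughly 73.5 t left for payload; the implied exhaust velocity of about 37 km/s corresponds to an Isp in the high-3000s of seconds, plausible for ion thrusters.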
Very nice.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-1", "d_text": "- Orbit Kerbin\n- Plan maneuvers to encounter Duna\n- Execute maneuvers\n- Get surface sample\n- Orbit Duna\n- Plan maneuvers to encounter Kerbin\n- Execute maneuvers\n- Basic Rocketry\n- Engineering 101\n- General Rocketry\n- Advanced Rocketry\n- Building level 2\n- Launch pad due to rocket mass\n- Tracking station for visible conics\n- Mission control for maneuver planning\n- Astronaut complex for EVA\n- R&D for surface sample\nPilot with one or more stars (orbiting Kerbin will do this)\nThe delta-v map at https://i.imgur.com/gBoLsSt.png says we need\n3400 m/s for low Kerbin orbit (LKO) (that's very generous)\n950 m/s to get to the edge of Kerbin's sphere of influence (SOI)\n130 m/s to encounter Duna\n10 m/s or less for a plane change\n0 m/s aerobraking\n20 m/s landing burn\n1450 m/s to low Duna orbit (LDO)\n360 m/s to Duna's SOI\n250 m/s to encounter Kerbin\n6580 m/s total\nThese are all vacuum delta v's. From LKO we need 3180 m/s. That's a more accurate number.\n- Mk16 Parachute\n- Mk1 Command Pod\n- TD-12 Decoupler\n- FL-T400 Fuel Tank\n- LV-909 \"Terrier\" Liquid Fuel Engine\n- TD-12 Decoupler\n- FL-T400 Fuel Tank\n- FL-T400 Fuel Tank\n- FL-T400 Fuel Tank\n- FL-T400 Fuel Tank\n- 4x Basic Fin\n- LV-T30 \"Reliant\" Liquid Fuel Engine\n- TT-38K Radial Decoupler\n- Aerodynamic Nose Cone\n- BACC \"Thumper\" Solid Fuel Booster\nDelta-v: 6162 m/s\nMount the boosters at the lower end of the decouplers, so their top turns away at decoupling and aerodynamics drives them away from the rocket. I put the bottom of the boosters slightly below the Reliant so the rocket would stand more stably on the launch pad. 
Launch stability enhancers do a better job at that, but due to minimalistic tech I don't use them.\nRemove the monopropellant from the command pod.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-2", "d_text": "- Check list\n- VAB Engineer report\nThe delta-v display disagrees with the Kerbal Engineer Redux (KER) mod. If I put the Reliant in stage 4 they agree.\nClick on the Crew button and select your 1+ star pilot.\nClick the launch button.\nLow Kerbin orbit\nYou are on the launch pad now.\nFor most rockets I work out an ascent profile to orbit by trial and error. For this one the \"manual\" is:\n- Activate SAS\n- Switch the lower left infobox to 'Maneuver Mode'. This shows you apoapsis (Ap) and periapsis (Pe).\n- stage (press space)\n- at 10 m/s: turn/yaw 20 degrees right/east (from 90 to 70 degrees)\n- while turning, click prograde hold\n- Just before the boosters burn out fully turn on the Reliant.\n- When the boosters burn out: stage\n- At 70.5 km Ap turn off the Reliant\n- Switch to the map (you could save now with F5)\n- Click Ap with the right mouse button (RMB).\n- 16 s before Ap: full thrust; when the time to Ap rises, cut thrust\n- 8 s before Ap: full thrust; when the time to Ap rises, cut thrust\n- 4 s before Ap: full thrust; when the time to Ap rises, cut thrust\n- 2 s before Ap: full thrust; when time to Ap is about 20 s: cut thrust\n- Now you have ~10 s to RMB click Pe.\n- Use low thrust to keep time to Ap between 1-10 s.\n- When Pe reaches 70.5 km you have a great orbit. :)\nI have 3251 m/s left now. I used 2911 m/s to get to orbit!\nSave as \"1\".\nThis rocket turned out a bit stubborn. It tended to drift south because the radial decouplers were flexing and the twisted boosters would create an aerodynamic force. This probably could be mitigated with (auto)struts (not used due to minimal tech). Correct for the drift early with WASD.\nThe rocket flies very shallow. 
This is efficient, unless it really flies too low, creating too much air drag and heat, so that it explodes or passes Ap. It's ok if the fins explode :).", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-2", "d_text": "An unmanned craft could tolerate very large accelerations, perhaps 100 g. A human-crewed Orion, however, must use some sort of damping system behind the pusher plate to smooth the instantaneous acceleration to a level that humans can comfortably withstand – typically about 2 to 4 g.\nThe high performance depends on the high exhaust velocity, in order to maximize the rocket's force for a given mass of propellant. The velocity of the plasma debris is proportional to the square root of the change in the temperature (Tc) of the nuclear fireball. Since fireballs routinely achieve ten million degrees Celsius or more in less than a millisecond, they create very high velocities. However, a practical design must also limit the destructive radius of the fireball. The diameter of the nuclear fireball is proportional to the square root of the bomb's explosive yield.\nThe shape of the bomb's reaction mass is critical to efficiency. The original project designed bombs with a reaction mass made of tungsten. The bomb's geometry and materials focused the X-rays and plasma from the core of the nuclear explosive to hit the reaction mass. In effect each bomb would be a nuclear shaped charge.\nA bomb with a cylinder of reaction mass expands into a flat, disk-shaped wave of plasma when it explodes. A bomb with a disk-shaped reaction mass expands into a far more efficient cigar-shaped wave of plasma debris. The cigar shape focuses much of the plasma to impinge onto the pusher plate.\nFor example, a 10-kiloton-of-TNT-equivalent atomic explosion will achieve a plasma debris velocity of about 100 km/s, and the destructive plasma fireball is only about 100 meters in diameter. 
A 1 megaton TNT explosion will have a plasma debris velocity of about 10,000 km/s, but the diameter of the plasma fireball will be about 1000 m.\nThe maximum effective specific impulse, Isp, of an Orion nuclear pulse drive generally is equal to:\nIsp = C0 · Ve / gn\nwhere C0 is the collimation factor (what fraction of the explosion plasma debris will actually hit the impulse absorber plate when a pulse unit explodes), Ve is the nuclear pulse unit plasma debris velocity, and gn is the standard acceleration of gravity (9.81 m/s²; this factor is not necessary if Isp is measured in N·s/kg or m/s).", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-2", "d_text": "When the Earth-Mars spaceship gets to Mars it slows down to just under Mars escape velocity and enters a high Mars orbit. The change in velocity required for this maneuver is approximately 900 meters per second. The total change in velocity required for this voyage is 2.4 kilometers per second. That is 1/5th the amount of change in velocity required by the spaceships used in Wernher von Braun's original Mars fleet. This difference is due to both the Skyhooks and the outpost space stations with local sources of propellant at both Earth and Mars.\nThe transfer of passengers and cargo from the Earth-Mars spaceship to Mars will be performed by smaller spacecraft operating from the upper end of the Martian Skyhook using locally supplied propellant. These spacecraft will also be used to carry locally supplied propellant to the Earth-Mars spaceship for its return trip to Earth. If it is assumed that the Earth-Mars spaceship uses existing LOX/LH2 chemical rocket motors with oversized expansion nozzles (a specific impulse of 480 seconds), the propellant fraction for the journey to Mars will be 40 percent. 
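The quoted 40 percent propellant fraction follows directly from the rocket equation with the text's own numbers (2.4 km/s of total delta-v at an Isp of 480 s). The nuclear-thermal Isp below is not stated in the text; it is back-inferred from the 25 percent figure quoted later:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v, isp):
    """1 - mf/m0 from the Tsiolkovsky rocket equation."""
    return 1 - math.exp(-delta_v / (isp * G0))

# LOX/LH2 chemical case from the text: 2.4 km/s at Isp = 480 s
print(f"chemical: {propellant_fraction(2400, 480):.0%}")  # about 40%

# The water nuclear-thermal case quoted at 25% implies an Isp near 850 s
# (inferred value, not from the text):
print(f"nuclear:  {propellant_fraction(2400, 850):.0%}")  # about 25%
```

Both numbers match the fractions given in the passage to within rounding.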
This will leave plenty of room in the mass budget for the Earth-Mars spaceship to make it completely reusable (no drop-off propellant tanks or stages), allow for a spinning section with artificial gravity for the crew and passengers, a hydroponics garden for fresh vegetables, plus plenty of radiation shielding. If it is assumed that the empty weight fraction for this Earth-Mars spaceship is 40 percent of the departure mass, that will leave 20 percent for the payload.\nIf a nuclear thermal rocket motor that uses water for reaction mass is used for the propulsion system of the Earth-Mars spaceship, the propellant fraction for the trip to Mars drops to 25 percent and the payload fraction increases to 35 percent. When used with a combination launch system that includes an escape-velocity-capable Skyhook, either of these Earth-Mars spaceships would make a trip to Mars mass-market affordable on a commercial basis and would operate at a profit for the owners.\nEither of these spaceships would also be capable of making trips to the asteroid belt, and to dwarf planets like Ceres, both affordable and possible.", "score": 22.27027961050575, "rank": 59}, {"document_id": "doc-::chunk-1", "d_text": "The velocity change would be at the rate of 3,000 m/s per year of thrusting by the photon rocket.\nIf a photon rocket begins its journey in low Earth orbit, then one year of thrusting may be required to achieve an Earth escape velocity of 11.2 km/s if the vehicle is already in orbit at a velocity of 9,100 m/s, and 400 m/s additional velocity is obtained from the west-to-east rotation of the Earth. The photon thrust will be sufficient to more than counterbalance the pull of the Sun's gravity, allowing the photon rocket to maintain a heliocentric velocity of 30 km/s in interplanetary space upon escaping the Earth's gravitational field. Eighty years of steady photonic thrusting would then be required to obtain a final velocity of 240 km/s in this hypothetical case.
At a 30 km/s heliocentric velocity, the photon ship would recede a distance of 600,000,000 miles (1 Tm) from the Sun per year.\nIt is possible to obtain even higher specific impulse; that of some other photonic propulsion devices (e.g., solar sails) is effectively infinite because no carried fuel is required. Alternatively, such devices as ion thrusters, while having a notably lower specific impulse, give a much better thrust-to-power ratio; for photons, that ratio is 1 / c, whereas for slow particles (that is, nonrelativistic; even the output from typical ion thrusters counts) the ratio is 2 / v, which is much larger (since v ≪ c). (This is in a sense an unfair comparison, since the photons must be created and other particles are merely accelerated, but nonetheless the impulses per carried mass and per applied energy—the practical quantities—are as given.) The photonic rocket is thus wasteful when power and not mass is at a premium, or when enough mass can be saved through the use of a weaker power source that reaction mass can be included without penalty.\nA laser could be used as a photon rocket engine, and would solve the reflection/collimation problem, but lasers are absolutely less efficient at converting energy into light than blackbody radiation is—though one should also note the benefits of lasers vs a blackbody source, including a unidirectional controllable beam and the mass and durability of the radiation source.\nFeasible current or near-term fission reactor designs can generate up to 2.2 kW per kilogram of reactor mass.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-3", "d_text": "– Joshua, Jan 6, 2019 at 1:30\n@Joshua, KSP also has much higher delta-V budgets than NASA. – Mark, Jan 6, 2019 at 2:09\n@Mark: Will 15 m/s do? That's about what I'd use. – Joshua, Jan 6, 2019 at 3:51\n@Joshua Also, docking in KSP is often done at much higher speeds.
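The thrust-to-power comparison quoted earlier (1/c for photons versus 2/v for nonrelativistic exhaust) is easy to put in concrete numbers. The 1 MW power level and the 40 km/s exhaust velocity below are arbitrary example figures, not from the passage:

```python
C = 2.998e8  # speed of light, m/s

def photon_thrust(power_w):
    """Thrust of a perfectly collimated photon exhaust: F = P / c."""
    return power_w / C

def particle_thrust(power_w, v_exhaust):
    """Thrust of a nonrelativistic particle exhaust at the same jet
    power: F = 2 * P / v (from E = p*v/2 for slow particles)."""
    return 2 * power_w / v_exhaust

P = 1e6  # 1 MW of jet power (arbitrary example)
print(f"photon rocket:  {photon_thrust(P)*1000:.2f} mN")
print(f"ion-like drive: {particle_thrust(P, 40000):.1f} N")
```

A megawatt of photons yields only a few millinewtons, while the same power pushed through a 40 km/s particle exhaust yields tens of newtons, which is the point the passage is making.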
When the space shuttle docked with the ISS, it would approach at 0.03 m/s (nasa.gov/pdf/593865main_AP_ST_Phys_ShuttleODS.pdf), while, in my KSP playing, I usually approach the rendezvous at around 20+ m/s, depending on the situation, and even make the final approach for docking 50x faster than the shuttle. – Jan 6, 2019 at 6:26\n@Joshua, the total delta-V expenditure during rendezvous for Gemini 6A was about 80 m/s, including the plane change, phase change, and establishing an intersecting orbit. The portion corresponding to your \"15 m/s\" is probably the \"braking maneuver\", at 19.8 m/s, although it might be the \"terminal phase\" maneuvers, at 38 m/s. (Terminal-phase maneuvers were initiated at a distance of 320 km.) – Mark, Jan 6, 2019 at 7:16\nFirst, the Gemini IV maneuver was station-keeping, not rendezvous. Since the target was the just-separated upper stage, the two spacecraft were already rendezvoused, and point-and-burn would have worked if they'd done it properly.\nAccording to the Gemini IV mission report, the main causes of station-keeping failure were a mix of procedural mistakes and inadequately-aggressive maneuvering:\nReview of these figures shows that the velocity increments applied through 00:09:21 g.e.t. succeeded in reducing the separation rate, but left a residual rate of 1.5 ft/sec away from the launch vehicle.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-5", "d_text": "VASIMR, the Variable Specific Impulse Magnetoplasma Rocket, combines features of the high-thrust/low-specific-impulse chemical rocket, and the low-thrust/high-specific-impulse nuclear rocket. VASIMR is a plasma rocket. Instead of a combustion chamber, it uses three staged, magnetic cells that first ionize hydrogen and turn it into a super hot plasma, then further energize it with electromagnetic waves to maximize thrust.
Chang Díaz promises his rocket could attain a speed of 31 miles a second, and would reduce a one-way trip to Mars from three months to one. His team has made slow progress on the concept since the late 1980s. Last fall, his VX-200 rocket prototype's first stage, powered by argon, reached a milestone: a successful, full-power firing in his Webster, Texas lab. Having spent about $25 million from several government sources so far, and with equipment, lab space, and personnel from NASA, Chang Díaz is coming closer to a flight test. NASA is considering testing the rocket on the International Space Station, perhaps as soon as 2011 or 2012, where it may contribute to maintaining the huge laboratory's orbit.\nAfter VASIMR, the next step up in velocity is a nuclear fusion rocket. Scientists haven't yet re-created sustained, controlled fusion, the nuclear process that powers stars and promises enormous benefits as a power source on Earth, but that hasn't stopped them from getting a lot of money from governments to try. The International Thermonuclear Experimental Reactor, being built in southern France, is a joint project of the European Union, Japan, China, India, South Korea, Russia, and the United States. The reactor will cost at least $15 billion, is not expected to begin operation until 2018, and is the size of an office building, but scientists hope that once they achieve fusion on the ground, reactors can be downsized for space travel. Fusion gives off more energy and less radiation than fission, and could propel a ship at high speed. In one scenario, its exhaust would be contained by a string of superconducting magnets shaped like huge washers, each perhaps 15 feet in diameter. 
The string of magnets would reach back from the reactor for the length of several football fields.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-4", "d_text": "A high bypass fan engine normally includes a large first fan surrounded by a separate duct, allowing a majority of the fan air to bypass the engine. The fan acts much like a propeller in a turboprop, without the problems of propeller slipstream and drag.\nExample: The Boeing 747 has four high bypass fan engines.\nEngine Spool-up Time\nFigure: Engine acceleration times, from Davies, figure 4.11\n[Davies, page 59.]\n- In a propeller installation the constant speeding ability of the propeller keeps the engine turning at an r.p.m. which is a compromise between the approach and baulked landing power conditions and power is altered by varying the boost pressure. To increase power quickly the boost is increased, the propeller coarsens off and the demanded thrust is supplied quickly. 'Quickly' in this context means about 3 to 4 seconds because of the propeller momentary overspeed tendency which is not acceptable to a pilot with any sympathy for mechanical engineering devices.\n- Efficiency in a jet engine is highest at high r.p.m. where the compressor is working closest to its optimum conditions of gas flow, etc. At low r.p.m. the operating cycle is generally inefficient. If a sudden demand is made for more thrust from an r.p.m. equivalent to a normal approach r.p.m. the engine will respond immediately and full thrust can be achieved in about 2 secs. From a lower r.p.m., however, a sudden demand for maximum thrust will tend to overfuel the engine and cause it to overheat or surge. To prevent this various limiters are contained in the fuel control unit and these serve to restrict the engine until it is at an r.p.m. at which it can respond to a rapid acceleration without distress. This critical r.p.m. is most noticeable when doing a slam acceleration from an idle thrust setting. 
Acceleration is initially very slow indeed but then changes to very quick as r.p.m. rises through this significant value. From idle thrust to substantially full thrust at a typical approach speed takes about 6 secs. on average. Some engines are better than others, but there is also a scatter between individual engines of the same type; so occasionally the full 8 secs. permitted by the requirements is needed.\nThere have been two contrary trends in jet engine design when it comes to spool-up time.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-11", "d_text": "Per Proton's typical mission profile, the second stage burns for 210 seconds, separating from the third stage at the 327-second mark into the flight at a speed of 4.45 kilometers per second and 120 kilometers in altitude. Debris of the second stage impacts 1,985 kilometers from the launch site.\n|Type||Storable Propellant Stage|\n|Propulsion||RD-0213 Engine + RD-0214 Vernier|\n|RD-0213 Thrust Vac||583 kN|\n|RD-0213 Chamber Pressure||14.7 MPa|\n|RD-0214 Chamber Pressure||5.3 MPa|\nThe third stage of the Proton-M rocket is the final stage of the actual launch vehicle that delivers the orbital unit to a preliminary orbit or a suborbital trajectory, depending on the mission profile.\nThis stage also utilizes a conventional design and consumes Nitrogen Tetroxide and Unsymmetrical Dimethylhydrazine as propellants. An RD-0213 engine powers the vehicle, providing 583 kilonewtons of thrust.\nThis engine is a non-gimbaled version of the RD-0210 engine that is used on the second stage. For vehicle control and additional thrust, an RD-0214 engine with four gimbaled nozzles is installed on the third stage.\nThe RD-0214 provides 31 kilonewtons of thrust, for a total third-stage thrust of 62,600 kilograms-force.\nThe third stage houses the vehicle's Navigation, Guidance and Control System that operates the vehicle during all aspects of powered flight, which is fully automated and does not require commands from ground stations.
A triple redundant digital guidance system is used to control the vehicle.\nThe Control System has been upgraded several times since the Proton-M started operations in 2001. The guidance mode used is closed-loop. A high-precision three-axis gyro stabilizer provides exact attitude data to the digital flight computer. The avionics system also provides flight termination in case of a major anomaly during ascent.", "score": 21.510964691583258, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "52 sec.|\n|9. DAICHI-2 separation||15 min. 47 sec.||15 min. 42 sec.|\n*1 Quick estimation value prior to the detailed estimation\n*2 The values are updated ones based on actual measurement data, such as engine performance, which are unique to the H-IIA F24 engines. Therefore, they are slightly different from the values in the Launch Plan.\n*3 When the combustion chamber pressure becomes 10% of the largest combustion pressure.\n*4 The definition of SRB-A jettison is the separation of the rear brace.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "The velocity of a rocket bike depends on the type of rocket attached to the bike. More specifically, it depends on the delta-v that the rocket can impart on the mount that is connected to the bike. Delta-v is the effective exhaust velocity c of the rocket times the natural log of the initial mass over the final mass.\nThe initial mass will be the mass of the biker, the bike, the rocket mount, and the rocket itself.
The final mass will be everything except for the mass of the propellant used in the rocket.\nThe change in velocity of the bike will be negligible if the mass of the propellant used is not a significant percentage of the bike's mass.\nThe effective exhaust velocity can be found by multiplying the gravity at sea level by the rocket's specific impulse.\nIf you take a G-class rocket, which is the largest rocket motor that a person can buy without needing a certification, then you'll roughly have 160 newton-seconds of total impulse.\nTo go from total impulse to specific impulse, divide the total impulse by the weight of the rocket motor.\nThis will lead to an effective exhaust velocity of 1406 m/s.\nNow you'll need the weight before and after the rocket has spent its propellant. Let's say that the bike has a mass of 18 lbs with a 175 lb person driving it. With the 130-gram rocket motor and the rocket mount, the total initial mass is about 87,671 grams. Let's consider that the mass of everything after the rocket has burnt all of the propellant is 87,110 grams.\nGiven that the average burn time for one of these rocket motors is 3 seconds, the acceleration that the rocket will experience is about 0.6 m/s².\nThe total distance traveled, neglecting friction, will be 2.6 meters.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-2", "d_text": "As for speed, in the first case where Serenity just accelerates continuously at 5.5g (which minimizes fuel), its final velocity after having travelled for those 141.5 hours will be around 0.675c. In the second case where it accelerates for 96.2 hours until the midpoint of the journey and then decelerates, its speed at the midpoint will be 0.528c. Calculations:
And for the second case of 96.2 hours, we have (0.00647*96.2)/(sqrt(1 + (0.00647*96.2)²)) = 0.528c.\nUsing this change in velocity, the Tsiolkovsky rocket equation can tell you the ratio of initial mass m₀ (both fuel and payload) to final mass m₁ (just the payload, fuel used up) needed to get such a large change in velocity. The formula depends on the velocity of the exhaust, and I don't think that's stated directly in any Firefly material, but this answer says some material on the DVD indicated that Firefly-class ships light up their tails when they travel due to a nuclear fusion reaction. So Serenity is probably a type of fusion rocket, perhaps similar to Project Daedalus in which some real-world engineers tried to draw up a rough plan for an interstellar space probe, and which used a fusion reaction involving deuterium and helium-3. The chart on this page indicates that this is one of the best fusion reactions in terms of exhaust velocity, with a v_e of about 8.9% the speed of light or 0.089c (which is incidentally much, much faster than any chemical rocket like the ones we have today), so I assumed that in my calculations.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-4", "d_text": "Come in too shallow, and you skip off the atmosphere to become the next Voyager. “Very little room for error,” he says. “You get one crack at it.”\nHis solution: a nuclear thermal rocket. It would produce thrust the way chemical rockets do: by heating a propellant—in this case, hydrogen—and ejecting the expanded gas through a nozzle. Instead of heating hydrogen through combustion, however, the nuclear rocket vaporizes it through the controlled fission, or splitting of atomic nuclei, of uranium. Because nuclear fuel has a greater energy density, it lasts a lot longer than chemicals, so you can keep the engine running and continue to accelerate for half the trip. 
Then, with the speedometer clicking off about 15 miles per second—twice the speed reached by returning Apollo astronauts—you’d swing the ship around to point the other way and use the engine’s thrust to decelerate for the rest of the trip. Even when factoring in the weight of the reactor, a nuclear engine would cut the transit time in half.\nIn a program called NERVA (Nuclear Engine for Rocket Vehicle Applications), NASA built a nuclear thermal rocket in the 1960s. It delivered a specific impulse of 850 seconds—twice the efficiency of the best chemical rockets—and could have been tweaked to deliver up to 1,000 seconds. As NASA prepared to follow the moon missions with human voyages to the planets, the nuclear thermal rocket was a serious candidate to replace chemical engines in the Saturn V launch vehicle’s upper stages. Instead, despite more than 20 successful test firings in the Nevada desert, NERVA died in the mid-1970s.\nNuclear thermal rockets are limited by the heat tolerance of the uranium fuel and the engine’s structure, so engineers have experimented with new fuel elements and heat-resistant materials. At Marshall, Emrich has constructed a simulator that can test a nuclear rocket’s components by subjecting them to some of the conditions that fission would produce—the temperatures and pressures, though not the radioactivity. Because work on NASA’s Ares rocket, which will boost astronauts to the moon, has taken over the propulsion lab, Emrich is moving his simulator to another facility.\nNot far from NASA’s Johnson Space Center in Houston, Franklin Chang Díaz, a former NASA astronaut and veteran of seven space shuttle flights, is developing an alternative to the nuclear thermal rocket.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "As you watch the third cartoon's animation, imagine that the cannon has been packed with still more gunpowder, sending the cannonball out a little faster. 
With this extra energy, the cannonball would miss Earth's surface at periapsis by a greater margin, right?\nRight. By applying more energy at apoapsis, you have raised the periapsis altitude.\nAnd of course, as seen in these cartoons, the opposite is true: if you decrease energy when you're at apoapsis, you'll lower the periapsis altitude. In the cartoon, that's less gunpowder, where the middle graphic shows periapsis low enough to impact the surface. In the next chapter you'll see how this key enables flight from one planet to another.\nNow suppose you increase speed when you're at periapsis, by firing an onboard rocket. What would happen to the cannonball in the third cartoon?\nJust as you suspect, it will cause the apoapsis altitude to increase. The cannonball would climb to a higher altitude and clear that annoying mountain at apoapsis.\nAnd its opposite is true, too: decreasing energy at periapsis will lower the apoapsis altitude. Imagine the cannonball skimming through the tops of some trees as it flies through periapsis. This slowing effect would rob energy from the cannonball, and it could not continue to climb to quite as high an apoapsis altitude as before.\nIn practice, you can remove energy from a spacecraft's orbit at periapsis by firing the onboard rocket thrusters there and using up more propellant, or by intentionally and carefully dipping into the planet's atmosphere to use frictional drag. The latter is called aerobraking, a technique used at Venus and at Mars that conserves rocket propellant.\nOrbiting a Real Planet\nIsaac Newton's cannonball is really a pretty good analogy. It makes it clear that to get a spacecraft into orbit, you need to raise it up and accelerate it until it is going so fast that as it falls, it falls completely around the planet.\nIn practical terms, you don't generally want to be less than about 150 kilometers above the surface of Earth. 
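The "falls completely around the planet" condition can be sanity-checked with the circular-orbit speed v = √(μ/r). A rough sketch, assuming standard values for Earth's gravitational parameter and mean radius (not given in the text):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2 (assumed)
R_EARTH = 6.371e6           # mean Earth radius, m (assumed)

altitude = 150e3                      # the 150 km altitude mentioned above
r = R_EARTH + altitude                # orbital radius, m
v = math.sqrt(MU_EARTH / r)          # circular orbital speed, m/s
print(v * 3.6)                        # ~28,100 km/h
```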
At that altitude, the atmosphere is so thin that it doesn't present much frictional drag to slow you down. You need your rocket to speed the spacecraft to the neighborhood of 30,000 km/hr (about 19,000 mph).", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-1", "d_text": "III was to apply lessons learned from the CUS and its CE-7.5 engine and upscale the design to create a larger upper stage and a more powerful engine with a target thrust of 200 Kilonewtons.\nVideo Credit: ISRO\nAn initial sub-orbital flight test of the LVM3 rocket was performed in December 2014 featuring a mass simulator in place of the C25 upper stage to collect data on how the rocket would perform during operation of its twin boosters and large liquid-fueled first stage. The mission carried the Crew Module Atmospheric Re-entry Experiment (CARE) – a scaled version of India’s future crew capsule that completed a re-entry demonstration and splashed down under parachutes in the Bay of Bengal for recovery and post-flight analysis.\nWhile the lower stages of the LVM3 rocket went through a successful demo flight, C25 was still in the midst of its development process. A pair of engines were used for qualification testing at sea level conditions picking up with a full-duration burn in April 2015 and an endurance test in July for a burn time of 800 seconds followed by another full burn February 2016. A third production CE-20 went through an acceptance test in high-altitude conditions in December 2016, intended to be flown on the first C25 stage.\nThe first integrated test of a C25 rocket stage was performed on January 25 for a 50-second validation test followed by the 640-second full-duration test carried out on Friday that marked the final planned pre-flight test of the C25 upper stage. “The performance of the Stage during the hot test was as predicted,” ISRO said in a statement. 
“Successful hot test for flight duration qualifies the design of the stage and the robustness of the facilities conceived and established towards its development.”\nTesting of C25 was realized in just four months and, in addition to hot fire tests, included a validation of processing and service procedures needed in the preparation of the stage for flight.\nThe C25 stage stands 13.3 meters tall and matches the core stage’s four-meter diameter. It holds 27,800 Kilograms of Liquid Oxygen and Liquid Hydrogen for consumption by CE-20, an open cycle gas generator engine that can run at a fixed thrust level between 180 and 220 Kilonewtons, typically operating around 186 kN.\nLVM3 acts as a launcher for satellites into Geostationary Transfer Orbit and India’s future crew vehicle.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "So I am modelling an Earth-Mars low thrust trajectory in a simple manner. I'm telling my model not to model any trajectories where the spacecraft is moving any more than 200 m/s quicker or slower than the planet upon rendezvous. This is just for the conic section. So when the spacecraft arrives at Mars, I calculate its speed relative to the planet at that point, and I am treating that as the hyperbolic excess speed (V_infinity), which is constrained to be less than 200 m/s. Now, I am getting trajectories that fit the criteria, and they happen to burn very little fuel throughout the journey - my question is, is this physically possible, is it real? Is it even desirable to have low arrival C3's? All the literature I read has arrival C3's/V_infinities in the order of km/s (or km^2/s^2) but I have it constrained to mere m/s. I would have thought, intuitively, the closer the s/c can get to matching the planet's velocity, the less of a burn needed for orbital capture, ergo fuel saved?\nMy orbit looks like a spiral that's made 2 revolutions of the sun. Time of flight, approximately 1400 days. 
Delta-V is much larger than for a chemically propelled ship, but that's OK I guess as the fuel usage is lower. Plotting Mars as an elliptical orbit, not just circular. Hope that helps.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "Voyager 1, launched in 1977, is the furthest manmade object from Earth, traveling at 10.6 miles per second. Even traveling at this incredible speed, it would take just over 70,000 years to reach the closest star to our solar system.\nRecently, NASA began developing Solar Probe Plus, which will study our own sun. Through a series of seven gravitational assists with the planet Venus, the probe will reach the extraordinary speed of 125 miles per second. This is nearly twelve times as fast as Voyager 1, which would allow it to make a trip to another star (if that were its objective) in 6,450 years. While this will still be an incredible accomplishment, no currently existing propulsion technology has the capability to fly to another solar system on timescales comparable with a human lifetime.\nProject Icarus is a five-year theoretical design study of spacecraft that would reach another star within this all-important time constraint. The project was launched in 2009. One of the Terms of Reference of Project Icarus is:\n\"The spacecraft must reach its stellar destination within as fast a time as possible, not exceeding a century and ideally much sooner.\"\nClearly, current technology still has some way to go in order to accomplish this goal, based on the figures discussed so far. In fact, based on the distance to the closest star, it appears that we need to accelerate to approximately 5 percent the speed of light, and this is the figure we will focus on for the remainder of the article.\nOne could be forgiven for just assuming that if we continue to build bigger and bigger chemical rockets, that eventually we'll build one big enough that it could reach 5 percent the speed of light. 
Interestingly, the laws of physics tell us that this is, in fact, impossible.\nThe pioneering rocket scientist, Konstantin Tsiolkovsky, developed an equation that predicts how much rocket fuel one would need to reach a given top speed. For chemical rockets, the type that are used today, this equation predicts that to reach 5 percent the speed of light, one would need more chemical rocket fuel than there is matter in the known universe!\nGiven this sobering result, it's fairly clear that we need to start looking for alternative forms of propulsion if we're ever going to reach the stars on the timescale of a human lifespan.\nTwo popular technologies that may be able to accomplish this have been explored in some detail.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-2", "d_text": "If you use a small nuclear reactor that delivers 1 gigawatt of electricity, you can get thrusts as high as 500 Newtons (100 pounds). What this means is that a 1 ton spacecraft would accelerate at 0.5 meters/sec/sec and reach Mars in about a week! Meanwhile, NASA plans to use some type of 200 kilowatt, ion engine design with solar panels to transport cargo and humans to Mars in the 2030s. Test runs will also be carried out with the Asteroid Redirect Mission ca 2021 also using a smaller 50 kilowatt solar-electric design.\nSo we are literally at the cusp of seeing a whole new technology for interplanetary travel being deployed. 
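The chemical-rocket dead end described above can be made concrete with the Tsiolkovsky equation, m0/m1 = exp(Δv / v_e). A sketch; the 4.5 km/s exhaust velocity is an assumed figure for a high-performing chemical engine, not a number from the article:

```python
import math

# Target: 5 percent of the speed of light, as in the passage above.
delta_v = 0.05 * 2.998e8      # m/s
v_e = 4500.0                  # m/s, assumed chemical-rocket exhaust velocity

# exp(delta_v / v_e) overflows a float, so compute the base-10 exponent instead.
log10_mass_ratio = (delta_v / v_e) / math.log(10.0)
print(log10_mass_ratio)       # ~1447, i.e. a mass ratio of roughly 10**1447
```

For comparison, the observable universe contains only on the order of 10^80 atoms, which is the sense in which the required fuel exceeds all matter in the known universe.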
If you want to read more about the future of interplanetary travel, have a look at my book ‘Interplanetary Travel: An astronomer’s guide’, which covers destinations, exploration and propulsion technology, available at Amazon.com.\nCheck back here on Wednesday, January 11 for the next installment!\nIon engine schematic http://plasmalab.aero.upm.es/~plasmalab/information/Research/ElectricPropulsion.html", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-8", "d_text": "This vector is shown below in the Earth Mean Equator and Equinox of J2000 coordinate frame:\ndelta-V vector for TCM 3: 0.076190 m/sec 0.063028 m/sec 0.036165 m/sec delta-V magnitude: 0.10529 m/sec or 0.2355 mph delta-V direction (unit vector): 0.72364 0.59863 0.34349\nTCM 3 was executed somewhat differently than either TCM 1 or 2. For TCM 1, the spacecraft was turned so that its spin axis was pointing along the delta-V direction and the thrusters were fired to produce a force in that direction only. This was an axial burn. This could not be done for TCM 2 due to constraints on the spacecraft attitude. Instead, the spacecraft fired thrusters to produce forces both along and nearly normal to its spin axis. These axial and lateral delta-V components were sized so that their sum equaled the overall desired delta-V vector for TCM 2.\nAs was the case for TCM 2, thermal and power constraints precluded doing TCM 3 as an axial burn by turning the spacecraft directly to the delta-V direction. So, TCM 3 was implemented as a combination of axial and lateral burns. But in this case, we deliberately added an extra component to the delta-V breakdown. The reason for this is that our analysis for the late contingency maneuver - TCM 5 - has shown that, if executed, this maneuver would require a velocity change exclusively in the lateral direction. 
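The TCM 3 delta-V numbers quoted above are internally consistent; a quick cross-check of the magnitude, unit vector, and mph conversion:

```python
import math

# Components of the TCM 3 delta-V vector from the text, EME J2000 frame, m/s.
dv = (0.076190, 0.063028, 0.036165)

mag = math.sqrt(sum(c * c for c in dv))      # magnitude, m/s -> ~0.10529
unit = tuple(c / mag for c in dv)            # direction -> ~(0.72364, 0.59863, 0.34349)
mph = mag * 3600.0 / 1609.344                # ~0.2355 mph
print(mag, unit, mph)
```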
During the last 24 hours before entry when TCM 5 would be executed, the spacecraft has turned to the orientation required for safe descent through the atmosphere and cannot be turned from that orientation to do the TCM. And the geometry is such that the delta-Vs needed to target back to the desired landing site are nearly perpendicular to the entry attitude direction, so that TCM 5 will have to be performed as one or more lateral burns. In the worst case, 5 sets of lateral burns would be performed consecutively, each imparting a 0.4 m/sec velocity change. So far, the spacecraft has only performed a lateral burn once as part of TCM 2 and the velocity change for that burn was only 0.1 m/sec.", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-15", "d_text": "(2) Examples of Propellants used in Different Rockets\n- The Saturn booster rocket of the American space programme used a mixture of kerosene and liquid oxygen as the propellant in the initial stage whereas liquid oxygen and liquid hydrogen were used as propellant in higher stages.\n- Russian rockets such as Proton used a liquid propellant consisting of kerosene and liquid oxygen.\n- The Indian satellites SLV-3 and ASLV used composite solid propellants.\n- The rocket PSLV will use solid propellant in the first and third stages and liquid propellant in the second and fourth stages. The liquid propellant will consist of nitrogen tetroxide (N2O4) and unsymmetrical dimethyl hydrazine (UDMH), and mixed oxides of nitrogen and monomethyl hydrazine (MMH), respectively.\nIn our country, the Indian Space Research Organization (ISRO) has been set up to launch and utilize two classes of satellites: remote sensing satellites and communication satellites. The Polar Satellite Launch Vehicle (PSLV) is a remote sensing satellite. India has succeeded in launching several space vehicles using various rocket propellants. India’s latest vehicle, PSLV–C4, took flight on 12th September, 2002 and it was named the METSAT mission. It is a four-stage vehicle. 
The first stage is one of the largest solid propellant boosters in the world and carries about 138 tonnes of hydroxyl terminated polybutadiene (HTPB) based propellant.\nThe second stage uses the indigenously built VIKAS engine and carries 40 tonnes of liquid propellant, with unsymmetrical dimethyl hydrazine (UDMH) as fuel and nitrogen tetroxide (N2O4) as oxidizer.\nThe third stage uses 7.6 tonnes of HTPB based solid propellant.\nThe fourth and terminal stage of PSLV-C4 has a twin engine configuration using liquid propellant. Each engine uses 2.5 tonnes of monomethyl hydrazine as fuel and mixed oxides of nitrogen as oxidizer.\n(3) Calculation of specific impulse of propellant\nThe function of rocket propellant is based on specific impulse, which measures the kinetic energy producing ability of the propellant. The specific impulse (I s) rises with combustion temperature and falls with the molecular mass of the exhaust, varying as\nI s ∝ √(g · Tc / M), where\ng = Ratio of specific heat at constant pressure to specific heat at constant volume.\nTc = Combustion chamber temperature. M = Average molecular mass of exhaust products.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-1", "d_text": "Then it continues at the same speed for 5.83 years before the engines or photon sail (depending on which way the ships are traveling) are used to decelerate the vehicle. The ship's deceleration phase also lasts for five and a half months at 1.5G.\nAlso notable are the time dilation effects experienced at higher speeds; an Earth-time voyage of 6.75 years seems significantly shorter at 0.7 times the speed of light. In accordance with Einstein's theory of relativity, from the crew's point of view it is only five years' travel due to time dilation.\nThe ISV Venture Star uses multiple types of propulsion, all of which are used over the course of an interstellar journey between Earth and Pandora. 
The Venture Star has:\n- Two matter-antimatter engines\n- One photon sail\n- One fusion PME (Planetary Maneuvering Engine).\nThe photon sail is used for the outward acceleration phase to Pandora from Earth in the form of beamed photons from a solid state molecular laser from Earth. The ship's matter-antimatter engines are used for the deceleration phase on approach to Pandora, and the ship then coasts the last few million kilometers. The sequence is reversed for return to Earth. When close enough to Pandora or Earth, the Planetary Maneuvering Engine is used to maneuver into a low delta-v orbit, from which it can launch Valkyrie shuttle craft to the surface.\nMatter-Antimatter engines\nInterstellar Vehicles require a huge amount of thrust in order to reach the speeds that are needed for economic and (relatively) swift travel between solar systems and stellar bodies. Humans took roughly two centuries to create advanced and reliable matter-antimatter based propulsion systems.\nAn ISV has two matter-antimatter engines arranged symmetrically in a tractor configuration that pulls the ship behind them. They are angled slightly away from the body of the ship, a few degrees off the ship’s longitudinal axis, so their exhaust plumes bypass the ship’s structure. This results in a slight loss of thrust efficiency because the engines do slightly push toward each other. The lost thrust is deemed acceptable because the angling separates the body of the ship from the plume’s thermal radiation. Scientists considered placing the engines at the back instead, but the mass-savings advantage of a tensile structure outweighs the disadvantages of shielding.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-9", "d_text": "
Optimization tools were applied to the design process of solid rocket motor grains through an optimization framework developed to interface optimization tools with the solid rocket motor design system The QUANTUM LEAP II is an awesome two stage kit for Level 1 and 2 flights. This rocket stands 91.5 (fully configured) tall, is 3.0 diameter, and has a 8 total fin design. This rocket comes standard with a 54mm Kwik-Switch motor mount and can fly on H thru J motors\nUnited Model 5322 Destination Mars Colonizer Model Rocket Starter Set - Includes Rocket Kit (Beginner Skill Level), Launch Pad, Launch Controller, Glue, Four AA Batteries, and Two Engines. 5.0 out of 5 stars. 1. $64.99. $64 Steve Eves broke two world records when his 1/10th scale model of a Saturn V rocket lifted off from a field on Maryland's Eastern Shore. The 36-ft.-tall rocket was the largest amateur rocket ever. Never thought we'd say this, but if the standard G-Class was too vanilla for you, Brabus has a flavour that might be right up your alley. The latest to join the Rocket series of projects by the German automotive tuning company is the Brabus 900 Rocket Edition, and like its name suggests is a poster child for an unnecessary amount of power and mind-blowing acceleration — two qualities more.\nRocket engines are reaction engines. The basic principle driving a rocket engine is the famous Newtonian principle that to every action there is an equal and opposite reaction. A rocket engine is throwing mass in one direction and benefiting from the reaction that occurs in the other direction as a result The total impulse (I,N-s) of a rocket motor/engine is defined as: I total! r F(t)dt 0 T (4.1) For F and G class motors, the impulse will be roughly between 50-60 Ns and 100-120 Ns, respectively. The total impulse and used propellant weight during burn of duration T can be used to calculate the specific impulse (I sp ,sec). ! 
I sp.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-2", "d_text": "There is thus no basis whatsoever for believing in the feasibility of Chang Diaz’s fantasy power system.”\nChang Diaz, however, says in his paper: “Assuming advanced technologies [emphasis added] that reduce the total specific mass to less than 2 kg/kW, trip times of less than 60 days will be possible with 200 MW of electrical power. One-way trips to Mars lasting less than 39 days are even conceivable using 200 MW of power if technological advances allow the specific mass to be reduced to near or below 1 kg/kW.”\nLEFT: Artist’s rendition of a lunar tug with 200 kW solar powered VASIMR®. RIGHT: Artist’s rendition of a human mission to Mars with 10 MW NEP-VASIMR®. Images Credit: Ad Astra Rocket Company\nIn other words, Chang Diaz is allowing for further developments that would enable such a reactor.\nZubrin, however, stated: “[T]he fact that the [Obama] administration is not making an effort to develop a space nuclear reactor of any kind, let alone the gigantic super-advanced one needed for the VASIMR hyper drive, demonstrates that the program is being conducted on false premises.”\nThe 2011 NASA research paper “Multi-MW Closed Cycle MHD Nuclear Space Power Via Nonequilibrium He/Xe Working Plasma” by Ron J. Litchford and Nobuhiro Harada, indicates that such developments are feasible in the near future.\nWhether the VASIMR engine is viable or not, in 2015, NASA awarded Chang Diaz’s firm – Ad Astra Rocket Company™ – a three-year, $9 million contract. Up to now, the VASIMR engine has fired at fifty kilowatts for one minute – still a long way from Chang Diaz’s goal of 200 megawatts.\nIn its current form, the VASIMR engine uses argon for fuel. The first stage of the rocket heats the argon to plasma and injects it into the booster. There, a radio frequency excites the ions in a process called ion cyclotron resonance heating. 
As they pick up energy, they are spun into a stream of superheated plasma and accelerated out the back of the rocket.\nVideo courtesy of Ad Astra Rocket Company\nCollin R. Skocik has been captivated by space flight since the maiden flight of space shuttle Columbia in April of 1981.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-1", "d_text": "Second stage engine cutoff (SECO)||14 min. 46 sec.|\n|10. HTV5 separation||14 min. 54 sec.|\n*1 Quick estimation value prior to the detailed estimation\n*2 When the combustion chamber pressure becomes 10% against the largest combustion pressure.\n*3 The definition of SRB-A jettison is the separation of the rear brace.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-0", "d_text": "Nuclear/LH2 propellant rocket stage. Loaded/empty mass 1,500,000/500,000 kg. Thrust 14,700.00 kN. Vacuum specific impulse 900 seconds. N1 nuclear upper stage study, 1963. Figures calculated based on given total stage thrust, specific impulse, engine mass.\nNo Engines: 20.\nStatus: Study 1963.\nMore... - Chronology...\nGross mass: 1,500,000 kg (3,300,000 lb).\nUnfuelled mass: 500,000 kg (1,100,000 lb).\nHeight: 100.00 m (320.00 ft).\nDiameter: 17.00 m (55.00 ft).\nSpan: 17.00 m (55.00 ft).\nThrust: 14,700.00 kN (3,304,600 lbf).\nSpecific impulse: 900 s.\nBurn time: 590 s.\nYaRD Type V Korolev nuclear/lh2 rocket engine. 392 kN. Study 1963. Design considered in N1 nuclear upper stage studies. Outgrowth of work done by Bondaryuk and Glushko on YaRD engines for nuclear ICBM's, but using liquid hydrogen as propellant. Isp=900s. More...\nAssociated Launch Vehicles\nN1 Nuclear V Russian nuclear orbital launch vehicle. Second primary alternative considered for the 1963 nuclear N1 study. The immense liquid hydrogen tank of the second nuclear stage would have dwarfed the N1 first stage mounted below it in the shadows. 
The extremely poor thrust to weight ratio of the Type V engine design compared to that of the Type A remains unexplained. More...\nNuclear/LH2 Nuclear thermal engines use the heat of a nuclear reactor to heat a propellant. Although early Russian designs used ammonia or alcohol as propellant, the ideal working fluid for space applications is the liquid form of the lightest element, hydrogen. Nuclear engines would have twice the performance of conventional chemical rocket engines. Although successfully ground-tested in both Russia and America, they have never been flown due primarily to environmental and safety concerns. Liquid hydrogen was identified by all the leading rocket visionaries as the theoretically ideal rocket fuel. It had big drawbacks, however - it was highly cryogenic, and it had a very low density, making for large tanks.", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-1", "d_text": "For the latter, the distances are (relatively) small enough that we have the luxury of just being able to coax something onto a transfer orbit using a very small amount of propellant and then leaving it until it eventually arrives anywhere from a few months to a decade later. These are both impractical approaches to interstellar travel for obvious reasons: rocket engines require a lot of propellant which is quickly exhausted, rendering the engine useless, while interplanetary probes might sound nippy but they are, in fact, achingly slow. For an example, the Voyager probes are the fastest man-made objects in existence, taking advantage of a once-every-175-years alignment of the planets to do some crazy gravitational slingshotting that accelerated them up to a rather bracing 15 km s-1. Let’s say we could do that again for our interstellar probe, and that we could do it in such a way that it would end up pointed at Alpha Centauri. 
At 15 km/s the probe would cover the four light years separating the Sun and Alpha Centauri in just under 80,000 years.\nClearly this isn’t practical if we want human civilisation to still exist when our probe reaches its destination, so if we intend to try interstellar travel in a serious way then we need to experiment with something a little different. Where achieving orbit is a matter of delivering a lot of acceleration quickly, and interplanetary travel relies on a small amount of acceleration delivered early, practical interstellar travel requires that we instead subject our probe to a small amount of acceleration over a very long timespan. The thing with velocity in space is that it’s cumulative. There’s no air resistance dragging you back; once you add 5 km/s to a spacecraft’s relative velocity that 5 km/s will stay there forever until you remove it by making a burn in the opposite direction and manually decelerating the spacecraft. An engine that provides an acceleration of 2 cm/s² might not sound very good, but if it can provide that rate of acceleration for a full year the spacecraft it’s attached to will end up travelling at a whopping 630 km/s, covering the distance to Alpha Centauri in around 1,900 years. And that’s if it stops accelerating after a year, remember. If it keeps going there’s no reason it couldn’t reach a significant fraction of light speed.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-5", "d_text": "By rearranging the terms in (1), under the assumption that the total propulsive power of the IBS spacecraft is constant and that the total propulsive thrust is proportional to it, the maximum acceleration acting on the IBS-debris combination can be computed as a function of the total available thrust. It is assumed here that the IBS spacecraft has a mass of 1000 kg and a total available thrust of 0.5 N. 
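Both passages turn on the same arithmetic: with nothing to fight against, a tiny constant acceleration integrates into a large velocity over time, v = a·t. A sketch using the 2 cm/s² figure quoted above:

```python
# Velocity accumulated by a small constant acceleration with no drag: v = a * t.
a = 0.02                       # m/s^2, the 2 cm/s^2 engine from the passage above
year = 365.25 * 24 * 3600      # seconds in a Julian year
v = a * year                   # m/s
print(v / 1000.0)              # ~631 km/s after one year of continuous thrust
```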
Such a high thrust would correspond to a substantial power and propulsion system mass; however, this is deemed realistic if one considers that the payload of the IBS spacecraft is in fact its propulsion and power systems. Hence, the propulsion and power systems might be oversized compared to other applications in which ion engines are used for propulsion only. Note that the validity of the methodology proposed in this paper would not be affected even if lower thrust levels were considered. Thus, in this case, considering, for example, an 800 kg debris, the magnitude of the acceleration would be 1.923·10^-7 km/s^2. If one considers instead the spacecraft alone, the acceleration achievable would be slightly higher, 5·10^-7 km/s^2. Given this order of magnitude, the thrust acceleration can be considered as a perturbative force compared to the Earth’s gravitational force, and therefore, the analytical approach to the propagation of the low-thrust motion described in can be applied.\n3. Mission Profile\nThe objective of this study is that of optimising the performance and cost of a debris de-orbiting mission performed by a single spacecraft. As mentioned in the introduction, it is assumed that there are five pieces of debris of different masses and lying in circular orbits with different radii and orientations. It is assumed that the IBS spacecraft departs from a low-Earth parking orbit, rendezvouses with the first object, transfers it to an elliptical re-entry orbit, rendezvouses with the second object, transfers it to a second elliptical re-entry orbit, and so forth until all five pieces of debris are removed. One important issue is defining in which order the pieces of debris need to be de-orbited. In the following, all possible sequences are generated a priori and optimised one by one.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-1", "d_text": "Plasma is actually a distinct fourth state of matter, the other three being solid, liquid, and gas. 
The most glaring example of plasma is at the center of our solar system, though it’s quite abundant in nature and human applications, from lightning to plasma TVs.\nIndeed, ion thrusters have been used for years on satellites and even some deep-space missions. In 2015, for instance, ion engines powered NASA’s Dawn probe into an orbit around the dwarf planet Ceres, which sits in an asteroid belt between the orbits of Mars and Jupiter.\nWarp Speed Ahead\nWhile ion thrusters lack the punch of a Falcon Heavy rocket in terms of the sudden acceleration required to pull free of Earth’s gravity, the much smaller engines excel in near-frictionless outer space, where they are at least 10 times more fuel-efficient.\nThat means a spacecraft with ion thrusters can constantly accelerate, eventually reaching speeds much greater than conventional chemical engines. For example, the retired Space Shuttle could hit speeds of 18,000 mph. A spacecraft powered by ion thrusters could theoretically streak through the cosmos at more than 200,000 mph, according to NASA.\nFormer astronaut Franklin Chang Diaz, who leads Ad Astra, has said he could do the cannonball run to Mars in less than 40 days. He first conceived of Ad Astra’s Variable Specific Impulse Magnetoplasma Rocket (VASIMR) back in the 1980s.\nMuch more recently, the company demonstrated that it could produce 100 kilowatts of power from the VASIMR engine during the course of 100 non-consecutive hours. The next step is to fire the engine to produce a plasma ball as hot as the sun and run it for 100 hours straight. Aerojet Rocketdyne is also reportedly ready for the next 100-hour test phase of its Hall thruster, another type of plasma-based engine. The best most ion thrusters can do today is about 5 kW.\nMeanwhile, MSNW is exploring various prototypes related to fusion rockets, which would expel plasma produced from fusing a mixture of hydrogen and helium isotopes that had been heated by low-frequency radio waves. 
The process converts some of the mass of the atoms into energy. A lot of energy.\nOut of Thin Air\nNot to be outdone, the European Space Agency just tested another type of ion thruster that can literally live on thin air.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-3", "d_text": "I would recommend a ballistic parachute deployment at a 90 degree angle to the rocket so that the pilot chute will clear the fins.\nThe fact that the rocket is fat at the top and skinny in the main body makes it difficult to use a launch lug like a conventional model rocket would use. Some method of keeping the rocket stable during its first few moments of flight needs to be conceived.\nA possible solution to the rocket parachute deployment would be to use a parachute with a long bridle line. The parachute shroud lines would not start unstowing until the bridle line is fully stretched out and the parachute and its deployment bag are behind the fins of the rocket. This should prevent an entanglement.\nEngine burn time: 7 seconds +/- 1 second\nPowered flight distance: 1600 feet +/- 200 feet\nRocket speed at engine burnout: 280 mph +/- 20 mph\nCoast distance: 1400 feet +/- 200 feet\nPeak altitude: 3000 feet +/- 250 feet\nMaximum acceleration: 5 Gs (Earth plus rocket engine)\nLoaded rocket weight: 1750 lb.\nMinimum liftoff thrust: 5500 lb.\nEstimated flight time of rocket before parachute deployment: 20 seconds\nEstimated total time in air: 3 minutes\nMaximum pilot weight with parachute: 250 lb.\nStabilization method: fin\nEngine: solid propellant\nIgnition method: electrical\nAllow your imagination to soar. You walk towards the launch tower with flight suit and helmet on. Pause a minute and look at the strange rocket that will carry you into the blue sky. Climb into the capsule, strap yourself in, and say a quick prayer before the countdown begins. As the countdown ends, hear the sound of rolling thunder when the solid fuel ignites and you are pushed into your seat. 
The roar of the engine becomes deafening as the fiery rocket defies the earth and goes whizzing through the sky. The crushing G forces are replaced with a euphoric sense of weightlessness as the engine spits its last flame. The ship coasts for a few precious moments. You are relieved when you feel the jolt from the main parachute opening. It is time to eject the canopy, bail out, give a good \"Airborne\" hard arch and pull the ripcord. The beautiful red, white, and blue parachute opens. Spiral down and land in front of the clapping crowd.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-5", "d_text": "His 1968 paper \"Interstellar Transport\" (Physics Today, October 1968, p. 41–45) retained the concept of large nuclear explosions but Dyson moved away from the use of fission bombs and considered the use of one megaton deuterium fusion explosions instead. His conclusions were simple: the debris velocity of fusion explosions was probably in the 3000–30,000 km/s range and the reflecting geometry of Orion's hemispherical pusher plate would reduce that range to 750–15,000 km/s .\nTo estimate the upper and lower limits of what could be done using contemporary technology (in 1968), Dyson considered two starship designs. The more conservative energy limited pusher plate design simply had to absorb all the thermal energy of each impinging explosion (4×1015 joules, half of which would be absorbed by the pusher plate) without melting. Dyson estimated that if the exposed surface consisted of copper with a thickness of 1 mm, then the diameter and mass of the hemispherical pusher plate would have to be 20 kilometers and 5 million metric tons, respectively. 100 seconds would be required to allow the copper to radiatively cool before the next explosion. 
It would then take on the order of 1000 years for the energy-limited heat sink Orion design to reach Alpha Centauri.\nIn order to improve on this performance while reducing size and cost, Dyson also considered an alternative momentum limited pusher plate design where an ablation coating of the exposed surface is substituted to get rid of the excess heat. The limitation is then set by the capacity of shock absorbers to transfer momentum from the impulsively accelerated pusher plate to the smoothly accelerated vehicle. Dyson calculated that the properties of available materials limited the velocity transferred by each explosion to ~30 meters per second independent of the size and nature of the explosion. If the vehicle is to be accelerated at 1 Earth gravity (9.81 m/s) with this velocity transfer, then the pulse rate is one explosion every three seconds.", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-1", "d_text": "Although chemical rockets can produce millions of pounds of thrust for a few minutes at a time, ion engines produce thrusts measured in ounces for thousands of days at a time. In space, a little goes a long way. Let’s do the math!\nThe Deep Space I engines use 2,300 watts of electrical energy, and produced F= 92 milliNewtons of thrust, which is only 1/3 of an ounce! The spacecraft has a mass of m= 486 kg, so from Newton’s famous ‘F=ma’ we get an acceleration of a= 0.2 millimeters/sec/sec. It takes about 60 days to get to 1 kilometer/sec speeds. The Dawn mission, launched in 2007 has now visited asteroid Vesta (2011) and dwarf planet Ceres (2015) using a 10 kilowatt ion engine system with 937 pounds of xenon, and achieved a record-breaking speed change of 10 kilometers/sec, some 2.5 times greater than the Deep Space-1 spacecraft.\nThe thing that limits the thrust of the xenon-based ion engines is the electrical energy available. 
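The Deep Space 1 arithmetic quoted just above (F = ma with F = 92 mN and m = 486 kg, then the "about 60 days to get to 1 kilometer/sec" claim) can be reproduced in a few lines; a minimal sketch using only the figures given in the text:

```python
# Deep Space 1 figures quoted above: F = 92 mN thrust, m = 486 kg.
F = 0.092   # N
m = 486.0   # kg

a = F / m                        # ~1.9e-4 m/s^2, i.e. about 0.2 mm/s^2
t_days = (1000.0 / a) / 86400.0  # time to gain 1 km/s, in days

print(round(a * 1e3, 2), round(t_days))  # prints: 0.19 61
```

The result, roughly 61 days, matches the article's "about 60 days" figure.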
Currently, kilowatt engines are the rage because spacecraft can only use small solar panels to generate the electricity. But NASA is not standing still on this.\nThe NEXIS ion engine was developed by NASA’s Jet Propulsion Laboratory, and this photograph was taken when the engine’s power consumption was 27 kW, with a thrust of 0.5 Newtons (about 2 ounces).\nAn extensive research study on the design of megawatt ion engines by the late David Fearn was presented at the Space Power Symposium of the 56th International Astronautical Congress in 2005. The conclusion was that these megawatt ion engines pose no particular design challenges and can achieve exhaust speeds that exceed 10 km/second. Among the largest ion engines that have actually been tested so far is a 5 megawatt engine developed in 1984 by the Culham Laboratory. With a beam energy of 80 kV, the exhaust speed is a whopping 4000 km/second, and the thrust was 2.4 Newtons (0.5 pounds).\nAll we have to do is come up with efficient means of generating megawatts of power to get at truly enormous exhaust speeds. Currently the only ideas are high-efficiency solar panels and small fission reactors.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "1. The problem statement, all variables and given/known data A rocket is launched at an angle of 53° above the horizontal with an initial speed of 75 m/s. It moves in powered flight along its initial line of motion with an acceleration of 25 m/s^2. At 30 s(seconds) from launch, its engine fails and the rocket continues to move as a projectile. (a) What is the rocket's max altitude? (ymax) (b) What is the rocket's total time of flight? (tτ) (c) What is the rocket's horizontal range? (R/x) My given information looks like this: θ= 53°, v1= 75 m/s, a=25 m/s, t= 30s, Δy=0 2. Relevant equations x=vcosθt y=vsinθt+.5at^2 ymax= -(vsinθ)^2/2a 3. 
The attempt at a solution I tried resolving the velocity into x and y components but i'm not even sure if that's useful information. I also attempted to use the above equations but to no avail... I just need to get this started so that I can finish it. The problem is i'm not sure where to start.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-1", "d_text": "If our engine is even close to a jet engine in reliability, has a flak shield to protect against a rapid unscheduled disassembly and we have more engines than the typical two of most airliners, then exceeding airline safety should be possible.\nThat will be especially important for point to point journeys on Earth. The advantage of getting somewhere in 30 mins by rocket instead of 15 hours by plane will be negatively affected if “but also, you might die” is on the ticket.\n* Will be starting with a full-scale Ship doing short hops of a few hundred kilometers altitude and lateral distance. Those are fairly easy on the vehicle, as no heat shield is needed, we can have a large amount of reserve propellant and don’t need the high area ratio, deep space Raptor engines.\nNext step will be doing orbital velocity Ship flights, which will need all of the above. Worth noting that BFS is capable of reaching orbit by itself with low payload, but having the BF Booster increases payload by more than an order of magnitude. Earth is the wrong planet for single stage to orbit. No problemo on Mars.\n* Landing will not be a hoverslam, depending on what you mean by the “slam” part. Thrust to weight of 1.3 will feel quite gentle. The tanker will only feel the 0.3 part, as gravity cancels out the 1. 
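Returning to the projectile problem quoted above: resolving into components is indeed the right start. One way to organize it is to treat the flight as a powered phase along the 53° line followed by ordinary free fall. A sketch of that bookkeeping (g = 9.8 m/s² assumed, since the problem does not state it):

```python
import math

g = 9.8                       # m/s^2, assumed textbook value
theta = math.radians(53.0)
v0, a, t_burn = 75.0, 25.0, 30.0

# Powered phase: straight-line motion along the initial 53-degree direction.
d = v0 * t_burn + 0.5 * a * t_burn**2           # distance along the line
v_burnout = v0 + a * t_burn                     # speed when the engine fails
x1, y1 = d * math.cos(theta), d * math.sin(theta)
vx, vy = v_burnout * math.cos(theta), v_burnout * math.sin(theta)

# Free-fall phase: projectile motion starting from (x1, y1).
y_max = y1 + vy**2 / (2 * g)                    # (a) maximum altitude
t_up = vy / g                                   # coast up to apex
t_down = math.sqrt(2 * y_max / g)               # fall from apex to the ground
t_total = t_burn + t_up + t_down                # (b) total time of flight
x_range = x1 + vx * (t_up + t_down)             # (c) horizontal range

print(round(y_max), round(t_total, 1), round(x_range))
```

The numbers come out very large because the engine burns for a full 30 s at 25 m/s²; the structure of the solution is the same for any burn time.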
Launch is also around 1.3 T/W, so it will look pretty much like a launch in reverse….\n* The main tanks will be vented to vacuum, the outside of the ship is well insulated (primarily for reentry heating) and the nose of the ship will be pointed mostly towards the sun, so very little heat is expected to reach the header tanks. That said, the propellant can be cooled either with a small amount of evaporation. Down the road, we might add a cryocooler.\n* 3 light-minutes at closest distance. So you could Snapchat, I suppose. If that’s a thing in the future.\n* But, yes, it would make sense to strip the headers out and do a UDP-style feed with extreme compression and a CRC check to confirm the packet is good, then do a batch resend of the CRC-failed packets. Something like that. Earth to Mars is over 22 light-minutes at max distance.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "Enter at a steep pitch and then bring it down to around a pitch of 20.\nYou should still have 300-400(ish) dV spare for raising the Pe back up to a stable orbit after aerobraking and then deorbiting for return to KSC (and should still have a fair bit of fuel to run the jet engines if needed).\nSome of pics show it with the NEBULA decal mod to add the flags to the wings but I took those off so it could be posted as a pure stock craft.\nDo you really want to downvote this?\nDon't forget, people build craft at all skill levels, just 'cos something is 'newbish' doesn't mean it needs hatin'.\nIt will cost you 5 of your own points to downvote", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-6", "d_text": "The dimensions and performance of Dyson's vehicles are given in the table below\n|Ship diameter (meters)||20,000 m||100 m|\n|Mass of empty ship (metric tons)||10,000,000 t (incl.5,000,000 t copper hemisphere)||100,000 t (incl.50,000 t structure+payload)|\n|+Number of bombs = total bomb mass (each 1MT bomb weighs 1 metric 
ton)||30,000,000||300,000|\n|=Departure mass (metric tons)||40,000,000 t||400,000 t|\n|Maximum velocity (kilometers per second)||1000 km/s (=0.33% of the speed of light)||10,000 km/s (=3.3% of the speed of light)|\n|Mean acceleration (Earth gravities)||0.00003 g (accelerate for 100 years)||1 g (accelerate for 10 days)|\n|Estimated cost||1 U.S. GNP (1968)||0.1 U.S. GNP|\nLater studies indicate that the top cruise velocity that can theoretically be achieved by a thermonuclear Orion starship is about 8% to 10% of the speed of light (0.08-0.1c). An atomic (fission) Orion can achieve perhaps 3%-5% of the speed of light. A nuclear pulse drive starship powered by matter-antimatter pulse units would be theoretically capable of obtaining a velocity between 50% to 80% of the speed of light.\nAt 0.1c, Orion thermonuclear starships would require a flight time of at least 44 years to reach Alpha Centauri, not counting time needed to reach that speed (about 36 days at constant acceleration of 1g or 9.8 m/s2). At 0.1c, an Orion starship would require 100 years to travel 10 light years. The late astronomer Carl Sagan suggested that this would be an excellent use for current stockpiles of nuclear weapons.\nA concept similar to Orion was designed by the British Interplanetary Society (B.I.S.) in the years 1973–1974. Project Daedalus was to be a robotic interstellar probe to Barnard's Star that would travel at 12% of the speed of light (0.12c).", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-4", "d_text": "“Only 3 seconds, but got through the ignition sequence, which is always the big question at this phase,” Garvey said in an email. “Used techniques previously developed with our 500 lbf LOX/propylene engine. 
Also continued to evaluate thrust vector control.\n“The work plan still is to conduct additional testing this spring / summer (more ignition cycles, extend the duration), followed eventually by a flight test featuring this engine,” Garvey wrote.\nThe company is developing the engine under a NASA Small Business Innovation Research (SBIR) Phase II contract awarded last year. The award was for a total of up $700,000 for work over a period of two years.\nGarvey’s initial goal is to deliver 10 kg payloads into a 250-km orbit. A larger version of the booster will be designed to place satellites weighing up to 20 kg into a 450-km orbit.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-8", "d_text": "The efficiency of the T55 is 0.5, which, when you do the math, comes down to 2.2 kilograms (over 4 pounds or 3 liters) of aviation kerosene every second. My fuel tank holds 11 gallons. It’d run out in under 14 seconds. That’s the price of power.\nAbandon all Logic ye who Enter Here\nThe world’s largest diesel engine (the Wärtsilä-Sulzer RTA98-C) probably wouldn’t fit in my house, even if you hollowed out all those troublesome floors and walls. It looks like this:\nIt’s built to power the world’s largest oil tankers. The individual cylinders are so big that, when the engineers are maintaining them, they just climb inside and sit down at the bottom. To even consider attaching that monster to my car is to descend into madness. Luckily, I’m already down here. The RTA98-C produces a horrifying 46,000+ horsepower, and 7,600,000 newton-meters of torque at 102 RPM. That torque is ridiculous. Here’s how ridiculous: if you took the engines on a 747 and attached them so that they pointed up and down (respectively), to make the plane spin like a pinwheel, the plane would be experiencing the same torque that the RTA98-C puts out. It produces so much torque that, if you attached an I-beam 150 meters long to the crankshaft, it could lift me even if I sat on the end. 
And even if you include the mass of the I-beam.\nBut you’re not here to hear about bizarre man-lifting contraptions. You’re here for speed! We already know that the limitation isn’t going to be the force the engine can put out. It’s the motor’s low speed. The RTA98-C could propel my car at all of…2.87 mph (4.61 km/h). Even when I’m out of shape, I can walk faster than that. But I can’t do what the Wärtsilä would let me do: pull with almost ten times the force of a Shuttle solid rocket booster. I could haul the whole Titanic. Over land. On a trailer, I could probably haul a couple of office buildings.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-4", "d_text": "Since we had been\nworking with very low flow rates for a while, we had a 6 hose from the tank to\nthe ¼ servo valve, then another short 6 hose to a 1/8 restriction right before\nthe engine. As the 6 hoses clear out,\nthe total pressure drop in the system goes down, letting the thrust go up. We have rediscovered this many times, and\nalways seem to forget about it when looking at test runs with a nicely repeating\nthrust spike at the end.\nFor the second run, we exchanged the tank to valve hose for\na 10, and removed the 1/8 restriction.\nThe run was good again, with steady thrust increasing from 100lb to\n150lb, but it still peaked another 50lb at the end, due to the second short\nlength of 6 hose. The run was a little\nrougher, probably because the pressure drop from the 1/8 restriction was\nproviding some flow damping. The\nmeasured Isp across these two runs was about 100s, which isn't too bad for the\nlow chamber pressure and notable transient behavior at the start. The thrust peaked (after clearing all the\nrestrictive plumbing) at 200 lbf at 173 psi tank pressure with a 1.25 diameter\nFor the third run, we put the 1/8 restriction back in to\nsmooth it out, but moved the servo valve directly to the side of the engine,\neliminating the second length of hose. 
We\nloaded 4x the propellant, which would give a long enough run to get everything\nfully stabilized and measured. On\nfiring, the thrust ramped up, but instead of lighting, it went cloudy, and\nthrust dropped off rapidly.\nMy only current theory, assuming the catalyst isn't\ndeteriorating with each run, is that we only have enough active catalyst area\nto make about 200 lbf of thrust, and the initial inrush of liquid before there\nis chamber pressure is contributing to a cooling of the catalyst, which must\nrace against the building of chamber pressure.\nWe can test this by trying to crack the throttle to build some chamber\npressure before opening it all the way up.\nA larger cavitating venturi would be the proper solution.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-2", "d_text": "Satellite launches, especially those lifting payloads to low-earth orbits, initially boost upwards, but then accelerate the payload on a path nearly parallel with the earth’s surface to reach the velocity needed to sustain the orbit. Low-thrust engines are typically used during the latter phase of the boosted trajectory to achieve the needed radial velocity. The maximum altitude of the payload is on the order of 200 to 500 km, depending on the orbital parameters required by the mission. Ballistic missiles, on the other hand, boost the warhead to high altitudes, allowing the payload to coast downrange to a maximum distance. A 10,000 km range intercontinental ballistic missile (ICBM) reaches a peak altitude of more than 1,000 km when on a minimum energy (i.e. maximum range) trajectory. Lifting a warhead to such heights requires high-thrust engines to avoid gravity losses while accelerating upward.\nWith the exception of the July 2006 firing of the Taepodong-2, which exploded too early in its liftoff trajectory to determine its mission, all of the other large rockets launched by North Korea were designed to maximize performance as a satellite launcher. 
In each case, the Taepodong-1 and Unha rockets flew on trajectories fully consistent with a satellite launch. Further, the Taepodong-1 used a low-thrust (Isayev 5D67) engine scavenged from an S-200 (NATO designated SA-5) air-defense missile on the second stage. Flight data displayed in the North Korean control room during the December 2012 Unha-3 launch, indicate that the second stage is a modified Scud-B missile with a larger diameter airframe to hold more fuel. The third stage is likely similar to that found on Iran’s Safir carrier rocket, which consists of vernier (i.e. steering) engines from either the Soviet R-27 (NATO designated SS-N-6) submarine-launched ballistic missile (SLBM), or another Soviet system, such as the ROTA, which was never fielded by the Soviet Union. The Unha’s use of long-burning, low-thrust upper stages is optimal for space missions, though if used as a ballistic missile, the low-thrust engines would suffer significant gravity losses during its upward trajectory, robbing the missile of roughly 800 km of range.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-3", "d_text": "Back in the early-mid 1800s, steamships had a similar problem to today’s launch vehicles. The steam engines of the day burned so much coal that the ships were limited in how far they could travel and still have enough room left over to carry a worthwhile amount of cargo. They solved this problem by breaking up the longer shipping routes into shorter lengths with strategically placed coaling stations. This allowed the early steamships to travel the globe while carrying a lot less coal and a lot more cargo. This significantly reduced the cost of shipping goods and people around the world while allowing the shipping companies to operate at a higher profit margin. 
It was a win-win solution for everyone.\nIn the case of Earth to orbit spaceflight, the problem isn’t distance traveled, the problem is the amount of speed the rocket needs to achieve to reach orbit. Since it isn’t possible to place a refueling station halfway up, the only other option is to reduce the amount of speed the launch vehicle needs to achieve to reach orbit. This can be done by adding speed to the launch vehicle at both the beginning and the end of its flight to orbit using externally applied power. This will significantly reduce the amount of propellant the launch vehicle needs to carry, which will allow it to carry more payload.\nThis is what a Combination Launch System does.\nA combination launch system adds velocity to the launch vehicle at the beginning of its flight to orbit using either a catapult,\nor by air launching the launch vehicle from high in the atmosphere with a carrier aircraft.\nThe combination launch system also adds velocity to the launch vehicle at the end of the flight with a non-rotating skyhook.\nThe end result is that the launch vehicle only needs to carry the propellant for the increase in speed that occurs in the middle part of the flight. The total amount of speed supplied by a mature combination launch system represents up to 1/3 or more of the total speed required for reaching orbit. 
This reduces the take-off weight to payload weight ratio of the launch vehicle from 200:1 down to 20:1 or less.\nThis will also allow the launch vehicle to be built as a 100% reusable single-stage vehicle that is much smaller in size than existing launch vehicles.\nFor example, the Falcon 9, which carries 6,000 pounds of usable payload to the International Space Station, has a take-off weight of 1.2 million pounds.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-1", "d_text": "If your delta-V is too low, the spacecraft will not be able to perform any useful missions whatsoever.\nSpacecraft have parameters too; it is simply that they are unfamiliar measures you have not encountered before. I am going to list the more significant ones below, but they will be discussed thoroughly on other pages. Refer back to this list whenever you run across an unfamiliar term.\nWhen this summary was put before Rob Herrick, an epidemiologist, he did not think it was possible.\nThe fourth and most extreme tactic is to cheat the equation itself, to make the whole equation not relevant to the spacecraft. The equation assumes that the spacecraft is carrying all of the propellant needed for the mission; this can be bent in quite a few ways.\nAll three families have been in use for many years, though each family has relatively modern members pushing the limits. All share common basic physics with regard to their efficiencies.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-19", "d_text": "Five minutes into the flight, the engine was overheating and Richmond found the aircraft to be overly sensitive in pitch. Lacking the canopy, he was buffeted by the 200-mph slipstream. Then a broken radiator fitting scalded him with steam from the radiator. 
He hastily made for landing without a good idea of the aircraft's ideal landing speeds only to discover what the stall speed was when he was still 30 feet above the ground.\nThe T.5 pancaked into the ground, driving the landing gears through the wing. The tail broke off and Richmond survived with significant burns. No further attempts were made as one month later the Battle of Britain began. The T.5 had a total of six minutes of flying time.\nSource: Aviation History, July 2010. "Built for Speed" by Stephan Wilkinson, p18-19.\n10 May 2010\nNo launch vehicle is as closely identified with the satellite communications industry as the Delta rocket. A direct descendant of the 1950s Thor intermediate-range ballistic missile and initially known as the Thor-Delta, through the 1960s the Delta rocket had firmly established its reputation as a reliable medium-lift launch vehicle that had not only orbited the first telecommunications satellites (Echo, Syncom, Telstar) but also the first weather satellites (TIROS) as well as a wide range of scientific probes (the Explorer series and the OSO solar-observation satellites). The vehicle was progressively modified for increases in capacity and performance but the real revolution launched aboard the Delta rocket would come in 1972.\nThe early Delta rockets used the Rocketdyne MB-3 liquid fuel engine for the first stage, the MB-3 being derived from the MA-3 engine that Rocketdyne built for the Atlas ICBM. Rocketdyne began work on a successor engine called the H-1 that would be used on the Saturn IB rocket for NASA. 
Having progressively tweaked the design to a nearly 50% increase in thrust, Rocketdyne built 322 H-1 engines for NASA and it was a relatively easy task to adapt the proven H-1 to be the new Delta first stage engine as the RS-27.\nAt the same time that the Delta got a new, more powerful engine, in June 1972 the Federal Communications Commission decided to allow private companies to compete for domestic satellite communications. Prior to this, only international communications had any competition.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-0", "d_text": "In any case, NASA's next big thing is SLS (Space Launch System). There's a few other games in town (Elon Musk and his various efforts, as well as the Russian Angaras and the like look pretty interesting), but for now SLS looks to have the biggest payload of anything in the near future. It makes me very happy that NASA is taking concrete steps toward the development of a genuine heavy lift vehicle, and I sincerely hope that it all goes well, and we get people back on the moon and other places sooner rather than later.\nHowever, there's a minor problem I have with SLS. Well, actually, it's with the payload proposals and upper stages I've seen; they're all chemically powered.\nWhy is this a problem? Well, chemical propellants (such as LOX/LH2, Kerolox, and similar) have a problem. While they have great thrust (very useful when you're lifting big things out of the gravity well), their efficiency is terrible. At the most, you're only getting about 460 seconds of specific impulse out of chemical propellants, probably a lot less if you're using ones that you can store in space long term. This means that if you want to get somewhere far away (like Mars), you're going to need a horrendous mass ratio (Mass ratio is the ratio of the initial mass of the spacecraft to the empty mass. 
So, if you have a spacecraft that weighs 200 tons, 100 of which is fuel that you use to get where you're going, you have a mass ratio of 2). Here's a quick little graph I mocked up in MATLAB to show you how your mass ratio varies with delta-V and specific impulse;\nAs you can see, it's pretty much an exponential relationship. (A quick examination of the Tsiolkovsky Equation will show why.) For reference, going one way to Mars orbit from LEO takes roughly 5,000 m/s of delta-V, according to my quick and dirty math.\nSo, what if we can stretch out the specific impulse axis of that graph a bit? Maybe down to around 800, 900 or so. That would be pretty cool, right? But is there anything we could do using today's technology that can get us there?\nWell, if we really wanted to, we could build an Orion drive. Some people might be mildly annoyed by that, though.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-13", "d_text": "(f)Compute the thrusting time at perigee from (13). If , proceed to step g. Otherwise, break the iterative sequence and go to step 7.(g)Update and TOF: (h)Update the IBS spacecraft mass: (i)Set and .(j)Compute the current acceleration on the spacecraft: (k)Compute the time of flight spent coasting before apogee from to .(l)Compute the equinoctial parameters after the thrusting apogee arc as in (12).(m)Compute the thrusting time at apogee from (13). If , proceed to step n. Otherwise, break the iterative sequence and go to step 7.(n)Update and TOF: (o)Update the IBS spacecraft mass: (p)Set and .(7)Back-track the point at which and compute the corresponding equinoctial parameters and update accordingly.(8)Compute the mismatch between the actual final conditions and the target orbit:\nSummarizing, the TPBVP has been reduced to an optimisation problem in the form:\nThis problem can be solved with a gradient-based optimisation algorithm like MATLAB’s fmincon. 
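The mass-ratio curve plotted in the MATLAB graph mentioned earlier follows directly from the Tsiolkovsky rocket equation, R = exp(Δv / (Isp · g0)). A small sketch reproducing the Mars example from that passage (Δv ≈ 5,000 m/s, chemical Isp of 460 s, then the "stretch the Isp axis" values of 800 and 900 s):

```python
import math

g0 = 9.80665  # m/s^2, standard gravity

def mass_ratio(delta_v, isp):
    """Tsiolkovsky rocket equation: initial mass over empty mass."""
    return math.exp(delta_v / (isp * g0))

# One-way LEO-to-Mars-orbit figure quoted above, with chemical Isp.
print(round(mass_ratio(5000.0, 460.0), 2))  # prints: 3.03

# The "stretch the Isp axis" thought experiment from the text:
for isp in (460.0, 800.0, 900.0):
    print(isp, round(mass_ratio(5000.0, isp), 2))
```

At Isp = 460 s, roughly two-thirds of the departing mass must be propellant; pushing Isp to 900 s cuts the required ratio to under 1.8, which is the exponential payoff the post is pointing at.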
Note that the time of flight is specified a priori and, therefore, it might occur that this duration is too short as to obtain the change in the orbital parameters specified by the boundary constraints. In this case, the problem is infeasible and the optimisation is terminated after a maximum of 50 iterations if the constraints are not satisfied.\nIn the following, an example of transfer from an elliptical orbit with 300 km perigee altitude and eccentricity 0.031 (corresponding to the final orbit of a de-orbiting strategy) to a circular orbit of 1100 km altitude (corresponding to the orbit of the next debris in an hypothetical removal sequence). Parameters of the two orbits are reported in Table 2. Note that the total plane rotation in this case is 10 degrees. The specified time of flight is 70 days.\nFirst, it is considered the case of a coplanar transfer, that is, will be computed. The optimisation problem was solved with fmincon in 6 iterations and less than 10 seconds, returning a minimum cost of 0.301 km/s, with 1001 revolutions.", "score": 8.086131989696522, "rank": 99}]} {"qid": 11, "question_text": "What happened to the guitarist who recorded the guitar solo in 'Rock Around The Clock' with Bill Haley?", "rank": [{"document_id": "doc-::chunk-4", "d_text": "When Big Joe Turner, a blues shouter who had an R. & B. hit with the song, sings the same line, we see something different.\nHaley died in 1981, at the age of fifty-five, a year or so before John Swenson published his biography, “Bill Haley: The Daddy of Rock and Roll_._” The Stray Cats were about to become famous by updating much of Haley’s sound. He is little known today compared to Berry, Presley, Little Richard, and the other early kings of rock and roll, for whom he helped cut a path. But “Rock Around the Clock” has remained intermittently iconic. 
It’s a youthful grandfather to any number of rock party anthems: “Rock and Roll Music,” “Rock and Roll All Nite,” “I Love Rock and Roll.” And when something of the early rebellious rock spirit is sought, it’s Haley’s record that people often reach for. In 1987, LL Cool J tied his hip hop to rock and roll by egging on his d.j.—“Go Cut Creator Go”—instead of that noted guitarist Johnny B. Goode, and by kicking off the track with “One, two, THREE o’clock, four o’clock ROCK!”\n“Rock Around the Clock” climbed to number one in large part because it was the theme music to “Blackboard Jungle,” the 1955 Glenn Ford film that linked rock and roll to a whole range of juvenile delinquencies, from the smashing of jazz albums to attempted rape. During a nineteen-fifties revival, nearly twenty years later, “Rock Around the Clock” was chosen to be the first track on the “American Graffiti” soundtrack, was the theme song for the first two seasons of “Happy Days” and, in 1974, briefly returned to the Billboard Top 40. This nostalgic, early-nineteen-seventies moment presented rock and roll, and Haley’s record in particular, as good, clean fun, a flashback to music now devoid of its former ability to scare as well as to delight.\nThat was nothing new.", "score": 51.50796772611412, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "The doors to the club were open and you could hear everything outside and there was a crowd out in the street listening to them. And the crowd outside drew a crowd. They caused quite a stir. It was obvious ‘Rock Around the Clock’ was going to be popular."\nNearly a year after the band debuted "Rock Around the Clock" in New Jersey, the song got a boost from Hollywood. It played during the opening titles of the drama "Blackboard Jungle," a movie about teen delinquents. Decca re-released it and the single vaulted to the top of the pop chart in the summer of 1955, remaining number one for nearly two months. 
Haley and His Comets also had hits with covers of R&B tunes, \"See You Later, Alligator\" and \"Shake, Rattle and Roll.\"\nHaley, who grew up in the Philadelphia suburbs, successfully reinvented himself after years of struggling as a country crooner. His group, the Saddlemen, was the house band at the Twin Bar, a Gloucester City tavern. When Haley hired Richards and Ambrose, he segued from twang to rock, commercializing songs by blues artists.\n\"I always thought that Bill would have been happier if he had made it in the country field rather than rock ‘n’ roll,\" said Richards. \"Bill was a champion yodeler. The Saddlemen worked as just four guys doing cowboy and hillbilly stuff. They didn’t have a drummer. After I came into the band and they added the saxophone, that started a tremendous transition. They changed the name to the Comets and they took off the cowboy suits.\"\nWhen Haley and His Comets returned to Wildwood for a Labor Day concert in 1955, thousands of screaming fans packed an amusement pier for outdoor show, according to Richards.\n\"Wildwood was an entertainment mecca during the 1950s,\" he said. \"In the summertime, it was better than New York City. It was the top spot on the East Coast. At every corner, there was great entertainment. When they started taking down the old clubs, they ruined it. If they were smart, they’d put the place back like it used to be and bring the entertainment back.\"\nWildwood was once nicknamed \"Little Las Vegas\" for its glitzy nightlife.", "score": 50.34466440643294, "rank": 2}, {"document_id": "doc-::chunk-2", "d_text": "There wouldn’t be a second rock-and-roll Billboard chart-topper until The Platters’ “The Great Pretender,” in early 1956, or a third until Elvis Presley’s “Heartbreak Hotel,” that spring. By then, though, Chuck Berry, Fats Domino, Carl Perkins and, again, Haley (with “See You Later, Alligator”) had all invaded the top ten. 
Predictably, several acts, including Pat Boone and Gale Storm, landed major pop hits during the same period with conventional covers of less commercially successful rock-and-roll hits.\nHaley’s “Rock Around the Clock” was by no means the first rock-and-roll record. Nominees for that title are largely dependent upon how a particular nominator values the music. If you think of rock and roll as what white people called a certain form of energetic, black Rhythm & Blues after stealing the style, you might choose “Rocket 88,” the 1951 R. & B. hit released by Jackie Brenston, the vocalist in Ike Turner’s band. (Haley and his group, then still going as the Saddlemen, cut a version of the song that same year.) On the other hand, if you think rock and roll mostly marks an integrationist cultural shift among young white teenagers, who weren’t thieves but knew what they liked, then you might go with something like Haley’s minor 1952 hit “Rock the Joint,” a twanged-up, proto-rockabilly take on a 1949 R. & B. hit from Jimmy Preston.* The Haley biographer John Swenson has identified “Rock the Joint” as “the beginning of the age of rock & roll,” a regional success that “became an anthem for [white] school kids who refused to make the limited choice of listening to segregated music.” Then again, if you define rock and roll as a combination of R. & B. with a smaller but essential portion taken from country music, with plenty of black gospel and some white pop mixed in as well, perhaps you think that it’s all far too complicated to reduce to any specific first record.\nNo matter which way you look at it, though, Bill Haley is a foundational figure.", "score": 48.556140012581295, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Marshall Lytle, whose snappy bass work anchored Bill Haley and the Comets' \"Rock Around the Clock,\" died Saturday at his home in New Port Richey, Florida. He was 79.
According to The New York Times, Lytle died of lung cancer.\nLytle was a teenage guitar player working at a Chester, Pennsylvania, radio station when Haley, who worked at another station, recruited him to play bass. Lytle notably didn't play the instrument, but Haley taught him the rudiments of slap-bass in a 30-minute session. The technique was key for country music, which was the focus of Haley's band – then known as Bill Haley and His Saddlemen.\n\"He got this old bass fiddle out, started slapping it, with a shuffle beat, and showed me the basic three notes you need on a little bass run to get started with,\" Lytle said in a 2011 radio interview. \"I gave it a try and I said, 'Hell, I can do that.'\"\nHaley's band changed their name to Bill Haley and the Comets in 1952, and Lytle played on memorable tracks including \"Crazy, Man, Crazy\" and \"Shake, Rattle and Roll.\" The bassist developed a lively stage presence, throwing his bass into the air, lifting it over his shoulder and riding it like a horse. The group landed their biggest hit in 1955 after a version of their classic recording of \"Rock Around the Clock\" appeared in the opening credits of the movie Blackboard Jungle.\nLytle and two other band members left Bill Haley and the Comets in 1955 after a salary dispute, branching off into their own group, the Jodimars. The band would become a popular Las Vegas lounge act. Lytle changed his name to Tommy Page in the 1960s at the suggestion of a booking agent, who said his name was too closely connected to his stint with Bill Haley and the Comets.\nThough Haley died in 1981, Lytle and other Comets reunited in 1987 and performed off and on until 2009. 
Lytle, along with other Comets, was inducted into the Rock and Roll Hall of Fame last year.\nLytle was born on September 1st, 1933, in Old Fort, North Carolina, before his father moved the family to Pennsylvania.", "score": 48.46389779816643, "rank": 4}, {"document_id": "doc-::chunk-3", "d_text": "When Charlie Gillett published “The Sound of the City: The Rise of Rock ‘n’ Roll,” the genre’s first formal history, in 1970, he identified five early forms of the music, the first of which was “Northern band rock ‘n’ roll, exemplified by Bill Haley and His Comets.” Just last year, in “Yeah! Yeah! Yeah!: The Story of Pop Music from Bill Haley to Beyoncé,” Bob Stanley didn’t just toss Haley into his subtitle, he went all in. “Haley invented rock ‘n’ roll,” he writes. “No one had blended country & R&B before Haley wrote and recorded ‘Rock the Joint’; no one hit the Billboard Top 20 with something that could be safely labeled rock ‘n’ roll before ‘Crazy Man Crazy’; no one scored a rocking number before ‘Rock around the Clock’ turned the music world upside down.”\nThat “invented” overstates the case, but it serves as a corrective, too, spotlighting the essential-to-the-form contributions of an artist who is usually mentioned only in passing. There are a number of reasons why Haley gets overlooked. He was only a year older than Chuck Berry but seemed more like twenty years older. He looked like a puffy George Reeves, right down to the Superman spit curl, and he favored loud, checked sport jackets, normally associated with vacationing appliance salesmen. (When Jerry Lewis parodied the rock craze in his 1958 film “Rock-a-Bye Baby,” his band was costumed the same way, in mimicry of the Comets.) Onstage, Haley lacked the charisma and energy of other early rockers. He wasn’t a new teen idol but a familiar type: the bandleader. 
His Comets were a rockin’ band, but one only as good as its material, and after “Shake, Rattle and Roll” (the first rock-and-roll record to go top ten, in 1954) and “Rock Around the Clock,” the quality of their work dropped precipitously. Even Haley’s most thrilling performances were sexless. In “Shake, Rattle and Roll,” when Haley sings “I’m like a one-eyed cat peepin’ in a seafood store,” we imagine a cat, with one eye, looking hungrily into the window of a seafood shop.", "score": 46.73208017374724, "rank": 5}, {"document_id": "doc-::chunk-2", "d_text": "Although the record only made it to the mid-20s on the Billboard pop chart, its influence was massive in scope.\nHere was a black rock-and-roll record with across-the-board appeal, embraced by white teenagers and Southern hillbilly musicians alike (including Elvis Presley, who added it to his stage show). And it was fortunate for the electric guitar that one of its earliest champions was not only an extraordinary musician and showman, but also one of pop music’s greatest and most enduring singer-songwriters.\nWith monster hits such as Roll Over Beethoven (1956), Rock and Roll Music (1957) and Sweet Little Sixteen (1958), Chuck Berry did much to forge the genre. His formula was ingenious: write lethally funny lyrics about the teenage experience, strap them into a high-octane groove, add a little country twang, shake it up with a showstopping guitar solo, and then watch the acclaim pour in.\nIt was a recipe that would dominate popular music for decades to come. And if early listeners didn’t understand how important his guitar was to the mix, Chuck soon made the connection explicit.\nIn his 1958 masterpiece Johnny B. Goode, Berry created the ultimate rock-and-roll folk hero in just a few snappy verses. As we all know, Goode wasn’t pounding a piano, singing into a microphone, or blowing a sax. 
In his choice of the electric guitar, something sleek and of the moment, the fictional character of Goode would forge an image of the archetypal rocker, doing as much to shape the history of the instrument as any real-life figure ever has.\nThe song’s opening riff is a clarion call – perhaps the greatest intro in rock-and-roll history. It was played by Berry on an electric Gibson ES-350T, and it indeed sounded “just like a-ringin’ a bell.”\nThe tale begins “deep down in Louisiana,” where a country boy from a poor household is doing his best to get by. Johnny, we discover, “never ever learned to read or write so well,” but he has something better than a formal education or a diploma – he has talent, street smarts, and a guitar.\nJohnny’s Gibson is his instrument and also his ticket out of the backwoods. After introducing our hero, the anthem turns to his guitar.", "score": 44.49197059730293, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "Scotty Moore, Rock Pioneer and Elvis Presley’s Guitarist, Dies at 84\nScotty Moore, the pioneering rock ‘n’ roll guitarist whose fluid picking propelled Elvis Presley’s first recordings for Sun Records, died Tuesday in Nashville, according to Memphis newspaper the Commercial Appeal. 
He was 84.\nMoore was a member of a local country combo in Memphis when he was drafted by Sun owner Sam Phillips to work with the young, untested teenage singer on his debut recordings.\nHis crisp, flowing, melodic guitar lines, heavily influenced by Chet Atkins’ early work but also infused with deep blues feeling, highlighted the singles issued by Sun during Presley’s rise to fame in 1954-55.\nMoore went on to work behind Presley after he moved to major label RCA in 1956, appearing on such major hits as “Heartbreak Hotel” and “Blue Suede Shoes.” He also took supporting roles in several of Presley’s early feature films, and took a key instrumental role in his 1968 “comeback special.”\nHe was inducted into the Rock and Roll Hall of Fame in 2000.\nBorn Winfield Scott Moore III on Dec. 27, 1931, on a farm outside Gadsden, TN, Moore, the youngest of 14 children, began playing guitar at 8. He enlisted in the Navy and served in Korea from 1948-52.\nMoore founded the Starlite Wranglers shortly after his release from the service; the sextet also included the comedic bass player Bill Black. The band had recorded a single for the young, blues-oriented label Sun in May 1954.\nAt the suggestion of Sun office manager Marion Keisker, Moore called up Presley – whose name, he later said, sounded like “a name out of science fiction.” After Moore and Black had rehearsed with the vocalist, they entered the studio on July 5, 1954. After running down several tunes they knew in common, at Phillips’ urging the trio cut a version of Arthur Crudup’s blues “That’s All Right.”\nReleased as a single later that month, the song took off regionally and jump-started Presley’s career.
an Elvis Presley construction": "the raw, emotive, and slurred vocal style and emphasis on rhythmic feeling [of] the blues with the string band and strummed rhythm guitar [of] country". In "That's All Right", the Presley trio's first record, Moore's guitar solo, "a combination of Merle Travis–style country finger-picking, double-stop slides from acoustic boogie, and blues-based bent-note, single-string work," is a microcosm of this fusion.\nMany popular guitarists cite Moore as the performer who brought the lead guitarist to a dominant role in a rock 'n' roll band. Although some lead guitarists and vocalists, such as Chuck Berry and the blues legend BB King, had gained popularity by the 1950s, Presley rarely played his own lead while performing, instead providing rhythm guitar and leaving the lead duties to Moore. As a guitarist, Moore was a noticeable presence in Presley's performances, despite his introverted demeanor. He became an inspiration to many subsequent popular guitarists, including George Harrison, Jeff Beck, and Keith Richards of the Rolling Stones. While Moore was working on his memoir with co-author James L.
Dickerson, Richards told Dickerson, "Everyone else wanted to be Elvis—I wanted to be Scotty." Richards has stated many times (in Rolling Stone magazine and in his autobiography, Life) that he could never figure out how to play the "stop time" break and figure that Moore played on "I'm Left, You're Right, She's Gone" (Sun), and that he hopes it will remain a mystery.\nOne of the key pieces of equipment in Moore's sound on many of the recordings with Presley, besides his guitars, was the Ray Butts EchoSonic, first used by Chet Atkins, a guitar amplifier with a tape echo built in, which allowed him to take his trademark slapback echo on the road.\nLater years and death\nScotty Moore co-wrote the songs "My Kind of Carrying On" and "Now She Cares No More" which were released as Sun 202 on Sun Records in 1954 when he was a member of the group Doug Poindexter and the Starlite Wranglers with Bill Black as the bassist.", "score": 41.36659136354729, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "Happy Birthday Scotty Moore!!!!!!!!!!!\nThe man who played guitar on Elvis's "That's All Right" has inspired millions to pick up the guitar and play it. His finger picking style was gleaned from Chet Atkins. Scotty Moore is listed by Rolling Stone magazine as #44 of the all-time guitarists! Scotty's role in Elvis Presley's band helped bring the guitar as a lead instrument to the forefront of the band. Scotty played Gibson ES-295's, Gibson L-5, and then Gibson Super 400's.
He employed slapback echo (Ray Butts EchoSonic) that has become a standard among many players including Brian Setzer.", "score": 40.69436961635564, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "STILL ALIVE AND WONDERING WHY?\nSUNDAY, JUNE 7, 2009\nRene Hall, as an arranger and session guitarist, was one of the most influential men behind the scenes of rock'n'roll and rhythm and blues for over twenty years, yet he has been ignored and/or written out of history to such an extreme that I can't even find one photo of him to go with this posting. He gave one interview in his life, to the U.K. collector's mag New Kommotion in 1980.\nHall had a long career and was in demand constantly; he never seemed to lack for work, mostly as an arranger. Today's posting, however, will examine only a small part of that career, his work as a session guitarist, and from there we will focus on the years 1957-60 when he recorded the records that best fit my own personal definition of what great rock'n'roll is. After all, it's my blog.\nRene Hall was born in New Orleans in 1912 and began his musical career picking six string banjo in Papa Celestin's Orchestra, playing traditional New Orleans jazz. He worked on the riverboats in the 1940's with Sam Morgan's Orchestra and later with Sydney's Southern Syncopaters. Somehow he ended up in Tulsa, Oklahoma where he switched to guitar and played with Ernie Fields' band (he'd record with Fields in the fifties). With Fields he moved to St. Louis where he got a job writing arrangements, conducting and playing trombone with jazz piano giant Earl "Fatha" Hines. For an example of Hines's genius find a copy of Louis Armstrong's Weather Bird.\nHall hit New York City in 1945 where he got arranging work at the Apollo Theater in Harlem, working with acts like Roy Milton and Louis Jordan.
At the Apollo he discovered Billy Ward and the Dominos, with their incredible lead singer Clyde McPhatter and got them their first record deal with Federal Records out of Cincinnati, a subsidiary of the R&B/C&W giant King. He appeared playing guitar on many of their early hits including Do Something For Me, the 1951 smash. He toured with the Dominos, making it as far as England where they played army bases, then moved with them to Las Vegas when they settled in for a long term job at the Dunes Hotel.", "score": 40.019269009591206, "rank": 10}, {"document_id": "doc-::chunk-2", "d_text": "In the 1980's Specialty issued two LP's of Larry Williams outtakes (Unreleased Larry Williams and Hocus Pocus), recordings much rawer than the issued sides. These discs went out of print fast and much of the material has never appeared on CD, but one of these included a version of Bad Boy where Rene Hall plays what must be one of the most out of control guitar solos of all time. You can practically smell the smoke coming from the tubes in his amp.\nWhile at Specialty, Rene Hall also cut three solo 45's; only one was a guitar instrumental, and he only played on one side, but it's quite a classic-- Twitchy b/w Flippin'. The a-side features Willie Joe Duncan, who played a Unitar (one string guitar), and the tune is basically a re-recording of Unitar Rock, which had appeared on the b-side of Bob Froggy Landers' Cherokee Dance a year earlier. Duncan not only had one string on his guitar, he seems to have known only one tune, but a hell of a tune it is. Flippin' features Hall's guitar and is a pretty good rocker in its own right. His next Specialty 45, also issued in 1957, was a version of venerable wino classic Thunderbird b/w When The Saints Go Marching In. For more on the Thunderbird connection see my April posting on the subject.
His final Specialty 45 came in early '57, a slice of novelty exotica that I've always loved-- Cleo; it was backed with an instrumental version of Frankie & Johnny that featured Plas Johnson's blaring tenor.\nBumps Blackwell, with Rene Hall as arranger, had taken Specialty gospel star Sam Cooke of the Soul Stirrers and recorded a pop tune, complete with white backup singers, called You Send Me, which Rupe hated and refused to release, fearing it would offend the gospel fans. Blackwell was sure he had a hit record and a future star in Cooke and worked out a deal with Rupe whereby, in exchange for back royalties he was owed, he could have Cooke's contract and take You Send Me elsewhere. They took it to Bob Keane who issued it on the Keen label and of course it was a huge hit. Hall stayed on with Cooke as his guitarist and arranger until his death, but that's away from our subject today.", "score": 37.90186053380006, "rank": 11}, {"document_id": "doc-::chunk-5", "d_text": "“I know when he got ‘Winchester Cathedral.’ He was thinking, ‘What am I gonna do with this piece of crap?’ But he worked it up to have an old-timey banjo sound, and it became a masterpiece.” It, and several other meticulous H.R. transcriptions, are included in Holder's book.\nHollywood studio guitar doyen Bob Bain laughed, “Howard would pull all-nighters before those sessions. He'd stay up arranging, then go straight to the studio to record. Jack [Marshall] and Howard would come to my place and stay up writing charts and arrangements for the next day's session. Even if I wasn't there, my wife, Judy, would give them the run of the place. Sometimes, I'd be on the date with them the next day… though I had enough sense to get some sleep!”
He was, however, paid only scale for the dates, and never got a dime on the back end.\nThe Studio Years\nHolder recalls Roberts’ reaction to much of L.A.’s jazz scene moving to New York in the early ’60s. “Many of the L.A. jazz musicians consequently turned to the film and TV studios for their livelihoods,” he said. That’s when Roberts quickly became a first-call session player who would eventually, and later routinely, log more than 900 sessions per year. That includes playing on nearly 400 film scores. Howard said between 1966 and ’76, he played on more than 2,000 record albums.\nIn addition to “The Deputy,” Roberts’ TV work included playing the eerie theme for “The Twilight Zone,” working on the scores for “The Munsters,” “The Flintstones,” “The Addams Family,” “Gilligan’s Island” and hundreds more. He even played scene-transition cues on the plectrum banjo for “The Beverly Hillbillies.” On his record dates, he lent his talent to such artists as Peggy Lee, Dean Martin, Elvis Presley, Ray Charles, Bobby Darin, Duane Eddy, The Monkees, Jimmy Smith, The Beach Boys, Rick Nelson, Johnny “Guitar” Watson, The Electric Prunes, and even Chet Atkins.\nOne story commonly passed along illuminates Roberts’ iconoclastic and colorful nature.", "score": 36.09406059453622, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "Duane Eddy (born April 26, 1938) is a Grammy Award-winning American early rock and roll guitarist famous for his \"twangy guitar\" style. He produced a streak of hit singles in the late 1950s and early 1960s, including \"Rebel Rouser,\" \"Forty Miles of Bad Road,\" \"Because They're Young,\" and \"The Lonely One.\"\nEddy's 1959 debut album, Have Twangy Guitar Will Travel, stayed on the charts for a record 82 weeks. 
He recorded more than 25 albums with wide-ranging themes, including his 1986 collaboration with Art of Noise that featured a reworking of his 1960 hit, "Peter Gunn." The single became a top-ten hit worldwide and won the Grammy Award for Best Rock Instrumental. His playing influenced a generation of musicians, including George Harrison, Dave Davies (of The Kinks), Bruce Springsteen, and Mark Knopfler.\nEddy was the first rock-and-roll guitarist to have a signature model guitar. In 2004, he received the Guitar Player Magazine "Legend Award." Inducted into the Rock and Roll Hall of Fame in 1994, he is often acclaimed as the most successful rock-and-roll instrumentalist of all time.\nBorn in Corning, New York in 1938, Eddy began playing guitar at age five, emulating his cowboy hero, Gene Autry. His family moved west to Arizona in 1951. In early 1954, Eddy met local disc jockey Lee Hazlewood in the town of Coolidge. Hazlewood would become his longtime partner, co-writer, and producer. Together, they created a successful formula based upon Eddy's unique style and approach to the guitar and Hazlewood's experimental vision with sound in the recording studio.\nElements of country, blues, jazz, and gospel infused Eddy's instrumentals, which had memorable musical "hooks" and evocative titles like "Rebel Rouser," "Forty Miles of Bad Road," "Cannonball," "The Lonely One," "Shazam," and "Some Kind-a Earthquake." The latter has the distinction of being the shortest song to ever break into the Top 40, at 1 minute, 17 seconds. Eddy's records were often punctuated with rebel yells and saxophone breaks.", "score": 34.71900524067149, "rank": 13}, {"document_id": "doc-::chunk-3", "d_text": "In spring 1997, Eddy was inducted into the Rockwalk on Hollywood's Sunset Boulevard, placing his handprints and signature into the cement along with his friends Chet Atkins, Scotty Moore, and James Burton.
In 2004 he was presented with the Guitar Player Magazine "Legend Award." Eddy was the second recipient of the award, the first having been presented to Eddy's own guitar hero, Les Paul.\nEddy popularized the hard-driving, twangy sound that became part of the musical culture of rock-and-roll guitar. Combining strong, dramatic single-note melodies, bent low strings, and echo, vibrato-bar, and tremolo effects, he produced a signature sound that would be featured on an unprecedented string of 34 chart singles, 15 of which made the top 40, with sales of over 100 million worldwide.\nHis playing also influenced generations of new musicians. Among those who acknowledge his influence are The Ventures, George Harrison, Dave Davies (The Kinks), Hank Marvin (The Shadows), Ry Cooder, John Entwistle (The Who), Bruce Springsteen, and Mark Knopfler. Eddy was also the first rock-and-roll guitarist to have a signature model guitar. In 1960, Guild Guitars introduced the Duane Eddy Models DE-400 and the deluxe DE-500. A limited edition of the DE-500 model was reissued briefly in 1983 to mark Eddy's twenty-fifth anniversary in the recording industry. The Gretsch "Chet Atkins 6120" model has long been associated with Eddy. In 1997, Gretsch Guitars started production of the Duane Eddy Signature Model, DE-6120. In 2004, The Gibson Custom Art and Historic Division introduced the new Duane Eddy Signature Gibson guitar.", "score": 33.71519018019844, "rank": 14}, {"document_id": "doc-::chunk-20", "d_text": "Usually when you say "solos," I picture Steve Vai up there – like, ten feet tall and hideous.\nI also like Mick Ronson. He's not the best guitarist ever, but he's the coolest – ever. He looked cool, played cool, everything was just so cool. And, of course, he played guitar on "Jack and Diane," too.\nI think Steve Howe from Yes is the best. He's not a typical math-rock guitarist. He never used distortion but he still thrashed, which I could never figure out.
And who isn't a fan of Eddie Van Halen? When that first record came out, I wanted to fucking kill myself! I said, "I'm wasting my fuckin' time, aren't I?"\nThere aren't any guitarists anymore who are going to light the world on fire. Maybe everything's been done. Once Eddie Van Halen hit the scene, he did everything that hadn't been done – and that was it. I started liking the emotional guitarists, like Kurt [Cobain] and the punk-rock guitarists. I thought they were more like it.\nLeslie West [of Mountain] never gets any recognition. I've always been a big fan of his, since back when he was a fat kid dropping out of high school in Forest Hills [Queens]. He was, to me, one of the top five guitar players of his era. His playing is so soulful and tasteful. His break in "Theme for an Imaginary Western" is the best thing I've ever heard. It builds so melodically. The last note in the break – he hits one of those notes that just shoots up the octave, this harmonic jump. The whole solo is a thing of beauty.\nJoe Perry (Aerosmith)\nMy favorite guitarist is Steve Rose. He was the first guy I ever saw actually play an electric guitar onstage. He was arguably the best guitar player for a few towns around [Boston], and he gave guitar lessons. You talk about guitar heroes – I can remember seeing him and his band, the Wildcats, at a high school dance. I was like, "Shit!" It wasn't just something coming out of the radio; it wasn't the Beatles on The Ed Sullivan Show. I didn't dance with anybody; I just watched him play guitar.", "score": 33.047195178277654, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "Legendary studio and live guitarist Reggie Young, who played on classic songs by Waylon Jennings, Elvis Presley, Hank Williams, Jr. and more, died Thursday (Jan. 18) in Nashville. He was 82.\nYoung was born in Caruthersville, Mo. in 1936. He was raised in Osceola, Ark. and moved to Memphis at the age of 14.
The Memphis Commercial-Appeal reports that Young began playing guitar professionally at the age of 15.\nAfter serving in the Army (and turning down an offer to join the CIA), Young returned to Memphis in the mid-1960s. There he worked at American Studios as part of the legendary house band The Memphis Boys. The Memphis Boys played a pivotal role in late '60s music, performing on Dusty Springfield's "Son of a Preacher Man," Neil Diamond's "Sweet Caroline," Elvis Presley's "Suspicious Minds" and more. The band played on over 100 hit pop, country, rock and soul singles.\nThroughout his career Young played with a variety of country artists, including Carl Perkins, Johnny Horton, Waylon Jennings, Kenny Rogers, Merle Haggard, Willie Nelson and Guy Clark. Among the country songs Young lent his talent to are Willie Nelson's "Always On My Mind," Hank Williams, Jr.'s "Family Tradition" and Jennings' "Luckenbach, Texas." He also toured with The Highwaymen for several years.\nYoung released his solo album Forever Young in 2017.", "score": 32.52743150405413, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "Peter Frampton sold millions of records with the help of a customized Gibson guitar. Three decades ago, that guitar was destroyed in a plane crash ... or so he thought.\nThe story begins in 1970, when Frampton and his old band Humble Pie scored a gig playing two sets a night at the Fillmore West in San Francisco. Frampton says the first night was a rough go: The guitar he was using fed back at loud volumes and made soloing a chore. After the show, an audience member approached him and offered to help.\n"He said, 'Well, look, I have a Les Paul that I've sort of modified myself a little. Would you like to try it tomorrow?'" Frampton tells weekends on All Things Considered host Guy Raz. "I said, 'Well, I've never really had much luck with Les Pauls, but you know what?
At this point, I'll try anything.'"\nThe arrangement turned out to be love at first strum. "I used it that night, and for both sets, I don't think my feet touched the ground the whole time," Frampton says. "I mean, I levitated."\nThat guitar — a shiny black number with an added pickup — became Frampton's signature instrument. He continued to use it with Humble Pie, and in his solo material, played it almost exclusively for years. It even made the cover of his classic 1976 live album, Frampton Comes Alive!\nIn 1980, while Frampton was on tour in South America, the guitar was put on a cargo plane in Venezuela, en route to Panama. The plane crashed right after takeoff.\n"I'm thinking, 'It's gone,'" Frampton recalls. "But the thing is, I'm also sitting in a restaurant where I can see the pilot's wife. She's waiting in the hotel for her husband, who, unfortunately, didn't make it. So we were all overcome, because people lost their lives as well as our complete stage of gear."\nWhat Frampton didn't know is that the guitar had survived, albeit with some bumps and bruises. It fell into the hands of a musician on the Caribbean island of Curaçao, who owned it for many years before a local guitar collector spotted it and contacted Frampton. After some negotiation, the guitar was returned to Frampton last month.", "score": 31.933207665949862, "rank": 17}, {"document_id": "doc-::chunk-1", "d_text": "He does appear on Jerry Hawkins' rockabilly classic Swing Daddy Swing on Ebb (one of his best solos ever), as well as appearing on several Dale Hawkins Checker 45's, the best, guitar-playing-wise, being his rendition of My Babe (here's an alternate take), I Want To Love You, and Liza Jane.
While touring with Dale Hawkins he found time for some session work and can be heard on two excellent rockabilly singles on Imperial from that same year– Al Jones' Loretta (written by Merle Kilgore who penned Ring Of Fire) and Bobby Jay's So Lonely, a nice Gene Vincent sounding rocker.\nMeanwhile, back at the Hayride, Buchanan, following in James Burton's footsteps, moved from Dale Hawkins' band to Bob Luman's outfit, recording four songs with Luman issued by Warner Brothers in 1959; the most interesting of these discs is My Baby Walks All Over Me.\nRockabilly is all about guitar playing, and especially on the Dale Hawkins sides Buchanan is really in his element. Dale was (and is) a great band leader, and he knew how to get the best out of his guitarists.\nWith the Hawkins band, young Roy Buchanan was touring constantly. He finally left Dale in Toronto and joined his cousin Ronnie Hawkins' Hawks briefly around 1960; his only recording with them, however, was on bass. He soon settled in the greater Washington D.C./Maryland/Virginia area where he would spend the rest of his life. There was plenty of work for a guitar player, mostly in rough biker and cowboy joints (remember this is the time and place where Link Wray & the Raymen ruled the roost), and Buchanan soon made a name for himself as far north as Philadelphia where he started doing session work. It's these Philly recordings that I'd call Buchanan's best.\nAgain, it's almost impossible to figure out the exact order these discs were released but the first record under his own name was After Hours b/w Whiskers (Bomarc) in 1961. I've already blogged about the a-side (see Feb.
6 entry), the b-side (actually a retitled version of Johnny Heartsman’s Johnnie’s House Party) is just as great, capturing Buchanan in top form; take a listen and ask: has a white guy ever bent strings so soulfully?", "score": 31.000824960849876, "rank": 18}, {"document_id": "doc-::chunk-1", "d_text": "It fell into the hands of a musician on the Caribbean island of Curaçao, who owned it for many years before a local guitar collector spotted it and contacted Frampton. After some negotiation, the guitar was returned to Frampton last month.\n\"It's sort of a matte black now — it's not shiny so much anymore. The binding needs a little bit of work on the neck; the electronics need replacing,\" Frampton says. He adds, though, that he'll limit repairs on the instrument to \"whatever needs to be replaced on it to make it just playable. But it must retain its battle scars.\"\nFrampton says he knows his diehard fans will be clamoring to see him play the unique guitar again, and he's more than happy to comply.\n\"Oh, it's got to go on the road,\" he says. \"For it to be given back to me ... It's not something I'm going to hide in the closet.\"\nGUY RAZ, HOST:\nHere's another story about detective work, though this one doesn't include dogs or whales, but rather, Peter Frampton's guitar. Actually, the guitar he played on his mega-selling record, \"Frampton Comes Alive.\"\n(SOUNDBITE OF MUSIC)\nRAZ: In 1980, the guitar was lost in a plane crash in Venezuela, or so Peter Frampton thought. But before we get to that part of the story, a little background on the guitar in question. It was 1970. Frampton was playing with a band called Humble Pie at the Fillmore West in San Francisco. He wasn't playing well, so another musician lent Frampton his spare 1954 customized Gibson Les Paul.\nPETER FRAMPTON: I said, well, I've never really had much luck with Les Pauls, but you know what? At this point, I'll try anything. I used it that night for both sets. 
I don't think my feet touched the ground the whole night.\n(SOUNDBITE OF SONG, \"BABY, I LOVE YOUR WAY\")\nFRAMPTON: (Singing) Ooh, baby, I love your way every day...\nRAZ: This becomes your guitar.\nRAZ: This is the guitar on the famous cover of \"Frampton Comes Alive.\"", "score": 30.996996759377357, "rank": 19}, {"document_id": "doc-::chunk-1", "d_text": "I thought that I was listening to Waylon play guitar, only to find out later that those early records featured Wayne Moss on electric… the early Haggard stuff was James Burton. It added more to the mystique of “session players” in my mind.\nA few years later in the early seventies, before moving to town, I became a big Reggie Young fan. I would buy any record regardless of who the artists were, as long as Reggie played guitar on it. When I finally moved to town and got to know him as a friend it was the absolute best! My hero became my friend.", "score": 30.596129722501246, "rank": 20}, {"document_id": "doc-::chunk-2", "d_text": "Nashville guitarists, a group that included Ray Edenton and later Reggie Young, could usually play in any style required.\nThough Rickenbacker had introduced a solid body model in the 1930's, it never caught on. In 1950 Leo Fender introduced the Fender Broadcaster (changed to Telecaster), the first successful solid body guitar, and its success largely came from country pickers. Other gifted soloists also appeared, including Jimmy Bryant, who played dazzlingly fast country jazz and whose playing was much in demand in L.A. recording studios in the 50's, and Joe Maphis, a pioneer in flatpicking fiddle tunes on guitar, who played the first doubleneck \"Mosrite\" brand electric guitar made by Semie Moseley. Gretsch's Chet Atkins line and Gibson's Byrdland, designed by Billy Byrd and Hank Garland, also caught on.\nBut acoustic stylists hadn't stagnated during this period. 
Lester Flatt, building on the styles of earlier players like Roy Harvey, created a punchy guitar style combining chords and bass runs that he used with Bill Monroe's Blue Grass Boys and then with his partner, Earl Scruggs. Other fine bluegrass guitarists included The Stanley Brothers' George Shuffler, who \"crosspicked\" his instrument like a mandolin, as did guitarist Bill Napier. Blind guitarist Doc Watson also picked up the idea of flatpicking fiddle tunes as Joe Maphis had. Hank Snow, who often soloed on his records, showed the influence of Karl Farr. Like Snow, singer Billy Grammer was another superb guitar soloist.\nIn the 60's, the Fender Telecaster stylings of country-rockabilly guitarist James Burton, singer Buck Owens (who played guitar on many Capitol rock and country releases), Owens' lead guitarist Don Rich and Merle Haggard's guitarist Roy Nichols all had considerable impact, as did the nylon string playing of Jerry Reed, who expanded the Travis-Atkins style to use all the fingers of the right hand. Owens, Reed, Roy Clark and Glen Campbell were among the best known singers of the 60's who were also formidable guitarists. In the 70's, Telecasters symbolized the Outlaw movement through Waylon Jennings' prominent use of the instrument.", "score": 30.317596750195616, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "Roy Nichols: 1932 to 2001\nWest Coast guitarist Roy Nichols was one of the most influential axemen in country music. Though he was best known for his long run as a member of Merle Haggard’s band the Strangers, Nichols was already a seasoned pro by the time he began playing with Merle, having performed with such legends as Rose Maddox, Wynn Stewart, the Farmer Boys, and Johnny Cash. 
Still, it’s with Bakersfield that Nichols is most strongly identified: His sharp-edged Telecaster leads became an identifying mark of the honky-tonk music coming out of that working-class California city during the 1950s and ’60s.\nNichols was born October 21, 1932, in Chandler, Arizona; his family moved to California shortly afterward. At age 16 he landed an impressive gig as lead guitarist with the Maddox Brothers & Rose, one of California’s top country acts at the time. In 1953 he played with Lefty Frizzell onstage and on a handful of recordings. A couple years later, Nichols settled in Bakersfield, where he worked in the house band of Cousin Herb Henson’s popular “Trading Post” TV show. He also played with Wynn Stewart into the early 1960s.\nNichols joined Haggard’s band in 1965. His strong, confident guitar leads, alternately punchy, playful, and moody, were a prominent and distinct element of Merle’s recordings and performances. The Strangers also released instrumental albums that showed off their tremendous talents. Nichols eventually tired of life on the road, however, quitting the Strangers in 1987 and going into semi-retirement.\nIn 1996, Nichols had a serious stroke. To help defray his medical costs, Los Angeles country singer Kathy Robertson, with help from Bonnie Owens, released two tribute CDs under the banner To Roy Nichols With Love. Nichols died July 3. His influence, and his powerful picking, will ring in the ears of country fans for generations to come.", "score": 29.930583178347216, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "In my humble opinion, one of the best rock guitar players of the post-’60s era passed away.\nJeff Beck had a technique that stood out, to say the least. He had the ability to play any music style with expertise, yet his signature playing was hard to duplicate by other great players. 
It made him a stand-alone giant!\nIn his later concert days, it was beautiful how he introduced many bass players on tour, especially outstanding women bass players… Tal Wilkenfeld as one example comes to mind.\nBeck died Tuesday, January 10, 2023, at the age of 78.\nJeff will be missed by a ton of fans. Yet, we’ve been privileged enough to have many of his recordings and videos to enjoy for the rest of the time we each remain on this planet.\nCredit is given: RollingStone.com article: Hall of Fame musician and former Yardbird guitarist [Jeff Beck] dies following a short bout with bacterial meningitis… By: Daniel Kreps, Kory Grow\nAs noted in this RS press release…\n“Led Zeppelin’s Jimmy Page, Beck’s Yardbirds bandmate who inducted the guitarist into the Rock Hall in 2009, wrote on social media Wednesday, “The six stringed Warrior is no longer here for us to admire the spell he could weave around our mortal emotions. Jeff could channel music from the ethereal. His technique unique. His imaginations apparently limitless. Jeff I will miss you along with your millions of fans. Jeff Beck Rest in Peace.”\nIt further mentioned…\nIn 2009, 17 years after Beck was inducted into the Rock and Roll Hall of Fame as a member of the Yardbirds, he delivered one of the greatest induction speeches of all time when he reentered the Rock Hall for his solo work. “Someone told me I should be proud tonight. But I’m not, because they kicked me out. They did. Fuck them,” he quipped at the 1992 ceremony. “I couldn’t believe I was even nominated,” Beck told Rolling Stone at the time. “I thought the Yardbirds was as close as I’d get to getting in. I’ve gone on long after that and gone through different musical changes.", "score": 29.916671891083233, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "The great Mickey “Guitar” Baker has left this mortal coil. 
Those who don’t know the name have almost certainly heard him sing and play on Mickey & Sylvia’s 1956 hit “Love is Strange,” and amongst numerous other accomplishments, that song endures as the breakthrough for which he is best known. It’s a glorious combination of sophistication and sexiness, and spinning it with the volume up loud is a fine memorial to a crucial and undersung figure in the formulation of rock ‘n’ roll.\nTo a large extent, the lasting appeal of the 1950’s rock ‘n’ roll explosion is defined in contemporary terms by a widely celebrated handful of originators and the subsequent explosion of wildcats who reacted to the sound of sweetly broken ground with worthwhile recordings of their own. One thread finds a bunch of unkempt, well-intentioned hicks succumbing to the potency of uncut rhythm and blues and combining it with the essence of their own tradition to fuse a new music that conquered the world.\nAnother storyline finds scores of African-American musicians perfecting the everlasting beauty of R&B to big sales figures but little cultural fanfare; that is until a burgeoning and restless youth culture discovered it, adapted it, and in some cases diluted it for a wider marketplace, with a few savvy black musicians making the shrewd adjustments necessary to become stars themselves.\nThe reality of both narratives, one the tale of Elvis Presley, Carl Perkins, Jerry Lee Lewis, and Johnny Cash, the other the story of Chuck Berry, Bo Diddley, and Little Richard, is indeed the bulk of the original rock ‘n’ roll impulse. But it’s not the entirety of the situation, and considering it the whole of the thing is how an enormously important figure like Bill Haley gets unfairly saddled with the reputation of being perhaps rock music’s biggest square.\nIn the service of conciseness and clarity, history has tidied up the birth of r ‘n’ r a little too much, for it was a considerably messier conception than is generally related. 
Indeed, rock’s genesis encompassed hard-working dudes like Haley, weaned on Western Swing and playing countless low-paying gigs, and adapting material from the likes of Big Joe Turner in hopes of scoring a hit single and pocketing an increase in spending cash.", "score": 29.896985398524297, "rank": 24}, {"document_id": "doc-::chunk-5", "d_text": "He was praised in print by critics and guitar players everywhere, toured the world, had his own signature model guitar marketed by Fender, and was doing quite well despite his inability to make an interesting album.\nEach time he reached a commercial peak, however, he’d bore audiences to tears and end up back on the Maryland/Virginia/D.C. bar circuit, but even there the competition was nipping at his heels. His friend and student Danny Gatton had become his main competitor, and made records that were much more interesting than Buchanan’s. He was a depressed man in his final years. On August 14, 1988, he was arrested in Fairfax, Virginia, for public intoxication, and that night was found hanging in his cell, a suicide according to the police (murdered by the police according to some of his friends).\nSad fuckin’ story, no? But not unique. There’s a million tragic stories in rock’n’roll: Lafayette “The Thing” Thomas, who made Jimmy McCracklin’s The Walk a hit, ended his days working as a hose fitter; Pete “Guitar” Lewis of the Johnny Otis Show died a homeless wino; Kenny Paulson, star of Freddie Cannon’s Tallahassee Lassie and Buzz Buzz A Diddle It (and one-time Dale Hawkins sideman), was almost murdered in prison and died of an overdose, utterly forgotten, in 1972; and oddly enough, Buchanan’s one-time pal and student Danny Gatton would also commit suicide, shooting himself in his garage; Robert Quine, who committed suicide in 2004, had not recorded commercially in four years at the time of his death. Buchanan did better for himself than any of those guys. 
Hell, he made a living at music, which is more than most musicians do.\nIn 1989 the U.K. Krazy Kat label released an LP, Roy Buchanan: The Early Years; I’m not sure if it ever made it to CD, but it had fourteen of the above tracks and is well worth looking for.\nAs for his Polydor and Alligator output, I find them unlistenable and recommend them only for students and collectors.", "score": 29.604258804594217, "rank": 25}, {"document_id": "doc-::chunk-2", "d_text": "In ’52, Roberts scored his first record date, the obscure “Jam Session No. 10” with reed man Gerry Mulligan and pianist Jimmy Rowles. Later that year, he recorded Live at the Haig with the Wardell Gray Quintet, then a Bobby Troup album for Capitol in ’53.\nBy ’55, he was working with drummer Chico Hamilton and bassist George Duvivier. They recorded an album for the Pacific Jazz label entitled The Chico Hamilton Trio, a recording that netted Roberts the Downbeat New Star Award.\nIn ’56, Bobby Troup signed Roberts to Verve, a label where Kessel had an artist-and-repertoire position. Kessel produced Mr. Roberts Plays Guitar featuring arrangements by three of Hollywood’s best – Jack Marshall, Marty Paich, and Bill Holman. Another album for Verve, Good Pickin’s, followed in ’59. Roberts was becoming a success.\nOne of his session dates became a legendary Hollywood studio story. In May of ’58, he was hired for a Peggy Lee record date. When it was time to lay down the track for what would become Lee’s huge hit, “Fever,” producer Jack Marshall decided to lose the guitar part. Consequently, that’s Howard snapping his fingers along with Max Bennett’s bass line and Lee’s vocals. Some still wonder if he got paid what session players call a “double”; he made the date with his guitar, but ended up appearing with another “instrument”: snapping his fingers.\nIn ’59, Marshall was composing hip background scores for a western TV series entitled “The Deputy,” which starred Henry Fonda. 
Marshall wanted to feature jazz guitar on the scores, and hired Howard to improvise over many of the action sequences. Having a jazz guitar line complement a scene with cowboys riding at full gallop was a fresh and distinctive approach.\n“Jack Marshall let Howard just blow as much as he wanted to,” studio vet Bill Pitman said of the sessions.\nThe “Black Guitar” was Howard Roberts’ trademark guitar of the 1960s-’70s. “H.R.” preferred this highly modified instrument during his most active years, playing it on countless studio dates. It can be heard on many of his recordings, including Color Him Funky, H.R.", "score": 29.11145504097938, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "Guitar/mandolin custom built by Paul Bigsby for Grady Martin, 1952.\nIt is Martin who plays the memorable riff on Roy Orbison's \"Oh Pretty Woman.\" That alone makes him a candidate for Rock'n'Roll sainthood. However, he also helped invent 'guitar distortion.' A tube was blown in the middle of a take at a Marty Robbins session and the resulting fuzz-toned solo was left in the song, the 1961 smash hit \"Don't Worry.\"\nBesides Chet Atkins, Martin was the only studio musician to play with both Hank Williams AND Elvis Presley.\nwith Anita Carter & Hank Williams\nCopacabana Club, NYC 3-26-52\nFrom the Country Music Foundation's \"Encyclopedia of Country Music\" (1998):\nb. Chapel Hill, Tennessee, January 17, 1929\nGrady Martin is one of the true legends of Nashville's original \"A-Team\" of studio musicians; his greatest strength was his versatility. Whether playing the fiddle or guitar - electric, acoustic, or six-string electric bass - his creativity helped to make hits of many records from the 1950s through the 1970s.\nThomas Grady Martin was just fifteen when he joined Big Jeff & His Radio Playboys as their fiddler in 1944. 
In 1946 he joined Paul Howard's western swing-oriented Arkansas Cotton Pickers as half of Howard's \"twin guitar\" ensemble, along with Robert \"Jabbo\" Arrington. After Howard left the Grand Ole Opry, Opry newcomer Little Jimmy Dickens hired several former Cotton Pickers, including Martin, as his original Country Boys road band.\nOff the road, Martin began working recording sessions. He led Red Foley's band on the ABC-TV show Ozark Jubilee. Building on a strong business relationship with Decca A&R man Paul Cohen and his successor, Owen Bradley, Martin began to record instrumental singles and LPs for Decca, including a country-jazz instrumental LP as part of Decca's Country and Western Dance-O-Rama series. Martin made many more Decca recordings as leader of the Nashville pop band the Slew Foot Five.\nMartin's role as studio guitarist yielded numerous memorable moments.", "score": 29.11025213492538, "rank": 27}, {"document_id": "doc-::chunk-2", "d_text": "As it is, Dolan's sides -- cut with the likes of Jimmy Bryant, Speedy West, Billy Strange, and Cliffie Stone, among other renowned session players and country virtuosi -- continued to attract a country audience well into the mid-'50s, treading a fine line between Western swing, honky tonk, and traditional country, with the occasional novelty song thrown in. \"I'll Hate Myself Tomorrow\" and \"Wham Bam Thank You Mam\" seem like two sides of the same set of soiled sheets as presented on the A- and B-side of a 45 single, respectively. Also worth mentioning is Dolan's connection to the fascinating subject of people getting whacked over the heads with guitars. 
The song \"Playin' Dominoes and Shootin' Dice,\" another of Dolan's efforts on the outer edges of rockabilly, is a description of a \"guitar picker (who) lived a life of wine and liquor.\" This doesn't suit his girlfriend, who retaliates with an \"el kabong.\" These passages written by Dolan should make him a favorite among guitarists: \"Then his old guitar, she swung it/For his head, she really hung it/Bruises, knots, and bumps began to rise.\"\nHe enjoyed a fairly lucrative stage career on the West Coast, marred only by some of his more boisterous and overbearing behavior at clubs. According to those who knew him, Dolan's problems stemmed primarily from a severe lack of confidence in his appeal and talent. He was, at best, a limited guitarist, which was sometimes a virtue in his earlier sides, and his vocal delivery developed a weightiness and ponderousness as his career progressed -- with a little flexibility and imagination (and especially with his background as an announcer), he might have slipped into a recording career similar to that of J.P. \"Big Bopper\" Richardson, with the right songs and producer to work with. Evidently, he lacked the impetus and ambition, and it was no accident that Dolan gave up his singing career when rock & roll came along, perhaps recognizing that, much more than the over-30 Bill Haley, he would never be able to find a niche among younger listeners.", "score": 28.782056795361548, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "The first three releases were all done on the West Coast and published by Capitol Records, the big California concern. Then at the end of the selection, here are more Little Richard tunes, some very rare. Enjoy!\nThe multi-session guitar player BILLY STRANGE (1930-2012) sang a truck driver’s song in 1952, « Diesel Smoke, Dangerous Curves », complete with truck-horn effects, brake-grinding sounds and a woman’s yells, which goes faster and faster until the final break. 
(Capitol 2032).\nDiesel Smoke, Dangerous Curves\nJump Rope Boogie\nThen the ubiquitous CLIFFIE STONE, bass player, bandleader and entertainer (Hometown Jamboree) for the jumping, jiving « Jump Rope Boogie » (Capitol 1496).\nThird Capitol exposure goes with OLE RASMUSSEN, leader of the Nebraska Corn Hunters. Definitely a Western flavoured Hillbilly. Medium-paced « Gonna See My Sunday Baby Tonight » (Capitol 1323). Lazy vocal with yells to the backing musicians.\nGonna See My Sunday Baby Tonight\nHoyt Scoggins & The Georgia Boys\nOn a Starday Custom series # 606 (from January 1957), the very nice, fast « What's The Price (To Set Me Free) » by HOYT SCOGGINS & His Georgia Boys. An agile guitar, on a very fast Hillbilly boogie. A splendid track.\nWhat's The Price (To Set Me Free)\nRock'n'Roll Fever Ain't Got Me\nJim Harless & the Lonesome Valley Boys\nJIM HARLESS is next, from Bristol, TN, with a mix of Hillbilly and Bluegrass (good banjo all through) for « Rock'n'Roll Fever Ain't Got Me ». A bit of fiddle and a strong rhythm guitar. (Shadow 104, unknown date).\nIt’s impossible to fix which version came first of « The Hot Guitar », either by Eddie Hill on Mercury 6374 (backed by Messrs. Chet Atkins and Hank Garland) or by TED BROOKS (Vocal by Henry Kimbrell) on Decca 46374, both issued in October 1951. Guitar tour-de-force in both cases.", "score": 28.541878690326737, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "
Well executed, but lacking the spark of true genius that would mark his playing a few short years later.\nPardon the digression, back in Vegas, Hall was growing bored with the Dominos and soon headed for Los Angeles where he found a job at club at 42 Street and Western but trouble with the musician's union forced him to give it up (they required a six month residency in state, so as a new comer he was shut out of any steady gigs) so on the recommendation of a friend-- Carl Peterson at Universal Attractions he approached Art Rupe the owner of Specialty Records, then flying high on the success of Little Richard, for a job, which he got. Rupe immediately put him to work with Bumps Blackwell working on a Little Richard session cutting Hey Hey Hey. Hall told New Kommotion's Stu Coleman \"That was my first experience with hard rock\", a style to which he would adapt well. He was sent to Bakersfield where Richard was appearing in a club, then worked out some arrangements for sessions that were later cut in L.A.. Rupe was so pleased with Rene's arranging abilities that he put him in charge of his latest discovery-- Larry Williams a pimp turned rocker being groomed by Rupe as the next Little Richard. Working with producer Sonny Bono and using many of the same musicians that appeared on Richard's sides (Earl Palmer on drums, Plas Johnson on sax, Roy Montrell on guitar) they soon produced three hits with Larry Williams-- Slow Down b/w Dizzie Miss Lizzy, Short Fat Fannie b/w High School Dance, and Bad Boy b/w She Said Yeah, tunes that would later be recorded by everyone from the Beatles and Rolling Stones to the Flamin' Groovies. 
On some of these session Hall played guitar along with Roy Montrell.", "score": 28.00104090141106, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "He used to record himself playing on his portable Grundig tape recorder, and listening to that over and over until he felt he got it right.In October 1963 Clapton joined The Yardbirds, and stayed with them until March 1965.\nThat got me into the band, and then we started making money, I found I had nothing else to spend it on but guitars, so maybe once a month I bought a guitar.\nInfluenced by guitarists like Buddy Guy, Freddie King, and B. King, Clapton developed a distinctive style and rapidly became one of the most talked about guitarists in the British music scene. During this time, Clapton really began to develop as a singer, songwriter, and guitarist. Their hits include “Sunshine of Your Love”, “White Room” and “Crossroads”.\nCream had sold a millions of records and have played throughout the U. Clapton’s next band was Blind Faith (1969), and later on Derek and the Dominos which he formed in 1970 together with keyboardist Bobby Whitlock, bassist Carl Radle and drummer Jim Gordon.\nIf you Google a photo of a vintage Jazz II, you’ll probably notice that almost every single guitar has uniquely shaped inlays, while Eric’s Kay has simple block inlays.\nNext to that, you’ll also notice that Eric’s guitar is fitted with a black round pickguard, while every other Kay Jazz II features a very distinctively shaped pickguard mostly in white.", "score": 27.76066366564727, "rank": 31}, {"document_id": "doc-::chunk-7", "d_text": "He co-wrote the instrumental \"Have Guitar Will Travel\" in 1958 with Bill Black, which was released as a 45 single, 107, on the Fernwood Records label.\n- Moore, Scotty; Dickerson, James L. (1997). That's Alright, Elvis: The Untold Story of Elvis's First Guitarist and Manager, Scotty Moore. Schirmer Books. ISBN 978-0028645995.\n- Moore, Scotty; Dickerson, James L. (2013). 
Scotty and Elvis: Aboard the Mystery Train. University Press of Mississippi. ISBN 978-1617038181.\n- Grimes, William (June 28, 2016). «Scotty Moore, Hard-Driving Guitarist Who Backed Elvis Presley, Dies at 84». The New York Times. ISSN 0362-4331. Retrieved July 7, 2017.\n- Ernst Jorgensen, Elvis Presley: A Life in Music. The Complete Recording Sessions. New York: St. Martin's Press, 1998, p. 92. ISBN 0312263155\n- «100 Greatest Guitarists: Scotty Moore». Rolling Stone. ISSN 0035-791X. Retrieved January 1, 2015.\n- «Elvis guitarist Scotty Moore dies aged 84». BBC News. June 29, 2016. Retrieved November 14, 2016.\n- Sweeting, Adam (June 30, 2016). «Scotty Moore obituary». The Guardian. Retrieved November 15, 2016.\n- «Scotty Moore 1931–2016: The Guitarist Who Made ‘the King’ Rock». Daily Express. July 2, 2016. Retrieved November 15, 2016.\n- Rubin, Dave (November 1, 2015). Inside Rock Guitar: Four Decades of the Greatest Electric Rock Guitarists. Hal Leonard. pp. 25-26. ISBN 978-1-4950-5639-0.", "score": 27.737977870935516, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "Elvis Guitar Collection: All Elvis-Owned Gibson Guitars – Elvis’s guitars have become synonymous with rock ’n’ roll. The preferred musical instrument of a legion of rock icons. Throughout the years, Elvis had a string of Gibson guitars he used on stage.\nScotty Moore’s 82-year-old fingers have no more music left in them. Arthritis has invaded the hands of the man Rolling Stone magazine rated 29th in its 2011 list of the 100 Greatest Guitarists of all time, ahead of legends such as Mark Knopfler, Joe Walsh, Muddy Waters, Slash, Dickey Betts, Bonnie Raitt, Carl Perkins, Roger McGuinn and Paul Simon. Scotty Moore was Elvis Presley’s original lead guitar player who laid down the licks on classic early hits such as Hound Dog and Jailhouse Rock. He was also Elvis’s close friend and worked as his manager for a short time. “It’s been about seven or eight years since I was able to play,” Moore says by phone from his secluded home outside Nashville. “I had stopped playing professionally several years before that, but I’d still play once in a while for my own enjoyment … just whatever song came to mind, really. Sure, I miss it. But the arthritis came on gradually, so I knew the day would come that I had to deal with it.”\nLes Paul, the guitarist who had a Gibson guitar model named in his honor. The Les Paul guitar is the guitar most sought after by famous rock bands.\nElvis Presley’s Gibson Dove guitar was played by ‘The King’ during a performance at the Las Vegas Hilton in December 1976. Elvis gave the instrument to his bodyguard Sam Thompson in January 1977. On the back of the guitar is some plastic residue where the belt on Elvis’s jumpsuit melted because of the heat of the stage lighting. 
Total freedom of spirit, I guess. He really changed the course of rock and roll blues.\" In inducting Guy into the Rock and Roll Hall of Fame, Clapton said, \"No matter how great the song, or performance, my ear would always find him out. He stood out in the mix, simply by virtue of the originality and vitality of his playing.\"\nRecalls Guy, \"Eric Clapton and I are the best of friends and I like the tune \"Strange Brew\" and we were sitting and having a drink one day and I said 'Man, that \"Strange Brew\" ... you just cracked me up with that note.' And he said 'You should...cause it's your licks ...' \" As soon as Clapton completed his sessions with Derek & the Dominos in October 1970, he co-produced (with Ahmet Ertegün and Tom Dowd) the album Buddy Guy & Junior Wells Play the Blues, with Guy's longtime harp and vocal companion, Junior Wells. The record, released in 1972, is regarded by some critics as among the finest electric blues recordings of the modern era.\nIn recognition of Guy's influence on the career of Jimi Hendrix, the Hendrix family invited him to headline all-star casts at several tribute concerts they organized, \"calling on a legend to celebrate a legend.\" Hendrix himself once said that \"Heaven is lying at Buddy Guy’s feet while listening to him play guitar.\"", "score": 27.208347123665924, "rank": 33}, {"document_id": "doc-::chunk-1", "d_text": "\"T-Bone Walker single-handedly developed the style and way to play blues on electric guitar that was totally different than anything that had been done before,\" says Robillard. \"He used a lot of double timing in his soloing, which at that time was something only horn players did, you never heard a guitar player do it — very unusual and very innovative. He'd be playing actually twice as many notes per beat.\"\nWalker Bridged Blues, Jazz\nThe guitarist and singer made his first recording — a blues 78 — when he was 19. 
He didn't record again for more than a decade, and by that time, he was playing in the Les Hite and Freddie Slack orchestras.\nWalker had already met a young man who would do for jazz guitar what Walker did for blues — electrify it. Charlie Christian was six years younger than Walker. The two played shows together and Christian influenced Walker's approach to the blues. In the March 1977 issue of Guitar Player Magazine, the late Jimmy Witherspoon compared Walker to another jazz great.\n\"All I can say is that he's the Charlie Parker of guitars when it comes to blues,\" Witherspoon said. \"And in jazz guitarists, he's right with Charlie Christian. No one else can touch T-Bone in the blues on guitar.\"\nWalker went on to perform and record with the likes of Johnny Hodges, Lester Young, Dizzy Gillespie and Count Basie, among many others.\nThe Roots Of Rock\nWalker held the guitar differently — perpendicular to his body and parallel with the stage floor. He also played it behind his head long before Jimi Hendrix took that stunt mainstream. That wasn't the only Walker influence on rock 'n' roll.\n\"Chuck Berry just took T-Bone's style and put it to a different beat,\" says Robillard. \"And a lot of the technique and the little T-Bone phrases that define his style, Chuck Berry, when he rearranged the beat, they became rock 'n roll guitar licks. So in essence, T-Bone was not only the first electric blues guitar player, but he was the first electric rock 'n roll guitar player, really.\"\nBut it was Walker's \"Stormy Monday\" blues that became his signature. Robillard says it's a different kind of blues.\n\"The guitar chord line, it's a little guitar ninth chord figure. That was a unique thing and it became T-Bone's signature.", "score": 26.9697449642274, "rank": 34}, {"document_id": "doc-::chunk-3", "d_text": "He played with Gene Vincent's Blue Caps and he's kind of the first rock & roll guitar player, really. 
He wasn't afraid to mix all sorts of different styles: You can hear country in there, there's swing in there, you can hear this new sound called rockabilly. I think he played with the Blue Caps for less than a year, but he made such an impact. It's just the way he crafted his solos. A lot of guitar players jump in on the first solo like they're playing for the back row. But Cliff would build them up. He's so distinct. He would use a flat pick and three-finger picks, like a banjo player. That's totally a unique way of playing; I've never heard that from anyone else.\nI was down in Virginia about six years ago, and I set it up to meet him. Someone calls me that morning and says, \"Well, you're not gonna believe it, but Cliff had a heart attack onstage last night.\" So I missed him by one day.\nKeith Richards (The Rolling Stones)\nTo me, Chuck Berry always was the epitome of rhythm & blues playing, rock & roll playing. It was beautiful, effortless, and his timing was perfection. He is rhythm man supreme. He plays that lovely double-string stuff, which I got down a long time ago but I'm still getting the hang of. Later I realized why he played that way – because of the sheer physical size of the guy. I mean, he makes one of those big Gibsons look like a ukulele!\nEverybody has to adapt their own physical possibilities to the instrument. Some guys have tiny little hands that can zip all over the thing. If you don't, you find another way. So given the size of his hands, it's not surprising that Chuck figured out a style where you didn't have to just nimbly pick one string at a time. He got harmonies down so that every note has another note behind it, which gives it that really strong, broad sound. It's fascinating. He's playing half-chords all the time.\nI mean, those records Chuck made in the Fifties still basically stand out as your rock & roll guitar playing to the max. Especially when you add it to the songwriting and the singing and everything else. 
There's your package.", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Elvis Guitar Collection All Elvis Owned Gibson Guitars – Elvis Guitars has become synonymous with rock and roll, the preferred musical instrument for a legion of rock icons. Throughout the years, Elvis had a string of Gibson guitars he used on stage.\nScotty Moore's 82-year-old fingers have no more music left in them. Arthritis has invaded the hands of the man Rolling Stone magazine rated 29th in its 2011 list of the 100 Greatest Guitarists of all time, ahead of legends such as Mark Knopfler, Joe Walsh, Muddy Waters, Slash, Dickey Betts, Bonnie Raitt, Carl Perkins, Roger McGuinn and Paul Simon. Scotty Moore was Elvis Presley's original lead guitar player, who laid down the licks on classic early hits such as \"Hound Dog\" and \"Jailhouse Rock\". He was also Elvis's close friend and worked as his manager for a short time. \"It's been about seven or eight years since I was able to play,\" Moore says by phone from his secluded home outside Nashville. \"I had stopped playing professionally several years before that, but I'd still play once in a while for my own enjoyment … just whatever song came to mind, really. Sure, I miss it. But the arthritis came on gradually, so I knew the day would come that I had to deal with it.\"\nLes Paul, the guitarist, had the Gibson guitar brand named in his honor. The Les Paul is the guitar most sought after by famous rock bands.\nElvis Presley's Gibson Dove guitar was played by 'The King' during a performance at the Las Vegas Hilton in December 1976. Elvis gave the instrument to his bodyguard Sam Thompson in January 1977. On the back of the guitar is some plastic residue where the belt on Elvis's jumpsuit melted because of the heat of the stage lighting.
Described as ‘one of the most significant guitars to ever reach the market’, the Elvis guitar was expected to sell for up to US$500,000.\nLes Paul, the guitarist who had the Gibson guitar named in his honor, who changed the course of music with the electric guitar and multi-track recording, and who had a string of hits, died August 13, 2009. He was 94.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-6", "d_text": "was working at the Louisiana Hayride and working clubs in and around Shreveport with all the Hayride acts and we met him when we went down there. I don't think he played with us the first time we appeared. The best I can remember, he played with somebody else on the show and we heard and liked him and asked if he would like to play with us the next time. And he did. He would speed up or slow down just like we would and we said, \"Boy, this is great\". And he started working with us every time there was money to include him on the dates. D.J. actually went on the payroll in December, 1955.\nSo this was getting into the RCA period.\nA lot of people may also not know that Elvis actually played your guitar or Bill's bass on some songs.\nYeah, Elvis played my guitar on \"One Night\". He also played Bill's bass on \"You're So Square\". And although he played fairly good piano and good rhythm, he wasn't an accomplished musician by any means. But he had a real uncanny sense of rhythm, and I think that's what made him such a great singer. That rhythm just seemed to come out of him, especially on up-tempo things.\nApproximately how many songs did you record throughout the years?\nI don't have any idea. Somebody told me one time it was over 500. I've never tried to count.\nSo that would be 1954-67?\n1968. The \"Comeback Special\" was the last thing I worked with\nDo you have a few favorite tunes?\nOh (pause) yeah. I have a lot of favorites. The one I like best on the ballad side is \"Don't\". I always really liked that one.
Up-tempo, there were just several of them. A lot of the early things like \"Mystery Train\" and \"Good Rockin' Tonight\".\nI always thought that lick on the front of \"Don't Be Cruel\" was a\nYeah, I believe I got paid for that one. I believe it was about 8 notes on the front and a chord on the end.\nAlright, it's time for some guitar talk. Around 1954 you were seen playing an ES-295, correct?\nWas this your first good guitar?", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Graham Reid | 5 min read\nMention the name “Link Wray” these days and most people will draw a blank. A few might confidently say, “Rumble” – the gang-fight title of his raw, distorted guitar instrumental from '58 – but after that things might get murky.\nLink Wray – born Fred Lincoln Wray – died in late 2005 at age 76, and is frequently confused with other guitar twangers of his era like Duane Eddy (Rebel Rouser, the theme to Peter Gunn) and Dick “King of the Surf Guitar” Dale, whose biggest hit was Misirlou in '62.\nBut Link Wray was cut from a very different cloth.\nHe was born poor to a family in rural North Carolina in '24 (his mother a full-blood Shawnee raised in Klan territory, but who some believed possessed special powers) and he got an acoustic guitar from his father at 14.\nHe learned from a black one-man-band guy called Hambone who taught him how to tune the damn thing, and how to play with a knife.
He grew up with the blues, and of course country music.\nWhen he was 16 the family moved to Portsmouth in Virginia and Link got himself – briefly – a job as a messenger boy in the dockyards where his father and older brother worked as fitters.\nOnce he'd saved enough he bought an electric guitar from a Sears and Roebuck catalogue; then, after a stint in the army (Germany, then Korea), he got more serious about music.\nBy the mid Fifties he and brother Doug (on drums) were gigging around local clubs playing country music. They moved to Washington, where they cut some sides, but at 27, married with two children, everything came to a halt for Link.\nHe was hospitalised with tuberculosis and had his left lung removed.\nBy the time he recovered the music culture had changed.\nThe Wrays literally changed their tune – Link adopting some weird exaggeration of Presley's vocal mannerisms – but the break came when he recorded an instrumental called Oddball.\nIt was loosely based on the slow and somewhat dull instrumental The Stroll but Wray added his own touch: he took a pen and punched holes in his tweeters to get a dirty and buzzing sound.\nUnfortunately no one was much interested in it . . . except the teenage step-daughter of record exec Archie Bleyer of New York's Cadence label.", "score": 26.794824290429847, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "Les Paul Biography\nDied 13 August 2009 in White Plains, New York, USA\nJazz, blues and country guitarist, songwriter, luthier, recording engineer and inventor. He designed one of the first solid body guitars, which made the sound of rock and roll possible.
In the 1940s he made early experiments with overdubbing, delay effects such as tape delay, phasing effects and multitrack recording.\nHe recorded in the 1950s with his wife Mary Ford on vocals.\nInducted into Rock And Roll Hall of Fame in 1988 (Early Influence).", "score": 26.067160785581386, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "He was Elvis’s guitar player on all the Sun Records stuff. He’s on “Mystery Train”, he’s on “Baby Let’s Play House”. Now I know the man, I’ve played with him. I know the band. But back then, just being able to get through “I’m Left, You’re Right, She’s Gone”, that was the epitome of guitar playing. And then “Mystery Train” and “Money Honey”. I’d have died and gone to heaven just to play like that.” How the hell was that done? That’s the stuff I first brought to the johns at Sidcup, playing a borrowed f-hole archtop Höfner. That was before the music led me back into the roots of Elvis and Buddy – back to the blues.\nTo this day there’s a Scotty Moore lick I still can’t get down and he won’t tell me. Forty-nine years it’s eluded me. He claims he can’t remember the one I’m talking about. It’s not that he won’t show me; he says, “I don’t know which one you mean.” It’s on “I’m Left, You’re Right, She’s Gone.” I think it’s in E major. He has a rundown when it hits the 5 chord, the B down to the A down to the E, which is like a yodeling sort of thing, which I’ve never been quite able to figure. It’s also on “Baby Let’s Play House.” When you get to “But don’t you be nobody’s fool / Now baby, come back, baby …” and right at that last line, the lick is in there. It’s probably some simple trick. But it goes too fast, and also there’s a bunch of notes involved: which finger moves and which one doesn’t? I’ve never heard anybody else pull it off. Creedence Clearwater got a version of this song down, but when it comes to that move, no. And Scotty’s a sly dog. He’s very dry. 
“Hey, youngster, you’ve got time to figure it out.” Every time I see him, it’s “Learnt that lick yet?”", "score": 25.65453875696252, "rank": 40}, {"document_id": "doc-::chunk-7", "d_text": "\"Be-Bop-a-Lula\" was used in the film The Girl Can't Help It. But for some reason, Cliff wasn't in it. Nobody really knows why. Maybe he didn't look the part. You have to listen to Vincent's album [Bluejean Bop] Gene Vincent and His Blue Caps. It's almost barbaric. It's like a barroom brawl or a punch-up in a swimming pool. You can hear this echo going, and it's just amazing. Among all the screaming and the shouting, you can hear this guitar. It sounds like someone was being impaled on a spit. Maybe it was Cliff's way of saying, \"You barbaric bastards. I'm a jazz player. But if this is the sort of thing you want, I'm only doing this once.\" He never did anything like that again. Anyone with an affinity for rockabilly will know that there was never anybody so explicit in their guitar playing. This is quadruple-X-rated guitar playing.\nDjango Reinhardt was the best there ever was. His technique, his playing ability and his tone were all incredible. A friend of mine, Johnny Gimble, who plays fiddle, was a big fan of his – and also Stephane Grappelli, who played fiddle in the Django Reinhardt band – and he gave me a tape of Django and Stephane playing together. That was back when I was twenty years old, I guess, and I've been a big fan ever since. He has so many great songs, it's just his style. It's like Sinatra, you know. The voice is there – and it wouldn't matter what he played. He could play scales and it'd be beautiful. I really like everything he's played, and I still listen to his records a lot. There's other great guitar players that I like: Grady Martin, who played guitar out of Nashville for so many years, was a fantastic guitar player, and Chet Atkins. 
I grew up listening to the music of all these guys, but Django is the one that's done more for me than anybody else.\nEddie Van Halen\nI think the whole guitar-god thing is funny. To be a legend, don't you have to be dead? Call me a legend when I'm gone. How about just a guitar player?", "score": 25.65453875696252, "rank": 41}, {"document_id": "doc-::chunk-6", "d_text": "—Matt Blackett\nAlonzo “Lonnie” Johnson is best known to guitarists for his groundbreaking acoustic six- and 12-string work in the late Twenties, including his celebrated duets with jazz guitarist Eddie Lang in 1929, and his 1927 recording “6/88 Glide,” featuring what is now widely considered to be the first flatpicked single-note guitar solo. But Johnson’s career continued for decades after that, and in 1947 he began playing electric. You’ll find great electric solos scattered throughout his subsequent tunes, but the brief but rocking romp on 1949’s “Playing Around” notably foreshadows moves that early rockers such as Eddie Cochran, Cliff Gallup, and Scotty Moore will explore a few years later. —Barry Cleveland\nLike Hendrix, to whom he is overly, if not unfairly, compared, Robin Trower’s blues roots run deep. Forty years into his solo career he still makes records worth listening to, these days filled with more classic blues tunes than ever. Still, the best example of his rooted playing might be “Whisky Train,” a tune he wrote for Procol Harum’s fourth album, Home. The song could be considered one long cowbell-driven guitar solo, with Trower riding one of the great guitar riffs over and over, occasionally answering brief Gary Brooker vocal sections with short modern blues excursions that preview his style as a solo artist. —Michael Ross\nIt’s no easy task to choose a favorite Rory Gallagher blues solo, but his slide work on “Bullfrog Blues” is a serious contender. 
Leaving his trademark Strat behind (several YouTube videos show him playing a Gretsch Corvette), Gallagher gets to work in open-A tuning, with a capo on the second fret. The solo itself uses licks in the I, IV, and V chord positions at the fifth, seventh, and 12th frets, and it isn’t unlike Gallagher’s acoustic bottleneck work—though a ferocious amount of gain yields one of the meanest electric slide tones that you’ll ever encounter. –Teja Gerken\n“Ball and Biscuit”\nJack White kicked the blues straight in the nuts on “Ball and Biscuit” utilizing a bizarre, ferocious sound the likes of which had never before been heard in the history of America’s senior guitar genre.", "score": 25.65453875696252, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "Who can actually say when the notion of vagabond outlaw, crackshot guitarist-as-superhero first entered into the rock idiom? If it was the end of the 60’s, when single performers were rapidly losing ground to the ‘groops’, and we all felt comfortable enough with our musical extravagances to heap such gleaming status upon an individual, then it was natural that, in nearly all cases, the bulk of attention inevitably fell on who plays the guitar. That is, the guitarist as quarterback or pitcher, center of attraction, focal point, el comandante. The physical presence of a gunfighter guitarist within a group—American, British, Indian, Redneck, City-Black, Hire-the-Handicapped—always insured at least moderate interest in what the group was up to. 
So just about anything above and beyond the traditional role of accompanist that set any rootin’-tootin’ guitarist apart from everybody else, and that usually fell under the category of brains/ability/taste, meant instant immortality.\nThe first hotshots whose names became household words, as far as high-quality blues-bred rock ‘n roll was concerned (that, at least, is the genre we're limited to here) were Jimmy Page, Mike Bloomfield, Eric Clapton, Peter Townshend, Keith Richard and, if you lived east of the Mississippi, the Blues Project's Danny Kalb. The next solid wave of machine-gunners included Johnny Winter, Larry Coryell, Jorma Kaukonen, Robbie Robertson, Bert Jansch and, finally discovered by most, John Fahey. Without elaborating on the latter “experimental” phase or the former “early primitive” phase, suffice to say that the last great flowering of guitarists whose names assumed that “household word” status included the likes of Jerry Garcia, John Fogerty, John McLaughlin, Carlos Santana, J. Geils, and the last American to say something new on the guitar, Duane Allman.\nHousehold words. When you hear the name you immediately visualize the gaunt stranger with the ax.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "|Real name||Winfield Scott Moore III|\n|Born||December 27, 1931, Gadsden, United States|\n|Died||June 28, 2016 (aged 84), Nashville, United States|\n|Occupation||Musician, sound engineer|\n|Genre(s)||Rock and roll, rockabilly|\n|Years active||1950–2009|\nWinfield Scott \"Scotty\" Moore III (Gadsden, December 27, 1931 – Nashville, June 28, 2016) was an American musician and songwriter.
Known worldwide for playing alongside Elvis Presley in the first part of his career, from 1954 through Elvis's early years in Hollywood.\nRock critic Dave Marsh credits Moore with the invention of the power chord, on Presley's 1957 song \"Jailhouse Rock\", the introduction of which Moore and drummer D.J. Fontana, according to the latter, \"copied from a 1940s swing version of 'The Anvil Chorus'\". Moore ranked 29th on Rolling Stone magazine's 2011 list of the 100 Greatest Guitarists of All Time. He was inducted into the Rock and Roll Hall of Fame in 2000 and into the Memphis Music Hall of Fame in 2015. The Rolling Stones' lead guitarist, Keith Richards, has said of Moore,\nWhen I heard \"Heartbreak Hotel\", I knew what I wanted to do in life. It was as plain as day. All I wanted to do in the world was to be able to play and sound like that. Everyone else wanted to be Elvis, I wanted to be Scotty.\nWinfield Scott Moore III was born near Gadsden, Tennessee, to Mattie (née Hefley) as the youngest of four boys by fourteen years. He learned to play the guitar from family and friends at age eight. Although underage when he enlisted, Moore served in the United States Navy in China and Korea from 1948 through January 1952.", "score": 25.589573428404645, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "Long before he became king of the blues bar bores, Roy Buchanan was actually a great rock’n’roll guitar player. His best recordings unfortunately are spread out over a handful of obscure 45’s on small labels, coast to coast. 
By the time he’d made a name for himself the fire had pretty much gone out of his playing, and while he was always a great technician, he simply was not much of a band leader, so his LP’s are dreadfully dull affairs.\nToday, however, we will give a listen to those early sides, and these records I believe more than justify his reputation as one of the all time greats.\nBuchanan was born in Ozark, Arkansas, September 23, 1939, and raised in Pixley, California, which is in the Central Valley, south of Fresno. I think the Joads end up there at the end of Grapes of Wrath. His father was a Pentecostal preacher. Roy Nichols, the country guitar great, star of the best Maddox Brothers & Rose 4-Star recordings and later Merle Haggard’s band, lived ten miles away and was an early influence. Another influence was Jimmy Nolen of the Johnny Otis Show, who Buchanan claims to have met at age 15; more likely he saw him on Otis’ TV show (Buchanan was known for, let’s call it, stretching the truth).\nHe would cite Nolen’s Federal recording of After Hours as his favorite record throughout his life.\nRoy Buchanan left home as a teenager and in 1958 he hooked up with Dale Hawkins’ band in Tulsa, traveling with them to Shreveport, Louisiana, home of the Louisiana Hayride radio show (where Elvis started) and a hotbed of guitar-playing talent – James Burton, Scotty Moore, Carl Adams, and dozens of other six-string hotshots passed through Shreveport, where Dale Hawkins and his brother Jerry were both based. Buchanan cut some of his earliest sides with the Hawkins brothers.\nIt’s hard to figure out what order his earliest sides appeared in, but in 1958 he may have recorded with Alis Lesley on Era, a moot point since I don’t have that particular record.", "score": 24.999999999998536, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "Early rock 'n' roll guitar wizard Bo Diddley died near Jacksonville, Florida, Monday, 2 June, of heart failure. He was 79 years old. 
One of the most primitive of the early rockers, Bo took the blues and folk music of his native Mississippi and combined them with Latin American and African rhythms to come up with his trademark \"hambone\" beat. VOA's Doug Levine tells us more about the career of Bo Diddley.\nAlong with the legendary Chuck Berry, Bo Diddley was considered one of the most influential guitarists of the early rock era. His powerful rhythm, which became known as the \"Bo Diddley beat\" has been imitated by countless musicians.\nBorn Otha Ellas Bates in Mississippi in 1928, Bo was sent to Chicago to live with an aunt, Gussie McDaniel, who later adopted him. He dropped his first and last names to become Ellas McDaniel. His stage name Bo Diddley came from two sources; the diddley bow, an African stringed instrument, and the slang expression for a mischievous boy.\nBo studied violin at age seven and became a virtuoso. In the early-1940s, while he was in his teens, he taught himself to play guitar, drawing on the influences of jazz artist Louis Jordan and bluesman John Lee Hooker. Bo explained how he developed his trademark sound after his sister bought him his own guitar.\n\"I took it home and learned how to play on one string, 'When The Saints Go Marching In.' The other strings didn't make a difference,\" he said. \"Then I accidentally tuned it one day the way that I'm tuning it now. I say I'm playing backwards. I don't play like the average guitar player, the cats [musicians] who move their fingers all around like this. I do it in chords, and basically, do almost the same thing.\"\nAt age 13, Bo Diddley became a street musician, eventually joining with others to form a street corner band. Driven by maracas, congas and bass, Bo played his infectious, hypnotic guitar phrases. 
When the band was ready to perform in Chicago nightclubs, he bought an electric guitar so he could get more volume.", "score": 24.345461243037445, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Posted by themusicsover on January 10, 2010\nJuly 22, 1929 – January 10, 2009\nBilly Brown was an American rockabilly guitarist and singer who launched his career upon his return from the Korean War. Brown released a handful of singles before he was signed by Columbia Records in 1957. A young Jerry Reed played on a few of those early Columbia recordings. None of his releases sold particularly well. Billy Brown was 79 when he passed away on January 10, 2009.", "score": 24.345461243037445, "rank": 47}, {"document_id": "doc-::chunk-4", "d_text": "The famously fickle and laborious Strat cat played a ’59 Les Paul Standard dubbed “Buddy” through a Fuzz Face and a 100-watt Marshall on the solo—a first-take monster in the moment. Brandishing a sizzling tone and feeding off of Miller’s vocal set up, Johnson’s searing first solo soars to the heavens. Perfectly timed major thirds sound surprisingly blue, and EJ incorporates just enough diminished and chromatic runs to add spice without pushing too far beyond the boundaries of the blues. —Jimmy Leslie\n“I’m Going Home”\nIt’s hard to think of Alvin Lee without taking note of his solo in Ten Years After’s “I’m Going Home.” The band first recorded the song on its 1968 release Undead, and it upped the fast shuffle’s octane level during its performance at the Woodstock festival. Playing his iconic “Big Red” 1959 Gibson ES-335, Lee takes the unusual step to start his solo accompanied only by drums for a full 24 bars, playing without the comfort of harmonic guidance from the band. He then proceeds to play one of the most blistering and fluid, Chuck Berry-influenced solos you’ll ever come across. 
–Teja Gerken\nReleased on Mick Taylor’s first post-Rolling Stones solo album Mick Taylor, “Slow Blues” is a study in how to avoid mere noodling while essentially blowing for the entire duration of an instrumental track. The fact that “Slow Blues” uses a very cool, modified, 12-bar progression with a distinctive bass line and chorused-sounding 13th chords taking the place of an actual melody certainly helps in keeping the tune engaging, but Taylor’s throaty, reverb-drenched tone and dynamic playing keep the tune moving forward in a way that is not to be taken for granted in such an extended solo exploration. –Teja Gerken\n“Call It Stormy Monday”\nChances are, you’re not old enough to remember the impact this song made when it was originally released in 1947. (By way of perspective—Clapton was only two years old then, and the first Stratocaster was still seven years off.) So you may listen now and find yourself thinking, “What’s the big whoop? I’ve heard other guitarists play that stuff.” The big whoop is: Walker invented that stuff. Without his influence, there might’ve been no B.B.", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "Peter Frampton Reunited with His Lost 1954 Gibson Les Paul After 31 Years\nA 1954 Gibson Les Paul that Peter Frampton played during his Humble Pie and solo days has been found after investigation by two Frampton fans, according to Gibson.com. The guitar had been presumed lost in a 1980 cargo plane crash.\nThe Curaçao Tourist Board acquired the guitar, and experts from Gibson Guitar confirmed it was, indeed, the missing guitar, long missing from Frampton’s collection.\nFrampton was given the guitar in 1970 by a man named Mark Mariana at a Humble Pie gig at the Fillmore West. Frampton borrowed Mariana’s guitar for the show and afterward tried to buy it from him. 
“But to my surprise he said he couldn’t sell it to me, he wanted to give it to me!” he said.\nFrampton played the guitar exclusively on Humble Pie’s Rock On and Rocking the Fillmore albums and his own seminal Frampton Comes Alive!, one of the top-selling live records of all time.\nFrampton was recently reunited with the guitar in Nashville, Tennessee.\n“I am still in a state of shock, first off, that the guitar even exists, let alone that it has been returned to me. I know I have my guitar back, but I will never forget the lives that were lost in this crash. I am so thankful for the efforts of those who made this possible … and, now that it is back I am going to insure it for 2 million dollars and it’s never going out of my sight again! It was always my No. 1 guitar and it will be reinstated there as soon as possible. Some minor repairs are needed. And, I just can’t wait to get Mark Mariana on the phone.”", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "What can you say about Les Paul that hasn’t been said? The guy changed the course of rock and roll and, therefore, the course of popular music itself. He played frequently up to this year, and died Thursday at age 94. One of the staple guitars in music – the Gibson Les Paul – is named for him, designed for him.
Paul’s own hits may not be heard so much on radio any more, and though we all hear the effects that he created in just about every song on the radio every day, we probably don’t realize that he did it first, that he showed the rest of us how.\nHere’s a hilarious video of him later in his career, and it shows his sense of humor, his ease on stage and his consummate talent on the guitar.", "score": 24.288896501842334, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "The Rock and Roll Hall of Fame will pay tribute to the “father of the electric guitar” this fall.\nLes Paul will be honored at the annual American Music Masters series, a weeklong event that begins Nov. 10, Rock Hall officials said Tuesday. A tribute concert – artists will be named later – is scheduled Nov. 15 at Cleveland’s State Theater.\nPaul, 93, is hoping to attend, said Rock Hall President and CEO Terry Stewart.\n“You have an inductee who in some ways maybe has had one of the biggest influences of all our inductees with the creation of his solid-body guitar, overdubbing … not to mention his musical styling and his ability to play,” Stewart said. “He’s become an idol and an icon to people in the rock world, as well as people in jazz and popular music.”\nPaul began playing guitar as a child and by 13 was performing semiprofessionally as a country-music guitarist. He later made his mark as a jazz-pop musician, recording hits like “How High the Moon” with his wife, singer Colleen Summers.\nHe built a solid-body electric guitar in 1941 – an invention born from his frustration that audiences were unable to hear him play.\nIn 1952, Gibson introduced the Les Paul model, which became the instrument of choice for musicians such as Duane Allman, Eric Clapton and Jimmy Page.\n“It’s not just his innovation and his musical playing, but sort of the residual effects of that guitar,” Stewart said. 
“It’s become the beginning point for so many people in music, particularly rock music.”\nPaul still performs weekly at the Iridium Jazz Club in New York City. He was inducted into the early influence category of the Rock Hall in 1988.\nPaul is only the second living recipient of the annual American Music Masters award, which began in 1996 to pay tribute to artists who helped change American culture. Jerry Lee Lewis was the first living recipient in 2007. Past recipients include Woody Guthrie, Muddy Waters and Sam Cooke.", "score": 23.69199449380812, "rank": 51}, {"document_id": "doc-::chunk-1", "d_text": "He added a sax player (first Percy France, from 1955 Clifford Scott) and recruited a new guitarist, Billy Butler, a Philadelphia session musician. Technically, Doggett was not a great organist, but when it came to setting a danceable groove, he was superb. He soloed on occasions, but mainly concentrated on the rhythm and the groove and left the heavy lifting for Clifford Scott and Billy Butler. Doggett, Scott and Butler made a formidable trio. Add drummer Berisford 'Shep' Shepherd and bass player Carl Pruitt or Edwyn Conley, and it was among the best R&B bands of the decade.\nKing had already released 24 non-charting singles by Doggett when his chart luck finally changed, and in a big way too. \"Honky Tonk (Parts 1 & 2)\" is one of the all-time great R&B and rock instrumentals. It topped the R&B charts for 13 weeks, also reached # 2 on the pop charts and sold a staggering 1.5 million copies by the end of 1956. For the most part, \"Honky Tonk\" was improvised during a Sunday-night dance in Lima, Ohio. Audience reaction was such that Doggett's band had to do the new song at least nine times again. Doggett knew he had a smash and recorded the song in New York on June 16, 1956 (in one take, according to Bill himself). With a duration of 5 minutes and 22 seconds, the song had to be divided between the two sides. 
Part 1 is dominated by Billy Butler's guitar and Part 2 by Clifford Scott's saxophone.\nIn general, \"Honky Tonk\" is not regarded as the beginning of instrumental rock 'n' roll as a genre of its own, as it was more R&B than R&R. It was Bill Justis's \"Raunchy\", one year later, that really set the instrumental ball rolling.\n\"Honky Tonk\" was followed in the charts by a Doggett cover of \"Slow Walk\" (#4 R&B, #26 pop), though the original by Sil Austin was a slightly bigger hit.", "score": 23.205175709568138, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "Legendary jazz guitarist Les Paul, known for his contributions to guitar design and recording technology, has died at the age of 94, according to a joint statement released by Gibson Guitar, the company that produced his iconic Gibson Les Paul guitar, and New York's Iridium jazz club, where he continued to play weekly gigs almost until the end of his life.\nPaul's reputation as a guitarist and recording artist is overshadowed by his contributions to music technology. He pioneered many sound recording techniques still in use today, and was also instrumental in developing the modern solid-body electric guitar, which formed the backbone of decades of popular music.\nIn the mid-1930s, Paul experimented with building an amplified guitar, using a plank of lumber as the starting point, and adding a pickup connected to an external amplifier (both Leo Fender and Adolph Rickenbacker developed similar designs for solid-body electric guitars around the same time). The Gibson Guitar Corporation eventually designed a solid-body electric guitar based on Paul's concepts and signed him to a long-term endorsement deal.\nPerhaps more important, Les Paul was among the first musicians to employ multitrack recording--the basis for nearly all modern recorded music. His 1947 recording of \"Lover (When You're Near Me)\" was made from eight separate guitar parts, dubbed over each other.
While this early experiment was done with acetate disks, the technique moved onto magnetic tape and today's nonlinear hard-drive recording.\nHis work on multitracking also led to popular recording techniques such as phasing and delay, which were achieved by manipulating the actual magnetic tape used in the recording process.\nWhile he was inducted into the Rock and Roll Hall of Fame, the National Broadcasters Hall of Fame, and the Grammy Hall of Fame, it was his work on the solid-body electric guitar and multitrack recording that earned him a spot in the National Inventors Hall of Fame--not the usual place you'd find a jazz guitarist.", "score": 23.030255035772623, "rank": 53}, {"document_id": "doc-::chunk-4", "d_text": "G), but let’s face it, Rumble don’t need no harmonica. Mostly he stepped out of the spotlight, only in his mid-twenties he was already embittered by lack of success, and found a regular gig backing Danny Denver, playing country bars, far from the limelight. With the introduction of wah wah pedals, fuzztones, etc. he felt lost, tricks he’d spent years learning to do were now available for a small price at your local guitar shop. The emergence of Jimi Hendrix in ’67 must have shook him because he told an interviewer later: “when you play at that volume the amps and guitar plays you, I like to use the smallest amp possible, it gives you maximum control”. Roy was feeling the heat, and competing was against his reclusive nature, he would shun the limelight for the rest of his life. Sometimes he appeared with his own band– Roy Buchanan and the Poor Boys but mostly he backed up Danny Denver until a Rolling Stone magazine article in 1969 proclaimed him “the greatest guitar player alive you never heard of” which led a path of rock stars to the crappy clubs Denver was playing to herald Buchanan’s talent. He claims the Rolling Stones asked him to replace Brian Jones but he turned them down because he didn’t want to learn their repertoire. 
This is most likely bullshit.\nStill, players like George Harrison and Jeff Beck sang his praises and soon a PBS documentary brought him to fame’s doorstep, kicking and screaming. He was signed by Polydor and cut a series of boring ass records. He retired and came back several times, the last on Alligator Records in the late 80’s. Unfortunately the fire had gone out. Unlike our previous subject, Charlie Christian, given the freedom to stretch out, Buchanan had become a bore. Under the tight restraints of the three-minute 45, and with a strong band leader like Dale Hawkins or producers like Leiber and Stoller, he was great, but as heard on his LP’s, twenty-minute solos on standards like Green Onions were duller than dishwater. He became hugely popular in an era that worshipped “chops” and “tasty riffs”, even selling out Carnegie Hall at one point.", "score": 23.030255035772623, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "There are a lot of great guitarists that have recorded music over the years. There’s just something about a player who is one with his instrument, like it’s an extension of his or her body. Many of these great guitar performances are lost to time because the technology to record them wasn’t available widely in the early 1900s. Luckily for us recording technology has improved over the years and hundreds of great concerts have been committed to tape.\nCream – Crossroads\nCrossroads was originally called Crossroad Blues, written and recorded in 1936 by the legendary Robert Johnson. Many of Johnson’s songs have been covered over the years but Cream’s live version from the 1968 double album Wheels of Fire is the most famous. Why? Because of the way Clapton restructured the song. He turned it into a standard tuning twelve bar blues rocker. And to make the song even better Clapton played not one, but two guitar solos on the track.
The first solo is a pretty straightforward blues solo but the second, while retaining a bluesy feel, is a blazing solo that proves once and for all why his nickname was God in the 1960s. If you have a moment, look up other live performances online to see and hear for yourself.\nEddie Van Halen – Live Solo\nEddie Van Halen burst onto the scene and into the consciousness of guitarists everywhere with the short but blistering solo Eruption on Van Halen’s first album. He went on to record an instrumental on two other albums, the flamenco-inspired Spanish Fly on Van Halen II and the organ-sounding Cathedral on Diver Down. In 1986, during Van Halen’s first tour with new singer Sammy Hagar, the band recorded and released a live concert video. The eleventh track on the tape is merely called Guitar Solo. It starts out with Edward playing a clean little ditty that was later recorded as 316 (his son’s birthday). After that he blasts off for eleven minutes of pure rock 'n' roll guitar soloing as only the best guitarist of his era can. He shreds melodically (as is his style) and during the course of the solo he fits in parts of Eruption, Spanish Fly, and Cathedral. This is probably the greatest Eddie Van Halen guitar solo ever caught on tape and must be heard to be fully appreciated.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "The great rock guitarist Link Wray died on Friday. In the 1997 edition of the All Music Guide to Rock, the late Cub Koda had this to say about him: \"Quite simply, Link Wray invented the power chord, the major modus operandi of modern rock guitarists. Listen to any of the tracks he recorded between that landmark instrumental (Rumble--Lee) in 1958 through his Swan recordings in the early 1960s and you'll hear the blueprints for heavy metal, thrash, you name it.
Though rock historians always like to draw a nice, clean line between the distorted electric guitar work that fuels early blues records to the late-'60s Hendrix-Clapton-Beck-Page-Townshend mob, with no stops in between, a quick spin of any of the sides Link recorded during his golden decade punches holes in that theory right quick. If a direct line from a black blues musician crankin' up his amp and playing with a ton of violence and aggression can be traced to a young, white guy doing a mutated form of same, the line points straight to Link Wray, no contest.\"\nRIP Link (No Pun Intended...)", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "Although I 'discovered' Roy Buchanan when I was a blues-loving kid in the mid-80s, his first brush with something resembling fame came in 1971, when a documentary, The Best Unknown Guitarist in the World, aired on public TV.\nThe documentary was, of course, about Buchanan, a Washington, D.C.-based blues-rock virtuoso whose gritty sound and distinctive technique inspired scores of guitarists, including Jeff Beck.\nSadly, Buchanan is still fairly unknown to the general public (including scores of guitarists).\nBuchanan had a distinctive tone, as can be heard in the live video below. He played his vintage Fender Telecaster through a Fender Vibrolux amp with the tone all the way up (and then some, it seems), using the guitar's volume knob mid-solo to create mesmerizing, keyboard-style effects. Buchanan could play harmonics at will and mute individual strings with his free right-hand fingers while picking or pinching others. He was best known for his gut-wrenching bends and incredibly 'pointy' sound. He committed suicide in 1988 at age 48 while in jail for public drunkenness.\nBelow, check out Buchanan's undated performance of When a Guitar Plays the Blues, the title track from his 1985 album on Alligator Records (This is a good one to own, by the way). 
There's a solo intro section at the beginning of the video, and the actual solo begins at around 5:25. Note that there's another extended solo section around 8:18. Enjoy!", "score": 21.695954918930884, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "Danny Gatton was truly the guitarist's guitarist. Hailed by Guitar Player as the 'World's Greatest Unknown Guitar Player' he was cherished by the likes of Les Paul, Joe Perry, Steve Vai, Joe Pass, Eric Clapton, Jeff Beck, Vince Gill, Albert Lee and countless other guitar luminaries. Until his tragic and untimely death in 1994 he was every guitar players best resource for a treasure of 'hot licks'. Never one to tour he remains an underground treasure for those 'in the know'. If you play the guitar, you owe it to yourself to check out this great instrumental genius.", "score": 21.695954918930884, "rank": 58}, {"document_id": "doc-::chunk-4", "d_text": "As the electric guitar began to take on its signature shape in the Fifties – almost indistinguishable from the guitars of today – it bears noting how conspicuously sexy that design had become.\nWith curves that cartoonishly mimicked the lines of a woman’s hips, and an undeniably phallic neck, the guitar may have preceded the sexual revolution of the Sixties, but it would become a perfect visual complement to it. Its provocative design was something Berry was one of the first to acknowledge, and that he rarely failed to exploit in his live shows.\nWhile he had to be careful with how far he went – Chuck was one of the first black crossover rock-and-roll artists in a very racially charged time – he wasn’t that careful. He often played to predominately white audiences at rock-and-roll stage shows booked by DJ/promoter Alan Freed, or in Hollywood films such as Rock, Rock, Rock!, Go, Johnny Go! 
and Mister Rock and Roll.\nDuring each appearance, Berry would kiss his Gibson or Gretsch on the neck, wrestling with it as he made it scream and swoon during his wild solos, jutting it lasciviously from his waist as he did splits. The Gibson ES-350T was particularly well suited to Berry’s gyrations.\nIn the mid Fifties, electric guitar players had two choices: either a full hollow-body or a compact solid-body. Gibson had been receiving requests from players for something in-between the two styles, so in 1955 their first “thinline” electrics were developed.\nThe guitar’s medium build was a perfect fit for Chuck’s high-energy stage presentation. Chuck’s guitar antics and wild gyrations were provocative stuff for suburban kids, who were used to gently swaying crooners such as Frank Sinatra or Rosemary Clooney. And while they might not have understood all the implications of his act, one thing was clear: the electric guitar presented a seriously dangerous, sexy alternative to the comparatively staid piano or the sax.\nAt the end of the Fifties, dozens of guitar-playing Johnny B. Goodes appeared, irrevocably changing the musical landscape, all playing in small electric combos resembling those pioneered by Chuck Berry and Muddy Waters.\nAs for Waters, he had a few more hits, including Mannish Boy in 1956 and She’s Nineteen Years Old in 1958.", "score": 21.695954918930884, "rank": 59}, {"document_id": "doc-::chunk-46", "d_text": "Atkins sometimes joked that early on his playing sounded ''like two guitarists playing\nDuring the 1940s he toured with many acts, including Red Foley, The Carter Family and Kitty Wells. RCA executive Steve Sholes took Atkins on as a protege in the 1950s, using him as the house guitarist on recording sessions.\nRCA began issuing instrumental albums by Atkins in 1953.
George Harrison, whose guitar work on early Beatles records is heavily influenced by Atkins, wrote the liner notes for ''Chet Atkins Picks on the Beatles.''\nSholes put Atkins in charge of RCA Nashville when he was promoted in 1957. There, he helped Nashville survive the challenge of rock 'n' roll with the Nashville Sound. The lavish sound has been criticized by purists who prefer their country music raw and unadorned.\nAtkins was unrepentant, saying that at the time his goal was simply ''to keep my job.''\n''And the way you do that is you make a hit record once in a while,'' he said in 1993. ''And the way you do that is you give the audience something\nAtkins quit his job as an executive in the 1970s and concentrated on playing his guitar. He's collaborated with a wide range of artists on solo albums, including Mark Knopfler, Paul McCartney, Eric Johnson, George Benson, Susie Bogguss and Earl Klugh.\nAt the time he became ill, Atkins had just released a CD, ''The Day Finger Pickers Took Over the World.'' He also had begun regular Monday night performances at a Nashville club.\n''If I know I've got to go do a show, I practice quite a bit, because you can't get out there and embarrass yourself,'' Atkins said in 1996.\n''So I thought, if I play every week I won't be so rusty and I'll play a lot\nSurvivors include his wife of more than 50 years, Leona Johnson Atkins, and a daughter, Merle Atkins.\nThe funeral is Tuesday morning at Nashville's Ryman Auditorium, the former home of the Grand Ole Opry.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "In his prime, Howard Roberts played more than 900 studio dates annually and recorded the hippest guitar records of the era. His legion of fans still revere his incalculable influence and musical legacy.\nVesta Roberts, who grew up in a family of lumberjacks, gave birth to Howard just three weeks before the Wall Street Crash in October of 1929.
Howard’s dad, a cowboy, wasn’t happy about the boy’s affinity for music.\nBut his mother prayed for her baby to be a musician. And Howard Roberts often told the story: “When I was about eight years old, I fell asleep in the back seat of my parents’ car one very hot summer afternoon. When I woke up I just blurted out, ‘I have to play the guitar!’” So when his dad saw the youngster’s attempt to build one from a board and baling wire, he acquiesced. For Christmas, he bought young Howard an $18 Kalamazoo student-model acoustic manufactured by Gibson.\nBy age 15, Roberts’ guitar teacher, Horace Hatchett, told the boy’s dad, “Howard has his own style of playing and there’s nothing else I can show him. He plays better than I do.” Howard was already playing club dates in their hometown Phoenix area – usually blues and jazz gigs on which he would gain playing experience and develop his improvising skills. He was receiving an extensive education in the blues from a number of black musicians, one of whom was the brilliant trumpeter Art Farmer. Journalist Steve Voce, in his 1992 article in The Independent Newsletter, quoted Roberts on those nightclub gigs, “I came out of the blues. I started in that scene when I was 15 and it was the most valuable experience in the world for me.”\nRoberts had created an heroic practice regimen with his roommate, guitarist Howard Heitmeyer. The two would practice three or four hours in the morning, catch an afternoon movie, then return to practice until it was time to hit the clubs, gig or not.
Heitmeyer would remain Roberts’ lifelong friend, and someone with a comprehensive talent Roberts found staggering.\nAt age 17, Roberts was drawn to a class created by composer/theorist Joseph Schillinger, whose students included George Gershwin, Tommy Dorsey, Benny Goodman, and Oscar Levant.", "score": 21.361300790500703, "rank": 61}, {"document_id": "doc-::chunk-2", "d_text": "Then Ronnie said, “Why don’t you do something at the end of that so I can take a break for a few minutes?” so I came up with those three chords at the end and Allen played over them, then I soloed and then he soloed… it all evolved out of a jam one night. So, we started playing it that way, but Ronnie kept saying, “It’s not long enough. Make it longer.”\nOn the studio version, “Collins played the entire solo himself on his Gibson Explorer.” Says Rossington, “He was bad. He was super bad! He was bad-to-the-bone bad… the way he was doin’ it, he was just so hot! He just did it once and did it again and it was done.” And there you have it.\nIf this list didn’t have any Clapton on it, I’d probably get death threats. Luckily we have an isolated Clapton track, but not from a Clapton band. Instead, above, hear his guest work on the George Harrison-penned and -sung Beatles’ song “While My Guitar Gently Weeps” from 1968. In a previous post on this masterfully iconic recording, Mike Springer described Clapton’s technique and gear: “For the impression of a person weeping and wailing, Clapton used the fingers on his fretting hand to bend the strings deeply, in a highly expressive descending vibrato. He was playing a 1957 Gibson Les Paul, a guitar he had once owned but had given to Harrison, who nicknamed it ‘Lucy.’”\nI’ll admit, I grew up assuming that Harrison played the leads in this song, an assumption that colored my assessment of Harrison’s playing in general. 
But while he’s certainly no slouch, even he admitted that this was better left to the man they call “Slowhand” (a nickname, by the way, that has nothing to do with his playing). Typically humble and understated, Harrison described to Guitar World in 1987 how Clapton came to guest on the song:\nNo, my ego would rather have Eric play on it. I’ll tell you, I worked on that song with John, Paul, and Ringo one day, and they were not interested in it at all. And I knew inside of me that it was a nice song.", "score": 20.327251046010716, "rank": 62}, {"document_id": "doc-::chunk-2", "d_text": "After that Van Eps joined Freddie Martin, who had the most popular sweet dance band after Guy Lombardo's, from 1931 to 1933. He began to solo on jazz records in 1934, by which time he had joined Benny Goodman's band, and he can be heard playing confidently with Jack Teagarden and Goodman on Adrian Rollini's \"Somebody Loves Me\" of that year. A few months later he soloed between Bunny Berigan and Teddy Wilson on Red Norvo's recording of \"Bug House\".\nGoodman was about to move on to greater things, but Van Eps left him to join Ray Noble's band for a year before moving to Hollywood in 1936. His work as a studio musician there gave him security but kept him out of the public eye. It was at this time that he wrote a guitarist's manual and designed the seven-stringed instrument.\nAfter a further period with Noble in 1941, Van Eps abandoned music professionally (although he continued to practise on his instrument for nine hours a day, as he always did when not working) and joined his father in his sound laboratory for two years.\nWhen the war ended Van Eps returned to Hollywood as a freelance in the film and recording studios, and it was here that he spent most of the rest of his career. 
He recorded an outstanding trio session with the pianist Jess Stacy in 1951 and also soloed on some of Paul Weston's LPs.\nIn 1955 he had a role in the film Pete Kelly's Blues backing Peggy Lee as a member of the fine band led by the trumpeter Dick Cathcart. He continued the role in the television series that followed in 1959. He made jazz albums on his own (Mellow Guitar for Columbia in 1956) and with other studio musicians, notably in Matty Matlock's Rampart Street Paraders.\nFurther albums under his own name followed for Capitol in the Sixties, but serious ill-health curtailed his appearances at the beginning of the Seventies, although he appeared at jazz festivals until he broke three fingers in 1977. He toured Europe with the clarinettist Peanuts Hucko in 1986 and in 1991 made the first of the exquisite albums for Concord Jazz with Howard Alden.", "score": 20.327251046010716, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "I know it's not piano, but anybody who has been in the music business for a while knows and respects Les Paul...\nCNN) -- Les Paul, whose innovations with the electric guitar and studio technology made him one of the most important figures in recorded music, has died, according to a statement from his publicists. Paul was 94.\nPaul died in White Plains, New York, from complications of severe pneumonia, according to the statement.\nPaul was a guitar and electronics mastermind whose creations -- such as multitrack recording, tape delay and the solid-body guitar that bears his name, the Gibson Les Paul -- helped give rise to modern popular music, including rock 'n' roll. No slouch on the guitar himself, he continued playing at clubs into his 90s despite being hampered by arthritis.\n\"If you only have two fingers [to work with], you have to think, how will you play that chord?\" he told CNN.com in a 2002 phone interview. 
\"So you think of how to replace that chord with several notes, and it gives the illusion of sounding like a chord.\"\nLester William Polfuss was born in Waukesha, Wisconsin, on June 9, 1915. Even as a child he showed an aptitude for tinkering, taking apart electric appliances to see what made them tick.\n\"I had to build it, make it and perfect it,\" Paul said in 2002. He was nicknamed the \"Wizard of Waukesha.\"\nIn the 1930s and '40s, he played with several big band singers, including Bing Crosby, Frank Sinatra and the Andrews Sisters, as well as with his own Les Paul Trio. In the early 1950s, he had a handful of huge hits with his then-wife, Mary Ford, such as \"How High the Moon\" and \"Vaya Con Dios.\"\nHis guitar style, heavily influenced by jazzman Django Reinhardt, featured lightning-quick runs and double-time rhythms. In 1948, after being involved in a severe car accident, he asked the doctor to set his arm permanently in a guitar-playing position.\nPaul also credited Crosby for teaching him about timing, phrasing and preparation.\nCrosby \"didn't say it, he did it -- one time only. Unless he blew the lyrics, he did one take.\"", "score": 20.327251046010716, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "This guy hopped out of his car, knowing he was late, but stopping to light a cigarette and shoot the breeze with us for a few minutes. He had his work uniform on, but was sporting a fairly impressive afro billowing out from underneath his headgear. While he was smoking, I asked him what he was listening to when he pulled up. He looked down at me and said, really he sneered, \"Johnny 'Guitar,' 'Real Mutha F'Ya,\" then he strolled into the back of the store like he was fifteen minutes early instead of fifteen minutes late.\nWatson was, like many, influenced by the great T-Bone Walker. 
He started out playing with such future blues legends as Albert Collins and Johnny Copeland in Houston, but migrated to Los Angeles as a 15-year-old, where he hooked up with sax man Chuck Higgins' band, playing piano and singing on the original \"Motor Head Baby,\" cut by Higgins on the Combo label in 1952. He later cut a livelier version the next year for Federal Records, billed as Young John Watson....this time playing guitar, along with Wayne Bennett, who would later play on many of Bobby \"Blue\" Bland's Duke recordings.\nDuring this time, Watson would travel to New Orleans and hang out with Guitar Slim, whose wild act included hooking up a long power cord to his guitar to allow him to play in the audience and even into the street, along with wild suits of many colors (with matching hair and shoes). Watson took it a step further.\n\"I saw what Slim was up to and figured I could do the same. Liked his fire and flash. Truth is, it got to a point where we'd gig together and march into the club with me carrying him on my shoulders, both of us firing away on our guitars like World War III. But I did Slim one better. Got me a 200-foot cord, so when I played auditoriums, I'd start in the back of the balcony and work my way down. I could see rock 'n' roll coming, and could see that when it came in, it'd be riding on the back of some flame-eating guitar. Well, hell, by then 'Guitar' was my middle name.\"\n\"I'd play with my teeth. I'd play standing on my hands, play it over my head and under my legs. See the technology was changing.", "score": 20.327251046010716, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "Roy Buchanan (September 23, 1939 – August 14, 1988) was an American guitarist and blues musician. A pioneer of the Telecaster sound, Buchanan worked as both a sideman and solo artist, with two gold albums early in his career, and two later solo albums that made it on to the Billboard chart. 
Despite never having achieved stardom, he is still considered a highly influential guitar player. Guitar Player praised him as having one of the \"50 Greatest Tones of all Time.\" He appeared on the PBS music program Austin City Limits in 1977 during Season 2.", "score": 19.404527245541964, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "It was he who played the throbbing leads on Johnny Horton's 1956 hit \"Honky Tonk Man,\" the exquisite nylon string guitar on Marty Robbins's 1959 crossover smash \"El Paso,\" and the guitar on Lefty Frizzell's 1964 \"Saginaw, Michigan.\" One of his most famous sessions involved an accidental mid-take malfunction, when Grady played the distorted \"fuzz\" guitar solo on Robbins's 1960 hit \"Don't Worry.\" Though studio musicians in those days rarely received credit for their work, Martin's efforts didn't go unnoticed. Producers often designated him \"session leader,\" which meant he led the musicians and directed the impromptu arrangements that became a landmark of Nashville sessions. In other words, he often became the de facto producer in the process.\nMartin continued to play sessions through the 1970s, working extensively with Conway Twitty and Loretta Lynn, and produced the country-rock band Brush Arbor. His funky leads helped to make a hit of Jeanne Pruett's 1973 \"Satin Sheets.\" Martin eventually returned to performing, first with Jerry Reed and then with Willie Nelson's band, with whom he worked from 1980 to 1994. Martin became the first recipient of Nashville Music Association's Masters Award in 1983.\nRecorded April 7, 1959, Bradley Studios, 16th Ave S., Nashville, Tennessee\nReleased September 1959\n\"Gunfighter Ballads and Trail Songs\" LP (Columbia)\nSingle released on October 26, 1959 - #1 (2 wks)\nMarty Robbins - Vocals, Guitar\nGrady Martin - Lead Guitar\nJack Pruett - Guitar\nBob Moore - Bass\nProduced by Don Law and Frank Jones\nGrady Martin, a guitarist for many of country music's top stars, died last week.
He is remembered as a member of Nashville's \"A-team\" of session musicians, backing artists such as Willie Nelson, Joan Baez and Johnny Cash. Noah Adams talks with Bob Moore, a long time friend of Martin's and a fellow musician. (5:45)\nSon Josh Martin (right) with Bob Moore and Hargus \"Pig\" Robbins, Nov. 2000", "score": 18.90404751587654, "rank": 67}, {"document_id": "doc-::chunk-1", "d_text": "In the early '30s, Walker had a street act with Charlie Christian, an ex-Dallasite living in Oklahoma City, who would be immortalized as jazz's first great electric guitarist. Let that settle in: The two greatest guitar pioneers of the 20th century were a pair of Texans who played together for tips on street corners in Oklahoma City. But after being prodded by friends to relocate to L.A. for more musical opportunities, Walker left his musical partner, his wife and everything else behind in late '35 and took off on Route 66.\nHis first gig on the vaunted Central Avenue of black nightclubs was as a dancer and emcee with Big Jim Wynn's band. But even though he wasn't playing guitar onstage, Walker was tinkering with amplification techniques. Hugh Gregory's Roadhouse Blues book, which meticulously explores the roots of Stevie Ray Vaughan, quotes Wynn as saying that Walker \"had a funny little box ...a contraption he'd made himself.\"\nIt wasn't until July 1942, however, that Walker played electric guitar on a record. Hired as a rhythm player for a session by Freddie Slack, Walker was given two spotlight turns on \"Mean Old World\" and \"I Got a Break Baby.\" When Walker's crisply pronounced notes interspersed with trumpet-like slurs and whelps, the guitar lost its secondary status.\nBefore Walker, the blues was a solo acoustic form.
With amplification bringing the guitar up front, no longer to be drowned out by horns or drums, T-Bone laid the full-band framework that would rule R&B in the post-war decade and eventually spin off into the rock 'n' roll combo.\n1947-'48 would prove to be Walker's landmark period. After signing with the Black & White label, led by \"music first\" mogul Ralph Bass, Walker and his topflight band recorded more than 50 titles in 18 months, ranging from the raucous \"T-Bone Boogie\" to the pop ballad \"I'm Still in Love With You\" to the slow blues classic \"Call It Stormy Monday.\"\nFifteen years later, a 12-year-old white kid, sitting in his bedroom in T-Bone's old neighborhood, was trying to make Walker's riffs part of his own musical lexicon. \"I'd try to get into his head when I listened to his records,\" Jimmie Vaughan says.", "score": 18.90404751587654, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "Paul never stopped tinkering with electronics, and after Crosby gave him an early audiotape recorder, Paul went to work changing it. It eventually led to multitrack recording; on Paul and Ford's hits, he plays many of the guitar parts, and Ford harmonizes with herself. Multitrack recording is now the industry standard.\nBut Paul likely will be best remembered for the Gibson Les Paul, a variation on the solid-body guitar he built in the late 1930s and offered to the guitar company.\n\"For 10 years, I was a laugh,\" he told CNN in an interview. \"[But] kept pounding at them and pounding at them saying hey, here's where it's at. Here's where tomorrow, this is it. You can drown out anybody with it. And you can make all these different sounds that you can't do with a regular guitar.\"\nGibson, spurred by rival Fender, finally took Paul up on his offer and introduced the model in 1952. 
It has since become the go-to guitar for such performers as Eric Clapton.\nPaul is enshrined in the Rock and Roll Hall of Fame, the Inventors Hall of Fame and the Songwriters Hall of Fame.\nHe admired the places guitarists and engineers took his inventions, but he said there was nothing to replace good, old-fashioned elbow grease and soul.\n\"I learned a long time ago that one note can go a long way if it's the right one,\" he said in 2002, \"and it will probably whip the guy with 20 notes.\"", "score": 18.90404751587654, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "business of old, when the rock guitar gods of the youth movement were Eric Clapton, Jimmy Page and Jeff Beck, one American blues guitarist in particular was held in similar esteem. Sadly, today, as reported by American Blues Scene, that great musician, Johnny Winter, the white haired, wild\n|Johnny Winter. As most of us remember him|\nfrom Beaumont, Texas, has passed on. Johnny, 70, died in Zurich, Switzerland, during a European tour.\nJohnny Winter, in the 1970s, outshone even such great contemporaries as Jerry Garcia and Mike Bloomfield when it came to American electric blues guitar-playing status. Jimi Hendrix, of course, had died at the turn of the decade, just down the road from where I lived in South Kensington, although I couldn’t afford to live in South Kensington today. Only rich rock musicians and the foreign mega-rich can afford to live there today.\nUnlike today, hardly anyone outside of black America was aware of Muddy Waters in the early 70s, either. (It took Johnny Winter to reintroduce Muddy Waters to the world, when Johnny persuaded his record company to sign Muddy in 1977. With Johnny Winter at the control desk, Muddy Waters returned to the 1950s Chicago sound we now know and love so well.)\n|Two rock legends, sadly now both gone. 
Johnny Winter with Janis Joplin|

Johnny had become the new torchbearer for American blues-rock (was there any other type of rock in those days?), although Johnny himself always claimed he was a blues, not a rock, artist.

Fortuitously, Johnny Winter was with CBS Records in England, where I worked in the press office, parent Columbia having signed him in 1968 after seeing Johnny open for another of their acts, Mike Bloomfield.

|Brothers: Edgar and Johnny|

I dug into his catalogue and, immediately, I was hooked on his absolute mastery of electric blues guitar. Younger brother, Edgar Winter, was just coming into the frame, and I remember a particularly strong track they did together, with Johnny's searing guitar-playing imprinted on my mind to this day. When my sons were boys they would play this track over and over.

I hope I'm not the only person that has this problem. Sometimes a piece of music will get in my head and I can't rest until I have tracked it to its source and listened to it enough times to expunge the demon. And it's always some arcane scrap of song that generally has no relevance to the here and now. The latest such invasion came from listening to Connie Francis' Lipstick on Your Collar on an oldies station, and really for the first time appreciating a wild guitar solo in the middle of what is more or less a nonsense song. For those who remember it, Lipstick starts out with a chorus of nasty female voices chanting Nyah Nyah Nyah Nyah at one of their friends who has just learned her boyfriend has two-timed her. But almost immediately a guitarist breaks in with a short musical phrase that suggests there may be an adult in the room.
The guitar fades into the background until just about the one-minute mark, when Connie sings:

Bet your bottom dollar you and I are through,
Cuz lipstick on your collar told a tale on you...

And then follows 25 seconds of pure guitar heaven as the soloist punches out what has been described as one of the great guitar solos of the era, and one that stands up well today despite the juvenile tune that surrounds it.

Who was the virtuoso? Well, it turns out it was George Barnes: almost forgotten today, but in his day in the same league as, or better than, the Charlie Christians, the Les Pauls, the Django Reinhardts and the rest of those great jazz guitar pioneers of the '30s and '40s. It is believed Barnes made the first electric guitar recording ever, around 1933. He can be heard accompanying Big Bill Broonzy around the same time. At age 14 he had his own band. He recorded prolifically through the early '40s and quickly became a hot item on the New York and Chicago jazz scene, playing with the top artists of the day. As the '50s came along, George Barnes was working with Decca Records both as a recording artist and a session man. But like many of his fellow jazz musicians, Barnes also supplemented his living as a New York studio musician, playing on hundreds of albums and jingles from the early 1950s through the late 1960s.

The following content is related to the September 2013 issue of Guitar World. For the full range of interviews, features, tabs and more, pick up the new issue on newsstands now, or in our online store.

By the late Forties, electric guitar was firmly established as an important instrumental voice in down-home blues, but in the realm of uptown rhythm and blues, with the exception of T-Bone Walker and a few of his disciples, guitar solos were still relatively rare. Saxophone was king, and no saxophonist of the era was more popular than Louis Jordan, a.k.a. "Mr.
Jukebox.”\nIn addition to his witty vocals and swinging arrangements on classics like “I’m Gonna Move to the Outskirts of Town,” “Let the Good Times Roll,” “Caldonia,” and many others, Jordan’s wailing, bluesy alto solos foreshadowed the dynamic string stretching of a new generation of electric guitar heroes.", "score": 17.397046218763844, "rank": 72}, {"document_id": "doc-::chunk-0", "d_text": "Some have tried playing the guitar with their teeth, behind their back, with their feet, etc. And then there was the inventive guitarist who, many decades ago, decided to slip a bottle over his finger and slide it along his guitar's strings (He probably emptied the bottle himself, if you know what I mean).\nDuane Allman had three primary Les Pauls during his time with the Allman Brothers Band. The 1957 goldtop that he played on the band’s first two albums as well as most of the Derek and the Dominos Layla sessions has been on display at the Big House Museum in Macon, Georgia.\nIn honor of the expansive new box set from Rounder Records, Skydog: The Duane Allman Retrospective, we focused on his single-note soloing on classic Allman Brothers’ cuts like “Stormy Monday” and “Whipping Post.” This month’s column is dedicated to Duane’s mastery of the art of slide guitar.\n\"I remember hearing 'Hey Jude' by Wilson Pickett and calling either Ahmet Ertegun or Tom Dowd and saying, 'Who's that guitar player?'\" says Eric Clapton in the top video below. It turns out the guitar player was a 22-year-old Duane Allman, aka \"Skydog.\"\nNow in paperback, Randy Poe's Skydog: The Duane Allman Story (Backbeat Books) is revised and expanded, with a new afterword by the author, plus a foreword by Billy Gibbons of ZZ Top. It's the definitive biography of Duane Allman, one of the most revered guitarists of his generation.\nThe Allman Brothers Band At Fillmore East has been considered rock’s best live album since its 1971 release. 
Recorded March 12 and 13, 1971, at the New York club, the album captured the original Allman Brothers Band at the peak of their powers, playing with verve, grace, intensity and seemingly telepathic communication.

The Allman Brothers Band was largely Duane's conception, and it was his unflagging energy and incredible guitar playing that drove them to mesmerizing heights as they blended rock, jazz, blues and country in new and exciting ways. Unfortunately, the guitarist was killed in a motorcycle accident in October of '71, just as the band was achieving large-scale commercial recognition.

In spite of this, a lot of guitarists have flown a Jet during their careers: players as diverse as Diddley, Cliff Gallup, Atkins, George Harrison, Thumbs Carlille, Hank Garland, Billy Zoom, Jeff Beck, Joe Perry, Tom Keifer, and Dan Fogelberg. The next time you're at a guitar show, if you see a squadron of Gretsch Jets, check 'em out. And if there's a cool-looking red one, take it for a test flight.

This article originally appeared in Vintage Guitar Classics No. 3. All copyrights are by the author and Vintage Guitar magazine. Unauthorized replication or use is strictly prohibited.

In the film, Jet and the Jetblacks played "Man From Nowhere", whilst the duo performed "(Doin' The) Hully Gully", a vocal track released as the flipside of their hit "Scarlett O'Hara".

Harris was declared bankrupt in 1988.
The BBC reported that it took Harris 30 years of heavy drinking before he finally admitted to being an alcoholic and sought help. For many years Harris made a point in his stage shows of saying how long it had been since he quit drinking, winning applause from audiences who knew how it had wrecked his career in the '60s. Harris still played occasionally, with backing band the Diamonds or as a guest with the Rapiers, and guested with Tony Meehan at Cliff Richard's 1989 'The Event' concerts.

In 1998, he was awarded a Fender Lifetime Achievement Award for his role in popularising the bass guitar in Britain. He appeared annually at Bruce Welch's 'Shadowmania' and toured backed by the Rapiers (a Shadows tribute band). He recorded continuously from the late 1980s with a variety of collaborators including Tangent, Alan Jones (also an ex-Shadows bassist), Bobby Graham and the Local Heroes. His previous problems with stage nerves had seemingly disappeared, and 2006 saw Harris's first single release in over forty years, "San Antonio". In 2010, Harris chose to begin appearing with the Shadowers. Regular tour dates and studio recordings with the Shadowers, Brian "Licorice" Locking (Harris' successor in the Shadows) and Alan Jones, though discussed, never materialised due to Harris' poor health.

In 2007, Harris was invited by UK singer Marty Wilde to be a special guest on his 50th Anniversary tour.
This culminated in an evening at the London Palladium with other guests including Wilde's daughters Kim and Roxanne, Justin Hayward of the Moody Blues, and members of the original Wildcats: Big Jim Sullivan, Licorice Locking and Brian Bennett, who also joined Hank Marvin and Bruce Welch of the Shadows on stage with Wilde and his band the Wildcats (Neville Marten and Eddie Allen on guitar, Roger Newell on bass, and Bryan Fitzpatrick on drums).

In his first Rolling Stone interview, published in March 1968, Jimi Hendrix described the moment that changed his life forever: "The first guitarist I was aware of was Muddy Waters. I heard one of his old records when I was a little boy, and it scared me to death . . . 'Wow, what is that all about?'"

The history of rock & roll guitar is writ large and loud in its signature licks, riffs and solos: the barnyard bounce and pink-Cadillac shine of Scotty Moore's epochal break in Elvis Presley's "That's All Right"; Keith Richards' distorted three-note stomp in the Rolling Stones' "(I Can't Get No) Satisfaction"; Kurt Cobain's fireball power chords in Nirvana's "Smells Like Teen Spirit."

But rock & roll's infinite capacity for renewal and surprise is packed into the lightning-bolt impact of those Great Guitar Moments – the way a simple hook, a feedback squeal, even a cocksure pose can send a kid over the moon and then reaching for his or her own instrument. The following pages are a celebration of those flashes of discovery, related by more than thirty-five master players in rock, blues, folk, punk and hip-hop – an extraordinary testament to the enduring power and magic of the electric guitar.

The instrument is a lot older than rock & roll itself. Les Paul, a pioneer in guitar design and multitrack recording, was playing a primitive electric guitar as early as 1928, using his parents' radio as an amplifier.
In 1937, Eldon Shamblin of Bob Wills' Texas Playboys was the first country musician to play a solid-body model, a bantam-size Rickenbacker Electro Spanish – until his boss told him to stop, claiming that the damn thing didn't look like a proper guitar.

But in rock & roll, the whole point of the guitar is to be anything but proper. Everything you need to know about the implicit sexuality of the guitar, and its potential for exuberant violence, can be seen in 1950s stage photos and footage of Presley – his guitar hanging over his waist like a tommy gun, banging into his pelvis with rhythmic authority. Amplification liberated the instrument from the rich, warm but literal sound of wire resonating against wood.

The first to arrive, Big Bill Broonzy in 1951, managed to convince many concert-goers that he was "the last American bluesman," but Lonnie Johnson's breakthrough solo set at Royal Festival Hall the following year put an end to that publicity stunt, as did appearances by Josh White, Sister Rosetta Tharpe, and Sonny Terry and Brownie McGhee. These artists became musical touchstones for Britain's first pop guitar stars, Big Jim Sullivan and Hank Marvin. Idolized by young Jimmy Page, studio legend Sullivan was steeped in records by Lead Belly and Sonny Terry and Brownie McGhee. "When I was starting the guitar," Sullivan recalled, "we used to go out on the Thames in a big riverboat with people like Sonny and Brownie and Big Bill Broonzy. They would be playing, and I'd just sit there watching them. That was the highlight for me." Instrumental star Hank Marvin, who formed the Shadows with Cliff Richard in 1958, cited Broonzy and Lead Belly as his main influences.

In Chicago, Big Bill Broonzy usually played electric guitar with a small ensemble, but in England he stuck to traditional solo acoustic blues.
Before his death in 1958, Broonzy recommended that Chris Barber bring over Muddy Waters, the reigning king of Chicago blues. Just back from a tour of raucous clubs in the American South, Muddy flew over with his pianist, Otis Spann. Unaware that Broonzy had presented himself as a country blues artist, Muddy opened his first British show with his Fender Telecaster and amp at full throttle. Aghast purists retreated from the venue. "I didn't have no idea what was going on," Waters explained to writer James Rooney. "I was touring with Chris Barber – a Dixieland band. They thought I was a Big Bill Broonzy, which I wasn't. I had my amplifier, and Spann and I was going to do a Chicago thing. We opened up in Leeds, England. I was definitely too loud for them. The next morning we were in the headlines of the paper – 'Screaming Guitar and Howling Piano.' That was when they were into the folk thing before the Rolling Stones." Muddy lowered his settings for the rest of the tour, which reportedly went well.

Greetings, my friends and fellow strummers. In this month's column I will argue that artist recognition is one of the most important aspects of guitar marketing. That is a statement I truly believe, and in this column I will trace the popularity of certain guitars and the artists that I believe are responsible for their success. I will also list some guitar players and the guitars I found to be intriguing. I will list the guitars first and the artists that were associated with them. Remember, my friends: knowing what guitars your favorite players play is part of getting a sound similar to theirs, but it is only a small part of it.

Eddie Cochran was only 21 years old when he died in an auto accident while on tour in England on April 17th, 1960.
In his brief but illustrious career Eddie recorded some of the most influential early rock and roll: tunes like Twenty Flight Rock, C'mon Everybody, Too Much Monkey Business, and Something Else. But Eddie's Summertime Blues was a monster hit. Summertime Blues was also covered by Blue Cheer (a Billboard Top 40 hit) and the Who (Live at Leeds), but neither version could match the magic and originality of Eddie's version.

This is the story of a personal journey through the world of music that begins humbly and ends just as humbly as it started. The fact that your reporter (should I say "moi"?) has experienced it at all is amazing enough, for under any other circumstances I might not have found so ripe an opportunity to learn and understand that most sensuous, invigorating, physically challenging and just plain righteous of musical instruments: the guitar.

Mike Stern is one of those lucky few: a guitarist who can do it all. Though he's known for the depth and precision of his jazzy ballads and rip-snortin' fusion instrumentals, he's equally respected for the woozy bends and woody tone of his paeans to the greats of blues and rock. Listen to any of his many excellent releases (all of which remain active in the Atlantic catalog), and you'll be caught by the power of his deceptively subtle blend.

Last month guitar legend Link Wray passed away at his Copenhagen home at the age of seventy-six. A master of raw tone and minimalist riffs, Link Wray was the great-grandfather of the power chord.

"I had my Fender Vibro-King, and stomped on all of my pedals for that solo." It peaks when Clark launches into a Chuck Berry-like lick at the 12th fret, and then starts incorporating the G at the 15th fret and the F# at the 14th fret on the high E string. "I'd been experimenting in that range," revealed Clark. "I played that lick over and over to build momentum.
We were eager to prove ourselves, and there was an overwhelming sense of 'Let's go for it!'" —Jimmy Leslie

Jimmie Vaughan (The Fabulous Thunderbirds)

The other Vaughan is as cool as the other side of the pillow, especially compared to his fire-spitting brother. They both favor Strats, but the similarities pretty much end there. Jimmie rarely plays fast or dirty, and is never flash. He mostly sticks to stabbing single notes within a traditional framework, giving them plenty of space to breathe. Jimmie Vaughan reminds us that fewer notes can certainly mean more, and his solo on the title track from the Fabulous Thunderbirds' 1986 album Tuff Enuff is a shining example. Vaughan doesn't usually do effects, but in this instance shimmering reverb and delay add remarkable depth to his sparse phrasing. It's hard to find better evidence of a pure blues solo building a perfect bridge to a crossover hit. —Jimmy Leslie

Hound Dog Taylor
"Wild About You Baby"

Famously called "The Ramones of the blues" by the Village Voice, Hound Dog Taylor and his band the House Rockers played a ferociously raw kind of boogie blues. Based on the familiar "Dust My Broom" slide riff, "Wild About You Baby" (from Hound Dog Taylor and the House Rockers) is all about a game of call-and-response between the vocals and the guitar. When the time comes for Taylor to solo, he doesn't stray far from the main riff, and his note choices are perfect examples of a solo taking the place of a vocal line. —Teja Gerken

Although he's known for his monstrous chops, Greg Koch displays tasty restraint for most of this slow blues, and the results are simply delicious.

Since his remarkable range of talent defies distinct categorization, the best in the business call upon him when they want the best: studio musician, sideman, songwriter, author, teacher, bandleader, film consultant, or to go on tour.
He also produces his own, and other artists', music.

Every step of the way, since his first solo recording in 1978, Arlen Roth has received accolades and earned the highest acclaim. Vintage Guitar magazine ranked him One of the Top 100 Influential Guitarists of the Century, and he's also on their list for having performed one of The Top 10 Greatest Recorded Guitar Sounds. He recently was a contributing artist on Guitar Harvest, a compilation release featuring world-class guitarists, voted one of the Top Ten All-Time Best Guitar Recordings by Rolling Stone magazine.

Several best-selling guitars bear his name, with three more in production: a new Arlen Roth Archtop model by Curtis Guitars; the innovative Terraplane, a steel-bodied resonator guitar inspired by the classic Hudson Terraplane car, by Simon Guitars; and Warren Guitars' popular Arlen Roth Signature model.

His first of eight solo releases, Arlen Roth: Guitarist, won the Critics Award for "Best Instrumental Album" at Montreux in 1978. Hot Pickups was No. 2 on the U.K. charts in 1979. His Toolin' Around CD (1993) has become a hard-to-find classic, and is being re-released in 2005 due to ongoing demand. On Toolin' Around, Arlen plays duets with legends such as Danny Gatton, Brian Setzer, Albert Lee, Duane Eddy, Duke Robillard, Jerry Douglas, and Sam Bush, as well as several solo pieces. A big bonus for fans is The Making of Toolin' Around, a film (recorded on DVD) available for the first time. He also contributed to Incarnation, a tribute to legendary blues man Robert Johnson, and performed on two recent Jimi Hendrix tribute CDs.

Artist: The Beatles
Who played it: George Harrison
UK chart position: 1

Why it rocks: George Harrison's genius lay in his ability to supply the perfect solo for the song.
Never flashy, Harrison’s speciality was the ‘song within a song’ style of solo that typifies his outing on one of the rare Harrison tunes that Lennon and McCartney allowed as a single. Although not technically difficult, Something’s solo features tricky bends and fretboard positions that don’t fall instantly under the fingers.\nNotice how the solo finishes by reiterating the song’s opening lick. Here’s how shredmeister Paul Gilbert describes Something: “This is my favourite ‘feel’ solo ever. George Harrison did a masterful job playing those notes, and they are STUNNING notes.”\nFind it on: Abbey Road\nDid you know? George Harrison admitted that he got the idea for Something from Apple’s first signing, singer-songwriter James Taylor. Harrison created his song’s opening line from the title of Taylor’s own song, Something In The Way She Moves.\nTrack: All Right Now\nWho played it: Paul Kossoff\nUK chart position: 1\nWhy it rocks: It’s all about the climax. Every great guitar solo should have a beginning, a middle and an end, and the classically trained Paul Kossoff understood this. The solo opens with a hammer-on from the open third string to the second fret and builds gradually up the fretboard from there. The whole solo is played on the top three strings and the majority of it on the top two. Could you be that creative and memorable with such a restricted palette? Kossoff echoes the solo’s repetitive triplet lick at his Les Paul’s 17th fret for a stunning crescendo to one of rock’s greatest ever guitar solos.\nFind it on: Fire And Water\nDid you know? 
Paul Kossoff worked as a guitar salesman for Selmer's in Charing Cross Road at the same time as another budding six-stringer, Mahavishnu Orchestra's John McLaughlin!

Track: Johnny B Goode
Artist: Chuck Berry
Who played it: Chuck Berry
UK chart position: 8

Why it rocks: Berry was the guitar star of his day and Johnny B Goode the twinkling jewel in his crown.

Monday, June 27, 2011

It's occurred to me that I could probably expand my readership here and my DJ profile in general if I would focus a bit more on a particular style. I guess that's always been one of my issues in life - I just like too many damn things! Right, so today it's a straight Blues record from a giant figure in that genre - Lightnin' Hopkins. I've been in a particularly Blues kinda mood lately since I decided to finally try to learn how to use the harmonica I've got laying around. My poor neighbors!

Peep this clip from the '67 documentary "The Blues Accordin' to Lightnin' Hopkins".

Hopkins died of esophageal cancer in Houston January 30, 1982, at the age of 69. His New York Times obituary named him as "one of the great country blues and perhaps the greatest single influence on rock guitar players."

|Some other ol' LP I got layin' around...|

I believe Jewel was active well into the '90s.

And of course, the clip!

Tuesday, June 21, 2011

Monday, June 20, 2011

Duane Eddy managed to become a guitar god in the late '50s putting together simple yet rocking little tunes like this one. Born in Corning, NY in 1938, he was already playing guitar at 5. His sonic guise was that of a twangmaster extraordinaire, playing leads on the bass strings of his guitar to great effect. Weird factoid: a 20,000-gallon water tank was used as an echo chamber on one of his early records, "Moovin' and Groovin'", to accentuate his sound.

His follow-up, "Rebel Rouser", was the big hit, getting all the way up to #6 on the charts.
Why don't we have instrumental hits anymore? Get cracking, musicians!

The man has worked with everyone from Ravi Shankar to Waylon Jennings, and he just keeps cranking out the tunes. He was also the first guitarist to have a signature model, and was inducted into the Rock and Roll Hall of Fame in '94. He's still kicking around, God bless him, doing his twang thang.

Kessel responded, "The only thing I can teach you is that there's nothing I can teach you."

This is a 1937 Epiphone Broadway, but it has a Deluxe-style neck and fingerboard. I bought it about five years ago from a guitar player named John Hannam, who lives in Oregon and hung out with Howard when he lived there. John acquired the guitar from Howard's widow, Patty, and my understanding is that Howard got the guitar from George Van Eps. I recently showed the guitar to Bob Bain, who played a lot with George Van Eps, and Bob instantly remembered George playing this guitar before George went to the seven-string. As far as I know, the guitar was built by Epiphone with this neck and blond finish. John had it refinished by a luthier named Saul Koll, who did a beautiful job. John Carruthers built and installed the floating pickup for me. The guitar sounds incredible, and plays great!

There's an old Epiphone ad that shows Howard Roberts playing what looks like this guitar. – Tim May

Spector and H.R. – Oil and Water

Holder also recalls how Roberts told him about one day becoming so aggravated after a studio date that he smashed his Martin nylon-string in the fireplace. "It was because of a Phil Spector session," Holder said.

H.R.'s relationship with the producer was indeed strained. Often, Spector's sessions called for Roberts to play a barre chord on a 12-string for hours at a time. It was typical of Spector to slavishly rehearse his musicians for hours. Consequently, Howard developed hand problems.
"I had to get a specialist from Canada to come down and straighten me out," he said.

Holder related another dissonant episode between Roberts and Spector, when the producer had a penchant for packing heat at his sessions. Roberts, an avid outdoorsman with a fervent respect for firearms, recoiled on a date when Spector's pistol fired into the ceiling. H.R. left the session, telling Spector, "I just can't do this. I can't stay here."

In my humble opinion, one of the best rock guitar players of the 1960s onward, and of all time, has passed away.

Jeff Beck had a technique that stood out, to say the least. He had the ability to play any music style with expertise, yet his signature playing was hard to duplicate by other great players. It made him a stand-alone giant!

In his later concert days, it was beautiful how he introduced many bass players on tour, especially outstanding women bass players... Tal Wilkenfeld, as one example, comes to mind.

Beck died Tuesday, January 10, 2023, at the age of 78.

Jeff will be missed by a ton of fans. Yet we've been privileged enough to have many of his recordings and videos to enjoy for the rest of the time we each remain on this planet.

Credit is given to the RollingStone.com article: Hall of Fame musician and former Yardbird guitarist [Jeff Beck] dies following a short bout with bacterial meningitis... By Daniel Kreps and Kory Grow.

As noted in this RS press release...

"Led Zeppelin's Jimmy Page, Beck's Yardbirds bandmate who inducted the guitarist into the Rock Hall in 2009, wrote on social media Wednesday, "The six stringed Warrior is no longer here for us to admire the spell he could weave around our mortal emotions. Jeff could channel music from the ethereal. His technique unique. His imaginations apparently limitless. Jeff I will miss you along with your millions of fans.
Jeff Beck Rest in Peace.”\nIt further mentioned…\nIn 2009, 17 years after Beck was inducted into the Rock and Roll Hall of Fame as a member of the Yardbirds, he delivered one of the greatest induction speeches of all time when he reentered the Rock Hall for his solo work. “Someone told me I should be proud tonight. But I’m not, because they kicked me out. They did. Fuck them,” he quipped at the 1992 ceremony. “I couldn’t believe I was even nominated,” Beck told Rolling Stone at the time. “I thought the Yardbirds was as close as I’d get to getting in. I’ve gone on long after that and gone through different musical changes. It’s very nice to hear that people have been listening.”\nJazz-rock legacy of the Mahavishnu Orchestra, John McLaughlin (age 75 at the time of this blog post) is retiring from touring.", "score": 13.897358463981183, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "Sport & Auto\n- About Future\n- Digital Future\n- Cookies Policy\n- Terms & Conditions\n- Investor Relations\n- Contact Future\nA rock ‘n’ roll pioneer and musical innovator, Holly’s rhythmic guitar style owes much to the fact that he grew up playing a banjo, evinced on galloping anthems like Not Fade Away and Peggy Sue.\nBut he was a master at sweet, minimalist leads, too, such as the one heard on Words Of Love.\nConventional in appearance, he made his mark with the then way-out-looking Stratocaster. His first record featured a mid-‘50s Strat (seen here), bought with money borrowed from his brother Larry. 
It was stolen on tour in 1957.

His last Strat, a '58 three-tone sunburst, played but an hour before he died, is currently on display at The Buddy Holly Center.

This is an excerpt from Play It Loud: An Epic History of the Style, Sound & Revolution of the Electric Guitar by Brad Tolinski and Alan di Perna.

In 1954, at the age of 41, blues musician Muddy Waters had finally made it to the top. Or so it seemed.

Hoochie Coochie Man and I Just Want to Make Love to You, released that year, became his biggest sellers and remained in the Top 10 R&B charts for more than three months. But the winds of change were blowing, and that summer blues record sales suddenly plummeted, dropping by a dramatic 25 percent and sending shock waves through the Chicago music industry.

Record execs blamed the poor economy, but a fresh and more youth-oriented brand of music was starting to sweep the airwaves, one that was built on the bones of the very R&B, blues and country that they had so carefully nurtured. As Waters would later sing, "the blues had a baby and they named it rock and roll."

In '54, Elvis Presley paired the country tune Blue Moon of Kentucky with That's All Right, a song originally performed by blues singer Arthur Crudup, and the single sold an impressive 20,000 copies. That same year, Bill Haley & His Comets sold millions with Shake, Rattle and Roll and Rock Around the Clock. In response, Leonard Chess of Chess Records sniffed out and signed two young electric guitar-playing rockers of his own.

Bo Diddley and his rectangular guitar would become an enormous commercial success; but it was Chuck Berry's crossover to the white teenage market that would make him a legend.

Charles Edward Anderson Berry, born to a middle-class family in St. Louis in 1926, was a brown-eyed charmer who loved the blues and poetry with almost equal fervor.
After winning a high school talent contest with a guitar-and-vocal rendition of Jay McShann's Confessin' the Blues, he became serious about making music and started working the local East St. Louis club scene, where he put all of his skills to good use.

While Berry excelled at playing the blues of Muddy Waters and crooning in the suave manner of Nat King Cole, it was his ability to play "white music" that would eventually make him a star.

"The music played most around St. Louis was country-western and swing," Berry said in his autobiography.

On the album Hello Dolly, Louis says "Take it, Big Chief," as Moore launches into a trombone solo of the song "Someday You'll Be Sorry."

He freelanced when the orchestra broke up, performing at venues such as Jimmy Ryan's in New York City and throughout Europe. During the 1970s, he led his own Dixieland band and recorded his first albums, Russell 'Big Chief' Moore Powwow Jazz Band (1973) and 'Big Chief' Russell Moore and Joe Licari with the Galvanized Jazz Band (1976).

Moore has played the International Jazz Festival in Paris, France, the Kennedy Center for the Arts in New York City, and many places in between. He has performed at inaugural balls for Presidents Kennedy, Johnson and Nixon, and has played one of the receptions for the marriage of Prince Charles and Diana. One of his favorite venues, however, was at the Gila River Indian Community, where he frequently gave concerts. In March 1982, Moore was honored on the broadcast "First Americans in the Arts." He died from diabetes in 1983 at the age of 70.

Duane Eddy, Lee Hazlewood, and Sanford Clark

Duane Eddy put the twang in rock and roll. His signature sound brought a new dimension to rock music and popularized the electric guitar. He is considered the most successful instrumentalist in rock history, influencing future rock artists throughout the world.
The Rock and Roll Hall of Fame, which inducted Eddy in 1994, refers to Eddy as “one of the earliest guitar heroes.”\nEddy (born in 1938) moved to Phoenix from Corning, New York with his family when he was a teenager. A guitar player since age five (he was enthralled with cowboy songs), Eddy dropped out of high school at sixteen and began playing with Al Casey in 1955. Eddy started experimenting with “twang” by playing lead on the bass strings of his guitar, a Chet Atkins-model Gretsch 6120 hollowbody. This sound matured into unabashed, gut-pulling reverberation when Eddy met Lee Hazlewood in 1957. Hazlewood, a Phoenix disk jockey and songwriter, helped Eddy develop low, haunting sounds by picking single-note melodies on the low strings of his guitar.", "score": 11.600539066098397, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "Guitarist Jim Hall, an understated yet profoundly influential presence on jazz guitar, died in his sleep Tuesday morning at his New York City home. He was 83.\nHall, whose career began in the '50s as part of the West Coast jazz scene with Jimmy Giuffre and Chico Hamilton, recorded with a wealth of jazz royalty over his career, including Ben Webster, Ella Fitzgerald, Bill Evans and Sonny Rollins, who worked with Hall on his landmark 1962 album \"The Bridge\" as well as his celebrated 2011 live release, \"Road Shows Vol 2.\"\nThe guitarist had led his own trio since the '60s, and continued to maintain a busy recording and touring schedule.
He appeared at this year's Newport Jazz Festival (joined by fellow guitarist Julian Lage) and was reportedly planning a duo tour of Japan with his frequent collaborator Ron Carter for January 2014.\nHis memorable recent recordings include 2008's \"Hemispheres,\" a lush collaboration with Bill Frisell, and the 2010 album \"Conversations,\" recorded with drummer Joey Baron.\nAvant garde-leaning guitarist Nels Cline, writing in JazzTimes in 2011, described Hall as \"a paragon of taste, tone, creativity and inventiveness. These qualities have kept Hall’s music pushing forward: never stale, often surprising.\"\nTributes and remembrances have been flowing through Twitter much of the day. Saxophonist Charles Lloyd described Hall's death as \"A personal loss of a kindred spirit whose sensitivity and humor buoyed my spirit,\" and local guitarist Anthony Wilson wrote, \"You were all inspiration & elegance. A true master.\"\nThere is a wealth of recordings available that act as prime examples of Hall's mastery, but below you can hear his performance with Rollins on \"Without a Song\" from \"The Bridge.\"\nA full obituary will appear at www.latimes.com/obituaries.\nTwitter: @chrisbarton\nCopyright © 2015, Los Angeles Times", "score": 11.600539066098397, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "Electric Guitar Heroes\nOklahoma and the enhancement of an instrument\nBob Dunn, who toured between the World Wars with the Panhandle Cowboys and Indians, was an early adopter and virtuoso of the electric steel guitar. His Jan. 27, 1935, studio recording of the tune “Taking Off” with Milton Brown and his Musical Brownies is considered the first recording to use an electrically amplified instrument.
Vintage Guitar magazine credits him as “the amplified guitar’s first stylist,” noting that, “In a way, Dunn was a sort of Jeff Beck of the steel guitar; his solos were often otherworldly, with cascades of arpeggios, jarring staccato notes and Hawaiian chime effects blasting through the mix of instruments.”\nTHE CLAIM: Robert Lee Dunn, born Feb. 8, 1908, in Braggs, Okla., invented the electric steel guitar.\nTHE SOURCE: The Encyclopedia of Oklahoma History and Culture, published by the Oklahoma Historical Society.\nTHE TRUTH: Dunn did not, however, invent the instrument. Credit goes to Texas-born George Delmetia Beauchamp — a vaudeville performer who experimented with ways to have his guitar heard above an orchestra — for creating the first string-driven, electro- magnetic guitar. Brad Tolinski and Alan di Perna, authors of the 2016 Random House book Play It Loud: An Epic History of the Style, Sound and Revolution of the Electric Guitar, note that the Beauchamp design became the basis of the first commercially produced electric guitar, the RO-PAT-IN A-25 “Frying Pan,” which hit the market in 1932. RO-PAT-IN (inspired by the term ElectRO-PAtent-INstruments) would evolve into Rickenbacker International, which continues manufacturing instruments today.\nIn an unexpected twist, the first documented public performance of an electric guitar does have an Oklahoma connection. According to Guitar Aficionado magazine, the first-ever concert to feature the instrument occurred in Kansas on Oct. 31, 1932, at Wichita’s Shadowland Pavilion. The performer, guitarist Gage Kelso Brewer, was born in 1904 in Gage, Oklahoma Territory.\nBrewer’s pioneering performance featured two RO-PAT-IN prototypes – the electric Spanish prototype and the A-25 Frying Pan – that he had obtained from his friend George Beauchamp. 
An Oct.", "score": 10.648241559563893, "rank": 89}, {"document_id": "doc-::chunk-4", "d_text": "Rene Hall also cut a solo single for Del-Fi, The Untouchables, a pretty good record but lacking the fire of the Valens and Romero discs.\nAll through the late fifties Rene Hall kept busy free lancing, he did arrangements for Patience and Prudence, Jan & Arnie (Gas Money), Bumble B. & the Stingers (Nut Rocker) and others. As a guitarist he showed up on all of Googie Rene's Class sides including this killer that Bob Quine turned me on to-- Side Tracked. One of my favorite discs to feature a Rene Hall guitar solo is this raucous piece of slop by Earl Palmer & the Partytimers with the Jayhawks-- Johnny's House Party Part One which appeared on Aladdin around '58. Everything about this record is great, in fact they all sound drunk, but it's the guitar solo that gives it the extra push over the edge into what we can call genius.\nRene Hall would spend the early sixties doing all sorts of studio work, mostly as an arranger but his main meal ticket was Sam Cooke. As an arranger his greatest moment was probably A Change Gonna Come, Cooke's last and greatest record. When Sam Cooke died he went back to free lancing, he never lacked for arranging work. He even returned to Specialty to play bass on a Little Richard session (Bama Lama Loo b/w Annie's back, which also featured Don and Dewey on guitars). In the early 70's he signed on as Marvin Gaye's musical director, working on all of Gaye's classic hits-- Let's Get It On, What's Goin' On, etc. When Gaye died he found himself one of the most in demand arrangers in the business and worked constantly until his death in 1988.\nIt's not like Rene Hall was unsung in the industry, he was a highly paid professional, and a successful one at that. Rock'n'roll guitar playing was only a small part of his career, but one that should surely be acknowledged since he was so brilliant at it. 
So I guess it's up to me, since nobody else seems to give a hoot. Rene Hall-- I salute you.\nPOSTED BY THE HOUND AT 11:19 PM\nLABELS: CHAN ROMERO, EARL PALMER, RENE HALL, RITCHIE VALENS", "score": 8.086131989696522, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "We were saddened to hear of the passing of the great guitarist Johnny Smith the other day and he will be greatly missed by the worldwide guitar community. Smith was a highly diverse musician and was equally at home playing jazz as he was sight-reading complex scores with the New York Philharmonic.\nHis unique guitar playing was characterised by intricate close position chord voicings and rapidly ascending melodic lines which drew much praise and admiration from fellow guitarists. His most famous and critically acclaimed album was Moonlight in Vermont (one of Down Beat magazine’s top two jazz records for 1952, also featuring saxophonist Stan Getz).\nHe also wrote the popular composition “Walk Don’t Run“, which was written for a 1954 recording session as a counter melody to the harmony of the famous jazz standard “Softly, As in the Morning Sunrise”. Chet Atkins also covered this song.\nA group of musicians who eventually became The Ventures heard the Chet Atkins version, simplified it for their own sound [and sped it up] and eventually recorded it in 1960. The Ventures’ version went to No. 
2 on the Billboard Top 100 for a week in September of that year.\nSmith moved from the public eye in the 1960’s to concentrate on teaching and to run a music store, however he remained as one of the most influential masters of ‘Cool Jazz’ inspiring a whole new generation of guitarists with his fluid and intricate playing.", "score": 8.086131989696522, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "Guy interacting with the audience in a live performance\n|Birth name||George Guy|\nJuly 30, 1936|\nLettsworth, Louisiana, US\n|Genres||Blues, Chicago blues, electric blues, blues rock|\n|Labels||RCA, Cobra, Chess, Delmark, Silvertone, MCA, Atlantic, MPS, Charly, Zomba Group, Jive, Vanguard, JSP, Rhino Records, Purple Pyramid, Flyright, AIM Recording Co., Alligator Records, Blues Ball Records|\nSonny Boy Williamson II\nStevie Ray Vaughan\nThe Damn Right Blues Band\n|Fender Buddy Guy Signature Stratocaster|\nGeorge \"Buddy\" Guy (born July 30, 1936) is an American blues guitarist and singer. He is an exponent of Chicago blues and has influenced guitarists including Jimi Hendrix, Eric Clapton, Jimmy Page, Keith Richards, Jeff Beck, John Mayer and Stevie Ray Vaughan. In the 1960s, Guy played with Muddy Waters as a house guitarist at Chess Records and began a musical partnership with the harmonica player Junior Wells.\nGuy was ranked 30th in Rolling Stone magazine's 100 Greatest Guitarists of All Time. His song \"Stone Crazy\" was ranked 78th in Rolling Stone's list of the 100 Greatest Guitar Songs of All Time. Clapton once described him as \"the best guitar player alive\".\nGuy was born and raised in Lettsworth, Louisiana. He began learning to play the guitar using a two-string diddley bow he made. Later he was given a Harmony acoustic guitar, which, decades later in Guy's lengthy career, was donated to the Rock and Roll Hall of Fame.\nSoon after moving to Chicago on September 25, 1957, Guy fell under the influence of Muddy Waters. 
In 1958, a competition with West Side guitarists Magic Sam and Otis Rush gave Guy a record contract. Soon afterwards he recorded for Cobra Records. He recorded sessions with Junior Wells for Delmark Records under the pseudonym Friendly Chap in 1965 and 1966.\nGuy’s early career was impeded by conservative business choices made by his record company, Chess Records. Chess, his record label from 1959 to 1968, refused to record Guy playing in the novel style of his live shows. Leonard Chess, Chess Records founder, denounced Guy’s playing as \"noise\".", "score": 8.086131989696522, "rank": 92}, {"document_id": "doc-::chunk-1", "d_text": "Allison became one of the most sought-after drummers and as a session musician played on recordings with Bobby Vee, Johnny Burnette, Eddie Cochran, Johnny Rivers, Waylon Jennings, Paul McCartney and many others. His drumming style can be heard on Buddy Holly’s “Peggy Sue” and The Everly Brothers’ “Till I Kissed You”. Phil Everly once spoke of Allison as “the most creative drummer in rock and roll.” On Rolling Stone magazine’s “Book Of Lists” J.I. Allison is rated as one of the top three drummers in rock and roll history. Allison also wrote, or co-wrote, “That’ll Be The Day”, “Peggy Sue” and “More Than I Can Say”. In 2013 J.I. Allison House opened in Lubbock, Texas, in tribute to Allison’s legacy of music-making.\nBassist Joe B. Mauldin is widely regarded in the rock music industry as one of the top rock ‘n roll bass players. He was a recording engineer at Gold Star Studios in LA. He was recording engineer for many stars including Herb Alpert & The Tijuana Brass, Phil Spector, Leon Russell and Maureen McGovern. Mauldin wrote a number of songs recorded by Buddy Holly and The Crickets including “I’m Gonna Love You Too” and “Well All Right”. He has toured with The Everly Brothers, Johnny Burnette and Waylon Jennings.\nSonny Curtis, also a native Texan, played lead guitar on Buddy Holly’s first Decca sessions.
Curtis was born in a dugout in 1937 in Meadow, Texas. His parents were cotton farmers contending with the Dust Bowl of the Great Depression. He was a teenage pal and lead guitarist with Buddy Holly in Lubbock, Texas, in a pre-Crickets band called The Three Tunes. Sonny is his actual first name, not a nickname. His fluid guitar playing style was a major influence on Waylon Jennings. In addition to his work with the Crickets, Sonny has enjoyed enormous success as a solo recording artist and as one of Nashville’s most respected songwriters. His songs have been recorded by artists from Bing Crosby to the Bear on the Andy Williams Show.", "score": 8.086131989696522, "rank": 93}]} {"qid": 12, "question_text": "What are the target timelines for infant hearing screening and intervention in Washington state?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "Data Management and Quality Assurance\nManaging your program's data is crucial to ensuring the success and quality of your newborn hearing screening program. A good data management system will ensure that no babies are lost to follow-up, as well as help ensure the quality of your program.\nStatewide tracking and surveillance\nThe Washington State Department of Health (DOH), in cooperation with the Centers for Disease Control and Prevention (CDC), developed a statewide EHDDI Tracking and Surveillance system linked to the metabolic screening system. The tracking and surveillance system helps to ensure that all babies are screened for hearing loss prior to one month of age, those that are referred receive diagnostic audiological evaluation by 3 months of age, and those that are diagnosed with hearing loss receive early intervention by 6 months of age.\nWhile Washington state does not mandate universal newborn hearing screening, about 96% of infants are screened before hospital discharge. 
All birthing hospitals (except two military facilities) and many clinics in Washington report newborn hearing screening results and diagnostic results to EHDDI.\nThe EHDDI program collaborates with the Early Support for Infants and Toddlers (ESIT) program to determine if infants with hearing loss receive appropriate and timely early intervention services. The ESIT program is Washington state's Part C program that provides services to children birth to 3 years of age who have disabilities, including hearing loss. For updates on this project, please see the EHDDI program website and the ESIT website .\nFor more information about this project , please contact:\nWashington State Department of Health\n1610 NE 150th St.\nShoreline, WA 98155\nPhone: 206-418-5613 or 888-WAEHDDI (toll-free)\nData management for your hospital\nIt is important to have a system in place within your hospital to internally manage the data collected from your newborn hearing screening program. This will help you manage your workload and make sure that no babies are missed.\nMany larger programs prefer to purchase one of the commercially available software programs, while smaller programs often find simpler, less expensive options sufficient.\nWe recommend that all hospitals, regardless of size, keep a log sheet or book to manage daily screening needs. This helps ensure that all babies get screened prior to discharge.\nSample spreadsheet for data management (XLS)\nBefore choosing a data management option, you should consider what data you want to monitor and how much you want the system to do for you.", "score": 53.08748889459845, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "EHDI National Goals\nNational Goals, Program Objectives, and Performance Measures for the Early Hearing Detection and Intervention (EHDI) Tracking and Surveillance System\nGoal 4. 
All infants and children with late onset, progressive or acquired hearing loss will be identified at the earliest possible time.\n|Program Objectives||Performance Indicators|\n|4.1 Risk factors: Each hospital, audiologist, and other provider will identify infants with risk factors for hearing loss and transmit the information to the state.|\na. Number and percent of infants with one or more risk factors.\n|4.2 Monitoring of at-risk infants. Each state will have a mechanism in place to monitor the hearing status of infants at risk for late onset and progressive hearing loss.|\na. Number and percent of infants with risk factors who are re-screened by 6 months.\n|4.3 Acquired hearing loss. Each state will have a mechanism in place to identify and provide follow-up services for infants and children with acquired hearing loss.|\na. Number and percent of infants and children identified with acquired hearing loss.\n- Centers for Disease Control and Prevention\nNational Center on Birth Defects and Developmental Disabilities\nHearing Loss Team\n1600 Clifton Road\nAtlanta, GA 30333\nTTY: (888) 232-6348\n- Contact CDC-INFO", "score": 49.24649019099454, "rank": 2}, {"document_id": "doc-::chunk-1", "d_text": "Audiological or medical/surgical management, educational and (re)habilitation methods, and child and family support are available strategies for subjects with confirmed hearing loss. Recognised benefits of UNHS are better language outcomes at school age and improved long-term language development [13, 14].\nSince 1999, process and outcome performance indicators and benchmarks were established for Early Hearing Detection and Intervention (EHDI) programs (i.e., identification before 3 months of age and intervention by 6 months of age) to evaluate progress and determine consistency and stability [16, 17].
In 2007 the JCIH recommended timely and accurate monitoring of relevant quality measures, based on its reviewed performance indicators and benchmarks, as an essential practice for inter-program comparison and continuous quality improvement.\nWith the aim of verifying whether the literature reporting experiences on hospital-based UNHS programs includes sufficient information to allow inter-program comparisons according to the already available indicators/benchmarks defined by the AAP and JCIH, we performed a systematic review. We found that not all studies reported all the data necessary for calculating the complete proposed set of quality indicators, and that when comparing available data on indicators with corresponding benchmarks, the full achievement of all the recommended targets is an open challenge. We also found substantial heterogeneity in terms of extent of hearing loss (hearing threshold, uni- vs. bilateral hearing loss), criteria for identification of neonates at higher risk of hearing loss, screening tests used, personnel performing the tests, and testing environment.\nIn order to overcome these heterogeneous findings we think it is necessary to optimize the implementation of Universal Newborn Hearing Screening programs with an appropriate application of the planning, executing, and monitoring, verifying and reporting phases. For this reason we propose a conceptual framework that logically integrates these three phases and, consequently, a tool (a checklist) for their rationalization and standardization.\nDiscussion - The conceptual framework for rationalized and standardized UNHS programs\nA planning phase based on indications from guidelines and recommendations, specificities of the local context, benchmarks, and reports from the verification phase. The Deliver the protocol Action is activated, apart from the first instance, if the benchmarks are not achieved, if guidelines/recommendations are updated, or if the context specificities change.
The output of this phase is the protocol for UNHS execution.\nAn executing phase where the protocol is applied and where data should be generated and managed for monitoring. The outputs of this phase are the raw data for process monitoring.", "score": 46.31872701827023, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Am Fam Physician. 1999 Sep 1;60(3):1020.\nThe Task Force on Newborn and Infant Hearing of the American Academy of Pediatrics (AAP) has issued a new policy statement recommending development of universal newborn hearing screening programs nationwide. The statement, published in the February 1999 issue of Pediatrics, discusses the justifications for universal hearing screening, tracking and follow-up, identification and intervention, evaluation, and other recommendations and issues. The AAP establishes parameters with the goal of ensuring that all newborns with hearing loss be identified.\n“Significant hearing loss is one of the most common major abnormalities present at birth and, if undetected, will impede speech, language and cognitive development,” according to the policy statement. The average age for detection of significant hearing loss in the United States is less than 14 months, but the AAP would like to see all newborns screened before they leave the hospital so that treatment can be initiated before these infants are six months old. The policy states that about one to three per 1,000 infants in well-baby nurseries have significant loss of hearing in both ears, and about two to four per 100 infants in the intensive care unit have hearing loss.\nThe following five essential elements for an effective screening program were identified by the AAP: initial screening with a test having a high degree of sensitivity and specificity; tracking and follow-up; identification; intervention; and evaluation. Guidelines for each of the five essential elements are discussed in the statement. 
The following guidelines have been excerpted from the section of the guidelines that discusses identification and intervention in a universal neonatal hearing screening program:\nThe goal of universal screening is that 100 percent of infants with significant congenital hearing loss be identified by three months of age, and appropriate and necessary intervention be initiated by six months of age.\nAppropriate and necessary care for the infant with significant hearing loss should be directed and coordinated by the child's physician within the medical home, with support from appropriate ancillary services.\nA regionalized approach to identification and intervention for infants with significant hearing loss is essential, ensuring access for all children with significant hearing loss to appropriate expert services. The AAP recognizes that professionals with competency in this area may not be available in every community. The child's physician, working with the state department of health, must ensure that every infant with significant hearing loss is referred to the appropriate professionals within the regionalized system.", "score": 46.020208428775405, "rank": 4}, {"document_id": "doc-::chunk-1", "d_text": "The United States Preventive Services Task Force has acknowledged the significant effect that congenital hearing loss has on communication skills, psychosocial development, and educational progress, and has found that early detection of hearing loss improves language development.2 Others also have confirmed that language skills are closely linked to early identification of hearing loss3 and utilization of early intervention services.4 5 Children with congenital hearing loss who are identified and receive intervention no later than 6 months of age perform up to 40 percentile points higher on expressive language measures and interpersonal adjustment within the school setting.6-9 Such delays may lead to adulthood challenges in education and employment.10 
Mandatory infant hearing screening has been recommended by the National Institutes of Health,11 the Joint Committee on Infant Hearing (JCIH),12 and the American Academy of Pediatrics15 in order to initiate the process of hearing loss identification, and this screening has been implemented in most states. Newborns who fail their hearing screening, as well as high-risk children, undergo an outpatient audiological diagnostic assessment that may take several outpatient encounters in order to obtain a definitive diagnosis. Appropriate follow-up through diagnostic and intervention services for children who do not pass a hearing screening or who are diagnosed with hearing loss has become a major national healthcare concern. Because of disparities in diagnostic and intervention services, children in some socioeconomic groups are at high risk of becoming lost to follow-up.16-18 Patients in rural areas face additional access-to-care barriers that compound these concerns. According to a recent economic statement,19 85 of Kentucky’s 120 counties are considered rural and approximately 1.8 million people live in these counties. Furthermore, the Appalachian region of Kentucky, which encompasses the eastern and south central portion of the state, is considered to be mostly rural based on the 2003 United States Department of Agriculture Rural-Urban Continuum Coding system20 (Beale codes). This Appalachian region is acknowledged nationally as suffering from extreme health disparities and is underserved in healthcare services. The 54 Appalachian counties in Kentucky are plagued by poverty, unemployment, and a shortage of healthcare.
Considering the barriers to any type of care in Appalachia, you will find multiple points in the diagnostic and treatment algorithm at which children with hearing loss potentially can be lost to follow-up or have a dramatic delay in receiving timely intervention.", "score": 45.26890340065038, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "How is a hearing test for a baby done?\nIt is done in the hospital before the baby is discharged home, at the baby’s bedside, in a quiet setting, using a machine to perform a hearing screening. Some leads will be placed at the baby’s forehead and behind the ears, as well as earplugs in both ears for the screening. It will take 15 to 20 minutes to test both ears, and the results are either “Pass” or “Refer”. For babies with the “Refer” result, their parents will be given an appointment date to bring them in for an audiological diagnostic assessment. The appointment date should be within 1 month, or no later than 3 months of age.", "score": 45.0454053252104, "rank": 6}, {"document_id": "doc-::chunk-1", "d_text": "The instrument is automated and provides a pass-fail report; no test interpretation by an audiologist is required. 
Because motion artifact interferes with test results, ABR is best performed in infants and children who are sleeping or, if necessary, sedated.”\nWhy Is It Important to Have Your Baby’s Hearing Tested?\nBecause the ability to hear is closely tied to language development, spoken language skills, and social-emotional development, it is extremely important to screen for hearing loss at different stages of your child’s life.\nHearing screening should ideally occur before your baby is discharged from the hospital or birthing center, or as soon as possible before one month of age.\nTypes of Newborn Hearing Screens\nDid you know there are different screening methods to determine what your baby hears?\nSome of the most common newborn screening tests include:\n- Automated Auditory Brainstem Response or Auditory Brainstem Response (AABR). This screening method measures responses from the inner ear as well as the auditory brainstem, testing as sound travels up to the brain. Small earphones are placed over the baby’s ears, while sticker electrodes placed around the ears or on the baby’s head measure whether there is a response from the baby’s ear and brainstem. If your baby passes this screen, a response from both areas will be noted.\n- Auditory Evoked Response (AER). According to a 2015 review by Paulraj and colleagues in The Open Biomedical Engineering Journal (https://doi.org/10.2174/1874120701509010017), a baby’s hearing test using this method involves the use of an electroencephalogram (EEG). The authors report that “EEG-based hearing threshold level determination is most suitable for babies and persons who lack verbal communication and behavioral response to sound stimulation. AER reflects the auditory ability of an individual.”\n- Otoacoustic Emissions (OAE).
This diagnostic test measures part of the inner ear’s response to sound and is appropriate for babies and children. Soft conductor tips placed in the baby’s ear canal emit quiet clicks while a computer records a soft echo from the ear.", "score": 40.75119730432818, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "The Maternal Exit Survey and the Maternal CATI Interview will address the following research questions: (1) What are the factors that impede or enable families to follow-up for early hearing evaluation and intervention; (2) What EHDI strategies implemented by hospitals appear to be most successful in reducing loss to follow-up; and (3) Is loss to follow-up associated with maternal characteristics such as parity, age or ethnicity? Both surveys will be available in English and Spanish.\nHearing loss is the most common disorder that can be detected through newborn screening programs. Prior to the implementation of newborn hearing screening, children with hearing loss typically were not identified until 2 to 3 years of age. This is well beyond the period of early language development. Now, with comprehensive EHDI programs, the average age of identification of children with hearing loss has been reduced so that it is now possible to provide interventions for children younger than one year of age. With early identification, children with hearing loss can begin receiving appropriate intervention services that provide the best opportunity for these children to reach their maximum potential in such areas as language, communication, social and emotional development, and school achievement.\nNewborn hearing screening is only the first step in the identification of children with hearing loss. Children who do not pass their screening need to be further evaluated to determine if they have hearing loss. The value of newborn hearing screening cannot be realized unless children complete the screening, evaluation, and intervention process. 
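The screening-evaluation-intervention sequence just described follows the "1-3-6" timeline cited earlier in these materials (screening by about one month of age, diagnostic evaluation by three months, intervention by six months). As a rough illustration, a tracking program could compute per-infant deadlines as sketched below; the function names and the month-to-days approximation are assumptions for this sketch, not part of any cited program.

```python
from datetime import date, timedelta

# "1-3-6" milestones as described in the text: screening by 1 month,
# diagnostic evaluation by 3 months, intervention by 6 months.
# (The 30.44-days-per-month approximation is an illustrative assumption.)
MILESTONE_MONTHS = {"screening": 1, "diagnosis": 3, "intervention": 6}

def milestone_deadlines(birth_date):
    """Approximate calendar deadline for each milestone."""
    return {
        step: birth_date + timedelta(days=round(30.44 * months))
        for step, months in MILESTONE_MONTHS.items()
    }

def overdue_steps(birth_date, completed, today):
    """Milestones not yet completed whose deadline has passed."""
    return [
        step
        for step, due in milestone_deadlines(birth_date).items()
        if step not in completed and today > due
    ]
```

Given a birth date and the set of milestones already completed, `overdue_steps` returns the steps a tracking program would flag for follow-up, which is the kind of per-infant check a log sheet or spreadsheet supports manually.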
Since recent data indicate that nearly 40 percent of children do not complete the evaluation-intervention process, this project is designed to understand what barriers exist in following through with evaluation and intervention. This evaluation also plans to provide data necessary to develop innovative solutions that can be applied by states, hospitals, and local programs. Results from this collection have the potential to strengthen the EHDI process and minimize social and economic disability among persons born with hearing loss.\nBy evaluating the policy, structural, personal, and financial factors and barriers associated with loss to follow-up in the EHDI program, this study seeks to identify “best practices” for improving detection, referral to evaluation and intervention, and adherence to intervention. CDC's plan to publish data and results from this evaluation will help state health officials, other Federal agencies, and other stakeholders to improve the EHDI process, providing direct benefit to infants with hearing loss and their families. The total estimated burden hours are 940.", "score": 40.5390603306156, "rank": 8}, {"document_id": "doc-::chunk-4", "d_text": "The surveillance program for early identification of infants and children with late-onset hearing loss (especially in the presence of high-risk factors) – It is recommended to perform regular surveillance of developmental milestones, auditory skills, parental concerns, and middle-ear status for all infants, together with an objective standardized screening of global development, at 9, 18, and 24 to 30 months of age, or at any time if the health care professional or family has concern.
The cooperation among all the involved operators, services and institutions – Identification of the key roles is an essential step for appropriate management of the entire process and for monitoring purposes.
Monitoring, verifying and reporting
Universality – completeness of universality in both recruitment and follow-up phases;
Timely detection – specification of follow-up deadlines for identification and intervention, and determination of the observed prevalence;
Overreferral – efficient use of highly specialized care.
Verification of program performance
For each reported indicator, a reference benchmark has been identified; these benchmarks represent a consensus of expert opinion in the field of newborn hearing screening and intervention, and are the minimal requirements that should be attained by high-quality EHDI programs. Frequent measures of quality permit prompt recognition and correction of any unstable component of the EHDI process. The JCIH recommends timely and accurate monitoring of relevant quality measures for inter-program comparison and continuous quality improvement.
Reporting of process indicators
This is the output of the Monitor, Verify and Report Activity, and it is used to make process results explicit and, as previously reported, to serve as the basis for possible re-planning and/or reorganization.
The proposed framework has been used as conceptual guidance for building a checklist (Additional file 2) intended to support UNHS program coordinators in planning, monitoring, reporting and verification.
As reported in the 2007 JCIH Position Statement, regular measurement of performance and routine monitoring of indicators are recommended for inter-programme comparison and continuous quality improvement. With the aim of achieving high-quality UNHS programs by meeting AAP and JCIH quality benchmarks, our work proposes a conceptual framework and a checklist.
The former is a way to optimize, rationalise and standardise the implementation of UNHS programs by considering all the relevant phases: planning, executing, and monitoring, verifying and reporting. The latter allows an inter-program comparison by removing heterogeneity in process description and assessment.
The paper is a contribution toward standardisation in reporting UNHS experiences, which may favour the emergence of best practices.", "score": 39.09667096763391, "rank": 9}, {"document_id": "doc-::chunk-1", "d_text": "They also recommend that such screening be performed in all birthing hospitals and coordinated by state departments of health.
How are infants screened?
Hearing tests can be performed while the baby is asleep or quiet and do not require the infant's participation. Sounds (tones or clicks) are played through small earphones and responses to the sounds are automatically measured by electrodes or a probe microphone. Tests are quick, painless, and non-invasive.
- Alexander Graham Bell Association for the Deaf and Hard of Hearing
- American Academy of Audiology
- American Academy of Pediatrics - Early Hearing Detection and Intervention (EHDI)
- American Society for Deaf Children
- American Speech Language and Hearing Association (ASHA)
- Boys Town National Research Hospital
- Center for Disease Control: Early Hearing Detection and Intervention
- Hands and Voices
- Hearing Loss Association of America
- Joint Committee on Infant Hearing
- National Association of the Deaf
- National Center for Hearing Assessment and Management (NCHAM)
- National Institute for Deafness and other Communication Disorders
This is an excellent site with valuable information for parents. They include an unbiased look at communication options for children who are deaf or hard of hearing.", "score": 38.32941753937607, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "The first three years of life are critical for a child to develop speech and language skills.
For effective language and communication development, a child needs to hear normally. Any hearing impairment should be identified and managed as early as possible.
With modern technologies such as brainstem evoked response audiometry, it is possible to identify hearing loss within days of birth.
A newborn meeting any of the following criteria should undergo a hearing evaluation:
1. Parental concern about hearing levels or speech delay in their child
2. Family history of hearing loss
3. History of in-utero (cytomegalovirus, rubella or syphilis) or postnatal infections (meningitis)
4. Low birth weight
5. Hyperbilirubinemia
6. Craniofacial deformities or certain syndromes
7. Head injury
8. Recurrent or persistent otitis media with effusion
9. Exposure to ototoxic drugs", "score": 37.31229485706153, "rank": 11}, {"document_id": "doc-::chunk-2", "d_text": "If there is notable hearing loss, no echo or a reduced echo from the ear will be noted.
As your child grows, different methods of hearing tests may be used to determine normal hearing development.
How to Tell if a Baby Needs a Hearing Test
According to the American Academy of Pediatrics (AAP), “the goal is for all babies to have a newborn hearing screening by one month of age, ideally before they go home from the hospital; identified by three months of age and enrolled in early intervention or treatment, if identified as deaf or hard of hearing, by six months of age.”
But how can you tell if your new baby is experiencing hearing loss?
The AAP and the American Academy of Audiology recommend seeking a physician’s advice if your child:
- Has a family history or risk factors of childhood hearing loss.
- Had a neonatal intensive care unit (NICU) stay of longer than five days.
- Received intravenous (IV) administration of antibiotics known to cause damage to hearing and/or balance organs.
- Was exposed to infections that occurred before or after birth, such as herpes, rubella, syphilis, toxoplasmosis, or viral or bacterial meningitis.
- Experienced head injury or birth trauma, resulting in damage to the auditory nerve.
- Doesn’t startle at loud noises by one month or turn toward sounds by three to four months of age.
- Doesn’t notice you until they see you.
- Concentrates on vibrating noises more than other types of sounds.
- Doesn’t seem to enjoy being read to.
- Is slow to begin talking, hard to understand, or doesn’t say single words such as “dada” or “mama” by 12 to 15 months of age.
- Doesn’t always respond when called, especially from another room.
- Seems to hear some sounds but not others. (Some hearing loss affects only high-pitched sounds; some children have hearing loss in only one ear.)
- Has trouble holding their head steady or is slow to sit or walk unsupported. (In some children with hearing loss, the part of the inner ear that provides information about balance and movement of the head is also damaged.)
- Wants the TV volume louder than other members of the family.
Hearing loss can develop over time, even if your baby initially passed their newborn hearing test: it is recommended that children have their hearing screened at different points as they grow.
Who Performs a Baby Hearing Test?", "score": 37.043191871359504, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "Washington State law (WAC 246-760) requires schools to conduct auditory and visual screenings of children each year.
All students in kindergarten through third grade, fifth and seventh grade are screened. If your child is not scheduled to be screened this year, you may request a screening if you have concerns by contacting your child’s teacher or emailing firstname.lastname@example.org, including your child’s name, school, grade level and teacher.
Parents who DO NOT want their child screened for vision and/or hearing will need to send a letter to their child’s school each year indicating their child is to be excluded. Please contact the Health Services office at 360-473-1073 if you have any questions. Additional information regarding the vision and hearing screening process is available here.", "score": 35.70951914665566, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "How Do Doctors Test a Baby’s Hearing?
- A baby’s hearing screening is relatively simple and is important to repeat at different points throughout childhood.
- Symptoms of hearing loss should be promptly reported to your child’s pediatrician.
- Intervention services are available for children who are born deaf or are developing hearing loss.
Though it is rare, babies can be born with a detectable level of hearing loss. A baby’s hearing loss can occur from a variety of conditions, some of which can damage the hearing nerve. According to the National Institute on Deafness and Other Communication Disorders (NIDCD), only 0.2 to 0.3% of babies are born with congenital hearing loss.
Because the most critical time for children to learn language is in the first three years of life, it’s very important to correctly identify babies who have affected hearing.
The CDC recommends that the best way to identify childhood hearing loss is to screen newborns through a universal hearing screening process.
Fortunately, testing a baby’s hearing may seem more complicated than it really is!
Screening newborns for hearing loss is probably one of the easiest tests your baby will ever experience.
Baby Hearing Test: What Parents Need to Know
A hearing screening may sound vague or confusing, but it is actually a simple process to determine typical hearing patterns in a newborn.
What Is a Baby Hearing Test?
Newborn hearing screening, or early hearing detection, is a simple method of passing soft sounds or clicks through your baby’s ear and measuring the brain’s response. Referred to as an automated auditory brainstem response (ABR), the test is easy, painless, and generally over after just a few minutes.
Most babies sleep through their hearing screening, which is helpful in providing accurate results since movement can interfere with the recording. As referenced in the American Family Physician Journal (Bush, 2003), the American Academy of Pediatrics (AAP) recommends that babies be asleep or sedated for their hearing exam:
“The automated auditory brainstem response (ABR) is one objective means of evaluating hearing. It is currently used in many newborn-screening programs, but can be used in children of any age.", "score": 35.311749613553204, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "About 1 in 1000 children are born each year with hearing impairment sufficiently severe to compromise speech and language development and communication. There has been much work in recent years to reduce the age of diagnosis and intervention for these children. The paper by Pimperton et al1 provides important evidence to support the observations of those working clinically with these children, that early identification and habilitation of significant hearing impairment in children pays dividends in terms of education.
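The automated ABR decision described above works by averaging many click-evoked sweeps: uncorrelated background noise cancels out, while the repeatable evoked response survives, and a pass/refer decision is made against a detection threshold. A minimal sketch with synthetic data; the sweep count, response window, amplitudes, and threshold below are illustrative assumptions, not clinical parameters:

```python
import random

random.seed(0)

def average_sweeps(n_sweeps=2000, n_samples=64, response_amp=0.2, noise_amp=1.0):
    """Average many click-evoked sweeps: uncorrelated noise shrinks roughly
    as 1/sqrt(n_sweeps), while a repeatable evoked response survives."""
    avg = [0.0] * n_samples
    for _ in range(n_sweeps):
        for i in range(n_samples):
            evoked = response_amp if 20 <= i < 30 else 0.0  # toy response window
            avg[i] += (evoked + random.gauss(0.0, noise_amp)) / n_sweeps
    return avg

def screen(avg, threshold=0.1):
    """Toy pass/refer rule: is the averaged response clearly above the noise floor?"""
    peak = max(abs(v) for v in avg[20:30])
    return "response detected" if peak > threshold else "refer for evaluation"

print(screen(average_sweeps()))                  # simulated evoked response present
print(screen(average_sweeps(response_amp=0.0)))  # simulated absent response
```

Real screeners use statistical response-detection criteria rather than a fixed amplitude threshold, but the averaging idea is the same.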
The cohort of children on whom this paper is based was identified by universal newborn hearing screening before the establishment of NHSP, the national newborn hearing screening programme. The same cohort was studied earlier at an average age of 7.9 years2 when significant benefit in language development was shown in those diagnosed before 9 months of age compared with those identified when older than 9 months. The particular value of this paper is that it has looked at performance in the second decade as well as the first, and there is a paucity of work in this age group. Pimperton et al have highlighted the value of early diagnosis and intervention in establishing good language skills, which underpin later reading comprehension.\nManagement of deafness in children has seen a significant change in the last 15 years. Prior to newborn hearing screening, the average age of diagnosis of deafness severe enough to compromise speech and language development (moderate-to-profound deafness bilaterally) was 26 months, with hearing aid fitting at 32.2 months.3 Many of these children failed to acquire good speech and oral language. Amplification using hearing aids gave access to sound, but the late start meant that the critical period for good speech and language acquisition was passed. The average reading age of a deaf school leaver was said to be about 8 years, and career and educational opportunities were, as a consequence, limited.\nThe impetus for change came from the Joint Committee on Infant Hearing, a multidisciplinary American group established in 1969, which first examined the evidence to support early screening and early intervention. Their latest supplement in 2013 presents current evidence.4 The majority of this research has come from the USA. 
Christine Yoshinaga-Itano, working for the Colorado Early Hearing Detection and Intervention programme, has monitored the identified children, providing excellent research data.", "score": 34.14389355185281, "rank": 15}, {"document_id": "doc-::chunk-1", "d_text": "A longitudinal study of deaf and hard of hearing children over a 7-year period identified age-appropriate speech and language development for children whose hearing loss was identified early and who had appropriate intervention, with the best results for those identified within the first 2 months of age. These children were shown to have better social–emotional development because of early and effective intervention.5 Her findings have been supported by others in the USA and also in this country as in this paper by Pimperton et al.
Examining the outcome of interventions for deaf children is difficult because of the many variables—degree of loss, additional needs, quality of intervention as well as the child's aptitude. A longitudinal study has benefit over a cross-sectional study in such a diverse group. It also provides evidence of sustained benefit. Its value in deaf children cannot be overstated, because screening costs and robust evidence of improved educational outcomes demonstrate value for money.
In England, funding for the NHSP from the National Screening Committee of the Department of Health was agreed in March 2000, and the project was established jointly with the Medical Research Council. The programme was rolled out across England over the next few years with 100% coverage (all children being offered screening) achieved by March 2006.
The beginning was a big change for many. It was not just the application of a two-level screen, which in itself was a challenge; it was what followed. Children were being referred from the screen at hours of age, and the specialist diagnostic tests needed were done to the level of expertise required in only a few centres.
The knowledge and skills needed to fit hearing aids to tiny babies were also in short supply. The teachers of the deaf providing peripatetic services were used to dealing with deaf toddlers and schoolchildren, but were now expected to cater for the hearing needs of small infants, only a few weeks of age. Only a few medics were adequately trained for the role ahead of them. In short, the learning curve was steep.
NHSP has not been just a screening programme; it had a much broader brief until recently—to ensure identified children were appropriately managed for the first 3 years of life. It has had precise standards of practice and performance with the child and the family at the centre, and it has provided an educational programme for involved professionals. A close working relationship with the National Deaf Children's Society resulted in the development of comprehensive information leaflets and support for parents.", "score": 33.460116103355304, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "A hearing screening is the easiest way to determine if your child is suffering from hearing loss. Thanks to a hearing screening, your pediatrician can determine the degree of hearing loss and how best to help your child hear well again. If your child’s hearing loss goes undiagnosed, it can lead to problems with normal development, learning disabilities, and problems socializing with others.
Your child could be suffering hearing loss from a variety of causes including a family history of hearing problems, infection during pregnancy, or birth complications. Hearing problems can also be caused by middle ear infections, infectious diseases, or even loud noises.
So, how do you know if your child needs a hearing screening?
According to the Centers for Disease Control (CDC), these are some of the most common signs and symptoms of hearing loss in babies and children:
- Not turning toward sounds at 6 months
- Not saying single words at 1 year
- Not hearing all sounds
- Not responding to their name
- Delayed or unclear speech
- Difficulty following directions
Hearing screenings are often performed at well-child visits and during school physicals. If your child hasn’t had a hearing screening, and you notice any of the signs and symptoms listed above, you should schedule a hearing screen as soon as possible. Early detection of hearing difficulties leads to early treatment, which is much better for your child.
If your child has hearing difficulties, don’t worry. There are many effective ways to help with hearing loss, including:
- State-of-the-art hearing aids, cochlear implants and other hearing devices
- Medications if the hearing loss is caused by an ear infection
- Surgical treatment to correct structural issues which may be causing the hearing loss
- Alternative communication techniques
- Educational and supportive services for the family
A hearing screening is important to the health and well-being of your child. You don’t want your child to miss out on all of the beautiful sounds of life. Your pediatrician can help you schedule a hearing screening to get your child started on the road to hearing well.
Named after the characteristic sound of its notorious coughing fits, whooping cough is an extraordinarily uncomfortable condition that typically manifests itself in babies and in children ages 11 to 18 whose vaccine-provided immunities have begun to fade.
In addition to causing several debilitating symptoms, whooping cough also carries the possibility of infant mortality, particularly for patients under 12 months old.", "score": 33.24800781014654, "rank": 17}, {"document_id": "doc-::chunk-1", "d_text": "Treatment options may include hearing aids, cochlear implants, or other therapies.
- Cost-Effective: Early detection and treatment of hearing loss can save families and the healthcare system significant costs over time. Delayed diagnosis and intervention may increase healthcare costs and lifelong learning difficulties.
- Peace of Mind: UNHS can provide parents with peace of mind, knowing that their child's hearing has been thoroughly tested and any issues can be addressed early.
In the Nutshell:
- UNHS is a quick and non-invasive screening process that can be done shortly after birth. The test typically takes just a few minutes and is painless for the baby.
- UNHS is essential for all newborns, not just those with a family history of hearing loss or other risk factors. Some babies with hearing loss may not have any obvious signs or symptoms, so screening is necessary to identify those who may benefit from early intervention.
- UNHS is just the first step in ensuring that children with hearing loss receive the support they need. If a baby fails the initial screening, further testing and evaluation will be necessary to determine the extent and type of hearing loss and to develop an appropriate treatment plan.
- UNHS is part of a comprehensive approach to supporting children with hearing loss that includes early intervention services, family support, and ongoing monitoring and follow-up.
- Parents should be aware that even if their child passes the initial screening, they should continue to monitor the child's hearing and speech development.
Parents with any concerns should speak with the child's healthcare provider to determine whether further evaluation is necessary.
What can we do?
The Listening Lab Malaysia provides hearing tests for newborns and infants and gives proper recommendations after diagnosis. We also promote childhood hearing tests such as Tympanometry, Middle Ear Muscle Reflex (MEMR) Test, Auditory Brainstem Response (ABR) Test, Auditory Steady State Response (ASSR) Test, Central Auditory Evoked Potential (CAEP) Test and Otoacoustic Emissions (OAE) Test. Let your child undergo the test in our clinic for proper guidance and planning. Book an appointment today and save money by getting the right treatment plan for your child. You may also contact us through WhatsApp.", "score": 32.82085397272866, "rank": 18}, {"document_id": "doc-::chunk-1", "d_text": "Implementation of a universal neonatal hearing screening program may increase demand for qualified personnel to provide age-appropriate identification and intervention services for infants with significant hearing loss. As a result, the AAP recommends the training and education of additional expert care professionals.
The AAP does not recommend a preferred screening method. The statement discusses both evoked otoacoustic emissions and auditory brainstem response, which may be used either alone or in combination. Both methods, according to the AAP, are easy to perform and are noninvasive.
Copyright © 1999 by the American Academy of Family Physicians.", "score": 32.65734703220001, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "I came across a good article in the New York Times which highlights the need for hearing tests for newborns: without them it is difficult to predict what might be wrong if the child is not speaking or reaching other developmental milestones.
Hearing tests are mandatory in 40 states, and routine but optional in the rest. There’s a good reason for the rule:
“We need to identify children early and provide them with hearing tools and training by the time they are 6 months,” said Dr. John Greinwald, a pediatric otologist at Cincinnati Children’s Hospital Medical Center. Studies now confirm that the earlier the intervention, the better the chance that the child will develop listening and language skills.
“If you hear from birth, you learn to listen,” said Anne Oyler, an audiologist for the American Speech, Language and Hearing Association. “More than 90 percent of what babies learn is from incidental listening. If a child isn’t fitted with hearing aids until 2, that is when he or she will have to start learning what sounds are. If we catch kids in the first few months, we don’t see delays and they do beautifully.”
Hearing impairments are pretty common: 1 in 1000 babies are born deaf. The prevalence of this disability is what interested me in pursuing hearing research for my thesis, but also makes it equally distressing when precautions that could aid children in development go unheeded.", "score": 32.40644219962944, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "Neonatal Hearing Department
Maternity Clinic
IASO Maternity Hospital has been performing otoacoustic emissions tests in neonates (screening test) since 1996. It is a simple, quick and harmless test applied in the best maternity hospitals all over the world and offers information about the infant’s hearing (if the infant/child has normal hearing or not).
During the test, continuous rhythmical sounds are transmitted to newborns and the sound responses of the ear are recorded via a special appliance linked to a computer. The ideal time to have this test performed is during the first days after birth, because newborns sleep for most of the day. The test can be performed during sleep, so the newborn is not aware of it.
If the result is normal (a pass), the newborn's hearing is very likely intact. The incidence of neonatal hearing loss is 3-4‰ and the incidence of severe hearing loss (deafness) is 1‰. Note that these rates also apply to newborns without a family history of hearing loss. The detection of hearing loss during the first weeks, and consequently the opportunity for early intervention, is critically important for speech development, as well as for children's intellectual and emotional development.
To this end, the otoacoustic emissions test, which is performed universally on newborns in medically advanced countries, such as the USA and Western European countries, has been performed on all newborns at IASO for the last twenty years. IASO was the first maternity hospital in Greece to universally implement this test and was the only one to do so for many years. This IASO program is one of the largest worldwide.
In the last fifteen years, a brainstem auditory evoked response test has also been performed in certain cases, for the same reasons and indications.
All of these practices are consistent with the guidelines of the World Health Organization. The recording of the screening test remains in the patient file and its results are written down in the child’s personal health booklet (p.
15).", "score": 31.448022196488854, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "Just Kids Diagnostic and Treatment Center offers hearing screening and assessment for infants, toddlers, preschoolers, school age children and adults.\nScreening for newborns is available on-site. The screening is simple and pain free and can be done while the infant is resting. The purpose of hearing screening is to detect hearing loss important to speech development.\nSchool age children should be screened periodically. Adult screening is also available.\nFor more information, please call 631-924-1000.", "score": 31.076049708574157, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "Objective. To compare the language abilities of earlier- and later-identified deaf and hard-of-hearing children.\nMethod. We compared the receptive and expressive language abilities of 72 deaf or hard-of-hearing children whose hearing losses were identified by 6 months of age with 78 children whose hearing losses were identified after the age of 6 months. All of the children received early intervention services within an average of 2 months after identification. The participants' receptive and expressive language abilities were measured using the Minnesota Child Development Inventory.\nResults. Children whose hearing losses were identified by 6 months of age demonstrated significantly better language scores than children identified after 6 months of age. For children with normal cognitive abilities, this language advantage was found across all test ages, communication modes, degrees of hearing loss, and socioeconomic strata. It also was independent of gender, minority status, and the presence or absence of additional disabilities.\nConclusions. Significantly better language development was associated with early identification of hearing loss and early intervention. 
There was no significant difference between the earlier- and later-identified groups on several variables frequently associated with language ability in deaf and hard-of-hearing children. Thus, the variable on which the two groups differed (age of identification and intervention) must be considered a potential explanation for the language advantage documented in the earlier-identified group.
- SD = standard deviation
- df = degrees of freedom
- dB = decibels
- dB HL = decibels hearing level
- CQ = cognitive quotient
- MCDI = Minnesota Child Development Inventory
- MLU = mean length of utterance
- LQ = language quotient
- ANCOVA = analysis of covariance
Hearing loss that is bilateral and permanent is estimated to be present in 1.2 to 5.7 per 1000 live births.1–4 The typical consequences of this condition include significant delays in language development and academic achievement.", "score": 30.651534342612063, "rank": 23}, {"document_id": "doc-::chunk-3", "d_text": "Apuzzo and Yoshinaga-Itano reported that the first age-of-identification group (ie, those children identified before 2 months of age) had significantly higher language scores than those identified after the age of 2 months despite all children receiving similar intervention programming.
In the Apuzzo and Yoshinaga-Itano29 study, all of the children in the earlier-identified group were diagnosed within the first 2 months of life because they presented with characteristics on the high-risk registry for hearing loss. Within that study, there were only a few children without significant cognitive delay identified before 12 months of age despite including the entire sample of young children with hearing loss from a 10-year database of >350 children.
Because of the small number of children in the earlier-identified group, the question of whether early identification and intervention was associated with better language scores for all deaf and hard-of-hearing children or only for children who exhibited specific demographic characteristics could not be addressed. Because of the institution of universal newborn hearing screening, within the last few years the number of children identified early with hearing loss who have normal cognitive ability has increased dramatically.\nMoeller30 reported a retrospective longitudinal study of 100 deaf and hard-of-hearing children, 25 of whom had been identified before 6 months of age. These children were tested every 6 months until the age of 5 years. Children identified with hearing loss before 6 months of age maintained age-appropriate language skills and had significantly better language skills than those children who were identified after 6 months of age. Similar to the study conducted by Apuzzo and Yoshinaga-Itano,29 Moeller's early identification group consisted primarily of children identified through the high-risk register for hearing loss. Additionally, the earlier- and later-identified groups were not comparable on the full range of demographic variables frequently associated with language ability in deaf and hard-of-hearing children.\nThe purpose of the present investigation was to compare the language skills of a large group of children whose hearing losses were identified by 6 months of age with children who were identified after the age of 6 months. 
Because it was hypothesized that the advantage of early identification might vary, the effect was examined within a variety of subgroups formed on the basis of demographic variables frequently associated with language development.", "score": 30.61004839826558, "rank": 24}, {"document_id": "doc-::chunk-7", "d_text": "The Labor Day campaign is called that because more babies are born around this time of year than any other. The campaign is designed to remind parents to have their newborn babies' hearing screened.
The Hispanic/Latino/Latina community does not receive sufficient healthcare information. Outreach to this community includes efforts to disseminate information in Spanish about hearing screenings, follow-through visits, and how medical professionals can help increase the number of infants who return for hearing screening evaluations. The campaign will also use the Hispanic/Latino/Latina media services to provide information that will reach their communities. Another focus is to make information available in the Combined Health Information Database (CHID) so that resources are available to healthcare professionals who work with these communities.
Performance Measures
- Increase knowledge about the importance of hearing screening and follow-through for under-represented groups to ensure improved communication, occupational, and financial outcomes for these children.
- Increase knowledge of professionals about the importance of follow-through after hearing screening for appropriate interventions.", "score": 30.1766801175609, "rank": 25}, {"document_id": "doc-::chunk-4", "d_text": "Trends in Amplification. 1999;4(2), 51-60.
https://doi.org/10.1177/108471389900400205 in order to evaluate for hearing loss and gain a better understanding of your baby’s ears.
Intervention Options After Your Baby’s Hearing Test
If your baby has been diagnosed with hearing loss of any sort, you’ll want to start looking into intervention services as soon as possible to reduce any developmental delay in your baby.
As you work to identify options with your child’s healthcare team, consider such ideas as:
Working with a speech-language pathologist (SLP) can help identify valuable strategies for learning to communicate with your little one experiencing hearing loss. Your baby is most likely to show spoken language development at a rate similar to a baby without hearing loss when diagnosed early.
Sign language is a valuable tool to enhance communication with pre-verbal and non-verbal children of all ages.
Many devices, such as a frequency modulation (FM) system, are available to help children with hearing loss maximize the sounds they hear or communicate with hearing individuals.
According to the American Speech-Language-Hearing Association, assistive hearing devices like hearing aids can be used successfully for babies and very young children. Professionals at the American Academy of Audiology report that cochlear implants can also be used in children and that “cochlear implants are an option when benefit from hearing aids is limited and should not be viewed as a last resort.”
A child’s hearing ability can change over time. Because hearing loss in a young child can significantly impact their development and communication skills, performing regular screening tests allows for early intervention services if a child demonstrates hearing loss.
- Bush, Jennifer S. (2003). AAP Issues Screening Recommendations to Identify Hearing Loss in Children. American Family Physician Journal. 67(11), 2409-2413.
https://www.aafp.org/pubs/afp/issues/2003/0601/p2409.html\n- Paulraj MP, Subramaniam K, Yaccob SB, Adom AH, Hema CR. (2015). Auditory evoked potential response and hearing loss: a review. Open Biomedical Engineering Journal. 9, 17-24. https://doi.org/10.2174/1874120701509010017\n- Singh G, Archana G. (2008). Unraveling the mystery of vernix caseosa.", "score": 29.962806878422594, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "The Centers for Disease Control and Prevention (CDC) publishes a list of information collection requests under review by the Office of Management and Budget (OMB) in compliance with the Paperwork Reduction Act (44 U.S.C. Chapter 35). To request a copy of these requests, call the CDC Reports Clearance Officer at (404) 371-5983 or send an e-mail to email@example.com. Send written comments to CDC Desk Officer, Human Resources and Housing Branch, New Executive Office Building, Room 10235, Washington, DC 20503 or by fax to (202) 395-6974. Written comments should be received within 30 days of this notice.\nAssessment of State Early Hearing Detection and Intervention Programs (EHDI): A Program Operations Evaluation Protocol—New—National Center on Birth Defects and Developmental Disabilities (NCBDDD), Centers for Disease Control and Prevention (CDC).\nBackground and Brief Description: Every year, an estimated 12,000 newborns are diagnosed with permanent hearing loss, a condition that if not identified and treated early can lead to impaired functioning and development. CDC's role in the detection, diagnosis, and treatment of early hearing loss through the “Early Hearing Detection and Intervention Program” (EHDI) is of vital importance for families of newborns and infants affected by hearing loss. 
Nonetheless, recent data indicate that only 60 percent of the newborns that fail hearing screening are evaluated by the recommended 3 months of age.\nThe evaluation will involve an integrative evaluation approach that encompasses the following activities, conducted in Arkansas, Massachusetts, Michigan, Utah, and Virginia: (1) A 10-minute survey of 3,000 mothers whose newborns have been screened (the “Maternal Exit Survey”); and (2) a 20-minute computer-assisted telephone interviewing (CATI) survey of 1,000 mothers of newborns who have been referred for additional hearing evaluation (the “Maternal CATI Interview.”) To complete these interviews, it is expected that 5,000 will be contacted. The overall burden on all contacted women is expected to be approximately 940 hours.", "score": 29.920289731649653, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "Hearing Screening Associates is a full-service company that offers an all-inclusive service for screening the hearing of newborn infants. The concept covers everything from personnel and testing to equipment to reporting and billing.\nThe Newborn Hearing Screening Technician performs mandated hearing screening tests for newborn infants, provides education to parents pertaining to the relevance of hearing screening and early intervention, and records patient information and hearing screen results. The screener is responsible to conduct these hearing screening tests prior to patient discharge supporting patient care goals and objectives. 
This part-time position is based in a busy Newborn Nursery / NICU hospital setting.\nConducts newborn hearing screenings on all infants in the nursery and NICU prior to discharge from the hospital, coordinates post-discharge follow up care as needed utilizing appropriate procedures and computer technology.\nWorks directly with nursing staff, patients, and parents to communicate relevance of hearing screening and early intervention to support positive outcomes; provide screening results to parents. Must be able to analyze and communicate responses to problem situations in a manner consistent with the company and customer's needs.\nMaintain and use hearing screening equipment, assist with database management.\nInteracts as needed with various hospital departments; Performs duties in accordance with the policies and procedures of HSA, hospitals, and respective departments.\nUpholds the code of conduct and compliance policies.\nUpon completion of HSA/Hospital training; has thorough knowledge of procedural variables and instrumentation in the application of Otoacoustic Emissions Assessment, Transient Emissions, and Automated Auditory Brainstem Response (AABR) testing.\nAll other duties as assigned.\nHigh School Diploma or General Education Degree (GED) required.\nOne (1) or more years of healthcare or childcare training/experience, or related college curriculum is a plus.\nPossess basic computer skills.\nAbility to attend all company provided training.\nWillingness and ability to learn and operate hearing screening equipment.\nMust be able to move or lift up to 15 pounds.\nEmployment is contingent upon the following:\nClear seven (7) year background check\nVerifiable prior employment including title and dates of hire\nMust pass Drug Test\nBloodwork required to verify immunization Titers\nVaccinations as required by hospitals, including Flu Vaccine\nHearing Screening Associates is an Equal Opportunity / Affirmative Action employer, all qualified applicants will receive 
consideration for employment without regard to race, color, religion, sexual orientation, sex, national origin, disability, or protected veteran status.", "score": 29.91700697153169, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Listening Stars : Parent Infant Program (0-18 Months)\nEarly Intervention plays a vital role.\nEarly diagnosis is the key to successful early intervention. Babies who are detected early can access further testing quickly and parents can access support and information. Children whose problems are identified early & begins with early intervention strategies before six months of age, have the best chance of developing age appropriate speech and language.\nAs Babies begin to learn things right from the time they are in the womb of their mother. It is advised not to waste critical time. Critical Period for developing the ability for spoken language is from 0-3 years. At that stage, the brain has the maximum neural Plasticity. The child too must learn early to grow up with the hearing aids/cochlear implant (CI) and accept them easily. Most importantly, children learn to make use of their residual hearing and acquire speech and language faster if they start earlier. Starting at an older age, not only slows down the ability to learn and speak, it can severely jeopardise.\nThe programme is especially for Babies/ Infants diagnosed with any developmental delays. 
It provides them early training to identify and understand the sounds around them.
Baby Developmental Milestones
The following milestones can be used as a guide to monitor a child’s development as he/she grows:
Birth to Eighteen Months Old
- Localizes sound source with accuracy
- Discriminates angry and friendly vocal tones
- Appears to enjoy listening to new words
- Understands simple instructions
- Enjoys sound-making toys and objects
- Awakes easily to loud, sudden noises
Our early intervention program is a multi-disciplinary approach with emphasis on assisting the child to reach their full potential. The services are individualized to each child’s needs and integrate many disciplines, such as Audiologists, Speech Pathologists, Special Educators, Psychologists, Occupational Therapists, Early Childhood Specialists & Music Therapists.
Schedule an appointment for your child today!", "score": 29.75381943185149, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Service Provider Qualifications
Infants and babies who are deaf or hard of hearing generally are eligible for early intervention services. The goal of these services is to ensure that deaf and hard of hearing infants, toddlers, and children develop age-appropriate language, social skills, and cognitive skills. Qualified, specialized early intervention personnel are necessary to help achieve this goal. The Joint Committee on Infant Hearing1 recommends that:
All individuals who provide services to infants with hearing loss should have training and expertise in auditory, speech, and language development; communication approaches for infants with hearing loss and their families (eg, cued speech, sign language systems including American Sign Language); and child development.
The Conference of Educational Administrators of Schools and Programs for the Deaf recommends that:
[Early intervention] providers should be credentialed by the early intervention system in the state in which they work.
Minimally, they must have education and experience with the 0-3 population and have a degree in Deaf Education. They should know about the acquisition and development of spoken and signed language in children who are deaf and hard of hearing. They should possess the training and skills necessary to help children develop age appropriate language. They should be skilled in working with families from diverse backgrounds.\nAccording to the Joint Committee on Infant Hearing, early interventionists for deaf and hard of hearing infants and toddlers and their families should be able to provide families with:\nThe Joint Committee on Infant Hearing also states that early interventionists should also be able to:\nFamilies should seek out early intervention programs that have professionals with these qualifications or advocate within their early intervention system to obtain these professionals.\n1 Members of the Joint Committee include: American Academy of Audiology, American Academy of Pediatrics, American Speech-Language-Hearing Association, Council on Education of the Deaf (CED member organizations include Alexander Graham Bell Association for the Deaf and Hard of Hearing, American Society for Deaf Children, Association of College Educators - Deaf and Hard of Hearing, Conference of Educational Administrators of Schools and Programs for the Deaf, Convention of American Instructors of the Deaf, and National Association of the Deaf), and Directors of Speech and Hearing Programs in State Health and Welfare Agencies.\nConference of Educational Administrators of Schools and Programs for the Deaf. (2006). Position on Early Intervention Programs for Children with Hearing Loss. www.ceasd.org\nJoint Committee on Infant Hearing. 
(2000).", "score": 29.413938247347957, "rank": 30}, {"document_id": "doc-::chunk-1", "d_text": "• Absence of startling response to a sudden loud noise\n• Absence of response to familiar voices in older infants, when spoken out of the infant’s view\n• If a child is not using voice or speech to ask, seek attention or call out\nAssessment Techniques for Hearing Loss\nHearing tests for infants are carried out objectively. Auditory Brain Stem response test uses electrodes placed on the infant’s head with a sound-generator inserted into the external ear, to record the responses of the auditory nerve to a sound. Otoacoustic emissions test records the echo of the sound given out via the speakers into the ear, wherein the absence of the echo confirms hearing loss.\nInitial Steps towards Intervention\nThe first step towards the intervention program is to provide the suitable amplification for the child diagnosed with hearing loss using a hearing aid or cochlear implant. The goal of the intervention program should be to help the child to learn to communicate, maximize the use of residual hearing and interact with the society. Everyone in the family and immediate surroundings of the child becomes a contributor in the intervention program.\nThe team of professionals in the intervention program includes Otolaryngologist, Audiologist, Speech and Language therapist, primary care physician and psychologist if required. They can help in deciding whether the child can be helped to develop verbal language skills, use of sign language or any other means of communication. This decision is based on the overall cognitive skills, physical ability and intelligence of the hearing impaired child.\nAuditory Training after Amplification\nChildren or adults with severe or profound hearing loss may still be able to hear some low frequency sounds and this hearing ability in them is called residual hearing. It is very important to preserve and use this residual hearing in the intervention program. 
Amplification with hearing aid or cochlear implant will help the child hear the sounds while the Auditory training sessions will help the child differentiate, discriminate and identify the sounds he or she hears. The auditory training can be extended to speech sound discrimination and identification, ultimately helping the child to improve on verbal communication skills.\nCritical Period of Language Learning\nA child with normal hearing learns to use verbal communication and spoken language during the critical period for language development, before two years of age. During this development, almost 80 percent of first language is learnt by the child, building up the vocabulary thereafter. Studies have shown that if this period is missed for language learning, the language development may not be as natural.", "score": 29.400176296701783, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "The DOH Department Circular on the Guidelines for Universal Newborn Hearing Screening Program (UNHSP) Implementation was released last March 31, 2014 to all health leaders and stakeholders including hospitals, health providers and patients across the country.\nThe Department of Health- Family Health Office (DOH-FHO) and the Newborn Hearing Screening Center- National Institute of Health, UP-Manila (NHSC-NIH-UPM) developed guidelines for the implementation of Universal Newborn Hearing Screening Program (UNHSP) which will serve as a comprehensive guide and reference material for service providers and health workers who are engaged in the provision of newborn hearing screening, be it actual screening, training of health workers, or application of intervention strategies. Included in the circular are the roles guidelines of service providers for a clearer delineation and discharge of their functions.\nIn the circular are the Universal Newborn Hearing Screening Act 2009 and Manual of Operations for Republic Act 9709. 
You may download a copy here.", "score": 29.282436236290426, "rank": 32}, {"document_id": "doc-::chunk-2", "d_text": "Each state sets their own eligibility requirements, so depending on the type and/or severity of the hearing loss, some D/HH children may not be eligible to receive Part C services.\nNOTE: Refer a child diagnosed D/HH even if\n- You are unclear about eligibility for a specific case, or\n- The child is three (3) years of age or older.\nEven if the family is found to be ineligible for services, the EI case coordinator will be able to share other resources with the family.\nTo locate your EI program, contact\nThe Early Childhood Technical Assistance Center or your state EHDI program.\n- Provider’s legal obligation (2011 Part C regulations:§ 303.302) (pg. p.60141)\n- National Center on Hearing Assessment and Management [8.16 MB, 8 pages]\n- Part C of the Individuals with Disabilities Education Act (IDEA)\n- American Speech-Language and Hearing Association\n- The Early Childhood Technical Assistance Center\n- Individualized Family Service Plan (IFSP)\n- The Early Intervention / IFSP Process [20.1 KB, 1 page]\nOther helpful resources\nCenters for Disease Control and Prevention Hearing Screening and Follow-up Survey", "score": 28.050833249062027, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "Children learn speech and language from listening to others. The first few years are particularly critical to development. When a hearing loss exists a child does not get the full benefit of language learning experiences. A hearing loss may be temporary, progressive or permanent but left unnoticed, delays in speech and language learning can occur.\nKidsAbility Infant Hearing Screening Information During COVID-19\nDuring the Covid19 pandemic, newborn screening in the hospital and in the community has been temporarily suspended in order to protect the safety of newborns, families, staff and the greater community. 
The Ministry of Health is working with the regional partners to identify when services will resume and which services will be considered a priority in the first phase of re-opening. To date, we have no direction for screening but certainly understand the concerns of parents. Further information will be provided as soon as it is available.\nWho can tell me if my child has a hearing impairment?\nNewborns - Birth to 4 months\nAll parents of babies born in Ontario are offered a free hearing screening for their newborn before they leave the hospital.\nIf your baby was not screened and is not yet 4 months of age call ErinoakKids Client Services Intake Centre at 1-877-374-6625 to arrange a community screening assessment through the Central West Infant Hearing Program.\nToddlers – 4 to 12 months\nIf your physician or other professional is concerned that your child may have a permanent hearing loss or has risk factors for this diagnosis, then he may call ErinoakKids Client Services Intake Centre at 1-877-374-6625 to arrange an audiology assessment through the Central West Infant Hearing Program.\nPreschoolers – One year of age to school entry\nChildren may be assessed by a local Audiologist on referral from a family physician. Any child identified with a permanent hearing loss may receive ongoing audiology services from the Central West Infant Hearing Program. You can call directly after receiving the diagnosis for your child of a permanent loss.\nDo I wait for a hearing assessment before getting help for my child’s speech and language development?\nIf you feel your baby is having difficulty learning to communicate or interact with others Contact Us. 
KidsAbility works in partnership with other service providers to support you and your child as you learn about your child’s developmental needs and navigate through systems of support.
What services are available once my child is diagnosed with a hearing impairment?", "score": 28.043388100672384, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "The National Deaf Children's Society has celebrated a long-running campaign that has seen 36,000 babies receive hearing screen tests with a champagne reception in the House of Commons.
The reception celebrated the first anniversary of the Newborn Hearing Screening Programme, which has been piloted in 23 sites across England.
The organisation fought for 10 years to get free tests.
The programme allows babies' hearing to be tested within days of birth.
Previously, babies were not tested until they were six to eight months old.
Susan Daniels, chief executive of the National Deaf Children's Society, said: "The programme revolutionises the identification of deafness and is the first step in changing the lives of babies born deaf in the UK. It is important that we ensure the newborn screening is available to all parents by 2004 and that all the support services are in place."", "score": 27.1201151855931, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Referring Deaf or Hard of Hearing Children to Early Intervention
Early Intervention for Children who are Deaf or Hard of Hearing
Did you know that providers, such as audiologists, otolaryngologists, and pediatricians, have a legal obligation to make a referral to the state’s Early Intervention (EI) program as soon as possible, but no later than seven calendar days after the child has been identified with a permanent hearing loss?
Developmental risk and early intervention
Children who are born deaf or hard of hearing (D/HH) are at risk for developmental delay because they may lack early exposure to an accessible language.
However, data reported to the Centers for Disease Control and Prevention show that some D/HH children were not documented as having received early intervention services. In addition, some D/HH children may be born with other conditions that also can result in developmental delays. Referring D/HH children to early intervention as soon as possible will help ensure that they can reach their full potential. Early Intervention represents the goal of the entire Early Hearing Detection and Intervention (EHDI) process. To realize the benefits of early identification, intervention services need to be\n- Targeted, and\nIn addition, intervention services for D/HH children need to be implemented promptly and in a family-centered manner. If a family refuses your referral to intervention services, be sure to document the parents’ decision.\nWhat is Early Intervention (EI)?\nEI is a system of services available to children under the age of three who are eligible and may have a developmental or language delay [284KB, 16 pages], disability, or special health condition that is likely to lead to non-typical development. Evaluation and service coordination are provided free of charge.\nIf a child isn’t developing as expected in certain areas, the family may be able to receive early intervention services from the state’s lead agency that works with families and children with different developmental needs. EI services are developmental services that are\n- Selected in collaboration with the parents; and\n- Usually provided at no cost (some Federal or State laws provide for a system of payments by families, including a schedule of sliding fees).", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "STEP 1: Parents will be asked if they want to consent to hearing screening\nHearing screening is not mandatory but is a medical recommendation. Parents or legal guardians can decline hearing screening. 
However, before making a final decision, please speak with a health care provider about the pros and cons of this decision.
STEP 2: The Infant Hearing Program performs the hearing screen and expanded hearing screening is offered as an option if it is recommended that the baby have an appointment with an audiologist
Based on a baby’s hearing screening results, the Infant Hearing Program might recommend that the baby have an appointment with an audiologist for a hearing assessment. If so, the baby is eligible for expanded hearing screening and the Infant Hearing Program hearing screeners will discuss this option with parents/legal guardians.
Once a parent or legal guardian has had their questions answered and has enough information to make a decision, the hearing screener will document the choice for expanded hearing screening. The selection will be recorded on the hearing screening form, along with the name of the person providing consent and the hearing screener who provided the information. The last page of the hearing screening form (parent education page) will be torn off and the selection recorded and provided to the parent or legal guardian for their records.
STEP 3: If the parent/guardian consents to expanded hearing screening, the Infant Hearing Program screening form is sent to Newborn Screening Ontario and the test is performed
The Infant Hearing Program sends the screening form to Newborn Screening Ontario (NSO) to notify NSO to perform the hearing loss risk factor blood spot screen. If a baby passes their hearing screen, they will not be eligible for the hearing loss risk factor blood spot screen.
Results from the hearing loss risk factor blood spot screen should be available within a week once NSO is notified to perform the screen.
Newborn Screening Ontario and the Infant Hearing Program have collaborated to provide screening for hearing loss risk factors.
The dried blood spot is already available for many babies so it is painless to have the hearing loss risk factor blood spot screen. As well, testing for congenital cytomegalovirus needs to be performed on a blood sample taken during the first 3 weeks of life. In most cases the dried blood spot is taken in the first week of life and sent to Newborn Screening Ontario for newborn blood spot screening.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Universal Early Hearing Detection\nCourse Language: English\nUniversal Early Hearing Detection: Making a difference one baby at a time\nInstructor:Dr. Judith A. Marlowe, PhD, FAAA, CCC-A, Executive Director Audiology & Professional Relations, Natus Medical Incorporated\n294 STUDENTS ENROLLED\nThis educational session reviews the criteria for mass screening programs, illustrates how screening every newborn for hearing satisfies these requirements, outlines professional guidelines, recommendations, and benchmarks for effective and efficient EHDI programs, and summarizes evidence supporting the significant benefits of early detection and intervention.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-3", "d_text": "Yoshinaga-Itano, who is a Professor at the University of Colorado at Boulder in the Department of Speech, Language and Hearing Sciences, also advocates the globalization of infant hearing screening as Chair of the Joint Committee on Infant Hearing (JCIH). A JCIH policy statement, updated in 2007, sets criteria for these programs, and most countries look to it as the standard but tailor it for their area and culture, she says.\nYoshinaga-Itano has been trotting around the globe for more than 20 years, doing humanitarian work and setting up programs in China, Mexico, and Southeast Asia, including Thailand and the Philippines. 
Most recently, she and her team worked with audiometric technicians in the rural area of Luoyang, about an hour outside Xian, one of China's ancient capitals. Most technicians in China are trained by manufacturers because no professional audiology program exists there yet, so the technicians typically have big gaps in their knowledge, she admits. During her time there, the team worked with newborns and children between the ages of 2 and 6, and showed technicians how to do secondary levels of testing, checking hearing aids and fitting children for new ones. Most of the children already fitted with hearing aids were underamplified, she says.\n“When you're working side-by-side with 10 teachers and technicians at a school, you see right away what they don't understand,” Yoshinaga-Itano says. “But they learned fast and we saw dramatic changes in how they did basic fittings, identification and basic principles of intervention. We also noticed that once we got people trained in an area of China, other people from nearby areas were coming to learn from them. It created a ripple effect in the region.”\nWith more than a billion people in China, the challenges of reaching the majority of the population are staggering, but Yoshinaga-Itano credits newer automated technology as a big leap forward in bringing newborn screening to needy populations that previously had no access to testing and follow-up care.\nNow it's possible to screen newborns just about anywhere in the world, she says. Costs have declined, devices are more portable, and digitization and battery-operated instruments have made screenings more accessible because administrators don't have to rely on local electricity, which isn't available everywhere. 
And with automated equipment, those with lower skill levels can be trained to use it in remote regions, she adds.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "(2018, December 20) | Useful to early interventionists, lead agencies, and Parent Centers on screening during evaluation of an infant or toddler for hearing loss or deafness.\nThis Dear Colleague letter from OSEP responds to a question about the evaluation process for an infant or toddler suspected of being deaf or hard of hearing to determine eligibility for early intervention services under Part C of IDEA. The inquirer also sought guidance on the applicable evaluation timelines and required protocols.\nIn answering the writer’s questions, OSEP explains in concise detail what the IDEA requires. Included in the discussion is the need for parental consent before evaluating a child suspected of having a disability or developmental delay, what that evaluation must involve (including a family-directed assessment to identify the family’s resources, priorities, and concerns and the supports and services necessary), andhow each state’s policies are a factor to be considered. 
The letter also answers questions such as:\nMay a previous hearing screening (such as a newborn hearing screening outcome or a hearing screening result provided by an Early Head Start program or a health care provider) meet the Part C evaluation requirements?\nHow can Part C programs ensure that a hearing screening or evaluation is completed in a timely manner when a child is determined to be eligible for Part C services based on an established condition?\nIf an initial evaluation has begun and the child requires treatment to resolve any temporary medical conditions before the hearing evaluation can be completed, how should Part C programs ensure that the child remains actively in the eligibility determination process if the hearing evaluation requires more than 45 days to complete?\nThe 4-page Dear Colleague letter is available in PDF, at:\nSOURCE ARTICLE: Center for Parent Information and Resources\nGive us a call at (727) 523-1130 or (800) 825-5736 or request a callback by clicking below.", "score": 26.694876686997123, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "Language development is impacted if there is any loss in the ability to hear.\nEven a mild hearing loss can dramatically effect a child’s language development. Early identification of hearing loss is key to providing appropriate intervention services to promote language development.\nGuam EHDI works to assure that all babies born on Guam get screened for hearing loss by 1 month. If a baby does not pass their hearing screen, a Diagnostic Audiological Evaluation (DAE) will be conducted by 3 months of age. 
A baby with hearing loss will be enrolled in early intervention services by 6 months of age.\nIf you have concerns about your baby’s hearing, discuss this with your baby’s doctor, or contact Guam Early Intervention System (GEIS) at (671) 300-5776 to schedule a FREE hearing screening.\nFor more information on the Newborn Hearing Screening Program please call (671) 735-2466.", "score": 26.566045042054146, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "El cribratge universal de la hipoacúsia. Incorporació i implementació del Programa de cribratge en els centres i serveis de la xarxa pública i privada amb atenció maternoinfantil\nIntroduction: A screening programme for hearing loss is being implemented within the Catalan Public Hospital Network since 2010. In 2015 Decree 4/2015, of 13 January, was approved to extend a universal screening coverage. To implement it, different sessions were conducted for coordination and training was offered to a hundred percent of private healthcare centers. Objective: The objective is to assess the Early Hearing Loss Detection Programme through the process indicators and results describing the innovation with the recent addition of private centers (Decree 4/2015, of 13 January). Methods: Descriptive study of process indicators and results of the Early Hearing Loss Detection Programme in Catalonia record since the beginning – in 2010 – to 2015. Description of the extension process in the private centers network. Results: The number of screened newborns has passed from 9,178 (2010) to 44,632 (2015), which represents a 386% increase. In 2015, this represents a coverage of 91,7%. 468 cases were diagnosed (from 2010 to 2015) and 110 were diagnosed in 2014. In 2015 80 newborns were diagnosed with hearing loss (provisional data). Conclusions: The Programme presents good coverage in the Catalan Public Hospital Network, whilst in the private health network it is being fostered as a universal coverage public health programme. 
This initiative promotes the reporting of all cases of newborns diagnosed with hearing loss by private birth centers to the registry, while public birth centers continue to maintain good coverage and reporting. There is good interaction among the different agents involved in the screening.\nHearing loss; Screening; Automated evoked potentials; Registry\nPrats B, Fernandez-Bardon MR, Cabezas-Peña C. El cribratge universal de la hipoacúsia. Incorporació i implementació del Programa de cribratge en els centres i serveis de la xarxa pública i privada amb atenció maternoinfantil. Butll Epidemiol Catalunya.", "score": 25.703854003797073, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "Early Intervention for Hearing Impairment\nEarly intervention is a term used for the process of identifying conditions in children that may cause delays in the development of physical skills, cognitive skills, communication, adaptive skills, and social and emotional development. The main focus of early identification and early intervention for a child with hearing loss is on developing communication skills, both verbal and sign language, to the maximum possible extent. Further, if hearing impairment is associated with other disorders like mental retardation, blindness, cerebral palsy, etc., the intervention program addresses those needs also.\nHearing impairment is detrimental to normal speech and language development in such children. While speech and language are innate skills of human beings, these skills develop only by hearing, listening and incorporating the use of words, intonations, voice variations, and the patterns in which words are strung together to form expressions.\nWhy Early Intervention?\nThe American Academy of Pediatrics recommends a hearing screening for every baby a few weeks after birth, along with follow-up visits up to two years of age. 
With modern manual and automated hearing testing technology, congenital hearing loss is being identified in children before they are 3 months of age. Hearing impairment can be identified by looking into the risk factors, observing the baby’s responses to surrounding sounds, and seeking professional help when the risks and signs indicate a hearing problem.\nRisk Factors for Hearing Loss\nSome of the important risk factors associated with hearing loss can be listed, and an informal screening can help in determining whether to look out for signs of hearing impairment. The risk factors include:\n• Family history of hearing impairment\n• Prenatal infections in the mother such as rubella, herpes simplex and toxoplasmosis\n• Abnormal development of the structures in the head and neck region, such as cleft palate, atresia (absence of pinna), etc.\n• Premature birth\n• Medications given in the neonatal period\n• Syndromes associated with hearing loss, such as Waardenburg and neurofibromatosis\n• Recurrent infections in the ear, nose and throat\n• Infections like meningitis, measles, mumps, etc.\n• Head trauma\nCertain symptoms may be noticed by the mother or primary caregiver of the baby that call for a detailed hearing assessment.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "Washing your hands often, especially after wiping a young child’s nose, mouth or tears or changing diapers is important, said Stephanie Browning McVicar, co-author and director of the Cytomegalovirus Public Health Initiative at the Utah Department of Health. 
“What is also essential, though, is not sharing food, drink or utensils, particularly with young children, while pregnant.”\nThe bill also requires all infants who fail two hearing screens to be tested for CMV within three weeks of birth unless a parent declines the test.\nBy using that time frame, health providers are able to distinguish between congenital CMV and CMV acquired after birth, which is rarely associated with health problems. The screening parameters also are designed to identify infants who do not have any symptoms but are most at risk for hearing loss.\nThe researchers used Utah Department of Health and Vital Records data to assess whether 509 asymptomatic infants who failed hearing tests between July 1, 2013 and June 30, 2015 underwent CMV screening and the results of that screening.\nThey found that 62 percent of these infants were tested for CMV and three-quarters were screened within the three-week time frame. Fourteen of those infants were CMV positive and six had hearing loss. Of the infants who were tested more than 21 days after birth, seven were CMV positive and three had hearing loss.\nThe researchers conclude that because these infants had no signs of infection, it is “highly likely” they would not have been diagnosed later as having congenitally acquired CMV. Identification of CMV-positive infants increased opportunities to watch their health more closely and intervene, when needed, more quickly. They also found more infants received timely diagnostic hearing tests after the law took effect.\n“This result has major implications for all children who fail their newborn hearing screening since speech and language outcomes depend upon early hearing loss diagnosis,” said Albert Park, co-author and chief of the U’s pediatric otolaryngology division. 
“CMV-infected infants with hearing loss may benefit from antiviral therapy.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "A conceptual framework for rationalized and standardized Universal Newborn Hearing Screening (UNHS) programs\n- Carlo Giacomo Leo†1, 2,\n- Pierpaolo Mincarone†3,\n- Saverio Sabina1,\n- Giuseppe Latini1 and\n- John B. Wong2, 4\n© Leo et al. 2016\nReceived: 22 December 2015\nAccepted: 4 February 2016\nPublished: 12 February 2016\nCongenital hearing loss is the most frequent birth defect. The American Academy of Pediatrics and the Joint Committee on Infant Hearing established quality of care process indicators for Universal Newborn Hearing Screening starting from 1999. In a previous systematic review of Universal Newborn Hearing Screening studies we highlighted substantial variability in program design and in reported performance data. In order to overcome these heterogeneous findings we think it is necessary to optimize the implementation of Universal Newborn Hearing Screening programs with an appropriate application of the planning, executing, and monitoring, verifications and reporting phases. For this reason we propose a conceptual framework that logically integrates these three phases and, consequently, a tool (a check-list) for their rationalization and standardization.\nOur paper intends to stimulate debate on how to ameliorate the routine application of high quality Universal Newborn Hearing Screening programs. The conceptual framework is proposed to optimize, rationalise and standardise their implementation. 
The checklist is intended to allow an inter-program comparison by removing heterogeneity in process description and assessment.\nKeywords: Universal Newborn Hearing Screening; Checklist; Quality improvement; UNHS\nSensorineural hearing loss is one of the most frequently occurring permanent congenital defects at birth, with a prevalence of 0.1–0.3 % for newborns [1–4] (2–5 % in the presence of audiological risk factors). Its late diagnosis could negatively influence language, learning and speech development, with lifelong consequences [6–11]. Universal Neonatal Hearing Screening (UNHS) programs were developed in several countries to identify the majority of newborns with hearing impairment. UNHS programs adopt, as screening tests, otoacoustic emissions (OAEs) and/or automated auditory brainstem response (aABR) testing. Those who test positive are referred for full audiological diagnosis.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "How we do it: Employment of listening-development criteria during assessment of infants who use cochlear implants Brittan A Barker1, Maura H Kenworthy2, Elizabeth A Walker2\n1Louisiana State University, USA, 2University of Iowa, USA\nThere are currently no formal, standardized procedures for assessing speech processing and perception during infancy. This lack of tools makes interpretation of infant data challenging. This article describes how our clinical research center established listening-development criteria for infants with cochlear implants. The listening-development criteria incorporate programming, audiometric, and parent-report measures to estimate adequate audibility of the speech signal prior to the infants’ inclusion in research protocols. This paper operationally defines the listening-development criteria, discusses its importance, and presents data from 10 infants who met the listening criteria on average after 6 months of device use. 
Keywords: Cochlear implants, Infant, Audibility, Assessment, Speech perception\nIn 2000, the Food and Drug Administration approved implantation in infants 12-months-old and older. Today many centers are even implanting children as young as 6-months-old (Waltzman and Roland, 2005). However, the field still lacks a complete understanding of programming issues that contribute to the ability to detect and process sound in these very young children. Much of this gap in knowledge is due to the fact that there are currently no formal, standardized procedures for assessing speech processing and perception during infancy. Our clinical research center established listening-development criteria for infants with cochlear implants (CIs) to address this challenge. The listening-development criteria incorporate programming, audiometric, and parent-report measures to estimate adequate audibility of the speech signal prior to inclusion in research protocols evaluating their listening development. These criteria and the method of their employment are described within.\nWhy is adequate audibility in infants with CIs important? Adequate audibility of the speech signal is crucial for spoken language learning. Years of research examining speech perception and spoken language development in typically developing infants and children with hearing aids support this notion (e.g. Jusczyk, 1997; Stelmachowicz et al., 2000). 
There is a dearth\nCorrespondence to: Brittan A Barker, PhD, Department of Communication Sciences & Disorders, College of Humanities & Social Sciences, Louisiana State University, 63 Hatcher Hall, Baton Rouge, LA 70803, USA.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-1", "d_text": "Students, left to right, Kami DeFee, 5, Cadence Elliott, 5, and Lucas McCaslin, 4, assembled a stick man on the floor of their preschool classroom with the help of teacher Elizabeth Anderson, at right, on Tuesday, September 7, 2021 at the Spokane HOPE Center for Deaf and Hard of Hearing Children in Spokane, Washington. It was the first day of the school year for students receiving services through the center, typically those aged 12 months to five years. The nonprofit has contracts with Washington State and local school districts to help children with hearing problems, but reimbursement only covers about a third of the cost. The rest of the funds come from grants, donations and general fundraising. In October they will have their annual Hoedown for HOPE fundraiser. (Jesse Tinsley/The Spokesman-Review)\nFrom birth to age 3, HOPE teachers conduct hour-long home visits where they provide parents with methods to teach their child to speak.\nWhen the toddler is ready, they move on to HOPE Preschool at 1821 E. Sprague Ave. It’s where children learn socialization skills and build on what they’ve learned in the so-called ‘Birth to 3 Years’ program.\nEach teacher works with up to 30 families at any one time and provides assistance to schools in the Spokane and Spokane Valley areas. Recently, the school was asked to help more rural areas, Driscoll said.\nEven after sending children to kindergarten classes, HOPE teachers are checking in with families as they navigate the “traditional” school system, said Laurel Graham, an early intervention provider.\n“We also continue with this empowering role,” Graham said. 
“I have a parent who is nervous about his grandson going to preschool because he will be the first child with cochlear implants, so we were just thinking through it with him since he was stressed.”\nParents also need to know the gear and lingo of the community, said Amy Hardie, HOPE’s director of education.\nHardie said it’s so they can advocate for their own child as they enter school, often in a classroom where a teacher may not have that knowledge.\n“It’s an important part of the process, so we need to make sure our kids are self-advocates for their gear, and if they know their battery isn’t working, they can tell their friends,” Hardie said.", "score": 25.138444657173878, "rank": 47}, {"document_id": "doc-::chunk-18", "d_text": "Also, screening programs that only test infants who present with one or more risk factors for hearing loss are typically testing only ∼50% of children who actually have a hearing loss.1,3,41,42 These factors have resulted in an average age of identification of 11 to 19 months for children with known risk factors for hearing loss2,17,42–44 and 15 to 19 months for children without apparent risk.17,43,44\nTaken as a group, previous and present research findings suggest that the first year of life, especially the first 6 months, is critical for children with hearing loss. When hearing loss was identified and treated by this time, several independent researchers have reported that, as a group, children demonstrated average language scores that fell within the normal range when they were 1 to 5 years old.28,30 This finding is encouraging and suggests that early identification and subsequent intervention is associated with improved language development in deaf and hard-of-hearing children. 
If this is the case, it is critical that all infants with hearing loss be identified by 6 months of age and receive early intervention; universal newborn hearing screening would be an excellent vehicle for achieving this goal.\nThis study was supported by the National Institutes of Health (contract number NO1-DC-4–2141), Maternal and Child Health, the Colorado Department of Education, the University of Colorado-Boulder, and the Colorado Department of Public Health and Environment.\nWe wish to acknowledge the contributions of the following individuals to this project: Arlene Stredler-Brown, Mah-rya Apuzzo, Deborah DiPalma, Amy Dodd, Colette Roy, Joan McGill Eden, Kathy Watters, Colorado Home Intervention Program regional coordinators, Colorado Home Intervention Program parent facilitators, and the participating families.\n- Received August 5, 1997.\n- Accepted June 22, 1998.\nReprint requests to (C.Y.-I.) University of Colorado-Boulder, CDSS Building, Campus Box 409, Boulder, CO 80309.\n- Watkins P,\n- Baldwin M,\n- McEnery G\n- Northern JL,\n- Hayes DH\n- Geers A,\n- Moog J\n- Webster A. Deafness, Development, and Literacy. London, England: Methuen and Company Ltd; 1986\n- Allen TE. Patterns of academic achievement among hearing impaired students: 1974 and 1983.", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "Newswise - A new study of Colorado birth records shows that infants with low Apgar scores - the widely used measure of newborn health - are 10 times less likely to receive an initial hearing loss screening than babies with normal Apgars. Low-weight babies also are four times more likely to go untested. 
In both cases, these babies are at greater risk for the most common birth defect: hearing loss.\n\"While the data do not suggest why these babies are missed, we can clearly conclude that clinical measures showing poor health are strongly associated with both missed screening and risk of hearing loss,\" said lead study author Mathew Christensen, Ph.D.\nChristensen is the program evaluator at the Colorado Department of Public Health and Environment. He and his colleagues analyzed more than 200,000 state birth records from January 2002 to December 2004.\nThe study appears online and in the December issue of the American Journal of Preventive Medicine.\nNinety-eight percent of the infants received hearing screening a day or so after birth, but the 2 percent who did not undergo screening were likely to be those who needed it most. Moreover, of those who had a positive test - indicating loss of hearing - 18 percent did not receive timely follow up, which is a function of individual hospitals' outreach programs.\nSuch tests are the standard of care in the United States and 42 states require them, according to the Web site of the National Center for Hearing Assessment and Management.\nStudy co-author Vickie Thomson, Ph.D., director of newborn screening programs at the Department of Public Health and Environment, said that newborns' hearing is tested while they are resting or asleep and involves sending a signal of clicks and then measuring the reaction of the inner ear or brain to the sounds. 
Testing usually occurs four hours or more after birth or the next day.\nThomson said that her experience as an audiologist leads her to conclude that many small or low-Apgar babies could be too involved in other procedures or discharged too soon for clinicians to perform the test.\nKarl White, Ph.D., director of the National Center for Hearing Assessment and Management at Utah State University, said this finding is not surprising but is important: \"Basically, it says sick babies are less likely to get screened.\"\nAccording to the researchers, studies show that intervention by age of six months results in a return to near-normal ability to develop speech and language.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-7", "d_text": "doi:10.1136/adc.2008.151092.\n- NIH. Consensus development conference statement - early identification of hearing impairment in infants and young children - 1–3 March 1993. Int J Pediatr Otorhinolaryngol. 1993;27(3):215–27.\n- American Academy of Pediatrics Task Force on Newborn and Infant Hearing. Newborn and infant hearing loss: Detection and intervention. Pediatrics. 1999;103(2):527–30.\n- Busa J, Harrison J, Chappell J, Yoshinaga-Itano C, Grimes A, Brookhouser PE, et al. Year 2007 position statement: Principles and guidelines for early hearing detection and intervention programs. Pediatrics. 2007;120(4):898–921. doi:10.1542/peds.2007-2333.\n- Mincarone P, Leo CG, Sabina S, Costantini D, Cozzolino F, Wong JB, et al. Evaluating reporting and process quality of publications on UNHS: a systematic review of programmes. BMC Pediatr. 2015;15(1):86. doi:10.1186/s12887-015-0404-x.\n- Bureau International d’Audiophonologie. BIAP Recommendation 02/1 bis. 1996. 
http://www.biap.org/index.php?option=com_content&view=article&id=5%3Arecommandation-biap-021-bis&catid=65%3Act-2-classification-des-surdites&Itemid=19&lang=en. Accessed 11/07/2014.\n- World Health Organization. Primary Ear And Hearing Care Training Resource. Advanced Level. Geneva; 2006. http://www.who.int/pbd/deafness/activities/hearing_care/advanced.pdf. Accessed 11/07/2014.\n- Clark JG. Uses and abuses of hearing loss classification. Asha. 1981;23(7):493–500.\n- National Institutes of Health.
The specificity, positive predictive value, and probability of missing a true case were determined for the most promising criteria.\nResults: There were significant differences between the two groups with SNHL and TCHL. The mean latency of wave V 20 dB above threshold was 1 msec shorter in those with SNHL compared with those with TCHL. There were significant differences between children with PCHL and SNHL but no difference between those with PCHL and TCHL. When a criterion of < 7.6 msec was chosen to predict the presence of SNHL the test sensitivity was 0.98, test specificity 0.71, and positive predictive value was 0.66. Nine out of 10 of those with a latency 20 dB above threshold of < 7.0 msec had an SNHL.\nConclusions: The latency of wave V 20 dB above threshold measured using click ABR is a useful indicator of the type of hearing loss in babies referred from newborn hearing screening.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "Alexandria, VA – A new review of medical databases shows that neonatal hearing loss, already one of the most common birth disorders in the United States, is especially prevalent among Hispanic-Americans and those from low-income households, according to the April 2009 issue of Otolaryngology-Head and Neck Surgery. The wide-ranging study focused on hearing loss in newborns (neonates), children, and adolescents.\nThe authors also note serious flaws in the collecting of data on pediatric hearing loss, resulting in a fractured body of knowledge that is hindering a more complete evaluation of the problem's scope.\nThe researchers found that the average instance of neonatal (younger than one month old) hearing loss was 1.1 per 1,000 infants screened. 
The number varies from state to state, with cases being most prevalent in Hawaii (3.61 per 1,000), followed by Massachusetts and Wyoming.\nWhen looking at children as a larger group (combining neonatal through adolescent), the research indicates that compared to other ethnic groups, Hispanic-American children in all subgroups (Mexican-American, Cuban-American, and Puerto Rican) show a higher prevalence of hearing loss, with a similar prevalence existing in children in low-income households. The authors note that it is unclear whether instances of hearing loss actually increases as children grow older, adding particular weight to the neonatal results.\nThe authors conclude that in addition to the statistics presented, there exists a real need to establish a more unified system for the collection of regional and national health data. They note that within the existing databases, data collection methodologies are not standardized; the authors suggest creating multi-institutional national data repositories in an effort to standardize the information as it is collected. This could include a neonatal hearing loss screening registry within the Universal Newborn Screening Programs.\nApproximately two to four of every 1,000 children in the United States are born deaf or hard-of-hearing. Studies have shown that early diagnosis of hearing loss is crucial to the development of speech, language, cognitive, and psychosocial abilities. 
One in every four children born with serious hearing loss does not receive a diagnosis until age three or older, making early hearing screening a necessary step for ensuring a healthy life for a child.", "score": 24.296145996203016, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "Auditory maturation and congenital hearing loss in NICU infants\nRijping van het auditieve systeem en congenitaal gehoorverlies bij NICU kinderen\nThe number of preterm births has increased over the past decades as a result of increasing maternal age and in vitro fertilization (1). At the same time the survival of preterm infants has increased due to advances in perinatal and neonatal care. For example, antenatal corticosteroids for women with threatened preterm delivery, high-frequency oscillatory ventilation and inhaled nitric oxide have now become standard therapy (1). Unfortunately, these improvements sometimes come at a price. Neonatal intensive care unit (NICU) survivors have an increased risk of neurodevelopmental impairment, such as cerebral palsy, cognitive delay, blindness and deafness (2). Infants admitted to the NICU have an increased risk of congenital (present at birth) and acquired hearing loss compared to infants admitted to the well-baby nursery (3). Multiple risk factors have been associated with congenital hearing loss (Table 1) (4). Many of these risk factors occur in daily NICU care. The increased knowledge of the etiology of congenital hearing loss has put the emphasis not only on treating, but also on preventing congenital hearing loss. For example, bilirubin serum levels are kept within a very strict range in NICU infants. While prevention may not always be possible, the increased awareness has resulted in earlier diagnosis and careful counseling. Between 2002 and 2006 the universal newborn hearing screening (UNHS) program was introduced in the Netherlands. This has resulted in earlier identification and referral of infants with congenital hearing loss. 
Several studies have shown that early and adequate intervention for infants with congenital hearing loss minimizes future problems with speech and language development (5-6). Treatment before the age of six months results in better speech and language development at school age.\nKeywords: NICU, congenital hearing loss, deafness, infants, maturation\nPromotor: J.B. van Goudoever (Hans), R.J.", "score": 23.388393941208253, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "Procedures, guidelines and reports\nThe documents in this section have been developed by the NSU for health professionals providing services in or associated with the Universal Newborn Hearing and Early Intervention Programme (UNHSEIP).\nThe workforce development strategy and action plan (the Strategy) addresses the development of the newborn hearing screening and audiology workforce required for the implementation of the UNHSEIP. It recognises that the effective delivery of the UNHSEIP requires a multidisciplinary team.\nIn this section\nThe purpose of the National Policy and Quality Standards is to document the operational policy and quality standards of practice for providers of services within the Universal Newborn Hearing Screening and Early Intervention Programme (UNHSEIP).\nThe Universal Newborn Hearing and Early Intervention Programme (UNHSEIP) carries out a two-stage regime using aABR technology with the BERAphone equipment to screen for hearing loss. This is introduced below.\nThe NSU recommends that the parents and guardians of all babies born in New Zealand are offered newborn hearing screening for their babies.\nReports into quality improvements for the UNHSEIP programme", "score": 23.215990088743876, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "Permanent childhood hearing loss can be congenital, delayed-onset, progressive, or acquired in nature. 
Congenital hearing loss refers to hearing loss that is present at birth and is often identified through a newborn hearing screening conducted shortly after birth. While estimates vary, some hearing loss in childhood is delayed-onset or progressive in nature. As a result, it is important to provide audiologic monitoring over time for children who are considered to be \"at risk\" for hearing loss. In addition, some mild hearing losses as well as auditory neuropathy may not be identified through newborn hearing screening due to the current limitations of the test equipment or testing methodology used.\nHearing is critical to speech and language development, communication, literacy, and learning. Early identification and intervention of hearing loss can lessen the impact on a child's development (Sininger, Grimes, & Christensen, 2010; Yoshinaga-Itano, Baca, & Sedey, 2010). The Joint Committee on Infant Hearing (JCIH, 2007) recommends that\n- all children be screened for hearing loss no later than 1 month of age,\n- hearing and medical evaluations be completed no later than 3 months of age,\n- infants with confirmed hearing loss are fit with amplification (if the family chooses and if appropriate) within 1 month of diagnosis,\n- early intervention services begin no later than 6 months of age.\nNote: The scope of this content is limited to the diagnosis and management of permanent hearing loss for children from birth through 5 years of age from an audiological perspective. Resources for hearing screening and habilitation, as well as hearing loss for school-age and adult populations, are under development.\nFamilies who are actively involved in the assessment and treatment process achieve better outcomes (DesJardin, 2006). 
It is paramount that audiologists incorporate family-centered practice into the identification and treatment of young children who are deaf or hard of hearing; family-centered activities include\n- engaging the family and using the child's toys from home in various aspects of the evaluation and therapy sessions,\n- suspending judgment and building rapport with the family about their needs and interests,\n- matching re/habilitation with the family's needs and goals,\n- recognizing the family's rights regarding informed consent and confidentiality.\nThe goal of family-centered practice is to create a partnership with the family so that the family fully participates in all aspects of the child's care.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-3", "d_text": "The typology and the number of tests, and the healthcare setting in which the examinations are performed (before or after discharge) – The program needs to be well balanced for sensitivity, specificity, coverage of the population and costs per subject identified. E.g., Kennedy et al. reported having changed their protocol to use unilateral failure on aABR, rather than bilateral failure on Transient Evoked OAEs (TEOAEs) testing, as the second step; this change was associated with a reduction in the screen-failure rate from 2.4 % (95 % CI 2.2–2.6) to 1.3 % (1.1–1.5) of babies screened. The presence of a specific protocol for neonates at higher risk (e.g., aABR for neonates staying in NICUs for more than 5 days, instead of TEOAEs) - Such neonates, in fact, are at risk of having neural hearing loss (auditory neuropathy/auditory dyssynchrony), which is not detectable with TEOAEs. The set of examinations for the full audiological evaluation – E.g., the one recommended by the JCIH. 
The tasks to be performed to increase the percentage of enrolment and to reduce loss to follow-up (neonates referred for further examinations who do not show up at the planned appointments) – With reference to the former, specific actions should be taken to communicate appropriately with families, creating the conditions for informed consent. With reference to the latter, a survey conducted in the USA showed that only 62 % of all newborns who needed a diagnostic evaluation actually received one and, of those, only 52 % by the age of 3 months (as recommended by the JCIH). Loss to follow-up at all stages of the EHDI process continues to be a serious concern also for the World Health Organization (WHO), which stresses the importance, for the program's success, of monitoring and implementing all phases of the screening (responsibilities, training, information campaigns, quality assurance procedures).
At the time, this statement was reasonable in that before Bess and Paradise's commentary, studies examining the effects of early identification and subsequent intervention either defined early identification as before 18 months (rather than 6 months) of age26 or did not specify the number of children identified by the age of 6 months.27 Nevertheless, in one of these older studies, White and White26 reported significantly better language scores for a group of severely and profoundly deaf children whose average age of identification was 11.9 months (with an average age at intervention of 14 months) as compared with children with the same degree of hearing loss whose average age of identification was 19.5 months (with an average age at intervention of 26 months).
Since the publication of Bess and Paradise's commentary, Robinshaw28 described 5 young children with severe and profound hearing loss whose deafness was confirmed between 3 and 5 months of age.
All of the children wore hearing aids by the age of 6 months. Robinshaw compared her deaf children with 5 normally-hearing control children and with data from a previous study involving 12 children with severe and profound hearing loss whose average age of identification was 2 years, 3 months. She found that the earlier-identified children acquired vocal communicative and linguistic skills at an age similar to the 5 normally-hearing control children and well before the deaf children who were identified later. Her investigation supports the value of early identification followed by immediate amplification; however, the group of children studied was small, only children with severe and profound hearing loss were included, and no data from standardized assessments were presented. In addition, the only treatment consistent across all 5 children was the early fitting of amplification. The frequency of additional early intervention varied among children.
Apuzzo and Yoshinaga-Itano29 responded to Bess and Paradise's25 concerns more directly. They compared language ability at 40 months of age across four age-of-identification groups: 1) 0 to 2 months, 2) 3 to 12 months, 3) 13 to 18 months, and 4) 19 to 25 months.
The hearing loss of the children in each of the groups ranged from mild to profound and all of the children received ongoing intervention services from the same program shortly after their hearing loss was identified.
This question will hopefully be addressed in an upcoming NIH-funded clinical trial that our group will be conducting to compare hearing, speech and language outcomes in CMV-infected infants.”
The researchers suggest, based on their analysis of the data, that screening compliance could be increased by focusing educational and outreach efforts on certain groups who were less likely to get their infants screened for congenital CMV: less educated mothers, babies not born in a hospital and infants who received hearing tests later than 14 days after birth.
Other co-authors of the study include Cathleen D. Zick, professor in the Department of Family and Consumer Studies at the University of Utah, and Jill Boettger, CMV data coordinator at the Utah Department of Health.
The full study can be found here.
A monitoring, reporting and verifying phase where the Monitor action is activated periodically or upon request to aggregate data and build performance indicators. Verification is made by comparing indicators with benchmarks, and a report is generated with an analysis of the reasons for possible deviations, which can push for a redefinition of the protocol and/or a re-organization of its execution.
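The monitor-and-verify loop just described — aggregate data, build indicators, compare with benchmarks, report deviations — can be sketched as a small report generator. The indicator names and benchmark values here are illustrative, not taken from any cited program:

```python
def verify_indicators(indicators: dict, benchmarks: dict) -> list:
    """Compare performance indicators against benchmarks and list deviations.

    Each benchmark is a (comparison, target) pair: "max" means the
    indicator must not exceed the target, "min" means it must reach it.
    """
    deviations = []
    for name, (kind, target) in benchmarks.items():
        value = indicators.get(name)
        if value is None:
            deviations.append(f"{name}: no data collected")
        elif kind == "max" and value > target:
            deviations.append(f"{name}: {value} exceeds benchmark {target}")
        elif kind == "min" and value < target:
            deviations.append(f"{name}: {value} below benchmark {target}")
    return deviations

# Example: a refer rate that should stay under 4% and screening
# coverage that should stay above 95% (illustrative values).
report = verify_indicators(
    {"refer_rate_pct": 5.2, "coverage_pct": 97.0},
    {"refer_rate_pct": ("max", 4.0), "coverage_pct": ("min", 95.0)},
)
```

A deviation list like this is the raw material for the periodic report, and a non-empty list is what would trigger the protocol redefinition or re-organization mentioned above.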
Reports can also be used for dissemination purposes.
A more detailed description of the single phases follows in the sections below (note that the EXECUTING UNHS PROGRAM phase is out of the scope of our work and will not be discussed further).
Planning UNHS program
In order to deliver a protocol it is necessary to define the target and the processes.
Definition of the target
Two elements are of importance: the definition of hearing loss and the identification of criteria used to define newborns at higher risk of hearing loss. The hearing threshold and unilateral vs. bilateral loss have an impact on the number of neonates going through testing and evaluations, the number of infants admitted to therapy, the rate of newborns with hearing loss receiving early diagnosis and treatment, and the number of neonates that could erroneously be evaluated as having no hearing deficits. These two parameters are fundamental for inter-program comparison. Several classifications of hearing loss have been formulated [19–21], which bring in the definition of the levels of severity (see Additional file 1). It is therefore necessary to make the choice explicit, also because of its impact on the typology of treatment/rehabilitation. We have previously observed that, in the absence of standardization, several thresholds have been applied in UNHS programs (26 to 40 dB HL) [6]. Newborns with risk factors for neonatal hearing loss have about a 10-fold probability of hearing deficits with respect to the overall population [1, 2]. Criteria for higher audiological risk have been defined by several bodies (the JCIH, the US National Institutes of Health - NIH, ASHA) or are chosen directly by program coordinators (e.g., Clemens et al.
).
The audiological risk criteria are relevant in that a specific audiological risk may require a specific screening protocol, and the timing and number of hearing re-evaluations (surveillance) for infants at risk should be customized and individualized depending on the relative likelihood of a subsequent delayed-onset hearing loss.
Definition of the processes
Activities, detailed actions, decision nodes, workflows, roles, and environmental conditions have to be identified and specified. More specifically, key issues are reported.
In: Schildroth AN, Karchmer MA, eds. Deaf Children in America. Boston, MA: College-Hill Press; 1986:161–206
- Holt JA
- Quigley SP. Environment and communication in the language development of deaf children. In: Bradford LJ, Hardy WG, eds. Hearing and Hearing Impairment. New York, NY: Grune and Stratton; 1979:287–298
- Elssmann S, Matkin N, Sabo M
- Mehl AL, Thomson V. Newborn hearing screening: the great omission. Pediatrics. 1998;101(1). URL: http://www.pediatrics.org/cgi/content/full/101/1/e4
- Ross M
- American Academy of Pediatrics
- US Department of Health and Human Services. Healthy People 2000. Public Health Service, DHHS Publication No. 91–50213: Washington, DC: US Government Printing Office; 1990
- NIH Consensus Statement. Identification of Hearing Impairment in Infants and Young Children. Bethesda, MD: NIH; 1993;11:1–24
- American Academy of Audiology
- Bess FH, Paradise JL
- White SJ, White REC
- Moeller MP. Early intervention of hearing loss in children. Presented at the Fourth International Symposium on Childhood Deafness; October 9–13, 1996; Kiawah Island, SC
- Stredler-Brown A, Yoshinaga-Itano C. F. A. M. I. L. Y assessment: a multidisciplinary evaluation tool. In: Roush J, Matkin N, eds. Infants and Toddlers With Hearing Loss. Baltimore, MD: York Press; 1994:133–161
- Calhoun D.
A Comparison of Two Methods of Evaluating Play in Toddlers. Ft Collins, CO: Colorado State University; 1987. Thesis
- Ireton H, Thwing E. The Minnesota Child Development Inventory.
The test must be timely. The first bloodspot test should be done between 24 and 36 hours of age or prior to discharge from the hospital. For some disorders, false negative results can occur with later testing. The second screen should be done at the first outpatient visit between 5 and 10 days of age.
The hearing test is done in the hospital, and any re-screening should be done within 2 weeks; diagnostic testing should be done as soon as possible following a failed outpatient screen. Completing diagnostic testing before three months of age ensures that testing can be done without anesthesia or sedation.
The Arizona screening panel includes:
- 6 amino acid disorders
- fatty acid oxidation disorders
- 9 organic acid disorders
- Biotinidase deficiency
- Classic galactosemia
- Congenital Hypothyroidism
- Congenital Adrenal Hyperplasia
- 3 hemoglobin diseases
- Cystic Fibrosis
- Hearing loss
The incidence of these disorders in the population is low, but the potentially devastating results and the high costs of treating undiagnosed infants are thought to justify mass screening. Hearing loss is the most common, at approximately 2-4 per 1000 births.
Arizona Newborn Screening Program Guidelines-August 2010
Commercially available software
If your hospital has a large number of births, we recommend that you purchase one of the commercially available software programs.
In addition to tracking data, these software programs can generate regular reports and follow-up letters to families and physicians.
There are several software products that are currently commercially available for newborn hearing screening data management:
Some of these companies can be used for managing all aspects of a statewide newborn screening program, such as hearing screenings, birth defect information, and metabolic tests.
Hospitals with smaller numbers of births may find the software included with the equipment they purchase to be sufficient. Some hospitals even find a simple logbook or Excel spreadsheet to be adequate.
What data should you monitor
The number of births your hospital has will generally dictate how you decide to track and monitor your data. You should be able to easily obtain the following information:
- Overall refer rate. How many babies were screened, how many passed, and how many were referred.
- Refer rates for individual screeners. This tells you how well each screener is doing.
- Infants that were missed prior to discharge. You will need to follow up and make sure these infants obtain a hearing screening.
- Infants that need to return for a re-screen. You will need to make sure these infants return for their follow-up re-screen appointments.
- Infants that need a diagnostic audiologic evaluation. You will need to follow up with the infant's parents or pediatrician to ensure that they scheduled a diagnostic evaluation.
The Department of Health sends hospitals monthly reports with these screening statistics. Some hospitals find these reports to be sufficient for data management purposes.
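The refer-rate figures above can be computed directly from raw screening records. A minimal sketch, assuming a simple record layout (the field names are illustrative, not from any particular screening software):

```python
from collections import defaultdict

def refer_rates(records):
    """Compute the overall and per-screener refer rates (as percentages).

    Each record is a dict like {"screener": "A", "result": "pass"} or
    {"screener": "A", "result": "refer"}; the layout is illustrative.
    Returns (overall_rate_pct, {screener: rate_pct}).
    """
    totals = defaultdict(int)
    refers = defaultdict(int)
    for r in records:
        totals[r["screener"]] += 1
        if r["result"] == "refer":
            refers[r["screener"]] += 1
    per_screener = {s: 100.0 * refers[s] / totals[s] for s in totals}
    overall = 100.0 * sum(refers.values()) / sum(totals.values())
    return overall, per_screener
```

Comparing the per-screener rates against the overall rate is a quick way to spot a screener who may need retraining.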
As mentioned above, each hospital, regardless of size, should keep its own record of hearing screenings, be it a log sheet, book, or electronic record, to manage daily screening needs.
Quality assurance guidelines
The following guidelines serve as a good baseline by which to monitor your program:
- Within three months of initiating a hearing screening program:
  - Maintain a referral rate no higher than 8% for the initial screening.
  - If the hospital performs outpatient rescreening, maintain a referral rate no higher than 4%.
- Within six months of program initiation, screen a minimum of 95% of infants prior to discharge or before 1 month of age.
- The benchmark for percent of infants lost after not passing the initial screen should be 10% or less.
Objective To examine the incidence of pediatric congenital hearing loss and the timing of diagnosis in a rural region of hearing healthcare disparity. the timing of diagnostic screening. Results In Kentucky during 2009-2011 there were 6,970 newborns who failed hearing screening; the incidence of newborn hearing loss was 1.71 per 1000 births (1.28/1000 in Appalachia and 1.87/1000 in non-Appalachia). 23.8% of Appalachian newborns compared with 17.3% of non-Appalachian children failed to obtain follow-up diagnostic testing. Children from Appalachia were significantly delayed in obtaining a final diagnosis of hearing loss compared with children from non-Appalachian regions (p=0.04). Conclusion Congenital hearing loss in children from rural regions with hearing healthcare disparities is a common problem, and these children are at risk for a delay in the timing of diagnosis, which has the potential to limit language and social development. It is important to further assess the causative factors and develop interventions that can address this hearing healthcare disparity issue.
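Incidence figures like those reported above are rates per 1,000 births; the arithmetic can be sketched as a one-line helper (the example counts are illustrative, not the study's raw data):

```python
def incidence_per_1000(cases: int, births: int) -> float:
    """Incidence expressed as cases per 1,000 births."""
    return 1000.0 * cases / births

# Illustrative numbers only: 7 confirmed cases among 5,000 births
rate = incidence_per_1000(7, 5000)  # 1.4 per 1,000
```

The same rate-per-1,000 convention makes programs and regions directly comparable regardless of their birth volumes.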
With an incidence of approximately 1.4 per 1000 newborns screened,1 hearing loss is the most common neonatal sensory disorder in the United States. The sense of hearing is important during the early years of life for the development of speech, language and cognition. Hearing impairment in early childhood can result in lifelong learning delay and disability; however, early identification and intervention can prevent educational and interpersonal effects.
Monday, May 8, 2017, 2:00 – 2:15 p.m. EDT
Using a Multi-Modal Approach to Support Children with Hearing Loss
Karen Latimer, Assistive Technology Specialist, Delaware Assistive Technology Initiative, Center for Disabilities Studies
Karen Latimer email@example.com
Link to webinar: https://ncham.adobeconnect.com/bhsm/
Via Washington State Department of Early Learning's Early Support for Infants and Toddlers Program:
Coffee Break webinars last 15 minutes! How cool is that?
May is Better Hearing and Speech Month! In celebration, OSEP’s Early Childhood Assistive Technology Model Demonstration grantees and the Center on Technology and Disability are partnering with the Office of Head Start’s Early Childhood Hearing Outreach (ECHO) Initiative to join the American Speech-Language-Hearing Association (ASHA) in celebrating this year’s theme “Communication: The Key to Connection.” Throughout the month of May, ASHA partners with national and local stakeholders to engage in a multifaceted public education campaign to raise awareness about the critical need to intervene early when young children are identified with communication disorders. The Coffee Break Webinars will focus on raising awareness about the use of assistive technology and the importance of frequent hearing screenings. Please join the Coffee Break webinar series to learn more about assistive technology and hearing screening.
There is no pre-registration to join the webinars.
Summary of the National Institutes of Health consensus: early identification of hearing impairment in infants and young children. Am J Otol. 1994;15(2):130–1.
- American Speech-Language-Hearing Association. Guidelines for the identification of hearing impairment in at risk infants age birth to 6 months. Asha. 1988;30(4):61–4.
- Clemens CJ, Davis SA, Bailey AR. The false-positive in universal newborn hearing screening. Pediatrics. 2000;106(1):5. doi:10.1542/peds.106.1.e7.
- Kennedy C, Kimm L, Thornton R, Davis A. False positives in universal neonatal screening for permanent childhood hearing impairment. Lancet. 2000;356(9245):1903–4. doi:10.1016/s0140-6736(00)03267-0.
- Shulman S, Besculides M, Saltzman A, Ireys H, White KR, Forsman I. Evaluation of the Universal Newborn Hearing Screening and Intervention Program. Pediatrics. 2010;126:S19–27. doi:10.1542/peds.2010-0354F.
- World Health Organization. Newborn and infant hearing screening - Current issues and guiding principles for action: outcome of a WHO informal consultation held at WHO headquarters, Geneva, Switzerland. WHO. November 2009. http://www.who.int/blindness/publications/Newborn_and_Infant_Hearing_Screening_Report.pdf.
Despite the many similarities between the two groups, there were two identified variables on which the groups differed, ie, age of identification (and subsequent intervention) and cognitive ability. Differences in the participants' cognitive abilities were controlled statistically in all analyses.
Thus, the remaining variable (age of identification and subsequent intervention) must be considered as a possible explanation for the language differences noted at 1 to 3 years of age.\nTo provide the most solid evidence that early identification and subsequent intervention impacts later language ability, a controlled, prospective investigation with random assignment to early- versus late-identified groups and treatment versus no-treatment groups might be proposed. Presently, such a study is not feasible for several reasons. First, random assignment to groups based on time of identification is not possible in an increasing number of states because of recent legislative mandates to screen the hearing of all newborns. Even in those states without universal hearing screening programs, parental cooperation for such a study is likely to be quite low. Under the Individuals with Disabilities Education Act, families are entitled to a timely evaluation if they suspect their child has a disability. Once parents become suspicious that their child has a hearing loss, it is unlikely they would be willing to delay an evaluation even if they previously had consented to being placed in a late-identification group.\nSoliciting participation in a study that might result in assignment to a no-treatment (or delayed-treatment) group also is likely to meet with substantial parental resistance. This is because, in addition to timely assessment, the Individuals with Disabilities Education Act stipulates the provision of prompt intervention services after a disability is identified. It is likely that most parents would not be willing to delay these federally-guaranteed services for their child in the interest of research.\nBecause of the obstacles to randomly assigning children to early- and late-identification/intervention groups, the topic of early identification and intervention must be explored through descriptive studies using naturally occurring groups of children. 
The results of such descriptive studies become more powerful when they are replicated by a variety of different researchers with independent samples of children. Such is the case with the present question. The language advantage reported in this study for children who were identified earlier is consistent with several previous studies on the early identification of hearing loss. White and White,26Robinshaw,28 Moeller,30 and Apuzzo and Yoshinaga-Itano29 all have reported significantly better language scores for children whose hearing losses were identified earlier.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "Information for Hearing Screening Professionals\nTo operate smoothly, your program will need screeners and a program coordinator. You will need to make sure you have coverage of all shifts so that no babies are missed prior to discharge.\nThe program coordinator is often an OB or nurse manager. Sometimes this person is a consulting audiologist.\nThis person will oversee the screening program, train the screeners, monitor referral rates, and ensure that follow-up is obtained for infants who do not pass the hearing screening.\nThe program coordinator often also assists with performing the hearing screenings. Generally speaking, your program coordinator will need to commit 2–6 hours per week per thousand births to manage and coordinate your program.\nSee a sample job description for program coordinators (PDF).\nThe screeners will be responsible for performing the hearing screenings. Your screening staff should have a low turnover rate and be reliable. It is important not to have too many screeners.\nThe fewer screeners you have, the more practice each screener will get, the better and faster they will be at screening. 
Limiting the number of screeners also ensures that screeners take responsibility for the program.
Your screeners can be:
- Nurses/nursing aides
- Lactation consultants
- OT/PT personnel
- Technicians hired and trained specifically for screening
See a sample job description for screeners (PDF).
Training Your Screeners
Your screeners will need to be trained in:
- Operation of equipment
- Proper communication of screening results
- Education of parents about hearing screening
- Infection control procedures
- Baby handling skills
Your screeners should also be evaluated at regular intervals to help ensure the ongoing quality of your program. Annual competencies are recommended. The National Center for Hearing Assessment and Management (NCHAM) has created a free and interactive web-based Newborn Hearing Screening Training Curriculum that can be used by programs to complete annual screener competencies.
It is recommended that you use a checklist when training screeners to ensure adequate skills in all aspects of the screening process.
A separate checklist for communication of the screening results is recommended as well, due to the importance of proper communication of the screening results and what they really mean.
Republic Act 9709 or the Universal Newborn Hearing Screening and Intervention Act of 2009 gives hope for the early prevention of congenital hearing disabilities in newborn babies in the Philippines.
In the research forum on hearing impairment, “Understanding the Silent World of the Hearing Impaired,” organized by the Metro Manila Health Research and Development Consortium (MMHRDC) and the Philippine Council for Health Research Development of the Department of Science and Technology (PCHRD – DOST), held at De La Salle University, Manila on November 18, 2011, the importance of having newborns screened for risks of developing hearing 
disabilities because of congenital anomalies was addressed. Dr. Charlotte Chiong, Assistant Director for the Philippine National Ear Institute and a strong advocate of the law, spoke on the importance of the implementation of RA 9709.
“As hearing impairment is an invisible disability, majority of the people do not fully understand the implications of hearing loss and its consequences on the deaf person,” Dr. Chiong said. “For those parents who are not fully aware of such early detection and intervention programs, most of the time they settle and give up on their child’s hearing disability altogether.”
RA 9709, which was signed and approved by President Gloria Macapagal-Arroyo on August 12, 2009, mandates that every newborn infant in the country undergo the Universal Newborn Hearing Screening (UNHS) before they reach their third month of age. The law compels parents, health practitioners and medical institutions to provide the necessary procedures to detect risks of developing congenital hearing loss in children in the future. Infants who were born in hospitals should be screened before they are discharged, while newborn babies who were born at home should be screened before the third month after birth.
According to Dr. Chiong, “Hearing impairment poses a great deal of consequences that mainly affects the individual’s communicative and linguistic skills, as well as social and educational competencies. One of the goals of UNHS is to prevent these repercussions through early identification and intervention by allowing the child to develop normal speech and have an improved quality of life alongside their hearing peers.”
In the Philippines, about 8 babies are born deaf every day while three babies are born with hearing impairment every three hours. 
Through the RA, every newborn baby in the country gains access to all the necessary medical screening as well as medical intervention for congenital hearing defects.
Give us books, pamphlets, phone numbers, support groups, anything that will be helpful to us in understanding our child's hearing loss and where to find help. If we ask a question, and you don't have the answer, help us find the resource where we can find the answer. As children and parents grow, their choices and need for information grow. *Hands & Voices (2007) How difficult would it be to do this?
12 Intervention (cont.) Goal #3: Assist parents in their ability to establish an effective language learning environment for their child… *Strategies (*Spencer, P. (2003)):
- Sensitivity to infant: visually aware and responsive to opportunities to interact… treat behavior as intentional… expect communication
- Topic responsiveness: follow child's lead
- Talking to infant: melodic & repetitive
- Use visual & tactile cues… pointing & stroking
13 Intervention (cont.) Goal #3: Strategies (cont.)
- Exaggerated facial expression
- Establish mutual eye gaze, moving into child's line of vision
- Commenting on what the child is doing & feeling, associating language with actions
- Expanding on child's communicative behavior, providing more conventional communicative models
- Waiting for the child to look up
- Producing short utterances, w/ repetition, of single signs or words
- Pointing or tapping objects being discussed…
14 Intervention (cont.) Goal #3: Strategies (cont.)
- Moving interesting objects up to face
- Tapping the child to signal look at me
- Attempting to prolong the interaction for as many turns as possible
- Use action-oriented vs. 
naming activities
- Provide opportunities for parents of newly identified children to talk with parents of older children who are d/hh + observe the parents interacting with their children
How difficult would it be to do this?
15 Intervention (cont.) Goal #4: [provided by the participants] Strategies:
16 Resources: Communication Choice Decision. Wisconsin's babies & hearing: An interactive notebook for families with a young child who is deaf or hard of hearing - Building blocks for communication. Web Site: National Center for Hearing Assessment & Management (www.infanthearing.org/familysupport/wisconsin/index.html)
17 Resources (cont.)
One in every 350 babies is born deaf or hard of hearing in Minnesota each year (approximately 200 total). Nationally, CDC data show 1.4 out of every 1000 infants screened at birth (range 0-4.6 per 1000 screened) have some degree of hearing loss. This makes hearing loss one of the most common conditions present at birth. This prevalence rises to 5 per 1000 children with hearing loss by ages 3-17, due to children developing or being identified with hearing loss later in childhood. The majority of babies who are deaf or hard of hearing are born to hearing parents and, most often, the parents have no experience with hearing loss.
The degree of hearing loss is measured in decibels (dB). For children, a slight hearing loss ranges from 16-25 dB, a mild hearing loss from 26 to 40 dB, moderate from 41 to 55 dB, moderately severe from 56 to 70 dB, severe from 71 to 90 dB, and a profound hearing loss > 90 dB HL. The degree of hearing loss can be the same or different in each ear.
A child with a mild high frequency hearing loss will have trouble hearing and understanding soft speech, speech from a distance, and speech with background noise. 
With moderate hearing loss the child will have difficulty with conversations even at close distances. An audiogram, which is a graph, illustrates the type, degree, and configuration of hearing loss. To understand more about audiograms, visit American Speech-Language-Hearing Association: The Audiogram.\nEarly intervention has a positive impact on language and child development. We know that children who do not receive early intervention services may have communication difficulties. These children may also face educational, psychological, and social challenges. This is why it is important to identify hearing loss as early as possible.\nMinnesota's Early Hearing Detection and Intervention (EHDI) Program works to ensure that every baby who does not pass hearing screening has timely and appropriate follow-up. This includes an audiologic evaluation (hearing test) if needed. If a child is identified with hearing loss, Minnesota's EHDI program helps families access appropriate and timely intervention, statewide services, and needed resources.\nFor more information about hearing loss, visit Minnesota's Early Hearing Detection and Intervention (EHDI) Program. Other helpful information can be found at: CDC Hearing Loss in Children, Babyhearing.org, Minnesota Hands and Voices, and the American Speech-Language-Hearing Association.", "score": 18.90404751587654, "rank": 70}, {"document_id": "doc-::chunk-17", "d_text": "In the present investigation, and in all four studies documenting a language advantage for the earlier-identified group, children received early intervention services shortly after their hearing losses were identified. 
It is unlikely that language differences of the magnitude documented in these studies would occur simply by identifying hearing loss early; early identification alone is unlikely to result in improved outcomes if it is not followed by early intervention.
Research on school-aged children with severe-to-profound hearing losses indicates a 40-point discrepancy between performance intelligence scores (mean of 100) and verbal intelligence scores (mean of 60)9,39,40; even academically successful deaf students demonstrate a 20-point discrepancy. It is interesting that a cognitive-language quotient discrepancy was already present by 3 years of age in the later-identified children in this study, raising the possibility that the cognitive-linguistic gap previously reported in school-aged children may have its roots in the first year of life.
In the four previous investigations that have noted better language skills in early-identified children, the average age of identification for the early group was below 12 months of age (with three of the four studies defining early identification as before 3 to 6 months of age). In the present study, there was no significant difference in language scores between four subgroups of later-identified children who were divided sequentially according to age of identification (from 7 months to greater than 25 months of age). This may explain the results of a previous study that examined the contribution age of intervention makes to later language ability and failed to find any significant contribution.27 In that study, 91% of the children began intervention some time before 3 years of age. Specific information regarding the distribution by age of intervention was not provided; however, unless a large proportion of the children began intervention in the first 6 months of life, this study is consistent with the results of the present investigation. 
That is, the present findings, and the pattern that has emerged from previous studies, suggest that for an earlier-identified group to demonstrate significantly better language skills than a later-identified group, identification must truly occur early (ie, within the first 6 months of life).\nBefore the advent of universal newborn hearing screening programs, identifying hearing loss by 6 months of age was rarely accomplished. Parents generally do not suspect a hearing loss until their child fails to meet important speech and language milestones at 1 to 2 years of age.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "\"It's also created a strong partnership between the state laboratory, practitioners, and medical experts at OHSU.\"\nThe screening itself is performed when the infant is only a few days old. Small drops of the baby's blood are collected on special filter paper, which is sent to the state lab in Portland where tests are done.\nThe lab tests the blood for phenylketonuria (PKU), Maple Syrup Urine Disease, sickle cell disease, hypothyroidism, galactosemia, biotinidase deficiency, congenital adrenal hyperplasia (CAH), and several urea cycle, amino acid, organic acid, and fatty acid disorders (including MCAD). During the past 18 months these disorders have been detected, and disease prevented, in 72 Oregon newborns, Skeels said.\nThe March of Dimes also recommends that newborns be tested for hearing loss. In 2000, the Oregon legislature passed a bill requiring all hospitals with more than 200 births per year to screen babies for hearing loss.\nTwenty-four other states require newborn hearing screening. 
Oregon's program is overseen by DHS, according to Skeels.\nInformation about newborn screening is on the Web at www.dhs.state.or.us/publichealth/nbs/index.cfm; to learn more about newborn hearing\ngo to www.dhs.state.or.us/publichealth/pch/hearing/index.cfm", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-14", "d_text": "Furthermore, early intervention and preschool providers and parents should regularly conduct observational checks of the child’s listening ability through the HA (e.g., Ling 6 sound test), which may put them in a position to notice threshold shifts or HA malfunction and make immediate referrals to the audiologist.\n- Adjust HA gain to account for the decline in the sound level in the ear canal as the child experiences normal physical growth (McCreery et al. 2015a, this issue, pp. 24S–37S).\n- Carefully monitor the outcomes of all CHH, but exercise special vigilance with regard to children with moderate-to-severe HL, who are at the greatest risk for language (Tomblin et al. 2015a, this issue, pp. 76S–91S) and auditory delays (McCreery et al. 2015b, this issue, pp. 60S–75S).\n- Consider also that it may be especially important to carefully monitor and support caregiver–child interactions for dyads from lower SES backgrounds as well as caregivers who have children with moderate-to-severe HL (Ambrose et al. 2015, this issue, pp. 48S–59S).\n- Provide additional support to caregivers from lower SES backgrounds related to the achievement of consistent HA use (Walker et al. 2015, this issue, pp. 38S–47S). This might include regular hands-on training with the HAs and provision of access to appropriate parent-to-parent support experiences.\nThese suggested practice shifts have costs associated with them. The practicalities of limited or no reimbursement, limitations in staff time, and economic barriers for families may represent major barriers to implementation. 
In light of these issues, we have sought to highlight populations at greater risk (moderate-to-severe losses and lower SES backgrounds) and suggest that any needs associated with children in these groups be considered priorities for intervention. Furthermore, past experience with CHH suggests there are costs associated with not making the suggested practice shifts, in terms of failing to prevent the impact of HL on literacy, socialization, and employment (Bess et al.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-4", "d_text": "Realistic Timelines for Developing Speech and Language\nParent Priorities: At Time of Diagnosis (Mild-Moderate Hearing Loss) l Causes of Hearing Loss l Understanding the Audiogram l Learning to Listen and Speak / Understanding the Ear and Hearing l Coping with Emotional Aspects / Communication Options\nParent Priorities: A Few Months Later (Mild-Moderate Hearing Loss) l Learning to Listen and Speak l Realistic Timelines for Developing Speech and Language l Responsibilities of Early Intervention Agencies l Legal Rights of Children with Hearing Loss\nParent Priorities: A Few Months Later (Mild-Moderate Hearing Loss) l Opportunities to Interact with Other Parents\nThere are other families out there that can and will support you and your decisions without making judgments. Find us. Speak to other parents; they will help you heal.\nMild to Moderate - At Diagnosis: 1. Causes of Hearing Loss 2. Understanding the Audiogram 3. Learning to Listen and Speak and Understanding the Ear and Hearing 4. Coping with Emotional Aspects and Communication Options - A Few Months Later: 1. Learning to Listen and Speak 2. Realistic Timelines for Developing Speech and Language 3. Responsibilities of Early Intervention Agencies 4. 
Legal Rights of Children with Hearing Loss and Opportunities to Interact with Other Parents\nAdvice for (Entry Level) Providers from Families l Form a relationship with the family l Include other children in the family l Be an intuitive listener l Provide information regarding typical development as well as hearing loss l Please don't always be so overly energetic l Keep up to date on the newest stuff Rice & Lenihan 2005\nListening is the Recurring Theme "Many a man would rather you heard his story than granted his request." Phillip Stanhope, Earl of Chesterfield "I remind myself every morning: Nothing I say this day will teach me anything. So if I'm going to learn, I must do it by listening." Larry King\nTop 10 Things Parents Want Us to Hear l with apologies to Dave… Adapted from Roush and Matkin, 2004\n#10 Talk to us… but listen too l If you're new at this (and even if you're not) there's a tendency to talk too much.", "score": 18.37085875486836, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "Hearing screening of newborns is a prerequisite for the early intervention and support needed by hearing impaired children. Early intervention is beneficial for the children's speech, educational, social and emotional development.\nThis has always been the main argument in favour of the universal newborn hearing screening programmes, first implemented in the 1990s and still being introduced around the world. Uruguay is among the countries currently preparing to offer hearing screening of all newborns. The first reports about the long-term effects of these programmes have been issued. They conclude that hearing screening works as advertised.\nFlanders: 85 percent in mainstream schools\nIn the Flanders region of Belgium, all newborns have been hearing screened since 1998. 
The vast majority of newborns found to have hearing loss in screenings between 1998 and 2003 made it into mainstream schools, according to a recent evaluation of these children's speech and educational development.\n85 percent of the children five and a half years of age or older and with no other disability than hearing loss go to mainstream schools. Among those whose hearing loss was treated with a cochlear implant, 79 percent attend mainstream schools.\nThe researchers behind this study concluded that early intervention in hearing impaired children may improve language outcomes and subsequent school and occupational performance.\nUSA: Screening benefits speech development\nIn the United States, hearing screenings became common practice following a recommendation made by the US Preventive Services Task Force in 2001. By 2006, 46 of the 50 states had hearing screening programmes.\nA new report, based on a survey of scientific articles about hearing screening, published since 2002, confirmed that children with hearing loss who had universal newborn hearing screening have better language outcomes at school age than those not screened.\nIn particular, the ability to listen and understand speech is supported by early intervention following hearing screening. Children whose hearing loss was identified early, and children who had hearing screenings as newborns, were found at 8 years of age to be better at listening and understanding speech than those whose hearing loss was discovered late and children who had not been hearing screened. 
No difference was found between the two groups in terms of speech ability and language.\nSources: International Journal of Pediatric Otorhinolaryngology; Pediatrics; www.infanthearing.org", "score": 17.397046218763844, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "Idaho's Infant Toddler Program (ITP) coordinates a system of early intervention services to assist Idaho children birth to three years of age who have a developmental delay or who have conditions (such as prematurity, Down Syndrome, hearing loss) that may result in a developmental delay.\nThe ITP links children with services that promote their physical, mental and emotional development and supports the needs of their families. These can include therapeutic, educational, and supportive services, such as:\n- Family education\n- Speech therapy\n- Occupational therapy\n- Service coordination\n- Family training\n- Home visits\n- Health services\nChildren referred to the Infant Toddler Program are assessed to see if they meet program eligibility. If eligible, an Individualized Family Service Plan (IFSP) is written that outlines services for the child and their family. This plan is reviewed every six months. At three years of age, ITP assists with the child's transition to a developmental preschool program or other community services.\nHearing loss is the most common birth disorder in newborns. It affects how your baby perceives sound and is able to communicate with you and the world. 90% of infants with hearing loss are born to hearing parents. Please don't wait. Much can be done if hearing loss is identified early.\nZERO TO THREE is a national nonprofit organization that provides parents, professionals, and policy makers the knowledge and the know-how to nurture early development.\nNeuroscientists have documented that our earliest days, weeks and months of life are a period of unparalleled growth when trillions of brain cell connections are made. 
Research and clinical experience also demonstrate that health and development are directly influenced by the quality of care and experiences a child has with his parents and other adults.\nThat is why at ZERO TO THREE our mission is to ensure that all babies and toddlers have a strong start in life.\nWe know that as babies, the way we are held, talked to and cared for teaches us about who we are and how we are valued. This profoundly shapes who we will become.\nEarly experiences set a course for a lifelong process of discovery about ourselves and the world around us. Simply put, early experiences matter. We encourage you to learn more about very young children, early development and the work of ZERO TO THREE by exploring our site.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-2", "d_text": "Marcus Riccelli (D-Spokane), whose daughter, Bryn, was born last July at Providence Sacred Heart Medical Center & Children's Hospital in Spokane.\nThirty days after Bryn's birth, Riccelli received a call that there was an abnormality on her screening. The screening would have to be redone, a worrisome prospect given that some of the disorders can sicken and kill children within the first few days or weeks of life.\n"For me, it was a true feeling of powerlessness," Riccelli said. "I don't want any other parent to feel that."\nIt turned out that Bryn did not have one of the 28 rare disorders that Washington screens for and she is in good health today. The problem was that there had not been enough blood in the initial sample, meaning it had to be retaken.\nRiccelli said the Journal Sentinel investigation put the scope of the problem into perspective for his fellow lawmakers.\n"It was helpful to be able to say that this isn't just a state problem, it's a national problem," he said.\nA pending regulation in Oregon will require newborn screening samples to arrive at the lab no more than five days after collection, but preferably within 24 to 48 hours. 
The change is expected to become final within a few weeks.\nAbout half of the country has regulations that require hospitals to send blood samples for testing within 24 hours of collection, although those regulations often are not followed or enforced.\nIn addition to Arizona and Washington state, health officials in Wisconsin and Nevada have said they will post hospitals' newborn screening performance online in coming months. This is the only way prospective parents would be able to tell if the hospital where their baby will be born promptly sends newborn screening samples to state labs.\nIn June, the Journal Sentinel requested newborn screening data from every U.S. state and the District of Columbia. Twenty-four states and Washington, D.C., would not release information identifying hospital names. Many cited patient privacy, even though children's names and outcomes of tests were not requested. Other states, including Wisconsin, initially said releasing such information would be adversarial to hospitals or might reveal their business practices.\nSince last year, four additional states released newborn screening data to the Journal Sentinel, including South Carolina, which ranked among the worst in the nation, underscoring the need for transparency.\nHospitals in all states have been urged by the American Hospital Association and often their own health departments to review newborn screening protocols and performance.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-2", "d_text": "Because a hearing loss is not visible, it is often overlooked and can go undetected for some time. Surprisingly, an educationally significant hearing loss may go undetected in a child as late as six years of age, even with the most advanced medical techniques available (Schwartz, 2). 
Below is a checklist of milestones from the Bill Wilkerson Center that your child should be accomplishing at various ages:\n3 to 6 months\nListening: Enjoys rattles and other sound-making toys. Responds to pleasant tones by cooing. Stops playing and appears to listen to sounds or speech. Watches a speaker’s face. Begins to turn head toward sounds that are out of sight.\nTalking: Laughs out loud. Cries differently for pain, hunger, and discomfort. Coos - produces an assortment of oohs, ahs, and other vowel sounds.\n6 to 9 months\nListening: Responds to soft levels of speech and other sounds. Temporarily stops action in response to “no”. Turns head directly toward voices and interesting sounds. Begins to understand routine words when used with a hand gesture (e.g. bye-bye or up).\nTalking: Babbles - repeats consonant-vowel combinations such as ba-ba-ba. Makes a raspberry sound. Makes sounds with rising and falling pitches.\n9 to 12 months\nListening: Follows simple directions presented with gestures (e.g. give it to me, come here). Responds to his or her own name even when spoken quietly. Will turn and find sound in any direction.\nTalking: Vocalizes to get attention. Produces a variety of speech sounds (e.g. m, b, d) in several pitches.\n12 to 18 months\nListening: Knows the names of familiar objects, persons, pets. Follows routine directions presented without gestural or visual cues (e.g.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-5", "d_text": "
It would also be important to examine long-term data on listening criteria to determine the accuracy and stability of these measures in addition to the factors associated with satisfaction.\nAcknowledgments The authors would like to thank Tanya VanVoorst, Greta Stamper, and Kathryn Engelhardt for gathering listening-development criteria data. We would also like to thank all the infants and their families who contributed to these criteria.\nReferences ASHA 2004. Cochlear implants. Technical report. Barker B.A., Tomblin J.B. 2004. Bimodal speech perception in infant hearing aid and cochlear implant users. Archives of Otolaryngology — Head and Neck Surgery, 130: 82–86. Estabrooks W. 1998. Appendix C: Auditory-verbal ages and stages of development. In: Estabrooks W. (ed). Cochlear Implants for Kids. Washington, DC: Alexander Graham Bell Association for the Deaf. Hogan C.A., Turner C.W. 1998. High-frequency audibility: benefits for hearing-impaired listeners. Journal of the Acoustical Society of America, 104: 432–441. Houston D.M., Pisoni D.B., Kirk K.I., Ying E.A., Miyamoto R.T. 2003. Speech perception skills of deaf infants following cochlear implantation: A first report. International Journal of Pediatric Otorhinolaryngology, 67: 479–495. Humes L.E., Watson B.U., Christensen L.A., Cokely C.G., Halling D.C., Lee L. 1994. Factors associated with individual differences in clinical measures of speech recognition among the elderly. Journal of Speech and Hearing Research, 37: 465–474. Jusczyk P.W. 1997. Discovery of Spoken Language. Cambridge, MA: MIT Press. Miyamoto R.T., Houston D.M., Kirk K.I., Perdew A.E., Svirsky M.A. 2003. Language development in deaf infants following cochlear implantation.", "score": 16.666517760972233, "rank": 79}, {"document_id": "doc-::chunk-0", "d_text": "South Carolina operates an early hearing detection and intervention program called First Sound. 
This is how it works:\n- South Carolina hospitals that deliver an average of 100 or more babies per year screen each newborn baby for hearing and send the results to DHEC.\n- Infants who do not pass the initial hearing screening in the hospital are referred for rescreening; this should be performed by the time the infant is one month old. The rescreening may be done at the birth hospital or at an audiologist's (hearing specialist) office.\n- Whenever an infant does not pass the rescreening, we refer them/their family to a participating audiologist for a diagnostic hearing evaluation; this should be performed before the baby is three months old.\n- If the audiologist confirms hearing loss, we then refer the infant and their family to Babynet. This should be done by six months of age. Babynet will work with the family to get the infant the hearing intervention services needed.\nDHEC also tracks infants for three years if they pass their hospital screening but are at high risk for developing hearing loss.\nIf your infant has not been screened for hearing loss or if you are a health care provider wishing to refer an infant to First Sound, contact us at (803) 898-0708. Fax: 803-898-4453\nFacts about Newborn Hearing Loss\n- Hearing loss occurs in newborn infants more frequently than any other health condition for which newborn infant screening is required. Studies show that hearing loss occurs in approximately 2-4 out of 1,000 babies.\n- Hearing is vitally important to development of language skills. Infants begin developing speech and language from the moment they are born. 
Eighty percent of the language ability of a child is established by the age of 18 months.\n- Without early hearing detection and intervention programs like First Sound, hearing loss is often not identified until children are 18 months to 3 years of age.\n- Children with hearing loss who do not receive early intervention and treatment may require extensive (usually publicly funded) special education services.\n- Early detection of hearing loss in a child and early intervention and treatment has been demonstrated to be highly effective.\n- Universal screening/detection of hearing loss in infants before three months of age, with appropriate intervention no later than six months of age, is endorsed by the major speech-language-hearing associations in the U.S. as well as the American Academy of Pediatrics.", "score": 15.758340881307905, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "Program Updates Power Point\npresentation (OSPI and ESIT, 8/10, PDF)\nThis Power Point provides a summary of changes of the Early Support for Infants\nand Toddlers program and Indicator B-12.\nEarly Support for Infants and\nToddlers FFY 08 APR Data (ESIT, 8/10, PDF)\nThis chart outlines Washington State Part C Annual Performance Report (APR)\nFast Facts About the Early\nSupport for Infants and Toddlers Program (ESIT, 8/10, PDF)\nThis handout provides an overview of Part C services delivered in Washington\nProviding Early Support Services for Infants and Toddlers (ESIT, 8/10, PDF)\nThis document provides information for school districts about providing Part C\nservices directly using school district staff.\nIntervention/Individualized Family Service Plan (IFSP) Process\n(NECTAC, 2006, PDF)\nThis chart outlines the IFSP process from initial referral to transition into\nPart B services.\nChallenge and Change by Greg Abell (8/10, PDF)\nThis Power Point examines the elements of effective teaming and dynamics of\nElectronic Data Management System by Bob Morris (8/10, PDF)\nThis Power 
Point describes the electronic Data Management System (DMS) and\nprovides information to school districts about accessing Part C reports and\nIndividualized Family Service Plans (IFSPs) on the DMS.\nSupporting Infants and Toddlers with Sensory Impairments by Dr. Nancy Hatfield\nThis Power Point provides information about infants and toddlers with hearing\nloss and/or visual impairment.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-17", "d_text": "(2011);32:605–616\nFitzpatrick E. M., Olds J., Gaboury I., et al. Comparison of outcomes in children with hearing aids and cochlear implants. Cochlear Implants Int. (2012);13:5–15\nGumperz J. J., Kaltman H., O’Connor M. C.Tannen D.. Cohesion in spoken and written discourse: Ethnic style and the transition to literacy. In Coherence in Spoken and Written Discourse. (1984) Norwood, NJ Arnold:3–19\nJoint Committee on Infant Hearing (JCIH). Joint Committee on Infant Hearing (JCIH). . Principles and guidelines for early hearing detection and intervention [Position statement]. (2007) Retrieved from www.jcih.org/posstatements.htm\nJones C., Launer S.Seewald R. C., Bamford J. M.. Pediatric fittings in 2010: The Sound Foundations Cuper project. In A Sound Foundation Through Early Amplification: Proceedings of the 2010 International Conference. (2011) Chicago, IL Phonak.:187–192\nMcCreery R. W., Bentler R. A., Roush P. A.. Characteristics of hearing aid fittings in infants and young children. Ear Hear. (2012);34:701–710\nMcCreery R. W., Walker E. A., Spratford M., et al. Longitudinal predictors of aided speech audibility in infants and children. Ear Hear. (2015a);36:24S–37S\nMcCreery R. W., Walker E. A., Spratford M., et al. Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing\n. Ear Hear. (2015b);36:60S–75S\nMoeller M. P., Hoover B., Peterson B., et al. Consistency of hearing aid use in infants with early-identified hearing loss. Am J Audiol. 
(2009);18:14–23\nMoeller M. P., Tomblin J. B. An introduction to the outcomes of children with hearing loss study. Ear Hear.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "Good hearing is essential for speech and language development and plays an important part in a child’s social and emotional growth. It is important that parents are aware of their child’s hearing from the moment their child is born.\nA child’s hearing can be affected by many things. Some newborns run a high risk of hearing loss due to hereditary or prenatal complications including rubella, syphilis, low birth weight, and meningitis. Toddlers and preschool children may acquire hearing loss with earaches, colds, running ears, upper respiratory infections, or allergies.\nSigns of Hearing Loss\n- Does not pay attention or react to loud noises around the house\n- Has had frequent ear infections and/or fluid draining from the ears\n- Has trouble locating sounds\n- Speaks loudly\n- Has unclear speech\n- Stops early babbling\nIf you are concerned about or suspect a hearing problem, or if you notice any of the signs listed above, see your doctor as soon as you can and ask about a hearing test.\nThe BC Early Hearing Program\nBC Early Hearing Program through the Abbotsford Public Health Unit provides newborn hearing screening, diagnosis and intervention upon birth. The service is open to all babies in BC, and the test is covered by BC Medical (MSP). Screening is conducted in hospital before baby and mother are discharged, or at a local public health unit with a hearing clinic. The Health Unit provides service for entry-level school age screening and sales, fitting, and maintenance of hearing equipment and assistive listening devices. They also provide eye and ear examinations and vision and hearing screening services to children of school age.\nAbbotsford Public Health Unit\nHours of operation: Mon - Fri, 8:30 a.m. 
- 4:30 p.m.\nFor more information on having your newborn's hearing checked visit Fraser Health's webpage.\nFor a complete list of local audiologists, refer to the ‘Audiologists’ and ‘Hearing Analysis’ sections of the Yellow Pages.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-1", "d_text": "Email: [email protected]\n© W.S. Maney & Son Ltd. 2011 DOI 10.1179/146701010X486543\nof research, however, exploring audibility of the speech signal in children with CIs. This is likely due to the fact that there is currently no documented method — behavioral or objective — to measure audibility of the implant’s electric signal in its user. Understanding which aspects of the speech signal are audible by the CI user is important. However, it is particularly important for the infant CI user who has limited language skills, and is therefore unable to provide reliable feedback. The literature focusing on people with hearing loss consistently illustrates that audibility of the speech signal affects spoken language perception (Hogan and Turner, 1998), production (Moore and Bass-Ringdahl, 2002), and comprehension (Vermeulen et al., 2007). This appears to hold true across the lifespan from early childhood (Rvachew et al., 1999) to later adulthood (Humes et al., 1994). It stands to reason that audibility would be equally important in infancy when language acquisition begins. We believe that researchers and clinicians should have an understanding of each infant’s access to the speech signal prior to assessing speech perception and spoken language development. Without knowledge of infant CI users’ audibility, such assessments will likely be in vain and result in equivocal infant data (Barker and Tomblin, 2004) and an unclear path for aural habilitation. As our center’s first step in gaining an understanding of audibility via CIs we established listening-development criteria for infant users. 
We currently use these criteria to ensure speech audibility in infant CI users prior to the evaluation of their speech perception skills and listening development.\nCochlear Implants International\nBarker et al.\nListening-development criteria for infant cochlear implant users\n\nThe listening-development criteria\nAt our clinical research center we currently monitor each infant’s listening development using the succeeding listening criteria. Each infant’s listening skills are typically assessed during clinical follow-up visits at 1-month- and 2-months-post-initial stimulation and every 2 months thereafter until the infant reaches the listening criteria. When an infant meets the listening-development criteria, the following standards are satisfied: (1) a plateau is noted in the infant’s MAP (i.e.", "score": 14.73757419926546, "rank": 84}, {"document_id": "doc-::chunk-1", "d_text": "These delays are apparent for both children with mild and moderate hearing loss5–7 as well as for those whose losses fall in the severe and profound ranges.8–11 Despite advances in hearing aid technology, improved educational techniques, and intensive intervention services, there has been virtually no change in the academic statistics of this population since the systematic collection of national data >30 years ago.12 ,13 These data indicate that the average deaf student graduates from high school with language and academic achievement levels below that of the average fourth-grade hearing student.14 ,15 Similarly, for hard-of-hearing children, achievement is also below that of their hearing peers with average reading scores for high school graduates at the fifth-grade level.15 These limitations in reading have a pervasive negative impact on overall academic achievement.16\nMany professionals in both health care and special education have supported early identification of hearing loss and subsequent intervention as a means to improving the language and academic outcomes of deaf and
hard-of-hearing individuals.4 17–20 In 1994, the Joint Committee on Infant Hearing21 released a position statement endorsing the goal of universal detection of infants with hearing loss as early as possible, preferably by 3 months of age. This position statement was endorsed by the American Academy of Pediatrics. This priority is in concert with the national initiative Healthy People 2000,22 the National Institutes of Health Consensus Statement,23 and the position statement of the American Academy of Audiology.24 All of these position statements support the need to identify all infants with hearing loss. Both the Joint Committee on Infant Hearing and the American Academy of Audiology recommend accomplishing this goal by evaluating all infants before discharge from the newborn nursery.\nDespite widespread support for universal newborn hearing screening, this mandate has been challenged by Bess and Paradise25partly on the grounds that “no empirical evidence … supports the proposition that outcomes in children with congenital hearing loss are more favorable if treatment is begun early in infancy rather than later in childhood (eg, 6 months vs 18 months)”.", "score": 13.897358463981183, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "Help Me Grow Washington\nHelp Me Grow Washington connects families to the health and development resources needed to give ALL kids the best start.\nTwo Ways to Get Started Today!\nCall our friendly and knowledgeable staff at the Help Me Grow Washington Hotline 1-800-322-2588 to learn about developmental screening for your child and the Ages and Stages Questionnaire, or you can get started right now. 
It's easy and there is no cost involved!\nWhat does Help Me Grow Washington offer families?\n- Free developmental screening for all kids under 5\n(no waiting lists or income requirements)\n- Community resources like parenting classes, medical clinics and food banks\n- Referrals for further evaluation and early intervention services\n- Activities and games that support healthy growth and learning\nHow Can Developmental Screening Help My Child?\nDevelopmental screening is important for ALL kids! 1 in 6 kids has a developmental delay, but only 30% of those kids are detected through parent observations and regular checkups. Often, the signs are hard to see, even for a professional.\nScreening all kids regularly is the best way to catch delays early, when intervention is most effective. Even for families with kids developing on track, screening is a fast, flexible and fun way to learn about what’s coming next and what you can do to encourage healthy growth!\nThe Ages and Stages Questionnaire\nTo screen kids Help Me Grow Washington uses a survey tool called the Ages and Stages Questionnaire (ASQ). Developmental screening cannot give you a diagnosis; however it can show you if your child is developing more slowly than kids in the same age group.\nThe ASQ covers 5 areas of development:\n- Communication – how kids use language\n- Gross Motor – how kids move their bodies\n- Fine Motor – how kids use their hands\n- Problem Solving – how kids interact with their world\n- Personal-Social – how kids calm themselves down", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "Early detection can make a big difference in the lives of children with hearing loss.\nEarly intervention in a stimulating environment – even during infancy – can help expand your child’s development.\nBoston’s Bobby Moakley celebrates on top of Copper Mountain during the No Barriers Summit from June 23-26. 
The college sophomore returns to Colorado as keynote speaker for the No Barriers “What’s Your Everest?” event in Keystone this weekend, July 22-23.\nThe inner ear processes low-frequency sounds, important for speech and music perception, differently to high-frequency sounds, new research has found.\nThe Catalyst Center, the National Center for Health Insurance and Financing for Children and Youth with Special Health Care Needs is hosting a webinar –\nDate: Wednesday, July 27, 2016\nTime: 9:30 to 10:30 a.m. PT\n12:30 to 1:30 p.m. ET\nLooking for ways to improve reading comprehension? Find recommendations for teaching foundational reading skills to students in kindergarten through 3rd grade. Each recommendation includes implementation steps and solutions for common obstacles. Download the guide at:\nWhen Tameka Goldsmith looks at her son’s accomplishments—and she has filled plastic bins with report cards and photos of his travels, certificates and copies of the kind words teachers wrote about him on scholarship applications—she remembers what doctors said about Kyree before he ever started kindergarten.\nThey said he’d never speak—that, because of his autism and hearing loss, he’d never function in a regular classroom.\nWe know that time management is always a challenge in busy hearing screening programs. We can’t make the days longer (summer solstice is past!), but in the spirit of the 4th of July, we can offer greater freedom in learning opportunities to meet your upcoming training needs.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "Early detection can make a big difference in the lives of children with hearing loss.\nEarly intervention in a stimulating environment – even during infancy – can help expand your child’s development.\nBoston’s Bobby Moakley celebrates on top of Copper Mountain during the No Barriers Summit from June 23-26. 
The college sophomore returns to Colorado as keynote speaker for the No Barriers “What’s Your Everest?” event in Keystone this weekend, July 22-23.\nThe inner ear processes low-frequency sounds, important for speech and music perception, differently to high-frequency sounds, new research has found.\nThe Catalyst Center, the National Center for Health Insurance and Financing for Children and Youth with Special Health Care Needs is hosting a webinar –\nDate: Wednesday, July 27, 2016\nTime: 9:30 to 10:30 a.m. PT\n12:30 to 1:30 p.m. ET\nLooking for ways to improve reading comprehension? Find recommendations for teaching foundational reading skills to students in kindergarten through 3rd grade. Each recommendation includes implementation steps and solutions for common obstacles. Download the guide at:\nWhen Tameka Goldsmith looks at her son’s accomplishments—and she has filled plastic bins with report cards and photos of his travels, certificates and copies of the kind words teachers wrote about him on scholarship applications—she remembers what doctors said about Kyree before he ever started kindergarten.\nThey said he’d never speak—that, because of his autism and hearing loss, he’d never function in a regular classroom.\nWe know that time management is always a challenge in busy hearing screening programs. We can’t make the days longer (summer solstice is past!), but in the spirit of the 4th of July, we can offer greater freedom in learning opportunities to meet your upcoming training needs.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-4", "d_text": "[ 1985 c 213 § 33.\nSeverability—1967 ex.s. 
c 102:\nSee note following RCW 43.70.130\nRules and regulations—\nVisual and auditory screening of pupils: RCW 28A.210.020", "score": 12.364879196879162, "rank": 89}, {"document_id": "doc-::chunk-5", "d_text": "Automatic auditory brainstem response\nAmerican Academy of Pediatrics\nEarly hearing detection and intervention\nJoint Committee on Infant Hearing\nNeonatal intensive care unit\nEvoked otoacoustic emissions\nTransiently evoked OAEs\nUniversal Newborn Hearing Screening\nOur work has been complemented by the contribution of Roberto Guarino (National Research Council, Institute of Clinical Physiology), who collaborated in representing the framework of the screening processes.\nOpen Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.\n- Barsky-Firkser L, Sun S. Universal Newborn Hearing Screenings: a three-year experience. Pediatrics. 1997;99(6):e4. doi:10.1542/peds.99.6.e4.\n- Mehl AL, Thomson V. The Colorado newborn hearing screening project, 1992–1999: on the threshold of effective population-based universal newborn hearing screening. Pediatrics. 2002;109(1):8. doi:10.1542/peds.109.1.e7.\n- Vartiainen E, Kemppinen P, Karjalainen S. Prevalence and etiology of bilateral sensorineural hearing impairment in a Finnish childhood population. Int J Pediatr Otorhinolaryngol. 1997;41(2):175–85.\n- US Centers for Disease Control and Prevention.
Summary of 2009 National CDC EHDI Data; 2009.\n- Norton SJ, Gorga MP, Widen JE, Folsom RC, Sininger Y, Cone-Wesson B, et al. Identification of neonatal hearing impairment: a multicenter investigation. Ear Hear. 2000;21(5):348–56.", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-7", "d_text": "In addition, a vision screening is required for 1st, 3rd, 5th and 7th graders.\nCode 53A-11-201 (1996) also requires each local school board to implement rules as prescribed by the Department of Health for vision, dental, abnormal spinal curvature, and hearing examinations of students attending the district’s schools. Code 53A-11-203 (2010) requires children entering school under the age of eight to have a vision screening. Each school district may conduct free vision screening clinics for children aged 3 1/2 to 8. The statute also authorizes districts to provide free vision screening for children ages 8 and older, establishes guidelines for administering free vision screening programs, and establishes penalties for a violation of certain provisions related to free vision screenings.\n16 VSA 1422 (2009) requires periodic hearing and vision screening of school-aged children by primary care providers and school districts based on research-based guidelines developed by the commissioner of health in consultation with the commissioner of education.
Each year in grades 1, 2, 3, 5, 7, and 9 and any pupil who appears to have defective vision or appears to be in need of a test.\nCode 22.1-273 (1995) calls for the principal of each school identified by the school board to ensure the testing of sight and hearing of relevant pupils unless it was included as part of the examination required in Code 22.1-270 (2004).\nRCW 28A.210.020 (1971) requires all school boards to provide vision and hearing screening for their students.\nWAC 246-760 Schools shall conduct auditory and visual screening of children:\n(1) In kindergarten and grades one, two, three, five, and seven; and\n(2) For any child showing symptoms of possible loss in auditory or visual acuity referred to the district by parents, guardians, or school staff.\n(3) If resources permit, schools shall annually screen children at other grade levels.\nPersonnel conducting the screening must use a Snellen test chart for screening for distance central vision acuity. Either the Snellen E chart or the standard Snellen distance acuity chart may be used as appropriate to the child’s age and abilities. The test chart must be properly illuminated and glare free. Other screening procedures equivalent to the Snellen test may be used only if approved by the state board of health.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-3", "d_text": "Satisfying the listening-development criteria After establishing these criteria we wanted to determine the rate at which infants satisfy these listening-development criteria. We predicted that there would be great variability in the rate at which the infants satisfied the listening-development criteria. This prediction echoes the individual differences that dominated other spoken language perception and production measures across pediatric CI users for the past 20 years (ASHA, 2004). We looked at the rate of criteria satisfaction in 10 children (7 males) with profound, bilateral sensorineural hearing loss (SNHL).
All the infants were born to hearing parents and were identified as having SNHL within the first year of life. The infants’ ages at the time of surgery for placement of their CI devices ranged from 11 to 21 months with an average age at implantation of 15 months (SD = 2.76 months). All the infants were followed longitudinally as part of a comprehensive, CI center study. American English was the primary language spoken in each child’s home (i.e. English was spoken more than 50% of the time in the infant’s listening environment). Infants and toddlers had no known visual abnormalities. The individual profiles are presented in Table 1.\nIndividual outcomes Table 1 illustrates the variability in CI experience at which the infants satisfied the listening-development criteria. Note that these infants met the listening criteria on average after 6.4 months of device use (SD = 2.5 months).\nDiscussion We found that there are vast individual differences in the rate at which it takes an infant to reach the proposed listening criteria. Infants needed as few as 2 months to as many as 10 months of device experience before meeting the listening criteria. These data suggest that although many researchers and clinicians are quick to begin testing infant CI users shortly after initial stimulation (e.g. Houston et al., 2003; Miyamoto et al., 2003), it may be beneficial to require that listening criteria be met prior to assessment. Such criteria are likely to ensure adequate audibility of the speech signal, resulting in more valid measures of infant speech perception. 
Secondly, it is possible that adequate speech audibility in these very young children may also contribute to\nBarker et al.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-1", "d_text": "The card is supposed to be sent within 24 hours to a lab for testing.\nAbout one in every 800 babies is born with a potentially severe or deadly condition that can be treated and managed if the child is properly tested. These babies often appear healthy at birth but can become extremely sick within days. The entire premise of newborn screening is to detect disorders quickly so babies can be treated early, averting death and preventing or limiting brain damage, disability and a lifetime of costly medical care.\nArizona was one of the poorest performing states in the Journal Sentinel analysis, yet has become a model thanks to its transparency and to a top health official's willingness to solve the problem, said Scott Becker, executive director of the Association of Public Health Laboratories, a group that includes directors of state labs throughout the country.\n\"The health official and the state can lead by communicating — saying, 'This is an important issue and I expect, in our state, that we are going to take this seriously,'\" Becker said.\nHumble's target in his state is based on federally backed guidelines recommending that blood samples take no more than three days to arrive at labs for newborn screening. Children with many of the disorders screened for can die or become extremely ill just a few days after birth. 
But states often use varying thresholds to track samples — if they track them at all.\nIn Texas, the percentage of blood samples received at the state lab within four days rose to 95.6% in March — a big improvement from about 85% last year.\nSince the state's poor performance was reported, Texas health department officials have been examining the processes of both top- and bottom-performing hospitals to figure out what's going right and what is going wrong, and communicating what they learn to hospital staffs.\nWashington state has passed comprehensive legislation in response to the Milwaukee Journal Sentinel's investigation, although it was one of the better-performing states in the analysis of hospital-by-hospital data.\nIn mid-March, Gov. Jay Inslee signed a bill that requires hospitals to collect blood samples for screening within 48 hours of a baby's birth, and then deliver the sample to the state public health lab within 72 hours of collection. Days when the lab is closed, such as Sundays, are not counted against the 72-hour requirement.\nThe law also requires the health department to publicly post each hospital's performance on the agency's website each year.\nThe bill had personal significance for sponsor state Rep.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-2", "d_text": "Through its robust quality assurance programme, NHSP ensured that every member of the multidisciplinary team was trained and involved in providing optimal care. Implementation of NHSP was like a military campaign, well thought out and, although not without its problems, successful. This success has ensured that many other countries have followed suit, and screening is offered across the UK as well as in other parts of the world.\nThere were two other important changes at the same time as the inception of NHSP.
The start in 2000 coincided with MCHAS—modernising children's hearing aids—a project which introduced digital hearing aids into the UK for all children, with consistently high standards of fitting and maintenance of hearing aid care. At about the same time, there was an increasing confidence in, and acceptance of, cochlear implants for children. Early diagnosis following NHSP has reduced the age of implantation, affording the child the earliest possible access to good speech signals, with excellent results in speech and language development.\nNHSP has also facilitated aetiological investigation by making this a part of the whole process and ensuring that medics have had the training to take this forward. In addition, part of NHSP has been the collection of robust and invaluable epidemiological data, but, unfortunately, the outcomes of screening have not yet been published. Although Public Health England, which has taken over hearing screening in England, now only covers screening, the pattern for subsequent care has been established and continues.\nThe children are the true beneficiaries of this exciting project. The effect of NHSP has been to significantly lower the age of confirmation of deafness. Figures show that the vast majority of congenitally deaf children have their hearing loss confirmed by 6 months of age with many identified within the first 4 weeks of life. Identification is the first step and ensures that habilitation is started within the first 6 months, with hearing aids being a part of that for many children. We are seeing the impact of early diagnosis in better speech and language skills and educational attainment. Pimperton et al's paper has shown exciting results with relatively early diagnosis (by 9 months). 
Hopefully, based on evidence from the USA, with lower ages of diagnosis, we may see even better reading competences for all deaf children in the future.\nCompeting interests None declared.\nProvenance and peer review Commissioned; internally peer reviewed.", "score": 8.750170851034381, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "In Texas, newborn hearing screening was mandated in 1999 through the passage of House Bill 714. In 2011 the mandate was expanded through House Bill 411 and Senate Bill 229. The Texas Department of State Health Services (DSHS) is the oversight agency.\nJames T. (Jim) Walsh, US House Representative from 1989 – 2009, was active in the passage of the first nationwide act that guaranteed hearing screenings for newborns and infants. In 1991, Walsh sponsored and introduced the Hearing Loss Testing Act. The Newborn and Infant Screening and Intervention Program Act was authored and sponsored, mainly, by Walsh in 1999. On March 11, 2009, the act was renamed as the James T. Walsh Universal Newborn Hearing Screening Program, and was identified within 42 United States Code 280g-1. The Act is for “the early detection, diagnosis, and treatment regarding hearing loss in newborns and infants,” and included several provisions so that these endeavors would be accomplished.\nSince that time, Congress has expanded and reauthorized the original Act. Current legislation, known as the Early Hearing Detection and Intervention Act 2010, was approved and signed by President Obama on December 22, 2010. 
In 2015, the US House of Representatives proposed amended legislation to reauthorize a program for early detection, diagnosis, and treatment regarding deaf and hard-of-hearing newborns, infants, and young children.", "score": 8.086131989696522, "rank": 95}, {"document_id": "doc-::chunk-6", "d_text": "This curriculum has been tested to be used by teachers nation-wide in urban, rural and suburban, public, and private settings.\nThere will be continuing initiatives to reach Hispanic/Latino/Latina individuals through participation with various Spanish language and Hispanic interest-meetings, exhibit opportunities, and collaborative efforts with the NIH Hispanic Communications Work Group that includes the Radio Unica/Wal-Mart Hispanic Latino/Latina Health Fair series. Most NIDCD health information materials are available in Spanish. An initiative to reach the rural community with advice about hunting and farming equipment uses shooting instructors as the dissemination source. WISE EARS!® information is provided to these individuals through their classroom instructors.\nPerformance Measures\n- Increase Public Awareness through education and mass media efforts.\n- Expand the campaign on the national level by increasing the coalition membership.\nOutcome Measures\n- Track outreach efforts through data provided by North American Precis (NAPS) to increase the awareness of the importance of protection against noise-induced hearing loss amongst the under-represented groups. NAPS will target Spanish language populations specifically.\n3.2 Area of Emphasis Two: Early Hearing Detection and Intervention\nEach year, approximately two to three out of 1,000 babies born in the United States have a detectable hearing loss, which can affect their speech, language, social, and cognitive development. More lose their hearing later during childhood.
Many of these children may need to learn speech and language differently, so it's important to detect deafness or hearing loss as early as possible.\n3.2.1 Objective One: Increase Public Awareness on the Importance of Newborn Hearing Screening and Communication Options\nAction Plan\nNIDCD established the Early ID Ad Hoc Committee in January 2000. It now includes representatives from 14 organizations. The committee meets quarterly to create, share, and participate in collaborative efforts. These efforts focus on increasing the parents and families awareness of the importance in having a child's hearing screened, options for their child if he or she is diagnosed with hearing loss, and emphasizes initiatives to increase follow through for children who are believed to have hearing loss at birth.\nNIDCD began a multi-year campaign called \"Labor Day\" in September of 2003 and will expand that initiative in 2004 with the support of work group members.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "Infant Hearing Screening Advisory Committee Meeting\n[33 Pa.B. 1421]\nThe Infant Hearing Screening Advisory Committee, established under the Infant Hearing Education, Assessment, Reporting and Referral Act (11 P. S. §§ 876-1-- 876-9) will hold a public meeting on Wednesday, March 26, 2003, in Conference Room 614, Department of Labor and Industry Building, 7th and Forster Streets, Harrisburg, PA, from 1 p.m.
to 4 p.m.\nThe Department of Health reserves the right to cancel this meeting without prior notice.\nFor additional information or persons with a disability who wish to attend the meeting and require an auxiliary aid, service or other accommodation to do so, contact Karl Hoffman, Program Administrator, Hearing Program, Division of Newborn Disease Prevention and Identification, (717) 783-8143 or for speech and/or hearing impaired persons, V/TT (717) 783-6514 or the Pennsylvania AT&T Relay Services at (800) 654-5984.\nROBERT S. MUSCALUS, D.O.,\n[Pa.B. Doc. No. 03-472. Filed for public inspection March 14, 2003, 9:00 a.m.]\nNo part of the information on this site may be reproduced for profit or sold for profit.\nThis material has been drawn directly from the official Pennsylvania Bulletin full text database. Due to the limitations of HTML or differences in display capabilities of different browsers, this version may differ slightly from the official printed version.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-6", "d_text": "doi:10.1097/00003446-200010000-00003.\n- Rach GH, Zielhuis GA, van den Broek P. The influence of chronic persistent otitis media with effusion on language development of 2- to 4-year-olds. Int J Pediatr Otorhinolaryngol. 1988;15(3):253–61.\n- Moeller MP, Osberger MJ, Eccarius M. Receptive language skills. In: Osberger MJ, editor. Language and learning skills of hearing-impaired children: ASHA Monograph 23. Rockville, MD: ASHA; 1986.\n- Allen TE. Patterns of academic achievement among hearing impaired students: 1974 and 1983. In: Deaf children in America. 1986:161–206.\n- Davis A, Hind S. The impact of hearing impairment: a global health problem. Int J Pediatr Otorhinolaryngol. 1999;49 Suppl 1:S51–4.\n- van Eldik TT. Behavior problems with deaf Dutch boys. Am Ann Deaf.
1994;139(4):394–9.\n- Vostanis P, Hayes M, Du Feu M, Warren J. Detection of behavioural and emotional problems in deaf children and adolescents: comparison of two rating scales. Child Care Health Dev. 1997;23(3):233–46.\n- Patel H, Feldman M. Universal newborn hearing screening. Paediatr Child Health. 2011;16(5):301–10.\n- Nelson HD, Bougatsos C, Nygren P. Universal newborn hearing screening: systematic review to update the 2001 US preventive services task force recommendation. Pediatrics. 2008;122(1):e266–76. doi:10.1542/peds.2007-1422.\n- Wolff R, Hommerich J, Riemsma R, Antes G, Lange S, Kleijnen J. Hearing screening in newborns: systematic review of accuracy, effectiveness, and effects of interventions after screening. Arch Dis Child. 2010;95(2):130–5.", "score": 8.086131989696522, "rank": 98}]} {"qid": 13, "question_text": "Why was the Church of the Savior on Spilled Blood built in St. Petersburg?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "1882 – Design Competition for Church of the Savior on Spilled Blood, St. Petersburg, Russia\nThis marvelous Russian-style church was built on the spot where Emperor Alexander II was assassinated in March 1881 by a group of revolutionaries, who threw a bomb at his royal carriage. The decision was taken to build a church on the spot where the Emperor was mortally wounded, and an architectural competition was held. The church was built between 1883 and 1907 and was officially called the Resurrection of Christ Church. The construction of the church was almost entirely funded by the Imperial family and thousands of private donors. Both the interior and exterior of the church are decorated with incredibly detailed mosaics, designed and created by the most prominent Russian artists of the day (V.M. Vasnetsov, M.V. Nesterov and M.A. Vrubel).
Interestingly, despite the church’s very obviously Russian aspect, its principal architect, Alfred Alexandrovich Parland, was not even Russian by birth, nor the winner of the competition. The design harks back to medieval Russian architecture in the spirit of romantic nationalism.\nThe church was closed for services in the 1930s, when the Bolsheviks went on an offensive against religion and destroyed churches all over the country. It remained closed and under restoration for over 30 years and was finally re-opened in 1997 in all its dazzling former glory.\nTranslations of the church’s name vary between guidebooks and include The Church of the Savior on Blood, The Resurrection Church and The Church of the Resurrection of Christ.", "score": 52.66282606575993, "rank": 1}, {"document_id": "doc-::chunk-2", "d_text": "Petersburg street by assassins after he had freed the common people from serfdom, offering them their first opportunity at freedom, private business, and prosperity. The people who admired him were devastated and decided to contribute funds to erect the most dazzling church in Europe on the site where he died. The church is breathtakingly beautiful, colorful and architecturally amazing.\nSince our first impression of St. Petersburg was of dismal grey Soviet-style apartment buildings and dismal grey skies and weather, the colors and magnificence of the church are a welcome sight. However, what broke my heart is that the church is only a museum. Then we learned that St. Isaac’s Cathedral is also only a museum.\nWait a minute! The Church of the Savior on Spilled Blood was built with donated funds as a house of God to honor a czar who was a devoted believer! How can it be a museum rather than a church? The answer, of course, lies in the years of the Soviet Union, when the State became God and religion was condemned. Of course the Soviets preserved both the castles of the czars and the churches because of the value of their contents.
But the meaning is gone.\nDuring my undergraduate college years as an English major, I studied Russian literature extensively. As expressed by great writers like Tolstoy and Dostoyevsky, Russians were a deeply spiritual people. They were troubled, of course, with angst and obsessiveness affected by depressing weather and history. But underneath was strong belief and dedication to God.\nConsidering the land of Tolstoy as atheistic is so difficult for me. But as I viewed the beautiful church, I realized that at least some of Russia’s difficulties are due to that atheism. God is not dead – He still holds history in the palm of His hand and waits for people to realize that any “way” other than belief in Jesus is not the way to Heaven nor to true joy.\nSo I said a prayer as I always do at churches – a prayer for the people of St. Petersburg, that they might return to faith and the freedom it offers. And for those who attend the few open churches, that their faith and devotion might be strengthened. Amen.\nI just want to continue to share ideas about grief and life with people who long as I do for comfort and understanding.", "score": 48.569316435147314, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "The Church of the Savior on Spilled Blood - 3D Jigsaw Puzzle\n|Object||Put together the jigsaw puzzle.|\n|Types||101-499 Pieces, 3D|\n|Dimensions||44 cm x 30 cm x 54 cm / 17.3 in x 11.8 in x 21.3 in|\nPuzzle Pieces: 233\nSize When Completed: 44 cm x 30 cm x 54 cm / 17.3 in x 11.8 in x 21.3 in\nThe Church of the Savior on Spilled Blood is one of the main sights of St. Petersburg, Russia. It is also variously called the Church on Spilt Blood and the Cathedral of the Resurrection of Christ, its official name. 
This Church was built on the site where Tsar Alexander II was assassinated and was dedicated in his memory.", "score": 48.19814677768195, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "From Russia With Love: The amazing city of St. Petersburg\nDaily Express, December 14, 2013\nTHE beautiful and historic city of Saint Petersburg is the ultimate winter wonderland.\nThe State Hermitage Museum in St Petersburg [GETTY]\nPure white snow cascades around my shoulders as I look up to try to absorb the immense gold-leaf dome of St Isaac’s Cathedral. This spectacular structure, festooned with porticoes, frescoes, statues and colonnades, has been restored to how it would have looked to the Romanov tsars.\nAnother iconic building, the Church Of The Saviour On Spilled Blood, will also have you craning your neck. With colourful onion domes and dazzling mosaics, the church is one of the city’s most popular attractions and built on the spot where Emperor Alexander II was assassinated on March 13, 1881.\nIt is the vastness of St Petersburg’s cathedrals and palaces, the width of its main shopping street, Nevsky Prospect, and the opulence of its squares, parks and monuments that make you wish you had panoramic vision.\nI find myself constantly walking backwards, bumping into fur-wrapped locals, as I try to get some perspective on the cultural jewels of Russia’s second-largest city.\nMy first visit here was in the summer when the city seemed completely different, with scorching temperatures, winding queues and tour groups packed around the major sights.\nNow, as the nights grow longer and colder, I can effortlessly stroll into the cream of St Petersburg’s many attractions – none more colossal than the State Hermitage.\nThis special city is a real winter wonderland [GETTY]\nSt Petersburg boasts more than 200 museums and the Hermitage, mainly housed in the Winter Palace, is one of the most impressive in the
world. It’s said you need 11 years to look at every item on display within the endless corridors, ballrooms and grand halls. As a first time visitor I focus on the masterpieces in the state rooms where I meander past works by Renoir, Van Gogh, Manet, Monet and Degas.\nTo rest my weary feet afterwards, I relax in the Casa Leto, a privately-run hotel a few minutes away from the main sights.\nDown Nevsky I stroll next morning, past boutiques, bookshops, designer-clad women and the plethora of bridges that curve over the vast canal network.", "score": 45.080851461048816, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "Peter the Great’s city is an exercise in invention. Its canals reflect a spellbinding collection of cultural palaces, while the environment inspired many great artists, writers and musical maestros.\nThe former Mikhailovsky Palace, now the Russian Museum, houses one of the country’s finest art collections, including works by Ilya Repin and Kazimir Malevich. The palace, designed by Carlo Rossi, was built in 1819-1829 (00 7 812 595 4248; rusmuseum.ru; Inzhenernaya; £8).\nThe Church of the Saviour on Spilled Blood was built on the spot where Alexander II was killed by terrorists in 1881. It reopened in 1997 after its 7,000 sq metres of mosaics were restored (00 7 812 315 1636; eng.cathedral.ru/saviour; Kanal Griboyedova; £7).\nViewing the city from its canals is an idyllic way of touring St Petersburg. From May to October, find boats at the Fontanka River dock and on the Neva River outside the Hermitage and the Admiralty. Anglo Tourismo runs guided tours in English from near Anichkov Bridge (anglotourismo.com; daily tours £11).\nPeter and Paul fortress is one of the city’s oldest buildings. For wonderful views, walk the Nevskaya Panorama then head inside the SS Peter & Paul Cathedral, with its 122-metre gilded spire (00 7 812 238 4550; spbmuseum.ru; £5.50).\nKirovsky Islands are the outer deltas of Petrograd Side.
They were granted to 18th- and 19th-century\ncourt favourites and developed into playgrounds. Accessible by Metro, they’re\npopular for picnics, boating and on White Nights. Rent a rowboat on Yelagin\nIsland for £4.50 per hour.\nEat and drink\nYou’ll struggle to get enough\nof the traditional savoury and sweet pies served at Stolle cafés throughout the\ncity and on Yelagin Island (stolle.ru;\nKonyushennaya; pies from £1.50).\nClassy, kosher restaurant LeChaim is the\ncity’s best place for Jewish cooking.", "score": 42.51332905382559, "rank": 5}, {"document_id": "doc-::chunk-4", "d_text": "Whereas most of the architecture in St Petersburg is Baroque and Neoclassical, the Church on Spilled Blood is medieval Russian architecture that reflects the era's resurgence of Russian nationalism. It was intentionally constructed to resemble St Basil's Cathedral in Moscow! 😍\nThe Church of the Savior on Spilled Blood is open daily 10:30am to 6pm and located at 2 Naberezhnaya Kanala Griboyedov, Saint Petersburg, Russia, 191186\nWhere To Eat\nOne of the best restaurants I have eaten at in all of my travels, and one I highly recommend, is\nRussian Vodka Room No 1\nTake a break and have lunch or dinner at the Russian Vodka Room.\nYou can't miss out on Russian cuisine because that would just be a tragedy…the food is fantastic! This is the Russian Mushroom Soup with Pea Barley served with sour cream.\nBowls. It's the word to describe how much of this I could have eaten! Sour cream is *almost* like cream cheese for me. So, the more, the merrier!\nAlso, I had the Russian Meat and Pork Dumplings and they were absolutely fantastic as well. Actually, we didn't have anything here that wasn't out of this world yummy!\nMy friend Jackie got the classic Russian dish Beef Stroganoff and we shared all three for a complete experience…oh wait, not exactly complete. Because when in Russia…\nYou must try Vodka, right?? In the Caribbean? Try Rum. When in Rome, try wine and limoncello.
And, when in Russia, try Vodka!\nI had the Cherry Russian Homemade Vodka. Although I don't like anything straight, this had a LOT of flavor and was really good! Oh, and don't worry, if you don't like cherry they have a million other flavors on the menu!\nTIPS: Go mid-afternoon between lunch and dinner times to avoid the crowds. Get several dishes and vodka flavors to share for the full experience!\nRussian Vodka Room No 1 is open daily 12pm-12am and located at Konnogvardeyskiy Bul'var, 4, Saint Petersburg, Russia, 190000\nSadly, the end of my day in Saint Petersburg also ended my time in Russia. I wanted to stay longer and explore more of it.", "score": 42.2239219116553, "rank": 6}, {"document_id": "doc-::chunk-1", "d_text": "Isaac's Cathedral and the Bronze Horseman statue, commemorating famed poet Alexander Pushkin's ode to the city founder, to the Church of the Saviour on Spilled Blood, built atop the site where assassins finally succeeded in killing Tsar Alexander II, a tsar who sought change too fast for some, but too slow for others. Change and revolution are common themes in Saint Petersburg. Palace Square may hold a column celebrating Russia's defeat of the French during Napoleon's unsuccessful 19th century invasion, but it was the site of significant scenes of internal revolt, such as Bloody Sunday in 1905, when demonstrators were fired upon by Tsar Nicholas II's guards, and the October Revolution in 1917, when the Bolsheviks came to power, overthrowing the monarchy and moving the capital to Moscow.\nI spent a month in college in Saint Petersburg in the back half of the 1990's, and visited the city again in the early 2000's, and the city has left an indelible mark in my memories. I highly recommend visiting during summer, when the nights are long and visitors can stay up late strolling the Neva and watching the drawbridges over the river open and close.
The city is rightly called the \"Venice of the North\", and it is well worth exploring the numerous canals on foot and by boat. A stroll down Nevsky Prospekt shows off exciting architecture like the Admiralty, Kazan Cathedral, and Dom Knigi (The House of Books), while side streets lead to the exquisite Church of the Savior on Spilled Blood (with its nearby Russian craft market) or to the Bank Bridge with its griffins. No visit to Saint Petersburg is complete without a stop at The Hermitage, but I also recommend spending time at the Russian Museum, to see works by Russian artists.\nThis World Heritage Site is all-encompassing enough to include many palaces outside the city. Not to be missed are Peterhof, also known as Petrodvorets, with its grand and gilded cascade of fountains that serves as centerpiece to Peter the Great's homage to Versailles, or Tsarskoye Selo, also known as Pushkin, where Catherine the Great commissioned a grand palace with a famed Amber Room looted by the Germans during World War II.", "score": 41.43758145267325, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "The domes of the Church of the Savior on Spilled Blood\nHowever, the era in which the Church of the Savior was built was a time of resurgence of nationalism, hence the classic Russian style of the church.\nLooking at both the interior and exterior, it's easy to see why the church cost about 4.6 million rubles, way over the budgeted 3.6 million. The outside was designed to mirror the magnificent St. Basil's in Moscow, that city's easily-recognizable centerpiece, and the building - both inside and outside - features mosaics, most of them designed by the prominent artists of the time, including Viktor Vasnetsov, Mikhail Nesterov and Mikhail Vrubel. The majority of the mosaics depict biblical scenes and saints, though some are just patterns. The colorful onion domes, of which the central one reaches a height of 81 meters (266 ft), are covered with bright enamels.
The cathedral boasts a luxurious and rich decor, ornamental architraves, frames, corbels, ceramic tiles, and colored glazed tiles. The belfry is decorated with mosaic coats-of-arms of cities and regions of the Russian empire.\nInteresting fact: This church never functioned as a public place of worship. Today it is a Museum of Mosaics.\nDuring the Russian Revolution of 1917, much of this amazing church was ransacked and the interior was seriously damaged. In the 1930s, the Soviets closed the church, as they did with most churches in St. Petersburg. During World War II, it was used as a storage facility for food. It suffered yet more damage during the war, and afterwards was used for many years as storage space for a local opera company. In 1970, St. Isaac's Cathedral assumed management of the church, and funds garnered from the cathedral (which was, at that time, a museum) were used to restore the Church of the Savior. Restoration was finally completed in 1997, and the church remains one of St. Petersburg's top tourist attractions.\nThe Cathedral should not be perceived simply as a place of worship; its idea is broader and deeper. The image of the Savior in this Cathedral reflects not so much the religious aspect as the political, historical, artistic and stylistic importance of the monument, and underscores its significance for the city.", "score": 41.18697210508718, "rank": 8}, {"document_id": "doc-::chunk-1", "d_text": "Keepers of History\nWalking on the Nevsky Prospect, a five-kilometer-long boulevard, it's impossible not to notice the beautiful Church of the Savior on Spilled Blood, one of the most important Russian Orthodox churches, which was built at the site where Russian emperor Alexander II was killed in the late 19th century.
The luxurious interior and the preserved piece of stone where the emperor's blood was shed attract tourists from all over the world.\nOut of the churches that represent a faithful witness of Russian tradition and history, the Petropavlovskaya left a special impression on me. It is located in the fortress of the same name, under the protection of UNESCO, on the island of Zayachi, surrounded by the Neva. It contains tombs of almost all Russian emperors, from Peter the Great to Nikolay II Romanov. Tsar Nicholas II and his family were buried in that holy place only a few years ago, after DNA analysis confirmed the identity of the remains found in the forest near Yekaterinburg, where the Bolsheviks burned the remains of the Imperial family after they were brutally murdered in the basement of the house in which they were imprisoned. I was very shaken by this story.\nThe Russian Versailles\nOne sunny afternoon I went on a tour of the park and the castle of the Peterhof complex, located around twenty kilometers from St. Petersburg, also under the protection of UNESCO. It's not too crowded in the autumn in front of the former imperial residence, which is also known as the Russian Versailles, with magnificent golden fountains in the middle of the park, so you'll enjoy in peace the beauty of nature and the water cascading down from the top of the Great Cascade, the largest fountain in the world. In October the fountains are turned off and the complex closes with a huge ceremony, attended by thousands of people.\nA café with a turbulent past\nThe Nevsky Prospect is full of restaurants with great cuisine. The Literary Café is a place where you can order fantastic food and enjoy its history.
Story has it that the poet Alexander Pushkin ate his last meal here before he was killed in a duel with Dantes, a French nobleman who allegedly liked Pushkin's wife Natalia, a celebrated beauty of the Petersburg elite.\nI spent a wonderful evening at the Mariinskiy Theater, the former Imperial Ballet House, and today one of the most famous opera and ballet houses in the world.", "score": 40.91306430960231, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Savior on Blood\nThe Cathedral of Our Savior on the Spilt Blood is a unique masterpiece combining the old Russian style of architecture, the talent of the best Russian artists and Italian stone carvers, and the art of Roman mosaic. Mosaics cover almost the entire surface of the walls and ceilings; there are 7,500 square meters of mosaics in all. It is a unique example of a Russian-style cathedral in the centre of St. Petersburg. Its coloured onion domes are so famous that they are recognized today as one of the symbols of the city. The style of the church was inspired by St Basil's Cathedral on Red Square in Moscow. The onion domes, mosaics and intricately decorated facade are characteristic of 16th to 17th century Russian architecture. Thus, it differs from most of the buildings of St. Petersburg, which follow the western traditions of Neo-Classical and Baroque architecture. The church is one of the top tourist attractions of St. Petersburg, a UNESCO World Heritage Site, and a popular spot for taking photographs and acquiring Russian souvenirs. You have an opportunity to see the Spilt Blood Cathedral in all its beauty.\nThis church was built on the very site of Tsar Alexander II's murder.\nOfficially consecrated as the Church of the Resurrection of Christ, the Russian Orthodox gem more commonly known as the Church of the Savior on Spilled Blood was built to honor tsar Alexander II of Russia, who was assassinated at the site where the church now sits, hence the reference to \"spilled blood\".
The section of the street on which the assassination took place is enclosed within the walls of the church and the site of the murder is marked by a chapel in the building. At the request of Alexander III, son of Alexander II, construction on the church began in 1883. Funding for this amazing structure was almost totally provided by the Imperial family, with other donations made by private individuals. The project was completed in 1907.\nThe principal architect chosen for the project was Alfred Alexandrovich Parland, who was, incidentally, a non-Russian-born individual. The architecture of the church varies greatly from other buildings and religious structures in St. Petersburg, which were largely constructed in the Baroque and neo-Classical styles.", "score": 40.44846735052144, "rank": 10}, {"document_id": "doc-::chunk-16", "d_text": "It remained closed and under restoration for over 30 years. It was reopened in August 1997, after 27 years of restoration, but has not been reconsecrated and does not function as a full-time place of worship. The Church of the Saviour on Blood is a Museum of Mosaics. In the pre-Revolution period it was NOT used as a public place of worship.\nExterior: Architecturally, the Cathedral differs from St. Petersburg's other structures. The city's architecture is predominantly Baroque and Neoclassical, but the Savior on Blood harks back to medieval Russian architecture in the spirit of romantic nationalism. It intentionally resembles the 17th-century Yaroslavl churches and the celebrated St. Basil's Cathedral in Moscow. Interestingly, despite the church's very obviously Russian aspect, its principal architect, A. Parland, was not even Russian by birth. Both the interior and exterior of the church are decorated with incredibly detailed mosaics, designed and created by the most prominent Russian artists of the day (M.V. Nesterov, V.M. Vasnetsov and M.A. Vrubel). The Church of Our Savior on the Spilled Blood somewhat resembles St.
Basil's Cathedral on Red Square in Moscow. The church is a perfect example of how you expect Russia to be: the golden onion domes and fantastic colors are so typical of the country, and absolutely exquisite. What a colorful place to visit! The views from the Griboyedov Canal are stunning! I took a thousand pictures of this cathedral!\nThe Church of Our Savior on the Spilled Blood from the Griboyedov Canal:\nKazan Cathedral from the Griboyedov Canal:\nInterior: Absolutely beautiful - words cannot describe this masterpiece. The interior walls are covered with what look like paintings, but upon a closer look are actually tiny mosaics. The walls and ceilings inside the Church are COMPLETELY covered in intricately detailed mosaics, and it might be the most beautiful mosaic mural decoration you have ever seen. The mosaic work is AMAZING. Every inch of space seems to be covered with huge religious mosaics, soaring upward to the ornate vaulted ceiling, where the face of Christ looks down in a blaze of light.", "score": 39.57577348096385, "rank": 11}, {"document_id": "doc-::chunk-1", "d_text": "It intentionally resembles the 17th-century Yaroslavl churches and the celebrated St. Basil's Cathedral in Moscow.\nThe Church contains over 7500 square metres of mosaics — according to its restorers, more than any other church in the world. The interior was designed by some of the most celebrated Russian artists of the day — including Viktor Vasnetsov, Mikhail Nesterov and Mikhail Vrubel — but the church's chief architect, Alfred Alexandrovich Parland, was relatively little-known (born in St. Petersburg in 1842 in a Baltic-German Lutheran family). Perhaps not surprisingly, the Church's construction ran well over budget, having been estimated at 3.6 million roubles but ending up costing over 4.6 million.
The walls and ceilings inside the Church are completely covered in intricately detailed mosaics — the main pictures being biblical scenes or figures — but with very fine patterned borders setting off each picture.\nIn the aftermath of the Russian Revolution, the church was ransacked and looted, badly damaging its interior. The Soviet government closed the church in the early 1930s. During the Second World War when many people were starving due to the Siege of Leningrad by Nazi German military forces, the church was used as a temporary morgue for those who died in combat and from starvation and illness. The church suffered significant damage. After the war, it was used as a warehouse for vegetables, leading to the sardonic name of Saviour on Potatoes.\nIn July 1970, management of the Church passed to Saint Isaac's Cathedral (then used as a highly profitable museum) and proceeds from the Cathedral were funneled back into restoring the Church. It was reopened in August 1997, after 27 years of restoration, but has not been reconsecrated and does not function as a full-time place of worship; it is a Museum of Mosaics. Even before the Revolution it never functioned as a public place of worship; having been dedicated exclusively to the memory of the assassinated tsar, the only services were panikhidas (memorial services). The Church is now one of the main tourist attractions in St. Petersburg.", "score": 38.42279036810921, "rank": 12}, {"document_id": "doc-::chunk-2", "d_text": "The territory adjacent to the cathedral is one of the oldest areas of St. Petersburg, its historic downtown, which was formed in the first third of the 18th century. Because the church is located in the very heart of the city, its surroundings are of highest value. The historical and cultural environment here is extremely rich, represented by such treasures of world spiritual culture as the Russian Museum, the Maly Opera Theater, the Grand Philharmonic Hall, and churches of various confessions. 
The Cathedral is probably the only building in the city that stands out so much in its architectural and spatial environment with its silhouette, composition, and rich decor.\nThe Church of the Savior on Spilled Blood", "score": 36.40298045792946, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "This is also the central shopping street in the city -- the Gostiny Dvor department store is here -- and something of an entertainment hotspot.\nAddress: Nevsky Prospekt\nPalace Square\nWidely considered to be the main square in the city, Palace Square combines several different styles of architecture. On the northern side is the Winter Palace and across the square on the southern side is the former Imperial Army General Staff building, through which you can gain access to Nevsky Prospekt via the Triumphal Arch. The Alexander Column acts as a central focus point. Named after Emperor Alexander I, it stands as a monument to the Russian victory against Napoleon's French army.\nAddress: Dvortsovaya Pl City Centre\nThe Church of Our Saviour on Spilled Blood\nBuilt on the site on which Alexander II was assassinated in March 1881, the church is famed for its richly decorated façade and Russian-style onion domes, not dissimilar to those of St. Basil's Cathedral in Moscow.\nAddress: Griboedov canal embankment, 26, A\nRussian Museum of Ethnography\nEstablished in 1895 by Emperor Nicholas II and opened in 1898, the museum is a must-visit attraction if you love the arts. There are several sites -- Marble Palace, Benois Wing, Mikhailovsky Palace, Mikhailovsky Castle, Stroganov Palace. These can all be accessed by taking the Metro to Nevsky Prospekt.
Permanent exhibitions include Russian Art in the Context of World Art (Marble Palace), Art of the 20th century (Benois Wing), Mineral Study (Stroganov Palace) and Russian Art of the 18th century (Mikhailovsky Palace).\nAddress: Inzhenernaya ul 4/1", "score": 34.42132768573593, "rank": 14}, {"document_id": "doc-::chunk-6", "d_text": "The central dome is decorated with the Christ Pantocrator, and effigies of Christ, the Virgin and St. John the Baptist decorate the ceilings of small domes.\nThe Church of St. Savior on the Spilled Blood.\nThe Yusupov Palace on the Moika Embankment, whose name is linked to the murder of Gregory Rasputin.\nBuilt in the late eighteenth century for Count Andrei Shuvalov, the Yusupov palace became famous due to its prominent owners: the Yusupov family, originally Tatars and one of Russia's richest dynasties; and of course because it is linked to the murder of Gregory Rasputin. This is one of the few palaces of St. Petersburg that have kept the luxurious decor of reception rooms, private apartments, a private gallery and a real little theater. The Chaîne des Rôtisseurs celebrated the tenth anniversary of its National Bailliage of Russia there. Visitors are fascinated by the beauty of its interiors: the suites of state rooms unfold on the mezzanine floor, which is accessed by a staircase of white marble. The first suite of state rooms includes several rooms that have retained ornamentation dating back to 1830. In the second suite, leading to the theater, there is an art gallery with several rooms where fabulous masterpieces of art belonging to the Yusupovs are exhibited, some of which are now in the Hermitage Museum. Indeed, five generations of Yusupovs lived in this magnificent palace.
After the Bolshevik Revolution of 1917, the family left Russia to live in Paris, and the building became a national museum showcasing the lifestyle of the Yusupov family.\nExhibition of the “Murder of Rasputin”\nAn exhibition depicting the murder of Gregory Rasputin can be seen in the rooms of the apartment of Felix and Irina Yusupov, which is accessed through a private entrance or a small vestibule. Rasputin, the legendary figure with the reputed ability to cure diseases through prayer, had a great influence on the family of Tsar Nicholas II. The aristocracy of the Court wanted him away from the imperial couple. A plot against him was organized by Felix Yusupov, the last owner of the Yusupov palace, and the Grand Duke Dmitri Pavlovich.", "score": 33.687045276974494, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "We'd originally booked to stay a week in St Petersburg, but after a few days in this wonderful city we decided to stay another week as there's so much to see and like about it!\nAmong our favourite places were the Church of Our Saviour on Spilled Blood and the Hermitage. Other highlights included wandering along the main street, Nevsky Prospect, where we found a fantastic food hall (to take photos inside you had to either pay a fee or make a purchase, so we bought some expensive bread!) as well as admiring the impressive buildings and designer shops.\nThere seem to be loads of churches and cathedrals in St Petersburg. Another of our favourites was St Isaac's Cathedral with an amazing iconostasis (altar screen) and beautiful mosaics.\nOn our final day we visited Peterhof, a stately home just outside St Petersburg which is famous for its fountains. This was worth seeing and the reason that we visited so late in our stay was that the fountain season only started on 27 April (they are turned off during the winter).
However, like many museums in Russia, it has a two-tier pricing structure with cheaper tickets for Russian nationals and more expensive ones for foreigners. I don't object to the principle of this if it means that more Russians get to see their historic sites, but in the case of Peterhof we felt the foreigners' ticket was a bit overpriced, especially as you had to pay for each museum in the grounds separately and they were typically ~£6-10 each, and that's after the £9 to get into the gardens.\nWe also celebrated Julie's birthday in St Petersburg and managed to find some amazing cake at the Bushe cafe before a sushi dinner followed by beers and vodka shots (now that we know how) at our local, Barcelona Bar.\nHere's our St Petersburg round-up:\nWhat photo takes you right back to St Petersburg?\nThis was a tough one. We did so much that it was difficult to choose a single photo, but we decided on the magnificent Church of Our Saviour on Spilled Blood. Not only because it's even more impressive inside than out, but our hostel was just a block away from it so we saw it every day.\nSummarise St Petersburg in three words.", "score": 32.719274027110366, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "A couple weeks ago, I got to visit the land of Anastasia! It was incredible – everything from the actual sights to the Russian letters on their signs was uniquely foreign and beautiful! While there, I saw everything from the Peter and Paul Fortress to the Cathedral on Spilled Blood. I'll take you through a short digital tour now – so you can get a glimpse of the things I was blessed enough to experience!\nDancing Bears…Painted Wings…Things I Almost Remember…\nThe Battleship Aurora is a symbol of the Revolution because it fired the blank shot that started the October Revolution (Bolsheviks in 1917) and sent the troops storming the Winter Palace (now known as The Hermitage).\nThis was a palace built by Paul the First, the son of Catherine the Great.
He built it with a moat around it for added protection because he was paranoid that someone wanted to\nkill him. However, within a few months of moving in, somebody (I think one of his guards) strangled him to death in his bedroom. I thought this story was ironic, so I took a picture of the castle. Sorry for the poor quality of the picture.\nThis is a water taxi. Saint Petersburg is looking into getting more of them to help alleviate the traffic dilemma of the town. The canals go to a lot of places in the town, so it could be beneficial if people accept it.\nThis is the Peter and Paul Cathedral. It is located on Peter Island inside the Peter and Paul Fortress. The fortress holds 32 emperors' graves. The Fortress is sometimes called the Russian Bastille. Here Peter's son died because he started an\nassassination plot to kill his father. However, people weren't executed here. They were only interrogated. In 1917 it was turned into a museum. The Cathedral was built in 1732, and it is an Orthodox Cathedral. It is 122 meters high, and it has real gold on top. It is gilded, so it never has to be replaced. This has the tombs of all the emperors from Peter the Great to Anastasia's family. This is where the tour guide killed my hopes that I was descended from Anastasia too - because she told me that Anastasia is buried here. DNA testing proved it. I choose to believe in her continuing existence though.", "score": 32.63037925674805, "rank": 17}, {"document_id": "doc-::chunk-1", "d_text": "And in 1837 the monuments of the 1812 war heroes – Kutuzov and Barclay de Tolly – were erected on the square in front of the Cathedral.\nToday the Kazan Cathedral is an operating church with daily divine service and free admission. The Cathedral is always crowded and there is little light inside, which creates a feeling of tranquility when you are in the temple.\nInside, the temple is divided by granite columns into three so-called hall-corridors, whose ceilings are decorated with paintings of flowers, and on the floor there is a mosaic of Karelian marble in pink and grey shades.\nNo doubt the Kazan Cathedral, alongside the Cathedral of the Savior on Spilled Blood and St. Isaac's Cathedral, is one of the main churches of St. Petersburg. It possesses great historical value and is considered an architectural masterpiece, visited by thousands of tourists and city residents every year.", "score": 32.1632211193673, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "Peter the Great began building the city of St. Petersburg in 1703. Situated at the eastern end of the Gulf of Finland, it was given spacious squares and broad avenues, and its public buildings were designed by Italian and French baroque artists. The famous landmark is the Winter Palace, the residence of the Tsars during the winter months. The royal family moved there from Moscow in 1713, and it then became the new capital city, remaining so until 1918. Seen as a symbol of the new Russia, it was a \"window\" overlooking the West, drawing Russia into the mainstream of European politics. Over the years this beautiful city has become a major seaport and an industrial and cultural centre. It has also played a significant role in the country's political life, being the scene of the Decembrist Uprising in 1825 (G4), for example, and the starting point of the Revolutions in 1905 and 1917.\nTHE FOUNDING OF ST. PETERSBURG 1703 (AN)\nPeter the Great announced the building of St. Petersburg in May 1703 (illustrated), on the site of the Swedish fortress of Nienshants, captured that year.
The city is situated on the delta of the River Neva, at the eastern end of the Gulf of Finland, and today occupies both banks of the river and a number of large delta islands. A major seaport and industrial centre, it ranks today among the world's most beautiful cities and, because of its many canals and river channels, is often called the Venice of the North. Apart from its commercial value, it is an important cultural centre -\nThe city, with its spacious squares and broad avenues, was built by forced labour, but its sumptuous public buildings were designed by Italian and French baroque architects, brought to Russia for the task. The Fortress of Saints Peter and Paul, built for the holding of political prisoners, still stands, but the most famous landmark is the Winter Palace -\nThe royal family moved to St. Petersburg from Moscow in 1713, and it was in that year that the city, a symbol of the new Russia, became Peter's capital and remained so until 1918.", "score": 31.682853815807977, "rank": 19}, {"document_id": "doc-::chunk-2", "d_text": "We are at the shore of the Neva River, at the Castle's Quay. Look! On the\nsmall Island over there is the Peter & Paul's Fort. Here Tsar Peter I\nfounded Saint-Petersburg. In the Fort itself is located the highest building\nof the Town - Peter & Paul's Cathedral.\nFrom here, from the Kirovsk-Bridge, we can see very well the Eastern tip of\nthe Basilius-Island, the largest island in the delta of the Neva River.
In\nfront of you is the world famous architectural collection of the Strelky\n(Eastern Tip): The Stock Exchange, the Rostral-Columns, the Tower of the\n\"Kunstkamera\" (Chamber of Arts) and the Customs House.\nAt the right hand of it there is the former Tsar's Residence - the Winter\nPalace. Nowadays in the building of the Winter Palace there is accommodated the world\nfamous Museum of Arts, the \"Ermitage\".\n\"Nevsky Prospekt\" - the Main Street of Saint Petersburg, is the center of\nthe cultural and public life of the Town. It begins at the Admiralty\nBuilding, close by to the Neva River, and ends at the\nAlexander-Nevsky-Square, as well at the Neva River.\nHere we find castles, museums, churches, temples, theatres, the public\nlibrary, shops, restaurants and cinemas.\nAnd here we see as well the Kazan Cathedral, and soon we will see the most\nbeautiful place of Saint Petersburg, the \"Palace of the Coast\" (Ploshad\nRight ahead of us is the Russian Theatre, and to the left the Little Opera and\nBallet Theatre. In this theatre are shown musical comedies and philharmonic concerts.\nIn the centre of the Square, in the green area, there is the Monument of\nAlexander Pushkin, one of the most famous Russian poets.\nBack to the narrative...\nWell, this was a brief description of Old Saint Petersburg. I could submit hundreds of pictures from this beautiful area, but unfortunately this is contrary to the confluence rules.\nTo a foreigner who does not know Russia and asks which town is most worth visiting, the answer is clearly: \"Saint Petersburg\". It was and is a jewel. Compared with Saint Petersburg, Moscow is just a big village.", "score": 31.356390127045685, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "St. Petersburg is quite a young city but at the same time, it has always been not only a cultural and political center of the country but also a religious one. In its lifetime, the city faced many difficult situations like floods, famine, wars, etc.
However, the patron saints of St. Petersburg always protected its people. This is the reason why different churches and cathedrals have appeared in all parts of the city since the very beginning of the 18th century. And of course, every religious building here has a tremendous story behind it. Moreover, every church can be considered a distinctive masterpiece created by the best architects who ever worked in St. Petersburg. During this tour, you will have an amazing opportunity to get closer to the most legendary churches and cathedrals of the city.", "score": 31.04440126318, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "We can say that a visit to The Spit of Basil Island will allow you to see exactly the architecture that was conceived by architects and Peter I at the beginning of the XVIII century.
Watch this gorgeous time-lapse from the city of the tsars!\nSights to see in St Petersburg include the Winter Palace – former home of the tsars, and now the home of the Hermitage collection; one of the world’s largest and most impressive art collections. Then there’s the Peter and Paul Fortress, with its extraordinary baroque interior. Up until the revolution, this was a political prison; home for a while to Trotsky himself. Then there’s St Isaac’s cathedral – one of the world’s largest – and the famous Church on Spilled Blood, built on the spot where Tsar Alexander II was murdered in 1881. And don’t miss Catherine Palace, and the Summer Gardens at Peterhof, overlooking the Gulf of Finland. There’s a load to see and do – just as well this video is a time-lapse after all!", "score": 30.784290962803553, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "Peter’s city, then and now\nA photographer revisits—and revisualizes—a city of his youth\nOne of the charms of European cities is that any inkling of their origins is either lost in the distant past or conjured into a class of story called a foundation myth. Somehow, imperceptibly, a convenient Celtic village on a dank river becomes Lutetia, drops from sight, then turns up later as Paris.\nIn Russia, the current capital Moscow has exactly the same kind of obscure pedigree: a river site (among hundreds) one day happens to make good, generally for reasons no one today can quite figure out.\nSt. Petersburg is different from these.\nPeter the Great built that city deliberately to have his fully European home base, his new capital of a militarily powerful Russian state that intended to participate widely in world affairs. One day nothing had been there—in this case, nothing but a swamp where the Neva River met the Gulf of Finland. The next day construction had begun on part of a wall.
Peter planned everything: He had the site surveyed, he financed it, and up went his namesake in regular mounds of waterlogged soil draining constantly into a system of canals. The date of founding was May 27, 1703, when the cornerstone of the Peter and Paul fortress was laid.\nThe founding of St. Petersburg and the style of its layout and construction made manifest Peter’s personal vision for Russia: The new city was his very expensive calling card to the rest of the world.\nPeter left a legacy that we in Western Europe and the United States have been living with ever since, like it or not. In a week, St. Petersburg will celebrate its 300th anniversary. Commemorations have been going on both in Russia and abroad for many months leading up to the actual date.\nBy way of recognizing the anniversary here in Chico, the Office of International Programs at California State University has brought a fine exhibition to the Humanities Center Gallery, in Trinity Hall, of photographs of Peter’s city by the Czech Jiri Tondl.\nTondl took many of his pictures when he was a university student there from 1972 to 1977. He shot in black-and-white. He emphasized the everyday life of ordinary residents and stayed away from subject matter the Soviet state might find unwelcome.", "score": 30.72996082435542, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "The Historic Centre of Saint Petersburg and Related Groups of Monuments has a planned urban design with many baroque and neo-classical monumental buildings.\nThe shape of the city was developed by Peter the Great during the 18th century. In communist times, it was officially renamed Leningrad.\nAmong the "related group of monuments" mentioned above is the Peter and Paul Fortress. This was the first project taken up by Czar Peter, and he moulded it after architecture he had seen in the Netherlands.\nThe magnificent Hermitage (Winter Palace) is also in St. Petersburg.
It's one of the best museums in the world, and the collection has both volume and quality.\nMap of St. Petersburg\nVisit July 1990\nSt. Petersburg appeared to have more in common with its Scandinavian neighbours than Moscow. They were proud of that too: during my 2-week trip the Russian guide kept telling us "Wait til we get to St. Petersburg, it's the best of Russia's big cities".\nActually, when we finally got there after Moscow and Kiev I didn't totally agree. I found it to be cold (in the psychological sense of the word). It also definitely is a harbour city, so you must like water and watching ships (I don't).\nBesides the stunning Hermitage I visited the house where Pushkin used to live. It was a solemn expedition: the man was an icon in Russia.\nThe Historic Centre of Saint Petersburg and Related Groups of Monuments is easily my favorite of all World Heritage Sites I've seen in Russia. This 300-year-old city was wrested from swamps to become a capital, much like a city started later that century half a world away and much closer to home for me. Saint Petersburg is a feast for travelers, replete with history, culture, and architecture.\nSaint Petersburg was a dream made manifest by Peter the Great, who envisioned a great European city built upon the Neva River delta looking westward. This city is a joy to explore on foot, from the broad avenue of Nevsky Prospekt to the grand square in front of the Winter Palace, home to the renowned Hermitage Museum; from Vasilievsky Island to the Peter and Paul Fortress; from St.
Paul’s Cathedral in London and Santa Maria del Fiore Cathedral in Florence.\nThe grandeur of this temple is shown by its size: the height of the building is 101.5 m; the length is 111.2 m; and the width is 97.6 m.\nThe cathedral is one of the dominant buildings of St. Petersburg and the second in height after the Sts. Peter and Paul Cathedral. Its monumental and majestic image creates a unique accent in the city’s skyline and serves as much as a landmark of the northern capital as the spire of the Cathedral in the Peter and Paul Fortress and the gold ship atop the Admiralty.", "score": 30.31106213833196, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "Visit to the cathedrals of St. Petersburg (by car)\n- Saint Petersburg\nThis guided tour by private car or bus will include visits to the three main cathedrals of St. Petersburg: St. Isaac's Cathedral, the Church of the Saviour on Spilt Blood and the Holy Trinity Alexander Nevsky Lavra.\nSt. Isaac's Cathedral (the Cathedral of St. Isaac of Dalmatia) is the largest Orthodox church in St. Petersburg and one of the most impressive buildings in the city. The gilded dome of St. Isaac is clearly visible from various places in the city. The exterior is decorated with 112 granite columns, each of which weighs about 100 tons, and more than 400 bas-reliefs; the interior impresses with its elegant marble, malachite and lapis lazuli decor. St Isaac's colonnade, located at a height of 43 meters, offers stunning views of the city.\nThe Church of the Resurrection, or "Saviour on the Spilt Blood", was built on the site of the assassination of the Emperor Alexander II. The cathedral was built in the Russian style according to the model of St. Basil's Cathedral in Moscow. The main decoration of the construction is its mosaic decoration, which covers an area of over 7,000 sq.m.
During the Soviet period the church was used as a warehouse because of "no cultural value." In 1997 the church was renovated and re-opened.\nThe Holy Trinity Alexander Nevsky Lavra (the Alexander Nevsky Monastery) was constructed by order of Peter the Great at the end of Nevsky Prospekt, where he commanded the relics of St. Alexander Nevsky (the patron of the new Russian capital) to be placed. Among the buildings of the Monastery are the majestic Holy Trinity Cathedral, built in the style of early classicism, the Lazarevskoye Cemetery (a necropolis of the XVIII century) and the Tikhvin Cemetery (a necropolis of Masters of Arts, the burial place of Lomonosov, Suvorov, Karamzin, Mussorgsky, Tchaikovsky, Dostoevsky and other prominent figures of Russia).", "score": 30.21243764631161, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "The Church is situated in the centre of the estate. The first wooden church was built there in 1716, in the Stroganovs' time. It was consecrated and dedicated to the Blachernitissa Icon of the Theotokos, the sacred family icon of the estate's owners.\nThe Icon was given this name since originally it was kept in Blachernae (a small town on the bank of the Bosphorus, near Constantinople). The Byzantine empress built a monastery there in the 5th century. A church dedicated to the Theotokos was situated there and owned the Blachernitissa. Even at that time, the Icon was revered as miracle-working. So, in 1830, when cholera raged in Moscow, nobody fell ill with it in Kuzminki parish. Duke Sergey Golitsyn cast a bell of 260 poods (4,258 kg or 9,390 lb.) in weight in honour of this event.\nAfter Constantinople was occupied by the Turks, the Icon was hidden in the patriarchate. Then, for safety reasons, it was moved to the Monastery of Athos and in 1653 (1654), it was gifted to Tsar Alexis I by a merchant.\nHowever, there is a legend that two Blachernitissa icons were delivered to Moscow.
One of them came to the Cathedral of the Dormition, Moscow, and the other became the patronal icon of the Church of the Blachernitissa Icon of the Theotokos in Kuzminki. The icon of the Virgin Mary with child is painted with a rare embossed technique and, according to modern experts, dates back to the 7th century. Today, the Icon is kept in the State Tretyakov Gallery.\nThe stone church was built from 1759 to 1774 with support from Duke Mikhail Golitsyn. Originally, it was a baroque church. However, as early as 1784 to 1787, the Church was rebuilt as a classical one. Architects: I. Zherebtsov, R. Kazakov, I. Yegotov.\nIn 1812, during Napoleon's invasion, the church was looted. Witnesses said that the French rode into the Church and carried off holy vessels and icons.", "score": 30.107098869268473, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "St.Petersburg (known as Petrograd in 1914-1924 and as Leningrad in 1924-1991), the northern capital of the Russian Federation, a seaport, the administrative centre of Leningrad region and one of the most beautiful cities in the world, was founded by the Russian tsar Peter the Great on the small Zayachy island in the mouth of the Neva river as a fortress on May 27, 1703. The city was the capital of Russia from 1712 until 1918, when the capital was transferred back to Moscow. The northern geographical location of St.Petersburg (the same latitude as Greenland, Alaska and Chukotka) explains the white nights from June 11 up to July 2, when the sun sets only 9 degrees below the horizon and the faint twilight gradually turns into the dawn. The city was built by the most famous Russian and European architects and nicknamed "the Babylon of the Snows" and "the Venice of the North". The present city is a large industrial, transport, scientific and cultural centre of Russia with a territory of 620 sq.km and a population of 5 million. The Petropavlovskaya Fortress is a remarkable historical and architectural monument in St.Petersburg.
The Winter Palace, designed by Rastrelli in the 18th century, the former residence of the Russian tsars, located on Palace Square, is one of the most beautiful architectural ensembles in the world. The Hermitage, with its collection of over 2.5 million exhibits, is one of the very finest art museums in the world. The Russian Museum contains 3,000 paintings (10th-20th centuries) by Russian artists. St. Isaac's Cathedral (1818-1858) is an outstanding monument of late Russian Classicism. The majestic architectural ensembles in the suburbs are well-known all over the world. Peterhof or Petrodvorets - the former Russian imperial residence, Pavlovsk (18-19th centuries) and Pushkin (18-19th centuries) - the former country residence of the Russian tsars, are among them.\nIn the northern part of Lake Ladoga, which was called Nyevo in ancient times, there lie numerous islands,\nthe largest of them being Valaam.", "score": 30.039871803540148, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "This new location allowed greater trade and cultural contacts between the Russian capital and the West.\nHe also adopted some Western bureaucratic and economic policies, which enabled him to govern his expanding territory more effectively.\nHowever, Peter is also known for his autocratic and oftentimes cruel manner of governing. He was often at war with his neighbors in an effort to expand Russia's borders. A series of widespread revolts was mercilessly crushed during his reign. And St. Petersburg was built at great cost in human life by serf labor.\nThe $25 million, 30-story structure, topped by a red light as a hazard warning to low-flying aircraft, depicts a monstrous Peter standing triumphantly atop a miniaturized 18th-century Russian Navy vessel crashing through cresting waves.\nThe monument, which supporters call Moscow's Statue of Liberty, was designed by a highly successful but much maligned Moscow-based international sculptor, Zurab Tsereteli. In recent years, Mr.
Tsereteli has built an empire of monuments in this city, which many observers attribute more to his close relations with Moscow Mayor Yuri Luzhkov than to his artistic ability.\n"In Russia today, all these things have become close to politics," complains Marat Gelman, a Moscow art dealer leading the movement against the Peter monument.\nThe Peter the Great monument has many Muscovites up in arms about its artistic merit and historical symbolism.\n"This is art?" snorted a passerby. "I think it's just a way for Luzhkov to stand out as a politician. Its style doesn't fit Moscow, and it has just ruined my lunch spot."\nIn recent months, dozens of artists and other citizens, exasperated by Tsereteli's monuments and the city government's resolve in continuing construction on Peter, have held public protests demanding that the statue be removed. The protests have brought some results, in contrast to Soviet days, when entire neighborhoods of Russian cities were torn down without notice to make way for projects promoting the "advancement of Socialism."\nIn response to the frequent protests, Mr. Luzhkov created a commission in March to help him decide the fate of the Peter the Great statue and has promised that he will be guided by public-opinion polls.\nA poll carried out earlier this year found that Muscovites who disapproved of Tsereteli's work outnumbered his fans by almost 2 to 1.", "score": 29.973926308809407, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "St. Petersburg is the City of Tsars but I would personally call it the city of creativity as architecturally it's one of the most beautiful cities I've ever seen. St Petersburg has all the ingredients for a memorable travel experience such as the exquisite art, lavish architecture and rich cultural traditions that have inspired and nurtured some of the modern world's greatest literature, music and visual art.\nPeter the Great founded this city in 1703.
What is so wonderful about St Petersburg is that it combines Russian heritage with a European outlook. The city is Russia’s cultural hub.\nIf you explore the city with a travel guide or a local, you may hear different and interesting stories about the palaces, museums, broad avenues and winding canals. St Petersburg’s turbulent past has endowed the city with grand architectural and artistic treasures.\nThere are so many attractions in St. Petersburg that the list will make your head spin. Here are five places to visit in St Petersburg:\n1-Peter and Paul Fortress\nThe cathedral is where the Romanovs are buried. There’s a former prison and various exhibitions. This is a mandatory stop on any tour as it gives you great insights into Russian history.\n2-St Isaac’s Cathedral\nSt Isaac’s Cathedral has a lavish interior and is actually a museum (services are held in the cathedral on major religious holidays). You probably won’t want to bother climbing the steps to the colonnade but if you do summon up enough energy the views are amazing. The cathedral was designed in 1818 and over 100kg of gold leaf was used to cover the dome.\n3-State Hermitage Museum and Winter Palace\nThis museum is absolutely huge and one of the best museums I’ve seen. Russia’s Tsars once resided here. Now it has one of the world’s best art collections. Even if you are not an art connoisseur, you will be amazed at the creations on display.\n4-Pushkin and Catherine Palace\nThe summer residence of the Russian Tsars conjures images of a grand era and a Russian Baroque wonder. Catherine Palace’s famous Amber Room is worth the trip. The palace was the residence of the Tsars in the 18th and 19th centuries.\n5-Church on the Spilled Blood\nMake sure you take a look at the five-domed Russian Orthodox church, which is decorated with a mosaic interior.
The name came about in reference to the assassination of Tsar Alexander II at this spot in 1881.", "score": 29.53519462025887, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "The city of St. Petersburg is known for a large number of historical attractions. Many of them are known even to those who have never been to the city on the Neva. We will focus on one of them – the Spit of Basil Island.\nThe Spit of Basil Island can be called a visiting card of the historic city center. This architectural monument is located so well that it offers a beautiful view of the heart of the city. It offers beautiful views of the Neva, the Winter Palace and the Peter and Paul Fortress. We can say that this monument is an example of the harmony of architecture and nature.\nHistorically, this area was planned starting from the first steps of building the city. Back in 1716, the project of the architect Domenico Trezzini envisioned a trapezoidal square surrounded by houses, but Peter I decided to make the place a cultural and business center of the city. Therefore, next to the Spit of Basil Island, such a historically significant architectural monument as the Exchange Building was built.\nThe development history of the Spit of Basil Island is quite interesting. While the area itself did not change significantly, the Exchange building went through several projects. The Exchange building was erected in wood in the 1730s, and in the 1780s it was rebuilt in stone. After that, in 1805 a new project appeared under the leadership of Jean-François Thomas de Thomon. He was inspired by his albums with sketches of Roman monuments, which he made while traveling in Italy. Thus, the new building was built in the style of ancient architecture. The facade faces the Neva, and the building is surrounded by a colonnade of 44 columns.\nEven more interesting are the two majestic Rostral Columns with statues of sea deities at their feet, which served as lanterns for the port of the capital in the 19th century.
The columns were designed to personify the power and greatness of the state’s navy, and they refer to the ancient Roman custom of decorating columns with rostra – the prows of ships defeated in battle. Thus, the architect was able to transfer the architecture of ancient times to the city on the Neva.\nDespite the large number of projects for the development of this place, the area has always remained their central object.", "score": 29.359033596101376, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "Todor Alexandrov Blvd., next to Metrostation no.7\nThe St. Spas church was built and consecrated after the Liberation, in 1883, on a corner lot between Nishka and Lomska streets (today Todor Alexandrov Blvd. and George Washington Str.), upon the foundations of a small 15th – 16th c. chapel of the same name, dug two meters into the ground. Profoundly reconstructed and expanded with a narthex and two side naves by the Revival master-builder Georgi Novakov Dzhongar, the building was enhanced by the addition of large-scale architectural details: capitals, alcoves, cornices, gables, windows and domes of various shapes.\nThe church was furnished with iconostases by the woodcarver and icon painter Peter Filipov; icons, in a modernized Russian style departing from the Byzantine iconographic dogma – by an unknown author; murals – by the artists Haralampi Tachev, Apostol Hristov, and Dechko and Radomir Mandov. By the altar of Saint Spas was buried Kiro Geoshev, hanged on November 15, 1877 – a bookseller from Sofia, associate of Vasil Levski and member of the Sofia Revolutionary Committee. In the western section of the churchyard was the tomb of Major General Sava Mutkurov; by the altar curtain – the tombs of the parents of Ivan Denkoglu – merchant and patron of the Bulgarian Revival Enlightenment.\nThe British-American bombing attack on Sofia of March 30, 1944, devastated and scorched half of the temple.
After the end of the war, the surviving section was restored and hosted services until the early 1980s, when it was demolished to free space for the monumental building of the Bulgarian Foreign Trade Bank (today UniCredit Bulbank). In 1986 – 1987, the medieval St. Spas – a monument of culture of national importance – was conserved within a dedicated underground space.", "score": 29.345039179026966, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "St. Petersburg’s skyline says it all. From the banks of the Neva the views include the golden cathedral towers of the Peter and Paul Fortress and the pointed, towering minarets and bright blue tiles of the mosque dome just to the right of it. Peter the Great wanted his capital to reflect the cultural makeup of his vast empire.\nRussian southern cities which lie on the Caspian Sea were the only real Russian ports linking the nation to the Silk Road trade. St. Petersburg lies just down the river route from the trading post village of Novgorod. Russia supplied furs, honey and slaves to Muslim lands as far as Baghdad. The original route, connecting the Volga River to the Caspian Sea, was used until the 11th century. By the 13th century, another route linking the Black Sea to the Byzantine and Persian Empires replaced the original. This is the route workers travelled when Peter the Great invited all Russians to help construct their new capital, St. Petersburg. This included the first large number of Muslims to travel to this part of the country. They were the Tatars from the Volga Region.\nThe Russian Empire connects the east to the west, making it more Eurasia than Russia, and covers almost one sixth of the earth’s surface. This being said – there has always been a rich cultural, linguistic and religious diversity among all of its people. Peter the Great had a genuine interest in the affairs of the Muslim community since Russia was beginning to extend its empire into Ottoman territory.
Among many things, he personally ordered the first Russian-language Qur'an to be published in 1716 to help welcome Russia's new subjects. It wasn't until much later that a proper mosque was built for those who made St. Petersburg their home.\nThe mosque has suffered a lot over the centuries. It was shut down by the Bolsheviks – like most religious institutions – and later used as a storage warehouse during World War II. The mosque's doors remained locked through 1956, and it didn't get any major renovations until the 1980s.\nToday, the city has nearly half a million Muslim residents, many of them descendants of the Tatars. The Great Mosque with its tall blue dome is hard to miss even on a foggy day. Most non-Muslim visitors can only view the mosque from the outside gates since it is a working mosque.", "score": 28.116692447421833, "rank": 34}, {"document_id": "doc-::chunk-2", "d_text": "In 1991, not far from the Cathedral by the main alley, a monument to Peter I, presented to the city by its author Mikhail Shemyakin, was set up.\nOn the territory of the fortress there are the expositions of the Museum of the History of Saint Petersburg. The oldest one is placed in the "former prison" of the Trubetskoi Bastion, built in 1870-72. Since Peter's reign the most important state criminals were sentenced to solitary confinement cells, later - to isolated cells of the Secret House of the Alexeevsky ravelin. The prisoners of the "Russian Bastille" were: the son of Peter I - Tsarevich Alexey, Artemy Volynsky, Tadeusz Kosciuszko, participants of the freedom movement, Alexander Radishchev, the Decembrists, the Petrashevsky Circle, Fyodor Dostoevsky, Mikhail Bakunin, Nikolai Chernyshevsky, Peter Kropotkin, Maxim Gorky, members of "The People's Will", members of the Social Revolutionary Group, Bolsheviks… After the February Revolution the tsarist ministers were imprisoned in the Trubetskoi Bastion, and on the night of 26 October 1917 - the members of the Provisional Government.
During the Civil War the prison held victims of the "red terror" and members of the Kronstadt uprising of 1921. The exposition, first opened in the prison in 1924, tells about the building, its regime and several generations of prisoners.\nIn the former Commandant's House the exposition "The History of Saint Petersburg" has been open since 1975. Nowadays it presents the past of the Neva-side territories from ancient times to the foundation of Petersburg in 1703, and the history of the city until the mid-19th century. In the Engineer's House there are various exhibitions from the rich museum collections. In the Ioann ravelin there is a museum of the Gas Dynamics Laboratory, devoted to the development of national rocket building and astronautics. In the 1930s it housed the USSR's first experimental gas-dynamics laboratory for the design of rocket engines. The exposition tells about the founders of astronautics - Konstantin Tsiolkovsky, Nikolay Zhukovsky, Sergei Korolev, and many others.", "score": 27.72972038949445, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Khram Spasa na Krovi (Church of the Saviour on the Spilt Blood)\nSt Petersburg (Federal City of St Petersburg. Northwestern Federal District) RF", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "Leningrad is the former name of St. Petersburg, Russia. St. Petersburg is a Russian port city on the Baltic Sea. It was the imperial capital for 2 centuries, having been founded in 1703 by Peter the Great, subject of the city’s iconic “Bronze Horseman” statue.
It remains Russia’s cultural center, with venues such as the Mariinsky Theatre hosting opera and ballet, and the State Russian Museum showcasing Russian art, from Orthodox icon paintings to Kandinsky works. There are many church buildings in Leningrad. However, today a mere handful function in the way in which they were initially designed. Most of them have been turned into museums. This also includes the towering St. Isaac’s Cathedral, which visually reminds onlookers of St. Peter’s Basilica in Rome.\nKazan Cathedral or Kazanskiy Kafedralniy Sobor, also known as the Cathedral of Our Lady of Kazan, is a cathedral of the Russian Orthodox Church on the Nevsky Prospekt in Saint Petersburg. It is dedicated to Our Lady of Kazan, one of the most venerated icons in Russia. It is the most informative presentation of the official attitude toward religion.\nKazan Cathedral has also been converted into a museum of the History of Religion and Atheism. In the basement of this stately cathedral, the display of religious history proceeds in chronological order down to the present day. The visitor is able to actually see the instruments of torture that were used during the time of the Inquisition. The scene that will haunt you at night is that of an Inquisition court trial, with life-size wax models. The traumatized victim, in shock, kneels in chains before his accusers, monks dressed in black robes.
The executioner looks ever prepared to carry out whatever action is necessary.\nOn Leningrad’s main avenue, Nevski Prospekt, opposite the Kazan Cathedral, on the other side of the street (at the intersection of Nevsky Prospekt and the Griboyedov Canal), we find the largest bookstore in the city, Dom Knigi (also known as the Singer House).", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Kazan Cathedral in Red Square\nThe Kazan Icon Cathedral of the Mother of God was erected in memory of the liberation of the Russian state from the Polish-Lithuanian invaders, which was achieved with the help and intercession of the Mother of God, who showed her mercy through the miraculous Kazan icon. The temple was built at the expense of the first tsar of the Romanov dynasty, Mikhail Feodorovich, and consecrated in 1636. From its construction the temple was one of the most important churches in Moscow, and its abbot occupied one of the first places among the Moscow clergy.\nThroughout its history, the cathedral was rebuilt several times: in the 1760s, 1802-05 and 1865.\nIn the 1920s the renovationists served in the cathedral for some time. In 1925-1933 the restoration of the cathedral was carried out under the direction of the architect P.D. Baranovsky. In 1928 the bell tower of the cathedral was demolished. In 1930, the Kazan Cathedral was closed, and in 1936 – demolished.\nThe cathedral was restored in 1990-1993 at the expense of the Moscow City Hall and donations from citizens. Kazan Cathedral is the first Moscow temple completely lost during the Soviet era to be recreated in its original forms. It was possible to recreate the historical appearance of the temple thanks to the measures taken by the architect P.D. Baranovsky before the destruction of the temple, and the studies of historian S.A.
Smirnova. On November 4, 1993, the church was consecrated by His Holiness Patriarch Alexy II.\nThe main throne was consecrated in honor of the Kazan Icon of the Mother of God; the northern side-chapel in honor of St. Guria, archbishop of Kazan, and St. Barsanuphius, bishop of Tver; the south hall in honor of the holy martyrs Hermogenes and Tikhon, Patriarchs of Moscow and All Russia (not consecrated).", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "SOURCE: www.mapsofworld.com/travel/infographics/saint-basils-cathedral\nSt Basil’s Cathedral is located in Red Square, right in the heart of Russia’s capital city, Moscow.\nNo visit to Moscow would be complete without exploring the many rooms and galleries that make up the nine individual churches on the site.\nAlthough originally constructed as a Russian Orthodox cathedral on the instructions of Ivan the Terrible in the mid-16th century, this iconic building now houses a museum and is probably the most recognizable building in the country and a major tourist attraction.\nThe cathedral’s status as one of the world’s most celebrated buildings was further enhanced in 1990 when it was named a UNESCO World Heritage Site.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-12", "d_text": "- Blagoveshchensky Bridge (formerly the Lieutenant Schmidt Bridge) – built in 1850 as a movable seven-segment iron bridge connecting Labour Square with the 7th Line of Vasilievsky Island.\nKuzminskii railway bridge | Big Obukhovsky Bridge | Liteyny Bridge | Blagoveshchensky Bridge\nWhereas most tourist attractions of the Neva are located within St. Petersburg, there are several historical places upstream, in the Leningrad Oblast. They include the fortress Oreshek, which was built in 1323 on Orekhovy Island at the source of the Neva River, south-west of the Petrokrepost Bay, near the city of Shlisselburg. The waterfront of Shlisselburg has a monument to Peter I.
In the city stand the Blagoveshchensky Cathedral (1764–95) and a still-functioning Orthodox church of St. Nicholas, built in 1739. On the river bank stands the Church of the Intercession. Raised in 2007, it is a wooden replica of a historical church which stood on the southern shore of Lake Onega. That church was constructed in 1708, and it burned down in 1963. It is believed to be the forerunner of the famous Kizhi Pogost.\nThe Old Ladoga Canal, built in the first half of the 18th century, is a water transport route along the shore of Lake Ladoga connecting the River Volkhov and the Neva. Some of its historical structures are preserved, such as a four-chamber granite sluice (1836) and a bridge (1832).\nOn 21 August 1963 a Soviet twinjet Tu-124 airliner performed an emergency water landing on the Neva near the Finland Railway Bridge. The plane took off from Tallinn-Ülemiste Airport (TLL) at 08:55 on 21 August 1963 with 45 passengers and 7 crew on board and was scheduled to land at Moscow-Vnukovo (VKO). After liftoff, the crew noticed that the nose gear undercarriage did not retract, and ground control diverted the flight to Leningrad (LED) – because of fog at Tallinn. While circling above St.", "score": 26.693026842351212, "rank": 40}, {"document_id": "doc-::chunk-1", "d_text": "The connection with Okhta was maintained by self-propelled ferries, so-called planes, and crossing on them was an expensive pleasure that not everyone could afford, so the people composed a song:\nFrom under Smolny to Okhta\nThe fare is far too dear,\nSo I carried my darling\nAcross in my own arms.\nBolsheokhtinsky Bridge - from the history of its building\nIn 1829, during the drafting of the development plan of St. Petersburg, Nicholas I noted the need to build a crossing from the center of St. Petersburg to the working-class Okhta district. Because of the lack of funds, the solution of this problem was postponed, and the issue was raised again only in the 1860s.
It was decided to connect Okhta with St. Petersburg not only by a crossing but also administratively. There were disputes over the location of the bridge, because everyone wanted it to be built next to their own business, while the boatmen and ferrymen, not wanting to lose their work, did everything they could to prevent construction from starting. Only on June 5, 1884, did the City Council decide to build a bridge from the Smolny Cathedral to the Krejton shipyard, in the area of the old Petrozavod, resolving \"To recognize the Okhta suburb as annexed to the city of St. Petersburg\".\nOn September 1, 1901, an international competition for the bridge design was announced. The 16 submitted projects were rejected for failing to meet the technical requirements, and the building of the bridge was postponed. Only after a tragedy, when in the spring of 1907 the ship Arkhangelsk sank during a crossing to Okhta and people died, did the ruler order that the building of the crossing be delayed no longer. From among four entries submitted outside the competition, the project \"Freedom for Navigation\" was selected, designed by G.G. Krivoshein, colonel and professor of the Nikolaev Engineering Academy, and the military engineer Lt. Col. V.P. Apyshkov. The foundation of the bridge was laid on June 26, 1909, on the eve of the 200th anniversary of the Battle of Poltava, whose victor was Peter the Great, so the bridge was named after the emperor.", "score": 26.357536772203648, "rank": 41}, {"document_id": "doc-::chunk-1", "d_text": "Peter and Paul Fortress\nThough the Romanovs were murdered by a Bolshevik squad in 1918, it took until 1991 for their bodies to be excavated from their burial site outside Yekaterinburg. The rumours of Anastasia’s survival had been going strong through the years, but the two missing bodies in the grave (Alexei, and either Anastasia or Maria) just added fuel to the fire.
It took until 2007 to find them in the woods!\nThe Romanovs are now canonised and buried in the St Catherine Chapel inside the Saints Peter and Paul Cathedral. The Cathedral and Fortress are among the oldest landmarks in all of Saint Petersburg, and the burial site of almost all the Russian emperors and empresses. Notable names include Catherine the Great and Peter the Great, as well as Maria Feodorovna (Anastasia’s now-famous grandmother).\nPeter and Paul Fortress is a goldmine when it comes to Russian history. A single ticket opens doors to many a building and exposition inside its walls, and its riverbank offers you the best view of the Neva and Winter Palace! The Cathedral itself is breathtaking from the moment you set foot inside, with the kind of architecture that is sure to leave a long-lasting impression on you!", "score": 25.906013950901286, "rank": 42}, {"document_id": "doc-::chunk-3", "d_text": "Bronze Horseman\nThe Bronze Horseman is an equestrian statue of Peter the Great, the founder of St Petersburg, and was commissioned in 1768 by Catherine the Great. It is rumored to be the largest stone ever moved by humans, originally weighing 1500 tons and carved down to 1250 tons to be moved to its current site!\nBronze Horseman is located at Senate Square, Saint Petersburg, Russia, 190000\n9. Big Obukhovsky Bridge\nThe Obukhovsky Bridge is St Petersburg’s newest bridge across the Neva River and the only one that isn’t a drawbridge. Opened in December 2004, the bridge is 395 feet tall and 1.75 miles (2,824 meters) long!\nObukhovsky Bridge is located at Saint Petersburg, Russia, 192012\n10. Church of the Savior on Spilled Blood\nFinally, I saved the best for last! The Church of the Savior on Spilled Blood was the place I wanted to see most while I was in St Petersburg.
In my opinion, this would be St Petersburg’s equivalent of Saint Basil’s Cathedral in Moscow, although I did like St Basil’s architecture slightly more!\nHowever, there’s NO denying that this classic Russian architecture is beautiful and unique. I’ve said it a million times, but this kind of architecture was my primary reason for wanting to visit Russia and I was not disappointed!\nAnd, guess what I spy? Onion domes!! Before you think I’ve lost my mind and you’re curious as to what onion domes are, you definitely need to read my Moscow post!\nThe interior of the Church was jaw-dropping. When I first got in, I even forgot to take pictures because I was so amazed!\nBut, I finally remembered I was, in fact, in possession of a camera. Then, I looked around for signs that allowed for photography because, in so many places like this, photography isn’t allowed.\nFinally, once I found proof of permission, I went completely crazy! With the camera that is… (although some days the former could be debated! 😜)\nBuilt between 1883 and 1907, the Church was constructed on the site where Emperor Alexander II was fatally wounded in 1881.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "Today, the museum complex also contains the Menshikov Palace, the eastern wing of the General Staff building, the Staraya Derevnya Restoration and storage facility (with carriages and automobiles), and the Museum of the Imperial Porcelain Factory. It is recommended that Port of St. Petersburg visitors purchase their tickets before they go to the Museum and that they join a tour group.\nLocated at Fontanka and Nevsky Avenue. Photo taken 31 May 2012.\nPhoto by A.Savin\nThe Peter and Paul Fortress (Petropavlovskaya Krepost) in the Port of St. Petersburg is located on Rabbit Island where Peter the Great made his base while the city was being built.
The Peter and Paul Cathedral on the island holds the tombs of the Russian tsars from Peter I to the last tsar, Nicholas II, and his family. Peter the Great built the fort to protect the new Port of St. Petersburg from the Swedish Navy. The fortress was founded on May 27, 1703, the official anniversary date of the Port of St. Petersburg. The fortress housed some of the city's garrison and also served as a high-security jail for political prisoners, including Peter's son Alexei. Other famous prisoners included Dostoyevsky, Trotsky, Gorkiy, and Lenin's older brother Alexander. In addition to the Cathedral, the fortress contains the Mint, one of only two places in Russia that mint medals and coins, and the City History Museum.\nTaken 9 March 2012.\nPhoto by Dmottl\nThe Peter and Paul Cathedral was the first stone church in the Port of St. Petersburg. Built in the early 18th century, the cathedral is crowned by a beautiful golden angel holding a cross atop its gilded spire. At 123 meters tall, it is the Port of St. Petersburg's tallest building. The oldest landmark in the Port of St. Petersburg, the Cathedral was closed in 1919 and converted to a museum in 1924. While it is still a museum, religious services began again in 2000. The Cathedral contains a carillon of 51 bells weighing a total of over 15 metric tons and with a range of four octaves.\nSt. Petersburg Nakhimov Naval School is in the background.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "On 17 June 1777, the Church of the Nativity of St. John the Baptist (Chesmenskaya) was founded.\nLater, at the request of Catherine II, a marble plaque was placed at the entrance reading: «This temple was built in the name of the Holy Prophet Forerunner and Baptist John in memory of the victory over the Turkish fleet, won at Chesme in 1770 on the day of his Nativity.
Founded in the 15th year of the reign of Catherine II in the presence of King Gustav III of Sweden, travelling under the name of Count Gotland, and consecrated on June 24, 1780, in the presence of His Majesty the Roman Emperor Joseph II, travelling under the name of Count Falkenstein».\n- Address: St. Petersburg, ul. Lensoveta, 12", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "3) In Novgorod itself, there is the district of St Sophia, which includes: the Kremlin with its 15th-century fortifications, reinforced in the 17th century; the church of St Sophia from the mid-11th century, and other monuments from the 12th to 19th centuries; monuments in the commercial district (including many of the oldest churches in the town, such as the Church of the Transfiguration, decorated with frescoes at the end of the 14th century by Theophanes the Greek, who was responsible for reviving medieval Russian painting and was the teacher of Andrei Rublev); and four religious monuments (12th and 13th centuries) outside the old town (including the famous church of Neredica). 4) This was the very place where Vladimir Yaroslavovich built a huge stone Cathedral of St. Sophia in 1050. This cathedral became the symbol of the strength and power of the Prince’s authority, and later it was adopted as the symbol of the power of the Novgorod Republic. Residents of the city would die as soldiers fighting for the Holy Sophia. 5) George’s Church of the St. George’s Monastery could also compete with the Sophia cathedral. The chronicle says that the prince’s monastery was founded in 1119, becoming the second one in Russia after the Kievo-Pecherskaya Lavra. It was located on the legendary route from the Vikings to the Greeks, and formed the southern gate of Novgorod. And while George’s Church yields to the Sophia in size, its image reflects the highest ideas of our ancestors of beauty and harmony.
The George Monastery is especially beautiful during the spring flood of the Volkhov, when it turns into an island surrounded by water. 6) In 1136, when the state system changed from princely rule to the veche, the popular council, a new chapter in the history of temple construction began. At that time citizens were free to decide how to build and what to build. New churches were ordered by boyars, merchants, and residents of the streets. Of course, their artistic tastes and material resources were far less than those of princes. The buildings became smaller and the décor simpler. But the overall image of Novgorod architecture is preserved.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-10", "d_text": "The Stock Exchange, designed by the French architect Thomas de Thomon and built between 1805 and 1810, was inspired by the best examples of Ancient Greek and Roman architecture.\nThe building was completed in 1810, although the official opening of the Exchange was not until 1816. De Thomon's facades feature 44 Doric columns on a high red granite stylobate, and above the main portico is a statue of "Neptune with two rivers - the Neva and the Volkhov". De Thomon went on to design the surroundings of the building, including the Rostral Columns (gas-fired navigational beacons), the square in front of the Stock Exchange, and the embankment. Thus the building became the focal point of the edge of Vasilevskiy Island - a vital location because it faced the Winter Palace on the opposite side of the Neva River.\nWhen the Bolsheviks seized power, the Stock Exchange Building became a sailor's club, then the Chamber of Commerce of the North-West Region, a labour exchange, the Soviet for the Study of Manufacturing Capability in the USSR and several other institutions before being transferred to the Central Naval Museum in 1939. The museum closed in 2011, reopening in 2013 in the renovated Kryukov Barracks situated on the Moika Canal.\nIn December 2013, St.
Petersburg Governor Georgy Poltavchenko announced that the building would be transferred to the State Hermitage Museum. The keys to the historic Old Stock Exchange Building were turned over to Mikhail Piotrovsky, Director of the Hermitage, on April 18, 2014. The event coincided with the 250th anniversary of the State Hermitage Museum.\nEarlier this week, the government announced that it is prepared to allocate 1.12 billion rubles from the Federal budget for the repair and restoration of the building. The State Hermitage Museum claims that the building is in a terrible state of disrepair, noting that 1.6 billion rubles are necessary to carry out the repairs. The funds are to be allocated from five ministries, including the Ministry of Culture.\nRestoration of the building is expected to last until 2017; once complete, it will house the new Russian Imperial Guard and Heraldry Museum.\nA Russian Moment No. 66 - Maltese Chapel, St.", "score": 25.000000000000068, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "Today we are going to take a brief look into one of the city’s most iconic buildings, the Kazan Cathedral, a beautiful piece of architecture and one which has many stories to tell. The current building is in fact a reconstruction of the original church, which was destroyed at the direction of the then General Secretary of the Central Committee of the Communist Party of the Soviet Union, Joseph Stalin, in 1936. This was in fact one of the first churches to be rebuilt following the fall of the Soviet Union.\nThe original cathedral had its fair share of damage; part of the building burned down in the early 17th century. The rebuild was done in brick, and that is what inspired the colorful and detailed design of the new church once the old one had burned down.\nWhen Stalin ordered the clearing of the churches, this was one that everyone wanted to be saved, along with St.
Basil’s, yet attempts to do so were in vain.\nUnlike some rebuilds, the idea of this one was to mimic the original building which had stood on this ground. Carried out by the Moscow city branch of the All-Russian Society for Historic Preservation and Cultural Organization, what they managed to achieve with this rebuild was absolutely stunning, and almost identical to the previous cathedral which had been constructed here.\nThe church is open throughout the week for visitors and it is a great place to spend a couple of hours, exploring the history and the stories that can be told about the place. Throughout the years this church, in both its original and rebuilt versions, has been home to religious dissidents and the place has even seen a number of battles take place. The architecture inside the church is as impressive as the stories that can be told.\nWhilst St Basil’s of course gets a great deal of the attention owing to that iconic design, this is also a great place to visit during your time in Moscow.", "score": 24.3727862338712, "rank": 48}, {"document_id": "doc-::chunk-2", "d_text": "It is the largest network of gravity-fed water fountains in the world. We ended our tour of Peterhof with a walk from the prominent center fountain, called the Grand Cascade, along a canal that flowed through the gardens to a pier where a hydrofoil took us back to St. Petersburg.\nWe enjoyed our jaunt across the Gulf of Finland to the mouth of the Neva River. Approaching St. Petersburg by water gave us a new appreciation for the beauty of the buildings on the waterfront. The Hermitage, for instance, was dazzling from a distance! After a nice lunch at the City Café, we hopped back on the bus for a drive by St. Isaac’s Cathedral, the Bronze Horseman statue of Peter the Great, the battleship Aurora, and the Rostral Columns before stopping at the most gorgeous sight in St.
Petersburg—The Church on the Spilled Blood!\nThe church was built between 1883 and 1907 on the spot where Emperor Alexander II was assassinated in 1881 (hence the gruesome name). Both the exterior, designed in the traditional Russian onion-dome style, and the interior are decorated with bright shades of marble and detailed mosaic tiles. According to restorers, it contains several thousand square yards of mosaics – more than any other church in the world. The church was closed in the 1930s when the atheist Soviets, who were offended by religion, began destroying churches all over the country for being “inappropriate symbols of Christianity”. The church remained closed and under restoration for years and was finally re-opened in 1997, not as a place of worship, but as a Museum of Mosaics. The pictures we took of this church are some of my favorite travel photos!\nIt’s hard to comprehend the damage the city has endured from various disasters including fires, floods and wars, especially the cruel Nazi occupation of WW II, but all the sites we toured have been restored to their original glory. Architecturally, the city ranks as one of the most splendid in Europe. The historic district was designated a UNESCO World Heritage Site in 1990. While I didn’t find St. Petersburg to be a particularly congenial city, clearly there is a fondness for art, opulence and beauty here.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "9.00 – 11.30am – Our guide meets you at the hotel. City tour of St. Petersburg. Within the City Tour you will drive along the most popular sights of our beautiful city such as the Spit of Vasilievsky Island, Palace Square with the famous Winter Palace, Menshikov Palace, and the Academy of Fine Arts. Several times we will drive along Nevskiy Prospekt – the main street of our city. You will see such places as Arts Square, Catherine’s Garden, Anichkov Bridge, St. Isaac’s Square, and the Bronze Horseman.
At the most beautiful places the guide will supply you with more information and will offer to stop so you can take photos. An inside tour of the Peter and Paul Fortress with its Cathedral is included in this City Tour. The Fortress is the place where the city began in 1703. Inside St. Peter and Paul’s Fortress there is a Cathedral of the same name that occupies a special place among the churches of St. Petersburg. This cathedral has been the burial place of the Russian Tsar family since the time of Peter the Great. Peter the Great was the first to be buried in the unfinished building of this Cathedral. The tradition of burying the members of the ruling dynasty in one place, widespread throughout the world, was followed in St. Petersburg, too.\n11.30 – 12.00pm – Inside tour of the Church on the Spilt Blood. The Church on the Spilt Blood was created in the best traditions of Russian art of the 14th to 17th centuries and stands out among all St. Petersburg churches by its distinct national appearance. It was built in 1883-1907 on the spot where, on March 1st, 1881, a terrorist mortally wounded Alexander II.\n12.30 – 1.00pm – Inside tour of St. Isaac’s Cathedral. St. Isaac’s Cathedral is an integral part of St. Petersburg, just as St. Peter’s Cathedral is a part of Rome and St. Paul’s Cathedral is a part of London. It is one of the largest domed buildings in the world. Built in the early 19th century, it is remarkable for the perfect harmony of architecture, painting, sculpture and mosaics. Nowadays it functions both as a church and a museum.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "Today marks the 450th anniversary of Russia’s St. Basil’s Cathedral, the San Francisco Chronicle reports.\nThe Cathedral stands outside the Kremlin on Red Square in Moscow.\nRussians will celebrate the anniversary by opening an exhibition dedicated to St.
Basil, the religious dissenter who stood up to Czar Ivan the Terrible, and other “holy fools.”\nThe cathedral was built in 1561 to celebrate Ivan’s victory over the Mongol rulers, and is upheld by believers to be a place for miracles and healing.\nIt has become known over time as the place where St. Basil is buried.\n© 2017 Newsmax. All rights reserved.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "Photographs by William Brumfield\nIt is said an icon of St. Nicholas appeared miraculously in a tree before Grand Prince Dmitry Donskoi in a meadow where he and his troops had stopped to rest on their way south, to confront Tatar troops. The Grand Prince’s army went on to win the battle over Khan Mamai at Kulikovo Pole, marking an epochal event in Russian history.\nViewing the icon’s appearance as a good omen, Prince Dmitry ordered that a monastery dedicated to St. Nicholas be set up at the site, around the tree. According to medieval legend, the miraculous icon warmed (“ugreshe”) the Prince’s heart, giving the monastery its popular name.\nGrand Prince Dmitry founded the St. Nicholas-Ugreshsky Monastery in 1380, near the court village of Ostrov, a location already well known among Moscow’s princely elite. The new monastery rapidly gained in stature and, by the 15th century, it had its own legation in the Kremlin.\nLike other Moscow monasteries, St. Nicholas-Ugreshsky shared the city’s turbulent fate. Built of wood in its early centuries, the monastery was burned to the ground in 1521 during a major attack on Muscovy by the Crimean Tatar khan Mehmed I Giray (1465-1523).\nReconstructed shortly thereafter, the monastery gained its first masonry church with the building of the new Cathedral of St. Nicholas, presumably in the 1520s or 1530s. Frequently renovated, parts of the structure were rebuilt in the 1840s. In 1940, the cathedral was razed.
As part of the revival of the monastery after 1990, the cathedral was rebuilt in a stylized form suggestive of 15th-century Muscovite architecture.\nThroughout the 17th century, the St. Nicholas-Ugreshsky Monastery was a frequent pilgrimage site for the tsar and his courtiers. The young tsar Peter I was a frequent visitor in the late 1680s, and in 1698 the monastery was a place of detention and interrogation for the musketeers (streltsy) involved in the infamous revolt against Peter’s authority.\nWith the secularization of Russian society and politics in the 18th century, the influence of monasteries waned, as did financial support.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-4", "d_text": "Peter and Paul Fortress\nThe very first building to be constructed in St. Petersburg is this edifice. The structure’s construction began in 1712 and was completed in 1733, a long 21 years later.\nArchitect: Bartolomeo Rastrelli (Rebuilt Structure)\nNamed after Catherine, the wife of Peter the Great, the palace exteriors have been made using around 100 kilograms of gold.\nGrand Cascade Lodge\nYear: 1714 – 1728\nArchitect: Bartolomeo Rastrelli\nAlso known as the Peterhof Palace, this structure has a giant fountain – the Samson fountain situated at the center of the cascade. The fountain shows the moment when Samson tears open the jaws of a lion.\nYear: 1748 – 1764\nArchitect: Bartolomeo Rastrelli\nThis edifice was built to house the daughter of Peter the Great, Elizabeth, when she opted to become a nun. Today, it is used as a concert hall.\nYear: 1754 – 1762\nArchitect: Bartolomeo Rastrelli\nThis palace was the original residence of the Russian monarchs. The palace has approximately 1,786 doors, 1,945 windows, 1,500 rooms, and 117 staircases.\nYear: 1747 – 1751\nArchitect: Bartolomeo Rastrelli\nThis was the traditional baptismal church of the children of the Tsars. However, it was open to public viewing in 1900.
The church was damaged during World War II, and was converted into a post office.\nSt. Michael’s Castle\nYear: 1797 – 1800\nWhen one of the soldiers was guarding the construction site of this castle, he had a vision that Archangel Michael was guarding the castle alongside him. Thus, the castle came to be known as Mikhailovsky (St. Michael’s) Castle.\nLomonosov Moscow State University\nArchitect: Lev Vladimirovich Rudnev\nThe university’s library was the only one open to the general public, beginning in 1756. In 1941, it was named after the famous academician Mikhail Lomonosov.\nThe 19th century was mainly dominated by the Byzantine and Russian Revival.", "score": 23.986933722492246, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "The State Russian Museum (Русский музей) is one of the most famous and most visited museums in St. Petersburg. The collection of the Russian Museum has more than 400,000 exhibits. It is the largest museum of Russian art in the world. The main exhibition of the museum is located in the city center, in the Mikhailovsky Palace. The museum has several branches.\nThe State Russian Museum was founded in 1895 by Emperor Nicholas II. The museum was opened in March 1898. The Russian Museum in St. Petersburg was the first Russian state museum of Russian art. Before 1917, the museum was called the \"Russian Museum of Emperor Alexander III\".\nThe museum received Russian works of art from different sources: the Academy of Fine Arts, the Hermitage, the Winter Palace and others. Many works were acquired from private collections. Over the years the museum's collection has continued to grow from these different sources.\nCurrently, the museum's collection includes more than 400,000 items. The museum's artworks trace the development of Russian art over more than 1,000 years, from the 10th to the 21st century. The Russian Museum has exhibits representing all types and genres of art.
Here you can see icons of the 12th–15th centuries, paintings, sculptures, drawings, and prints.\nThe main building of the museum is located in the city center, close to Nevsky Prospekt, at Arts Square (Площадь Искусств). The main museum building was built by the architect Carlo Rossi in 1819-1825. The palace was designed for the Grand Duke Mikhail Pavlovich, the son of Emperor Paul I (Mikhailovsky Palace). In 1895 the building was purchased and transferred to the museum. The Mikhailovsky Palace and the adjacent buildings currently house the museum's main exhibition. (Do not be confused. There is also Mikhailovsky Castle (Engineer's Castle) nearby).\nA large number of buildings under the control of the Russian Museum are in St. Petersburg: the Mikhailovsky Palace, the Benois building, Mikhailovsky (Engineers') Castle and its park, the Marble Palace, the Stroganov Palace, the Summer Palace of Peter I, the Mikhailovsky Garden, the Summer Garden and the Peter I House.\nThe famous \"Savior on the Spilled Blood\" church is located near the Russian Museum.", "score": 23.642463227796483, "rank": 54}, {"document_id": "doc-::chunk-1", "d_text": "The exterior was originally designed to resemble Tamerlane’s Gur Emir Mausoleum in Samarkand, Uzbekistan. The inside design is a combination of the art nouveau popular at the beginning of the 20th century and traditional mosque motifs. I didn’t see it myself, but pictures show the interior filled with blue and green tiled ceilings and scripted passages of the Qur’an. The outside views are amazing, and maybe the inside will be open for visitors next time.\nIt’s something worth seeing while in St. Petersburg and not too far away from other wonderful sites like the Peter and Paul Fortress and the former Bolshevik headquarters – Kshesinsky Palace. It’s just a short walk across the river or an easy tram ride from most parts of St.
Petersburg.\nEfim Rezvan, deputy director of the Peter the Great Museum of Anthropology and Ethnography, states it best: “There is no panorama of the center of St. Petersburg that does not show two minarets. And this symbol is not only of St. Petersburg. This reflects the country itself, and the dramatic history of the mosque reflects the dramatic history of the country.”", "score": 23.3634000675641, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "The elegant golden silhouette of Peter and Paul Cathedral is a symbol of St Petersburg that is easy to recognize. Many city legends and historical facts are linked to this place. The Cathedral occupies the main place in the whole ensemble of Peter and Paul Fortress. It’s actually the most beautiful building there.\nThe cathedral was never used for weddings or baptisms; only the funeral services for members of the Imperial family and the fortress commandants took place there. All were first buried in the Cathedral and later in the special Grand Ducal Burial Vault.\nHistory of the Cathedral\nThe Peter and Paul Cathedral as we know it now stands on the site of the former wooden church, built by order of Peter the Great on Hare Island in 1703. The stone walls around it began to rise ten years later, following the design of Domenico Trezzini. A piece of the relics of Andrew the Apostle was laid in the foundation. The exterior construction was finished in eight years.\nA rare carillon was purchased from Holland for the chimes. After the installation of the 40-meter spire, Peter and Paul Cathedral was the tallest building in the city for almost 300 years.\nThe Peter and Paul Cathedral was consecrated on June 28, 1733. It had been under construction for 19 years (1712-1733) with the attentive participation of Peter the Great, Catherine I, Peter II and Anna Ioannovna.\nAfter the events of 1917, many of its valuables were moved to Moscow, where most of them were lost.
Some historical artifacts are preserved in the Hermitage and other museums of the city. In 1924 the Cathedral also became a museum, which helped to preserve its artistic value.\nArchitecture and interior\nFrom the outside, the building only faintly resembles an Orthodox cathedral. Its rectangular walls are adorned only with austere pilasters and bas-reliefs above the windows. Attention is drawn chiefly to the baroque facade, which flows smoothly into a high bell tower with a golden spire. At its top stands an angel holding a cross, 3.2 meters high with a 3.8-meter wingspan.\nThere is a story that lightning once struck this main adornment of the Cathedral. Only the roofer Petr Telushkin managed to get to the top of the Cathedral and fix the problem.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "The weighty mass of St. Isaac's Cathedral dominates the skyline of St. Petersburg. Its gilded dome, covered with 100 kg of pure gold, soars over 100 meters into the air, making it visible far out onto the Gulf of Finland. The Cathedral was commissioned by Alexander I in 1818 and took more than three decades to complete. Its architect, August Monferrand, pulled out all the stops in his design, incorporating dozens of kinds of stone and marble into the enormous structure and lading its vast interior with frescoes, mosaics, bas-reliefs, and the only stained glass window in the Orthodoxy. By the time the cathedral was completed in 1858, its cost had spiraled to more than twenty million rubles--as well as the lives of hundreds of laborers. Both the exterior and the interior of the cathedral deserve prolonged observation, and the view from the dome is stupendous.\nCathedral of Our Lady of Kazan\nThis cathedral is one of the most magnificent, and most peculiar, landmarks of St. Petersburg. Built in 1811 by Andrey Voronikhin, its plan is a strange compromise between a number of different architectural imperatives.
Its patron, Paul I, desired a church plan modelled on that of St. Peter's in Rome, with its semicircular colonnade facing north so as to conform to the formal layout of Nevsky prospekt. This plan was carried out, but the orientation of the church itself was dictated by a higher authority. Owing to the Orthodox requirement that the church altar and entrance follow an east-west alignment, the church itself sits sideways at the center of the colonnade, its main entrance facing west (as if this were not confusing enough, the present entrance is located on the east). On the square in front of the Cathedral are statues of the two commanders of the Russian army during Napoleon's march to Moscow, Barclay de Tolly and Mikhail Kutuzov.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "St. Basil’s Cathedral\nAs its name implies, St. Basil’s Cathedral on the Red Square in Moscow, Russia, is named after Saint Basil (who is also known as Basil Fool for Christ). The story goes that in the 1500s, an apprentice shoemaker/serf named Basil stole from the rich to give to the poor. He also went naked, weighed himself down with chains, and rebuked Ivan the Terrible for not paying attention in church. Most of the time, admonishing anyone with the name “the Terrible” wasn’t such a good idea, but apparently Ivan had a soft spot for the holy fool (as Basil was also known) and ordered a church to be built in his name after Basil died.\nSt. 
Basil’s Cathedral, a Russian Orthodox church, sports a series of colorful bulbous domes that taper to a point, aptly named onion domes, which are part of Moscow’s Kremlin skyline (although the church is actually not part of the Kremlin).\nOh, and Ivan the Terrible lived up to his name when he supposedly blinded the architect who built the church so he would not be able to design anything as beautiful afterwards.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-3", "d_text": "This cathedral was built as a memorial to celebrate the conquest of Smolensk.\nKul Sharif Mosque in Kazan\nThis mosque was built in the 16th century, and was named after Qolşärif, who served there. However, it was destroyed by Tsar Ivan the Terrible; rebuilding began in 1996 and the mosque was inaugurated in 2005.\nBy the end of this century, the use of decorative elements increased in church architecture, and cult construction was on the rise. Large churches with bell towers, multiple cupolas, and aisles were built. The edifices, especially the churches, had asymmetric construction with oddly-shaped cupolas and arches. Many western architects were hired by the Tsars of the time to design a few key buildings.\nNew Jerusalem Monastery\nArchitects: P.I. Zaborsky, Yakov Bukhvostov, Bartolomeo Rastrelli, Matvei Kazakov, Karl Blank and others\nThe reason for building this monastery in this area was its striking resemblance to the Holy Land. The edifice was shut down in the year 1918, and was later blown up by the German army. Currently, it is under renovation.\nChurch of Elijah the Prophet\nYear: 1647 – 1650\nThis church was built by the wealthy brothers Anikey and Nifantey Skripin. The murals in the church showed, for the first time, peasants at work. Until then, it was not allowed to paint peasants on the walls of wealthy edifices.\nThis church was built in the 17th century, but was consecrated only in the year 1704. 
However, during the Soviet period, the church suffered heavy damage.\nThe Transfiguration Church, 37 meters tall, is one of the tallest log structures in the world. It is said that the entire structure was built without using a single nail!\nThe face of Russian architecture changed during this period. This era saw a more methodical construction pattern with symmetric structures and geometric shapes. The ‘rule book’ construction phase began during these times. Baroque-styled cathedrals were constructed in most of the eastern cities. Among other prominent architects, the rise of Francesco Bartolomeo Rastrelli was the most significant development in the world of architecture. His magnificent structures were the highlights of St. Petersburg.", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-2", "d_text": "Brick was also used for the pilasters which delineate the façade. It was originally plastered, but underwent restoration after it was damaged during World War II. Its apse points towards the river, which provides a welcome sight for ships approaching from the Baltic. The shingled roof resembles the bochka roofs popular at the time. The walls were built from local quarrystone, which contrasted with the red bricks. The ground plan of the church is almost square, with four pillars, one apse and one dome.\nPre-Christian architecture (before 988)\nRussian architecture is a mix of Byzantine and Pagan architecture. Some characteristics taken from the Slavic pagan temples are the exterior galleries and the plurality of towers.\nBetween the 6th and the 8th century, the Slavs built fortresses, named grods, which were tightly built wooden fortifications.\nKievan Rus Christian period (988–1230)\nEarly Muscovite period (1230–1530)\nThe Mongols looted the country so thoroughly that even capitals (such as Moscow or Tver) could not afford new stone churches for more than half a century. Novgorod and Pskov escaped the Mongol yoke, however, and evolved into successful commercial republics; dozens of medieval churches (from the 12th century and after) have been preserved in these towns. The churches of Novgorod (such as the Saviour-on-Ilyina-Street, built in 1374) are steep-roofed and roughly carved; some contain magnificent medieval frescoes. The tiny and picturesque churches of Pskov feature many novel elements: corbel arches, church porches, exterior galleries and bell towers.", "score": 22.27027961050575, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "The Saints Peter and Paul Cathedral is a Russian Orthodox cathedral located inside the Peter and Paul Fortress in St. Petersburg, Russia. It is the oldest landmark of St. Petersburg, built between 1712 and 1733 on Hare Island along the Neva River. It is sometimes described as the tallest Orthodox church in the world.\nThe current building was designed by Trezzini and built between 1712 and 1733. It is gold-painted and has a height of 123 metres. The cathedral was closed in 1919 and turned into a museum in 1924. Officially it is a museum, but it is also known for its religious services.\nThe architecture of the Saints Peter and Paul Cathedral features unique designs. The tombs are on the ground floor. 
There is a lightning rod that protects this cathedral. It has a carillon, and a large number of concerts are periodically performed on this site.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "2. Saint Isaac’s Cathedral or Isaakievskiy Sobor\nSt. Isaac’s Cathedral is the largest Orthodox church of St Petersburg. It was built in the XIX century and dedicated to Saint Isaac of Dalmatia. The cathedral is in the Late Neoclassical style. In addition, it is a dominant feature of the cityscape: with its dome, the church towers more than 100 meters over the city.\nThe Cathedral was built by the French architect Auguste de Montferrand (1786–1858), who lived in Russia for 41 years. Interestingly, the cathedral took 40 years to construct. The guides in St. Petersburg can tell you a curious legend. The legend says that the architect received a prophecy that he would die when the Cathedral was finished. And by a strange coincidence, various obstacles constantly arose during the construction of the church. Finally, the Cathedral was completed. A month after that, the architect died in the presence of Russian Emperor Alexander II.\nOnly the best artists were invited for the construction of the cathedral: Fedor A. Bruni, Karl Briullov, Ivan Buruhin, Vasily Shebuev, Franz Riss, and the sculptors Giovanni Vitali and Petr Klodt. The exterior of the Cathedral is decorated with sculpture. The interiors are decorated with natural stone – marble, lapis lazuli, malachite – and mosaics. In the Cathedral you can also see stained glass windows and a rich collection of paintings on biblical subjects as well.\n3. Palace Square\nThe architectural ensemble of the square was formed in the XVIII-XIX centuries. Its territory is more than 2 times that of the main square in Moscow. The square is surrounded by the Winter Palace, the General Staff (1819–29), and the Guards Corps Headquarters (1837–43). 
Next to the Winter Palace there is the Small Hermitage building, which does not stand on Palace Square itself but is included in the architectural ensemble of the square.\nThe palace was constructed by many architects, most notably Rastrelli, in what came to be known as the Elizabethan Baroque style, but it was completed during the reign of Empress Catherine II. After the construction of the Winter Palace and until 1905, it was forbidden to erect buildings higher than the Winter Palace in St.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "In 1828, Empress Maria Feodorovna gifted a pearl and diamond brooch to the family's sacred icon of the Church, the Blachernitissa Icon.\nDuring the 19th century, the Church's interior and finishing were repeatedly changed. In 1829, the side chapel of St. Sergius of Radonezh was built according to the designs by Domenico Gilardi and Mikhail Bikovsky. In 1839, it was connected to a wooden gallery. There are two side chapels in the Church that are dedicated to St. Alexander Nevsky and St. Sergius of Radonezh. The most famous of the estate's owners, Duke Sergey Golitsyn, was buried in the side chapel of St. Sergius of Radonezh in 1859. In 1842, a clock was mounted on the side chapel. It had only one hour hand.\nThe Church functioned till 1929. After it was closed, the holy vessels were taken away; the bell tower was dismantled; the building itself was rebuilt as a hostel; the dome and the bells were destroyed.\nIn 1992, the Church was returned to the Russian Orthodox Church. In 1995, architect Y. Vorontsova reconstructed the original appearance of the Church and the bell tower. In September 1995, the Church was consecrated by Patriarch Alexy II of Moscow and All Russia.\nThe Cathedral Mosque appeared in Moscow because the Tatar population of the city kept growing. 
By the early 20th century, the Tatars lived not only in Zamoskvorechye, but also in Sretenka, Myasnitskaya, Trubnaya, and other streets.\nSince 1894, the Muslim Community repeatedly appealed to Moscow authorities for permission to build the second mos...\nThe Church of St. Nicholas \"Red Chime\" is located in Kitay-gorod, one of Moscow's oldest historical districts, in Yushkov Drive (later known as Vladimirova Drive, and since 1992 as Nikolsky Lane), connecting Varvarka Street and Ilyinka Street. The church was first mentioned in the 16th-century chronicle.\nThe name of \"Red Chime\" was given to the...", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "The Church of St. Gabriel the Archangel, or Menshikov's Tower, was built at the beginning of the XVIII century by the outstanding architect I. Zarudny with the participation of Italian masters (including D. Trezini).\nA. D. Menshikov commissioned the church. Peter the Great's favorite wished to erect a church with a tower that was to be higher than Ivan the Great's bell tower. And in 1707 a six-tiered baroque tower with a golden figure of an angel on a 30-meter-high steeple rose over Moscow.\nIts full height was 81 meters, which was 3 meters higher than Ivan the Great's bell tower. A needle-like steeple was characteristic of Dutch and Danish architecture, and it was applied in Russian architecture for the first time.\nLater on, the Italian master D. Trezini used the same device when erecting the Peter and Paul Cathedral.\nI. Grabar, a prominent artist and a true lover and connoisseur of Russian art, considered Menshikov's Tower to be one of the greatest works of Russian architecture of all times.\nUnfortunately, the tower did not preserve its original appearance for long. One summer day in 1723 a terrible thunderstorm caused a fire. 
Wooden ceilings were all burnt down, and 50 bells fell, destroying all the interiors.\nAt the end of the XVIII century an unknown architect reconstructed the tower, but without the upper tier.\nHe also replaced the steeple with a sculpture. At the same time a Freemasons' lodge used the building for its meetings. The masons changed the interiors of the church, but later on the masonic symbolism was destroyed by order of Metropolitan Philaret.\nIn the 1820s the Church of St. Gabriel the Archangel was closed. After the Great Patriotic War the church was reconstructed, and nowadays a metochion of the Patriarchate of Antioch occupies the building.\nAddress: 15-a, Arkhangelsky Lane, Moscow\nUnderground: Chistye Prudy", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-2", "d_text": "All the churches are elaborately painted and decorated.\nThe cathedral and four tall kremlin churches with their silver \"blind\" domes were imitated throughout the city. This is particularly evident in the Savior-on-the-Market church and the cathedral church of the Nativity convent, both dating from the 17th century and situated near the kremlin walls. The oldest church within the town center was consecrated to St. Isidore the Blessed in 1565. They say that Ivan the Terrible had the architect executed, because his church was so much smaller than its predecessor.\nThe kremlin is flanked by two monasteries, both facing Lake Nero. To the right of the kremlin stands the Abraham monastery, founded in the 11th century and one of the oldest in Russia. Its cathedral, commissioned by Ivan the Terrible in 1553 to commemorate the conquest of Kazan, inspired numerous churches in the region, particularly in Yaroslavl.\nSpaso-Yakovlevsky Monastery, situated to the left of the Kremlin on the town's outskirts, has been venerated as the shrine of St. Dmitry of Rostov. 
Most of the monastery structures were built in the late 18th and early 19th centuries in the fine neoclassical style. There are also two 17th-century churches: the Conception of St. Anna, and the Transfiguration of Our Savior. Unlike most other churches in the town, the monastery belongs to the Russian Orthodox Church and houses a theological seminary.\n[Image captions: Cathedral of the Dormition of the Theotokos; The citadel of Rostov seen from Lake Nero; Rostov Kremlin in summer (1911); The courtyard in the kremlin]\nThe vicinity of Rostov is rich in old architecture. For example, an old wooden church (1687–1689) may be seen in Ishnya. One of the best preserved monasteries in Russia, named after the saints Boris and Gleb, is situated in Borisoglebsky, about 20 kilometers (12 mi) west of the town. The monastery was favored by Ivan the Terrible, who personally supervised the construction of the towered walls and bell tower around an even more ancient cathedral. The only addition made to the monastery after Ivan's death is a barbican church, commissioned by the metropolitan Iona Sysoyevich.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "St. Isaac’s Cathedral is named after the saint to whom it is dedicated, Saint Isaac of Dalmatia, and it is located in Sankt Petersburg, being the largest Orthodox cathedral found in this specific town.\nThe church was erected by the order of Tsar Alexander I, but the actual building process took quite some time to be initiated, mainly because the designs which were presented before the commission appointed to supervise the project were deemed unworthy. The architect to receive the job was Auguste de Montferrand, but even his design was received with immense criticism. 
Montferrand’s plan consisted of a gigantic structure with 4 identical porticos, but this was not what the commission had in mind.\nThey considered the design to be quite dull in its repetitiveness, and this definitely did not inspire grandeur, which was what they were looking to achieve through the cathedral. Even if the edifice was to be colossal in size, this did not necessarily mean that it was to be the epitome of greatness. There was quite a dispute in this regard, so the Tsar himself intervened in the matter and appointed Montferrand to supervise the construction of the cathedral.\nThus, the project was under way. However, the edifice did not see the light of day until 40 years had passed – this being the timeframe in which it was built (begun in 1818 and finished in 1858). The history of the cathedral is very interesting, the church having witnessed different political regimes, during which its appearance and scope changed. For instance, when the Soviet Union was in power, any depiction which was religious in nature was destroyed. In fact, in 1931, the building was transformed into an Antireligious Museum.\nAs a consequence of this shift, the dove sculpture – the symbol of the Holy Spirit, but also of peace and conciliation – was removed, so as to make room for Foucault’s pendulum. This device was used to demonstrate in the simplest way possible that the earth indeed rotates. The symbolism behind this? The Soviets felt the need to erase all remnants of religion, or more accurately of blind belief in something that cannot be demonstrated, and replace these with something palpable, logically explained.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-2", "d_text": "Construction lasted until 1820, with the date of completion engraved on a medallion on the southern wall of the church. 
The cathedral formed the backbone of the main city square and was primarily funded by two local merchants.\nAssumption Cathedral was built by Yaroslavl masons and painted by a native of the village of Teykovo. In order to make the church appear as luxurious as possible while still being frugal, the artist used the grisaille technique, in which a tonal gradation of color is used to imitate columns, cornices and stucco. Stories from the Bible are depicted in framed pictorials, while images of evangelists can be seen under the central dome. The altar is decorated with references to the Trinity, and Old Testament scenes are depicted on the walls and ceiling of the refectory.\nIn Assumption Cathedral, an inscription states that on August 11, 1866, the cathedral was visited by Tsarevich Alexander (the future Emperor Alexander III), to whom Archpriest John Nicholas presented an icon of his patron Saint Alexander Nevsky.\nAfter the Russian Revolution of 1917, Assumption Cathedral suffered the fate of most churches and monasteries in Russia – after being looted, it was used for secular purposes for decades. It was returned to the Orthodox community only in 1992, at which time restoration work on it began.\nSt. Nicholas Cathedral is the oldest stone building in the city. It was built in 1766, 11 years before Myshkin was granted city status.\nThe story of Alexander Petrovich Berezin, a merchant of the first guild and the city head of St. Petersburg, is closely connected with Nikolsky Cathedral. Berezin was born near Myshkin in the village of Yeremeytsevo to a poor peasant family. Circumstances led his father to pawn an icon of St. Nicholas in the Myshkin tavern. But Berezin bought back the icon more than a quarter of a century later. He was deeply impressed by the fact that his ancestral relic had been miraculously preserved and returned to his family. In memory of this event, Berezin built Nikolsky Cathedral, along with a church in St. 
Petersburg and the Church of the Ascension in Kruglitsy.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "There are two churches situated on a small stretch of Bolshoi Sampsonievsky prospect - Sampsonievsky (Sampson) Cathedral with a bell tower, and a church to Anna of Kashin.\nFirst, the Sampsonievsky cathedral.\nIt was erected in 1728-1740 according to the plan of J. Trezini and I. Lapshin, jointly with M.G. Zemtsov (in 1730).\nThe original windows in the tower are apparently decorative.\nOriginally there was a wooden church on the site of today's building, which was devoted to the Russian army's victory at Poltava (gained on June 27, 1709, St. Sampson's day). The stone building was erected later.\nSampsonievsky cathedral gave names to three streets - Bolshoi (former Karl Marx prospect) and Malyi Sampsonievsky prospects, and Sampsonievskaya street (formerly Bratstvo street).\nThe cathedral was recently restored, though scaffolding is still seen in some places.\nThere are several tablets with texts on the walls of the cathedral. Here, for example, are two of them (translated, of course):\nAppeal of the emperor Peter the Great to his forces before the battle of Poltava:\n\"Warriors! This is the hour which will decide the destiny of the Homeland. You must not think that you fight for Peter, but for the nation given to Peter, for your own kin, for the Homeland, for our Orthodox faith and the Church. You must not be embarrassed by the fame of the enemy's invincibility, the falsehood of which was refuted by your victories more than once. Let truth and God, your defender, be with you in the battle, and remember that Peter does not value his own life, but let Russia, piety, glory and welfare live forever.\"\nWelcome speech of the emperor Peter the Great to his forces after the battle of Poltava:\n\"Hello, sons of the homeland, my beloved children!\nI created you by the sweat of my works; the state cannot live without you, as the body cannot live without a soul. 
Loving God, the Orthodox faith, the homeland, glory and Me, you did not spare your lives and rushed fearlessly toward a thousand deaths. Your courageous deeds will never be forgotten by your descendants.\"", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-11", "d_text": "Petersburg Topic: A Russian Moment\nThe interior of the Maltese Chapel, Vorontsov Palace, St. Petersburg\nThe Maltese Chapel of St. John the Baptist is a former Catholic chapel built for the Knights of the Order of Malta by Giacomo Quarenghi in 1800 on the orders of Emperor Paul I. After the opening of the chapel on April 29th, the Emperor became Grand Master of the Order of Malta. The chapel is located in the Vorontsov Palace in St. Petersburg, which today houses the Suvorov Military Academy.\nBuilt in 1797-1800 in the Classicist style, the chapel served exiled French aristocrats (Catholic knights of the Maltese Order) who resided in the Russian capital at the turn of the 19th century. The chapel was added to the south wing of the palace, and could accommodate up to 1,000 people. The austere facade is decorated with a Corinthian portico; the interior boasts lavish stucco moulding and decorative paintings. In 1810, it was given to the Page Corps, and the Maltese Chapel was used as the house church for Catholic pages and foreign diplomats. In 1853, a side-chapel with a marble sepulchre of Duke Maximilian of Leuchtenberg (sculptor A. I. Terebenev) and stained-glass windows was attached to the Maltese Chapel. In 1909, an organ of the German Walker company was installed. In 1918, the church was closed, and the building was altered to accommodate a club. In the 1990s, restoration works were carried out under the supervision of architect S. V. Samusenko.\nThe Maltese Chapel was restored in 2003 for the 300th anniversary of St. Petersburg. Today, the building serves as an assembly hall of the Suvorov Military School. 
Tours and concerts are held in the chapel on Saturdays throughout the year.\nA Russian Moment No. 65 - State Russian Museum, St. Petersburg Topic: A Russian Moment\nState Russian Museum - formerly the Emperor Alexander III Russian Museum, St. Petersburg\n120 years ago, on April 3 (25), 1895, Emperor Nicholas II decreed the foundation of the State Russian Museum in St.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-1", "d_text": "The first churches were commissioned by the princes; however, after the 13th century merchants, guilds and communities began to commission cathedrals. The citizens of 13th-century Novgorod were noted for their shrewdness, diligence and prosperity, expanding from the Baltic to the White Sea. The architecture in Novgorod did not begin to flourish until the turn of the 12th century. The Novgorod Sophia cathedral was modeled after the original Saint Sophia Cathedral in Kiev; it is similar in appearance but smaller and narrower, and (in a development of North Russian architecture) onion-shaped domes replace the cupolas. Construction was supervised by workmen from Kiev, who also imported bricks. The primary building materials were fieldstone and undressed limestone blocks. It is said that the interiors were painted in frescoes, which have now vanished. The doors were made of bronze.\nThe katholikon of Yuriev Monastery was commissioned in 1119 by Prince Vsevolod Mstislavovich. The architect was known as Master Peter, one of the few architects recorded in Russia at this time. The exterior is characterized by narrow windows and double-recessed niches, which proceed in a rhythm across the façade; the interior walls reach a height of 20 metres (66 ft). Its pillars are closely spaced, emphasizing the height of the vaulted ceilings. 
The interior was covered in frescoes from the prince’s workshops, including some of the rarest Russian paintings of the time.\nThe Church of the Transfiguration of the Savior was a memorial to Ilya Muromets. During the Mongol invasion, Ilya was reputed to have saved the city; the church was built in his honor on Elijah Street in 1374. During this time the city-state of Novgorod established a separate district for the princes, subdividing the city into a series of streets where the church still stands. The church windows are more detailed, the niches deeper, and the dome (seen in larger cathedrals) is augmented by a pitched roof.\nAnother church closely resembling the Church of the Transfiguration is the Church of Saints Peter and Paul in Kozhevniki. It was constructed in 1406, and the primary difference is in the building material. The detail is focused on the west and south facades. New ornamental motifs in the brick appear at this time.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "If you walk slowly along the embankment of the Griboedov canal towards Nevsky prospect, you will soon see the counterpart of St. Peter’s Cathedral in Rome – the Kazan Cathedral.\nThe Cathedral looks fascinating due to its monumentality on the outside and its amazing beauty inside. The Kazan Cathedral was built and named in honor of the Kazan icon of the Mother of God, which is placed inside the temple and is considered to be one of the most revered icons in Russia. So a queue of people wishing to touch the shrine is a common sight here.\nThe Cathedral was created by the talented Russian architect, the former serf Andrey Voronikhin. According to a legend, he created the design of the cathedral at the will of Emperor Paul I, who said, “I want a cathedral in Saint Petersburg that will comprise a bit of St. 
Peter’s Cathedral and a bit of Santa Maria Maggiore in Rome.”\nHowever, the construction works on the site of the old Kazan church began only after the murder of the emperor. Voronikhin confronted a difficult task. Construction of the new cathedral was planned on a section of Nevsky Prospect which extended from West to East. In Orthodox churches the altar should always face East and the main facade with the entrance should face West, though Nevsky Prospect was on the North side of the future temple. Voronikhin found an elegant solution: the side facade was visually turned into the main facade by building a semicircular colonnade, which unfolded toward the main avenue of the city. A grand 96 columns in four rows! This solution not only helped to avoid violating the basic rules of the construction of an Orthodox church, but also retained the symmetry and harmony of the cathedral with Nevsky Prospect and everything that surrounded this piece of architectural art.\nAfter the defeat of Napoleon’s army by Kutuzov, military banners and keys of the French fortresses were brought here and placed on the walls of the cathedral. Nowadays most of the trophies are kept in the Historical Museum in Moscow, and the Kazan Cathedral keeps only six trophy banners and keys in twenty-six bundles beside the tomb of field marshal Mikhail Kutuzov, who was buried in the North aisle of the Cathedral in 1813.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-3", "d_text": "Visitors are attracted by its sacred interior: copies of Raphael’s Bible, more than 50 paintings depicting the Old Testament and four more depicting the New Testament, as well as ornamental compositions in the grotesque style decorating the walls. 
In the gallery of the “Loggias of Raphael,” there are two genuine works by Raphael, treasures of all the collections of the Hermitage, and the Crouching Boy of Michelangelo, another jewel of the Hermitage collections and the only work of this famous artist in Russia.\nSaints Peter and Paul Cathedral\nTsar Peter I the Great wanted to give Russia an opening on the Baltic Sea and a naval force. In 1703, he laid on Zaïatchii island in the delta of the Neva the first stone of a new fortress he called “Saint-Petersburg”. In this fortress was built St. Peter and St. Paul’s Cathedral, the first church in the city built in stone. At the top of its spire, 123 m high, stands an angel holding a cross that would become one of the most important symbols of St. Petersburg. In this cathedral are the remains of the tsars and tsarinas of the Romanov dynasty, from Peter the Great to Nicolas II. Indeed, Tsar Nicolas II and his family were shot in Siberia in 1918 and thrown into mass graves. In 1990, their bones were found, and they were transported to the cathedral and buried in the Chapel of St. Catherine of Martyrs.\nThe tombs of Tsar Peter 1st the Great and Tsarina Catherine 1st.\nSt. Isaac’s Cathedral\nBeing one of the largest religious buildings in the world, St. Isaac’s Cathedral is topped by the largest golden dome in the world. A whole host of artists, such as Karl Brioullov, Fidor Brouni, Vasily Chebouïev, Ivan Vitali, Nikolai Pimenov and Piotr Klodt, participated in the painting and sculpture works in the cathedral. The sculptures were made by the method of electroplating, which reduced the weight of the enormous bronze statues and reliefs. The paintings adorning its beautiful interior were replaced by mosaics, since changes in temperature and humidity had damaged them. 
After its consecration on May 30th, 1858, Saint Isaac’s Cathedral received the relics of St.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-1", "d_text": "There is a gravestone-monument to the three statesmen, organizers of the Volynsky plot against Biron, near the cathedral (they were arrested and put to death on June 27, 1740).\nA.P. Volynsky was a cabinet-minister, A.F. Khrushov was a councilor of the Admiralty establishment, and P.M. Yeropkin was an architect, the main architect of the Saint-Petersburg building Commission.\nThe monument was established much later, in 1885 (the sculptor is A.M. Opekushin).\nView of Sampsonievsky cathedral from the park\nOnce there were graves of Saint-Petersburg architects-constructors on the territory of this park; however, they have not survived to our times, so now this fact is marked…\n…with the monument \"to the First Constructors of Petersburg\" (the authors are Mikhail Shemyakin and Vyacheslav Bukhaev). The sign on the right says (on the left it's apparently the same but in Latin):\n\"SAINT SAMPSON THE RECEIVER OF WANDERING GAVE PEACE TO THE FIRST ARCHITECTS CONSTRUCTORS, CITIZENS OF SAINT-PETERSBURG\nPETR MIKHAILOVICH YEROPKIN(1698-1740)\nALEXANDER PHILIPPOVICH KOKORINOV(1726-1772) \"\nIf you go farther along Sampsonievsky prospect, you can see the church to Anna of Kashin (there was such a princess in ancient Russia).\nThe church was built in 1901-1902 by architect Andreev and in 1907-1909 by architect A.P. Aplaksin.\nView of the church from the yard.\nA dwelling wing, directly connected with the church, was provided for by the project.\nAt the same time I have added a number of interesting links to the section", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "
The reason for this is the decision by a private sponsor, who at the end of the 20th century granted the financial resources and the territory to rebuild this magnificent church, to sell it to the Russian Orthodox community for a hefty sum. What caused the problem was the breathtaking beauty of this architectural marvel.\nIt was the beauty of the wooden Transfiguration Church, created by Russian architects in 1756, that encouraged German entrepreneur and ethnography lover Horst Wrobel to replicate it in Germany. The copy in question was built in 1995 in Lower Saxony, near the town of Gifhorn, in Wrobel's private estate, where he set up a museum of old mills on the riverbank. The beautiful church fits in perfectly well with the picturesque landscape. Its original, the above-mentioned Transfiguration Church, which used to be located in one of the villages of the Vladimir Region (about 200 km from Moscow), burned down seemingly beyond repair in a fire caused by a lightning strike in the 19th century. In the 20th century it was rebuilt on the basis of the original designs, but this time on the territory of the wooden architecture museum in Suzdal.\nIt was in Suzdal that Horst Wrobel saw the breathtaking beauty and exquisite elegance of the Orthodox shrine. With the help of old drafts he had a copy of the church built in his estate and dedicated it to St. Nicholas the Miracle Worker. In autumn 1995 His Holiness Patriarch Alexey II, during a trip to Germany, visited Horst's museum of mills and the church on its territory. In a solemn ceremony the Wrobel family presented His Holiness Patriarch Alexey II with St. Nicholas' Church.
According to Father Superior Archpriest Gennady (Budko), the gift certificate was not of a legal but of a symbolic nature:\nThe symbolic document of 24.11.1995 points out that «50 years after the end of the Second World War this church serves as a bridge between our nations, from person to person, and from heart to heart.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "It is believed that the first wooden church on this site was built in 1620 by Prince Pozharskii. Soon after, a stone cathedral replaced it, consecrated in 1636. The church has been rebuilt several times and suffered serious damage during the Napoleonic invasion. Between 1925 and 1930 it was restored, but the cathedral was demolished in 1936. A modern building was built here from 1990 to 1993. It is an operating Orthodox church.", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "The Cathedral of the Intercession of the Mother of God on the Moat (Russian: Собор Покрова Пресвятой Богородицы на Рву), popularly known as St. Basil's Cathedral, is a cathedral of the Russian Orthodox church erected on Red Square in Moscow between 1555 and 1561. Built by order of Ivan IV of Russia to commemorate the capture of Kazan and Astrakhan, it represents the geometric center of the city and the hub of its growth since the fourteenth century. It was the tallest building in the city of Moscow until the completion of the Great Bell Tower of Ivan the Great, which took place in 1600.\nThe original building, known as the Trinity Church and later as Trinity Cathedral, consisted of eight side churches arranged around the ninth, central church of the Intercession; the tenth church was erected in 1588 over the tomb of the revered holy fool Basil the Blessed.
During the sixteenth and seventeenth centuries the cathedral, perceived as the symbol of the heavenly city on earth, was popularly known as Jerusalem and represented an allegory of the Jerusalem Temple in the annual Palm Sunday parade headed by the Patriarch of Moscow and the Tsar.\nThe design of the building, whose shape resembles 'the flames of a bonfire rising into the sky', has no analogues in Russian architecture: "It is like no other Russian building. Nothing similar can be found in the entire millennium of Byzantine tradition between the fifth and fifteenth centuries… a strangeness that astonishes by the unpredictability, complexity and dazzling bloom of detail reproduced in its design." The cathedral foreshadowed the climax of Russian national architecture in the seventeenth century, but it has never been reproduced directly.\nThe cathedral has operated as a division of the State Historical Museum since 1928, was completely secularized in 1929 and, as of 2009, is still owned by the Russian Federation. Since 1990 the cathedral has been included in the UNESCO list as a World Heritage Site, along with the Moscow Kremlin.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-1", "d_text": "Peter the Great rewarded the roofer with the lifelong right to drink for free in every tavern of the Russian Empire. Nothing further is known of the roofer's life.\nIn 1756 the Cathedral caught fire after a lightning strike, and the spire and the chimes suffered the most. In 1857-1858 the wooden constructions of the spire were replaced by metal ones, designed by the engineer D. Zhuravsky. The clock was installed only 20 years later, and every hour it played the melody of the national anthem.\nThe dominant feature of the interior is a gorgeous carved iconostasis with standing figures at the front – Peter, Paul and the four evangelists. The iconostasis was created by the architect Zarudny and the painters T. Ivanov and I. Telega.
It partly resembles a triumphal arch, since the Emperor intended the Cathedral to be a monument to the victories of the Russian army. The keys of captured cities and Swedish and Turkish banners were kept here. Nowadays most of them can be found in the Hermitage, while only copies remain in the Cathedral. Visitors to the Cathedral can climb the bell tower and enjoy the panoramic view.\nPhoto taken from kuda-spb.ru\nGrand Ducal Burial Vault\nThe Peter and Paul Cathedral served as the burial place of Imperial family members. Peter the Great is buried in the Catherine aisle of the temple. There, in summer 1998, on the 80th anniversary of the execution of Nicholas II, the remains of the Tsar, his wife Alexandra Feodorovna, their children and servants were buried in the same aisle.\nBy the end of the 19th century there was no place left in the Cathedral, so the special Grand Ducal Burial Vault was built. It is connected with the Cathedral by a passage. The last burial ceremony took place in 2006, when the remains of the last Russian Tsar's mother, Maria Feodorovna, were moved there from Denmark. Since 1990 memorial services for the Russian Emperors have taken place in the Cathedral.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-1", "d_text": "In 1812, the palace was destroyed by the French - although, according to the French version, it was burned by Russian incendiaries. The palace was restored only in 1823 under the supervision of Major-General P.S. Ushakov, director of the Smolensk Cadet Corps. In 1824, the Smolensk Cadet Corps was housed in the renovated building and was renamed the Moscow Cadet Corps in 1838. The building's interiors were again renovated by the Italian-Russian architect Joseph Bové.\nIn October 1917 the Moscow Cadet Corps mounted a fierce resistance against the Bolsheviks in Lefortovo. Colonel V.F. Rahr organized the defence of the barracks with cadets from the senior classes.
Fighting lasted for six days, until heavy artillery from the enemy forced their surrender. Some of the prisoners were subsequently shot by the Reds in the neighbouring Lefortovo barracks.\nSince 1937, the building has housed the Military Academy of Armored Forces, now the Combined Arms Academy of the Armed Forces of the Russian Federation. Given its status as a military institution, the building has generally been inaccessible to the public. In 2004, the Moscow authorities initiated negotiations with the Ministry of Defence of the Russian Federation on transferring the property rights of the Catherine Palace to the city.\nTo view previous A Russian Moment listings, please refer to the directory located on the left-hand side of this page.\nThe Church of St. Mary Magdalene is an historic Russian Orthodox church in Darmstadt, Germany, built for the Empress Alexandra Feodorovna, nee Princess of Hesse-Darmstadt. She and Emperor Nicholas II wished to have the opportunity to pray in an Orthodox church while visiting Germany, which usually occurred about once every year-and-a-half or two years.\nThe architect Leon Benois (1856-1928) created the Church of St. Mary Magdalene in the Russian revival style between 1897-1899; construction was carried out under the direct supervision of the architect Gustav Jacobi, and then his assistant - Friedrich Olleriha. The construction of the church was paid for by Emperor Nicholas II, who spent 310,000 rubles (the original estimate was 180,000 rubles) from his personal funds.\nIt was decided that the church be built of Russian stone and upon Russian soil.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-6", "d_text": "According to police statistics, Palm Sunday, the day following the flashmob, saw an oblast-wide church attendance of no more than 30,000 people, and the population of Ekaterinburg alone is almost 1.5 million, with active churchgoers accounting for a mere 2-3% of its population. 
Protesting believers have drawn attention to precisely this fact. Church attendance in the centre of town is too low to warrant a new church, whereas some densely populated residential areas boast no churches whatsoever.\nGoloborodsky denies the claim that the 66-metre-high church would tarnish the historical appearance of the pond and that of the Dynamo Stadium spit. The latter, he insists, has “already been hemmed in by skyscrapers,” so much so that it’s “now a dwarf you can scarcely see.” And the architect brushes aside concerns that the vista of the pond would be ruined, declaring that the church would only “occupy an inconsequential sliver of backwater,” and that the height of the cathedral should be measured from the upper cornice of the main structure, in which case, he says, it would come to 22 metres. Goloborodsky fails to specify how exactly the additional 40-odd metres would vanish from the view.\nWhen confronted with the assertion that the project has been instigated from the top down without even the semblance of ongoing dialogue with ordinary residents, Goloborodsky parries with a quote from the Gospel: “Many are called, but few are chosen. The city is constantly changing. 
Churches are allegedly being imposed on the populace, but no one comes out to protest against office developments."\nIn his opinion, the real cause of local people's indignation is not so much the choice of site for the cathedral as their desire "that it not be built at all." But, though anti-clerical motives on the part of certain protestors cannot be ruled out, their primary objective would nonetheless appear to be the preservation of the pond's historical appearance.\n"Something to be treasured"\nThe disputes over the church are demonstrative of two incompatible approaches to history and historical memory.\nThe proposed external appearance of the church has raised a great many questions: like the Church of the Saviour on Blood in St Petersburg, it is to be executed in a pseudo-Russian style that apes the ornamentation of St Basil's Cathedral on Red Square.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-0", "d_text": "The Convent of St. John of Rila\nThe Convent of St. John of Rila (Иоанновский монастырь) is the largest convent in St. Petersburg, Russia and the only stauropegic monastery in the region. It was established on the bank of the Karpovka River by Saint John of Kronstadt (1900) as a branch of the Sura Monastery of St. John the Baptist. The main pentacupolar church of the Twelve Apostles (1902) was built to a Neo-Byzantine design by Nikolay Nikonov. The ground floor contains the marble tomb of St. John of Kronstadt. The convent was disbanded by the Soviets in 1923. It was reopened as a branch of Pühtitsa Convent in 1991.", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-15", "d_text": "When Alexander II's authority was challenged, he turned repressive, and he vehemently opposed movements for political reform. The revolutionary "People's Will" group finally assassinated Tsar Alexander II on March 13, 1881. He was killed in the streets of St.
Petersburg by a bomb thrown by a member of the group. Ironically, on the very day he was killed, he had signed a proclamation, the so-called Loris-Melikov constitution, that would have created two legislative commissions made up of indirectly elected representatives. He was succeeded by his 36-year-old son, Alexander III, who rejected the Loris-Melikov constitution. Alexander II's assassins were arrested and hanged, and the People's Will was thoroughly suppressed. The peasant revolution advocated by the People's Will was achieved, at last, by Vladimir Lenin's Bolshevik revolutionaries in 1917.\nHistory: This marvelous Russian-style church of Our Savior on the Spilled Blood (Церковь Спаса на Крови, Tserkovʹ Spasa na Krovi) was built on the spot where Emperor Alexander II was assassinated in March 1881. The Church is prominently situated along the Griboyedov Canal; paved roads run along both sides of the canal. The church was officially called the Resurrection of Christ Church. The construction of the church was almost entirely funded by the Imperial family and thousands of private donors. Construction began in 1883 during the reign of Alexander III. The church was dedicated as a memorial to his father, Alexander II. The construction was completed during the reign of Nicholas II in 1907.\nThe church was closed for services in the 1930s, when the Bolsheviks went on an offensive against religion and destroyed churches all over the country. In the aftermath of the Russian Revolution, the church was ransacked and looted, badly damaging its interior. The Soviet government closed the church in the early 1930s. During the Second World War, when many people were starving due to the Siege of Leningrad by Nazi German military forces, the church was used as a temporary graveyard for those who died in combat and from starvation and illness. The church suffered significant damage.
After the war, it was used as a warehouse for vegetables, leading to the sardonic name of Saviour on Potatoes.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "The united architectural ensemble includes: the fortification system - fortress walls, curtains, bastions and ravelins (1706-1740, the architect Domenico Trezzini, the engineer Burkhard Christoph von Münnich); the ceremonial Peter Gate (1717-1718, the architect Trezzini), decorated with the bas-relief "The Overthrow of Simon Magus by the Apostle Peter" by Conrad Osner; the Boathouse (1762-1766, the architect Alexander Vist), where a copy of Peter's boat (the Grandfather of the Russian Fleet) is stored (its original is now in the Central Military and Navy Museum); the Mint Works (1798-1806, the architect Antonio Porto); the Engineer's House (1748-1749), the Commandant's House (1743-1746, the engineer de Marine) and others.\nIn the centre of the ensemble there is the Peter and Paul Cathedral (1712-1733, the architect Trezzini). Its bell-tower served as the city clock tower. It became a symbol of the establishment of the new capital of Russia on these seaside lands. Topped by a tall golden spire, the bell-tower remained the highest (122.5 m) architectural construction in Petersburg. The main decoration of the Cathedral interior is the carved gilded iconostasis in the Baroque style, made by Moscow woodcarvers to the design of Trezzini and Ivan Zarudny.\nThe cathedral was first used as a necropolis of the Romanov House. It holds the remains of the Russian Emperors from Peter I to Nikolai II and members of their families (excluding Peter II and Ioann VI). The Grand Ducal Burial Vault (1896-1908, the architects David Grimm, Anton Tomishko, Leonty Benois) is where 13 members of the Emperor's family were buried before the revolution.
In 1992 Grand Duke Vladimir Kirillovich, who died in emigration, was buried in the Burial Vault; in 1995 the remains of his parents - Kirill Vladimirovich and Victoria Feodorovna - were brought there from Coburg (Germany).\nThe Commandants' Cemetery is by the East Wall of the Peter and Paul Fortress; 19 of the 32 commandants of the fortress were buried there.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-45", "d_text": "Nikon personally designed his new residence at the New Jerusalem Monastery, which was dominated by a rotunda-like cathedral, the first of its type in Russia.\nIn 1712, Peter I of Russia moved the capital from Moscow to Saint Petersburg, which he planned to design in the Dutch style usually called Petrine Baroque. Its major monuments include the Peter and Paul Cathedral, the Menshikov Palace, and the Menshikov Tower.\nCatherine the Great patronized neoclassical architects invited from Scotland and Italy. Some of the most representative buildings from her reign are the Alexander Palace by Giacomo Quarenghi and the Trinity Cathedral of the Alexander Nevsky Lavra by Ivan Starov. During Catherine's reign, the Russian Gothic Revival style was developed by Vasily Bazhenov and Matvei Kazakov in Moscow.\nAlexander I favored the Empire style, as evidenced by the Kazan Cathedral, the Admiralty, the Bolshoi Theatre, Saint Isaac's Cathedral, and the Narva Triumphal Gates. Later, the nineteenth century saw a revival of traditional Russian architecture. The redevelopment of the center of Moscow saw the Neo-Byzantine construction of the Great Kremlin Palace (1838-1849), the Kremlin Armoury (1844-1851) and the Cathedral of Christ the Saviour (1832-1883), all designed by Konstantin Ton.\nStalinist architecture put a premium on conservative monumentalism. In the 1930s, there was rapid urbanization as a result of Stalin's policies.
There was an international competition to build the Palace of the Soviets in Moscow in that decade.\nAfter 1945, the focus was on rebuilding the buildings destroyed in World War II but also on erecting new ones: the Seven Sisters in Moscow are seven high-rise buildings built at symbolic points in the city. The building of Moscow University (1948-1953) by Lev Rudnev and associates is particularly notable for its use of space. Another notable example is the Exhibition Centre in Moscow, which was built for the second All-Union Agricultural Exhibition (VSKhV) in 1954 and featured a series of pavilions, each decorated in the style of the theme it represents. Other famous examples are the stations of the Moscow and Saint Petersburg Metros built during the 1940s and 1950s, which are world famous for their extravagant designs and vivid decorations.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-1", "d_text": "The Hierarch emphasized that the media provide a mixed picture of what is happening in the world, with an emphasis on the negative — disasters, diseases, catastrophes. “The problem of suffering is a deep mystery of God, the mystery of life. Only through prayer can we touch this mystery; only through prayer can we understand many things and look beyond the visible horizon… God forbid we repeat past mistakes,” said the Archbishop.\nVyacheslav Zarenkov, founder and president of the Etalon group of companies and president of the Creative World foundation, said that the monument had been planned for installation in 2014, by which time it had been completed by the sculptor, but only now has this been carried out. “Tsarskoye Selo is a special place: from here the soldiers went off to the First World War,” he recalled. The dead and wounded were brought here; they recovered not only in hospitals but also in monasteries.
Not only the priests who were at the front leading educational work among the soldiers, but also the monks who treated the wounded, contributed to the victory.”\nVyacheslav Zarenkov said that after the revolution the memory of the First World War was deliberately erased, because many mistakes had been made, and the people in power did not want them to be seen. “I hope that this monument symbolizes the union of faith, clergy and laity, and recalls the courage of the soldiers,” said V. Zarenkov, adding that his grandfather died of gas poisoning in the war.\nA greeting from Sergei Naryshkin, Chairman of the Russian Historical Society and Director of the Foreign Intelligence Service, was read out.\nSemyon Mikhailovsky, rector of the Ilya Repin St. Petersburg Academy of Arts, noted that the place chosen for the monument, in front of the reconstructed church, was a happy one. Vladimir Gorevoy, sculptor and academician of the Russian Academy of Arts, added that with this monument we pay tribute to those who went to war and to those who prayed for the fighting men.\nThe ceremony was attended by representatives of the government of St. Petersburg, the Legislative Assembly of the city, and the administration of the Pushkin district.\nFlowers were laid at the monument, and the honor guard marched past in a solemn march.\n1 August (19 July in the Church calendar) is the day of Russia’s entry into the First World War in 1914. Tsarskoye Selo was the site of the quartering of regiments of the Imperial Guard.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-4", "d_text": "The lower part was a real church with an altar where priests served the liturgy. Don't forget to go down to the Crypt, which is even more impressive: take time finding your way to the basement, where there are more chapels and rooms to visit. The ceiling is covered with frescoes of Christ, the Virgin Mary and angels, and toward the front there is what looks like a small church decorated with frescoes and statues, even with its own spire and dome.
The height of its inner space is 79 meters. Every square inch of the inside has been hand painted with icons and artwork. Right on the axis of the main entrance there is a unique iconostasis in the form of a white marble octagonal chapel crowned by a gilded dome. The main shrines of the Temple are the icon of the Nativity brought by His Holiness Patriarch Alexy from Bethlehem, six original restored canvases by Vereshchagin and the authentic throne of His Holiness Patriarch Tikhon in the main altar. The silence (even when it is very crowded) and the open spaces are relaxing and invite you to walk around this big place. The general atmosphere inside is one of quiet awe and respect.\nBefore you turn to the east side of the cathedral and walk across the Patriarchal Bridge, try to visit the Cathedral of Christ the Saviour Gardens. The Monument to Alexander II (the Liberator Tsar) is located to the left of the Cathedral of Christ the Saviour in the garden area. Alexander II is honored here because he helped lay the foundation for the original Cathedral (destroyed in 1931 by Soviet leader Joseph Stalin) and was Tsar of Russia during its construction. Completed in 2005 and partly inspired by a destroyed imperial monument from 1898, the statue itself was paid for by private donations, with the rest of the monument mainly financed by public funding. On June 2, 2004 Moscow Mayor Yuri Luzhkov signed a decree on the erection of a new monument to the emperor Alexander II in Moscow. The memorial was designed by professor Alexander Rukavishnikov, a member of the Russian Academy of Arts and national sculptor of Russia. At first, the monument was supposed to be set up by the Kremlin's Kutafya Tower; however, a new place was found for it around Christ the Savior Cathedral.
DO NOT MISS THE WONDERFUL GARDENS AROUND THE MONUMENT!", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "During the night of July 16/17, 1918, the last Russian Emperor Nikolai II and his family were executed in the house that belonged to the engineer Nikolai Ipatiev in Yekaterinburg. In 1977, that house was destroyed on the initiative of the KGB. In 2000, Nikolai II and his family were declared saints by the Russian Orthodox Church, and on the place of execution the Church on Blood was erected to commemorate the last Emperor's family. There is a tradition in Russia of building churches and cathedrals to commemorate the violent deaths of tsars. Besides the Church on Blood in Yekaterinburg, other such churches have been built in St. Petersburg and Uglich. In Yekaterinburg, this church has become a place of pilgrimage for Orthodox Christians and a tourist attraction visited by many people, including artists and politicians.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "St. George the Victorious cathedral\nProject implementation period: 1999\nSt. George the Victorious cathedral was constructed in honor of the heroism of Russian warriors in the Great Patriotic War. Capsules with soil from hero cities and areas where the main battles of the Great Patriotic War took place were placed in the basement of the cathedral. The appearance of the church combines the stylistic approach of traditional old Russian church architecture with recognizable elements of the Saint Petersburg architectural school. The building occupies an area of 180 m² and stands on a 1.5 m high artificial hill.
A small dome crowns the 36-meter-high hipped broach spire; the outer walls of the cathedral are decorated with mosaic panels illustrating warrior saints.\nThe cathedral is the first stone church built in the city since 1917.\nThis cathedral symbolizes our grateful acknowledgement of the past and our architectural message to the future. As builders, we always remember that we live and work in a city that overcame the years of war, having saved its great architecture. For us it is important not only to carry on the traditions of the architects of the past, but to make Saint-Petersburg a city that is worthy of the new time.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-2", "d_text": "Narva Triumphal Arch is located at Ploshchad' Stacheck, 1, Saint Petersburg, Russia, 190020\n3. St Nicholas Naval Cathedral\nSt Nicholas Naval Cathedral is a Baroque Orthodox cathedral and is associated with the Russian Navy.\nIt is in the shape of a cross with Corinthian columns and is an example of Elizabethan Baroque.\nSt Nicholas Naval Cathedral is located at Nikol'skaya Ploshchad', 1/3, Saint Petersburg, Russia, 190068\n4. St Nicholas Naval Cathedral Bell Tower\nThe Bell Tower, a few steps across from the Cathedral, is freestanding and four stories tall.\nThe Tower has a gilded spire on top that took three years to complete, from 1755 to 1758.\nSt Nicholas Naval Cathedral Bell Tower is located at the same address as the Cathedral.\n5. Winter Palace\nThe Winter Palace was the official residence of the Russian monarchs from 1732-1917.\nAlexander II was the last tsar to have his main residence here. After his assassination, the Palace was deemed too hard to secure due to its large size.
While this wasn't the location of Alexander II's assassination, there had been an attempt on his life a year prior that caused a lot of damage to the Palace.\nWinter Palace is located at Palace Embankment, 32, Saint Petersburg, Russia, 190000\n6. St Isaac's Cathedral\nSt Isaac's Cathedral is dedicated to Saint Isaac of Dalmatia, a patron saint of Peter the Great. It is the largest Russian Orthodox cathedral in St Petersburg as well as the largest Orthodox basilica and the fourth largest cathedral in the world!\nSt Isaac's Cathedral is located at St Isaac's Square, 4, Saint Petersburg, Russia, 190000\n7. Monument to Nicholas I\nThe Monument to Nicholas I was erected in 1859 and is a Neo-Baroque style bronze equestrian monument. What I find interesting is that it was Europe's first equestrian statue with only two support points; the only one that preceded it was the Andrew Jackson equestrian monument in Washington DC!\nMonument to Nicholas I is located at St Isaac's Square, 11, Saint Petersburg, Russia, 190000\n8.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "The first heart, belonging to King Ferdinand IV, was placed in the Augustinian Church on 10 July 1654, according to his wishes. The last heart, belonging to Archduke Franz Karl of Austria, was placed in the crypt on 8 March 1878. Here rest the hearts of nine emperors, eight empresses, one king, one queen, 14 archdukes, 14 archduchesses and two dukes. The bones are interred in the Imperial Vault of the Kapuzinerkirche, the internal organs in the catacombs of the Stephansdom:\nTwo organs have added to the prestige of the church in the music world. Not only did Franz Schubert conduct his Mass here, but Anton Bruckner's Mass in F minor also had its world premiere here:\nFrom this church you can continue to a very special attraction in Vienna - the Prunksaal.
It is situated a few steps from the church in Josefsplatz.\nTip 2: Cathedral of Christ the Saviour, ulitsa Volkhonka, 15. Nearest Metro: Kropotkinskaya (Кропоткинская) (Line 1, RED, Sokolnicheskaya Line). If you are coming from the city center, take the exit on the right.\nDuration: 1-2 hours.\nThe Cathedral of Christ the Saviour (Храм Христа Спасителя, Khram Khrista Spasitelya) was originally built in the 19th century in commemoration of the Russian army's victory over Napoleon. When Napoleon Bonaparte retreated from Moscow, Tsar Alexander I signed a manifesto on 25 December 1812 declaring his intention to build a cathedral in honor of Christ the Savior, as a memorial to the sacrifices of the Russian people. The cathedral took many decades to build and did not emerge from its scaffolding until 1860. The cathedral was consecrated on 26 May 1883, the day before Alexander III was crowned. The original church was the scene of the 1882 world premiere of the famous 1812 Overture by Tchaikovsky. It was destroyed in 1931 on Stalin's personal order. The demolition was supposed to make way for a colossal Palace of the Soviets that was never built.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-1", "d_text": "Remedial work conducted in 1990 halted its tilting.\nPeter and Paul Cathedral, of course, is the most valuable architectural monument, one of the spiritual symbols of Kazan. Built on an elevated site, beautiful and majestic, the cathedral is distinguished by its decoration. The composition of the temple and bell tower is made in the style of the so-called Russian, or «Naryshkinsky», Baroque, widespread in Russia at the end of the XVII and the first half of the XVIII century. This composition is also found in the Kazan region (the Pyatnitskaya church in Kazan and the church in Potaniha in the Vysokogorsky area; both churches were built with the funds of the merchant Mihlyaev).
The decor gives the cathedral its unique appearance: an abundance of facade details and their bright coloring, which has been preserved to our time. Unfortunately, the names of the builders of the cathedral remain unknown. Many art critics point to the similarity of the decor with the ornamental decoration of temples built in the first half of the XVIII century in Ukraine. Perhaps the church really was built by Little Russians, but the Peter and Paul Cathedral is not a simple mechanical copy of a style. Much of its appearance is unique and goes beyond any stylistic predilections. Great credit for the fact that Kazan has kept this magnificent temple belongs not only to the builders of the cathedral and I.A. Mihlyaev, but also to the later cathedral elders and clergy.\nThe Kazan Icon of the Mother of God is a miraculous icon, recovered in 1579 from the ashes.\nThe place where the icon would be found was indicated in a dream to a little girl, Matrona, by the Virgin herself. Numerous copies of the Kazan Icon of Our Lady can be seen today in various corners of the earth, and all of them are recognized by the Orthodox Church as miraculous. Although the originally recovered icon was stolen and burned, in 2005 its best-known copy, made in the XVIII century and long kept by Pope John Paul II, was returned to Kazan.\nOnce upon a time, near sacred Mount Sinai, which towers over the peninsula of the same name, washed by the warm waters of the Red Sea, there arose a monastic settlement called Raifa …\nIt is difficult to say unequivocally what the name literally meant.", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "The Church of All Saints, or Church on the Blood, is a very important place for Russian Orthodox people and a significant historical site located in Yekaterinburg. This is the place where the Romanov Tsar's family was killed in July 1918; at that time, there was no church there, and the Romanovs' execution is associated with Ipatiev House.
A curious fact: after World War II, the former Ipatiev house served as an anti-religion museum.\nThe church was built in 2003, and it soon became one of the primary sights of Yekaterinburg; it attracts thousands of pilgrims from Russia and worldwide. In front of the church, there is a monument to the Romanov family, with spiral stairs symbolizing the staircase in Ipatiev's house and recalling the last minutes of the Tsar's family. The interior and exterior look modern, but the overall style is typical of Orthodox architecture.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "The Domaine of Villarceaux is a French château, water garden and park located in the commune of Chaussy. The gardens are located on the site of a medieval castle from the 11th century, built to protect France from the British, who at that time occupied Normandy, the neighboring province. Many vestiges of the medieval fortifications remain in the park. A manor house and French water garden were built there in the 17th century. In the 18th century a château in the style of Louis XV was built on a rocky hill overlooking the water garden.\nOne famous resident in the 17th century was Ninon de Lenclos, the author, courtesan, and patron of the arts. Another was Françoise d'Aubigné, the future Madame de Maintenon and future wife of King Louis XIV, who lived there after the death of her first husband, the poet Paul Scarron, at the invitation of her friends the Montchevreuil, cousins of the Marquis of Villarceaux. The Marquis fell in love with her and commissioned a full-length portrait of her, nude, which greatly embarrassed her. The portrait can be seen today in the dining room of the house. The house also contains a collection of 18th-century furniture.\nThe domaine is part of the regional park of Vexin, and is used for concerts and cultural events. 
The gardens are classified among the Notable Gardens of France.\nThe gardens contain a rare 18th-century ornamental feature called a vertugadin, modelled after the hoop skirts of the 18th century, surrounded by statues brought from Italy.\nThe Church of the Savior on Spilled Blood is one of the main sights of St. Petersburg. The church was built on the site where Tsar Alexander II was assassinated and was dedicated in his memory. Construction began in 1883 under Alexander III, as a memorial to his father, Alexander II. Work progressed slowly and was finally completed during the reign of Nicholas II in 1907. Funding was provided by the Imperial family with the support of many private donors.\nArchitecturally, the Cathedral differs from St. Petersburg's other structures. The city's architecture is predominantly Baroque and Neoclassical, but the Savior on Blood harks back to medieval Russian architecture in the spirit of romantic nationalism.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "The Cathedral is the head church of the Ekaterinodar and Kuban eparchy. It is one of the largest Russian churches, an architectural monument of our city, and the centre of Kuban’s spiritual life.\nThe decision to build the Cathedral was taken by the public of Ekaterinodar on the 17th of October, 1889, a year after the terrifying Tsar’s train accident, which the august family miraculously survived. Shortly before, the emperor Alexander III had visited Ekaterinodar with his spouse and sons. In honor of their salvation, the decision was taken to build a majestic Cathedral with seven altars.\nThe building of the region’s largest cathedral took 14 years. The project was designed by a talented local architect, Ivan Vasilyevich Malgherb. The foundation was laid on the 23rd of April, 1900 on the square of St. Catherine, where at that time stood a decrepit wooden church of St. Catherine, built in 1814. 
The ceremonial consecration of the main altar took place on the 24th of March, 1914.\nHard years came for the cathedral parish after the October Revolution; the Cathedral went through “renovationism” and in 1922 was completely plundered under the pretext of aid for the starving people of Povolzhye.\nBy a miracle it escaped the fate of the Cathedral of St. Alexander Nevski. It was being prepared to be blown up, allegedly for its bricks. Salvation came from its creator: the architect Malgherb succeeded in convincing the Church Destruction Committee that the destruction would be unreasonable. On the 26th of July, 1934, the Eparchy administration announced its dissolution due to the small number of its members. The Cathedral was turned into a storehouse.\nDivine services resumed only after the liberation of Krasnodar from the German-fascist invaders in 1944. A restoration was carried out for the celebration of the millennium of the baptism of Rus.\nIn 2014 the Cathedral is expected to observe its 100th anniversary.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-4", "d_text": "Architecturally, Holy Trinity Monastery includes all major types of temples of its time: refectory (the Forty Martyrs shrine), traditional cubic (Holy Trinity Cathedral) and crosswise (temple of St. Peter and St. Paul). The architectural dominant of the monastery, the Cathedral of the Trinity (1715), is built in the style of Ukrainian Baroque, somewhat unexpected in Siberia. It is characterized by three domes, the southern location of the main facade, and the octagonal “garlic” shape of the domes. The influence of Ukrainian architecture on Siberian Baroque, noted by many researchers, is mainly explained by the large-scale deportations and migration of the Ukrainian population to Siberia in the XVII-XVIII centuries.\nRestoration works are still ongoing in the Holy Trinity Monastery. 
Together with the temples, the vicar's building and a monastery church, which were badly damaged during the Soviet period of Russian history, are being restored. Unfortunately, part of the monastery is occupied by municipal services. At the same time, the monastery is a favorite place of city residents and tourists. Every year, thousands of travelers arrive here to enjoy the magnificent views of the churches and take part in religious services.\nSpasskaya Church is one of the most beautiful temples in Tyumen, as well as one of its major architectural monuments. The first church was wooden, and it burnt down in a fire. Construction of the stone church began in 1794 and ended in 1819. The church was built in the Siberian Baroque style. Once, there was an icon of Christ Not Made by Hands here that helped stop cattle plague and cure the most dangerous human diseases. It was made specifically for this church, hence the name Spasskaya (Of the Saviour). The icon is now lost. Spasskaya church is the only one in the province of Tobolsk crowned with 13 crosses.\nDuring his visit to Tyumen, Emperor Alexander II went first of all to Spasskaya Church and was pleasantly surprised by the luxury of its interior decoration. In the interior of the summer church, located on the first floor, there are fragments of murals made by Peter Belkov, a citizen of Tobolsk and master of a goldsmith workshop.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "The Cathedral of Christ the Redeemer (Cathedral of the Nativity of Christ) in Moscow is a cathedral of the Russian Orthodox Church near the Kremlin, on the left bank of the Moskva River, on the site previously known as Chertolye. The existing building is a 1990s reconstruction of the exterior of the temple of the same name, created in the XIX century. 
On the walls of the temple were inscribed the names of the officers of the Russian Army who died in the war of 1812 and in other military campaigns of that time. Construction under the architect Konstantin Ton lasted almost 44 years: the church was laid on September 23, 1839, and consecrated on May 26, 1883. On December 5, 1931, the church building was destroyed. It was rebuilt in the same place in 1994-1997.\nThe complexity is based on the number of polygons in the model. More complex models render more slowly in Google Earth. You can find more information about this in our knowledge base.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-3", "d_text": "Its palaces and parks were created under Empresses Elizabeth and Catherine the Great between 1744 and 1796. The main landmark is the Catherine Palace designed by Rastrelli, with the famous Amber Room inside the Palace.\nLunch at a local restaurant.\nOn our way back to St. Petersburg we will stop by an impressive memorial dedicated to the struggle and hardship of the residents living in the city during WWII. The Monument to the Heroic Defenders of Leningrad and St. Petersburg Siege Memorial was built to be the main focal point of Ploshchad Pobedy (Victory Square) in the early 1970s. A special bank account was opened to take donations, and many people took part in the construction, including volunteers and famous artists of the Soviet Union such as Mikhail Anikushin, Sergey Speransky, and Valentin Kamensky, all of whom participated in the defense of Leningrad.\nOvernight in St. Petersburg (Breakfast, Lunch)\nPETER-&-PAUL FORTRESS - CHURCH OF THE SAVIOR ON SPILLED BLOOD - ST. ISAAC CATHEDRAL\nOur tour of St. Petersburg continues today with three important visits: Peter and Paul Fortress, the Church of the Savior on Spilled Blood and St. Isaac Cathedral.\nPeter-and-Paul Fortress was built in 1704 to defend the city from naval attacks; however, it never served its primary purpose. 
Instead, its historical mission turned out to be rather gruesome: thousands of workers died while building the fortress, and many political prisoners, including Peter's own son Alexei, were kept and tortured in its cells. The tombs of the Romanov family are also in its Cathedral.\nThe stunning Church of the Savior on Spilled Blood was built on the spot where Emperor Alexander II was assassinated in March 1881. The construction of the church was almost entirely funded by the Imperial family and thousands of private donors. Both the interior and exterior of the church are decorated with incredibly detailed mosaics, designed and created by the most prominent Russian artists of the day.\nOur next visit is to St. Isaac Cathedral, one of the world’s largest and most splendid cathedrals.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-5", "d_text": "It is also a typical “Italian staircase”: water flows in a wide sheet over broad steps arranged gradually and paved with marble, tufa and shells, then falls into a large basin. Two grottoes, on the upper and lower sides, with interiors decorated as “natural caves”, are arranged on the slope between the stairs.\nThe Marble Palace\nCatherine II the Great asked Antonio Rinaldi to build the Marble Palace for her favorite, Count Grigory Orlov. It was he who had helped her mount the coup against her husband Tsar Peter III in 1762 and take the throne to become the Empress Catherine II the Great. But Catherine II was not Russian, but a German princess. She was chosen by Elizabeth I as a wife for her nephew, the future Tsar Peter III.\nSt Petersburg Subway\nA common form of transport in St. Petersburg, the metro runs at a great depth, between 20 and 30 m, due to the geology of the city's subsoil. It transports more than 3 million passengers daily and has more than 60 stations. Some stations run under the river Neva and its branches. 
Decorated with marble and granite, these stations are illuminated by more than 700 lighting elements: chandeliers, candelabra, bronze sconces, and fixtures of crystal and decorative glass. The first metro line in St. Petersburg, with a length of 10.8 km, was opened in 1955.\nThe Church of the Savior “On the Spilled Blood” includes the most beautiful mosaics\nRenowned for its exotic appearance, the Church of the Savior “on the Spilled Blood” draws many tourists. It is built on the spot where Tsar Alexander II was wounded in the terrorist attack that cost him his life, hence the name Church of the Savior “on the Spilled Blood.” Majolica sconces and inlaid mosaic panels adorn its exterior facades. Its interior includes the largest collection of mosaics in Orthodox architecture. At the entrance, there are illustrations of scenes from the Old Testament and the twelve great feasts. Images in the center depict acts of the earthly life of Christ, Christ in Glory and the Eucharist. The figures of the “pillars of the Church”: the Apostles, Prophets, Prelates, Saints and Martyrs, decorate the spandrels.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-5", "d_text": "During the first quarter of this century, Greek Revival prevailed, lasting up to the middle of the century. Reconstruction of the cities using massive design plans and technical advances was the main priority during that period. In 1918, Alexey Shchusev and Ivan Zholtovsky founded the Mossovet Architectural Workshop, where planning of the reconstruction of Moscow as the new Soviet capital took place. After World War II, the focus was on reconstructing the destroyed buildings and building new ones. In 1945, Stalin changed the look of many post-war cities. After his death in 1953, social and political changes took place in the country, which brought an end to Stalinist architecture. 
As a result, the buildings became simple and square-shaped.\nMonument to Minin and Pozharsky\nDesigner: Ivan Martos\nIt is the bronze statue that stands in front of St. Basil’s Cathedral. It commemorates Prince Dmitry Pozharsky and Kuzma Minin, who put an end to the Time of Troubles in Russia.\nUspensky Cathedral in Omsk\nArchitect: Ernest Würrich (Original Structure)\nThis cathedral was built in 1891, was shut down after the Russian Revolution, and was blown up in 1935. It was rebuilt in the 21st century.\nHospice of Count Sheremetev\nArchitect: Elizvoy Nazarov\nThis hospice was first built as a charity shelter for the poor by Count Nikolai Petrovich Sheremetev, husband of the Russian theater actress Praskovya. After her demise, the hospice was reconstructed to serve as a monument in her memory.\nAdmiralty Building at St. Petersburg\nYear: 1806 – 1823\nDesigner: Andreyan Zakharov\nCurrently serving as the headquarters of the Russian Navy, this building was reconstructed in the 19th century. The spire has a weather-vane at its top.\nSt. Isaac’s Cathedral\nYear: 1818 – 1858\nArchitect: Auguste de Montferrand\nSt. Isaac’s Cathedral was the city of St Petersburg’s main church and was built between 1818 and 1858. It has the capacity to accommodate 14,000 worshipers at a time.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-7", "d_text": "According to Roizman, it “will be the third cathedral of this kind, and the challenge facing those who’ve concocted this idea is to demonstrate that Ekaterinburg is the country’s third capital.”\nGoloborodsky, for his part, goes so far as to argue that, despite the fact that Ekaterinburg was founded under Peter the Great in 1723, the church oughtn’t to be constructed in the Petrine style. “The church’s relationship with Peter,” he stresses, “was complicated. Peter abolished the position of patriarch and subordinated the church to a secularly administered synod. 
He humiliated the church, therefore there can be no reference to the Petrine era in this new church’s architecture.” It was the seventeenth century, according to Goloborodsky, that witnessed the “crystallisation of Russia’s national self-identity,” with its architecture “evincing the joyful character of the era that saw the Romanov dynasty accede to the throne.”\nThe proposed church, with its architectural allusions to the pre-Petrine period, would thus be used to propagate an artificially enforced myth of the nation, while the real history of the city — its industrial past, its role in the development of Soviet constructivism — would be destroyed.\nUntil recently, Ekaterinburg’s constructivist legacy was scarcely even perceived as a legacy in the popular consciousness. The situation, however, has begun to change over the last couple of years, in large part thanks to the endeavours of a particular group of researchers. Larissa Piskunova, Igor Yankov and Lyudmila Starostova — all members of the City Pond Committee — are anthropologists who have been studying the architectural past of their city through the prism of its inhabitants’ histories.\nWith the help of excursions, exhibitions and lectures, this group seeks to explain constructivism’s aesthetic codes to as many citizens of Ekaterinburg as possible, and to draw their attention to the uniqueness of the city’s buildings. Ekaterinburg, notes Piskunova, is remarkable for the fact that entire constructivist complexes have been erected here, and not simply individual buildings, as in Moscow.", "score": 8.086131989696522, "rank": 99}]} {"qid": 14, "question_text": "Where can I find practice tests to prepare for the PMP certification exam?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "You Can Pass The PMP® Exam Easily!\nReading the PMBOK® Guide and learning project management theory as part of your exam preparation only goes so far. 
At some point you are going to have to ask yourself: \"Am I ready to take the PMP® Exam?\" Here is the secret to answering this question and passing the exam on your first try: practice, practice, practice!\nThe PM Exam Simulator™ offers you the opportunity to take 9 computer-based sample PMP® Exams before heading out for the real thing. Be ready to succeed on exam day!\nGet an \"insider's view\" of the actual PMP® Exam and prepare with the PM Exam Simulator™. With first-hand experience of the environment and questions you’ll encounter when you take the exam, you can raise your score, increase your confidence and be fully prepared to succeed.\nDon't get caught off guard on exam day. Prepare and practice in a realistic environment first.", "score": 51.98390618332573, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "Instead of learning in a classroom, the online instructor-led training enables students to learn from home. Classes are scheduled on weekends so that students do not need to take time off from work to study. All the sessions are recorded so that students can refer to the lesson videos to clear doubts or to refresh their understanding of the topics.\nPMP® Self Study Training\nThe Whizlabs PMP® Self Study Training package provides 60 contact hours for students, with 13 full-length PMP® training videos available online. There are 1000 practice questions included in the package (in addition to the resources in the PMP® Exam Simulator package). Each question comes with a detailed explanation for the correct answer. At the end of each practice exam, Whizlabs provides a detailed report on the student's strengths and weaknesses so that they know which areas to work on further.\nPMP® Exam Simulator\nThe PMP® Exam Simulator package provides 4 full-length PMP® practice exams (800 questions in total) plus an additional 200 questions based on the knowledge areas of the PMP® exam. 
Upon successful completion of the exams, 43 Contact Hours for the PMP® Certification are awarded, which satisfies the 35 Contact Hours required for the PMP® application.\nIn addition to the practice exams, there are many more resources included in the package:\n- detailed study notes covering the entire syllabus of the PMP® exam\n- flash cards for self-study\n- PMP® exam tips and tricks\nWhizlabs PMP® Training Student Review\nWhizlabs student Chiranjib Halder had never tried online classes before attending the Whizlabs PMP® training online. He found the online class very convenient and useful in helping him prepare for the PMP® Certification exam. He especially liked the full recording of all lessons, as he could review them as needed. Another student, Kareema Shaik, attended the PMP® online training and commented that the trainer's delivery was very effective in helping her understand the concepts, with lots of good examples. Bob Stanley thanked Whizlabs PMP® training for helping him pass the PMP® Certification exam with the Whizlabs PMP® Exam Simulator. 
He used it as his only additional resource for preparation.", "score": 51.22882288906655, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "*** Built for the revised PMBOK 5th exam version per 31 July 2013 ***\nABOUT PMI® AND PMP®:\nThe PMP® credential offered by PMI® is the most important and most widely accepted project management certification. It is a well-respected certification across industries and verticals.\nABOUT THIS APPLICATION:\nThis application helps you prepare for the PMP® (Project Management Professional) examination offered by PMI®.\nThis exam needs preparation - and this application helps candidates prepare perfectly!\nThis application will help improve your preparation. Through the exam, identify gaps in every knowledge area and process group.\nTest your preparation level using this application, which offers 200 complex questions in an exam-like simulation based on the PMBOK® Guide NEW 5th Edition.\nThis PMP/PMI exam prep application provides you with training through our exam covering all 13 chapters, 10 Knowledge Areas and all 5 Process Groups.\n- High-quality questions\n- We take pride in what we do and strive to give our best - all questions and answers go through multiple rounds of review to ensure you get the best product\n- Tired of firing up your laptop and studying? 
Use this application on your mobile anytime, any place - be it while commuting to the office or at the airport - and make use of every minute at your disposal\n- One-time install - does not need internet connectivity - everything is available at your fingertips\n- Explanations for each question\n- Intuitive, easy-to-use interface.\nWe are an exam preparation organization aiming to provide high-quality material catering to today's needs.\nWe provide on-the-go apps helping you achieve your goals.\nAll of our test material is prepared exclusively for eXamScripts by subject matter experts, thereby maintaining the quality of our exams.\nWe value your feedback - customer satisfaction is our top priority.\nQuestions? Comments? Suggestions?\nYou are welcome to share any suggestions or improvements at firstname.lastname@example.org\nGood luck, and pass your PMP exam with flying colours!\nPMI, PMP and PMBOK® Guide are marks of PMI, the Project Management Institute, registered in the USA and in other countries.", "score": 47.953234609164824, "rank": 3}, {"document_id": "doc-::chunk-1", "d_text": "Good exam prep\n\"Good exam preparation - I relied on the premium vce dump when I was getting ready for my PMP exam. Questions were real, so passing the exam was easier. Some questions, like setting goals and the evaluation process, are a little confusing, so it's good to know them before the exam.\"\nDownload Free PMP Practice Test Questions VCE Files\nTitle: Project Management Professional\nPMI PMP Certification Exam Dumps & Practice Test Questions\nPrepare with top-notch PMI PMP certification practice test questions and answers, VCE exam dumps, study guide, and video training course from ExamCollection. 
All PMI PMP certification exam dumps & practice test questions and answers are uploaded by users who have passed the exam themselves and formatted them into VCE file format.\nProject Management Professional, also known as PMP, is an industry-acknowledged certification for project managers issued by the Project Management Institute (PMI). The candidates for this sought-after certificate need to demonstrate their competency in leading projects. Earning this certification requires meeting certain training and experience requirements as well as passing one qualifying exam.\nPMI PMP: Target Audience and Prerequisites\nThe PMI PMP certification is an ideal option for candidates who have more experience than the usual manager and want to progress in their careers. There are various requirements for this certificate. For starters, applicants should have a four-year degree, 36 months of experience in leading projects, and 35 hours of PM training or education. Alternatively, these individuals need a high school diploma or an associate’s degree. They should also have at least 60 months of experience in leading projects as well as a minimum of 35 hours of PM education or training, or hold the CAPM certification. Once you make sure that you meet the eligibility requirements, you can proceed with submitting the audit application confirming your background. You can send your completed audit form by regular postal mail or express courier service to the address indicated on the official webpage. If your application is approved, you will be able to schedule your exam appointment.\nPMI PMP: Exam Details and Topics\nThe PMP certification exam consists of 200 multiple-choice questions: 175 scored and 25 unscored. Students will have 230 minutes to complete all of them.
There are a lot of good sites where you can download free testing materials, such as Oliver Lehmann, PreparePM, PM Exam Simulator, and more. Use your judgment when considering whether a site is credible, and also make sure that the PMBOK version of the exam on any materials you consider downloading matches the version of the PMBOK associated with the test you're taking. Additionally, there are a lot of great discussion forums dedicated to supporting people pursuing PMP certification, such as PM Zilla. Be careful with these sites as well, as they can be a great source of information, but can also make you unnecessarily afraid that the exam is more difficult than it really is.\nEveryone is different, but the approach that worked for me was as follows:\n- Start with the required contact hours/education. At the end of that, you'll likely take a practice test that will have the same time constraints and question types that you'll see on the actual test. The score you get on that test will serve as a good gauge of how well the information you reviewed in the formal training sank in. Don't panic if you don't do well on that test, and don't get too excited if you do really well. Temper your expectations and understand that either way you still have quite a bit of work to do to ensure that you're ready.\n- Find a study partner. If you can go through the training with that person it will be even better, but even if you only sync up with them after the training, it will really help to have someone else to study with going forward. Push each other, encourage each other, share tips, and share good study materials that you've encountered.\n- Make time to study regularly. I studied roughly three to four hours every other night for two months to make sure I was ready. 
A lot of that time was spent taking practice tests, but another large part of that time was spent studying information associated with all of the questions I got wrong.\n- Keep a log of all the questions that you miss in your practice tests, and then go through the PMBOK or your study book to really dig into what should have been the correct answer for that question. I wrote all of my notes out by hand and then typed them up, and that really helped get me ready.\n- Take a lot of practice tests.", "score": 46.43784211425541, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "The smart Trick of Best Pmp Exam Simulator 2020 That No One is Discussing\nIn this video we will see how to open a VCE file on a computer (PC) for free using BlueStacks; you can also open a VCE file on Android using A+VCE Player. Bluestacks …\nTaking the time to perform a practice exam is one of the greatest ways to get ready for the real thing. If you would like to ace your standardized examination, then you want to be certain you’ve got as many practice exams as you can. One of the best ways to do that is via simulation.\nYou may be wondering what an examination simulator is. It is a computer program which lets you take practice tests. These tests are made with specific settings that simulate the real exam format. As an example, you may run the examination on Microsoft Windows or Mac OS X. In this manner, when you take the actual examination, you will be able to pass with flying colors.\nIn addition to passing the test, you’ll also learn how to get ready for it in the easiest way possible. By practicing ahead, you’ll have the ability to obtain an edge over other students who’ve yet to take these exams. In this manner, you can raise your probability of obtaining a fantastic grade as well.\nWith many different software packages available today, you should not have any trouble finding an exam simulator to suit your needs. 
A number of the more popular ones include Quicken Live, Exam Confidence, and Student Assistance Software. Based on your specific needs, there’s a package out there which can help you pass your exams more quickly.\nThe most essential point is to do an exam simulation on a regular basis. Put aside some time every day to work through training questions. Do not just go through them in a vacuum. Be sure that you’re reviewing everything and confirming that you’ve covered the content you need to know.\nNow, do not make the mistake of thinking you could study well independently. It’s not likely to work as well. You have to get paired with somebody else in order to actually absorb everything which you have to learn. The person you’re paired with needs to be as experienced in taking examinations as you are. Otherwise, it is likely to take you much more time to figure out what exactly is happening. By working with someone, you’ll get a better chance at getting the maximum marks possible.", "score": 45.46245674672893, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "- Nine Complete Exams\n- 1,800 PMP Exam Sample Questions\n- Updated to current PMP Exam Content\n- Questions Developed by the Velopi Team\n- Advanced PMI Exam Strategies\n- Sample Exam Score Sheet\n- Access to our Online Discussion Forums\n- Weekly Exam Tips Newsletter\nBecause exams can be up to four hours long, “match fitness” is essential. To this end, Velopi has created an Exam Simulation tool where students can attempt full, simulated exams. By doing these exams, they will experience the lapses in concentration that are inevitable during such a long event. Thus, they know when they will need to take breaks during the real thing.\nUse the Velopi Exam Simulator to prepare for your Project Management Professional (PMP®) exam in a realistic, online environment. 
Learn effective test-taking strategies, manage your exam time effectively, gain confidence with each exam you take, and ultimately reduce your study time.", "score": 44.337643954281816, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "PMP® Exam Questions - The Complete Guide\nTo ensure your PMP® exam success, you need to know the ins and outs of A Guide to the Project Management Body of Knowledge (PMBOK® Guide) and PMP exam content, and take as many PMP practice exams as possible.\nIn this article, I describe the most common types of PMP questions, provide you with sample PMP questions and give you links to where you can get more free PMP exam questions.\nTopics discussed in this article are:\n- PMP Exam Question Format and Types\n- Formula-Based PMP Exam Questions\n- Situational PMP Exam Questions\n- Knowledge-Based PMP Exam Questions\n- Watch this Video with a Knowledge-Based Sample PMP Exam Question\n- Interpretational PMP Exam Questions\n- Technique-Based PMP Exam Questions\n- My Top 10 Recommended Web Sites for Free PMP Exam Questions\n- The Next Step for PMP Certification Exam Prep\n- Learn More about PMP Exam Questions\n- Conclusion and Recommendation: Use a PMP Exam Simulator", "score": 42.39700982447571, "rank": 8}, {"document_id": "doc-::chunk-3", "d_text": "If you are a PMI® member, you receive a free PDF copy, which you can download from the PMI website after logging in. Click here for more info about The PMBOK® Guide...\nExam Prep Book:\nYou also need a separate PMP® exam prep book. You can select any of the popular books available in the market. It serves as your personal reference to look things up and have concepts explained more fully than they might be in the PMBOK® Guide. Click here to see our recommended books...\nYou also need access to an online exam simulator, and you should regularly take sample PMP® exams. 
Click here for a 3-day trial of our recommended PMP® Exam Simulator...\nIf you want to download The PM StudyCoach videos directly to your android tablet or phone then you need to purchase a podcasting app. iPhone/iPad users can use iTunes, which is free.\nYou Also Receive the Following Bonus Materials\nBonus Item #1: 210 Sample PMP® Exam Questions\nOne of the best ways for you to make sure you are ready for the PMP® Exam is to answer as many practice PMP® exam questions as possible. Each Weekly Workbook of The PM StudyCoach contains at least 15 self-assessment questions, which have been hand-picked from www.pm-exam-simulator.com.\nThe weekly workbook not only contains the questions, but also has answers and explanations. As the coaching sessions follow the PMBOK® Guide chapters, you can test your understanding of concepts for a range of PMP® exam topics.\nWith the PM StudyCoach, you have 210 opportunities to test yourself. For free!\nBonus Item #2: Learn From The Experts - Email Course\nIn this 10-part email course we introduce you to our PMP Exam Experts. They are all former students of ours who talk about their PMP Exam experience. You will see how they structured their studies and what techniques helped them pass the exam. Learn from their experience and achievement, and adapt your own approach for maximum success.", "score": 40.155770367978114, "rank": 9}, {"document_id": "doc-::chunk-3", "d_text": "This is a self-paced online course that is delivered in compliance with the PMP content outline and satisfies the 35-hour training requirement for the Project Management Professional certification.\nRegardless of which tool you use to prepare for the PMP certification exam, it is recommended that you study “A Guide to the Project Management Body of Knowledge (PMBOK Guide)”.\nPractice tests are very helpful in identifying the gaps in your project management knowledge. 
You should take as many of them as possible before attempting the certification exam, as this will give you the opportunity to get familiar with the real question format and timeframe.\nA good study group can be pretty helpful. Search out local meet-ups, and if you don’t find any, form one. There are several benefits to being part of such a group. It helps you get advice in the areas that you are struggling with, and when you help someone your confidence grows. It helps you motivate each other and stay on course. The biggest advantage is that it forces you to study regularly, and you should make the preparation activity a part of your routine.\nPMI PMP: Career Prospects and Salary Potential\nThe process of becoming a Project Management Professional is designed to help you plan projects and coordinate several activities to successfully complete each project with minimal barriers. The PMI PMP certification has become an industry standard and is the most exclusive certificate in the project management domain. It is valid across all industries and is recognized worldwide, which shows that you have the education, competency, and experience to successfully lead projects. The certification can significantly increase your earning potential. Its holders can expect an average of $107,289 per annum, according to PayScale.\nExamCollection provides complete prep materials in VCE file format, including PMI PMP certification exam dumps, practice test questions and answers, a video training course and a study guide, which help exam candidates pass their exams quickly.
The PMI PMP certification exam dumps, practice test questions and accurate answers in vce format are updated fast, verified by industry experts and taken from the latest pool of questions.\nPMI PMP Video Courses", "score": 39.336631471509655, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "The PMP certification is appropriate for anyone who is interested in obtaining PMI's PMP Certification. At PM Preview, we deliver an Effective and Results-Oriented Study Methodology.\nThe PMstudy online courses can be done from anywhere and at any time; the only requirement is having internet access. The study methodology is tailored to suit the requirements of working professionals and ensures that students master the concepts required for the PMP exam.\nThe PMstudy course is divided into 12 chapters covering the 10 PMI Knowledge Areas, and each chapter has 45-90 review questions (a total of 800+ practice questions).", "score": 38.76052650909394, "rank": 11}, {"document_id": "doc-::chunk-3", "d_text": "- I also attempted the 100Q from PMPForSure and found the questions the closest to the actual PMP exam. (Don't mind the bad English on the PMPForSure questions)\nFinally, based on the above 7 steps, I can give you the 10 best tips that I find you can follow to pass the PMP exam on your first attempt.\n10 Best Tips To Pass The PMP Exam On First Try\n- Make sure to read PMBOK and Rita Mulcahy at least twice – once in detail, once just skimming through.\n- Attempt all the individual chapter tests in Rita Mulcahy's book + all the individual chapter tests on Rita Mulcahy's Fast Track CD.\n- Enroll for as many question banks + mock tests as you can, and try to take as many as you can early on.
This will help you identify gaps in your learning and knowledge so you can focus better on your weak areas.\n- Make sure you review ALL your answers – correct and incorrect ones – to solidify your understanding of the concepts.\n- For ITTOs, I had created 9 charts on A4 size papers and had pinned them onto my bedroom wall. Every time I would walk past it, I would stop for 15 minutes and work on 3 charts. This helps if you have a photographic memory.\n- Make a habit of creating dump sheets. This will help you a lot to quickly review information instead of skimming through long texts.\n- If you have a smart phone, download PMP related applications to keep you engaged while on the go. I found it quite effective for keeping my mind active.\n- Join one of the online PMP forums to share your concerns and experiences and gain access to many resources of PMP related information. One such interactive forum is PMZilla. It will help you interact with similar PMP aspirants, share your thoughts, express your queries and get more knowledge to help you prepare better.\n- A day before the PMP exam, relax, listen to some music, take a walk. You are not at war and this exam is not the last one you will take in your life! So chill out!\n- On the day of the PMP exam, be prepared to create a dump sheet of the math formulas + processes across the 9 knowledge areas / 5 process groups.", "score": 37.41365611567739, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "PMP® Exam Success Series: Bootcamp Manual with Exam Sim Application\nThe PMP® Exam Success Series: Bootcamp Manual with Exam Sim Application is the same PMP® book students received in the PMP® bootcamp organized by Crosswind Learning. This PMP® exam prep covers all the knowledge areas for the PMP® exam as well as all five project management process groups.
It is intended to be read and studied alongside the PMBOK® Guide published by the Project Management Institute (PMI®) for the actual PMP® Certification examination.\nMajor features of the PMP® Bootcamp Manual include:\n- Full coverage of the examination syllabus for the PMP® examination; students studying this PMP® exam prep book together with the PMBOK® Guide will be able to get ready for the actual PMP® exam\n- The Crosswind PMP® book provides more in-depth explanations and examples to further illustrate the concepts discussed in the PMBOK® Guide, so that students will be able to comprehend and memorize the concepts required for the PMP® exam more readily\n- Mindmaps for each knowledge area are added to help students remember the concepts and knowledge taught in each chapter more readily in a visual way\n- All the PMP® formulas are explained clearly with lots of examples so that students will be able to solve calculation questions in the PMP® exam\n- Memorization keys are provided for the PMP® formulas to help students remember and apply the formulas.
These keys can also be used as last-minute revision aids.\n- The book includes 560 practice questions for students to practice with and familiarize themselves with the language and types of questions that would appear on the real PMP® exam\n- An exam simulation software package is also included which provides an additional 700 practice questions to be answered on a simulator that looks and feels close to the real computer-based PMP® exam interface\n- Online access to a variety of PMP® learning materials is also included; students will be able to make full use of the latest learning materials to make their PMP® preparation faster and more fruitful\nThe PMP® book is published by Crosswind Learning, which is a major provider of education and training courses for PMP® aspirants.", "score": 35.544206452936905, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "After passing these exams, you will be fully prepared for what it is like to take the PMP Certification Exam. Why Choose the Course? · This course is designed around the official Exam Guide for the PMP exam. · Covers questions from all the areas you need to pass the exam on the first try! · Saves you time and money. After practicing these PMP tests and scoring an 85% or higher on them, you will be ready to PASS on the first attempt and avoid costly rescheduling fees. What will students learn in your course? Practical Project Management Professional (PMP) exam-like scenarios with 4 full practice sets and 100 questions. Are there any course requirements or prerequisites?
You need to know the Project Management Professional (PMP) course curriculum · Should have basic IT knowledge · Have a strong desire to pass your Project Management Professional (PMP) exam on the first attempt. Target Students: Anyone who wants to prepare for the Project Management Professional (PMP) exam and is confident enough to crack their Project Management Professional (PMP) exam", "score": 34.94506240939572, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "100% Real PMI PMP Certification Exams Questions & Answers, Accurate & Verified By IT Experts\nInstant Download, Free Fast Updates, 99.6% Pass Rate.\nProject Management Professional\nIncludes 857 Questions & Answers\nPMP Product Reviews\n"I passed my PMP exam yesterday and am happy to share my experience. I used the premium vce file for exam preparation, and it was amazingly useful. I have years of project management experience and at first thought I would not need any additional learning, but the exam is challenging. It is based on PMBOK, which is plain theory and concepts developed by the PMI. So you may be good at your job, but fail the exam because you have messed up their terminology about stages of project planning... this is why I came to this website. The questions were valid, and so was the file overall. I'm positively surprised!\n"I found the PMP exam really overwhelming, maybe because I downloaded this VCE file only a few days before my exam date. I tried to memorize as much as possible, but was very nervous in the testing center. I passed, mainly because the vce file was valid and I knew answers to most of the questions. There were a lot of questions on project planning stages and methodologies!!! Lesson learned: be sure to study well, even if you have the file. But I passed, and this is what matters.\n"The file had around 80% of real PMP questions, mostly based on PMBOK and other PMI publications.
PMI questions are tricky and sometimes focus on small details and terminology aspects, and this is where the vce file has really helped me. It also contained useful info on KPIs and project evaluation methodology and approaches. Some quantitative analysis too. Overall, very useful and a great experience throughout.\nVce is a great idea!\n"PMI has set up a long and painful process for the PMP certification, with lots of stages and paperwork to submit. By the time I got to my exam, I felt like I had forgotten all I ever read. This is why I purchased this braindump - to brush up my knowledge. It was a great idea, since, surprisingly, most questions were real, and I knew the answers when I got to my exam. Lots of project evaluation questions and the typical PMI stuff. Memorize the PMBOK or download your braindump!", "score": 34.54276669615368, "rank": 15}, {"document_id": "doc-::chunk-1", "d_text": "4 ) Then I did a lot of questions ( over 2000 ). As everybody says, try DIFFERENT SOURCES, FIND TOUGH QUESTIONS.\nHere are the results I got for all mocks:\nPMSTUDY 1 71%\nPMSTUDY 2 69%\nExam Central 79%\nChristopher Scordo 7 80%\nChristopher Scordo 8 84%\nChristopher Scordo 9 76%\nChristopher Scordo 10 74%\nChristopher Scordo 11 76%\nChristopher Scordo 12 80%\nChristopher Scordo 13 76%\nChristopher Scordo 14 70%\nChristopher Scordo 15 80%\nChristopher Scordo 16 82%\nChristopher Scordo 17 70%\npm-exam-simulator 1 30 q 48%\npm-exam-simulator 2 30 q 53%\npm-exam-simulator 3 30 q 80%\npm-exam-simulator - 80 questions 64%\nOliver Lehmann 75" 76%\nOliver Lehmann 175" 73%\nHead First Free - 200Q 84%\nPmzilla Tough 30 questions 53%\nSimplilearn Free Test - 200Q 64.5%\nPMPForSure Free Test - 100Q 68\nPMZilla Set 1 - 30q 60%\nPMZilla Set 2- 30q 70%\nPMZilla Set 13- 50q 68%\nPmstudy Exam 4 77%\nI would say the best mocks are:\nPMZILLA ( buy those questions, it's a good price and good questions )\nAnd you should finish with these. Please don't start with those 3 exams; it is not a good idea, because
they're a great way to see whether you are ready for the exam.\nDo these when you have at least done a couple of 200-question mock exams.\nBut I suggest you do more than that.\nALSO, you should REALLY DIG into EACH wrong answer, and also those that you hesitated on. DIG !!! until you have mastered them all !!\nDon't just memorize them; understand them, under the hood !!\n5 ) I re-read the RITA BOOK.", "score": 33.57720906578854, "rank": 16}, {"document_id": "doc-::chunk-5", "d_text": "- At the end of the course you will have a full practice exam of 170 questions, timed at 3 hours 30 minutes as per the real exam; you can go through this exam multiple times, and use it to get an idea before you go for other exam simulators.\nI recommend this course on Udemy. It has always been my first recommendation for all students asking me about the best resources to get the 30 contact hours for the PMI-RMP exam: very cost-effective, as you can take it for a low price of $19.99 through links on this website only, with lifetime access so you can watch the course videos whenever you want, and at the end you will have the certificate of completion required to apply for the exam on pmi.org.\nSimplilearn offers a Self-Paced learning offer in addition to case studies. The classes are conducted by a PMI-RMP® certified trainer with more than 15 years of work and training experience, and recordings of each session you attend are provided for your future reference. The course is based on the PMBOK Guide 5th edition to help aspirants be well prepared for the new PMI-RMP® exam.
The course can really help PMI-RMP® aspirants succeed in the PMI-RMP® Exam, as over 5,173+ PMI-RMP® students have made use of the Self-Paced learning course for their PMI-RMP® Exam preparation; at the moment I am writing this article, the course has over 1,200+ reviews with an overall rating of 4.0 out of 5.0.\nOnce you purchase the course, you will have 180 days of access to high quality, self-paced learning content designed by industry experts. Simplilearn Instructor-led Learning Pass Training comes with a 100% money back guarantee. Simplilearn uses top learning methodologies to equip learners with the knowledge and confidence to pass the PMI-RMP exam on the first attempt. If you do not pass the PMI-RMP exam at the first attempt, Simplilearn will refund the course price to you.\nTo ensure your success, we strongly recommend that course participants take the PMI-RMP exam within a week of the course completion date—or a maximum of 45 days from the completion of the online training. This way, the course materials will be fresh in your mind.", "score": 33.38180575199199, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "At Beyond Training our focus is to enhance the skill set of an individual through rich content and interesting training material. Our team consists of highly qualified professionals with excellence in their respective fields. We provide offline training on Project Management and Digital Marketing. We help organizations to implement emerging ICT tools in their business.\nPassPMP® is our recent initiative which focuses on helping PMP® aspirants improve their knowledge and prepare for the PMP® examination. PassPMP® is equipped with a question bank developed by experienced PMP® trainers.\nAre you passionate about PMP® and love to share your knowledge? If so, then you've come to the right place!!\nI am a PMP® certified professional. Thanks to PassPMP®.
The mock tests were very helpful and very efficient in judging my preparation.\nGreat platform and very well sorted analysis. The mock tests consist of good-level questions and cover most of the areas very well. I would recommend this website for PMP® certification aspirants.\nThanks to PassPMP®. This website helped me a lot while preparing for the PMP exam.", "score": 32.43237069600225, "rank": 18}, {"document_id": "doc-::chunk-1", "d_text": "Simply view our schedule of virtual and live classes to see what's currently being offered and to easily sign up for whatever coursework you're interested in most.\nAlso, don't forget to visit our shop to find a range of books and tools, such as flashcards and exam simulations, that can make mastering this material even easier, especially when you're prepping for a tough certification exam.\nSee Why Our Customers Love Us\nA must have for PMP (and beyond).\nWhile I was reading this book, I felt like I was talking to my best friend, someone who is very talented in this field. A friend who speaks your tone, knows where you are going to make a mistake, and reminds you about a possible mistake that everyone makes. It is one of the best books I ever studied. If I had known this book earlier, I would have passed the PMP in my first attempt.\nI have several colleagues that have been through Rita's training and highly recommended it, and they all went on to get their certification.\nJust got my PMP certification. RMC's exam prep system was very well designed and helped me prepare well for the exam! The simulation or practice exam pattern is similar to the original PMP exam. The training, material and helpful tips they have provided are simply amazing.", "score": 32.23611059820197, "rank": 19}, {"document_id": "doc-::chunk-2", "d_text": "… Remember Flash Cards. … Participate in Study Groups and Discussion Forums.\nIs Rita's book enough to pass the PMP?\nRita's book is the best prep book for the PMP exam, as recommended by the majority of PMP certified professionals.
It must be used alongside the PMBOK. Rita's book alone is not sufficient to pass the PMP exam. Rita's mock exams are not as high quality when compared with other online mock exams.\nCan you pass the PMP in 30 days?\nPreparing for the PMP exam will take daily dedication to studying and understanding the material. But remember! You can do anything for 30 days. What follows are key steps, processes and resources that — along with your dedication — will allow you to prepare for and pass the PMP exam in 30 days (or less).\nWhat is the PMP pass rate?\n60.6%.\nHistory of the PMP exam scores: the passing score was at one point increased dramatically to 80.6% (141/175). Within 60 days, however, PMI revised the score from 80.6% to 60.6%, as a result of the considerable decrease in the number of candidates.\nHow can I get free PDUs?\nEarn 60 Free PDUs for PMP® Renewal: Install a podcast app on your phone or tablet. Subscribe to The Free PM Podcast in your app. Listen to the 15 latest free episodes immediately. Automatically receive all future free episodes. Document your learning in The PDU Logfile. Claim your PDUs.\nCan I pass the PMP in a week?\nThe PMP exam is not difficult, but you need practice, and before practice you need to clear up all the concepts. Even someone who is already a Project Manager has to clear up a few concepts per the PMI standard. The course itself is so vast that it's difficult to cover in a week unless you're a robot.\nIs the PMP exam really that hard?\nThe PMP Certification exam is indeed a difficult one to crack. Not because of the knowledge and vast curriculum one has to cover, but because it doesn't only test your knowledge, but also your ability to apply that knowledge in different practical situations.\nIs the PMP better than an MBA?\nMBA programs are designed to create managers.
An MBA can be fairly generalized, seldom focused on a particular industry or functional area, which is its greatest strength.", "score": 31.760351354307772, "rank": 20}, {"document_id": "doc-::chunk-1", "d_text": "In the world of project management, there is a clear separation between certified PMPs and those who simply work in project management. PMI even cites that certified PMPs can make 20% more than their uncertified peers.\nThe time you spend now to understand the structure of the exam and the associated requirements of certification will ease your nerves about test taking. Easing these nerves will allow you to focus your attention on studying the content – and ultimately pass the exam.\nSign up for Early Access to Magoosh PMP Prep!", "score": 31.243515251403544, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "I passed the PMP exam on 15 Jan 2015.....\nSubmitted by macisu on Fri, 01/16/2015 - 02:08\nI passed the PMP exam on 15 Jan 2015\nPreparation Time: 6 months.\n1) PMBOKGuideFifthEd - Read at least 3 times\n2) Rita Mulcahy - 8ed - 2013 + Fast Track - Read at least 3 times\nFull length Practice Tests taken:\n2) Andy Crowe-200Q\n3) Simplilearn (4 test) - 200Quest. each one\n4) PMStudy - 1 Free\n5) PMStudy - 3 Paid\n6) PMP Exam - Oliver Lehman - Mobile version\n7) PMP EXAM PREP - Mobile version\n8) PrepCast from Cornelius EXAM SIMULATOR - http://exam.pm-exam-simulator.com/ - in my case it was very useful\nAverage Score 75-85%\nApart from the above practice tests, I had also gone through Rita Mulcahy's book.\nI practiced over 3000 questions during the four months prior to the exam.\nI went back and reviewed each incorrect question carefully.\nmin. 50 questions per day\nFor PMP aspirants:\n- be positive\n- I did not take a bootcamp\n- Most questions on the exam were very close to the ones I practiced.\n- I used Richard Kraneis's method of "Treat each question like a jump ball.
Read it, take the time to understand it and answer it, and move on."\n- don't memorize ITTOs, it is a waste of time.\n- MAX 2 direct questions about ITTOs\n- tons of questions about changes\n- 2 or 3 questions about Tuckman.\n- 3 or 4 questions related to the project charter\n- a question about control limits\n- questions about quality, related to: plan, quality assurance.\n- a lot of questions about risk\n- questions about CPI and SPI\n- questions about contracts: choosing the best alternative given the features they provide....\n- questions about the closing procedure.\n- 3 or 4 questions about the Critical Path showing a diagram. Find total float, free float\n- questions to calculate PERT. They provide optimistic, pessimistic and most likely.\n- 2 questions about communication channels. very easy.", "score": 31.044567617121583, "rank": 22}, {"document_id": "doc-::chunk-3", "d_text": "After you feel like you've covered all of the material in-depth, there's nothing better you can do than start taking practice tests. Given that the required score to pass is roughly 60%, you want to be consistently scoring between 70% and 75% before sitting for the exam to ensure that you're ready and will pass. Given that the cost of the test is roughly $500 each time you take it, you don't want to go back too many times.\nThis article is accurate and true to the best of the author's knowledge. Content is for informational or entertainment purposes only and does not substitute for personal counsel or professional advice in business, financial, legal, or technical matters.\n© 2016 Max Dalton\nMax Dalton (author) from Greater St. Louis, Missouri on September 04, 2017:\nYou're welcome! I hope it helped!
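Several items in the exam-experience list above (PERT, CPI and SPI, communication channels) are formula-based questions. Below is a minimal sketch of the standard formulas; the numbers are illustrative only and not taken from any real exam.

```python
# Standard PMP formula types mentioned in the exam-experience list above.
# All numbers below are illustrative only.

def pert_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Three-point (PERT) estimate: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def comm_channels(people: int) -> int:
    """Communication channels among n people: n(n-1)/2."""
    return people * (people - 1) // 2

def cpi(ev: float, ac: float) -> float:
    """Cost Performance Index: EV / AC (above 1 means under budget)."""
    return ev / ac

def spi(ev: float, pv: float) -> float:
    """Schedule Performance Index: EV / PV (above 1 means ahead of schedule)."""
    return ev / pv

print(pert_estimate(4, 6, 11))  # (4 + 4*6 + 11) / 6 = 6.5
print(comm_channels(10))        # 10 * 9 / 2 = 45
print(cpi(ev=500, ac=625))      # 0.8  -> over budget
print(spi(ev=500, pv=400))      # 1.25 -> ahead of schedule
```

Total float and free float questions come from the critical-path forward/backward pass the poster mentions (total float = LS - ES, or LF - EF); the helpers above cover the pure plug-in-the-numbers questions.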
Did you successfully get your PMP?\nKavya from India on July 25, 2017:\nThanks for sharing.", "score": 31.00687526455361, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "PMP® Preparation Course (English Language)\n- The successful PMP® training to prepare for the exam – intensive, vivid and lively\nImagine: You get international recognition as an expert in project management. You can prove and show your professionalism, experience and expertise. You belong to a global network of over 1,000,000 certified PM experts. Isn't that great?\nDo you want to become a PMP and boost your career? Do you want to acquire the know-how in globally recognized project management methods, suitable for all kinds of projects? In parallel to your challenging job as a project manager? Then you have found the right training course. We prepare you effectively for the exam.\nExclusive access to more than 300 original questions & answers provided by PMI\nInnovative training design and learning environment (3+2 days)\nNo tedious registration processes thanks to the all-inclusive package\nQuality-tested content - we are an "Authorized Training Partner" of PMI\nWith our innovative modular seminar design, we ensure optimal learning. We focus on all topics relevant to the exam and give you tips for passing the exam on the first attempt. We take care of annoying registration and ordering processes for you with our all-inclusive package. Don't worry, get it done now and get started. Becoming a PMP® has never been easier.
Increase your market value and invest in your future now.\nBenefit from these advantages:\n- You will get exclusive access to more than 300 original questions & answers provided by PMI\n- You will learn which aspects are particularly important to the Project Management Institute (PMI) in the exam and receive valuable tips.\n- You will receive a printed copy of the PMBOK® Guide, the official PMI PMP Prep Course handouts, as well as many useful learning aids and, if desired, access to the PMP® online test tool\n- You will get a very good insight into A Guide to the Project Management Body of Knowledge (PMBOK® Guide) in 40 training hours, as well as detailed explanations of all topics relevant to the exam.\n- You benefit from the modular structure of the training in 3+2 days. This innovative seminar design allows you to increase your retention of the large amount of information.\n- After your PMP® certification, you will receive from us a voucher for 400 euros, which you can use for follow-up seminars.", "score": 30.65700747593499, "rank": 24}, {"document_id": "doc-::chunk-3", "d_text": "Moreover, we provide guaranteed results and you will be able to clear your exam on the first attempt using our products.\nWe are also providing you easy-to-use practice exam questions that will help you prepare for the real exam. If you are not using our practice exam questions for the preparation of the PMI PgMP Certification test, then you won't be able to succeed in the real exam. We highly recommend that you go through our PgMP practice exam questions, which will help you prepare for the real exam. You should also use the pdf questions in different modes so you can get a better idea of the real exam scenario. It will also allow you to do self-assessment so you can manage things in the perfect way.
If you are practicing the exam dumps multiple times, then you will be able to clear the real exam on your first attempt.\nIf you are still in confusion about whether to use our PgMP dumps pdf or not, then we are also providing a free demo for the PMI Certification PgMP practice exam questions. You can check out the PgMP pdf dumps to get a better idea of how it can help you in the preparation of the real exam. If you are using this exam questions, then you will be able to get a better idea of how you can manage your preparation in a proper way. Make sure that you are using PgMP free demo and understanding the worth of this specific product.\nAt Dishut, all of your information is secured using high-level security. You won’t face any problems regarding identity issues and payment problems. We secured all of our systems using McAfee security and you will be able to feel safe using our products. More importantly, we are providing 24/7 support to all of our customers and we will resolve your issues with 24 hours. If you are facing any problems while using our PgMP pdf dumps for the preparation of PMI PMI Certification PgMP exam, then you can always consult our technical support team and they will provide you complete support you need.", "score": 30.55835605584762, "rank": 25}, {"document_id": "doc-::chunk-3", "d_text": "Tutorial Points - Mock Exam\n200 objective type sample questions and their answers are given just below to them. 
This exam is just to give you an idea of type of questions which may be asked in PMP Certification Exams.\nBooks – Below is a list of recommended reading to help you pass the PMP exam.\nA Guide to the Project Management Body of Knowledge (PMBOK Guide), 5th Edition, Project Management Institute, 2013\nPMP Exam Prep, Eighth Edition: Rita's Course in a Book for Passing the PMP Exam, Rita Mulchay, 8th Edition, 2013\nPMP Exam Prep: Accelerated Learning to Pass PMI's PMP Exam, Rita's Course in a Book for Passing the PMP Exam, Rita Mulcahy, 8th Edition, 2013\nThe PMI states that on average, successful PMP candidates will spend 35 hours or more to prepare, so make sure you leave yourself plenty of preparation time before you take the exam.\n10. Take the exam\nAfter you have familiarised yourself with the PMI terminology, having a good understanding of the PMBOK, successfully passed any mock exams you should be ready to take the real exam.\nThe exam consists of 200 multiple choice questions and you have about 4 hours to complete it. Students are given 4 hours to answer 200 exam questions.\nThe exam is broken down in the below areas:\nMonitoring and Controlling: 21%\nProfessional and Social Responsibility: 9%\nFor further information, please call us on +91 98 2007 1798 or email us at email@example.com.", "score": 30.08996914603847, "rank": 26}, {"document_id": "doc-::chunk-2", "d_text": "If you are scoring above 85% in a few reliable PMP Question Banks or PMP exam simulators, book your exam date and time slot at the Pearson VUE center close to your place. If you want to reschedule the exam within one month of exam date, you may have to pay a penalty fee.\n2nd Step: Preparation for the PMP Certification Exam\nEligibility- Input requirements\nYour 35-hour project management education program or the PMP Boot camp or the PMP online certification course are the crucial factors for success. 
As part of the PMP certification process, this PMP Exam Prep step not only gives you the needed eligibility but also provides you with the ammunition to shoot your target.\nWatch the Top 3 Benefits of PMP Online Training\nPMP Certification Process\nThis program equips you with knowledge of the PMBOK, or the Project Management Body of Knowledge, which forms the foundation for the exam. Therefore, your PMP Certification process should have a well-defined study plan for the PMBOK and any PMP Study Guides you get.\nAs mentioned earlier, use the Study-Test-Revise-Retest technique to prepare. The PMBOK has 47 processes, and each process has certain inputs and tools & techniques which produce the desired output. These are spread across the 10 Project Management Knowledge Areas, and you should understand the interdependencies of these processes to answer the questions.\nAs part of the PMP Certification process, you also need to be aware of not only the content but also how to approach the questions in the exam.\n3rd Step: PMP Certification Exam\nThe last step in the PMP Certification process is taking the exam. You need to answer 180 questions within ~4 hours during the exam. The questions test your understanding, application, and interpretation of the project management framework and project data. Hence, your PMP Certification Process ensures that you are geared up for this.\nBy now, you should have understood how to manage your time and approach the questions for the exam. The PMP Certification process ensures that you do not miss out on any critical aspects of the preparation, such as time management and mental makeup for the exam, in addition to the PMBOK and its application to projects.
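The exam-pacing arithmetic behind this step can be sketched quickly. A minimal example, assuming the 180-question, ~4-hour format described here (the 200-question figure and the two domain weights come from an earlier chunk, which lists only those two weights):

```python
# Back-of-the-envelope pacing math for the exam formats quoted in this document.
TOTAL_MINUTES = 4 * 60  # roughly 4 hours of exam time

for total_questions in (180, 200):  # newer and older formats cited above
    seconds_each = TOTAL_MINUTES * 60 / total_questions
    print(f"{total_questions} questions -> {seconds_each:.0f} seconds per question")

# Converting a domain weight into an approximate question count (200-question format):
weights = {"Monitoring and Controlling": 0.21,
           "Professional and Social Responsibility": 0.09}
for domain, weight in weights.items():
    print(f"{domain}: ~{round(weight * 200)} questions")
```

At 72-80 seconds per question, the common advice of spending no more than about a minute on any single question leaves a buffer for marking items and reviewing them later.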
As you are aware, you should not spend more than a minute on any single question; in case of doubt, mark it for review.\nYou Are PMP Certified!\nThis is the desired output of the whole process.", "score": 30.00783025878579, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "Only Help In PMP\n\"If you think that your PMP preparation needs an expert touch, go to Test King. The next thing you know, all your PMP certification guides are handled by this awesome website, and your tension regarding PMP is gone.\n\"My online tutorials at Test King made my PMP exam a complete triumph! I am so happy with my PMP grades, because I was wise enough to go to Test King and avail myself of its offer. My friends also went to the site, and got PMP guides in the best possible manner. This is like our online teacher and counselor.\nAwesome PMP Guide\n\"When I had taken up the PMP, I was very worried about my certification process and its after-effects on my career. That is when I visited Test King, and my PMP prep was steered through by Test King. The easily accessible website is a bounty for PMP students.\nGetting Worried About The Project Management Professional Course?\n\"If you are getting tense about the Project Management Professional course, then stop worrying, because I am going to tell you about my experience with Test King, and I feel great pleasure that this site is doing excellent work for the students looking at PMI.\nFor All Ages\n\"Being a professional person, I still found the Project Management Professional exam difficult. It was not easy for me to manage work and studies. But when I got to know about Test King from my friend, I visited it and found it quite helpful. Test King helped me to manage my work time as well as my study time. I no longer find the PMI exam difficult, and it is just because of Test King. 
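The pacing guideline above (180 questions in roughly four hours, at most a minute per question before marking it for review) can be sanity-checked with simple arithmetic. A minimal sketch, using the timing figures quoted in the text as illustrative assumptions:

```python
# Sanity check of the exam pacing guideline: 180 questions in ~4 hours.
# The figures come from the surrounding text and are illustrative only;
# PMI has changed the exam format over the years.

def minutes_per_question(total_minutes: float, questions: int) -> float:
    """Average time available per question."""
    return total_minutes / questions

avg = minutes_per_question(240, 180)  # ~1.33 minutes per question
# Capping yourself at 1 minute per question leaves a review buffer:
review_buffer = 240 - 180 * 1.0       # 60 minutes for marked questions
```

So answering at the one-minute pace suggested above leaves roughly an hour to revisit marked questions.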
Test King is an easy-to-use site and is for all ages; it is not only for students but for everyone.\nSearch For The Will Within You\n\"When you find the will to get back up again and retry the Project Management Professional exam, then be sure to use Test King. It is the path that you need to take if you wish to succeed in the PMI exam and avoid going through the painstaking process of repeating it all over again. I know it is extremely difficult to find the strength to start again, but prepare with Test King and see how much you have managed to learn.\nSupportive Online Site:\n\"When I was going through a hard time in my studies for the Project Management Professional exam, my friend told me about Test King.", "score": 29.509569905560095, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "The Testking preparation tool helps you to pass the Professional Exam with a 100% success guarantee. Authentic questions and answers are provided with the assurance that you will pass your exam. Success with Testking – Anytime!\n- Over 10 years experience\n- Incredible 99.3% pass rate\n- 3218 questions and answers\n- 24/7 support\n- 108 preparation labs\n- 34 Professional Exam content writers\n- 97,902 satisfied customers\n- 3390 FREE demo downloads available\n- 2 weeks of preparation before you can pass your exam\n- 78 percent more cost effective than traditional training\nYour purchase with Testking is safe and fast. Your products will be available for immediate download after your payment has been received.\nThe Testking website is protected by 256-bit SSL from McAfee, the leader in online security.\nContact our Customer Support\nWe offer material that ensures one hundred percent success in the results of the online PMI computer based training, so download Testking's PMI book and PMI video lectures online. Looking for simple and easy ways to pass your online PMI audio training? Then all you need is to make use of TestKing's online PMI video lectures and the PMI exam course from TestKing's online practise tests. 
TestKing's PMI demo practise tests and updated PMI exam questions are a great answer to fake training sites. We guarantee your success in http://www.testkingcerts.com/PMI.htm updated video training for an amazingly reasonable fee. You can augment your technical knowledge to the utmost limits by downloading our TestKing PMI boot camps and Testking's PMI online prep materials for passing the tough and intricate online PMI computer based training. Although there are other study guides available, the best possible study guides are the ones produced by our product development team.\nInteractive guidance is important for a feasible and unique performance in the latest PMI CBT. Therefore we have devised our TestKing PMI audio study guide along with online TestKing PMI exam dumps to make the preparation period easy, with fun and interest-building features. Avoiding unthinkable outcomes is much easier now with us, as we are here to provide you with an efficient and relevant preparation guide for the PMI CBT.", "score": 29.198594425557946, "rank": 29}, {"document_id": "doc-::chunk-2", "d_text": "Before that I was wandering around different sites, but I didn't find such a supportive site which brought all the solutions to me for the preparation of my PMI exam. Test King is doing outstanding work. I am very happy to have found it.\nGet Started To Get Ahead\n\"In the race to get the Project Management Professional certificate I took help from Test King, because I did not want to risk losing the opportunity of getting my favorite job. My cousin had already suggested it to me and I took her word on it. Test King gave me many PMI exam tests to solve and I got their answers there and then; I also had the option to see tips on how to solve them and not repeat my mistakes.\nI Am A Project Management Professional Certificate Holder!!\n\"My dad always pushed me to get the Project Management Professional certificate, as it also helps in achieving a high-standard job. 
He told me about Test King and told me to visit it. At first I didn't pay attention, as I was interested neither in the PMI exam nor in Test King, but amazingly, as I started working with Test King, I found it interesting and workable. Now I hold the Project Management Professional certificate ... my dad is happy with me and I am happy with Test King.\nWonderful Experience With Test King\n\"It was a wonderful experience with the Test King site, compared to the bad experiences I have had with other online sites. It had become very difficult for me to continue my studies for the Project Management Professional exam, and I was worried about it all the time. But Test King did its best and made my life better. It does not only offer the best material but also additional information that helped me a lot in the preparation for the PMI exam. Test King provides proper guidelines for the Project Management Professional exam.\n\"It is time to be amazed by the brilliance of Test King for the Project Management Professional exam. It prepared me to the greatest extent of preparation that could be achieved for the PMI exam. The many practice tests and study packs that I went through proved to be a vital asset to my preparation, because they made me ready for all types of questions that I could encounter, now that I had a proper understanding of the Project Management Professional exam material and what was required when answering the questions. Thank you so much, Test King.\nAssignments Are No More Burdens For Me\n\"For me, assignments for the Project Management Professional exam were a big deal; they created a lot of tension for me. I felt them as a burden.", "score": 29.139701703909704, "rank": 30}, {"document_id": "doc-::chunk-3", "d_text": "While preparing for the PMP exam, there are a few mandatory things for which you need to spend your money. 
All others are optional; you have the choice of whether to go for them or not, depending on your budget constraints.\nBasic PMP Resources Recommended\n- 35 contact hours of PMP education – Cost will vary depending on face-to-face training or online training.\n- PMBOK – Though it is nowhere said to be mandatory, it is strongly recommended to get the PMBOK, either by downloading it from the official PMI web site (if you are a PMI member) or by purchasing it from any online store.\n- PMP Study Guide – To simplify your reading, a study guide which suits your reading style will really help.\n- Free Online PMP Tutorials on PMP Concepts\n- Free Online Mock Tests.\nOther than that, there are a few online courses, study materials and mock tests which are not free. Depending on your budget constraints, you may decide whether to go for any of these or not. Any of these resources may simplify your learning experience. However, please note that these are optional; one can crack the PMP exam without these commercial resources as well.\nBelow is a screen shot of the PMP study plan Excel template.\nStep 4 – Develop the PMP Study Plan schedule\nWe explored a lot of things during the previous step, Planning. Now it is time to start preparing our PMP schedule.\nFirstly, create a work breakdown structure containing all the chapters, knowledge areas, ITTOs (Inputs, Tools & Techniques, Outputs), and detailed concepts. Make the WBS detailed enough that each item is a logical, reasonably small entity to manage as a task or activity.\nI have given below the topics I have created by knowledge area. If you want a more detailed plan, you can break it down further. However, for the purpose of this tutorial, I am creating the schedule only to the level of knowledge areas, to simplify our planning exercise. Just in case you want a bit more detail, expand the WBS (work breakdown structure) further.\nWhen should I take breaks?\nTry to plan some breaks in between. Breaks are necessary whenever you feel exhausted. 
Take breaks, regain your energy and then resume your preparation.\nStep 5 – Execute your PMP Study Plan (How to prepare for the PMP Certification Exam and pass?)", "score": 28.950921142916247, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "July 29, 2018\nAre you one of the candidates who want to pass the Project Management Professional PMP exam in order to boost your career and become highly demanded in different organizations around the world? Or maybe you are worried about how to prepare for the Project Management Professional PMP exam. We have good news for you; there is a golden opportunity for you to get good grades in the Project Management Professional PMP exam on your first attempt. All you will have to do is to get our Project Management Professional PMP dumps or PMP PDF Dumps: Web Simulator PMP practice tests from https://www.certification-questions.com/[CATEGORY]-exam/[PRODUCT]-dumps.html\nWHY YOU SHOULD CHOOSE OUR [PRODUCT] DUMPS FOR CERTIFICATION EXAM PREPARATION\n1. The Certification-questions.com PMP exam questions are complete, comprehensive and guaranteed to prepare you for your Project Management Professional exam.\n2. At an affordable price, you get access to all of the best exams from every certification vendor.\n3. Help You Prepare for Your Exam: All examination questions and answers are compiled with extensive care and with your interest in mind. Ours is a one-stop shop for final preparation with Project Management Professional PMP dumps.\n4. At our disposal are many Project Management Professional gurus from the industry, qualified individuals with the responsibility to review the entire question and answer section to help you catch the idea and thereby pass the Project Management Professional PMP dumps: Web Simulator PMP practice tests certification exam.\n5. Experts Verified Materials: Known industry experts checked our materials, because we believe that what is worth doing is worth doing well. 
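The WBS-based study plan described in the preceding steps can be turned into a rough day-by-day schedule mechanically. A minimal sketch in Python; the knowledge-area names follow the PMBOK, but the hour estimates, daily capacity and start date are illustrative assumptions, not figures from the text:

```python
# Sketch: turning a knowledge-area WBS into a simple day-by-day study
# schedule. Hour estimates and the start date are illustrative only.
from datetime import date, timedelta

# WBS: each entry is (knowledge area, estimated study hours)
wbs = [
    ("Integration Management", 6),
    ("Scope Management", 5),
    ("Time Management", 8),
    ("Cost Management", 6),
    ("Quality Management", 4),
]

def build_schedule(wbs, start, hours_per_day=2):
    """Assign consecutive study days to each WBS item."""
    schedule, day = [], start
    for area, hours in wbs:
        days_needed = -(-hours // hours_per_day)  # ceiling division
        schedule.append((area, day, day + timedelta(days=days_needed - 1)))
        day += timedelta(days=days_needed)
    return schedule

for area, begin, end in build_schedule(wbs, date(2024, 1, 1)):
    print(f"{area}: {begin} -> {end}")
```

Adjusting `hours_per_day` or the per-area estimates regenerates the whole schedule, which is the main benefit of planning from a WBS rather than fixing calendar dates by hand.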
We give back to you real value for your hard-earned money.\nFurthermore, all the examination questions come with detailed explanations. With our dumps: Web Simulator PMP practice tests, you will pass your certification exam on your first attempt.\nOUR PMP DUMPS WILL INCLUDE ALL THE REAL CERTIFICATION TOPICS.", "score": 28.378231767406874, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "PMP® Certification Exam Training\nWhen it comes to PMP® certification exam preparation and project management training, no one has more resources than Velociteach. Over the past ten years we’ve provided nearly 200,000 customers with PMP training and materials, more than almost all the other companies in the industry combined. There’s a reason that more people trust Velociteach than anyone else for PMP exam prep. We aren’t just committed to providing outstanding materials and instruction – we are committed to you. Whether you are spending three days in one of our world-famous PMP boot camps or purchasing PMP exam prep materials for self-study, you have a company that stands behind you: answering questions, offering encouragement, and providing resources you can’t get anywhere else. When you decide to prepare with Velociteach, you can trust that you are joining a community of support like no other. Come find out what nearly 200,000 people worldwide have already discovered: no one does it better than Velociteach.\nITIL is a Registered Trade Mark of AXELOS Limited\nBootcamp Class Schedule for PMP Exam Prep, CAPM and Agile\nThe Velociteach Difference\nVelociteach was founded in 2002 by Andy Crowe, one of project management’s foremost thinkers and authors, on the belief that leadership skills are the foundation of effective management. Velociteach has trained nearly 200,000 PMs with methods and materials that are constantly updated to reflect the latest industry standards and exam changes. 
Our instructors are project management experts whose focus is wholly on your success.\nVelociteach is committed to your success, and our track record proves it. Our live 3-Day PMP Exam Prep program is designed with one goal in mind – to help you earn your PMP – and our training team will work with you every step of the way to make it happen.\nIf you’re intimidated by the PMP Exam when you come to our boot camp, you won’t be when you leave. We guarantee you’ll walk away with all the preparation and confidence you need to conquer the exam and earn your PMP.\nA lot of companies make a lot of ridiculous claims. Here is one that you can stand on: Velociteach has achieved the highest documented pass rate on the PMP Exam among all project management training providers.", "score": 28.10682549316056, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "The PMP® online training offered through Project Management Qualification.com is designed by ILX Group. Specialists in e-learning, our award-winning digital learning materials promote knowledge retention, easy engagement, and exam success.\nThe first choice for thousands of trainees across the world, this course will fully prepare you to become a certified Project Management Professional. Completing this qualification through the Project Management Institute can transform your career, as it is recognized globally as a benchmark for project management capability.\nAt the conclusion of our e-learning course, students will:\nYou can find even more information about this course on our Online Training page.\nWe are also able to use our 25+ years of experience to provide solutions exclusively for businesses, organizations and corporations. We can design and deliver specific training packages for your business in the form of PMP classes, workshops, or tailored e-learning.\nRequire mass certification in PMP for your staff, or need to roll out a small internal training program across a particular team? 
We can create tailor-made PMP classes, or your very own e-learning package. Simply get in touch with us today.\nYou can read about our bespoke corporate and consulting services in more detail on our Corporate page.\nNEXT: PMP® Exam Information", "score": 27.72972038949445, "rank": 34}, {"document_id": "doc-::chunk-1", "d_text": "They come to you in the form of videos, podcasts, worksheets, articles, as well as reading and homework assignments. This helps you put PMP® Exam content into context.\nThe Weekly Workbook & Free PMP® Sample Questions\nYour weekly coaching sessions take you step by step, chapter by chapter through A Guide to the Project Management Body of Knowledge (PMBOK® Guide) and point out specific areas of focus in your studies. Each recorded coaching session is accompanied by your Weekly Workbook. Use these PDF workbooks to guide your studies until next week.\nHere is a selection of what you can find in your Weekly Workbooks:\nYour Take Action Worksheet: This is your companion throughout the course. You'll receive a new one every week. It lists every single activity that you need to work on. These worksheets give you clear guidance in your studies. (You can see a sample of this by clicking on \"Free\" at the top.)\nYour Daily Study Script: You'll receive this in the first week. It helps you to prepare your day-to-day study activities. On this script, we recommend the activities that you should perform every day you study. You'll learn about Regular Study Days and Sample Exam Taking Days and what to do on each.\nYour Study Assignments: Every week you receive a series of clear instructions with study assignments. These range from reading certain chapters and sections from the PMBOK® Guide to listening to podcasts, watching complimentary video lessons or reading topical articles.\nYour \"Go Beyond\" Assignments: Many of your assignments take you beyond \"just\" studying. 
As such, you are coached to read important exam documents from the Project Management Institute (PMI)®, guided through the process of preparing your online application, taught how to find a study partner, shown what to expect on your actual exam day, and much, much more.\nYour Free PMP® Sample Questions: Your Weekly Workbook also contains a Self-Assessment section with 15 sample exam questions taken from The PM Exam Simulator. Answer these questions at the end of your study week to gauge how well you understood the material. In total you receive over 200 free questions.\nThe PM StudyCoach costs just $49.99.\n- Follow a proven plan: As a project manager, you know the importance of having a project plan. This is your plan to prepare for the exam.", "score": 27.034195851385114, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "PMP Training and Mock Tests by Highly Experienced Trainers.\n48 hours of extensive PMP training, available both online and as classroom instructor-led training.\nPMP® certification exam preparation training with mock tests by highly resourceful, experienced and dedicated PMP facilitators/trainers will enable you to PASS the PMP exam on the FIRST attempt. Our unique PMP exam prep training methodology will guide you throughout your PMP journey to pass the PMI-PMP smoothly and comfortably. We offer scenario-based training on the PMBOK so that you can relate your experience to the subject matter and build a clear and in-depth understanding of the PMI-prescribed knowledge, skills, tools and techniques required to PASS the PMP exam as well as to deliver successful projects. We offer PMP® certification training and mock tests both ONLINE and in the classroom. 
Our training program is packaged with 48 hours of effective training sessions, at least 10 (ten) FREE mock tests on highly standard questions with explanations, and one FREE retake of the full service within 1 year (one time).\nNo of Classes: 16-18\nTraining Platform: ZOOM, Classroom\nTotal Effective Training Hours: 48 Hours, 2 Months\nMock Tests: Minimum 10\nLearning Support: 1 Year\nThe PMBOK Guide or PMP exam training is NOT a very interesting or easy one to take as a day-long session unless you have a good familiarity with it. Our experience suggests that many participants have lost their determination and passion for PMP certification after attending day-long sessions. A 4-5 day session may NOT help you to reach the destination, as TONS of content will be bombarded over your head within an extremely short duration.\nProject management skills are among the most important for engineers and other graduates seeking a very lucrative project management career across the world. PMP certification is highly recognized and demonstrates that PMP-certified professionals are more capable of delivering successful projects using PMI’s latest knowledge base, guidelines, tools and techniques.\nMost of the project consultation and project management jobs with renowned multinationals require that potential candidates have PMP certification along with other necessary qualifications.\nWe arrange frequent online/classroom FREE seminars and offer PMP® certification training and mock tests with a very flexible schedule and competitive training charges.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-4", "d_text": "Shawn Futterer, PMP – Lessons Learned From The PMP® Certification Exam Part 1; Lessons Learned From The PMP® Certification Exam Part 2; PMP Exam Test Taking Strategies; Preparing for the PMP® Certification Exam\nJohn Worthington, PMP – Exam Lessons Learned\nJason Smith, PMP – How to Pass the PMP® Exam\nBill Meacham, PMP – Tips for Passing the PMP® Exam (This one is \"a true classic\")\nElke Van den Broeck, PMP – PMP® Lessons Learned\nJames T. Brown Ph.D., PMP – Manage Your “PMP® Exam Study” Project\nNils Kandelin, PhD, CISA, PMP – A CISA gets a PMP®\nCornelius Fichtner, PMP, CSM – Break your PMP® Studies into Small Pieces; What is the PMP® Exam Passing Score?; Create your own PMP® Sample Questions; Budgeting Your Study Time\nBonus Item #3: Access to our Exam Discussion Forums\nWouldn't it be great if you had a question about the exam that you could go and ask a trusted source? That is what our exam forums are for. All customers & students have access to these forums. Ask a question and your fellow students or our staff will help you.\nBonus Item #4: Instant Subscription to the PMP® Exam Tips Newsletter\nSubscribing to this newsletter is a great way to complete all the aspects of your PMP® exam preparation. You receive weekly tips about specific PMP® exam techniques, guidelines and resources, exam dos and don’ts, proper scheduling and time management, and various efficient approaches you can apply. A valuable tool indeed that enables you to breathe and take one step at a time towards a relaxed you with a focused mind come exam day.\nTrusted and Experienced Education Provider\nWe Are A Trusted and Experienced Education Provider. We have been training students since 2008. We pride ourselves on our pioneering education approach and best practices in course design...and our students love their results!\nSlides & Transcript Not Included\nPlease note that neither a transcript nor the presentation slides are part of the PM StudyCoach package. 
They are also not available for purchase separately.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-1", "d_text": "Understand the PMI approach and what is required to pass the PMP® certification exam\nMaster the body of project management knowledge and the major lifecycle processes of a project\nKnow the input data, tools, methods and output data of each project management process\nThis training is for anyone who wants to prepare for the PMP® certification exam, regardless of the industry and the size of the organization. Expect your career to take off!\nThe private training for the preparation of the PMP® certification will take place at your offices or at the location of your choice (e.g., a dedicated conference room).\nAlso available online.\nThe 5-day course focuses on everything you need to know to pass the PMP® exam.\nParticipants must bring a laptop to attend this project management training. During the PMP® certification preparation training, we provide the following materials to each participant:\nTrainer’s presentation document\nPreparatory questions for the PMP® exam\nSimulation of the PMP® certification exam\nList of reference volumes\nBENEFITS OF TRAINING\nIn-depth study with a trainer who is a member of the Project Management Institute\n25 years of trainer experience combined with feedback from hundreds of students per year\nSimulation under conditions similar to the PMP® exam\nContact Us for the training plan\nAre you interested in our PMP® certification exam preparation training? Fill out the form below and we’ll email you the training plan.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-3", "d_text": "When you have answers to these questions, you’ll probably find that one training provider really stands out for you as a good fit for your budget, learning preferences and timescales. 
That course will be the best PMP course in Bangalore for you – even if you have chosen to study online!\nUse Recommendations to Find PMP Certification Training in Bangalore\nOne of the best ways to find courses that people like you have found beneficial is to use recommendations. Search online for students who have taken project management certification courses in Bangalore and see what they said about them.\nWe recommend in particular the LinkedIn group I want to be a PMP, which has over 40,000 members from around the world.\nGood training providers will publish stories and testimonials from their past students, so you can see how other PMP aspirants have fared using their study materials. The PM PrepCast has been used by over 48,000 students. Here are a couple of real stories from people who could be your colleagues.\nExam day success stories from our Indian students\nMy exam started at 8:00 am, so I got up early in the morning at 5:00 AM. After freshening up, I did some light physical exercise to warm up my body and sat for some time in meditation. It was of the utmost importance for me to stabilize my adrenalin levels and keep my mind focused, though I was not 100% physically fit.\nI took a very light breakfast. I reached the test center around 30 minutes before the scheduled time. After completing all the exam formalities I sat for the exam. In three and a half hours I could complete only 155 questions. Pressure started mounting, as the remaining 45 questions needed to be completed in 30 minutes. When I had completed 175 questions, there were only ten minutes left to answer the last 25 questions, which seemed impossible for me.\nThen I just prayed to God and started to answer the rest by simple guessing only, with no time to read the full questions! After finishing all the questions there were just fifteen seconds left. Then I pressed the Exam Finish button, skipping the survey option. To my great relief I passed!\nI am happy to share the news that I cleared the PMP exam in Bangalore. 
It was a dream come true for me. I used the PMP Exam Simulator and PMP Exam Formulas by Cornelius Fichtner.\nI recommend everyone who is aspiring to get this coveted certification to buy and refer to the material by Cornelius Fichtner.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "PMP CERTIFICATION EXAM PREPARATION\nBoth Classroom and Online Classes\nWeek Days and Week Ends\n1.5 hrs on weekdays and 3 hrs during the weekend\n•How to resolve errors in PMP.\n•Master PMP concepts from the ground up\n•You will know how to work with PMP.\n•Eliminate duplicate code and consolidate script files using PMP.\n•Learn PMP from scratch with demos and practical examples.\n•How to build your own apps and scripts using PMP.\n•Learn the ins and outs of PMP in a few hours\n•You will be confident in your skills as a developer / designer\n•Learn the fundamentals of the PMP using both a theoretical and practical approach\n•Most comprehensive industry curriculum\n•Basic training starting with fundamentals\n•Fast-track course available with the best fees\n•We provide the course certificate of completion\n•Greater productivity and increased workforce morale\n•Collaboration with 500+ clients for placements and knowledge sessions\n•Every class will be followed by practical assignments which aggregate to a minimum of 60 hours.\n•Our dedicated HR department will help you search for jobs as per your module & skill set, thus drastically reducing the job search time\n•c++, React.js, Java Fullstack, Core Java Data Structure, Java Micro-services, Devops, Microsoft Azure, Cloud Computing, Machine Learning, Automation Testing\n•Dotnet, Java, IOS, Android, SSE, TL, Manual Testing, Automation Testing, PHP Developer, Web Developer, Web Designer, Graphic Designer, Technical Manager, C#\n•Java Developer, Production Support, Asp.Net, Oracle Applications, Pl Sql Developer, Hyperion Planning, Dot Net, UI Designer, UI Developer, MS CRM, Hardware\n•Sap, Process Executive, Hadoop 
Developer, Hadoop Architect, Sap Srm/snc Testing, Sap Pp / Qm Testing, Sap Ewm Testing, Sharepoint Developer, T24 Technical And\n•ux, ui, Python Developers, Qa Automation, sales, Ui Development, Ux Design, Software Development, Python, Qa Testing, Automation Testing\nWe have personally trained over 10,000 people in how to pass their Project Management Professional (PMP) certification exam on the first attempt, with PMP (Project Management Professional) practice exams containing 400 questions. I have carefully handcrafted each question to put you to the test.", "score": 26.70225530354407, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "Are you preparing for the Certified Associate in Project Management (CAPM) or Project Management Professional (PMP) certifications? Do you need PMP tests? Or are you looking for facts on the Project Management Body of Knowledge (PMBOK), 4th edition?\nEach article in this PMBOK guide has been carefully selected so that you get the most accurate content on project management, CAPM, PMP, PMP tests, and the PMBOK. The Project Management Knowledge Areas, such as Project Cost Management and Project Risk Management, are covered with examples and artifacts. Each PMBOK knowledge area also has associated PMP tests to help you prepare for the exam. Apart from targeting the CAPM and PMP exams, this PMBOK guide is useful for anyone trying to gain knowledge and skills in project management.", "score": 26.469099199845303, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "The first step in how to take my exam for me is to learn as much as possible about the course that you will be taking. There are many different online courses that are offered, and some are free, whereas others require you to pay a fee. The fee-based courses are usually the better ones, and they give you a greater depth of information about the topic. 
Some of these courses also include detailed audio lectures that make it easier to understand the material and to prepare for the test.\nIt is important that you find a reliable and trustworthy source for getting information on how to prepare for Project Management Professional examinations. The Project Management Professional Board has a website which contains a lot of useful information and resources for taking examinations. If you are familiar with studying in depth before taking an examination, then this may be a good option for you. However, if you are still trying to get over your anxieties about taking the examination, then this is probably not the best option for you.\nYou will be required to complete a certain number of hours of coursework before the examination. In most cases, the number of hours will depend on the level of the project that you are preparing for. You will either be required to study more or less than the usual hours per week. Of course, the amount of time that you will need to spend on the examination will also depend on the kind of examination that you are taking.\nYou can also look up resources from the project management course that will help you get prepared for the examination. Some of the resources that you can use include sample tests and examinations. This will allow you to gauge your performance and determine whether or not you are prepared enough to take the examination. It will also be helpful in making sure that you are familiar with the content of the examination. By gaining knowledge of the material, you will know what to expect.\nAlthough the Project Management Professional Board does not make it mandatory for you to take the examination, you may find it to be very beneficial to get a few practice tests before the examination date. This will help you decide whether or not the preparation is worth your efforts. 
Although there is no guarantee that you will pass the examination, you might end up gaining valuable insight into what the examination will entail and how best to prepare for it. This will also give you an idea as to whether or not the project that you are taking on is right for you.", "score": 26.357536772203648, "rank": 42}, {"document_id": "doc-::chunk-1", "d_text": "Whichever one you choose, it is highly recommended to watch the video lectures; the PMP exam does not cover much basic theory and focuses more on applying concepts to situations, so a teacher's explanations make it easier to digest the knowledge points.\nAbout | Signing up and registration\nAlmost everything can be handled through Taobao.\nNote: if you sign up through a training institution, the institution takes care of this for you.\nThe teacher at my training institution sent me two forms; after filling them in, I did not have to worry about the registration, but I still learned something about the process:\n(1) Proof of 35 contact hours: a PMP registration essential; self-study candidates can buy it directly on Taobao for about 300 yuan. (If your company provides it, that is a different matter.)\n(2) Chinese and English application review: the PMP application process is cumbersome and demanding, and a carelessly prepared application can fail the qualification review; you can buy a full application-review service for about 100 yuan (including spot-check support).
Some merchants include instruction manuals when you buy the contact hours, and you can follow the steps to fill in the application.\n(3) Online payment: after registration, PMI conducts a qualification review; once you pass, you pay on the official website. The registration fee is 3,900 yuan and can be invoiced for reimbursement.\n(4) Printing the admission letter: about one week before the exam, you can look up your test location on the official website and print the admission letter. I took mine in Guangzhou; there seem to be two test sites, one in Tianhe and one in Baiyun, so don’t go to the wrong one!\nAbout | Preparation plan\n(1) Chapter learning: 15 to 20 days.\nWatch the videos to understand the knowledge points of chapters 1-13, and consolidate them with chapter exercises. The heart of this phase is understanding the difficult concepts, aiming for a 70% accuracy rate on the chapter questions. Reply with the keyword “chapters” to get all the chapter knowledge I have organized.\n(2) Key knowledge memorization: 15 days.\nMemorize the five process groups and ten knowledge areas, the 49 processes and their ITTOs … and so on; you can view my collection of frequently tested PMP knowledge points.\n(3) Simulated sprint: 10 to 15 days.\nThe core of this stage is drilling practice questions, using wrong and uncertain answers to find your gaps; reply with the keyword “wrong question” to get the six sets of wrong questions I have summarized.", "score": 25.93968143638281, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "There are a lot of assessment tools on the web; here is a free example from Mindtools.\nStrategy Execution's PM Appraise knowledge assessment is another tool you can use to identify any knowledge gaps that you may have. It measures knowledge of project management best practice and provides recommendations for learning activities to help bridge the knowledge gaps.\n4. Attend professional development training courses\nThere are a vast number of training providers in the market that can help you to prepare for the PMP exam.
When choosing a training provider, first make sure they are a PMI Registered Education Provider. This means that the PMI has officially approved their training and that their courses reflect the PMBOK.\nAnother good thing to do is to research your training provider properly: ask for instructor biographies, make sure your instructor is PMP certified, and find out what course materials are provided, whether a detailed course outline is available, and whether they offer any post-course reinforcement training. A good training provider should be able to supply all of this information to you.\nIf you do decide to choose Strategy Execution as your training provider, then a good way to start your preparation would be by taking one of our core courses. All of our core courses follow the PMBOK guide and are a great way for you to prepare for the exam. They also count towards the formal Project Management training required. We recommend that you at least consider the following courses before you apply for the exam:\nPlanning and Managing Projects - This course will give you an overview (set the scene) of a project and is a great refresher if you have years of experience as a Project Manager but little formal training.\nScheduling and Cost Control - This course will teach you the technical side of project management and give you a full understanding of things like WBS, EVM, PERT, etc.\nRisk Management - This course will give you in-depth knowledge of how to manage risks in all stages of the project.\nPMP Exam Power Preparation - This course will help you structure your studies and learn the best answers before the exam. Our instructors are PMP® certified and will explain in detail what you will experience on the exam and the type of questions you will face.
They will also answer any questions and address any uncertainties that you may have.\nThe first three courses cover nearly 80% of the exam questions, and together with the PMP Exam Power Prep course you will have the necessary knowledge to successfully pass the exam and be a successful Project Manager.\n5.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-2", "d_text": "The test is available in 16 languages, including Hebrew, Japanese, Italian, Korean, Spanish, Russian, Arabic, Bahasa Indonesian, Brazilian Portuguese, Simplified Chinese, Traditional Chinese, Turkish, English, French, and German. As for the exam cost, PMI members pay $405 for the voucher, while non-members pay $555.\nThere are three different domains included in the PMI PMP exam. The candidates must develop their competency in these topics before attempting the certification test. The highlights of these main objectives are listed below:\nThis subject area comes with 14 tasks, including managing conflict; leading a team; supporting team performance; empowering the team members as well as stakeholders; ensuring the team stakeholders/members are properly trained; building a team; addressing and removing obstacles & blockers for the team; negotiating project agreements; collaborating with the stakeholders; building a shared understanding; engaging and supporting virtual teams; determining team ground rules; promoting team performance through the application of emotional intelligence; and mentoring the relevant stakeholders.\nThis section includes 17 clearly defined tasks, which focus on the technical aspects of project management.
These are managing project artifacts; establishing a project governance structure; defining the relevant project methodology & practices; planning & managing project/phase closure or transitions; and planning and managing procurement, among other skills.\nThis topic contains 4 tasks created to establish the link between a project strategy and the organizational goals. These tasks are as follows: evaluating and delivering the project benefits & value; planning and managing project compliance; supporting organizational change; and assessing and addressing external business environment changes for their effect on scope.\nPMI PMP: Preparation Tips and Tricks\nThe PMP certification exam requires serious preparation, regardless of your previous experience and education. Most successful candidates spend at least 35 hours getting ready for this test. To make things easier for you, here are the steps you need to follow to prepare for the PMI exam with great deliberation:\nPMI publishes the content outline for the PMP exam on its website. This is the most valuable document, as it contains a detailed list of the knowledge, skills, and tasks required for this certification test. By carefully studying this document, you will get a clear idea of what the exam covers.\nPMI offers a comprehensive training course that will help individuals improve their project management skills and build the knowledge base for the certification exam.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-4", "d_text": "Thanks Cornelius for the awesome material!\nHere in India most working professionals spend a lot of time commuting to the office (between 2 and 4 hours a day); they can use this time productively by listening to The PM PrepCast. I could download it and study on my phone/tablet.\nThe PM PrepCast is in simple English, cost effective, and gives good explanations of project management concepts and processes.\nPlease accept a BIG thank you from my side.
The PM PrepCast helped me pass my PMP exam, which I took recently. Now I'm preparing for my PMI Agile Certified Practitioner (PMI-ACP)® exam, and I recently bought the Agile PrepCast as well. I have personally recommended the PM PrepCast to many of my friends and colleagues.\nUse the PMI Chapter Bangalore\nAnother place to get recommendations for training courses is PMI itself. The PMI Chapter in the city is very active, with an up-to-date website. They often run project management training in Bangalore at the Royal Orchid Central Hotel. They also have a monthly training program designed specifically for local PMP aspirants called PMP Quest. This is run by volunteer trainers who are all experienced industry professionals and PMPs.\nAs a member, you can also join their PM Footprints events, which take place regularly. These evening networking events will give you the chance to meet local project management practitioners and other people in a similar situation to you, who want to study for and pass the PMP exam in the area. You could even form a study group together or ask one of the experienced project managers to coach you during your preparation for the exam.\nAccess the Bangalore PMI Chapter website here.\nRegardless of whether you feel commercial project management training, a PMI Chapter course or online study is the best option for you, it’s important to recognize that you are more likely to achieve a successful outcome in the exam through a blend of learning methods.
That means:\n- Reading the most recent edition of A Guide to the Project Management Body of Knowledge (PMBOK® Guide) (several times!).\n- Networking in person or online and chatting to people who have been through the experience themselves.\n- Using a variety of different study methods to help cement the ideas that you learn through your formal education contact hours, such as books, video, podcasts, flashcards and so on.\n- Practicing often for the exam by working through plenty of sample exam questions so you know what to expect on the day.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "A Project Management Professional Certification is designed to help professionals upgrade their project management skills and learn the various industry practices that meet modern challenges. The PMI offers management programs spanning various certification courses, all of which will help you climb the ladder of success in your career.\nAs per PMI, an applicant can take up to 3 attempts in a calendar year to pass the exam. However, with a focused training program by Koenig Solutions, your chances of passing are higher.\nThe passing score was once 81%, and was reset to 61% in 2005, which meant that PMP applicants needed to answer 106 or more of the 175 scored questions. From December 2005, PMI stopped publishing the passing score. Now the passing score for every applicant is different and is based on the difficulty level of the questions being attempted. Also, the PMP exam report card was revised in 2007 and now gives only the proficiency levels (i.e. Proficient, Moderately Proficient and Below Proficient).\nThe PMP certification exam will have questions that test all your knowledge of the various aspects of project management. In this exam, PMI will test your analytical, logical, mathematical, management and leadership skills.
The PMP exam consists of six different kinds of questions:\n- Knowledge-based questions from the PMBOK® Guide\n- Definition-based questions\n- Situation-based questions\n- Formula-based questions\n- Interpretational questions\n- Questions on professional and social responsibility\nTo become a certified project manager, it is necessary to pass the certification exam. You must pick the most relevant program that will help boost your professional career. The exam tests a management professional across five major domains: project initiation, planning, executing, monitoring & controlling, and closing the project.\nThe PMI guidelines state that a complete 35 hours of project management education is needed prior to enrolling for the examination. Some previous PMP certification holders advise an exam preparation time of up to 3 months, followed by rigorous mock exams that will judge your understanding of the PMP certification.\nThe eligibility criteria for Project Management Certification depend on your educational and professional background: High school diploma/associate degree or equivalent - must have five years of non-overlapping Project Management experience with 7,500 hours of leading direct project tasks; 35 contact hours of project management education are also required. Bachelor's degree or equivalent - must have three years of non-overlapping Project Management experience with 4,500 hours of leading direct project tasks.", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-2", "d_text": "Becoming a PMI member has other advantages, such as getting a discount when paying for the PMP certification, access to lots of books and materials, etc. If you do not want to be a PMI member, you can still get the PMBOK by purchasing it from any of the online stores, such as Amazon.\nYou need to study the PMP study guide and the PMBOK side by side to get the best results.
Every concept of the PMBOK and the PMP study guide should be included in your PMP study plan.\nJoin study groups\nJoin one or more study groups, where you will have a lot of discussions around the PMP concepts. This really is an easy and effective way of preparing for the PMP. Bring your friends or colleagues into the preparation, or join online groups to ask questions and get information on other available resources. There are also lots of PMP forums where tricky PMP questions are published and answered. Join quality online PMP communities and make use of the information. LinkedIn groups are a good source for this purpose; the “I wanted to be a PMP” LinkedIn group is my personal favorite. If you are able to find any other quality groups on other social media, join them too.\nFind sources where you can get access to FREE, quality PMP mock exams. On this website/blog, I plan to publish good-quality questions that I have prepared over the past few years, including chapter-wise questions.\nWhen to study? How long does the PMP preparation take?\nIt all depends on how much time you are able to spend each day on PMP preparation. Assuming that you are a working professional, the ideal approach would be to spend 2 to 3 hours every weekday and more (5 to 6) hours over the weekend. This way you would be spending around 15 to 21 hours every week.\nThis would be more than sufficient for you to crack the PMP exam. If, due to work pressure or other constraints, you are not able to spend around 15 hours a week, do not panic, as this is the best-case scenario, which may not be possible for everyone every week. So keep your plan realistic and try to stick to the plan as much as feasible.\nHow much money can I spend on PMP certification? What PMP prep resources do I have?", "score": 25.000000000000068, "rank": 48}, {"document_id": "doc-::chunk-1", "d_text": "To take the PMP exam, participants must complete an eligibility file on PMI’s website.
If you are eligible, you will receive an authorization to take the exam. This process will be explained in detail during the course.\nDuring the course, our trainer will inform you about the exam rules and specifications. You can already find many tips on the PMP certification on the “PMP Exam Guidance” page of PMI.\nTo take the PMP exam, participants must complete an eligibility file on PMI’s website. If you are eligible (based on project management experience and other criteria), you will receive an authorization to take the exam.", "score": 24.734855992327994, "rank": 49}, {"document_id": "doc-::chunk-1", "d_text": "… Week two: Read the remaining topics in the PMBOK Guide, have an in-depth look at Rita’s book and practice questions, and try two more mock tests.\nDo PMP questions repeat?\nThe questions are pretty close to the real exam if you read Rita’s PMP Exam Prep book. However, the questions in the book have not changed for many years, while the actual PMP exam always has new questions. Further, the exam's question bank is so huge that a question rarely gets repeated.\nCan I study for PMP on my own?\nYes, you can do self-study for the PMP examination. In fact, once you have attained the 3 minimum requirements for the PMP exam, almost all people will do self-study for at least 6-8 weeks (about 2 months) before they will be ready for the PMP examination. … This is a must, because of the sheer amount of study material.\nHow long is PMP valid?\nThree years. Each PMP certification cycle lasts for three years, during which you will need to earn 60 Professional Development Units (PDUs) to renew your certification at the end of the cycle. Once you have successfully completed a single cycle, another new three-year cycle begins.\nWhat is the passing score for PMP 2020?\nThis is equivalent to 65.5% passing marks (131 out of 200 questions). Older figure of 68.5% – this percentage included 25 pilot questions.
If this were true today, then you would need to mark 137 out of 200 questions correctly; that is, you would need a bare minimum of 68.5% to succeed.\nWhat is the best PMP online training course?\nTop PMP certification courses & study materials:\n- Brain Sensei Complete PMP Exam Prep Course\n- SimpliLearn Blended Learning PMP Certification Training Course\n- PMTraining Project Management Professional (PMP) Live Online Certification Classes\n- PM PrepCast Elite Video Training Course\n- Sybex PMP Platinum Review Course\nWhat is the best way to study for the PMP?\nTop PMP exam tips:\n- Conquer the PMBOK® Guide. The PMP exam is based largely on the PMBOK® Guide.\n- Use a good PMP® prep book.\n- Try PMP exam prep workshops.\n- Try online PMP exam prep workshops.\n- Take advantage of online PMP exam simulators.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "The Project Management Professional (PMP) Certification is one of the most prestigious, globally recognised professional certifications offered to Project Management professionals.\n1. Check requirements for taking the PMP exam\nNot everyone is eligible to take the PMP exam. The PMP is aimed at professional project managers with proven experience. The PMI has outlined the following minimum criteria that you need to fulfil before you take the exam:\nOption 1: High school diploma/global equivalent, 5 years of Project Management experience, 35 hours of Project Management education\nOption 2: Bachelor's degree/global equivalent, 3 years of Project Management experience, 35 hours of Project Management education\nYou can find the latest requirements on the PMI’s website.\n2. Become a PMI member\nApart from getting a generous discount on the exam itself, there are lots of other benefits of becoming a PMI member. Through your membership you will get access to events and meetings, so the networking opportunities are excellent.
These help you keep current with project management knowledge and thinking, progress towards and maintain a professional qualification, and learn from your peers. You will also get free digital access to all of their global standards, including the preeminent PMBOK guide (which you will need a comprehensive understanding of for the exam), as well as access to tools and other resources that will help you in your daily work to succeed as a Project Manager.\nIf you decide to take any of our courses to help you prepare for the exam, you will also receive a 15% discount as a PMI member. PMI UK Chapter members get up to a 25% discount on the first three spaces of our training courses.\nThe easiest way is to join online and pay by credit card. The PMI will then send you a membership number. The membership costs USD $129, and this link will take you to their membership registration page. Do not forget to become a local PMI Chapter member; for only a few dollars extra you will have access to all conferences and networking events in your local area.\n3. Assess your current skill level and identify knowledge gaps\nBefore you start taking any courses to prepare yourself for the exam, it can be a good idea to assess your current skill levels and identify any knowledge gaps you may have. This is particularly good if you have a lot of practical experience as a Project Manager but little formal training.
The assessments will help you identify which areas you need to focus your studies on for the PMP exam.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "Pass the PMP Exam\nTools, Tips and Tricks to Succeed: 2016\nFormat: Paperback, 543 pages, 2nd Revised edition\nOther information: 82 black & white illustrations, biography\nPublished in: United States, 01 June 2016\nEarn the Project Management Professional (PMP) credential from the Project Management Institute (PMI). Pass the PMP Exam contains all the information you need to study for and pass the PMP®. In addition to all the information needed to pass the exam, you will also find tips that give insight into how to read and answer questions, and each chapter includes exercises and a multiple-choice quiz to test your understanding of the topics covered. A glossary of key terms is also provided, along with study aids such as mind maps. The author, Sean Whitaker, has managed complex projects in the construction, telecommunications, and IT industries, and shares real-world examples of theory in action from his own career.\nWhat you'll learn:\n* Handle integration, scope, time, cost, and quality management\n* Manage risk, procurement, and stakeholder risk\n* Work with human resources and communications, and handle ethics and professional conduct\n* Become eligible for the PMP exam and learn how to study for it\n* Discover some PMP exam taking tips\n* Handle various PMP exam tasks and puzzle games\nWho is this book for: Experienced project managers looking to capstone their learning with the PMP certification.\nTable of Contents\n1. Foundational Concepts of Project Management\n2. Integration Management\n3. Scope Management\n4. Time Management\n5. Cost Management\n6. Quality Management\n7. HR Management\n8. Communications Management\n9. Risk Management\n10. Procurement Management\n11.
Stakeholder Management\n12. Ethics and Professional Conduct\n13. Eligibility, Study and Exam Taking Tips\n14. Blank Mind Maps\n15. Formulae to Remember\n16. PMP Exam Tasks Puzzle Game\n17. PMP Exam Role Delineation Domain Tasks\n18.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "If you’re studying to become a Project Management Professional, you need the best PMP boot camp available. Velociteach is the industry leader in PMP Exam preparation. We’ve helped more people pass the exam and get PMP certification than anyone else.\nThe Best PMP Boot Camp on the Market\nVelociteach has helped thousands of people all over the world achieve their PMP certification. We have the highest documented pass rate on the PMP Exam among all project management training providers. That’s because our PMP certification class combines adult learning theory with cutting-edge e-learning and the best exam-prep resources available.\nWhat makes our live, 4-day class the best PMP boot camp around? That’s simple. We teach you the skills and strategies you need to pass the certification exam and become a project management professional. You’ll learn our proprietary strategies for answering difficult test questions. You’ll also learn tips for digesting the information found in the massive PMBOK® Guide. In fact, our students are constantly telling us that they learned more in four days than they thought was possible. That’s because our gifted instructors guide you easily through concepts and terms, helping you relate abstract ideas to the real world.\nWe train you to think just like PMI (the Project Management Institute), the organization responsible for the exam. We administer practice tests designed to look and feel just like the actual PMP Exam.
And we’re constantly polishing our practice tests and course material to ensure you’re getting the latest, most relevant information covered on the exam.\nOur Money-Back Guarantee\nVelociteach offers more resources to help you pass the PMP Exam than anyone else. In addition to the best PMP boot camp money can buy, we offer online courses, flash cards, textbooks, a reference guide and audio CDs. Our community of experts is committed to your success, and works with you every step of the way, offering personal study plans, coaching tips and more.\nWe’re so confident we’ll help you pass the PMP Exam, we offer the best guarantee in the business. If you take our class and don’t pass the exam after 3 tries within 1 year, we’ll refund your tuition. That’s right – become a certified Project Management Professional, or get your money back.\nSo what are you waiting for? Call 888-568-2527 and find out why Velociteach offers the best PMP boot camp in the business or click here to learn more about our 4-day class.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-5", "d_text": "At the time this paper was printed, the fee was $405 USD for Project Management Institute (PMI®) members and $555 USD for non-members. You will be given a code to use when booking your exam.\nIIBS offers the Project Management Course every 6 weeks.\nThe six-day course is very comprehensive and helps you learn the PMBOK® (Fourth Edition) and the other reference material. We go through sample questions and various exercises to test your knowledge as we go through the course.\n• Our instructors are excellent facilitators, passionate about the success of our students.\n• Our courses are conveniently located and offered every 6 weeks, so you can be sure of a reliable schedule once you book the time.\n• The Project Management Professional (PMP®) examination is comprised of 200 multiple-choice questions.
A key reference is the PMBOK®, plus additional topics in leadership, communications, and professional responsibility.\n• Of the 200 questions, 25 are pretest questions. Pretest questions do not affect the candidate’s score. The pretest questions are randomly placed throughout the exam.\n• To pass, candidates must answer a minimum of 106 of the 175 scored questions correctly.\n• The computer-based examination is conducted at a Prometric Examination Centre.\n• The allotted time to complete the Computer Based Testing (CBT) examination is 4 hours. This 4-hour period is preceded by a 15-minute computer tutorial.\n• The Project Management Professional (PMP®) certification cycle lasts three years from the date you pass the examination, during which you must attain no less than 60 professional development units (PDUs) toward credential maintenance. These PDUs may be earned by attending educational programs offered by educational organizations registered with Project Management Institute (PMI®) and designated as Project Management Institute (PMI®) Registered Education Providers (REPs).\n• These providers adhere to quality criteria established by Project Management Institute (PMI®) and are solely authorized to issue PDU certificates to attendees.\nThe Project Management Professional (PMP®) designation demonstrates that you possess a solid foundation of experience and education in project management. Project Management Professional (PMP®) certification was ranked at the no. 4 position in the “10 Hottest Certifications for 2006” list by one of the best independent editorial sites on the Web for the certified IT professional community, covering more than 120 IT certifications.", "score": 23.642463227796483, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "Due to the growing need for qualified project managers across sectors, PMP exam prep has become important.
Certified holders demonstrate the high skills, know-how and experience needed to provide the best solutions for a particular project or mission. There are some easy steps you need to follow to earn the internationally recognized project management certification successfully on the first attempt.\nHow to prepare for PMP?\nFirst and foremost, you must be thorough with the certification fundamentals, which include the course, exam pattern, processes, requirements, eligibility criteria, etc. The textbook released by the Project Management Institute (PMI) helps you learn these.\nOnce you know the basics, you can begin to take up the study material. Above all, you have to get the PMBOK textbook, as it is the main study material and most of the questions posed in the exam are taken from that book. Further study materials can also be obtained to gain a good insight into the contents of the whole PMP course.\nAdditionally, you can use some tools to help you grasp the realms of the course effectively. Tools like simulators help you build the speed to answer 200 questions in 4 hours. You can also use fun exercises to help you find your weak subject areas.\nDo you need a professional certification?\nEvery professional credential, be it PMP, PfMP, ECBA or any IIBA certification, requires integrated learning and a qualifying exam. There are many reasons why a professional certificate fits well with your career. Three of them are:\n· Varied responsibilities.\n· Great pay.\n· Flexibility and independence.\nThe PMI provides candidates globally with credible project manager qualifications. Since the demand for qualified project managers in today's organizations has expanded, professional certificates, including the PMP certification, have become essential.\nThe PMP is delivered by the world-renowned PMI, the Project Management Institute.
These project management tests are meant to assess the abilities, knowledge and skills needed for everyday activities and assignments.\nEducationEdge • 2019 Oct 18", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-2", "d_text": "Apply for PMP exam\nComplete the online exam application at www.pmi.org. The application can take a little while to complete, especially if you have a lot of experience to document. However, you do have 90 days to complete it, which means you can revisit it whenever you want during this time period. An example of the PMP application is available from Strategy Execution – please contact us and we are happy to give you advice.\n6. Make your payment\nCost: USD $405 (member), USD $555 (non-member)\n7. Complete the audit (only if asked by PMI)\nOnce your payment has been submitted, you will immediately find out if your application requires an audit. The PMI does random checks on all applications, so if you are one of those selected you will have to submit proof and references for all parts of your application within 3 months. However, the chances of being audited are small.\nIf you are not selected to be audited then you can immediately proceed to book your exam. You will receive an Authorisation To Test (ATT) letter. You will need this to book a time to sit the exam.\n8. Book your exam\nSelect a time and date to take the exam. Examinations are not scheduled, but can be taken at any time online under controlled conditions at a test centre. To locate your nearest centre, visit https://securereg3.prometric.com. Please note that during busy periods, you may have to wait a couple of weeks for a convenient time slot, so it's a good idea to plan ahead.\nWe strongly recommend that you apply and book your exam shortly after you have taken our PMP Exam Prep course (normally within 2 weeks of taking the course).\n9.
Revise for the exam\nMake sure you fully understand PMI’s terminology and have a thorough understanding of the PMBOK Guide. If you have taken our PMP Exam Power Prep course you will get revision booklets to help you on the way. There are also a lot of free revision resources online. Here are a few:\nThis mock exam for PMP consists of seventy questions. All the questions have exactly one correct answer. The answers to the questions are at the bottom of the page. The exam is based on PMBOK 5th edition, and corresponds to the latest version of the exam. This site is courtesy of Edwel Programs.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "The PMP or Project Management Professional Exam is an assessment of 200 multiple-choice questions that gauges your understanding of the five phases of the project management life cycle – initiating, planning, execution, monitoring/control, and closing – as well as all the processes, tools and techniques associated with those phases.\nA great resource to gain a basic understanding of the exam is the PMI Exam Content Guide.\nIn this blog let’s discuss the back story of the exam, what to expect on test day, and the benefits of the exam.\nWhat is the PMP exam?: The Back Story\nIn explaining the PMP exam, I think it is necessary to understand two very important resources that provide significant input to the exam: PMI and A Guide to the Project Management Body of Knowledge (PMBOK).\nPMI or the Project Management Institute is the governing body for the exam. They ensure that the exam is kept relevant and in alignment with the current edition of the PMBOK. Additionally, PMI approves all applications for individuals to complete any of the certification exams – including the PMP.\nThe PMBOK is the holy grail of project management for the PMP exam. The exam is created based on the content within the PMBOK guide. 
Although the PMP Exam Guide notes outside resources as contributing to the exam material, a significant amount of – or I would argue the only – content you need to understand is within the 550 pages of the PMBOK.\nWhat is the PMP exam?: Exam Day\nThe PMP exam is a set of 200 multiple-choice questions that you must complete in 4 hours. The exam contains 175 graded questions and a random set of 25 sample questions PMI is testing for future use.\nThe downfall: you do not know which is which, so it is better to prepare for all 200 questions! The exam assesses your understanding of project management as it relates to the PMBOK. What I mean is, it assesses your understanding of project management in a perfect world.\nSome questions involve a level of trickery, such as including the word NOT, which can quickly lead test takers astray. Speaking as a former test taker, I can say that those who know the PMBOK and use good test-taking strategies will do well.\nWhat is the PMP exam?: The Benefit\nThe PMP is a highly regarded certification.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "Management Academy’s 4-day PMP® boot camp has a 99% pass rate. 100% money-back pass guarantee. Courses never cancelled (unlike most).\n27 items found\nIf you are a practicing project manager and want to get the coveted PMP certification from PMI, your wait is over. Management Scholars Academy has designed an exclusive 4-day PMP boot camp to help you appear for the PMP exam with confidence.\nMANAGEMENT SQUARE is a project management company providing both consulting and professional education in the project & program management fields. 
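The exam-day numbers quoted above (200 questions in 4 hours, of which 175 are graded) imply a pacing budget that is worth working out once. A minimal arithmetic sketch (the variable names and the 30-minute review reserve are my own assumptions, not from PMI):

```python
# Pacing budget for the 200-question, 4-hour PMP exam described above.
TOTAL_QUESTIONS = 200       # 175 graded + 25 unscored pretest items
TOTAL_MINUTES = 4 * 60      # 240 minutes

seconds_per_question = TOTAL_MINUTES * 60 / TOTAL_QUESTIONS
print(seconds_per_question)          # 72.0 seconds per question

# You cannot tell graded questions from pretest ones, so budget for all 200.
# Reserving 30 minutes at the end for review tightens the pace to:
with_review = (TOTAL_MINUTES - 30) * 60 / TOTAL_QUESTIONS
print(with_review)                   # 63.0 seconds per question
```

In other words, just over a minute per question, which is why the trickier NOT-style questions mentioned above are dangerous: they consume time as well as accuracy.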
Our aim is to accompany our clients in managing their projects effectively and efficiently.\nWith strategy-savvy managers and leaders needed in nearly every type of organization in every sector and industry, now is the time to expand your strategic leadership and management skills.\nThis completely online and self-paced project management program provides comprehensive preparation for the PMP® certification exam, including exam-taking tips, ten comprehensive module quizzes, and two full-length, 200-question practice exams.\nMonadnock Education, LLC combines the best value in individual online PM training with expert consulting services to make your career and organization triumph.\nPMI-SP course delivered by Mentored Email - available worldwide. We have successfully trained students from around the world to pass their PMI-SP certification. Course leader Patrick Weaver, PMI-SP, PMP, will help you be successful in the exam.\nMount Royal University is a Global Registered Education Provider of the Project Management Institute (PMI®) offering a Project Management Certificate in both classroom and online formats. 
An Applied Project Management Certificate is also available.\nLearn how to understand the many features of Microsoft Project 2013 and how best to use and apply them to workplace projects.\nMS Project 2013 - Best Practices Training Course - Learn How to Use and Apply the Features to Workplace Projects
Each chapter in the course manual contains several pages of background and review material followed by a set of extensive review questions that mimic the questions encountered on the actual PMP® exam. The review questions in each section are followed by answers.\nAn introductory section is included in the course manual that addresses general information about the PMP® designation and the examination. A second section addresses an overview of the framework and organization of the Project Management Body of Knowledge (PMBOK®). A closing section is also included in the manual to address PMP® exam questions related to Professional and Social Responsibility – a section covered on the PMP® exam, but not a “knowledge area” in PMBOK®. The other sections of the manual address specific knowledge areas in the PMBOK®.\nPDU Credits: 24\nThis program is designed for professionals who are or will be responsible for the management of projects, members of project teams, sponsors of projects, key project contributors, and project management office (PMO) staff.
The exam is based on best practices as determined by the Project Management Institute and not necessarily what is important to your role or organization.\nThus the approach you take to prepare for the test is critical to your success in achieving this widely-regarded and respected project management certification.\nThis webinar focuses on the essentials needed to pass the exam and targets those essential nuggets of information that are beyond the study content of the test itself.\nWhat You Will Learn\nInstead of reviewing the PMBOK® content, this webinar will focus on preparing for and then taking the exam itself. In a nutshell, you’ll get a better understanding of the requirements for PMP certification and review a handful of strategies you can use to prepare for the exam.\nFor instance, you’ll see why it is so important to use current information in the study material and never let your past experience answer for you on this exam.\nFor 90 minutes, the course covers:\n- Essentials needed to apply for the exam, as well as the essentials needed to pass the exam\n- A timeline of important milestones to accomplish before your test date\n- Advice for overcoming your fear of the “math monster.” You’ll get quick tips for memorizing the basic formula for Earned Value and other critical math formulas needed to solve exam problems\n- The psychology of test taking, which is extremely important in this exam. The webinar will approach the psychology of the questions you’ll face, giving you approaches for understanding them\n- Ways to approach test questions by using examples. 
You’ll review questions from the PMBOK, as well as industry-standard questions, so you understand the meaning of “body of knowledge.”\nArmed with these tools and tactics, you’ll give it your best shot to take the PMP exam and pass it on your first attempt!\nWho Should Attend?\nEveryone preparing to take the PMP exam should consider fine-tuning their test-taking skills with this webinar.\nHow Do AMA Webinars Work?", "score": 21.81357519584069, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "PMP Test Prep\nProject management study tips, tricks and methods to prepare\nThank you for visiting this page, in here you will find preparation info on how to study for your Project Management exam, CAPM, PMP or other similar types of exams such as Microsoft Project, AGILE, SCRUM, ITIL.\nGet the right PMP Training today with confidence\nIs test preparation for the PMP Certification getting on your nerves? Are you becoming too stressed and uncomfortable about the fast approaching exam?\nWe did all the research for you and pooled multiple Project Management preparation materials together for your review\nDo you think there is no more hope in finding the best PMP test prep material? You are probably reading this now because your venture for the most important examination in your project managerial career brought you here.\nReview a variety of test prep materials to see what study guides are best for you\nWhy not consider purchasing a study guide to help you with your test? There are many benefits you can get from study guides and here is the best place to find them.\nLearn from affordable, comprehensive resources\nStudyGuide.net provides the most affordable and most comprehensive study help for PMP. 
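The webinar outline further above mentions memorizing the basic formula for Earned Value and other exam math. As a sketch, the conventional earned-value identities (these are the standard EVM formulas, not material taken from the webinar itself, and the sample numbers are invented) fit in a few lines:

```python
def evm_metrics(pv, ev, ac, bac):
    """Standard earned-value formulas: PV = planned value, EV = earned value,
    AC = actual cost, BAC = budget at completion."""
    cpi = ev / ac                  # cost performance index
    spi = ev / pv                  # schedule performance index
    return {
        "CV": ev - ac,             # cost variance (negative = over budget)
        "SV": ev - pv,             # schedule variance (negative = behind)
        "CPI": cpi,
        "SPI": spi,
        "EAC": bac / cpi,          # estimate at completion (assumes CPI persists)
    }

# Invented example: planned 100, earned 90, spent 120, total budget 1000.
m = evm_metrics(pv=100, ev=90, ac=120, bac=1000)
print(m["CPI"], m["SPI"])          # 0.75 0.9 -> over budget and behind schedule
```

With CPI below 1.0 the project is over budget, and EAC = BAC / CPI forecasts the final cost if that efficiency persists, which is the typical exam assumption.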
Study guides are comprehensive and well-written printed test preparation publications that help test takers achieve a satisfying score on the actual test by pulling the necessary information from relevant resources and combining it with the years of experience of the most acknowledged test researchers and writers.\nLearn from different study resources\nContent seen on this site provides value for future project managers and is jam-packed with tons of helpful information, from the test format and the essential competencies you need to develop, to appropriate test preparation methods to keep you concentrated, how you can properly answer the test questions, other important things to remember and more.\nLearn from different study resources\nEach study help for PMP has worksheets and modules to train your mind and test your knowledge. It is like a simulation of the actual test, allowing you to apply what you have learned throughout the study guide.\nBest PMP study resources\nWe only offer the best possible resources and always assure you of the highest possible quality test preparation materials for PMP Certification takers.\nPrepare for your PMP Exam today\nEarn certification, boost your career\nEarn for yourself a widely recognized certification to boost your career as a project manager; it will not only give you employment advantages but also boost your morale and self-esteem as a respected project manager in the industry.\nStop wasting time and get a PMP test prep book today! Unleash your inner potential now and be the best project manager you can be!", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-3", "d_text": "(Non-overlapping means that if you managed two projects in the last year, that only counts as 12 months and not as 24.)\nDo you have a four-year college degree or its equivalent in your country? 
Then you must have a minimum of three years (36 months) of unique, non-overlapping professional project management experience, of which at least 4,500 hours must have been spent leading and directing project tasks.\nBecoming a member of PMI makes sense, because as a PMI member you will not only get a free PDF version of A Guide to the Project Management Body of Knowledge (PMBOK® Guide) but you will also get a considerable discount on the PMP® Exam. In fact, the discount is bigger than the membership fee! So even if you do not want to remain a PMI member forever, becoming a member in your first year makes sense.\nThe PMBOK® Guide is the primary reference used on the PMP Exam. Most trainers estimate that the correct answer to about 75 percent of the questions on the PMP Exam can be found in the PMBOK® Guide. Therefore, you should know it inside out, and the best way to learn it is to study it at least twice.\nThere are many excellent self-study courses and PMP preparation books available. We offer our own PM PrepCast for PMP Exam Prep, and you can find the books on Amazon or in your local bookstore. These study resources will teach you the 25 percent of additional material that you cannot find in the PMBOK® Guide.\nWhen it comes to free sample questions, you get what you pay for. Free questions are good for getting an idea, and most people start there. However, you should pay for \"real\" questions from a reputable online PMP Exam Simulator. Practice questions are also available in books, so you can go down to your local bookstore and take some time to look through the questions before you decide which book to buy. However, only a simulator will let you test yourself under exam-like conditions - you can take a complete, 4-hour exam at the computer. 
Just like the real thing.\nStudying for the PMP Exam is a serious undertaking and requires personal dedication.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-0", "d_text": "The Project Management Professional (PMP) certification is projected to be one of the most in-demand certifications in 2016 and holders of this certification earn 17% more than non-certified peers on average! It goes without saying that this certification can take your project management career to new heights!\nIf you are working towards becoming certified, you know that a lot of studying, experience, and hours go into becoming a PMP. Regardless of your experience in project management, studying for the PMP exam is crucial for your success. On average, people who pass the exam have dedicated at least 35 hours to studying the principles of project management. Keep in mind that even if you have years of experience, the things you have learned in the real world might not transfer to a multiple-choice test.\nAdvice from a Professional\nWe know that preparing for such an esteemed credential can be a challenge, so we have gathered some quick tips from a well-respected PMP certification holder and instructor, Vincent McKeown. McKeown has over 15 years of experience as a Project Manager and knows what it takes to pass the PMP exam.\n1. Know the material well! It is important to dedicate time to studying.\n2. Know the inputs, outputs, and tools of all 47 processes.\n3. Have a good understanding of the organizational framework.\n4. Know all the math formulas... they are the easiest to get right.\n5. Know how to do a forward/backward pass. Another set of easy questions to get right.\n6. Create a Brain Dump.\n7. Take a bunch of practice exams. 
You can know the entire material well, but still get confused by how PMI phrases its questions.\nMore About the Exam:\n-200-question, multiple-choice test.\n-4 hours to take the exam.\n-Must meet the prerequisites and educational requirements set forth by the Project Management Institute: Secondary degree (high school or associate’s) and 7,500 hours leading projects OR four-year degree and 4,500 hours of leading projects. Professionals seeking certification must also have completed 35 hours of project management education before they qualify to take the PMP exam.\nLearning Resources Available to You:\nPMP training courses in the D.C. Metro area are available to you and provide a more structured approach to studying for the exam. One of the biggest benefits of taking a course is that the trainers are experienced in helping people just like you pass the exam.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "Pls, I started preparations for my PMP, but don't know whether to read the PMBOK 1st or other pmp prep books. Need ur advice pls help.\nAlso would appreciate if any one can share Rita's 6th or 7th edition with me. My id is :firstname.lastname@example.org.\nSat, 12/17/2011 - 17:36\nTanx to Nancy, am very greatful for sharing ur PMP prep book with me. Also a big tank u to pmzilla forum for all d tips.\nMon, 12/26/2011 - 11:13\ni have pased pmi exam. i did preparation from shoptraining.com study guides\nMon, 10/16/2017 - 14:44\nThanks for starting the PMP preparation website that contains pretty impressive helpful material for the students. The students who want to get good marks can find lots of information and helpful stuff here. 
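Tip 5 in the list further above says to know how to do a forward/backward pass. A minimal sketch of that calculation on a made-up four-activity network (the activities, durations and dependencies are invented for illustration):

```python
# Forward/backward pass (critical path) on a toy network; data is invented.
# Each activity maps to (duration, list of predecessors), listed in an order
# where every predecessor appears before its successors.
activities = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

# Forward pass: earliest start (ES) / earliest finish (EF).
es, ef = {}, {}
for name, (dur, preds) in activities.items():
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur
project_end = max(ef.values())

# Backward pass: latest finish (LF) / latest start (LS), walking in reverse.
ls, lf = {}, {}
for name in reversed(list(activities)):
    dur, preds = activities[name]
    succs = [s for s, (_, ps) in activities.items() if name in ps]
    lf[name] = min((ls[s] for s in succs), default=project_end)
    ls[name] = lf[name] - dur

# Total float = LS - ES; zero-float activities form the critical path.
critical = [n for n in activities if ls[n] == es[n]]
print(project_end, critical)        # 8 ['A', 'C', 'D']
```

The same mechanics answer the exam's float questions directly: here activity B has total float ls["B"] - es["B"] = 2, so it can slip two time units without delaying the project.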
I would like to recommend your professional resume writing services reviews website on social media as well for others.\nPMI, PMP and PMBOK are trademarks registered by Project Management Institute\nCopyright © 2018", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "You may contact us for more information on PMP Certification or to join a FREE online Seminar via ZOOM to learn detailed information about PMP Certification so that you can explore the potential career benefits of PMP Certification.\nPlease register by filling out a Google Form via the link below:\nOr you may also SMS “PMP INFO” to 0167773422 for the Zoom link or Call/SMS 01677734222 (WhatsApp) or visit our Facebook Page.\nResource Personnel: All of our PMP Certified Instructors have an engineering background from reputed universities, with at least 15+ years of experience in Project Management with leading business organizations in Bangladesh.\nFREQUENTLY ASKED QUESTIONS ABOUT PMP Certification:\n1. Is it mandatory to take Authorized Training?\nAns: NO, absolutely NOT. You need to take a minimum of 35 hours of training on the PMBOK (6th Edition) from any individual holding PMP Certification or from any organization, including your own HR Department!\n2. Can I take the EXAM online from anywhere?\nAns: Yes, due to the Covid situation PMI is offering a proctored exam and you can take the exam from anywhere if the exam environment meets the necessary prerequisites. Generally PMI conducts the PMP exam via Pearson Authorized Testing Centers.\n3. Do I study PMBOK-6 or PMBOK-7?\nAns: You should consult PMBOK-6 with Agile until PMI declares that the PMP Exam will be mapped to PMBOK-7.\n4. What types of questions do I need to answer in the PMP Exam?\nAns: There will be 180 questions of different types, including MCQs (some requiring multiple options to be chosen), fill-in-the-blanks (similar to MCQs), hotspot questions, etc. 
You will have 230 minutes to answer all questions and there will be NO negative marking.\n5. How much do I need to score to pass the PMP Exam?\nAns: PMI never discloses the required passing score, as the weight of the questions varies. Recommendations from experts suggest that you must score at least 70% to remain in the passing zone.\n6. How much does PMP Exam registration cost?\nAns: PMP Exam registration charges are the same across the world. It depends a bit on PMI membership.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "(Video) Exam Profile: Project Management Institute Project Management Professional (PMP)\nThe PMP certification exam tests applicants for comprehensive project management knowledge and an in-depth understanding of the PMBOK. Find out what you can expect to see on the exam and how you can better prepare for it.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "PMI has specified prerequisites and exam requirements that need to be fulfilled prior to scheduling the PMP® exam. For information about these requirements, visit https://www.pmi.org/certifications/process.\nFor PMP certifications, the test will occur at one of Prometric’s worldwide testing sites. A participant can schedule the appointment online at Prometric.com/PMI using their eligibility number. For more information about the PMP exam and certification process, visit https://www.pmi.org/certifications/process.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "The Project Management exam is one of the hottest certification exams today. It is one of the most challenging project management certification exams to prepare for. 
This is because most of those thinking about taking this exam are at the stage of their career when they are already working full time and then have to find the time to study for their certification.\nIf you think that the PMP Certification exam is your average college test where you can cram yet still get high marks, then think again. The PMP Exam is anything but easy. It is an experience-based exam in a 200-question, four-hour computerized format. When you are studying for the exam, you can answer the sample questions easily enough in the comfort of your own room with no ticking clocks, no distractions and no security cameras pointing at you. However, during the actual examination, you will find yourself in a radically different setting.\nThink of it as the battleground and you as the soldier. Any good soldier would create a battle plan before the exam, knowing that planning can spell the difference between passing and failing. You have to formulate strategies for how to answer and review the questions, how to ease the tension from your body and how to replenish your energy. Your battle plan will serve as your guide during the exam and will help you focus on the task ahead of you. With a battle plan, you will be able to work through your exam knowing that you have everything under control and can make the most of the time allotted to finish the exam.\nThis forum is for members to share and gain knowledge of Project Management. Got a question about project management? Need help with a problem? Wish to offer tips and advice? Post here.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "I passed my PMP exam with an Above-Target score in all 5 domains. 
And in this post I would like to share my strategies for how to pass the PMP exam with the best possible score.\nTowards the end of this post you can also download the process mapping sheet I created for ready reference. This, I hope, will help PMP students.\nMy PMP Exam Result Report\nI am a postgraduate in Project Management and I always had this urge to get a global certification in project management.\nTo be very honest, my desire to obtain PMP certification was motivated by my wife, who, after coming to know about my goal, began to motivate and push me to take time out and study for the PMP exam.\nAlso Read: Planning Guide, how to pass PMP before the PMBOK-6 exam kicks in\nMy PMP study resources\nI realized that there are way too many prep resources, and for a brief period this led to a sense of overwhelm.\nSo I decided to keep it simple. I brought my study resources down to just 3 items –\n- PMBOK 5th edition\n- Rita Mulcahy 8th edition\n- PMExamSmartnotes exam notes on each knowledge area (free version here, advanced version here)\n- I also used mock tests to get exam experience; I shall tell you the ones I used in a bit.\nIf you are in the initial stages of your PMP preparation, I would strongly suggest researching study resources online and locking down the top 2-3 that you would like to use. Trust me, this will save you so much time, avoid confusion, and make it easier overall.\nMy PMP study approach & study plan\nI allowed myself 6 weeks to study. During this time I read the PMBOK and Rita books twice – once in detail and a second time as a quick revision.\nAfter finishing my first round, I started taking full-length mock tests during my weekly off days.\nThis allowed me to note gaps in my understanding and so I kept optimizing my study this way.\nI practiced 4 mock tests before the exam. My effort was to maintain continuity in study and understand PMI’s way of managing a project as far as possible. 
I maintained a list of topics which I felt should be revisited.\nI strongly believed in ‘understanding’ rather than ‘memorizing’, except for some formulas.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-1", "d_text": "The exam consists of five different parts, including how to initiate the project, how to plan the project, how to execute the project, how to monitor and control the project and how to close the project. Upon passing the examination, you are awarded your PMP credentials.\n- Exam application\n- Examination fee\n- Start studying for the PMP examination at least four to six months before test time. Use the PMBOK Guide to help prepare yourself for the exam.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "If you don’t already know, the PMP certification is one of the most sought-after professional credentials among managers and business leaders. Tech Republic has ranked the PMP as a Top 10 Best IT Certification for four consecutive years. According to the Project Management Institute (PMI), the organization that offers the PMP certification, the exam to earn the PMP designation is one of the most challenging in the industry with a 40 percent failure rate. The cost of the exam is a sizable investment as well. It can easily cost several thousands of dollars when combined with preparation training programs, extra reading resources and additional study materials. If you are seriously considering the PMP certification, it’s important to understand the commitment involved in terms of time, energy and money.\nThis Q&A with Mr. Usmani will offer you valuable insight into test-preparation strategies and best practices when preparing for the exam.\nWhen did you earn your PMP credential?\nI passed my PMP exam on Dec. 
13, 2010.\nHow did you prepare for the exam?\nI began the PMP certification process without ever having referenced a sample PMP preparation book or the PMBOK guide. I was starting from ground zero. So in December 2009, I attended live classroom training to earn the 35 contact hours training program certificate.\nDid that adequately prepare you to take the PMP exam?\nNo, it did not. My lack of preparation was embarrassing. It was difficult for me to absorb the concepts being presented and there were also instances when I couldn’t participate in the discussion because I just didn’t have enough baseline knowledge.\nSo, you recommend students do some advance preparation before attending a live training program?\nYes, otherwise you will not get much out of the training and may feel uncomfortable.\nHow do you suggest students prepare?\nDuring the training program I was given the Head First PMP Exam preparation books. Initially, I did not like the book; however, as I started reading it, I found its approach to be very easy and engaging. I strongly suggest getting the book to understand basic project management concepts.\nBeyond this, it’s important to make time to prepare for the exam. I personally experienced a few ups and downs during my preparation. There was a time when I lost my enthusiasm, which paralyzed my studying. But the clock was ticking. 
There are time-sensitive eligibility requirements so you can’t procrastinate.\nFinally, I got serious.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "PMP Vce and Pdf & Dumps, 100% pass rate PMP Exam Free | GetItExam\nPMP Exam Questions\nSale Latest Release PMP Vce Software with PDF and VCE Engine MPPC.\nAt night, every night I had to speak somewhere which was bad and to listen to the speaking of others which was much worse.\nShe was quiet and serious and spanish, she had the square shoulders and the unseeing fixed eyes of a spanish woman.\nWe wandered about the place, suggesting to each other causes for the misery we saw there, and, while I was still among the ruined walls and decayed beams, I fabricated the plot of The Macdermots of Ballycloran.\nHowever that was the matter as it was and as it continued to be.\nHe had been in China and he was later to live permanently in the South Sea islands after he finally inherited quite a fortune from his great uncle who was fond of Milton s Paradise Lost.\nI intended to write that book to vindicate my own profession as a novelist, and also to vindicate that public taste in literature which has created and nourished the profession which I follow.\nHer step father being an englishman Constance became passionately an english woman.\nFinally I read it all and was terribly pleased with it.\nBut before we parted with our property we found that a fortnightly issue was not popular with the trade through whose hands the work must reach the public and, as our periodical had not become sufficiently popular itself to bear down such opposition, we succumbed, and brought it out once a month.\nIf the former reflection does not suffice for consolation, the deficiency is made up by the 
second.\nThey cared nothing for my doctrines, and could not be made to understand that I should have any.\nIt was not within his nature to be communicative, and to the last he never told me why he was going to Ostend.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-2", "d_text": "The head teacher at the training institution divided exam preparation into three stages, intensive lectures, review lectures and a final sprint, with study plans arranged weekly; I made some adjustments based on my own grasp of the material and my weak areas.\nThe recommended schedule above is based on this; I offer it as a reference and hope it is helpful.\nAbout | Preparing before the PMP exam\nOn the last Saturday before the exam, set aside time to do a simulation test.\nA few days before the test, because of a chickenpox outbreak at the office, combined with several days of body temperature hovering around 37.2°C (a low fever), I suspected that I had caught it too and was nervous and panicky, but fortunately my mind stayed clear.\nBecause the PMP exam starts at 9 o'clock and runs a full 4 hours for 200 multiple-choice questions, doing a full-scale simulation before the test (the simulation requires a score above 135) to get familiar with the pace of the exam and settle your mindset is very necessary.\nIt is recommended to arrive at the examination room before 8.30am on the day of the exam; you do not need to bring any exam supplies, as the exam seat comes with a calculator, pencils, erasers, and drinking water.\nAbout | Results are announced\nWait patiently for four weeks; the results will be sent by PMI to the mailbox you registered with, whether you pass or not.\nOnce you receive the email, you can go to the official website (https://ccrs.pmi.org/login/index) to see the detailed results (like those at the beginning of this article) and download the e-certificate, which will take another 7 
months or so.\nBy the End | The last one\nSince starting work, I have rarely had such a stretch of time forcing myself to study; in November especially I was truly studying to the point of exhaustion and did not dare take time off to play.\nLooking back, the preparation experience was well worth it: the reward of passing the exam and earning the certification.\nMore than the exam itself, though, the short preparation period taught me the American way of doing project management, its ideas, and how to understand and apply them.\nThank you to everyone who helped during the preparation for the exam.\nThanks for not giving up on yourself. Finally, I wish you all success in passing the PMP exam!", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Project Management Professional (PMP) certification is the global standard for project managers.\nIt is the most recognized credential provided by PMI, the leading association of the project management profession.\nWant to Earn Six Figures? Become a Project Manager\nThe PMI Project Management Salary Survey—Seventh Edition shows that certification positively impacts project manager salaries.\nContact us to find out how you can earn your credential and join the six-figure club!\nLearn more about PMI and the PMP Credential via the resources below.\nPMP Certification Information\n“Intellectual growth should commence at birth and cease only at death.”\n- Albert Einstein\nSimplilearn - A complete training source for the PMP Exam\nPMP Exam Sample Questions\nPlease use caution on the information provided from third-party sources. We encourage a diverse knowledge base, but can't vouch for all the questions from these dynamic sources.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-3", "d_text": "To get certified, individuals in the project management domain take an exam offered by PMI (Project Management Institute), USA. There are over 325,000 certified PMPs in 200 countries.\nEducational requirements.
Experiential requirements. Agree to and adhere to the PMI Code of Professional Conduct. Pass the PMP Certification Examination.\nThe candidate should have attended at least 35 hours of classroom/online training in Project Management.\n- Minimum of 4,500 hours of project management experience, during the last 8 consecutive years, covering the 5 process groups, if the candidate holds a university degree at the time of the application.\n- Minimum of 7,500 hours of project management experience, during the last 8 consecutive years, covering the 5 process groups, if the candidate holds a high school diploma or equivalent secondary school credential at the time of the application.\nPMI made a decision in 2006 to no longer publish passing scores for its exams. In 2007, PMI also removed all quantitative elements from the post-exam review for test candidates. The passing score is estimated to be in a range between 61% and 75%.\nThe exam has 200 multiple choice questions. Each question has exactly one correct answer. You will get 4 hours to answer these questions. 25 pre-test questions will be randomly placed throughout the examination to gather statistical information on the performance of these questions, in order to determine whether they may be used on future examinations. These 25 pre-test items are included in the 200-question examination, but will not be included in the pass/fail determination. Candidates will be scored on 175 questions.\nIt is highly recommended that you become a PMI member prior to signing up for the test. The membership fee is $129 and an application can be submitted online at www.pmi.org. If you are a PMI member, the exam fee is $405. For non-members, the exam fee is $555.\nPMI has an online application for certification. More information regarding applying for the exam online is available at PMI’s website at www.pmi.org.
The PMP Credential Handbook is also available in PDF format on the PMI website.\nPMI states that all eligible applications are subject to an audit. Upon successful completion of the audit, candidates will be able to sit for the PMP examination.", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-1", "d_text": "Though you can improvise on your plan, deviating from it should be avoided as far as possible.\n- Time Management\nTime management for preparation and during exams are two different aspects that need to be managed. Most PMP aspirants tend to be working professionals, and it is important for them to manage their work schedules and carve out time to study. I would suggest at least 2-3 hours on weekdays (the best time I can suggest is in the morning before you go to the office), and at least 5-6 hours on weekends.\nFor management of time during the exam it is important to note that you get approx. 1.2 minutes for answering each question. Once the question flashes on the screen, take a quick read of the question and judge whether you will be able to give the correct answer; if not, mark it for review and move forward. Always try to attempt the easy questions first and then come back to the difficult questions later.\n- Preparation Material\nFor clearing the PMP I would suggest thoroughly going through Rita Mulcahy’s book. Though the PMBOK guide is prescribed by PMI, I found it difficult to understand and relied heavily on Rita’s book during my entire preparation phase. I would also suggest going through the PMBOK once to build the basics of project management.
The majority of the questions in the PMP are scenario based, and Rita’s book is the best for answering those questions, as it provides better clarity of concepts and a better view of project management concepts and their applications in real life.\n- Practice tests / Mock Tests\nPMP exam questions can be broadly classified into three categories:\n- Scenario based questions (lengthy questions, checking your conceptual clarity and application thereof)\n- Formula based questions (related to EVs, EMV, etc.)\n- ITTO based questions (testing your knowledge of the tools and techniques in project management)\nI received a lot of scenario based questions in my PMP exam and very few formula or ITTO based questions. Though it might vary for others, it would be best to practice a lot of scenario based questions. Your enrollment with ‘Pro Thoughts’ would guarantee you access to 4 mock tests. Still, I would suggest going on the internet and searching for practice questions. Though the reliability of such free practice tests is questionable, they will get you acquainted with PMP exam questions. The more questions you solve, the more comfortable you will be in your actual PMP exam.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-1", "d_text": "Also download updated PMI demo exam papers before purchasing updated PMI practise questions.\nPMI from Test kings demo exam questions online provide you the opportunity to download samples of the PMI testing engine, PMI updated lab scenarios, and PMI online bootcamps before purchasing them. We provide you the best online Testking's PMI classroom. Along with this, use Testking PMI practise questions online to clear your PMI cert with ease. With Testking PMI latest practise tests you are ultimately going to get great marks with no chance of failure. Testkings PMI free exam dumps as well as the best updated PMI exam engine help you a lot.
Get through your PMI exam easily by using the updated PMI from Test kings practice tests and online PMI exam questions. The PMI from TestKing's online interactive testing engine and PMI training camps online give you a 100% passing guarantee. The latest PMI from Test king questions and PMI quiz eliminate the chances of failure. Pass the PMI test in just a few days with the awesome PMI from Test kings latest lab simulation and PMI from Test kings online exam brain dumps. Prepare for your PMI cert with our updated PMI from Test kings test braindumps and pass the PMI exam as quickly as you want to.\nThe Test King PMI online audio guide is the perfect solution for PMI exam preparation materials on the go. Just plug in your headphones and enjoy the online PMI book with your PMI audio exam online while you are commuting. The online PMI from Testkings lab situations take you right into the heart of the PMI test. This way not only does your morale get a boost, but you also learn to manage time in your PMI certification.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-2", "d_text": "Check out PMP Certification Training for further details.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "Administered on a secure platform using proven exam delivery technology, the online exam is exactly the same as the version administered at a test center: the same quality, the same questions, and there’s even a live proctor.\nThe only difference? You can take it in your pajamas.\nWith 24/7 testing options to accommodate your schedule, you can take the exam day or night. All you need is a:\n- computer with a webcam\n- reliable internet connection\n- quiet space where you can spend a few uninterrupted hours\nThere’s nothing standing between you and the PMP. You’ve put in the work.
Now see it pay off.\nFor more information, please click here.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-5", "d_text": "The CPM program consists of four multiple-choice exam modules: Module 1: Purchasing Process (identical to APP Module 1), Module 2: Supply Environment (identical to APP Module 2), Module 3: Value Enhancements Strategies, Module 4: Management\nThe APP program consists of two multiple-choice exam modules: Module 1: Purchasing Process (identical to CPM Module 1), Module 2: Supply Environment (identical to CPM Module 2). Click here for more details.\nPMP Revised Syllabus Study Guide 2008 Edition: To be eligible for the PMP certification, you must first meet specific education and experience requirements specified by the PMI and agree to adhere to a code of professional conduct. The final step in becoming a PMP is passing the PMP credential examination.\nThe PMP examination consists of multiple-choice questions that are designed to measure your comprehension of the newly revised BOK: Initiating the Project - 11%, Planning the Project - 23%, Executing the Project - 27%, Monitoring and Controlling the Project - 21%, Closing the Project - 9%, Professional and Social Responsibility - 9%. Click here for more details.\nThis BOK is just a new presentation of the same old PM framework, with some small changes in the actual contents (let's face it; there hasn't been any major breakthrough in the field of PM in recent years).\nIn the eyes of PM professionals, PM is not just PM alone. Instead, PM is an art and science that spans multiple disciplines. This PMP Study Guide enhances your exam readiness by covering the various PM theories and techniques as well as providing you with readings on all the above disciplines, just to ensure that you won't get caught unprepared.
Click here for more details.\nPRINCE2 Foundation Exam Study Guide 2008 Edition & 75 Practice Questions: You may think of Prince2 as a process-based PM method which follows a well-structured framework. It describes procedures to coordinate people and activities in a project, how to design and supervise the project, and what to do if the project has to be adjusted because it does not work out as planned.\nAs a process-driven PM method, PRINCE2 has become the de facto standard for project management in the UK and in many other countries. Certification can be obtained by passing the multiple-choice-based Foundation exam and the essay-based Practitioner exam.", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-1", "d_text": "How Project Managers Can Use Microsoft OneNote\nMost of the places where you'll go to get the required 35 contact hours will push you toward a core set of study materials from one provider. Common providers include Andy Crowe and Rita Mulcahy, among others. It's important to review study materials used by the various places where you can get your required education hours, and ensure that you work through an establishment using study materials that you feel will best suit your learning style.\nThe contact/education hours themselves also come in different flavors. You can choose to work through an online course and work from the comfort of your home; you can opt for a three-day bootcamp where you spend roughly 12 hours a day working through the materials; you can choose to spend all of your Saturdays for a month in a classroom; or more. Again, make sure you go through a class that will fit your learning style.\nWhile the PMP exam is based on the concepts put forward in the PMBOK, the PMBOK is a dense read, and it doesn't do a great job of laying out how the concepts work in practical application.
This is why it's important to find a study book geared to your learning style that takes the concepts from the PMBOK and shows how those will work in the real world.\nIndex cards, purchased or self-made, are almost necessary to help learn all of the terminology you'll need to have a firm grasp on for the exam. There are also a lot of great flashcard apps for smart devices you can purchase or access for free.\nIf you can afford it or access it for free through a library, Rita Mulcahy's PM FASTrack exam simulation software was the best tool I used in preparation for the test. There was a large pool of questions, and the way the questions were presented in the application matched the exam almost perfectly.\nAfter you're comfortable with the material, you really want to hit the practice exams hard. There are a variety of places where you can get access to high quality practice exams:\n- Work: One of the easiest places to check for practice tests is any library of educational materials you have access to at work. You may have access to a wealth of PMP study materials and practice tests and not even realize it.\n- Library: Your local library can be another goldmine. It's a good idea to see what books and software they have access to that could help you on your journey.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-0", "d_text": "16 Sep 2020\nPgMP users passed this exam.\nAverage Score in Real PgMP Exam in Testing Centre.\nQuestions came from Dishut Material.\nIf you want to clear PMI PMI Certification PgMP exam, then you must look for a reliable PgMP pdf dumps so you can prepare for the exam. There are lots of options out there. However, Dishut is providing high quality and validated exam dumps that will help you prepare for the Program Management Professional (PgMP) PgMP exam. With the help of dumps pdf provided by us, you will be able to get guaranteed success and we are also providing a money-back guarantee on all of our products. 
If you are using PgMP questions pdf provided by us, then you will be able to pass PMI Certification Program Management Professional (PgMP) exam on the first attempt.\nAs we all know, superior PgMP certification training materials are very essential to a candidate, The most reliable PgMP valid dumps are written by our professional IT experts who have rich experience in the PgMP practice test, The PgMP certification lead you to numerous opportunities in career development and shaping your future, PMI PgMP Test Cram Review Your financial information is also safe with us as we care about our customers.\nHe ordered him to go and saddle two horses in M, Sensational hints of a Reliable 200-901 Practice Questions Labour coup d'état were freely reported, Sunlight is the life-blood of Nature, Well, if there were an outsider, he may be traced and taken.\nAt intervals along the caravan route, specially built water https://pass4sure.actual4cert.com/PgMP-pass4sure-vce.html towers spewed wide streams of water across the road for twenty yards, My wife did, at the very moment when you came in.\nShe could stay as she was, and eventually become a guide, herself, Footnote 743:(return) 500-443 Vce Torrent \"Ernesti conceives that the colour is here maintained to express, not merely the shining aspect, but the newness of the metal; as λενκὸν in 268.\nIn this way, you can have deeper understanding about what kinds of points will be tested in the real test by our PgMP updated study dumps, thus making it more possible for you to get well prepared for the targeted tests.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "BUY PMP CERTIFICATION ONLINE. We are proud to offer you a specialized service for IT professionals that are interested in having the certification process dealt with quickly, and without any problems. 
We will help you to pass your Microsoft Certification – MCSA, MCITP, MCSE, CompTIA Certification, CCNA, CCNP, CCIE, Vmware, Apple, Avaya, Ciw, Citrix, Sun, Juniper, blackberry, Oracle, Java, Nortel, IBM, HP, EMC, Novell, Nokia and Many more.\nYour exams will be passed at a partner authorized Prometric or Pearson VUE testing center by one of our contracted test administrators. When the exams are finished, you will be able to verify your certification status on the official vendor’s website and receive your certificate at home, exactly the same as if you had done the exam yourself.\nPassing Without Any Exam\nOur professionals will help you to take the exam in VUE/Prometric, and you have to pay us only after checking the results on the vendor's official website. A partial payment option is also available, which makes our services more flexible than others offering the same services.\nPROCEDURE FOR GETTING CERTIFIED FROM US\nPlease go through the links given on this web site to browse all the certification providing companies and all the certificates which we can offer you. Once you decide which certification you want to go for, please note down the Test code for it. BUY PMP CERTIFICATION ONLINE.\n2. Email your details at email@example.com\na. First and Last Name\nb. Telephone Number along with Country and area code\nc. Address with Postal Code\nd. Test Code\n3. Make the payment\nOn receiving your details we need to process your registration for the exam, and for this you need to make the payment. After receiving the details we will reply with our details for sending us the money.\nOnce your payment gets cleared we will get the examination done by our experts within 5-10 working days.\n5. Check your score\nOnce your examination gets through we will mail you the test scores along with your ID.
You can use this ID to log in to the official websites of VUE or Prometric and check the test scores yourself.\n6.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-5", "d_text": "All the PBT exams scheduled on or after 1st July 2019 will be shifted to the Pearson VUE test center location.\nTo schedule your PMP® exam with Pearson VUE through telephone, follow the given steps -\nIn order to schedule the Test Accommodations, you must first have been verified. Following are the steps to schedule the PMP® Exam with Test Accommodations with Pearson VUE:\nAfter the payment of your PMP® certification fee or PMP® application audit has been confirmed, you will receive an email with the eligibility ID for the PMP® Exam. The myPMI® section of the Pearson VUE website also contains the eligibility ID.\nFollow the given steps to reschedule or cancel your PMP® Exam with Pearson VUE:\nNote - Remember that if you are within 48 hours of your exam and you are canceling your exam, you will have to forfeit the entire exam fee you have paid.\nFollowing are the materials prohibited in the Test locations -\nYou should consider the Online Proctor exam only if it is not possible for you to reach the test's location. Also, you need to make sure that you have the right computer equipment.\nThere is a very specific testing protocol, environment, and set of technical requirements. So, before scheduling an appointment, make sure that you have a system ready for the online proctor exam. The entire trial would take about 5 minutes. Here is what you need to do:\nYes, you can apply for and get a refund in case you decide not to take up the PMP® examination, provided you make a request in this regard to PMI at least 30 days prior to the expiration of your exam eligibility. Once the refund is processed your application will be closed and the validity period for your application will become null.\nYes, you can, but only for 3 attempts.
So if you fail to clear your exam in the first go, you can retake the exam with the same Eligibility ID 3 more times, provided you pay an additional $275 for each attempt. Unfortunately, if you are unable to make it through even after 3 re-attempts, you will have to go through the entire process again from scratch.\nPMI treats each Chapter as a local component with the onus on creating a platform for like-minded project management professionals to come together and interact with each other.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-1", "d_text": "|PMI Membership Fees||$139|\n|CBT Test Fees||Member||$405|\n|Reexamination CBT Fees||Member||$275|\nPMP Exam Application Process\nWhile I was working on this post, I had originally planned to describe the PMP exam application process in here. But then I created a separate blog post since the application is a project in itself and best be explained separately. Click Here to read about the PMP Exam Application Process in detail and how you can avoid an audit for the same.\nPMP Exam Prep\nDiving right into the crux of how one should approach the PMP exam I will highlight a few important guidelines on how I prepared myself.\n- Time commitment – I took a total of 3 months to prepare for the PMP. I work full-time and so I used to study approximately 2-3 hours every day. During my last 3-4 weeks I bumped it up to 4-5 hours a day. During my last few days I went on to read at-least 6-8 hours a day.\n- Initial Approach – I used to read one chapter every day, attempt a few questions on that chapter and write down my notes from that chapter. 
I would highlight a few points which I feel were important for that chapter and make a quick reference guide.\n- Final Approach – As I moved closer to my exam date, I would review my notes, dump sheets that I created during my initial reading, and attempt full set of 100-200 questions per chapter available online.\nFollowing are the details of how I recommend one to carve out a study plan. Again, this could be transformed as per one’s own liking and style of studying.\n- Primary Reading Material:\n- Rita Mulcahy, 8th Edition\n- PMBOK, 5th Edition\n- Extra References:\n- Global Knowledge Institute’s Reading Material (Click Here)\n- Global Knowledge Institute’s Flash Cards\n- Abhishek’s version of Rajesh Nair PMP Notes (To request these please send an email to firstname.lastname@example.org)\nMy PMP Exam Study Plan\nFollowing is a detailed description of my PMP exam preparation journey. Feel free to adjust these per your studying style and other information you might find online.\n- Step 1: Rita Mulcahy’s 6th edition, read page to page –\n- During this read I was focusing only on learning this subject rather than cramming any information.", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "|PMI CAPM IT Certification Tests|\n|CAPM - Certified Associate in Project Management (PMI-100)|\nWe provide the latest and most updated online PMI questions for your PMI updated courses. Download the PMI latest demo questions to see the high quality of our PMI online training. The updated PMI exam questions and answers for PMI test are carefully selected and updated by team of experts at Testking. With updated Test kings PMI practise exams you will feel as if you are in the actual PMI certification. With the online Test King's PMI engine and online PMI from Test kings training camps anybody can pass the PMI certification with as little preparation as a few days. 
Become successful in PMI test and attempt all PMI online exam questions and answers with ease. Only PMI from Testkings updated practise exams can help you achieve that level of expertise in PMI certification. Get certified with the PMI certification at your own preferred time and place, with the new PMI from TestKing's engine online and PMI from Test kings online bootcamp. You can pass PMI test rather easily because we have developed the most awesome PMI updated engine and PMI latest books for the PMI certification. You can listen to our Test King's PMI online mp3 guide while you are on the road, or do PMI updated latest exam with our state of the art PMI latest interactive exam engine when you are at home.\nClear PMI cert and become certified fast and sure shot. The Testkings PMI interactive exam engine online will turn your latest PMI exam preparation materials around and you will pass within days. Use the online PMI from Testkings exam dump to pass the PMI exam. This updated PMI test dump is prepared to be precise and accurate, so that the customers can latest PMI test guide and pass PMI test for sure. Use the Testking PMI audio training and pass the PMI certification effortlessly. The experts have developed Test kings PMI online cbt to help the customers in passing the PMI certification. The PMI from Test King's online exam engine and updated PMI from Test kings lab scenarios are developed by proficient IT experts. That's why it is one of the best materials for PMI updated test papers. 
Use online PMI from Test King's exam questions and answers for quick, easy, and verified answers.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "As you may know that we have three different PfMP exam questions which have different advantages for you to choose, PMI PfMP Reliable Exam Practice By adding more certifications to your portfolio the career paths become even more valuable and diverse, PMI PfMP Reliable Exam Practice So the final results will display how many questions you have answered correctly and mistakenly, If you choose our PfMP dump collection, there are many advantageous aspects that cannot be ignored, such as the free demo, which is provided to give you an overall and succinct look of our PfMP dumps VCE, which not only contains more details of the contents, but also give you cases and questions who have great potential appearing in your real examination.\nSome people's idea of beauty will vary vastly from others, That night she met, https://learningtree.actualvce.com/PMI/PfMP-valid-vce-dumps.html This made me resolve to dissemble; I appeared to take no notice of her actions, in hopes that time would bring her to live with me as I desired she should.\nThere have been many great nations with great histories, but the more SHAM Exam Engine highly they were developed the more unhappy they were, for they felt more acutely than other people the craving for world-wide union.\nOne day, however, I got word that he was dying, CHAPTER IV AN ELEPHANT HUNT Now Reliable PfMP Exam Practice I do not propose to narrate at full length all the incidents of our long travel up to Sitanda's Kraal, near the junction of the Lukanga and Kalukwe Rivers.\nPfMP training materials cover most of knowledge points for the exam, and you can master the major knowledge points for the exam as well as improve your professional ability in the process of learning.\nHigh Pass Rate PfMP Exam Questions Convey All Important Information of PfMP 
Exam\nThe industry experts hired by PfMP study materials explain all the difficult-to-understand professional vocabularies by examples, diagrams, etc, Predictably, Claudia rolled her eyes with disdain.\nI constantly stress the importance of providing the best product Reliable PfMP Exam Practice and service to customers, She has proven most helpful in the exercise of my duty as a citizen of the empire.\nSuddenly he got up, Don't, Colia,—what is the use of saying all that?\"", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "Why Should I get PMP Certified?\nAre you looking to improve your position in the work place? Do you want a leg up over your competition in the event of a possible promotion? Have you asked yourself, “Why Should I get PMP Certified?”\nMany project managers with years of experience are finding it increasingly difficult to find a job where they feel their skill set is being adequately used. They find that seemingly less qualified project leaders with half their time investment are being chosen for higher paying jobs and greater benefits. If this sounds familiar, perhaps you could use a PMP® certification.\nHaving a PMP Certification is an excellent way to prove to employers that you are thoroughly experienced and dedicated to your occupation. While this might seem like another unnecessary hurdle to jump through to many long time project managers, it is becoming increasingly important as time goes by. Many large businesses will outright refuse to give a resume without a PMP Certification listing a second glance. If you’re looking to compete for the best position your qualifications can afford you, perhaps a certification can help get you there.\nHow can you best prepare for the certification test? 
Different people learn through different methods, but many companies cover every part of the exam.\nFor example, our PMP Training Course offers the following helpful aspects:\n• PMP Certification Exam Prep Study Book\n• Pre-course Quick Reference Guide\n• On-line PMP Exam Simulator\n• Over 600+ Practice PMP Questions\n• 100% PMP Exam Pass Guarantee\n• Critical Exam Taking Techniques\n• Breakfast, Lunch and Snacks Provided\n• In Class Exercises and Accelerated Learning Techniques\n• PMP Eligibility Application Support\n• PMI Approved 36 Contact Hours", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "I scheduled the exam and gave myself three months to prepare. Putting a date on my calendar was like a spark. I became more enthusiastic about my preparation and that momentum helped me tremendously.\nWhat was it like on exam day?\nOn my scheduled exam date, I reached the Prometric Test Center half an hour early. Do this! It takes about 15-20 minutes to check in and you want to avoid being rushed or stressed during that time. After check-in you’re allowed to enter the testing room.\nIt took me 2-1/2 hours to complete the exam, and I used the remaining time to review my answers. Once I submitted the answers, I was asked to complete a brief survey about my test-taking experience and then… Congratulations! I was able to see immediately that I passed the PMP exam.\nWhat were your favorite study books?\n- Head First PMP\n- Rita Mulcahy’s PMP Exam Preparation Book\n- Kim Heldman’s PMP Final Exam Review Book\n- The PMBOK Guide\nAny final advice?\nYes.
Here is a quick checklist of things to do before taking the exam:\n- Become a member of PMI and actively seek out other PMPs so you can learn from them.\n- Buy any two good PMP exam reference books to study so you can learn from different perspectives.\n- Read the PMBOK Guide, at least three times.\n- Get 35 contact hours from any registered training provider.\n- Apply for the exam, schedule it and then develop a study plan.\n- Rather than attempt to memorize everything, like the Inputs, Tools & Techniques, and Outputs (ITTOs) in the PMBOK Guide, focus on understanding the logic behind the project management principles.\n- Pay special attention to Initiating and Closing Process Groups. These are the smallest groups and each group contains only two processes.\n- Don't over-study by trying to answer every sample question you may find on the Internet. Only rely on authentic sources for sample questions and exams, like your reference books or samples taken directly from the PMI website.\nI hope these insights are useful as you work toward earning your PMP credential. To learn more about Fahad Usmani, visit his blog.\nImage credit: Flickr/fanz", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Test prep. Agile Certified Practitioner PMI-ACP\nThe description of Test prep. Agile Certified Practitioner PMI-ACP\nIt provides a well-rounded review of essential exam concepts, containing a simulation test, chapter tests, practice questions, and study aids to help you ensure complete preparation for the big day.\nMaster 100 percent of the exam objectives, including expanded AGILE coverage\nReinforce critical concepts with hands-on practice and real-world scenarios\nTest your knowledge with challenging section review questions\nProject management is one of the most in-demand skills in today's job market, making more and more employers turn to AGILE methodologies to enhance delivery and results.
The PMI-ACP certification shows employers that you have demonstrated mastery of essential project management skills and a practical understanding of adaptive, iterative processes; this validation puts you among the ranks of qualified project management professionals employers are desperately seeking, and this application is your one-stop resource for exam success.\nPRO Version features:\n- 200+ questions to practice (4x the number in regular version )\n- Double the number of flash cards in comparation with usual app version\n- Choose your favourite font, from 15+ fonts available\n- Glossary to search any term related to the test\n- No ads", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-1", "d_text": "What are the eligibility requirements?\nThe eligibility requirements for each certification are mentioned on its web pages and detailed in each respective certification handbook. For complete guidelines please link to the appropriate certification handbook.\n4. How do I complete the application?\nAll certification applications are available online. Click the \"Apply\" button on the available web pages.\n*Note: Any missing information on the application will delay processing.\n5. Are the exams available in languages other than English?\nThe PMP, CAPM, and PMI-ACP exams are translated into multiple languages. For details on specific languages available please consult the respective certification handbooks.\n*Note: Other certification exams are not currently translated, but PMI will notify you when translations are available.\n6. Can I take it again if I fail the exam?\nOn your first attempt, if you fail the exam. Also, you can re-take it two more times within your one-year eligibility period. Fees are associated with re-examination. Details for re-examination can be found in the certification handbook.\n7. What happens if I have not taken the exam and my eligibility expires?\nFrom the date of approval, your application is valid for one (1) year. 
You must re-apply if you allow your eligibility to lapse.\n8. Are audit materials confidential?\nYes. Audit materials are strictly confidential and, unless required through legal action involving PMI, are not shared with any external organization.\n9. Does Testprep Training offer a Money Back Guarantee for the Exam Simulator?\nYes, we offer a 100% unconditional money back guarantee. If you are not able to clear the exam, you can request a full refund. Please note that we only refund the cost of products purchased from Testprep Training, not from Microsoft Learning.\n10. Is there any assistance from Testprep Training in terms of exam preparation?\nYes, Testprep Training offers email support for any certification-related query while you are preparing for the exam using our practice exams. Your query will be handled by experts in due course.\n11. Can we try the free test before purchasing the practice exam?\nYes, Testprep Training offers free practice tests for the PMI Professional in Business Analysis (PMI-PBA)® Certification Exam which can be used before the final purchase of the complete test.\n12. 
Do you provide any preparation guidance for this certification exam?\nYes, our experts frequently blog about the tips and tricks for exam preparation.\n13.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "PMI PgMP Real Exam Questions - Guaranteed\nReal Program Management Professional Exam Simulation Environment With Accurate & Updated\nQuestions – Cheap as ever.\n- Come at More Than 3800 Exams Including Program Management Professional Exams\n- Comprehensive Exams Questions\n- Free Exam Updates - provided regularly\n- New Testing Engine with Practice and Virtual Exam Models (Gold Package Only)\n- Detailed Customied Lab Simulation Questions\n- Convenient PDF format for Self Paced Study\n- 24*7 Dedicated Email / Chat Support\n- Confidential and secure shopping experience\nPMI PgMP Exams\nProgram Management Professional (PgMP)\nLast Updated: June 04, 2019\nSee what each of the Package Offer\nSilver Package(PDF only)\nGold Package(PDF + Testing Engine)\nUnlimited access to 4500+ Exams\nPDF Questions & AnswersConvenient, easy to study, Printable PDF study material, Learn on go.\n100% Money Back GuaranteeBe sure of Guaranteed Pas Scores with BrainDumps materials, with a proven 99,3% Pass rate\nRegular & Frequent Updates for ExamGet hold of Updated Exam Materials Every time you download the PDF of any Exam Questions Without Any Extra Cost.\nReal Exam Questions With Correct AnswersExact Exam Questions with Correct Answers, verified by Experts with years of Experience in IT Field.\nComprehensive Testing EngineCustomizable & Advanced Testing Engine which creates a real exam simulation enviroment to prepare you for Success.\nUnlimited Practice Exam Re-takesPractice Until you get it right. With options to Highlight missed questions, you can analyse your mistakes and prepare for Ultimate Success.\nSubmit & Edit NotesCreate Notes for Any Questions. 
When and Where Needed, edit them or delete them if needed.\nDownload Free Exam Simulator Demo\nExperience BrainDumps exam testing engine for yourself. After you've selected a vendor and an exam and submitted your email, your download will start automatically.\n- Customizable, interactive testing engine\n- Simulates real exam environment\n- Instant download\nChoose an exam to sample:\n* Our demo shows only a few questions from your selected exam for evaluating purposes\nWhen You Choose Braindumps, You Choose Success In The Program Management Professional Exam\nYou need the best materials to sort out things impressively for you. Best and most reliable stuff is offered at the website of Braindumps for the making of things great for your online selftestengine.com video lectures.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "You can easily pass PMI Scheduling Professional (PMI-SP) Certification with the help of our online practice exam. We are here to help you every step of the way to pass your Scheduling Professional exam. Our team of experienced and certified professionals with more than 12 years of experience in the field of Project Management has designed practice exam to prepare for PMI-SP certification. They have carefully maintained exam structure, syllabus, time limit and scoring system same as the actual PMI Scheduling Professional (PMI-SP) exam. Our PMI-SP question bank contains most frequently asked and real-time case study based questions prepared by collecting inputs from recently certified candidates.\nTo get familiar with our online PMI Scheduling Professional certification practice exam environment, we invite you to try our sample practice exam to build the trust between us.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "Passitdump.com provides the latest update real PMI exam questions and answers. 
The study materials are constantly revised and updated by our expert team to make sure the accurate and correctness. We guarantee the materials with quality and reliability which will help you pass any PMI certification exam.\nPassitdump.com PMI PDF files are the real feast for candidates of all educational backgrounds. They are meant to bring success to you in your very first attempt and thus the real substitute of your money and time. In comparison to high-sounding claims of providing PMI video training and virtual lab trainings, Passitdump.com PMI PDF have been authenticated and approved by the vast majority of successful candidates.\n|CA0-001||Certified Associate in Project Management|\n|CAPM||Certified Associate in Project Management (PMI-100)|\n|PMI-001||Project Management Professional|\n|PMI-002||Certified Associate in Project Management (CAPM) Certification|\n|PMI-100||Certified Associate in Project Management(CAPM)|\n|PMI-200||PMI Agile Certified Practitioner (PMI-ACP)?|\n|PMI-ACP||PMI Agile Certified Practitioner|\n|PMI-RMP||PMI Risk Management Professional|\n|PMI-SP||PMI Scheduling Professional|\n|PMP||Project Management Professional|", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-1", "d_text": "online PMI PgMP audio study guide and Testking PgMP online lab questions can take care of all your p Get everything done with extraordinary perfection and effectiveness for sure. Things can be treated perfectly through the best helping materials. You will easily be taken forward in the right direction when you will get prepared through the Certkiller PgMP online audio study guide Happy time is easy to achieve in the latest Program Management Professional audio lectures through the proper and an amazing use of Program Management Professional online video training and Envision Web Hosting uk web hosting updated audio training. 
These are the magnificent and smartest articles and they will definitely make each and everything suitable for you in your study time. Your one smart move will lead you towards a handsome success in the PMI PgMP online computer based training. It's very important for you to take the right decision and you can really come out with great and impressive result in your certification after making your Program Management Professional study notes and\nLet all the things go in the true perspective by going for the updated fast webhosting audio guide and Program Management Professional online lab simulations. These are the most appropriate preparatory tools without any doubt and they will not let you make any kind of compromise in your preparation for the exam. Coming up with great result in the online envisionwebhosting.com cbt is the necessary requirement if you want to come up good in your career. online Selftestengine PgMP audio exam and PgMP PMI online lab scenarios are the perfectionist materials indeed and you need to follow each and everything perfectly to mak Best time is achievable in the Selftestengine ccnp course exam questions online audio lectures by making good and superb result in the certification. Program Management Professional online audio study guide and latest Realtests PgMP lab simulations will surely offer you impressive and remarkable working and then things will be done awesomely for you in your complete Making a right preparation with the right stuff is your ultimate requirement for sure. Your great and effective support will be provided to you by the Testking mcat study schedule dumps online intereactive testing engine and PMI PgMP updated cbt. 
These are the best and marvelous preparatory materials without any doubt PMI PgMP cbt online can give you great news of success in your journey for the betterment of your career.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-0", "d_text": "PMP Product Reviews\nA Thorough Guide\n\"Test King is the most helpful site ever. My practice of PMP exam is all up to the mark and Test King is to be thanked. The extensive archives of supportive material come in handy when you want surety, whether you are prepared or not. The questions in my final PMP PMP exam did not scare me a bit, because I was prepared for everything. When my PMP result appeared, I was beaming and thanking Test King which was like a helpful teacher.\nEasiest Guides Online\n\"I had trouble in understanding the PMP certification guide. My friend and I wanted to find an easy and convenient way to go through the PMP PMP course, so that our grade sheet be shining with high numbers. That was when we came across Test King. The proper guidelines available there helped us cover PMP prep in a short time.\nUndisputed King Of Guides\n\"Test King really is the best source of information for PMP certification guides. I found PMP PMP course material so well-described, that my fear of passing out before PMP, faded away somewhere. The confidence I had for my PMP exam was all because of Test-King.\n\"When looking for an easy guide for PMP preparation, Test King becomes really helpful and shares your trouble by offering these up to date PMP PMP tests and guides, that cut down a student's stress and sharpens his skills in PMP .\n\"My PMP exam preparation was expected to be far behind, and the PMP PMP exam could have been a major disaster had I not visited Test King. 
Thanks to the extensive material on the website that took me through, and the next thing I know is that my PMP grades were a hit!\n\"Test King is t an understanding counselor, which slowly improves a student's PMP test knowledge, and takes it higher with further guidelines. My PMP PMP exam could have been a disaster had I not visited this fabulous website and gone through all PMP preparation details. I am highly thankful to Test King.\n\"There is no place like Test King. I took my full PMP exam preparation help from Test King which polished my PMP PMP knowledge, and then PMP certification was also handled in a jiffy with this online companion. My parents are proud of me, and I am grateful to Test King only.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "When preparing the PPM-001 Exam Questions, The first thing you should remember is to memorize the Professional in Project Management (PPM) - Standard Package - PPM core topics. You should memorize PPM-001 PDF dumps and try out free brain dumps before you sit for a PPM-001 practice test online in year 2023. During this period, you can use the internet for free exams tutorials and study some quality PPM-001 free study guides. Certkillers.net PPM-001 PDF dumps and Practice Exam will enable you to prepare in the shortest possible time.\nCertKillers.net delivers you the most effective PPM-001 test preparation methods, including best PPM-001 Q&A, PPM-001 study guide, PPM-001 pass4sure and up-to-date exam preparation training. Our PPM-001 exam training will provide you with real exam questions with verified test answers that reflect the actual PPM-001 exam. We ensure 100% guarantee to pass the PPM-001 real exam using our provided free study material. If you prepare for the exam using our updated and latest exam prep questions and answers, we guarantee your success in the PPM-001 final exam. 
With the GAQM PPM-001 exam material, you can be assured of your own position in GAQM society, and you can be proud of your success in the highly competitive IT field.\nTop Ranked PPM-001 Test Questions and Exam Prep Material - Certify Fast in Year 2023\nCertKillers.net is a top provider of PPM-001 test questions and exam prep material. With our PPM-001 new test questions, you don't need to look for examcollection PPM-001 vce downloads or online testing engine that are often obsolete. In most of the cases, people looking for prepaway PPM-001 dumps, vce exam simulator, VCE PDF and exam collection PPM-001, end up getting up-to-date pdf dumps from us for their certification prep requirements. Our top ranked PPM-001 exam prep material is best for the new year 2023 exam preparation.\nRegular Updates - GAQM PPM-001 exam files are updated on a weekly basis. Our hired GAQM experts update exams as soon as there is a change in PPM-001 actual exam. We will provide download access to latest new updates in time for 90 days.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-0", "d_text": "We have created several practice tests created from the material covered in Steve’s, “Preparing for PPA Certification” workbook and the Test Specifications prepared by PPA.\nCurrently, there are over 100 questions in the question bank and additional questions frequently added.\nMembers may choose to take a 100 question exam which has a mix of all six categories from the Test Specifications and that closely matches the same percentage of questions by topic as stated by the test specifications.\nMembers may also elect to take practice exams by category. 
These tests only utilize questions pertaining to that category.\nAccess to these practice tests are by Premium Membership only.\nPLEASE NOTE: If you have already purchased the guide and would like to access the tests, please click here.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-0", "d_text": "|Exam Name:||PMI Certified Associate in Project Management|\n|Questions:||704 Questions Answers|\n|Last Updated:||Sep 15,2020|\n|Price:||Was: $85 Today: $59|\n1: Download Q&A PDF File\nYou can easily download the CAPM Questions Answers PDF file for the preparation of Certified Associate in Project Management exam and it is specially designed for PMI CAPM exam and CertsPrepare prepared a list of questions that would be asked in the real CAPM exam.\n2: Prepare Questions Answers\nUse CertsPrepare’s CAPM exam dumps PDF and prepare Certified Associate in Project Management CAPM Questions Answers with 100% confidently. We offer 100% real, updated and verified exam questions and answers tested and prepared by experts to pass PMI CAPM exam.\n3: Pass Your Exam\nAfter your preparation for Certified Associate in Project Management CAPM exam by using CertsPrepare’s exam material kit, you will be ready to attempt all the CAPM questions confidently which will make 100% guaranteed your success in the first attempt with really good grades.\nCertsPrepare provides up-to-date actual PMI CAPM questions and answers which will help you to pass your exam in the first attempt.\nCertsPrepare CAPM PDF is designed with the help of updated exam content. Each of the questions is verified by PMI certified professionals. CAPM questions PDF allows customers to download and view the file on different devices including tabs, phones, and laptops. 
The free demo of the CAPM exam question set prior to purchasing the product in order to see the standard and quality of the content.\nCAPM dumps are designed to help it professionals make the most of their knowledge and experience with years of experience in the latest syllabus. Our PMI CAPM exam details are researched and produced by experts.\nOur CAPM exam will provide you with exam questions with verified answers that reflect the actual exam. 100% Guarantee to pass your CAPM exam if you prepare for the exam using our updated exam questions and answers, we guarantee your success in the first attempt.\nHappy Certified Students\nUpdated Exam Questions\nProfessional Certified Instructors\nFree Product Updates\nOur Success Rate\nWhy PDF Format?\nThe PDF format ensures portability across a number of devices, to allow preparation on the go. For a more challenging and thorough preparation, Practice Test software simulates real exam environment. With multiple testing modes and self-assessment features, our practice exams are the best in the industry.", "score": 8.086131989696522, "rank": 99}]} {"qid": 15, "question_text": "What is referential transparency in functional programming and what are its main benefits?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "The definition of functional programming is quite easy. Functional programming is the programming with mathematical functions. Is that all? Of course, not!\nFunctional programming is the programming with mathematical functions. I think, you already guess it. The key to this definition is the expression mathematical function. Mathematical functions are functions that return every time the same result when given the same arguments. They behave like an infinite big lookup table.\nThe property, that a function (expression) always returns the same result when given the same arguments, is called referential transparency. 
Referential transparency has far-reaching consequences:\n- Mathematical functions cannot have side effects and therefore cannot change state outside the function body.\n- A function call can be replaced with its result, but can also be reordered or put on a different thread.\n- The program flow is defined by the data dependencies and not by the sequence of instructions.\n- Mathematical functions are a lot easier to refactor and to test because you can reason about each function in isolation.\nThat sounds very promising. But with so many advantages comes a massive restriction: mathematical functions cannot talk to the outside world. Examples?\nMathematical functions can't\n- get user input or read from files.\n- write to the console or into a file.\n- return random numbers or the current time, because the return values would differ between calls.\n- build up state.\nThanks to mathematical functions, the definition of functional programming is very concise, but it does not help much. The key question still remains: how can you program anything useful with functional programming? Mathematical functions are like islands that have no communication with the outside world. Or, to say it in the words of Simon Peyton Jones, one of the fathers of Haskell: the only effect that mathematical functions can have is to warm up your room.\nNow I will be a little more elaborate. What are the characteristics of functional programming languages?\nCharacteristics of functional programming languages\nHaskell will help me a lot on my tour through the characteristics of functional programming.\nThere are two reasons for using Haskell.\n- Haskell is a pure functional programming language, so you can study the characteristics of functional programming very well by using it.\n- Haskell may be the most influential programming language of the last 10 - 15 years.\nMy second statement needs a proof. I will provide it in the next post for Python and in particular C++. 
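The consequences listed above can be made concrete with a small sketch (my own Python illustration, not from the original post): a referentially transparent function whose calls can always be replaced by their results, next to an impure one that cannot.

```python
# A pure function: the result depends only on the arguments, so any
# call can be replaced by its value (referential transparency).
def add(a, b):
    return a + b

# An impure function: it reads and writes state outside the function
# body, so the same arguments do not always give the same result.
counter = 0

def impure_add(a, b):
    global counter
    counter += 1
    return a + b + counter

assert add(2, 3) == 5          # always 5, call after call
assert impure_add(2, 3) == 6
assert impure_add(2, 3) == 7   # same arguments, different result
```

Because `add(2, 3)` is always 5, the call can be reordered, cached, or run on another thread without changing the program's meaning; none of that is safe for `impure_add`.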
Therefore, a few words about Java, Scala, and C#.", "score": 52.85200372483465, "rank": 1}, {"document_id": "doc-::chunk-2", "d_text": "In particular, a function cannot set state. This is a very different way of thinking from imperative languages such as Java, where there are objects all over the place with state that gets changed by method calls.\nReferential transparency is a phrase from the functional programming world which means, basically, \"no side effects\". Side effects include reading any state which is not passed in as an argument or setting any state which is not part of what is passed back as an argument. If a function is referentially transparent, then a call to that function with a specific set of values as arguments will always return exactly the same value.\nWhen functions are composed into larger functions, it is easier to reason about them when they are referentially transparent and there are no side effects to worry about. Of course, in the real world it can be very useful to be able to store some state data, so how does one implement this functionality in a pure functional language?\nThe answer is monads. This is described pretty well in a 1992 paper by Philip Wadler called \"Monads for functional programming\". Reading this paper helped me understand the motivation for monads. Monads provide a way to collect all those side effects into known locations in the program. When all of the side effects are collected into monads, the rest of the program (all of the non-monad parts) is still referentially transparent, and so remains easier to compose. The side effects are still there, but because they are encapsulated by the monads, it is easier to deal with them.\nGroup Theory\nWhen you read about monads, at some point you will come across a mention of Category Theory as the source of monads. Category Theory is pretty abstract, and you don't need to know it to understand monads. 
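The "collect the side effects into one known place" idea can be sketched in a few lines of Python (a toy illustration of the pattern only; Haskell's actual Monad class is far more disciplined). Here the effect is a possibly missing value, and all the handling for it lives in a single `bind` function, so the step functions themselves stay referentially transparent.

```python
# bind encapsulates one effect (a possibly missing value) in a single,
# known location; the step functions stay pure and composable.
def bind(value, fn):
    return None if value is None else fn(value)

def parse_int(s):
    # Pure: "42" -> 42, anything non-numeric -> None.
    return int(s) if s.isdigit() else None

def half(n):
    # Pure: defined only for even numbers, signals the rest with None.
    return n // 2 if n % 2 == 0 else None

# Composition stays flat; the None-plumbing never leaks into the steps.
assert bind(bind("42", parse_int), half) == 21
assert bind(bind("43", parse_int), half) is None    # odd number
assert bind(bind("oops", parse_int), half) is None  # not a number
```

Without `bind`, every step would have to check for `None` itself; with it, the effect handling sits in one place and the rest of the pipeline remains easy to reason about, which is the motivation the paragraph above describes.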
But monads are a mathematical concept, so digging into the math background of the concept could help solidify your understanding.\nAfter reading a bunch of stuff about Category Theory, I went back to Group Theory, which for our purposes you can think of as a special case of Category Theory. While reviewing groups, rings and fields, I stumbled across monoids, a term which I had not recalled from my readings in the field years ago. It's not quite the same thing as a monad, but because Group Theory is focused on transformations, I started thinking of monads as transformations, which I think helped my understanding of them.", "score": 49.89434429895486, "rank": 2}, {"document_id": "doc-::chunk-2", "d_text": "Those tenets will be the subject of my email course:\n- Functions are pure\n- Functions use immutable data\n- Functions guarantee referential transparency\n- Functions are first-class entities\nAfter that, I’ll briefly touch on how functional programming applies these tenets to encourage us to think carefully about our data and the functions that interact with it.\nBy the end, you’ll be able to understand how this approach leads to code that is:\n- Easier to understand (that is, “expressive”)\n- Easier to reuse\n- Easier to test\n- Easier to maintain\n- Easier to refactor\n- Easier to optimize\n- Easier to reason about\nSound exciting? If so, you'll love the new e-book. ?\nThe e-book will be released on December 13th. You can pre-order the e-book now for just $49! And as special offer to the free FreeCodeCamp community, I am offering $10 off with the discount code \"freecodecamp\".\nSee you in there! ??✍️", "score": 47.252995381033166, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "What is Functional Programming?\nFirstly, Functional programming is a programming paradigm in which everything is bound using pure mathematical functions. It’s a declarative programming approach. 
In contrast to an imperative style, which focuses on “how to solve,” it focuses on “what to solve.” Instead of statements, it uses expressions. A statement is executed to assign variables, but an expression is evaluated to create a value. In addition, those functions have some unique characteristics.\nComponents of functional programming\n- Pure functions\n- Referential transparency\n- Functions are First-Class and can be Higher-Order\nWhat is a Pure Function?\nPure functions are normal functions with some characteristics:\n- Total / Not Partial\n- No Randomness\n- No Side Effects\n- Not Null\n- No Exception\n- No Mutation\nExample of Pure Function\ndef add(a: Int, b: Int): Int = a + b\nExample of Not a Pure Function\ndef divide(a: Int, b: Int): Int = a / b\nThe ‘divide’ function meets all the criteria of a pure function, but if ‘a’ is divided by 0 it will throw an exception, which makes it not a pure function.\nAdvantages and disadvantages of Functional Programming\n- This paradigm aids in the effective resolution of difficulties.\n- It improves modularity.\n- It allows us to implement lambda calculus in order to solve complex problems.\n- Some programming languages support nested functions, which improves the maintainability of the code.\n- It reduces complex problems into simple pieces that are easier to solve.\n- It’s difficult to grasp for novices, so it’s not a beginner-friendly paradigm for new programmers.\n- Maintenance is difficult during the coding phase when the project size is large.\n- Moreover, reusability in functional programming is a tricky task for developers.\nFor further information on functional programming wait for the next blog…\nFor more details please visit: https://en.wikipedia.org/wiki/Functional_programming", "score": 45.68932537097149, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "functional programming definition programming
(FP) A program in a functional language consists of 
a set of (possibly recursive) definitions and an expression whose value is output as the program's result. Functional languages are one kind of declarative language. They are mostly based on the typed lambda-calculus with constants. There are no side-effects to expression evaluation so an expression, e.g. a function applied to certain arguments, will always evaluate to the same value (if its evaluation terminates). Furthermore, an expression can always be replaced by its value without changing the overall result (referential transparency).
The order of evaluation of subexpressions is determined by the language's evaluation strategy. In a strict (call-by-value) language this will specify that arguments are evaluated before applying a function whereas in a non-strict (call-by-name) language arguments are passed unevaluated.
Programs written in a functional language are generally compact and elegant, but have tended, until recently, to run slowly and require a lot of memory.
Examples of purely functional languages are Clean and SML. Many other languages such as Lisp have a subset which is purely functional but also contain non-functional constructs.
See also lazy evaluation.
Lecture notes (ftp://ftp.cs.olemiss.edu/pub/tech-reports/umcis-1995-01.ps) or the same in dvi-format (ftp://ftp.cs.olemiss.edu/pub/tech-reports/umcis-1995-01.dvi).
SEL-HPC Article Archive (http://lpac.ac.uk/SEL-HPC/Articles/).", "score": 44.57164243350527, "rank": 5}, {"document_id": "doc-::chunk-2", "d_text": "Message passing techniques for communication between objects make the interface descriptions with external systems much simpler. Software complexity can be easily managed. 
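The strict versus non-strict distinction in the dictionary entry above can be simulated in Python, which is itself a strict (call-by-value) language. This is an illustrative sketch of call-by-name using thunks (zero-argument lambdas), not a real non-strict evaluator:

```python
# Python evaluates arguments before a call (strict / call-by-value).
# Wrapping an argument in a zero-argument lambda (a "thunk") delays
# evaluation, simulating call-by-name: only the used branch ever runs.
def lazy_choose(cond, then_thunk, else_thunk):
    return then_thunk() if cond else else_thunk()

# Passed eagerly, 1 // 0 would raise ZeroDivisionError before the call;
# wrapped in a thunk, the failing branch is simply never forced.
result = lazy_choose(True, lambda: 1, lambda: 1 // 0)
assert result == 1
```

This is the same reason a non-strict language can pass a non-terminating or erroneous expression as an argument without harm, as long as the function never demands its value.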
OOP is popular in large software projects because objects or groups of objects can be divided among teams and developed in parallel, and it seems to provide a more manageable foundation for larger software projects.\n# Functional Programming :- Functional language programs contain no assignment statements, so variables, once given a value, never change. Functional programs contain no side-effects at all. A function call can have no side effect other than to compute its result. This eliminates a major source of bugs and also makes the order of execution irrelevant; since no side effect can change the value of an expression, it can be evaluated at any time. This relieves the programmer of the burden of prescribing the flow of control. Since expressions can be evaluated at any time, one can freely replace variables by their values and vice versa; that is, programs are \"referentially transparent\". This freedom helps make functional programs more tractable mathematically than their conventional counterparts. The time it takes to develop code, and even more importantly, to modify a program, is substantially shorter for functional programs than for procedural programs. This is important for prototyping and carrying out exploratory research. Mathematica's functional programming constructs Map and Apply allow you to do many things in one line that would normally take several loops in other languages. If a functional program doesn't behave the way you expect it to, debugging it is a breeze: you will always be able to reproduce your problem, because a bug in a functional program doesn't depend on unrelated code paths that were executed before it. A functional program is ready for concurrency without any further modification: you never have to worry about deadlocks and race conditions because you don't need to use locks. In a functional program all state is stored on the stack, in the arguments passed to functions. This makes hot deployment significantly easier. 
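The "one line instead of several loops" point about Map and Apply can be approximated in Python with the built-in `map` and `functools.reduce` (an illustrative sketch in Python, not Mathematica itself):

```python
from functools import reduce

numbers = [1, 2, 3, 4]

# Imperative version: a loop with mutable accumulator state.
total = 0
for n in numbers:
    total += n * n

# Functional version: map each element, then fold, with no mutation.
total_fp = reduce(lambda acc, x: acc + x, map(lambda n: n * n, numbers), 0)

assert total == total_fp == 30
```

The functional version is a single expression: because neither `map` nor the folding function mutates anything, it can be read, tested, and parallelized piecewise, which is exactly the tractability argument made above.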
An interesting property of functional languages is that they can be reasoned about mathematically. Since a functional language is simply an implementation of a formal system, all mathematical operations that could be done on paper still apply to the programs written in that language.", "score": 43.13270382640865, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "I love a lot of the thinking in functional programming (FP):\n- Side-effect free functions - They do not do hidden things besides doing work on the input and returning output\n- Referential transparency of functions - Much the same as above: what you put into a function, plus the function itself, completely defines its output\nAn area where I'm not so happy with some things I've seen in FP is composability.\nIn my view, a well designed system or language should make functions (or other smallest units of computation) more easily composable, not less.\nWhat strikes me as one of the biggest elephants in the room regarding FP is that typical functions compose fantastically as long as you are working with a single input argument and a single output for each function application, but as soon as you start taking multiple input arguments and returning multiple outputs, you tend to end up with very messy trees of function application. 
Even handy techniques such as currying tend to get overly complex if you want to handle all the possible downstream dataflow paths in a structured way.\nI (think I) know that monads are supposed to be a highly general way of addressing this problem, but it seems 99% of programmers (including me) have a really, really hard time understanding the concept well enough for it to help make their code clearer, to them and to others.\nThis is where I find the principles around network composability of Flow-based programming (FBP) shine so brightly.\nIt solves the composability problem in three basic ways:\n- It gives each input and output its own identity, in the form of ports.\n- It allows you to define the data dependencies between these ports, instead of between functions.\n- It allows the setup of these dependencies to happen anywhere in the program, which makes it easy to e.g. produce a list of all the connections, as a simple list of (outport, inport) tuples.\nThe second point above is significant, as it means data dependencies are defined at the level of data, not functions.\nTrying to define data dependencies by defining dependencies between functions is a major impedance mismatch in my books, and is what causes the need for such complicated syntax in a lot of functional programming.\nFlow-based programs, on the other hand, are so easy that they can be simplified to two main parts:\n- A list of processes (somewhat the counterpart to functions in FP)\n- A list of connections between input- and output ports.", "score": 40.09423774384897, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "Definitions for functional programming\nThis page provides all possible meanings and translations of the word functional programming\nProgramming in a style that, in lieu of assignment, uses procedure calls to bind variables to values.\nIn computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical 
functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. Functional programming has its roots in lambda calculus, a formal system developed in the 1930s to investigate computability, the Entscheidungsproblem, function definition, function application, and recursion. Many functional programming languages can be viewed as elaborations on the lambda calculus. In practice, the difference between a mathematical function and the notion of a function used in imperative programming is that imperative functions can have side effects that may change the value of program state. Because of this, they lack referential transparency, i.e. the same language expression can result in different values at different times depending on the state of the executing program. Conversely, in functional code, the output value of a function depends only on the arguments that are input to the function, so calling a function f twice with the same value for an argument x will produce the same result f(x) both times. Eliminating side effects can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming.\n\"functional programming.\" Definitions.net. STANDS4 LLC, 2017. Web. 25 Mar. 2017. 
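The "calling a function f twice with the same value for an argument x" property can be demonstrated with a short, hypothetical Python sketch (names are mine): a pure function always agrees with itself, while a function reading hidden state does not.

```python
counter = 0  # hidden state consulted by the impure function

def pure_add(x, y):
    # Referentially transparent: the output depends only on the arguments,
    # so two calls with the same inputs always produce the same result.
    return x + y

def impure_add(x, y):
    # Reads and updates state outside its arguments, so two calls with
    # the same inputs can return different values at different times.
    global counter
    counter += 1
    return x + y + counter
```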
.", "score": 39.575566664914845, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "[Haskell-cafe] \"show\" for functional types\nhaskell at sleepingsquirrel.org\nFri Mar 31 23:10:56 EST 2006\nBrian Hulley wrote:\n] Here is another example. Consider two functions f and g which, given the\n] same inputs, always return the same outputs as each other such as:\n] f x = x + 2\n] g x = x + 1 + 1\n] Now since f and g compute the same results for the same inputs, anywhere in\n] a program that you can use f you could just replace f by g and the\n] observable behaviour of the program would be completely unaffected. This is\n] what referential transparency means.\n] However, if you allowed a function such as superShow, superShow f == \"x +\n] 2\" and superShow g == \"x + 1 + 1\" so superShow f /= superShow g thus you\n] could no longer just use f and g interchangeably, since these expressions\n] have different results.\nHmm. It must be a little more complicated than that, right? Since\nafter all you can print out *some* functions. That's what section 5 of\n_Fun with Phantom Types_ is about. Here's a slightly different example,\nusing the AbsNum module from...\n> import AbsNum\n> f x = x + 2\n> g x = x + 1 + 1\n> y :: T Double\n> y = Var \"y\"\n> main = do print (f y)\n> print (g y)\n...which results in...\n(Var \"y\")+(Const (2.0))\n(Var \"y\")+(Const (1.0))+(Const (1.0))\n...is this completely unrelated?\nMore information about the Haskell-Cafe", "score": 38.50490186248725, "rank": 9}, {"document_id": "doc-::chunk-2", "d_text": "Immutability does more than just change the way you manipulate data in lists. The concept of Referential Transparency will come naturally in F#, and it is a driver in how systems are built and pieces of that system are composed. Execution characteristics of a system become more predictable, because values cannot change when you didn’t expect them to change.\nFurthermore, when values are immutable, concurrent programming becomes simpler. 
Because values cannot be changed due to immutability, some of the more difficult concurrency problems you can encounter in C# are not a concern in F#. Although the use of F# does not magically solve all concurrency problems, it can make things easier.\nExpressions instead of statements\nAs mentioned earlier, F# makes use of expressions. This is in contrast with C#, which uses statements for nearly everything. The difference between the two can initially seem subtle, but there is one thing to always keep in mind: an expression produces a value. Statements do not.\nIn the previous code sample, you can see a few things which are very different from imperative languages like C#:\n- if...then...else is an expression, not a statement.\n- Each branch of the\nif expression produces a value, which in this case is the return value of the\ngetMessage function.\n- Each invocation of\ngetMessage is an expression which takes a string and produces a string.\nAlthough this is very different from C#, you’ll most likely find that it feels natural when writing code in F#.\nDiving a bit deeper, F# actually uses expressions to model statements. These return the\nunit type.\nunit is roughly analogous to\nvoid in C#:\nIn the previous sample\nfor expression, everything is of type\nunit. Unit expressions are expressions which return no value.\nF# arrays, lists, and sequences\nThe previous code samples have used F# arrays and lists. This section explains them a bit more.\nF# comes with a few collection types, and the most commonly used ones are arrays, lists, and sequences.\n- F# arrays are .NET arrays. They are mutable, which means that their values can be changed in-place. They are evaluated eagerly.\n- F# lists are immutable singly-linked lists. They can be used to form list patterns with F# pattern matching. They are evaluated eagerly.\n- F# sequences are immutable\nIEnumerables under the covers.
They are evaluated lazily.\nF# arrays, lists, and sequences also have array, list, and sequence expression syntax.", "score": 38.16692185813875, "rank": 10}, {"document_id": "doc-::chunk-1", "d_text": "Every software program has two things:\n- Behavior (what the program does)\n- Data (data is, well, data)\nWhen we’re learning about a programming paradigm — like functional programming — it’s often helpful to consider how the paradigm approaches behavior and data respectively.\nBehavior, in functional programming, is handled purely using functions. Functions are “self contained” pieces of code that accomplish a specific task. They define a relationship between a set of possible inputs and a set of possible outputs — they usually take in data, process it, and return a result. Once a function is written, it can be used over and over and over again.\nData, in functional programming, is immutable — meaning it can’t be changed. Rather than changing data they take in, functions in functional programming take in data as input and produce new values as output. Always.\nFunctions and immutable data are the only two things you need to ever deal with in functional programming. To make it even simpler, functions are treated no differently than data.\nPut another way, functions in functional programming can be passed around as easily as data. You can refer to them from constants and variables, pass them as parameters to other functions, and return them as results from other functions.\nThis is the most important thing to understand when approaching functional programming.\nBy treating functions as nothing more special than a piece of data and by only using data that is immutable, we are given a lot more freedom in terms of how we can use functions.\nNamely, it allows us to create small, independent functions that can be reused and combined together to build up increasingly complex logic. 
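Treating functions no differently than data, as described above, might look like this in Python (a hypothetical sketch; every name here is my own invention): a function referred to from a variable, passed as a parameter, and returned as a result.

```python
def shout(text):
    return text.upper() + "!"

def apply_twice(func, value):
    # A function received as a parameter, like any other piece of data.
    return func(func(value))

def make_adder(n):
    # A function returned as a result, like any other piece of data.
    return lambda x: x + n

greeting = shout       # a function referred to from a variable
add3 = make_adder(3)
```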
We can break any complex problem down into smaller sub-problems, solve them using functions, and finally combine them together to solve the bigger problem.\nConsidering the ever-growing complexity of software applications, this kind of “building-block” approach makes a huge difference in keeping programs simple, modular, and understandable. This is also why developers strive to make their functions as general-purpose as possible, so that they can be combined to solve large, complex problems and reused to speed up development time for subsequent programs.\nUltimately, the reason that functions are so powerful in functional programming is because the functions follow certain core tenets.", "score": 37.68060082138461, "rank": 11}, {"document_id": "doc-::chunk-1", "d_text": "Such module systems are present in both OO and non-OO languages.\nIn summary, I think objects do not compose very well in general – only if the specific objects are designed to compose. Immutability helps to make objects composable by eliminating complex life-cycles and essentially turning the objects into values. Encapsulation also improves composability, and is put to good use in well-designed OO code, but is not unique to OO.\nDo functions compose?\nFunctional programming (FP), on the other hand, is at its very core based on the mathematical notion of function composition. Composing functions f and g means g(f(x)) – f’s output becomes g’s input. And in pure FP, the inputs and outputs are values without life cycles.\nIt’s so simple to understand compared to the numerous and perhaps even indescribable ad hoc compositions possible in OOP. If you have two functions with matching input and output types, they always compose!\nMore complicated forms of composition can be achieved through higher-order functions: by passing functions as inputs to functions or outputting functions from functions. 
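The g(f(x)) composition described above generalizes to whole pipelines through higher-order functions. A hypothetical Python sketch (names are mine, not from the text):

```python
from functools import reduce

def compose_all(*funcs):
    # Generalizes g(f(x)): feed x through each function in turn.
    # Any functions with matching input and output types will compose.
    return lambda x: reduce(lambda acc, f: f(acc), funcs, x)

strip = str.strip
lower = str.lower
exclaim = lambda s: s + "!"

normalize = compose_all(strip, lower, exclaim)
```

Here `compose_all` is itself a higher-order function: it takes functions as inputs and returns a new function as output, treating them as values like everything else.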
Functions are treated as values just like everything else.\nIn summary, functions almost always compose, because they deal with values that have no life cycles.\nWhat does composition give us?\nHaving simple core rules for composition gives us a better ability to take existing code apart and put it together in a different way, so that it becomes reusable. Objects were once hailed as something that finally brings reusability, but it’s actually functions that can achieve this much more easily.\nThe real key, I think, is that the composability afforded by the functional design approach means that the same approach can be used for both the highest levels of abstraction and the lowest level: behavior is described by functions all the way down (many machine instructions can also be represented as functions).\nHowever, I think most programmers today (including me) don’t really understand how to make this approach work for complete programs. I would love for the functional programming advocates to put more effort into explaining how to build complete programs (that contain GUIs, database interactions and whatnot) in a functional way. I would really like to learn that rather than new abstractions from category theory — even if the latter can help with the former, show us OOP guys the big strategy before the small tactics of getting there.\nEven if we can’t do whole programs as functions, we can certainly do isolated parts of them.", "score": 37.47653218547594, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "What Is Functional Programming and Its Most Important Aspects?\nIntro to what is functional programming\nFunctional programming is an interesting programming concept which has gained a lot of attention lately. 
This article presents some of the most important aspects of functional programming in general and provides several examples in Python.\nFunctional programming is a kind of declarative programming paradigm where functions represent relations among objects, like in mathematics. Thus, functions are much more than ordinary routines.\nIn this article, you’ll find explanations of several important principles and concepts related to functional programming:\n- pure functions,\n- anonymous functions,\n- recursive functions,\n- first-class functions,\n- immutable data types.\nA pure function is a function that:\n- is idempotent — returns the same result if provided the same arguments,\n- has no side effects.\nIf a function uses an object from a higher scope or random numbers, or communicates with files and so on, it might be impure because its result doesn’t depend only on its arguments.\nA function that modifies objects outside of its scope, writes to files, prints to the console and so on, has side effects and might be impure as well.\nPure functions usually don’t use objects from outer scopes and thus avoid shared state. This can simplify a program and help avoid some errors.\nAnonymous (lambda) functions can be very convenient for functional programming constructs. They don’t have names and usually are created ad hoc, with a single purpose.\nIn Python, you create an anonymous function with the lambda keyword:\nlambda x, y: x + y\nThe above statement creates the function that accepts two arguments and returns their sum. In the next example, the functions f and g do the same thing:\n>>> f = lambda x, y: x + y\n>>> def g(x, y): return x + y\nA recursive function is a function that calls itself during execution. 
For example, we can use recursion to find the factorial in the functional style:\n>>> def factorial_r(n):\n...     if n == 0:\n...         return 1\n...     return n * factorial_r(n - 1)\nAlternatively, we can solve the same problem with a while or for loop:\n>>> def factorial_l(n):\n...     if n == 0:\n...         return 1\n...     product = 1\n...     for i in range(1, n+1):\n...         product *= i\n...     return product\nFirst Class Functions\nIn functional programming, functions are first-class objects — they are treated the same way as data of other types. Functions that take or return other functions are called higher-order functions.", "score": 37.187592736984485, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "Functional programming is a type of programming in which the output is determined by the inputs. It is an effective way of breaking complex problems into smaller parts and solving each piece at a time. In the world of software engineering, Java, one of the most popular programming languages worldwide, has adopted this form of programming as an effective and comprehensive way to understand the language and its capabilities.\nWhat is Functional Programming?\nFunctional programming is a style of programming that focuses on breaking down a larger problem into smaller parts. It is based on mathematics, specifically the concept of lambda calculus, which is the theoretical basis of functional programming. The main idea behind functional programming is that code can be written succinctly and concisely, allowing programmers to focus more on logic and less on control flow. This makes it easier to write code with fewer bugs and fewer lines of code. Additionally, because of the way data is manipulated, functional programming can be more efficient for certain types of problems.\nFunctional programming also encourages the use of immutable data structures, which are data structures that cannot be changed once they are created. This helps to ensure that data is not accidentally modified, which can lead to unexpected results. 
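The immutable-data idea discussed above can be shown in a minimal Python sketch (hypothetical names; tuples stand in for immutable structures): instead of modifying a value in place, a function returns a brand-new value.

```python
point = (1, 2)  # a tuple: it cannot be modified once created

def move(p, dx, dy):
    # Instead of mutating the input, produce a new value as output.
    return (p[0] + dx, p[1] + dy)

moved = move(point, 3, 4)
# 'point' is untouched; 'moved' is an entirely new value.
```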
Additionally, immutable data structures can be used to create more efficient algorithms, as they can be reused without having to be recreated each time.\nAdvantages and Disadvantages of Java Functional Programming\nLike any programming language, there are a few key advantages and disadvantages of functional programming in Java. One of the main advantages is that functional programming can help programmers write more concise code, while at the same time improving readability. Additionally, it helps reduce coding bugs, as well as avoids dealing with complex control flow. On the downside, it can take some getting used to because of its terse syntax and complex functions. Additionally, some may find it difficult to debug, since errors may not be easy to detect.\nAnother disadvantage of functional programming in Java is that it can be difficult to maintain. This is because the code is often written in a way that is difficult to modify or update. Additionally, it can be difficult to integrate with existing code, as it may require a complete rewrite. Finally, functional programming can be difficult to learn, as it requires a deep understanding of the language and its concepts.\nUnderstanding the Basics of Java Functional Programming\nFunctional programming in Java doesn’t require a lot of extra knowledge to start getting into it; however, understanding some of the concepts behind it will be helpful in grasping the language.", "score": 36.54070621926754, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "The concept of state refers to a function's or computation's global or non-local state of memory at any given time. When the output of a function/computation depends solely on its inputs, we say it is stateless — an example of this is combinational logic. Conversely, when the output depends not only on the received input but also on the history of previous executions, we say it is stateful — this is the case in sequential logic. 
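The stateless/stateful distinction just described can be sketched in Python (hypothetical names, mine): the stateless function is a pure mapping of input to output, while the stateful one also depends on the history of previous calls.

```python
def stateless_double(x):
    # Like combinational logic: the output depends solely on the input.
    return 2 * x

class Accumulator:
    # Like sequential logic: the output also depends on the history of
    # previous executions, kept in state that outlives each call.
    def __init__(self):
        self.total = 0

    def step(self, x):
        self.total += x
        return self.total

acc = Accumulator()
```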
Notice that variables local to some scope are not considered state, as they die together with their scope and thus cannot keep a history of previous executions. So we only consider global and non-local variables to be state.\nImperative languages describe computations as a series of modifications to state. Pure functional languages like Haskell, on the other hand, describe them as function evaluations and have no concept of state. While being stateless comes with several advantages (I won’t talk about them), certain programs require keeping track of previous executions. The problem then becomes: how do we implement state in a language that cannot, by definition, have state? We simulate it. From the point of view of the function caller — living in an outer scope relative to the function callee — functions being called can do only two things: take input, and return a value. Therefore the state has to come from input parameters, and can only survive a function return if returned by the callee once it has been altered. The caller is then responsible for sequentially feeding the state to a function, collecting the new state from the callee upon evaluation, and feeding it to the next function in the execution chain. Let me illustrate it through an example.\nConsider the following example, where we implement a tiny interpreter for a tiny language that supports only variable lookups and variable updates. To represent memory we use a list of pairs of strings, where the first element in the pair is the variable name, and the second one is the value for that variable name:\ntype Memory = [(String, String)]\nWe start by writing a function for looking up elements in memory — given a variable name and memory, we get its corresponding value (the second element in the pair whose first element matches the variable name).", "score": 35.217607066110446, "rank": 15}, {"document_id": "doc-::chunk-1", "d_text": "It is easy to take shortcuts, to be lazy. 
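A rough Python analogue of the memory representation above, with state threaded explicitly through pure functions (the `update` helper is my own addition, suggested but not shown by the text):

```python
# Memory is a list of (name, value) pairs, mirroring the text's
# "type Memory = [(String, String)]".
def lookup(name, memory):
    for key, value in memory:
        if key == name:
            return value
    return None

def update(name, value, memory):
    # Pure update: return NEW memory rather than mutating the old one.
    # The caller threads this result into the next call in the chain,
    # which is exactly the state-passing discipline described above.
    return [(name, value)] + [(k, v) for k, v in memory if k != name]

mem = update("y", "2", update("x", "1", []))
```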
It is not easy to refactor your code to the point where it is clear what each line in the program does. It is not easy to create a code base that you can look at five years later and still think ‘Oh, this does that. That makes sense’.\nIn my opinion, the functional programming approach offers a great way to achieve simplicity. A small function with a single job is simple. The data which serves as input for the function is also simple. The result which is produced from executing the function is also simple. When these three elements come together in the program execution, I can therefore still comprehend each individual part. With functional composition and higher-order functions these simple components can become much more powerful without losing their simplicity. By breaking a program down into composable functions with a clear purpose, complexity can be minimized.\nObject-orientation deals with complexity by encapsulating it and providing a meaningful abstraction of the problem for the user. This is a powerful tool as long as it is possible to ensure that all of the data really has been encapsulated in the desired scope and is not able to be modified from an outer source. However, since immutability is not the default, it is difficult to ensure that objects are only modified within the correct scope. For instance, as soon as a list is exposed in Java, this list can be modified by a subclass or an external library, which would produce unexpected behavior at a later point in the program execution. This adds an element of complexity to the language which would not be there in a language where all data is immutable.\nThis does not mean that a program written in an object-oriented programming language is inherently complex.\nI personally am a functional programming enthusiast who spends an enormous amount of her time writing Java programs (a typical object-oriented language). However, I still attempt to make my code as simple as possible by using the Java 8 Stream API, by eliminating mutable state as much as absolutely possible (hello
However, I still attempt to make my code as simple as possible by using the Java 8 Stream API, by eliminating mutable state as much as absolutely possible (hello\nfinal keyword!), and most of all by refactoring and rewriting and reviewing my code over and over and over again until I am satisfied with the result (and then I will probably go over it again).\nIt is also not true that just because a program is written in a functional programming language it is automatically simple.", "score": 33.84446538439304, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "Posted by Pete McBreen 19 May 2019 at 04:00\nRecently I have been looking at Erlang and Elixir, and in the process was reading Coders at Work and came across this quote from Joe Armstrong (pg 213)\nI think the lack of reusability comes in object-oriented languages, not in functional languages. Because the problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.\nIf you have referentially transparent code, if you have pure functions –all the data comes in its input arguments and everything goes out and laves no data behind – it’s incredibly reusable. You can just reuse it here, there and everywhere…\nSomething to think about.", "score": 33.41364007783064, "rank": 17}, {"document_id": "doc-::chunk-3", "d_text": "Bryan: Oh, most code in Haskell. Most code in other languages, you can't make any such assumptions about. In Haskell because all of your data is by default immutable, there is no real such thing as a global variable where in between one call to a function and the next call to a function, a global variable that is used by the function could change its value thus changing the output of the function. That just doesn't arise. 
You've got a much stronger guarantee that a function is going to behave itself.\nThis plays into comprehensibility, into readability, and it also plays into testing. If you don't need to set up some sort of elaborate mock framework or preload a database with data because most of your code is pure, then you can test your code in a very different way than you would even in a more traditional language. In traditional languages, most testing is built around the idea of, say, unit testing or load testing. In Haskell you certainly can do those kinds of things, but we also tend to emphasize a more generative way of testing, randomized testing. Because functions are only going to produce data based on their inputs, it's much more straightforward to say with some confidence, \"Yes, I believe that I've actually tested this under pretty stringent conditions.\"\nYou can express boundary conditions and not worry about anything else.\nBryan: Yeah. Those factors all play into why Haskell or a language with these kinds of properties would be of direct interest to somebody who cares about the day-to-day practice of programming. When you come at it from a perspective of \"I am going to construct most of my programs and most of my code in a way that is easy to understand, easy to test, easy to refactor and the language's implementation helps me in these ways\", then a whole lot of mundane tasks become automatable or things that you can have your tools help you out with.\nI can imagine a lot of programmers saying I understand those concepts, but Haskell is difficult to learn or it's difficult to find Haskell programmers or it doesn't have libraries available. To what degree do you believe this element of purity is achievable in a language such as C# or Java or Python or Perl or Ruby?\nBryan: It's doable to an extent. There's a guy named Walter Bright who's been working for several years on a language called D. 
He's put together D as a better successor language to C than C++.", "score": 32.808505896899135, "rank": 18}, {"document_id": "doc-::chunk-4", "d_text": "Add Some Functional Spice to Make Your Code Tastier\nMany people use LINQ for years without even realizing they’re using functional programming concepts. I take this as proof that functional programming isn’t beyond the capabilities of the enterprise developer who lacks a strong background in math.\nSome of the concepts presented here are neither new nor restricted to functional programming. The benefits of distinguishing between functions that produce side effects and those that don’t are the basis of principles like command-query separation (CQS), for instance.\nThe goal of this post was not to teach you functional programming. This is honestly beyond my capabilities, as I’m still studying it myself. And besides, there are awesome resources for that purpose if you want to learn more.\nInstead, what I wanted here is to give you a little taste of what a functional style can do for your code, which is to make it more expressive, concise, and declarative. Now it’s up to you to try to apply the functional mindset to the code you write.", "score": 32.43591205139811, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "Why Functional: One Reason\nWhy is functional better? So many reasons; here is one: Programming languages are not made for computers, they are made for humans, to formulate generic solutions that computers can then work out in the details. The imperative style formulates the steps the computer has to follow; the functional style formulates the generic solution as such. 
Imperative style wants you to act like a machine; functional style allows you to think like a human.", "score": 32.260675178154756, "rank": 20}, {"document_id": "doc-::chunk-4", "d_text": "For example, in\nlet sqr: int code -> int code = fun e -> .<.~e * .~e>.\nthe meaning of\nsqr e -- that is, the code it generates -- is the multiplication of two copies of the expression generated by\ne. We are sure of that even if we know nothing of\ne except that it is pure. Likewise, in\nlet make_incr_fun : (int code -> int code) -> (int -> int) code = fun body -> .<fun x -> x + .~(body .<x>.)>.\nlet test1 = make_incr_fun (fun x -> sqr .<2+3>.)\n(* .<fun x_24 -> x_24 + ((2 + 3) * (2 + 3))>. *)\nmake_incr_fun body should produce the code for an OCaml function. The result, shown in the comments, confirms our expectations.\nCompositionality lets us think about programs in a modular way, helping make sure the program result is the one we had in mind. We ought to strive for compositional programs and libraries. The trick however is selecting the `best' meaning for an expression. Earlier we took as the meaning of a code generator the exact form of the expression it produces. Under this meaning, compositionality requires the absence of side-effects. Our\ntest1 is pure and so its result is easy to predict: it has to be the code of a function whose body contains two copies of the expression\n2+3. Although the confidence in the result is commendable, the result itself is not. Duplicate expressions make the code large and inefficient: imagine something bigger in\n2+3's place. Furthermore, the duplicated expression in\ntest1 does not depend on\nx and can be lifted out of the function's body -- so that it can be computed once rather than on each application of the function. In short, we would like the result of\ntest1 to look like\nlet test_desired = .<let t = 2 + 3 in fun x_24 -> x_24 + (t * t)>.\nOne may remark that a sufficiently smart compiler should be able to transform the produced code by performing common subexpression elimination and invariant code motion.", "score": 32.04725394856356, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "I've often heard that functional programming solves a lot of problems that are difficult in procedural/imperative programming. But I've also heard that it isn't great at some other problems that procedural programming is just naturally great at.\nBefore I crack open my book on Haskell and dive into functional programming, I'd like at least a basic idea of what I can really use it for (outside the examples in the book). So, what are those things that functional programming excels at? What are the problems that it is not well suited for?\nI've got some good answers about this so far. I can't wait to start learning Haskell now--I just have to wait until I master C :)\nReasons why functional programming is great:\n- Very concise and succinct -- it can express complex ideas in short, unobfuscated statements.\n- Is easier to verify than imperative languages -- good where safety in a system is critical.\n- Purity of functions and immutability of data make concurrent programming more plausible.\n- Well suited for scripting and writing compilers (I would appreciate knowing why, though).\n- Math-related problems are solved simply and beautifully.\nAreas where functional programming struggles:\n- Debatable: web applications (though I guess this would depend on the application).\n- Desktop applications (although it depends on the language probably, F# would be good at this wouldn't it?).\n- Anything where performance is critical, such as game engines.\n- Anything involving lots of program state.", "score": 32.044113119755856, "rank": 22}
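The "very concise and succinct" point in the list above is easy to see in a short, hypothetical Python sketch (names are mine): a whole map/filter/reduce computation fits in one expression.

```python
from functools import reduce

# Sum of the squares of the even numbers below 10, as one expression.
result = sum(x * x for x in range(10) if x % 2 == 0)

# The same computation spelled out with map/filter/reduce.
result_fp = reduce(lambda a, b: a + b,
                   map(lambda x: x * x,
                       filter(lambda x: x % 2 == 0, range(10))),
                   0)
```

In an imperative style this would typically take a loop, a condition, and a mutable accumulator.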
noting that everything in the C# column is also possible in F# (and quite easy to accomplish). There are also things in the F# column which are possible in C#, though they’re more difficult to accomplish. It’s also worth noting that items in the left column are not \"bad\" for F#, either. Objects with methods are perfectly valid to use in F#, and can often be the best approach for F# depending on your scenario.
Immutable values instead of variables
One of the most transformative concepts in functional programming is immutability. It’s often underrated in the functional programming community. But if you’ve never used a language where immutability is the default behavior, it’s often the first and biggest hump to get over. Nearly all functional programming languages have immutability at their core.
In the previous statement, the value of 1 is bound to the name x. x now always refers to the value 1 for its lifetime, and cannot be modified. For example, the following code does not reassign the value of x. Instead, the second line is an equality comparison to see if x is equal to x + 1. Although there is a way to mutate x by making it mutable and using the <- operator (see Mutable Variables for more), you’ll quickly find that it’s easier to think about how to solve problems without reassigning values. This allows you to play to the strengths of F#, rather than treat it as another imperative programming language.
We said that immutability was transformative, and that means that there are some very concrete differences in approaches to solving a problem. For example, for loops and other basic imperative programming operations are not typically used in F#.
As a more concrete example, say you wish to compute the squares of an input list of numbers. Here is an approach to that in F#:
Notice that there isn’t a for loop to be seen. At a conceptual level, this is very different from imperative code. We’re not squaring each item in the list.
We are mapping the square function over the input list to produce a list of squared values. This distinction is subtle in concept, but in practice it can lead to dramatically different code. For starters, getSquares actually produces a whole other list.", "score": 31.64589270614483, "rank": 23}, {"document_id": "doc-::chunk-3", "d_text": "The job of the first expression is to carry out that side effect, that is, to add 1 to the lap count for the specified car. The second expression looks at the value we just put in a box to determine the return value.
We remarked earlier that lap isn't a function because invoking it twice with the same argument doesn't return the same value both times. It's not a coincidence that lap also violates functional programming by maintaining state information. Any procedure whose return value is not a function of its arguments (that is, whose return value is not always the same for any particular arguments) must depend on knowledge of what has happened in the past. After all, computers don't pull results out of the air; if the result of a computation doesn't depend entirely on the arguments we give, then it must depend on some other information available to the procedure.
Suppose somebody asks you, \"Car 54 has just completed a lap; how many has it completed in all?\" You can't answer that question with only the information in the question itself; you have to remember earlier events in the race. By contrast, if someone asks you, \"What's the plural of `book'?\" what has happened in the past doesn't matter at all.
The connection between non-functional procedures and state also applies to non-functional Scheme primitives. The read procedure, for example, returns different results when you invoke it repeatedly with the same argument because it remembers how far it's gotten in the file. That's why the argument is a port instead of a file name: A port is an abstract data type that includes, among other things, this piece of state.
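The point about hidden state can be sketched in Python: a procedure that keeps a private counter (in the style of lap above) returns different values for the same argument, while a pure function always answers the same way. The names here are illustrative, not from the original text.

```python
# A lap-counter in the style described above: its result depends on
# hidden state, so it is a procedure but not a mathematical function.
lap_counts = {}  # hidden state shared across calls

def lap(car):
    # side effect: bump the stored lap count for this car
    lap_counts[car] = lap_counts.get(car, 0) + 1
    # the return value depends on what happened in the past
    return lap_counts[car]

def plural(word):
    # a pure function: the answer depends only on the argument
    return word + "s"

print(lap(54))         # 1
print(lap(54))         # 2  -- same argument, different result
print(plural("book"))  # books
print(plural("book"))  # books -- always the same
```

The dictionary plays the role of the "remembered value": remove it and lap can no longer answer, which is exactly why such procedures are not functions of their arguments.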
(If you're reading from the keyboard, the state is in the person doing the typing.)
A more surprising example is the random procedure that you met in an earlier chapter. Random isn't a function because it doesn't always return the same value when called with the same argument. How does random compute its result? Some versions of random compute a number that's based on the current time (in tiny units like milliseconds so you don't get the same answer from two calls in quick succession). How does your computer know the time? Every so often some procedure (or some hardware device) adds 1 to a remembered value, the number of milliseconds since midnight.", "score": 31.15838543795264, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "Suppose we declare that
y = x + 1
[This is not an assignment. Imagine a val in front if that fits your language preferences]. This declares a relationship between x and y, but it does it in a functional style. It would look more functional still if we wrote \"y = add 1 x\", but that is just sugar.
But what if we have y and we want x? Since we are just declaring a relationship it isn't obvious why we have to change this. But in functional style this has to be changed to:
x = y - 1
Functional programs can only run forward, they can't run backwards. Haskell has, of course, a big exception to that: Constructors can be run forward (to construct) or backward (to deconstruct). But perhaps everything that can run backwards should be allowed to (and hence be useable in case statements). In addition to convenience there is a dodgy philosophical point.
When we look at the way the world works it is rather like a giant computer program. And it is particularly like a program in that, on a large scale it pushes forward in an irreversible way. Entropy increases and broken plates never reassemble themselves.
And yet at the core is quantum mechanics which is completely reversible.
Haskell and Mercury are two languages in which the interaction with the external world (including mutable data in memory) is explicit. This is a consequence of the fact that both languages are declarative. In non-declarative languages the textual order of the program specifies an order of execution (optimisers alter that, but they are constrained to preserve the semantics, and are forced to make conservative decisions as a result). In declarative languages the compiler generates execution when required, and the thing that brings about that requirement is interaction with the outside world. Since that interaction is explicit, one would hope that the optimiser is never constrained by worrying about what separately compiled code might do.
So it seems that programming should have three levels.
- A core language that is reversible. This specifies what can be handled by cases, though if you allow multiple returns (as Mercury does) then you can expand that.
- Reducing operations (like summing a list) which are intrinsically irreversible.
- Interaction with the unforgiving world which forces an order of execution.
But maybe it's not quite that simple.", "score": 31.058204307093966, "rank": 25}, {"document_id": "doc-::chunk-4", "d_text": "Developers have to identify concurrent paths and co-ordinate their interactions; something which can be quite difficult to code and fiendishly difficult to debug.
One advantage of FP is that functional code is inherently much easier to parallelise. Since functions don’t access or affect global state and act only on their parameters, you can infer possible parallel paths and have them run concurrently.
A = foo()
B = bar()
Here, the interpreter can run both foo() and bar() in parallel since they are not related and thus are guaranteed not to affect each other.
But wait, there’s more!
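The claim above, that foo() and bar() can safely run in parallel because neither touches shared state, can be checked directly in Python; the function bodies here are illustrative stand-ins, not from the original post.

```python
from concurrent.futures import ThreadPoolExecutor

# Two pure functions: they touch no shared state and act only on local
# data, so running them concurrently is guaranteed safe.
def foo():
    return sum(x * x for x in range(1000))

def bar():
    return sum(x + 1 for x in range(1000))

# Sequential execution...
a, b = foo(), bar()

# ...and concurrent execution give identical results, because neither
# function can observe or affect the other.
with ThreadPoolExecutor(max_workers=2) as pool:
    fa, fb = pool.submit(foo), pool.submit(bar)
    a2, b2 = fa.result(), fb.result()

assert (a, b) == (a2, b2)
```

If foo or bar mutated a shared variable, this equivalence would no longer be guaranteed; purity is what licenses the reordering.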
Both Erlang and Haskell make threading even easier with innovative threading models. Rather than have processes communicate through shared memory where you have to handle locking/concurrency yourself, both help make the process of writing multi-threaded code much easier.
- Erlang (Actor/Message Model) : It’s Unix IPC all over again! You basically have pipes which you use to communicate between processes. Messages are asynchronous and not location specific, so processes can migrate to different machines transparently. Errors are piped to related processes as well, giving you robust error handling. No shared memory, and scaling is trivial. In fact, the possibility of basically almost unlimited scaling across multiple machines is what really draws me to Erlang and has me drooling like an idiot. Write your code properly and you can just keep slotting in boxes. Erlang also has the ability to update code while it’s running, which means theoretically zero downtime if you’re careful. And it’s all actually in use in the telecommunications industry, so all this is real, not vapour.
- Haskell (Software Transactional Memory Model) : Database transactions, but in local memory! Separate threads run within their own transactions and see a consistent view of the world. No need to explicitly lock bits of shared memory; we just let the system handle all the error-prone bits and concentrate on our logic, confident that our threads won’t be stepping on each other’s toes. This is a really powerful abstraction and it’s something that had me smacking my forehead, wondering why I didn’t think of it before. However, I see some fundamental problems with multi-machine scalability. This is a fairly well explored problem in the database world and I’m not sure I want my code doing two-phase commits and roll-backs in a cluster.
Still cool though.
There’s still a long way for me to go yet.", "score": 30.7679382268857, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "Outline of Class 20: Introduction to Functional Programming
Held: Monday, March 9, 1998
- The language concepts we've looked at so far primarily relate to imperative languages, which emphasize basic operations and control.
- As we know, there are a number of other important paradigms for programming. One of the most important is the functional paradigm, in which programs are collections of function definitions and function applications.
- As always, we should try to define our basis of study.
- What is a function?
- A rule that associates members of one set (the domain) with members of another set (the range).
- The domain and range may be the same.
- Each member of the domain is associated with at most one member of the range.
- A (potentially infinite) set of domain/range pairs.
- How can/should/do we define functions?
- By listing all the pairs.
- By providing some sort of general form for pairs (e.g., regular expressions?)
- In terms of already-defined functions.
- By writing \"computer programs\" that generate results.
- How do functions on the computer differ from \"mathematical\" functions?
- In math, we assume that when we apply the same function to the same value, we get the same result. We don't always make this assumption in computer languages. For example, a procedure that reads bytes from a file might be expected to return a different byte each time.
- In math, we often assume that domains and ranges are infinite.
- When people use the term \"functional language\", they often mean more than just the ability to define and apply functions. Definitions of \"functional\" often include
- Functions as basic values.
That is, you can write functions that take other functions as arguments or return functions.
- \"Atomic symbols\" as basic values (so that you can work with symbols in addition to strings, integers, and such).
- Dynamic lists as built-in data structures.
- Different treatment of memory and variables?
- What are some of the key functional languages? Here are some we'll study or that the book describes:
- Scheme is a dialect of LISP. LISP (including its dialects and variants) is perhaps the most widely used functional language for a number of reasons.
- It's the oldest functional language.", "score": 30.505855909990707, "rank": 27}, {"document_id": "doc-::chunk-2", "d_text": "- Basically all the good stuff having immutable data structures and pure functions should give you.
Hopefully this also helps shed some light on why immutability and purity of functions are deemed good things, as well as why Clojure is also such a great language to develop with.", "score": 30.094239204327792, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Call by Sharing
In many ways, call-by-sharing is the most intuitive evaluation strategy. It captures the semantics where the caller of the function shares objects with the function being called. The called function can mutate those objects so long as the objects themselves are mutable. Because the objects are shared, any changes will be reflected in the calling function. This convention is applied to all objects being passed.
For immutable objects, call-by-sharing is indistinguishable from call-by-value. Indeed, the key to functional programming is that aliasing (the result of sharing objects instead of copying them) is always 100% safe. This is achievable by forbidding mutation. A class of powerful functional data structures exists that exploits this efficiency.
These data structures can usually be efficiently updated without destroying the original copy, because the update aliases large parts of the original data structure instead of mutating it.
whereas the following does not mutate the number:
In the first example, the assignment writes 0 into an element of the array, which is a mutation of the array. The array does not become a different object; it is the same object as before, but its first element is changed. In the second example, the syntax n = 0 is very different. It does not mean “change the value of this number to 0”; it means “rebind the name n to the new value 0”. This subtlety is the source of much confusion. In languages with call-by-sharing semantics, names are often distinguished from objects. Names are bound to objects, and a name can be rebound to a different object using an assignment keyword. Assignments do not affect the object the name was originally bound to.
Indeed, if we had written our first example as
then we would have seen a similar result to our second example. Here we are no longer mutating the array; we are now binding the name A to a new array.
There is a wealth of resources online that claim that Java is call-by-value, or that Ruby is call-by-reference. These languages are both call-by-sharing. So are these resources wrong? In fact, the former is not wrong. The latter is a little bit of a stretch, but there is some meaning in the nonsense. Rather, this exposes some nuance about what exactly is a value, and what exactly is a reference.", "score": 29.05634983625901, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "Nevertheless, once you move to a mostly immutable world, you’ll wonder how you ever survived with so much state floating around, running into other bits of state – just like you probably wonder why you ever put up with pointer arithmetic and memory deallocation.
Clojure is built for the real world. Clojure likes – but doesn’t enforce – functional purity.
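The mutation-versus-rebinding distinction described earlier can be demonstrated in a call-by-sharing language such as Python; the names and bodies here are illustrative, not taken from the original article.

```python
def mutate_first(a):
    a[0] = 0        # mutation: the shared object changes, so the caller sees it

def rebind(a):
    a = [0, 0, 0]   # rebinding: only the local name changes; caller unaffected

A = [1, 2, 3]
mutate_first(A)
print(A)  # [0, 2, 3]  -- same object, first element changed

B = [1, 2, 3]
rebind(B)
print(B)  # [1, 2, 3]  -- unchanged: assignment rebound a name, not the object
```

This is why "call-by-reference" is a stretch for such languages: the callee can mutate shared objects, but it can never rebind the caller's names.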
For the cases where it’s necessary to model data using state (see: databases), Clojure gives you safe ways to handle it. And get this: You don’t need a PhD in Monads. (Huzzah!) Now, don't get me wrong – I love Haskell. But as an engineer, I sometimes feel like Clojure is the forgiving parent, and I’m the semi-responsible teenager. It will try to help me live a good life when it can … but when I sneak into the beer cabinet, it looks the other way. (After drinking it, by the way, I immediately feel ashamed. Because really … Smirnoff Ice?)
Clojure redefines the fundamental units of code. Instead of thinking in terms of loop boilerplate, guard clauses, “what if this isn’t initialized yet?”, off-by-one errors, design patterns, serialization, or semicolons, you get to think about data transformations. About defaults that are ultimately flexible. Rubyists know that their language effectively got rid of low-level for loops. In the same way, Clojure gets rid of imperative iteration in favor of declaration. Your thoughts shift away from place-oriented ideas like memory addresses and gravitate to data structures and functions like filter. Your “class hierarchy” turns out to be a type system that happens to also lock away well-meaning functions into dark dungeons (more on that in another article), and getting away from that is freeing. Transitioning to an open, abstract world means the intention of your code isn’t obscured by artifacts of computing, by the fact that your code is using and reusing finite memory. Your code is modeled as data, too, by the way. That means you can manipulate it using the same functions you use to manipulate strings and lists and maps. This shift dramatically simplifies and levels-up the tools available for solving new problems.
Clojure code tends to be insanely beautiful.
More importantly, though, beautiful code happens to be both readable and efficient (after you learn to love or ignore the parentheses).", "score": 28.72575749808111, "rank": 30}, {"document_id": "doc-::chunk-2", "d_text": "(If you’re skeptical, you should read up on persistent data structures.) Speaking of beauty, in Clojure, the physical shape of your code is a nice indicator of how clean it is. For example, when you use side effects in code (like writing to disk, logging, throwing exceptions), it becomes pretty obvious from indentation and !s in method names. And an excess of parentheses at the end of a function call tells you that you may want to break it up into smaller functions. So when you’re looking to clean up code, a quick visual scan can often tell you where to find candidates for refactoring.
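The persistent-data-structure idea mentioned above can be approximated in Python with non-destructive updates: instead of mutating a structure, produce a new one and leave the original intact. This is only a sketch; real persistent structures in Clojure share internal tree nodes for efficiency, which flat tuples do not.

```python
# Non-destructive "update": build a new tuple, leave the original alone.
def assoc(tup, index, value):
    # models the persistent interface: the caller gets a new version,
    # and every old version remains valid and unchanged
    return tup[:index] + (value,) + tup[index + 1:]

v1 = (1, 2, 3)
v2 = assoc(v1, 0, 99)

print(v1)  # (1, 2, 3)   -- original is untouched
print(v2)  # (99, 2, 3)  -- the "updated" version
```

Because no version is ever mutated, any number of threads can read v1 and v2 freely without locks, which is the property the surrounding text is praising.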
On top of that, each file states its own dependencies, so you’re never left guessing where that strange symbol came from. (\"Did one of my libraries monkey-patch my code? Is that a local var or a method name? Did I inherit that, or is it in a mixin?\") Given this explicitness, you can move between abstraction layers when its appropriate, but you’re rarely forced to. Each layer also stands on its own. This means it’s relatively easy to ramp up new engineers on one part of your codebase at a time.", "score": 28.719800602615038, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "A cool feature of a functional language like F# is the ability to partially apply a function. Take this add function:\nlet add x y = x + y\nBy partially applying this function we can create new functions like this:\nlet plus1 = add 1 let plus5 = add 5\nThe functions we have created plus1 and plus5 both have the signature int -> int. We have partially applied add to a single parameter leaving us with a function that takes one more int and returns us an int. This is a really neat idea.\nBuilding on this if we want to do the same thing with subtract we find that it does not quite work:\nlet subtract x y = x - y let minus1 = subtract 1 printfn \"%i\" (minus1 7)\nThe code above prints -6 when we wanted our minus1 function to subtract 1 from the argument given. This is because unlike with add it matters the order that you give the arguments to subtract. We have another cool trick up our sleeves to solve this:\nlet subtract x y = x - y let swap f x y = f y x let minus = swap subtract let minus1 = minus 1 printfn \"%i\" (minus1 7)\nThe minus1 function above now does what we would expect. To make this work we defined a swap function that swaps the order of the arguments. We can then pass our subtract function to our swap function to produce a new subtract function that takes its arguments in the opposite order. 
We can now partially apply the new function minus with 1 to give us a new function minus1 that works how we would expect. Having functions as a first class citizen really does lead to some neat code.", "score": 28.04678289279735, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "Previously we began exploring some theory behind functions. In this post we will look at practical techniques for working with functions.
Working with functions
Using functions is unsurprisingly the bread and butter of functional programming, so let us see if we can define a slightly more complex function without bumping into too many new concepts. We are going to define a function that cleans up an input string and then saves it to disk.
// some helper string functions
// string -> string
let trim (s:string) = s.Trim()
// string -> string -> unit
let write path content =
    let sanitized = trim content
    File.WriteAllText(path, sanitized)
// use the write function
write \"/path/to/file.txt\" \"Some text to write to file\"
This is our first multi-line function and let us go through a few things that may not have been immediately obvious from the single line function. Firstly, note that the body of the function is defined by the indent. For the function the size of the indent does not matter, as long as it is the same throughout the scope. We will dive into this a bit more when we touch on scope in a later post on control flow. Secondly, the value of the last expression is what is returned from the function, in this case unit. You don't need to explicitly use return like in many other languages. This is because functions ALWAYS return something so the compiler can assume that the last expression result is the return.
We are going to pass a function into the write function that will do the sanitization, thus allowing the client of the function to decide what \"sanitized\" means.
// ('a -> string) -> string -> string -> unit
let write sanitizer path content =
    let sanitized = sanitizer content
    File.WriteAllText(path, sanitized)
// use the write function
write trim \"/path/to/file.txt\" \"Some text to write to file\"
write (fun (s:string) -> s.Substring(0, 140)) \"/path/to/file.txt\" \"Some text to write to file\"
See how we just passed the trim function in as an argument? This of course could be any function as we see in the second usage.", "score": 27.72972038949445, "rank": 33}, {"document_id": "doc-::chunk-2", "d_text": "For example, instead of writing
I can merely write
The latter is instantly more readable, and less typing to boot.
So if it's just a convenient short cut, why all the fuss?
Well, it turns out that because function types are curried, you can write code which is polymorphic in the number of arguments a function has.
For example, the", "score": 27.350253201930226, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Have you ever used a pair of scissors instead of a hair clipper to shave? Ever estimated the length of “my arm is about 50 cm long, so that’s about two arm-lengths?”
There is elegance in using the right tool for a job. And, just like a hair clipper or a measuring tape, programming languages are tools, and using the right programming paradigm for solving a specified set of problems determines a solution’s effectiveness.
Ralf Herbrich from Microsoft Research had over 1,000 lines of C# code for an application that attempted to analyze millions of feedbacks from users, and on a standard desktop, it took forever…well almost…to process the millions of input data. He rewrote the application in F# within 2 days and the results were astonishing!
Less than 10 minutes and the millions of input data items were already processed, and it took only 100 lines of code including comments!
Ralf could also process over 10,000 log lines per second, in an application that parsed over 11,000 text files stored in over 300 directories, importing the data into an SQL database, all in 90 lines of F# code written within just hours!
…and this was way back before the F# language platform had matured fully!
Fig 1.1: Evolution of Programming Paradigms
So what is it about F# and functional programming that makes it a vital programming paradigm for business applications?
Explore with me, in this and consecutive weekly blog series, why F#, or at least its functional way, is set to take over normal development from C# and will be a must-have skill for every quality-, time- and performance-conscious .NET developer and/or respective teams.
Functional Programming already lies at the heart of dominant programming frameworks including:
• React [avoiding shared mutable DOM is the motivation for its architecture],
• AngularJS [RxJS is a library of utility operators that act on streams by way of higher-order functions],
• Redux and ngrx/store and more.
Functional programming today is a close-kept secret amongst researchers, hackers, and elite programmers.
Most recently, functional programming is part of the rise of declarative programming models, especially in the data query, concurrent, reactive, and parallel programming domains.", "score": 27.349577246307803, "rank": 35}, {"document_id": "doc-::chunk-1", "d_text": "If you are a low-level programmer, it may help to think that functional programs have just the same amount of state, but they keep it all in stack frames rather than in a shared heap.\nThere’s a whole vocabulary relating to this philosophy: “idempotency,” “pure functions,” “side-effect-free,” etc., but the principle is very simple: Functions are easier to understand if they have fewer “moving parts.” Although the philosophy can be universally embraced (one can program functional assembly language), language features can help. In Scala, one can declare a symbol as being a “var”—a traditional variable that can be reassigned—but one is encouraged to use a “val,” which cannot be reassigned (similar to the “final” modifier familiar to Java programmers).\nAn effect of this philosophy is that functions should have all of their context explicit in the argument list: If you want a “toggle()” function on a Lightbulb, you don’t refer to a mutable field, you pass the old state in and return the new state. In Java, the functional signature might look like Boolean isOn(boolean wasOn). (Or you’d use the Flyweight pattern to return a shared but immutable on-or-off Lightbulb.)\nChanging the argument list can be more cumbersome than using hidden state, but as I’ve argued in the past, the rise of unit testing has trained a generation of developers in the practice and shown them the advantages. On the other hand, a corollary of this is that the unit-testing suite of a functional module tends to “lock in” the API more firmly than with traditional object-oriented design. You can avoid such lock-in by having a single argument that is a bag of “context” values, but that strikes me as abandoning the governing philosophy. 
If explicitness is beneficial, stuffing explicitness away in a bag called “context” is cheating.
Just as every mainstream language has or is gaining first-class functions, most good developers have a sense that avoiding mutable state and making context explicit are good things. So again, there's nothing particularly magical about the syntax or semantics of Scala or any other functional language. They just facilitate these approaches.
And on the grand question of whether a sophisticated static type system such as Scala is a benefit or a hindrance compared to a flexible dynamic type system such as is found in Ruby?", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-1", "d_text": "This time, it should add five to each element in an array. Ignoring the rule of three, you jump right ahead into a generalized version, parameterizing the number to be added:
Then yet another request comes in. Now you must write a function that will multiply each element of the given array by, let's say, three. I won't add the code sample now because I'm sure you've got the picture. By now, you should know better than to hardcode the number, so you'd probably jump ahead to a general version right away. Even then, some duplication would still exist: the loop itself. Hmm…what if you could keep just the loop and instead parameterize the action to be applied on each item?
The Functional Way
Take into consideration what you've just read about pure functions—and also your previous knowledge of programming best practices in general—and think of ways the code could be improved.
From my perspective, the main problems are:
- The code is too specific. It can't be easily changed to accommodate other transformations being applied to the array elements. It just performs a sum, and that's it.
- Too much boilerplate. Look at the previous sample again. Count the lines.
There are seven, of which only one really concerns itself with carrying through the business logic of the method.
How would the functional way improve on this? That's the way I'd write the first example in F#, for instance:
I'm assuming here that \"numbers\" is a sequence of integers I've got somehow. Then I use the map function on the Seq module, passing the sequence as a parameter, along with a function that takes an int and adds three to it.
The Functional Way, .NET/C# Flavor
.NET implements the map operation in the form of the \"Select\" LINQ extension method. So you could rewrite the F# example above like this:
var result = numbers.Select(x => x + 3);
One important point that needs explaining is that the type of the resulting sequence doesn't need to match the type of the source sequence. Do you have a list of 'Employee' and need a sequence of ints (containing, for instance, their IDs)? Easy peasy:
I think filter is, hands down, the easiest operation of the bunch.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-1", "d_text": "varLookUpList :: String -> Memory -> Maybe String
varLookUpList name [] = Nothing
varLookUpList name ((n,v):xs) = if name == n then Just v else varLookUpList name xs
Next we need to return the modified data structure to the caller so that the caller can pass it to the new callee, and thus preserve the modifications that have been made during the current call. Perhaps this is not the case when we are just “looking up” values, but in general it is.
To do this we need an extra input to receive the state, and an extra output to give back the mutated state together with whatever output the function was returning anyway.
In general we want to turn something like this (for functions that look up values):
function :: a -> state -> b
or like this (for functions that alter a data structure):
function :: a -> state -> state
into something like this:
function :: a -> state -> (b, state)
Our example then gets modified as follows — Notice we return back the memory unmodified:
varLookUp :: String -> Memory -> (String, Memory)
varLookUp name mem = case varLookUpList' name mem of
    (Just s) -> (s, mem)
    Nothing -> (\"Not found\", mem)
  where
    varLookUpList' :: String -> Memory -> Maybe String
    varLookUpList' name [] = Nothing
    varLookUpList' name ((n,v):xs) = if name == n then Just v else varLookUpList' name xs
Updating state is an assignment's side effect. Other than that, assignments evaluate to the value of the variable on the left hand side of the assignment… at least in our language.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-3", "d_text": "Because of the overhead of doing something concurrent, you want to ship it off to be done somewhere else, you want it to be of a certain size: \"You must be this big to be worth the overhead of shipping somewhere.\" We're driving that overhead down. But today, that's too fine-grained for most applications of exploiting that.
But the other area that functional languages are great at (the pure ones) is immutable state. You copy, you modify a vector, one element in it, we just got a whole new vector with that one element change, which is great. And immutable, you need no locking, it's wonderful—except that there are pure functional languages like that, and then there are functional languages that people actually use, for commercial code. Because you just aren't going to copy every—
Ted: [Laughs.]
You just earned a hate mail from somebody on that line.\nHerb: I know! But the reality is that everybody has to make exceptions, because, for efficiency, you’re just not going to copy a million-element vector every time you change an element, so there’s always a compromise. Immutability is great, but there needs to be that balance. The one thing you see coming from functional languages that are coming into all languages are Lambdas and closures. C#, anonymous delegates; Java is working on it; and, as you know, C++0x.\nBjarne: I’d just like to add to this: The functional programming community has owned a huge chunk of the educational establishment for 30 years. They have explained to generations of programmers why it was wonderful and has never gotten traction in industry. So, from my perspective, there’s lots I like, but I’m not going to bet everything on being able to go functional, because lots of people (probably smarter than me) have done that and failed.\nBjarne: We’re trying to adopt what can fit with other things. This is why I sort of talk about multi-paradigm programming, trying to figure out how to combine traditional C styles with object-oriented programming and generic programming; finding a sort of (if you’ll excuse me) \"sweet spot\" in the space, where you can do a lot of things—where you can do it with greater safety, easier writing, and still good performance. It’s tricky.\nHerb: One thing about multi-paradigm development, C++ has always been big on that.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "One aspect I enjoy about functional programming is the immutable and static nature of data. Rather than having to think about how an object changes over time, often I get to think about data that is static, unchanging, which is easier to visualize. Sometimes I even think of functions not as a process, but as a static relation between input and output. 
This can often be done with probability distributions, for example.\nTraversable functors, which are functional programming’s answer to iterators, appear by definition to require dynamic behaviour to understand them.\nTraversable functors come with a function\ntraverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)\nThe traverse function takes an applicative action h :: a -> f b and uses it to iterate through all the elements of the container t a, producing a new container t b while sequencing all the applicative effects of the action.\nIn order for traversals to be proper, they need to satisfy three laws. Laws one and two are analogous to the functor laws:\ntraverse Identity = Identity\ntraverse (Compose . fmap g . f) = Compose . fmap (traverse g) . traverse f\nThe zeroth law is more difficult to state, but is easier to prove. It says that for every pair of applicative functors F and G, if eta :: F a -> G a is an applicative transformation, which means\neta (pure x) = pure x\neta (f <*> x) = eta f <*> eta x\nthen eta commutes with traverse, which means\neta . traverse f = traverse (eta . f)\nThe zeroth law is a free theorem, which effectively means one can always take it for granted for any traversal function one writes.\nThere is a static way of understanding a traversable functor satisfying these laws. Recently I have been working with Mauro Jaskelioff to prove that every traversable functor is a finitary container. 
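The finitary-container reading can be sketched in Python for the simplest container, a list; `decompose` and `refill` are hypothetical names for illustration, not library functions:

```python
# A list, viewed as a finitary container, separates into its contents
# plus a way to rebuild a container of the same shape (here, length).
def decompose(container):
    contents = list(container)
    def refill(new_values):
        new_values = list(new_values)
        assert len(new_values) == len(contents), "shape must be preserved"
        return new_values
    return contents, refill

contents, refill = decompose([10, 20, 30])
updated = refill(v + 1 for v in contents)   # [11, 21, 31]
```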
For those eager for a preview, a Coq proof is available.\nThe upshot of all this technical work is that the traverse function is secretly a recipe for simply separating out from a container t a a list of the values it contains.\nThis separation allows one to update or replace the values of the container with new values.\nThis static understanding has nothing to do with sequencing effects.", "score": 26.407663910945836, "rank": 40}, {"document_id": "doc-::chunk-1", "d_text": "In fact, all of the functions in this example will have a similar format:\nInput: a Stack plus other parameters\nOutput: a new Stack\nNext, what should the order of the parameters be? Should the stack parameter come first or last? If you recall the discussion of designing functions for partial application, the most changeable thing should come last. You’ll see shortly that this guideline is borne out.\nFinally, the function can be made more concise by using pattern matching in the function parameter itself, rather than using a let in the body of the function.\nHere is the rewritten version:\nlet push x (StackContents contents) = StackContents (x::contents)\nAnd by the way, look at the nice signature it has:\nval push : float -> Stack -> Stack\nAs we know from a previous post, the signature tells you a lot about the function. In this case, I could probably guess what it did from the signature alone, even without knowing that the name of the function was \"push\". This is one of the reasons why it is a good idea to have explicit type names. 
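The same shape carries over to other languages; a minimal Python sketch, using a tuple so the stack value itself cannot be mutated (names are illustrative):

```python
# push returns a NEW stack; the input stack is left untouched.
def push(x, stack):
    return (x,) + stack

empty_stack = ()
stack_with1 = push(1.0, empty_stack)
stack_with2 = push(2.0, stack_with1)
# stack_with2 == (2.0, 1.0) and empty_stack is still ()
```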
If the stack type had just been a list of floats, it wouldn't have been as self-documenting.\nAnyway, now let's test it:\nlet emptyStack = StackContents []\nlet stackWith1 = push 1.0 emptyStack\nlet stackWith2 = push 2.0 stackWith1\nWith this simple function in place, we can easily define an operation that pushes a particular number onto the stack.\nlet ONE stack = push 1.0 stack\nlet TWO stack = push 2.0 stack\nBut wait a minute! Can you see that the stack parameter is used on both sides? In fact, we don’t need to mention it at all. Instead we can skip the stack parameter and write the functions using partial application as follows:\nlet ONE = push 1.0\nlet TWO = push 2.0\nlet THREE = push 3.0\nlet FOUR = push 4.0\nlet FIVE = push 5.0\nNow you can see that if the parameters for push were in a different order, we wouldn’t have been able to do this.", "score": 26.357536772203648, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "I don't think I quite understand currying, since I'm unable to see any massive benefit it could provide. Perhaps someone could enlighten me with an example demonstrating why it is so useful. Does it truly have benefits and applications, or is it just an over-appreciated concept?\n(There is a slight difference between currying and partial application, although they're closely related; since they're often mixed together, I'll deal with both terms.)\nThe place where I realized the benefits first was when I saw sliced operators:\nIMO, this is totally easy to read. Now, if the type of\nYou mentioned C# lambdas in a comment. In C#, you could have written\nIf you're used to point-free style, you'll see that the\nwhich is awful due to the lack of automatic partial application with C# lambdas. And that's the crucial point to decide where currying is actually useful: mostly when it happens implicitly. 
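Python has no automatic partial application, but functools.partial makes the "fix some arguments now, supply the rest later" idea from the discussion explicit (a small illustrative sketch):

```python
from functools import partial

def add(x, y):
    return x + y

add3 = partial(add, 3)           # fix the first argument
print(add3(10))                  # prints 13

# a partially applied map: an increment over any sequence
inc_all = partial(map, lambda n: n + 1)
print(list(inc_all([1, 2, 3])))  # prints [2, 3, 4]
```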
For me, the benefits sum up to shorter, more readable and less cluttered code -- unless some abuse of point-free style is done with it (I do love\nAlso, lambda calculus would be impossible without using curried functions, since it has only one-valued (but therefore higher-order) functions.\n* Of course it actually in\nUpdate: how currying actually works.\nLook at the type of\nYou have to give it a tuple of values -- not in C# terms, but mathematically speaking; you can't just leave out the second value. In Haskell terms, that's\nwhich could be used like\nThat's way too many characters to type. Suppose you'd want to do this more often in the future. Here's a little helper:\nLet's apply this to a concrete value.\nHere you can see\nFortunately, most of the time, you don't have to worry about this, as there is automatic partial application.\nIt's not the best thing since sliced bread, but if you're using lambdas anyway, it's easier to use higher-order functions without using lambda syntax.", "score": 25.958604324778612, "rank": 42}, {"document_id": "doc-::chunk-2", "d_text": "For example in F# if you write\nthen that's it - x is one forever and ever. This is immutability of reference or value depending on the type of data involved.\nHold on a moment - doesn't banning variables mean that you can't write things like\nYou can't increment a variable because there are no variables. Once a symbol is bound to a value it doesn't and can't change.\nOK, so how do functional programmers do things that "normal" dysfunctional programmers take for granted - like count how many times something happens or add up the first ten integers?\nThe answer is that in functional programming no symbol is ever bound to a value until its final value is known. This is the "frozen time" approach of math.\nTake, for example, the problem of adding up the first ten integers. 
In most programming languages you would write something like:\nThat is, you would use a for or some other sort of loop to form a running sum until you had the answer, i.e. 45. The symbol "total" would change its value all through the loop until it got to the final answer. It is a machine that whirs into life to loop round counting things.\nIf you want to make symbols immutable then you can't do things in this way. You have to use recursion to put off binding any intermediate results until you have the final answer.\nThe way to do this is to suppose that we have a function that returns the sum of integers from x to 9, e.g. sumto9(x). Well it's obvious that sumto9(9) is 9, and it is also obvious that sumto9(8) is 8+9, or 8+sumto9(9). You can also see that this generalizes to sumto9(x) = x + sumto9(x+1), with sumto9(9) defined as 9. Writing this as a recursive function gives:\nfunction sumto9(x) {\n  if (x == 9) return x;\n  return x + sumto9(x + 1);\n}\nNow you can write:\ntotal = sumto9(0);\ntotal never has to be set to an intermediate value, it just gets bound to its final value in one go.\nYou could argue that all of the "variables" of the process are now in the recursion, but notice that x doesn't change - it just gets created in a new context each time the function is called. 
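The same recursion transliterates directly into Python; note that no name is ever reassigned, since each call binds a fresh x:

```python
# Recursive sum of the integers from x up to 9: no mutable accumulator.
def sumto9(x):
    if x == 9:
        return x
    return x + sumto9(x + 1)

total = sumto9(0)   # 0 + 1 + ... + 9 == 45
```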
That's the whole point of function parameters - they allow things to change by being re-created - see object immutability discussed earlier.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "I embraced functional programming through Scala, starting with the great Martin Odersky’s course on Coursera, following with the Reactive Programming course on Coursera (a second edition of which will start shortly and I really recommend you sign up and follow it), and later on working with the Play framework and Akka streams.\nI spent a few years programming in Java in a purely imperative way, and as soon as I could understand the functional approach, I realized it is a great way to focus on the essence of the problem, and it provides a more organic way to decompose reasoning and handle complexity.\nOf course the language itself doesn’t guarantee that the approach is functional: if you want you can write Scala almost like Java without semicolons. But despite not being purely functional and allowing mutable variables, Scala provides all the features to put a purely functional approach into practice. Moreover the syntax is very similar to Java so the transition from Java, from the language point of view, is pretty smooth.\nWhen I saw the first examples of functional code I appreciated the fact that it provides the language tools to model mathematical functions directly. For example a mathematical function $f$ from $A$ to $B$:\nis translated into a definition of a function in Scala as (the ??? means that the function is still undefined):\nFunctions in Scala are first class citizens, and they can be passed as arguments to other functions. 
This allows us to compose things in a very straightforward way that procedural programming simply wouldn’t allow.\nWe know the explicit way of finding N given $\Delta x$ and $\pi$, however let’s exploit a great feature of Scala: Streams and lazy evaluation (see http://www.scala-lang.org/api/2.11.5/index.html#scala.collection.immutable.Stream).\nWe can define the Stream of $x_n$ as a potentially unbounded sequence of doubles, with the guarantee that the next number will be generated on demand, lazily, only when required.\nHere we did two things: we created a Stream, where no value is allocated yet and all of the values are created on demand, and we defined it with a def and not with a val in order to save memory allocation once we have consumed the stream, as we want just the $x_n$ to be produced to generate the sum and we don’t want to retain it afterwards.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "Do you think Computer Science equals building websites and mobile apps?\nAre you feeling that you are doing repetitive and not so intelligent work?\nAre you feeling a bit sick of reading manuals, copy-pasting code and poking around until it works, all day long?\nDo you want to understand the soul of Computer Science?\nIf yes, read SICP!!!\nIn the FP world people bash assignment. They say it is the root of all evil. But is it really true?\nFirst we have to ask: why assignment? The answer is simply to achieve better modularity in program design. Let’s see an example.\nSuppose we are designing a counter, which simply starts from 0, and increases by 1 every time it is called.\nLet’s first implement it using assignment.\n(define (make-counter) (let ((count 0)) (lambda () (set! 
count (+ count 1)) count)))\nmake-counter is a constructor, which returns a lambda that has access to a local variable count.\nNow let’s try to use it inside an exponential function.\n(define (exp x n counter) (if (> (counter) n) 1 (* x (exp x n counter))))\nI know this is a bit contrived. But it is still a valid implementation and kind of straightforward.\nNow, let’s implement the counter without using any assignment.\n(define (counter current-count) (+ current-count 1))\nA counter is just a function that increases the current count by one, fair enough.\nNow let’s use it in the same exponential function.\n(define (exp x n counter current-count) (let ((new-count (counter current-count))) (if (> new-count n) 1 (* x (exp x n counter new-count)))))\nNotice that the function takes one more parameter, current-count. And it is always 0 when calling the function, e.g. (exp 4 2 0). Imagine if a procedure A needs to know to put 0 as the current-count. In other words, the internals of counter have leaked into the caller.\nIf you think this is not too bad, let’s tweak the contrived example to be even more contrived.\nSuppose that we want to control the speed of the counter. Say we want to increase by 3 on each count.\nIn the assignment version of the counter, we just change to (set! count (+ count 3)). No other place needs to change.\nWhile in the non-assignment version.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "- It includes "nonfunctional" components which support traditional\n- FP was defined by John Backus to illustrate some of the possibilities of programming with functions and no notion of an underlying architecture. FP provides no assignment or direct access\n- ML is a popular functional language originally developed as part of a program verification system. 
It is currently known for its use of modern language concepts (an excellent type system, exceptions, polymorphism, etc.)\n- Haskell is one of the newest "big" functional languages. Haskell is notable for making no compromises on functional purity -- there are no side-effecting operations in Haskell. Haskell also uses an interesting evaluation strategy called lazy evaluation.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "fun x y z -> ... is more efficient than the ``tupled'' (or ``uncurried'') form\nfun (x,y,z) -> ...\nThe bytecode compiler ensures that a complete application of a curried function (that is, arguments are provided for all parameters -- no partial application) executes without heap allocation. In contrast, applying an uncurried function to a tuple of arguments always heap-allocates the tuple, which is costly both because of increased memory traffic (writes to fill the tuple, loads to access its components) and increased load on the garbage collector.\nThe native-code compiler optimizes both curried and tupled functions, avoiding heap allocation and using registers to pass the parameters.\nlet rec f x y z = ... f x y e1 ... ... f x y e2 ...\nwhere all recursive calls have the same x and y parameters as on entrance, is it worth rewriting it as below?\nlet f x y z =\n  let rec local_f z = ... local_f e1 ... ... local_f e2 ...\n  in local_f z\nThe short answer is: it depends, but when in doubt don't do it.\nIt depends on the number of invariant parameters: parameter passing is cheap, so don't bother making a local function if you have only one or two invariant parameters.\nIt also depends on the expected number of recursive calls of the function: building the closure for local_f adds a constant overhead to the initial call of f, which can be overcome by the fact that recursive calls to local_f are more efficient than recursive calls to f if there are many recursive calls; but if f does not recurse very deeply, the cost of building the closure for local_f may make the ``optimized'' form less efficient than the naive form.\nA for loop is slightly faster than the corresponding recursive function (and a lot more readable too!). For numerical iterations (ints and floats), loops are probably slightly more efficient. On the other hand, assignment to a reference holding a heap-allocated data structure can be relatively expensive (due to the generational garbage collector), so it is better to stick with recursive functions when working over data structures.\no(memory overhead) parameter. The major collector is incremental: it does a ``slice'' of major collection each time a minor collection occurs. To decide how much work to do at each minor collection, it estimates the allocation rate of the program and bases its ``speed'' on that.", "score": 25.200030184484106, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "Top-level vs. local recursion
The first one uses top-level recursion\nmap :: (a -> b) -> [a] -> [b] map _ = map f (x:xs) = f x : map f xs\nwhereas the second one uses local recursion\nmap :: (a -> b) -> [a] -> [b] map f = let go = go (x:xs) = f x : go xs in go\nAlthough the first version is shorter, there some reasons to prefer the second version for stylistic reasons:\n- It clearly shows that the 'f' is not \"altered\" in the recursion.\n- You cannot accidentally 'jump' into the wrong loop. I often have multiple implementations of the same function which differ in laziness or performance. They only differ slightly in name and it happened too often that after copy&paste the recursive call went to a different (and thus wrong) function.\n- The local loop is probably also more efficient in execution, because the compiler does not need to move the around. However, if this is actually more efficient then the compiler should do such a transformation itself.f\nwhich is both short and allows deforestation.\n- Haskell-Cafe on HLint 1.2", "score": 25.000000000000068, "rank": 48}, {"document_id": "doc-::chunk-1", "d_text": "To be fair, math does have variables, but they are used in very restricted ways compared to the way they are used in programming; and where they are used more like as in programming then the math begins to look like programming. Math mostly deals with change by taking a snapshot and making everything static - what else is a function in the f(t) sense other than frozen time? It isn't a dynamic process but a subset of the Cartesian product of two sets.\nSo what does all this have to do with functional programming?\nThe simple answer is that functional programming aims to remove the state and mutability out of programming and make it more like static maths. In other words it attempts to freeze time out of programming in the same way maths does.\nThis is the usual \"standard\" statement of functional programming, but it isn't quite as pure as this sounds. 
There is a lot of messiness in functional programming but it is often wrapped up in so much jargon you can't really see it for the clutter.\nIn math functions are stateless. If you evaluate sin(Pi) at 12 noon today then you will get the same answer when you evaluate it at 12 noon tomorrow. In math functions don't change.\nIn functional programming functions aren't just ways of grouping things together in a neat package. Functions are first class objects and can be used anywhere a number or an expression can be used. There are also "higher order" functions which can accept functions and spit out functions as their result. Most important of all, functions never change the state of the system - they don't have side effects.\nThe no-side-effect rule is a bit difficult to stick to because if the system is going to deliver a result or interact with a user then side effects are essential and some state has to change.\nYou will often hear the term immutable used in connection with functional programming and exactly what it means is often confused with a simpler idea - object immutability.\nObject immutability is an easy idea. For example if a String object is immutable it can't be changed but this doesn't mean you can't do:\nstring1 = string1 + "A"\nWhat happens in this case is that the String object that the variable string1 is referencing is destroyed and a new object created with "A" attached to its right hand end. 
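The same behaviour is easy to see in Python, whose strings are likewise immutable:

```python
# Concatenation builds a NEW string; the original object is unchanged.
original = "abc"
string1 = original
string1 = string1 + "A"   # rebinds string1 to a fresh object

assert string1 == "abcA"
assert original == "abc"  # the old object was never modified
```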
This is an almost trivial meaning of immutable and if it wasn't pointed out it would go unnoticed.\nThe really important use of immutable in functional programming is the way variables don't - vary, that is.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-6", "d_text": "This makes the Scala OO model purer than that of some other languages (of course in languages like Ruby, where classes are objects, class variables and methods have more or less the same meaning and the model is just as pure if not purer).\nTraits – the evolution of interfaces\n- Traits are interfaces on steroids\n- They can contain state as well as behaviour\n- Think of them more as Ruby’s mixins than Java’s interfaces\n- They can be implemented on the fly by objects\n- They are too complex to be properly explained in one short blog post :–)\nFunctional programming has many aspects, but to get the bulk of it you need just two magical ingredients – support for functions as objects and a nice array of immutable data structures. Scala, naturally, has both. Traditionally OOP languages have rarely had much support for functional programming, which makes it awkward to express some problems in them. Steve Yegge wrote an excellent article on the subject some time ago – “Execution in the kingdom of the nouns”.\nReturn of the verbs\nFunctions are first class objects\n- val inc = (x: Int) => x + 1\n- inc(1) // => 2\n- List(1, 2, 3).map((x: Int) => x + 1) // => List(2, 3, 4)\n- List(1, 2, 3).map(x => x + 1)\n- List(1, 2, 3).map(_ + 1)\nClosures are basically functions that have captured variables from an external scope (variables that were not parameters of the functions). 
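A minimal closure, shown here in Python for comparison; make_adder is an illustrative name:

```python
def make_adder(n):
    # `add` captures n from the enclosing scope, so it is a closure.
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
print(add5(10))   # prints 15
```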
Closures are often used as the parameters of higher-order functions (functions that take functions as parameters).\nFunctional data structures\nFunctional programming revolves around the concept of immutability – nothing is ever changed – we have some input, we get some output and the input is not changed in the process. Consider a simple operation like the addition of an element to a list:\n- the operation could modify the list to which the element is being added\n- the operation can return a new list that is the same as the original, but has the additional element\nFunctional programming favours the second approach and Scala as a functional programming language provides data structures with the desired behaviour.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-2", "d_text": "Java’s Comparator Interface is a good example of a common FP idiom, where you pass along a bit of code as a parameter as well as the data and have the code act on the data. Java 7 might have closures as well, which are already to be found in Ruby. Python has things like list comprehension, which has a distinctly FP feel and so on. I thought I might as well experience all these concepts in one integrated functional package rather than piecemeal.\nFP vs OOP\nInvariably when you start to talk about a new programming model, you’re met with slack jaws, blank stares and mewling cries of “But, but, but… OOP!”. Now I’m not one to deride OOP because I like the concept quite a lot. It’s a great fit for a range of applications and can really help model a lot of complex domains and interactions. However, it isn’t the be-all and end-all of computing that the OOP aficionados (and their groupies) make it out to be. One particular domain which seems to be a bit of a mismatch is the Web.\nMost web programming involves a lot of pointless conversion from flat data to OOP and back again. Take a basic CRUD application. 
The user enters plain data without any OOP savvy behaviours or what have you. Just characters strung together. When he hits submit, we convert this data to fit into our OOP model and play around with it in the middle layer. We then flatten it once more and stick it in the database. We might also have some stored procedures in the database as well (usually written in a non-OOP language/manner). So the only place we’re really using OOP is in the middle layer and most of what we’re doing is simply converting data from one model to another. It’s just a huge waste of time. Embrace the fact that all we want to do is apply transformations to data and use FP (which is admirably suited to the role) to do just that.\nI mean, what’s so Object Oriented about Servlets? They're actually very functional. You usually just implement one method/function which accepts user data as parameters, munges it and then forwards more data somewhere else, where we display the response.\nAs far as reuse goes, OOP hasn’t proven to be all that. Look at the Java Library.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "The UNIX Philosophy has nine principles. These same principles are core to functional programming as I understand it.\nBelow find the nine principles of UNIX interpreted in terms of FP, ordered by the strikingness of the parallel.\n1. Small is beautiful.\nSmall functions, small abstractions. The simpler the function, the easier it is to see what it will do.\n2. Make each program do one thing well.\nEach function should have one purpose.\n9. Make every program a filter.\nMake it composable! It is composability that builds functions and abstractions into larger systems without complicatedness.\n6. Use software leverage to your advantage.\nComposability leads to reusable parts. Also, write tiny DSLs for your own use.\n3. Build a prototype as soon as possible.\nThe interactive REPL lets us prototype functions.\n4. 
Choose portability over efficiency.\nDon’t depend on the environment, even if it would be easier to root around in the database or system properties. Keep functions “data in, data out.”\n8. Avoid captive user interfaces.\nImperative style may be most obvious to the programmer, but it is not the most flexible for her long-term use. Avoid inflexible contexts that look easy-to-use.\n5. Store data in flat text files.\nFunctions should have visibility into the data, and its structure should be clear. With immutable data, choose clarity over encapsulation.\n7. Use shell scripts to increase leverage and portability.\nOK, yeah, I got nothing.\nprinciples drawn from: http://www.pappp.net/?p=969", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-1", "d_text": "But I should be able to use a mundane foreach() or even a dreaded for() loop, have the compiler examine my scope and see that I'm using my variables in an immutable fashion, and generate functional code from my imperative code.\nWhat I am getting at is that in the olden days we used synchronous macros to do a series of tasks and even though it was mediocre at best, it gave tremendous leverage to the developer. Today the amount of overhead required to map-reduce things or chain promises and carry the mental baggage of every timeout and failure mode is simply untenable for the human brain beyond a certain complexity. What we really need is to be able to read and write code imperatively but have it executed functionally, with every side effect presented for us.\nI realize there is a lot of contradiction in what I just said but as far as I can tell, complexity has only increased in my lifetime while productivity has largely slipped. Shifting more and more of the burden to developer proficiency is exactly the wrong thing to do. I want more from a typical computer today that is 1000 times faster than the ones I grew up on.\nI think you've got this exactly backwards. 
Functional programming lets you think at a higher level of abstraction (data flow) than imperative programming (control flow). The compiler then applies elbow grease to translate your high-level data flow transformations into low-level control flow constructs.\nLet's translate your statement back a generation and see how it sounds: \"I think the move towards structured programming, and putting the onus on developers to do the mental elbow grease of converting what are largely assembly-level tasks (branch, copy, add a value to a register) into structured code (if, while, for) has done a great disservice to software engineering, especially with respect to productivity.\"\nHopefully you can understand how silly that seems to a modern programmer.\nIf you only ever work on things that map well to functional programming then you'll naturally think it's superior to imperative programming. Likewise, if you only ever work on things that map well to imperative programming, then the functional programming approach seems a bit silly.\nIt will not always be easier, but it certainly provides more control over the execution flow.\nThat said, you can write spaghetti code in any language. =)\nI’m to the point where I am thinking about rejecting all of this and programming in synchronous shell-scripting style in something like Go, to get most of the advantages of Erlang without the learning curve.", "score": 23.741406792103273, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "These days, I am deeply in love with Microsoft’s F# language and Apple’s Swift language. 
F# on one side has picked the good parts from functional languages like ML and OCaml, plus imperative/object-oriented languages like C#.\nSwift on the other hand is a multi-paradigm language picking language ideas from Objective-C, Rust, Haskell, Ruby, Python, C#, CLU, and far too many others to list.\nWhile learning these languages, one question (in fact multiple questions) always comes to my mind:\n- “What is a functional language, what is an imperative language, and how are they different?”\nHere are a few answers which I read or got from books and the web.\nThe functional programming paradigm was explicitly created to support a pure functional approach to problem solving. Functional programming is a form of declarative programming. In contrast, most mainstream languages, including object-oriented programming (OOP) languages such as C#, Visual Basic, C++, and Java, were designed to primarily support imperative (procedural) programming.\nWith an imperative approach, a developer writes code that describes in exacting detail the steps that the computer must take to accomplish the goal. This is sometimes referred to as algorithmic programming. In contrast, a functional approach involves composing the problem as a set of functions to be executed. You carefully define the input to each function, and what each function returns. 
The following table describes some of the general differences between these two approaches.

|Characteristic ||Imperative Approach ||Functional Approach |
|Programmer focus ||How to perform tasks (algorithms) and how to track changes in state ||What information is desired and what transformations are required |
|State changes ||Important ||Non-existent |
|Order of execution ||Important ||Low importance |
|Primary flow control ||Loops, conditionals, and function (method) calls ||Function calls, including recursion |
|Primary manipulation unit ||Instances of structures or classes ||Functions as first-class objects and data collections |

Although most languages were designed to support a specific programming paradigm, many general-purpose languages are flexible enough to support multiple paradigms. For example, most languages that contain function pointers can be used to credibly support functional programming. Furthermore, in C# 3.0 and Visual Basic 9.0, explicit language extensions have been added to support functional programming, including lambda expressions and type inference. LINQ technology is a form of declarative, functional programming.

It calls the function on the items of the iterable and returns a new iterable with the items for which the function returned True or anything that evaluates as True. For example:

>>> list(filter(lambda item: item >= 0, [-2, -1, 0, 1, 2]))
[0, 1, 2]

The statement above returns the list of non-negative items of [-2, -1, 0, 1, 2], as defined with the function lambda item: item >= 0. Again, the same result can be achieved using the comprehension:

>>> [item for item in [-2, -1, 0, 1, 2] if item >= 0]
[0, 1, 2]

Reducing is performed with the function reduce from the module functools. Again, it takes two arguments: a function and an iterable.
It calls the function on the first two items of the iterable, then on the result of this operation and the third item, and so on. It returns a single value. For example, we can find the sum of all items of a list like this:

>>> import functools
>>> functools.reduce(lambda x, y: x + y, [1, 2, 4, 8, 16])
31

This example is just for illustration. The preferred way of calculating the sum is using the built-in reducing function sum:

>>> sum([1, 2, 4, 8, 16])
31

Immutable Data Types

An immutable object is an object whose state can't be modified once it's created. Conversely, a mutable object allows changes in its state.

Immutable objects are generally desirable in functional programming.

For example, in Python, lists are mutable, while tuples are immutable:

>>> a = [1, 2, 4, 8, 16]
>>> a[0] = 32  # OK. You can modify lists.
>>> a
[32, 2, 4, 8, 16]
>>> a = (1, 2, 4, 8, 16)
>>> a[0] = 32  # Wrong! You can't modify tuples.

We could argue that since an imperative program is often 90 percent assignment, and a pure functional F# program has no assignment, it could be 90 percent shorter, and more modular.

To see its advantages more clearly, we must look at what F# permits rather than what it prohibits. For example, it allows us to treat functions themselves as values and pass them to other functions. This might not seem all that important at first glance, but its implications are extraordinary. Eliminating the distinction between data and function means that many problems can be more naturally solved. In addition to treating functions as values, F# offers other features that borrow from mathematics and are not commonly found in imperative languages.

For example, F# offers curried functions, where arguments can be passed to a function one at a time and, if all arguments are not given, the result is a residual function waiting for the rest of its parameters.
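The residual-function behaviour described above can be imitated in Python with nested closures. This is a hand-rolled sketch for illustration; in F# the currying is built into the language.

```python
# A curried function of two arguments: supplying only the first argument
# yields a residual function still waiting for the second.
def add(x):
    def waiting_for_y(y):
        return x + y  # x is remembered from the earlier call
    return waiting_for_y

add_five = add(5)    # residual function: no addition has happened yet
print(add_five(3))   # 8
print(add(1)(2))     # 3 -- both arguments supplied back to back
```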
It also offers type systems with much better power-to-weight ratios, providing more performance and correctness for less effort.

"The functional programmer sounds rather like a medieval monk, denying himself the pleasures of life in the hope that it will make him virtuous."

Herewith are some of the virtues we will come across in using F#.

Common programming tasks are much simpler in F#, examples being creating and using complex type definitions, list processing, comparison and equality, state machines, and more.

Because functions are treated as first-class objects, it is very easy to create powerful and reusable code by writing functions that take other functions as parameters, or by combining existing functions to create new functionality.

F# has a more powerful type system that prevents many common errors such as null reference exceptions; values are immutable by default, which prevents a host of errors. We can encode business logic using the type system itself in such a way that it is actually impossible to write incorrect code or mix up units of measure, relegating unit tests to a formality, as their need is greatly reduced.

F# is not cluttered with unnecessary coding "noise" such as curly braces, semicolons, and more. You rarely have to specify the type of an object, thanks to a powerful type inference system, and compared with C#, it will almost always take fewer lines of code to solve the same problem, making maintenance less of a nightmare.

Asynchronous programming is very easy, as is parallelism with its built-in libraries.
F# also has a built-in actor model, excellent support for event handling, and functional reactive programming.

The academic and mathematical origins of functional programming scare the average programmer with words like type inference, closures, currying, continuations, monads, impredicative higher-ranked types, etc., but in this blog series I will be demystifying their widely perceived complexity, explaining the concepts in a way mere mortals can understand.

This blog series is about showing you how easy and how much more effective it is to use F# as a general-purpose tool, and its applicability, with an emphasis on the specific problem domains where it can lead to boosts in productivity.

Throughout this blog series, you will discover that whilst F# isn't the best tool for every situation, it is the perfect tool for many situations. Along the way, you will develop a knack for functional programming, with collections of concepts that will help you rethink and improve your programs regardless of your current programming language.

A Brief Comparison With Other Functional Languages

F# shares a core language with the OCaml programming language, and in some ways can be considered an "OCaml for .NET." It also draws from Haskell, notably in its advanced features called sequence expressions and workflows. Despite similarities to OCaml and Haskell, F# is quite different: in particular, its approach to type inference, object-oriented programming, and dynamic language techniques is considerably different from that of all other mainstream functional languages.

F# embraces .NET techniques such as dynamic loading, dynamic typing, and reflection, and it adds techniques such as expression quotation and active patterns.

F# is unique amongst both imperative and declarative languages in that it is the middle road where these two extremes converge.
It takes the best features of both paradigms and combines them into an elegant language that both scientists and developers identify with.

F# differs from many functional languages in that it embraces imperative and object-oriented programming. It also provides a missing link between compiled and dynamic languages, combining the idioms and programming styles typical of dynamic languages with the performance and robustness of a compiled language.

Why Functional Programming and F#

F# encourages a different way of thinking about programs. It is based on the powerful principle that behaviour is strictly determined by input, so that the same input will always produce the same behaviour. One of F#'s greatest strengths is that you can use multiple paradigms (object-oriented, imperative, declarative) and mix them to solve problems in the way you find most convenient; unlike the other paradigms, though, the pure functional style permits no side effects.

Let me elaborate.

When we think about building software abstractions, it is common to note that the first implementation we build is rarely the most efficient implementation possible. It may be fast enough, but often we need to change the implementation to get better performance. For example, we may need to change the data structures used, or the choice of algorithms.

Still, if the abstraction is good, which means it has a well-designed interface, we can change the implementation while keeping the interface stable. This is one of the major benefits of information hiding and data abstraction (another benefit is separate compilation, by the way).

The simple implementation we designed at first can be any implementation that supports the specification.
Wouldn't it be great if we could arrive at some such implementation simply by making transformations on the (algebraic) specification?

Of course, it would be even better if we could then optimize the implementation by using program transformations on the basic implementations!

(Of course, this will only get you so far. It will not reinvent new algorithms for you. But heck, you are paid to do some of the work yourself!)

Closing Over Simplicity

In the world of software development, there are two similar goals that run somewhat opposed to each other. One is speed and ease of creating new software. It's important to go fast and get things done. The other is code that's simple to reason about and easy to maintain and extend. It's very easy to quickly generate reams of code that basically work, but leave behind a complex web of entangled logic. If you'd like to learn more about these two somewhat opposed ideas, and the characteristics of each, do yourself a favor and watch Rich Hickey's talk Simple Made Easy. (There are two recordings of this talk. This is a more recent version, given in March of 2012.) Once you watch it, wait a week and watch it again. Many have reported getting more out of it with repeated viewings.

What are the key components of maintainable software? One is code that is simple to reason about. Another is doing more with less. If solution A takes 20 lines and the functionally equivalent solution B takes 10, is solution B better? Possibly, unless it abstracts away the details to the point where a second developer who comes along cannot understand it.

Clojure is a relatively new language, created in 2008 by the previously mentioned Rich Hickey and designed with simplicity over ease in mind. It's based on Lisp, which is relatively old but remains relevant.
Clojure runs on the JVM and achieves near native-Java speed while leaving behind a lot of the cruft of the host language. It encourages (but doesn't enforce) functional programming concepts. Everything distills down to functions that operate on data structures. Ideal functions are pure, free of side effects, and simply transform data from one representation to another. Since side effects are still required to do something useful, like output to the screen or write to disk, Clojure allows side effects, but they tend to be explicit. The core data structures in Clojure are immutable, which means once they are set, they do not change. This does not mean things must remain static: to make a change, you make a copy of the original data to a new location with the required change. Loops are declared using functional iterators and recursion instead of being constructed manually with counters.

It's a different way of thinking.

It has a very intuitive name, and the need for filtering stuff is so common in programming that I bet you correctly guessed what it is just by its name (if you didn't know it already).

For the sake of completeness, though, let's define it. The filter operation...wait for it...filters a sequence, returning a new sequence containing just the items approved by some criterion.

The Imperative Way

Since we've used employees in the previous section, let's keep within the theme. Let's say you need to come up with a list of the employees who have used at least three sick days.

In a more procedural style, you'd maybe write something along the following lines:

I wouldn't say there's something definitely wrong with this code. The method's name is a bit too long, but it's very descriptive. The code does what it promises. And it's readable enough.

But similarly to the previous section, we can make the argument that the code is too noisy.
We can say that, essentially, the only line that does something domain-related is the if test. All the other lines are basically boilerplate-y infrastructure code. Can a functional approach help us here?

The Functional Way

Let's rewrite the method above by using LINQ:

Here we use the "Where" extension method, passing the filtering criterion as a delegate. To be honest, the outer method became not very useful since it just delegates the work. In real life, I'd get rid of it.

Reduce is often the one many developers have some difficulty understanding. But it isn't hard at all. Think of it like this: you have a sequence of somethings, and you also have a function that takes two of these "somethings" and returns one, after doing some processing.

Then you start applying the function. You apply it to the first two elements in the sequence and store the result. Then you apply it again to the result and the third element. Then you do it again to the result and the fourth item, and so forth.

The classical example of reduce is adding up a list of numbers, so that's exactly what we're going to do in our example.

The Imperative Way

So, suppose we're to sum a bunch of integers.

Compare:

These kinds of constructs come up often enough when you're using functional programming that it's a nice shortcut to have, and it lets you think about the problem from a slightly higher level--you're mapping against the "

That said, it's not a panacea; sometimes your function's parameters will be in the wrong order for what you're trying to do with currying, so you'll have to resort to a lambda anyway.
However, once you get used to this style, you start to learn how to design your functions to work well with it, and once those neurons start to connect inside your brain, previously complicated constructs can start to seem obvious in comparison.

One benefit of currying is that it allows partial application of functions without the need for any special syntax/operator. A simple example:

Currying has the convenience features mentioned in other answers, but it also often serves to simplify reasoning about the language, or to implement some code much more easily than it could be otherwise. For example, currying means that any function at all has a type that's compatible with

The best known example of this is the

And an example use:

In this context,

The reason this works is because

Another, different use of currying is that Haskell allows you to partially apply type constructors. E.g., if you have this type:

...it actually makes sense to write

The "no-currying" form of partial application works like this:

A bit complicated, isn't it?

So far so nice, but more important than being simple, this also gives us extra possibilities for implementing our function: we may be able to do some calculations as soon as the

To give an example, consider this audio filter, an infinite impulse response filter. It works like this: for each audio sample, you feed an "accumulator function" (

Now here's the crucial bit – what kind of magic the function does depends on the coefficient²

But currying saves the day! We simply calculate

1 Note that this kind of state-passing can generally be done more nicely with the

2 Yes, this is a lambda symbol.
I hope I'm not confusing anybody – fortunately, in Haskell it's clear that lambda functions are written with

It's somewhat dubious to ask what the benefits of currying are without specifying the context in which you're asking the question:

I used to think that currying was simple syntax sugar that saves you a bit of typing.

Editorial note: I originally wrote this post for the SubMain blog. You can check out the original here, at their site. While you're there, have a look at CodeIt.Right, which can help you improve the quality of your code.

C# is supposed to be an object-oriented language, but it's possible that you, as a .NET/C# developer, have been using functional programming concepts without even knowing it.

And that's what today's post is about. I'll first briefly cover the attractions of functional programming and why it makes sense to apply it even when using a so-called object-oriented language. Then I'll show you how you've already been using some functional style in your C# code, even if you're not aware of it. I'll tell you how you can apply functional thinking to your code in order to make it cleaner, safer, and more expressive.

C# Functional Programming: Why?

We know the .NET framework offers some functional capabilities in the form of the LINQ extension methods, but should you use them?

To really answer this, we need to go back a step and understand the attraction of functional programming itself. The way I see it, the easiest path to start understanding the benefits of functional programming is to first understand two topics: pure functions and immutable data.

Pure functions are functions that can only access the data they receive as arguments and, as a consequence, can't have any side effects.
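That distinction can be shown in a few lines of Python. This is my own minimal sketch, not an example from the post: the pure function depends only on its arguments, while the impure one reaches out to and mutates module-level state.

```python
# Pure: the result depends only on the arguments; calling it never
# changes anything outside the function.
def add_pure(a, b):
    return a + b

# Impure: reads and mutates state that lives outside its arguments.
total = 0
def add_impure(a):
    global total
    total += a       # side effect: module-level state changes
    return total

print(add_pure(2, 3))   # always 5, no matter when it is called
print(add_impure(2))    # 2 this time...
print(add_impure(2))    # ...but 4 the next: same input, different result
```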
Immutable data are just objects or data structures that, once initialized, can't have their values changed, making them easier to reason about and automatically thread-safe.

Fundamental Functional Programming Operations and How to Perform Them Using C#

With the what and why of functional programming out of the way, it's time to get to the how.

I'll be covering three fundamental functions: map, filter, and reduce. I'll start by showing some use cases, then I'll show a traditional, procedural way of solving the problem. And finally, I'll present the functional way.

In simple terms, the "map" operation takes a sequence of items, applies some transformation to each one of those items, and returns a new sequence with the resulting items. Let's see some examples.

Suppose you wrote the following code, due to a customer's demand:

It's a function that adds three to each element of the given array of integers. Pretty straightforward.

Now a request for a new function comes in.

The xn defined above is potentially unbounded: it represents the sequence of xn obtained as

0, deltaX, deltaX + deltaX, deltaX + deltaX + deltaX, ...
but we can easily set an upper limit in this way

We can now take the values of the f function at each of these points simply by mapping the xn through the function:

In the context of Streams, we can see the map application as a transformation from the sequence of $x_n$ to the sequence of $f_n := f(x_n)$.

This interpretation of the map function applies only to sequence-like types such as Streams and Lists; it has a very different interpretation in other types, where concepts coming from Category Theory come into place and lead to the ideas of Monoids and Monads.

Now the only thing left to do is add up the $y_n$ and multiply them by $\Delta x$, and this can easily be achieved through a

So we obtained, quite simply, by mapping the theoretical concepts directly down to language constructs in 5 lines of code, a value pretty close to the real one. Consider how much verbosity and focus on implementation details it would have taken to implement this in Java, or in any case in a procedural way. With Scala, and in general with a functional approach, we just focus on what we want to achieve, and through which transformations, instead of focusing on the processing details. In this way we produce code that is more isolated, testable, and applicable in different contexts.

Functions (or, to be more precise, their pointers or references) can be passed as arguments to and returned from other functions. They can also be used as variables inside programs.

The code below illustrates passing the built-in function max as an argument of the function f and calling it from inside f.

>>> def f(function, *arguments): return function(*arguments)
>>> f(max, 1, 2, 4, 8, 16)
16

Very important functional programming concepts are:

- mapping
- filtering
- reducing

They are all supported in Python.

Mapping is performed with the built-in class map.
It takes a function (or method, or any callable) as the first argument and an iterable (like a list or tuple) as the second argument, and returns the iterator with the results of calling the function on the items of the iterable:

>>> list(map(abs, [-2, -1, 0, 1, 2]))
[2, 1, 0, 1, 2]

In this example, the built-in function abs is called with the arguments -2, -1, 0, 1 and 2, respectively. We can obtain the same result with the list comprehension:

>>> [abs(item) for item in [-2, -1, 0, 1, 2]]
[2, 1, 0, 1, 2]

We don't have to use built-in functions. It is possible to provide a custom function (or method, or any callable). Lambda functions can be particularly convenient in such cases:

>>> list(map(lambda item: 2 * item, [-2, -1, 0, 1, 2]))
[-4, -2, 0, 2, 4]

The statement above multiplied each item of the list [-2, -1, 0, 1, 2] by 2 using the custom (lambda) function lambda item: 2 * item. Of course, we can use the comprehension to achieve the same thing:

>>> [2 * item for item in [-2, -1, 0, 1, 2]]
[-4, -2, 0, 2, 4]

Filtering is performed with the built-in class filter. It also takes a function (or method, or any callable) as the first argument and an iterable as the second.
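For completeness, here is a small illustration of filter in the same spirit as the examples above (my own example, not from the original text):

```python
# filter keeps only the items for which the function returns a truthy value.
negatives = list(filter(lambda item: item < 0, [-2, -1, 0, 1, 2]))
print(negatives)  # [-2, -1]

# The equivalent comprehension:
also_negatives = [item for item in [-2, -1, 0, 1, 2] if item < 0]
print(also_negatives)  # [-2, -1]
```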
In other words, there isn’t an obvious way to compute\nonly some of the keys. The simplest way to solve this problem is to break down\nthe function as we did in the previous section, and then only manually compute\nthe exact values as we need them. However, this solution comes with the same set\nof caveats as before:\n- We lose local reuse of [potentially expensive to compute] intermediate values.\n- It puts more burden on the user of the library, since they have to know and understand how to utilize the utility functions.\nOne possible approach to solving this problem is to manually encode the dependencies between the nodes to make it easier for the caller, but also allowing parts of the pipeline to be applied lazily:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21\nThe above code might seem silly (because it is), but it solves the problem of\neagerness by breaking the computation down into steps that build on eachother\nlinearly. We also get reuse of intermediate values (such as\nfor free. The biggest issues with this approach are:\n- It basically doesn’t work at all if the computational pipeline isn’t perfectly linear.\n- It might require the caller to have some knowledge of the dependency graph.\n- It doesn’t scale in general to things of arbitrary complexity.\n- The steps aren’t as independently useful as they were in the previous section due to the incredibly high coupling between them.\nWhen the source ain’t enough\nThe final issue I want to discuss with our (now beaten-down)\nstats function is\nthat of transparency. In simple terms, it is extremely difficult to understand\nxs is being threaded through the pipeline to arrive at a final result.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-1", "d_text": "It’s all about dividing and conquering the problem by slicing it up into small bits and then figuring out what we need to do to each of the slices. 
Rather than identify objects, couple the data with behaviour and then deal with those objects, we pass both data and code around as required (and as easily), assembling and disassembling relationships as required.\nMind you, this is pretty much the SOP for most programming tasks, it’s just that FP makes it easier to work in this fashion. Besides, if you un-gag the Lispers for a moment, they will endlessly lecture you about the benefits of having small bits of code operate on large blobs of data. Rather like a shoal of Piranha reducing a buffalo. Be sure you re-gag them once you’re through or else they’ll never shut up.\nWhy spend time learning Functional Programming?\nMy primary reasons were:\n- A New Paradigm: Learning a new programming paradigm helps to stretch your mind and makes you a better programmer overall, even if you never directly apply any of the new techniques you’ve used. Notice I’m talking about paradigms here, not languages. Learning C++ if you know Java may make you a lightly better programmer over all, but going from Procedural programming to OOP for example, can be a mind-bending experience.\n- The Next Big Thing: The Functional programming weenies just can’t stop babbling about how they’re going to take over the world. And who knows, they just might. It takes a while for this sort of momentum to build up (look how long it took OOP to become mainstream!) but the signs abound; FP just might be the next big thing.\n- Increased Productivity: Anecdotal evidence suggests that writing code in a functional way leads to smaller, cleaner code and fewer bugs. The logic of the algorithm is clearly detailed and many of the internal details of the operations are hidden away, leading to less clutter. I’ve often heard that FP is about describing what you want done, while Imperative Programming ends up detailing how you want it done as well.\n- It’s Popping Up Everywhere!: FP isn’t all that obscure any more. 
You’ll regularly come across bits of functional code in various decidedly non FP languages.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "Functional programming advocacy suffers from a focus on purity, where state is considered a sin to be avoided absolutely. One way the movement might make progress is to distinguish between different kinds of effects, so they could say which ones are deadly and which are venial, rather than treating all effects as indistinguishable evil. Vocabulary analogous to the assembly language programmers' “side effect” might help with this.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-2", "d_text": "This note covers the following topics: O'Caml, Simple Data, Thinking Recursively, Poly-HO: Polymorphism and Higher-Order Programming, Pipelines, Datatypes, The Functional Evaluation Model, Functional Space Model, Equational Reasoning, Modules and Functors, Modular Reasoning, Mutable Data Structures and Imperative Interfaces, Threads, Locks.\nFunctional programming can make your head explode. This book stitches it back together. Daniel Marbach, Particular Software.\nJust because they have closures and first class functions, doesn’t mean they’re functional, they’re just m. Add to Book Bag Remove from Book Bag Saved in: Functional programming languages and computer architecture: 5th ACM conference, Cambridge, MA, USA, Augustproceedings /.\nGordon A An operational semantics for I/O in a lazy functional language Proceedings of the conference on Functional programming languages and computer architecture, () Gill A, Launchbury J and Peyton Jones S A short cut to deforestation Proceedings of the conference on Functional programming languages and computer architecture, ().\nFunctional languages are, in my opinion, good for mainly two things: Game AIs and mathematical computations. The physics of the architecture won't allow it. 
In pure functional programming languages the computer can run two (or many more) functions at once because those functions are not altering outside state information.\nWithout further ado, here is the list of the top 8 best programming books to read if you want to set yourself apart and become a coding powerhouse. Coders at Work: Reflections on the Craft of Programming >> purchase on Amazon. If you’re curious about life as a programmer than Coders at Work is the book.\nThe functional program is the “pre-architectural programming” information that tells the architect how to create the archi - tectural program for the building. The time to make a decision about how food will be prepared and served, how laundry.\nIt is an alternative way of creating programs by passing application state exclusively through functions. By avoiding side effects, it's possible to develop code that's easy to understand.\nThis page is powered by a knowledgeable community that. Functional programming is about defining functions and organizing the return values of one or more functions as the parameters of another function. 
Functional programming languages are mainly based on the lambda calculus that will be discussed in Chapter 4.\nTypical functional programming languages include ML, SML, and Lisp/ Size: KB.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "Revision as of 06:20, 20 July 2010\nTurning free variables into arguments.\nAs an example, consider the following worker wrapper function, which computes the truncated square root of an integer:\nisqrt :: Integer -> Integer isqrt n | n < 0 = error \"isqrt\" | otherwise = isqrt' ((n+1) `div` 2) where isqrt' s | s*s <= n && n < (s+1)*(s+1) = s | otherwise = isqrt' ((s + (n `div` s)) `div` 2)\nisqrt :: Integer -> Integer isqrt n | n < 0 = error \"isqrt\" | otherwise = isqrt' n ((n+1) `div` 2) where isqrt' n s | s*s <= n && n < (s+1)*(s+1) = s | otherwise = isqrt' n ((s + (n `div` s)) `div` 2)\nThe isqrt' function may now be safely lifted to the top-level.\nNaive lambda lifting can cause a program to be less lazy. Consider, for example:\nf x y = g x + g (2*x) where g x = sqrt y + x\nf x y = g y x + g y (2*x) where g y x = sqrt y + x\nf x y = let sy = sqrt y in g sy x + g sy (2*x) where g sy x = sy + x\nAn expression of this sort which only mentions free variables is called a free expression. If a free expression is as large as it can be, it is called a maximal free expression, or MFE for short. Note that\nf x y = let psy = (+) (sqrt y) in g psy x + g psy (2*x) where g psy x = psy x\nHowever, you save no more work here than the second version, and in addition, the resulting function is harder to read. 
In general, it only makes sense to abstract out a free expression if it is also a reducible expression.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-2", "d_text": "A transducer that maps elements from a to b therefore transforms a step function on b‘s to a step function on a‘s\nThe statelessTransducer function takes as argument a function from a step function to a new step function. Yet, our mapping function is defined in terms of a function taking three arguments:\nTo understand this, we need just need to substitute the second occurrence of the type alias Step inside the prototype of the function onStep given to statelessTransducer:\nPut differently, the type of onStep correspond to a step function that takes an additional parameter as first argument: the next step function in the pipe-line. Hence the three arguments to mapping.\nNote: pragmatically, there is not much a stateless transducer can do with this next function except calling it once (mapping), several time (concat mapping) or not calling it (filtering). It still allows to build interesting transformations as the next section will show.\nUsing this helper function, we can define stateless transducers in a very few lines of code.\nFor instance, we can define takingWhile, which consumes a source of data as long as its element satisfy a given predicate. 
The definition is short and quite close to the specification of what the transformation does:\nHere is a quick example of how it can be used to consume a stream of values lazily:\nNote: interestingly, while we can define takingWhile as a stateless transducer, you cannot define droppingWhile (which ignores elements until one that satisfies the predicate is encountered) without tracking some state.\nToward stateful transducers\nWe defined transducers as arbitrary transformations on reducers, taking as input a reducer and returning as output a potentially completely different reducer.\nIn practice, the transformation is not as arbitrary as this may sound: a lot of transformations do not make sense. We can therefore offer some helper functions that will help us define more easily a reduced set of transducers that we will call sound transducers.\nIn this section, we will define these helper functions and use them in the next section. If you need examples to make sense of these functions, try to switch back and forth between this section and the next one.\nCharacterising sound & unsound transducers\nA sound transducer is one that performs only additive changes to the reducer it acts upon. It might add some state but will not remove or modify any existing one. It might add a completion step but will not remove or impact existing ones.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "For the past two years, my day-to-day programming language has been Scala. Scala is a hybrid language that combines the approaches of object-orientation and functional programming. You can treat Scala as a “Java.next” with minimal changes in your object-oriented mindset (the only thing that springs to mind is that Scala’s “companion objects” might seem like unnecessarily clunky replacements of Java’s “static” functionality).
Even just as a drop-in replacement, I think most programmers would prefer Scala to Java: type inference saves finger-typing, and fields and class structure are a little clearer.\nScala is also a functional programming language. I’ve talked about functional programming many times over the years, but it continues to have an undeserved air of mystery around it. The low bar for a language to be “functional” is the support of functions as “first-class constructs.” Essentially, wherever you can have a value (on the right-hand side of assignments, as the input or output of functions, etc.), you can have instead an anonymous block of code (if you like Greek, you can call them “lambda functions”). This turns out to be so convenient that every mainstream language either already supports it or is moving toward such support; every language is, or will be soon, a “functional programming” language.\nThe greatest practical benefit of first-class functions is that, multiple times per day, instead of writing a for loop that iterates over elements of a collection to transform, filter or accumulate something, you just pass a small function to common collection-class functions such as (in the Scala world) map, filter and foldLeft. I cannot imagine a programmer not preferring the use of such functions: They are both clearer and more concise than loops.\nMutable state is when, after having been assigned a value once, a variable is reassigned a value. In a loop, for instance, the index variable is constantly reassigned. In a Lightbulb object, one might have a mutable isOn field that is reassigned in a switch() function.
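The map/filter/foldLeft trio mentioned above has direct counterparts in most languages; a quick Python sketch (the passage itself discusses Scala's collection methods):

```python
from functools import reduce

xs = [1, 2, 3, 4, 5]

squares = [x * x for x in xs]                  # map: transform each element
evens = [x for x in xs if x % 2 == 0]          # filter: keep matching elements
total = reduce(lambda acc, x: acc + x, xs, 0)  # foldLeft: accumulate a result

print(squares, evens, total)  # [1, 4, 9, 16, 25] [2, 4] 15
```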
In a more functional program, you don’t have explicit index values (using, instead, higher-order and recursive functions to manipulate collections), and objects have few fields (a Lightbulb class wouldn’t have a switch() function but only on() or off() ).", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-3", "d_text": "So, for the benefit of the uninitiated who happened to accidentally stumble upon my obscure and randomly technical blog, I’ll give the lowdown on Functional Programming … in my own words. Apart from the regular differences between Functional and Imperative programming, here’s my experience with it:", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-3", "d_text": "Because data structures are immutable by default, sharing state and avoiding locks is way much easier.\nAlthough it is a functional language at heart, F# does support other styles which are not 100% pure, which makes it much easier to interact with the non-pure world of websites, databases, and other applications. In particular, F# is designed as a hybrid functional/OO language, so it can also do everything that C# can do. Obviously, F# is still part of the .Net ecosystem, with seamless access to all third party .NET libraries and tools. It runs on most platforms, including Linux, Android, and IOS (via Mono).\nA Trivial code example to whet our appetite:\nLet’s try to sum up all the squares from 1 to a specified number N using C# at first and then F#\nC# ‘Normal’ Implementation:\nWe will not dissect the code at this stage, but the example should be enough to immediately show some benefits gained e.g. 
in conciseness.\nWhen you start to think functionally, you will be able to improve your code even in C#, and for the same code above you will implement it as follows:\nC# ‘Functional’ Implementation:\nStill, we can see that F# wins even against functional-style C#, and rest assured, it wins in many vital scenarios, to be explored in the next series of blogs.\nWe will explore “how to think functionally,” which is very different from thinking imperatively, and will gracefully introduce the functional concepts below with gradually relevant code examples on how they can improve even your C#-based solution designs.\n• Partial application\n• First-class functions\n• Higher-order functions\n• Function pipelines\n• Function composition\n• Map, filter, and reduce\n• Continuation passing style\n• Monads (computation expressions)\nThese are very important building blocks that will make your functional programming journey very easy when fully understood. Consecutive weekly blogs will dissect every concept individually, and if you are itching to see more code examples, that would be the right time to get our hands dirty.\nAuthor: Kenneth Fukizi – Lead Analyst Consultant", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-2", "d_text": "You can easily create your own parametrized transducers with “partial”, so that you capture all parameters but “xf”:\nOne of the best things about transducers is that they are composable; you can create a new transducer out of a set of existing ones.\nThis is more efficient than mapping/filtering over multiple sequences in turn.\nSee how B was applied first, and then A.\nPutting it all together - separating text paragraphs\nWe want to create a transducer that receives a sequence of lines (maybe over a core.async channel) and separates them into paragraphs.\nThis is a typical stateful transducer, and something that we cannot do easily with just map and filter.\nIn this case it is important that if we have open stanzas when the
sequence terminates, we reprocess them (see A, and see it’s the very same thing we’re doing in B) and then we generate the final result, that will in turn be sent down the pipeline (C).\nThis is pretty cool, as a lot of problems can be expressed in a way that is similar to this; and IMHO this alone makes transducers worth learning.\nBut the very best thing about transducers is…..\nThe really best thing about transducers is that (often) you don’t need to write them. If you had to write transducers from scratch every time like in the examples above, they would still be cool, but in 99% of the cases you would prefer the old map/filter/reduce - it might not be as efficient or powerful, but is concise and compact and easy to read.\nTruth is that you can have the best of both worlds - you can use your classic friends map&co. and use transducers. This is because many functions in core that used to have a collection as their last parameter now have an arity that produces a transducer by just omitting that collection. So if you say:\nyou get a transducer for free.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-5", "d_text": "Well, we can define a few more core helper functions:\n/// Duplicate the top value on the stack let DUP stack = // get the top of the stack let x,_ = pop stack // push it onto the stack again push x stack /// Swap the top two values let SWAP stack = let x,s = pop stack let y,s' = pop s push y (push x s') /// Make an obvious starting point let START = EMPTY\nAnd with these additional functions in place, we can write some nice examples:\nSTART |> ONE |> TWO |> SHOW START |> ONE |> TWO |> ADD |> SHOW |> THREE |> ADD |> SHOW START |> THREE |> DUP |> DUP |> MUL |> MUL // 27 START |> ONE |> TWO |> ADD |> SHOW // 3 |> THREE |> MUL |> SHOW // 9 |> TWO |> SWAP |> DIV |> SHOW // 9 div 2 = 4.5\nBut that's not all.
In fact, there is another very interesting way to think about these functions.\nAs I pointed out earlier, they all have an identical signature:\nStack -> Stack\nSo, because the input and output types are the same, these functions can be composed using the composition operator\n>>, not just chained together with pipes.\nHere are some examples:\n// define a new function let ONE_TWO_ADD = ONE >> TWO >> ADD // test it START |> ONE_TWO_ADD |> SHOW // define a new function let SQUARE = DUP >> MUL // test it START |> TWO |> SQUARE |> SHOW // define a new function let CUBE = DUP >> DUP >> MUL >> MUL // test it START |> THREE |> CUBE |> SHOW // define a new function let SUM_NUMBERS_UPTO = DUP // n >> ONE >> ADD // n+1 >> MUL // n(n+1) >> TWO >> SWAP >> DIV // n(n+1) / 2 // test it START |> THREE |> SQUARE |> SUM_NUMBERS_UPTO |> SHOW\nIn each of these cases, a new function is defined by composing other functions together to make a new one. This is a good example of the \"combinator\" approach to building up functionality.\nWe have now seen two different ways that this stack based model can be used; by piping or by composition. So what is the difference? And why would we prefer one way over another?\nThe difference is that piping is, in a sense, a \"realtime transformation\" operation.", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-1", "d_text": "- Philip Wadler, another father of Haskell, was one of the implementors of generics in Java.\n- Martin Odersky, the father of Scala, which adapted a lot from Haskell, was also involved in the implementation of generics in Java.\n- Erik Meijer is a passionate admirer and researcher around Haskell. He used the Haskell concept of monads and created the well-known C# library LINQ.\nI will even go one step further. Whoever knows functional programming, and in particular Haskell, knows how the mainstream programming languages will develop in the next years.
Even a pure object-oriented language like Java cannot withstand the pressure of functional ideas. Java now has generics and lambda expressions.\nBut now back to my subject. What are the characteristics of functional programming languages?\nIn my search for the functional characteristics, I identified seven typical properties. This is not an exhaustive list, and not every functional programming language has to support all of them. But the characteristics help a lot to bring meat to the abstract definition of functional programming.\nThe graphic shows, on the one hand, the characteristics of functional programming and gives, on the other hand, the outline of my next posts. I will provide a lot of examples in Haskell, C++, and Python. But what do the seven characteristics mean?\nFirst-class functions are typical for functional programming languages. These functions can accept functions as arguments or return functions. Therefore, the functions have to be higher-order functions. That means they behave like data. Pure functions always return the same result when given the same arguments and cannot have side effects. They are the reason that Haskell is called a pure functional language. A pure functional language has only immutable data. That means it cannot have a while or for loop based on a counter. Instead of loops, it uses recursion. A key characteristic of functional programming is that you can easily compose functions. This is because of their bread-and-butter data structure, the list. If an expression evaluates its arguments immediately, it's called greedy or eager evaluation. If the expression evaluates the arguments only if needed, it's called lazy evaluation. Lazy evaluation will reduce time and memory if the evaluated expression is not needed. I think you already guessed it: the classical programming languages are greedy. They evaluate their expressions immediately.\nI start in my next post with first-class functions.
We have had them since the beginning of C++.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "I’m a regular at http://programming.reddit.com and just about every day there’s at least one (or two or three) posts about some functional programming language or the other. Usually it’s Haskell which is being showcased, though you’ll also find entries for Erlang and occasionally for some of the more obscure ones like O’Caml or Dylan. I finally zoned out under the constant barrage of propaganda and, moaning softly (“brains! braaaaiiiiins!”), zombie-shuffled over to check out what the fuss was all about.\nWhat’s all this about Zen then?\nThe Zen of programming is when you internalise the correct way of solving a problem using the programming model you’re working with. You might start off programming with something like C, using it in a classic procedural fashion. When you move from Procedural Programming on to something like Object Oriented Programming (OOP) using Java or C++ etc., it’s a bit of a culture shock at first. You keep trying to program procedurally, fighting the language every step of the way. There finally comes a moment, however, when all the abstract concepts behind OOP fall into place with an almost audible snap and suddenly, in a moment of epiphany, you attain OOP enlightenment.\nNow I can’t say I’ve achieved that level of union with the Tao of Functional Programming, but I am starting to finally grok what the whole thing is about. However, all opinions listed below are subject to rather radical change!\nWhat’s Functional Programming (FP)?\nI’ll save myself some time and copy-paste in some definitions.\nFunctional programming is a style of programming that emphasizes the evaluation of expressions, rather than execution of commands. The expressions in these languages are formed by using functions to combine basic values.
A functional language is a language that supports and encourages programming in a functional style.\n-- FAQ for comp.lang.functional\nFunctional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast with the imperative programming style that emphasizes changes in state.\nAs far as I can make out so far, FP is all about decomposing the problem down to the various algorithms and data transformations involved and then cleanly enumerating them. We don’t really need to worry about the ‘objects’ involved (there aren’t any), or their relationship with one another.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "In a short time, functional programming went from an obscure academic endeavor to the technology \"du jour\" of the software industry. On the one side, this is great, because functional programming holds real promise to simplify the construction of highly reliable software. On the other hand, it is also frightening because the current hype might lead to over-selling and sometimes too uncritical adoption of concepts that have not yet been sufficiently validated in practice. In particular I see with worry the trend to over-abstract, which often leads to cargo-cult technology.\nIn this talk I give my opinion of what the core of functional programming is that we can and should use today, why that core matters, and where we currently face challenges. I argue for combining functional programming with the principle of least power, for eschewing fancy abstractions, and for being modest in what we can and should express in our programs. I also show how some of these approaches are reflected in our work on the next version of Scala.\nMartin Odersky is the inventor of the Scala language, a professor at EPFL in Lausanne, Switzerland, and a founder of Lightbend. 
His work concentrates on the fusion of functional and object-oriented programming. He believes the two paradigms are two sides of the same coin, to be unified as much as possible. To prove this, he has worked on a number of language designs, from Pizza to GJ to Functional Nets. He has also influenced the development of Java as a co-designer of Java generics and as the original author of the current javac reference compiler.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-1", "d_text": "Look how we return “result” as-is if we do not want to change it, while we “return” values by calling xf with a (new) intermediate value.\nThere are a few simple ways to “run” transducers:\ninto: as the vanilla version does, appends to an existing collection the transducer applied to a collection\ntransduce: applies transducer to a collection, and then using a “reducing” function to create our final result\nsequence: returns a new sequence by applying the transducer to a sequence. 
This sequence is lazy, so if it’s not needed, it’s not computed either.\neduce: captures the transduction process into a function.\nNote that as you have no idea of how many items of the source sequence need to be consumed to produce your final sequence, there is no 1:1 equivalence.\nAs we said above, you can return 0 or more values from a transducer:\n- To return no new values, just return “result” as it is\n- To return one value, return “(xf result value)”\n- To return multiple values, just call “(xf result value1)”, then “(xf result value2)”, and so on.\nTo show how this works, this transducer duplicates odd values and removes even ones:\nHaving the ability to control when a value is emitted is especially important for transducers that must keep track of a current (inner) state.\nOur transducer will return a sequence of odd numbers intertwined with their unique sequence number for each item in our sequence, starting from 100 onwards (we count all numbers for a change).\nTo do this we use an atom (or possibly a transient) that we use as a keeper of current state. See how this atom is unique per function invocation, so it’s local mutable state (and how this starts to look a bit like an “object” encapsulating state).\nOf course, a stateful transducer depends on its internal state and is not safe to run in parallel, so beware!\nTransducers as reducers\nYou can easily turn a stateful transducer into a reducer, just like this:
(It's not exactly free, but we can hope it will converge to \"very low cost\".)\nSo in Ren-C:\n>> foo: function [x code] [ append code [print x] if x > 0 [ probe code do code foo (x - 1) code ] ] >> foo 2 [print x] 2 [print x print x] 2 ;-- R3-Alpha FUNCTION! got 1, only CLOSURE! got 2 1\nUsers can now take that for granted.\nBut what I want to talk about is the other emergent feature of R3-Alpha CLOSURE!. This was that if an ANY-WORD! that was bound to the arguments or locals \"escaped\" the lifetime of the call, that word would continue to have its value after the function ended...for as long as references to it existed.\n>> f: closure [x] [return [x]] >> b: f 10 == [x] >> reduce b \nFunctions did not do this:\n>> f: function [x] [return [x]] >> b: f 10 == [x] >> reduce b ** Script error: x word is not bound to a context\nIt goes without saying that the closure mechanic is going to cost more, just by the very fact that they need to hold onto the memory for what the word looks up to. But the way things work today, it doesn't just need to hold onto that cell of data...it holds onto all the args and locals of the function. (R3-Alpha was more inefficient still...it not only kept the whole frame of values alive, it made a deep copy of the function body on every invocation of that function...so that the body could be updated to refer to that \"frame\". Specific binding lets Ren-C dodge that bullet.)\nNow and again, the \"keep-things-simple\" voice says that the system would be simpler and faster if all executing frames (and their frame variables) died after a function ended. If you wanted to snapshot the state of a FRAME!", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-1", "d_text": "This makes it easy to build simple parsers. 
You can see how this is done by reading my post about parser generators where I build a simple four-function expression parser.\nWhen using parser combinators, parsers are built up by composing (combining) other parsers using various combining operators (combinators). The result of one of these combining operators is another parser. Eventually, you build up a top-level parser that can parse your entire grammar. Then you invoke that parser on your input stream to parse the input.\nThese parser combinators are monads. See \"Monadic Parser Combinators\".\nUniform Container Methods in ScalaThe\nOptionclass, mentioned above, defines methods for\nfilter. All of the container classes in Scala, including\nMaptrait, define these same three methods. The container classes also implement\nforeach, even though it never has more than one item. This makes more sense if you think of\nListthat can have only zero or one element in it.\nHaving this same set of methods available for all of the collection classes makes it easier for the programmer to use any of them, but it also makes it possible to write a package of higher-level control code that can accept any of these classes. With such a higher-level package, you can add your own container class that will work with that package as long as your container class supplies the appropriate set of methods.\nThe container classes in Scala (including\nOption) are all monads. Because all monad classes supply a specific set of methods (informally, the monad interface), a monad library of higher-level control classes can be written that use that monad interface. In addition to supplying specific methods, there are a couple of consistency rules that constrain how those monad methods must behave. This is no different than, for example, the contract in Java relating the behavior of the\nhashCodemethods for a class. 
As with any other interface definition and contract, if you write your own class and it implements the monad methods and follows the monad contract, then you can use that class with a monad library.\nFunctional Languages\nFunctional languages stress some things that are not typically discussed in imperative languages such as Java:\n- Referential transparency\n- Immutable data\n- Recursion rather than loops\n- Lazy evaluation\nReferential Transparency\nA pure functional language, such as Haskell, has no side effects. This means that calling a function returns a value and does nothing else.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-0", "d_text": "As much as FP has done, in the end, all our programs are structured. That is, it doesn't matter how pure or functional we make them - they are always translated to assembly, so what actually runs ...\nI've been told in previous questions that functional programming languages are unsuited for dynamic systems such as a physics engine, mainly because it's costly to mutate objects. How realistic is ...\nI was wondering about the origins of the \"let\" used in Lisp, Clojure, and Haskell. Does anyone know which language it appeared in first?\nI'm mainly a .NET developer so I normally use Windows/VisualStudio (that means: I'm spoiled) but I'm enjoying Haskell and other (mostly functional) languages in my spare time. Now for Haskell the ...\nIf data is simple and objects are complex, I'm curious if there are any existing statically typed languages that would be able to augment(?) a map type into a type with guaranteed fields. I realize ...\nFunctional languages, by definition, should not maintain state variables. Why, then, do Haskell, Clojure, and others provide software transactional memory (STM) implementations? Is there a conflict ...\nOne of the major advantages of software transactional memory that always gets mentioned is composability and modularity.
Different fragments can be combined to produce larger components. In ...", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "I remember the day I got the mind shift about functional programming. I switched from being a fervent defender of Java and C# to a passionate admirer of LISP dialects. At the time I was convinced that there was nothing better than OOP and curly bracket syntaxes. I was then fighting for conventions and simplicity rather than language expressivity.\n- Why don’t C and C++ have a module system?\n- Why don’t C and C++ have a garbage collector?\n- Why doesn’t C have namespaces?\n- Why doesn’t C have safer string and array primitives?\n- Why aren’t Unicode/UTF-8 strings the default in C and C++?\n- Why are the C and C++ library naming conventions so bad?\n- Why is C++ so complex?\nIn my ignorance, I thought that languages evolved over time, and that these problems were not addressed because nobody thought about them before. And then I discovered LISP, the second oldest programming language, with none of these problems and even powerful features other languages don’t have, such as macros or continuation capture. I got interested in functional programming languages and I learned Haskell, CAML, and finally F#.\nI don’t know why, but the concept of sequences in functional programming languages amazed me.\nHow do you perceive infinity?\nI never thought about it before, since resources on a computer are finite. But it is an interesting question: let’s say I want to represent the infinite sequence of numbers; what characterizes this infinity and makes it exist?
Is it the fact that these numbers are all present in the computer memory at the same time (which is impossible) or the fact that I can get one of them when needed?\nHow could we possibly define the infinite sequence of natural integers?\nStarting from 0 we can recursively define these numbers just by adding one; however, this recursion shouldn’t take place until needed, so that the whole number sequence is never in memory. There is no choice but to delay the evaluation of the recursion building the sequence. We could take advantage of the fact that lists are pairs: the car of the pair would be a value and the cdr of the pair a delayed evaluation leading to a pair. Delaying a computation is when laziness comes into play.\nWhat is laziness?\nWikipedia: “Laziness (also called indolence) is a disinclination to activity or exertion despite having the ability to do so.”", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-5", "d_text": "For instance, we could define an add3 function which nefariously does the wrong thing when the user inputs 7.\nfun add3(7) = 2 | add3(x) = x+3\nThe vertical bar is read “or,” and the idea is that the possible cases for the function definition must be written in most-specific to least-specific order. For example, interchanging the orders of the add3 function cases gives the following error:\n- fun add3(x) = x+3 = | add3(7) = 2; stdIn:13.5-14.16 Error: match redundant x => ... --> 7 => ...\nFunctions can call themselves recursively, and this is the main way to implement loops in ML. For instance (and this is quite an inefficient example), I could define a function to check whether a number is even as follows.\nfun even(0) = true | even(n) = not(even(n-1))\nDon’t cringe too visibly; we will see recursion used in less horrifying ways in a moment.\nFunctions with multiple arguments are similarly easy, but there are two semantic possibilities for how to define the arguments.
The first, and simplest is what we would expect from a typical language: put commas in between the arguments.\nfun add(x,y) = x+y\nWhen one calls the add function, one is forced to supply both arguments immediately. This is usually how programs are written, but often times it can be convenient to only supply one argument, and defer the second argument until later.\nIf this sounds like black magic, you can thank mathematicians for it. The technique is called currying, and the idea stems from the lambda calculus, in which we can model all computation using just functions (with a single argument) as objects, and function application. Numbers, arithmetic, lists, all of these things are modeled in terms of functions and function calls; the amazing thing is that everything can be done with just these two tools. If readers are interested, we could do a post or two on the lambda calculus to see exactly how these techniques work; the fun part would be that we can actually write programs to prove the theorems.\nFunction currying is built-in to Standard ML, and to get it requires a minor change in syntax. Here is the add function rewritten in a curried style.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-1", "d_text": "Or maybe it is a sequence of transformations, a sequence of expressions that transform some data? But then what is data, can a program transform itself without any data, can it move by itself without a problem to be solved? Maybe there are different ways of building programs, different ways we can describe the steps in play, like different perspectives, frames of reference, maybe some computations are described as abstract structures in time and space, while others are described as a sequence of commands that build these structures. How do we tell the difference? 
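Returning to the curried add function of the ML discussion above: the same idea can be mimicked in Python with nested single-argument functions (an illustrative sketch, not SML; the names add and add_x are mine).

```python
def add(x):
    # Returns a function that is still waiting for the second argument.
    def add_x(y):
        return x + y
    return add_x

add3 = add(3)     # partial application: only the first argument supplied
print(add3(4))    # 7
print(add(1)(2))  # 3
```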
Take a moment and really ponder: what does a program look like, is a program just something to which we give a problem to solve, just to encode some business problem, publish a website, or is there something deeper within, a notion of self-replicating awareness that keeps on calculating, surprising us, interacting with us, with our environment?\nHaskell is a functional programming language: a pure, lazy, statically typed language. It is a functional language because programs written or coded in Haskell are more than just a sequence of steps to be carried out, they are formed out of pure expressions, pure functions, functions that operate on a total space, functions that produce other functions, functions that create larger structures of functions, like a self-replicating machine, like an infinity mandala; Haskell programs are beautiful mathematical abstract entities. When we say mathematical we mean symmetrical, we mean there is an underlying logic within them, that can be proved, that can be mapped, applied, like an empty variable, to any outside thing or process we know. Functional programs do not destructively update the state of the program, they make copies of themselves when we change them, and so we can track and observe how they evolve through time, and so they consume the space of computations.\nLet's begin with a simple program, with a simple expression. Let's write a number, because numbers are simple, meaning numbers are something which is common to many of us all, we count things, we count fingers, we have two eyes, we have a basic understanding of numberness of some sort. It makes no difference whether we are great at math or not, just think of a number, any number you like. Good.\nFor instance, let's write the number two,\n2.
We have a symbol\n2 representing some abstract quantity, an abstract context, I am saying abstract because we do not know what the number two represents, two of which, two of what.", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-3", "d_text": "Function parameters do most of the work that variables do in dysfunctional programming.\nAs we don't have variables we don't need loops and recursion plays the role of iteration and parameters play the role of loop variables.\nOK, but what is this \"tail recursion\"?\nWell if you can arrange your recursive functions so that the call to the next function occurs right at the end of the function i.e. in the tail then the function can be optimized by the compiler into a loop.\nYes that's right - you go to the trouble of thinking recursively and the compiler undoes it all and re-writes it as a loop.\nThe reason that this can be done is that as soon as the tail of the function is reached the compiler can throw away all of the state information about the function, i.e. its stack frame, and treat the new call as part of the same function call. That is it unwinds:\ninto something like:\nlet total=0+(0+1)+(1+1)+(2+1)+(3+1)+(4+1) ... +9\nWith tail recursion optimization functional languages are as efficient as dysfunctional languages.\nThere are lots of other related mechanisms for putting off binding a value to a symbol until the value is completely determined but they all have a similar feel and use the same sort of techniques.\nPerhaps the most notorious of all functional programming mechanisms for getting things done that are natural in dysfunctional languages is the monad.\nTo explain the subtleties of the monad would take another article, but what you really need to know is that the monad puts the sequence of operations idea back into functional programming. 
Monads provide the facilities to implement side effects, I/O, variable assignment and so on in a way that looks static - unless you look very carefully that is.\nSo is all of this just magic for the dysfunctional programmer to worry about?\nWell yes - and no.\nIt is sadly true that functional programmers have a tendency to use math - category theory in particular - to make simple ideas seem very much more sophisticated.\nYou need to always keep in mind that programming is essentially practical and hence cannot be as complicated as abstract mathematics.\nHowever there are times when functional programming just seems right. There is also no doubt that some of the tools of functional programming are hard to give up once you have experienced their advantages.\nFor example, every programming language should have first class and higher functions - it just makes things so much simpler and so much more powerful.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-2", "d_text": "(2019) argue that the two notions are incompatible in the context of System F, where sealing is transparently driven by potentially imprecise type information, while New et al. (2020) reconcile both properties at the cost of abandoning the syntax of System F and requiring user-provided sealing annotations that are not subject to graduality guarantees. Furthermore, all current proposals rely on a global form of dynamic sealing in order to enforce parametric behavior at runtime, which weakens parametric reasoning and breaks equivalences in the static language. Based on the observation that the tension between graduality and parametricity comes from the early commitment to seal values based on type information, we propose plausible sealing as a new intermediate language mechanism that allows postponing such decisions to runtime. 
We propose an intermediate language for gradual parametricity, Funky, which supports plausible sealing in a simplified setting where polymorphism is restricted to instantiations with base and variable types. We prove that Funky satisfies both parametricity and graduality, mechanizing key lemmas in Agda. Additionally, we avoid global dynamic sealing and instead propose a novel lexically-scoped form of sealing realized using a representation of evidence inspired by the category of spans. As a consequence, Funky satisfies a standard formulation of parametricity that does not break System F equivalences. In order to show the practicality of plausible sealing, we describe a translation from Funk, a source language without explicit sealing, to Funky, that takes care of inserting plausible sealing forms. We establish graduality of Funk, subject to a restriction on type applications, and explain the source-level parametric reasoning it supports. Finally, we provide an interactive prototype along with illustrative examples both novel and from the literature.\nWhile many mainstream languages such as Java, Python, and C# increasingly incorporate functional APIs to simplify programming and improve parallelization/performance, there are no effective techniques that can be used to automatically translate existing imperative code to functional variants using these APIs. Motivated by this problem, this paper presents a transpilation approach based on inductive program synthesis for modernizing existing code. 
Our method is based on the observation that the overwhelming majority of source/target programs in this setting satisfy an assumption that we call trace-compatibility: not only do the programs share syntactically identical low-level expressions, but these expressions also take the same values in corresponding execution traces.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "Guest author Pete Libman writes:\nYou are sitting at your screen wearing a t-shirt with some witty clever tech joke on it, reading some blog written by a bloke who probably looks a lot like the guy from the comic shop in The Simpsons.\nMonolithic databases are so 80s, inflexible, they are an obstacle to change. Referential integrity stops us implementing service orientated architecture, and they cannot be harmonised with continuous delivery.\nDo you agree?\nI have made a discovery. You might think it’s obvious, but it was new to me. It was this: that programmers come in different shapes and sizes.\nWhen was the last time you read through someone else’s code, not because you were looking for a snippet to pinch or because you’ve been saddled with debugging it; but for the sheer joy of reading someone else’s elegant, witty, creative code?\nModern computer systems couldn’t function without it, and it’s at the heart of nearly every modern software development. And yet, the very mention of its name is enough to strike terror into the hearts of even hardened developers. Why is multi-threading so difficult?\nFor more than thirty years, alongside conventional language development, there’s been a small but thriving culture of functional languages. These are not like the languages we all know and love. 
Their most obvious property is that they contain no variables at all; instead, they describe truths about the problem to be solved.\nAs anyone with even a passing familiarity with functional programming knows (and I accept that passing familiarity usually comes only after three to five years), things in a functional language don’t do anything (the way procedures execute or variables change in conventional languages); instead they are something, and what they are never changes. A term may describe a function which defines a list of primes, or it may describe what it means to be an expression, or what-have-you. But whatever it is, it describes it once and for all time – the function is the list of primes, it’s not a sequence of operations to compute the list.\nProponents of functional languages claim that by talking exclusively in terms of values, you get shorter, simpler, and more reliable programs. If things are things, rather than do things, they reason, then if they ever work, they’ll always work. If nothing ever changes, where can a bug hide?", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "My colleague recently wrote an excellent post discussing the functional and object-oriented paradigms. As someone who comes from a functional programming background, I definitely agreed with one thing: We need to stop building arbitrary walls that prevent us from learning from and helping each other.\nThe post made me dig deep down within myself and really consider why I love functional programming so much.\nIn essence, the functional and object-oriented paradigms stem from different concepts about how the world works. They represent different opinions about how to best model the world while we are writing programs.\nThe object-oriented paradigm models everything as an object. These objects can then interact with each other and thereby update their internal state when something in the system changes. 
This corresponds to how we often conceptualize the world, for instance, by modelling a computer mouse as an object whose position on a screen changes over time.\nThe functional programming paradigm takes a different approach. The behavior of the system is modelled using pure functions and these are strictly separated from the actual data within the system, which is modelled with immutable values. This is based on the mathematical perspective that a value itself cannot change (i.e. the coordinates\nx,y are constant) and all changes in the system are actually the application of a function (i.e. the movement of a mouse can be represented with a function that takes the coordinates\nx,y and produces new coordinates\nThe question then remains: Which approach is better?\nMy answer: This is the wrong question.\nI personally have been very influenced by Rich Hickey’s video about Simplicity. We as humans are not very good at understanding or comprehending multiple things at any given time. We can follow one idea or concept, but as soon as multiple ideas or concepts come into play we easily lose our focus and get confused.\nSimplicity is key to creating and maintaining good software. Even when programming a small web application, it is not possible to examine every part of the software at any given time. Instead, we begin inspecting the software and try to figure out the behavior of the application step by step. A point in the program where unrelated concerns are mixed together ‘complects’ the entire program and thereby makes it more complex. By attempting to remove or minimize these positions in our program, we increase the simplicity of the application and make it easier to maintain in the future.\nSimplicity is not easy.\nIt is easy to write down the first thing that pops into your head.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "POPL is the premier conference on the theoretical foundations of programming languages. 
The PC Chair, General Chair, and Steering Committee Chair of POPL 2020 review this year’s event.\nAn impressive number of transformations in both compilers and in ordinary programming are special cases of a transformation called “defunctionalization.” This post explains what it is and the many places it’s useful.\nRuntime Support for Multicore Haskell (ICFP’09) was awarded the SIGPLAN ten-year most-influential paper award in 2019. In this blog post we reflect on the journey that led to the paper, and what has happened since.\nWe accept that data structure determines program structure. But we should not forget that it is not just the input data that may be structured: output data may be structured too, and both may determine program structure.", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-3", "d_text": "Traceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: 'tuple' object does not support item assignment\nWe can modify lists by appending new elements to them, but when we try to do this with tuples, they are not changed but new instances are created:\n>>> # Lists (mutable)\n>>> a = [1, 2, 4] # a is a list\n>>> b = a # a and b refer to the same list object\n>>> id(a) == id(b)\nTrue\n>>> a += [8, 16] # a is modified and so is b - they refer to the same object\n>>> a\n[1, 2, 4, 8, 16]\n>>> b\n[1, 2, 4, 8, 16]\n>>> id(a) == id(b)\nTrue\n>>>\n>>> # Tuples (immutable)\n>>> a = (1, 2, 4) # a is a tuple\n>>> b = a # a and b refer to the same tuple object\n>>> id(a) == id(b)\nTrue\n>>> a += (8, 16) # new tuple is created and assigned to a; b is unchanged\n>>> a # a refers to the new object\n(1, 2, 4, 8, 16)\n>>> b # b refers to the old object\n(1, 2, 4)\n>>> id(a) == id(b)\nFalse\nAdvantages of Functional Programming\nThe underlying concepts and principles — especially higher-order functions, immutable data and the lack of side effects — imply important advantages of functional programs:\n- they might be easier to comprehend, implement, test and debug,\n- they 
might be shorter and more concise (compare two programs for calculating the factorial above),\n- they might be less error-prone,\n- they are easier to work with when implementing parallel execution.\nFunctional programming is a valuable paradigm worth learning. In addition to the advantages listed above, it’ll probably give you a new perspective on solving programming problems.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "Learning functional programming made me a 10x better developer. It helped me learn how to write code that is clean, easy to maintain, and scalable.\nThis is especially important in this day and age where software applications keep getting more complicated. The days of building and maintaining a simple web app are over.\nAs a developer, the expectations set upon you are higher than ever. It now falls on our shoulders to build, test, maintain, and scale complex applications that impact millions of people daily. This can be especially daunting as a beginner because we’re just getting the hang of writing code that actually works, let alone writing code that is easy to understand, write, debug, reuse, and maintain.\nThis is where functional programming made a difference for me—it helped me learn how to write code that is easy to understand, write, debug, reuse, and maintain. As a result, I feel much more confident in my coding abilities.\nEven if you are not using a functional programming language at work or on your side projects, knowing the basics of functional programming equips you with a powerful set of tools to write better code.\nIn my new e-book, I’ll teach you the basics of functional programming so that you have all the foundational knowledge you need to apply the principles at work, in your next job interview, or on your next side project.\nThe rest of the post will give you a simple explanation of what functional programming is, which you’ll need to know before diving into the e-book. 
Let’s get right into it!\nWhat is functional programming?\nSo. What is “functional programming,” exactly?\nFunctional programming isn’t a framework or a tool, but a way of writing code. In functional programming, we place a major emphasis on writing code using functions as “building blocks.”\nYour program is defined in terms of one main function. This main function is defined in terms of other functions, which are in turn defined in terms of still more functions — until at the bottom level the functions are just language primitives like “number” or “string.”\nIf you’re reading this thinking, “Hmm, but wait? Doesn’t every language use functions to write code?” then good. It means you’re paying attention.\nYou’re right — every programming language has functions. But functional programming takes it to a whole ‘nother level.\nTo understand what I mean, let’s rewind and start with the basics.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "Guest author Pete Libman writes:\nYou are sitting at your screen wearing a t-shirt with some witty clever tech joke on it, reading some blog written by a bloke who probably looks a lot like the guy from the comic shop in The Simpsons.\nMonolithic databases are so 80s, inflexible, they are an obstacle to change. Referential integrity stops us implementing service orientated architecture, and they cannot be harmonised with continuous delivery.\nDo you agree?\nI have made a discovery. You might think it’s obvious, but it was new to me. It was this: that programmers come in different shapes and sizes.\nWhen was the last time you read through someone else’s code, not because you were looking for a snippet to pinch or because you’ve been saddled with debugging it; but for the sheer joy of reading someone else’s elegant, witty, creative code?\nModern computer systems couldn’t function without it, and it’s at the heart of nearly every modern software development. 
And yet, the very mention of its name is enough to strike terror into the hearts of even hardened developers. Why is multi-threading so difficult?\nFor more than thirty years, alongside conventional language development, there’s been a small but thriving culture of functional languages. These are not like the languages we all know and love. Their most obvious property is that they contain no variables at all; instead, they describe truths about the problem to be solved.\nAs anyone with even a passing familiarity with functional programming knows (and I accept that passing familiarity usually comes only after three to five years), things in a functional language don’t do anything (the way procedures execute or variables change in conventional languages); instead they are something, and what they are never changes. A term may describe a function which defines a list of primes, or it may describe what it means to be an expression, or what-have-you. But whatever it is, it describes it once and for all time – the function is the list of primes, it’s not a sequence of operations to compute the list.\nProponents of functional languages claim that by talking exclusively in terms of values, you get shorter, simpler, and more reliable programs. If things are things, rather than do things, they reason, then if they ever work, they’ll always work. If nothing ever changes, where can a bug hide?", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "You are a programmer and one of the greatest struggles you face every day is confronting yourself from yesterday. You have to deal with your reflection from the past, when everything made perfect sense, but today not so much anymore. An abstraction developed seemed like a perfect fit for the problem at hand, doing exactly what was needed, but perhaps without realizing it is doing much more than this. It was tempting to use this power and at the time it appeared justified. 
Today there is a bug,\n\"And you won't understand why You'd give your life You'd sell your soul But here it comes again, Too much power will kill you every time.\"\nFor a while I used to be an imperative programmer. In the imperative programming world, type systems do not play that important role. As a consequence, if one attempts to understand what a function taking a unit value and returning a unit value does just by looking at its type signature, they might not get far. Sure, it is an identity function, but it is allowed to do all kinds of side effects too. In the imperative programming world, this function can print to the console, delete records in a database, fetch data over the network, and it can do anything else that comes to your mind. The only chance of understanding what such a function really does is to read it completely, and recursively do the same for every function it calls.\nFor three years now I have been a functional programmer in Haskell. In a typed functional programming world with effects and function totality, type systems play an important role. As a consequence, if one attempts to understand what a function taking a unit value and returning a unit value does just by looking at its type signature, they can safely conclude it can only return a unit value and do nothing else. At first, that seems like a big reasoning win over the imperative programming world. However, in Haskell the IO monad is so pervasive that almost every effectful function is expressed directly or indirectly (via the\nMonadIO class) in the IO monad. In such a functional programming world, this function can print to the console, delete records in a database, fetch data over the network, and it can do anything else that comes to your mind. 
The only chance of understanding what such a function really does is to read it completely, and recursively do the same for every function it calls.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "One of Smalltalk’s most unique and powerful features is also one of the least known outside the Smalltalk community. It’s a little method called become: .\nWhat become: does is swap the identities of its receiver and its argument. That is, after\na become: b\nall references to the object denoted by a before the call point refer to the object that was denoted by b, and vice versa.\nTake a minute to internalize this; you might misunderstand it as something trivial. This is not about swapping two variables - it is literally about one object becoming another. I am not aware of any other language that has this feature. It is a feature of enormous power - and danger.\nConsider the task of extending your language to support persistent objects. Say you want to load an object from disk, but don’t want to load all the objects it refers to transitively (otherwise, it’s just plain object deserialization). So you load the object itself, but instead of loading its direct references, you replace them with husk objects.\nThe husks stand in for the real data on secondary storage. That data is loaded lazily. When you actually need to invoke a method on a husk, its doesNotUnderstand: method loads the corresponding data object from disk (but again, not transitively).\nThen, it does a become:, replacing all references to the husk with references to the newly loaded object, and retries the call.\nSome persistence engines have done this sort of thing for decades - but they usually relied on low level access to the representation. Become: lets you do this at the source code level.\nNow go do this in Java. Or even in another dynamic language. You will recognize that you can do a general form of futures this way, and hence laziness. 
All without privileged access to the workings of the implementation. It’s also useful for schema evolution - when you add an instance variable to a class, for example. You can “reshape” all the instances as needed.\nOf course, you shouldn’t use become: casually. It comes at a cost, which may be prohibitive in many implementations. In early Smalltalks, become: was cheap, because all objects were referenced indirectly by means of an object table. In the absence of an object table, become: traverses the heap in a manner similar to a garbage collector. The more memory you have, the more expensive become: becomes.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-2", "d_text": "We only have this idea of twoness, like a context space which we can fill with names, meanings, like left and right, like sun and the moon, like two people, or maybe just a number two,\n2. We could write any other number too, like\n3. So into the Haskell interpreter, we write a number\nPrelude λ> 2\n2\nAnd Haskell answers back with\n2. So our conversation, our inquiry could be described as a process, a function, some relation from number 2 to a number 2,\n2 -> 2. We use the arrow,\n-> symbol so that we can realize the connectedness of this process, something from something into something, from a variable to some variable, from an\na to an\na. Like a question and then an answer, it is more than just\n2 and some number\n2, not quite like\n2 = 2, we can reason about\n2 -> 2 like\ninput -> output, its only equality being the fact that something returns what was already given. I am using the words giving and taking, though the main idea here is the idea of an identity. What could it answer to our simple inquiry? The Haskell interpreter could have given us a number\n3 but then we would not know if we are communicating at all. The meaning would not be referentially transparent. If I call you, that means I would like to talk to you and not to someone else. 
I only need to save your number once, and then each time I call you, you will answer. So basically our relation is referentially transparent. We could also say that in our program, if we define some variable or a function, then this definition will not change; it cannot, it makes no sense to change. Otherwise we would not know any more who to trust, what is real and what is not.\nI mean, if I just give you an apple without asking anything in return, and you give me back a pear one time and next time you give me a lecture in mathematics, what does that tell us about this exchange? It is impure, it is not clear what I will get, it is like talking to a madman, we never know what we will get.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "What is the advantage of currying? 
(programmers.stackexchange.com)\nsubmitted 3 years ago by [deleted]\n[–]tailcalled 4 points 3 years ago (7 children)\nNo I'm not. Take the Haskell function map:\nmap :: (a -> b) -> ([a] -> [b])\nThis function is curried. Its \"first\" parameter has the type (a -> b) while the second has [a]. Currying is the technique of making curried functions.\nTake the following piece of Scala code:\nval div = (x: Int, y: Int) => x / y\nval half = div(_: Int, 2)\nIn this case, to create half, we partially apply div with 2 as the second parameter.\n[–]finprogger -2 points 3 years ago (6 children)\nI don't see how what you just wrote contradicts what I said.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-2", "d_text": "It's not a difficult concept -- I use it on the Unix command line all the time -- but there was no book that could teach a practical programmer like me why it's interesting.\nBryan: A benefit to me in writing this and also to my co-authors, John and Don, was that we had all been at various times immersed in lots and lots of work in all kinds of other languages. I've written a huge amount of code in C++, in Python, in Lisp, in Perl. John has a similarly diverse background. If you look for his name on Amazon, you'll find that he's written several Python books for example.\nDon is probably the most academically inclined of us, but he works in languages like C and C++ every day as part of his job. We didn't come at this with any kind of an ivory tower background or perspective. We were all rooted in how do I do things that are interesting and useful rather than how do I express things that are algorithmically pure. 
We're certainly interested in that and find it to be a lot of fun, but I find being able to execute things and have them do stuff that's valuable to me to be as much a charge as any of the more esoteric pursuits.\nWhy should your average programmer care if something is algorithmically pure? What benefits are there to that?\nBryan: Why should an average programmer care? An average programmer is going to be concerned with a lot of different things simultaneously, right? If you're working on a large body of code, you're going to be concerned about modularity. Can I understand part of a program in isolation from the rest of it? Can I test part of a program in isolation from the rest of it? Can I read a particular part of a program and be able to make a fair estimate as to what it's really trying to do? Those kinds of concerns pop up all the time in day-to-day programming.\nthe idea of writing code that is very tight and that has very little intertwining with the concerns of code nearby is not unique to Haskell; it's just that Haskell happens to carry it a particularly long distance. Several different things there play into that. One is the idea that most code, by default, does not have any kinds of externally visible side-effects. Given the same inputs, it will always produce the same outputs.\nIs this most code in Haskell or most code in any programming?", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-3", "d_text": "What if the design decisions in a particular language are focused on robustness and maintainability?\nYou can make any language look good by comparing it to C++. If being functional is the magic bullet, then why do stateful languages like Python, Perl, Ruby, Lua, Lisp, etc. 
always do just as well as Haskell in this sort of comparison?\nUnlike the rest of your commenters (as far as I can tell), I *have* used Haskell for real-world problems, and can confirm that (for the problems I’ve used it for), it is significantly more productive than imperative languages like Java, C++ or Ada.\nI don’t think it’s quite ready for the mainstream, but it’s definitely got promise.\nI’m very sceptical about LOC as a measure. However, here’s a very recent informal data point, which I’m mentioning partly because it’s notable for the relatively controlled nature of the comparison: Lucian Wischik at Microsoft recently had to rewrite a 6,000 line F# program in C# (C# was required for unrelated reasons). It became 30,000 lines (references to comments by Lucian below). Now, this comparison is with essentially the same .NET libraries (F# adds a few minor libraries), the same (talented) programmer, the same .NET runtime, the same performance profile, the same problem specification, and the C# code has the advantage of being a reimplementation.\n“However, in order for a FP to, well, run, someone, somewhere, must pull a lever so that the wheels start turning according to the spec of the FP, and what ticks under the FP ultimately does have state.”\nThat’s not the point. The point is that an FP utilizes a framework that *safely* translates from an abstract language without a concept of state to a concrete language that is basically nothing but state.\nLikewise, object oriented languages do not actually have objects — whether they are detected at compile time or run time, they ultimately map to swaths of memory and functions.\nIt’s all smoke and mirrors. But it’s the magic of the smoke and mirrors that enhances our productivity. 
By building the funnel from a “safe”, “imaginary” language to the “dangerous”, “unproductive” one, we eliminate the problems inherent at the lower level.\nFunctional programming is not usually adopted because so many real-world systems are almost entirely side-effects.", "score": 8.086131989696522, "rank": 99}]} {"qid": 16, "question_text": "What precautions should be taken when using supplemental oxygen during surgery for someone with Duchenne muscular dystrophy?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "Surgery & Anesthesia\nWhen a patient with Duchenne muscular dystrophy has general anesthesia, a number of issues should be considered, including cardiac and pulmonary function, anesthesia, and the potential for blood loss.\nConsiderations for Surgery & Anesthesia\nIf you require surgery or a medical procedure, go to a medical center with expertise in the anesthetic management of people living with Duchenne if possible. Your anesthesiologist should be aware that you have Duchenne. It is important to discuss the anesthesia plan with your anesthesiologist before any surgery or procedure that involves anesthesia.\nAdditionally, your neuromuscular team should always be made aware if you are undergoing surgery or a medical procedure. Additionally, you should have evaluations by your cardiologist and pulmonologist before undergoing any type of surgery or medical procedure. Any cardiac and/or respiratory abnormalities should be identified and optimally treated before any surgery or procedure involving anesthesia.\nThese recommendations have been reviewed and approved by the Professional Advisory Council of the Malignant Hyperthermia Association of the U.S. (MHAUS). Discuss risks and benefits of planned anesthetic medications (agents) with your anesthesiologist.\nAnesthesia agents requiring caution in Duchenne\nPeople with Duchenne should NOT receive succinylcholine.\nThe drug succinylcholine (suxamethonium) is a depolarizing muscle relaxant. 
It is sometimes used in emergencies to relieve breathing difficulties in anesthetized patients. However, when succinylcholine is administered to patients with any kind of ongoing muscle atrophy, no matter the underlying cause, it can cause severe, life-threatening (and sometimes fatal) increases in blood potassium.\nInstead of succinylcholine, there are other commonly available muscle relaxants (e.g., any non-depolarizing neuromuscular blocker) that can be used in emergency situations if necessary (see the “Safe” list section below). It is possible, however, that there exists a rare situation (such as life-threatening airway obstruction that requires immediate treatment) where an anesthesiologist may be justified in administering succinylcholine to a patient with Duchenne whose life is in imminent danger.", "score": 48.95801018612741, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "People with Duchenne are at risk of developing rhabdomyolysis (the breakdown of skeletal muscle tissue that may cause the release of myoglobin that can damage the kidneys) and hyperkalemia (the release of too much potassium into the bloodstream), which can result in life-threatening heart rhythms.\nThere are known cases of serious (and sometimes fatal) muscle breakdown (rhabdomyolysis) in Duchenne patients exposed to inhalation anesthetic gases even when succinylcholine was avoided. Therefore, we recommend that, when possible, inhalational anesthetic gases should be avoided or used sparingly in people with Duchenne. However, there are certain circumstances when the benefit/risk ratio favors the use of these inhaled agents.
The administration of inhaled anesthetic agents may be suggested for the following reasons:\n- Prior to IV catheter insertion\nThe only other type of anesthetic agent is given intravenously (through an IV catheter). Inserting an IV catheter can be painful, and people with Duchenne may require multiple sticks due to the difficulty of finding veins when muscle mass is decreased. Giving inhaled anesthetic before the nurse attempts an IV stick can reduce the pain and make it easier to obtain IV access.\n- Propofol administration\nThe drug Propofol is a commonly used IV anesthetic agent for procedures. However, it can be very painful when it starts infusing through an IV. Sometimes inhaled anesthetic agents will be given before Propofol is started to avoid the pain of the infusion.\n- IV anesthesia is not available\nThere may be rare situations where IV anesthesia is not available, or is considered an inferior anesthetic choice based on the patient’s specific clinical situation.\nAm I at risk for “malignant hyperthermia?”\nAmong anesthesiologists, it is established knowledge that patients with Duchenne muscular dystrophy may develop unique complications when undergoing medical or surgical procedures that require certain anesthetic agents (discussed above). The complications, including rhabdomyolysis and hyperkalemia, are nearly indistinguishable from those seen when a patient develops a rare anesthetic-related complication called malignant hyperthermia (MH).\nAlthough clinically similar in appearance to the complications that may occur in Duchenne patients, they are actually two separate entities.
MH is typically an inherited genetic condition that has nothing to do with the dystrophin gene.", "score": 47.94795451331031, "rank": 2}, {"document_id": "doc-::chunk-3", "d_text": "Norcuron (Vecuronium), Pavulon (Pancuronium), Tracrium (Atracurium), Zemuron (Rocuronium)\nGabapentin (Neurontin), Topiramate (Topamax)\n- Anxiety Relieving Medications\nAtivan (Lorazepam), Centrax, Dalmane (Flurazepam), Halcion (Triazolam), Klonopin, Librax, Librium (Chlordiazepoxide), Midazolam (Versed), Paxipam (Halazepam), Restoril (Temazepam), Serax (Oxazepam), Tranxene (Clorazepate), Valium (Diazepam)\nBecause people with Duchenne have weak respiratory muscles, their diaphragms do not move up and down well and their intercostal muscles (the muscles that move the chest walls) do not expand the ribs well. This causes shallow breathing, but people with Duchenne compensate for this over time, and can provide the body with adequate oxygen supply and adequate removal of carbon dioxide. During surgery or procedures, certain anesthetics may lead to increasingly shallow breathing. Breathing that is too shallow (hypoventilation) may lead to low oxygen levels and high carbon dioxide levels.\nYour anesthesiologist may decide you will need supplemental oxygen during your procedure, but they must use caution. When extra or supplemental oxygen is given, this delicate balance is disturbed. The respiratory center may get the false impression that the body has enough oxygen and no longer needs to breathe. Without breathing, carbon dioxide can build to dangerous levels (called hypercapnia). If oxygen is required during surgery or medical procedures, the anesthesiologist must use caution and monitor you closely. They may also use non-invasive ventilation (e.g., a BiPAP machine) during the procedure to ensure you are breathing adequately.\nDuring surgery, it is sometimes necessary to support your breathing due to anesthesia or muscle relaxants given during the procedure.
Intubation involves putting a breathing tube (also known as an endotracheal tube) into your airway. This breathing tube is then connected to a breathing machine (respirator or ventilator). This machine will then either assist with your breathing or actually breathe for you, depending on your pulmonary function, the length of the surgery, or the type of surgery.", "score": 46.79688780519318, "rank": 3}, {"document_id": "doc-::chunk-2", "d_text": "Because of the similar symptoms of these two complications, there was a time when many clinicians believed that the anesthesia complications in Duchenne patients were, in fact, MH. However, after studying this over the years, we now know that this isn’t true. MH occurs in patients who have inherited MH-causing mutations, which people with Duchenne are not at a higher risk for than the general population.\nDuchenne patients are not at an increased risk of developing MH, but may continue to be at increased risk of rhabdomyolysis when administered inhaled anesthetic gases. These guidelines have been reviewed and approved by the Professional Advisory Council of the Malignant Hyperthermia Association of the U.S.
(MHAUS).\nAll intravenous (IV) anesthetic agents are considered safe to give to people with Duchenne under close monitoring.\n- Barbiturates/Intravenous Anesthetics\nDiazepam (Valium), Etomidate (Amidate), Ketamine (Ketalar), Methohexital (Brevital), Midazolam (Versed), Propofol (Diprivan), Thiopental (Pentothal)\n- Inhaled Non-Volatile General Anesthetic\nNitrous Oxide (“laughing gas”)\n- Local Anesthetics\nAmethocaine, Articaine, Bupivacaine, Etidocaine, Lidocaine (Xylocaine), Levobupivacaine, Mepivacaine (Carbocaine), Procaine (Novocain), Prilocaine (Citanest), Ropivacaine, Benzocaine (caution re: methemoglobinemia risk)\n- Narcotics (opioids)\nAlfentanil (Alfenta), Codeine (Methyl Morphine), Fentanyl (Sublimaze), Hydromorphone (Dilaudid), Meperidine (Demerol), Methadone, Morphine, Naloxone, Oxycodone, Remifentanil, Sufentanil (Sufenta)\n- Muscle Relaxants\nArduan (Pipecuronium), Curare (The active ingredient is d-Tubocurarine), Metocurine, Mivacron (Mivacurium), Neuromax (Doxacurium), Nimbex (Cisatracurium),", "score": 45.3259056938843, "rank": 4}, {"document_id": "doc-::chunk-5", "d_text": "Pulmonary function tests and blood-gases are recommended in the case of significant respiratory embarrassment. As diminution of respiratory muscle strength cannot be ruled out even without symptoms, vigilance during the postoperative period is essential.\nPerioperative steroid supplementation to avoid adrenal suppression has been advocated in patients with recent steroid usage, and we used I.V. hydrocortisone 100 mg just before induction as our patient had received steroids recently.\nOur patient requested G.A., and induction with propofol-fentanyl provided smooth intubation in spite of a 65-70% TOF ratio with the ED90 dose of atracurium, indicating resistance. Intraoperatively, TOF ratio-guided top-ups of atracurium provided the adequate NM block required for the orthopaedic procedure. Maintenance of O.T. temperature at 22°C helped in maintaining normal core temperature.
Postoperatively, we promptly treated a rise of 1°C in body temperature, and no relapses occurred in our patient. The postoperative respiratory distress was successfully treated with oxygen therapy by Ventimask.\nThe perioperative period was thus uneventful. To summarize, the optimal anaesthetic management of MS requires careful preoperative assessment, awareness towards perioperative care and postoperative exacerbations of MS. The latter invites special attention, and recovery room care with appropriate monitors and oxygen therapy/mechanical ventilation is necessary.\nReferences\n1. Hauser SL, Goodin DS. Multiple sclerosis and demyelinating diseases. Harrison's Principles of Internal Medicine. 17th ed., Vol. 2. New York, London: McGraw-Hill Medical; 2008. p. 2611-20.\n2. Dierdorf SF, Scott Walton J. Anesthesia for patients with rare and co-existing diseases. In: Barash PG, Cullen BF, Stoelting RK, editors. Clinical Anesthesia. 5th ed. Philadelphia: Lippincott, Williams & Wilkins; 2006. p. 510-1.\n3. Dorotta IR, Schubert A. Multiple sclerosis and anesthetic implications. Curr Opin Anaesthesiol 2002;15:365-70.\n4. Perlas A, Chan VW. Neuraxial anesthesia and multiple sclerosis. Can J Anaesth 2005;52:454-8.", "score": 42.67769716118683, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Oxygen therapy, a lifeline for many critically ill patients, can be delivered in nonintubated patients via several devices. Unlike in patients with chronic hypoxemia, the long-term comfort or cosmetics of the patient are not a concern of intensivists; instead, the goal is to ensure adequate oxygen delivery to prevent hypoxemia. Although hypoxemia is often corrected with oxygen therapy, care should be taken to understand the pathophysiology leading to hypoxemia. The appropriate management of hypoxemia should include treatment of the underlying pathology to prevent any complication and progression of the disease.
For example, many patients with postoperative atelectasis develop hypoxemia responsive to oxygen therapy. Treatment of postoperative hypoxemia with oxygen supplementation alone, without initiating lung reexpansion measures to treat atelectasis, is insufficient. This chapter covers noninvasive modes of supplying oxygen and does not discuss other means of correcting hypoxemia.\nKeywords: Chronic Obstructive Pulmonary Disease, Obstructive Sleep Apnea, Continuous Positive Airway Pressure, Oxygen Therapy, Oxygen Supplementation\n- American Association of Respiratory Care. Clinical practice guidelines: oxygen therapy in the acute care hospital. Respir Care 1991; 36: 1306–1311.\n- Cairo JM. Administering medical gases: regulators, flowmeters, and controlling devices. In: Mosby’s Respiratory Care Equipment, 6th Ed. St. Louis: Mosby, 1999.\n- Phillip Y, Kristo D, Kallish M. Writing the take-home oxygen prescription for COPD patients. Document hypoxemia, then aim for a target oxygenation level. J Crit Illness 1998; 13 (2): 112–120.", "score": 41.34064994340279, "rank": 6}, {"document_id": "doc-::chunk-4", "d_text": "Intubation is usually done after you have received anesthesia and are asleep. The anesthesiologist will monitor you closely and control the ventilator. The tube will usually remain in and connected to the machine until the surgery is completed.\nAfter the surgery, the breathing tube must be removed. The process of removing the breathing tube is called extubation. Because breathing tubes were sometimes, in the past, left in place too long or were removed incorrectly, a protocol for extubation was developed by Drs. Mary Schroth and John Bach.
When you meet with your anesthesiologist before surgery, be sure you discuss the intubation/extubation and show them the protocol.\nDentistry generally can, and should, be performed with the minimal amount of anesthesia possible while providing the patient maximal physical and emotional comfort. Local anesthetics, nitrous oxide, and an oxygen “wash out” are safe for most patients with Duchenne, especially patients who are ambulatory with normal pulmonary function (normal breathing).\nPatients with Duchenne who have pulmonary dysfunction (abnormal breathing) should consider receiving dental care requiring general anesthesia in a hospital or surgery center staffed with an anesthesiologist, and equipped to monitor intra-operative respiratory functioning and to manage potential respiratory and cardiac emergencies. Visit this page for more information concerning dental procedures.", "score": 41.1763087532362, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "Evaluate pulmonary function (clinical, chest radiographs, CT, pulmonary function test, arterial blood gas analysis). Echocardiography should be performed if cor pulmonale and pulmonary hypertension are suspected. Assess the airway for difficult airway management. Evaluate neurologic function (clinical, full history, electroencephalogram, CT, MRI). Postoperative ventilatory support may be necessary and should be arranged beforehand.\nNo literature is available.\nPotentially difficult airway management in view of facial abnormalities.\nSpontaneous ventilation should be maintained until the airway is secured.\nPatients may not tolerate general anesthesia if respiratory function is\nBecause of increased sensitivity to muscle relaxants, avoid depolarizing muscle relaxants and use nondepolarizing agents cautiously under the control of a peripheral nerve stimulator. Consider interaction between anesthetic drugs and antiepileptic drugs.\nNumerous syndromes are associated with agenesis of the corpus callosum.
The following list is not exhaustive.\nAicardi Syndrome: Combination of myoclonic seizures with characteristic EEG pattern, lacunar chorioretinopathy, and (complete or partial) agenesis of the corpus callosum is characteristic of this X-linked dominant inherited syndrome.", "score": 40.91571906193206, "rank": 8}, {"document_id": "doc-::chunk-3", "d_text": "We excluded patients with a history of fever or infection within 24 hours of surgery, a history of susceptibility to malignant hyperthermia, or with current heart and lung disease.\nPatients were premedicated with 2 to 3 mg midazolam in the preoperative holding area or just before anesthetic induction. Anesthesia was induced with IV propofol and maintained with a volatile anesthetic that was adjusted to keep mean arterial blood pressure near 90% of the preinduction value. An inspired oxygen fraction of 1.0 was used at induction of anesthesia until tracheal intubation and during extubation. All patients were standardized to receive 80% inspired oxygen during the intraoperative period. Patients’ lungs were mechanically ventilated with a tidal volume of 6 to 8 mL/kg of ideal body weight at a rate sufficient to maintain end-tidal PCO2 near 40 mm Hg; a positive end-expiratory pressure of 5 to 10 cm H2O was applied.\nPatients were given approximately 10 mL/kg/h of crystalloid throughout surgery, normalized to ideal body weight. Fluids were standardized at a rate of 3.5 mL/kg/h for the first 24 postoperative hours and at a rate of 2 mL/kg/h for the subsequent 24 hours, again normalized to ideal body weight. Intraoperative core temperature was maintained near 36°C using forced-air warming and heated IV fluids.33,34 Patients were given 100% inspired oxygen before tracheal extubation. Intraoperative analgesia was provided with IV fentanyl titrated at the discretion of individual anesthesiologists. Each morbidly obese patient may have had different requirements of opioids, so the protocol was not strict.
Postoperative analgesia was provided by patient-controlled morphine or hydromorphone.\nPatients were assigned 1:1 to routine or supplemental postoperative oxygen. Randomization was based on reproducible computer-generated codes that were maintained in sequentially numbered opaque envelopes until the end of surgery. Randomization was stratified by study site. The 2 groups are listed below:\n1. Routine oxygen administration: Extubated patients were given 2 L/min oxygen via a nasal cannula until the first postoperative morning.", "score": 39.56426289517717, "rank": 9}, {"document_id": "doc-::chunk-3", "d_text": "Induction of anesthesia was achieved with 1–2 mg/kg of propofol, 2–3 µg/kg of fentanyl, and 0.1 mg/kg of vecuronium. A double-lumen endobronchial tube was placed for lung isolation. Maintenance of anesthesia consisted of oxygen, sevoflurane, and intermittent intravenous bolus doses of fentanyl 0.5 µg/kg. Dose adjustments of sevoflurane and fentanyl were based on standard clinical signs and hemodynamic measurements. Signs of inadequate analgesia were defined as an increase in heart rate (HR) and mean arterial pressure (MAP) of more than 20% from baseline. Hypotension was defined as a MAP of less than 20% from baseline. Supplemental vecuronium was administered if indicated by monitoring muscle relaxation with a nerve stimulator (model NS242; Fisher Paykel, Auckland, New Zealand. +6495740100). During the perioperative period, both groups received intravenous fluid (lactated Ringer solution) at 5 ml/kg/h.\nPressure-controlled ventilation was applied with FiO2 of 0.5 in air. During one-lung ventilation, FiO2 was increased to 0.8. At the end of surgery, residual neuromuscular block was reversed with neostigmine (0.04 mg/kg) and atropine (0.01 mg/kg), and the endotracheal tube was removed when the patient met criteria for extubation. Patients were transferred to the postanesthesia care unit (PACU) at the end of surgery. 
The final fentanyl dose was given ∼20 min before the end of surgery. After extubation, patients received 0.45–0.5% nebulized oxygen by facemask. Patients were monitored in the PACU for 2 h, and the need for reintubation in the postoperative period was recorded. Patients were then transferred to the high dependency unit for the rest of the 24 h follow-up. Patients who completed the 24 h follow-up were transferred to the ward. All the operations were performed by the same surgical team. The anesthetic technique for surgery was performed similarly for all patients by the same anesthetic team.", "score": 39.06696456967899, "rank": 10}, {"document_id": "doc-::chunk-3", "d_text": "All patients were premedicated with oral alprazolam 0.25 mg the night before surgery. On the day of surgery, they received premedication of glycopyrrolate 0.2 mg intramuscularly 30 min prior to induction of anesthesia. On arrival in the operation room, routine hemodynamic monitoring was performed by automatic blood pressure measurements, five-lead ECG monitor, and finger pulse oximetry. An intravenous infusion of Ringer's lactate was started, followed by intravenous metoclopramide 10 mg and midazolam 2 mg. Group D patients (n = 30) were given intravenous dexmedetomidine 1 μg/kg and Group F patients (n = 30) were given fentanyl 2 μg/kg, over a 10-min period before induction of general anesthesia.\nAfter preoxygenation for 3 min, anesthesia was induced with propofol (2 mg/kg) and tracheal intubation was facilitated by vecuronium 0.1 mg/kg. Anesthesia was maintained with isoflurane 1-1.5% and 60% nitrous oxide in oxygen with supplementary fentanyl (50-100 μg) to maintain the heart rate and mean arterial pressure within 20% of preinduction values and/or heart rate <85 beats/min during surgical stimulation. The patients' lungs were initially mechanically ventilated with a tidal volume of 8 ml/kg, a respiratory rate of 12 breaths/min, and an I:E ratio of 1:2 in volume-controlled mode.
Five minutes after securing the airway and abdominal insufflation by carbon dioxide, the lung mechanics were adjusted to maintain normocapnia (an end-tidal carbon dioxide value of 35-40 mm Hg) and intra-abdominal pressure was maintained between 12 and 15 mm Hg. The degree of muscle relaxation was maintained using a train-of-four ratio of <25% with supplemental doses of vecuronium bromide (0.05 mg). All patients were covered to maintain normothermia.", "score": 38.65272834539861, "rank": 11}, {"document_id": "doc-::chunk-1", "d_text": "If respiratory muscles become weakened, using a ventilator may become necessary.\nTo release the contractures that may develop and that can position joints in painful ways, doctors can perform a tendon release surgery. This may be done to release tendons of your hip and knee and the Achilles tendon at the back of your foot. Surgery may also be needed to correct curvature of the spine.\nBecause respiratory infections may become a problem in later stages of muscular dystrophy, it's important to be vaccinated for pneumonia and to keep up to date with influenza shots.", "score": 37.92981165203769, "rank": 12}, {"document_id": "doc-::chunk-2", "d_text": "Neuromuscular monitoring is suggested in these patients due to the lack of standard recommendations regarding the application of nondepolarizing muscle relaxants. Succinylcholine should be avoided as it might trigger hyperthermia and may lead to hyperkalemia.\nWe used regional anesthesia considering all these implications of general anesthesia and wish to highlight that regional anesthesia should be used in these patients, whenever possible.\nReferences\nGunusen I, Karaman S, Nemli S, Firat V. Anesthetic management for cesarean delivery in a pregnant woman with polymyositis: A case report and review of literature. Cases J 2009;2:9107.\nShrestha GS, Aryal D. Anaesthetic management of a patient with dermatomyositis and valvular heart disease.
Kathmandu Univ Med J (KUMJ) 2012;10:100-2.\nRöckelein S, Gebert M, Baar H, Endsberger G. Neuromuscular blockade with atracurium in dermatomyositis. Anaesthesist 1995;44:442-4.\nGanta R, Campbell IT, Mostafa SM. Anesthesia and acute dermatomyositis/polymyositis. Br J Anaesth 1988;60:854-8.\nJohns RA, Finholt DA, Stirt JA. Anaesthetic management of a child with dermatomyositis. Can Anaesth Soc J 1986;33:71-4.", "score": 37.43046064469288, "rank": 13}, {"document_id": "doc-::chunk-2", "d_text": "Anaesthesia was maintained with dexmedetomidine and desflurane titrated to a minimum alveolar concentration of 1 to 1.2 via an 8 mm ID endotracheal tube. Surgery lasted 6 h with a console time of 3 h. Pelvic lymphadenectomy could not be done because of gross fatty deposition in the pelvis. Blood gases done after undocking showed acceptable values of oxygenation, pH, and normocarbia. Considering the short console time, the less-than-planned extent of surgery, and the high chance of pulmonary complications with prolonged ventilation, tracheal extubation was planned. However, as the patient was not conscious even 1 h after reversal of neuromuscular blockade and return of acceptable spontaneous minute volumes, she had to be ventilated postoperatively. The ONSD at this time point was elevated at 5.1 mm. She was extubated after 12 h of elective ventilation.\nA 57-year-old diabetic patient of carcinoma endometrium was planned for robot-assisted radical hysterectomy with lymph node exenteration. Her body weight was 137 kg with a BMI of 55.5 kg/m2. She had a history of sleep apnoea. Pulmonary function tests showed mild restriction with an FVC of 79% of predicted and an FEV1:FVC ratio of 108% of predicted.\nBaseline ONSD was 3.5 mm. 
Anaesthesia was maintained via an 8-mm ID endotracheal tube on pressure-controlled ventilation with desflurane titrated to a bispectral index of 40–50, and dexmedetomidine. Surgery lasted 7.5 h. The ONSD was 4.4 mm at 30 min after undocking. The pre-extubation values of bispectral index and blood gases were acceptable. She was extubated uneventfully on the operating table after complete awakening.\nA 70-year-old hypertensive and diabetic male patient of carcinoma rectum was planned for robotic abdominoperineal resection. He was a known asthmatic with poor effort tolerance. He weighed 117 kg with a BMI of 39.44 kg/m2. Pulmonary function test showed an FVC of 70% of predicted and moderate reversibility with bronchodilators.", "score": 35.114931671288694, "rank": 14}, {"document_id": "doc-::chunk-1", "d_text": "Doctors know that giving patients too many fluids or overly large breaths during anesthesia can cause pulmonary problems afterwards.\nFernandez-Bustamante said that paying more attention to preventing atelectasis, for example, before, during and after surgery, could reduce some of them, improve oxygenation, and prevent the need for oxygen therapy and hospital stay.\nShe noted that physicians must also optimize fluids and pain control, and minimize blood loss during operations to prevent PPCs. Doing all of this, she said, could improve patient outcomes and result in shorter hospital stays.\n\"Surgeons, anesthesiologists, nurses, respiratory therapists, and others, must collaborate better to make this successful. And of course patients need to know they play a critical role in their own recovery. We must work with them closely before, during and after surgery,\" Fernandez-Bustamante said. 
\"If we want patients to have less pulmonary complications, we need a truly comprehensive approach to this problem.\"", "score": 34.81502420540219, "rank": 15}, {"document_id": "doc-::chunk-10", "d_text": "We therefore intentionally delayed our formal evaluation of lung function to the first postoperative day. We tested 80% oxygen because this concentration provides many of the benefits of 100% oxygen but with less direct pulmonary toxicity. It therefore remains possible that administration of 100% perioperative oxygen is associated with clinically important atelectasis.\nIn summary, lung volumes, the incidence and severity of atelectasis, as well as alveolar gas exchange were comparable in patients given 30% and 80% oxygen during and for 2 h after colon resection. We conclude that administration of 80% oxygen in the perioperative period does not worsen lung function. Therefore, patients who may benefit from generous oxygen partial pressures should not be denied supplemental perioperative oxygen for fear of causing atelectasis.\nThe authors gratefully acknowledge the support and generous assistance of the CT technicians, Marlene Thiem, R.T.A., and Sylvia Kiss, R.T.A., and the generous contributions of Alois Werba, M.D., Cem F. Arkiliç, M.D., and Folke Seibt, M.D.\n1. Lindberg P, Gunnarsson L, Tokics L, Secher E, Lundquist H, Brismar B, Hedenstierna G: Atelectasis and lung function in the postoperative period. Acta Anaesthesiol Scand 1992; 36: 546–53\n2. Joyce CJ, Baker AB: Effects of inspired gas composition during anaesthesia for abdominal hysterectomy on postoperative lung volumes. Br J Anaesth 1993; 75: 417–21\n3. Schwieger I, Gamulin Z, Suter PM: Lung function during anesthesia and respiratory insufficiency in the postoperative period: Physiological and clinical implications. Acta Anaesthesiol Scand 1989; 33: 527–34\n4. Bergman NA: Distribution of inspired gas during anesthesia and artificial ventilation. J Appl Physiol 1963; 18: 1085–9\n5. 
Bendixen HH, Hedley-Whyte J, Laver MB: Impaired oxygenation in surgical patients during general anesthesia with controlled ventilation: A concept of atelectasis. N Engl J Med 1963; 269: 991–6\n6.", "score": 34.42258937847897, "rank": 16}, {"document_id": "doc-::chunk-11", "d_text": "Supplemental oxygen also improves tissue oxygenation adjacent to abdominal wounds: 75 vs 52 mm Hg, P = 0.005.37\nWe were thus unsurprised that supplemental postoperative oxygen almost halved the risk of infection-related complications in our preliminary study of morbidly obese patients having open Roux-en-Y gastric bypass (n = 96).48 There was, nonetheless, no statistically significant difference in the risk of surgical site infections or associated complications in the 400 patients we randomized to supplemental (approximately 80%) or nasal cannula (approximately 30%) oxygen for 12 to 16 postoperative hours. Major complications were chosen to be serious and plausibly related to infection or wound healing, both of which were likely to be improved by supplemental oxygen. Use of a composite outcome was intuitive for this study because we expected supplemental oxygen to reduce the risk of various complications; a single outcome, such as surgical site infection, was unlikely to capture the anticipated treatment benefit so well.\nThe most obvious difference between the preliminary study and full trial was that a laparoscopic approach was used in 91% of the patients in the full trial, whereas all the preliminary cases were open. Although there was a nonsignificant trend toward a benefit from supplemental oxygen in open procedures in the full trial (n = 37, relative risk 0.67 [95% CI: 0.2–2.3]), there was no overall benefit when open and laparoscopic cases were combined.
Since the current surgical trend is toward laparoscopic procedures even in the most morbidly obese patients, it is the results in all patients (mostly laparoscopic) that are most relevant to current practice.\nThe overall rate of surgical site infections and complications (13%) was lower in both groups than the 25% we expected based on previous studies25,50,51 and our preliminary data.48 However, as more Roux-en-Y gastric bypasses are done laparoscopically and surgical technique improves, the incidence of complications has decreased even in the largest patients.52,53 Neither the futility nor efficacy boundaries were crossed after recruitment of the initial quarter of the patients. However, the futility boundary was crossed at the final analysis of 400 patients (P = 0.80 > the futility boundary of 0.2757). The Executive Committee thus stopped the trial since the probability of identifying a significant difference was low even if the trial continued to completion.", "score": 33.762783993518354, "rank": 17}, {"document_id": "doc-::chunk-11", "d_text": "Philadelphia: Hanley & Belfus; 1996. p 285-301.\n6. Bach JR, Rajaraman R, Ballanger F, Tzeng AC, Ishikawa Y, Kulessa R, Bansal T. Neuromuscular ventilatory insufficiency: the effect of home mechanical ventilator use vs. oxygen therapy on pneumonia and hospitalization rates. Am J Phys Med Rehabil 1998;77:8-19.\n7. Gomez-Merino E, Bach JR, Blasco ML. Duchenne muscular dystrophy: prolongation of life by noninvasive respiratory muscle aids. Am J Phys Med Rehabil (in press).\n8. Bach JR, Saporito LR. Criteria for extubation and tracheostomy tube removal for patients with ventilatory failure: a different approach to weaning. Chest 1996;110:1566-1571.\n9. Hodes HL. Treatment of respiratory difficulty in poliomyelitis. In: Poliomyelitis: papers and discussions presented at the third international poliomyelitis conference. Philadelphia: Lippincott; 1955. p 91-113.\n10. Bach JR, Haber II, Wang TG, Alba AS. 
Alveolar ventilation as a function of ventilatory support method. Eur J Phys Med Rehabil 1995;5:80-84.\n11. Le Bourdelles G, Viires N, Boczkowski J, Seta N, Pavlovic D, Aubier M. Effects of mechanical ventilation on diaphragmatic contractile properties in rats. Am J Respir Crit Care Med 1994;149:1539-44.\n12. Berk JL, Levy MN. Profound reflex bradycardia produced by transient hypoxia or hypercapnia in man. Eur Surg Res 1977;9:75-84.\n13. Mathias CJ. Bradycardia and cardiac arrest during tracheal suction - mechanisms in tetraplegic patients. Europ J Intens Care Med 1976;2:147-56.\n14. Welply NC, Mathias CJ, Frankel HL. Circulatory reflexes in tetraplegics during artificial ventilation and general anesthesia.", "score": 32.78363990505761, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "BACKGROUND: Morbidly obese patients are at high risk for perioperative complications, including surgical site infections. Baseline arterial oxygenation is low in the morbidly obese, leading to low tissue oxygenation, which in turn is a primary determinant of infection risk. We therefore tested the hypothesis that extending intraoperative supplemental oxygen 12 to 16 hours into the postoperative period reduces the risk of surgical site infection and healing-related complications.\nMETHODS: Morbidly obese patients having open or laparoscopic bariatric surgery were given 80% inspired oxygen intraoperatively. Postoperatively, patients were randomly assigned to either 2 L/min of oxygen via nasal cannula or approximately 80% supplemental inspired oxygen after tracheal extubation until the first postoperative morning. 
The risks of surgical site infection and of major healing-related complications were evaluated 60 days after surgery.
RESULTS: In a preplanned interim analysis based on the initial 400 patients, the overall observed incidence of the collapsed composite of major complications was 13.3%; the observed incidence of components of the composite outcome ranged from 0% (peritonitis) to 8.5% (surgical wound infection). The estimated relative risk of any ≥1 major complications occurring within the first 60 days after surgery, adjusting for study site, was 0.94 (95% confidence interval, 0.52–1.68) (P = 0.80, Cochran–Mantel–Haenszel). The Executive Committee thus stopped the trial for futility.
CONCLUSIONS: Supplemental postoperative oxygen does not reduce the rate of surgical site infection or healing-related postoperative complications in patients having gastric bypass surgery.
From the *Department of Anesthesiology, University of Louisville, Louisville, Kentucky; †Department of Anesthesiology and General Intensive Care, Vienna General Hospital, University of Vienna, Vienna, Austria; and ‡Department of Outcomes Research, Cleveland Clinic, Cleveland, Ohio.
Anupama Wadhwa, MD, is currently affiliated with the Department of Outcomes Research, Cleveland Clinic, Cleveland, Ohio.
Accepted for publication April 14, 2014.
Details of Supplemental Postoperative Oxygen Trial (SPOT) Investigators are provided in Appendix.
Funding: This study was funded from internal sources only. Viasys Healthcare, Inc.
(Yorba Linda, CA) donated the Hi-Ox.
The authors declare no conflicts of interest.

When we started the trial, available evidence suggested that supplemental oxygen, continued to –6 hours postoperatively, almost halved infection risk.22 However, the extent to which supplemental oxygen might be protective for wound infection is now unclear after recent publications of the PROXI and ISO2 trials.25,54 Our current results do not directly address optimal intraoperative oxygen management since randomization was restricted to the postoperative period and all patients received supplemental oxygen in the intraoperative period. Nonetheless, our results seem inconsistent with the general theory that supplemental oxygen reduces wound infection risk.
Why supplemental oxygen does not further reduce surgical site infections and complications in the morbidly obese population remains unclear, especially given the overwhelming evidence that tissue oxygenation is a key determinant of oxidative killing and that oxidative killing is the primary defense against bacterial contamination.8,9 But it is possible that oxygen is no longer effective after the "decisive period" for infection has passed; this period falls within a few hours of contamination, that is, of the incision.
Aside from the timing and duration of supplemental oxygen administration, the major difference between previous trials of supplemental oxygen and our current results is that our patients were morbidly obese.
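The relative risk and confidence interval quoted in the results above come from a Cochran–Mantel–Haenszel analysis stratified by study site, which the published counts do not let us reproduce exactly. As a rough illustration of the kind of calculation involved, the sketch below computes an unadjusted relative risk with a Wald (log-scale) 95% CI from hypothetical per-arm counts; the counts are made up for illustration, since only the overall 13.3% incidence is reported.

```python
import math

def relative_risk(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Unadjusted relative risk with a Wald (log-scale) 95% CI.

    Sketch only: the trial reports a *stratified* Cochran-Mantel-Haenszel
    estimate adjusted for study site; this shows the simpler unstratified
    calculation.
    """
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    # Standard error of log(RR) under the Wald approximation
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts (NOT the trial's actual per-arm data, which the
# abstract does not report): 26/200 vs 27/200 major complications.
rr, lo, hi = relative_risk(26, 200, 27, 200)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With counts this close and an interval this wide straddling 1.0, the point estimate is uninformative, which is exactly the situation the futility analysis detected.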
The obese patient population was selected for supplemental oxygen in this study since perioperative tissue oxygenation is normally low in this population,22 and high inspired concentrations are required to return tissue partial pressures to the normal range.27 Tissue oxygenation is also impaired by the frequent hypoxemic episodes during sleep caused by obstructive sleep apnea, which has a prevalence approaching 75% to 86% in this obese population.30–32
Consistent with their many risk factors, wound infections and infection-related complications are common in the obese. In 189 patients having colorectal procedures, for example, wound infection risk significantly correlated with the thickness of subcutaneous fat: 8% of those with <2 cm of subcutaneous fat developed a wound infection compared with 27% of those with >4.5 cm of fat.

They discuss proper configuration of surgical drapes to avoid accumulation of oxygen and stress adequate drying time when alcohol-based skin preps are used. Sponges, gauze, or other cottonoids should be moistened when used near an ignition source or near an oxygen-enriched area such as the airway. Scavenging the operating field with suction may help avoid buildup of oxygen (or nitrous oxide).
They discuss use of laser-resistant tracheal tubes (matching the tube type to the laser type) and note that the tracheal tube cuff should be filled with saline rather than air. The saline could also be tinted with methylene blue to help identify laser punctures. They also discuss the type of oxygen delivery system that should be used, based on the required depth of sedation and oxygen dependence.
Coordination between the anesthesiologist and surgeon is critical when it comes to using lasers, electrocautery tools, electrosurgical tools, or other potential sources of ignition.
The surgeon should give adequate notice that he/she is about to use such a device, and adequate time should then be allowed to elapse for the anesthesiologist to take steps to minimize the oxygen in the area.
Their discussion on management of an actual fire is a good one. They emphasize early recognition and note that the early signs of a fire may not just be a flame or flash but might include unusual sounds (e.g., "pop," "snap," or "foomp"), odors, smoke, heat, unexpected movement of the drapes or patient, or discoloration of the drapes or breathing apparatus. When a fire is detected, it should immediately be announced, the procedure halted, and fire management tasks begun. Each team member should do their assigned task as quickly as possible, not waiting if another team member has not been able to do their task in a predetermined order. The specific tasks are outlined in the document and depend on whether the fire is in the airway or breathing circuit or elsewhere on the patient. They discuss assessment and management of the patient after a fire and discuss more general fire responses as well.
The advisory is very well referenced, and levels of evidence are graded. They do provide a nice algorithm on operating room fires. It can be used as an educational tool and as part of the preoperative procedure, but it does not look like it would be of much use emergently during a fire.

NIMV has been well documented in COPD during periods of acute exacerbation, reducing respiratory work and improving clinical results,8,20 but its benefit during the anesthetic-surgical procedure remains undefined.
This case and the others reported in the literature have demonstrated the simple and easy applicability of NIMV in the intraoperative period.
NIMV appears to be a useful intraoperative ventilation support in situations of chronic disease, such as advanced COPD, as well as in worsened chronic and acute situations. Other groups of patients have been reported and may benefit. The anesthesiologist must become familiar with the noninvasive method and, together with the clinical team, evaluate which patients will truly benefit from this type of support.
01. Mannino DM. COPD: epidemiology, prevalence, morbidity and mortality, and disease heterogeneity. Chest. 2002;121(5 Suppl):121S-126S. Review.
02. Licker M, Schweizer A, Ellenberger C, Tschopp JM, Diaper J, Clergue F. Perioperative medical management of patients with COPD. Int J Chron Obstruct Pulmon Dis. 2007;2(4):493-515.
03. Wong DH, Weber EC, Schell MJ, Wong AB, Anderson CT, Barker SJ. Factors associated with postoperative pulmonary complications in patients with severe chronic obstructive pulmonary disease. Anesth Analg. 1995;80(2):276-84.
04. Menezes AM, Jardim JR, Perez-Padilla R, Camelier A, Rosa F, Nascimento O, et al. Prevalence of chronic obstructive pulmonary disease and associated factors: the PLATINO Study in Sao Paulo, Brazil. Cad Saude Publica. 2005;21(5):1565-73.
05. Mircea N, Constantinescu C, Jianu E, Busu G. Risk of pulmonary complications in surgical patients. Resuscitation. 1982;10(1):33-41.
06. Wightman JA. A prospective survey of the incidence of postoperative pulmonary complications. Br J Surg.
1968;55(2):85-91.

The ECRI recommends that anesthesiologists stop using 100 percent oxygen in the OR and deliver only what the surgical patient needs, perhaps by diluting the oxygen concentration with room air when surgical tools such as electronic scalpels and cauterizers are in use.
"What we've been advocating for years is that the open delivery of oxygen under the drapes essentially has to stop," Bruley says, with some exceptions such as cardiac pacemaker surgery or operations involving a neck artery.
It's important for surgical professionals to be aware of the risks and what can be done to prevent fire in the OR because, while it's rare, a surgical fire can be fatal to a patient.
Sources: The Associated Press; JointCommisson.org

Patients that have undergone emergency surgery to address respiratory distress are routinely supplemented with oxygen in the early recovery period and are monitored for oxygenation status using physical parameters (respiratory rate, respiratory character, and mucous membrane color), pulse oximetry (SpO2), and, when practical, arterial blood gases (PaO2). Clinical signs of hypoxia (increased respiratory rate, abnormal respiratory character, and pale, cyanotic, or "muddy" mucous membranes), low SpO2 (less than 96%), and low PaO2 (less than 80 mmHg) indicate continued use of supplemental oxygen. Oxygen therapy is continued if normoxia cannot be achieved when the animal is breathing room air, and positive-pressure ventilation with positive end-expiratory pressure may be necessary if normoxia cannot be achieved with administration of 100% inspired oxygen. Monitoring is not limited to the above-mentioned parameters.
All critically ill respiratory patients are evaluated at periodic intervals by assessing attitude, pulse rate and quality, and capillary refill time, and by performing thoracic auscultation. Invasive and/or noninvasive arterial blood pressure is monitored in patients with potential (or existent) hemodynamic instability. Intakes (parenteral fluid therapy and oral intake) and outputs (urine, emesis, and defecation) are monitored. Because of potential pulmonary compromise in certain respiratory distress patients, intravenous fluid therapy must be performed judiciously to prevent volume overload; therefore, central venous pressure monitoring is employed.

Management Of Tracheostomy Tubes

Supplemental oxygen during recovery can be achieved through a small tube (8 French) placed into the lumen of the tracheostomy tube. The oxygen flow rate should be nearly half of what is required with intranasal oxygen administration because oxygen is delivered directly into the trachea, making the trachea an oxygen-rich reservoir. Upon recovery from anesthesia the need for supplemental oxygen is based on the oxygenation parameters listed above.
Tracheostomy tube hygiene is extremely important because of the risk of iatrogenic respiratory infection and the possibility of acute fatal obstruction due to accumulated respiratory tract secretions. Immediately after surgical placement and for the first several hours, tracheostomy tubes require constant vigilance and hourly removal of intraluminal secretions. Around-the-clock observation and care are mandatory.

Comparable Postoperative Pulmonary Atelectasis in Patients Given 30% or 80% Oxygen during and 2 Hours after Colon Resection
Akça, Ozan M.D.*; Podolsky, Andrea M.D.†; Eisenhuber, Edith M.D.‡; Panzer, Oliver M.D.*; Hetz, Hubert M.D.§; Lampl, Karl M.D.§; Lackner, Franz X.
M.D.∥; Wittmann, Karin M.D.#; Grabenwoeger, Florian M.D.**; Kurz, Andrea M.D.††; Schultz, Anette‐Marie M.D.§; Negishi, Chiharu M.D.‡‡; Sessler, Daniel I. M.D.§§
Background: High concentrations of inspired oxygen are associated with pulmonary atelectasis but also provide recognized advantages. Consequently, the appropriate inspired oxygen concentration for general surgical use remains controversial. The authors tested the hypothesis that atelectasis and pulmonary dysfunction on the first postoperative day are comparable in patients given 30% or 80% perioperative oxygen.
Methods: Thirty patients aged 18–65 yr were anesthetized with isoflurane and randomly assigned to 30% or 80% oxygen during and for 2 h after colon resection. Chest radiographs and pulmonary function tests (forced vital capacity and forced expiratory volume) were obtained preoperatively and on the first postoperative day. Arterial blood gas measurements were obtained intraoperatively, after 2 h of recovery, and on the first postoperative day. Computed tomography scans of the chest were also obtained on the first postoperative day.
Results: Postoperative pulmonary mechanical function was significantly reduced compared with preoperative values, but there was no difference between the groups at either time. Arterial gas partial pressures and the alveolar–arterial oxygen difference were also comparable in the two groups. All preoperative chest radiographs were normal. Postoperative radiographs showed atelectasis in 36% of the patients in the 30%-oxygen group and in 44% of those in the 80%-oxygen group.

As previously mentioned, the safest place to manage the airway of a decompensating child is the operating room. A child will not cooperate with an awake intubation.
In the OR, a slow, careful induction with inhalation agent can be done in a controlled setting.
However, if in your judgement the patient is in danger of immediate death, and you can't wait for more specialized help, proceed with caution. Call for the tools to do cricothyroidotomy or tracheostomy, just in case you need them.
Make sure you have the equipment you need to use after you secure an invasive airway. For example, if your choice is to do percutaneous cricothyrotomy with a large-bore angiocath, you must be able to attach oxygen and ventilate through the angiocath. Make sure you have the jet ventilator hooked up and ready to go.
In the absence of a jet, have the correct adapters to hook up to an Ambu bag set up and ready to go. Don't wait until you have a catheter in the cricothyroid membrane before you ask for something to attach it to. A previous article discussed use of percutaneous jet ventilation.
If a jet ventilator is unavailable, there are several ways to connect the catheter to your ventilation system. The connector from a number 3 endotracheal tube fits snugly into the hub of any intravenous catheter. However, this tiny assembly is often difficult to hold while squeezing the bag. I prefer to place the connector from a number 7.5 endotracheal tube into the barrel of a 3 ml syringe.
The barrel of the syringe now mates to the hub of your catheter and gives you something more substantial to hold. You can also place an endotracheal tube within the barrel of a ten ml syringe and inflate the cuff to maintain the connection. You must ventilate vigorously to pass enough oxygen through the catheter.
Gas will escape through the mouth. You must allow the gas to escape and the patient to exhale; otherwise pneumothorax is a risk.
You can also attach the barrel from a tuberculin syringe to the catheter hub and connect this to oxygen tubing.
If the oxygen tubing can then be connected to the fresh gas outflow from an anesthesia machine, a "jet" can be jury-rigged.

- Adequate analgesia
- Early mobilization
- Lung expansion maneuvers that increase positive end-expiratory pressure (PEEP)
- Treatment of underlying condition

The risk of atelectasis after surgery can be reduced by prescribing opioids in doses that are sufficient for pain relief, as well as encouraging the use of incentive spirometry. At the same time, opioids should be used with caution due to their suppression of coughing. Smoking should be avoided 6–8 weeks prior to surgery.
We list the most important complications. The selection is not exhaustive.

While the child can still walk, regular walks are very important: walking benefits not only the muscles but also other functions of the body, particularly breathing. As the disease progresses, this function may also have to be maintained artificially; ventilators are used to prevent breathing from stopping during sleep.
Medication, unfortunately, cannot slow the progression of the disease, but it can help alleviate the patient's condition by making symptoms less severe. Corticosteroids help replenish energy, and β2-agonists support muscle strength.
Otherwise, doctors recommend maintaining physical activity for as long as the patient's time and condition allow. In addition to walks, sessions in the pool are very useful: swimming reduces the load on muscles and joints, because water supports the body's weight, and it improves blood circulation in the muscles.
Therapeutic exercises should be combined with supportive physiotherapy procedures to extend the functionality of the muscles and joints, so that the progressive muscular dystrophy will be less apparent.
The use of various orthopedic devices and accessories, from splints and braces to wheelchairs and strollers, helps muscular dystrophy patients meet some of their needs on their own and makes them more mobile. Splints are applied during sleep to avoid unwanted joint mobility during this time. In the late stages of the disease, special equipment is required to support normal breathing, from respiratory masks to a ventilator.

The scapula is also called the shoulder blade and sits on the back. In FSHD, progressive weakness in the muscles that hold the scapula against the back and attach it to the arm often leads to winging and eventually an inability to lift the arms. During the procedure the scapula is attached to the ribs that lie underneath it. This can improve the function of the shoulder.
There are other surgical techniques that may provide some help. However, surgery does not always result in significant improvement, and it's important that you have considered all the risks and benefits with your health professional before deciding on a surgical treatment strategy.
Other muscular dystrophies are associated with breathing difficulties because eventually the progressive weakness spreads to the muscles that control breathing. This is not a common complication of FSHD. However, regular monitoring of breathing function is recommended, especially as you may not realise you have poor breathing function.
Difficulty breathing means there is not enough oxygen getting to your tissues. This can exacerbate the negative effect of FSHD on your muscles.
Breathing problems during sleep are also not a common complication of FSHD, but they can happen.
Symptoms include disturbed sleep, morning headaches, daytime fatigue and sleepiness. It is relatively simple to diagnose and treat night-time breathing problems. Some people may respond well to position therapy, which helps them sleep in a position that limits any obstruction to breathing. Non-invasive ventilation is also an option.

Orthotics, walkers, crutches and the chair

When many people think about muscular dystrophy they think about people confined to a wheelchair. While it is true that many people with FSHD do use a wheelchair for mobility, the idea that they are confined to it is misleading. Some people may have lost the ability to walk and therefore need the chair to get from A to B. Others may use a chair for long distances to prevent fatigue. Some people may have significant foot drop and therefore use the chair because it allows them to get around while preventing the falls and fractures that might otherwise leave them completely reliant on the chair.

Awake tracheostomy is reserved for patients with facial fractures or other severe anomalies of airway anatomy that make securing the airway difficult and unsafe. Succinylcholine should be avoided in cases of neurogenic shock, as it can increase the risk of extreme bradycardia. Atropine should be administered beforehand if succinylcholine must be used.
Breathing is best managed by mechanical ventilation, with ventilation for SpO2 > 95% and ETCO2 = 35 mmHg. Spinal cord injuries involving C4 and cranial levels result in interruption of the descending bulbospinal respiratory pathways, resulting in respiratory ...

Depending on the severity of the facial anomalies, a careful evaluation of the airway must be conducted (clinical, radiographic). Renal function tests must include renal ultrasonography, intravenous pyelogram, routine electrolytes, urea, and creatinine.
Neurological tests include clinical examination, EEG, CT, and MRI.
Techniques should be tailored to the cardiac defect present and the procedure planned. Measures to prevent air embolism must be taken. Adequate intravascular hydration must be ensured. The presence of macroglossia and micrognathia may contribute to difficult laryngoscopy and tracheal intubation. A laryngeal mask should always be available. Positioning may be difficult if contracture has occurred from hypertonia. The risk of pressure necrosis must be considered.
Consider avoiding suxamethonium if there is evidence of delayed myelination in the CNS. Consider interactions between antiepileptic medication and anesthetic drugs. Endocarditis prophylaxis as indicated. Avoid muscle relaxants until the airway is secured.
Fujimoto A, Wilson MG, Towner JW: Familial inversion of chromosome no. 8: An affected child and a carrier fetus. Humangenetik
Gelb B, Towbin J, McCabe E, et al: San Luis Valley recombinant chromosome 8 and tetralogy of Fallot: A review of chromosome 8 anomalies and congenital heart disease. Am J Med Genet 40:471, 1991.

The board-certified and fellowship-trained neurologists with Norton Children's Neuroscience Institute, affiliated with the UofL School of Medicine, are the leading providers of care for children with neuromuscular disorders, including Duchenne muscular dystrophy (DMD), in Louisville, Kentucky, and Southern Indiana.
Norton Children's Hospital is the pediatric teaching hospital for the University of Louisville School of Medicine.
Our physicians are training the next generation of pediatric specialists.
We'll determine the severity of your child's DMD and create a treatment plan that minimizes risk, so your child can get back to being a kid.
Our multidisciplinary approach, in partnership with the Muscular Dystrophy Association (MDA), sees patients in a single clinic for multiple specialties, including neurology, pulmonology, orthopedics, physical therapy, occupational therapy and speech therapy.

What is DMD?

DMD is a genetic disorder that affects muscles and primarily occurs in boys. Symptoms typically start between age 3 and 5, with weakness that affects the hips and legs and results in abnormal walking. As the disease progresses, the weakness affects the upper body as well. That weakness may also affect the heart and respiratory muscles.
If our team suspects DMD, we will order specific genetic testing through a blood test. Genetic testing will determine if there are abnormalities in a gene associated with a muscle protein called dystrophin. Dystrophin is a vital part of the muscle fiber. When it is abnormal, muscle cells are easily damaged.
Our physicians may prescribe medications known as corticosteroids, which have been shown to be effective in slowing the course of DMD. Depending on the specific mutation found in the genetic testing, your child may be eligible for specific gene therapies.
As a child's muscles that aid in breathing become weaker, assistance with coughing and breathing may be necessary. Our team may prescribe noninvasive breathing devices, such as a CPAP (continuous positive airway pressure) or BiPAP (bilevel positive airway pressure) device.
Additional support may include placement of a tracheostomy (a surgically created opening in the windpipe) and use of a ventilator.
If tests show your child's heart is affected, our team may prescribe medications that slow the course of cardiac muscle deterioration.
As children with DMD lose their mobility, they may need orthotics, braces, walking aids or wheelchairs.

During the procedure, the patient had one episode of bronchospasm that was promptly reverted pharmacologically, with no complications in the postoperative period. The combination of less invasive anesthetic and ventilation techniques is easy to apply and may be useful in the perioperative management of patients with high anesthetic morbidity. Interaction between the clinical, surgical and anesthetic teams is very important in these cases to reduce the mortality associated with extensive procedures in severe patients.
Keywords: Pulmonary disease, chronic obstructive; Arthroplasty, replacement, hip; Respiration, artificial; Positive pressure respiration; Anesthesia, epidural; Human; Male; Aged; Case reports
Chronic obstructive pulmonary disease (COPD) is an increasingly prevalent condition in the overall population1 and is considered an independent risk factor for cardiopulmonary mortality and morbidity in the postoperative period.2 While general anesthesia has been associated with a higher risk of complications during and after the surgical procedure, regional anesthesia has the advantage of avoiding tracheal intubation and worsening of postoperative pulmonary function.3 With improvement of anesthetic techniques, of the drugs utilized, and of intra- and postoperative care, it became possible to reduce the morbidity and mortality of patients classically viewed as contraindicated for surgery.
Interaction between the clinical, surgical and anesthetic teams becomes essential in the management of such patients.
The objective of this study was to present a case of the application of noninvasive mechanical ventilation (NIMV), combined with regional (spinal) anesthesia, during hip arthroplasty in a patient with severe COPD.
A male patient, 81 years old and 75 kg, was admitted after a fall from standing height, presenting with pain, external rotation deformity and shortening of the lower right limb, and was diagnosed with a displaced femoral neck fracture, Garden IV (Figure 1), with indication for partial hip arthroplasty. He had severe COPD, with a respiratory function test (RFT) showing a forced expiratory volume in 1 second (FEV1) <20%, was on home oxygen therapy and corticosteroid therapy, and presented with dyspnea on minimal exertion.
Furthermore, the patient had undergone coronary angioplasty with placement of a drug-eluting stent two years earlier.

Yet the risk factors associated with the respiratory complications observed were: American Society of Anesthesiologists (ASA) score > 4, Shapiro score > 5, and decreased FEV1. The patient under study presented risk factors for the described complications, so much so that during the postoperative period he developed bronchospasm.
Whether the ventilatory mode used during hip arthroplasty alters the intra- and postoperative complications of these patients remains unknown.
Due to the restricted number of patients with COPD submitted to hip arthroplasty and the large variation in the severity of this disease, it is difficult to carry out a randomized study with a large population in order to evaluate the morbidity/mortality inherent to the ventilation mode.
Regional anesthesia is described as the preferential mode for patients with COPD and respiratory failure when compared to general anesthesia.2 Factors associated with general anesthesia, such as atelectasis, cephalic displacement of the diaphragm, and loss of respiratory stimulus, are attributed to this worsening.8 Although regional anesthesia is also associated with a decrease in respiratory function, such events are less frequent and less intense than with general anesthesia.
General anesthesia, in addition to being considered a risk factor for mortality in hip surgery, is associated with complications and worsening of the ventilation pattern intra- and postoperatively in patients with COPD.9 Although regional anesthesia is suitable for hip surgery, factors related to its use, such as positioning, surgical stress, sedation and neuromuscular block, may worsen the respiratory condition; hence the need for adequate intraoperative support in patients with previously poor pulmonary function, such as patients with COPD.10
The combination of regional anesthesia with NIMV during the intraoperative period in patients with COPD and respiratory failure was recently described in the literature,11-15 with few reports on hip arthroplasty.12-13
NIMV is a well-known method for decreasing mortality, improving respiratory distress and correcting blood gas disorders in exacerbations of COPD.16-17 During surgery, tracheal intubation may also be used as a supportive ventilatory mode in respiratory failure and in COPD.

At least 60% of surgical patients suffer from coexisting medical issues (Nierman & Zakrzewski 1999). Haematological, electrolyte and metabolic disturbance, respiratory support, analgesia, and bowel and urinary management in particular need to be addressed urgently prior to surgical treatment. Patients with a life expectancy of less than six weeks rarely gain benefit from major reconstructive surgery (BOA 2001). The burden of malignancy also increases the risk of thromboembolic events (deep vein thrombosis and pulmonary embolus), and therefore advice from haematologists should be sought or local guidelines strictly adhered to. A vena cava filter may also be required in patients with coexisting DVTs prior to surgery.
Meticulous preoperative planning and a multidisciplinary approach are vital prior to proceeding with surgery. An alternative strategy must be prepared by the surgeon in case the primary surgical plan fails. All surgical equipment must be sterilised, checked, and available prior to surgery (endoprostheses, cement, internal fixation instruments). In certain cases, the availability of general and vascular surgical teams should be checked.
It is mandatory for the operating room to be well prepared (monitoring equipment, sterility) before surgery. Extra care must be given to patient positioning to prevent decubiti and nerve palsies. The use of extra padding is often helpful. Patient warming prevents the risk of hypothermia in lengthy cases. Sequential compression devices, arterial and central venous lines, and urethral catheters are all placed for monitoring and continuous blood sampling. End-tidal CO2 is monitored during cementation as this can evoke a cardiopulmonary event. Spinal monitoring may also be required. Broad-spectrum antibiotics are administered on induction of anaesthesia, followed by two postoperative doses in routine cases.
Finally, the World Health Organisation (WHO) checklist should be performed in the operating room before surgery commences to minimise errors (Haynes et al 2009).\nPostoperatively, patients are managed on the surgical unit. Unstable patients are transferred to the high dependency or intensive care unit, where closer monitoring and a higher nurse-to-patient ratio are provided. Cardiopulmonary and haemodynamic monitoring provides a good indication of fluid balance, electrolytes and oxygenation.", "score": 27.05668377743209, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Patients with OSA may have an increased sensitivity to anesthetics or opioids, greater upper airway collapsibility, and increased risk of postoperative complications. For surgical patients with OSA, supplemental oxygen may be more acceptable than CPAP therapy, but three clinical concerns exist. First, hypoxemia may play a critical role in respiratory arousal in surgical patients with OSA. When supplemental oxygen abolishes hypoxemia, the apnea duration may increase, causing hypoventilation as evidenced by hypercarbia and leading to possibly life-threatening respiratory depression. Second, postoperative opioids may depress respiration centrally and impair the arousal threshold, causing arousal failure and possibly leading to sporadic cases of death. The third concern is that supplemental oxygen may mask the ability of oximetry to detect abnormalities in the level of ventilation.\nThe authors set out to investigate the effect of postoperative supplemental oxygen on Sao2, sleep respiratory events, and CO2 level in patients with untreated OSA.\nPostoperative supplemental oxygen improved oxygenation in surgical patients with OSA. Supplemental oxygen decreased AHI, hypopnea index, and central apnea index and shortened the longest apnea-hypopnea event duration. 
Although no overall difference was found between groups in PtcCO2 level, a significant increase of PtcCO2 was found in 11.4% of patients, especially those receiving oxygen on postoperative night 1. Postoperative supplemental oxygen could be used as an alternative therapy for patients with OSA not adherent to CPAP, newly diagnosed patients without adequate time to initiate CPAP therapy, or patients with suspected OSA. Additional monitoring of respiratory rate or PtcCO2, especially on postoperative night 1, is recommended. Further work is needed to identify OSA phenotypes which would benefit from postoperative supplemental oxygen and to identify which patients should be monitored for hypoventilation with respiratory rate or PtcCO2.\nSleep is integral to biologic function, and sleep disruption can result in both physiological and psychologic dysfunction including cognitive decline. The brain’s capacity to successfully respond to cognitive challenges through compensatory recruitment becomes overwhelmed if the patient is not presented with appropriate and continual sleep. Surgery activates the innate immune system, inducing neuroinflammatory changes that interfere with cognition.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-1", "d_text": "Relatively small amounts of pulmonary atelectasis (expressed as a percentage of total lung volume) were observed on the computed tomography scans, and the percentages (mean ± SD) did not differ significantly in the patients given 30% oxygen (2.5% ± 3.2%) or 80% oxygen (3.0% ± 1.8%). These data provided a 99% chance of detecting a 2% difference in atelectasis volume at an α level of 0.05.\nConclusions: Lung volumes, the incidence and severity of atelectasis, and alveolar gas exchange were comparable in patients given 30% and 80% perioperative oxygen. The authors conclude that administration of 80% oxygen in the perioperative period does not worsen lung function. 
Therefore, patients who may benefit from generous oxygen partial pressures should not be denied supplemental perioperative oxygen for fear of causing atelectasis.\nTHE major complication associated with brief periods of oxygen administration is pulmonary atelectasis. Concern about atelectasis is appropriate because it occurs in up to 85% of patients undergoing lower abdominal surgery and is thought to be an important cause of morbidity. 1–3\nTwo mechanisms contribute to perioperative atelectasis: compression and absorption. Compression results from cephalad displacement of diaphragm, decreased compliance, and reduced functional residual capacity. 4–6\nTo some extent, these factors are present with any anesthetic technique. In contrast, absorption is defined by uptake of oxygen from isolated alveoli and results from administration of high oxygen partial pressures. Administration of 100% oxygen, even for a few minutes, causes significant postoperative atelectasis via\nthis mechanism. 2,7,8\nThere is little doubt that high intraoperative oxygen concentrations produce atelectasis, at least in the immediate postoperative period. 7–10\nHowever, the extent to which this atelectasis impairs pulmonary function and gas exchange remains controversial. 
2,7–12\nAn additional complication associated with postoperative atelectasis is fever, which often prompts diagnostic evaluation or therapeutic intervention for potential infectious causes.\nIntraoperative ventilation with high partial pressures of oxygen provides clinicians additional time to diagnose and treat inadvertent extubation, laryngospasm, and breathing‐circuit disconnections.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-5", "d_text": "Although it is not associated with an increase in mortality in patients with COPD admitted to the ICU,18 tracheal intubation may entail spirometric worsening postoperatively in healthy patients.19 However, it is not known whether this worsening observed in the spirometric pattern of healthy patients also takes place in patients with COPD, and whether it may contribute to increased mortality or postoperative cost when compared to patients who have used NIMV intraoperatively.\nManagement of an obese patient with severe COPD submitted to hip arthroplasty under spinal anesthesia and BIPAP was previously described.12 Risks of general anesthesia with tracheal intubation in this population are reported, with compliance with the method as the essential problem. In the case reported, previous knowledge of the patient's tolerance of NIMV facilitated its use. The patient of that study was discharged 7 days after surgery, while the patient in this case remained in the ICU for 10 days. It is noteworthy that although the intensive care length of stay was longer for our patient, he showed more severity factors at surgery (lower FEV1, older age, obesity).\nNIMV has also been reported in patients with acute ventilation impairment. A case of acutely worsened chronic respiratory failure managed with spinal anesthesia and NIMV support, avoiding general anesthesia, was reported.13 Warren et al. 
described the case of a patient with myasthenia gravis and acute respiratory failure submitted to obstetric surgery under peridural anesthesia using noninvasive ventilation support.14 The position required for the surgical procedure often involves undesirable and intolerable ventilatory alterations in patients with pulmonary disease.10 An English group described the use of NIMV in a severe COPD patient submitted to resection of a carcinoma of the rectum under spinal anesthesia in the lithotomy position.11 NIMV allowed the patient to tolerate the respiratory restriction imposed by the lithotomy position, avoiding general anesthesia.\nIn view of this evidence, NIMV has been evaluated in patients with chronic and acute ventilation impairment. However, other groups of patients and surgical procedures may benefit from this intraoperative ventilation mode.\nNIMV was also described by a Japanese group for patients submitted to craniotomy for cerebral mapping.16 NIMV allowed for a sufficient anesthetic depth during bone opening and closure, total awareness during mapping, smooth transition between anesthesia and consciousness, suitable ventilation and immobility with comfort for the patient.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-3", "d_text": "However, unless the patient is connected to a ventilator, or has a tight-fitting face mask, the appliance used to deliver oxygen to the face will always result in an inhaled FIO2 less than 100%.\n6. The body does not store oxygen.\nAthletes who inhale a few minutes of oxygen, and then return to the playing field, are not benefitted in any physiologic way. If a patient needs supplemental oxygen it should be for a specific physiologic need, e.g., hypoxemia during sleep or exercise, or even continuously (24 hours a day) as in some patients with severe, chronic lung disease.\n7. Supplemental O2 is an FIO2 > 21%.\nSupplemental oxygen means an FIO2 greater than the 21% oxygen in room (ambient) air. 
When you give supplemental oxygen you are raising the patient's inhaled FIO2 to something over 21%; the highest FIO2 possible is 100%. To give more oxygen requires a hyperbaric chamber, an expensive piece of equipment found in relatively few hospitals.\n8. Supplemental oxygen is a drug.\nLike any other drug, it has indications, contra-indications, and side effects. Unlike the situation with most drugs, there are also easily measurable \"levels\" of oxygen (either PaO2 with a blood gas measurement or SaO2 with a pulse oximeter).\n9. Supplemental oxygen is the most commonly prescribed drug\nAn estimated 1/4 to 1/3 of all patients admitted to a hospital will receive supplemental oxygen at some point. It is the only prescription drug in common use on all inpatient services (except perhaps Psychiatry).\n10. A reduced PaO2 is a non-specific finding.\nIt can occur from any parenchymal lung problem, and only signifies a disturbance of gas exchange (usually due to V/Q imbalance). A low PaO2 should not be used to make any particular diagnosis, including pulmonary embolism.\n11. A normal PaO2 and Alveolar-arterial PO2 difference (A-a\ngradient) do not rule out pulmonary embolism.\nAbout 5% of confirmed cases of PE manifest a normal A-a gradient.\n12. High FIO2 doesn't affect COPD hypoxic drive.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-3", "d_text": "Oxygenation during One Lung Ventilation (OLV) depends not only on the magnitude of shunt fraction but also on the oxygenation of the shunted blood . Thus, factors leading to a decrease in the oxygenation of the shunted (venous) blood (states of increased oxygen extraction, low cardiac output, low hemoglobin levels) compromise the ‘buffering’ capacity against tissue hypoxia. 
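The A-a gradient mentioned in points 10–11 above comes from the standard alveolar gas equation, PAO2 = FiO2 × (Pb − PH2O) − PaCO2/R. A minimal sketch (the function names are ours; sea-level barometric pressure and the usual textbook defaults for water vapour pressure, PaCO2 and the respiratory quotient are assumed):

```python
def alveolar_po2(fio2, pb=760.0, ph2o=47.0, paco2=40.0, rq=0.8):
    """Alveolar gas equation: PAO2 = FiO2 * (Pb - PH2O) - PaCO2 / RQ (mm Hg)."""
    return fio2 * (pb - ph2o) - paco2 / rq

def aa_gradient(pao2, fio2, **kwargs):
    """Alveolar-arterial PO2 difference: ideal alveolar PO2 minus measured PaO2."""
    return alveolar_po2(fio2, **kwargs) - pao2

# On room air (FiO2 0.21), PAO2 = 0.21 * 713 - 50 ≈ 99.7 mm Hg,
# so a measured PaO2 of 95 mm Hg gives an A-a gradient of ≈ 4.7 mm Hg.
```

Note how raising FiO2 raises the expected alveolar PO2, which is why an A-a gradient is only meaningful when interpreted against the inspired oxygen fraction.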
Under limited circumstances, pursuing increases in cardiac output may benefit arterial oxygenation during one-lung ventilation; for example, SpO2 less than 90% may be tolerated poorly in anaemic patients with right-to-left cardiac shunts. Furthermore, low hemoglobin concentrations can increase shunt fraction and decrease oxygenation [29,30]. Therefore, increasing cardiac output and normalising haemoglobin concentration remain the main clinical interventions that can enhance oxygen delivery to tissues in this situation despite low SpO2 readings. However, this approach is not a panacea and does not obviate the necessity to optimize dependent lung volume.\nTherefore, in venous admixture situations, attention to cardiac output, oxygen expenditure, venous saturation, and haemoglobin levels is needed to improve tissue oxygenation.\nIn this era of 21st century medicine, we need additional considerations to cope with increasing surgical complexity. For example, the increased recognition of the advantages of Video Assisted Thoracoscopic Surgery (VATS) in children has reiterated the need for efficient one-lung ventilation. During one-lung ventilation for VATS, there are additional factors that need focus: the use of positive pressure in the pleural space; mechanical shift of the mediastinum; the effects of added CO2 upon vascular adaptive responses to hypoxia and the oxygen dissociation curve; and increased V/Q mismatch (due to decreased functional residual capacity and tidal volume resulting from general anesthesia, suboptimal patient positioning, surgical retraction and mechanical ventilation).\nA reliably measured SpO2 of 92% is considered the lowest clinically acceptable level by established norms of clinical practice at any age, with an exception of 88% as the lowest in chronic lung disease. 
This is despite the fact that a SpO2 below 92% has not been proven to be directly associated with tissue hypoxia, nor has a SpO2 above 92% been proven to exclude it.", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "Prospective, randomized trial comparing fluids and dobutamine optimization of oxygen delivery in high-risk surgical patients [ISRCTN42445141]\nDivision of Critical Care Medicine, Departments of Internal Medicine, Anesthesiology and Surgery, Medical School – FUNFARME and Hospital de Base, São José do Rio Preto, São Paulo, Brazil\nCritical Care 2006, 10:R72 doi:10.1186/cc4913. Published: 12 May 2006\nPreventing perioperative tissue oxygen debt contributes to a better postoperative recovery. Whether the beneficial effects of fluids and inotropes during optimization of the oxygen delivery index (DO2I) in high-risk patients submitted to major surgeries are due to fluids, to inotropes, or to the combination of the two is not known. We aimed to investigate the effect of DO2I optimization with fluids or with fluids and dobutamine on the 60-day hospital mortality and incidence of complications.\nA randomized and controlled trial was performed in 50 high-risk patients (elderly with coexistent pathologies) undergoing major elective surgery. Therapy consisted of pulmonary artery catheter-guided hemodynamic optimization during the operation and 24 hours postoperatively using either fluids alone (n = 25) or fluids and dobutamine (n = 25), aiming to achieve supranormal values (DO2I > 600 ml/minute/m2).\nCardiovascular depression was an important component in the perioperative period in this group of patients. Cardiovascular complications in the postoperative period occurred significantly more frequently in the volume group (13/25, 52%) than in the dobutamine group (4/25, 16%) (relative risk, 3.25; 95% confidence interval, 1.22–8.60; P < 0.05). 
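The relative risk and 95% confidence interval reported above can be reproduced from the raw counts (13/25 vs 4/25) using the standard Katz log method; a sketch (the helper name is ours):

```python
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """Relative risk of event a/n1 (exposed) vs b/n2 (control),
    with a 95% CI from the Katz log method."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Cardiovascular complications: 13/25 (volume) vs 4/25 (dobutamine)
rr, lo, hi = relative_risk(13, 25, 4, 25)
# rr = 3.25, CI roughly 1.2-8.6, matching the reported 3.25 (1.22-8.60)
```

The wide interval reflects the small group sizes; with only 25 patients per arm, even a three-fold relative risk barely excludes 1.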
The 60-day mortality rates were 28% in the volume group and 8% in the dobutamine group (relative risk, 3.00; 95% confidence interval, 0.67–13.46; not significant).\nIn patients with high risk of perioperative death, pulmonary artery catheter-guided hemodynamic optimization using dobutamine determines better outcomes, whereas fluids alone increase the incidence of postoperative complications.", "score": 26.44680947864507, "rank": 41}, {"document_id": "doc-::chunk-10", "d_text": "No interaction was found between BMI quartile and the effect of randomized group on incidence of major complications (P = 0.34, Breslow–Day test, Table 5) nor did the effect of supplemental oxygen on major complications depend on type of surgery (P = 0.44, Breslow–Day test). Furthermore, there was no supplemental oxygen effect within laparoscopic surgery (RR: 1.06 [0.54–2.07]) or open Roux-en-Y gastric bypass surgery (RR: 0.67 [0.20–2.29], Table 6).\nWithin open Roux-en-Y gastric bypass surgery, the relative risks between the supplemental group and the 30% group were not different between the main trial and the pilot trial (P = 0.66, Breslow–Day test, Table 7). Since the number of complications was so low, we did not perform subanalysis of the data per surgical duration.\nAll surgical wounds become contaminated. What determines whether inevitable contamination progresses to clinical infection is largely the adequacy of host defense. The primary host defense against surgical pathogens is oxidative killing by bacteria, a process that depends on the partial pressure of oxygen over the entire range of physiologic values. Superoxide radical production is necessary for host defense and correlates directly with the inspired oxygen concentration.\nObese patients having laparoscopic surgery require more inspired oxygen to produce similar arterial oxygen partial pressures than lean individuals. 
They also have significantly lower subcutaneous oxygen tensions (36–41 vs 57 mm Hg).27,49 Supplemental inspired oxygen (80%) significantly increases subcutaneous oxygenation in the upper arm in morbidly obese patients: 58 vs 43 mm Hg. Tissue oxygenation progressively increases with supplemental oxygen to a maximal difference of about 40 mm Hg after 13 postoperative hours (94 vs 52 mm Hg).", "score": 25.75004223792715, "rank": 42}, {"document_id": "doc-::chunk-4", "d_text": "This provides 24% to 30% inspired oxygen in most patients.35 When patients used a continuous positive expiratory pressure (CPAP) machine at home and could not maintain oxygen saturation ≥90% with the nasal cannula alone, they were switched to their CPAP machines at an inspired oxygen concentration (FIO2) to 30%. FIO2 was maintained at 30% in patients who remained intubated. Additional oxygen was given as necessary to maintain oxygen saturation ≥90%.\n2. Supplemental oxygen administration: After extubation, patients were given 10 L/min of oxygen via a nonrebreathing Hi-Ox mask (Viasys Healthcare, Inc., Yorba Linda, CA), a valved manifold system. An oxygen flow of 5 L/min produces a supplemental inspired concentration of approximately 80%, even at a minute ventilation of 12 L/min, which few patients exceed.36,37 When patients used a CPAP machine at home and could not maintain oxygen saturation ≥90% with the Hi-Ox mask, they were switched to their CPAP machines at an inspired oxygen concentration of 80%. In patients who experienced claustrophobia or severe discomfort from the Hi-Ox mask, a venturi style dual-dial mask was substituted at an oxygen flow rate of 15 L/min. This mask delivers nebulized oxygen, which makes it more comfortable for patients and delivers approximately 60% inspired oxygen. FIO2 was maintained at 0.8 in patients who remained intubated.\nThe oxygen flowmeters were concealed from blinded surgeons and nursing staff. 
The nurses were instructed not to change the oxygen settings until the first postoperative morning, unless clinically indicated (oxygen saturation <90%).\nThe designated oxygen management was maintained until the first postoperative morning. Patients wore oxygen masks or nasal prongs while in bed but were allowed to remove them briefly as necessary (e.g., to visit the bathroom). To encourage compliance with the randomized oxygen assignment, an investigator visited patients when they first arrived on the surgical ward from the recovery room, the evening of surgery, and the first postoperative morning.\nDemographic characteristics of patients were tabulated. We also recorded preoperative laboratory values (including plasma glucose concentration), smoking history, and American Society of Anesthesiologists Physical Status rating.\nFluids administered, estimated blood loss, urine output, and amount of opioids used were recorded daily throughout hospitalization.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-2", "d_text": "13,14\nOxygen per se\nalso seems to be antiemetic, 15\nwhich is of substantial clinical importance because uncontrollable nausea and vomiting remains the leading cause of unanticipated hospital admission after planned ambulatory surgery. 16,17\nEven patients with normal arterial oxygen saturation may experience regions of inadequate tissue perfusion as result of extreme positioning, surgical retractors, and operative disruption of blood vessels. 18–20\nHigh arterial oxygen partial pressures help limit hypoxia in these marginally perfused tissues. High inspired oxygen concentrations may similarly help oxygenate penumbra tissues surrounding the core of a stroke. 21,22\nInspired oxygen concentration is a major determinant of tissue oxygen tension, 23\nwhich, in turn, is highly correlated with oxidative killing of bacteria by neutrophils over the range of observed values. 
24\nOxidative killing is the major defense against surgical wound infection. 25\nTherefore, it is not surprising that tissue oxygen tension is highly correlated with the risk of surgical wound infection. 26\nCollagen deposition and scar formation are also directly dependent on tissue oxygen tension. 27–29\nThese data suggest that perioperative administration of high inspired oxygen concentrations may facilitate wound healing and improve resistance to surgical wound infections.\nAvailable data thus indicate that high concentrations of inspired oxygen are associated with atelectasis but also provide recognized advantages. Consequently, the appropriate inspired oxygen concentration for general surgical use remains controversial. Concern about atelectasis prompts many clinicians to forego the benefits of generous inspired oxygen concentrations. Accordingly, we tested the hypothesis that atelectasis and pulmonary dysfunction on the first postoperative day are comparable in patients given 30% or 80% inspired oxygen during and for 2 h after colon resection.\nThe sample size for this study was based on atelectasis rates cited in previous studies. 1,11\nThese data suggested that a power of 95% for detecting a 50% difference in atelectasis at an α level of 0.05 would be obtained with 13 patients in each group. With approval of the ethics committee of the University of Vienna, we studied 30 patients aged 18–65 yr who were scheduled to undergo elective colon resection. These patients were among the 500 who participated in the multicenter Study of Oxygen and Surgical Wound Infection. The procedure was identical for all patients.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-2", "d_text": "Spinal anaesthesia was chosen after an extensive discussion of the risks and benefits involved. On arrival in the operating room, non-invasive blood pressure was 130/85 mm Hg, heart rate 80 beats min-1, and oxygen saturation 98%. 
Following the rapid administration of 500 ml of Ringer's solution intravenously, spinal anaesthesia was performed at the L4-L5 interspace using a 25-G Quincke type needle. Bupivacaine hydrochloride (2 ml; Marcaine(R) 0.5% spinal, AstraZeneca) was administered. Surgery was started after achieving anaesthesia at the T10 dermatome. Postoperatively, his neurologic status remained normal with no worsening of his dysphagia or extremity weaknesses. He was discharged from hospital without any neurologic complaint on the sixth postoperative day. CONCLUSION: Kennedy's disease typically presents as muscular atrophy, weakness, and fasciculations predominantly of bulbar, facial, and proximal muscles of the extremities. When a patient with Kennedy's disease is scheduled to undergo a procedure requiring anaesthesia, anaesthesiologists should carefully assess the patient's preoperative status and respiratory function, and inquire about swallowing difficulties or a history of intolerance to any general or local anaesthetic agent. Considerations for general anaesthesia concern possible prolonged neuromuscular blockade and consequent postoperative muscle weakness and a compromised baseline pulmonary function.1 Many inhalation agents have been reported to be well tolerated by patients with lower motor neuron disease.2 However, it is not clear whether neuromuscular blocking agents prolong neuromuscular blockade. Depolarizing muscle relaxants, such as succinylcholine, are not recommended. Moreover, the possibility of hyperkalemia and resultant ventricular arrhythmia or fibrillation has been reported in patients with neuromuscular and lower motor neuron diseases by several investigators.3 If muscle relaxation is required, non-depolarizing neuromuscular blocking agents should be considered, and neuromuscular transmission should be monitored closely.\nAs Robbie2 indicates below, there is additional information on this subject at: KDA Surgery Concerns. 
Davey - My impression is that the Dentist has several options as to how they induce numbness.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-2", "d_text": "at 1L/min it delivers 24% (20% is already in the air), at 2L/min it delivers 28%, 3L/min 32%…and so on, so at 6L/min it delivers 44%\n- A simple face mask should be used at a minimum of 6L/min (normal minute ventilation) to prevent the patient from breathing back their own CO2.\n- Application of a self-inflating Bag-Valve-Mask on a patient’s face without compressing the bag is called suffocation. Due to the valve system, oxygen is only delivered ON COMPRESSING THE BAG.\n- Pulse oximeters consist of two light-emitting diodes, one in the red range and one in the infrared range, and a detector. Oxygenated and deoxygenated haemoglobin absorb light at different wavelengths differently. Deoxygenated or ‘‘blue’’ blood absorbs light maximally in the red band, whereas oxygenated or ‘‘red’’ blood absorbs light maximally in the infrared band. The ratio of absorption of the two wavelengths of light is then compared with an algorithm in the microprocessor generated by empirically measuring the absorption in healthy volunteers at varying degrees of directly measured arterial oxygen saturation. 
The displayed value is usually an average based on the previous 3 to 6 seconds of recording.\n- Nonhypoxic heart attack victims treated with oxygen endure 25 to 30% more heart damage than patients not given oxygen.\n- Oxygen supplementation to nonhypoxic patients with mild or moderate strokes may increase mortality.\n- High-dose oxygen therapy to produce hyperoxaemia (above-normal oxygen saturation) can cause absorption atelectasis.\n- Oxygen is liberally administered to many critically ill patients, thereby exposing them to supranormal arterial oxygen levels.\n- Hyperoxia also results in the formation of reactive oxygen species, which adversely affect the pulmonary, vascular, central nervous, and immune systems.\n- Though the optimal PaO2 remains unknown, recent evidence indicates that hyperoxia is associated with increased mortality in post-cardiac arrest, CVA, acute coronary syndrome, and traumatic brain injury patients.\n- Take Home Point: Carefully titrate oxygen to the lowest tolerable level to meet the patient’s needs.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-13", "d_text": "Infected wounds had 1.2 ± 0.4 cm greater fat thickness than noninfected wounds.55 Another study of 608 patients having digestive tract surgery reported that, after multivariable analysis, obese patients (BMI >30 kg/m2) had an adjusted odds ratio for surgical site infection of 4.8 (95% CI, 2.95–7.81).56\nFleischmann et al.49 and Kabon et al.27 demonstrated that obese patients need a greater FIO2 than nonobese patients to reach the same arterial oxygen partial pressure. And finally, obese patients had lower tissue oxygen partial pressures in both the upper arm and near the incision, even when oxygen administration was adjusted to provide comparable arterial oxygen partial pressures. Nonetheless, supplemental postoperative oxygen did not reduce the risk of infection or a composite of major complications plausibly related to infection or wound healing. 
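The nasal cannula figures quoted in the list above (24% at 1 L/min up to 44% at 6 L/min) follow a simple rule of thumb: roughly 4% extra FiO2 per L/min on top of the ~20% already in room air. A sketch (the function name and the 1–6 L/min guard are ours):

```python
def nasal_cannula_fio2(flow_l_min):
    """Estimated FiO2 (%) for a nasal cannula: ~20% room air + ~4% per L/min."""
    if not 1 <= flow_l_min <= 6:
        raise ValueError("nasal cannulae are typically run at 1-6 L/min")
    return 20 + 4 * flow_l_min

# 1 L/min -> 24%, 2 -> 28%, 3 -> 32%, 6 -> 44%, matching the figures above
```

This is only an estimate: the actual inspired fraction varies with the patient's minute ventilation and breathing pattern, which is why the rule is capped at the flows a cannula can usefully deliver.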
Supplemental postoperative oxygen thus seems unlikely to prove beneficial in nonobese subjects also, although the theory would be well worth testing in surgical populations at special risk of infection such as colorectal surgery.\nWe were unable to precisely control inspired oxygen concentration in patients assigned to supplemental oxygen. Patients randomized to supplemental postoperative oxygen thus received between 65% and 95% inspired oxygen, depending on their minute ventilation and ability to tolerate a sealed mask postoperatively. Nonetheless, in a previously published substudy, we showed that supplemental oxygen substantially increases subcutaneous oxygenation in the arm and adjacent to the surgical incision, suggesting that our administration methods were effective.49\nIn summary, the composite risk of wound infection and major complications related to infection or wound healing was similar in gastric bypass patients who were randomly assigned to approximately 30% or approximately 80% inspired oxygen administered from tracheal extubation through the first postoperative morning. Supplemental postoperative oxygen does not appear to be beneficial in this population.\nAPPENDIX. SUPPLEMENTAL POSTOPERATIVE OXYGEN TRIAL (SPOT) INVESTIGATORS\nSPOT investigators from the University of Louisville included Anupama Wadhwa, MD, Mukadder Orhan Sungur, MD, Ryu Komatsu, MD, Ozan Akça, MD, Jorge Rodriguez, MD, and Raghavendra Govinda, MD.", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-1", "d_text": "Such patients should be evaluated with an arterial blood gas and a sleep study.\nClassic symptoms of sleep disordered breathing include excessive daytime sleepiness, morning headache, and restless sleep, but insomnia may also be seen. In addition:\n●Some patients with OSA may awaken with choking or shortness of breath\n- Perrin C, Unterborn JN, Ambrosio CD, Hill NS. Pulmonary complications of chronic neuromuscular diseases and their management. 
Muscle Nerve 2004; 29:5.\n- Shneerson JM. Respiration during sleep in neuromuscular and thoracic cage disorders. Monaldi Arch Chest Dis 2004; 61:44.\n- Bourke SC, Gibson GJ. Sleep and breathing in neuromuscular disease. Eur Respir J 2002; 19:1194.\n- Guilleminault C, Shergill RP. Sleep-disordered Breathing in Neuromuscular Disease. Curr Treat Options Neurol 2002; 4:107.\n- Krachman SL, Criner GJ. Sleep and long-term ventilation. Respir Care Clin N Am 2002; 8:611.\n- Cirignotta F, Mondini S, Zucconi M, et al. Sleep-related breathing impairment in myotonic dystrophy. J Neurol 1987; 235:80.\n- Smith PE, Edwards RH, Calverley PM. Ventilation and breathing pattern during sleep in Duchenne muscular dystrophy. Chest 1989; 96:1346.\n- Smith PE, Calverley PM, Edwards RH, et al. Practical problems in the respiratory care of patients with muscular dystrophy. N Engl J Med 1987; 316:1197.\n- Dolmage TE, Avendano MA, Goldstein RS. Respiratory function during wakefulness and sleep among survivors of respiratory and non-respiratory poliomyelitis. Eur Respir J 1992; 5:864.\n- Steljes DG, Kryger MH, Kirk BW, Millar TW. Sleep in postpolio syndrome. Chest 1990; 98:133.\n- Ellis ER, Grunstein RR, Chan S, et al. 
Noninvasive ventilatory support during sleep improves respiratory failure in kyphoscoliosis.", "score": 25.257089748720137, "rank": 48}, {"document_id": "doc-::chunk-16", "d_text": "is it always a good idea?British Journal of Anaesthesia\n13Th International Conference on Electrical Bioimpedance and the 8Th Conference on Electrical Impedance Tomography 2007\nAnalysis of ventilatory conditions under different inspiratory oxygen concentrations and positive end-expiratory pressure levels by EIT\n13Th International Conference on Electrical Bioimpedance and the 8Th Conference on Electrical Impedance Tomography 2007, 17():\nNew England Journal of Medicine\nSupplemental perioperative oxygen to reduce the incidence of surgical-wound infection\nNew England Journal of Medicine, 342(3):\nAnesthesia and AnalgesiaThe effect of increased FIO2 before tracheal extubation on postoperative atelectasisAnesthesia and Analgesia\nHerniaThe effect of supplemental 70% oxygen on postoperative nausea and vomiting in patients undergoing inguinal hernia surgeryHernia\nAnaesthesistEmergency treatment of thoracic traumaAnaesthesist\nExpert Review of Anti-Infective TherapyControversies in host defense against surgical site infectionExpert Review of Anti-Infective Therapy\nAnnales Francaises D Anesthesie Et De ReanimationPreoxygenation and upper airway patency controlAnnales Francaises D Anesthesie Et De Reanimation\nCan anaesthetic management influence surgical-wound healing?\nClinical Infectious Diseases\nNonpharmacological prevention of surgical wound infections\nClinical Infectious Diseases, 35():\nAnesthesia and AnalgesiaThe influence of protocol pain and risk on patients' willingness to consent for clinical studies: A randomized trialAnesthesia and Analgesia\nPerioperative management and monitoring in anaesthesia\nAnz Journal of SurgeryPerioperative high-dose oxygen therapy in vascular surgeryAnz Journal of Surgery\nArchives of Surgery\nPerioperative Supplemental Oxygen Therapy and Surgical 
Site Infection A Meta-analysis of Randomized Controlled Trials\nArchives of Surgery, 144(4):\nWound Repair and Regeneration\nSupplemental perioperative oxygen and fluids to improve surgical wound outcomes: Translating evidence into practice\nWound Repair and Regeneration,", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-15", "d_text": "Acta Anaesthesiologica ScandinavicaSupplemental 80% oxygen does not attenuate postoperative nausea and vomiting after breast surgeryActa Anaesthesiologica Scandinavica\nAnasthesiologie Intensivmedizin Notfallmedizin SchmerztherapiePulmonary gas exchange: Classical and modern findingsAnasthesiologie Intensivmedizin Notfallmedizin Schmerztherapie\nAnaesthesistOxygen - An effective supplement of perioperative antibiotic treatment?Anaesthesist\nCritical Care ClinicsPerioperative anesthesia issues in the elderlyCritical Care Clinics\nAnasthesiologie & Intensivmedizin\nOxygen as carrier gas in general anoesthesic\nAnasthesiologie & Intensivmedizin, 45(3):\nAnesthesia and AnalgesiaThe influence of allogeneic red blood cell transfusion compared with 100% oxygen ventilation on systemic oxygen transport and skeletal muscle oxygen tension after cardiac surgeryAnesthesia and Analgesia\nArchives of Surgery\nPerioperative Supplemental Oxygen Therapy and Surgical Site Infection A Meta-analysis of Randomized Controlled Trials INVITED CRITIQUE\nArchives of Surgery, 144(4):\nThe effect of auricular acupuncture on anaesthesia with desflurane\nAnesthesia and AnalgesiaDosing oxygen: A tricky matter or a piece of cake?Anesthesia and Analgesia\nEuropean Journal of Cancer Care\nNausea and vomiting after surgery for colorectal cancer\nEuropean Journal of Cancer Care, 9(3):\nCanadian Journal of Anaesthesia-Journal Canadien D Anesthesie\nBest evidence in anesthetic practice - Prevention: supplemental oxygen reduces the incidence of surgical-wound infection - Commentary\nCanadian Journal of Anaesthesia-Journal Canadien D Anesthesie, 
48(9):
British Journal of AnaesthesiaNew concepts of atelectasis during general anaesthesiaBritish Journal of Anaesthesia
The management of malignant large bowel obstruction: ACPGBI position statement
Colorectal Disease, 9():
Journal of Applied PhysiologyHypoxic pulmonary vasoconstriction does not contribute to pulmonary blood flow heterogeneity in normoxia in normal supine humansJournal of Applied Physiology
Jama-Journal of the American Medical Association
Supplemental oxygen and risk of surgical wound infection - Reply
Jama-Journal of the American Medical Association, 295():
British Journal of AnaesthesiaJust a little oxygen to breathe as you go off to sleep.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-1", "d_text": "Creatine kinase levels rise to 20,000 or more within 12 to 24 hours, putting the patient at risk for myoglobinuric renal failure. Isolated myoglobinuria in the early postoperative period should make the anesthesiologist and surgical team suspicious for malignant hyperthermia. Masseter muscle rigidity shortly after the administration of succinylcholine has also been associated with MH.
Malignant hyperthermia treatment is simple. With early diagnosis and appropriate treatment, the mortality rate approaches zero. If malignant hyperthermia is suspected, all triggering agents should be discontinued immediately. In the operating room, the patient should be ventilated with 100% oxygen at a flow of 10 L/minute. If a general anesthetic must be continued, it is safe to administer barbiturates, benzodiazepines, opioids, and propofol.
The patient should be intubated as soon as possible. Cardiac dysrhythmias, hyperkalemia, acidosis, and other medical problems should be managed appropriately. The patient should be given dantrolene as soon as possible. Dantrolene acts by inhibiting the release of calcium from the sarcoplasmic reticulum.
The initial dose is a 2.5 mg/kg intravenous (IV) bolus followed by 1 mg/kg IV every 6 hours for at least 24 hours.
Dantrolene’s side effects include nausea, phlebitis, and weakness for approximately 24 hours after the drug is discontinued. Each vial of dantrolene contains 20 mg of dantrolene and 3 mg of mannitol and must be mixed with 60 mL of sterile water. The initial dose for a 70-kg adult is 175 mg or 9 vials. Assistance should be sought in mixing the dantrolene as it is poorly soluble.
Dantrolene 2.5 mg/kg should be given every 5 to 10 minutes until there is a fall in heart rate, normal cardiac rhythm, a reduction in muscle tone, and a decline in body temperature. Maintenance of normal volume status is essential in the setting of rhabdomyolysis. Hyperkalemia may require treatment with insulin and glucose. Caution should be exercised with potassium-losing diuretics as hypovolemia should be avoided.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-1", "d_text": "If the cardiac output is doubled in this situation, the oxygen delivery to tissues can be increased by well over 60% without any change in SpO2. Nevertheless, hypoxemia (low PaO2) is the most common cause of tissue hypoxia, hence the need to support acute reductions in SpO2 with oxygen therapy until the cause is resolved.
Despite the poor association between tissue hypoxia and SpO2, patients with chronic lung disease are strongly advised to use oxygen if their SpO2 drops below 88% during exercise. Yet athletes are subjected to high intensity and high volume training to promote enhanced performance without additional oxygen. In this context, prompting patients with chronic lung disease to use oxygen if their SpO2 falls below 88% during exercise is unfair, especially in the absence of credible evidence to claim that SpO2 less than 88% during exercise is damaging.
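The dantrolene arithmetic above (2.5 mg/kg initial bolus, 20 mg of dantrolene per vial, hence 175 mg or 9 vials for a 70-kg adult) can be checked with a quick bedside calculation; the function names here are illustrative, not from any cited protocol:

```python
import math

DANTROLENE_MG_PER_VIAL = 20  # each vial: 20 mg dantrolene + 3 mg mannitol in 60 mL sterile water

def initial_dantrolene_dose_mg(weight_kg, dose_mg_per_kg=2.5):
    """Initial IV bolus: 2.5 mg/kg (repeated every 5-10 min until signs resolve)."""
    return weight_kg * dose_mg_per_kg

def vials_needed(dose_mg):
    """Round up: a partial vial still has to be reconstituted in full."""
    return math.ceil(dose_mg / DANTROLENE_MG_PER_VIAL)

dose = initial_dantrolene_dose_mg(70)
print(dose, vials_needed(dose))  # 175.0 9
```

The ceiling division is the practical point: 175 mg / 20 mg per vial is 8.75, so 9 vials must be mixed, which is why the text stresses seeking assistance with reconstitution.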
Although submaximal oxygen saturation during exercise has not led to any untoward cardiac events in lung disease, empirical use of oxygen may prevent the triggering of natural mechanisms for tolerance or recovery. This kind of patient advice based on ‘logical’ thinking in the absence of sufficient evidence could therefore bring needless harm whilst adding unnecessary expense to health care services.
Indeed, the evidence for the harmful effects of oxygen is becoming more widely recognised. One example is the withdrawal of oxygen use for neonatal resuscitation, where decades ago, use of oxygen was the norm. This practice is now considered harmful [13,14]. Similarly, evidence against the use of oxygen in adult CPR is emerging. On the contrary, it should be noted that long term use of oxygen for symptomatic relief has benefited selected patients with hepatopulmonary syndrome, a situation of hypoxemia in chronic liver disease resulting from pulmonary vascular dilation characterised by anatomical shunting.
There are a number of changes in the heart and pulmonary circulation occurring in humans living permanently at high altitudes, i.e., in low oxygen tension environments. These adaptations are not quite comparable to those of temporary residents at high altitudes, nor those experimentally exposed to acute hypoxia [17-20]. Thus, the natural adaptation to one’s surroundings should be another caveat in the decision-making process behind providing oxygen therapy [21,22].
During exercise, high cardiac output may compensate and deliver adequate oxygen to tissues despite lower recordings of SpO2. Hypoxia ensues when aerobic metabolism in tissues turns anaerobic for energy production.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-3", "d_text": "Laboratory results were part of routine perioperative blood sampling.
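The cardiac-output argument above can be made concrete with the standard oxygen delivery relation, DO2 = cardiac output × arterial oxygen content, where CaO2 ≈ 1.34 × Hb × SaO2 + 0.003 × PaO2. A minimal sketch with illustrative values (Hb 15 g/dL, SaO2 97%, PaO2 90 mmHg), not patient data from the passage:

```python
def arterial_o2_content(hb_g_dl, sao2_frac, pao2_mmhg):
    """CaO2 in mL O2 per dL of blood: haemoglobin-bound plus dissolved fractions."""
    return 1.34 * hb_g_dl * sao2_frac + 0.003 * pao2_mmhg

def oxygen_delivery(co_l_min, cao2_ml_dl):
    """DO2 in mL O2/min; the factor 10 converts dL to L."""
    return co_l_min * cao2_ml_dl * 10

cao2 = arterial_o2_content(15, 0.97, 90)  # ~19.8 mL/dL
print(round(oxygen_delivery(5, cao2)))    # 988 mL/min at CO = 5 L/min
print(round(oxygen_delivery(10, cao2)))   # 1977 mL/min: doubling CO doubles DO2, SpO2 unchanged
```

Note how SaO2 appears only inside CaO2: cardiac output scales DO2 directly, which is why tissue oxygen delivery can rise substantially with no change at all on the pulse oximeter.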
Induction of general anesthesia was performed intravenously with fentanyl and etomidate, and a muscle relaxant (pancuronium) to allow tracheal intubation. Maintenance of general anesthesia was inhalational, using sevoflurane with additional doses of fentanyl and a muscle relaxant. After the surgery, patients were transferred to a postoperative cardiac ICU, still intubated and mechanically ventilated, sedated to an adequate depth (Richmond Agitation Sedation Scale [RASS] between −1 and +1) with adequate pain control, using intravenous morphine infusion and a non-opioid analgesic. The initial mode for mechanical ventilation was synchronized intermittent mechanical ventilation with continuous positive airway pressure when the patient regained their respiratory drive and the weaning process was initiated. Patients were extubated when they met pre-defined criteria: normal body temperature, Glasgow Coma Scale 14–15 points, no focal neurological deficit, pain under control, drainage <100 mL/h, hemodynamic stability (noradrenaline or adrenaline <0.05 μg/kg/min), absence of significant arrhythmias, PaO2 >60 mmHg, PaCO2 <45 mmHg, SpO2 >95% with FiO2 <0.45, no respiratory distress and normal respiratory rate. Patients were extubated to passive oxygen therapy (facial mask with O2 at 5–7 L/min). An arterial blood gas was obtained 30 minutes after extubation to ensure patient safety.
The study was performed in accordance with the Declaration of Helsinki; due to its retrospective character, a waiver was granted from the Bioethical Committee of the Pomeranian Medical University in Szczecin, Poland, decision no KB-0012/254/06/18. The data entered into the prospectively collected database required written informed consent for surgery, anesthesia and data collection at the time of hospital admission from each patient, as part of routine preoperative assessment; therefore, patient consent to review their medical documentation was waived by the Bioethical Committee.
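The numeric extubation criteria in the protocol above translate naturally into a checklist; the thresholds below are copied from the text, while the function itself is only an illustrative sketch (the clinical criteria such as normothermia, pain control and rhythm assessment are omitted):

```python
def ready_for_extubation(gcs, drainage_ml_h, norepi_ug_kg_min,
                         pao2_mmhg, paco2_mmhg, spo2_pct, fio2):
    """True only when every numeric criterion from the protocol is met.
    Thresholds follow the pre-defined criteria quoted in the passage."""
    return (gcs >= 14 and
            drainage_ml_h < 100 and
            norepi_ug_kg_min < 0.05 and
            pao2_mmhg > 60 and
            paco2_mmhg < 45 and
            spo2_pct > 95 and
            fio2 < 0.45)

print(ready_for_extubation(15, 40, 0.02, 75, 38, 97, 0.40))  # True
print(ready_for_extubation(15, 40, 0.02, 75, 48, 97, 0.40))  # False: PaCO2 too high
```

Expressing the criteria this way makes the conjunction explicit: a single failed threshold, such as a PaCO2 of 48 mmHg, defers extubation regardless of how good the other values are.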
To ensure patient data safety, only de-identified data was used for analysis.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-52", "d_text": "When these fail to bring up the oxygen saturation, we proceed to mechanical ventilatory support, which could be non-invasive or invasive. If ventilatory support also fails, Extracorporeal Membrane Oxygenation (ECMO) can be considered.
When and how should oxygen be given?
Oxygen is a drug with specific indications, methods of delivery and targets. A prescription for oxygen should reflect all these features. For instance, an adequate prescription for a COVID pneumonia patient with mild hypoxemia in the ward would be “Oxygen at 6 litres per minute to be given by nasal cannula and titrated to an oxygen saturation of 92-94%. Remove when the patient is maintaining a saturation of greater than 90% on room air.”
Let us consider the indications, the methods of delivery and the targets in sequence.
Indications for oxygen delivery
Oxygen supplementation is indicated in COVID patients with hypoxemia; there is no benefit of giving oxygen prophylactically. Hypoxemia is a low arterial oxygen tension below the normal expected value (85-100 mmHg). The British Thoracic Society (BTS) guideline defines hypoxemia as PaO2 < 60 mmHg or SaO2 < 90%. Hypoxia, on the other hand, refers to oxygen lack at a tissue level, and is generally inferred from evidence of tissue hypoperfusion (increasing serum lactate).
Sources of Oxygen Supply
Oxygen is available in three forms: in a cylinder, which is what one sees in smaller hospitals; from a liquid oxygen plant, generally present in larger hospitals; and from oxygen concentrators/generators. Oxygen cylinders are generally available in two sizes: a size ‘E’ cylinder, which when full contains 680 litres of oxygen, and a type ‘F’ cylinder which contains between 1200 and 1300 litres of oxygen.
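From the cylinder capacities just quoted (about 680 L for a full 'E' cylinder, 1200-1300 L for an 'F'), transport time is simply contents divided by flow; a hedged sketch, taking the lower 'F' figure:

```python
CYLINDER_CAPACITY_L = {"E": 680, "F": 1200}  # litres when full; an 'F' may hold up to ~1300 L

def minutes_of_supply(cylinder_type, flow_l_min, fraction_full=1.0):
    """Rough transport estimate: usable contents / flow rate.
    Real practice keeps a safety margin on top of this figure."""
    return CYLINDER_CAPACITY_L[cylinder_type] * fraction_full / flow_l_min

print(round(minutes_of_supply("E", 6)))        # 113 min: full 'E' cylinder at 6 L/min
print(round(minutes_of_supply("E", 10, 0.5)))  # 34 min: half-full 'E' cylinder at 10 L/min
```

The half-full case is the one that catches people out during transport, which is why estimating from the gauge reading, and not the nominal capacity, matters.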
It is important to know this, because when one is transporting a patient, dividing the capacity of the cylinder by the oxygen required in litres per minute will let one know how long the supply will last. The oxygen cylinder manifold supply has a main bank and a reserve bank of cylinders, with a selector switch, which the technician uses to switch from one bank to the other when the pressure drops below a critical limit. This gives the technician time to attach filled cylinders after removing the empty ones.\nLiquid oxygen is nearly always supplied from a central location through a system of pipeline to patient outlets.", "score": 24.194708150609998, "rank": 54}, {"document_id": "doc-::chunk-14", "d_text": "Simonds AK, Ward S, Heather S, Bush A, Muntoni F. Outcome of pediatric domiciliary mask ventilation in neuromuscular and skeletal disease. Eur Respir J. 2000;16:476-81. [ Links ]\n17. Make BJ, Hill NS, Goldbery AI, Bach JR, Dunne PE, Heffner JE, et al. Mechanical ventilation beyond the intensive care unit. Report of a consensus conference of the American College of Chest Physicians. Chest. 1998;113(5 Suppl):289S-344. [ Links ]\n18. Katz S, Selvadurai H, Keilty K, Mitchell M, MacLusky I. Outcome of non-invasive positive pressure ventilation in pediatric neuromuscular disease. Arch Dis Child. 2004;89:121-4. [ Links ]\n19. Khan Y, Heckmatt JZ, Dubowitz V. Sleep studies and supportive ventilatory treatment in patients with congenital muscular disorders. Arch Dis Child. 1996;74:195-200. [ Links ]\n20. Suresh S, Wales P, Dakin C, Harris MA, Cooper DG. Sleep-related breathing disorder in Duchenne muscular dystrophy: disease spectrum in the pediatric population. J Paediatr Child Health. 2005;41:500-3. [ Links ]\n21. Mellies U, Dohna-Schwake C, Stehling F, Voit T. Sleep disordered breathing in spinal muscular atrophy. Neuromuscul Disord. 2004;14:797-803. [ Links ]\n22. Hill N. Noninvasive ventilation: does it work, for whom, and how? Am Rev Respir Dis. 1993;147:1050-5. 
[ Links ]\n23. Kramer N, Hill N, Millman R. Assessment and treatment of sleep-disordered breathing in neuromuscular disease and chest wall diseases. Top Pulm Med. 1996;3:336-42. [ Links ]\n24. Annane D, Quera-Salva MA, Lofaso F, Vercken JB, Lesieur O, Fromageot C, et al.", "score": 23.142872076359026, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "—To assess the effect of deliberate perioperative increase in oxygen delivery on mortality and morbidity in patients who are at high risk of both following surgery.\n—Prospective, randomized clinical trial.\n—A teaching hospital general intensive care unit, London, England.\n—A total of 107 surgical patients, who were assessed as high risk from previously identified criteria, were studied during an 18-month period.\n—Patients were randomly assigned to a control group (n=54) that received best standard perioperative care, or to a protocol group (n=53) that, in addition, had deliberate increase of oxygen delivery index to greater than 600 mL/min per square meter by use of dopexamine hydrochloride infusion.\n—Mortality and complications were assessed to 28 days postoperatively.\n—Groups were similar with respect to demographics, admission criteria, operation type, and admission hemodynamic variables. Groups were treated similarly to maintain blood pressure, arterial saturation, hemoglobin concentration, and pulmonary artery occlusion pressure; however, once additional treatment with dopexamine hydrochloride had been given, the protocol group had significantly higher oxygen delivery preoperatively (median, 597 vs 399 mL/min per square meter; P<.001) and postoperatively (P<.001). 
Results indicate a 75% reduction in mortality (5.7% vs 22.2%; P=.015) and a halving of the mean (±SEM) number of complications per patient (0.68 [±0.16] vs 1.35 [±0.20]; P=.008) in patients randomized to the protocol group.\n—Perioperative increase of oxygen delivery with dopexamine hydrochloride significantly reduces mortality and morbidity in high-risk surgical patients.(JAMA. 1993;270:2699-2707)", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-17", "d_text": "11(6):\nAnasthesiologie Intensivmedizin Notfallmedizin SchmerztherapieRoutine use of high inspired oxygen concentration - ProAnasthesiologie Intensivmedizin Notfallmedizin Schmerztherapie\nAnaesthesia and Intensive Care\nA review of the risks and benefits of nitrous oxide in current anaesthetic practice\nAnaesthesia and Intensive Care, 32(2):\nJournal of Clinical Monitoring and Computing\nAtelectasis formation during anesthesia: Causes and measures to prevent it\nJournal of Clinical Monitoring and Computing, 16():\nAnesthesia and AnalgesiaSupplemental oxygen does not reduce the incidence of postoperative nausea and vomiting after ambulatory gynecologic laparoscopyAnesthesia and Analgesia\nHemodyinamic effects of several inspired oxygen fractions in spontaneously breathing dogs submitted to continuous infusion of propofol\nCiencia Rural, 38(3):\nObesity SurgerySupplemental Postoperative Oxygen and Tissue Oxygen Tension in Morbidly Obese PatientsObesity Surgery\nAnesthesia and AnalgesiaSupplemental oxygen, but not supplemental crystalloid fluid, increases tissue oxygen tension in healthy and anastomotic colon in pigsAnesthesia and Analgesia\nJama-Journal of the American Medical Association\nEffect of High Perioperative Oxygen Fraction on Surgical Site Infection and Pulmonary Complications After Abdominal Surgery The PROXI Randomized Clinical Trial\nJama-Journal of the American Medical Association, 302():\nPulmonary atelectasis in dogs during general anesthesia\nCiencia Rural, 
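The "75% reduction in mortality" quoted above follows directly from the two group rates (5.7% vs 22.2%); the check below just reproduces that arithmetic:

```python
def relative_risk_reduction(rate_treated, rate_control):
    """RRR = 1 - (treated rate / control rate), expressed as a fraction."""
    return 1 - rate_treated / rate_control

rrr = relative_risk_reduction(0.057, 0.222)
print(f"{rrr:.1%}")  # 74.3%, reported as roughly 75% in the abstract
```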
40(1):
Archivos De BronconeumologiaPeri-Operative Atelectasis and Alveolar Recruitment ManoeuvresArchivos De Bronconeumologia
Anesthesia and Analgesia
Ondansetron is no more effective than supplemental intraoperative oxygen for prevention of postoperative nausea and vomiting
Anesthesia and Analgesia, 92(1):
Mayo Clinic Proceedings
A randomized controlled trial of oxygen for reducing nausea and vomiting during emergency transport of patients older than 60 years with minor trauma
Mayo Clinic Proceedings, 77(1):
TrialsPerioperative oxygen fraction - effect on surgical site infection and pulmonary complications after abdominal surgery: a randomized clinical trial.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-9", "d_text": "To help prevent potential complications from oxygen administration, reach for the nasal cannula before the non-rebreather mask, and apply just enough oxygen to maintain normal saturations.
1. Morton PG, et al, eds., Critical Care Nursing, a Holistic Approach, 8th edition. Philadelphia, PA: Lippincott, Williams & Wilkins, 2005.
2. Des Jardins T, Burton GG. Clinical Manifestations and Assessment of Respiratory Disease, 5th edition. St. Louis, MO: Elsevier, 2006.
3. O’Connor RE, et al. Acute Coronary Syndromes: 2010 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation 122: S787–817, 2010.
4. Shapiro BA, et al. Clinical Application of Blood Gases, 5th Edition. St. Louis, MO: Elsevier, 1994.
5. Kattwinkel J, et al, Neonatal Resuscitation: 2010 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation 122: S909–S919, 2010.
6. Ntoumenopolus G. Using titrated oxygen instead of high flow oxygen during an acute exacerbation of chronic obstructive pulmonary disease (COPD) saves lives. J Physiother 57(1):55, 2011.
7. Austin MA, et al.
Effect of high flow oxygen on mortality in chronic obstructive pulmonary disease patients in prehospital setting: randomized controlled trial. BMJ 341: c5462, 2010.\nKevin T. Collopy, BA, FP-C, CCEMT-P, NREMT-P, WEMT, is an educator, e-learning content developer and author of numerous articles and textbook chapters. He is also the performance improvement coordinator for Vitalink/Airlink in Wilmington, NC, and a lead instructor for Wilderness Medical Associates. Contact him at firstname.lastname@example.org.\nSean M. Kivlehan, MD, MPH, NREMT-P, is an emergency medicine resident at the University of California San Francisco and a former New York City paramedic for 10 years. Contact him at email@example.com.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-1", "d_text": "A recent Cochrane review combined this trial with a smaller similar one, which generated a composite RR of mortality of 3.03 (95% CI, 0.93-9.83).1 In acute decompensated heart failure, no clinical studies are available, despite the rather abundant evidence from preclinical studies, suggesting that such patients may experience the adverse effects caused by coronary and systemic vasoconstriction (Table).\nIn accordance with existing guidelines, supplemental oxygen is often administered during CPR. In the postresuscitation phase, evidence exists that 30% oxygen is more brain protective than pure oxygen.2 Recently, an observational study in 6326 patients showed that supplemental oxygen induced postresuscitation hyperoxia was independently associated with increased mortality (odds ratio [OR], 1.8; 95% CI, 1.5-2.2). A subsequent analysis of the same cohort indicated that each 25–mm Hg increase in PaO2 was associated with a statistically significant 6% increase in the relative risk of death.3 Finally, a comparable cohort of 12 108 patients was analyzed in New Zealand and Australia. 
Again, the hyperoxia group showed an increased risk of mortality compared with the normoxia group (OR, 1.2; 95% CI, 1.1-1.6). Although the statistical significance of this finding was not maintained after multivariable adjustment, certainly no beneficial effects of hyperoxia were found.4
In the management of ischemic stroke, a randomized trial suggested that hyperbaric oxygen may adversely affect stroke severity.5 With regard to normobaric oxygen, 3 randomized trials were performed. One showed no benefit on clinical outcome.6 Another trial in nonhypoxic patients found lower survival at 1 year (OR, 0.45; 95% CI, 0.23-0.90) in those who received supplemental oxygen during initial treatment.7 The third randomized trial was terminated in 2009 after enrolling 85 patients because of excess mortality in the hyperoxia group (40% vs 17% [P = .01 by our own calculation]) (Clinical Trial of Normobaric Oxygen Therapy in Acute Ischemic Stroke [not published]; clinicaltrials.gov Identifier: NCT00414726).", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-1", "d_text": "Preoperative investigations were within normal limits.
The patient was premedicated with Inj. Glycopyrrolate 0.2 mg, Inj. Pentazocine 0.5 mg/kg, Inj. Midazolam 1 mg and Inj. Diclofenac 75 mg i.m. The patient was preoxygenated and anesthesia was induced with Inj. Propofol 2 mg/kg and Inj. Vecuronium 0.1 mg/kg till the loss of the eyelash reflex. The patient was ventilated with oxygen, nitrous oxide and sevoflurane 1% for five minutes and intubated orally with a 7.0 mm flexometallic cuffed endotracheal tube. Anesthesia was maintained with sevoflurane at an inspired concentration of approximately 1% with a mixture of N2O/O2 50:50 (3 liters/min). The intraoperative monitoring included NIBP, ECG, SpO2 and EtCO2. Ventilation was adjusted to maintain normocarbia. The surgery lasted for three hours. All vital parameters were normal throughout the surgery.
Arterial oxygen saturation determined using pulse oximetry remained greater than 98% during the intraoperative course. No episodes of hypoxemia occurred.
Sevoflurane and nitrous oxide were discontinued upon completion of the procedure and neuromuscular blockade was reversed. Suddenly the patient started having focal seizures involving the left upper extremity and the left side of the face, lasting 30-40 seconds. Inj. Propofol 20 mg IV was administered with immediate cessation of seizure activity. The patient's blood pressure dropped to 80/40 mmHg. Inj. Ephedrine 6 mg was given with restoration of blood pressure. The patient was shifted to the intensive care unit (ICU). During transportation she had another similar seizure lasting around 30 seconds. The patient was awake, confused and slow in responding to oral commands in between the convulsive episodes. On arrival in the ICU she had another episode of similar seizures with hypotension. A bolus dose of midazolam 2 mg i.v. was given to suppress the seizures. Inj. Noradrenaline was started to correct the hypotension.", "score": 23.030255035772623, "rank": 60}, {"document_id": "doc-::chunk-8", "d_text": "There were 2 withdrawals before surgery; no further data were collected in these 2 patients, and thus 400 patients were included in the final analysis; 198 were assigned to nasal cannula oxygen and 202 to supplemental oxygen (Fig. 1).
Many patients had laparoscopic Roux-en-Y gastric bypass (91%); the remainder had open Roux-en-Y gastric bypass (9%). The type of surgery and approach varied among the sites.
Twenty-three percent of patients in the supplemental oxygen group received antibiotics more than 1 hour before the start of incision; 72% received antibiotics within 1 hour before incision, while 5% received antibiotics after incision; 8% of patients who received antibiotics in the 30% group were given antibiotics >1 hour before the start of surgery, 78% received antibiotics within 1 hour before incision, and 14% received antibiotics after incision; 31% in the supplemental and 34% in the 30% group were given cephalosporin; 7% in the supplemental group and 6% in the 30% group were given vancomycin; and 1 patient in the supplemental group received ciprofloxacin.\nThe 2 groups had similar baseline and demographic variables (standardized difference <0.30, Table 2). Preoperative and intraoperative glucose concentrations were comparable in the 2 groups (Tables 2 and 3). Both groups were given similar amounts of intraoperative crystalloids and opioids. The median duration of surgery was 2.7 hours in the 80% oxygen group and 2.6 hours in the 30% oxygen group (Table 3). Nasal cannula oxygen was well tolerated. Among the patients randomized to supplemental oxygen (n = 202), only 32 patients (16%) did not tolerate the tightly fitting Hi-Ox mask and were instead given supplemental oxygen with an open humidified mask at a flow rate of 15 L/min.\nAmong the observed complications, surgical wound infection was the most common, occurring in 8.5% of patients. The incidence of surgical wound infection was similar in patients randomized to either supplemental oxygen (8%, 16 of 202) or 30% oxygen (9%, 18 of 198). 
The observed median ASEPSIS score was 1 (interquartile range: 0–4) within each group.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "Author Affiliations: Departments of Internal Medicine (Drs Cornet, Kooter, Peters, and Smulders) and Intensive Care Medicine (Dr Cornet), VU University Medical Center, and Institute for Cardiovascular Research ICaR-VU (Drs Cornet, Kooter, Peters, and Smulders), Amsterdam, the Netherlands.\nIn medical emergencies, such as acute coronary syndrome, cardiopulmonary resuscitation (CPR), stroke, and exacerbations of chronic obstructive pulmonary disease (COPD), supplemental oxygen is often routinely administered. Most physicians believe this intervention is potentially lifesaving, and many guidelines support the routine use of high-dose supplemental oxygen.\nOver the decades, however, potential detrimental effects of supplemental oxygen appear to have been ignored. Many clinicians are unaware of the variety of preclinical studies that have been executed, showing that hyperoxia causes both coronary and systemic vasoconstriction, resulting in deterioration of several important (hemodynamic) parameters (Table). The prime candidate mechanism for these unintended effects is believed to be the formation of reactive oxygen species. In this Research Letter, we draw attention to the collective clinical evidence, which argues against the routine use of high-dose oxygen. Awaiting more thorough studies, we strongly recommend a policy of careful, titrated oxygen supplementation.\nWe conducted a search of the literature in MEDLINE and EMBASE to identify articles addressing the effect of oxygen therapy in acute coronary syndrome, cardiopulmonary resuscitation, stroke, and exacerbations of chronic obstructive pulmonary disease.\nLarge (randomized) clinical studies addressing oxygen supplementation are scarce. 
In 1976, a double-blind randomized trial was performed in 200 patients with suspected acute myocardial infarction. In the supplemental oxygen group, 9 of 80 patients (11%) died, as opposed to 3 of 77 (3.9%) in patients breathing compressed air (relative risk [RR] of mortality, 2.9; 95% CI, 0.8-10.3).", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "The BTS has appointed 'Oxygen Champions' in most acute UK hospitals to help introduce the guidelines to improve oxygen use, enhance patient safety and audit usage.\nNational documents are available providing advice on the safe use of oxygen; these are:\nBritish Thoracic Society (October 2008): Guidelines for emergency oxygen use in adult patients\nThese comprehensive clinical guidelines cover all aspects of the emergency use of oxygen in pre-hospital care and hospital settings. Note that they do not cover children under 16 years or critical care (ITU and HDU facilities). The key recommendations are: o Oxygen therapy will be adjusted to achieve target saturations rather than giving a fixed dose to all patients with the same disease. o Nurses will make these adjustments without requiring a change to the prescription on each occasion. o Most oxygen therapy will be from nasal cannulae rather than masks. o Oxygen will not be given to patients who are not hypoxaemic (except during critical illness). o Pulse oximetry must be available at all locations where emergency oxygen therapy is used. 
o Oxygen will be prescribed in all situations except for the immediate management of critical illness.\nIn Appendix 2 of the report, a number of resources and good practice examples relating to the key risks and actions recommended are stated including:\nAction 6: Pulse oximetry is available in all locations where oxygen is used.\nIt is recommended in the BTS guidelines that:\n- Pulse oximetry must be available in all locations where emergency oxygen is used;\n- Oxygen saturation, ''the fifth vital sign'', should be checked by pulse oximetry in all breathless and acutely ill patients (supplemented by blood gases when necessary) and the inspired oxygen concentration should be recorded on the observation chart with the oximetry result (the other vital signs are pulse, blood pressure, temperature and respiratory rate);\n- All patients should have their oxygen saturation observed for at least five minutes after starting oxygen therapy.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-5", "d_text": "In practice, only a limited set of neuromuscular conditions is accompanied by respiratory failure with appreciable frequency (table). Patients who could potentially develop respiratory muscle weakness should have their vital capacity and mouth pressures measured at least once. Whether to repeat the testing will depend on the diagnosis and the rate of progression of muscle weakness. A serial fall in the vital capacity (particularly a fall below 1.2–1.5 l or <40–50% of predicted), or a fall in MIP to <60% of predicted, are indications for further respiratory assessment. Onset of daytime hypercapnia (PaCO2 >45 mmHg or 6 kPa) indicate that the patient needs urgent assessment and may imminently require non-invasive ventilation.\nMotor neuron disease\nAll neurologists are familiar with the clinical features of this disorder. Presentation with respiratory muscle weakness or with hypercapnic respiratory failure is rare but well-recognised. 
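The neuromuscular referral thresholds above (vital capacity below 1.2-1.5 L or under 40-50% of predicted, MIP under 60% of predicted, daytime PaCO2 over 45 mmHg) can be expressed as a small triage helper. The cut-offs are taken from the text at their conservative ends, and the function itself is only an illustrative sketch, not a clinical rule:

```python
def respiratory_assessment_needed(vc_l, vc_pct_pred, mip_pct_pred, paco2_mmhg):
    """Flag patients with neuromuscular disease for further respiratory work-up.
    Daytime hypercapnia (PaCO2 > 45 mmHg) warrants urgent assessment."""
    if paco2_mmhg > 45:
        return "urgent: consider non-invasive ventilation"
    if vc_l < 1.5 or vc_pct_pred < 50 or mip_pct_pred < 60:
        return "further respiratory assessment"
    return "continue serial VC/MIP monitoring"

print(respiratory_assessment_needed(2.8, 70, 80, 40))  # continue serial VC/MIP monitoring
print(respiratory_assessment_needed(1.3, 45, 55, 40))  # further respiratory assessment
print(respiratory_assessment_needed(1.3, 45, 55, 48))  # urgent: consider non-invasive ventilation
```

The ordering mirrors the text: hypercapnia overrides everything else, while the VC and MIP thresholds only prompt the next tier of assessment within a programme of serial measurement.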
Whatever the mode of onset of their disease, most patients eventually develop respiratory muscle weakness, and this can be anticipated by serial measurement of the VC and MIP. Symptoms include exertional dyspnoea, orthopnoea, fatigue, non-refreshing sleep and morning headache. There is good evidence that non-invasive ventilation can improve these symptoms and quality of life. It also significantly improves survival (to a much greater extent than does riluzole) in those patients with only mild or moderately impaired bulbar function.5 Where resources are available, a respiratory physician with an interest in non-invasive ventilation should be part of the management team in motor neuron disease (see below).
Spinal muscular atrophy
Types 1, 2 and 3 are recognised, depending on age of onset. All forms have the same genetic basis—homozygous deletion of the telomeric survival motor neuron gene. Respiratory muscle involvement in spinal muscular atrophy type 1 (Werdnig-Hoffman disease) is universal. It occurs to a variable extent in Type 2, and is uncommon in Type 3.
In the congenital myopathies, onset of hypotonia and weakness occurs in early life. Unlike congenital muscular dystrophy, distinct structural alterations (for example, rod bodies in nemaline myopathy) are apparent within the muscle fibres at biopsy. If the child survives infancy, there is usually minimal or no progression of weakness in later life.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "- Should I remove the nasal cannula (NC) prior to intubating once the patient is paralyzed?
- “Apneic oxygenation” allows for continuous oxygenation through a passive process driven by oxygen gradients in the alveoli that are still performing their function of gas exchange.
- Can I rely on a finger pulse oximetry probe for current oxygenation information?
- Pulse ox results can lag anywhere from 30 seconds to 3 minutes depending on the cardiac status of the patient.
Ear probes can provide more current information because they are less affected by peripheral vasoconstriction and sit closer to the central circulation.", "score": 21.695954918930884, "rank": 65}, {"document_id": "doc-::chunk-3", "d_text": "During implant of the medullary cemented prosthesis, the patient presented transient oxygen desaturation to 80% and bronchospasm, which were reversed by adjusting parameters to IPAP 17 cmH2O and EPAP 8 cmH2O, together with administration of an inhalatory β-agonist and ipratropium. He was then referred to the intensive care unit (ICU) hemodynamically stable, with a 94% saturation on BIPAP, lucid, oriented and pain-free.\nThe postoperative period progressed with hemodynamic stability, good pain control using dipyrone combined with tramadol, and continued use of NIMV/BIPAP during the first 24 hours following the procedure. Thereafter, the patient resumed use of nasal oxygen and intermittent NIMV. Post-procedure arterial gasometry showed: pH - 7.38, pCO2 - 54 mmHg, pO2 - 63 mmHg, HCO3 - 31.2 mmol/L, base excess: 4.8, sO2: 92%. The patient was transferred to the semi-intensive care unit 10 days after the procedure.\nThe prevalence of COPD is estimated at 6%, and it has become the main cause of mortality among respiratory diseases.1,4 Adequate pre- and intraoperative assessment of the surgery's cost-benefit, the type of anesthesia, ventilatory assistance and hemodynamic support increases postoperative survival in these patients.\nThe patient had a severely restrictive pulmonary condition; the acute illness worsened it further, requiring planning of intraoperative ventilatory support appropriate to the situation.\nWhile the rate of postoperative pulmonary complications in the general population ranges from 5% to 12%,5-6 in patients with COPD the number and type of complications increase substantially (37% of cases)3; for this reason, in the 1970s some authors recommended that only life-saving surgeries be carried out in patients with a FEV1 < 0.5 L.
Because of the severity of the condition, the option of not operating on this patient was considered; however, this would have meant increased morbidity and mortality (60 to 70%)7 and greater physical impairment.\nA recent study3 stated that, in patients with COPD, the most common postoperative complications are an increase in ICU length of stay and bronchospasm.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-2", "d_text": "Although an external monitor judged the excess mortality as “unrelated to oxygen treatment,” these results are important because this was the largest randomized trial investigating oxygen treatment for ischemic stroke, and it is remarkable that these results have not (yet) been published.\nIn the management of COPD, the risks of oxygen supplementation are widely acknowledged. Administration of oxygen in patients with COPD may cause hypercapnia due to ventilation-perfusion mismatching, the Haldane effect, inhibition of hypoxic drive, and atelectasis. Guidelines recommend a maximum FIO2 of 0.28. However, patients with COPD often receive higher doses, especially during ambulance transportation, causing hypercapnia and increased mortality. Recently, a randomized trial compared high-concentration oxygen with titrated oxygen in prehospital patients with exacerbation of COPD. Mortality was lower in patients receiving titrated oxygen (RR, 0.42; 95% CI, 0.20-0.89). In those with later-confirmed COPD, mortality reduction was even stronger (RR, 0.22; 95% CI, 0.05-0.91).8\nIn conclusion, there appear to be potential dangers of routine administration of supplemental oxygen during a variety of medical emergencies. Hyperoxia is associated with hemodynamic alterations that may increase myocardial ischemia and impair cardiac performance, and the results from relatively unknown preclinical studies appear to be supported by the available clinical evidence.
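Relative risks like those quoted above (e.g. RR, 0.42; 95% CI, 0.20-0.89) are computed from the trial's 2x2 outcome counts; the standard recipe is a ratio of risks with a log-scale Wald interval. A generic sketch of that computation; the counts used in the test are invented for illustration and are not the trial's data:

```python
import math

def relative_risk_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A vs group B with a log-scale Wald 95% CI.

    events_*: number of outcome events; n_*: group sizes.
    """
    risk_a = events_a / n_a
    risk_b = events_b / n_b
    rr = risk_a / risk_b
    # Standard error of log(RR) (Katz method).
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```

Given the event counts and denominators from a published 2x2 table, this reproduces the reported RR and an approximate confidence interval.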
Moreover, hyperoxia also seems to be associated with adverse outcomes in different noncardiac emergencies. Finally, in our extensive literature review, we did not find a single study contradicting the reported hazards of hyperoxia or, in fact, suggesting benefits.\nWe acknowledge that additional clinical research is warranted to determine whether routine high-dose supplemental oxygen in medical emergencies indeed causes more harm than benefit. Until that time, however, we call for appropriate caution in applying supplemental oxygen. Hypoxemia should be treated carefully with stepwise increases in inhaled oxygen concentration in an attempt to avoid arterial hyperoxia.\nCorrespondence: Dr Cornet, VU University Medical Center, PO Box 7057, 1007 MB Amsterdam, the Netherlands (email@example.com).\nPublished Online: January 9, 2012. doi:10.1001/archinternmed.2011.624\nAuthor Contributions:Study concept and design: Cornet, Kooter, Peters, and Smulders.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-2", "d_text": "In a previous publication we reported that, although intercurrent chest colds can necessitate intensive care and intubation, tracheotomy can be avoided for many SMA children by using a protocol in which, in addition to conventional medical, respiratory therapy, and nutritional support, mechanical insufflation-exsufflation (MI-E) (In-exsufflator™, J. H. Emerson Company, Cambridge, MA) is used via the translaryngeal tube and it along with high span nasal PIP+PEEP are used post-extubation.3 Care providers are trained and equipped with oximetry as feedback to use high span PIP+PEEP, MI-E, and manually assisted coughing to reverse decreases in SaO2 below 95%. The purpose of this work is to report the long-term outcomes of tracheostomy, and extubation protocol/noninvasive approaches. 
There have been no previous long term studies of SMA1 patients and only one report of a SMA1 patient who survived to 3.7 years of age with a tracheostomy.4\nThe status of all 65 SMA1 patients who visited one Jerry Lewis Muscular Dystrophy Association Clinic from June 1996 until October 2001 was reviewed. This included contact by telephone in October 2001. SMA1 was diagnosed on the basis of DNA evidence of chromosome 5 exon 7 and 8 deletion in 56 of 65 children, affected siblings in 4 patients, and characteristic laboratory, muscle biopsy, and electromyography results in 5 children. Inclusion criteria included inability to roll or sit unsupported at any time. In addition, all patients had to develop respiratory failure and the respiratory failure had to occur before 2 years of age. All patients attaining 2 years of age had to have lost the ability to receive nutrition by mouth. By 18 months of age all of the patients had little more than residual finger, toe, and facial movements. Sixty-three of the 65 children had paradoxical chest wall movement, whereas for 2 children, the chest walls neither expanded nor retracted upon inspiration. One of these two, a 12 month old, has not been prescribed PIP+PEEP and has not been hospitalized for respiratory failure.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-2", "d_text": "As comorbidities he has hypothyroidism and diabetes mellitus type II with regular use of prednisone, bamifylline, inhalatory β-agonist and ipatropium, acetylsalicylic acid and pantoprazole.\nAt pre-operative evaluation he was bedridden, with pain in the lower right limb, tachypnea, hemodynamically stable and with 88% oxygen saturation. Patient alternated use of NIMV/ /Bi-level Positive Airway Pressure (BIPAP) with nasal 3L/min oxygen O2. Complementary exams showed hematocrit 36%; normal leukogram and coagulogram; echocardiogram with mild ventricular dysfunction; chest X-ray (Figure 2). 
Arterial pre-operative gasometry under O2 at 4 L/min showed pH - 7.38, pCO2 - 54 mmHg, pO2 - 93 mmHg, HCO3 - 31.2 mmol/L, base excess: 4.9, SpO2: 96%.\nAfter monitoring by cardioscope, noninvasive pressure and pulse oximetry, peripheral venous access was obtained and cefazolin 2 g and hydrocortisone 200 mg were administered. NIMV with BIPAP under a full face mask (Figure 3) was administered with the parameters expiratory positive airway pressure (EPAP) of 7 cmH2O, inspiratory positive airway pressure (IPAP) of 15 cmH2O and an O2 flow of 3 L/min.\nDiazepam 1 mg and cefotamine 5 mg, both intravenous, were administered for positioning in the left lateral decubitus. Single-dose isobaric spinal anesthesia was performed in the subarachnoid space at the L3-L4 interspace with a 25G needle. On the first attempt the liquor was clear, and isobaric bupivacaine 16 mg was injected, with a final sensory level at T10. Continuous infusion of dexmedetomidine 0.2 µg/kg/min was begun for sedation. The surgical procedure comprised partial bipolar arthroplasty of the hip with the contemporary technique of femoral stem cementing, in the left lateral decubitus, for approximately 75 minutes.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-3", "d_text": "The proposed pharmacodynamic model of respiratory depression integrates respiratory physiology principles and allows the clinician to estimate the level of CO2 as a function of propofol and remifentanil in patients sedated but breathing spontaneously. Such a model opens many possibilities: to study in deeper detail the respiratory response to anesthetic drugs, to be used as a learning tool to train anesthesiologists to perform safer sedation, or to be used as an alert system while performing sedation-analgesia in real patients.\nThe prevalence of OSA in surgical patients may be as high as 70%.
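A pharmacodynamic model of the kind described above can be sketched as a toy sigmoid-Emax interaction: effect-site drug levels depress alveolar ventilation, and steady-state CO2 rises roughly in inverse proportion (PaCO2 scales with VCO2/VA). Everything below is an invented illustration; the parameters, the additive interaction and the function name are my assumptions, not the published model:

```python
def toy_paco2(prop_ug_ml, remi_ng_ml,
              baseline_paco2=40.0,
              c50_prop=8.0, c50_remi=4.0, gamma=1.5,
              max_depression=0.8):
    """Toy sigmoid-Emax sketch: combined drug effect depresses alveolar
    ventilation, and steady-state PaCO2 scales inversely with it.

    All parameter values are illustrative, not fitted estimates.
    """
    # Normalized combined concentration (simple additive interaction).
    u = prop_ug_ml / c50_prop + remi_ng_ml / c50_remi
    # Fractional depression of alveolar ventilation, capped below 1.
    depression = max_depression * u**gamma / (1 + u**gamma)
    ventilation_fraction = 1.0 - depression
    # PaCO2 ~ VCO2 / VA: halving ventilation doubles PaCO2 at steady state.
    return baseline_paco2 / ventilation_fraction
```

Even this toy version shows the qualitative behavior such models capture: CO2 rises monotonically and supra-linearly as the two drugs are combined, which is what makes them useful as teaching tools or alert systems.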
These patients pose a significant clinical challenge to health-care professionals, and the lack of evidence behind the current guideline recommendations and the significant costs of guideline implementation have created a dilemma between potentially improved postoperative adverse events and increased health-care resource utilization. This review examines the evidence regarding the use of CPAP in the perioperative period.\nFor surgical patients with OSA it is clear that those who are treated and adherent to CPAP should continue their CPAP in the postoperative period. Patients with diagnosed OSA who are not adherent to therapy, and those with undiagnosed OSA, pose a more significant clinical challenge. CPAP has been shown to have beneficial effects on postoperative adverse events.\nThere are several barriers to effective diagnosis and treatment of OSA in the perioperative setting. In both clinical and research settings, a significant proportion of patients who may have suspected OSA refuse to undergo additional testing for establishing an OSA diagnosis. Also, of those patients who are actually on home CPAP therapy, only one third used their CPAP during the postoperative hospital stay.\nIndividual evaluation is important to determine the best course of action. Referral to sleep medicine for CPAP therapy may have to be made in the absence of overwhelming evidence from RCTs in certain groups of patients. Patients with severe OSA, COPD or overlap syndrome, obesity hypoventilation syndrome, or pulmonary hypertension would definitely benefit from further evaluation and workup. Patients who have preoperative resting hypoxemia on room air with no known cardiopulmonary cause are potential candidates for further preoperative evaluation.
Patients with decreased respiratory responses to hypoxia/hypercapnia stimuli and a high arousal threshold presenting with recurrent severe hypoxemia may benefit from preoperative CPAP.", "score": 20.327251046010716, "rank": 70}, {"document_id": "doc-::chunk-4", "d_text": "If decreased lung capacity is the result of kyphoscoliosis, corrective spinal fusion surgery may be indicated (Note: since muscular dystrophies can often be associated with fatal malignant hyperthermia triggered by neuromuscular blocking anaesthetic agents, these should be prohibited in muscular dystrophy patients, and special precautions, e.g. availability of dantrolene and hyperthermia remedial measures, should be taken.) Further progression of respiratory insufficiency requires more invasive measures such as a tracheostomy and portable ventilation.\nThe mainstay of muscular dystrophy ‘treatment’ comprises rehabilitative and nutritional measures. Low-intensity physical therapy in order to maintain independence with ADLs (activities of daily living) and quality of life, maintain joint flexibility and muscle tone, and prevent joint contractures and deformities such as scoliosis, is essential throughout the course of the disease. Hydrotherapy/aquatic therapy can be exceedingly beneficial, as water provides buoyancy as well as resistance. The former helps a weak patient stand in the pool and possibly take steps, maintaining function and improving his overall sense of wellbeing somewhat. The latter helps strengthen muscles in a low-impact, non-jarring fashion. (Note: high-intensity, isometric, weight-training exercises are strictly NOT RECOMMENDED for muscular dystrophy patients as they do not have the capacity to overcome the wear and tear induced by such.
Any exercise involving ‘eccentric’ muscle contraction (stretching of an already contracted muscle by manual resistance or weight loading) is unequivocally prohibited.)\nAs the functional strength, gross and fine motor control and independence in daily tasks diminish, the services of an occupational therapist may become essential in helping the patient be as self-sufficient as possible. At various stages of the disease, the muscular dystrophy patient may need orthotics (to prevent joint contractures), canes, walkers, standing frames, wheelchairs or motorized scooters, an air mattress (to prevent bedsores), custom-made or modified utensils, aids for writing, turning book pages or computer use, as well as toilet and shower modifications and safety devices, and accessibility needs such as ramps, chairlifts and car/van modifications.\nLastly, the importance of adequate and balanced nutrition in muscular dystrophy cannot be overstated. It is crucial that the patient maintain muscle mass but not gain weight in the form of fat that can further hamper mobility.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "This report was previously presented, in part, at the American Society of Anesthesiologists Meeting, 2010.\nReprints will not be available from the authors.\nAddress correspondence to Anupama Wadhwa, MD, Department of Anesthesiology, University of Louisville, 530 S. Jackson St, Louisville, KY 40202.
Address e-mail to email@example.com.\nSurgical site infections and healing-related complications are among the most common serious complications of anesthesia and surgery.1–4 The morbidity and related cost associated with surgical infections and the resulting major complications are considerable; estimates of prolonged hospitalization vary from 5 to 20 days per infection.5–7\nOxidative killing is the most important immune defense against surgical pathogens, with killing intensity increasing throughout the range of 0 to ≥150 mm Hg oxygen.8,9 Oxidative killing requires molecular oxygen that is enzymatically transformed into the bactericidal radical superoxide.10 Subcutaneous tissue oxygen values near 60 mm Hg are typical in euthermic, euvolemic, healthy volunteers breathing room air.11 Perioperative subcutaneous oxygen partial pressures <40 mm Hg are associated with high infection risk, whereas partial pressures >90 mm Hg are rarely associated with infection.12,13 Adequate tissue oxygenation is also necessary for collagen deposition (scar formation), which is an essential step in wound healing and tissue repair.14\nThe partial pressure of oxygen in subcutaneous tissues (PsqO2) varies widely, even in patients whose arterial hemoglobin is fully saturated. Factors known to influence tissue oxygen tension include core15 and local temperature,16 smoking,17 anemia,18 perioperative fluid management,19 neuraxial anesthesia,20 and uncontrolled surgical pain.21 As might be expected, increasing the fraction of inspired oxygen augments tissue oxygen tension.22 Supplemental perioperative oxygen was found to reduce the risk for anastomotic leak23 and wound infection in some randomized studies,22,24 but not in others.25,26\nMorbidly obese patients are at high risk of wound infection and healing-related complications. 
Low tissue oxygenation presumably contributes to infection risk in these patients.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-4", "d_text": "Respiratory failure results initially from recurrent chest infections caused by atelectasis, which progresses to nocturnal hypoventilation from muscle weakness, and reduced sensitivity to carbon dioxide during sleep. Sleep-related breathing disorders - nocturnal hypoventilation18-21 and obstructive sleep apnea,19 are well documented in children with neuromuscular diseases. Untreated sleep-related disorders can then lead to the development of respiratory failure through disordered ventilatory control resulting from adaptation and down-regulation of ventilatory responses to hypoxemia and hypercarbia.22\nIt has been hypothesized that NIV works by several mechanisms in the chronic lung: 1) improving ventilatory mechanics; 2) resting fatigued respiratory muscles; or 3) enhancing ventilatory sensitivity to carbon dioxide.22,23 Investigations into a mixed group of patients with neuromuscular diseases showed that NIV improved daytime blood gases,16,24,25 ventilatory response to CO2, but did not demonstrate improvement in pulmonary mechanics or respiratory muscle strength.24,25\nIn a retrospective review by Duiverman, which included 114 adult patients with restrictive lung diseases (mainly post-poliomyelitis and idiopathic kyphoscoliosis), NIV improved both daytime blood gases and pulmonary function.26\nThe differing results in the pulmonary function may be due to the differences in the natural progression of the diseases studied. 
In a small study of children with neuromuscular diseases, NIV use was associated with fewer hospitalizations after its initiation.18 The use of NIV has also been effective in improving polysomnography (PSG) indices as well as sleep architecture and sleep-related symptoms.20,21,25\nIt therefore follows that NIV should be considered in patients who have evidence of nocturnal hypoventilation. The 1999 Consensus Conference Report27 suggested the use of noninvasive positive pressure ventilation for restrictive lung disease in the presence of symptoms (fatigue, dyspnea, morning headache, etc) with one of the following parameters: PaCO2 ≥ 45 mmHg, nocturnal oximetry demonstrating oxygen saturation ≤ 88% for 5 consecutive minutes; or progressive neuromuscular disease, maximal inspiratory pressures < 60 cmH2O or FVC < 50% predicted.\nIn a study involving Duchenne muscular dystrophy, Craig et al.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-6", "d_text": "Of the more than 12 well-characterised congenital myopathies, respiratory muscle involvement is best recognised in nemaline myopathy (fig 1), myotubular (centronuclear) myopathy and multiminicore disease. If it occurs at all, respiratory failure is usually evident in the neonatal period. However, in a rare variant of nemaline myopathy—sporadic late-onset nemaline myopathy—the onset of limb and respiratory muscle weakness occurs in adult life.6\nCongenital muscular dystrophy\nIn this rare set of disorders, hypotonia, weakness, and contractures occur within the first six months of life. In some forms there may be structural brain disease and impaired cognitive function. Respiratory muscle involvement and respiratory failure are common features and often prove fatal in childhood or the second decade.\nDuchenne muscular dystrophy and its less severe allelic counterpart, Becker muscular dystrophy, arise from mutations of the gene encoding dystrophin.
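The consensus criterion quoted above, oxygen saturation ≤ 88% for 5 consecutive minutes on nocturnal oximetry, is straightforward to check against a recorded trace. A minimal sketch; the one-sample-per-minute assumption and the function name are mine:

```python
def meets_desaturation_criterion(spo2_per_minute, threshold=88, minutes=5):
    """True if the trace contains >= `minutes` consecutive samples at or
    below `threshold` (one SpO2 sample per minute assumed)."""
    run = 0
    for s in spo2_per_minute:
        # Extend the current below-threshold run, or reset it.
        run = run + 1 if s <= threshold else 0
        if run >= minutes:
            return True
    return False
```

Real oximeters sample far more often than once a minute, so in practice the run length would be scaled to the device's sampling interval.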
In Duchenne, weakness of the respiratory muscles is universal. After the normal rise in childhood, the vital capacity begins to decline after age 10. The assessment of respiratory function may be complicated by scoliosis, and by cardiac dysfunction. Non-invasive ventilation (fig 2) has been one factor which has increased the survival beyond age 24 to 53% in one large centre.7 In Becker muscular dystrophy, respiratory failure is less common but lung function should be checked at intervals.\nMyotonic dystrophy (DM1) results from a CTG expansion in the gene encoding DMPK. It is a multisystem disorder, but the clinical features are highly variable in different patients. Respiratory failure, pneumonia or cardiac conduction defects are the usual causes of death. As well as weakness of the respiratory muscles, patients can have a central defect of respiratory control, and be predisposed to respiratory infection due to bulbar weakness. Monitoring should include periodic checks on the vital capacity, and an annual ECG as the PR interval and QRS duration each increase by a few percent per year. The alterations of personality and cognitive function found in patients with DM1 can hinder compliance with non-invasive ventilation.\nLimb girdle muscular dystrophy\nLimb girdle muscular dystrophy is a steadily expanding set of autosomal dominant or recessive myopathies which have in common weakness of the shoulder and pelvic girdle muscles.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-3", "d_text": "Picture an emergency where an anesthetized patient’s temperature unexpectedly rises to over 104 degrees Fahrenheit due to hypermetabolic acidotic chemical changes in the patient’s skeletal muscles. 
The disease requires rapid diagnosis and treatment with the antidote dantrolene, as well as acute medical measures to decrease temperature, acidosis, and high blood potassium levels which can otherwise be fatal.\n- An intraoperative myocardial infarction (heart attack). Picture an anesthetized 60-year-old patient who develops a sudden drop in their blood pressure due to failed pumping of their heart. This can occur because of an occluded coronary artery or a severe abnormal rhythm of their heart. Otherwise known as cardiogenic shock, this syndrome can lead to cardiac arrest unless the heart is supported with the precise correct amount of medications to increase the pumping function or improve the arrhythmia.\n- Any massive trauma patient with injuries both to their airway and to their major vessels. Picture a motorcycle accident victim with a bloodied, smashed-in face and a blood pressure of near zero due to hemorrhage. The placement of an airway tube can be extremely difficult because of the altered anatomy of the head and neck, and the management of the circulation is urgent because of the empty heart and great vessels secondary to acute bleeding.\n- The syndrome of “can’t intubate, can’t ventilate.” You’re the anesthesiologist. Picture any patient to whom you’ve just induced anesthesia, and your attempt to insert the tracheal breathing tube is impossible due to the patient’s anatomy. Next you attempt to ventilate oxygen into the patient’s lungs via a mask and bag, and you discover that you are unable to ventilate any adequate amount of oxygen. The beep-beep-beep of the oxygen saturation monitor is registering progressively lower notes, and the oximeter alarms as the patient’s oxygen saturation drops below 90%. If repeated attempts at intubation and ventilation fail and the patient’s oxygen saturation drops below 85-90% and remains low, the patient will incur hypoxic brain damage within 3 – 5 minutes. 
This situation is the worst-case scenario that every anesthesia professional must avoid if possible. If it does occur, the anesthesia professional or a surgical colleague must be ready and prepared to insert a surgical airway (cricothyroidotomy or tracheostomy) into the neck before enough time passes to cause irreversible brain damage.", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-1", "d_text": "Steroids can weaken bones. To keep bones healthy, your child will take\nand calcium supplements. If your child has heart problems, medications may be given to slow the damage.\nAs the disease progresses, the muscles that support breathing may weaken. Your child may need a ventilator. It will deliver air through a mask, tube, or sometimes through a\nsurgical hole in the windpipe called a\nSurgery is sometimes used to treat symptoms of DMD. For severe contractures, surgery may be done to release specific tendons. Scoliosis can sometimes interfere with your child’s breathing. In this case, back surgery may be done.\nThere are no known guidelines to prevent this progressive muscle disease.\nDuchenne muscular dystrophy. EBSCO DynaMed website. Available at:\nhttp://www.ebscohost.com/dynamed. Updated March 15, 2013. Accessed August 6, 2013.\nDuchenne muscular dystrophy. Muscular Dystrophy website. Available at:\nhttp://mda.org/disease/duchenne-muscular-dystrophy. Accessed August 6, 2013.\nLast reviewed May 2014 by Teresa Briedwell, DPT, OCS\nPlease be aware that this information is provided to supplement the care provided by your physician. It is neither intended nor implied to be a substitute for professional medical advice. CALL YOUR HEALTHCARE PROVIDER IMMEDIATELY IF YOU THINK YOU MAY HAVE A MEDICAL EMERGENCY. Always seek the advice of your physician or other qualified health provider prior to starting any new treatment or with any questions you may have regarding a medical condition.\nCopyright © EBSCO Publishing. 
All rights reserved.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-3", "d_text": "However, the special tests described in the Atelectasis section were performed only in the selected 30 patients.\nWritten informed consent was obtained from each patient. Patients with a history or current symptoms of acute or chronic pulmonary disease and patients in whom surgeons planned delayed primary closure techniques or wound healing by secondary intention were excluded, as were patients who had a history of fever or infection.\nAnesthesia was induced at an inspired oxygen fraction of 100% with sodium thiopental (3–5 mg/kg), vecuronium (0.1 mg/kg), and fentanyl (1–3 μg/kg). Anesthesia subsequently was maintained with isoflurane (approximately 0.9%) in a carrier gas, described below in randomization groups. After tracheal intubation, the lungs were ventilated with a tidal volume of 10 ml/kg, with the rate adjusted to maintain end‐tidal carbon dioxide partial pressure near 35 mmHg. Ventilation was volume‐controlled with zero end‐expiratory pressure.\nAfter induction of anesthesia, patients were assigned to two groups using a reproducible set of computer‐generated random numbers. The assignments were kept in sealed, sequentially numbered envelopes until used. For auditing purposes, both the assignment and the envelope number were recorded. The groups were: (1) 30% oxygen, balance nitrogen (not nitrous oxide); and (2) 80% oxygen, balance nitrogen.\nAt the end of surgery, residual neuromuscular block was antagonized with glycopyrrolate (0.4 mg) and neostigmine (2.5 mg). All patients were transported to the recovery unit after endotracheal extubation. During the first 2 h of recovery, the designated oxygen concentration was given via a nonrebreathing mask system (AirCare mask and manifold; Apotheus Laboratories, Inc., Lubbock, TX). Subsequently, all patients breathed room air. 
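The protocol above adjusts the ventilation rate to hold end-tidal CO2 near 35 mmHg. At roughly constant CO2 production and a fixed tidal volume, etCO2 scales inversely with minute ventilation, so the correction is simply proportional. A sketch of that arithmetic (my own illustration, not an algorithm from the study):

```python
def adjusted_rate(current_rate, measured_etco2, target_etco2=35.0):
    """Proportional rate-correction sketch: etCO2 varies roughly inversely
    with minute ventilation at constant CO2 production and fixed tidal
    volume, so scale the rate by measured/target etCO2."""
    return current_rate * measured_etco2 / target_etco2
```

For example, a measured etCO2 of 42 mmHg at 10 breaths/min suggests raising the rate to about 12 breaths/min, then rechecking.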
Supplemental oxygen was given to patients in either group at any time, as necessary, to maintain oxygen saturation as measured by pulse oximeter > 92%. Postoperative analgesia was provided by patient‐controlled intravenous doses of the opioid piritramid.\nAppropriate morphometric characteristics of each treatment group were tabulated. Historical factors that might influence general health status of the participating patients were recorded.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-3", "d_text": "Supplemental oxygen is needed to prevent hypoxia and keep cells functioning properly. However, during normal cellular metabolism oxygen is systematically changed and an O2- molecule is produced as a byproduct, which is oxygen with an extra negatively charged electron. This oxygen molecule is considered a free radical “toxic” molecule because it has the ability to damage cell membranes. Normally the body avoids damage from these toxic oxygen molecules because enzymes within each cell are produced that quickly destroy the “toxic” oxygen molecule.4 However, these enzymes are produced at a fixed rate that does not increase when metabolism (oxygen consumption) increases.\nComplications of Oxygen Delivery\nLike every other drug, oxygen administration has complications. Common complications include skin irritation and breakdown as well as a drying of the mucous membranes. Less common but more serious complications include oxygen toxicity, absorbative atelectasis and carbon dioxide narcosis.\nThe most common complications are a consequence of the delivery systems. Plastic systems, oxygen masks and nasal cannulas are used, and all of these devices are skin irritants which can cause significant skin irritation and breakdown when used long term. Patients who are on long-term oxygen systems often try to prevent skin irritation by padding their delivery systems, such as by padding their nasal cannula behind the ears with nasal tissues. 
Other common areas of skin breakdown are across the bridge of the nose and beneath the nares.\nTypically oxygen systems deliver oxygen that has nearly zero moisture content. When this oxygen passes through the mucous membranes in the mouth and nose, it is humidified by pulling moisture from the mucous membranes so it is humid by the time it reaches the alveoli. While this protects the alveoli and bronchioles, the nasal and oral mucous membranes quickly dry out. Dry mucous membranes lose their ability to humidify the air we breathe and also become uncomfortable. Applying oxygen via a humidifier can help prevent this from occurring.\nRecall from earlier in this article that under high oxygen environments, cells metabolize oxygen more quickly. This is because there is an increased pressure from the dissolved oxygen, the PaO2, forcing oxygen into the cell, thereby increasing oxygen consumption and the production of the toxic oxygen molecule byproduct O2-.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-1", "d_text": "Muscle biopsy shows degenerating and regenerating muscle fibers, with connective tissue and fat replacing lost muscle tissue. DNA testing may also be helpful.\nLong Term Effects\nWhat are the long-term effects of the disease?\nDuchenne muscular dystrophy causes progressive deterioration of the muscles, and loss of function. The muscle deterioration leads to the use of braces and eventually a wheelchair. Impaired lung function leads to serious lung infections, aspiration, and death.\nWhat are the risks to others?\nDuchenne muscular dystrophy is not contagious. It is an X-linked recessive disorder. This means that an unaffected woman who carries the gene can pass it, on average, to one half of her sons.\nWhat are the treatments for the disease?\nThe major goal of early treatment is to keep the ability to walk as long as possible. 
Stretching exercises and using braces at night help avoid the tightening of tendons.\nSurgery can be used to release tight tendons. Walking and body movement should be resumed immediately after surgery if possible. Inactivity makes the disease worse and can lead to rapid deterioration of lung function. Prednisone has been shown to slow the progression of the disease for up to three years.\nVaccination against influenza and pneumococcal infections is important. Also, weight control is essential. Obesity adversely impacts physical and respiratory functions.\nHow is the disease monitored?\nAny new or worsening symptoms should be reported to the healthcare provider.\nThe Muscular Dystrophy Association: http://www.mdausa.org/disease/dmd.html", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-4", "d_text": "Neuromuscular blockade and proning have both been shown to reduce mortality in those with a PaO2:FiO2 ratio < 150 when initiated within the first few days of treatment (Alhazzani 2013, Guerin 2013). Although proning may be difficult in the ED, neuromuscular blockade with adequate sedation would be reasonable in the challenging, severely hypoxemic patients after discussion with the accepting ICU team.\nWhen your patient has high plateau pressure and persistent hypoxemia despite appropriately high PEEP, low Vt and neuromuscular blockade, ask for help from your intensivist colleagues and consider sending to a center with ECMO capabilities.\nMechanical Ventilation in Severe Metabolic Acidosis\nMetabolic acidosis, as seen with DKA and salicylate toxicity, is a significant stimulus for tachypnea and a high minute ventilation. The patient is compensating for their acidosis by blowing off their CO2. The post-intubation period is dangerous!
During this time the patient is still paralyzed and sedated, unable to compensate by breathing rapidly on their own. Do your best to match the patient’s pre-intubation minute ventilation by giving appropriate tidal volumes (starting at 8 mL/kg) and a respiratory rate similar to their pre-intubation rate. They will require a repeat blood gas shortly after intubation to ensure that you are ventilating adequately and not allowing their underlying acidosis to worsen. Once the paralytic has worn off, some patients may be most comfortable setting their own respiratory rate in a pressure support mode. Others may require deep sedation to allow for synchrony with a set rapid respiratory rate and a safe Vt ~ 8 mL/kg. With a faster set rate, watch for air trapping in the flow waveform and avoid breath stacking.\nProviding excellent care to intubated patients in the ED will save lives. When taking care of obstructive lung disease, remember three principles: slow respiratory rate, tolerate hypercarbia, and check for air-trapping. These patients will need to be adequately sedated and potentially paralyzed for ventilator compliance. When taking care of hypoxemic patients, first identify whether it is a unilateral or bilateral process. Refractory hypoxemia in the setting of unilateral disease should be treated with low Vt ventilation and positioning with the “good lung” down to decrease the intrapulmonary shunt.", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-1", "d_text": "The surgery duration was about 240 minutes, and this prolongation was due to a further undiagnosed stenosis of the biliary tract. His medical history revealed DMD disability, moderate restrictive pulmonary dysfunction, mild hypokalemia, and hypertension. 
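The initial ventilator strategy described above — a tidal volume starting at 8 mL/kg and a set rate chosen to reproduce the patient's pre-intubation minute ventilation — reduces to simple arithmetic. A minimal sketch; the Devine ideal-body-weight formula and the function names are assumptions for illustration (the text says only "mL/kg"), so verify against local protocols:

```python
def ideal_body_weight_kg(height_cm: float, male: bool = True) -> float:
    """Devine formula: 50 kg (men) / 45.5 kg (women) + 2.3 kg per inch over 5 ft.

    This choice of weight formula is an assumption, not from the source text.
    """
    inches_over_5ft = max(0.0, height_cm / 2.54 - 60.0)
    return (50.0 if male else 45.5) + 2.3 * inches_over_5ft

def initial_vent_settings(height_cm: float, male: bool,
                          preintubation_minute_vent_l: float):
    """Return (tidal volume in mL, set rate in breaths/min).

    Vt starts at 8 mL/kg; the rate is picked so that Vt * rate matches the
    patient's pre-intubation minute ventilation, as the text suggests.
    """
    vt_ml = 8.0 * ideal_body_weight_kg(height_cm, male)
    rate = preintubation_minute_vent_l * 1000.0 / vt_ml
    return round(vt_ml), round(rate)

# A 175 cm man who was moving ~16 L/min before intubation:
print(initial_vent_settings(175, True, 16.0))  # (564, 28)
```

Note that a computed rate this fast is exactly the situation in which the text warns to watch the flow waveform for air trapping and breath stacking.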
His preoperative laboratory tests were hemoglobin 13.9 g/dL, hematocrit 43.5%, platelets 202,000, sodium 141 mmol/L, potassium 3 mmol/L, magnesium 0.58 mg/dL, creatinine 0.06 mg/dL, total calcium 8.72 mg/dL, lactic dehydrogenase (LDH) 230 U/L, direct bilirubin 230 U/L, and alkaline phosphatase 130 U/L. Because peripheral venous access is commonly difficult to obtain in such patients, central venous access was established by ultrasound-guided cannulation of the right internal jugular vein. In the preoperative room we prepared our patient with antibiotic prophylaxis (ciprofloxacin 2 g; metronidazole 500 mg) and an antiemetic agent, ondansetron 4 mg. Our patient was monitored by pulse oximetry, expiratory capnography, invasive and noninvasive blood pressure, electrocardiogram, diuresis, and neuromuscular transmission by train-of-four repeated every 12 seconds at the adductor pollicis muscle (TOF Guard, Organon Teknika B.V., Boxtel, The Netherlands). We induced anesthesia with oxygen, propofol 150 mg, fentanyl 200 mcg, and rocuronium bromide 10 mg, and then proceeded to rapid sequence endotracheal intubation (tube diameter 7.5 mm). 
Anesthesia was maintained with fentanyl in a total dose of 400 mcg (200-100-100), rocuronium bromide 5 mg repeated every 45 minutes at T4/T1 recovery of 25%, sevoflurane 2%, and O2 40% in air.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-2", "d_text": "For example, PsqO2 approaches 120 mm Hg when PaO2 reaches 300 mm Hg in nonobese patients compared with 50 mm Hg in the morbidly obese with similar PaO2.22 Administration of 50% inspired oxygen, a concentration that produces a PaO2 of approximately 300 mm Hg for most patients, results in critically low (approximately 40 mm Hg) perioperative subcutaneous tissue oxygenation27 in the morbidly obese, a value associated with a high risk of infection.28 Obese patients are thus on the “steep part of the curve” relating PsqO2 to neutrophil production of high-energy oxidative species.8\nMorbidly obese surgical patients are not just at risk of inadequate tissue oxygenation during surgery. Obstructive sleep apnea, which is rampant in this population, is directly proportional to the body mass index (BMI).29–31 The associated arterial desaturation is especially severe after surgery because the syndrome is markedly worsened by opioid analgesics. Obstructive sleep apnea reduces arterial oxygenation during sleep30,32 and presumably also reduces tissue oxygenation intermittently. Supplemental oxygen may thus be especially helpful in the morbidly obese since baseline tissue oxygenation is low and, in many cases, will be further reduced by opioid-aggravated perioperative obstructive sleep apnea.\nMorbidly obese patients may thus especially benefit from extending the duration of supplemental oxygen to include the first postoperative night, when the likelihood of hypoxia-related complications may be the highest due to residual effects of general anesthesia and opioid analgesia. 
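The figure quoted above — that roughly 50% inspired oxygen produces a PaO2 near 300 mm Hg — can be sanity-checked with the standard alveolar gas equation, PAO2 = FiO2 × (Pb − PH2O) − PaCO2/RQ. A rough sketch; the numeric inputs (sea-level barometric pressure 760 mm Hg, water vapor pressure 47 mm Hg, PaCO2 40 mm Hg, RQ 0.8) are textbook assumptions, not values from the source:

```python
def alveolar_po2(fio2: float, pb: float = 760.0, ph2o: float = 47.0,
                 paco2: float = 40.0, rq: float = 0.8) -> float:
    """Alveolar oxygen tension (mm Hg) from the alveolar gas equation.

    Default constants are conventional sea-level assumptions.
    """
    return fio2 * (pb - ph2o) - paco2 / rq

print(round(alveolar_po2(0.21), 1))  # room air: 99.7 mm Hg
print(round(alveolar_po2(0.50), 1))  # 50% oxygen: 306.5 mm Hg
```

Arterial PaO2 sits somewhat below this alveolar value because of the alveolar-arterial gradient, which is consistent with the text's "approximately 300 mm Hg" for most patients breathing 50% oxygen.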
We therefore tested the hypothesis that the risk of major complications related to infection or inadequate healing is lower in morbidly obese patients who are given approximately 80% supplemental inspired oxygen for 12 to 16 hours after gastric bypass surgery than in those given approximately 30% oxygen (2 L/min via nasal cannula).\nPatients were recruited at 3 different hospitals: Cleveland Clinic, Cleveland, Ohio; University of Vienna, Vienna, Austria (AKH); and the Norton Hospital, Louisville, Kentucky. Approval was obtained from the IRBs of each of the 3 participating hospitals, and written informed consent was obtained from each patient. Patients having Roux-en-Y gastric bypass, either open or laparoscopic, with anticipated primary wound closure were included.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-8", "d_text": "The body quickly recognizes that it can maintain the same PaO2 without having to work as hard, and over time it adjusts to the higher alveolar oxygen levels, maintaining its arterial oxygen level as the new baseline. The net result of this can be a decreased respiratory rate.4\nThe well-documented and clinically important piece of this condition is that oxygen-induced hypercapnia most commonly occurs in otherwise asymptomatic, relaxed and unstimulated patients, such as a patient who is sleeping. It does not occur in patients with acute respiratory distress, who often are experiencing a catecholamine release stimulating increased respiratory and circulatory rates.2\nClinical symptoms of oxygen-induced hypercapnia include a rising CO2 level, which can be measured with a side-stream CO2 device, altered mental status including confusion, complaints of headaches, and a somnolent appearance.1\nPrevention of Complications\nPreventing complications from oxygen administration is fairly straightforward. 
To start, whenever possible, pad the straps and tubing of oxygen delivery systems, particularly on patients who receive oxygen long term. Also, consider increasing the use of humidified oxygen to prevent drying out mucous membranes. Oxygen humidifiers are inexpensive and greatly increase patient comfort. Also, elevating a patient’s head and chest at least 30 degrees promotes lung expansion and helps prevent aspiration.\nNever withhold oxygen from patients who are in respiratory distress or hypoxic. Oxygen is truly a lifesaving drug. During major resuscitations, such as cardiac arrest and major traumas, 100% oxygen is indicated. However, for most all other patients, consider limiting oxygen to maintain SpO2 in the 90%–95% range; this also keeps the PaO2 above 60 mm Hg.1 Research has consistently shown that oxygen’s maximum benefit is obtained when delivered in the 22%–50% range4, and its benefit is limited after 6 hours of administration.3\nNeonatal patient management requires special consideration. Whenever possible, utilize room air when initiating resuscitation. Only administer oxygen when the neonate remains bradycardic after 90 seconds of resuscitation efforts.5\nThe administration of oxygen is safe and effective for patients who are in respiratory distress or who are hypoxic. Never feel that oxygen needs to be withheld. However, keep in mind that there are real consequences to the long term utilization of high-flow oxygen.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "Implications for Cardiac Function Following Rescue of the Dystrophic Diaphragm in a Mouse Model of Duchenne Muscular Dystrophy.\nBetts CA., Saleh AF., Carr CA., Muses S., Wells KE., Hammond SM., Godfrey C., McClorey G., Woffindale C., Clarke K., Wells DJ., Gait MJ., Wood MJA.\nDuchenne muscular dystrophy (DMD) is caused by absence of the integral structural protein, dystrophin, which renders muscle fibres susceptible to injury and degeneration. 
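The oxygen-titration targets discussed earlier (hold SpO2 in the 90%–95% band for most patients, which generally also keeps PaO2 above 60 mm Hg) can be expressed as a small check. A hypothetical helper for illustration only; the function name and messages are assumptions, not from any clinical guideline:

```python
def spo2_titration_advice(spo2_percent: float) -> str:
    """Compare a pulse-oximetry reading against the 90-95% target band
    described in the text. Illustrative only, not clinical guidance."""
    if spo2_percent < 90.0:
        return "below target: consider increasing oxygen"
    if spo2_percent > 95.0:
        return "above target: consider weaning oxygen"
    return "within the 90-95% target band"

for reading in (88, 93, 98):
    print(reading, spo2_titration_advice(reading))
```

As the text stresses, such a check applies to routine titration only — during major resuscitations 100% oxygen remains indicated, and oxygen is never withheld from a hypoxic patient.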
This ultimately results in cardiorespiratory dysfunction, which is the predominant cause of death in DMD patients, and highlights the importance of therapeutic targeting of the cardiorespiratory system. While there is some evidence to suggest that restoring dystrophin in the diaphragm improves both respiratory and cardiac function, the role of the diaphragm is not well understood. Here using exon skipping oligonucleotides we predominantly restored dystrophin in the diaphragm and assessed cardiac function by MRI. This approach reduced diaphragmatic pathophysiology and markedly improved diaphragm function but did not improve cardiac function or pathophysiology, with or without exercise. Interestingly, exercise resulted in a reduction of dystrophin protein and exon skipping in the diaphragm. This suggests that treatment regimens may require modification in more active patients. In conclusion, whilst the diaphragm is an important respiratory muscle, it is likely that dystrophin needs to be restored in other tissues, including multiple accessory respiratory muscles, and of course the heart itself for appropriate therapeutic outcomes. This supports the requirement of a body-wide therapy to treat DMD.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-6", "d_text": "Respir Physiol 7: 7-29.\n- Luks AM, Swenson ER (2007) Travel to high altitude with pre-existing lung disease. Eur Respir J 29: 770-792.\n- Lenfant C, Sullivan K (1971) Adaptation to high altitude. N Engl J Med 284: 1298-1309.\n- Lenfant C, Torrance JD, Reynafarje C (1971) Shift of the O2-Hb dissociation curve at altitude: mechanism and effect. J Appl Physiol 30: 625-631.\n- Bateman NT, Leach RM (1998) ABC of oxygen. Acute oxygen therapy. BMJ 317: 798-801.\n- Goonasekera C, Kunst G, Mathew M (2017) End tidal carbon dioxide (ETCO2) and peripheral pulse oximetry (SpO2) trends and right-to-left shunting in neonates undergoing non cardiac surgery under general anesthesia. 
Pediatric Anesthesia and Critical Care Journal 5: 40-45.\n- Ng A, Swanevelder J (2011) Hypoxaemia associated with one-lung anaesthesia: new discoveries in ventilation and perfusion. BJA 106: 761-763.\n- Dimitriou G, Greenough A, Pink L, McGhee A, Hickey A, et al. (2002) Effect of posture on oxygenation and respiratory muscle strength in convalescent infants. Arch Dis Child Fetal Neonatal Ed 86: F147-F150.\n- Schwarzkopf K, Schreiber T, Bauer R, Schubert H, Preussler NP, et al. (2001) The effects of increasing concentrations of isoflurane and desflurane on pulmonary perfusion and systemic oxygenation during one-lung ventilation in pigs. Anesthesia & Analgesia 93: 1434-1438.\n- Tripathi RS, Papadimos TJ (2011) "Life-threatening" hypoxemia in one-lung ventilation. Anesthesiology 115: 438; author reply 439-441.\n- Deem S, Bishop MJ, Alberts MK (1995) Effect of anemia on intrapulmonary shunt during atelectasis in rabbits. J Appl Physiol 79: 1951-1957.", "score": 15.758340881307905, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "Duchenne’s muscular dystrophy (DMD) is the most common and severe type of myopathy. Anesthesia was maintained by fentanyl, rocuronium bromide, O2, and sevoflurane. We report in this case the safe use of sugammadex to antagonize the neuromuscular block, with rapid recovery, in this group of patients. 1 Introduction Duchenne muscular dystrophy (DMD) is a rare hereditary X-linked recessive disorder, but it is among the most frequent hereditary conditions, affecting around 1 in 3,500 male births worldwide. It is usually recognized between three and six years of age. DMD is characterized by weakness and wasting (atrophy) of the muscles of the pelvic region, followed by involvement of the shoulder muscles. 
As the disease progresses, muscle weakness and atrophy spread to affect the trunk and forearms and gradually progress to involve additional muscles of the body [1, 2]. The anesthetic management of these patients is complicated not only by muscle weakness but also by cardiac and pulmonary manifestations. However, there is no definite recommendation for either regional or general anaesthesia. Succinylcholine and volatile anaesthetics are best avoided since there is a risk of hyperkalemic cardiac arrest or severe rhabdomyolysis. Some authors have recommended intubation and anesthesia without resorting to muscle relaxants, to avoid postoperative respiratory failure related to the use of muscle relaxants and the other problems induced by acetylcholinesterase inhibitors. However, anesthesia without muscle relaxants may not always be suitable for some surgical procedures, such as the one performed in our patient. Case reports in patients with myasthenia gravis document the successful use of sugammadex (six case reports). For other rare muscular diseases like Duchenne muscular dystrophy, recent reports document the effective reversal of rocuronium with sugammadex in pediatric patients [5-9]. In this case report we document the safety of sugammadex in an adult Duchenne disease patient. 2 Case Presentation A 25-year-old man with DMD with a modified Barthel index of 23 (the Barthel index is an ordinal scale used to measure performance in activities of daily living) (BMI 25.6, ASA III) was scheduled for open cholecystectomy under general anesthesia.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "Tips For The Use Of Supplemental Oxygen\n1. Keep away from any flame or spark such as gas stoves, fireplaces and candles. Even electric razors can cause sparks, so do not use your oxygen whilst shaving. 
Oxygen itself isn’t flammable, but it is a powerful oxidizer: it makes fires ignite more easily and burn hotter and faster.\n2. Do not allow any smoking anywhere near. Some people put signs up in their homes for visitors to let them know.\n3. When cooking, try not to wear loose-fitting clothes and stay as far away from the heated surface as possible.\n4. Avoid using any aerosol products as they can ignite in the presence of a spark.\n5. Do not allow flammable liquids to get on your clothing or body as, unless washed out thoroughly, these could become a hazard.\n6. Do not place your oxygen concentrator in an unventilated area, such as a closet. Not only does the concentrator generate a lot of heat, but it uses the surrounding air to produce oxygen, so the oxygen in the atmosphere will quickly become depleted in small spaces.\n7. Secure all cylinders to prevent them from falling over. A falling oxygen cylinder can cause damage to the valve, releasing the pressure, which may cause it to become a dangerous projectile.\n8. Call your electric company to inform them that you are using oxygen. Firstly, some electric companies have a program that allows a reduction of your rates to help lower the cost of running the concentrator. Secondly, they will generally put you first in line when restoring power after an outage. They may also be able to provide specialist adaptors or devices to aid you with your mobility and medical equipment to make life easier.\n9. Oxygen hoses can be a tripping hazard, so try to place your concentrator in a position for maximum mobility but also where the hose will not cause you or others to trip. Use a coloured hose to make it more visible.\n10. Keep the hoses clean and replace them on a regular basis. 
Make sure the filters are replaced regularly, wipe it down with a damp cloth to remove dust and clean tubing to prevent mould if you use water to humidify your oxygen.\nEnsure you have an emergency plan arranged in case there is a power outage.\n• Inform your power company that you are oxygen-dependent. Many companies offer oxygen-dependent patients priority service and will inform you of upcoming maintenance/outages and ensure your power is restored as a priority.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-1", "d_text": "The practice of using supplemental oxygen to treat all patients with MI became standard nearly a century ago, after oxygen was found in 1900 to relieve angina, and led to clinical improvement in four MI patients in a 1930 case series.5,6\nIt was not studied in a controlled trial until 1976, when J.M. Rawles, MD, and colleagues randomized 157 patients with MI to 24 hours of oxygen at 8 L/min or to ambient air. They found no difference in mortality between the groups, but they did find a higher burden of MI in the intervention arm receiving supplemental oxygen, as measured by mean serum aspartate aminotransferase levels.7\nThe topic was not addressed again in a significant randomized trial until this century. Most notably, two recent studies again demonstrated no benefit of supplemental oxygen in normoxemic patients with MI.\nIn the AVOID trial in 2015, Dion Stub, MD, PhD, and colleagues randomized 441 patients with STEMI to oxygen at 8 L/min – from diagnosis in an ambulance until after cardiac catheterization – or to ambient air. 
They found no difference in death at 6 months, but did find an increased rate of in-hospital recurrent MIs, with 0.9% of the control group and 5.5% of the oxygen intervention arm suffering recurrence (P = .006).8 They also showed a larger area of myocardial infarct in the oxygen group, as measured by peak creatine kinase levels and cardiac MRI at 6 months.\nProposed mechanisms of increased myocardial injury from hyperoxia include increased coronary vascular resistance resulting in decreased myocardial perfusion, and increased reperfusion injury from formation of free radicals.9\nWhere does all this leave us in the treatment of suspected MI?\nMorphine should only be used when the patient has pain, and is probably best reserved for severe pain, as the safety of its use is not clear. While hypoxemia is a common consequence of MI – and may correlate with worse outcomes – treatment with supplemental oxygen in the absence of hypoxemia is not supported by current evidence, and may carry risk of harm. Nitroglycerin should be avoided in patients with right ventricular infarcts, and in patients who present with hypotension.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "Diet\nIt is important that children with Duchenne muscular dystrophy eat a well-balanced diet. Children on steroids, and those with reduced mobility, tend to put on weight, which in turn can add pressure to already weakened muscles. A high-protein diet is recommended. Care should be taken to avoid fatty meats like beef, lamb or pork; instead, some dieticians suggest that there should be an emphasis on protein in the form of fish, leaner meats like chicken, or vegetable protein like beans or soya. As for all children, it is suggested that refined foods such as white bread, sugars and pasta are reduced or excluded. 
Constipation can be a problem, caused by a combination of weak stomach muscles and immobility, so a high-fibre diet - rich in fresh fruits and vegetables - is suggested. It is important, too, to keep hydrated. Some people use supplements including calcium and vitamin D to keep muscles and bones strong.\nDietary considerations when starting corticosteroids\nDuchenne patients using corticosteroids like prednisone or deflazacort, and those with heart problems, may need a low-salt diet. They may also experience gastroesophageal reflux (GORD). A low-fat, high-fibre diet that's heavy on whole grains, fruits and vegetables, and lean meats may help. They are also at risk of osteoporosis - a thinning and weakening of the bones. Vitamin D, calcium supplements and bisphosphonate infusions can help with this. Foods naturally high in vitamin D include fatty fish, liver, and eggs. Vitamin D is also made naturally when skin is exposed to sunshine. Foods containing calcium include dairy products, green vegetables such as broccoli and kale, figs, oranges, salmon, sardines, soya and almonds.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-19", "d_text": "Sharkey KM, Machan JT, Tosi C, Roye GD, Harrington D, Millman RP. Predicting obstructive sleep apnea among women candidates for bariatric surgery. J Womens Health (Larchmt). 2010;19:1833–41\n32. Ahmad S, Nagle A, McCarthy RJ, Fitzgerald PC, Sullivan JT, Prystowsky J. Postoperative hypoxemia in morbidly obese patients with and without obstructive sleep apnea undergoing laparoscopic bariatric surgery. Anesth Analg. 2008;107:138–43\n33. Kurz A, Kurz M, Poeschl G, Faryniak B, Redl G, Hackl W. Forced-air warming maintains intraoperative normothermia better than circulating-water mattresses. Anesth Analg. 1993;77:89–95\n34. Hynson JM, Sessler DI. Intraoperative warming therapies: a comparison of three devices. J Clin Anesth. 1992;4:194–9\n35. Bazuaye EA, Stone TN, Corris PA, Gibson GJ. 
Variability of inspired oxygen concentration with nasal cannulas. Thorax. 1992;47:609–11\n36. Slessarev M, Somogyi R, Preiss D, Vesely A, Sasano H, Fisher JA. Efficiency of oxygen administration: sequential gas delivery versus “flow into a cone” methods. Crit Care Med. 2006;34:829–34\n37. Kabon B, Rozum R, Marschalek C, Prager G, Fleischmann E, Chiari A, Kurz A. Supplemental postoperative oxygen and tissue oxygen tension in morbidly obese patients. Obes Surg. 2010;20:885–94\n38. Haley RW, Culver DH, Morgan WM, White JW, Emori TG, Hooton TM. Identifying patients at high risk of surgical wound infection. A simple multivariate index of patient susceptibility and wound contamination. Am J Epidemiol. 1985;121:206–15\n39.", "score": 13.897358463981183, "rank": 90}, {"document_id": "doc-::chunk-5", "d_text": "Women and/or couples need to consider carefully which test to have and to discuss this with their genetic counselor. Earlier testing would allow early termination which would probably be less traumatic for the couple, but it carries a slightly higher risk of miscarriage than later testing (about 2%, as opposed to 0.5%).\nAppearance of the Patient\nThere is no known cure for Duchenne muscular dystrophy yet although recent stem-cell research is showing some ways to replace damaged muscle tissue. Treatment is aimed at control of symptoms to maximize the quality of life.\n- Corticosteroids such as prednisone and deflazacort increase energy and strength and defer severity of some symptoms.\n- Mild, non-jarring physical activity such as swimming is encouraged. Inactivity (such as bed rest) can worsen the muscle disease.\n- Physical therapy may be helpful to maintain muscle strength and function.\n- Orthopedic appliances (such as braces and wheelchairs) may improve mobility and the ability for self-care. 
Form-fitting removable leg braces that hold the ankle in place during sleep can defer the onset of contractures.\n- Appropriate respiratory support as the disease progresses is important\nDuchenne muscular dystrophy eventually affects all voluntary muscles and involves the heart and breathing muscles in later stages. Survival is rare beyond the early 30s, although recent advancements in medicine are extending the lives of those afflicted. Death typically occurs from respiratory failure or heart disorders.\nPhysiotherapists are concerned with enabling children to reach their maximum physical potential. Their aim is to:\n- minimize the development of contractures and deformity by developing a programme of stretches and exercises where appropriate\n- anticipate and minimise other secondary complications of a physical nature\n- monitor respiratory function and advise on techniques to assist with breathing exercises and methods of clearing secretions\nMechanical Ventilatory Assistance: Volume Ventilators\nModern \"volume ventilators,\" which deliver a preset volume (amount) of air to the person with each breath, are valuable in the treatment of people with muscular dystrophy related respiratory problems. 
Ventilator treatment usually begins in childhood when the respiratory muscles begin to fail.\nWhen the vital capacity has dropped below 40 percent of normal, a volume ventilator may be used during sleeping hours, a time when the person is most likely to be underventilating (\"hypoventilating\").", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-2", "d_text": "- The patient and the health care team should be aware that known, treated, partially treated and untreated OSA as well as suspected OSA may be associated with increased postoperative morbidity.\n- Consideration should be given to obtaining the results of the sleep study and the recommended Positive Airway Pressure (PAP) setting before surgery.\n- If resources allow, facilities should consider having PAP equipment for perioperative use, or have patients bring their own PAP equipment with them to the surgical facility.\n- Additional evaluation for preoperative cardiopulmonary optimization should be considered in patients with known, partially treated/untreated and suspected OSA who have uncontrolled systemic conditions e.g. i) hypoventilation syndromes, ii) severe pulmonary hypertension, and iii) resting hypoxemia in the absence of other cardiopulmonary disease.\n- Patients with known OSA, partially treated/untreated and suspected OSA with optimized co-morbid conditions may proceed to surgery provided strategies for mitigation of postoperative complications are implemented.\n- The risks and benefits of the decision should include consultation and discussion with the surgeon and the patient.\n- The use of PAP therapy in previously undiagnosed but suspected OSA patients should be considered case by case. Due to the lack of evidence from randomized controlled trials, we cannot recommend its routine use.\n- Continued use of PAP therapy at previously prescribed settings is recommended during periods of sleep while hospitalized, both preoperatively and postoperatively. 
Adjustments may need to be made to the settings to account for perioperative changes such as facial swelling, fluid shifts, recent pharmacotherapy, and pulmonary function.\nThe primary goal of this SASM guideline is to ensure optimal pre-operative evaluation of patients with known or suspected OSA in order to improve patient safety. It is hoped that the recommendations from this guideline will influence clinical practice as well as stimulate additional research to address the questions for which there is currently insufficient evidence to support recommendations.\nAdequate control of respiratory function under sedation-analgesia remains a challenge for the anesthesiologist because neither respiratory rate, tidal volume, pulse oximetry nor capnography are sensitive and specific enough measures. The present study uses a continuous transcutaneous noninvasive measure of CO2 which directly reflect the efficiency of ventilatory function as a pharmacodynamic endpoint of respiratory depression in response to propofol and remifentanil predicted concentrations to construct a mathematical model based on the data collected from 136 patients.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-18", "d_text": "Meyhoff CS, Wetterslev J, Jorgensen LN, Henneberg SW, Høgdall C, Lundvall L, Svendsen PE, Mollerup H, Lunn TH, Simonsen I, Martinsen KR, Pulawska T, Bundgaard L, Bugge L, Hansen EG, Riber C, Gocht-Jensen P, Walker LR, Bendtsen A, Johansson G, Skovgaard N, Heltø K, Poukinski A, Korshin A, Walli A, Bulut M, Carlsson PS, Rodt SA, Lundbech LB, Rask H, Buch N, Perdawid SK, Reza J, Jensen KV, Carlsen CG, Jensen FS, Rasmussen LSPROXI Trial Group. . Effect of high perioperative oxygen fraction on surgical site infection and pulmonary complications after abdominal surgery: the PROXI randomized clinical trial. JAMA. 2009;302:1543–50\n26. Pryor KO, Fahey TJ 3rd, Lien CA, Goldstein PA. 
Surgical site infection and the routine use of perioperative hyperoxia in a general surgical population: a randomized controlled trial. JAMA. 2004;291:79–87\n27. Kabon B, Nagele A, Reddy D, Eagon C, Fleshman JW, Sessler DI, Kurz A. Obesity decreases perioperative tissue oxygenation. Anesthesiology. 2004;100:274–80\n28. Choban PS, Heckler R, Burge JC, Flancbaum L. Increased incidence of nosocomial infections in obese surgical patients. Am Surg. 1995;61:1001–5\n29. Knorst MM, Souza FJ, Martinez D. [Obstructive sleep apnea-hypopnea syndrome: association with gender, obesity and sleepiness-related factors]. J Bras Pneumol. 2008;34:490–6\n30. Gallagher SF, Haines KL, Osterlund LG, Mullen M, Downs JB. Postoperative hypoxemia: common, undetected, and unsuspected after bariatric surgery. J Surg Res. 2010;159:622–6\n31.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-9", "d_text": "It is recommended to choose anesthetic methods and drugs with minimal impact on the child, on the basis of ensuring the safety and comfort of the child as well as the safety of medical workers. For pediatric patients who cry frequently and do not cooperate, appropriate sedation can be given before surgery to reduce the risk of saliva and droplet transmission. Level-three protective measures are required for anesthesiologists, and an additional layer of gloves should be worn before endotracheal intubation for general anesthesia and removed after intubation is complete. Children who already have venous access should be sedated intravenously as soon as possible after entering the room; for those without venous access, sedation can be achieved with inhaled sevoflurane, followed by establishment of venous access. Before anesthesia induction, the patient's mouth and nose should be covered with a double layer of warm, moist gauze, followed by sustained high-flow preoxygenation by mask. 
Rapid anesthesia induction is recommended, with moderate sedation and sufficient muscle relaxant to avoid choking cough; intubation should be performed by skilled pediatric anesthesiologists under optimal conditions to maximize the first-pass success rate. In case of a difficult airway, a laryngeal mask should be placed after a failed first attempt at endotracheal intubation, to avoid the infection risk posed by repeated intubation attempts. A video laryngoscope should be used to avoid working close to the patient's airway. After endotracheal intubation, oral secretions can be cleared with a closed suction system to avoid secretion contamination, provided there is no airway obstruction. Monitoring should be strengthened during the operation, and severely ill patients must be closely monitored and treated promptly, given the possibility of acute lung injury, ARDS, heart failure, acid-base imbalance, and electrolyte disorders. At the end of the operation, the endotracheal tube should be removed under deep sedation with strengthened monitoring. Sputum suctioning should be completed before the child wakes, and lidocaine may be given before extubation. The patient's mouth and nose should be covered with two warm, moist gauzes to minimize the secretion spatter caused by choking cough.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-12", "d_text": "Morray JP, Geiduschek JM, Caplan RA, Posner KL, Gild WM, Cheney FW: A comparison of pediatric and adult anesthesia closed malpractice claims. Anesthesiology 1993; 78: 461–7\n15. Greif R, Laciny S, Rapf B, Sessler DI: Does supplemental perioperative oxygen reduce the incidence of postoperative nausea and vomiting? (abstract). Anesthesiology 1998; 89: A1201\n16. 
Tramer MR, Reynolds DJ, Moore RA, McQuay HJ: Efficacy, dose-response, and safety of ondansetron in prevention of postoperative nausea and vomiting: A quantitative systematic review of randomized placebo-controlled trials. Anesthesiology 1997; 87: 1277–89\n17. Watcha MF, White PF: Postoperative nausea and vomiting: Do they matter? Eur J Anaesthesiol 1995; 10(suppl): 18–23\n18. Evans NTS, Naylor PFD: Respiration. Physiologist 1966; 2: 61–72\n19. Gottrup F, Firmin R, Rabkin J, Halliday B, Hunt TK: Directly measured tissue oxygen tension and arterial oxygen tension assess tissue perfusion. Crit Care Med 1987; 15: 1030–6\n20. Silver IA: The measurement of oxygen tension in healing tissue, Progress in Respiration Research III: International Symposium on Oxygen Pressure Recording. Edited by Kreuzer F. Basel, Karger, 1968, pp 124–35\n21. Hoffman WE, Charbel FT, Portillo GG, Edelman G, Ausman JI: Regional tissue pO2, pCO2, pH and temperature measurement. Neurol Res 1998; 20(suppl): S81–4\n22. Neubauer RA, James P: Cerebral oxygenation and the recoverable brain. Neurol Res 1998; 20(suppl): S33–6\n23. Hopf HW, Jensen JA, Hunt TK: Calculation of subcutaneous tissue blood flow.
The machine can easily fit on a ventilator tray on the bottom of a power wheelchair.\nThere may be times such as during a respiratory infection when a person needs to rest his/her respiratory muscles during the day even when not yet using full-time ventilation. The versatility of the volume ventilator can meet this need, allowing tired breathing muscles to rest and also allowing aerosol medications to be delivered.\nResearching a cure\nPromising research is being conducted around the globe to find a cure, or at minimum a therapy that is able to mitigate some of the devastating effects of the disease. Finding a cure is made more complex by the number and variation of genetic mutations in the dystrophin gene that result in DMD.\nIn the area of stem cell research, a recent paper was published in Nature Cell Biology that describes the identification of pericyte-derived cells from human skeletal muscle. These cells have shown to fulfill important criteria for consideration of therapeutic uses. That is, they are easily accessible in postnatal tissue, they are able to grow to a large enough number in vitro to provide enough cells for systemic treatment of a patient, they have been shown to differentiate into skeletal muscle, and, very importantly, they can reach skeletal muscle through a systemic route. This means that they can be injected arterially and cross through arterial walls into muscle, unlike past hopeful therapeutic cells such as muscle satellite cells which require the impractical task of intramuscular injection. These findings show great potential for stem cell therapy of DMD. 
In this case a small biopsy of skeletal muscle from the patient would be collected, the pericyte-derived cells would be extracted and grown in culture, and then these cells would be injected into the blood stream where they could navigate into and differentiate into skeletal muscle.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-1", "d_text": "\"We used a failure mode effect and analysis [FMEA] to identify the potential risks for surgical fire and to look at how we could minimize those risks,\" says Leanne Bales, RN, CNOR, administrator of the center.\n\"I like the FMEA approach to potential problems because it is a proactive way to make changes and it gets staff members involved in the process,\" she notes. The five nurses who worked on the FMEA for fire in the operating room presented their findings to the staff in training sessions that included fire hats for the presenters and fireball jawbreakers for audience members who could answer questions about surgical fire risks, Bales says.\nOne key to reducing the risk of fire is to recognize the potential for one to happen, she adds.\nAwareness of the increased risk of fire when oxygen is in use is key, Greco explains. In fact, he suggests surgeons evaluate their need for oxygen prior to any procedure.\nWhen patients are sedated without an anesthesiologist in an office-based program, surgeons tend to give oxygen routinely because it does ensure the patient maintains a good oxygen saturation rate, Greco says.\nHowever, this is not always necessary, he adds. \"If a surgeon uses pulse oximetry to monitor the patient during the procedure and administers oxygen only when necessary, the potential risk of fire is minimized because you’ve removed a combustible gas from the operating room,\" Greco explains.\nIf oxygen is used, use the lowest concentration necessary and use it in the open as opposed to allowing it to pool under surgical drapes, he recommends. 
\"If it has to be administered under drapes, be sure you vent or suction the extra oxygen from under the drape,\" he says.\nFinally, if you must use oxygen and you will be using an ignition source such as a laser or Bovie, be sure to turn off the oxygen and allow it to dissipate before turning on the laser or Bovie, Greco suggests. \"There is no magic number for the amount of time you must wait for oxygen to dissipate, but any delay before activating a Bovie or laser will reduce the risk of ignition,\" he says.\nThe oxygen-use policy at Effingham Surgery Center requires the oxygen to be turned off one minute prior to activating a device such as a cautery, Bales says. \"If the cautery is used on the face or neck area, the oxygen must stay off one minute after the cautery is turned off,\" she adds.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-1", "d_text": "A 21-year-old primigravida was posted for elective caesarean section because of anticipating difficult vaginal delivery in view of the disease at 37 weeks of gestation. There was no family history of polymyositis. On general physical examination, blood pressure was 124/72 mm Hg and pulse was 90/min. Cardiovascular and respiratory systems were normal. Neurological examination revealed a muscle power of 4/5 in most proximal muscle group, compared to distal muscle groups (5/5). Blood biochemistry revealed creatine kinase levels 526 U/L (n <167 U/L). EMG showed myopathic pattern. ECG and echo were normal. She was on the tablet prednisolone 50 mg once a day. Because of fetal distress the obstetrician planned an emergency caesarean section. She was premedicated with metoclopramide 10 mg and ranitidine 50 mg intravenous (iv). Subarachnoid block was given in L3-L4 interspace using 1.8 ml of 0.5% hyperbaric bupivacaine. Adequate sensory block was achieved up to T4. A female baby weighing 2 kg was delivered with APGAR score of 7 and 9 at 5 and 10 min respectively. 
She had one episode of hypotension, which was corrected by mephentermine 3 mg iv. Postoperative course was uneventful. The patient was continued on oral prednisolone in the postoperative period.\nPatients with polymyositis are at increased risk of pulmonary aspiration. Further, there are other implications of general anesthesia in these patients. Volatile anesthetic agents may not only serve as a trigger of malignant hyperthermia but also potentiate the effects of muscle relaxants. It is recommended that these agents be avoided in these patients, especially with raised plasma creatine phosphokinase levels. These patients are thought to be sensitive to nondepolarizing muscle relaxants, and use of their antagonist drugs may cause muscle weakness and dysrhythmias. Vecuronium and pancuronium are associated with prolonged neuromuscular paralysis, though atracurium has been reported as a safe drug under neuromuscular monitoring.
Circulation 123: 2717-2722.\n- Fukushima KY, Yatsuhashi H, Kinoshita A, Ueki T, Matsumoto T, et al. (2007) Two cases of hepatopulmonary syndrome with improved liver function following long-term oxygen therapy. J Gastroenterol 42: 176-180.\n- Peñaloza D, Sime F, Banchero N, Gamboa R, Cruz J, et al. (1963) Pulmonary hypertension in healthy men born and living at high altitudes. Am J Cardiol 11: 150-157.\n- Sime F, Banchero N, Penaloza D, Gamboa R, Cruz J, et al. (1963) Pulmonary hypertension in children born and living at high altitudes. Am J Cardiol 11: 143-149.\n- Lenfant C, Ways P, Aucutt C, Cruz J (1969) Effect of chronic hypoxic hypoxia on the O2-Hb dissociation curve and respiratory gas transport in man.", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-4", "d_text": "Currently Schwann cell transplantation is under investigation. \n[Table 1] shows the clinical picture in MS with its anaesthetic implications. ,,\nLiterature regarding anaesthetic management contains use of general anaesthesia (GA), spinal and epidural techniques. GA and epidural with low concentrations of LA are considered safe. ,, Spinal anaesthesia has been implicated in postoperative exacerbation, , so also epidurals with higher concentrations and longer duration. , The demyelinated neurons appear susceptible to the neurotoxicity of LA and aggravate the conduction blockade. ,, As regards conduct of G.A no particular agent among induction agents/inhalation agents is preferred. , Succinylcholine is best avoided as it can produce hyperkalemia due to denervation sensitivity by upregulation of acetylcholine receptors. sup>,\nThe latter can also cause resistance to non-depolarizing blocking agents (NDBA). Sensitivity to NDBAs can also be present due to muscle wasting and use of medications such as baclofen, dantrolene , [Table 1]. N-M monitoring is recommended with titration of the NDBA as required for the surgery. 
Even a 0.5°C rise of temperature can slow the conduction along the demyelinated segment, resulting in relapse, hence maintenance of O.T. temperature and temperature monitoring is essential.\nIn spite of prolonged periods of immobility, MS patients surprisingly showed the absence of thrombophlebitis and pulmonary embolism (PE) in contrast to other immobile patients suffering from stroke, quadriplegia and flaccid paraplegia. The presence of lower extremity spastic disease in MS patients appears to protect them from thrombophlebitis and PE, as the occurrence of muscle spasms could prevent venous stasis.\nCerebral venous thrombosis (CVT) has been reported in MS exacerbation patients who underwent lumbar puncture (L.P.) and were receiving I.V. methylprednisolone. Following L.P., high-dose corticosteroids are considered risk factors for CVT, and prophylactic anticoagulant treatment may be warranted when other risk factors of CVT also exist.\nAs the MRI scan of our patient showed involvement of cervical and upper thoracic segments, to detect autonomic dysfunction preoperatively we tested heart rate response to deep breathing.", "score": 8.086131989696522, "rank": 100}]} {"qid": 17, "question_text": "How can I take clearer photos underwater?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "It’s summer and you’re probably in or near the water.\nIf you have a waterproof camera: maybe your phone, an all weather point and shoot, or an underwater housing for your camera; getting good underwater portraits can be tricky.\nIn this article, I am going to share my top three tips to capture better underwater photos.\nUnderwater Photography Tip 1 – Pay Attention to the Direction of Light\nIt may sound obvious, but the direction of light is just as important underwater as it is on dry land.
The difference is that it can be very tricky to actually see light underwater, and also much more difficult to reliably position your models.\nBecause of the way that light is rapidly dispersed underwater, there is also a drastic difference between highlights and shadows – meaning that you will often find highlights overexposed and shadows underexposed.\nI usually assess the direction of light from the surface, then direct my models to swim or pose in a direction that will work best with the style of image that I’m trying to capture.\nI find that it usually works best to have your model approximately perpendicular to the direction of the light. Shooting directly into the light can be very dramatic, but you risk losing detail in the shadows. Shooting away from the light is my least preferred approach because you usually end up with a boring-looking, evenly lit subject.\nUnderwater Photography Tip 2 – Shoot at Mid-Day\nMid-day is usually when most photographers put away the camera and go get a drink. Harsh, mid-day sunshine usually looks terrible on land, but underwater mid-day is a great time to shoot.\nQuantity of light is often a problem underwater, especially if you are deeper than a few feet from the surface. Bright sunshine allows for faster shutter speeds and smaller apertures. Fast shutter speeds are important if you’re photographing a swimmer or someone moving quickly. A small aperture helps add a little buffer for focusing, which can be unreliable underwater.\nAlso, due to the angle of refraction between light traveling from air to water, a little chop on the surface is usually enough to drastically soften sunlight underwater, even at mid-day.\nUnderwater Photography Tip 3 – Spend Time Editing\nIt is very difficult to get a great underwater photo right out of your camera.
No matter what you do some heavy post-processing is almost always required.", "score": 51.8979104551605, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Taking a waterproof camera or a camera in an underwater housing (such as the Nikon 1 digital cameras that utilize Nikon waterproof housings) into the sea is one of the coolest types of photography that you can do, because it gives you such a unique perspective that few folks experience first hand with their photography. Cameras such as the Nikon COOLPIX AW100 and S30 make it easier than ever to “dive in” [pun intended] to underwater photography.\nWith a few easy tips you can begin to make some great underwater images.\nUse the camera’s underwater mode. Light—more importantly, the various colored light waves—do not behave the same underwater as they do above the surface of the water. As you descend, less and less of the colored wavelengths of light descend, but are filtered out. Red is the first to go, then orange and yellow. Without using a flash underwater, your subjects will be bathed in greenish, bluish light. Go deep enough and there won’t be any visible light reaching that deep. If, however, you were to use a flash or even a flashlight and shine it on your photographic subjects, you’d see that they are more colorful than what may be represented by the camera depending upon the depth you’re making photographs at.\nThe underwater mode is designed to filter out blue so your images are more representative of the actual color of the objects you’re photographing.\nGet as close to your subject as possible. Underwater, light has to not only travel from the camera to the subject, but also back to the camera to be recorded. That may sound odd, but it’s why you should use a wide-angle lens if possible rather than zoom into an underwater scene. This is, of course, if you’re shooting a subject that is safe to get close to. 
[Read: don’t use a wide-angle and get close to sharks or other predatory fish that might decide to make you their dinner!] By cutting the distance that light has to travel, your images will be clearer, with more fine detail visible.\nUse the Macro mode to get even closer to tiny subjects. In our last tip, we suggested using a wide-angle lens to get close to your subject. Now, try the macro mode on your waterproof COOLPIX, or if you’re shooting with a camera in a housing, use a macro lens to get even closer.", "score": 50.46410007489965, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "3 Tips in Using Auto Mode for Underwater Photography\nAll digital cameras have an auto mode. However, digital cameras were not designed for underwater photography, so if you are using auto mode, you will need to keep a few things in mind.\nIt is not recommended that you use flash while underwater. You can still capture great underwater photos by staying shallow in the water, shooting your photos between 10 am and 2 pm and shooting with the sun to your back. If it is not a sunny day, you may not want to take photos that day, unless you have a strobe.\nWhether you are set on using auto mode or decide to use manual mode, you should consider investing in at least one strobe. An external strobe will immediately and noticeably improve the quality of your digital photographs. A strobe is similar to a flash in that it produces artificial light, but the strobe is much more powerful.\nIf you are shooting in auto mode and are not using a strobe light, then filters can be very beneficial to you. Filters are small, colored gels that are placed in front of the camera lens.
You can purchase filters that are specifically designed for underwater use and these filters will help you get good color in your photos.", "score": 49.48700336679735, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Scuba Diving & Photography…How Do I Get That Good Shot?\nGiven all of the problems we talked about in the last chapter, how can we expect to take pictures that we can be proud of when scuba diving in Key Largo? How can we overcome the loss of light and color?\nThere are two answers to this question. One is to take the best possible picture using the available light, and the other is to use a “flash”. Using a “flash” is generally referred to as using a strobe in underwater photography. This chapter will concentrate on using available light.\nThe last chapter listed the ways that light gets diffused underwater. So, if we don’t have or can’t use a strobe we need to avoid all of the things that led to the light reduction. In other words, we need to shoot on a bright day, when the seas are calm, around midday. Luckily, if you are engaged in Key Largo scuba diving, these conditions occur fairly frequently. This turtle to the left was taken using only natural, or ambient light.\nAnother time when we can use natural light to our advantage is if we are photographing a very large subject.\nA strobe cannot cover enough area to light these three sperm whales, so that a photograph taken with available light looks quite natural.\nScuba diving wrecks provide another example of a large subject that looks good when taken in natural light. Sometimes, this natural lighting can be used to produce a black and white image that is quite dramatic, overcoming the lack of color at depth. This picture was taken while scuba diving the wreck of the Duane in the Key Largo National Marine Sanctuary.\nIn addition to pictures of big things, silhouettes can make an interesting subject for natural light photographs.
This picture of Christ of The Abyss on Key Largo Dry Rocks was taken using only available light.\nThe Red Filter – A Must for the Photographer While Scuba Diving\nOne thing to bear in mind in all these shots is that they obviously don’t work for vibrantly colored subjects. To take photographs of highly colored subjects, in addition to good light, we must have a shallow depth. There are, however, one or two tricks we can use to overcome color loss. The first is to use a red filter. This filter reduces the amount of light of all frequencies going into the camera except red.", "score": 48.75608925599705, "rank": 4}, {"document_id": "doc-::chunk-29", "d_text": "Many of these settings make taking photographs underwater easier, such as special shutter speeds and automatic focusing, along with advanced flash settings to give you more light down in the deep dark water. They typically have better zoom settings as well compared to 35mm film cameras, whose zoomed shots typically came out blurry and washed out underwater.\nDigital underwater cameras make the entire underwater photography experience easier. Most digital cameras are even smaller and more lightweight, making them easier to be hauled down into the water. If you are new to the world of underwater photography, using a digital camera will help you gain experience much faster by being able to see your photos immediately and work on fixing any problems that may be shown in them.\nYou will also gain a higher quality photo with a digital camera that can be blown up into larger photos. This feature can be especially useful for photographers looking to showcase their underwater photographing skills to the world.\nIn the end, it is really your choice whether you prefer a digital or a 35mm underwater camera; however, digital cameras will make your experience much easier in the long run.\nSelecting the most appropriate underwater camera model depends entirely on the photographer’s needs.
A buyer should consider whether they actually need advanced controls for underwater imaging and whether they are comfortable using complex camera controls.\nThe buyer should also determine whether options such as lens interchangeability, weight, video capability, and size are important when picking the right underwater camera.\nUnderwater photography can be plenty of enjoyment for both experienced divers and vacationers. Remember to include accessories, spare batteries, extra lenses and additional memory cards with your new underwater camera to ensure you are able to shoot as many photos as you need when you plan an underwater photography adventure.\nThe best underwater cameras are those with good physical designs as well as features. Apart from being waterproof and airtight, the cameras must also produce images of very high quality. Since these cameras are mainly used below the water surface, they must be able to produce high-definition pictures. They should have a variety of settings to adjust optical as well as digital zoom, a high-resolution sensor, and settings to adjust lighting. They should also be small, portable designs that are easy to carry around. Lightweight digital cameras can fit into pockets or even small bags to be carried around. 
Unlike other digital cameras that are compact in design, waterproof cameras are a bit heavier.", "score": 46.47355120561893, "rank": 5}, {"document_id": "doc-::chunk-1", "d_text": "The most important adjustments to make for underwater photography are fixing the white balance and balancing skin tones – it usually takes some experimentation, but you can get great results by just using Lightroom.\nMy favorite techniques for this are to use the white balance eye dropper tool in the Basic panel to choose a color that should be close to neutral gray, then to adjust the hue and saturation of each individual color within the HSL panel.\nThen it is just a matter of tweaking the contrast and dynamic range. I usually start with the Dehaze slider, then drop the highlights, increase the shadows and make sure the blacks are black.\nWhat’s Your Favorite Tip for Underwater Photos?\nDo you have a great tip for underwater photography – leave a comment below and share it with the community!", "score": 43.06719884741324, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "Balanced Light Underwater Photography\nShooter's Toolbox Volume 3\nText and Photos by Todd Winner\nIn Volumes 1 and 2 of Shooter's Toolbox we learned about taking images with ambient light and strobe light. Please review Volume 1 - Ambient Light and Volume 2 - Strobe Light if you have any questions regarding metering for ambient light and strobe exposures. In this volume we will be putting the two together to create balanced light images. This is what the majority of my wide-angle images are, and it is a great technique to add to your macro images as well. The controls we have to work with are ISO, shutter speed, aperture, strobe power and distance from our subject.\nBalanced Light Underwater Settings\nFor most images, I usually start out with a base ISO. Base ISO is 100 or 200 on most cameras. I then take a spot meter reading for a blue water exposure and set my aperture and shutter speed accordingly.
With our strobes turned on this is going to limit our fastest shutter speed on most DSLR cameras at around 1/250 sec. Next I adjust my strobe power setting according to the f-stop and the distance I am from the subject.\nAfter reviewing the image, you can fine-tune your exposure depending on what you see. If your water is not the shade you were trying to achieve, adjust your shutter speed up or down to make it darker or lighter. If your foreground subject is under or overexposed, adjust your strobe power up or down or move closer or further from your subject. If your overall exposure is too dark or light you can adjust your aperture and this will affect both your blue water background and your strobe exposure. As you can see it's a simple balancing act between the four controls and distance from our subject. If you are new or having difficulty with these types of images, just find a nice stationary subject and practice.\nGetting a strong background\nSome things you can try to improve your balanced light photography images are to look for structures or formations that will make strong and interesting silhouettes in your background. If none are available, adding a diver with a light is a nice alternative. If possible, separate your foreground subject from the reef by aiming at an upward angle and use wide-angle lenses to get close to your foreground subject.", "score": 42.86259155466382, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "If choosing a camera with interchangeable lenses, keep in mind that along with the housing, lens ports must be purchased to accommodate the various lenses. The depth of the port must accommodate the length of the lens.\nNext is the photographer’s source of lighting, the underwater strobe. While both non-DSLR and DSLR cameras have built-in flash units those units will not provide enough light for underwater photography. Remember your basic dive class! 
The deeper one goes the less light and color so strobes are necessary.\nAlong with a strobe, or strobes – two are better than one; the photo kit must include a tray and handles. The housing attaches to the tray, the strobes attach to the handles. Strobes introduce the problem of back scatter. Back scatter occurs when suspended particles reflect light back into the lens. The resulting photos are speckled.\nTo help eliminate back scatter and diffuse ‘hot spots’ (too much light on a subject) the strobe power should be adjustable and the kit should include diffusers. Commercial diffusers are generally opaque plastic attachments that snap onto the strobe to mute the flash. Unfortunately, most are lost eventually and that’s where the dryer sheets and rubber bands come in. Used dryer sheets make excellent diffusers. Simply lay a sheet across the face of the strobe and secure with a rubber band.\nHow to protect the photo gear? Underwater photographers know not to enter the water from a boat holding their cameras. Instead, they have someone on the boat hand them their cameras once they are in the water. Next, they attach the housing to their BCs with a strong coil lanyard and descend slowly while checking for leaks.\nAs for transporting the kit, a hard case with foam inserts is the only safe way. Soft-sided luggage, back packs, and duffle bags do not provide protection from baggage handlers or taxi and bus drivers.\nWondering about costs? The choice of camera will set the cost basis; however, there are kits available starting at $1,000 or less that will provide a new photographer wow photos to show friends and family. Do some research and next time we will cover the basics of shooting photos underwater.\nBecky Bauer is a scuba instructor and award-winning journalist covering the marine environment in the Caribbean. 
She is a contributing photographer to NOAA.", "score": 41.95797095661763, "rank": 8}, {"document_id": "doc-::chunk-1", "d_text": "You may not even be aware of the particles until you look at your underwater photos later. This tends to be one of the most common difficulties in photography underwater.\nThere are ways to get around (and that means avoidance rather than cure) most of these problems – your diving competency will improve with experience and that is just about all there is to that!\nUse a flash – you should set the camera to forced flash mode, not auto, and generally no pre-flash settings (if you are using a strobe the pre-flash will set off the strobe out of time with the shutter). As you get deeper into photography underwater (is there a pun in there?) the use of lighting – strobes and flashes – will become more obvious to you.\nAttach external strobes to your setup; these are ideal for replacing the lost color and light at depths.\nA filter can be used to filter out the green/blue, but you must remember that the filter does not replace the colours, so beyond about ten metres they don’t function particularly well. If the red has gone from the light it has gone and a filter won’t help. You also have to turn off your flash or strobe and use the filter in ambient light.\nAt what depth is color lost?\n- Red - 15ft (5m)\n- Orange - 25ft (8m)\n- Yellow - 35-45ft (10-14m)\n- Green - 70-75ft (20-23m)\nRemember to add the distance from the subject into this – 15ft deep and 15ft from the subject equals 30ft of color loss.\nThis can be very difficult to deal with; the degree of difficulty, and I suppose the potential for success in dealing with it, depends on how much particle matter is in the water.\nThe first aspect is the fact that your flash or strobe light will reflect off the particles directly back at the lens. 
A diffuser can assist in decreasing the impact of this.\nAnother method is, if using strobes, to angle the strobes away from the direct pathway between your lens and the subject so that the light is not directly hitting the particles in front of you. This too is something in photography underwater which you will 'get' with experience.\nThe surest way to resolve this is to decrease the distance between your lens and the subject of your photography underwater.", "score": 41.31222897363752, "rank": 9}, {"document_id": "doc-::chunk-1", "d_text": "For macro images, since we are so close to our subjects, you need to use strobes that you can power way down so you don't overexpose the foreground.\nThe real payoff is that once you can balance between a couple of light sources, you are ready for just about any shooting situation. Take split images for instance: I typically take an ambient light reading for the topside exposure and adjust my strobe power based on the f-stop and distance from the underwater subject. I use this technique a lot in topside shooting as well when I want to add a bit of fill light to a subject and it works great when lighting a subject against a sunset background.\nBalanced light wide-angle images have always been my favorite style of photography. Although it may sound like a lot to remember, especially to new photographers, just remember there are only five items to think about: ISO, shutter speed, aperture, strobe power and distance from our subject. If you keep your ISO at base ISO, now you have only four things to think about. Of course we still need to compose a pleasing image, but at least now you will be ready to expose it correctly when you find it.\nFor the reef squid image from Bonaire, I exposed for the light blue water near the surface and set my strobes at ¼ power to expose for the reef squid, which was just inches from my port.
1/60th sec at f/11, ISO 200, Nikon 10.5mm fisheye lens\nFor the jelly and Odyssey split shot from Truk Lagoon, I had my strobes at the lowest power setting 1/8th. In order to keep from overexposing the jelly, I set my f-stop to f/10 and opened my shutter to 1/60th sec to expose the topside portion of the image. 1/60th sec at f/10, ISO 100, 10mm lens\nFor the oceanic white-tip from a Bahamas shark dive, I metered on the blue water and underexposed it by a couple of stops to get that nice dark blue. This still allowed me to capture the divers in silhouette and a shutter speed of 1/60th sec captured the light rays off the surface. 1/60th sec at f/8, ISO 100, 10mm\nEureka Oil Rig, California.", "score": 40.66762552983155, "rank": 10}, {"document_id": "doc-::chunk-2", "d_text": "Typically this will be an underwater flashlight, not the spotting light from your strobe. Spotting lights are important as they will aid your eye, as well as your camera’s autofocus, in finding points of contrast to focus on. Having strong strobes is also a definite must. Due to the extreme magnification of using teleconverters, the amount of light reaching the camera’s sensor is limited; the equivalent apertures are in the f64 range and higher. Therefore a strong strobe, placed close to the subject, is required in order to “blast” enough light at the subject to expose it properly.\nTrials and Tribulations\nThe first step to shooting a successful super macro photo is to find the proper subject. Due to the extremely shallow depth of field found at these magnifications, only certain subjects will work. Concentrate on looking for tiny subjects that are not very common: nudibranch rhinophores, fish eggs, eyeballs, and fish scales. It’s these eye-popping subjects that are so out of the ordinary that they can’t help but catch the eye of the viewer and leave them wondering “how did he/she do that?” However, you must be careful when shooting subjects at great magnification. 
The key is to keep the main parts of the subject on the same plane of focus. Even the slightest offset will cause a portion of the photo to blur.
There is nothing more important in super macro than keeping a steady hand and having endless patience. The rewards from achieving a full-frame photo of a pygmy seahorse are wonderful, but don’t get discouraged when it takes 60 minutes to achieve. Even the slightest movement from these tiny fish will throw your focus out of whack. It’s best not to be trigger happy in these circumstances but to be patient and use small movements. The best strategy is actually to use the camera on manual or locked focus. By locking focus, the camera will not go into “hunting” mode; you can control what parts of the frame are in or out of focus by moving back and forth from the subject itself.
Shooting “super macro” is not for everyone. It’s not for the photographer who wants to see as many different things on one dive as possible, nor is it a great idea to bring with you on group dive trips.", "score": 37.80254666716745, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "Filter photography has really come into its own with the advent of digital photography and the ability to white balance underwater. Although it has been used for a long time with digital video underwater, red filters and white balancing did not really become popular with still photographers until the early to mid 2000s. The use of a filter underwater allows the photographer to filter out some of the nasty blues and greens that dominate the colour spectrum deeper than 10 feet and bring back a warm colour balance along with a lot of contrast and typically a beautiful blue.
Shooting with a filter of any sort is actually quite easy; here are a few tips to get you started:
- Don’t Use Strobes – to get the most from a filter it’s best to use it with natural light only
- Stay Shallow – as the shot will be illuminated with natural light, the best results are typically from 15 m (50ft) or shallower
- Keep the Sun Behind You – the key to illuminating the subject properly and getting the best colour is to have the sun helping
- Shoot Slightly Down – although this sounds like the opposite of what is drilled into new photographers (Shoot UP!), in natural light or filter photography shooting on a slightly downward angle helps
- Use manual white balance and reset it prior to shooting each new subject
- Concentrate on using a wide angle lens, as this will provide the best potential for filters. Macro is best shot with strobes
That’s it! Now it’s just a case of getting your hands on some filters and a nice shallow dive site. Our friends over at Magic Filters provide the best and largest range of filters for underwater photographers so head on over to their website to have a look at their products.", "score": 36.91757996912964, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "The vibrant colors and endless diversity of aquatic life provide one of the richest photography experiences imaginable. The technology in Olympus Digital Underwater Housings allows divers, from beginner to divemaster, to capture worry-free images of unique underwater scenes. Each Olympus housing features a large, responsive shutter button, precision zoom lever, and a tight O-ring main seal with a safety lock. Find the housing that fits your camera, and dive in!
Lesson 4: Macrophotography in underwater surroundings
The Underwater Macro Mode can depict the subject in natural colors.
Set your camera to the Underwater Macro Mode
When you want to shoot anemones and sea slugs up close, select the Scene Mode menu and then Underwater Macro Mode.
The lens will automatically adjust and the flash will function automatically. As a result, the subject will be revealed in natural coloring. White balance, auto focus and other functions are all tuned and adjusted in great detail. So it's best to try underwater macro photography in this mode first, and then experiment to develop your own style. Getting great results may take a few attempts; give it some time and keep on shooting!
Photographing from a distance using zoom.
Photographing by moving up as close as possible.
Move as close to your subject as possible
To get the best results you've got to get closer to your subject when shooting underwater. When there is too much distance between the photographer and the subject, seawater can act as a murky filter due to floating organisms and you may not get a clear picture. Red colors in particular can be absorbed by water and turn bluish. In order to compensate for this absorption of color, you need the light from the flash. However, it is very difficult to reach underwater subjects with your flash. In order to recreate natural coloring, you must fire the flash as close as possible to your subject.
The method differs according to the type of subject and camera model.
Position the zoom
The basic trick of macro photography is simple. Zoom out wide and move up close to the subject so it fills your LCD screen. However, your subjects may not always cooperate. Shrimp, crabs or sea slugs might scurry away as you try to get close. So you may need to zoom in and shoot while moving closer very slowly. How close you can get will differ according to the type of subject you are photographing.", "score": 34.90585087105347, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "And keep your camera's specifications and limitations in mind when shooting.
When you photograph a small subject in large scale, camera shake can lead to blurred focus.
Hold the camera with both hands and keep your elbows close to your body to create a stable shooting platform. Avoid shooting with one hand, which can lead to out-of-focus and out-of-frame shots.\nWhat's your focus?\nWhen shooting fish and most other sea life, focus on the eye of the subject. When the focus is elsewhere, the picture as a whole can appear out of focus. When shooting coral or plantlife, think about which point you want to stress most and decide on your focus and composition of your work.\nThe focus is not on the eyes.\nThe focus is on the eyes.\nTo use the product safely and correctly, please read the manual before using.\nEnhance your underwater photography experience with these helpful tutorials. Learn proper technique for caring for your housing, explore macrophotography tips, and more.", "score": 34.08948361441151, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "To show what great swimming technique looks like, Swim Speed Secrets has dozens of full-color underwater photographs of Olympic swimmers. With patience and the right gear, you can take underwater photos of your friends and training partners so they can see how they are swimming now and how well they are progressing using Sheila’s approach in Swim Speed Secrets.\nFollow these 5 tips from professional photographer and USA Swimming and U.S. Master Swimming coach Daniel Smith.\n- Know your camera and equipment. First, make sure your gear can handle being underwater at the depths where you’ll be shooting. [You can find underwater cameras for surprisingly little money online.] It’s equally important to know every setting so you don’t waste time adjusting settings for light conditions and action vs. stills. If you haven’t mastered your gear on dry land, shooting under water isn’t going to work.\n- Think about water as a medium. Bubbles can make auto-focus difficult to use so try manual focus instead. 
Water will cut your light exposure in half, so you’ll want to increase your exposure time (a slower shutter speed) or open up the aperture. Expect a blue and green color cast to your photos and either shoot with settings or filters that will compensate or be prepared to remove the cast in your photo editing software.
- Use weights and a scuba tank or snorkel for the best photos. Staying stable and at pool bottom helps a lot.
- A weighted tripod and auto-timer can save you lots of drama if you have no access to scuba gear.
- Invest in good software. It can take a few weeks to rise up the learning curve, but good photo editing software can make even average photos sparkle.", "score": 32.941001029875096, "rank": 15}, {"document_id": "doc-::chunk-1", "d_text": "One striking difference from terrestrial photography is how sunlight is affected by water. Water is almost 800 times denser than air. Water density strongly affects the colors of light that are present at varying depths. Warmer colors are quickly absorbed so that little or no red, orange or yellow hues are present at depths of greater than 30 feet. The most prominent colors at depths of more than 30 feet are green and blue. This color filtering can result in monochromatic photos that lack punch. On the other hand, it can also be used creatively once understood. Additionally, there may be low or no ambient light at relatively shallow depths depending on local water clarity. As such, artificial light is regularly used in underwater photography to add color and to assist with exposure. In addition to the color and level of ambient light, one must also consider the presence of particulate matter in the water.
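The "water will cut your light exposure in half" tip above amounts to losing one stop of light; a minimal sketch of the compensation arithmetic (the shutter values are illustrative, not from the article):

```python
# Each halving of the light costs one stop; recover it by doubling the
# exposure time (or by opening the aperture one stop instead).
def compensated_shutter(shutter_s, stops_lost=1.0):
    """Return the longer shutter time needed to recover `stops_lost` stops."""
    return shutter_s * (2 ** stops_lost)

print(compensated_shutter(1 / 125))  # 1/125 s doubles to 2/125 s (i.e. 1/62.5 s)
```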
Especially turbid or murky water often results in illuminated particles in photos also known as “backscatter.” Therefore, it becomes very important to know under which conditions you should attempt any particular shot as well as proper strobe positioning to minimize backscatter.\nUnderwater photography also tends to involve more “moving parts” than other photographic disciplines. Addressing the typical complexities of terrestrial photography remains crucial. However, the additional gear required for the particular type of dive (snorkeling all the way to deep technical diving) can add a significant amount of variables to the photographic equation. Also, the movement of both the subject and photographer in the water must also be considered. All of the variables can quickly occupy your underwater time, thereby shortening the time available to react to each given scene. As such, an ability to pre-visualize a photograph before entering the water can often save valuable underwater time.\nDue to space constraints, I must now conclude this article. The next article in the series will cover specific issues regarding underwater photography gear from the beginner to more advanced user. It will also address the two most common types of underwater photography (close focus wide-angle and macro photography) as well as issues common to each. I hope this article has sparked an interest to begin your own journey into underwater photography. Until next time, take care.\nThere are many scuba certification programs available and exclusion from this list does not indicate a program is unreliable or disreputable. The author strongly recommends you perform your own research before attending any scuba training courses.", "score": 32.8915894125182, "rank": 16}, {"document_id": "doc-::chunk-1", "d_text": "You’ll be able to photograph corals or fish at close distances that make your images look truly “one-of-a-kind”.\nShow some scale. 
Sometimes it may be hard to tell how large an object is underwater, so it's great to add a subject that will show the scale of the scene you’ve captured. For instance, if you’re photographing a shipwreck, by adding a SCUBA diver into the scene, you’ve now provided the viewer with a sense of the size of the wreck. As a bonus, the added subject will often make the overall photograph more interesting to view.
Remember that composition matters. Lastly, remember that just because you’re now photographing under the surface of the water, you still need to be aware of the composition of your images. Use the lessons you’ve learned regarding creating pleasing compositions, such as the rule of thirds: placing the horizon at the top/bottom third instead of right in the middle of the image, filling the frame with your subject, looking for patterns to photograph, etc.
Check out these articles on composing better photographs and using elements within the scene to frame your main subject.
Five of the images accompanying this article were taken by professional underwater photographer Wolcott Henry. To see more of his work, visit his website at http://wolcotthenry.com.", "score": 32.61707937346459, "rank": 17}, {"document_id": "doc-::chunk-3", "d_text": "So, these are the top 5 tips for using GoPro underwater. Read the quick tips given below to learn more about the question, ‘Can you use GoPro underwater?’
Correct Color Filters
To avoid all-green or all-blue images, you need color-correction filters. Since GoPro cameras are a no-brainer for capturing stunning, wide-angle HD footage in good lighting conditions, using red filters, especially with the GoPro Hero3, is better. So, it’s an accessory you need to attach based upon your dive plan. It will also help in capturing lively footage at all depths of the ocean or river.
While you are preparing the camera, its gear, and settings, take time to download the latest GoPro app on your mobile device.
With lots of upgraded functions, the app also helps you update the firmware. It will connect you with the latest features and ensure your GoPro camera performs at its best.
Take a selfie stick or camera rig, since an action camera like the GoPro is hard to hold steady on its own. Mounting the camera on your head or chest may not be ideal if the ride is going to be very unstable. A simple selfie stick may be suitable if you are on a tight budget or want to make do with the equipment you already have.
Clean the lens
A minute before you close up the camera or start shooting, make sure the glass is free of any dust or fingerprints. Check both lenses, including the inside and outside of the housing lens.
Clean the gasket
Clean the white rubber gasket and make sure that it is free of any debris. Besides damaging the equipment, debris can cause the gasket to leak. Make sure the backdoor is pushed fully into the housing for a good seal before clamping the black latch.
Always shoot with the sun at your back. It’s pretty hard to get a winning shot while shooting directly into the sun underwater; the image will usually come out dark and underexposed unless you are using flashlights or video lights.
Use Dome Port for split images
For shooting incredible half-above-the-surface, half-underwater images (split images), a dome port is essential.
To avoid getting droplets, make sure to rub some saliva on the port to repel the drops.", "score": 32.24912079997441, "rank": 18}, {"document_id": "doc-::chunk-28", "d_text": "Many things need to be taken into consideration before you begin, such as the water depth that you will be shooting at, the best angles to use in the water, how to deal with lighting problems, and the best way to capture the bright, beautiful colors that are waiting for you below sea level.
The best way for you to learn these skills is to talk to others who have experience with this type of photography. These people have experienced it for themselves and can teach you everything that you need to know to begin your journey as an underwater photographer.
Digital Underwater Photography Tips
Photography has come a long way over the past decade or so, especially when it comes to underwater photography. Gone are the days of having to load a roll of film, have it developed, and throw away the photos that didn’t turn out or that you just don’t like. Those types of underwater cameras are still available for those who choose to use them, but technology has come a long way, especially in the form of digital underwater cameras.
Digital underwater cameras may not necessarily do a better job of taking photos under the water; however, they can save a lot of time and money. For one thing, with a digital camera, you can’t run out of film. We all know what a pain it can be to run out of film while trying to take photos underwater; even if you loaded new film before that first dive, you are still only able to take a specific number of photos before the film runs out. You then have to wait for the film to rewind and the camera to dry out before you can change the film rolls.
With digital cameras, you don’t have to worry about any of that, and with a large enough memory card, you can take literally hundreds of photos before running out of room.
Also, with actual 35mm film, you have to wait for the film to be processed before you can look at the photos. If certain photos didn’t turn out, you wasted your time and money getting them developed just to throw the photos in the trash. With a digital underwater camera, you have the ability to view the photos immediately to see if you need to retake them. If you don’t like how a photo turns out, it can be deleted right away.
Digital underwater cameras typically come with more advanced settings than regular film cameras.", "score": 32.08998609047037, "rank": 19}, {"document_id": "doc-::chunk-21", "d_text": "Use Photo Albums
You can store your photos in photo albums; this will keep the quality intact and the photos will not come in contact with bare hands.
These are just a few tips you can use when you want to maintain the quality of your photographs. We hope this helps.
Posted on March 20th, 2012
There are many forms of photography out there today and one in particular is rising in interest. That would be the skill of underwater photography. It is not unlike normal photography and can be performed even by the most novice photographer out there. Just like regular photography, there are a few helpful tips in underwater photography.
- Become familiar with your underwater camera before entering the water. With minimal time with your air supply and the fact that usually in the ocean there are no reshoots, it is best to become familiar with all settings and functions of your camera. Also, most underwater cameras have specific depth requirements that you should become familiar with to prevent damage to your camera.
No need to waste time with your camera instead of your pictures.
- Before venturing off to your underwater location, research the area for any points of specific interest. Research the types of coral, fish, and other marine life in the area. It is also a good idea to check the local currents and any sharp corals in the area. The more you know of the area, the more prepared you will be.
- Repeat your shots over and over again from different angles. Underwater photography is an exciting adventure that some rarely have the opportunity to enjoy, so the best thing to do is take multiple shots from different angles so you may have a wide variety of pictures.
- When taking photographs of the marine life you may encounter, the best way to capture their likeness is simple. Focus in on their eyes, just like you would with a child. It is best to also become familiar with the behavior of the potential marine life in the area. Remember you are in their world and must respect them.
Being prepared and using common sense can ensure a fun-filled day of underwater photography. Remember to have fun and be creative.
Posted on March 16th, 2012
When purchasing your photography equipment there is a bit of excitement and awe of your new life line. You spend hours, even days, becoming familiar with your equipment, settings, and lenses and take pride in the new piece.", "score": 31.628605994142823, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "This originally appeared in Scuba Diver Australasia Magazine as a part of the “In Focus” photography column.
There comes a time in every photographer's career for a new challenge; you will know when you are ready for this when taking the same style of shots over and over again becomes static and uninteresting. Fortunately for underwater photographers, there are many challenges available without having to break the bank. One of the biggest, and certainly most rewarding, is the world of “super macro” photography.
Super macro is when we shoot something at greater than a 1:1 ratio. However, it’s not easy to shoot such small subjects; special equipment, a steady hand, and a great deal of patience are all required when shooting the smallest of the small.
There are several ways of creating a system that can take super macro photographs, and it’s not only DSLR shooters who can shoot these photographs; compact camera users can as well.
Dioptres - dioptres are a small lens element that screws onto the front of your existing lens and allows you to focus much closer than your regular lens, enabling you to fill more of the frame with your subject. These are available in two basic formats. The most popular is the wet-mount dioptre that fits onto both compact cameras and DSLRs. One advantage of the compact camera in this regard is the ability to stack two or even three of these lenses together in order to focus on the tiniest of underwater inhabitants. The second dioptre option is an internal one that fits onto the lens itself before putting the camera into the housing. Obviously this will limit your shot selection on a given dive as you will not be able to take it off underwater. The second negative of this system is that your “long focus” is limited to a short distance, often not much longer than two feet; this is bad when you want to shoot subjects a little further away. Also, be aware of which brand of dioptre you buy; cheap, single element lenses will create distortion around the edges and ruin an otherwise beautiful photo. It’s better to spend a little more in order to purchase a double element lens; this will produce sharper images.
The dioptre system works well on a variety of subjects that are easily approachable. Many inhabitants of the reef will allow photographers to approach within centimetres.
It’s these creatures that you should be seeking when shooting with a dioptre.", "score": 31.59448759642029, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "Many subjects blend into their hideouts or, in some cases, their bodies are built for concealment in open water, as with sharks. Getting a clear shot when the subject blends into the background can be hard and presents a real challenge.
When taking up underwater photography as a pastime you will need to hone your photography skills on land first. Once you can take fantastic pictures on land you can move on to the more challenging underwater world, where some of the rules you’ve learned no longer apply and achieving the best photo takes perseverance along with skill. Underwater photography brings the marine life to the surface, removing some of the unknown. If you find you are just starting to take an interest in underwater photography, you will want to seek out a professional underwater photography class to teach you some of the essential techniques and methods.", "score": 31.26226506395572, "rank": 22}, {"document_id": "doc-::chunk-25", "d_text": "Sometimes the flash will help with the photograph, while at other times it can wash the subject out and ruin the picture. External flashes work better for underwater photography than the built-in ones. An external flash can be a stick with a little light bulb on top.
Underwater photography requires a few more skills than regular photography due to the lighting conditions and color issues. But once understood, underwater photography creates some wonderful opportunities to capture some fantastic pictures.
Capturing Moments Underwater
Moments can be created just about anywhere. Right here on the ground, up in the sky and even underwater. And to capture these, underwater digital cameras are needed.
There is a big difference between waterproof cameras and underwater cameras.
Waterproof cameras cannot withstand submersion and water pressure. They are fine in a little rain or at the poolside. However, digital underwater cameras specially designed for deep water photography can be submerged and used to snap some breathtaking underwater shots, be it a school of fish as it weaves in and out of pale coral reefs or just a friend who is scuba diving for the first time.
As one goes deeper underwater, the light diffuses. Because of this light diffusion, the red end of the spectrum appears darker and the images produced are darker. To avoid this, using white balance to arrive at neutral colors is recommended. Also, pictures taken underwater appear larger than pictures taken on land at the same zoom. In a situation like that, the camera’s viewfinder can be used to check the desired angle and size of the image.
An underwater camera with a built-in flash produces a marine effect, the phenomenon wherein the pictures come out blurry with white particles floating over them. This can be rectified by using an external flash with the camera instead of the built-in flash. Many digital underwater cameras come with different lenses. Experimenting with these lenses sometimes leads to some great imagery. For instance, a macro lens helps capture small things without getting too close and startling the subject.
Beautiful opportunities are created underwater. Vivid colors and spectacular scenes can be captured.
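The observation above that pictures taken underwater come out larger than on land at the same zoom comes from refraction; a rough sketch, assuming a flat port and the standard 1.33 refractive index of water (the distances are illustrative):

```python
# Refraction at a flat port makes underwater subjects appear closer and
# larger than they really are; 1.33 is the refractive index of water vs. air.
WATER_INDEX = 1.33

def apparent_distance(real_distance_m):
    """A subject behind a flat port appears at roughly real_distance / 1.33."""
    return real_distance_m / WATER_INDEX

def apparent_size_factor():
    """...and therefore appears about 33% larger than it would on land."""
    return WATER_INDEX

print(apparent_distance(1.0))  # a subject 1 m away appears roughly 0.75 m away
```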
All that is required is a good underwater camera and some pre-requisite about photographing underwater.\nChallenges of Underwater Photography\nShooting a photograph underwater is not as easy as just pointing a camera and clicking a button.", "score": 31.213145733555844, "rank": 23}, {"document_id": "doc-::chunk-3", "d_text": "In addition, it’s a pretty simple process to run your photo through a software program that can eliminate noise caused by high ISO.\nGreat…so we simply raise the ISO and we’re all ready for those sharks to show us their sharp pearly whites. Perhaps, but raising the ISO may not be enough given how dark some of these aquariums can be. So what other settings can we adjust to help us take better photos in low-light?\nYou may also recall from some of our previous discussions that shutter speed has a lot to do with the exposure of your photos and can help you take better photos in low light. This is true, however there is also a problem we must consider.\nIn order for the shutter to allow more light to hit your sensor you need to slow down your shutter speed. This will increase the exposure of your photo and make it brighter. Makes sense, but we also need to keep in mind the side-effect that slowing down the shutter speed has as well. When you shoot with a slower shutter speed the motion of your subjects tends to blur in a photo. As you can imagine this would not work very well when trying to shoot a fast moving fish. All you would get is a big blur streaking across your photo.\nThose of you who own some photographic equipment might think that perhaps a tripod could help you out in this situation. It certainly does help keep a camera steady during long exposures using a slow shutter speed. 
However, we need to keep in mind that a tripod helps to eliminate camera shake caused by your hands; unfortunately, it does nothing to slow down a fast moving object such as a fish.
So in this instance you’ll be happy to know that you can leave your tripod at home and don’t have to lug it with you, since it’s not going to help you get better photos at the aquarium. Plus it really wouldn’t be safe; people might trip on it, which is why a lot of public places won’t even let you use one. That, and it helps them sell more high-priced postcards.
OK, so slowing down the shutter speed may not be the best option. However, there’s still one more technique that we can employ.
We can use a larger aperture setting. By increasing the aperture, more light is allowed into the camera when taking a photo. Combined with raising your ISO, this should go a long way to getting a nice photo of your friends from the sea.", "score": 31.129700520072834, "rank": 24}, {"document_id": "doc-::chunk-2", "d_text": "Just remember the closer you are to your subject the clearer the photo. So, think about your lens size, preferably a wide angle. Shooting from the water can also be heaps of fun. It's a perspective I really love! It requires patience, an open mind, physical endurance and good humour! Beware, you will get addicted!", "score": 30.89881420059184, "rank": 25}, {"document_id": "doc-::chunk-27", "d_text": "You don’t want to have to worry about fumbling around with unfamiliar buttons on the camera and miss that perfect shot. Once you learn to overcome all of the challenges and obstacles of underwater photography, you may just become the perfect underwater photographer!
Learn Underwater Photography
People take underwater photos for a variety of reasons.
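The aperture-plus-ISO adjustment described above can be counted in stops; a minimal sketch, where the example f-numbers and ISO values are assumptions for illustration:

```python
import math

# Each doubling of the light reaching the sensor is one stop. Opening the
# aperture from f/8 to f/5.6 gains about one stop; doubling the ISO
# (e.g. 400 -> 800) gains another.
def stops_gained(n_old, n_new, iso_old, iso_new):
    """Total stops gained from an aperture change plus an ISO change."""
    aperture_stops = 2 * math.log2(n_old / n_new)  # smaller f-number = more light
    iso_stops = math.log2(iso_new / iso_old)
    return aperture_stops + iso_stops

print(stops_gained(8, 5.6, 400, 800))  # roughly 2 stops brighter
```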
Some may take photos under the water just to capture the pure essence and beauty of the world that lives in the deep dark sea while others may use underwater photography to learn more about the environment down below and the species that live there.
No matter what reason you may have for taking photographs underwater, you are going to need to learn the ins and outs of this hobby before you will become good at it. Taking photos underwater is much different from taking photos on dry land. When it comes to taking a photo on dry land, most of the time you can simply point the camera and click a button to get a good photo; underwater, however, things are much different and numerous precautions and preparation must be taken to capture that perfect photo.
You don’t want to dive into underwater photography blindly because it will take you much longer to become comfortable with getting the perfect shot and there will be a lot of trial and error going on. It is best to learn the tricks of the trade beforehand and then develop your own style for capturing photos underwater. Using what you learn along with your own photography experience, you will soon learn the best ways to take photos underwater.
Various underwater photography courses are offered online; some free and some for a small fee. These types of courses are important because you will learn the basics of underwater photography along with troubleshooting, and how to choose the best underwater camera for the job you are attempting to do. You will most likely learn about the various challenges that underwater photographers face that you may never have thought of before and tips on how to get the best out of your photography experience.
Another way to learn basic concepts of underwater photography is to frequent the various forums and websites that are centered around underwater photography.
These types of forums will give you a chance to meet other underwater photographers, learn a few secrets, and ask any questions that you may have. Most of these types of websites also offer reviews on various underwater cameras and other equipment that will be needed for your deep diving photo shoot.\nYou will need to learn a variety of things before trying your hand at underwater photography.", "score": 30.626034893811315, "rank": 26}, {"document_id": "doc-::chunk-1", "d_text": "Aperture - Ideally, though there are exceptions to the rule, my goal is to keep the aperture small, f8 to f22 at least, to give me the depth of field I need when I am very close to my subject (the trade-off is the diffraction distortion that occurs at high f-stops, but I will have to content myself with a post-processing sharpen to aid in minimizing this effect). My next priority is to keep the shutter speed fast – as fast as 1/200 or higher. This is desirable for two reasons: the first is that I want to freeze any movement to keep the subject sharp, and the second is to manipulate the background colour. The slower the shutter speed, the more ambient light is a factor in the background, often contributing to washed-out images with low contrast. A faster shutter speed results in your subject being illuminated primarily by your strobes or video light, rather than the ambient light, and further contrast is created as the background darkens to dark blue and even black, depending on just how fast the shutter speed is set. This is far easier to do with macro photography, as we are naturally close to our subject, and much more challenging with wide angle photography, as light falls off so quickly underwater with every foot of distance.\nCaribbean Reef Squid at night – this fella was interested in his reflection in my dome port, but I struggled to get him in sharp focus as he fluttered about. 
The image is flawed, as his eyes are out of focus, but the value for me is in the memory, as I loved meeting this fella. So I will just have to try again should the opportunity arise.\nI am still experimenting with placement of lights, but I generally try to keep them at the 10:00 and 2:00 position, closer to my lens than I would for wide angle photography. Sometimes this is difficult to maintain, as coral formations may obstruct lighting in some cases. I have learned to accept that sometimes you have to “swim” away from a subject in search of a more ideal subject and setting.\nEverything I have read about underwater macro photography advised, “Get close to your subject. When you think you are close enough, get closer…” This proved to be a helpful tip for me as I photographed the face of a large Southern stingray at close range.", "score": 30.32311673167243, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "Depth of Field and How It Affects Your Underwater Photography\nDepth of field is one of the most important concepts in photography. After exposure, it has the greatest amount of influence on an image. What it does is give depth to two-dimensional images. Here's how it works:\nDepth of Field\nWhen focusing through a camera lens, you will find that there is no absolute point where an image goes from being sharp to being a blur. Instead, this happens gradually. You might choose a lens where the subject looks very sharp when he's fifteen feet away. But an object that's three feet behind him is slightly soft, and an object that's ten feet behind that is a blur. That same effect also happens to objects that are in the foreground of the subject. Something three feet in front of him will look soft, while something ten feet in front of that is a blur. That's how depth of field works.\nDepth of field can be broken down into two categories: shallow and deep. Deep depth of field refers to a DOF that is very gradual over a long distance. 
You'll still be able to make out an object that is ten feet behind the subject, even though it may be a little blurry in deep depth of field. Shallow refers to a DOF that is very short. An object that is ten feet behind the subject will be a complete blur in shallow depth of field.\nUnderwater photography follows much different rules than what we're used to when shooting on dry land. That's because water has a huge effect on the light that penetrates through it. Light is made up of different colors, and water absorbs most of them quickly, leaving us with a dark blue light. In fact, it is difficult to make out objects that are just a few feet away from you underwater.\nIf you want to see something vividly, then you need to get very close to it. This makes underwater photography very similar to macro photography. Since you can only see a few feet behind your subject before everything becomes indistinct, you need to use lenses that produce a shallow depth of field to create a layered image.\nBecause you need to get close to the subject in underwater photography, macro lenses are probably the best choice to use. They produce a very shallow depth of field. However, you need to be very careful in your selection. You want something that doesn't have too shallow of a depth of field.", "score": 30.264751937465377, "rank": 28}, {"document_id": "doc-::chunk-2", "d_text": "Actually, I have only used it on two holidays so far (Maldives and Egypt). And I used the cheapest GoPro camera that doesn’t even support RAW format, haha. At the same time, I was always practically alone to take pictures. However, as a fan of photography in general, I was surprised how photo shooting of this type can be fun and creative at the same time. 
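The depth-of-field behaviour described in the passage above follows from the standard hyperfocal-distance formulas: DOF collapses to almost nothing at close range and stretches out at a distance. A small sketch; the 60 mm focal length, f/8 and 0.03 mm circle of confusion below are illustrative assumptions, not values from the text:

```python
def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float = 0.03) -> float:
    """Hyperfocal distance H = f^2 / (N * c) + f, in millimetres."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def dof_limits_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable sharpness for a subject at distance s."""
    h = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    if subject_mm >= h:            # at or beyond hyperfocal: far limit is infinity
        return near, float("inf")
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far

# A 60 mm macro lens at f/8 focused 300 mm away: the zone of acceptable
# sharpness is only about a centimetre deep, which is why close-up work
# produces such a shallow depth of field.
near, far = dof_limits_mm(60.0, 8.0, 300.0)
```

Back the same lens off to portrait distance and the sharp zone grows to many centimetres, which matches the chunk's point that closeness, not the lens alone, drives the shallow-DOF look.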
Believe me, if you own an action camera, those few Euros will pay off for this fun 🙂\nBest 7 tips for taking pictures with the dome port:\nhave a partner who will be a model – preferably a mermaid, haha 🙂\nset up your action camera in advance\nuse anti-fog inserts\nkeep the top half of the port dry\nuse interval shooting or burst mode\nkeep the sun at the back (the photos will be lighter and more colorful)\ntry to use higher frame rate for videos (120fps) – slow motion underwater videos look great", "score": 30.059714494518705, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "The point is that if water gets inside the dome port (even a few drops), then this plastic dome will fog up. And you can’t take pictures with that. You have to prevent this. For example, you can buy these anti-fog inserts for just a few dollars here. Don’t worry – they can be used several times. Just put two of these inserts on the sides of the action camera, close the dome port and that’s it. Don’t underestimate it and get it!\nI guess now you have a pretty good idea of what it is like to take pictures with the dome port. It is a challenge. Moreover, it is virtually impossible to take pictures of yourself. The big advantage is when two people participate in the photo shoot. One makes a model (—> diving for example) and the other takes pictures. It is also good to remember that you have to be very close to the subject / person you are shooting because you will be taking pictures at wide angle. It is also wise to set everything on the camera beforehand, because once you put it inside the dome port and swim in the water, it will be difficult to change the settings 🙂\nShould I use a red filter?\nYou may know that a red filter is often used for underwater photography to correct colors. In my experience, however, it is not very beneficial when taking pictures with the dome port. Let’s not forget that only about half of the photo will be under water. 
And while adding red through the filter adds vividness to an underwater photo, it may not work well with the other part of the photo above the water. Plus, you can change colors and temperature later in the post-process through Adobe Lightroom or Photoshop. The result will be virtually the same if you are shooting in RAW.\nWhat type of dome port do I use?\nI use a Telesin dome port. You can get it for about 44 Euro (including accessories such as trigger, floating handle…). It is compatible with GoPro 5 and later cameras (I use GoPro 2018). There are many more similar ports on the market, but I chose this one because of the many positive reviews and good price / performance ratio. It didn’t disappoint me.\nI would like to point out that I am a beginner when it comes to taking pictures with a dome port.", "score": 30.01097823400423, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "Hi Micheley1101, I've modified your subject, I hope it makes more sense.\nIf you are shooting in an aquarium, then it'll normally be very dark. This means you'll need to use high ISO settings so that the camera can select sufficiently quick shutter speeds to freeze the action. Unfortunately, the higher the ISO number, the worse the picture quality, but unless you can increase the light in the aquarium, you have no choice. Using flashes is not a good idea.\nSo I would put your camera into Normal mode, which lets you choose the ISO. Make sure the flash is also switched off. Try 400 ISO first, then if it's still blurred, try 800, and so on. Of course you may find it's still too dark even at the highest ISO setting, in which case, forget taking photos and just enjoy looking at the fish!", "score": 29.771678704002575, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "The most dramatic way to communicate to the air-breathing world your close encounters with charismatic megafauna such as sharks is to photograph them. 
But what kind of underwater camera should you use?\nNorbert Wu, a renowned nature photographer and the author of How to Photograph Underwater (Stackpole, 1994), suggests opting for an automatic, point-and-shoot camera. You don't want to be futzing with the focus when you're worrying about the sharks.\nCheck out one of the automatic 35-mm cameras manufactured by Sea&Sea, such as the MX-10. Made from a durable, impact-resistant polymer, this model comes with a built-in flash, three lenses (wide-angle, close-up, and macro), all attachable underwater, and the capacity to add strobes and filters. The MX-10 works at depths of up to 150 feet. As for film, Wu says that 100 ASA color-print film is far more forgiving than slide film.\nCoordinates: $413 for the MX-10. Sea&Sea Underwater Photography USA, 760-929-1909; www.sea-sea.infotopia.or.jp", "score": 29.19939116489301, "rank": 32}, {"document_id": "doc-::chunk-21", "d_text": "Look for a camera that can produce RAW files if you would like to edit your images in an advanced photo editing application.
Due to this, a great-quality camera case that can protect the camera when not in use is essential.\nWhen searching to find the best underwater camera case, look for one that’s rustproof, pest proof and resistant to fuel, solvents and oils. A top underwater camera case also needs to be immune to ultra-violet light and have a high-density foam to supply optimum camera protection.\nConsider accessoriesAlthough accessories might seem elective, in underwater photography, many can allow it to be a lot easier to shoot pictures that are amazing. Focus lights, wet lenses, strobe arms and carry cases are a couple of things that may assist you with your underwater photography.\nRemember to buy one that’s compatible with any accessories that you believe you may want to use when you’re seeking the perfect underwater camera.\nWhich Type Should You Buy?\nUnderwater photography can be very fun and interesting. Not everyday people go snorkeling or deep water diving. Since standard cameras cannot be used to capture pictures under the water, underwater cameras were invented.\nThe launch of digital underwater cameras has opened bigger possibilities for this field. These have a great defensive device to protect the camera from the water and other potential damages. There are different types of underwater cameras and their prices are just close to a standard camera.\nThe plain disposable underwater cameras are believed to be the most affordable. Even when fully submerged they can give good quality pictures. Because of being waterproof, these cameras can be used during heavy rains and snow storms along with using them underwater. They can capture a minimum of 10 pictures and a maximum of 30.\nThe cheap re-loadable is another kind of underwater camera.", "score": 27.768045998500586, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "Photography has no rules. There are no definitives, no \"best\". 
Lenses, housings, cameras, and lights are just tools for different jobs.\nMy preferred setup underwater includes a super-wide lens. My preferred lenses are the Tokina 10-17mm Fisheye and an 8-15mm Fisheye (Canon or Nikon). I've shot lots of rectilinear lenses as well, so I will try to explain why I choose to use fisheyes exclusively. As well as why some people, well... hate them.\nThe distortion caused by a fisheye can be used to creative advantage even when shooting models. This image is a composite of two split shots.\nThe Downsides of a Fisheye\nLet's start with the negatives of fisheye lenses. Many a photographer has gone to the camera store and tried a super wide rectilinear lens, then compared it to a similar focal length fisheye. Considering that the focal length is the same, the difference visually is staggering. The distortion of a fisheye lens makes the counters, the ceiling, and every straight line look bizarre, even unpleasant. So many people will then opt for the rectilinear lens. This is valid, but more complex than it might seem.\nThere are wedding photographers using fisheyes. They are always careful to make sure the bride and groom's faces are near the center of the frame, where the distortion isn't noticeable. The stretch of the gown and surrounding scene can be quite nice. But the fisheye can make your subject (especially familiar ones you've seen often) look less \"natural.\"\nThere are few parallel or perpendicular lines in nature. The \"distortion\" of the fisheye is in more of the underwater images you see published than you might expect. If it is one of my images, you can be pretty sure it was shot with a fisheye.\nMinimum focus distance becomes a big issue when your subject is taking a nose dive into your dome port. 
The Tokina 10-17mm fisheye is capable of focusing on something only a few inches from the front of the lens.\nClarity and Close Focus Distance\nA dome port is necessary to capture a super wide field of view without vignetting (dark shadows around the edges of the photo). And hands-down fisheye lenses work better behind dome ports than rectilinear lenses.", "score": 27.559951591175906, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Master Guide for Underwater Digital Photography\nBook - 2005\nFrom camera selection to enhanced exposure, everything necessary to capture underwater digital images is available in this handy reference. Photographers will learn how to select, test, and use digital cameras for technically perfect images, adapt traditional photo techniques to underwater conditions, confidently shoot and light underwater images for great exposure, and remedy common problems that plague underwater photographers. Helpful hints on maintaining, cleaning, transporting, and insuring a digital camera are included. With full-color images that both instruct and inspire, this handbook provides information on every conceivable aspect of creating the right conditions for beautiful underwater photographs.\nPublisher: Buffalo : Amherst Media, c2005\nCharacteristics: 126 p. : col. ill\nAlternative Title: Underwater digital photography", "score": 27.129191843949407, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "I find it amazing how photos that were completely out of reach for most, are now becoming available as cameras are getting better, smaller, cheaper and more capable. Take this underwater shoot by Czech photographer Pavel Schlemmer with nothing more than a Fujifilm X-E2 and an Aquapac 451 housing. Basically a small camera in a nylon bag. 
The results are amazing.\nAs photographers, we get unique opportunities to see things from a different perspective, so when I got a call asking “if I’m interested in shooting a report from a synchronized swimming event”, I immediately knew I needed to see it & shoot it from underwater.\nThe only problem was that I’ve had zero practical experience shooting underwater and only one theoretical, which was this cool CJ video.\nThe camera: Usually my primary camera is a great little Fuji X100, but I was worried that for this underwater occasion it would not be focusing well & quickly enough, so I decided to go with a new Fuji X-E2 with an 18-55/2.8-4 stabilized lens. I haven’t found any real-life underwater test of this camera, which only boosted my curiosity on how it’ll perform in my inexperienced hands.\nThe underwater housing: The Aquapac 451 was the only pack with just-the-right size for both the X100 and the X-E2 (and probably any mirrorless camera on the market right now) I could get my hands on in the Czech Republic. I paid $70, for which it almost felt too good to be true and really waterproof, since it looks like an effing hardcore camera raincoat (..which it basically is).\nThe lighting: My original plan was to put some strobes outside the water and trigger them via my Elinchrom Skyport triggers from underwater but, unfortunately, the world doesn’t go like that and it’s impossible to easily trigger the lights underwater with a radio signal. I have to depend on the ambient light & high ISO. Oops.\nHow it all works in action:\nThe housing luckily really is waterproof and it worked like a charm with the great X-E2. 
It floats in the water, which doesn’t sound like much, but paired with a bundled carrying strap it’s a kinda sweet feature and you have one thing less to worry about in the field.", "score": 27.113574362474306, "rank": 36}, {"document_id": "doc-::chunk-31", "d_text": "While zooming in to an image lowers its resolution, keeping close with a wide angle lens reduces that blue effect and captures the highest quality and resolution available.\nWhether deep in the ocean or just in the deep end of a house pool, the right underwater digital cameras help you take breathtaking underwater photos anytime you want to.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-8", "d_text": "Don’t just think about the basic form but go beyond that. Try capturing images during different times of the year, different weather conditions, or perhaps black and white. Never settle for the greeting-card image but rather a different spectrum of landscaping. Don’t be afraid to experiment and never delete an image until viewed at your studio.\nIf you are looking for a great deal on a digital camera please visit http://www.42photo.com\nPosted on July 18th, 2012\nNow that you have perfected your skill with perhaps landscaping, weddings, sunrises and sunsets, it is time to get your feet wet. Underwater photography can be such a magical experience that every photographer must try at least once in their lifetime. Although it may appear tricky, it in fact can be simple and easy to perform. The following tips can help you along your underwater journey.\n- It is best to make sure you have the appropriate equipment that is designed for underwater photography. It is best to research the depth at which the camera can operate properly as well as other functions such as batteries, memory cards, and flashes.\n- You don’t have to be an Olympic swimmer or scuba certified, but it is best to be prepared.\n- Research various locations as well as marine life. 
You want to focus on the behavior of the marine animal as well as their natural state so that you are prepared for their actions.\n- Once you are ready to dive into the water, make sure you are working with a fast shutter speed. Recommended are the following: 1/30 for still objects such as coral, 1/60 for slow moving objects, and 1/125 for faster moving objects like fish. Adjusting your shutter speed can help with the sharpness of your images.\n- Using the natural light of the sun is one way of capturing your images but it is recommended to do so at a depth of 20 feet or shallower.\n- Set your camera to the highest resolution and the lowest ISO.\n- For best composition it is best to shoot upwards rather than downwards. Make sure the subject's eyes are in focus as well.\n- Like photographing on land, do not delete any photographs until you have returned to your computer.\n- If possible and safe, get as close to your subject as you can. Water can reduce the sharpness, contrast and color of an image, so try to be about 12 inches or closer to your subject.\n- Have fun!", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-2", "d_text": "Before my trip, I was anxious about my learning curve to create an image like this. I think that every photographer should try taking a picture outside of their comfort zone once in a while. I initially struggled with balancing an ambient light exposure with fill-flash from my powerful underwater strobes, but I overcame these limitations after my first few days of diving. Freeing myself from the technical aspects of underwater photography allowed me to focus on composition. Once I figured out where to look for soft corals on the side of boomies (underwater pinnacles), I was able to realize my creative vision. Clearly the focus of this image is the neon soft corals surrounded by tropical fish, but I added texture to an otherwise featureless blue background by angling my camera up towards the surface. 
The clouds in the sky above also added color. I find it interesting that clouds play an important role in underwater landscape photography just like they do above water. I created this image using my Canon 5DmkII and 17-40mm f4 lens with a +3 diopter in my Ikelite 5DmkII housing with dual DS160 strobes set to -3 power. This image required minimal processing using Aperture 3 and Photoshop CS5.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "Two things. One, remember saliva and saltwater are your two best ingredients in making a solution that prevents water droplets from sticking to your lens port. And two, remember your underwater housing is very buoyant, and so if you plan on shooting below the surface then you are likely to need a weight belt to keep you down.\nLet's just say I have started to do some cold swim water training!", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "Most underwater photographers are concerned to protect the environment in which they take their pictures and to avoid stressing marine creatures when they are taking their images. This is good for the marine environment and leads to better photographs.\nThis Code sets out good practices for anyone who aspires to take pictures or video underwater. Many aspects are also applicable to the general sports diver.\n· No-one should attempt to take pictures underwater until they are a competent diver. Novices thrashing about with their hands and fins while conscious only of the image in their viewfinder can do untold damage.\n· Every diver, including photographers, should ensure that gauges, octopus regulators, torches and other equipment are secured so they do not trail over reefs or cause other damage.\n· Underwater photographers should possess superior precision buoyancy control skills to avoid damaging the fragile marine environment and its creatures. 
Even experienced divers and those modelling for photographers should ensure that careless or excessively vigorous fin strokes and arm movements do not damage coral or smother it in clouds of sand. A finger placed carefully on a bare patch of rock can do much to replace other, more damaging movement.\n· Photographers should carefully explore the area in which they are diving and find subjects that are accessible without damage to them or other organisms.\n· Care should be taken to avoid stressing a subject. Some fish are clearly unhappy when a camera invades their \"personal space\" or when pictures are taken using flash or lights. Others are unconcerned. They make the best subjects.\n· Divers and photographers should never kill marine life to attract other types to them or to create a photographic opportunity, such as feeding sea urchins to wrasse. Creatures should never be handled or irritated to create a reaction and sedentary ones should never be placed on an alien background, which may result in them being killed.\n· Queuing to photograph a rare subject, such as a seahorse, should be avoided because of the harm repeated bursts of bright light may do to their eyesight. For the same reason, the number of shots of an individual subject should be kept to a minimum.\n· Clown fish and other territorial animals are popular subjects but some become highly stressed when a photographer moves in to take a picture.", "score": 26.9697449642274, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "by Jill Smith\nFirst time with my new macro lens in the Bahamas this November! For me, preparing for this “macro trip” meant reading article after article on tips for underwater macro photography. The images online are stunning, and these pros make it sound easy. Now that I have completed my first trip attempting to implement their tips and techniques, I assure you, it is not. 
It is my humble opinion that underwater macro photography is the MOST DIFFICULT kind of photography that exists (Perhaps others know better, but I am skeptical). It’s like trying to take a photo of erratically moving subjects at night in a snowstorm while you are floating and unstable; this is a photography challenge on steroids.\nI have to admit, mine is not the ideal setup. I am using a 90mm macro lens with a Sony A6000 mirrorless camera with Nauticam housing, which is great, but I am using my macro lens with a dome port. Why? I was too lazy and cheap to order the port that goes with my macro lens, and thought I would experiment and decide for myself if I really do need the proper macro port. After a week of using it this way, I am now considering (but still undecided) the flat port for my macro lens for three reasons:\nAbove: A Yellow-line Arrow crab catches one of the bloodworms swarming my video light and eats a late dinner.\nThe other equipment factor that affects my success, I think, is my lack of strobes. I am using two video lights mounted on arms, so I do have the ability to adjust lighting position, but it is next to impossible to sneak up on a fish with these bright lights shining in their eyes. Then again, I found them to be very beneficial on the night dives, when photographing coral banded shrimp, or yellowline arrow crabs, for example – they are attracted to the blood worms swimming around my video lights, and scramble out to catch them for dinner (win win!). The big advantage to having strobes versus video lights is that if I set strobes for “through the lens” (TTL) metering, I can pre-set my aperture and shutter speed, allowing the strobes to adjust their output to accomplish proper exposure. 
There is an additional advantage to this, obviously, as constantly dialing in new f-stops and shutter speeds detracts from my reaction time to photo opportunities.", "score": 26.225269844207688, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "This was the year I switched from my trusted Canon G16 compact camera (with wetlenses) to a mirrorless Olympus OM-D E-M5 II (with ‘actual’ lenses). Throwing money at things is not necessarily a guarantee for improvement, but it definitely helps! The shot above (from this post) was commended at the 2019 Falmouth Underwater Film Festival. This was made using the few times I went out with the mzuiko fisheye lens and is shot using natural light. (For some more natural light wide angle shots taken snorkelling on the north coast see this post.) I mostly used the mzuiko 60mm macrolens in combination with my strobe. The weather at the start of the year was so foul I initially used it abovewater during rockpooling. The shot below of a Flat periwinkle is quite simple but one of my favourites ‘topside’, together with the shot of the two Shore crabs: Below are some favourite underwater macro shots (the best pics I also post on instagram). Macrophotography I find easier, as the camera settings remain quite invariable (small aperture with a short shutterspeed because of the flash) and the composition is often (but definitely not always!) simpler compared to shots of say an entire rockpool. First some taken snorkeling in rockpools. The first is a Chink shell on Bushy rainbow wrack. The iridescent nature of the seaweeds means it is bright blue or purple viewed from one direction, but a dull brown from the other. If you get it right, it makes a very striking background and it is definitely a subject I want to explore more. After that, detail of the tip of a Spiny starfish, a European cowrie and a pill isopod. Below some macroshots taken while (shore)diving off Silver Steps in Falmouth. 
A Blackfaced blenny, a Leopardspotted goby and a Devonshire cupcoral. Many more photos of course if you scroll down. I have now also invested in a new strobe and new strobe arms. Having two strobes will allow me to take wide angle photos without depending on (dim) natural light, for instance whilst diving. My second strobe is manual so I can ramp up the strobe power if needed for macro too.", "score": 25.684400790802563, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "Coral taken with the camera lens on an iPhone 12\nExploring Underwater Cameras\nFor those reefers who love to explore more in-depth and would like to take better-quality photos, investing in an underwater camera would provide you the nearest take that you can find inside your reef tank. Yes, I mean inside your tank, as these underwater cameras, for example the popular Olympus TG series, are able to submerge fully into the water for your underwater photograph.\nNot only does an underwater camera allow me to take pictures underwater, it also has various functions that allow better photography underwater; for example, the white balance setting allows the user to auto-filter the blue light effect.\nFor those using an Olympus TG camera, I am going to share with you some tricks on how to set the auto white balance easily.\nFirstly, point your camera to a clear white surface that has a blue light background. Alternatively, you can also place a white acrylic plate in the tank for the camera to be able to set the auto white balance. 
Press “Capture WB (Menu)” and save the setting!\nSetting the auto white balance function\nAnd thank god you only need to set this once; the next time you take pictures with your camera, you can quickly select the filter effect and start shooting immediately.\nThe image on the left is taken with the auto white balance function, and the image on the right is taken without the white balance function.\nOther than being able to take photos underwater, some underwater cameras have a macro shot mode which enables you to take close-up shots of your corals, enabling you to see all the fine details of the colors/polyps etc.\nTips for Reef Photography\nNow that we have suggested ways to prevent your photos from getting too “blue”, the next thing that affects your picture quality is the glass wall barrier that comes between you and the corals. Depending on the thickness of the glass, the pictures you take might tend to be blurry and out of focus even when you are using a good camera. So below are some tips you can use to improve your photography.\n- Place your camera as close as possible to the glass, or even touching it, so that the autofocus function of the camera is not detecting the glass as the “object”.\n- If you do not have any camera lens or filter, adjust your lighting to warm white or a more natural white lighting. 12K and 14K lighting is good for photography.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-1", "d_text": "Like we mentioned above, it is very important that you feel confident and comfortable when diving or snorkeling before attempting to take photos, especially in new waters. Akima believes that once you get the basics down and are comfortable in the water, you will never want to give up underwater photography. It’s like art playing out right before your eyes in a whole new atmosphere. 
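The custom white balance procedure above captures a neutral reference under the tank's blue-shifted lighting. As a rough software analogue (the classic gray-world heuristic, not what the Olympus TG actually does internally), each color channel can be rescaled so its mean matches the overall mean brightness:

```python
def gray_world(pixels):
    """Gray-world white balance: scale R, G, B so each channel's mean
    equals the overall mean brightness. `pixels` is a list of (r, g, b)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    gains = [gray / m for m in means]
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# A toy blue-cast 'image': the blue channel mean is more than twice the red.
balanced = gray_world([(100, 110, 200), (60, 80, 160)])
```

This is why pointing the camera at a white surface works: a known-neutral reference gives the camera the per-channel gains directly instead of assuming the scene averages to gray.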
The objective in capturing your images is to display the beauty and wonder of what you have seen to those above sea level!

If you want to improve quickly, instruction, dedicated coaching and constant practice are the best remedies for only average ability (like most divers, I am only an average photographer). I do practice photography topside, because that provides frequent opportunities to get a grip on the technical aspects. But my progress underwater stagnated and, to build on the jump-start that Martin had provided for me, I joined one of Alex Mustard's week-long photographic workshops in the Cayman Islands at the beginning of this year. Many people reading this will also know Alex who, like Martin, is a thoughtful and highly competent instructor. I simply cannot overstate the value of such workshops and thought that it would be useful to capture the essence of this particular one, which led to the two images above.

Alex's workshop was dedicated to close focus wide angle underwater photography. However, there were ample opportunities for macro photography as well. His workshop began with a dedicated pool session to fine-tune the correct placement of strobes. Valuable early lectures improved student understanding of how to capture and control light underwater. Alex encourages photographers to master the technical side so that they can then concentrate harder on memorable shots; there is nothing more distracting to artistic thinking than paying too much attention to technicalities whilst underwater. One of the most valuable aspects of the workshop for me was an evening image review session, at which Alex and other photographers constructively critique your images.

So, if like me you struggle to produce great images and have only limited time to invest underwater, consider taking some dedicated instruction. There are plenty of superb instructors out there.
I am not setting out to advertise anyone in particular, and neither instructor mentioned here even knows that I have penned this article. Instead, I simply make the general point from my own experience that a modest investment at the start of your interest in underwater photography can pay handsomely in the satisfaction that you subsequently achieve from the quality of your images. Happy shooting to you all.

Paul Colley is a full-time director of a UK think tank, but takes every opportunity in his spare time to pursue a passion for diving and underwater photography. He hopes to take up underwater photography full time on retirement. You can see more of his work at mpcolley.com

Editor's note: You can also see Paul Colley's work in our article on Saving Bluefin Tuna.

As close as a few inches away. It will take a while to be able to anticipate what the lens will do to this perspective, but this will come.

The beautiful soft corals in the foreground are very close to the front of the dome port, almost touching. The foreground is lit by TTL strobe exposure. The diver is too far away and becomes silhouetted in the background.

Keep your strobes behind the plane of the dome port.
Otherwise you may find two strobes peeking out of the edges of your photos once you transfer them to the computer.

Turn your strobes off sometimes.
If you are not close to an object (e.g. close focus wide angle) then turn your strobes off or away and shoot natural light. Natural light photography is a great option when there's no subject to feature in the foreground.

Shoot at an upward angle.
This adds depth and perspective to your photos. You may even try shooting straight up to capture a cool effect called Snell's Window.

This is not a circular fisheye, it's a phenomenon called Snell's Window.
It's a property of light that occurs when you shoot a super wide lens straight up towards the surface of the water.

Comparing Popular Wide Angle Lenses

| Lens | Minimum Focus Distance | Weight | Length |
| --- | --- | --- | --- |
| Canon EF 8-15mm f/4L USM | 0.15 m (5.91") | 540 g (1.19 lb) | |
| Nikon AF-S 8-15mm f/3.5-4.5E ED | 0.16 m (6.3") | 485 g (1.07 lb) | 83 mm (3.27") |
| Sigma 8-16mm f/4.5-5.6 DC HSM | 0.24 m (9.45") | 555 g (1.22 lb) | 106 mm (4.16") |
| Nikon AF DX 10.5mm f/2.8G ED | 0.14 m (5.51") | 300 g (0.66 lb) | 63 mm (2.46") |
| Tokina AT-X 10-17mm f/3.5-4.5 DX | 0.14 m (5.51") | 350 g (0.77 lb) | 71 mm (2.8") |
| Sigma 10-20mm f/4-5.6 EX DC HSM | 0. | | |

This means that when the camera allows the correct amount of light in according to the camera settings, there will be more red in the picture than there would have been without the filter. These pictures show the effect of a red filter at about 20 ft deep. You can also see how well the coral is growing on Little Molasses Reef. These photos were taken in May this year!

The Mysterious White Balance

The final way to improve the color in a photograph taken in natural light is to adjust the white balance. If you have taken the shot in fairly shallow, clear water, although the picture will look too blue, there is some signal left in the red channel. Adjusting the white balance gives you the ability to change the amount of each of the three color channels used to create the image. The best way to make this adjustment is before you take the picture: using a white board or color slate allows you to calibrate the camera to make these adjustments prior to taking the image. Each model of camera sets the color balance a little differently, so read your camera manual to find out how to do it with your camera.
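Snell's Window, mentioned a little earlier, also has tidy geometry behind it: refraction at the surface squeezes the entire 180° sky into a cone whose half-angle is the critical angle arcsin(1/n). A quick check, assuming the standard refractive index of about 1.33 for water (my figure, not one from the article):

```python
import math

# Snell's Window: the critical angle arcsin(1/n) at the water surface
# compresses the whole 180-degree sky into a cone of about twice that angle.
n_water = 1.33  # standard refractive index of water (assumed value)

half_angle_deg = math.degrees(math.asin(1 / n_water))
window_deg = 2 * half_angle_deg  # apparent angular width of the window
# Roughly a 97-98 degree circle of sky when you shoot straight up.
```

That is why a fisheye pointed straight up captures the whole sky inside a bright circle surrounded by darker, totally internally reflected water.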
If you have a higher-end camera which allows you to capture RAW data, then you can make this adjustment to the image after taking the photo, in your post-processing, using an image editor such as Adobe's Lightroom product.

These two images below show the difference you can make to an image by correcting the white balance to suit the light conditions.

Hope you enjoyed the blog!

Instructor David Jefferiss – Sea Dwellers Dive Center of Key Largo

Super Macro Underwater Photography - The Definitive Guide, Part 1

Part 1 – The Basics

by Keri Wilk

Being a part of designing and manufacturing underwater super macro optics for ReefNet, I'm frequently barraged with questions about super macro tools, and optics in general. Feeling like a broken record over the past few years, I've decided to compile a brief guide to the ins and outs of super macro photography. Along the way, I'll do my best to explain some optical phenomena that are often misunderstood.

While this article pertains mainly to SLR/DSLR photography, most of the concepts and techniques are relevant and accurate for other recording media (video, point-and-shoot, medium format, etc.). During the majority of the 15 years that I've been shooting underwater photography, I have used Nikon camera bodies (preceded only by Nikonos III and Nikonos V), so I ask that you bear with my skew and omission of references to specific Canon, Olympus, and other brands and their associated gear. The information is relevant for all brands.

Now, you should be warned… I don't (and will not) claim to be the foremost expert on any of the topics that I'll be discussing.
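Coming back to the white balance correction described just before the super macro guide began: when no white slate was captured, one common post-processing fallback (my addition, not something either article prescribes) is the "gray world" assumption, which scales each channel so the channel averages match. A rough numpy sketch on synthetic data:

```python
import numpy as np

def gray_world(img):
    """Auto white balance assuming the scene averages to neutral grey."""
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel averages
    gains = means.mean() / means             # lift the suppressed red
    return np.clip(img * gains, 0.0, 1.0)

# Synthetic "too blue" shallow-water shot: red suppressed, blue boosted.
rng = np.random.default_rng(0)
img = np.clip(rng.uniform(0.2, 0.8, size=(8, 8, 3)) * [0.5, 0.9, 1.2], 0, 1)

fixed = gray_world(img)
# The per-channel averages of `fixed` now agree, removing the blue cast.
```

This only works while there is still some signal left in the red channel, which is exactly the shallow-clear-water caveat the article makes.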
However, over the years I feel that I've gathered enough knowledge, through studying optics and good old trial and error, to make this guide useful to anyone looking to learn or refine an approach to super macro underwater photography.

Before discussing the various aspects of super macro photography, it's important to distinguish it from other genres of photography. The fundamental concept of image magnification is also briefly discussed here, as it will be vital to an understanding of the majority of this guide. Each of the following definitions has been synopsized from various reputable scientific photography resources cited at the bottom of the page.

In photography, this refers to "transverse magnification", which is determined by dividing the image height (on your film/sensor) by the object height, i.e. the height of the subject being photographed (width can also be used):

10.5mm of a ruler filling the frame of my Nikon D300 (sensor is 23.6mm x 15.8mm), resulting in approximately 2.25X magnification. Image was taken with a Nikon 105mm lens plus a ReefNet SubSee Magnifier @ 1/320s, f/11, ISO 200.

This allowed us the opportunity to see how these cameras take pictures in the sea!

Shooting Photos in Water

Taking photos underwater creates challenges that may not be familiar to consumers. Water is 784 times denser than air. Because of this density, light is absorbed much more quickly. The first elements of light to be absorbed are the red and orange colors. The deeper you go, or the farther into water that you shoot (distance to subject), the more pronounced is the color loss.

This color loss creates a problem for the logic within the camera. When shot in auto mode, the camera will "guess" wrong every time.
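The transverse magnification definition quoted a little earlier is easy to sanity-check with the D300 example numbers:

```python
# Transverse magnification = image size on the sensor / subject size.
sensor_width_mm = 23.6    # Nikon D300 sensor width, from the guide
subject_width_mm = 10.5   # span of ruler that exactly fills the frame

magnification = sensor_width_mm / subject_width_mm
# Approximately 2.25X, matching the value quoted in the guide.
```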
The beautiful blue water colors come out as a light powder blue or green.

Additionally, there is a mismatch between our human visual system and the capture system of the camera. Our eyes and brains are very adept at auto white balancing the colors we see, even as we look up, down, close and far away. The white balance for each of these views is different underwater, but our brains seamlessly adjust them, so we don't see the same way the camera "sees."

We compared the cameras by setting them to auto mode and also to several different shooting modes based upon lighting, location or subject, such as underwater mode. We followed our mermaid out to sea so that we could capture some beautiful vistas.

Using Preset Shooting Modes Underwater

Mode settings create a combination of adjustments that are predesigned for specific conditions. Some of the cameras we used had underwater modes. These modes are well worth experimenting with and using. Of course, there is always the issue of selecting and using the proper setting. Many times we forget what setting is active, which can actually make the image worse.

Using Manual Settings Underwater

Another mode that is frequently used is cloudy mode. When using "cloudy," you are tricking the camera. On a cloudy day, sunlight is absorbed by the clouds much like it is absorbed by the water. The colors come out bluer and cooler because the clouds filter out the reds and oranges of the sunlight. The cloudy setting adds reds and oranges back into the photo, which is exactly what we also want to do with underwater photos.

In addition to preset shooting modes, some cameras provide manual adjustments.
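The depth-dependent color loss behind all of this is often modeled with simple Beer–Lambert exponential attenuation. The absorption coefficients below are rough illustrative figures for clear water (red dies off far faster than blue), not measurements from any of these articles:

```python
import math

# Beer-Lambert attenuation: I = I0 * exp(-k * d).
# Rough absorption coefficients (per metre) for clear ocean water;
# illustrative values only: red light is absorbed far faster than blue.
absorption = {"red": 0.35, "green": 0.07, "blue": 0.02}

def remaining_light(color, depth_m):
    """Fraction of surface light left after depth_m metres of water."""
    return math.exp(-absorption[color] * depth_m)

# At 10 m, most red is gone while blue barely changes, which is why
# an auto mode calibrated for daylight "guesses" wrong underwater.
red_left = remaining_light("red", 10)    # roughly 3% of red remains
blue_left = remaining_light("blue", 10)  # roughly 82% of blue remains
```

Under this sketch the red channel is nearly empty long before the blue one, which is exactly the imbalance that underwater modes, cloudy mode and red filters all try to compensate for.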
These include manual white balance adjustments to improve photos.

Manual Mode Camera Settings for Underwater Shooting

Manual mode allows the shooter to adjust ISO, f/stop and shutter speed.

Take a Journey to Another Photographic World – Underwater Photography Part I

Life can be full of exciting journeys when we seek them out, or vice versa. My latest journey began when I purchased an underwater housing and strobes for my digital SLR late last year. After completing Summer Intensive and Advanced Intensive training at Rocky Mountain School of Photography during the fall of 2009, I returned home to Florida eager to bring my camera along on my underwater adventures. Having seen some truly amazing sights since I first put on a mask and snorkel at the age of 5, I thought that the underwater world would provide a wealth of photographic opportunities provided I did my part. Fortunately, this hunch has been more accurate than I could ever have imagined.

Since I began working with my camera underwater, I have had close encounters with some interesting creatures, both large and small. I have also had opportunities to make images of underwater seascapes that continue to engage my senses. What excites me most about underwater photography is that it allows one to display images from a world that is unfamiliar to many people. In this series of articles I will attempt to provide some basic tools, so you may begin your own underwater journey.

Comfort in the water and training for the conditions in which one intends to shoot are two prerequisite skills to possess before attempting any underwater photography. Comfort in the water can be an elusive skill. However, scuba certification (or more advanced training like closed circuit re-breather technology) is now widely available, so it is relatively easy to obtain a training certification level to meet virtually any need.
Several of the most well-known certification organizations in the U.S. include the Professional Association of Diving Instructors (www.padi.com), the National Association of Underwater Instructors (www.naui.org) and the International Association of Nitrox and Technical Divers (www.iantd.com). However, don't forget that snorkeling requires no certification and is a perfectly acceptable way to begin your underwater adventures. I have had many successful underwater shoots while using only mask, snorkel and fins. Provided you have a body of water nearby (note: swimming pools certainly qualify), sufficient comfort in the water and the appropriate training to suit your needs, there is little reason not to explore the world of underwater photography.

Before we cover more specifics about gear and techniques, I would like to address some concepts that are unique to underwater photography.

Capturing an image of a fast-moving fish requires a flash period; however, most camera-mounted flashes will blow your brightness out of the water (so to speak).

— D. Wade Lehmann (aka 'wade')

My best advice is to take LOTS of pics. Have a lot of space on your card and be patient and just keep clicking away. This is especially important when trying to take pics of fish. Often I will take 20 pictures of a single fish before I get the one that I really want.
And often the one that you think was good turns out to have some distracting plumbing output or something when you go to process it, so you need to have several that you are happy with before you even start processing them.

Speaking of processing, that is an important aspect. Use good photo software to resize and do other minor touch-ups. No one wants to see you post a 2,560×1,980 pixel image on a bulletin board. I use Adobe Photoshop and use the auto-adjust feature on 90% of my pics. Also, once you resize you need to do an unsharp mask on it.

— Nathan Paden (aka 'npaden')

- Use crop. Zoom out a little for better focus/faster shutter, and crop the final image down before resizing it down to screen size.
- Use a tripod. Turn off the room lights; close curtains if back-lit.
- Consider shutting off pumps temporarily, to shoot macro of corals without flow-related blurring.
- Shoot straight on, especially when close, to avoid distortion off the glass.

If you are running Windows XP you can get a really cool PowerToy called "Image Resizer": http://www.microsoft.com/windowsxp/downloads/powertoys/xppowertoys.mspx . It is a small program, easy to use, and you can resize multiple pics all at once. Just pick all the pics you want resized, then right-click and pick the size you would like them all to be.

A recent post on taking sharp focus contained these ideas:

1. Hold your camera well - holding the camera close to your body, with elbows anchored against your chest or on a steady object such as a fence, will make your camera steadier and improve the sharpness of the picture.
2. Tripods - using a tripod, if you have access to one and have time to set it up, will greatly improve your focus.
3. Shutter speed - if your subject is moving, you will need a faster shutter speed to get clear pictures; try bracketing your pictures to be sure you'll have a good one when you get home.
There is a minimum shutter speed, though, if you're planning to hand hold. According to Rowse:
- for a 50mm lens don't shoot slower than 1/60th of a second
- for a 100mm lens don't shoot slower than 1/125th of a second
- for a 200mm lens don't shoot slower than 1/250th of a second

4. Aperture - aperture impacts the depth of field (the zone that is in focus) in your images. Decreasing your aperture (increasing the number, say up to f/20) will increase the depth of field, meaning that the zone that is in focus will include both close and distant objects. Keep in mind that the smaller your aperture, the longer your shutter speed will need to be, which of course makes moving subjects more difficult to keep sharp.
5. ISO - the third element of the exposure triangle is ISO, which has a direct impact upon the noisiness of your shots. Choose a larger ISO and you'll be able to use a faster shutter speed and smaller aperture (which, as we've seen, help with sharpness), but you'll suffer by increasing the noise of your shots. Depending upon your camera (and how large you want to enlarge your images) you can probably get away with using an ISO of up to 400 (or even 800 on some cameras) without too much noise, but for pin-sharp images keep it as low as possible.
6. Image Stabilization - some lenses or cameras come with image stabilization. This is a huge help when hand holding your camera. Be sure to turn it off, though, when using a tripod. Can't tell you how many shots I've ruined having IS on with my camera on a tripod; it actually makes the picture out of focus.
7.

In auto mode, the camera may overexpose the shots because it misinterprets water density for darkness.
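The hand-holding minimums attributed to Rowse above follow the classic reciprocal rule: shutter no slower than 1/focal-length, rounded up to the next standard shutter speed. A small sketch of that rounding (the list of standard speeds is my assumption, not from the post):

```python
# Standard shutter speeds expressed as denominators: 1/30, 1/60, 1/125...
STANDARD_DENOMINATORS = [30, 60, 125, 250, 500, 1000]

def min_handheld_shutter(focal_length_mm):
    """Slowest safe handheld speed per the reciprocal rule: no slower
    than 1/focal-length, rounded to the next standard speed that is at
    least that fast. Returns the denominator (60 means 1/60 s)."""
    for d in STANDARD_DENOMINATORS:
        if d >= focal_length_mm:
            return d
    return STANDARD_DENOMINATORS[-1]

# Matches the quoted guidance:
# 50mm -> 1/60, 100mm -> 1/125, 200mm -> 1/250.
```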
Photographers can use the manual mode to force the camera to darken the shot using those settings.

There are several ways to deal with overexposure and loss of color:
• Lower ISO/higher f-stop number/faster shutter speed
• Adjust camera mode
• Use external filters (also, some cameras have built-in snorkel/dive filters or modes)
• Use photo-editing software

Using Mode and Manual Settings

Our mermaid had swum off, but we found someone else who loves to play in the sea. Although he is certainly not a merman, using fins instead of a tail, he did introduce us to the stingrays at Grand Cayman's Stingray City. It should be noted that in most locations, humans should never touch sea life. However, Stingray City in Grand Cayman attracts one million visitors a year to a location that provides a unique, interactive experience, whether snorkeling or diving.

We mentioned earlier that at different depths, light is filtered by the water, losing warm colors first and other colors as you go deeper. These Stingray City photos provide a good example of snorkel-depth photos with underwater settings or filters.

By comparison, the Kittiwake photos were taken down in scuba-depth water with the same cameras. Also note, we adjusted for the foreground metal structure that is better lit in the bottom left than the deeper/farther/less-lit wreck. This will confuse and affect the automatic mode adjustment and the captured image.

Using Lens Filters

The photos taken with the iPhone/LenzO housing and GoPro action cam show that color filters are another way to improve underwater photos.

A filter adjusts warm orange colors in a photo and will help correct the loss of those same color elements in underwater shots. Several of the cameras that we reviewed had underwater mode settings that accomplish a similar effect. Others require (or allow) attaching filters externally. Those filters can be attached or removed as needed.
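The first remedy in the overexposure list above (lower ISO, higher f-number, faster shutter) is really just stop arithmetic: halving the ISO, halving the shutter time, or moving one full f-stop each cuts exposure by one stop. A sketch of the bookkeeping, with made-up example settings:

```python
import math

def exposure_change_stops(iso_from, iso_to, f_from, f_to, t_from, t_to):
    """Total exposure change, in stops, from changing ISO, aperture and
    shutter together. Negative means a darker image."""
    iso_stops = math.log2(iso_to / iso_from)       # halving ISO: -1 stop
    aperture_stops = 2 * math.log2(f_from / f_to)  # f/5.6 -> f/8: -1 stop
    shutter_stops = math.log2(t_to / t_from)       # halving time: -1 stop
    return iso_stops + aperture_stops + shutter_stops

# Example: from ISO 400, f/5.6, 1/60 s to ISO 100, f/8, 1/125 s.
change = exposure_change_stops(400, 100, 5.6, 8, 1 / 60, 1 / 125)
# Roughly -4 stops in total, i.e. a much darker exposure.
```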
It should be noted that a challenge for filters is that they are “matched” to one particular situation. This can cause filters to make inadequate corrections or add too much red into the photo. As a result, it can make white sand look red—and equally change the color of fish, coral and other subjects.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "Half-in and half-out of the water. I’m sure you know such photos. A girl in a bikini that dives a few feet underwater like an innocent and beautiful mermaid. And above the water level sunshine and palm trees. Photos of this type have been spreading rapidly on social networks in recent years. And they have success. Similar to drone photos. It offers a new way of story telling, a new point of view. But … How are these cool shots taken?\nComplete Guide To Underwater Photography\nHardly, haha! Plus it needs the right equipment, a little exercise and a lot of patience. Right from the start, let’s just say the main thing – you need special accessories. The so-called dome port. It is actually an underwater housing for your action camera. There are 2 types of ports – flat ports and dome ports. The differences are obvious – one is a flat window, and the other is shaped like a dome. And this type of port we care about. Yes, this article is mainly about taking pictures with the dome port, but you can also take it as a best underwater photography tips for beginners.\nHow does Dome Port work?\nIt’s actually simple. The trick is that the curved port creates space between the action camera (lens) and water, which allows half-in and half-out of the water photos. The fact is that you could achieve the same effect by using a glass bowl or a round fishbowl. 
And of course, a dome port can also be purchased for your SLR camera, but in that case you will pay a few hundred dollars more (compared to the price of the action camera version).

So in theory, using a port is easy: just insert a camera inside and that's it. But the practice is much more complicated. Do not take just random photos; try to think in advance about what you would like to shoot. Realize that you will often not have firm ground under your feet, so taking a sharp photo can be a problem. Keeping the right composition can also be difficult (you can see almost nothing on the small action camera screen). At the same time, be careful not to spray the top of the dome port with water, as water drops could ruin your photos. So… keep it dry! 🙂

One very important tip first – always use anti-fog strips!

Back in June, I had the opportunity to visit Lanzarote with my girlfriend. I wasn't sure what to expect, having not been to the Canaries before, but I did some research and found out that the diving is supposed to be amongst the best in the North Atlantic. I therefore persuaded my girlfriend that it was time for her to learn to dive too, and promptly packed my cameras and gear for a good old wildlife photography trip…

As you may have seen in some earlier entries on this site, underwater photography is a growing passion of mine, and one that I'm only beginning to get to grips with. It's still something you can make your first inroads into without spending a fortune. Indeed, I picked up one camera at the airport, a Nikon Coolpix S33, for about £70. This little camera is officially waterproof to 10m, and in the clear waters of Lanzarote it seemed very capable. I can also say that mine kept the water out to at least 30m, but the pressure stops the buttons working beyond about 12 to 15m.
It makes it ideal for snorkelling and beginner divers, though.

Taking photos underwater often causes trouble with light, as there is nearly always a blue cast to images (which can be fixed on a computer fairly easily), but you also want to try to catch the light the right way. Hopefully you will be able to get yourself into a position for the light to reflect off the fish or corals, or you can shoot up with the light behind your subject.

When going a little deeper, however, I had a Panasonic DMW-MCTZ35 Lumix Marine Waterproof Case for a TZ35. This was an excellent combination of a very effective dive housing and a compact superzoom camera with a specific underwater mode. The total cost is higher, at around £300 or more, but it's still relatively cheap for a combination capable of diving to advanced-diver depths.

If in Lanzarote you get the chance to visit the dive site called the Cathedral, it is well worth a look! Get right inside and shoot out towards the light to create a silhouette as your dive buddy swims across the mouth of the cavern.

I should give an extra special mention to the fantastic team at Manta Diving, Lanzarote, who looked after us fantastically on our trip. Also, please don't think the only things to take photos of are underwater!

Shooting in Bright Sun

Why brighter isn't always better

The best pictures have rich, saturated colors, but in dazzling sunlight, many colors tend to glare brightly. To compensate, underexpose by one-third or two-thirds of an f-stop. Bracketing with these two exposures will reveal what works best for a particular camera.

When trying to shoot objects under the water, especially very clear water like that in the Keys, the use of a polarizing filter will help "see through" the surface glare. The glare on the water is like light reflected in a mirror.
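Underexposing by a third or two-thirds of a stop, as suggested above, is also easy to quantify: cutting exposure by n stops multiplies the light by 2^(-n), which at a fixed aperture and ISO means a correspondingly shorter shutter time. A tiny sketch (the base shutter speed is an arbitrary example, not from the article):

```python
# Underexposing by n stops scales the exposure time by 2 ** (-n)
# (aperture and ISO held constant). Base shutter speed is illustrative.
BASE_SHUTTER_S = 1 / 125

def bracketed_shutter(stops_under):
    """Shutter time after underexposing by the given number of stops."""
    return BASE_SHUTTER_S * 2 ** (-stops_under)

one_third = bracketed_shutter(1 / 3)   # about 1/157 s
two_thirds = bracketed_shutter(2 / 3)  # about 1/198 s
```

In practice you would pick whichever nearby standard speed your camera offers; the point is simply that a fractional stop is a small, predictable change.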
Depending on the angle of the light, it may not be possible to avoid the reflection. A polarizing filter is the only way to work around it. A porpoise or manatee, for example, becomes much more visible when the surface glare is reduced.

I came across a couple of my old photographs recently, both from fresh water and from the sea. They brought back memories of some of my pleasant adventures underwater, but also of how little I knew about what I was doing when I was making pictures underwater at that time.

I had a great (film) camera (Nikon F90X), a great housing (Sea&Sea NX-90PRO), acceptable lenses… but didn't know what to do with them, as can be seen from the pictures. Anyway, they are memories from places half way around the globe and times years ago, and I want to treat them as such.

Note: A funny thing, you can see letters/words in some of the pictures. They are reflections of my lens markings in the dome port. I fixed that later by covering the markings with black tape.

Indeed, at times trying to take a picture of anything other than something within a foot or so can be an exercise in futility. Very often the things you see and photograph in poor visibility are the most interesting - have a look at my pictures of

Slow capture time

The point and shoot digital cameras are particularly prone to this – if you are up to it (confident enough and/or keen enough, as well as having enough money!) a DSLR is the solution – see for more information.

Your camera generates heat as it is working.
When this meets the cold temperatures underwater, the moisture inside the camera housing atmosphere condenses, often obscuring the lens.

Solution – keep the camera and housing out of the sun, try to set the camera and housing up in a cool air-conditioned room, and insert a silica sachet into the housing to absorb excess air moisture. Doing this early, or even setting the camera up the night before, will give the silica time to absorb the excess moisture before diving.

This seems to be a particular problem with point and shoot camera housings – I think because the housing is smaller and the internal air heats up more quickly than in a DSLR housing.

All of this is background and the bare basics of photography underwater. I will continue to add more detail on separate pages which will shed 'light' on more specific aspects of photography underwater.

For more photography underwater click here for the Photography page
Click here to learn about reading a histogram
Click here to learn about using the aperture priority setting on your camera
Click here to learn about using the shutter priority setting on your camera
Click here to learn about Depth of field
Click here to see a collection of scuba diving photos

Nikon D300, Tokina 10-17mm fisheye, natural light with Magic red filter. F10 at 1/50 sec, ISO 200.

How I took the underwater photo - Snapper Hole

The second image is of a diver in a famous Grand Cayman swim-through at Snapper Hole, using an off-camera lighting technique. Alex helped his students to set this shot up by providing a remote strobe to illuminate the diver inside the cave, which was triggered by light from one of the on-camera strobes aimed at a slave sensor. For me, the shot is defined by the exposure technique, which brings out the strong blue background. This needed a medium f-stop and a correspondingly slow shutter speed.

The composition is also important.
Alex was keen that the student photographers included the small patch of blue at the top right-hand corner, which adds balance and an implied strong diagonal to a very pleasing overall composition. The under-exposed inside of the cave also provides very strong contrast for the rest of the picture. This image came 3rd in a May 2010 British Society of Underwater Photographers competition, and it also won a UK newspaper Daily Telegraph weekly photographic competition on the theme of solitude.

The value of a good underwater photography workshop

I hope that you enjoy these images as much as I have. But the real point of this article is to emphasize the importance to amateurs of good instruction. To that extent, I take my hat off to both of my instructors, who have, in the space of one day and one week respectively, helped me to take my photography to a far more pleasing standard.

The journey started when my wife suggested that I should take a day's tuition with one of the UK's top underwater photographers, Martin Edge. In July 2008, Martin taught me to look at the underwater world in a different way. He set me on the right track to thinking about composition and many other vital aspects of underwater photography.

The standard of the images that I began to produce after only one day was much higher than hitherto. With a busy full-time job and only a few opportunities to dive each year, I soon worked out that the more effort I put into thinking about photography, the more I would get from my few opportunities to exploit it. Yet as vital as theory is, there is only so much that you can learn from books.

Pentax Optio WP review

I live in the US Virgin Islands and frequently snorkel.
I bought the camera because it looked like a combination of features that would work for me. With reasonable expectations on the part of the user, I would recommend it.

It is capable of good underwater photos if…

- you can get close to the object (it goes without saying that the water needs to be as clear as possible);
- the object is not moving, or you get lucky;
- it is bright daylight;
- you can focus on the small screen held a foot or two from your face.

If you are not close, then image contrast is lowered by any particulate in the water, much like high-altitude aerial photos. Strong processing can restore color and contrast, but like ANY strong processing, artifacts and noise will be present. I found that under reasonable water-clarity conditions, around 2m (6 ft) was a good maximum working distance for me.

I found it VERY hard to take pictures of swimming fish. The shutter delay is significant. That, coupled with my difficulty in focusing on something a foot from me (see later note), meant I often ended up with a picture of the tail of a fish. So I would try to predict where the fish would be when the shutter tripped. Mixed success here.

It needs to be bright daylight for a number of reasons. Under real water conditions on the reefs here (even when it LOOKS pretty clear), if you use a flash it bounces back off all the sediment and particulate in the water and generally, unacceptably, lowers the already low contrast. So I gave up using the flash. I think under some conditions, especially in a pool or in crystal clear water, it might be OK. Another reason for daylight is that, like most consumer cameras, the image gets noisy when illumination drops. And if anything is moving, a slow shutter speed makes it even more difficult to get a good picture.

A problem that not everyone will have is focusing on the viewing screen. My eyes will NOT focus inside a meter (3 ft).
If I get diopter lens inserts for the inside of my face mask I can solve this problem. So seeing a blurry screen made it harder to get pictures of moving things. I can see well enough that contrasty things that stayed put, like coral, came out OK. But coupled with the shutter delay, my eyes were a problem.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "Also make sure that your equipment is completely dry before packing it in your camera bag or kit.\n6. And finally, have fun with your camera; explore that new world that is eagerly waiting for you and your camera.\nThat's it. Do tell us: if you have done underwater photography before, did you run into any problems? Was it enjoyable? And which camera do you use?", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-18", "d_text": "I thought I had lost this camera at one point when it floated off of my neck while on a dive, but luckily I was able to grab it before it disappeared. It would be much easier to be able to hold onto it with a wrist strap.\nFor the price of this camera it has definitely stood the test of time. It has been through a lot and seen a lot of places. It has been dropped on the hard ground and in the water and still works perfectly. Cleaning it carefully every time you use it plays a big part in how long it lasts.\nIf you don't take care of it then you can't expect it to last for years. The elements of salt water can really wreak havoc on an underwater camera, and you have to ensure that you are getting all of the salt cleaned out of it and checking the seal.\nIf any tiny drop of water gets into a camera, it's a goner. Maybe eventually I will get a digital camera but for now, I am still enjoying using my Canon Sure Shot A-1.\nWho should buy an underwater camera?\nThe digital revolution opened up a brave new world for underwater photographers. 
Underwater digital cameras enable us to instantly see the outcomes of our photos, and share them with our buddies when we get home. Very few users of film have tried an underwater digicam and not switched. In the realm of underwater photography, digital imaging has altered the landscape. The biggest benefit is seeing your photographs before you even surface. In this way you see your mistakes, make corrections and re-shoot. Apart from the ability to see and evaluate your photos instantly, the size and cost of many digicam housings are appealing: divers are now willing to try underwater photography for the very first time.\n- Shockproof, waterproof and/or freezeproof cameras flourish in almost any environment\n- Often include built-in GPS, compass and depth meter\n- Simple and user friendly while on the go\n- Large buttons and controls for easy use with gloves\n- Outdoor enthusiasts and adventure seekers\n- Vacations at the water park or the shore\nInvest in a camera that is waterproof or shockproof if you need a camera tough enough to survive your next snorkeling trip or rock climbing expedition. 
With a solid, weatherproof exterior that keeps moisture out and can even be submerged in many feet of water, waterproof cameras are safe to use in almost any weather or environment.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "And at the end of the day, if more people are taking photos that they want to share, it’s going to also drive interest in scuba diving and preserving our world’s oceans.”\nIn addition to photography theory and technique, the Underwater Photography Guide also has articles on the natural history and behavior of marine life, and unbiased, informative articles about dive destinations around the world that offer underwater photography opportunities.\nUWPhotographyGuide.com is a Santa Monica based website dedicated to helping underwater photographers and scuba divers learn and improve their underwater photography. For more information, visit http://www.uwphotographyguide.com.\n(First posted on Friday, January 29, 2010 at 11:32 EST)", "score": 21.62169474212873, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "Photography underwater is a whole new ballgame with a whole set of new challenges for the underwater photographer.\nWhen first beginning photography underwater, there are a few different challenges to come your way, from both a photography and a scuba diving perspective.\nScuba diving competency -\nParticularly your ability to maintain neutral buoyancy.\nApart from either hurting yourself or damaging the reef, you will stir up silt, making photography underwater difficult.\nWhen you are trying to take a picture of a subject that is moving in a different direction from you as you drift in a current – and you are a lot slower underwater than a fish is – you suddenly have to develop a whole lot of new skills!\nTrying to maintain buoyancy and remain still whilst in a surging current.\nYou will be concentrating on remaining within focus distance of a subject that doesn’t 
necessarily want you to be that close and so will keep dodging, moving away, hiding….\nAll of this can lead you to get tired, use more air and, as a result, shorten your dive.\nOften as you get more involved in photography underwater, your equipment gets more and more bulky, making this situation worse.\nTrying not to exhale a cloud of bubbles as you take your picture and so obscure the subject from your lens.\nWhile you are concentrating on taking that picture, where are the rest of the dive group – are they still in view?\nThe light is different under water.\nAs you get deeper, less and less of the full spectrum of light reaches that depth. The first colours to be lost are reds, yellows and oranges, which is why underwater photos very often appear bluey green.\nThe loss of light is increased by the angle of the sun in the sky – the lower it is, the more light is lost. Something people also forget in photography underwater is that not only is the depth a factor, but the distance from the subject is also part of the total distance of water the light must travel through, and therefore needs to be taken into account for colour loss.\nYou should also know that while you are underwater, your brain compensates for the loss of colour, so you won't realise it until you look at the underwater photos.\nBackscatter – particles in the water – is often a problem, particularly in poor visibility. In some seasons the water is rich in plankton, which appears as particles that reflect the light from your flash back at you (or the lens).", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "For someone who makes only a small amount of his income from underwater photography, I drag an awful lot of fragile equipment through airports across the world. 
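The point about depth plus subject distance can be made concrete with the Beer–Lambert law: the fraction of light surviving a water path of length d is exp(-k·d), with k much larger for red than for blue. A rough sketch follows; the absorption coefficients are illustrative round numbers, not measured values:

```python
import math

# Illustrative per-metre absorption coefficients for seawater.
# Rough order of magnitude only: red is absorbed far faster than blue.
ABSORPTION = {"red": 0.5, "green": 0.07, "blue": 0.02}

def surviving_fraction(colour: str, depth_m: float, subject_distance_m: float) -> float:
    """Fraction of light left after travelling down to the subject's depth
    and across the camera-to-subject distance (Beer-Lambert law)."""
    path = depth_m + subject_distance_m  # total water the light crosses
    return math.exp(-ABSORPTION[colour] * path)

# At 10 m depth, shooting a subject 2 m away:
for colour in ABSORPTION:
    print(colour, round(surviving_fraction(colour, 10, 2), 3))
```

At 10 m depth and 2 m from the subject, almost no red light remains while most of the blue survives, which is why closing the distance, or bringing your own light, matters more than any camera setting.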
Every time I hand over my precious camera gear, I fret about its survival and whether or not I’ll have to explain to yet another understandably concerned security agent, that those large lumps of electronics have solely peaceful uses. But why do I need all these extra pieces? I’m taking photos in the tropics, right?\nWell, yes, very often I am, and I’m going to put to one side the obvious uses for large flashguns: night time, inside caves/wrecks, and of course the murky waters off my own country.\nDuring the day, even though the amount of available light at say, twenty meters on a tropical reef, is suitable to take a photograph with a fast-enough shutter setting to avoid motion blur, you’ll get a very dismal-looking picture. It’ll be dominated by blues and greys and won’t look anywhere near how our terrestrial eyes and brain remembered it. I wonder if this is something that aquarists occasionally forget, as their comparably shallow tanks don’t (to the human eye) show an appreciable change between the upper reaches of the tank and the lower, in terms of color. In most home aquariums, enough red light gets to the bottom to ensure a red fish appears red, but below ten meters in the wild, that’s not the case.\nHow can you combat this? If you’re planning to go snorkeling or diving on a reef, and you’re looking to buy a waterproof camera or housing, it’s well worth considering one with an extra flash gun or one that allows you to use your camera’s built-in flash easily. You’ll get much better results and see the ‘real’ colors of the reef. Failing that, you might try a color-correcting filter. Usually made of red acrylic (glass is often more expensive), these fit over the lens to ‘add in’ some red. Some cameras have ‘underwater’ modes, but I have little experience of these I’m afraid.\nConversely, you might want to shoot in RAW and spend some time in Photoshop. 
This can also offer great results, but you’ll need a camera capable of doing so and much bigger memory cards.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "Deeper waterproof camera on the way?\nThere is very little light at 40 feet, even less at 59. These waterproof cameras have tiny sensors and don't do very well in low light. To shoot at greater depths you will probably need an external flash and/or a larger sensor. I think you would be better off with a large sensor compact in an underwater housing.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "Ever wonder how people take those amazing photographs of their reef tanks and corals on social media? Today, I am going to share some tips and tricks for photographing your reef tank that you can pick up easily. As part of this hobby, taking beautiful photos of the corals in your tank and sharing them with your friends via social media is rewarding; however, because of camera settings and lighting, getting a nice macro shot is not so easy if you do not know what you are doing. 
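Before getting into specific tips, it helps to see what a white-balance correction actually does to a blue-tinted photo. Below is a minimal gray-world white-balance sketch in plain Python; the pixel values are made up for demonstration, and real tools (a phone's white-balance slider, or a photo editor) do something similar with more sophistication:

```python
def gray_world_balance(pixels):
    """Gray-world white balance on a flat list of (r, g, b) tuples:
    scale each channel so all three channel means become equal,
    taming a uniform colour cast such as blue LED tank lighting."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]  # mean R, G, B
    grey = sum(means) / 3                                      # target mean
    gains = [grey / m for m in means]                          # per-channel gain
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# A made-up four-pixel image with a strong blue cast (LED-lit tank look):
cast = [(40, 60, 120), (50, 70, 130), (45, 65, 125), (55, 75, 135)]
balanced = gray_world_balance(cast)
```

After balancing, the three channel means are nearly equal, so the uniform blue cast is largely gone; detail the camera never recorded, of course, cannot be recovered this way.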
It might not just be about your photography skills, but also about understanding how lighting and your camera settings affect the output of your shot.\nTaken with TG-4 without any filter or white balance\nAlthough I am not a professional photographer, through personal experience and trial and error, here are some tips and tricks that hopefully can help you take a better photograph of your tank next time.\nUnderwater camera vs Camera phone\nWe do not need to compare the specifications of the latest iPhone 12 Pro against an Olympus TG camera; what matters most is familiarizing yourself with the settings of your camera or phone that you can play around with, especially the color temperature setting, which will affect the overall look of your photograph.\nMost of the time, reefers will find that the photos they take with their phone are either too blue or out of balance; this is mostly due to the LED lighting that makes up most aquarium lighting nowadays.\nComplementary color wheel\nTo solve this problem, you can either adjust the white balance or color temperature in your camera settings toward a warmer color to compensate for the “blue” effect. Alternatively, thanks to products now on the market, you can easily purchase a coral lens, which usually consists of a few colored lenses that absorb the complementary color of the blue cast, making the image look nicer. This will allow you to capture the colors of the coral more accurately; try playing around with the filter lenses that come with it and you might see some amazing colors of your corals that you didn't know existed.\nCoral taken without camera lens on an iPhone 12\nCurrently, many brands produce these types of camera lenses; be sure to find one that fits your phone model. 
This small and simple gadget will make it convenient for you to take better photos on the go.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-23", "d_text": "Cameras that can be used and then disposed of may be an excellent choice for somebody going on holiday. Disposable cameras that can withstand water and other challenging conditions are perfect for those once-in-a-lifetime excursions.\nDisposable underwater cameras are some of the most economical cameras you can purchase, and they are not just waterproof but weatherproof. They can withstand almost any atmospheric condition.\nA lot of people who try diving on holiday are afraid to use their regular everyday cameras on scuba trips for fear of the water ruining them; for these people, a disposable version is the best way to go.\nMany have a shock-resistant, sturdy casing that will float. Some are still small enough to fit into a pocket or small bag. They are durable and available at reasonable prices.\nUnderwater Camera Maintenance Tips\nGetting a good photograph underwater can be tricky at times, especially for someone new to underwater photography. A lot of things need to be taken into consideration when taking photos underwater, such as lighting in the water, waves, and depth, just to name a few.\nA waterproof camera is the most important tool in underwater photography, and special precautions and care must be taken to ensure that you get the best performance out of it. Sometimes skipping one small step can result in water seeping inside your camera, and once water gets inside, that camera is most likely a goner.\nAll underwater cameras should come with a manual that includes the special maintenance steps needed for your underwater camera to have a long, healthy life. 
Quality underwater cameras are typically a bit pricey, and if you are investing your money in a good camera specifically to take photos underwater, you have to be sure that you follow all of these maintenance tips every time you use your camera.\nEven though underwater cameras are considered waterproof, most cannot withstand a heavy rush of water hitting them all at once. This includes having the camera in your hand as you dive into the water, or simply throwing your camera into the water. If you are diving in, your best bet is to have someone hold it for you and hand it to you once you are in the water, or use a camera float to place it in the water until you are in.\nThe O-rings are extremely important to an underwater camera. These rings are what keep the water out and the inside of the camera bone dry.", "score": 20.327251046010716, "rank": 70}, {"document_id": "doc-::chunk-26", "d_text": "Underwater photography can prove to be much more difficult than taking a photo on dry land, and many people may not realize the various challenges that come along with getting a good quality underwater photograph.\nBefore even attempting underwater photography you should be proficient in basic water safety, whether you are in a pool or deep sea diving. If you are diving, you should be experienced at diving procedures, because taking a photograph underneath the water can cause unplanned distractions and you want to have the experience needed in case something goes wrong. You should always put your safety first, before photography.\nMany challenges lie beneath the water. Various things play a big role in how good a photograph you are able to get in the water. If you are diving deep down into the sea, you will notice that the deeper you go, the darker it gets, and you will most likely need special lighting in order to catch the essence and the beauty of sea life. 
You will need to make sure that if your camera does not include a sufficient flash for deep sea photography, you have another light source, such as a strobe, to carry with you; otherwise the photos may turn out cloudy and dark without much color depth to them.\nThe current also becomes stronger the deeper you go below sea level, and this can put extra pressure on camera equipment while also making it more difficult to hold the camera still to get the perfect photograph. Bubbles and fish swimming around can make that task even more difficult.\nIf you are planning on taking photos while deep sea diving, be sure to check how waterproof your camera is. Most cameras have a depth limit, meaning that once you go past a certain depth, such as 10 feet, the camera may no longer work. This is something to consider before purchasing an underwater camera, because you want a camera that will work at the depth you wish to photograph.\nPractice makes perfect. You may have to play around with various settings and lighting to get the perfect shot. You should try various angles with the objects you are trying to photograph. It is best to get up close and personal with the object to get the best quality photo, whether it be a fish or colorful coral. Looking through your diving mask, the object may look closer to you than it actually is through the camera lens, and it will take practice to figure out exactly how to shoot the best photos.\nBecome familiar with your camera before going underwater so that you know exactly how to get the settings that you need.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "Underwater photography has come a long way as new technology and developments in cameras and equipment have arrived. 
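Depth limits like those mentioned above are really pressure limits: each 10 m (about 33 ft) of seawater adds roughly one atmosphere on top of surface pressure. A quick sketch of the arithmetic (the physical constants are standard; the 10 m rating below is a made-up example):

```python
# Rough hydrostatic pressure on a camera housing at a given depth.
RHO_SEAWATER = 1025   # kg/m^3, typical seawater density
G = 9.81              # m/s^2, gravitational acceleration
ATM_PA = 101_325      # Pa, one standard atmosphere

def gauge_pressure_atm(depth_m: float) -> float:
    """Pressure above surface pressure, in atmospheres (~1 atm per 10 m)."""
    return RHO_SEAWATER * G * depth_m / ATM_PA

def within_rating(depth_m: float, rated_depth_m: float) -> bool:
    """True if a dive to depth_m stays inside the camera's depth rating."""
    return depth_m <= rated_depth_m

print(round(gauge_pressure_atm(30), 2))  # extra atmospheres at 30 m
print(within_rating(30, 10))             # a 10 m-rated camera is past its limit
```

A housing rated to 10 m sees about one extra atmosphere; at 30 m it would face roughly three, which is why exceeding the rating so often floods a camera.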
More and more people who are not professionals take photos with underwater cameras, and they are exploring this new world.\nTaking pictures underwater used to be difficult: you needed large waterproof cases that were bulky and, of course, expensive. But now, with advances in technology, camera bodies are built so that anybody can use them underwater without worrying about water damaging the equipment.\nThe same applies to photography accessories; they have become compact and more reliable. Here is a list of the best digital cameras for underwater shooting and the top 10 waterproof cameras across all price ranges.\nAnother option, the waterproof casing, is also available and can be used with your existing camera, but I would still suggest buying a camera that is inherently waterproof. There are plenty of these waterproof cameras to choose from.\nWith underwater photography you will be capturing an entirely new world; you will be amazed how beautiful your photography becomes, and you will get some exciting pictures.\nWhenever you start, keep some of these essential tips in mind.\n1. Depth & Pressure – Always read your camera's manual before heading out for underwater photography. Every camera has a different capacity for handling pressure, and at a certain depth it will start to fail, so you need to know exactly what your camera's limits are.\n2. Always make sure your camera is fully charged and all the photos on your memory stick are backed up. If possible, remove those photos and take the memory stick with maximum free space, so that you can take lots and lots of pictures.\n3. Before heading into the water, check all the corners and areas of your camera and look for any dirt and grime in and around the memory stick or battery. If you are using an underwater case, remember to check all the buttons, latches and seals. Everything must be checked thoroughly before diving in.\n4. 
Securing your camera with a wrist strap is another good habit, as in the water you may lose your camera and drop it to the bottom, where it is quite difficult to locate and retrieve. So, when you jump in, keep your camera strapped and attached to some part of your body.\n5. After returning from a shoot, wash your camera with clean water so that no sediment from the dive remains, and rub off any dirt that has accumulated.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-1", "d_text": "The images and video you capture underwater will exhibit all the Canon quality you expect, with high resolution and vivid, natural color.\nCan underwater camera take pictures?\nIf you want to take pictures underwater with your existing camera, choose between a hardshell protective case or a waterproof, sealable plastic pouch. You can shop online or at most electronic or photo stores. Both options work great for underwater photography.\nCan you put a camera lens underwater?\nMost wide-angle lenses are considered “rectilinear” lenses. Read more here about choosing a fisheye lens versus a regular wide-angle lens for underwater use. Nikon 10.5mm fisheye lens, and the Tokina 10-17mm fisheye lens with a focus ring on it. Both are great underwater lenses.\nCan I take my iPhone 12 Pro Max in the shower?\nCan You Shower With Your iPhone? With an IP68 water-resistance rating, the iPhone is not protected against high pressure or temperatures, according to the International Electrotechnical Commission. So, Apple recommends that you do not swim, shower, bathe, or play water sports with an iPhone 12.\nAre there any underwater cameras that are waterproof?\nThe APEMAN action camera comes with a waterproof casing that can resist water down to 30 meters. This action camera is ideal for water sports, scuba diving, or even taking selfies at depth. 
The camera comes with an awesome kit of multiple accessories.\nIs it worth investing in an underwater camera?\nThat said, you will find a lot of underwater cameras on the market and not all of them are worth investing in. If you are a travel lover, then you should invest in one model that will serve you on every vacation.\nDo you need an underwater camera for snorkeling?\nIf you’re staying on the surface, you don’t need an underwater camera rated to 30 meters or more. But you might like a camera that’s hardy and well suited for other activities. For snorkeling, we like the following cameras: Click the links above for full details on each of these cameras. What’s the Best Cheap Underwater Camera?\nWhat should I look for in a scuba camera?\nIf you’re just shopping for a scuba camera and you’re an occasional diver, consider an option that doesn’t require an expensive housing. Have a think about what kind of zoom capabilities you need and whether a macro mode is a make-or-break option.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Aquarium Photography Tips\nBy arungupta on Dec 24, 2006\nGeneral tips for aquarium photography\n- NEVER use built-in flash. If possible, use external light from the top and/or the sides, but not from the front of the tank. 
Otherwise use only tank lighting.\n- Slow shutter speed or larger aperture (smaller depth-of-field), higher ISO, and manual focus.\n- Consider using a smaller aperture (higher depth-of-field) so that fish movements can be captured.\n- Use burst shooting when the fish is in the sweet spot.\n- Use a tripod, especially for close up or macro photos (use the "Digital Vari-Program Macro" mode, the tulip on the left side command dial).\n- Place the camera perpendicular to the glass and subject.\n- If taking a picture of a fish, focus on a spot and wait for it to swim into view.\n- Take more than one picture so that you can select the best.\n- Live plants, rocks, driftwood and gravel are the best backgrounds. Make sure you conceal any electric cords or air tubing away from sight.\nIf possible, follow the guidelines below:\n- Clean the aquarium glass from both inside and outside.\n- Completely darken the room to help avoid reflections.\n- It is always best to take pictures at the highest setting/best quality possible, if you have enough spare cards.\nHere are some of the articles I read:\n- Basics on Aquarium Photography\n- Photo tips by Janet Brassard\n- Techniques for Aquarium Photograph\n- The Art of Aquarium Photography\n- Rules of Aquarium photography", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "I took these while snorkeling in Cozumel. The deep ones were taken in the Chankanaab national park, the shallow ones a few feet from the hotel beach. I was really excited about doing some underwater shooting, but I couldn't afford an underwater case for my camera or other alternatives, so I went for the cheap option: disposable! So no focusing, all settings fixed, tiny viewfinder, ISO 800 film and no flash. (I couldn't get my hands on a disposable underwater camera with flash.) These are scans from the prints. Since the quality is so low, I posted them here. So: 1. 2. 3. 4. 5. 
It was fun, though!", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "Methods For Undersea Photography\nUnderwater photography brings the underwater world to the surface. Some wonder what swimming in the sea is like, yet do not want to learn how to dive. Underwater photographers have taken it upon themselves to bring the underwater world to those who do not want to dive or never had the opportunity. While all photography is an art, the underwater world demands special skills to bring out the finest quality.\nUnlike wildlife photography, the underwater world needs to be viewed up close; that is to say, marine life needs to be photographed closely. This is because of the water: it refracts images, often distorting them, so the closer you are to your subject, the less water there is between you and the subject. Underwater photography also requires a good deal of patience. Your subject may swim quickly by, like a shark, whale or dolphin, or it may hide within the coral reef, emerging only when it senses no danger. Water holds particles, most commonly living microorganisms called plankton; because these particles drift by while you are attempting to take an image, you can lose contrast and sharpness in the picture.\nMarine life relies on hiding more than on speed. This means you will often find your subject camouflaged rather than out in the open. You need to seek your subject with determination, without startling it. The underwater world demands respect. You do not want to touch the living organisms, so you need to learn to move with the current while trying to achieve the best shot. 
A lot of marine life will die if you touch it, coral especially, so underwater photography as a pastime requires you to follow the rules, a code of ethics.\nAn underwater flash, more commonly called a strobe, can help you gain the light you need to take an excellent photo. It is important to have a flash with an underwater camera; it will help you bring colors other than red and orange into the picture. The strobe only needs to be medium sized; any larger and it can hinder your shooting experience.\nComposition is also very important. You will follow the same rules you use in ordinary photography; however, you still need to shoot at an upward angle on the subject. This goes back to the camouflage strategy of the majority of aquatic species.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "Underwater photography provides a view of the world that most people don’t ever get to see. It’s expensive. You need to buy a lot of gear, take diving lessons, get certified, buy more equipment, buy camera equipment, pay for airfare, pay for a dive boat and much more. You get the picture (sorry).\nOn the other hand, imagine all the time I must have spent traveling and diving just to photograph. The exotic places, the fascinating people, the beautifully colored fish. Imagine the incredible investment I’ve obviously made to dive and photograph in beautiful, warm, tropical and exotic places. Well, guess what… I made this image just 40 miles from my home. Instead of diving, I simply go to the Monterey Bay Aquarium in Monterey, CA to do my underwater photography.\nFirst of all, I need to provide a plug for the Aquarium. I’m a member of the Aquarium and they are a significant beneficiary of my estate when I go to the great darkroom in the sky. I encourage you to visit. 
The work that they do to provide educational programs for children and adults is extremely important in helping to protect the oceans and the exhibits are among the best I’ve seen at any aquarium in the United States.\nThe best way to photograph at the aquarium is to arrive when it opens in order to avoid school groups and other visitors. You may not be able to use a tripod so use a monopod instead. Set your ISO at 800-1000 and don’t use a flash. It’s prohibited in many exhibits and, in any case, it will reflect off the glass right into your lens.\nNow you know how to be an underwater photographer. All you need to do is visit the Monterey Bay Aquarium. It’s worth the trip and it’s a lot less expensive than going to the Coral Sea.\nOne last thing: my friend and colleague Vic Smith recently began his own blog at Vic Smith Photo. Please visit his site.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "Before we send budding underwater photographers fleeing over the costs of putting together an underwater photography kit, we offer a couple of items that are free: used dryer sheets and heavy duty rubber bands. What to do with these comes later.\nAssembling an underwater photography kit requires time, patience, research, and a review of finances and goals. Is the goal to capture memories of dives to share with family and friends? Is the goal to become a photographer who makes money with his photos? Goal number one should be a new photographer’s priority.\nIt takes years of practice and an artistic eye to become a commercial photographer along with a substantial financial investment. As with many endeavors, it is much better to start ‘small’ and work up and as skill levels increase, upgrade equipment.\nThe first item in the underwater photography kit is a camera. With the advent of digital cameras the choice of cameras has greatly expanded as has the price range. 
That said, not every digital camera can be used in a hard-shell housing so, when considering a camera, make sure there is a compatible housing.\nWhat kind of camera? Here’s where finances come into play. There are many choices in the less expensive, non-DSLR (digital single lens reflex) category including compact digital cameras, bridge cameras – that are basically high-end compact cameras with a few features found in DSLR cameras yet without interchangeable lenses – and the newer MILC (mirror-less interchangeable lens cameras) which are smaller than DSLR cameras but without TTL (through the lens) technology found in DSLR cameras. Note: If selecting a non-DSLR camera choose one with built-in underwater photo settings as well as a macro setting.\nThe second category is the DSLRs with interchangeable lenses offering a much wider range of photo possibilities, fully manual as well as automatic functions, and larger sensors for much better performance in low-light situations. DSLRs with their TTL technology allow the photographer to see exactly what the lens is seeing because he is looking directly through the lens.\nOnce a camera is chosen, the next step is choosing the housing. First and foremost, it must be a housing designed specifically for the camera it will be protecting and it should allow access to all the camera’s functions. Hard-shell housings are made from a variety of materials including polycarbonates, PVC, and anodized aluminum. Some camera manufacturers carry brand specific housings and there are housing manufacturers that make housings for a variety of camera brands.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-4", "d_text": "In order to capture 180º of what’s in front of the camera, the lens often has a curve that causes the edges of the photo to “bend.” This is known as a fisheye effect. Vivid-Pix Land & Sea SCUBA is able to reduce this effect as well as correct color. 
Note how the rock seawall on the bottom has been adjusted from curved to the actual straight structure.\nEnter the Matrix\nAfter our Grand Cayman adventure, we were faced with the task of figuring out how to present all of the information we had collected in a usable manner. We decided that a matrix was the most effective way. It begins with a comparison of the photos we shot with each camera. After the photos is a comparison list of the various camera specifications.\nOur goal has been to provide as much information as we could to help consumers decide which camera or phone is best for vacation photos. But we are not going to express opinions as to which camera is “the best.” That is impossible. Every person has different needs, wants and budgets. Furthermore, every person sees photos differently. But we hope we have been helpful.\nWe both have been around cameras and water for a long time. As a result, we have seen cameras, trips and sometimes budgets affected when cameras were damaged by water. We suggest you recommend that your customer buy an underwater housing for his or her camera, even if it is advertised as being waterproof (or resistant).\nExplain that the camera housing is a watertight container that the camera is placed in but that lets photographers manipulate camera controls to take photos. While a housing will add cost to the unit, it will also add value. Better to add a bit of cost than lose a trip’s worth of photo opportunities and memories due to a flooded camera.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-0", "d_text": "I've been diving for a few years, 52 dives mostly in Vancouver and Maui. I've been diving with my gopro hero 3 (no tray or lighting), and have some fantastic footage of big stuff like turtles, eels, octopus, etc. on good visibility days. 
I've gotten pretty good night-dive footage as well, holding a dive light in flood mode in one hand and the gopro in the other. I have an Olympus TG-2, but I don't have a housing, so haven't brought it out diving.\nHowever, my favourite things to see underwater are nudibranchs, and I haven't been able to get any decent footage with my current setup. I was lucky enough to see harlequin shrimp a few weeks ago (diving in Maui with Mike Severns) and my gopro footage was blurry - that's when I decided I want to get into underwater photography and focus on taking pictures of small stuff underwater.\nAbove ground, I love taking photos but I've never ventured beyond a point-and-shoot, and nowadays I take all my pics with my iPhone. So I'm thinking I should start simple with underwater gear, and then gradually upgrade equipment as my skills improve over the years.\nI've been doing some research, and to get better shots of small stuff it seems I have 3 options:\n1) Get a housing for my Olympus TG-2 (seems hard to find cause it's an older model). Start out using the built-in flash, and then look to add a strobe with a tray / arm as the next step.\n2) Get a macro lens for my GoPro Hero 3. Is there much difference between a video light and a good dive light? I suppose I'd want to get a tray and arm for the light to make filming easier.\n3) Buy a new camera and housing. Maybe mirrorless? 
Start out using the built-in flash, and then look to add a strobe with a tray / arm as the next step.\nI'm leaning towards #3; I know that this is something I'm going to want to keep doing and improving, so might as well get off to a good start.\nAny thoughts / advice would be greatly appreciated!", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "(CNN) Seems like everyone’s a halfway decent photographer these days thanks to better gear, flashy phones and clever digital filters.\nWhen it comes to underwater photography, though, few people have what it takes.\nPerfect sub-aquatic images involve a new level of skills, like framing a shot while maintaining buoyancy and not getting eaten by predators.\nSo where to start?\nWe asked some of the planet’s best underwater shooters for tips on the technique and equipment they use to make superlative marine photography.\nThey shared their secrets and some of their best shots.\nNow it involves hours each and every day with my computer working in Adobe Lightroom, Photoshop and on my website, along with Instagram and Facebook, to get my images out all over the globe.\nOh, and periodically I get to take my camera underwater and shoot new images somewhere in the world, or in my backyard, which continues to be a rather fascinating latitude.\nWhat’s the hardest aspect of underwater photography?\nIf you told a professional above-water photographer that you wanted to hire him to shoot, but you can’t be sure what the subject really is or if the subject will be there, he must select his lens before he begins, carry all his camera and lighting equipment during the entire process and would only have an hour to shoot before he runs out of air — this would be where you hear a “click” on your end of the conversation.\nIf you had to give one tip for capturing compelling images, what would it be?\nPicture the image you want to get.\nCheck everything on the display on the back of your camera to be sure and then 
continue with the same subject, changing angles, backgrounds, anything that you can think of.\nUse that unlimited number of exposures and then try what you don’t think will work, or what someone told you would not work.\nYou might surprise yourself and the rest of us.\nWhat camera setup would you recommend to the scuba diver just getting into underwater photography?\nI have shot with Canon SLRs for 40 years and continue to recommend them.\nI use Ikelite housings and strobes because they were the first company to really nail down the electronics to utilize TTL strobe exposures, which make a huge difference in many cases underwater.\nThat said, I did a trip to shoot great white sharks off Guadalupe Island last year and the guy beside me in the cage got some great shots with a GoPro on a stick.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-2", "d_text": "- Use an acrylic box, take your photo on top of the water surface.\n- Stop all pump and wavemaker in the tank\n- Do not use the zoom function unless necessary, as it will result in poorer quality picture.\nCamera Phone or Underwater camera?\nWe make an comparison between using a cameras phone and a Olympus camera to see the difference.\nImage Captured using iPhone 12 Pro with a coral lens\nimage Captured underwater, inside of the tank using Olympus TG-4 with auto white balance function\nThank you for reading my article. I will share some of the macro shots taken below for everyone to enjoy!\nTrue Rasta taken using TG-4\nMy favorite dragon eye zoas\nVampire Slayer taken using TG-4\nAre you able to identify these three zoas?", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-24", "d_text": "If anything happens to these O rings, whether it be breakage, cracking or even a piece of sand on the ring, water will slowly seep into the camera. 
It is very important to check the O rings before every underwater use.\nIf you are using the camera in salt water, you must be sure to soak the camera in fresh water as soon as possible. The salt from the water can eat away at the seal on the camera and cause a leak. Once you have soaked the camera in fresh water, let it dry completely before opening the camera to change batteries or remove the memory card.\nUse common sense when using your underwater camera and don’t do anything that can cause water to get inside of the camera, such as opening it underwater or too soon after being in the water, before it is completely dry.\nFollow all of the care tips that come with the camera every time you use it. Some cameras may even come with their own cleaning brush. If you follow all of the maintenance and care steps, your underwater camera should last for years to come and give you the great underwater shots that you enjoy taking.\nCapturing the Underwater World with Photography\nPeople have always been fascinated with the oceans and the bays of the world as an unknown world. People who go diving wish to make the underwater world visible to those who don’t dive. Bringing the underwater world home using digital underwater photography is growing every year.\nThere are many types of underwater cameras. There are the highly expensive professional cameras, and then there are the single-use versions that are only slightly effective. But knowing which camera is needed when and where is extremely important.\nThe 35 mm cameras are just point-and-shoot cameras. If they were meant for underwater use, chances are that they at least have a mild filter to correct the colour cast. But these cameras do not filter out the particles that are found floating along in the water on a poor-visibility day, so these cameras should not be used below 80 feet. 
Moreover, losing pictures because the housing failed is something that no photographer wants.\nThe more professional cameras are larger and come with huge lenses to let light in, as well as filters to help bring clarity to any photograph. Digital cameras are the best way to take underwater photographs because the photographer can make sure he has the desired effects before leaving the scene.\nMost underwater cameras also have flash, but it is wise to learn the basics of underwater flash photography before taking any pictures.
This is especially important if you are using a high-zoom lens or shooting at night, since small changes in the camera’s position will result in major blurring.\nIt is a good idea to experiment with the different features your camera has and also with many colors and angles. A good picture isn’t all about the subject; it’s also about the artistic way it is portrayed. A good photographer infuses his talent and intuition into his photos to make boring objects look interesting. Experiment with different techniques to develop your own style.\nAlways pack your photography equipment with great care. Take all different kinds of lenses, and make sure you take cleaning accessories and enough batteries. But don’t pack too much here. Only take the equipment that you will need. Anything else runs the risk of getting lost or damaged.\nTry getting closer to the subject that you are trying to photograph. Nothing is as bad as taking a photograph of something that is not close enough to see well. By getting close, you afford your viewers a clear, detailed view of your subject.\nTake the time to improve the sharpness of your shots by adding a key piece of photographic equipment to your arsenal. This would be a tripod.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "Norbert Wu reclined on the Antarctic Ocean floor beside anemones and bush sponges and shot his dive partner hovering just beneath the ceiling of summer sea ice. To photograph Antarctica's benthic community over a two-month period, the divers drilled an entry hole through ten feet of ice, then capped the opening with a Styrofoam cover to prevent it from freezing shut.\nWu used a 24mm lens and 100-speed film, and exposed the frame at f/16 for 1/125 second.\nChristopher Ross wore a nonbubbling, military-issue rebreather so he wouldn't spook the hammerheads schooling off Cocos Island, Costa Rica. 
He dove 110 feet and positioned himself atop a rock pinnacle, where the local sharks gather to have their skins nibbled clean by wrasses, a symbiotic arrangement that frees the sharks from irritating parasites and provides the fish with protein. "The hammerheads look ominous, with eyeballs on the sides of their heads," says the Atlanta-based photographer, who specializes in monochromatic underwater images. "But they're really not aggressive at all."\nRoss used a 20mm lens and exposed 50-speed black-and-white film for 1/60 second at f/16.\nPatricia Sener knew exactly what image she was looking for when she traveled to Bonaire last November to photograph the first annual Deep Blue 5k Swim. After taking a crash course in scuba diving the day before the race, Sener dove ten feet down to the Caribbean's sandy bottom, not far from the starting line. "The beginning of an open-water race is the most interesting to shoot because it's the only time the swimmers are in a pack," says New York-based Sener. "I was completely focused on getting the shot. And breathing. It was crazy and exciting and over in about seven seconds."\nShe used a 20mm lens set at f/5.6, 100-speed film, and a shutter speed of 1/250 second.\nJames Fee shot this unhappy blowfish (it had puffed itself up in self-defense) while diving a reef off northeast Peleliu Island, near Palau, in the western Pacific. "A Korean boatman named Mr. Kim, who's lived on the island for 18 years, took me out to search for blue starfish," says the 54-year-old L.A.", "score": 15.758340881307905, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "Underwater photography is usually divided into two categories. The first type of underwater photography uses submersible cameras, while the second type of underwater photography refers to underwater pictures that are taken using a non-submersed camera. 
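The exposure data quoted in the dive portfolio above (f/16 at 1/125 on ISO 100 film; f/16 at 1/60 on ISO 50 film; f/5.6 at 1/250 on ISO 100 film) can be compared on a single scale using exposure value. A minimal sketch, not from the articles themselves (the function name and the ISO 100 normalisation convention are assumptions for illustration):

```python
import math

def ev100(f_number, shutter_seconds, iso=100):
    """Exposure value normalised to ISO 100:
    EV = log2(N^2 / t) - log2(ISO / 100).
    A higher EV means the settings admit less light,
    i.e. they assume a brighter scene."""
    return math.log2(f_number ** 2 / shutter_seconds) - math.log2(iso / 100)
```

Wu's f/16 at 1/125 on ISO 100 film works out to roughly EV 15, the classic "sunny 16" value, and Ross's f/16 at 1/60 on ISO 50 film lands at nearly the same EV, so both frames assumed similar ambient light despite the different film speeds.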
In the second category we will, for instance, find all the photographs that aquarists take of their fish from outside the aquarium. At first glance, you might not consider this true underwater photography, but if you wish to take great pictures of your aquarium fish you can actually learn a lot from the professional underwater photographers who use submersible cameras. Both types of underwater photography are forced to deal with the fact that light behaves differently in water than in air. There are naturally also large differences between the two types of underwater photography. When you photograph your aquarium fish from the outside, you must for instance try to avoid reflections in the glass. This is not a problem when scuba diving photographers use a submersible camera to capture the underwater world.\nPhotographing fish from outside the aquarium\nIt is naturally possible to use a submersible camera in an aquarium, but due to practical problems it is not very common. Most pictures of aquarium fish are instead shot from outside the glass, using a normal un-submersible camera.\nA high-quality camera is naturally a good thing, but there are several other factors that play vital roles in aquarium photography as well. A good camera will not automatically produce great photos, and great aquarium pictures can be taken using quite an inexpensive camera if you plan your picture carefully. One of the most important things to keep in mind is that it is harder for light to penetrate water than air. If your aquarium is deep, you might therefore have to use extra lighting if you want the true colours of your fish to show. A 20-inch-deep aquarium can need twice as strong lighting as a 10-inch-deep one.\nA second factor to take into account is the water quality. When you take fish pictures in the sea or in a lake, you can’t do much about the water quality except for trying not to disturb the silt. 
When you are photographing fish in your own aquarium, the water quality is, however, completely in your hands. Floating debris in the water can be devastating for an otherwise great picture. A problem with floating debris is that it is often very complicated and time-consuming to remove digitally.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-3", "d_text": "It’s more about the effect than image quality…\nSo what photos can you get? – Well, I’m not going to lie, it’s hit and miss. Here are a few examples that (with a little care) have worked out quite well:\nI think we can all agree – the image quality here isn’t great, but it’s great for a personal memory. However, these have all had their brightness, contrast and, most importantly, their white balance adjusted.\nDisposable cameras in the sea will all have a blue colour cast which needs to be corrected, or it can make a photo really disappointing. This can be done in several free or inexpensive software programs (such as GIMP or PhotoScape) – but only if you have a digital copy of the image.\nUnderwater digital photos on a tight budget?\nSo – let’s look at options when you have had a bit of time to plan. Just how cheaply can you take photos underwater?\nJust about the cheapest way is to use an underwater camera bag. There are loads on Ebay and Amazon – and here is one I bought earlier this year for about £3 (including P&P).\nI don’t know about you, but I would be very dubious about sticking an expensive camera in one of these and just diving into the sea. The problem is, there’s not really a good way to test them without putting something electric in them and going for a long swim… (If it doesn’t work, I accept no responsibility…)\nBecause I was worried, I bought a cheap second-hand camera for £7. It’s 7.2 MP and has since become a firm favourite. I’ve dived with it several times and it still works! At £10 in total this is cheaper than a disposable camera. 
BUT – the results can be disappointing.\nThe key problems are:\nThe plastic “pouch” over the lens is not flat or perfectly clear, which plays havoc with the camera’s auto-focus. Since manual focus is impossible with most cheap cameras, this is a real issue.\nUsing the camera’s controls / buttons can be very difficult.\nThe bag is not well insulated, so your camera will get cold quickly, spoiling battery life.\nYou can get fairly good results by ensuring that the camera lens is right up against the plastic lens window. Alternatively, you can buy a more expensive bag with a solid plastic (or, better, glass) window – which, if right against the lens, will solve many of these problems.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-1", "d_text": "By making sure that the water in the aquarium is clear before you start taking pictures, you can therefore save yourself a lot of time and effort.\nOne way of lowering the amount of floating debris is to make a larger water change. Do not make this change right before the photo session, since a water change can disturb the substrate and temporarily make the water even more clouded. You can also carefully vacuum the substrate to remove debris. Just like a water change, vacuuming will disturb the debris and you must give the remaining debris time to settle before you start photographing. If you keep fish species that are very sensitive to rapid changes, a large water change can, however, be harmful. A series of smaller water changes is then recommended, and you should try to make the new water as similar to the aquarium water as possible when it comes to temperature, hardness, etcetera. Most fish species will, however, appreciate a bigger water change and can even start displaying stronger colours afterwards.\nFloating debris might look bad on a picture, but it is seldom dangerous for your fish. 
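The blue colour cast discussed in the budget-camera passage above can be corrected automatically as well as by hand in GIMP or PhotoScape. One common automatic approach, not named in the article, is the gray-world assumption: scale each colour channel so that its average matches the overall average brightness. A minimal sketch in pure Python on a list of RGB tuples (a real editor applies the same idea per pixel across the whole image):

```python
def gray_world_balance(pixels):
    """Reduce a colour cast by scaling each channel so its mean
    matches the overall mean brightness (gray-world assumption).
    `pixels` is a list of (r, g, b) tuples with values 0-255."""
    n = len(pixels)
    # Per-channel means and the neutral target level.
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3
    gains = [target / m if m else 1.0 for m in means]
    return [tuple(min(255, round(v * g)) for v, g in zip(p, gains))
            for p in pixels]
```

Because blue dominates underwater shots, the blue channel's gain comes out below 1 and the red channel's above 1, pushing the image back towards neutral.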
Another type of poor water quality is a high level of soluble waste in the aquarium, which is much more hazardous to your fish and can even be lethal. Even if the concentrations of soluble waste in your aquarium are not high enough to kill your fish or make them seriously ill, they can cause the fish to lose their colours. If you think that your fish look dull and listless, you should therefore check the levels of ammonia, nitrite and nitrate. By constantly keeping these levels as close to zero as possible, you can make your fish develop much more vibrant colours that will look stunning in your fish photographs. Taking great fish photos is really a long-term project. A nutritious diet and a well-decorated, stress-free aquarium environment that induces breeding can also greatly improve the coloration of your fish.\nWhen you know that you have an aquarium with supreme water quality, the next step is to give the glass a good clean. You can of course remove the lid and take photographs from above, but most aquarists wish to be able to photograph their fish from the sides as well. This means that dirt and algae on the glass might show up on the photograph, and a good scrub is therefore necessary. Keep in mind that a camera is very unforgiving and can make tiny spots and stains visible.
When you take a photo, by default your camera is going to decide what to focus on. You’ll see this happening by pressing your shutter halfway down and seeing a variety of dots light up, usually right over someone’s face or an object that stands apart from the rest. This is referred to as Automatic Point Selection (auto-area AF), and we want to change this setting to Single Point Selection (single-point AF). Because this setting varies on every single camera I’ve seen, you’ll want to check your manual or google “change single point focus [insert camera model]” for instructions. I promise that the results are worth the effort!\nPro Tip: Single-Point Selection will only work in the Creative Modes (P, S/Tv, A/Av and Manual), so be sure to switch your camera mode (the dial on top of your camera body) before trying to change this setting!\nCheck Your Shutter Speed\nShutter Speed is more than just capturing motion. We already know that we need a fast (1/400 or faster) shutter speed to capture a kids’ soccer game and that 1/200 works well for everyday use.\nWhen we’re allowing our camera to choose settings for us (either in Auto or in some of the Creative Modes), it will often choose a shutter speed that is so slow that the image looks out of focus, when the blurriness is in fact camera shake. Camera shake happens when the shutter speed is so slow that our own movements are caught on camera, creating a blurry photograph.\nPro Tip: To avoid camera shake, always make sure your shutter speed is AT LEAST 1/60 and preferably 1/125. 
If you’re shooting in P or A/Av and you can’t get the shutter into this range, try increasing your ISO.\nClean Your Lens!\nWhen someone tells me that they’re not getting crisp photos, the first thing I check (or ask them to check) is how clean the lens is.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "selection of useful tidbits of information and tricks for the marine aquarist submitted by Advanced Aquarist’s readership. Readers are encouraged to post their tips to our Hot Tips sticky in the Reefs.org General Reefkeeping Discussion forum or send them to email@example.com for possible publication. Next month’s Hot Tip theme will be “Getting Rid of your Excess Coral Frags“. Some people are new at getting rid of their excess coral growth and it would benefit them to find out how to work out deals with their LFS, etc.\nAquarium Photography Tips\n- Make sure you have a good macro lens if you are shooting little critters. Sometimes I need to get closer than one inch to get a good pic.\n- Watch for the flash’s reflection on the front glass if you use one. It can ruin your pic.\n- Try to shoot straight instead of at an angle. The acrylic and glass will give noticeable chromatic aberration (blue and red outline around a white object, for example) if you don’t shoot straight on.\n- Use a tripod if you have long exposure times; a remote shutter release will help even more.\n- White balance can make a big difference if you can’t get good color representation. Flash can give everything a yellowish look since most flashes are around 4000K.\n- Photographing a fish can take a good portion of an evening. They are more difficult to pose than 4 toddlers who love to cry.\n- Keep a camera handy for that once-in-a-lifetime shot! Ooops, dead batteries.\n- The best time to take pics can be when everyone is sleeping…. 
when the light is off, and your reef may take on a very different look.\n— Reef Box Etc\nIf you aren’t using a tripod, try to take a shot with timer mode. My Canon SD 300 has a 2-second timer option, so it doesn’t take the picture until 2 seconds after I depress the shutter button. It’s a lot easier to hold a camera still when you aren’t pushing down on a button.\n- Clean your glass/acrylic both inside and out before taking your shots.\n- Turn off your pumps before taking your shots – it will prevent your subject from getting blown around and it will help stop any particulate matter from floating in front of your lens.\nUsing an external flash for pictures of fish is a very useful thing.", "score": 13.897358463981183, "rank": 90}, {"document_id": "doc-::chunk-2", "d_text": "Algae are hopefully only present on the inside of the glass, but the outside can still be speckled with dirt, dust and fingerprints, so don’t forget to clean the outside of the glass as well.\nPhotographing fish using a submersible camera\nThe submersible camera is not frequently used for taking pictures of fish inside aquariums; it is instead popular among snorkellers and scuba divers. Many snorkellers purchase practical and inexpensive waterproof cameras, but these cameras are usually not strong enough for a scuba diving adventure. The traditional single-use underwater camera that you can buy in most seaside resorts will typically only withstand the water pressure down to 5-10 meters. If you want to go deeper, you will need a sturdier form of camera housing. A camera housing is a plastic container that you place a traditional camera in. It is important that you purchase a camera housing that has been made to fit your particular camera.\nJust like the aquarium photographer, the snorkelling or scuba diving photographer must remember that it is harder for light to penetrate water than air. The deeper you go the less sunlight will reach you. 
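Light loss with depth is roughly exponential, which is the same reasoning behind the earlier rule of thumb that a 20-inch-deep aquarium can need twice the lighting of a 10-inch one. A small sketch; the `half_depth` parameter (the depth at which intensity halves) is an assumed water-clarity figure for illustration, not a constant from the article:

```python
def relative_light(depth, half_depth):
    """Fraction of surface light reaching `depth`, assuming exponential
    (Beer-Lambert) attenuation where intensity halves every `half_depth`
    units of water. Any length unit works as long as both match."""
    return 0.5 ** (depth / half_depth)
```

If intensity halves every 10 inches, the bottom of a 10-inch tank receives 50% of the light and the bottom of a 20-inch tank only 25%, so the deeper tank needs twice the lamp power of the shallower one to look equally bright at the bottom.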
It is also common for underwater pictures to become blue, since water absorbs the red end of the spectrum first, so light that has travelled far down into the deep brings out different colours than light at the surface. A fish that would look red at the surface can therefore look almost blue further down.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-1", "d_text": "At the end of the day, I love the challenge and I love being able to visually document and share my perspective, and I think the key to capturing the moment is being able to connect with your subject, making them trust you, so that they can open up to you, whether that be on land or under water. Olivia felt comfortable and in return she performed an amazing underwater dance that showed the passion I was after and the magic of weightlessness.\nThe woman in the photos looks quite ephemeral, ghost-like, with the high contrast of her skin and the dark water. Was this intentional? If so, why?\nOlivia's photo shoot was planned a couple of weeks in advance. With Olivia being a dancer, I knew I wanted to see passion, motion and almost a dreamy romance with the water. With this in mind, I have no control over Mother Nature and had no idea what the weather or the water quality was going to be like. In addition, Olivia, who is only 15, had never been photographed in the water before and had no idea what she was up against.\nThe day of the photo shoot was in fact overcast; it had rained that morning, the water quality wasn't the clearest, and it was late in the afternoon, around 4.30pm. I knew then that I was up against some obstacles and that I wouldn't be getting any vibrant colours, and consequently shot for black and white. The high contrast is caused by a combination of a high ISO, water and ocean particles and ocean colours, and converting to black and white.\nHow did you take these images?\nI took these images using an older Canon and a wide 20mm lens and an AquaTech water housing. 
My ISO was at 800; the water wasn't the clearest and the weather was dull. But I kept my shutter at 1/320 as I wanted to show some movement but still keep the subject sharp. I used a medium aperture, f/5.6; shooting a moving subject in the water is rather challenging. And I wanted to show some depth of field, but not too much. I felt like f/5.6 was a good middle ground. To make things easier for my model Olivia, I dragged along a boogie board, so she could rest up whenever she got tired.\nAny advice for our readers who might want to have a go at taking images underwater?\nShooting under water is a different world.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-4", "d_text": "The issues with battery life and accessing controls will remain the same, though.\nA “proper” waterproof compact camera\nOf course, there are a whole range of custom-designed waterproof cameras out there, and after years of being prohibitively expensive, costs of some have now come down to below £100 in many cases (though well-known brands are still more expensive). In truth, these cameras don’t tend to stand up well against similarly priced regular (i.e. not waterproof) cameras on dry land. Image quality and optical zoom both tend to be limited. But in the water they are generally much better than other cheap options.\nAgain, you will want to make sure that you know how to edit your photos once taken. 
A lack of light and poor white balance are classic trouble-makers with these cheap cameras, though you would be amazed at the level of detail you can retrieve…\nA key point about dedicated underwater cameras is that they have autofocus mechanisms that will work, and a quirk of underwater photography is that water magnifies (so you can get better close-up shots).\nAs a final thought (though not strictly underwater) – if you have a waterproof camera with you and quick reflexes, you may one day get a picture like this.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "Michaela Skovranova is a world-renowned underwater photographer, filmmaker and Olympus Visionary whose work captures the beauty that exists beneath the water’s surface.\nIronically, Michaela began her journey into the realms of underwater photography in a bid to overcome her fear of the ocean; she moved to ocean-rimmed Australia from landlocked Slovakia when she was 13 years old.\nNow, she is renowned for her breathtaking visuals of marine life and aquatic environments captured in her signature, hauntingly ethereal style. 
From projects on humpback whales to coral spawning, her work has been shared by brands such as Adobe and Instagram, and she has collaborated with the likes of Greenpeace, The New York Times and National Geographic to produce captivating underwater stories.\nCameraPro was very lucky to have Michaela present a masterclass at one of our photography festivals.\nHere’s what we learnt.\nYou don’t have to go very deep\nMichaela says she doesn’t usually go deeper than 15 or 20 metres, and often her images are captured just below the water’s surface.\nGet to know your equipment\n“You don’t want to be wondering, ‘how do I change the settings?’ while a beautiful moment is unfolding in front of you – especially with the underwater camera housing, which can have lots of little buttons.”\nTo begin with, Michaela suggests using a fixed focal length, choosing the desired shutter speed and just changing the aperture or ISO as needed to keep it simple.\nShe says she doesn’t usually look through the viewfinder. “Instead I use the back screen, but sometimes the visibility is so poor that that option may not be available.” But being familiar with the settings of the camera and characteristics of the lenses allows her to capture the shot she was going for, even with limited visibility.\nIn low visibility, Michaela suggests pre-focusing and using a wide-angle lens – and again, really knowing your equipment.\nGet comfortable in the water\n“Keep practising. Even if you go to a local pool, a lake or a river – a place you are familiar with – it’s a great place to learn and get comfortable.”\n“I took swimming lessons and completed a free-diving course in addition to my scuba dive training. This gave me the tools to hold my breath for longer. 
I’m a big advocate of safety and simplicity,” she says.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "Wrecks, Silhouettes, Close-focus WA Compositions\nSUPPORT THE UNDERWATER PHOTOGRAPHY GUIDE:\nThe Best Service & Prices on u/w Photo Gear\nVisit Bluewater Photo & Video for all your underwater photography and video gear. Click, or call the team at (310) 633-5052 for expert advice!\nThe Best Pricing, Service & Expert Advice to Book your Dive Trips\nBluewater Travel is your full-service scuba travel agency. Let our expert advisers plan and book your next dive vacation. Run by divers, for divers.", "score": 11.600539066098397, "rank": 95}, {"document_id": "doc-::chunk-0", "d_text": "Have you ever seen an underwater photograph, depicting the coral reef maybe, or a school of vibrant fish swimming? Or perhaps you have had the chance to snorkel or go scuba diving and been introduced to the beautiful underwater world of marine life. What many of us overlook as a form of art, and fine art at that, is underwater photography. We are aware that photography is art and that taking pictures of beautiful landscapes captures people and often sells for hefty price tags. But what about the beautiful landscape that can be found under the sea?\nUnderwater photography is indeed categorized as an art form and photos often show the beauty of shipwrecks, coral reefs, different marine life and cave systems to name a few. What many of us may not realize is that being an underwater photographer, and a fairly good one at that, is a lot harder than we may think. Akima, an underwater photographer and friend of mine, shared that the biggest hurdle is the lighting. Why? The waves of sunlight that stream down onto the surface of the water are absorbed so everything appears blue and green.
Akima, who offers photography and snorkeling tours on the island of Oahu, explained that many times travelers enjoying one of her snorkeling tours will be mesmerized by the beauty that can be found underneath the surface of the water. They aim their underwater cameras thinking they can capture the exact image and often find that the picture just doesn’t do the view as much justice as it deserves. She explains the way the sun’s rays are affected by the water and provides these tips:\nTips To Capture Beautiful Underwater Pictures\n- Get as close to the subject of your photo as possible\n- Use a flash so that you can help to restore some of the color that was lost when the sunlight was absorbed by the water.\n- Use flash in “forced flash” mode\n- For best composition- shoot at an upward angle never downward\n- Get comfortable with diving/snorkeling and fine tune your skills before setting out to photograph\n- Try setting your camera at the highest resolution and the lowest ISO when starting out\n- To use natural light only shoot in 20ft of water or less\n- If your photos didn’t come out as sharp as you would like, try checking your shutter speed\n- You can either purchase an underwater digital camera or a special case that can go around your regular camera to make it waterproof\nThese are just a few tips to help to create and take the best underwater photographs as possible.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-1", "d_text": "Lanzarote is also a great place for a spot of stargazing.\nPerhaps you’re one of those people who watched Jaws and has decided it still isn’t safe to get back in the water, or perhaps, like me, you watched it and thought “wow that’s cool!”\nIf the latter is the case, you’ll probably spend half your life trying to find an excuse to jump in the sea, into a lake or even into a swimming pool with a pair of goggles on to find out what’s going on below the surface. 
And when you do, you probably want to get some good shots of the stuff you see (whether it’s your friends and family playing in a pool, a crab, a brightly coloured fish or Jaws). So how do you do it, and how much will it cost?\nAs ever, it will cost as much as you want to spend.\nIt’s clear that for the very best, super-sharp and well-exposed images at depth, you will need an expensive camera with high ISO (light sensitivity) capabilities. This may be a custom-designed underwater camera or a specialist, dedicated underwater housing for a DSLR. This, though, is the realm of the scuba diver, and nearer the surface (down to around 10m) you can get by with some pretty cheap and basic gear:\nUnderwater shooting with zero preparation\nIf you’re not a regular scuba diver, the times when you’re most likely to want to take photos under water are when you’re on holiday. You might be by the sea in Cornwall, or in the Mediterranean or on the Pacific coast. Wherever the sea is, there is the desire to jump in it and boat on it.\nHowever, most cameras are not waterproof. Take it from someone who knows: you don’t want to take a decent camera out, even on a boat, without protection if you want it to come back working. Ideally, you want to think about this before you go away, so that you can get a waterproof camera or some sort of housing.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-0", "d_text": "I have always enjoyed shooting macro photography ever since I purchased my first camera. Underwater macro photography is challenging due to currents, buoyancy control and back-scatter in the water.
Below are a few underwater images photographed using the Canon 70D with a Canon EF 100 mm f/2.8 USM macro lens. The camera and lens are enclosed in an Ikelite underwater housing. I am currently using one Ikelite DS-160 strobe along with one Ikelite DS-200 underwater.\nIf you are living in Okinawa, Japan and would like to purchase any Ikelite product, I highly recommend Ikelite Military Sales. You can contact them directly on Facebook with the link below. I usually receive my orders within five to seven days. This is very fast shipping living overseas.\nIf you are having trouble with the initial setup of your underwater system and need assistance, contact me.\nStay tuned for more underwater images with the Canon 70D.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-2", "d_text": "For more winning photos and prizes visit the Digital Shootout website at www.thedigitalshootout.com", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-30", "d_text": "Even though waterproof cameras are heavier, they have other great features to complement this. They are resistant to shock, dust, extreme temperatures and, of course, water. This makes them very versatile for use in different environments under different conditions.\nSince your vacation will entail a lot of traveling, the camera should come with a lithium-ion battery.
These batteries are known to have a long life, so it will be a while before you need to recharge them. And since the batteries are rechargeable, you do not need to worry when your camera goes off. The waterproof camera should also come with an internal memory of high capacity to store your photos before you can transfer them to your computer or any other external storage device.\nThe camera should also be bought from a reputable company that offers warranties to fix the camera in case it malfunctions, as some breakdowns are inevitable. Whatever your application, be sure the underwater camera you seek is worth investing in.\nInvesting in an underwater camera is a perfect idea when planning for a trip to some tropical locale or even the Caribbean. Underwater digital cameras are perfect for capturing adventures like scuba diving and snorkeling.\nBuying the perfect underwater camera for your needs requires the user to ensure that the camera can perform as he wants it to. Not all underwater cameras are created equally, so reading about various underwater cameras and their reviews, and then paying attention to the reviews that pertain to the activities the user is going to engage in, helps make a wise decision while buying an underwater digital camera.\nMany underwater cameras come with an LCD screen that allows the photographer to see what exactly he is shooting and how the picture is going to turn out before he actually pushes the button.\nAfter having managed to buy a good underwater digital camera it is necessary to have some basic know-how about filming underwater.\nLighting for the perfect shot when shooting underwater requires knowledge about natural light or strobe effects underwater.
At depths of less than 20 feet and in clear waters, it is often best to use the ambient sunlight, while an off-camera strobe can be most useful in less clear water, reducing glare and bluing of the image.\nColor contrast determines how true to life the photo of the underwater world appears. To offset problems like muted blue scenes or red-orange tinted disasters, it is better to use off-camera strobes rather than flash. Every foot of distance underwater equals 2 feet of bluing effect.\nIt’s also important to know that wide-angle lenses are best for underwater filming because water magnifies images.", "score": 8.086131989696522, "rank": 100}]} {"qid": 18, "question_text": "How old are most of the atoms in the human body?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "Most of the atoms in your body are 13.7 billion years old, and being you is just the latest page in the incredible story of their life.\nThere’s no doubt that they’ve been inside of stars, and floated suspended in outer space for far longer than our species has been around.\nThey’ve washed through the chemical cycles of the Earth countless times, which might have included being frozen to the top of a mountain in one eon, and stomping through dense jungles as part of the thigh bone of a brontosaurus in the next.\nIt’s like looking in a mirror, right?\nBut how is this possible? Don’t atoms break down eventually? And how can they move from the top of a mountain to a prehistoric jungle?\nWe can use modern science to answer these questions and see the story of us from its true beginning. Along the way we will discover how we are born of the universe, not separate, like a wave that emerges from an ocean.\nAtoms are the minuscule LEGO blocks of everything we see around us. They make up the cells that make up our bodies, and although cells have a lifespan of a few days to a few years, most atoms will coast around the universe for 10 million billion billion billion years before they break down.
They are practically immortal.\nTo find out where their story starts, the lens we have to use is a field of science called astrochemistry, which is the study of molecules in the universe.\nThe different types of atoms (called elements) have slightly different but parallel stories, though they all begin in the same place: the Big Bang.\nThe Plasma Storm\nWhile the nature of the Big Bang itself remains one of the greatest unsolved mysteries in science, we do have a fairly good grasp on what happened immediately afterwards.\nFrom a microscopic point, the universe erupted outwards in a condition of unimaginable heat and pressure. From the sheer amount of energy coursing through the fabric of reality, the first quarks seared into existence like waves erupting from a turbulent ocean.\nWithin the first second, quarks joined together to form protons and neutrons. They formed an opaque cloud of plasma so vast that it stretched across the entire universe. It rippled with light and electricity, and may have looked something like a combination of being inside a plasma globe and a lightning storm.", "score": 53.181326366883404, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "- Sixty percent of the matter of our bodies is made up of hydrogen atoms, which were present in the first years of the fireball, 13 billion years ago.\n- The remaining 40 percent of our bodies is made up of atoms forged in the stars about 5.5 billion years ago.\nThis means we’re all made from the same material no matter who we are – all part of the star dust of the galaxies.\nWe breathe in and out together as one at every moment… real-time evolution.
To me that is profound.", "score": 50.509042761403784, "rank": 2}, {"document_id": "doc-::chunk-1", "d_text": "The universe passed about 380,000 years in this dense, violent plasma storm, a time so extraordinarily long on human timescales that it would have encompassed the entire history of our species.\nBut the same explosive force of the Big Bang that created the plasma storm kept the universe expanding, and eventually, it started to cool off.\nThe electrons which rippled through the cloud began to combine with its protons and neutrons to form the transparent gases hydrogen and helium, and thus the first complete elements to exist in the universe.\nAbout 60% of the atoms in our bodies are directly descended from this hydrogen and helium.\nThe plasma storm began to fade and was replaced by this new, transparent cloud, and the universe began to resemble space as we now know it.\nInside a Star\nIn the cold silence of space, your atoms would have been witness to one of the most sublime visions in the universe: the formation of the Milky Way galaxy through the veil of a nebula.\nThey started to feel the pull of gravitation. First subtly and slowly, but soon like a colossal riptide, they were pulled into the gravity well of a still-forming giant star, one of the ancestors of our Sun.\nAs more material fell into the growing star, the pressure felt by your atoms climbed to over 250 billion times the pressure of our atmosphere. A dull glow began as the star ignited, which soon became a heat and light hotter and brighter than anything we could imagine.\nYour atoms spent hundreds of millions of years here, adrift in the ebb and flow of the internal storms of the star.\nSome fell deep into the star’s core.
Here they were subject to pressure that was extreme even compared to the rest of the star, and in this furnace atoms of hydrogen and helium fused together to become oxygen, carbon, iron and other elements, releasing bursts of heat and light as they merged.\nThe light from the Sun that warms your skin during the day, and the flickering of light from the stars at night, originates from the same brutal process of fusion.\nAfter three to four million years the giant star began to run out of its hydrogen and helium fuel. At the same time, its waste products of oxygen, carbon, and iron began to build up, and its light began to dim.\nIt erupted in a supernova explosion, a blast so violent that it would have been visible from the other side of the Milky Way galaxy, if there had been anyone there to see it.", "score": 48.60063307346579, "rank": 3}, {"document_id": "doc-::chunk-1", "d_text": "In the cores of these stars nuclear fusion began: hydrogen fused into more helium, helium fused into carbon, and so on through various other elements up to iron. For smaller stars, this is where it stopped. However, some of the most massive stars underwent enormous supernova explosions. In the supernovas, conditions were right to form even heavier elements, all the way up to uranium.\nIt is a sobering thought that every atom in your body can be traced back (in theory anyway) to the Big Bang, and that the nitrogen and oxygen you breathe and the carbon your body is composed of all came from some ancient star. And the heavier elements, of which there are traces in your body, formed in the fiery furnaces of supernovas! No wonder the late Carl Sagan was fond of saying that we are all "star stuff."", "score": 47.71007560253543, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "
Every Ash Wednesday, those of us who know the truth proclaim the truth for all the world to know.\nThe following is taken from the book \"The Fifth Miracle: The Search for the Origin and Meaning of Life,\" by physicist Paul Davies.\n\"When an organism dies and decays, its atoms and released back into the environment. Some of them eventually become part of other organisms. Simple statistics reveal that your body contains about one atom of carbon from every milligram of dead organic material more than a thousand years old. This simple fact has some amazing implications. You are for example, host to a billion or so atoms that once belonged to Jesus Christ, or Julius Ceasar or the Buddha or the tree that the Buddha once sat beneath.\nNext time you look at your body, reflect on the long and eventful history of its atoms, and remember that the flesh you see and the eyes you see them with ARE LITERALLY MADE OF STARDUST.\nMichael S. Rother", "score": 42.70444617914323, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Our body is made up of various molecules and atoms. But where did these come from? And how were they made? How much of the human body is made up of stardust?\n'Since stardust atoms are the heavier elements, the percentage of star mass in our body is much more impressive. Most of the hydrogen in our body floats around in the form of water. The human body is about 60% water and hydrogen only accounts for 11% of that water mass. Even though water consists of two hydrogen atoms for every oxygen, hydrogen has much less mass. We can conclude that 93% of the mass in our body is stardust.' - excerpt from article found at PhysicsCentral.com\nimage taken from http://disjointedthinking.jeffhughes.ca/", "score": 42.51315095874829, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "\"Free thinking,\" human secularists CHOOSE to ignore the truth!\nScience has provided us with the truth. 
Every Ash Wednesday, those of us who know the truth proclaim the truth for all the world to know.\nThe following is taken from the book \"The Fifth Miracle: The Search for the Origin and Meaning of Life,\" by physicist Paul Davies.\n\"When an organism dies and decays, its atoms and released back into the environment. Some of them eventually become part of other organisms. Simple statistics reveal that your body contains about one atom of carbon from every milligram of dead organic material more than a thousand years old. This simple fact has some amazing implications. You are for example, host to a billion or so atoms that once belonged to Jesus Christ, or Julius Ceasar or the Buddha or the tree that the Buddha once sat beneath.\nNext time you look at your body, reflect on the long and eventful history of its atoms, and remember that the flesh you see and the eyes you see them with ARE LITERALLY MADE OF STARDUST.\nMichael S. Rother", "score": 41.34305234783318, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "If we are our bodies, what are our bodies made of?\n99% of the total number of atoms in the human body are either Hydrogen (H), Carbon (C), Nitrogen (N) or Oxygen (O). Ratios are approximately:\n- 63% Hydrogen\n- 24% Oxygen\n- 9% Carbon\n- 3% Nitrogen\nSeven elements make up 0.9% of the remaining atoms. They are: Sodium (Na), Magnesium (Mg), Phosphorus (P), Sulfur (S), Chlorine (Cl), Potassium (K) and Calcium (Ca).\nThe last 0.1% is divided between eleven elements that are needed in trace amounts, all of them metals or metaloids except for Fluorine (F), Selenium (Se) and Iodine (I). There’s also three more that we aren’t sure yet if they are required for life or not: Boron (B), Silicon (Si) and Nickel (Ni). For their atomic masses, see the periodic table of the elements above.\nI find it fascinating that our extreme complexity can be reduced to 4 main building blocks. 
Very light elements, too.\nSource: Alberts, Molecular Biology of the Cell, 5th Edition. P. 47, 49.", "score": 40.681389818128935, "rank": 8}, {"document_id": "doc-::chunk-1", "d_text": "Your very own, unchangeable, solid, "me," "Self", is constantly broken down and remodeled.\nThe life of one cell in our body is short and brutal. Our DNA is being attacked 100,000 times a day. Enzymes are continually trying to fill up the holes and trying to keep the whole thing together. But they can't succeed for very long in preventing your decay. White blood cells live only a couple of days. Within a month your complete skin has been renewed. During the time that you're reading these words, billions of cells have died and been replaced by new cells. They died and were renewed. Cells are dying and being born constantly. Constantly being renewed. How miraculous it is that you don't notice anything of this happening right now, of this great big maintenance. You can just sit and read or listen, undisturbed by these changes in your body.\nBut if you don't feel like you felt before, that's understandable. This permanent recycling has great consequences. The atoms which all of us are built of, all of them have been used before. Nothing is new. Every atom in our bodies has been used before. You are carrying thousands of atoms of almost every person that has ever been alive. So there's a little bit of Caesar in you, and some of Marie Antoinette, together with some traces of some Egyptian Pharaohs, or the Buddha, or Chinese Emperors… and there's no reason to restrict that to just mammals. In a molecular sense, you are a walking history book of life on Earth.\nFor instance, take one simple carbon atom in your left pinky. Carbon atoms originate from very powerful explosions of stars.
And on Earth they are usually captured within rocks or coal or diamonds, and they can stay there for hundreds of millions of years. Sometimes one of these atoms escapes, and during this short escape, during this holiday or time off – that's usually just a couple of million years – it will float on Earth in the form of carbon dioxide, dissolved in oceans and the atmosphere, and once in a while the atom will come to life, for instance, as part of an organism, like a leaf of a fern, or the wings of a tropical butterfly, or in the hair of a mammoth, and now it's in your left pinky: The same atom.", "score": 40.56407421186341, "rank": 9}, {"document_id": "doc-::chunk-1", "d_text": "Are humans made of matter? Yes or no?\nAbout 99 percent of your body is made up of atoms of hydrogen, carbon, nitrogen and oxygen. If we lost all the dead space inside our atoms, we would each be able to fit into a particle of lead dust, and the entire human race would fit into the volume of a sugar cube.\nWho first said we are stardust?\nMost of us are familiar with the saying, made popular by astronomer Carl Sagan, folk singer Joni Mitchell, and countless inspirational posters and billboards: We are stardust.\nWhat are humans made of?\nAlmost 99% of the mass of the human body is made up of six elements: oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus. Only about 0.85% is composed of another five elements: potassium, sulfur, sodium, chlorine, and magnesium. All 11 are necessary for life.\nWhat is a forming star called?\nThis continues until the gas is hot enough for the internal pressure to support the protostar against further gravitational collapse—a state called hydrostatic equilibrium. When this accretion phase is nearly complete, the resulting object is known as a protostar.\nHow is the energy of a star produced?\nStars produce energy from nuclear reactions, primarily the fusion of hydrogen to form helium.
These and other processes in stars have led to the formation of all the other elements.\nHow old is the universe?\nUniverse is 13.8 billion years old, scientists confirm. Scientists estimate the age of the universe by measuring its oldest light.\nIs Mars made of matter?\nMars is a terrestrial planet with a thin atmosphere, with surface features reminiscent of the impact craters of the Moon and the valleys, deserts and polar ice caps of Earth.", "score": 39.33244240473465, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "What are humans made out of?\nAlmost 99% of the mass of the human body is made up of six elements: oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus. Only about 0.85% is composed of another five elements: potassium, sulfur, sodium, chlorine, and magnesium.", "score": 38.91450096953767, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "Consider the following:\nWhich among the following is the correct decreasing order of average percentage of the above elements in the human body?\nAverage percentage by mass of the elements in the human body is oxygen (65.0%), carbon (18.0%), hydrogen (10.0%), nitrogen (3.0%) and calcium (2.0%), phosphorus (1.0%), potassium (0.35%), sulphur (0.25%), sodium (0.15%), chlorine (0.15%), magnesium (0.05%) and iron (0.004%). Besides these, there are also traces of iodine, fluorine, copper, silicon, etc.
present.\nThis question is a part of GKToday's Integrated IAS General Studies Module", "score": 37.3069012275491, "rank": 12}, {"document_id": "doc-::chunk-1", "d_text": "It's made up of about 65 percent oxygen, 18 percent carbon, 10 percent hydrogen and 3 percent nitrogen by mass, while the remaining 4 percent consists of traces of almost every other naturally occurring element in the table.\nSince the time of Lavoisier, Dalton and Mendeleev, chemists, physicists and astronomers have made huge strides in understanding the origin of elements. Thirteen-plus billion years ago, soon after the creation event we term the Big Bang, the universe consisted only of the lightest elements hydrogen, helium and lithium (one, two and three protons, respectively). No carbon, no oxygen, no zinc, no iron, no uranium: These had to be synthesized from the original three elements. How? By nuclear fusion. Where? Inside stars. So many technical problems had to be solved in order to arrive at a complete understanding, that detailed answers have only been available in the last 50 years. And the biggest hurdle investigators had to overcome is right here in your blood: iron.\nNext week: Forging the elements.\nBarry Evans (firstname.lastname@example.org) recommends Tom Lehrer's The Elements for anyone wanting a quick brush-up. Son of Field Notes (a compilation of Barry's columns for the last 18 months) is for sale at Eureka Books and Northtown Books.", "score": 37.118149003757914, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "Look at this baby.\nLovely, no? Now think of this baby abstractly — as a sack of hundreds of millions of atoms. 
Here's the atomic formula for a new human being, arranged by elements, according to scientist Neil Shubin.\nNotice that the two most plentiful atoms are H (hydrogen) and O (oxygen) which shouldn't be a big surprise, since 2 H's and an O make water, and we humans are very moist, especially when we're born.\nIt turns out, a brand new human baby is 75 percent water.\nWe're born as wet as a fresh potato. Tomatoes are wetter (93.5 percent water). Apples, too, but only slightly (80 percent). Check out this fruit vs. baby comparison.\nOK, we aren't as wet as watermelons (who'd want to be?), but still, we begin our lives as noisy dewdrops that will one day learn to crawl, then walk. As science writer Loren Eiseley once put it, people \"are a way that water has of going about, beyond the reach of rivers.\"\nAging = Drying\nBut then, with every step we take, we begin to dry. The longer we live, the drier we get. One year after birth, a human baby is only 65 percent water – a ten percent drop, says the U.S. Geological Survey.\nBabies are wetter than children. By the time we're adults, the USGS says, adult men are about 60 percent water, adult women 55 percent. Elderly people are roughly half water.\nThere are variations, of course. The more buff you are (muscle tissue stores more water) the wetter you are. Because women generally have more fat cells, they tend to be a bit drier. Fat cells aren't as moist. The water that lubricates your joints, flushes your waste (I'm talking about pee), assists seminal reproduction, and absorbs shocks to your bones — as you age, the moisturizer in you slowly dwindles.\nAnd the odd thing is, our wet parts aren't where you'd think. I figured if some giant fist were to plunge out of the sky and squeeze a human like a sponge, the wettest bit would be our blood. That's wrong.", "score": 35.95370750851321, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "Do you know where you will be in three years? 
The material of living things, including you, originates from a surprising place. Everyone knows the food chain base is plant life on land, while ocean life originates from phytoplankton and algae.\nThese microscopic creatures use sunlight, like plants, to produce 66 percent of the oxygen in our atmosphere. Most of the oxygen in every breath you take, which then spreads through your bloodstream, was once in these tiny ocean creatures. The remaining oxygen is produced by plants on land.\nHumans are physically made of what they consume. A 200-pound person, on average, is only 60 pounds of solids and 140 pounds water, about 70 percent. The base of our diet is plants on land or phytoplankton in seafood. All animals are made from reorganized atoms taken from consumed plants and algae.\nMost people think trees grow using the materials in soil, but that's not true. A seed can grow in a bag with no dirt and a small amount of water. The material in plants and trees is 98 percent from elements in Earth's atmosphere and only 2 percent from soil.\nPlants pull atoms from the air while humans eat to replace nearly all cell material. The longest to regenerate is the spinal cord taking around three years. Some cells never grow back, like many neurons in our brain, but the physical structure of the cell is replaced with fresh nutrients.\nThree years from now, the only physical part of yourself to remain will be the tiny amount of heavy metal in your brain since birth, and possibly ink from a tattoo or surgical implants.
If you are lucky enough to still be alive in a few years, the materials that will make up your body are now in the atmosphere all around the world, in seawater, in sunlight yet to be released from the sun, and in the water in clouds, rivers, lakes, and oceans of the planet Earth.\nPlants will self-assemble out of thin air and dirt, be consumed by animals and eventually reorganize according to DNA, leading to you.\nMetro does not endorse the opinions of the author, or any opinions expressed on its pages.", "score": 33.947940226027406, "rank": 15}, {"document_id": "doc-::chunk-8", "d_text": "The idea of using chemical isotopes to combat ageing may be new, but nature may already be onto that strategy as a way of protecting us against free-radical attack, thought to be a key cause of ageing. Babies and mice are born with much more of the isotope carbon-13 in their bodies than their mothers, and women appear to become unusually depleted in carbon-13 around the time they give birth. Both findings suggest that there is active transfer of carbon-13 from mother to fetus. One possible reason for this, suggests Mikhail Shchepinov, chief scientific officer of the biotechnology company Retrotope, which is investigating the use of isotopes to slow ageing, is that the growing fetus selectively builds carbon-13 into its proteins, DNA and other biomolecules to take advantage of the way that heavy isotopes make these molecules more resistant to free-radical attack. It would make good evolutionary sense, as many of the proteins and DNA molecules formed early on have to last a lifetime.
\"Every single atom in the DNA of the brain of a 100-year-old man is the same atom as when he was 15 years old,\" says Shchepinov (BioEssays, vol 29, p 1247).\nISOTOPICALLY MODIFIED COMPOUNDS AND THEIR USE AS FOOD SUPPLEMENTS\nClassification (international): A23L1/29; A23L1/30; A23L1/305\nAbstract --- A nutrient composition comprises an essential nutrient in which at least one exchangeable H atom is 2H and/or at least one C atom is 13C. The nutrient is thus protected from, inter alia, reactive oxygen species.\nTHERAPIES FOR CANCER USING ISOTOPICALLY SUBSTITUTED LYSINE\nLysyl oxidases (LOX, LOXL, LOXL2, etc.; amine oxidase family) are Cu-dependent enzymes that oxidize lysine into allysine (a-aminoadipic-d-semialdehyde) [Kagan H M. et al., J. Cell. Biochem. 2003; 88:660]. LOX have been implicated in crosslink formation in stromal collagens and elastins.", "score": 33.85759816314236, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "There is supposedly a Serbian proverb that states, “Будите скромни за вас су од земље, да буде племенита за вас су од звезда.”\nBe humble, for you are made of earth. Be noble, for you are made of stars.\nWhether it is Serbian in origin or not, it's a good proverb. I like it. Because it's true. From Random Science Tools comes this chart of the relative preponderance of elements in the body of an average 70kg man:\n[Table: Chemical composition of the human body by mass (element, amount / kg, amount / mol)]\nAs little as it may be, we have gold in us. And other rare elements. And we have to remember that at the creation of the universe, the only elements present were hydrogen and helium.
Every other naturally-occurring element in the periodic table was born in the hearts of dying stars which ended their lives as supernovæ, or – as recently hypothesized in the case of heavier elements like gold – in collisions between neutron stars.\nI like Carl Sagan’s quote, which he also managed to work into his book Contact: “The universe is a pretty big place. If it’s just us, seems like an awful waste of space.” Whether we are alone in the universe is a question which science has yet to answer, but it’s pretty mind-bending to think that the elements which make up our bodies came from the universe around us. As astrophysicist Neil Degrasse Tyson said,\n“Recognize that the very molecules that make up your body, the atoms that construct the molecules, are traceable to the crucibles that were once the centers of high mass stars that exploded their chemically rich guts into the galaxy, enriching pristine gas clouds with the chemistry of life. So that we are all connected to each other biologically, to the earth chemically and to the rest of the universe atomically. That’s kinda cool! That makes me smile and I actually feel quite large at the end of that. It’s not that we are better than the universe, we are part of the universe. We are in the universe and the universe is in us.”\nThe Old Wolf has spoken.", "score": 33.318970591503295, "rank": 17}, {"document_id": "doc-::chunk-1", "d_text": "- 15 million blood cells are destroyed in the human body every second.\n- The pancreas produces Insulin.\n- The most sensitive cluster of nerves is at the base of the spine.\n- The human body is comprised of 80% water.\n- The average human will shed 40 pounds of skin in a lifetime.\n- Every year about 98% of the atoms in your body are replaced.\n- The human heart creates enough pressure to squirt blood 30 feet (9 m).\n- You were born with 300 bones. 
When you get to be an adult, you have 206.\n- Human thighbones are stronger than concrete.\n- Every human spent about half an hour as a single cell.\n- There are 45 miles (72 km) of nerves in the skin of a human being.\n- The average human heart will beat 3,000 million times in its lifetime and pump 48 million gallons of blood.\n- Each square inch (2.5 cm) of human skin consists of 20 feet (6 m) of blood vessels.\n- During a 24-hour period, the average human will breathe 23,040 times.\n- Human blood travels 60,000 miles (96,540 km) per day on its journey through the body.\n- The average brain uses the equivalent of 20 watts of power.\n- The average power consumption of a typical adult is about 100 watts - \"Body, Physics of.\" Macmillan Encyclopedia of Physics. New York: Macmillan, 1996.\nPower of a Human Brain - \"The human brain is only 2% of the weight of the body, but it consumes about 20% of the total energy in the body at rest.\"\n5 Surprising Facts About the Human Brain\nHuman Brain Facts and Answers\nHow IBM is making computers more like your brain. For real | Cutting Edge - CNET News - October 17, 2013; Our brain is a volumetric, dense object.\nMatter, Mind, Life.\nProprioception - Wikipedia, the free encyclopedia - \"one's own perception\" - sense of the relative position of neighboring parts of the body and strength of effort being employed in movement.\nThe cerebellum is largely responsible for coordinating the unconscious aspects of proprioception.\nNeurocosmology - world view that one's view of the universe is constrained by the nervous system's ability to process information; emergent fusion of information at all levels of life.", "score": 33.01766250277108, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "Cosmic Recipe for Humans\nStars produce almost all of the roughly 60 atomic elements in the human body. But precisely how that works is still a mystery.
Astrophysicists have developed advanced computer models to grapple with an array of connected riddles: • What were stars like when they first appeared in the cosmos over 13 billion years ago, beginning the process of modern element production? • What do we know about the nature of the death of massive stars, signaled by Type II supernovae, that forge vital elements such as calcium and oxygen? • How might the burned-out stars called white dwarfs be driven to collapse by other stars in so-called Type Ia supernovae, igniting the blistering alchemy that produced much of the iron in our blood and the potassium in our brains?\nIllustration by Kellie Jaeger, agsandrew/Shutterstock\nResearchers are still trying to understand what causes a single Type Ia supernova and to determine the identity of the partner star to the exploding white dwarf. The Hubble Space Telescope's recent detection of the earliest known Type Ia supernova, from more than 10 billion years ago, plus other results, supports a scenario in which two white dwarfs merge. The results indicate that vital elements in people formed later in the history of the universe than many had expected, says David Jones, the lead astronomer on the Hubble study. “It took (very roughly) about 750 million years longer to form the first 50 percent of the iron in the modern universe.”\nRead the Complete Article at discovermagazine.com.\nCosmic Recipe for Humans Reviewed by Umer Abrar on 6/15/2014", "score": 32.16480533368322, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "Every cell in your body has a little clock ticking away in it, researchers reported on Sunday.
And while most of you is aging in a coordinated way, odd anomalies have the researchers curious: Your heart may be “younger” than the rest of your tissues, and a woman's breasts are older.\nTumors are the oldest of all, a finding reported in the journal Genome Biology that might help scientists better understand cancer, explain why breast cancer is so common and help researchers find better ways to prevent it.\nLess surprising, but intriguing: embryonic stem cells, the body's master cells, look just like newborns with a biological age of zero.\nThe new measurements might be useful in the search for drugs or other treatments that can turn back the clock on aging tissue and perhaps in treating or preventing diseases of aging, such as heart disease and cancer, says Steve Horvath, a professor of genetics at the David Geffen School of Medicine at UCLA.\n“The big question is whether the biological clock controls a process that leads to aging,” Horvath said.\nHorvath looked at a genetic process called methylation. It's a kind of chemical reaction that turns on or off stretches of DNA. All cells have the entire genetic map inside; methylation helps determine which bits of the map the cells use to perform specific functions.\nHe found a pattern of specific methylation events that could be associated with cellular aging. “Methylation levels either increase with age or they decrease with age,” he says. “I identified 353 of these markers that are located on our DNA. I managed to aggregate their information so they arrive at a very accurate clock.”\nHe's not sure what each methylation marker does on its own.\n“It's really the aggregate that is making the difference,” Horvath said in a telephone interview. “The whole is more than the sum of its parts.”\nHe looked at blood and tissue samples from hundreds of people, from unborn babies to someone 101 years old.
He looked at tumors from people with 20 different types of cancer, samples of non-cancerous tissue from the same patients and perfectly healthy people.\nOn average, tumors were 36 years older than the rest of the body, a finding that supports the idea that cancer is a disease in which the biological clock runs amok.\nSurprisingly, most people's hearts look younger than the rest of their bodies, the researchers found. “That looked on average nine years younger,” Horvath says.", "score": 32.11646673799875, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "Reply to: [Ask a Geek] Is my great grandmother somehow connected to her table?\nRight after reading this, I remembered the words of Carl Sagan, “We are made of star stuff.” I thought I could pen down my own version of this answer here.\nLet's make a quick microscopic picture so that we can ease into the macroscopic world.\nAll things are indeed created by molecules. To be precise, all things are made up of elementary particles like electrons, quarks, and constituent particles like protons and neutrons. The combination of these protons, neutrons and electrons in a stable configuration as an atom creates different elements and compounds. Atoms of these elements and compounds together make different molecules, existing in different phases, influencing and reacting with one another. A beautiful harmony the universe plays!\nA human is made up of different kinds of molecules in humongous numbers, forming complex shapes, performing complex chemical reactions, carrying information, replicating, all inside a very sophisticated biological system, which is the result of billions of years of evolution.\nA tree is similar to a human. It is equally sophisticated as a human system, and it has its own number of different molecules and compounds orchestrating a different kind of biochemical stuff.
When you make a table out of the wood from the trees, you still retain the chemical composition of the wood, except without the 'life' part of it.\nThe chemicals that are so common in wood are carbon, oxygen, hydrogen, and nitrogen. A human body has a similar composition too. We are mostly made up of oxygen, carbon, hydrogen, and nitrogen as well.\nNow if we look back at the timeline of our solar system, we can understand the origin or source of all these elements around us. Some magnificently violent supernova bestowed on us all the elements necessary to create the sun and the planets and possibly life as we know it today.\nThe oxygen atom in our body comes from the atmosphere, which is actually produced by the trees by photosynthesis. The carbon in the trees comes from the environment as well. But we breathe out that carbon into the atmosphere in the first place as carbon dioxide. On a grand scale, all these atoms had come from that solar nebula that formed Earth.\nWhen your great grandmother was using the table, she was closer to the table.", "score": 31.339138070777718, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "Stars cook up nearly all of the approximately 60 atomic elements in people's bodies. But exactly how that works remains a mystery.
Astrophysicists have developed cutting-edge computer simulations to grapple with an array of related puzzles:\n• What were stars like when they first appeared in the universe over 13 billion years ago, starting the process of modern element production?\n• What do we know about the nature of the death of massive stars — signaled by Type II supernovae — that fashion crucial elements such as calcium and oxygen?\n• How might the burned-out stars called white dwarfs be brought to ruin by other stars in so-called Type Ia supernovae, inciting the fiery alchemy that yielded much of the iron in our blood and the potassium in our brains?\nScientists are still trying to figure out what triggers an individual Type Ia supernova and to determine the identity of the partner star to the exploding white dwarf. The Hubble Space Telescope’s recent discovery of the earliest known Type Ia supernova from more than 10 billion years ago, plus other results, favor a scenario in which two white dwarfs merge.\nThe results indicate that crucial elements in people formed later in the history of the universe than many had expected, says David Jones, the lead astronomer on the Hubble study. “It took (very roughly) about 750 million years longer to form the first 50 percent of the iron in the modern universe.”\nOut of the primordial hydrogen and helium created in the Big Bang, clouds coalesced within 100 million years, eventually forming the first stars. This simulation shows light from an early star 100 million years after the Big Bang. When this fireball — millions of times brighter than the sun — dies in a titanic explosion called a supernova, it hurls out elements such as oxygen, carbon and magnesium.\nAbout 500 million years after the Big Bang, one of the first galaxies in the universe formed, containing stars of about the same mass as the sun — which can live for 10 billion years — as well as lighter stars. 
The green and whitish regions depict elements such as carbon and oxygen.\nThis simulated image shows the first half-second of an explosion of a star 15 times more massive than the sun. Called core collapse supernova explosions, one example of which is a Type II, these are a source of about a dozen major elements in people, including iron, calcium, phosphorus, potassium, sulfur and zinc.", "score": 31.32198752976672, "rank": 22}, {"document_id": "doc-::chunk-1", "d_text": "The sphere in the center is a newly born neutron star, the superdense corpse that remains of the former star. The scale from top to bottom is 1,000 kilometers, or 621 miles.\nA star the size of the sun becomes a “red giant” toward the end of its 10-billion-year life span, a phase in which its outer atmosphere expands a great deal. The white region at the center is the dense, hot core where hydrogen and helium are still burning in two concentric shells. Between those two shells, carbon is combining with helium to form oxygen.\nThe four ingredients below are essential parts of the body's protein, carbohydrate and fat architecture. (Expressed as percentage of body weight.)\nOxygen — 65.0%\nCritical to the conversion of food into energy.\nCarbon — 18.5%\nThe so-called backbone of the building blocks of the body and a key part of other important compounds, such as testosterone and estrogen.\nHydrogen — 9.5%\nHelps transport nutrients, remove wastes and regulate body temperature.
Also plays an important role in energy production.\nNitrogen — 3.3%\nFound in amino acids, the building blocks of proteins; an essential part of the nucleic acids that constitute DNA.\nCalcium — 1.5%\nLends rigidity and strength to bones and teeth; also important for the functioning of nerves and muscles, and for blood clotting.\nPhosphorus — 1.0%\nNeeded for building and maintaining bones and teeth; also found in the molecule ATP (adenosine triphosphate), which provides energy that drives chemical reactions in cells.\nPotassium — 0.4%\nImportant for electrical signaling in nerves and maintaining the balance of water in the body.\nSulfur — 0.3%\nFound in cartilage, insulin (the hormone that enables the body to use sugar), breast milk, proteins that play a role in the immune system, and keratin, a substance in skin, hair and nails.\nChlorine — 0.2%\nNeeded by nerves to function properly; also helps produce gastric juices.\nSodium — 0.2%\nPlays a critical role in nerves’ electrical signaling; also helps regulate the amount of water in the body.\nMagnesium — 0.1%\nPlays an important role in the structure of the skeleton and muscles; also found in molecules that help enzymes use ATP to supply energy for chemical reactions in cells.", "score": 30.797674841239953, "rank": 23}, {"document_id": "doc-::chunk-1", "d_text": "We need, for instance, just 20 atoms of cobalt and 30 of chromium for every 999,999,999½ atoms of everything else.”\n“Thorium costs over $3,000 per gram but constitutes just 0.0000001 percent of you, so you can buy a body’s worth for thirty-three cents. All the tin you require can be yours for six cents, while zirconium and niobium will cost you just three cents apiece. The 0.000000007 percent of you that is samarium isn’t apparently worth charging for at all. It’s logged in the RSC accounts as costing $0.00”\n“Cadmium, for instance, is the twenty-third most common element in the body, constituting 0.1 percent of your bulk, but it is seriously toxic. 
We have it in us not because our body craves it but because it gets into plants from the soil and then into us when we eat the plants. If you are from North America, you probably ingest about eighty micrograms of cadmium a day, and no part of it does you any good at all.”\nMicrobes and viruses\n“Luckily, most microbes have nothing to do with us. Some live benignly inside us and are known as commensals. Only a tiny portion of them make us ill. Of the million or so microbes that have been identified, just 1,415 are known to cause disease in humans—very few, all things considered. On the other hand, that is still a lot of ways to be unwell, and together those 1,415 tiny, mindless entities cause one-third of all the deaths on the planet.”\n“More recently, Dana Willner, a biologist at San Diego State University, looked into the number of viruses found in healthy human lungs—somewhere else that viruses were not thought to lurk much. Willner found that the average person harbored 174 species of virus, 90 percent of which had never been seen before. Earth, we now know, is aswarm with viruses to a degree that until recently we barely suspected”\n“The common cold is not a single illness but rather a family of symptoms generated by a multiplicity of viruses, of which the most pernicious are the rhinoviruses. These alone come in a hundred varieties.", "score": 30.53121959241728, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "\"It's surprising that one could develop a clock that reliably keeps time across the human anatomy,\" he admitted. \"My approach really compared apples and oranges, or in this case, very different parts of the body: the brain, heart, lungs, liver, kidney and cartilage.\"\nWhile most samples' biological ages matched their chronological ages, others diverged significantly. 
For example, Horvath discovered that a woman's breast tissue ages faster than the rest of her body.\n\"Healthy breast tissue is about two to three years older than the rest of a woman's body,\" said Horvath. \"If a woman has breast cancer, the healthy tissue next to the tumor is an average of 12 years older than the rest of her body.\"\nThe results may explain why breast cancer is the most common cancer in women. Given that the clock ranked tumor tissue an average of 36 years older than healthy tissue, it could also explain why age is a major risk factor for many cancers in both genders.\nHorvath next looked at pluripotent stem cells, adult cells that have been reprogrammed to an embryonic stem cell-like state, enabling them to form any type of cell in the body and continue dividing indefinitely.\n\"My research shows that all stem cells are newborns,\" he said. \"More importantly, the process of transforming a person's cells into pluripotent stem cells resets the cells' clock to zero.\"\nIn principle, the discovery proves that scientists can rewind the body's biological clock and restore it to zero.\n\"The big question is whether the biological clock controls a process that leads to aging,\" Horvath said. \"If so, the clock will become an important biomarker for studying new therapeutic approaches to keeping us young.\"\nFinally, Horvath discovered that the clock's rate speeds up or slows down depending on a person's age.\n\"The clock's ticking rate isn't constant,\" he explained.
\"It ticks much faster when we're born and growing\n|Contact: Elaine Schmidt|\nUniversity of California - Los Angeles Health Sciences", "score": 30.42344461661173, "rank": 25}, {"document_id": "doc-::chunk-3", "d_text": "“The entire human skeleton is thought to be replaced every 10 years or so in adults.”\n“The average age of all the cells in an adult’s body may turn out to be as young as 7 to 10 years.”\nAbout the only pieces of the body that last a lifetime, on present evidence, seem to be the neurons of the cerebral cortex, the inner lens cells of the eye and perhaps the muscle cells of the heart. But even there, it’s the age of the DNA that Frisén’s method measures. There are ongoing processes that replace individual components of the cells around that DNA, meaning that even in the cerebral cortex and other “static” tissues, there may be considerable change taking place. And other work has shown that new neurons are in fact generated in the cerebral cortex, although presumably not in numbers sufficient to show up in Frisén’s work. Additionally, although cells in the brain may be long-lived, they still change; brain cells have a life-long ability to develop new connections with each other. This is how learning takes place.\nThe upshot is that the body is continually changing.\nAnd yet we are attached to our bodies. This brings up the question, how can we be attached to something that is constantly changing? How can we cling to something that doesn’t remain the same from one moment to the next? Well, we can’t. We identify with our bodies, thinking that if the body fails, we fail. We think that when others judge the body, they judge us. And so clinging (or trying to cling) to something impermanent leads to suffering.\nThe Buddhist tradition encourages us to regard all things as being like mist, or flowing water, or a mirage — what the mind takes to be solid, substantial, and graspable is actually ever-changing and characterized by impermanence. 
The Six Element Practice — the subject of my new book, Living as a River — is a way of developing an experiential appreciation of the transience of the body. This helps us to let go, stop identifying the body with the self, suffer less, and experience a profound sense of freedom.\nIt's remarkably difficult for us to perceive change, because of the relative poverty of the brain's processing power compared to the sheer volume of information it has to deal with. This causes a failure to notice changes that we might think would be obvious, and so we consistently over-estimate our ability to detect change.
C-14 labeled more or less continually as a byproduct of the nuclear folly.\nThe C-14 remains in the cell for as long as it survives, but since C-14 levels have diminished since 1963, cellular loads of the stuff in more recently formed cells have also diminished. The amount of C-14 in a particular cell thus indicates when it was formed.\nThe clever work appears in Science.\nLoren Field, a cardiologist at Indiana University, told the New York Times the goal now becomes “to try to tickle the system to enhance (cellular regeneration rates).”", "score": 30.37195191814492, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "It was like an explosion, but brighter and more powerful than any explosion you or I could ever dream of. It was called ‘The Big Bang’ – the time when the Universe was born. It was January 1 on our time scale.\nAt first, all there was heat and light. But, as the Universe began to cool, clouds of tiny particles called atoms began to form. These were the atoms of hydrogen – the main component of water – and helium – the gas we use in party balloons that float on air. Eventually, gravity started compacting these clouds of hydrogen and helium atoms. The temperature at the centre of each cloud grew higher and higher until, suddenly, there was a huge release of energy and Boom! – we had our first stars. Billions of them across the Universe. On our calendar, we are in mid-January.\nNow, stars are like people; they are born and eventually die. When very large stars die and explode, they are called supernovae. They become so hot and their gravity so strong that the helium and hydrogen atoms are actually squeezed into new kinds of atoms like oxygen, iron, carbon and even gold. If you are wearing gold jewelry, the gold was made in a supernova explosion. So were all the other atoms in your body except hydrogen. 
These atoms include the calcium in your bones, the iron in your blood and the oxygen that binds with hydrogen to create the water that you drink.\nTake a moment to think about what I just said. These old stars were actually our ancestors. They had to exist so that we could be here. We are made of their dust – stardust! Doesn’t knowing this make you feel like the Universe is a more wonderful place to live in?\nNow, with all these different kinds of atoms swirling around younger stars like our Sun, they eventually combined to form asteroids, comets and planets. This is how our solar system and our Earth were formed four and a half billion years ago. On our time scale, we’ve jumped all the way to early September.\nAs the new planet Earth began to cool, rain fell for the first time and gathered into oceans. Beneath these oceans, at cracks in the ocean floor, heat seeped up from inside the Earth. New chemical reactions began to take place and atoms combined in all sorts of new ways.", "score": 30.163283436880704, "rank": 28}, {"document_id": "doc-::chunk-31", "d_text": "While seeking knowledge about the universe and how it works, modern astronomy and physics have repeatedly come face-to-face with a number of age-old questions long thought to be solely within the domain of religion or philosophy. The nature of the chemical evolution of the universe is such a case. Theory and observation indicate that the universe was created in a “Big Bang” some 13.7 billion years ago. As a result of both observation and theoretical work, scientists now know that the only chemical elements found in substantial amounts in the early universe were the lightest elements: hydrogen and helium, plus tiny amounts of lithium, beryllium, and boron. 
Yet we live on a planet with a central core consisting mostly of very heavy elements such as iron and nickel, surrounded by outer layers made up of rocks containing large amounts of silicon and various other elements, all heavier than the original elements. Our bodies are built of carbon, nitrogen, oxygen, calcium, phosphorus, and a host of other chemical elements—again all heavier than hydrogen and helium.\n[Figure: cosmic distances expressed as light-travel times; objects not shown to scale. (a) Going around Earth (Earth's circumference) at the speed of light is like a snap of your fingers: 1/7 second. (b) The Moon is a little more than a second away: 1.25 seconds. (c) The Sun's distance is like a quick meal: 8.3 minutes. (d) The diameter of the Solar System, based on the orbit of the most distant planet, Neptune, is a night's sleep: 8.3 hours. (e) The distance to the nearest star, Proxima Centauri, is like the time between leap years: 4.2 years. (f) The diameter of the Milky Way Galaxy is like the age of our species: 100,000 years. (g) The distance to the Andromeda Galaxy is like the time since our earliest human ancestors walked on Earth: 2.5 million years. (h) The radius of the observable universe is like three times the age of Earth: 13.7 billion years.]
You would actually never be able to see me with the naked eye. I would be as small as a tiny dust particle of the size of a several thousandth of a millimetre.18\nAt this point, we realise that there is a similarity between the largest and the smallest spaces known in the universe. When we turn our eyes to the stars, there again we see a void similar to that in the atoms. There are voids of billions of kilometres both between the stars and between the galaxies. Yet, in both of these voids, an order that is beyond the understanding of the human mind prevails.\nInside the Nucleus: Protons and Neutrons\nUntil 1932, it was thought that the nucleus consisted only of protons and electrons. It was then discovered that the nucleus contains not electrons but neutrons besides the protons. (The renowned scientist Chadwick proved the existence of neutrons in the nucleus in 1932, and he was awarded a Nobel Prize for his discovery.) Mankind was introduced to the real structure of the atom only at such a recent date.\nWe mentioned before how small the nucleus of the atom is. The size of a proton, which is able to fit in the atomic nucleus, is 10^-15 metres.\nYou may think that such a small particle would not have any meaning in one's life. However, these particles, so small as to be incomprehensible to the human mind, form the basis of everything you see around you.\nSource of the Diversity in the Universe\nThere are 109 elements that have so far been identified. The entire universe, our earth, and all animate and inanimate beings are formed by the arrangement of these 109 elements in various combinations.
Thus far, we saw that all elements are made up of atoms that are similar to each other, which, in turn are made up of the same particles.", "score": 29.646144098344617, "rank": 30}, {"document_id": "doc-::chunk-13", "d_text": "\"Everyone thought it must be due to experimental mistakes, because we're all brought up to believe that decay rates are constant,\" Peter Sturrock, Stanford professor emeritus of applied physics, commented on the issue.\nThere was more to the issue than instrumental error. Jenkins, Fischbach and their colleagues proceeded to publish papers on the variations in radiometric decay rates in journals like Astroparticle Physics, Nuclear Instruments and Methods in Physics Research. They argued that the variations were not caused by weaknesses in their detection systems, but were actual variations in the decay rates themselves.\nRadioisotope Dating 101:\nWhile radiometric dating sounds like a jazzy way to find a mate, it's actually about unruly isotopes.\nThe world around us is made up of atoms, each with a specific number of positively charged protons, negatively charged electrons, and neutral proton-sized neutrons in their nuclei. Neutrons and electrons are important, but it is the proton number that defines an element. All carbon atoms have six protons, iron atoms have 26, and platinum atoms have 78. If a sodium atom loses an electron, it's still sodium, it's just a positively-charged sodium ion, ready to ionically bond with a negatively charged chlorine ion to make table salt for mashed potatoes. If iron loses electrons, it starts to rust, but it's still iron.\nIf carbon gains an extra neutron, it still behaves like carbon. Carbon generally has six neutrons with its six protons, giving an atomic weight of 12. Adding one neutron makes carbon-13, a stable isotope that makes up about 1.1% of the carbon on Earth.\nNot all isotopes are stable, though. 
In a crowd of one trillion carbon atoms, one can always find a single unstable carbon-14 isotope sipping on the punch. Over the course of about 5730 years, half of a sample of C-14 will decay into nitrogen-14. A scientist can take a carbon sample, measure the amount of C-14 among its trillions of C-12 atoms, measure the N-14 in the mix, and put together a ratio of the two isotopes. C-14 has a half-life of 5730 years – the amount of time it takes half a sample to decay.", "score": 29.191192931928008, "rank": 31}, {"document_id": "doc-::chunk-4", "d_text": "For example, Uranium-238 has a half-life of about 4.5 billion years — the age of the earth — while Helium-5 has a half-life of only 10-21 seconds, roughly the time it takes to transition from never having heard of an Ikea product, to desperately needing it.\nThe relative absence of radioactive materials in the world around us is due, not to non-radioactive material being “more natural” than radioactive material, but rather to survivorship. Radioactive materials have decayed into non-radioactive ones — our (mostly) non-radioactive world is what’s left after everything else has decayed.\nThe next post will look at decays more closely, especially with regard to their health effects.\n- Hyperphysics section on nuclear physics.\n- Richard Muller’s fantastic lectures “Physics for Future Presidents,” available on YouTube: Radioactivity 1, Radioactivity 2, Nukes 1, Nukes 2 & Review.\n- David Bodansky, “Nuclear Energy: Principles, Practices and Prospects.” 2004, Springer.", "score": 28.371863828968404, "rank": 32}, {"document_id": "doc-::chunk-1", "d_text": "The most famous radioactive isotope used for dating is carbon-14, the radioactive isotope of carbon; with its half-life of roughly 5,700 years, carbon-14 can be used to determine the ages of organic (carbon-containing) materials on human timescales, up to about 60,000 years. 
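The half-life relationship described above (5,730 years for half of a C-14 sample to decay) can be turned directly into an age estimate. A minimal Python sketch; the function name and the example fractions are illustrative, not taken from any of the quoted sources:

```python
import math

C14_HALF_LIFE_YEARS = 5730  # half-life quoted in the text above

def c14_age_years(fraction_remaining: float) -> float:
    """Age of a sample from the fraction of its original C-14 still present.

    Solves N/N0 = (1/2)**(t / half_life) for t.
    """
    return C14_HALF_LIFE_YEARS * math.log2(1.0 / fraction_remaining)

# Half the C-14 left means one half-life old; a quarter left means two:
print(round(c14_age_years(0.5)))   # 5730
print(round(c14_age_years(0.25)))  # 11460
```

After roughly ten half-lives the remaining fraction is too small to measure reliably, which is consistent with the ~60,000-year practical limit mentioned above.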
Rubidium-87, in contrast, can be used to date the oldest objects in the universe, and, closer to home, the objects in the solar system.\nWhat is particularly attractive about using the Rb–Sr pair for dating is that rubidium is a volatile element—that is, it tends to evaporate to form a gas phase at even relatively low temperatures—while strontium is not volatile. As such, rubidium is present at a higher proportion in solar system objects that are rich in other volatiles (such as water), because they formed at lower temperatures.\nCounterintuitively, Earth has an Rb/Sr ratio that is 10 times lower than that of water-rich meteorites, implying that the planet either accreted from water-poor (and thus rubidium-poor) materials or it accreted from water-rich materials but lost most of its water over time as well as its rubidium. Understanding which of these scenarios took place is important for understanding the origin of water on Earth.\nIn theory, the Rb–Sr chronometer should be able to tease apart these two scenarios, as the amount of Sr-87 produced by radioactive decay in a given amount of time will not be the same if Earth started with a lot of rubidium versus less of the material.\nIn the latter scenario, i.e., with less rubidium, the newly formed Earth would have been poor in volatiles such as water, thus the amount of Sr-87 in the earth and in volatile-poor meteorites would be similar to that observed in the oldest-known solar system solids, the so-called CAIs. CAIs are calcium- and aluminum-rich inclusions found in certain meteorites. Dating back 4.567 billion years, CAIs represent the first objects that condensed in the early solar nebula, the flattened, rotating disk of gas and dust from which the solar system was born. 
As such, CAIs offer a geologic window into how and from what type of stellar materials the solar system formed.\n"They are critical witnesses to the processes that were happening while the solar system was forming," says Tissot.", "score": 28.04665809456036, "rank": 33}, {"document_id": "doc-::chunk-2", "d_text": "The same radioactive time-keepers applied to the three oldest lunar samples returned to Earth by the Apollo astronauts yield ages between 4.4 billion and 4.5 billion years, providing minimum estimates for the time since the formation of the moon.\nThe oldest known rocks on Earth occur in northwestern Canada (3.96 billion years), but well-studied rocks nearly as old are also found in other parts of the world. In Western Australia, zircon crystals encased within younger rocks have ages as old as 4.3 billion years, making these tiny crystals the oldest materials so far found on Earth.\nThe best estimates of Earth's age are obtained by calculating the time required for development of the observed lead isotopes in Earth's oldest lead ores. These estimates yield 4.54 billion years as the age of Earth and of meteorites, and hence of the solar system.\nThe origins of life cannot be dated as precisely, but there is evidence that bacteria-like organisms lived on Earth 3.5 billion years ago, and they may have existed even earlier, when the first solid crust formed, almost 4 billion years ago. These early organisms must have been simpler than the organisms living today. Furthermore, before the earliest organisms there must have been structures that one would not call "alive" but that are now components of living things. Today, all living organisms store and transmit hereditary information using two kinds of molecules: DNA and RNA. Each of these molecules is in turn composed of four kinds of subunits known as nucleotides.
The sequences of nucleotides in particular lengths of DNA or RNA, known as genes, direct the construction of molecules known as proteins, which in turn catalyze biochemical reactions, provide structural components for organisms, and perform many of the other functions on which life depends. Proteins consist of chains of subunits known as amino acids. The sequence of nucleotides in DNA and RNA therefore determines the sequence of amino acids in proteins; this is a central mechanism in all of biology.\nExperiments conducted under conditions intended to resemble those present on primitive Earth have resulted in the production of some of the chemical components of proteins, DNA, and RNA. Some of these molecules also have been detected in meteorites from outer space and in interstellar space by astronomers using radio-telescopes. Scientists have concluded that the \"building blocks of life\" could have been available early in Earth's history.", "score": 27.32188875094388, "rank": 34}, {"document_id": "doc-::chunk-1", "d_text": "All artificially derived elements are radioactive with short half-lives, so if any atoms of these elements were present at the formation of Earth they are extremely likely to have already decayed.\nLists of the elements by name, by symbol, by atomic number, by density, by melting point, and by boiling point as well as Ionization energies of the elements are available. The most convenient presentation of the elements is in the periodic table, which groups elements with similar chemical properties together.\nThe atomic number of an element, Z, is equal to the number of protons which defines the element. For example, all carbon atoms contain 6 protons in their nucleus, so for carbon Z=6. These atoms may have different amounts of neutrons, and are known as isotopes of the element. 
The atomic mass of an element, A, is measured in unified atomic mass units (u) and is the average mass of all the atoms of the element in an environment of interest (usually the earth's crust and atmosphere). Since electrons are of negligible mass, and neutrons are barely heavier than protons, this usually corresponds to the sum of the protons and neutrons in the nucleus of the most abundant isotope, though this is not always the case (notably chlorine, which is about three-quarters 35Cl and a quarter 37Cl).\nThe atomic masses that are given on the periodic table are actually the relative atomic masses, which are calculated by the following method. As an example, assume there exist three isotopes of element X and that their respective atomic masses are 10, 20 and 30 AMU for the sake of demonstration. Now also assume that 50% of the isotopes of element X are the 10 AMU version and the two heavier isotopes each account for 25% of the total number of atoms (particles) of this hypothetical element. As a result, 10 * 0.5 = 5 AMU, 20 * 0.25 = 5 AMU and 30 * 0.25 = 7.5 AMU. Summing these contributions gives an average atomic mass of 5 + 5 + 7.5 = 17.5 AMU.
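The weighted-average arithmetic above can be written out directly. A small Python sketch; the chlorine figures at the end are my own approximate illustration (roughly 76% Cl-35 and 24% Cl-37), not values from the text:

```python
def average_atomic_mass(isotopes):
    """Weighted average of (mass_in_amu, fractional_abundance) pairs."""
    return sum(mass * abundance for mass, abundance in isotopes)

# The hypothetical element X from the text: 50% at 10 AMU, 25% at 20, 25% at 30.
element_x = [(10, 0.50), (20, 0.25), (30, 0.25)]
print(average_atomic_mass(element_x))  # 17.5

# Chlorine, the exception noted above (abundances are approximate):
chlorine = [(34.97, 0.76), (36.97, 0.24)]
print(round(average_atomic_mass(chlorine), 2))
```

This is why chlorine's tabulated atomic mass (about 35.45) sits between its two isotope masses rather than at a whole number.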
This is because the method for calculating the average mass weights each isotope's individual mass by its relative abundance.\nSome isotopes are radioactive and decay into other elements by emitting an alpha or beta particle.", "score": 27.1201151855931, "rank": 35}, {"document_id": "doc-::chunk-2", "d_text": "In fact, his research suggests:\n- 98% of the atoms in the body turn over every year\n- You have new soft tissue in 3 months (including muscles and internal organs)\n- You have a new liver every 6 weeks\n- New skin every 4 weeks\n- New stomach lining every 5 days\n- Bones can take a year to replace themselves, sometimes longer according to your metabolism\nSince Deepak Chopra also proved that neurotransmitters bathe every cell in the body (chemicals sent from the brain throughout the body), this means being able to control and influence your unconscious mind has everything to do with how your cells age. Remember, you don't consciously rebuild and regenerate the 98% of the cells in your body every year. You do it unconsciously!\nIt seems this unconscious mind of yours is something worth exploring! Have it on your side. Then you can age any way you want.\n... and as for death... I used to be scared of ageing and dying. It's a common fear, especially in children. Just remember, in your body you have energy. In physics and chemistry, the law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. This law means that energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another. Our universe is a giant vacuum (an isolated system). To my way of thinking, this means we are infinite beings. The secret lies in recognising the fact that you always were, always have been... and always will be.
The future is very bright!\nLive well - and from the bottom of my heart, thank you for all the birthday messages!", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-1", "d_text": "As helium becomes more plentiful in the star's core, some of it fuses into carbon. The carbon interacts with the hydrogen to produce nitrogen and oxygen as well as helium, through a process known as the CNO cycle. As a star ages, the CNO cycle becomes the dominant process by which it creates light and heat. As a result, these elements become reasonably abundant within a star.\nThe first stars are thought to have been exceptionally large stars. Toward the end of their lives they produced even heavier elements, such as silicon, neon, and eventually iron. Beyond iron there are no elements a star can fuse to create energy. After a few hundred thousand years these first stars had no further way to produce energy, and in the end they exploded in huge explosions known as supernovae. The gas and dust remnants of these stars were thrown out into the cosmos. Gradually this gas and dust became part of clouds that formed new stars, which likewise fused hydrogen and helium into heavier elements until they too died in supernova explosions.\nThen, around five billion years ago, a cloud of gas and dust started to form a new star. Thanks to the lives and deaths of earlier generations of stars, this cloud was rich not just in hydrogen and helium but also in carbon, nitrogen, oxygen, and iron. As the star formed, some of the dust created a disk around the star, out of which formed planets. The third planet from this star had the good fortune of being not too close to the star and not too far away. It had plenty of hydrogen and oxygen in the form of water, and carbon and nitrogen, all thanks to long-dead stars.
At some point life appeared on this small world, and took advantage of these useful and plentiful elements.\nThe atoms in your body contain the history of the cosmos. The hydrogen in your body was born among the first elements, about 13.7 billion years ago. The carbon, nitrogen and oxygen in your muscles and brain were created within a star that died more than 5 billion years ago.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-7", "d_text": "Unfortunately, there have been enough faulty readings on objects to cast serious doubt on that assumption, especially if we know the actual age from another source. Dr. Robert F. Whitelaw, Professor of Nuclear and Mechanical Engineering at the Virginia Polytechnic Institute, reported in 1970 that of the 15,000 samples of once-living matter that had been tested between 1950 and 1970, all but three of them were datable. This included stuff that was supposedly too old to date. For example, most of the Earth's coal is believed to have formed during the Carboniferous Period (300 million years ago), but one coal sample yielded an age of only 4,250 years, while another was even younger--1,680 years old; radioactive carbon has even been found in diamonds. Ridiculously old ages have been produced as well; the mummified bodies of some seals known to have been dead for thirty years tested out as being 3,000 years dead, and a living(!) mollusk was dated 3,000 years old.(7)\nAfter carbon-14, the most popular way to date fossils is the potassium-argon method, which measures the decay of potassium-40 into argon, a process with a half-life of just over 1 billion years. Similar faulty dates have come up using this method.
For example, the Journal of Geophysical Research reported that some lava rocks from Hualalai, Hawaii, are known to have been formed by volcanic eruptions in 1800 and 1801, yet they show ages ranging from 160 million to 3 billion years old.(8) When Louis Leakey applied potassium-argon dating to his most famous discovery, Zinjanthropus, he got a date of 1,750,000 years, but a few years later some other bones from the same site got a carbon-14 test that yielded an age of only 10,100 years.(9) These are not isolated reports:\n\"Rock samples from 12 volcanoes in Russia and 10 samples from other places around the world, all known to be of recent age (formed within the last 200 years), when dated by the uranium-thorium-lead method gave ages varying from millions to billions of years!\"(10)\nDr.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-3", "d_text": "Although scientists make it in the lab by bombarding berkelium (97) with neutrons, trace amounts of this very rare element are found naturally in rich uranium deposits.\nWhen I was in high school studying chemistry, the periodic table of elements ended at Lawrencium (103). At present there are 118 elements, the most recent one created in the lab being ununoctium (you-nah-NOC-tee-um). Matter of fact, all the elements beyond 98 are artificial, brought to life in nuclear reactors or in particle accelerator experiments. They live very short lives. With so many positively-charged protons pushing against one another in their nuclei, these elements quickly break apart into simpler ones in a process called radioactive decay.\nBack at the time of the Big Bang, when the universe sprang into existence, only the simplest elements – hydrogen, helium and trace amounts of lithium – were cooked up. You can’t build a planet from such fluffy stuff. 
It took the first generation of stars, which formed from these basic building blocks, to synthesize more complicated elements like carbon, oxygen, sulfur and the like via nuclear fusion in their cores.\nWhen the stars exploded as supernovae, not only were these brand new elements blasted into space, but the enormous heat and pressure during the blast built even heavier elements like gold, copper, mercury and lead. All became incorporated in a second generation of stars. And a third.\nThe 2% of star-made elements, which include carbon, oxygen, nitrogen and silicon among others, went to build the planets and later became essential for life. We’re made of highly processed material you and I. The atoms of our beings have been in and out of the cores of several generations of stars. Think about this good and hard and you might just get in touch with your own “inner star”.\nLet’s reframe the question about exotic materials in space not present on Earth. Instead of elements, if we look at compounds, we hit paydirt. A compound is also a pure substance but consists of two or more chemical elements joined together. 
Familiar compounds include water (two hydrogens joined to one oxygen) and salt (one sodium and one chlorine).\nAstronomers have found about 220 compounds or molecules in outer space, many of them with siblings on Earth but some alien.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "Our Moon is at least 4.51 billion years old: study\n- The Moon is at least 4.51 billion years old — up to 140 million years older than previously thought, according to a new study of minerals called zircons brought back from the lunar body to the Earth by the Apollo 14 mission in 1971.\n- The Moon's age has been a hotly debated topic.\n- The Moon was formed by a violent, head-on collision between the early Earth and a "planetary embryo" called Theia.\n- The new study would mean that the Moon formed "only" about 60 million years after the birth of the solar system, providing critical information for astronomers and planetary scientists who seek to understand the early evolution of the Earth and our solar system.\n- It is usually difficult to determine the age of Moon rocks because most of them contain a patchwork of fragments of multiple other rocks.\n- Zircons are nature's best clocks. They are the best mineral for preserving geological history and revealing where they originated.\n- The Earth's collision with Theia created a liquefied Moon, which then solidified.
Scientists believe most of the Moon's surface was covered with magma right after its formation.\n- The uranium-lead measurements reveal when the zircons first appeared in the Moon's initial magma ocean, which later cooled down and formed the Moon's mantle and crust; the lutetium-hafnium measurements reveal when its magma formed, which happened earlier.", "score": 26.56225438083474, "rank": 40}, {"document_id": "doc-::chunk-1", "d_text": "The isotopes of a given element have similar or very closely related chemical properties but their atomic mass differs.\nPotassium (atomic number 19) has several isotopes. Its radioactive isotope potassium-40 has 19 protons and 21 neutrons in the nucleus (19 protons + 21 neutrons = mass number 40). Atoms of its stable isotopes potassium-39 and potassium-41 contain 19 protons plus 20 and 22 neutrons respectively.\nRadioactive isotopes are useful in dating geological materials, because they convert or decay at a constant, and therefore measurable, rate. An unstable radioactive isotope, which is the 'parent' of one chemical element, naturally decays to form a stable nonradioactive isotope, or 'daughter,' of another element by emitting particles such as protons from the nucleus. The decay from parent to daughter happens at a constant rate called the half-life. The half-life of a radioactive isotope is the length of time it takes for exactly one-half of the parent atoms to decay to daughter atoms. No naturally occurring physical or chemical conditions on Earth can appreciably change the decay rate of radioactive isotopes. Precise laboratory measurements of the number of remaining atoms of the parent and the number of atoms of the daughter result in a ratio that is used to compute the age of a fossil or rock in years.\nAge determinations using radioactive isotopes have reached the point where they are subject to very small errors of measurement, now usually less than 1%.
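The parent-daughter ratio described above maps to an age through the half-life. A hedged Python sketch of that standard relationship; the function name is mine, and real potassium/argon work also corrects for decay branching, which this illustration ignores:

```python
import math

def age_from_daughter_parent_ratio(ratio: float, half_life_years: float) -> float:
    """Age implied by the measured daughter/parent atom ratio.

    Uses t = (half_life / ln 2) * ln(1 + D/P).
    """
    return half_life_years / math.log(2) * math.log(1.0 + ratio)

# Sanity check: when exactly half the parent has decayed, D/P = 1
# and the computed age equals one half-life.
print(round(age_from_daughter_parent_ratio(1.0, 5730.0)))  # 5730
```

The same formula applies to any parent-daughter pair once its half-life is substituted, which is why the independent methods quoted below can cross-check one another.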
For example, minerals from a volcanic ash bed in southern Saskatchewan, Canada, have been dated by three independent isotopic methods (Baadsgaard, et al., 1993). The potassium/argon method gave an age of 72.5 plus or minus 0.2 million years ago (mya), a possible error of 0.27%; the uranium/lead method gave an age of 72.4 plus or minus 0.4 mya, a possible error of 0.55%; and the rubidium/strontium method gave an age of 72.54 plus or minus 0.18 mya, a possible error of 0.25%. The possible errors in these measurements are well under 1%.", "score": 25.703854003797073, "rank": 41}, {"document_id": "doc-::chunk-2", "d_text": "“You may think of your body as a fairly permanent structure, but most of it is in a state of constant flux as old cells are discarded and new ones are generated.”\nEven your bones, which you may think of as static and non-living, are full of cells called osteoclasts and osteoblasts that are forever (respectively) breaking down and rebuilding your bones as they adapt to the stresses they're subjected to. (This is why bed-rest or being in weightlessness results in bone loss.)\nA fascinating article from the New York Times highlights the work of Doctor Jonas Frisén, of the Karolinska Institute in Stockholm, who hit on a method to estimate the average age of tissues. The method involves measuring the amount of Carbon-14 in the DNA of various tissues.\nA pulse of artificially-generated Carbon-14 was created in the atmosphere in the years that above-ground nuclear testing was allowed, and since then the level (which was double the normal background level) has been slowly returning to normal. This Carbon-14 entered the food chain as plants absorbed it and incorporated it into their bodies, and from there it percolated into all living things, including us.
The amount of Carbon-14 in the DNA of our tissues reveals the average age of the cells making up those tissues, since the DNA is fabricated at the birth of the cell.\nFrisén's work, and that by other researchers, tells us the following:\n“The epithelial cells that line the surface of the gut have a rough life and are known to last only five days. Ignoring these surface cells, the average age of those in the main body of the gut is 15.9 years.”\n“Red blood cells, bruised and battered after traveling nearly 1,000 miles through the maze of the body's circulatory system, last only 120 days or so on average before being dispatched to their graveyard in the spleen.”\n“The epidermis, or surface layer of the skin, is recycled every two weeks or so.”\n“An adult human liver has a turnover time of 300 to 500 days.” This means that you have a new liver just about every year!", "score": 25.65453875696252, "rank": 42}, {"document_id": "doc-::chunk-16", "d_text": "So if core formation is slow on this timescale, most of the hafnium-182 from the supernova debris will have had time to decay to tungsten, and will vanish into the core. But if core formation is relatively fast, the hafnium-182 will remain in the rocky phase, where the tungsten-182 derived from it will end up stranded.\nWe can also sometimes learn about how a material was formed by looking at the ratio of different non-radioactive isotopes. Almost all elements occur as more than one isotope, with the same number of protons and electrons, but different numbers of neutrons. You may well have been told at school that isotopes, despite having different masses, have identical chemistry, but this is not quite true.
Generally speaking, because of quantum mechanical effects, different isotopes have very slightly different chemistries, and small deviations in their relative abundance provide clues to a sample's history.\nUsing many detailed arguments of this kind, we come up with the following sequence:\n- Beginning of solar system, 4,568 million years ago (see above)\n- Collisions between planetary embryos, and partial melting of resulting meteorites, within a very few million years of that beginning\n- Accretion of Earth under way within 10 million years of the beginning\n- Earth-Moon system formed, between 30 and 100 million years from the beginning. Formation of the Earth's liquid core would be complete at this stage, although the formation of the solid inner core is remarkably recent by comparison (around 1,000 to 1,500 million years ago)\n- Oldest rocks on the Moon 4,460 million years old (dating the Moon's oldest crust to within a very few tens of millions of years after its formation)\n- Oldest rocks on Earth, 3,960 million years old, with evidence for an older (4,000 to 4,200 million year old) component\n- Late Heavy Bombardment, around 3,900 million years ago, as estimated by dating craters on the Moon.\nIt was at one time assumed that the Late Heavy Bombardment would have heated the Earth's surface sufficiently to destroy any life forms in existence at that time. But careful estimates of the total heating effect show that this is not the case, even at the surface, while bacteria obtaining their energy from reactions involving minerals have been found 2.8 kilometers below the surface.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "
In this article I hope to explain the theoretical and physical science behind Carbon dating, and discuss how it affects our lives and the validity of the process.\nScientists use Carbon dating for telling the age of an old object, whose origin and age cannot be determined exactly by normal means. Because of this method Chemistry has become intertwined with History, Archeology, Anthropology, and Geology. (Poole) Many items that have been thought to come from one time have been tested and found out to actually come from a few thousands years beforehand. Places where historians believed that human civilization came to exit say, only 2,000 years ago, have actually been proven to have had some form of human civilization more than 4,000 years ago. (Poole) Fine art collectors have used Carbon dating to determine if a piece of antique art is actually genuine. Some have saved themselves several thousands of dollars by testing the piece before they bought it and finding out that it is not the original, but a very clever modern copy. (Poole) But how is this done? What are the ides behind carbon dating?\nAtoms of given elements have different Isotopes. Isotopes are atoms of the same element, i.e. they have the same number of Protons and Electrons in the atom, but they have a different number of Neutrons in the nucleus, so they have different atomic masses. (Jones & Atkins)\nThe element Carbon is in all living things, it is a basic building block for the construction of organic material. The normal molar mass of Carbon is around 12, however there are a few Carbon atoms that have a molar mass of about 13, and even fewer that have a molar mass of about 14. These atoms have one or two more neutrons in the nucleus than most Carbon atoms. Scientists call the isotope with molar mass around 14, Carbon-14. Carbon-14 is manufactured in the upper atmosphere by the action of cosmic rays. 
(Ham, Snelling, & Wieland) Ordinary nitrogen is converted into Carbon-14; however, it is not a stable element. It turns out to be radioactive and decays over time.\nAll organic material has decaying Carbon-14 in it.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "Both are, however, 'children' of the same parent -- the Zero Point Energy.\nBecause of this, and because the speed of light is in the numerator of every reduced radio decay rate equation, any changes in the speed of light indicate changes in atomic decay rates. Importantly, the original short half-life elements were also a contributor and they are gone now.\nOn pages 186 and 187 he describes the discovery at Oklo in the West African Republic of Gabon, of the remnants of an ancient site where an accident of geology produced, for a while, the conditions suitable for a sustained chain reaction to take place - a sort of natural nuclear reactor.\nIt was moderated by water permeating a deposit of uranium.\nOklo\nI've been reading "Impossibility: The Limits of Science and the Science of Limits" by John D. Barrow and he has an interesting discussion on the speed of light in our geological past.\nThis then suggests that the majority of the elements were formed at the beginning rather than through a series of supernova explosions.\nGiven that point, it seems that the stars must be basically the same age.\nHowever, there are many anomalies and there is much evidence of radioisotopic inheritance and mixing because of global tectonic processes having stirred the mantle and added magmas to the crust, which has likewise been stirred by the crustal rock cycle.\nMy thought is, can the relative natural abundances of these chains' terminal products (Pb-208, 207, and 206) be used to calculate an initial abundance and time frame for the original atomic abundances of the parent isotopes, which could be compared to the predictions of Willie Fowler regarding stellar nucleogenesis
processes. Thanks again for all your interesting and informative web postings and work. Setterfield: I believe that it is possible to determine the initial ratios of the parent elements in the various chains. Interestingly, using these sorts of ratios, one piece of moon rock was dated as being 8.2 billion years old, to the amazement of the dating laboratory involved. As far as stars are concerned, the Th/Nd ratio has been shown to be unchanged no matter what the age of the star is, which leads one to two conclusions.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-4", "d_text": "A first big question…\nSodium is Na (Latin Natrium), Potassium is K (Latin Kalium), Iron is Fe (Latin Ferrum, Italian Ferro), Nitrogen is N (Italian Azoto), Copper is Cu (Italian Rame), Tin is Sn (Latin Stannum, Italian Stagno), Lead is Pb (Latin Plumbum, Italian Piombo), Mercury is Hg (Latin Hydrargyrium). Pay attention to "false friends": Carbon is C and not Ca (Calcium); Silicon is Si (Italian Silicio, not silicone, a family of polymers…); Phosphorus is P (Italian Fosforo)\nChemical composition of the Universe: H 71.0%, He 27.0%, C, N, O and Ne 1.8%, all the others 0.2%\nComposition of the earth crust: 99.5% of the earth crust is given by 12 elements, not including C and N, on which organic chemistry and biochemical compounds are based; 99.95% is given by 24 elements; all the others (ca. 68) are 0.05%\nMasses for a body of about 70 kg\nComposition of the human body\nATOM: smallest portion of an element still having the same properties. From the Greek "atomos": not cleavable\nIndeed: cleaved under very energetic conditions, typical of nuclear reactions!\nAtoms are the basic unit forming the substances, the elementary bricks of matter. Each ELEMENT gives rise to distinct atoms. About 90 elements are natural and about 19 artificial, to a total of about 109 elements, i.e.
to about 109 different types of atoms. Isolated atoms are spherical and very small. Typical dimension of atoms: 10^-10 meter (1 Å, Angstrom) or 0.1 nanometres or 100 picometers. Chemists use Angstroms!!", "score": 25.16275752500809, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "How can the replacement rates of the cells in various tissues in our body be measured? For rapidly renewing tissues common labeling tricks can be useful, as with the nucleotide analog BrdU. But what about the very slow tissues that take years or a lifetime? In a fascinating example of scientific serendipity, Cold War nuclear tests have come to the aid of scientists because they changed the atmospheric concentrations of the isotope carbon-14 around the globe. These experiments are effectively pulse-chase experiments, but at the global scale. Carbon-14 has a half-life of 5730 years, and thus even though it is radioactive, the fraction that decays within the lifetime of an individual is negligible and this timescale should not worry us. The "labeled" carbon in the atmosphere is turned into CO2 and later into our food through carbon fixation by plants. In our bodies, this carbon gets incorporated into the DNA of every nascent cell, and the relative abundance of carbon-14 remains stable as the DNA is not replaced through the cell's lifetime. By measuring the fraction of the isotope carbon-14 in a tissue it is possible to infer the year in which the DNA was replicated, as depicted in Figure 1. The carbon-14 time course in the atmosphere initially spiked due to bomb tests and then subsequently decreased as it got absorbed in the much larger pools of organic matter on the continents and the inorganic pool in the ocean. As can be seen in Figure 1, the timescale for the exponential decay of the carbon-14 in the atmosphere is about 10 years.
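The pulse-chase logic just described can be sketched numerically. This is a minimal illustration, assuming a single-exponential model of the post-peak atmospheric curve with the ~10-year timescale quoted above; the peak year and amplitude are illustrative assumptions, not measured data:

```python
import math

# Illustrative single-exponential model of the post-peak atmospheric
# carbon-14 "bomb pulse" (assumed parameters, not measured data).
PEAK_YEAR = 1963.0   # assumed year of the atmospheric peak
PEAK_EXCESS = 1.0    # assumed fractional excess above the pre-bomb level
TIMESCALE = 10.0     # years; the decay timescale quoted in the text

def excess_c14(year):
    """Modeled fractional excess of atmospheric C-14 in a given year."""
    return PEAK_EXCESS * math.exp(-(year - PEAK_YEAR) / TIMESCALE)

def infer_dna_year(measured_excess):
    """Invert the model: recover the year the DNA was replicated."""
    return PEAK_YEAR + TIMESCALE * math.log(PEAK_EXCESS / measured_excess)

# Round trip: DNA made in 1980 carries the 1980 excess, and inverting
# the curve recovers 1980.
year = infer_dna_year(excess_c14(1980.0))
```

The real atmospheric curve is more complex than a single exponential, which is why actual bomb-pulse studies calibrate against the measured time course rather than a formula like this.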
The measured dynamics of the atmospheric carbon-14 content is the basis for inferring the rates of tissue renewal in the human body and has yielded insights into other obscure questions, such as how long sea urchins live and the origins of coral reefs.\nUsing these dating methods, it was inferred that fat cells (adipocytes) are replaced at a rate of 8±6% per year (BNID 103455). This results in the replacement of half of the body's adipocytes in ≈8 years. A surprise arrived when heart muscle cells were analyzed. The long-held dogma in the cardiac biology community was that these cells do not replace themselves. This paradigm was in line with the implications of heart attacks, where scar tissue is formed instead of healthy muscle cells. Yet it was found that replacement does occur, albeit at a slow rate.", "score": 24.345461243037445, "rank": 47}, {"document_id": "doc-::chunk-4", "d_text": "- Many Uniformitarian Estimates Give Various Dates, not Proving Young or Old Earth, but Showing the Fallacy of Uniformitarian Assumptions:\nProcess / Implied Age of Earth (years)\n- Decay of earth's magnetic field 10,000\n- Influx of radiocarbon to the earth 10,000\n- Continuous deposition of geologic column too small\n- Influx of juvenile water into oceans 340,000,000\n- Influx of magma from mantle to form crust 500,000,000\n- Growth of oldest living part of biosphere 5,000\n- Origin of human civilizations 5,000\n- Efflux of Helium-4 into the atmosphere 1,750-175,000\n- Development of total human population 4,000\n- Influx of sediment into the ocean via rivers 30,000,000\n- Erosion of sediment from continents 14,000,000\n- Leaching of sodium from continents 1,000,000\n- Leaching of chlorine from continents 1,000,000\n- Leaching of calcium from continents 12,000,000\n- Influx of carbonate into the ocean 100,000\n- Influx of sulphate into the ocean 10,000,000\n- Influx of chlorine into the ocean 164,000,000\n- Influx of calcium into the ocean 1,000,000\n- Influx of uranium into the ocean 1,260,000\n- 
Efflux of oil from traps by fluid pressure 10,000-100,000\n- Formation of radiogenic lead by neutron capture too small to measure\n- Formation of radiogenic strontium by neutron capt. too small to measure\n- Decay of natural remanent paleomagnetism 100,000\n- Parentless polonium halos too small to measure\n- Decay of uranium with initial "radiogenic" lead too small to measure\n- Decay of potassium with entrapped argon too small to measure\n- Formation of river deltas 5,000\n- Submarine oil seepage into oceans 50,000,000\n- Decay of natural plutonium 80,000,000\n- Decay of lines of galaxies 10,000,000\n- Expanding interstellar gas 60,000,000\n- Decay of short-period comets 10,000\n- Decay of long-period comets 1,000,", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-14", "d_text": "Theoretically, if the scientist measures a sample and finds an equal amount of C-14, the parent material, and N-14, the daughter material, then 5730 years should have passed since that hunk of carbon formed. Theoretically. This is the idea behind radioisotope dating.\nChemists have been able to measure the half-lives of most isotopes and, for the most part, have found their decay rates consistent. The accuracy of the dating methods hangs on a few assumptions, though. First, we have to assume that no extra daughter material contaminated the sample from an outside source. We have to assume that only parent material and no daughter material was in there at the start. We have to assume that no parent material leached out of the sample during its lifetime. Ultimately, we have to assume that the decay rates for various isotopes are the same now as they always have been, without variation. To deal with these obvious problems, geochronologists work to correlate dates through the use of several different dating methods – as in the U-238/U-235 method.\nNot all half-lives have been precisely determined.
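Under exactly the closed-system assumptions listed above (no initial daughter, no gain or loss, constant decay rate), the parent/daughter comparison reduces to a one-line formula. A minimal sketch:

```python
import math

def age_from_daughter_parent(d_over_p, half_life_years):
    """Age implied by a daughter/parent atom ratio, assuming no initial
    daughter and a closed system -- the assumptions listed in the text."""
    return half_life_years * math.log2(1.0 + d_over_p)

# The example from the text: equal amounts of C-14 (parent) and N-14
# (daughter) mean a ratio of 1, i.e. exactly one half-life has passed.
age = age_from_daughter_parent(1.0, 5730.0)
```

If any of the listed assumptions fails (inherited daughter, leaching, contamination), this formula returns a number that no longer means "age", which is why the text stresses cross-checking with independent isotope systems.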
The half-life of samarium-146 has been measured at between 50 million and 103 million years, and the older age was accepted. Only recently, using better instrumentation, have researchers dropped the half-life of Sm-146 to 68 million years. While Sm-146 dating is not used often, it raises the question of how precisely certain well-accepted half-lives have been measured.\nIn G. Brent Dalrymple's book The Age Of The Earth (1991), he includes lists of meteorites that have been dated at between 4.23 and 4.88 billion years old. These lists are often cited as evidence for the age of the earth, because meteorites are believed (by most parties) to be from material that formed at the same time as Earth. Earth has a dynamic crust that is constantly changing; it wears away and gets torn up in earthquakes; it gets cooked and crushed and covered with volcanic ash and ocean sediments. Earth's surface has taken a lot of abuse through the years. On the other hand, meteorites have been floating around in space untouched, and the trace elements in them are fairly evenly mixed. They are therefore considered good subjects to determine Earth's age.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "THE STUDY "Discovery of HE 1523-0901, a Strongly r-Process-Enhanced Metal-Poor Star With Detected Uranium," by Anna Frebel et al., The Astrophysical Journal, May 10, 2007.\nTHE MOTIVE How do you determine the age of an ancient star? For distant galaxies, astronomers measure redshift—the stretching of light that indicates how fast the stars are receding and therefore how old they are relative to the age of the universe. But this doesn't work for an ancient star nearby. So Anna Frebel of the McDonald Observatory at the University of Texas at Austin looked for chemical clues.
She reckoned that a rare form of old "metal poor" star, one with one-thousandth the iron content of our own young sun, carries an internal clock, one composed of the radioactive elements uranium and thorium. Because these metals decay at a steady rate, an astute observer can extrapolate backward and pinpoint the star's moment of birth. Frebel has found one such star in our own Milky Way and dated its birth to 13.2 billion years ago—barely 500 million years after the universe itself was born.\nTHE METHODS The older a star is, the fewer metals it contains. The first stars, which formed a few hundred million years after the Big Bang, were composed of only hydrogen, helium, and traces of lithium. Some tens of millions of years after their birth, these massive, puffy stars exploded as supernovas, and new heavy elements were born in their fiery depths. Frebel first hunted for their offspring, an old star with a chemical fingerprint that could be dated: 74 percent hydrogen, 25 percent helium, smidgens of uranium and thorium inherited from a parent supernova, and very little iron—a relatively light element that accumulated later in history as the universe evolved and that would obscure any signal from the radioactive components. To detect uranium and thorium, Frebel could measure the strength of their absorption lines in a spectrum—in other words, calculate how much light each element absorbs at a particular wavelength. Frebel used the Clay Magellan Telescope in the Chilean Andes to search the halo of the Milky Way—its outer reaches, where old stars lurk—and turned up a bright red giant about eight-tenths the mass of our sun, dubbed HE 1523-0901, that appeared to meet all the requirements.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-5", "d_text": "Most elements exist in different atomic forms that are identical in their chemical properties but differ in the number of neutral particles—i.e., neutrons. For a single element, these atoms are called isotopes.
Because isotopes differ in mass, their relative abundance can be determined if the masses are separated in a mass spectrometer (see below, Use of mass spectrometers). Radioactive decay can be observed in the laboratory by either of two means: The particles given off during the decay process are part of a profound fundamental change in the nucleus. To compensate for the loss of mass and energy, the radioactive atom undergoes internal transformation and in most cases simply becomes an atom of a different chemical element. In terms of the numbers of atoms present, it is as if apples changed spontaneously into oranges at a fixed and known rate. In this analogy, the apples would represent radioactive, or parent, atoms, while the oranges would represent the atoms formed, the so-called daughters. Pursuing this analogy further, one would expect that a new basket of apples would have no oranges but that an older one would have many. In fact, one would expect that the ratio of oranges to apples would change in a very specific way over the time elapsed, since the process continues until all the apples are converted. In geochronology the situation is identical.\nHow do you calculate the half-life of carbon 14?\nArchaeologists use the exponential, radioactive decay of carbon 14 to estimate the death dates of organic material. The stable form of carbon is carbon 12 and the radioactive isotope carbon 14 decays over time into nitrogen 14 and other particles. Carbon is naturally in all living organisms and is replenished in the tissues by eating other organisms or by breathing air that contains carbon. At any particular time all living organisms have approximately the same ratio of carbon 12 to carbon 14 in their tissues.
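The radiocarbon comparison being described here is just the exponential decay law solved for time. A minimal sketch, assuming the living-tissue ratio really is constant (the assumption the method rests on):

```python
import math

HALF_LIFE_C14 = 5730.0  # years

def years_since_death(sample_ratio_over_living_ratio):
    """Solve N/N0 = 0.5**(t / half-life) for t, where the input is the
    sample's C-14/C-12 ratio divided by the ratio in living tissue."""
    return -HALF_LIFE_C14 * math.log2(sample_ratio_over_living_ratio)

# A sample holding a quarter of the living ratio is two half-lives old.
age = years_since_death(0.25)
```

In practice laboratories also apply calibration curves, since the atmospheric ratio has drifted over time; this sketch ignores that correction.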
When an organism dies it ceases to replenish carbon in its tissues, and the decay of carbon 14 to nitrogen 14 changes the ratio of carbon 12 to carbon 14. Experts can compare the ratio of carbon 12 to carbon 14 in dead material to the ratio when the organism was alive to estimate the date of its death.\nHalf-life and carbon dating\nLove-hungry teenagers and archaeologists agree: dating is hard. But while the difficulties of single life may be intractable, the challenge of determining the age of prehistoric artifacts and fossils is greatly aided by measuring certain radioactive isotopes.", "score": 24.296145996203016, "rank": 51}, {"document_id": "doc-::chunk-5", "d_text": "To get a more precise age for our planet, scientists had to look beyond it. Meteorites offer exactly what they need. The asteroids that meteorites come from are some of the most primitive objects in the solar system.\nThey were formed at the same time as our planet and everything else in our solar system, but they have not been changed by the tectonic processes that shape Earth, so they're like time capsules. Our first really solid estimate of the Earth's age was obtained from radiometric analysis of the Canyon Diablo meteorite, a giant iron rock that blazed through Earth's atmosphere from space about 50,000 years ago and was eventually found by American scientists. Native Americans had known about and utilized it since prehistoric times.\nResearchers used uranium-lead techniques to date the meteorite back about 4.5 billion years. But scientists will keep trying to shave down that degree of uncertainty in their estimate by analyzing every ancient Earth rock, meteorite and solar system sample they can get their hands on. Have a question for Dear Science? An earlier version of this post incorrectly described the mass spectrometry technique. Mass spectrometry is used to determine the composition of a sample by determining the mass and charge of its component parts.\n
How do we know how old Earth is? Sarah Kaplan is a science reporter covering news from around the nation and across the universe.", "score": 24.179427033069324, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "Many great thinkers throughout history have tried to figure out Earth's age. For example, back in 1862, Lord Kelvin calculated how long Earth might have taken to cool from its original molten state. He concluded that Earth was born 20 to 400 million years ago. Today's scientists believe that answer is incorrect, but Kelvin's calculations were scientific, being based on logical thinking and mathematical calculation.\nScientists tried to determine Earth's age via our planet's layers of rock, which must have been built over time. You've seen these rock layers if you've ever observed a cut-away section of a mountain, perhaps because a highway runs through it. But Earth's layers of rock did not give up the secret of Earth's age easily. Their message proved difficult to decipher. How old is Earth? In the early part of the 20th century, scientists still weren't sure. However, from working with layer upon layer of rock laid down on Earth over long time spans, early 20th century scientists came to believe Earth was not millions of years old – but billions of years old.\nModern radiometric dating methods came into prominence in the late 1940s and 1950s. These methods focus on the decay of atoms of one chemical element into another.
They led to the discovery that certain very heavy elements could decay into lighter elements – such as uranium decaying into lead. This work gave rise to a process known as radiometric dating. This technique is based on a comparison between the measured amount of a naturally occurring radioactive element and its decay products, assuming a constant rate of decay characterized by a half-life.\nUsing this technique, scientists could, for example, analyze a sample from Earth's crust, figure out the quantities of uranium and lead, and plug those values along with the half-life into a logarithmic equation in order to compute the age of the rock. Over the decades of the 20th century, scientists documented tens of thousands of radiometric age measurements. Taken as a whole, these data indicate that the Earth's history extends backward from the present to at least 3.8 billion years into the past.\nNowadays, scientists use radiometric dating of various sorts of rock – both earthly and extraterrestrial – to pinpoint Earth's age. For example, scientists search for and date the oldest rocks exposed on Earth's surface.", "score": 23.107059623813686, "rank": 53}, {"document_id": "doc-::chunk-2", "d_text": "Tritium can be used to date a bottle of wine that is perhaps (reportedly, let's say) 100 years old, just to make sure the bottle of wine isn't a fake.\nCarbon has 6 protons. Carbon exists in two common isotopes, carbon-12 and carbon-13, and one rare isotope, carbon-14 (abundance about one part in a trillion), as follows:\n* Note that 98.9% (12C) + 1.1% (13C) = 100%, such that any other isotope of carbon, like 14C, is going to have very low abundance.\nCarbon-14 is used in carbon dating, which works back to about 50,000 - 70,000 years. For fossils or other carbon-containing materials older than that, the carbon-14 has decayed away.\nExperimentally, other isotopes of carbon can be created: 9C, 10C, ... 15C, 16C, etc.
These other isotopes are unstable (radioactive) and have very short half-lives of just a few seconds or so.\nMagnesium has 12 protons. Magnesium exists in three stable isotopes, as follows:\nChlorine has 17 protons. At least nine isotopes have been recognized, ranging from 32Cl to 40Cl. Most of these are synthetic, having been observed through lab work only, and these are very unstable. We know that each chlorine isotope has 17 protons, and doing the math, we see that 32Cl to 40Cl would have 15 to 23 neutrons, as follows:\n* These other isotopes are extremely rare or synthetic (lab-created only). 36Cl can be used in age dating in the range of 60,000 to 1 million years.\nPotassium has 19 protons. Potassium exists in two stable isotopes and one very long-lived isotope, as follows:\n* Potassium-40 is very important in age dating. Its half-life is 1.277 billion years, a very, very slow decay rate. The significance of the slow rate is that even for very old rocks and minerals, there is still enough original potassium-40 left over to measure with our atom-counting instruments, so we can do age dating.", "score": 23.030255035772623, "rank": 54}, {"document_id": "doc-::chunk-7", "d_text": "You might be thinking, after you have traveled 10 trillion years to the future in only 30 years of subjective time, "How old does that make you?" That takes us to our next, more practical question. How can you determine the age of a person?\nYes, you could just ask them; however, there are other, more creative approaches you can make use of. As history explains, our world is constantly changing. Some of those changes leave long-lasting marks on our bodies. If you understand what to search for, you can make use of those marks to know when a person was born.\nFor instance, between the years 1945 and 1962, the US and the USSR tested a lot of nuclear weapons. A lot of them were detonated in the atmosphere, which released contaminants into the wind.
Among the contaminants released into the wind was a chemical known as strontium-90, which spread through the atmosphere and got into the bodies of the children who were growing up during the early 1960s.\nStrontium-90 is very similar to calcium; therefore, those children's bodies incorporated it into their teeth and bones. Both our teeth and bones replenish themselves from time to time, but teeth do this more slowly than bones. The children's skeletons ultimately replaced all of their strontium-90 with calcium; however, among those children who were developing their permanent teeth during that time, small amounts of the chemical are still there to this day. If you discover high levels of strontium-90 in a person's teeth, you know that you're dealing with a Baby Boomer.\nHowever, what if you discover high levels of lead in their teeth? Then you know that you are dealing with either a Baby Boomer or a Gen-Xer. Back then, people drove cars that burned leaded gasoline, which emitted lead into the air. That led to an outbreak of lead poisoning which started during the mid-twentieth century and peaked in 1972, although high levels of contamination remained until the late 1970s.\nIn order to separate the post-1972 Gen-Xers from their pre-1972 counterparts and the Baby Boomers who came before them, just see if they have any smallpox vaccine marks.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "Does radiation affect carbon dating?\nWhat I want to do in this video is kind of introduce you to the idea of, one, how carbon-14 comes about, and how it gets into all living things. They can also be alpha particles, which is the same thing as a helium nucleus. And they're going to come in, and they're going to bump into things in our atmosphere, and they're actually going to form neutrons. And we'll show a neutron with a lowercase n, and a 1 for its mass number.
And what's interesting about this is this is constantly being formed in our atmosphere, not in huge quantities, but in reasonable quantities. Because as soon as you die and you get buried under the ground, there's no way for the carbon-14 to become part of your tissue anymore, because you're not eating anything with new carbon-14.\nAnd then either later in this video or in future videos we'll talk about how it's actually used to date things, how we use it to actually figure out that that bone is 12,000 years old, or that person died 18,000 years ago, whatever it might be. So let me just draw the surface of the Earth like that. So then you have the Earth's atmosphere right over here. And 78%, the most abundant element in our atmosphere, is nitrogen. And we don't write anything, because it has no protons down here. And what's interesting here is once you die, you're not going to get any new carbon-14. You can't just say all the carbon-14's on the left are going to decay and all the carbon-14's on the right aren't going to decay in that 5,730 years.\nIt is assumed that the ratio has been constant for a very long time before the industrial revolution.\nIs this assumption correct (for on it hangs the whole validity of the system)?\nFamiliar to us as the black substance in charred wood, as diamonds, and the graphite in "lead" pencils, carbon comes in several forms, or isotopes.\nOne rare form has atoms that are 14 times as heavy as hydrogen atoms: carbon-14, or C-14. As a dead specimen ages, its C-14/C-12 ratio gets smaller.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "Origin of carbon dating\n"Understanding carbon, nitrogen, and oxygen is very important," she says.
If you want to interpret other galaxies, you'd better be sure of how the evolution of these elements goes." The best place to gain that understanding, she says, is right here in the Milky Way.\nIf the amount of carbon 14 is halved every 5,730 years, it will not take very long to reach an amount that is too small to analyze. This rate reveals whether the carbon came from exploding or nonexploding stars--because the former have much shorter lives. Stars that explode as supernovae are usually born with more than 8 times the Sun's mass. So, the fossil is 8,680 years old, meaning the living organism died 8,680 years ago. January 11, 2006 The Ring Nebula by the Hubble Space Telescope. Most of the carbon supporting life on Earth was forged by stars that never exploded, say astronomers in Michigan and Sweden.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-4", "d_text": "This work would ultimately lead to the development of the atomic bomb but it would also open the door to the creation of man-made elements and an understanding of nuclear decay and ½-life.\nDecay in this context is meant to describe the physical and chemical breakdown of an unstable atom by the emission of one type of particle; ½-life is the time it takes for ½ of the material to decay. These terms can be, and are, equally applicable to other materials such as plastics. One reason for recycling plastics is that the ½-life of many plastics is extremely long and, unless the material is biodegradable, it is unlikely to decay in a land-fill somewhere.\nThe discovery of the neutron doesn't mean that the development of the atomic theory is complete. Further work has shown that protons, neutrons, and electrons can be further subdivided. Each step in this process is increasingly more complex. But complexity does not preclude solvability and the work goes on.\nAt this point, one can see that the postulates first proposed by Dalton are no longer valid as written.
The idea that the atom is indivisible has been replaced with the notion that there are some other basic particles which cannot be divided. And physicists are working on that idea as this piece is being written. Perhaps one day there will be an ultimate atomic theory – that is what Democritus was seeking and what Dalton was seeking and what drives the exploration of the world of sub-atomic particles today.\nFortunately, for most chemists, the atomic theory of the proton and neutron in the nucleus and electrons in “clouds” around the nucleus provides a nice working model that explains most, if not all, chemical reactions.\nSimilarly, the idea of nuclear decay and ½-life become very useful in other areas of science; areas perhaps where the collisions of faith, logic, reason and belief collide.\nIn the next part of this discussion, I want to look at the measurement of the age of something – “How Old Is Old?”", "score": 22.87988481440692, "rank": 58}, {"document_id": "doc-::chunk-1", "d_text": "* Uranium-234 ~ 246,000 years * Uranium-235 ~ 703.8 million years * Uranium-238 ~ 4.468 billion years", "score": 21.695954918930884, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "The study of the sequence of occurrence of fossils in rocks, biostratigraphy,\nreveals the relative time order in which organisms lived. Although this relative\ntime scale indicates that one layer of rock is younger or older than another,\nit does not pinpoint the age of a fossil or rock in years. The discovery of\nradioactivity late in the 19th century enabled scientists to develop techniques\nfor accurately determining the ages of fossils, rocks, and events in Earth's\nhistory in the distant past. 
For example, through isotopic dating we've learned\nthat Cambrian fossils are about 540-500 million years old, that the oldest known\nfossils are found in rocks that are about 3.8 billion years old, and that planet\nEarth is about 4.6 billion years old.\nDetermining the age of a rock involves using minerals that contain naturally-occurring\nradioactive elements and measuring the amount of change or decay in those elements\nto calculate approximately how many years ago the rock formed. Radioactive elements\nare unstable. They emit particles and energy at a relatively constant rate,\ntransforming themselves through the process of radioactive decay into other\nelements that are stable - not radioactive. Radioactive elements can serve as\nnatural clocks, because the rate of emission or decay is measurable and because\nit is not affected by external factors.\nAbout 90 chemical elements occur naturally in the Earth. By definition an element\nis a substance that cannot be broken into a simpler form by ordinary chemical\nmeans. The basic structural units of elements are minute atoms. They are made\nup of the even tinier subatomic particles called protons, neutrons, and electrons.\nTo help in the identification and classification of elements, scientists have\nassigned an atomic number to each kind of atom. The atomic number for each element\nis the number of protons in an atom. An atom of potassium (K), for example,\nhas 19 protons in its nucleus so the atomic number for potassium is 19.\nAlthough all atoms of a given element contain the same number of protons, they do not\ncontain the same number of neutrons. Each kind of atom has also been assigned\na mass number.
That number, which is equal to the number of protons and neutrons\nin the nucleus, identifies the various forms or isotopes of an element.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-1", "d_text": "The burning of fossil fuels is altering the ratio of carbon in the atmosphere, which may cause objects tested in the coming decades to seem hundreds or thousands of years older than they actually are, according to a study published in the Proceedings of the National Academy of Sciences.\nAnd then you can use that rate to actually determine how long ago that thing must've died. It would be a pretty reasonable estimate to say, well, that thing must be 5,730 years old. And we talk about the word isotope in the chemistry playlist. But this number up here can change depending on the number of neutrons you have. And every now and then-- and let's just be clear-- this isn't like a typical reaction. So instead of seven protons we now have six protons. And a proton that's just flying around, you could call that hydrogen 1. If it doesn't gain an electron, it's just a hydrogen ion, a positive ion, either way, or a hydrogen nucleus. And so this carbon-14, it's constantly being formed. I've just explained a mechanism where some of our body, even though carbon-12 is the most common isotope, some of our body, while we're living, gets made up of this carbon-14 thing. So carbon by definition has six protons, but the typical isotope, the most common isotope of carbon is carbon-12. And then that carbon dioxide gets absorbed into the rest of the atmosphere, into our oceans. When people talk about carbon fixation, they're really talking about using mainly light energy from the sun to take gaseous carbon and turn it into actual kind of organic tissue. (Ham et al., page 68.)
The claim that we cannot know the C-14/C-12 ratio in the past, or that this is "the technique's Achilles' heel", is incorrect. The whole validity of radiocarbon dating for the past 10,000 years---the time span of interest to biblical chronology---hangs only on the tree-ring chronologies which are used to calibrate it. This process does not involve any assumption about historic radiocarbon to stable carbon ratios because the radiocarbon concentration in the tree-ring samples would be affected in exactly the same way as the radiocarbon concentration in the specimen to be dated. To quote again from The Answers Book: Some recent, though controversial, research has raised the interesting suggestion that c (the speed of light) has decreased in historical times.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-2", "d_text": "This technique involves measuring the ratio of uranium isotopes (238U or 235U) to stable lead isotopes 206Pb, 207Pb and 208Pb. It can be used to determine ages from 4.5 billion years old to 1 million years old. This method is thought to be particularly accurate, with an error-margin that can be less than two million years – not bad in a time span of billions.\nU-Pb dating can be used to date very old rocks, and has its own in-built cross-checking system, since the ratio of 235U to 207Pb and 238U to 206Pb can be compared using a "concordia diagram", in which samples are plotted along a straight line that intersects the curve at the age of the sample.\nU-Pb dating is most often done on igneous rocks containing zircon.
It’s been used to determine the age of ancient hominids, along with fission-track dating.\nThis method involves examining the polished surface of a slice of rock, and calculating the density of markings – or “tracks” – left in it by the spontaneous fission of 238U impurities.\nThe uranium content of the sample must be known; this can be determined by placing a plastic film over the polished slice and bombarding it with slow neutrons – neutrons with low kinetic energy. This bombardment produces new tracks, the quantity of which can be compared with the quantity of original tracks to determine the age.\nThis method can date naturally occurring minerals and man-made glasses. It can thus be used for very old samples, like meteorites, and very young samples, like archaeological artefacts.\nFission-track dating identified that the Brahin Pallasite, a meteorite found in the 19th century in Belarus – slabs of which have become a collector's item – underwent its last intensive thermal event 4.26–4.2 billion years ago.\nThis method involves calculating the prevalence of the very rare isotope chlorine-36 (36Cl), which can be produced in the atmosphere through cosmic rays bombarding argon atoms. It’s used to date very old groundwater, from between around 100,000 and 1 million years old.\nChlorine-36 was also released in abundance during the detonation of nuclear weapons between 1952 and 1958.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-2", "d_text": "6 billion years before the parent is a naturally occurring radioactive elements spontaneously decay is our first list at the earth.", "score": 21.550142828481356, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "Archaeologists use the exponential radioactive decay of carbon 14 to estimate the death dates of organic material. The stable form of carbon is carbon 12 and the radioactive isotope carbon 14 decays over time into nitrogen 14 and other particles.
Carbon is naturally in all living organisms and is replenished in the tissues by eating other organisms or by breathing air that contains carbon. At any particular time all living organisms have approximately the same ratio of carbon 12 to carbon 14 in their tissues. When an organism dies it ceases to replenish carbon in its tissues and the decay of carbon 14 to nitrogen 14 changes the ratio of carbon 12 to carbon 14.\n20.6: The Kinetics of Radioactive Decay and Radiometric Dating\nThe following tools can generate any one of the values in the half-life formula from the other three, for a substance undergoing exponential decay. Half-life is defined as the amount of time it takes a given quantity to decrease to half of its initial value. The term is most commonly used in relation to atoms undergoing radioactive decay, but can be used to describe other types of decay, whether exponential or not. One of the most well-known applications of half-life is carbon dating. The half-life of carbon-14 is approximately 5,730 years, and it can be reliably used to measure dates up to around 50,000 years ago. The process of carbon dating was developed by William Libby, and is based on the fact that carbon-14 is constantly being made in the atmosphere. It is incorporated into plants through photosynthesis, and then into animals when they consume plants.\n10.4: Radioactive Decay\nWhen we speak of the element Carbon, we most often refer to the most naturally abundant stable isotope 12C.
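The "generate any one of the values from the other three" idea above is just three rearrangements of the half-life formula N = N0 · (1/2)^(t/t½). A small sketch; the function names and sample numbers are mine:

```python
import math

def remaining(n0: float, t: float, half_life: float) -> float:
    """Quantity left after time t:  N = N0 * (1/2)^(t / t_half)."""
    return n0 * 0.5 ** (t / half_life)

def elapsed(n0: float, n: float, half_life: float) -> float:
    """Time needed for N0 to decay down to N."""
    return half_life * math.log2(n0 / n)

def half_life_from(n0: float, n: float, t: float) -> float:
    """Half-life implied by observing N0 decay to N over time t."""
    return t / math.log2(n0 / n)

# The three rearrangements are consistent with one another:
print(remaining(100.0, 11460.0, 5730.0))     # 25.0
print(elapsed(100.0, 25.0, 5730.0))          # 11460.0
print(half_life_from(100.0, 25.0, 11460.0))  # 5730.0
```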
Although 12 C is definitely essential to life, its unstable sister isotope 14 C has become of extreme importance to the science world.", "score": 21.43673747588885, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "After less than a second, the quarks and gluons had condensed into stable protons and neutrons, the building blocks of all atomic nuclei.\nThe atom we're thinking of started out as a neutron. Protons tried to fuse with it to create deuterium, but the Universe was too hot for that to happen, and each time it formed deuterium, it was blasted apart less than a nanosecond later.\nAfter about three minutes, a few of the neutrons had decayed into protons, but this one remained, and finally the Universe had cooled enough so that nuclear fusion could proceed. The neutron quickly formed deuterium, then Helium-3, and finally found another deuteron to become a Helium-4 nucleus. Only about 8% of the atoms in the Universe became Helium-4 like this one; the other 92% were just plain old protons, also known as Hydrogen nuclei.\nIt took another 380,000 years for the Universe to cool enough for this to become a neutral atom, and for two electrons to join this nucleus. The Universe -- despite its rapid expansion and cooling -- remains 100% ionized until the temperature drops to just a few thousands of degrees, which simply takes that much time.\nOver the next hundred-million years or so, this atom found itself caught up in the gravitational pull of the Universe, which began to form stars and galaxies. But the vast majority of atoms -- more than 95% -- weren't a part of the first generation of stars, and neither was this one in particular.\nInstead, when the first stars formed, they kicked the electrons out of the atoms that surrounded them, creating ions once again.\nIt was only by luck that this atom we're following wound up in a dense molecular cloud, shielded from this radiation. 
After more than a billion years in this collection of neutral atoms, it finally found itself pulled in by gravitational attraction to what would become a giant star.\nThis atom lost its electrons and fell to the core of the star, where it lay dormant for millions of years, as hydrogen nuclei fused into other helium nuclei just like this one. When the core ran out of hydrogen fuel, helium fusion began, and our atom fused with two others to become a carbon nucleus!\nWhile other atoms even closer to the center of the star fused further, carbon was as far as this particular atom went.", "score": 20.327251046010716, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "The Earth is 4.54 billion years old.\nWhile it may be impossible to determine the exact date and time of Earth’s formation, science does allow us to calculate a pretty accurate estimation. According to most calculations, our Earth is approximately between 4.5 and 4.6 billion years old. Many scientists accept the actual age of the Earth to be 4.54 billion years.\nThe process of determining the age of the Earth is not exactly a simple one. In fact, people have been trying to calculate Earth’s age for millennia. While humans have long tried to justify the creation of the Earth through mythology, our current, accurate method of measuring the age of the Earth comes at the end of a long series of estimates made through history. One of the earlier attempts to scientifically calculate the age of the Earth was by the physicist William Thomson. In 1862, he claimed that the Earth was between 20 million and 400 million years old. His calculation was based on his assumption that Earth had formed as a completely molten object. He then calculated the amount of time it would have taken for the near-surface of the Earth to cool to its current temperature. 
However, he did not take into consideration the heat produced via radioactive decay, which continually warms the interior and keeps rocks near the surface hotter than simple cooling would predict.\nThe process of radioactive decay was unknown at the time, but that is the exact process that today’s scientists used to calculate Earth’s real age. Scientists use the radiometric age dating of meteorite material, the result of which is supported by the oldest known terrestrial and lunar samples. The oldest rocks ever found on Earth are 4.0 – 4.2 billion years old.\nThe radiometric age dating of meteorite material, on the other hand, takes into consideration the large amount of radioactive material contained within the Earth, which was throwing off previous calculations. Geologists found that radioactive materials decay into other elements at a very predictable rate; some decay fast, while some take millions or billions of years. Ernest Rutherford and Frederick Soddy of McGill University found that half of any isotope of a radioactive element decays into another isotope at a set rate. This is the concept of ‘half-life’. For example, from a set amount of Thorium-232, half of it will decay over about 14 billion years, and then half of that amount will decay in another 14 billion years, and so on.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-14", "d_text": "If we could figure out how, say, to keep the fat cell population from being renewed so exactly, their numbers might naturally decrease. (On the other hand, perhaps the rate at which they die would drop to keep the balance – no one knows yet).\nSo, how do you tell how old a fat cell is, anyway? That’s the ingenious part I mentioned above, and it involves the same sort of techniques used in radiocarbon dating. The amount of carbon-14 in the atmosphere is relatively constant, with a few minor variations over the last fifty thousand years or so.
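The repeated-halving arithmetic described above (half of a Thorium-232 sample decays, then half of the remainder, and so on) is easy to make concrete. A short sketch, assuming the published Thorium-232 half-life of roughly 14.05 billion years; the function name and sample ages are mine:

```python
TH232_HALF_LIFE_GYR = 14.05  # published half-life of Thorium-232, billions of years

def fraction_left(age_gyr: float) -> float:
    """Fraction of an original Th-232 sample still present after age_gyr."""
    return 0.5 ** (age_gyr / TH232_HALF_LIFE_GYR)

# Over Earth's ~4.54-billion-year history most of the original Th-232
# survives, which is exactly why it works as a very slow clock:
print(round(fraction_left(4.54), 2))   # 0.8
print(round(fraction_left(14.05), 2))  # 0.5 after one half-life
print(round(fraction_left(28.10), 2))  # 0.25 after two half-lives
```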
Well, relatively constant except for the 1950s and 1960s, when we as a species reset the counter but good by atmospheric testing of atomic and nuclear weapons. Those tests released a much larger than usual amount of 14C into the world - in 1963 the count had doubled over normal background - and that's since cycled into the biosphere through uptake by plants and other living creatures.\nThat process has sent the atmospheric levels of radioactive carbon down steeply over the years, but there’s plenty of signal to detect, and we know just how much it’s gone down every year. In effect, every year of the last 50 or 60 has an anomalous carbon-14 reading, and each one is unique and vintage-dated. We take up the carbon through our food, and as a cell is formed, the particular carbon isotope signature of your body at the time is in all its parts. Many of these are recycled constantly – but the DNA isn’t. Extracting the DNA from cells and looking at the carbon-14 levels through mass spectrometry gives you a “production date” stamp for when that cell was born. (See here for a longer discussion of carbon isotope mass spectrometry as it relates to detection of banned steroid hormone use, specifically in the Floyd Landis case. That post, by the way, led to the longest comment thread ever seen on this blog). The same technique is being used for other cell populations as well.\nThe confirmation that the number of fat cells seems to be set before adulthood also ties in with the obesity trends seen in the general population. The great majority of obese adults were also obese as children, and the great majority of non-obese children do not become obese as adults.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-1", "d_text": "On the evaluation of glauconite and illite for dating sedimentary rocks by the potassium-argon method. 
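The bomb-pulse matching described above (compare a cell's measured carbon-14 level against the year-by-year atmospheric record to find its "production date") can be sketched as a nearest-value lookup. The table below is invented purely for illustration and is not a real calibration curve, though real curves do peak around 1963 and decline afterwards:

```python
# Hypothetical atmospheric C-14 levels by year (relative to pre-bomb background).
# These numbers are illustrative only, not measured calibration data.
ATMOSPHERIC_C14 = {1955: 1.05, 1960: 1.25, 1963: 1.95,
                   1970: 1.55, 1980: 1.30, 1990: 1.15, 2000: 1.08}

def birth_year(measured_level: float) -> int:
    """Year whose atmospheric C-14 level best matches the sample's DNA."""
    return min(ATMOSPHERIC_C14,
               key=lambda year: abs(ATMOSPHERIC_C14[year] - measured_level))

print(birth_year(1.5))  # closest to the 1970 entry in this toy table
```

A real analysis also has to handle the ambiguity that most post-pulse levels occur twice (once on the rising side, once on the falling side of the 1963 peak), which this toy lookup ignores.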
Furthermore, they do not want presbyteries to note such views or consider them exceptions or restrict their being taught.\nThe isotopic constitution of radiogenic leads and the measurement of geological time. Click on the headline above to see the graphs which show that, up to the present, the data is strikingly similar to the book's forecasts. The space probes WMAP, launched in 2001, and Planck, launched in 2009, produced data that determines the Hubble constant and the age of the universe independent of galaxy distances, removing the largest source of error.\nAnd you come upon a modern Jetliner — say a Boeing. This led him to estimate that Earth was about 75,000 years old. It was a chance result from work by two teams less than 60 miles apart.\nSCIENTIFIC AGE OF THE EARTH.\nBefore analyzing the arguments advanced by creation “scientists” for a very young Earth, I here summarize briefly the evidence that has convinced scientists that the Earth is 4.5 to 4.6 billion years old. How radiometric dating works in general: Radioactive elements decay gradually into other elements.\nThe original element is called the parent, and the result of the decay process is called the daughter. I. Introductory Statement. We thank our God for the blessings of the last two years.\nWe have profited personally and together by the study of God’s Word, discussion and hard work together. A newly released study, produced with help from eight universities, found some good news.\nOver the study period, the global impact of human activities on the terrestrial environment is expanding more slowly than the rates of economic and/or population growth. The age of the Earth is 4.54 ± 0.05 billion years (4.54 × 10^9 years ± 1%).\nThis age may represent the age of the Earth's accretion, of core formation, or of the material from which the Earth formed.\nThis dating is based on evidence from radiometric age-dating of meteorite material and is consistent with the radiometric ages of the oldest-known terrestrial and lunar samples.
In physical cosmology, the age of the universe is the time elapsed since the Big Bang. The current measurement of the age of the universe is 13.799 ± 0.021 billion (10^9) years within the Lambda-CDM concordance model.\nThe uncertainty has been narrowed down to 21 million years, based on a number of projects that all give extremely close figures for the age. These include studies of the.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "
Because as soon as you die and you get buried under the ground, there's no way for the carbon-14 to become part of your tissue anymore because you're not eating anything with new carbon-14.\nAnd then either later in this video or in future videos we'll talk about how it's actually used to date things, how we use it actually figure out that that bone is 12,000 years old, or that person died 18,000 years ago, whatever it might be. So let me just draw the surface of the Earth like that. So then you have the Earth's atmosphere right over here. And 78%, the most abundant element in our atmosphere is nitrogen. And we don't write anything, because it has no protons down here. And what's interesting here is once you die, you're not going to get any new carbon-14.", "score": 19.944208417965356, "rank": 69}, {"document_id": "doc-::chunk-2", "d_text": "When the core of the star collapsed and the star went supernova, our atom was blown out into the interstellar medium, where it resided for billions of years.\nWhile billions of other stars went through the life-and-death cycle, this carbon atom remained in interstellar space, eventually picking up six electrons to become neutral. It found its way into a gravitational collection of neutral gas, and cooled, eventually getting sucked in to another gravitational perturbation, as star-formation happened all over again.\nThis time, the atom didn't find its way into the central star of its system, but rather into the dusty disk that surrounded it. 
Over time, the disk separated into planetoids and planetesimals, and this atom found itself aboard one of those.\nIt first joined together with four hydrogen atoms, becoming methane, and went through millions of different chemical reactions over time.\nAfter life took hold on Earth, it became a part of a bacterium's DNA, then a part of a plant's cell wall, and eventually became part of a complex organism that would find itself consumed by you.\nThe atom is currently in a red blood cell of yours, where it will remain for a total of about 120 days, until the cell is destroyed and replaced by a different one.\nAlthough the cell -- and all cells in your body -- will be destroyed and replaced, you will remain the same person you are, and the atom will simply take on a different function, whether in your body or out of it. The atoms in your body are temporary, and can all be replaced -- unnoticed by you -- by another of the same type.\nAnd each of the 10^28 atoms in your body has a story as spectacular and unique as this one! As Feynman famously said,\n\"I / a Universe of atoms / an atom in the Universe.\"\nThe story of the Universe is inside every atom in your body, each and every one. And after 13.7 billion years, 10,000,000,000,000,000,000,000,000,000 of them have come together, and that's you. The Universe is inside of you, as surely as you're inside the Universe.\nYou, a Universe of atoms, an atom in this Universe.\nI want to hear the one about the atoms that end up in my beer!\nWell, one epoch a carbon atom went on a bender with a helium atom and came out oxygen, but it doesn't remember much of what happened after that.", "score": 18.90404751587654, "rank": 70}, {"document_id": "doc-::chunk-1", "d_text": "The age of the universe can be derived from the observed relationship between the velocities of the galaxies and the distances separating them. The velocities of distant galaxies can be measured very accurately, but the measurement of distances is more uncertain.
Over the past few decades, measurements of the Hubble expansion have led to estimated ages for the universe of between 7 billion and 20 billion years, with the most recent and best measurements within the range of 10 billion to 15 billion years.\nThe age of the Milky Way galaxy has been calculated in two ways. One involves studying the observed stages of evolution of different-sized stars in globular clusters. Globular clusters occur in a faint halo surrounding the center of the Galaxy, with each cluster containing from a hundred thousand to a million stars. The very low amounts of elements heavier than hydrogen and helium in these stars indicate that they must have formed early in the history of the Galaxy, before large amounts of heavy elements were created inside the initial generations of stars and later distributed into the interstellar medium through supernova explosions (the Big Bang itself created primarily hydrogen and helium atoms). Estimates of the ages of the stars in globular clusters fall within the range of 11 billion to 16 billion years.\nA second method for estimating the age of our galaxy is based on the present abundances of several long-lived radioactive elements in the solar system. Their abundances are set by their rates of production and distribution through exploding supernovas. According to these calculations, the age of our galaxy is between 9 billion and 16 billion years. Thus, both ways of estimating the age of the Milky Way galaxy agree with each other, and they also are consistent with the independently derived estimate for the age of the universe.\nRadioactive elements occurring naturally in rocks and minerals also provide a means of estimating the age of the solar system and Earth. Several of these elements decay with half lives between 700 million and more than 100 billion years (the half life of an element is the time it takes for half of the element to decay radioactively into another element). 
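The Hubble-expansion estimate mentioned above amounts to a unit conversion: the inverse of the Hubble constant is a characteristic age for the universe. A rough sketch; H0 = 70 km/s/Mpc is a representative value within the ranges quoted, not a figure from the source:

```python
# Rough "Hubble time" estimate of the universe's age: t ~ 1/H0.
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7  # approximate seconds in a year

def hubble_time_years(h0_km_s_mpc: float) -> float:
    """Characteristic expansion age 1/H0, converted to years."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC  # H0 in units of 1/s
    return 1.0 / h0_per_second / SECONDS_PER_YEAR

print(round(hubble_time_years(70.0) / 1e9, 1))  # ~14.0 billion years
```

The actual age in an expanding, decelerating-then-accelerating universe differs from 1/H0 by a model-dependent factor, which is why the quoted observational ranges span several billion years.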
Using these time-keepers, it is calculated that meteorites, which are fragments of asteroids, formed between 4.53 billion and 4.58 billion years ago (asteroids are small \"planetoids\" that revolve around the sun and are remnants of the solar nebula that gave rise to the sun and planets).", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "Human Anatomy, Physiology, and Medicine. Anything human!\nI heard that the blood in our body is new every 5 years for females and every 3 years for a man.\nI wanted to know if someone can tell me if it's true or not, and give me some more information about that process in our body.\nwhat do you mean by new blood? Exchanged cells (erythrocytes per 3 months) or exchanged volume or what?\nWith some probability, there could be some water molecule originating from your mother's body, which is still circulating in your body, just because it was \"lucky\"\nCis or trans? That's what matters.\nI mean that (from what i heard) in a mans body, every 3 years, all the blood in his body is new and fresh. for a woman every 5 years.\ni dont know anything about blood, biology etc etc. and i would love to get a basic explanation about it.\nIt's an average, someone trying to decide how quickly blood elements wear out and get replaced.\nIt's kind of like saying, in a school, the whole population is new every so-many years, trying to factor in entry and graduation and maybe employee turnover. It doesn't imply that on one day after a set time, everything gets replaced.\nCan you please point out stuff that can make this process happen faster or slower?\nLike having a serious injury lets say, or donating blood?\nCan stuff like height and weight change the time of the process?\nWould be happy to get more information!\nwhat I know is our body generates new blood cells in months (not years, as I remember).\nI guess this is what you mean.
And yes, like darby said, it's not a one-day process; every day some of your blood cells die and get degraded, and you get new blood cells as replacement. Our faeces and urine get their colour from blood cell degradation products. So if your faeces (at least) have a yellowish color, then blood cell turnover is still happening in your body.\nRed blood cells are made in your bone marrow, and production is regulated by hormones like erythropoietin from the kidney. Therefore, conditions affecting the kidney such as CRD can affect your blood cell production and thus produce the symptoms of anemia.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-3", "d_text": "Many experiments have confirmed that most forms of radioactive decay are independent of temperature, pressure, external environment, etc.\nIn other words, the half-life of carbon-14 is 5,730 years, and there is nothing you can do to change it. Given the impossibility of altering these half-lives in a laboratory, it made sense for scientists to assume that such half-lives have always been the same throughout earth history. But we now know that this is incorrect.\nIn fact, it is very wrong. More recently, scientists have been able to change the half-lives of some forms of radioactive decay in a laboratory by drastic amounts. However, by ionizing the Rhenium (removing all its electrons), scientists were able to reduce the half-life to only 33 years. In other words, the Rhenium decays over a billion times faster under such conditions. Thus, any age estimates based on Rhenium-Osmium decay may be vastly inflated.\nThe RATE research initiative found compelling evidence that other radioactive elements also had much shorter half-lives in the past. Several lines of evidence suggest this.\nBut for brevity and clarity, I will mention only one. This involves the decay of uranium into lead. Unlike the potassium-argon method, the uranium-lead decay is not a one-step process. Rather, it is a 14-step process.
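For scale on the rhenium example above: published measurements give neutral rhenium-187 a half-life of roughly 42 billion years, while fully ionized rhenium-187 was measured at about 33 years. The ratio of the two is what a "billion times faster" claim refers to; the variable names below are mine:

```python
# Half-lives as reported in published measurements (approximate figures):
NEUTRAL_RE187_HALF_LIFE_YEARS = 42e9   # neutral atom, ~42 billion years
IONIZED_RE187_HALF_LIFE_YEARS = 33.0   # fully ionized atom, ~33 years

# Decay rate scales inversely with half-life, so the speed-up factor is:
speedup = NEUTRAL_RE187_HALF_LIFE_YEARS / IONIZED_RE187_HALF_LIFE_YEARS
print(f"{speedup:.2e}")  # on the order of 1e9, i.e. over a billion times faster
```

Note the effect requires stripping all electrons, a condition found in stellar plasmas but not in ordinary rock, which is why conventional geochronology treats terrestrial half-lives as constant.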
Uranium decays into thorium, which is also radioactive and decays into protactinium, which decays into uranium, and so on, eventually resulting in lead, which is stable. Eight of these fourteen decays release an alpha-particle: The helium nucleus quickly attracts a couple of electrons from the environment to become a neutral helium atom.\nSo, for every one atom of uranium that converts into lead, eight helium atoms are produced. Helium gas is therefore a byproduct of uranium decay.\nThe Age of the Earth\nAnd since helium is a gas, it can leak through the rocks and will eventually escape into the atmosphere. The RATE scientists measured the rate at which helium escapes, and it is fairly high. Therefore, if the rocks were billions of years old, the helium would have had plenty of time to escape, and there would be very little helium left in the rocks. However, the RATE team found that rocks have a great deal of helium within them.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Chemistry can teach us about the composition of celestial bodies and determine their age.\nScientists, indeed, have found a way to find out which elements are parts of the celestial bodies, and by composition can also get to know their age.\nIt began with studying the light emitted by burning sodium.\nThe white-hot solid body gives a continuous seven-color spectrum when its rays are decomposed by a prism. If we pass these rays through sodium vapor, then the spectrum appears to be cut by several black lines.
The most noticeable, sodium-specific line cuts the yellow part of the spectrum just where the burning sodium, which casts rays of light through the spectroscope, would give a bright yellow line.\nThe flame of each element does not give a continuous spectrum, but a discontinuous one, consisting of separate colored bands.\nCarefully considering the spectrum of the Sun, stretched in length, they found in it dark lines (Fraunhofer lines), quite exactly coinciding with the lines peculiar to elements.\nWhat could this mean other than that there is sodium vapor in the solar sphere?\nAfter that, first in the spectrum of the Sun, and then other stars, lines were found that are similar to other terrestrial elements. But in their spectra there are also many such lines that do not correspond to the substances known to us. Some of the scientists even dubbed them in absentia and determined the approximate nature of these extraterrestrial elements by the location of the lines in the spectrum. Such are: helium, coronium, nebulium and others. The history of the discovery of helium serves as an excellent proof that such a definition of substances from us at dizzying distances of billions of billions of kilometers is not a simple fantasy, but the greatest achievement of the human mind. After it was found by the spectroscope on the Sun, it was found on the Earth. In other words, when a new element was found in terrestrial minerals, its spectral lines coincided exactly with the lines of helium. So this was helium!
It is found in the waters of some sources, for example, in the Caucasus and the Volga region, and in quantities of about 0.000001–0.000002 of the volume it is a part of the air.\nCelestial bodies are divided according to their spectra into those consisting of hydrogen and other gases – these are the youngest; then those containing vapors of metals – this group is older; and finally, the ones containing carbon are the oldest.", "score": 18.37085875486836, "rank": 74}, {"document_id": "doc-::chunk-1", "d_text": "To that end, \"every atom\" really means, \"a little more than half the atoms.\" Still poetic, but misleading.\nPosted by not technically correct on 2/21/2011 3:21:14 PM\nPeople sure enjoy bragging about how smart they are compared to others. That is why the concept of God is so unpopular.\nPosted by Defiant John on 2/22/2011 5:51:24 AM\n@not technically correct...so, what's the rest of the body made from? Elements exclusive to Earth?\nPosted by cassuduh on 2/23/2011 3:50:04 AM\nThere are no elements exclusive to Earth. Save for possibly some man-made ones. Everything we are comes from exploding stars; as they die they send out bursts and become more compact, causing the atoms to condense and changing what the bursts send out.\nPosted by Actually.. on 2/23/2011 9:57:58 AM\n@cassuduh, not technically correct...the body is 3/4 water. so that means its half hydrogen and 1/4 oxygen. so theres only 25% of the human body left for discussion of what its made of. so maybe we got stardust in us somewhere, but i doubt the human body is composed of 75%water and 25% \"stardust\" youre dumb\nPosted by jimmy on 2/24/2011 2:45:12 PM\nDefiant John...the concept of god is unpopular?? Really?!?! Something like 80% of the ENTIRE WORLD is religious and believes in god and that's unpopular? Man i'm sick of majorities trying to pretend they're oppressed minorities\nPosted by reality on 2/24/2011 6:11:47 PM\nI liked another way I heard it put.
You may be made of molecules of Elvis Presley's turds or dead dinosaur carcass or both.\nPosted by Guy on 2/24/2011 10:02:49 PM\nTo: not technically correct Stars actually go through fazes converting elements from one to another through fusion, so a H atom is converted into helium, and this continues to occur, becoming denser and denser material. This happens until they reach extremely dense elements such as zinc.", "score": 17.397046218763844, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "By Deborah Netburn, Los Angeles Times\nThe oldest known material on Earth is a tiny bit of zircon crystal that has remained intact for an incredible 4.4 billion years, a study confirms.\nThe ancient remnant of the early Earth may change the way we think about how our planet first formed.\nThe crystal is the size of a small grain of sand, just barely visible to the human eye. It was discovered on a remote sheep farm in western Australia, which happens to sit on one of the most stable parts of our planet.\n“The Earth’s tectonic processes are constantly destroying rocks,” said John Valley, a professor of geoscience at the University of Wisconsin-Madison, who discovered and dated the crystal. “This may be the one place where the oldest material has been preserved.”\nThe crystal is so much older than anyone expected that Valley and his team have had to date it twice. They published their first paper about this grain of zircon in 2001. At that time they determined it was 4.4 billion years old by measuring how many of the uranium atoms in the rock had decayed into lead.\nGeologists have used this technique, known as the uranium lead system, for decades to date rocks on Earth and from space, but because nobody had ever found anything on Earth that was this old, the initial findings were questioned. 
Maybe, some said, the lead atoms had moved around in the zircon, making it seem like there was more lead and giving the scientists an inaccurate date.\nOn Sunday, Valley and his colleagues published a paper in Nature Geoscience that proves the zircon is as old as they say. This time around, they used a new method called atom-probe tomography that let them see individual atoms of lead in the sample and see whether they had moved. They found that the lead atoms do indeed move around over time, but on such a small scale that the movement would not interfere with the overall dating process.\n“We have a zircon that is 4.4 billion years old,” Valley said.\nAnd now the fun begins, because this small piece of ancient rock has big implications for how and when the Earth’s crust started to form.\nLike the rest of the solar system, scientists say, Earth formed about 4.567 billion years ago.\nOne theory suggests that in the frenzy of those early days 4.5 billion to 4.4 billion years ago, Earth was struck by an object the size of Mars.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "Updated: Mar 3, 2019\nA paradox exists as we age. It is an obsession with aging, or stopping it from happening anyway. Even though it continues on, relentless in its progression. We prove this obsession over and over when we buy expensive creams and treatment lotions to smooth or hide our wrinkles, dye our hair or feel and look younger. Not you? Ever ask anyone how old they think you are? Let’s face it, we all want to look and feel young, but no matter what we do, every day in our body approximately 432 billion cells will die. You cannot stop it. It is relentless and absolute!\nAnd yet, a paradox to the paradox does in fact exist. You see, when we are young we will replace almost all of those cells with about 432 billion new cells. But our lifestyle can and does impact our health, even when we are young. 
With each turnover of the cell our DNA undergoes a systematic change. With each change that cell can replicate well or poorly. Each time that cell replicates, the DNA telomere, a tail-like structure, shortens. When the tail cannot shorten anymore, it dies for good, never to be replicated.\nThat 432 billion number is just a scientific guess of course. Cells are so small and there are so many in your body (somewhere between 60–100 trillion) that no one could ever actually count them individually. This is also why you don’t notice your body renewing itself all the time, but it is!\nGod gave us a miraculous body. The cells in our digestive system, from the stomach to the large bowel, are replaced every 5 minutes. The liver is replaced every five months, and our heart is replaced every six to nine months. You also produce a new and complete covering of skin every four weeks. Yes, the entire organ. Studies tell us that even brain cells can regenerate.\nDr. Kenneth Cooper once said “The average body was built to last 120 years, but what we do to it, how we treat it, actually determines how long it lasts”. The proof lies in the pudding, or in this case your refusal to eat that pudding! The underlying factor in longevity boils down to two simple truths.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-1", "d_text": "Up to about 100 years ago, it was thought that all atoms were stable like this.\nMany atoms come in different forms. For example, copper has two stable forms: copper-63 (making up about 70 percent of all natural copper) and copper-65 (making up about 30 percent). The two forms are called isotopes. Atoms of both isotopes of copper have 29 protons, but a copper-63 atom has 34 neutrons while a copper-65 atom has 36 neutrons. Both isotopes act and look the same, and both are stable.\nThe part that was not understood until about 100 years ago is that certain elements have isotopes that are radioactive.
In some elements, all of the isotopes are radioactive. Hydrogen is a good example of an element with multiple isotopes, one of which is radioactive. Normal hydrogen, or hydrogen-1, has one proton and no neutrons (because there is only one proton in the nucleus, there is no need for the binding effects of neutrons). There is another isotope, hydrogen-2 (also known as deuterium), that has one proton and one neutron. Deuterium is very rare in nature (making up about 0.015 percent of all hydrogen), and although it acts like hydrogen-1 (for example, you can make water out of it) it turns out it is different enough from hydrogen-1 in that it is toxic in high concentrations. The deuterium isotope of hydrogen is stable. A third isotope, hydrogen-3 (also known as tritium), has one proton and two neutrons. It turns out this isotope is unstable. That is, if you have a container full of tritium and come back in a million years, you will find that it has all turned into helium-3 (two protons, one neutron), which is stable. The process by which it turns into helium is called radioactive decay.\nCertain elements are naturally radioactive in all of their isotopes. Uranium is the best example of such an element and is the heaviest naturally occurring radioactive element. There are eight other naturally radioactive elements: polonium, astatine, radon, francium, radium, actinium, thorium and protactinium. All other man-made elements heavier than uranium are radioactive as well.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-1", "d_text": "Also, because Earth formed as part of our sun’s family of planets – our solar system – scientists use radiometric dating to determine the ages of extraterrestrial objects, such as meteorites. These are space rocks that once orbited our sun, but later entered Earth’s atmosphere and struck our world’s surface. 
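The decay described in the passage above (a container of tritium eventually turning entirely into helium-3) follows a simple halving law. A minimal sketch; the 12.3-year half-life for tritium is a commonly quoted value supplied here for illustration, not stated in the text:

```python
# Fraction of a radioactive isotope remaining after a given time, using
# the halving rule: each elapsed half-life leaves half of what was there.
# The 12.3-year tritium half-life is a commonly quoted value, supplied
# here as an assumption (the text does not give it).

def remaining_fraction(t_years: float, half_life_years: float) -> float:
    """Fraction of the original atoms still undecayed after t_years."""
    return 0.5 ** (t_years / half_life_years)

TRITIUM_HALF_LIFE = 12.3  # years (assumed, commonly quoted value)

# After one half-life, half the tritium has decayed to helium-3.
print(remaining_fraction(12.3, TRITIUM_HALF_LIFE))  # 0.5
# After the text's "million years", effectively none remains.
print(remaining_fraction(1_000_000, TRITIUM_HALF_LIFE))
```

After a million years (over 80,000 half-lives) the remaining fraction is far below anything a float can even represent, matching the text's claim that the whole container will have turned into helium-3.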
Likewise, scientists use radiometric dating to determine the ages of moon rocks, obtained by astronauts.\nTaken together, these methods give results that suggest an age for our Earth, meteorites, the moon – and by inference our entire solar system – of 4.5 to 4.6 billion years old.", "score": 16.666517760972233, "rank": 79}, {"document_id": "doc-::chunk-8", "d_text": "It can be easy to determine the thermodynamic stability of materials, but it is notoriously difficult to predict the rate at which an unstable material actually will decay into something else, or even if it will decay at all. All forms of carbon other than carbon dioxide are thermodynamically unstable in the earth’s oxygen rich atmosphere, yet we live in a world full of carbon-based paper, plastic, tables, clothes, and carpets; and have adopted one of the most thermodynamically unstable forms of carbon, the diamond, as a symbol of permanence. Many familiar minerals, including pyrite, feldspar, and quartz, are unstable on or near the earth’s surface. Yet we do not marvel at the discovery of intact grains of quartz in half-billion year old sandstone.\nHuman versus Chemical Time\nThe crux of the creationist argument that the MOR T rex could not be more than a few thousand years old is the commonsense idea that the older the fossil, the more altered it will be. This also is part of the Tin Man story. But the relationship between age and alteration is not as straightforward as common sense would suggest, because humans experience time differently than molecules and atoms.\nThe various processes that cause decay tend to work on very short time scales. As humans, we would regard a chemical compound that completely degrades after one minute as extremely unstable, but from a molecule’s point of view a minute is a very long time.
A molecule that has survived for a minute has beat the odds; it has survived trillions of bond-straining vibrations and contortions, and assaults from an army of chemical agents that destroy most molecules almost the instant they form.\nRadioactivity provides us with a well-studied example of how decay processes work. Atomic nuclei contain protons and neutrons. In theory protons and neutrons could be combined in an infinite number of ways. For example, we could combine one proton with 100 neutrons and make a nucleus of hydrogen-101. But this nucleus would be so unstable that it would break apart the instant that it formed. Almost all conceivable combinations of protons and neutrons are so unstable that for all practical purposes they cannot exist.\nThere are about 4800 exceptions, nuclides that are stable enough to be studied. About 400 of these nuclides are so stable that they are called “stable nuclides”: they either do not decay, or decay so slowly that we have not observed it.", "score": 15.758340881307905, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "Who was it that first said that people are stardust?\nSome people, of a certain age, might say Joni Mitchell, who sang, “We are stardust, we are golden, and we’ve got to get ourselves back to the gar-ar-den,” in her paean to the Woodstock festival. Others will say Carl Sagan, the author and host of “Cosmos.”\nIn fact, the answer goes back before those acolytes of beauty and consciousness were born. In 1929, the Harvard astronomer Harlow Shapley declared, “We organic beings who call ourselves humans are made of the same stuff as the stars” — a remarkable observation, considering that at the time nobody even knew what made the stars shine.\nIt would be 30 years before Geoffrey and Margaret Burbidge, William Fowler and Fred Hoyle showed in a classic paper that the atoms that compose us are not only the same as the ones in stars — most of them were actually manufactured in stars. 
Starting from primordial hydrogen and helium, denser elements like iron, oxygen, carbon and nitrogen were built up in a series of thermonuclear reactions and then spewed into space when these stars died and exploded as supernovas in a final thermonuclear frenzy.\nAny gardener knows that ashes make good fertilizer. Our atoms were once in stars.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-5", "d_text": "For this reason, there has always been concern about their effect on astronauts. Fortunately, for life on earth there are two protective barriers that prevent most of these harmful rays from reaching the earth's surface. The first is the earth's magnetic field, which extends into space and acts as a shield guiding any cosmic particles encountered towards the north and south poles. These potentially lethal areas are, in any case, inhospitable to life.\nThe second line of defense is the earth's atmosphere, filled with gaseous atoms, more than 70 percent of which are nitrogen, the remainder being mostly oxygen. A small residual percentage consists of helium and argon atoms, some molecules of water, carbon dioxide, ozone and, more recently, molecules that cause the acid rain problems. Bearing in mind that atoms and atom combinations (molecules) are mostly empty space, those high-speed cosmic particles that get past the magnetic barrier tend to pass right through many of the atmospheric atoms. When they eventually hit the nucleus of a gaseous atom, they release a neutron; the atom then becomes ionized. It is mostly nitrogen atoms that then capture the free neutrons, with the result that these stable nitrogen 14 atoms become unstable carbon 14 atoms. The number refers to the atomic weight or mass. 
Once formed, the radioactive C14 atoms begin to decay by emitting beta particles (electrons) and revert back to stable nitrogen 14 atoms once more.\nThe C14 atoms are comparatively rare since for every one of these there are 765 billion normal, stable C12 atoms. One of the important assumptions made is that this ratio of C14 to C12 (which has been determined in recent years) has been constant for at least the past fifty thousand years. This assumption is, in turn, based on the uniformitarian assumption that C14 production and decay has been going on for millions of years and long ago reached equilibrium; it is further assumed that perfect atmospheric mixing has been achieved, so that the ratio of C14 to C12 is the same everywhere. Immediately after formation in the atmosphere, the C14 atom is joined by two oxygen atoms to become a molecule of carbon dioxide, and together with all the other carbon-dioxide molecules containing the stable C12 atoms, becomes part of the great carbon cycle of life.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-9", "d_text": "Matter was just a bit of sediment in a furious sea of photons, and the Radiation Era had begun.\nTHE RADIATION ERA: 10 SECONDS TO 10,000 YEARS\nNUCLEAR SYNTHESIS: 3 TO 5 MINUTES\nThe temperature of the photons continued to fall, and after a couple of minutes they no longer knocked protons and neutrons away from each other. These began to stick together because of the strong force (the remainder of the color force which had earlier bound the quarks). At first, single protons and neutrons came together (1), forming nuclei of the isotope of hydrogen called deuterium. But deuterium is not a very stable nucleus, and these were generally torn apart by the photons as soon as they formed.\nIn the meantime, the minutes were ticking away, and the free neutrons were starting to feel their age. 
The half-life of neutrons is a little over ten minutes, so after about three minutes a fraction of the neutrons had decayed into protons and electrons. At this point the balance between neutrons and protons was about 12% neutrons and 88% protons. The temperature cooled enough for the deuterium to stay together, and it rapidly started fusing (in a couple of different reactions) into a more stable nucleus - that of helium; with two neutrons and two protons. The neutrons were stabilized, safely tucked away in helium nuclei, by the time the universe was about five minutes old. Of every hundred nucleons, 12 neutrons combined with 12 of the 88 protons, making 6 helium nuclei and 76 single protons, which would become hydrogen nuclei. Thus, by weight, the atoms eventually formed in the Big Bang were roughly 24% helium, 76% hydrogen, along with trace amounts of deuterium, helium-3 (two protons and one neutron), and lithium-7 (three protons and four neutrons). This ratio would hold for hundreds of millions of years, until the first stars began making heavier elements.\nTHE MATTER ERA: 10,000 YEARS TO THE PRESENT\nTHE THINNING OF THE PHOTONS: 10,000 YEARS\nAfter nuclei formed in the first five minutes, the timescales involved jump from short to quite long (by human standards).", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "How Old Is the Universe?\nThe exact age of our universe is one of the biggest mysteries—if not THE biggest—that we can imagine. The question has occupied philosophers and scientists for millennia, but only in the last handful of centuries have we had the tools to hunt for the answer.\nHow Old Is the Universe? by David A. 
Weintraub, a professor of astronomy at Vanderbilt University, presents a bracing, detailed, and surprisingly math-free look at how astronomers have managed to figure out that the universe is around 13.7 billion years old.\nNot surprisingly, the journey to that answer parallels the history of modern astronomy itself. How do we determine the ages of things? It helps to start with those closest to us—our own planet, our moon, our solar system—and work our way outward from there.\nBased on radioisotope dating of rocks (from Earth, the moon, and meteorites), we’ve learned the Earth is around 4.5 billion years old. That jibes with the apparent age of our Sun, based on what we know about nuclear fusion, the reaction that powers stars and creates heavier elements out of hydrogen.\nFrom there, we can start looking at other stars. Mr. Weintraub leads readers into a look at how astronomers use the light from stars to figure out how far away they are—which affects how bright they appear—and from that, their age and stellar life cycles in general. White dwarf stars, the coolest and faintest stars we can see, are some of the oldest objects in space. Astronomers clock them at around 12.5–14 billion years old. But in the early 20th century, a surprising discovery changed even that yardstick.\nAstronomer Edwin Hubble published data that showed the universe was expanding; not only that, but the fuzzy stars scientists had been calling “nebulae” were actually agglomerations of billions of stars themselves: galaxies.
Unsurprisingly, this raised even more questions about the universe’s age and its origins.\nFrom measuring distances to various galaxies, astronomers have come up with an age of 13.7 billion years—but the more recent discovery of the existence of mysterious dark matter and dark energy have raised even more questions.", "score": 14.73757419926546, "rank": 84}, {"document_id": "doc-::chunk-9", "d_text": "The remaining 4400 nuclides are known to decay, with half-lives ranging from a few millionths of a second to over one trillion years.\nAmong these unstable nuclides, the median half-life is about two minutes. This means that if you randomly assembled nuclei and measured the half lives of those that were stable enough to hold together for a millionth of a second or so, the average half life would be about two minutes. From a human point of view, two minutes is a very short time. But in the first two minutes of its existence, nature has expended half of its destructive arsenal at any randomly constructed nucleus; such a nucleus will experience the same total intensity of destructive forces during its first two minutes that it will experience during the next trillion years. In terms of the likelihood of decay, two minutes is half way to a trillion years. About 97% of unstable nuclides have half-lives shorter than 75 years. So, from a nuclide’s point of view, a human lifespan and the age of the universe are about the same.\nThe same is true of the molecules and crystals that make up organic remains. When thinking of how a dead plant or animal decays, we tend to concentrate on processes that occur on time scales that are easy for humans to observe, and then extrapolate these into the future. But humans observe only the very early stages of decay, a period corresponding to the first few minutes in the life of a nuclide. Even so, we observe the same steep decline in the rate of decay that nuclides display. 
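The claim above that "two minutes is half way to a trillion years" can be restated with the survival probability implied by the two-minute median half-life quoted in the text. A small sketch; because decay is memoryless, every additional half-life costs the same factor of one half:

```python
# Survival probability of a nucleus with the two-minute median half-life
# quoted in the text. Decay is memoryless: having survived two minutes,
# a nucleus faces the same 50/50 odds over the next two minutes.

def survival_probability(t_min: float, half_life_min: float = 2.0) -> float:
    """Chance that a single nucleus is still intact after t_min minutes."""
    return 0.5 ** (t_min / half_life_min)

p2 = survival_probability(2.0)  # 0.5 -- odds of lasting the first 2 minutes
p4 = survival_probability(4.0)  # 0.25
# Conditional odds of lasting 2 more minutes, given 2 minutes survived:
print(p4 / p2)  # 0.5 -- the same as the first two minutes
```

This is the sense in which a nucleus that has lasted two minutes has already "beat the odds": each further half-life, whether the next two minutes or a stretch of a trillion years' worth of half-lives, only ever halves the survivors again.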
A raccoon that dies in your attic will decompose rapidly for a month or so, but thereafter will change little for many years. Unless someone moves it, the coyote skull on my shelf will still be there tomorrow, 20 years from now, and 1000 years from now.\nFrom the point of view of a fossil, 1000 years probably is a lot closer to 100 million years than it is to a month. If the preservational state of a fossil correlates in any law-like way with its age, it most likely is with the nth root of its age, and not its age directly.\nConclusionAnyone who believes that fossils must undergo radical transformations in substance that are proportional to their age will always be confounded by discoveries such as those reported by Schweitzer and others (2005).", "score": 13.897358463981183, "rank": 85}, {"document_id": "doc-::chunk-1", "d_text": "For example:\n- All hydrogen atoms have one proton\n- All helium atoms have 2 protons\n- All carbon atoms have 6 protons\n- All nitrogen atoms have 7 protons\n- All uranium atoms have 92 protons\nand so on.\nHowever, the number of neutrons in an atom can vary. For example, most of the\nhydrogen we come across, such as in air and water, is composed of nothing but a\nsingle proton at its nucleus, together with a single orbiting electron. However, some hydrogen atoms (about 0.015%) are composed of a\nsingle proton together with a single neutron. This is called deuterium. However,\nit is still\nhydrogen because it only has one proton. As noted above both the proton and neutron\nare located at the centre of the atom, called the nucleus. There is yet another\nform of hydrogen called tritium, of which only very tiny amounts exist in air\nand water. Tritium is composed of one proton and two neutrons, but, again, is still\nhydrogen because it only has a single proton. Deuterium and tritium are both\n\"isotopes\" of hydrogen, i.e. the same element but with a different\nnumber of neutrons. 
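The rule running through the list above — the proton count fixes the element, while the neutron count picks the isotope — can be written down directly. A minimal sketch using the three hydrogen isotopes named in the text:

```python
# (protons, neutrons) for the three hydrogen isotopes named in the text.
# The proton count identifies the element; only the neutron count varies.

ISOTOPES = {
    "hydrogen-1": (1, 0),  # ordinary hydrogen
    "deuterium":  (1, 1),  # hydrogen-2
    "tritium":    (1, 2),  # hydrogen-3
}

def mass_number(protons: int, neutrons: int) -> int:
    """Total particle count in the nucleus (the '3' in hydrogen-3)."""
    return protons + neutrons

# All three are hydrogen because each has exactly one proton.
assert all(protons == 1 for protons, _ in ISOTOPES.values())
print(mass_number(*ISOTOPES["tritium"]))  # 3
```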
Another way of saying deuterium is to say \"hydrogen-2\", and tritium is \"hydrogen-3\", the number giving the total number of particles in the nucleus.\nA well-known isotope is carbon-14. All carbon has 6 protons and most of it exists in the form of carbon-12; that is, 6 protons and 6 neutrons. A small percentage of any carbon in anything living is of the isotope carbon-14. When the living thing dies the carbon-14 starts to decay (in a way similar to beta decay as explained later in this page). By measuring how much carbon-14 is in the object and comparing it to what we would find in a living example of the same type we can work out how old the object is. This is known as carbon (or radiocarbon) dating.\nLastly, before moving on to discuss decays of various types, we need a quick way of showing which element and isotope we are talking about.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "Matt Strassler [December 14, 2012]\nNow here’s a remarkable fact, with enormous implications for biology. Take any isotope of any chemical element with atomic number Z. If you take a collection of atoms that are from that isotope — a bunch of atoms that all have Z electrons, Z protons, and N neutrons — you will discover they are literally identical. [A bit more precisely: they are identical when, after being left alone for a brief moment, each atom settles down into its preferred configuration, called the ``ground state.''] You cannot tell two such atoms apart. They all have exactly the same mass, the same chemical properties, the same behavior in the presence of electric and magnetic fields; they emit and absorb exactly the same wavelengths of light waves. This is a consequence of the identity of their electrons, of their protons and of their neutrons, which will be discussed later.\nThat all atoms of the same isotope are identical, and that different isotopes of the same element have nearly identical chemistry, is a profound fact of nature!
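The carbon-dating procedure described earlier on this page — measure the carbon-14 level in an object and compare it with a living example of the same type — reduces to counting half-lives. A sketch; the 5,730-year half-life of carbon-14 is the commonly quoted value, supplied here rather than taken from the text:

```python
# Radiocarbon age from the ratio of C-14 in an object vs. a living sample.
# The 5,730-year C-14 half-life is a commonly quoted value; it is an
# assumption here, since the text does not state it.
import math

C14_HALF_LIFE_YEARS = 5730.0  # assumed, commonly quoted value

def radiocarbon_age(measured_level: float, living_level: float) -> float:
    """Years since death: half-lives elapsed times years per half-life."""
    half_lives = math.log2(living_level / measured_level)
    return C14_HALF_LIFE_YEARS * half_lives

# An object with a quarter of the living C-14 level is two half-lives old.
print(radiocarbon_age(0.25, 1.0))  # 11460.0
```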
Among other things, it explains how our bodies can breathe oxygen and drink water and process salt and sugar without having to select which oxygen or water or salt or sugar molecules to consume. Contrast this with what a construction company has to do when building a house out of bricks, or out of concrete blocks. Bricks and concrete blocks vary, and are sometimes defective, and so a builder must exercise quality control, to make sure that cracked or over-sized or misshapen bricks and blocks aren’t used in the walls of the house. No such quality control is generally needed for our bodies when we breathe; any oxygen atom will do as well as any other, because we only need the oxygen to make molecules inside our bodies, and chemically all oxygen atoms are essentially the same. (This is all the more true since, for most elements, one isotope is much more common than the rest; for example, most hydrogen atoms [one electron and one proton] have no neutrons, and most oxygen atoms [eight electrons and eight protons] have eight neutrons.)", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-2", "d_text": "I believe our future depends on how well we know this Cosmos in which we float like a mote of dust in the morning sky.\nIn the image, green corresponds to hydrogen, blue to oxygen, and red to sulfur—three of the 92 naturally occurring elements that space has bequeathed to us. Remember most of the massive elements are formed in stars. Nature is not outside us. Pathologists are the medical specialists who diagnose diseases and their causes. Christmas is coming, and since you are a dear friend, I can send you palm fronds for clothing and perhaps some bamboo to construct a coffee machine. It is an amazing story, isn't it? 
The muscle cells of the heart, an organ we consider to be very permanent, typically continue to function for more than a decade.\nThe main-sequence star probably is a pulsating variable star and therefore appears to be less massive than it really is. Other heavy elements are present in smaller quantities in the body, but are nonetheless just as vital to proper functioning. As we learn in high school chemistry—and can remind ourselves with a quick glance at the periodic table—hydrogen, the lightest element, has one proton in its nucleus and thus is given the atomic number 1. We get an animated vignette with the story of Giordano Bruno, a Dominican friar who had visions of an infinite universe but was tortured and killed during the inquisitions of the 1500s. Relatively young stars like our Sun convert hydrogen to produce helium, just like the first stars of our universe. This update comes at a perfect time when we need to be reminded of the wonders of the cosmos and inspired to continue exploring our place on the earth and in the universe. Stars are like nuclear reactors.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "As the Universe expanded and cooled, protons and electrons began to combine into hydrogen atoms. As a result, space became increasingly transparent. The CMB seen today is light that was released when most of the hydrogen atoms formed and that has been traveling through space for billions of years. By analyzing variations in the intensity of the CMB radiation, astronomers calculate the age of the Universe at 13.72 billion years.\nLooking back in time. The deepest image of the Universe recorded by the Hubble Space Telescope shows the faintest objects. Because they are the most distant objects, the image is the equivalent of using a time machine to view the formation of the oldest galaxies.
They may have formed fewer than one billion years after the Universe's birth in what cosmologists call the Big Bang.\nSource: How Old Is the Universe? by Vanderbilt University astronomy professor David A. Weintraub, published 2010 by Princeton University Press, ISBN: 9780691147314", "score": 12.364879196879162, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "\"The atoms come into my brain, dance a dance, and then go out - there are always new atoms, but always doing the same dance, remembering what the dance was yesterday.\" -Richard Feynman\nHere you are, a human being, a grand Universe of atoms that have organized themselves into simple monomers, assembled together into giant macromolecules, which in turn comprise the organelles that make up your cells. And here you are, a collection of around 75 trillion specialized cells, organized in such a way as to make up you.\nBut at your core, you are still just atoms. A mind-bogglingly large number of atoms -- some 10^28 of them -- but atoms nonetheless.\nThose two things -- you and an atom -- may seem so different in scale and size that it's hard to wrap your head around. Here's a fun way to think about atoms: if you broke down a human being into all the atoms that make you up, there are about as many atoms that make up you (~10^28) as there are \"a-human's-worth-of-atoms\" to make up the entire Solar System!\nAll the matter in the Solar System, all summed together, contains about 10^57 atoms, or 10^29 human-beings-worth of atoms. So an atom, compared to you, is as tiny as you are in comparison to the entire Solar System, combined.\nBut that's just for perspective. The 10^28 atoms that are existing-as-you-right-now each have their own story stretching back to the very birth of the Universe.
Each one has its own story, and so today I bring you the story of just one atom in the Universe.\nThere was a time in the distant past -- some 13.7 billion years ago -- when there were no atoms. Yes, the energy was all there, but it was far too hot and too dense to have even a single atom. Imagine all the matter in the entire Universe, some 10^91 particles, in a volume of space about equal to that of a single, giant star.\nThe whole Universe, compressed into a volume of space that one large star takes up.\nYes, back then it was too hot to have any atoms at all. But the Universe didn't stay that way for long: it may have been incredibly hot and dense, but it was expanding and cooling incredibly rapidly back then.", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-4", "d_text": "[Chart: Apparent Old Ages due to Carbon-14 build-up within the last 10,000 years — actual ages of up to 10,000 years plotted against apparent ages of up to 50,000 years, assuming Carbon-14 built up from zero.] Remember the curve follows the same shape regardless of where you start.\nComposition of Atmosphere: Nitrogen 78%, Oxygen 21%, Argon 0.34%, Carbon Dioxide 0.035%. About 1 in a trillion molecules of Carbon Dioxide contain Carbon-14. That’s 1 : 1,000,000,000,000. Therefore about one out of every trillion molecules of CO2 which plants take in to build sugars and structural components contains C-14. Therefore about one out of every trillion molecules of carbon in the plants that animals eat is C-14.
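The one-in-a-trillion figure quoted above still leaves a very large number of C-14 atoms in any macroscopic sample, which is what makes the measurement practical. A quick check; Avogadro's number and carbon's ~12 g/mol molar mass are standard values, not taken from the text:

```python
# How many C-14 atoms sit in one gram of carbon, given the text's
# 1 : 1,000,000,000,000 ratio of C-14 to ordinary carbon.
# Avogadro's number and the ~12 g/mol molar mass are standard values.

AVOGADRO = 6.022e23      # atoms per mole (standard value)
MOLAR_MASS_G = 12.0      # grams of carbon per mole (standard value)
RATIO = 1e12             # carbon atoms per C-14 atom (from the text)

carbon_atoms_per_gram = AVOGADRO / MOLAR_MASS_G
c14_atoms_per_gram = carbon_atoms_per_gram / RATIO

print(f"{c14_atoms_per_gram:.2e}")  # ~5.02e10: tens of billions per gram
```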
Therefore about one out of every trillion molecules of carbon in animals is C-14.\nWhat if there was more CO2 in the ancient atmosphere to mix the Carbon-14 with? With ancient CO2 levels of 0.06% to 0.3% instead of today's 0.03%, the C-12 to C-14 ratio would have been 1.5 to 10 trillion : 1, and a 10,000 year old sample could look 12,500 years old, 15,000 years old, 21,000 years old, 23,000 years old, or 29,000 years old. [Chart: CO2 levels of roughly 180–380 ppmv over the last 10,000 years.] Most scientists agree; CO2 levels have changed over time.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "Most potassium atoms on earth are potassium-39 because they have 20 neutrons. Potassium-39 and potassium-40 are isotopes — elements with the same number of protons in the nucleus, but different numbers of neutrons. Potassium-39 is stable, meaning it is not radioactive and will remain potassium-39 indefinitely. Potassium-40, however, is unstable and eventually converts to argon-40. No external force is necessary. The conversion happens naturally over time.\nThe time at which a given potassium-40 atom converts to an argon-40 atom cannot be predicted in advance. It is apparently random. However, when a sufficiently large number of potassium-40 atoms is counted, the rate at which they convert to argon is very consistent. Think of it like popcorn in the microwave. You cannot predict when a given kernel will pop, or which kernels will pop before other kernels.\nBut the rate of a large group of them is such that after 1.25 billion years, half of the potassium-40 atoms will have converted. This number has been extrapolated from the much smaller fraction that converts in observed time frames.\nDifferent radioactive elements have different half-lives. The potassium-40 half-life is 1.25 billion years. But the half-life for uranium-238 is about 4.5 billion years. The carbon-14 half-life is only 5,730 years. Cesium has a half-life of 30 years, and oxygen-15 has a half-life of only about two minutes. The answer has to do with the exponential nature of radioactive decay.
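The halving behavior that the potassium passage builds toward can be used directly as a clock. A sketch of the potassium-argon idea; the 1.25-billion-year half-life for potassium-40 is a commonly quoted value supplied here, and the sketch adopts the same simplifying assumption as the text's hypothetical, that every decayed atom ended up as argon-40:

```python
# Potassium-argon age: count half-lives from the surviving K-40 fraction.
# Assumes, as the text's hypothetical does, that all Ar-40 in the rock
# came from K-40 decay. The 1.25-billion-year half-life is a commonly
# quoted value supplied here, not taken from the text.
import math

K40_HALF_LIFE_YEARS = 1.25e9  # assumed, commonly quoted value

def potassium_argon_age(k40_amount: float, ar40_amount: float) -> float:
    """Age of a rock from the amounts of K-40 and Ar-40 it contains now."""
    original_k40 = k40_amount + ar40_amount  # all argon was once potassium
    half_lives = math.log2(original_k40 / k40_amount)
    return half_lives * K40_HALF_LIFE_YEARS

# Equal amounts of K-40 and Ar-40 mean exactly one half-life has passed.
print(potassium_argon_age(1.0, 1.0))  # 1250000000.0
```

If the "all argon came from decay" assumption fails (for example, trapped argon at formation), the inferred age is too old, which is exactly the question the passage goes on to raise.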
The rate at which a radioactive substance decays, in terms of the number of atoms per second that decay, is proportional to the amount of substance.\nSo after one half-life, half of the substance will remain. After another half-life, one fourth of the original substance will remain.\nAnother half-life reduces the amount to one-eighth, then one-sixteenth and so on.\nThe substance never quite vanishes completely, until we get down to one atom, which decays after a random time. Since the rate at which various radioactive substances decay has been measured and is well known for many substances, it is possible to use the amounts of these substances as a clock for the age of a volcanic rock.\nSo, if you happened to find a rock with 1 microgram of potassium-40 and a small amount of argon-40, would you conclude that the rock is 1.25 billion years old? If so, what assumptions have you made? In the previous hypothetical example, one assumption is that all the argon was produced from the radioactive decay of potassium-40. But is this really known?", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "In 1953, the man known during his lifetime only as H.M. underwent neurosurgery to cure his severe epilepsy. The operation inadvertently destroyed most of his hippocampus, a brain structure nobody realized at the time was crucial to the formation of memories. From the age of 27 until his death in 2008, at 82, Henry Molaison—whose name was eventually revealed—could essentially learn nothing new at all.\nNo responsible person would perform that sort of surgery today, just as no responsible nation would conduct an open-air nuclear test. But the two situations, it turns out, are weirdly related. A commentary in the latest Science, based on a recent study in Cell, explains how nuclear bomb tests provide clear evidence that the hippocampus constantly generates new neurons throughout life.
“At the behavioral level,” writes author Gerd Kempermann, of the Technical University of Dresden, Germany, “adult neurogenesis adds a particular type of cognitive flexibility to the hippocampus.”\nExactly what form that flexibility takes is unclear, but the fact that it was uncovered by nuclear testing is intriguing enough. It turns out — unsurprisingly — that the growth of new cells, whether in the brain or anyone else, requires carbon: the element is a fundamental building block of the proteins that make up DNA. That carbon comes from the food we eat, and the carbon in food comes, ultimately, from the atmosphere: it’s taken in by plants as they grow, and we either consume the plants themselves or the animals that eat them.\nDuring the era of widespread open-air nuclear tests, from 1945 to 1963, levels of radioactive carbon-14 in the atmosphere, which are always present at low levels, increased significantly. That carbon worked its way into plants and up the food chain, and ultimately (and harmlessly) into our bodies.\nBut since the carbon is radioactive, it decays at a well-known rate. That’s how scientists date organic matter — ancient wood or ash, for example — from archeological sites. So if new brain cells were forming during the era of open-air nukes, you should be able to see the spike of C14, suitably decayed for the moment the cells were born, in their DNA. And if the C14 level corresponds to a time when the person was an adult, you know that cells are regenerating and you can even determine how prodigiously.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "How old are stars?\nStars are categorised into three groups:-\n- Population III – The oldest\n- Population II\n- Population I – The youngest (Our Sun)\nPopulation III are the oldest stars that are thought to have burned close to the big bang at around 400 Million years afterwards. 
In the time from recombination (~380,000 years after the Big Bang) until ~400 million years after the Big Bang there were no stars, but lots of gas. This gas was made up mostly of hydrogen and helium with a little (teeny tiny) bit of lithium and beryllium. Astronomers call this “metal poor.” After recombination this gas started to collapse and form clumps which eventually grew big enough to start nuclear fusion. These stars would have been absolutely massive, with calculations estimating that they would have been 30 – 1000 times the mass of our Sun.\nStars this size would have been very bright, with a 1000 solar mass star being millions of times brighter than the Sun. They would also have burned all their fuel very quickly, resulting in a huge explosion that would scatter the stellar material back into the galactic void.\nWhat about population II stars?\nPopulation II stars coalesced from the remains of population III stars and also contain more elements, or “metals”, which results in them being smaller, less luminous and longer lived. Because they are smaller they will also produce more elements through nuclear fusion, such as carbon, oxygen, etc.\nCan we see population II stars today?\nYes, but only the smallest of them. They are found in globular clusters and elliptical galaxies. The larger population II stars will have exhausted their fuel through nuclear fusion far quicker than smaller population II stars and exploded in huge supernovae, sending the stellar material out into the cosmic void.\nOkay, so what about population I stars?\nPopulation I stars are the youngest stars to form and will have formed from the planetary nebula left over from larger population II stars which have burnt out and exploded. They are generally smaller again than population II stars but contain higher concentrations of “metals”, elements heavier than helium.
Our Sun is a population I star and, as it contains higher concentrations of metals, it formed from the planetary nebula left over from a population II star. This also tells us that Earth, the planets and almost everything else in the solar system formed from the same nebula.\nWow, we came from the stars?", "score": 8.750170851034381, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "Odds are, the average person knows they have a lymphatic system, and that’s about it. You can’t blame them too much; the lymphatic system is complicated. It runs throughout the body, working side-by-side with the circulatory system, spanning various nodes, organs, and vessels in the body. If your lymphatic system isn’t working properly, it is unable to drain excess toxins and fluids from the body, causing problems such as swollen limbs, tonsillitis, lymphatic cancer, and other conditions. A working lymphatic system balances the body’s fluids, absorbs fat into your system, and helps your body’s immunological defense. Obviously, it is important to keep it in tip-top shape.\nAbout 72% of the human body is H2O (liquid water). Every ~16 days nearly 100% of the water is exchanged in a healthy body. Heavier elements like carbon, sodium and potassium take up residence far longer, perhaps 8 to 11 months. For example, the calcium and phosphorus in bones are replaced in a dynamic crystal growth / dissolving process that will ultimately replace all the bones in your body.\nOther larger organs’ atomic replacement can be estimated:\n• The lining of the stomach and intestine: every 4 days\n• The gums: every 2 weeks\n• The skin: every 4 weeks\n• The liver: every 6 weeks\n• The lining of blood vessels: every 6 months\n• The heart: every 6 months\n• The surface cells of digestion, the top layer of cells in the digestive tract from our mouth through our large bowel: every 5 minutes\nThis data was first pointed out by Dr. Paul C.
Aebersold in 1953 in a landmark paper he presented to the Smithsonian Institution, “Radioisotopes – New keys to knowledge”.\nIn about a year every atom in your body would have been exchanged. Not a single atom in your body resides there forever, and there is a 100% chance that thousands of other humans through history held some of the same atoms that you currently hold in your body.\nTo be sure, this data is confounding and seems to defy logic. However, the radioisotope data going back as far as 1953 is quite conclusive. The data is not based on urban legend or assumptions, yet it certainly seems that way.", "score": 8.086131989696522, "rank": 95}, {"document_id": "doc-::chunk-5", "d_text": "About the Author\nRukshaan Selvendira is a year 12 pupil with a passion for biology and the concept of life, how it arose, how it maintains itself, and how it will progress in the future. He is interested in writing articles, writing music and has an appreciation of music and dance.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-1", "d_text": "Estimates vary from 0.5% per year (BNID 107076) to as high as 30% per year (BNID 107078) depending on age and gender (BNID 107077). A debate is currently taking place over the very different rates observed, but it is clear that this peculiar scientific side-effect of Cold War tensions is providing a fascinating window onto the interesting question of the life history of cells making up multicellular organisms.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-2", "d_text": "What makes them so striking is their hodgepodge of physical characteristics; for instance, Homo naledi has a more human-like collarbone (clavicle), legs, ankles, and feet, while sharing some features of the hands and pelvis with earlier species like Australopithecus afarensis.
But on May 9, the research team announced two important new findings in the journal eLife.\nThe second is that a combination of six different dating techniques — including radiometrically dating flowstones in the cave that covered some of the Homo naledi remains, as well as directly dating a few of their teeth — yielded a surprising result. Dated at between and years old, Homo naledi would have been sharing the planet with Homo erectus and Homo heidelbergensis, and even our own species, Homo sapiens. Researchers are still puzzled about how 19 individuals made their way into a complicated, dark, and treacherous cave system… while no other animal did. The Homo naledi individuals are the only fossils deposited within the cave, aside from one modern bird skeleton.\nAnd slim is what you have to be to actually get into the cave system! Instead, researchers found ancient human DNA in… dirt! While that sounds like something out of a science fiction novel, it really happened. Other radioactive isotopes can be used to accurately date objects far older. Argon-argon dating, which compares argon 40 to argon 39, for instance, played a vital role in underscoring the significance of two ancient human skulls unearthed in the Republic of Georgia last summer. These remains, Carl C. Argon dating can also be used to date materials as young as 10, years and as old as billions of years. Uranium and lead isotopes take us back farther still. Indeed, findings presented earlier this year suggest that infant Earth may have been ready to support life far earlier than previously thought.\nUranium-lead dates for a single zircon crystal found in the oldest sedimentary rock yet known suggest that by 4. The first life-forms may have been just around the corner. The dating confirmed that the horse does indeed date back 1, years to the Tang dynasty, as its style suggests.
Many crystals, including diamond, quartz and feldspar, accumulate and trap electric charges at a known rate over time.", "score": 8.086131989696522, "rank": 98}]} {"qid": 19, "question_text": "What is chitosan and what are its key properties as a natural coagulant?", "rank": [{"document_id": "doc-::chunk-1", "d_text": "Chitosan is a natural polysaccharide that exhibits properties such as biodegradability, non-toxicity, biocompatibility, hemostatic activity, bio-adhesiveness and penetration-enhancing properties. Moreover, it has anti-microbial properties.\nAs per the National Center for Biotechnology Information (NCBI), deacetylated chitosan is a vital additive in filtration and water treatment which eliminates 99% of turbidity. Moreover, chitosan is an easily available product, extracted from a waste product of the fishery industry, with several uses in other industries: it serves as a wound care material in the pharmaceutical industry, and acts as a natural flavor and moisture-control agent in the food industry, leading to high demand for this polymer and further boosting the overall market.\nFurthermore, stringent government regulations forcing manufacturers to reduce plastic bag use are increasing the demand for biodegradable plastics, in which chitin is widely used. This has led to the increasing adoption of chitosan and will further drive market growth over the forecast period.\nKey Market Trends\nUnder Application, Water Treatment is Expected to Witness a Healthy Growth Rate Over the Forecast Period\nWater treatment is expected to witness healthy growth in the future owing to the growing demand for chitosan in industrial, commercial and municipal water treatment plants. Several advantages, such as the ability to remove pesticides, surfactants, and phenol from the water, make it highly preferable in water treatment plants.
Increasing demand for chitosan as a water cleaner, because of its non-toxic, non-allergenic, biodegradable nature, is also expected to propel the segment's growth.\nThe presence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in water and wastewater has recently been reported. The stools and masks of COVID-19 patients were considered the primary route of coronavirus transmission into water and wastewater.\nFor instance, a research article by Hai Nguyen Tran et al., published in Environmental Research Journal in 2021, confirmed that SARS-CoV-2 RNA was detected in inflow wastewater. Although the existence of SARS-CoV-2 in water influents has been confirmed, future studies should focus on the survival of SARS-CoV-2 in water and wastewater under different operational conditions, and on whether transmission from COVID-19-contaminated water to humans is an emerging concern. Thus, the use of chitosan for water treatment is expected to be encouraged in the near future to help prevent viral diseases like COVID-19.", "score": 51.03734100038682, "rank": 1}, {"document_id": "doc-::chunk-2", "d_text": "Chitosan is inexpensive and nontoxic for mammals 5. The chitosan molecule has the ability to interact with the bacterial surface: it is adsorbed onto the surface of the cells and stacks on the microbial cell surface, forming an impervious layer around the cell that blocks its channels 5. In addition, chitosan has been studied for use as a coagulant or flocculant in river water and in wastewater. In laboratory studies chitosan has been reported to perform well as a coagulant for removing Chlorella sp. in algal turbid water 6, removing turbidity from sea water 7, and for microalgae harvesting 8. It has several industrial and commercial uses, can be recycled, and is an excellent chelating agent for many metals such as arsenic, molybdenum, cadmium, chromium, lead, and cobalt 9.
Effective coagulation for turbidity removal was achieved in tap water when using doses of chitosan lower than those required for complete charge neutralization of the bentonite 10.\nFigure 2: Infrared spectra of chitin (A) and chitosan (B) (12)\nThe infrared (IR) absorption spectra of chitosan and chitin are presented in figure 2. It can be observed that three types of absorption bands exist: the amide (I) bands of chitosan, characterized by absorption at approximately 1655-1630 cm-1; the amide (II) bands of chitin at approximately 1560 cm-1; and the absorption bands for -OH groups at 3450 cm-1 11. The presence of very reactive amino (-NH2) and hydroxyl (-OH) groups in its backbone allows chitosan to be used as an effective adsorbent material for the removal of water pollutants. Anionic particles of bentonite are electrostatically attracted by the protonated amino groups of chitosan 12. This reaction facilitates the neutralization of the anionic charges, which can then bind together and settle rapidly under gravity. The practical application of chitosan as a water treatment coagulant, in terms of chitosan dose, pH, stirring and time effects, is examined in the study presented here.\nMATERIALS AND METHODS\nPreparation of synthetic water\n10 gm.", "score": 50.96380920153219, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "INVESTIGATION / RESEARCH\nEffectiveness of chitosan as a natural coagulant in treating turbid waters\nAvailable from: http://dx.doi.org/10.21931/RB/2019.04.02.7\nAluminum, lime and iron coagulants have been commonly used in most industries for many decades to coagulate particles in surface water, removing turbidity from the water prior to flocculation, sedimentation or filtration. Although effective, inorganic coagulants have several disadvantages: there has been concern about the relation between aluminum residuals in treated water and Alzheimer's disease, and about the toxic effects of metallic coagulants on the aquatic environment.
Hence, nowadays there is great attention to the improvement of natural coagulants for treated water, such as chitosan; chitosan is a natural linear cellulose-like copolymer of glucosamine and N-acetyl-glucosamine widely distributed in nature. The present study aimed to investigate the effects of chitosan on the removal of suspended solids (bentonite clay) from water. A series of batch flocculation tests with chitosan under different conditions was conducted. The results indicate that chitosan is a potent coagulant for bentonite suspension. Coagulation with chitosan showed an efficiency of 96.9%. The coagulant performed well at a concentration of 1 g chitosan/100 ml water at pH 6.\nKey words: Chitosan, coagulation, flocculation, bentonite, turbidity, water\nWater is a key substance in all natural and human activities; the production of potable water from raw water sources usually uses coagulation-flocculation techniques for removing turbidity in the form of suspended and colloidal materials 1. Coagulants added to the water withdraw the forces that stabilize the colloidal particles and keep them suspended in the water 2. Once the coagulant is introduced into the water, the individual colloids must aggregate and grow bigger so that the impurities can settle down at the bottom of the beaker and be separated from the water suspension (as shown in table 1).\nTable 1. Factors Affecting Coagulation\nVarious types of coagulants show potential application in treating water and wastewater. They range from chemical to non-chemical coagulants.", "score": 46.85434846163432, "rank": 3}, {"document_id": "doc-::chunk-2", "d_text": "Natural macromolecular coagulants are promising and have attracted the attention of many researchers because of their abundant sources, low price, versatility, and biodegradability [11,14,15]. Okra, rice, and chitosan are natural compounds which have been used in turbidity removal [16-18].
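The percentage efficiency figures reported throughout these coagulation studies are simply the fractional drop in turbidity between raw and settled water. A minimal sketch; the NTU readings below are hypothetical examples, not data from the cited papers:

```python
def removal_efficiency(initial_ntu: float, final_ntu: float) -> float:
    """Percent turbidity removed, as commonly reported in jar-test studies."""
    return (initial_ntu - final_ntu) / initial_ntu * 100.0

# e.g. a jar test taking 20 NTU raw water down to 0.62 NTU (~96.9 % removal)
print(removal_efficiency(20.0, 0.62))
```

The same formula applies to any measured parameter (COD, hardness, color) by substituting the relevant before/after readings.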
The extract of the seeds has been reported to drastically reduce the amount of sludge and bacteria in sewage.\nIn view of the above discussion, the present work was taken up to evaluate the efficiency of various natural coagulants for the physico-chemical contaminant removal of water. To date, most of the research has concentrated on coagulant efficiencies in synthetic water, but in this study we go further and attempt to test the efficiency of the natural coagulants on surface water. The efficiencies of the coagulants, as stated by , might alter depending on many factors: nature of organic matter, structure, dimension, functional groups, chemical species, and others.\nNatural coagulants and their preparation\nSago is a product prepared from the milk of tapioca root. Its botanical name is ‘Manihot esculenta Crantz syn. M. utilissima’. Hyacinth bean, with botanical name Dolichos lablab, was chosen as another coagulant. Both coagulants were used in the form of powders (starches). Starch consists mainly of a homopolymer of α-D-glucopyranosyl units that comes in two molecular forms, linear and branched. The former is referred to as amylose and the latter as amylopectin. These have the general structure as per Figure 1.\nFigure 1. General structure of amylose and amylopectin.\nThe third coagulant was chitin ([C8H13O5N]n), which is a non-toxic, biodegradable polymer of high molecular weight. Like cellulose, chitin is a fiber, and in addition it presents exceptional chemical and biological qualities that can be used in many industrial and medical applications. The two plant-originated coagulants were taken in the form of powder or starch.
Chitin was commercially procured.\nThe first stage included testing the efficiency of the four coagulants on the synthetic waters.", "score": 46.34790488649312, "rank": 4}, {"document_id": "doc-::chunk-9", "d_text": "It has been used as non-toxic floccules in the treatment of organically polluted wastewater.\nThe effects of the coagulation process on hardness were observed for varying levels of hardness, which resulted in a significant decrease in hardness. The study correlates with the results obtained by , wherein they had a maximum hardness removal of 84.3% by chitosan in low turbid water with an initial hardness of about 204 mg/l as CaCO3.\nSeveral experiments were carried out to determine the comparative performance of chitosan on E. coli in different turbidities. The chitin-treated waters were E. coli negative at all turbidities. Conclusive evidence was found for the negative influence of chitosan on E. coli. The regrowth of E. coli was not observed in the experiments after 24 h, which was similar to the observations by .\nAs far as sago is concerned, the starch was effective both individually and as a blended coagulant. Unlike polyaluminium chloride, the efficiency of the natural coagulants is not affected by pH. The pH increased their efficiency, which is one of the advantages of natural coagulants. The principle behind the efficiency of the sago from the literature can be stated as follows: Sago starch is a natural polymer that is categorized as a polyelectrolyte and can act as a coagulant aid. Coagulant aids can be classified according to their ionization traits: anionic, cationic, and amphoteric (with dual charges). Bratskaya et al. mentioned that among the three groups, cationic polymers are normally used to remove negatively charged particles by attracting the adsorbed particles through electrostatic force.
They discovered that anionic polymers and non-ionized ones cannot be used to coagulate negatively charged particles.\nThe chemical oxygen demand (COD) reduction is influenced by the concentration of sago used; the lower the concentration, the better the removal of COD. Using less than 1.50 g L-1, better COD reduction is observed. At this low concentration, settling time did not influence the COD reduction. Similarly, sago concentrations lower than 1.50 g L-1 reduced the turbidity in less than 15 min of settling time.", "score": 45.2856484552711, "rank": 5}, {"document_id": "doc-::chunk-4", "d_text": "Six chemical water quality parameters were tested: pH, turbidity, temperature, time, stirring, and chemical dosage of chitosan. Water at pH 6 and pH 9 was compared with the standard test water control at a neutral pH of 7.0. The pH was adjusted using 1 M hydrochloric acid (HCl) and 1 M sodium hydroxide (NaOH). For the effects of turbidity level on chitosan coagulation, turbidity was set at 5, 10, 15, 20 and 25 NTU, compared to a control water turbidity of 1 NTU, and kaolinite was used as the turbidity source. Doses of chitosan were 1, 3, 10 and 30 mg/L.\nRESULTS AND DISCUSSION\nOptimal dosage of chitosan\nThe lowest turbidity in the suspension marks the optimal dosage condition: at chitosan doses of 1-3 mg/L, kaolinite removals were high (96%), while at chitosan doses above 3 mg/L there was reduced or no removal of kaolinite turbidity by chitosan. Overall, kaolinite reductions were significantly lower as the chitosan dose became higher.\nTable 4. Relation between chitosan dose and turbidity reduction %\nFigure 4. Relation between chitosan dose and turbidity reduction %\nEffects of water pH on turbidity removal\nEffects of water pH on removal of kaolinite turbidity\nKaolinite turbidity was 5.5 NTU, the temperature of the turbid water was 35 °C, the chitosan dosage was 1 g/L, and stirring was at 300 rpm.
As the pH was increased, chitosan was less effective as a coagulant on settled water turbidity 15. The pH values selected for optimum settled water turbidity removal were 5, 6, 7, 8 and 9. In the pH range of 5 to 7, the residual turbidities of the supernatant can be reduced to less than 1 NTU, and the lowest turbidity, 0.65 NTU, was achieved at pH 6. These results also indicate that the residual turbidities increase at pH 8 and 9.\nTable 5. Relation between turbidity and pH value\nFigure 5.", "score": 44.20678189178545, "rank": 6}, {"document_id": "doc-::chunk-7", "d_text": "Pan, J.R., Chih, P., Huang, S.C., Chen, Ying C. Evaluation of modified chitosan biopolymer for coagulation of colloidal particles. Colloids and Surfaces A: Physicochemical and Engineering Aspects 1999; 147: 359-364.\n11. Qin, C., Li, H., Xiao, Q., Liu, Y., Zhu, J., Du, Y. (2006). Water-solubility of chitosan and its antimicrobial activity. Carbohydrate Polymers 63: 367-374.\n12. Roussy, J., Van Vooren, M., Dempsey, B., Guibal, E. (2005). Influence of chitosan characteristics on the coagulation and the flocculation of bentonite suspensions. Water Res. 39: 3247-3258.\n13. R. Rajendran, M. Abirami, P. Prbhavathi, P. Premasudha, B. Kanimozhi, A. Manikandan, Biological treatment of drinking water by chitosan based nanocomposites. African J. of Biotechnology, 14(11), 930-936 (2015).\n14. S.A. Fast, B. Kokabian and V.G. Gude. “Chitosan enhanced coagulation of algal turbid waters - comparison between rapid mix and ultrasound coagulation methods.” Chemical Engineering Journal, vol. 244, 2014, pp. 403-410.\n15. Shellshear, M.S. Urban storm water treatment using chitosan. B. Eng.
in Civil Engineering thesis, Faculty of Engineering and Surveying, University of Southern Queensland, 2008.\nReceived: 10 April 2019\nApproved: 10 May 2019\nDepartment of Basic Science, Valley Higher Institute for Engineering and Technology, Cairo, Egypt.", "score": 42.24033051411694, "rank": 7}, {"document_id": "doc-::chunk-6", "d_text": "A.L. Ahmad, N.H. May Yasin, J.C. Derek, and J.K. Lim. “Optimization of Microalgae Coagulation Processes Using Chitosan.” Chemical Engineering Journal, vol. 173, 2011, pp. 879-882.\n3. Divakaran, R., Pillai, V.N. (2002). Flocculation of river silt using chitosan. Water Res., 36: 2414-2418.\n4. Folkard, G.K., Sutherland, J., Shaw, R. (2000). Water clarification using Moringa oleifera seed coagulant. Online: http://www.lboro.ac.uk/well/resources/technical-briefs/60.\n5. Ganjidoust, H., Tatsumi, K., Wada, S., Kawase, M: Role of peroxidase and chitosan in removing chlorophenols from aqueous solution. Water Sci. Technol. 1996; 34(10): 151-159.\n6. H. Altaher. “The use of chitosan as a coagulant in the pre-treatment of turbid sea water.” Journal of Hazardous Materials, vol. 233-234, 2012, pp. 97-102.\n7. L. Rizzo, A. Di Gennaro, M. Gallo, and V. Belgiorno. “Coagulation/chlorination of surface water: a comparison between chitosan and metal salts.” Separation and Purification Technology, vol. 62, 2008, 79-85.\n8. M.A. Abu Hassan and M.H. Puteh. “Pretreatment of palm oil mill effluent (POME): a comparison study using chitosan and alum.” MJCE, vol. 19, 2007, 128-141.\n9. Mackenzie, L.D., Cornwell, D.A. (1991). Introduction to Environmental Engineering. 2nd Ed.
McGraw Hill, New York, 157-163.\n10.", "score": 40.39288935977366, "rank": 8}, {"document_id": "doc-::chunk-1", "d_text": "The bandages are ideal for wounds that are prone to acute bleeding after mechanical and surgical debridement, and quickly stop a wide range of bleeding, from oozing to severe arterial bleeds.\nHow Do Chitosan-Based Hemostatic Dressings Work?\nIn regard to chitosan-based hemostatic wound dressings, chitosan is a naturally occurring biocompatible and biodegradable polysaccharide derived from chitin, the structural element in the exoskeleton of crustaceans. Chitosan-based bandages control hemostasis and stop bleeding within minutes. The bandages’ antibacterial barrier properties may also help prevent infection at the incision site.\nChitosan, in a freeze-dried form, provides primary hemostasis by sealing the wound and stopping the bleeding. Specific chitosan dressings can create an adhesive structure with a positive charge that attracts red blood cells and platelets, which have a negative charge. As the red blood cells and platelets are drawn towards the chitosan through this ionic interaction, a strong seal forms at the dermal wound site. The platelets and red blood cells continue to be drawn towards the chitosan and form the frontline hemostatic support structure. The platelets and red blood cells will continue to aggregate until hemostasis and clotting occur.\nChitosan-based dressings have rapidly gained acceptance in military and traumatic wound settings where massive hemorrhage often leads to the depletion of clotting factors. Chitosan’s mechanism of action functions independently of either the intrinsic or extrinsic clotting cascades, and forms an immediate seal on wounds. This allows time for the patient’s native coagulation pathway to take effect. 
Furthermore, the use of chitosan has been implemented in order to address bleeding specifically while minimizing the collateral tissue injury inherent with electrocautery.\nHemostatic bandages are generally used in the first 24 hours, when necrotic tissue has been removed and there is substantial bleeding at the wound site. After getting the initial heavy bleeding under control, one may apply moist dressings to promote wound healing. Wet-to-dry wound dressings can prevent infection, but recent studies have shown that this method may actually enable bacterial proliferation. Chitosan-based hemostatic bandages offer physicians an added advantage because they also have an antibacterial barrier.", "score": 39.14349537061663, "rank": 9}, {"document_id": "doc-::chunk-12", "d_text": "Water is generally available during the rainy season, but it is muddy and full of sediments. Because of a lack of purifying agents, communities drink water that is no doubt contaminated by sediment and human feces. Thus, the use of natural coagulants that are locally available, in combination with solar radiation, which is abundant and inexhaustible, provides a solution to the need for clean and safe drinking water in the rural communities of India. Use of this technology can reduce poverty, decrease excess morbidity and mortality from waterborne diseases, and improve overall quality of life in rural areas.\nThe application of coagulation treatment using natural coagulants on surface water was examined in this study. The surface water was characterized by a high concentration of suspended particles with a high turbidity. At a varied range of pH, the suspended particles easily dissolved and settled along with the coagulants added. Research has been undertaken to evaluate the performance of the natural starches of sago flour, bean powder, and chitin acting as coagulants individually and in blended form. In all three cases, the main variable was the dosage of the coagulant.
The study shows that the natural characteristics of starch and the other coagulants can make them efficient coagulants for surface water, but further study would be needed to modify them for maximum efficiency. Thus, it can be concluded that the blended coagulants are the best, giving maximum removal efficiency in minimum time.\nIt is chitin and chitosan which can readily be derivatized, by utilizing the reactivity of the primary amino group and the primary and secondary hydroxyl groups, to find applications in diversified areas. In this work, an attempt has been made to increase the understanding of the importance and effects of chitin, at various doses and pH conditions, upon the chemical and biological properties of water. In view of this, this study will attract the attention of academicians and environmentalists.\nThis is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.", "score": 37.81448622410633, "rank": 10}, {"document_id": "doc-::chunk-1", "d_text": "Medicine and health care products field\nBecause chitosan is non-toxic and has anti-bacterial, anti-inflammatory, hemostatic and immune functions, it can be used for artificial skin, self-absorbing surgical sutures, medical dressings, and bone and tissue engineering scaffolds; it can enhance liver function, improve digestive function, lower blood fat and blood sugar, inhibit tumor metastasis, and adsorb and complex heavy metals so that they can be excreted, and so on, and it has been vigorously applied in health foods and drug additives.\n3. Environmental protection field\nChitosan and its derivatives are good flocculants for wastewater treatment and for metal recovery from metal-containing waste water. In the textile field: as a mordant, in health care fabrics, as sizing agents, and in printing and dyeing.\nChitosan Hydrochloride Usage:\n1.
Used to prepare dressings for burn wounds; the hemostatic effect is excellent.\n2. Made into health products in capsule form, it is convenient to take and can dissolve and be absorbed adequately to perform the health care functions of Chitosan.\n3. Chitosan Hydrochloride is used as an absorption promoter for hydrophilic medicines and natural-nutrition cosmetics.\n4. Chitosan Hydrochloride keeps the nature of Chitosan. It shows excellent performance in film forming and bacterial inhibition; it is used as an additive and preservative for fruits and vegetables.\n5. Having a strong cationic charge, it is used in sewage disposal, in the settling and recovery of protein in food factories, and in paper-making, where it can improve the utilization ratio of paper pulp and the strength of paper products.\n|pH Value (1% solution)||4.0-6.0|\n|Loss on Drying||<=15.0%|\n|Heavy Metals (Total)||<=40.0ppm|\n|Total Plate Count||<=1000cfu/g|\n|Yeast & Mould||<=100cfu/g|\n1. Rich experience: Our company has been a professional, leading production factory in the pharmaceutical area in China for many years.\n2. Top quality: High quality guaranteed; sample orders welcome, MOQ just 10 g. Once any problem is found, the package will be reshipped for you.\n3. Discreet package: The packing that suits you best will be chosen to cross customs safely. Or if you have your own preferred way, it can also be taken into consideration.\n4.", "score": 37.704173382500585, "rank": 11}, {"document_id": "doc-::chunk-1", "d_text": "
Chitosan facilitated rapid wound re-epithelialization and the regeneration of nerves within the vascular dermis and early returns to normal skin color at chitosan-treated areas . Treatment with chitin and chitosan demonstrated a significant reduction in treatment time with minimal scar formation on various animals . Biochemistry and histology of chitosan in wound healing has been reviewed by Muzzarelli et al. and Feofilova et al. . The silver sulfadiazine incorporated bilayer chitosan wound dressing showed excellent oxygen permeability, controlled water vapor transmission rate, and water-uptake capability along with excellent antibacterial activity . Antibacterial, antifungal and antiviral properties found chitosan particularly useful for biomedical applications such as wound dressings, surgical sutures, as aids in cataract surgery, periodontal disease treatment, etc. . Research has shown that chitin and chitosan are nontoxic and non-allergic so that the body will not reject these compounds. Both chitin and chitosan possess many properties that are beneficial for wound healing like biocompatibility, biodegradability , hemostatic activity , healing acceleration, non-toxicity, adsorption properties and anti infection properties [15-17]. An effective wound dressing not only protects the wound from its surroundings but also promotes the wound healing by providing an ideal microenvironment for healing, removing any excess wound exudates and allowing continuous tissue reconstruction . Chitosan has been widely studied as a wound dressing material [18,19]; however, a wound-dressing product based on chitosan is yet to be commercialized.\nThe use of temporary skin substitutes such as Biobrane, TransCyte, Integra and Terudermis have become more widely used in the treatment of mild to deep dermal burn injury. However, they are extremely expensive. 
Also, they contain collagen, which is not recommended for third-degree burns, where the dermis, epidermis and hypodermis are totally destroyed.", "score": 37.38412865962445, "rank": 12}, {"document_id": "doc-::chunk-1", "d_text": "In particular, the development of chitosan-based materials as useful adsorbent polymeric matrices is an expanding field in the area of adsorption science. Chitosan is a natural polyaminosaccharide, obtained by deacetylation of chitin, which is a polysaccharide consisting predominantly of unbranched chains of β-(1→4)-2-acetamido-2-deoxy-D-glucose. Composites based on chitosan are economically feasible because they are easy to prepare and involve inexpensive chemical reagents. Recently, chitosan composites have been developed to adsorb heavy metals and dyes from wastewater [10, 12–15].\nChitosan composites have been proven to have better adsorption capacity and resistance to acidic environments. Various methods of preparing hybrid materials based on inorganic materials and polysaccharides such as chitin [1–8] and chitosan have been studied for different applications [9, 11, 16–18]. Different kinds of substances have been used to form composites with chitosan, such as silica, montmorillonite, polyurethane, activated clay, bentonite, polyvinyl alcohol, polyvinyl chloride, kaolinite, oil palm ash, perlite, and magnetite [19–23]. Although such minerals possess high adsorption capabilities, modification of their structure can successfully improve those capabilities. In work , chitosan/attapulgite composites are applied as an adsorbent for the removal of chromium and iron ions from aqueous solution in both single and binary systems. Attapulgite is a hydrated octahedral-layered magnesium aluminum silicate mineral with a large surface area, excellent chemical stability, and strong adsorption. Equilibrium data were well described by the Freundlich isotherm model, indicating multilayer adsorption of Cr(III) and Fe(III) onto the composites.
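The Freundlich description of the equilibrium data mentioned above, q_e = K_F · C_e^(1/n), can be fitted by simple linear regression on the log-transformed isotherm. A minimal sketch, using made-up equilibrium data for illustration (the study's actual values are not given here):

```python
import numpy as np

def fit_freundlich(C_e, q_e):
    """Fit the Freundlich isotherm q_e = K_F * C_e**(1/n) by linear
    regression on log q_e = log K_F + (1/n) * log C_e.
    Returns (K_F, n)."""
    slope, intercept = np.polyfit(np.log10(C_e), np.log10(q_e), 1)
    K_F = 10.0 ** intercept
    n = 1.0 / slope
    return K_F, n

# Illustrative synthetic data, NOT from the study:
C_e = np.array([5.0, 10.0, 20.0, 40.0, 80.0])  # equilibrium conc., mg/L
q_e = 2.5 * C_e ** (1 / 2.2)                   # uptake, mg/g (Freundlich-shaped)

K_F, n = fit_freundlich(C_e, q_e)
print(f"K_F = {K_F:.2f}, n = {n:.2f}")  # recovers the generating K_F = 2.5, n = 2.2
```

A value of n > 1 is conventionally read as favorable adsorption, which is consistent with the multilayer-adsorption interpretation reported for Cr(III) and Fe(III).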
Kinetic experiments showed that the composites offered fast kinetics for adsorption of Cr(III) and Fe(III), and a diffusion-controlled process was proposed as the essential adsorption rate-controlling step. Moreover, the initial adsorption rates of Cr(III) were faster than those of Fe(III) as temperature and initial concentration increased. The thermodynamic analysis indicated the endothermic, spontaneous, and entropy-gaining nature of the process.", "score": 36.67724369262262, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "- Wastewater treatment: The chitosan-gel combination is a more effective and less expensive process for removing heavy metals from wastewater in many industrial processes, including metal plating and finishing, mining, and power generation (nuclear and fossil fuels).\n- Groundwater decontamination: The biosorbent can be used in environmental clean-up efforts for removing heavy metals and certain gasoline additives from contaminated groundwater.", "score": 36.29420464153731, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "Lobster shell shows great healing, bio-stimulant properties\nSCIENTISTS from the University of Havana have used lobster waste to generate chitin and chitosan, two key compounds in biomedicine and agriculture.\nThey used these compounds to produce surgical materials with great healing and antiseptic properties, as well as to enhance growth speed and germination in seeds. Research results have been published in international research journals such as Macromol, Food Hydrocolloids, Journal of Applied Polymer Science or Polymer Bulletin.\nChitin is a polymer very common in nature as part of animals' and plants' physical structures.
Only cellulose is more abundant than chitin, which makes this compound a highly important renewable resource that can easily be found in arthropods, insects, arachnids, molluscs, fungi and algae.\nThe fishing industry in Cuba generates great amounts of lobster waste, \"a pollutant rich in proteins and chitin\", states Prof. Carlos Andrés Peniche Covas, head of the Biopolymers Research Group at the Biomaterials Centre of the University of Havana. This group is doing research into chitin and chitosan extraction from such waste, in collaboration with the Spanish Centre for Scientific Research (CSIC), the Complutense University in Madrid (Spain) and the Mexican Research Centre for Food and Development.\nPeniche points out that \"this work allows for the first accurate and comprehensive results of a university study on chitin and chitosan.\nThe study starts with the extraction of these compounds from polluting waste of the Cuban fishing industry and goes on to cover these products' characterisation through traditional techniques and some more innovative ones, the study of their properties, the development of new by-products and the testing of their practical applications in areas useful for this Caribbean country, such as agriculture and biomedicine.\"\nThese researchers' work has led to the development of a procedure to obtain surgical materials with great healing and antiseptic properties.
\"This procedure involves using chitosan to coat surgical threads and lint, into which antibiotics are injected.\nBy doing this, we obtain medical materials with both antimicrobial and healing properties and, as they are coated in a natural polymer, with a higher degree of biocompatibility.\" Research shows that such properties remained unmodified after sterilisation.\nTwo new types of surgical thread were produced in collaboration with the Cuban Superior Institute of Military Medicine \"Dr.", "score": 33.20125938399296, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "Bioactive chitosan scaffold co-cultured with keratinocyte and fibroblast cells.\n|Abstract:||Polymeric cell-seeded sponge scaffolds play a prominent role in supporting the regeneration of skin for burn patients. The three-dimensional scaffolds provide physical support and act as an excellent matrix regulating cell growth, adhesion and differentiation. Natural macromolecule polymeric scaffolds from chitosan have advantages over synthetic polymers in that they encourage cell attachment and maintain differentiation of cells. Chitosan is the most used natural, biodegradable polymer for tissue scaffolds. It is known in the wound management area for its haemostatic properties. It possesses other biological activities and affects macrophage functions that assist in faster wound healing. It provides a non-protein matrix for three-dimensional tissue growth. Chitosan seems to generate desirable migratory and stimulatory effects on the stromal cells of the surrounding tissue. Patients suffering from extensive skin loss, as in burns, are in danger of succumbing to either massive infection or excessive fluid loss. A chitosan scaffold with the optimum combination of fibroblast cells and keratinocyte cells is guided to produce the desired tissues, which ultimately helps in the complete regeneration of functional skin.
The present study demonstrates the importance of chitosan as a tissue-engineered scaffold for burn wounds.|\nCell culture (Research)\nBiomedical materials (Research)\nSharma, Chandra P.\n|Publication:||Name: Trends in Biomaterials and Artificial Organs Publisher: Society for Biomaterials and Artificial Organs Audience: Academic Format: Magazine/Journal Subject: Health Copyright: COPYRIGHT 2012 Society for Biomaterials and Artificial Organs ISSN: 0971-1198|\n|Issue:||Date: Jan, 2012 Source Volume: 26 Source Issue: 1|\n|Topic:||Event Code: 310 Science & research|\n|Product:||SIC Code: 2836 Biological products exc. diagnostic|\n|Geographic:||Geographic Scope: India Geographic Code: 9INDI India|\nChitosan is a common biopolymer derived from chitin, a key component of crustacean outer skeletons. It is the second most abundant natural polymer after cellulose and is known in the wound management area for its haemostatic properties. It has wide application in waste management, food processing, medicine and biotechnology.", "score": 33.07475382889089, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "The in vitro and in vivo fungicidal activity of chitosan was studied against Colletotrichum gloeosporioides, the causal agent of anthracnose in papaya fruits. Chitosan at 1.5% and 2.0% concentrations showed a fungistatic effect with 90–100% inhibition (significant at P ≤ 0.05) of the fungal mycelial growth. Changes in the conidial morphology were also observed with the higher chitosan concentrations after 7-h incubation. In vivo studies showed that 1.5% and 2.0% chitosan coatings on papaya not only controlled the fruit decay but also delayed the onset of disease symptoms by 3–4 weeks during 5 weeks of storage at 12 ± 1 °C and slowed down the subsequent disease development. However, when leaving the fruits to ripen at ambient temperature (28 ± 2 °C), 2.0% chitosan was less effective than 1.5% in controlling the disease development.
Chitosan coatings also delayed the ripening process by maintaining the firmness levels, soluble solids concentration and titratable acidity values during and after storage.", "score": 32.87918568467752, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "Chitosan, a complex carbohydrate derivative of shellfish exoskeleton, is shown to enhance lingual hemostasis in rabbits treated with a known antagonist of platelet function, epoprostenol (prostacyclin or PGI2). Bleeding times were measured for bilateral (15 mm × 2 mm) tongue incisions in 10 New Zealand white rabbits. Using a randomized, blinded experimental design, one incision in each animal was treated with chitosan and the other was treated with control vehicle without chitosan. Extraoral bleeding and coagulation times were measured for each animal before, during, and after infusion of epoprostenol. Continuous infusion of epoprostenol increased mean systemic bleeding time 95%. In this platelet dysfunction animal model, lingual incisions receiving the experimental substance showed a 56% improvement in bleeding time in comparison with lingual incisions receiving control solution (P = .003).\nASJC Scopus subject areas\n- Oral Surgery", "score": 32.418866989307084, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "Nov. 
6, 2013 — In order to overcome resistance to antifungals in a variety of pathogenic fungi and yeasts, researchers from the University of Alicante have developed a novel and efficient antifungal composition with pharmacological applications in agriculture and the food industry, among others.\nThe composition, developed and patented by the UA Research Group in Plant Pathology, is based on the combined use of chitosan, or chitosan oligosaccharides (COS), antifungal agents and additives that synergistically affect the growth of a variety of pathogenic fungi.\n\"Chitosan is a non-toxic biopolymer, biocompatible and naturally degradable, with antibacterial, antiviral and antifungal properties, obtained from chitin, the main constituent of hard body parts of invertebrates, such as the shells of shrimp, lobsters, crabs and other marine crustaceans, and is part of the fungal cell wall,\" as explained by lecturer Luis Vicente López Llorca, Director of the UA Research Group in Plant Pathology and head of the research work.\n\"Because many fungal pathogens develop resistance to prolonged treatment with antifungal drugs, it is desirable to find alternatives for their control in medical and agricultural settings, and in those applications in which the fungi cause damage. In clinics, pathogenic fungi resistant to antifungal drugs are a major cause of mortality in patients. Chitosan and the antifungal additives, some based on the identification of molecular targets of chitosan, contribute to produce a novel alternative to control fungal diseases, and in particular antifungal-resistant strains,\" López Llorca said.\nThe various experiments carried out by the research group are proof of the significant synergistic effect of the combination of chitosan (or COS), other antifungals and an ARL1 gene inhibitor in inhibiting the growth of molds and yeasts.
\"Chitosan is nontoxic to mammals, making it suitable for use as an antifungal in various applications,\" Luís Vicente López adds.\n\"Chitosan or COS, together with inhibition of some of its gene targets, blocks the cell cycle and transcription in yeast, leading to oxidative stress, cell death and growth inhibition,\" López Llorca indicates. In this regard, the combination may have potential in the treatment of tumors.", "score": 32.29495289673048, "rank": 19}, {"document_id": "doc-::chunk-1", "d_text": "Most important are wound care, implants, fat- and cholesterol-binding drugs, and drug delivery.\nIn terms of function, it may be compared to the protein keratin.\nChitin has proved useful for several medicinal, industrial and biotechnological purposes.\nFat Absorbent: The ability of the Chitosan molecule to scavenge fat and cholesterol in the digestive system, plucking it from the stomach and excreting it in the duodenum, has significant implications for its use as a beneficial food additive.\nFood containing Chitosan, or Chitosan complexed with a fatty acid, could be designed to reduce obesity, cholesterol levels and the incidence of colon cancer.\nWound Healing Ointment: An obvious method of exploiting Chitin's wound-healing properties is to use the material in ointments for treating wounds.\nWound Dressing: Several research groups and companies, notably in Japan, have devised methods of making wound dressings from Chitin or Chitosan.\nThis type of dressing is used for burns, surface wounds and skin-graft donor sites.\nSurgical Sutures: These work by remaining in tissue long enough to permit healing to occur and then slowly dissolving; therefore, they need not be removed.\nUnlike several other suture materials that are absorbed by the body, these do not cause allergic reactions.\nOphthalmology: Both contact lenses and the intraocular lenses used in cataract surgery must have one significant characteristic: gas permeability.\nChitin and Chitosan can make lenses more
permeable to oxygen than other lens materials.\nThis could be particularly useful for injured eyes, because Chitin and Chitosan also help to heal wounds.\nIn addition, Chitin does not adhere to eye wounds; that makes removing the lenses from the eyes safer.\nHard contact lenses can also be prepared using Chitin n-butyrate.\nDental: Since Chitin can regenerate the connective tissue that covers the teeth near the gums, it has possibilities for treating periodontal diseases such as gingivitis and periodontitis.", "score": 31.754497382841038, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "The present research work entails the synthesis and pharmacological screening of chemically modified N-aryl chitosan derivatives. Chitosan is an amino polysaccharide of natural origin, and it has attracted attention because of its unique physicochemical characteristics and biological activities. It has great future pharmaceutical potential, with an unusual range of possibilities for structural modification to impart desired functions. This chemically modified chitosan can be used as a potent hypocholesterolemic agent. Various aldehydes can be used for the chemical modification of chitosan: an aldehyde attached to the free amino group of the chitosan polymer imparts different physicochemical properties not exhibited before modification. The imine formed as an intermediate is then reduced to the N-aryl derivative of chitosan. It was characterised by IR and NMR. The hypocholesterolemic mechanism of chitosan was investigated in male Wistar rats. Animals were divided into 6 groups (n = 6): a normal untreated control group (UC), a high-fat control group (PC), one standard control (SC) and 3 modified chitosan groups (CSA1, CSA2 and CSA3). The doses of standard chitosan and modified chitosan were given at the beginning, for 12 days. This was later followed by high-fat induction.
The results showed a marked decrease in the total serum cholesterol of rats in vivo following the modified chitosan treatment.", "score": 31.614490881967043, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "The antibacterial properties of chitosan have been investigated using an in vitro methodology in which the antibacterial activities of chitosan were measured against a wide range of bacteria. Results demonstrate that chitosan increased the permeability of the inner and outer membranes and ultimately disrupted the bacterial cell membranes, releasing their contents. HemCon hemostatic dressings offer an antibacterial barrier against a wide range of gram-positive and gram-negative organisms. © Dennis Kunkel Microscopy, Inc.", "score": 31.60565595126562, "rank": 22}, {"document_id": "doc-::chunk-1", "d_text": "The efficacy of chitosan is unresolved\nThe properties of chitosan that support weight loss have been studied repeatedly, both in humans and in animals. Their results varied … One of them was carried out, inter alia, in Katowice under the supervision of Prof. Barbara Zahorska-Markiewicz (regarding Chitinin). Under clinical conditions, it was found that patients using chitosan lost an average of 16 kg in half a year. At the same time, the control group of patients using only diet lost 10 kg.\nIn turn, another study found that the addition of chitosan to the diet has a positive effect on the bacterial flora of the digestive tract. Studies on rodents have shown that chitosan with a high degree of deacetylation inhibits the development of Clostridium perfringens (the bacterium most often responsible for food poisoning), whereas the survival of probiotic bacteria, including Lactobacillus spp.
and Bifidobacterium spp., reached 90%.\nIn 2008, however, a team of scientists from the Cochrane Collaboration conducted a meta-analysis of all available clinical trials (lasting at least 4 weeks) on the effectiveness of chitosan in the fight against obesity. It turned out that the decrease in body weight and cholesterol (and consequently also in blood pressure) was observed only in clinical trials of low reliability – and in their case the effect was small. In turn, in studies that were carried out with greater care, there was no significant effect of taking dietary supplements with chitosan on body weight. So far, the mechanism of chitosan binding to fat molecules – the supposed essence of its slimming action – has not been clinically proven.\nSupplementation of chitosan is indicated for people who consume a significant amount of fatty products.\nAs a result, the American agency FDA (Food and Drug Administration) forbade domestic producers of dietary supplements with chitosan from putting on their packaging statements and slogans suggesting its pro-health effects. There are no such restrictions in Poland.\nChitosan supplementation is recommended for people who consume large quantities of fatty foods (fatty meats, pies, etc.). As fat is also a component of many confectionery products, chitosan can be helpful for people who eat sweets, which usually contain large amounts of saturated fat.", "score": 30.386447207316053, "rank": 23}, {"document_id": "doc-::chunk-21", "d_text": "Divya K, Vijayan S, George TK, Jisha MS. Antimicrobial properties of chitosan nanoparticles: Mode of action and factors affecting activity. Fibers Polym. 2017;18(2):221–30.\nJøraholmen MW, Bhargava A, Julin K, Johannessen M, Škalko-Basnet N. The antimicrobial properties of chitosan can be tailored by formulation. Mar Drugs. 2020;18(2):96–102.\nHafsa J, Ali Smach M, Khedher MR, Charfeddine B, Limem K, Majdoub H, et al.
Physical, antioxidant and antimicrobial properties of chitosan films containing Eucalyptus globulus essential oil. LWT-Food Sci Techn. 2016;68:356–64.\nRosato A, Carocci A, Catalano A, Clodoveo ML, Franchini C, Corbo F, et al. Elucidation of the synergistic action of Mentha Piperita essential oil with common antimicrobials. PloS one. 2018;13(8):e0200902.\nRosato A, Catalano A, Carocci A, Carrieri A, Carone A, Caggiano G, et al. In vitro interactions between anidulafungin and nonsteroidal anti-inflammatory drugs on biofilms of Candida spp. Bioorg Med Chem. 2016;24(5):1002–5.\nJiang X, Ma M, Li M, Shao S, Yuan H, Hu F, Liu J, Huang X. Preparation and evaluation of novel emodin-loaded stearic acid-g-chitosan oligosaccharide nanomicelles. Nanoscale Res Lett. 2020;15(1):1–1.\nGolmohamadpour A, Bahramian B, Khoobi M, Pourhajibagher M, Barikani HR, Bahador A. Antimicrobial photodynamic therapy assessment of three indocyanine green-loaded metal-organic frameworks against Enterococcus faecalis. Photodiagnosis Photodyn ther. 2018;23:331–8.", "score": 30.12364344018706, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "Chito SAM 100, Z-Folded Gauze\nThe special feature of ChitoSAM® 100\nUnlike conventional hemostatic agents, in which a gauze bandage or wound dressing is impregnated or coated with a hemostatic (coagulating) agent, ChitoSAM® 100 consists of one hundred percent chitosan. An additional carrier material is not necessary due to the special processing. As a result, the hemostatic effect is 25% higher than that of classic chitosan products.\nActive ingredient chitosan\nChitosan, also known as poliglusam, poly-D-glucosamine or polyglucosamine, is a natural biopolymer of the polysaccharide family, first obtained in 1859 by boiling chitin with potassium hydroxide solution. Chitosan is also used in cosmetics and industry, and in medicine as an absorbable suture material in addition to its use as a hemostatic agent.
Unlike other hemostatic agents (hemostyptics), chitosan does not activate the body's own blood clotting factors. The hemostatic effect is based on several properties:\n– High absorption capacity: binding of liquids, fats and proteins\n– Film forming\n– Concentration of clotting factors and blood solids in the wound due to moisture deprivation/binding\n– Electrophysiological reaction: cationically charged chitosan additionally promotes the concentration of anionically charged blood coagulation factors in the wound\nChitosan also works reliably at lower blood temperatures or when anticoagulants such as Marcumar or heparin are used.\nApplication of ChitoSAM 100\nThe ChitoSAM tissue is pressed directly onto the bleeding wound for several minutes. This binds blood and wound fluid and causes a gel to form. This is a great advantage for subsequent wound treatment in a rescue center or hospital, as the gel-like plug can be easily removed. Due to the high absorption capacity of the active ingredient, care must be taken in the case of head injuries to ensure that the dressing material does not come into contact with the eyes.\nChitoSAM 100 is atoxic, anti-allergenic, 100% biodegradable and can therefore be used without hesitation.", "score": 30.058176880413072, "rank": 25}, {"document_id": "doc-::chunk-38", "d_text": "Under optimal conditions, Chitosan can bind an average of 4 to 5 times its weight with all the lipid aggregates tested.60 (NOTE: This assessment was made without the addition of ascorbic acid, which potentiates this action even further.77) Studies in Helsinki have shown that individuals taking chitosan lost an average of 8 percent of their body weight in a 4-week period.76 Chitosan has increased oil-holding capacity over other fibers.108 Among the abundant natural fibers, chitosan is unique. This uniqueness is a result of chitosan's amino groups, which make it an acid-absorbing (basic) fiber. Most natural fibers are neutral or acidic.
Table 7 summarizes the in vivo effects in animals of various fibers on fecal lipid excretion. As can be seen from the results listed, ingestion of chitosan resulted in 5-10 times more fat excretion than any other fiber tested. D-Glucosamine, the building block of chitosan, is not able to increase fecal fat excretion. This is due to the fact that glucosamine is about 97 percent absorbed, while chitosan is nonabsorbable. Fats bound to glucosamine would likely be readily absorbed along with the glucosamine. Chitosan, on the other hand, is not absorbed, and therefore fats bound to chitosan cannot be absorbed.\nChitosan has the unique ability to lower LDL cholesterol (the bad kind) while boosting HDL cholesterol (the good kind).78 Laboratory tests performed on rats showed that \"chitosan depresses serum and liver cholesterol levels in cholesterol-fed rats without affecting performance, organ weight or the nature of the feces.\"79 Japanese researchers have concluded that Chitosan \"appears to be an effective hypocholesterolemic agent.\"80 In other words, it can effectively lower blood serum cholesterol levels with no apparent side effects. A study reported in the American Journal of Clinical Nutrition found that Chitosan is as effective in mammals as cholestyramine (a cholesterol-lowering drug) in controlling blood serum cholesterol without the deleterious side effects typical of cholestyramine.", "score": 30.011775636695138, "rank": 26}, {"document_id": "doc-::chunk-5", "d_text": "Since the lipid headgroups are negatively charged, molecules with lower DA might have a greater opportunity to diffuse close to the membrane breaches, and thus adsorb to and seal the membrane through electrostatic interactions. This mechanism is consistent with previous studies in which chitosan polymers were incorporated with artificial membrane films and vesicles, where physical attractions and interactions were a result of membrane adsorption [25, 26].
Our data, however, did not reveal a hint of an improvement in the function of chitosan based on the DA.\nThe issue of molecular weight\nHere, equivalent sealing effects using different-MW chitosan treatments were suggested in both TMR and LDH tests, and in both transection and compression ex vivo injury models. Based on the fact that the different-MW chitosans in our study shared a similar DA, one possible reason for the consistency of the data across different MW chitosan treatments might be the predominant role of DA over chitosan MW in initiating membrane repair. A similar result is also observed in the zeta-potential evaluation of the influence of different MWs of chitosan on membrane adsorption.\nThe superiority of nano-fabrications in particular over aqueous suspensions of injected polymers should not be underappreciated. High molecular weights of PEG are too viscous for facile IV use. Lowering the MW in an effort to produce a clinically easy injection may produce some level of toxicity due to the circulation of polymers which do not degrade and must be removed at the level of the kidney and liver (as discussed and cited above). Chitosan, as a naturally occurring polysaccharide, is not toxic. Detection of it in bodily fluids after systemic administration is unlikely.\nMoreover, PEG-coated silica nanoparticles require fabrication of the PEG surface coat over the silica nanoparticle, which is inert. To be further useful as a targeted repair agent and a delivery vehicle requires a third step - the bonding of surface PEG to the drug or cytokine of choice. In chitosan nanofabrications, the chitosan itself is the \"sealant\", while capturing or surface-bonding a drug/cytokine to be released requires only another single step in the process.
Chitosan nanospheres - by themselves - show significant sealing properties, producing improved physiological recovery after SCI.", "score": 29.92172289173267, "rank": 27}, {"document_id": "doc-::chunk-31", "d_text": "The tight entanglement structure in the silica-chitosan composite significantly retards chitosan's leaching under an acidic environment. Therefore, in the gastric environment, this composite controls the drug release rate more effectively than bare chitosan.\nIn particular, this study compares the effect of the novel chitosan/silica nanocomposite with that of chitosan sponges on the mucosal adsorption of a model drug, amoxicillin, in vitro.\nThe chitosan-silica nanocomposite samples shown in the following table are prepared:\nTABLE-US-00003\n|Sample||Chitosan (g)||Silica (g)||Chitosan (% wt)|\n|A||0||0.55||0|\n|B||0.02||0.82||2.4|\n|C||0.05||0.82||5.7|\n|D||0.10||0.55||15.4|\n|E||0.20||0.55||26.7|\n|F||0.20||0.27||42.6|\nThe dried nanocomposites (100 mg) were further treated with 50 mg of amoxicillin solubilized in PBS for 24 hrs, followed by freeze-drying. Drug content was measured by HPLC. A reverse-phase C18 column was used as the stationary phase, and 0.01 M trifluoroacetic acid/methanol (80/20 v/v) at a flow rate of 1 ml/min as the mobile phase. The mobile phase was monitored at a wavelength of 270 nm. Quantification of amoxicillin was conducted using a calibration curve obtained from amoxicillin solutions at known concentrations.\nIn Vitro Evaluation of the Mucoadhesive Properties of Chitosan-Silica Nanocomposite\nLyophilized chitosan-silica nanocomposites (samples with different compositions A-F) and chitosan were compressed into 5.0 mm diameter flat-faced discs. The compaction pressure was kept constant during the preparation of all discs.
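The chitosan weight-percent column in TABLE-US-00003 follows directly from the component masses: % wt = 100 × chitosan / (chitosan + silica). A quick check of the tabulated values:

```python
# Recompute the "Chitosan (% wt)" column of TABLE-US-00003 from the masses.
samples = {            # sample: (chitosan g, silica g)
    "A": (0.00, 0.55),
    "B": (0.02, 0.82),
    "C": (0.05, 0.82),
    "D": (0.10, 0.55),
    "E": (0.20, 0.55),
    "F": (0.20, 0.27),
}

for name, (chi, sil) in samples.items():
    pct = 100.0 * chi / (chi + sil)  # weight percent of chitosan in the composite
    print(f"{name}: {pct:.1f} % wt chitosan")
# Rounded to one decimal this reproduces the table: 0, 2.4, 5.7, 15.4, 26.7, 42.6.
```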
Tablets were attached with a pressure of 500 Pa to freshly excised intestinal rat mucosa, which had been fixed to a stainless steel cylinder (diameter 4.4 cm, height 5.1 cm, apparatus 4-cylinder, USP XXVI) using a cyanoacrylate adhesive.", "score": 29.85151861163485, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Derived from chitin, a component of the exoskeleton of crabs, Chitosan Plus has the amazing ability to bind to fats and cholesterol, encapsulating them before they can be absorbed by the body.\nChitosan Plus 60 Caps\nChitosan Plus is literally a designer fiber, especially developed to attract and hold food fat and cholesterol for safe elimination. Most fibers absorb fat and bile acids to a limited extent, but Chitosan is the \"super fiber\" that functions like a magnet, attracting negatively charged food fat, cholesterol and bile molecules.\nWith its ability to bind to fats in the body's GI system, Chitosan Plus can act as an effective \"fat blocker\" or \"fat trapper\" and prevent dietary fats from being absorbed by your body.\nHow Chitosan Plus Works\nChitosan attracts fat molecules, which are negatively charged. Chitosan is an amino polysaccharide, categorized as an alkalized form of chitin, a cellulose-like polymer derived from the exoskeleton of crabs and shrimp. It has the ability to bind fats in the stomach before they are absorbed through the digestive system. Depending upon the amount of Chitosan taken, it is possible to bind over one-third of one's dietary fat. The resulting non-digestible mass is then excreted through the digestive tract, in the process improving elimination. Chitosan's effect on fat excretion is five to ten times greater than any other natural dietary fiber.
Chitosan Plus is calorie-free and is therefore a very useful dietary supplement for those wishing to improve their health and yet maintain a low-calorie, high-fiber diet.\nBy absorbing fat, Chitosan also eliminates not only cholesterol but many of the building blocks for cholesterol. As a result, cholesterol production in the body is reduced. In addition to electrostatically binding fats, chitosan hydrophobically links to other neutrally charged fats, including cholesterol. In addition, it is believed that Chitosan's continuous partial removal of bile acids triggers the body to break down already absorbed cholesterol to produce replacement bile acids.", "score": 29.82146009176486, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "Chitosan, a deacetylated derivative of chitin, is a high-molecular-weight cationic linear polysaccharide composed of D-glucosamine and, to a lesser extent, for 15 min at 4 °C. The pellet was re-extracted using 25 mL of acetone (70%) for 15 min at 4 °C. The resulting supernatants were combined, filtered and subsequently used for the following assays. The total phenol content of the strawberry fruits was determined using the Folin-Ciocalteu method, and the results are expressed as milligrams of gallic acid equivalents (GAE) per 100 grams fresh weight (FW), using gallic acid as a standard. The total monomeric anthocyanins were estimated using the pH-differential method, and the results are expressed as cyanidin-3-glucoside equivalents (CGE) per 100 grams fresh weight. The absorbance was measured at 520 and 700 nm. The total flavonoid content was determined using the aluminum chloride colorimetric method with catechin as a standard. The total flavonoid content is expressed as milligrams of catechin equivalents (CE) per 100 grams fresh weight (FW).
The total antioxidant activity of the strawberry fruit extracts was assessed using 1,1-diphenyl-2-picrylhydrazyl (DPPH) according to the method of Brand-Williams, with some modifications. The assay was performed in a final volume of 1.5 mL, in triplicate per sample. The percentage decrease in the DPPH concentration was calculated from the initial value after incubation for 15 min. A dose-response curve was generated using Trolox as a standard, and the antioxidant activity was expressed as mol Trolox equivalents (TE) per gram fresh weight (FW). 2.5. Ascorbic Acid Content The ascorbic acid content was determined according to the method of Singh and Malik, with some modifications. The strawberry fruits (2.5 g) were homogenized using 10 mL of 16% (for 10 min, collected and filtered.", "score": 29.71438894083234, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "Recommended for the prevention of the following diseases
About the preparation
The preparation includes OCHG™ chitosan oligomers in liquid form, which are characterized by a unique ability to directly form polymer films from suspension, good miscibility with other biopolymers and a high value of the secondary swelling index. Under the influence of digestive enzymes, part of the chitosan oligomers is absorbed into the blood, while the other part, not digested by enzymes, binds water and acts as an absorbent in the digestive tract. Vitamin C supports the immune system and helps cleanse the body and neutralize free radicals.
Effects of chitosan oligomers on human health
Chitosan, from which polymeric compounds are obtained, is chemically an organic compound of the polysaccharide group; a derivative of chitin, formed by its partial deacetylation. Deacetylation of chitin involves the removal of the acetyl groups that harden and cement chitin. The highest biochemical activity of chitosan is obtained when it is processed through the patented OCHG™ process.
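The DPPH calculation described in the strawberry methods above (percentage decrease from the initial value after 15 min, then conversion to Trolox equivalents through a linear dose-response calibration) reduces to a few lines. A minimal sketch; the absorbance readings, calibration slope/intercept and sample mass below are illustrative, not values from the source:

```python
def dpph_inhibition(a_initial: float, a_final: float) -> float:
    """Percentage decrease in DPPH absorbance after incubation."""
    return (a_initial - a_final) / a_initial * 100.0

def trolox_equivalents(inhibition_pct: float, slope: float, intercept: float,
                       fresh_weight_g: float) -> float:
    """Map % inhibition back through a linear dose-response calibration
    (inhibition = slope * TE + intercept) and normalize per gram FW."""
    return (inhibition_pct - intercept) / slope / fresh_weight_g

inh = dpph_inhibition(a_initial=0.80, a_final=0.35)      # hypothetical readings
te = trolox_equivalents(inh, slope=0.9, intercept=1.5,   # hypothetical calibration
                        fresh_weight_g=2.5)
print(f"inhibition {inh:.1f}%, {te:.2f} TE per g FW")
```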
The result is chitosan lactate, or chitosan in the form of oligomers, or polymers, with short chains and a well-defined molecular weight. Structurally, chitosan is a cellulose (fiber) of animal origin, similar to human fibrin. It is completely natural and well tolerated by our body. Chitosan has a very beneficial effect on the entire body. Experts call it the sixth essential component of life after proteins, fats, sugars, minerals and vitamins. It effectively purifies and deacidifies the body, stimulates the body's defense mechanisms, does not allow cancer cells to multiply, lowers cholesterol, and helps reduce body weight. With the help of chitosan it is possible to overcome many diseases with which modern medicine cannot cope.
CHITOZIN FIT C contains liquid chitosan lactate in the form of oligomers. It is characterized by very high bioavailability and biochemical activity, which gives it the highest effectiveness among other known products containing chitosan.", "score": 28.879276468665417, "rank": 31}, {"document_id": "doc-::chunk-3", "d_text": "A preferred source of mucopolysaccharide is a material such as chitosan. Chitosan is derived from a hydrolysis product of chitin, which is the horny substance and a principal constituent of the shells of crabs, lobsters and beetles. Currently, chitosan is available as an ecological waste material at low cost. Chitosan is essentially insoluble in water at neutral or higher pH, but can be dissolved at very low pH in the process of the invention.
Chitosan is a glucosamine polysaccharide having a degree of polymerization (DP) of about 500 to about 1500 of a repeating unit of the formula: ##STR1## where n+m = DP, m = 0 to DP/2 and n = DP to DP/2.
Chitosan is a low cost biochemical product which is a superior material for encapsulating magnetic particles. Chitosan is a chelating agent which forms chelates with a large number of anions and cations and has been found to be fully effective in removing toxic ions from water solutions.
By encapsulating a magnetic material with chitosan, recovery of the chitosan-encapsulated magnetic particles from effluent streams can be easily accomplished by the application of a magnetic field. The chitosan is also readily regenerated.
Examples of practice follow: About 0.1 mole of ferrous chloride (FeCl2) and 0.1 mole of ferric chloride (FeCl3) are dissolved in an aqueous solution (430 ml.) of chitosan (1 wt. %) acidified with hydrochloric acid to a pH of 2. To this solution is added 40 ml. of aqueous sodium hydroxide (0.5 mole NaOH) while stirring and passing in argon gas through a bubbler. The pH changes from 2 to about 7. The mixture is then boiled for two hours in the presence of argon. The black solid chitosan-encapsulated magnetic material is filtered off, washed with distilled water, and dried.
This provides a simple and economical method of producing the large quantities of chitosan-encapsulated magnetite (magnetite-chitosan powder) which would be required in large industrial, governmental or agricultural operations.
Tests were performed to demonstrate the efficiency of the chitosan in removing heavy metal ions and nitrate ions from aqueous solutions.", "score": 28.808064041453537, "rank": 32}, {"document_id": "doc-::chunk-8", "d_text": "In particular, chitosan blended with PVA has been reported to have good mechanical and chemical properties and, as a topic of great interest, has been extensively studied in the biomedical field.38–40 The enhanced property has been attributed to the interactions between chitosan and PVA in the blend through hydrophobic side-chain aggregation and intermolecular and intra-molecular hydrogen bonds41,42 as shown in Figure 1.
Electrospun PVA and PVA/chitosan scaffolds were observed by SEM at 3000× magnification. Figures 2 and 3 show the SEM micrographs of the electrospun PVA and PVA/chitosan scaffolds, respectively.
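For bench planning, the co-precipitation recipe above (0.1 mol FeCl2, 0.1 mol FeCl3, 0.5 mol NaOH) converts directly into weighable masses via moles × molar mass. A quick sketch assuming anhydrous salts (the hydrated forms commonly sold would need their own molar masses):

```python
# Molar masses in g/mol, assuming anhydrous salts (hydrates would differ)
MOLAR_MASS = {"FeCl2": 126.75, "FeCl3": 162.20, "NaOH": 40.00}
recipe_mol = {"FeCl2": 0.1, "FeCl3": 0.1, "NaOH": 0.5}

masses = {compound: moles * MOLAR_MASS[compound]
          for compound, moles in recipe_mol.items()}
for compound, grams in masses.items():
    print(f"{compound}: {grams:.2f} g")
```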
As can be seen in Figure 3, in PVA/chitosan blend (weight ratio 90/10), the average fiber diameter was found to be 221 nm with a range of 94–410 nm; while in PVA alone, the average fiber diameter was 744 nm with a range of 395–1105 nm (Figure 2). Similar observations have been made by Lin et al43 and Ignatova et al44 who investigated a series of PVA/chitosan blend nanofibrous membranes at different weight ratios and found a decrease in the average diameter of the nanofibers with increasing the chitosan content.\nIt should be noted that in electrospinning, the fiber diameter is dependent on the viscosity and charge of the solution. Typically, when the viscosity is increased, the diameter is increased proportionally. Chitosan affects not only the viscosity but also the charge density at the surface of the ejected jet through its cationic polyelectrolytic property. It increases the charge density at the surface of the jet which in turn increases the elongation force and decreases the diameter of the fiber.45\nWhen the grayscale image was converted to binary form by ImageJ software, various layers of nanofibers could be seen by applying different thresholds. Figure 4 shows the configuration of histograms of PVA and PVA/chitosan nanofibrous scaffolds. As previously described by Ghasemi-Mobarakeh et al,34 the results are not dependent upon the magnification or histogram of images. Various layers in most image magnifications and histograms allow calculation of porosity.\nAnalysis was performed on various layers of nanofibers by application of thresholds (see Figure 5).", "score": 27.801979231686875, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "Protein adsorption through Chitosan–Alginate membranes for potential applications\n- Dennise A. Murguía-Flores†1,\n- Jaime Bonilla-Ríos†1,\n- Martha R. Canales-Fiscal†1 and\n- Antonio Sánchez-Fernández1Email author\n© Murguía-Flores et al. 
2016
Received: 16 October 2015
Accepted: 31 March 2016
Published: 30 April 2016
Chitosan and Alginate were used as biopolymers to prepare membranes for protein adsorption. The network requires a cross-linker able to form bridges between polymeric chains. Viscopearl-mini® (VM) was used as a support to synthesize them. Six different types of membranes were prepared using the main compounds of the matrix: VM, Chitosan of low and medium molecular weight, and Alginate.
Experiments were carried out to analyze the interactions within the matrix, and improvements were found against porous cellulose beads. SEM characterization showed dispersion in the compounds. According to TGA, thermal behaviour remains similar for all compounds. Mechanical tests demonstrate that the modulus of the composites increases for all samples, with the major impact on materials containing VM. The adsorption capacity results showed that, with the removal of globular protein, as the adsorbed amount increased, the adsorption percentage of Myoglobin from Horse Heart (MHH) decreased. Molecular electrostatic potential studies of Chitosan–Alginate have been performed by density functional theory (DFT) and ONIOM calculations (our own N-layered integrated molecular orbital and molecular mechanics), which model large molecules by defining two or three layers within the structure that are treated at different levels of accuracy, at the B3LYP/6-31G(d) and PM6/6-31G(d) levels of theory, using the PCM (polarizable continuum model) solvation model.
Keywords: Cellulose beads, Chitosan, Sodium alginate, Adsorption, Filtration, Membrane
Polymeric materials constitute a fast-growing area within the global economy, confirmed by the continuous and dynamic production of plastics.
Because mineral raw materials are a limited resource, and for reasons of environmental protection, new sources of raw materials are being sought to produce polymers .", "score": 27.72972038949445, "rank": 34}, {"document_id": "doc-::chunk-10", "d_text": "They remained underutilized for soft tissue engineering; however, they have recently attracted significant attention as possible scaffolding materials for cartilage and tendon tissue repair [92, 93]. In particular, chitosan has gained tremendous importance as a scaffolding material in the field of soft tissue engineering, especially tendon regeneration, exhibiting a hydrophilic nature, superior mechanical strength, and better cell attachment and proliferation properties as compared with the hydrophobic polyesters PGA and PLA . Chitosan is a linear polysaccharide, the deacetylated product of chitin, composed of randomly distributed N-acetyl-D-glucosamine and β-1–4-linked D-glucosamine units. Its enhanced cell attachment, proliferation, differentiation, highly porous structure, and ECM production make chitosan a suitable candidate as a scaffolding material for tendon injury. In particular, chitosan was found to exhibit superior biofunctionality because of the presence of the N-acetylglucosamine moiety, an analogue of glycosaminoglycan, which provides enhanced adhesion capacity for growth factors and other proteins . Porous chitosan scaffolds were designed with microchannels to engineer patellar tendon tissue, exhibiting optimal results in terms of histological and biomechanical scores . The combination of chitosan with other polysaccharides has also been explored: the combination of chitosan with hyaluronan (HA), an essential component of the ECM, enhanced mechanical capability and cell migration, adhesion, and differentiation .
The hyaluronan-chitosan scaffold also improved the production of collagen type I in the rotator cuff regenerated tendon [35, 93, 95].\nAnother polysaccharide, alginate, can be used in combination with chitosan as a scaffolding material because it contains D-glucuronic acid that is considered an analogue of glycosaminoglycans having similar biological activities. Chitosan-alginate hybrid scaffold showed significantly enhanced cell adhesion to tenocytes and production of ECM, predominantly made up of collagen type I . Similarly, combination of nanohydroxyapatite (n-HA) particles with fibrin, chitin, gelatin, PCL, PLGA, PLA, and polyamide-based composite scaffolds has been explored for tendon repair [97, 98].", "score": 27.356570057732185, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "CHITOSANHC is a natural product resulting from a treatment of Chitin, in order to make this highly reactive and soluble in water. (See here the production process)\nThe product is effective in:\n- Land, Horticulture and Turf Amenities (sport fields) as a stimulator for the natural defensive mechanism of the plants.\n- Water purification as flocculant, adsorbent, surface-active agent.\n- Remediation of contaminated soils by heavy metals and pesticides, as chelating agent and dechlorinating agent.\nCHITOSANHC offers many possibilities thanks to its many properties, its natural origin and its reasonable price.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-3", "d_text": "The resultant mixture is then treated with sodium hydroxide to remove the proteins and other contaminants or impurities therefrom, followed by water washing and drying; thus producing the desired chitin flakes.\nOn the other hand, chitosan is a deacetylated product of chitin and can be readily prepared from chitin in the form of white flakes by, for example, treating chitin with an alkali. Chitosan is commercially available from, for example, Kyowa Yushi Kogyo K.K. 
(Japan), under the tradename of "Flonac-N".
The derivatives of chitin and chitosan usable in the present invention include the following water-soluble compounds derived from chitin and chitosan.
(1) Water-soluble oligomers of chitin or chitosan obtained by depolymerizing chitin or chitosan into low-molecular-weight species (Note: provided that they have a polymerization degree of the glucosamine unit of greater than 1).
These oligomers may be prepared by conventional depolymerization methods, for example, nitrite decomposition, formic acid decomposition, chlorine decomposition (see Japanese Kokai No. 60-186504), and enzyme or microorganism decomposition.
(2) Water-soluble partially deacetylated chitins, preferably having a degree of deacetylation of 40 to 60%.
These derivatives can be prepared by controlling the degree of deacetylation of chitin by the method disclosed in, for example, Japanese Kokai No. 53-47479.
(3) Salts of organic or inorganic acids of chitosan.
Typical examples of the organic acids are acetic acid, malic acid, citric acid, and ascorbic acid, and typical examples of the inorganic acids are hydrochloric acid, sulfuric acid, and phosphoric acid.
(4) Water-soluble derivatives obtained by introducing hydrophilic groups into chitin or chitosan.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-1", "d_text": "The Chitosan, Alginate, and Cellulose biopolymers may have the potential to be used as low-cost raw materials, since they represent widely available and environmentally friendly resources that seem attractive for use not only in medicine and tissue engineering (TE) but in other fields as well. Biodegradable polymers produced from renewable resources represent plastics that may contribute to the enhancement of natural environment protection [4–7]. Porous matrices have been generated from biomaterials including collagen , gelatin, silk , alginate , and Chitosan .
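The 40–60% deacetylation range in item (2) above fixes the average mass of a monomer unit, since deacetylated (glucosamine) and acetylated (N-acetylglucosamine) residues are mixed along the chain. A sketch of that relationship; the residue masses are approximate anhydro-unit values, not figures from the source:

```python
# Approximate anhydro-residue masses, g/mol
GLCN = 161.16    # D-glucosamine (deacetylated) residue
GLCNAC = 203.19  # N-acetyl-D-glucosamine (acetylated) residue

def mean_residue_mass(dd: float) -> float:
    """Average monomer mass for a degree of deacetylation dd in [0, 1]."""
    return dd * GLCN + (1.0 - dd) * GLCNAC

for dd in (0.4, 0.5, 0.6):
    print(f"DD = {dd:.0%}: {mean_residue_mass(dd):.1f} g/mol per residue")
```

The linear interpolation also runs the other way: measuring the average residue mass (e.g. by titration or elemental analysis) lets one back out the degree of deacetylation.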
Alginate is a natural linear polysaccharide copolymer produced by brown algae and bacteria. It is widely used because of its non-toxicity, biodegradability, high biocompatibility, and ability to form strong thermo-resistant gels, and it finds wide use in medical applications such as TE . Cellulose is mostly used in the paper, textile and medical industries . Chitosan has excellent chemical properties, such as adsorption, due to the number of reactive hydroxyl groups available, reactive amino groups, and a flexible polymer chain structure [17, 18]. However, its use as an adsorbent brings some drawbacks, such as low surface area or porosity, high cost, and poor chemical and mechanical properties [19, 20]. Physical or chemical modifications have been studied, such as copolymerization, grafting, or cross-linking processes [2, 21–24].
The conjunction of different biopolymers is an extremely attractive, inexpensive and advantageous method to obtain new structural adsorbent materials .
Materials such as fly ash, silica gel, zeolites, lignin, seaweed, wool wastes, agricultural wastes, clay materials, and sugar cane bagasse, among others, have been extensively used for protein removal, due to their sorption sites .
Cellulose-based composite hydrogels blended with various biopolymers can create novel materials for special applications [26–32]. The widespread application of porous materials is not limited to adsorbents for small active molecules. Various polysaccharide hydrogels have been employed for the entrapment of enzymes [33–40]. Furthermore, specific pore structures and tunable morphology allow the construction of affinity probes for various macromolecules .", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-18", "d_text": "- F.-L. Mi, H.-W. Sung, S.-S. Shyu, C.-C. Su, and C.-K. Peng, “Synthesis and characterization of biodegradable TPP/genipin co-crosslinked chitosan gel beads,” Polymer, vol. 44, no. 21, pp. 6521–6530, 2003.
- S. Rodrigues, A.
M. R. Costa, and A. Grenha, “Chitosan/carrageenan nanoparticles: effect of cross-linking with tripolyphosphate and charge ratios,” Carbohydrate Polymers, vol. 89, no. 1, pp. 282–289, 2012.\n- L. Keawchaoon and R. Yoksan, “Preparation, characterization and in vitro release study of carvacrol-loaded chitosan nanoparticles,” Colloids and Surfaces B, vol. 84, no. 1, pp. 163–171, 2011.\n- Y. Xu, Z. Wen, and Z. Xu, “Chitosan nanoparticles inhibit the growth of human hepatocellular carcinoma xenografts through an antiangiogenic mechanism,” Anticancer Research, vol. 30, no. 12, pp. 5103–5109, 2009.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-7", "d_text": "Description of such kind of system should have some important properties: (i) obtained spontaneously under exceptionally mild conditions without involving high temperatures and organic solvents, (ii) has a valuable drug loading capacity and provides a continuous and sustainable release of the encapsulated drug for several days, and (iii) has a pH-sensitive behavior. The properties basically concern optimizing general conditions of chitosan nanoparticle production and the feasibility of drug entrapment and release with regard to cancer treatment applications. In terms of appropriate localized drug delivery by means of tumor treatment, chitosan and 5-FU encapsulated chitosan nanoparticles were produced, due to the certain properties mentioned above.\n3.1. Chitosan Nanoparticle Production Conditions\nChitosan’s ability of quick gelling on contact with polyanions relies on the formation of inter- and intramolecular crosslinkages mediated by polyanions . The preparation of chitosan nanoparticles is based on an ionic gelation interaction between positively charged chitosan and negatively charged tripolyphosphate (TPP) at room temperature immediately [25, 26]. TPP is a multivalent anion that possesses negative charges; chitosan in acidic solution has amino groups that can undergo protonation. 
During the preparation process, TPP is electrostatically attracted to the protonated amino groups in chitosan, producing ionically crosslinked chitosan nanoparticles [8, 27]. The size and size distribution of the chitosan nanoparticles depend largely on the concentrations of the chitosan and TPP solutions. To obtain chitosan particles at the nanoscale, the concentrations of chitosan and TPP should be controlled within a suitable range .
The mean size and size distribution of each batch of chitosan nanoparticle suspension were analyzed using the Zetasizer analysis. Table 1 presents the effects of chitosan and TPP concentrations on particle size and on the ability to produce nanoparticles, as judged by visual observation. Previously it had been shown that the appearance of the solution changed when a certain amount of TPP ions was added to the chitosan solution, from a clear to an opalescent solution, which indicated a change of the physical state of the chitosan to form nanoparticles, then microparticles, and eventually aggregates . In this study, samples were visually analyzed and identified as clear solution, opalescent suspension, and aggregates (Table 1).", "score": 26.715132068058224, "rank": 40}, {"document_id": "doc-::chunk-1", "d_text": "It is hoped that the result of this preliminary investigation will suggest the possible mechanism of antigen delivery by either entrapment of the antigen (AMA1) in and/or adsorption onto the chitosan nanoparticles, and subsequently decorating the nanoparticle-bearing antigen with other immune boosters such as uric acid (UA) and interleukin-12 (IL-12) for a longer-lasting immunity against the disease13-15.
MATERIAL & METHODS
Chitosan, low molecular weight (degree of acetylation >95%, molecular weight 400 kDa), Poly(sodium 4-styrenesulfonate) (PSS, Mw.
4.3 kDa), Rhodamine 123, aluminium hydroxide gel, acetic acid (purity 99%), formaldehyde, sodium borohydride, sodium hydroxide, N-methyl-2-pyrrolidinone, sodium iodide, methyl iodide (Sigma, USA), Sodium carbonate Sigma ultra and phosphate buffer solution (sodium phosphate dibasic Sigma ultra, 99%) were used as procured, without further purification.
Derivatization of water-soluble chitosan
Synthesis of N-methylchitosan
The method of Belalia et al.12 was adopted. Briefly, chitosan (4 g) was dissolved in 1% (v/v) aqueous acetic acid (400 ml). The solution was then filtered to eliminate the impurities, and formaldehyde was added (3-fold excess relative to the amine groups of chitosan). The solution was stirred at ambient temperature for 30 min. NaBH4 (0.33 g) was then added, and the solution was stirred at ambient temperature for 60 min. The pH was adjusted to 10 using 1 M NaOH. After filtration, the system was washed to reach pH 7. Finally, the excess of reagent was eliminated by extraction with a Soxhlet, using ethanol/diethyl ether (80:20 v/v).", "score": 26.357536772203648, "rank": 41}, {"document_id": "doc-::chunk-12", "d_text": "Toxicol Vitr 2005, 19: 215-220. 10.1016/j.tiv.2004.10.007
- Borgens RB: Restoring function to the injured human spinal cord. Heidelberg, Germany: (Monograph) Springer-Verlag; 2003.
- Cho Y, Shi R, Borgens RB: Chitosan produces potent neuroprotection and physiological recovery following traumatic spinal cord injury. J Exp Biol 2010, 213: 1513-1520. 10.1242/jeb.035162
- Chavasit V, Torres JA: Chitosan-poly(acrylic acid): mechanism of complex formation and potential industrial applications. Biotechnol Prog 1990, 6: 2-6. 10.1021/bp00001a001
- Lee DW, Lim H, Chong HN, Shim WS: Advances in chitosan material and its hybrid derivatives: a review. Open Biomater J 2009, 1: 10-20.
10.2174/1876502500901010010
- Aranaz I, Mengibar M, Harris R, Panos I, Miralles B, Acosta N, Galed G, Heras A: Functional characterization of chitin and chitosan. Curr Chem Biol 2009, 3: 203-230.
- Pavinatto FJ, Caseli L, Pavinatto A, Dos Santos DS Jr, Nobre TM, Zaniquelli ME, Silva HS, Miranda PB, De Oliveira ON Jr: Probing chitosan and phospholipid interactions using Langmuir and Langmuir-Blodgett films as cell membrane models. Langmuir 2007, 23: 7666-7671. 10.1021/la700856a
- Pavinatto FJ, Pavinatto A, Caseli L, Santos DS Jr, Nobre TM, Zaniquelli ME, Oliveira ON Jr: Interaction of chitosan with cell membrane models at the air-water interface. Biomacromolecules 2007, 8: 1633-1640.", "score": 25.91258298100024, "rank": 42}, {"document_id": "doc-::chunk-27", "d_text": "doi: 10.1128/iai.69.10.6123-6130.2001
- 21. Singla AK, Chawla M (2001) Chitosan: some pharmaceutical and biological aspects - an update. J Pharm Pharmacol 53: 1047–1067. doi: 10.1211/0022357011776441
- 22. Ranaldi G, Marigliano I, Vespignani I, Perozzi G, Sambuy Y (2002) The effect of chitosan and other polycations on tight junction permeability in the human intestinal Caco-2 cell line. The Journal of Nutritional Biochemistry 13: 157–167. doi: 10.1016/s0955-2863(01)00208-x
- 23. Mao H, Roy K, Troung-Le VL, Janes KA, Lin KY, et al. (2001) Chitosan-DNA nanoparticles as gene carriers: synthesis, characterization and transfection efficiency. J Control Release 70: 399–421. doi: 10.1016/s0168-3659(00)00361-8
- 24. Borges O, Borchard G, Verhoef J, de Sousa A, Junginger H (2005) Preparation of coated nanoparticles for a new mucosal vaccine delivery system. Int J Pharm 299: 155–166. doi: 10.1016/j.ijpharm.2005.04.037
- 25. Borges O, Cordeiro-da-Silva A, Romeijn SG, Amidi M, Sousa AD, et al. (2006) Uptake studies in rat peyer's patches, cytotoxicity and release studies of alginate coated chitosan nanoparticles for mucosal vaccination.
J Control Release 114: 348–358. doi: 10.1016/j.jconrel.2006.06.011
- 26. Neutra MR, Kozlowski PA (2006) Mucosal vaccines: the promise and the challenge. Nat Rev Immunol 6: 148–158. doi: 10.1038/nri1777
- 27.", "score": 25.75009484002263, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "It stabilizes cell membrane structure by increasing enzymatic and non-enzymatic antioxidants. It protects cell membranes from oxidative stress and improves plant cell structure and permeability. In this way, protection against oxidative stress takes place. It is used in post-harvest techniques to extend the shelf life of fruits and the vase life of cut flowers. It does not create harmful effects, making it an environmentally safe bio-pesticide. According to the Environmental Protection Agency and the United States Department of Agriculture, it can be used in organic farming.
Due to its safe use, it is considered an environmentally friendly bio-pesticide. It enhances the innate ability of plants to cope with fungal and bacterial infections, so it has a very important role in the mitigation of biotic stress. The natural bio-control active ingredients, chitin/chitosan, are found in the shells of crustaceans such as lobsters, crabs and shrimp, and in many other organisms, including insects, fungal cell walls and the hard outer skeletons of shellfish. It is one of the most abundant biodegradable materials in the world. It is used in the paint industry because it has properties as a fining agent.
Oregon University recommended its application in the food industry. Thin edible films/coatings of chitosan were added inside polythene sheets to cover foods. In this way it protects the food from bacterial infection. This process is usually called bio-printing. It did not change the taste, appearance or smell of the food items. Chitosan was spread on strawberries for 20 days.
It reduced microbial agents by 99%.
It is used in the treatment of waste water and in clarifying wine because it has phosphate-binding ability and removes phosphate and micronutrients like Zn, Fe and Mn. It is also used in the improvement of fiber quality. It is applied as a finishing/pigment agent, in the dyeing of cotton, and as a formaldehyde scavenger in clothes. The need of the day is to use our sea creatures to produce it locally, as it is as valuable as a pearl.
Mujahid Ali (NCSU, US), Dr. C.M. Ayyub and Dr. Zaid Mustafa IHS (UAF)", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-2", "d_text": "It also helps maintain their shape and structure or maintain a favorable environment.
- It’s one of the major components of the cell walls of algae and fungi.
- Cellulose is used to produce paperboard, paper, cardboard, cardstocks, etc.
- It is used to make electrical insulation paper.
- It is used to make gun powder and bio-fuel.
- It’s used as a stabilizer in drugs.
It is a linear, long-chain polymer of N-acetyl-D-glucosamine (a derivative of glucose) residues/units. These units are joined together by beta 1-4 glycosidic linkages. Chitin is the second most abundant natural biopolymer after cellulose.
It’s biodegradable in the natural environment, where the reaction is catalyzed by chitinase enzymes secreted by some bacteria and fungi. Chitin is closely related to cellulose, which is a linear unbranched chain of glucose units, rather than the N-acetyl-glucosamine units present in chitin.
- The N-deacetylation of chitin results in the formation of chitosan, which is a crystalline, cationic, and hydrophilic polymer.
Chitosans have excellent gelation and film-forming properties.\n- Chitin is a polymorphic polysaccharide, present in three different crystalline modifications alpha, beta, and gamma.\n- Chitins naturally occur in the exoskeleton of arthropods like shrimps, insects, lobsters, crabs, and the cell walls of certain fungi and molds.\n- It helps in the stability and rigidity of the cell wall structures of organisms in which it is present.\n- It’s an integral part of an insect’s peritrophic membrane (present in its midgut). The membrane helps insects in digestion processes, protects them from mechanical damages, toxins, and pathogenic attacks.\n- It’s used to make fertilizers that are organic, non-toxic, and increases crop productivity by enhancing soil health.\n- Chitins in the diet helps to reduce cholesterol absorption efficiency.\n- Chitins are used to make a surgical thread that facilitates wound healing.\nIt’s made of repeating units of D-glucose that are joined together by alpha-linkages. Starch is one of the most abundant polysaccharides found in plants.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-2", "d_text": "The removal of nickel (II) from the aqueous solutions through adsorption on to biopolymer sorbents, such as calcium alginate, chitosan-coated calcium alginate, and chitosan-coated silica, was studied using equilibrium batch and column flow techniques. According to the study, the maximum monolayer adsorption capacity of calcium alginate, chitosan-coated calcium alginate, and chitosan-coated silica, as obtained from the Langmuir adsorption isotherm, was found to be 310.4, 222.2, and 254.3 mg/g, respectively .\nPolymer/montmorillonite nanocomposites have improved properties such as excellent mechanical properties, thermal stability, gas barrier, and flame retardation in comparison to conventional composites. 
The isomorphous substitutions of Al3+ for Si4+ in the tetrahedral layer and Mg2+ for Al3+ in the octahedral layer have resulted in a negatively charged surface on montmorillonite. With these structural characteristics, montmorillonite has excellent sorption properties and possesses available sorption sites within its interlayer space as well as large surface area and more narrow channels inside. Produced chitosan coated montmorillonite for the removal of Cr(VI) .\nThis work describes the synthesis of the composite material based on chitosan and natural minerals clinoptilolite and saponite, for their use as a biosorbents. Obtained composites were characterized by physicochemical methods, such as thermal analysis and textural properties. Adsorption properties of the obtained hybrid material were studied with respect to highly toxic heavy metals: cadmium(II), lead(II), copper(II), zinc(II), and iron(III), which are common contaminants of industrial wastewaters. Conditions connected with the optimum pH value of the medium, interaction time, and adsorption capacity were studied.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-2", "d_text": "[22–25], and which has been put forward as a sustainable alternative in crop protection ) in combination with amino acids have been tested against F. culmorum, both in vitro and in vivo, with a view to assessing if an enhanced behavior resulting from synergies between these natural products can be attained.\n2. Materials and Methods\n2.1. Reagents and Fungal Isolates\nHigh molecular weight chitosan (CAS No. 9012-76-4; 310,000–375,000 Da) was purchased from Hangzhou Simit Chemical Technology Co., Ltd. (Hangzhou, China). Citric acid (CAS 77-92-9; 99.5%) and Tween® 20 (CAS 9005-64-5) were supplied by Sigma-Aldrich Química S.A. (Madrid, Spain). Neutrase® 0.8 L enzyme was supplied by Novozymes (Bagsvaerd, Denmark). 
Potato dextrose agar (PDA) and potato dextrose broth (PDB) were purchased from Becton, Dickinson and Company (Franklin Lakes, NJ, USA). Cysteine (Cys, CAS No. 52-90-4), glycine (Gly, CAS No. 56-40-6), proline (Pro, CAS No. 147-85-3) and tyrosine (Tyr, CAS No. 60-18-4), all with 99% purity, were purchased from Panreac S.L.U. (Barcelona, Spain).\nThe Fusarium culmorum strain used in the present study (CECT 20493) was obtained from the Spanish Type Culture Collection (CECT; Valencia, Spain). The chemotype was 3-ADON.\n2.2. Preparation of Chitosan Oligomers and Bioactive Solutions\nChitosan oligomers (COS) were prepared following the procedure described by Buzón-Durán et al. The amino acid-only bioactive solutions were prepared by dissolving the amino acids in distilled water, without further purification, at an initial concentration of 3000 µg·mL−1.", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-60", "d_text": "Chitosan preparations for wounds and burns: Antimicrobial and wound-healing effects. Expert Rev. Anti-Infect. Ther 2011, 9, 857–879.\n- Bitar, K.N.; Raghavan, S. Intestinal tissue engineering: Current concepts and future vision of regenerative medicine in the gut. Neurogastroenterol. Motil 2012, 24, 7–19.\n- Walker, P.A.; Aroom, K.R.; Jimenez, F.; Shah, S.K.; Harting, M.T.; Gill, B.S.; Cox, C.S., Jr. Advances in progenitor cell therapy using scaffolding constructs for central nervous system injury. Stem Cell Rev 2009, 5, 283–300.\n- Zhang, Z.; Hu, J.; Ma, P.X. Nanofiber-based delivery of bioactive agents and stem cells to bone sites. Adv. Drug Deliv. Rev 2012, 64, 1129–1141.\n- Tan, H.; Ma, R.; Lin, C.; Liu, Z.; Tang, T. Quaternized chitosan as an antimicrobial agent: Antimicrobial activity, mechanism of action and biomedical applications in orthopedics. Int. J. Mol. Sci 2013, 14, 1854–1869.
- Valencia-Chamorro, S.A.; Palou, L.; Del Río, M.A.; Pérez-Gago, M.B. Antimicrobial edible films and coatings for fresh and minimally processed fruits and vegetables: A review. Crit. Rev. Food Sci. Nutr 2011, 51, 872–900.\n- Cagri, A.; Ustunol, Z.; Ryser, E.T. Antimicrobial edible films and coatings. J. Food Prot 2004, 67, 833–848.\n- Vargas, M.; González-Martínez, C. Recent patents on food applications of chitosan. Recent Pat. Food Nutr.", "score": 25.4226379712937, "rank": 48}, {"document_id": "doc-::chunk-1", "d_text": "-Chitosan “smart” hydrogels: pH and thermal\n-In vitro/in vivo evaluations of chitosan delivery systems incorporated with nystatin for the treatment of oral mucositis\nMatejka Z., Parschová H., Jelínek L., Ruszová P.\n-Selective uptake of (Mo, V, W, As)-oxoanions by crosslinked chitosan; beads vs. fibers\nEVENING TOUR BY BOAT TO THE MUNKHOLMEN ISLAND (OPTIONAL)\nThursday, June 27\nChitosans for Gene Delivery\nTechnical Applications of chitosans\nFukamizo T., Defaye J., McIntyre D., Vogel\nChandrkrachang S., Anal A.K., Wanichpongpan\n-Permeability of drug and plasma bound drug\nglucopyranosyl fluoride oligosaccharides; a\nTakahashi S., Takahashi K., Ogasawara H.,\nDanielsen S., Vårum K.M., Stokke B.T.\n-Influence of chitosan molecular parameters on\nglucosamine 2-epimerases (renin binding proteins)\n-Alternative splicing of Bombyx mori\n-New cosmetic delivery system based on chitosan\nAcosta N., Aranaz I., Galed G., Peniche H.,\n-Aphroproteins – new proteins with chitinase\nactivity and their application in agricultural\n-Controlled release of tramadol hydrochloride\nCONFERENCE BANQUET AT RESTAURANT PALMEHAVEN, BRITANNIA HOTEL\nFriday, June 28\nEnzymatic Coupling of Natural Products to Chitosan to Create Functional Derivatives\nRegulatory Aspects and Standardization of Chitin and Chitosan\nHirano S., Matsushita T., Taniguchi M., Yasuba\n-The isolation and identification of
3,6-di-O-methyl-D-glucosamine in the acid hydrolysate of permethylated chitin resistant to hydrolysis by chitinase and lysozyme\nKristiansen A., Holme H., Isaksen M., Leth-\nJaworska M.M., Shao Y., Konieczna E.", "score": 25.000000000000068, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "Chitosan is a linear polysaccharide and is obtained from chitin. Marine life is responsible for its commercial production. Its unique chemistry underlies its application in the food industry, agriculture and plant sciences, the textile industry, wastewater treatment and biomedical sciences. It is abundantly available in the form of biodegradable materials.\nChitosan is present in nature as decomposed molecules of chitin in soil and water. Chitosan hemostatic products have been sold to the United States army and recently used by the United Kingdom army. United States and United Kingdom forces, along with allied forces, used chitosan bandages for injuries during the wars in Iraq and Afghanistan.\nChitosan is hypoallergenic and has natural antibacterial properties, which support its use in bandages. Chitosan’s property of blocking nerve endings also allows it to reduce pain. Chitosan can clot blood rapidly, which is why it is approved in the United States and Europe for use in bandages and other hemostatic agents. Chitosan hemostatic products have been shown by the United States Navy to quickly stop bleeding, to reduce blood loss, and to result in 100% survival of otherwise lethal arterial wounds in swine.\nIt is applied in the pharmaceutical industry as a filler in tablets, to improve drug dissolution, and to mask bitter tastes in solutions taken by mouth. The FDA has approved several of its forms, which are used to help reduce obesity. It reduces high blood cholesterol because it is a fat binder, and it is used to treat Crohn’s disease.
It is also used to treat complications that kidney failure patients often face during dialysis, including high cholesterol, anemia, reduced muscle elasticity, poor appetite, and insomnia. It can be applied directly to the gums to treat inflammation that can lead to periodontitis. Chewing gum containing chitosan helps prevent dental cavities. In surgery, it helps “donor tissue” rebuild itself, so surgeons sometimes apply it directly to places from which they have taken tissue to be used elsewhere. The use of chitosan depends on the degree of deacetylation of chitin.\nIn agriculture and plant sciences it has been applied for the mitigation of abiotic stress. It is very effective in mitigating heat stress, cold stress, drought stress and salt stress. Its foliar application leads to higher vegetative and reproductive growth of plants, so it is categorized as a plant growth enhancer.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-5", "d_text": "Chitosan is also a superior material for encapsulating iron oxide particles formed directly from iron salts due to its polyelectrolytic character.\nThe chitosan functional magnetic microspheres were found to be very effective in the purification of contaminated water, because the amino and hydroxyl functional groups of chitosan form chelates with a large number of anions and cations. The recovery of the chitosan particles complexed with the salts is achieved simply by the application of a magnetic field to the contaminated water. This method can also be used to concentrate or remove heavy metal ions as well as nitrate ions.\nAlthough only one preferred embodiment of the present invention has been disclosed, it is to be realized by those skilled in the art that various modifications can be made to the disclosed magnetic particles without departing from the scope of the invention.
For example, chitosan is capable of encapsulating resin materials having magnetic material dispersed therein or may be used to encapsulate other magnetic materials. These examples are exemplary only and are not intended to limit the present invention, which is defined by the following claims.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-4", "d_text": "Chitin is rapidly biodegraded (Zobell & Rittenberg, 1983). The annual worldwide chitin production has been estimated to be 10¹¹ tons, and industrial use has been estimated to be 10,000 metric tons (Kurita, 2006). Chitin has excellent biocompatibility, nontoxicity, and wound-healing properties, so it has been widely applied in medical and health-care fields for applications such as release capsules for drugs, man-made kidney membranes, anticoagulants, and immunity accelerants (Cho, Cho, Chung, Yoo, & Ko, 1999; Matsuyama, Kitamura, & Naramura, 1999; Muzzarelli, 1977; Okamoto et al., 2002; Onishi, Nagai, & Machida, 1997; Tokura et al., 1990). However, chitin is not soluble in common solvents due to the existence of intra- and intermolecular hydrogen bonds in chitin and its highly crystalline structure. This strongly restricts many applications of chitin. The world market for chitin is currently estimated at 1000 to 2000 tons. Japan, with an estimated production of 1.5 million lb (6.8 × 10⁵ kg) per year, is by far the largest user. Europe may use 500,000 lb, whereas the United States appears to use about 150,000 to 200,000 lb (70–90 × 10³ kg), but this estimate may be too high. Prices quoted range from $3.50 to $4.50 per pound for chitin and are $6.50 to $100 per pound for chitosan.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-19", "d_text": "Coelho TC, Laus R, Mangrich AS, de Fávere VT, Laranjeira MCM. Effect of heparin coating on epichlorohydrin cross-linked chitosan microspheres on the adsorption of copper (II) ions. React Funct Polym.
2007;67:468–475.\nGuan B, Wu W, Ni Z, Lai Y. Removal of Mn(II) and Zn(II) ions from flue gas desulfurization wastewater with water-soluble chitosan. Sep Purif Technol. 2009;65:269–274.\nWang X, Du Y, Liu H. Preparation, characterization and antimicrobial activity of chitosan–Zn complex. Carbohydr Polym. 2004;56:21–26.\nMitchell GR, Geri A. Molecular organisation of electrochemically prepared conducting polypyrrole films. J Phys D Appl Phys. 1987;20:1346–1353.\nKassim A, Block H, Davis FJ, Mitchell GR. Anisotropic films of polypyrrole formed electrochemically using a non-planar dopant. J Mater Chem. 1992;2:987–988.\nMohammad F. Compensation behaviour of electrically conductive polythiophene and polypyrrole. J Phys D Appl Phys. 1998;31:951–959.\nGhosh M, Barman A, Das A, Meikap AK, De SK, Chatterjee S. Electrical transport in paratoluenesulfonate doped polypyrrole films at low temperature. J Appl Phys. 1998;83:4230.\nHakansson E, Lin T, Wang H, Kaynak A. The effects of dye dopants on the conductivity and optical absorption properties of polypyrrole. Synth Met. 2006;156(18–20):1194–1202.\nCuero R, Lillehoj E. N-carboxymethylchitosan: algistatic and algicidal properties. Biotechnol Lett. 1990;4:275.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-13", "d_text": "10.1021/bm0701550\n- Pavinatto FJ, Caseli L, Oliveira ON: Chitosan in nanostructured thin films. Biomacromolecules 2010, 11: 1897-1908. 10.1021/bm1004838\n- Quemeneur F, Rinaudo M, Pépin-Donat B: Influence of molecular weight and pH on adsorption of chitosan at the surface of large and giant vesicles. Biomacromolecules 2008, 9: 396-402. 10.1021/bm700943j\n- Chattopadhyay DP, Inamdar MS: Aqueous behaviour of chitosan. Int J Polym Sci 2010.\n- Fang N, Chan V, Mao HQ, Leong KW: Interactions of phospholipid bilayer with chitosan: effect of molecular weight and pH.
Biomacromolecules 2001, 2: 1161-1168. 10.1021/bm015548s\n- Fang N, Chan V: Chitosan-induced restructuration of a mica-supported phospholipid bilayer: an atomic force microscopy study. Biomacromolecules 2003, 4: 1596-1604. 10.1021/bm034259w\n- Chen B, Zuberi M, Borgens RB, Cho Y: Affinity for, and localization of, PEG-functionalized silica nanoparticles to sites of damage in an ex vivo spinal cord injury model. J Biol Eng 2012, 6: 18. 10.1186/1754-1611-6-18\n- Lentz B: PEG as a tool to gain insight into membrane fusion. Eur Biophys J 2007, 36: 315-326. 10.1007/s00249-006-0097-z\n- Shi R, Borgens RB: Acute repair of crushed guinea pig spinal cord by polyethylene glycol.", "score": 23.642463227796483, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "Chitosan – properties and application\nChitosan is one of the most popular ingredients in slimming preparations. But this is not its only use. Chitosan is used as a fertilizer in agriculture and as a material for dressings that accelerate wound healing, and perhaps soon objects of everyday use will be produced from it.\nChitosan is an extremely versatile material obtained from chitin – a building component of the shells of marine crustaceans. The result is a biodegradable polymer with wide application in medicine, veterinary medicine, cosmetics and environmental protection.\nThe chemical structure of chitosan is similar to that of hyaluronic acid. Thanks to this structure, it inhibits, among other things, the growth of microorganisms and accelerates wound healing, which is the result of stimulation of immune cells. Preparations with chitosan form a delicate protective layer on the skin that protects against harmful external factors and effectively retains water in the skin. Its strong moisturizing properties also mean that it is often used as an ingredient in skincare cosmetics. Due to its complete biodegradability and high nitrogen content, it
is also used in agriculture as a fertilizer for seed treatment and as a biological plant protection product combating fungal infections.\nGreat hopes are also associated with the use of chitosan for the production of ecological products of everyday use. It has a biopolymer structure that allows three-dimensional shapes to be formed from it. It is already used for the production of drug capsules. And what comes of its use in slimming preparations?\nChitosan as a fat blocker\nIn the case of chitosan, the mechanism of action supporting slimming is the binding of fats in the gastrointestinal tract. As a result, the degree of digestion and absorption in the small intestine is reduced. In the normal digestion process, fats are gradually broken down by enzymes secreted by the pancreas (so-called lipases), aided by bile from the gall bladder. As a result, small particles of fatty acids reach the small intestine and are absorbed into the body.\nChitosan binds large fat molecules already in the upper part of the gastrointestinal tract, preventing them from being broken down by enzymes and thus making them difficult to absorb. As a result, they are excreted in the faeces. In other words, this substance does not accelerate the burning of fats; it only eliminates them from the digestive tract.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-4", "d_text": "Alginate-coated chitosan nanoparticles were recently described; they have the particular advantage of being constructed under very mild conditions (aqueous medium and mild agitation), which is a great benefit for the encapsulation of proteins, peptides and antigens.
Moreover, Borges and co-workers have demonstrated that these coated nanoparticles were able to be taken up by rat Peyer's patches, which is one of the essential features to internalize, deliver and target the intact antigen to specialized immune cells from the gut-associated lymphoid tissue (GALT) .\nHerein, we proposed to evaluate the in vitro characteristics of chitosan nanoparticles associated with protein, for which the antigen Rho1-GTPase of S. mansoni was chosen, to be used as a candidate oral vaccine against schistosomiasis. Once in vitro characterization showed favorable data, the in vivo role was evaluated through mouse immunization. Added to that, chitosan was evaluated not only for its performance as a delivery system but also for its contribution due to its adjuvant properties.\nAdditionally, since a mixed Th1/Th2 response seems to be optimal for a schistosomiasis vaccine, bacterial CpG motifs (which induce the production of IL-12 by DCs and macrophages that express the appropriate TLR9) can be used as adjuvants to boost immunity, with the aim of inducing a Th1-like immune response that can prevent the normal Th1 to Th2 transition. With this in mind, we investigated the co-administration of synthetic unmethylated oligodeoxynucleotides containing immunostimulatory CpG motifs (CpG ODN), a TLR-9 ligand, with chitosan nanoparticles.\nIn this work, we report a new strategy of vaccination against schistosomiasis based on chitosan nanoparticles associated with the SmRho antigen plus the adjuvant CpG, and coated with sodium alginate.\nMaterials and Methods\nChitosan (CH) (Chimarin DA 13%, apparent viscosity 8 mPa.s) was supplied by Medicarb, Sweden. CH was purified by filtration of an acidic chitosan solution and subsequent alkali precipitation (1 M NaOH).
The purified polymer was characterized by gel permeation chromatography (GPC) and Fourier Transform-Infrared Spectroscopy (FT-IR).", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-2", "d_text": "Chitin–calcium alginate composite fibers were prepared from a solution of high molecular weight chitin extracted from shrimp shells and alginic acid in the ionic liquid 1-ethyl-3-methylimidazolium acetate by dry-jet wet spinning into an aqueous bath saturated with CaCO3. The fibers exhibited a significant proportion of the individual properties of both calcium alginate and chitin. Ultimate stress values were close to values obtained for calcium alginate fibers, and the absorption capacities measured were consistent with those reported for current wound care dressings. Wound healing studies (rat model, histological evaluation) indicated that chitin–calcium alginate covered wound sites underwent normal wound healing with re-epithelialization and that coverage of the dermal fibrosis with hyperplastic epidermis was consistently complete after only 7 days of treatment. Using a single patch per wound per animal during the entire study, all rat wounds achieved 95–99% closure by day 10 with complete wound closure by day 14.\nEven with the high costs of environmental exposure controls, as well as the chance of control failures, options for industries wanting to implement sustainability through frameworks such as green chemistry are not yet cost-effective. We foresee a “green” industrial revolution through the use of transformative technologies that provide cost-effective and sustainable products which could lead to new business opportunities. Through example, we promote the use of natural and abundant biopolymers such as chitin, combined with the solvating power of ionic liquids (ILs), as a transformative technology to develop industries that are overall better and more cost-effective than current practices. 
The use of shellfish waste as a source of chitin for a variety of applications, including high-value medical applications, represents a total byproduct utilization concept with realistic implications in crustacean processing industries.\nChemisorption of carbon dioxide by 1-ethyl-3-methylimidazolium acetate ([C2mim][OAc]) provides a route to coagulate chitin and cellulose from [C2mim][OAc] solutions without the use of high-boiling antisolvents (e.g., water or ethanol). The use of CO2 chemisorption as an alternative coagulating process has the potential to provide an economical and energy-efficient method for recycling the ionic liquid.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-5", "d_text": "Relation between turbidity and pH value\nEffect of contact time on turbidity removal.\nKaolinite turbidity was 5.5 NTU, chitosan dosage 1 g/L, stirring 300 rpm, pH 6, temperature 25 °C. Time was the only factor varied, and turbidity decreased gradually with contact time.\nTable 6. Relation between turbidity removal and contact time (min)\nFigure 6. Relation between turbidity removal and contact time (min)\nEffect of temperature on turbidity removal.\nKaolinite turbidity was 5.5 NTU, the temperature of the turbid water was varied from 25 to 43 °C, chitosan dosage 1 g/L, stirring 300 rpm, pH 6, and contact time 30 min. The rate of turbidity removal rose with increasing temperature.\nTable 7. Relation between turbidity removal (%) and temperature (°C)\nFigure 7. Relation between turbidity removal (%) and temperature (°C)\nOverall, the results of this study provide evidence that chitosan had significant effects on the efficiency of kaolinite turbidity reduction from test water, and that chitosan dose had significant effects on that reduction. Only low doses were needed for effective removal of kaolinite turbidity from water: a chitosan dose of 1 g/L achieved 96.9% turbidity removal.
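The removal efficiencies reported in this passage follow from comparing initial and residual turbidity, R = (T0 − T) / T0 × 100. A short sketch using the study's initial kaolinite turbidity of 5.5 NTU; the residual turbidities and dose grid are illustrative assumptions, chosen only so that the 1 g/L dose reproduces the reported 96.9% removal:

```python
# Percent turbidity removal: R = (T0 - T) / T0 * 100
# T0 = initial turbidity (NTU), T = residual turbidity after coagulation
# and settling. T0 = 5.5 NTU is the study's initial kaolinite turbidity;
# the residual values per dose are invented illustration numbers.

def removal_percent(t0, t):
    """Turbidity removal efficiency in percent."""
    return (t0 - t) / t0 * 100.0

t0 = 5.5  # initial turbidity, NTU
residual = {0.5: 1.10, 1.0: 0.17, 2.0: 0.80}  # dose (g/L) -> residual NTU
best_dose = min(residual, key=lambda dose: residual[dose])
for dose, t in sorted(residual.items()):
    print(f"{dose} g/L chitosan -> {removal_percent(t0, t):.1f}% removal")
print(f"optimum dose: {best_dose} g/L")
```

The non-monotonic dose response mirrors the passage's observation that overdosing restabilizes the dispersion and lowers removal.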
Higher chitosan doses were less effective and provided poorer kaolinite turbidity removal than lower doses, since overdosing of chitosan can result in destabilization of the dispersion. The optimal conditions were determined on the basis of turbidity removal. In addition, the principal factors affecting coagulation were determined throughout the study, including optimal coagulant dosage, time, pH and temperature. The optimum pH found for coagulation to remove settled water turbidity was 6.0; sedimentation and paper filtration lowered treated water turbidity by 96.9% for chitosan.\n1. A.I. Cissouma, F. Tounkara, M. Nikoo, N. Yang, and X. Xu. Physico-Chemical Properties and Antioxidant Activity of Roselle Seeds Extracts, Advance J. of Food Science and Technology, 5(11), 1483-1489 (2013)\n2.", "score": 22.27027961050575, "rank": 58}, {"document_id": "doc-::chunk-15", "d_text": "- F. Liu and L. Huang, “Development of non-viral vectors for systemic gene delivery,” Journal of Controlled Release, vol. 78, no. 1–3, pp. 259–266, 2002.\n- H. Katas, Z. Hussain, and T. C. Ling, “Chitosan nanoparticles as a percutaneous drug delivery system for hydrocortisone,” Journal of Nanomaterials, vol. 2012, Article ID 372725, 11 pages, 2012.\n- A. G. Luque-Alcaraz, J. Lizardi, F. M. Goycoolea et al., “Characterization and antiproliferative activity of nobiletin-loaded chitosan nanoparticles,” Journal of Nanomaterials, vol. 2012, Article ID 265161, 7 pages, 2012.\n- M. N. Kumar, R. A. Muzzarelli, C. Muzzarelli, H. Sashiwa, and A. J. Domb, “Chitosan chemistry and pharmaceutical perspectives,” Chemical Reviews, vol. 104, no. 12, pp. 6017–6084, 2004.\n- K. M. Vårum, M. M. Myhr, R. J. N. Hjerde, and O. Smidsrød, “In vitro degradation rates of partially N-acetylated chitosans in human serum,” Carbohydrate Research, vol. 299, no. 1-2, pp. 99–101, 1997.\n- S. B. Rao and C. P.
Sharma, “Use of chitosan as a biomaterial: studies on its safety and hemostatic potential,” Journal of Biomedical Materials Research, vol. 34, pp. 21–28, 1997.\n- M. J. Alonso and A. Sánchez, “The potential of chitosan in ocular drug delivery,” Journal of Pharmacy and Pharmacology, vol. 55, no. 11, pp. 1451–1463, 2003.\n- K. Y. Lee, “Chitosan and its derivatives for gene delivery,” Macromolecular Research, vol. 15, no. 3, pp. 195–201, 2007.", "score": 21.816891455731035, "rank": 59}, {"document_id": "doc-::chunk-2", "d_text": "Progress on chemistry and application of chitin and its derivatives, Monografia, t. XVI, Polskie Towarzystwo Chitynowe, s. 89.\nRavi Kumar N. V. 2000. A review of chitin and chitosan applications. Reactive & Functional Polymers, 46, s. 1-27.\nMuzzarelli R.A.A., Muzzarelli C. 2005. Chitosan Chemistry: Relevance to the Biomedical Science. Springer Heidelberg, Berlin.\nMing-Tsung Y., Joan-Hwa Y., Yeng-Leun M. 2008. Antioxidant properties of chitosan from crab shells. Carbohydrate Polymers, 74, 4, s. 840-844.\nObara K., Ishihara M., Ishizuka T., Fujita M., Ozeki Y., Maehara T., Saito Y., Yura H., Matsui T., Hattori H., Kikuchi M., Kurita A. 2003. Photocrosslinkable chitosan hydrogel containing fibroblast growth factor-2 stimulates wound healing in healing-impaired db/db mice. Biomaterials, 24, 3437-3444.\nSchmitt F., Lagopoulos L., Käuper P., Rossi N., Busso N., Barge J., Wagnières G., Laue C., Wandrey C., Juillerat-Jeanneret L. 2010. Chitosan-based nanogels for the selective delivery of photosensitizers to macrophages and improved retention in and therapy of joints. Journal of Controlled Release, 144(2), 242-250.\nSahm Inan D., Unver Saraydm D. 2013. Investigation of the wound healing effects of chitosan on FGFR3 and VEGF immunolocalization in experimentally diabetic rats.
International Journal of Biomedical Materials Research, 1 (1) 1-8.", "score": 21.816891455731035, "rank": 60}, {"document_id": "doc-::chunk-37", "d_text": "The worldwide shellfish harvest is estimated to be able to supply 50,000 tons of chitin annually.63 The harvest in the United States alone could produce over 15,000 tons of chitin each year.64\nChitin has a wide range of uses but that is the subject of another book. Chitosan was discovered in 1859 by Professor C. Rouget.65 It is made by cooking chitin in alkali, much like the process for making natural soaps. After it is cooked, the links of the chitosan chain are made up of glucosamine units. Each glucosamine unit contains a free amino group. These groups can take on a positive charge, which gives chitosan its amazing properties. The structure of chitosan is represented schematically in Figure 2. Research on the uses of chitin and chitosan flourished in the 1930s and early 1940s, but the rise of synthetic fibers, like the rise of synthetic medicines, overshadowed the interest in natural products. Interest in natural products, including chitin and chitosan, gained a resurgence in the 1970s and has continued to expand ever since. Uses of Chitosan Some of chitosan's major uses—both industrial and health and nutritional—are listed in Tables 5 and 6.\nChitosan has been used for about three decades in water purification processes.67 When chitosan is spread over oil spills it holds the oil mass together, making it easier to clean up the spill. Water purification plants throughout the world use chitosan to remove oils, grease, heavy metals, and fine particulate matter that cause turbidity in waste water streams.\nFat Binding/Weight Loss\nLike some plant fibers, chitosan is not digestible; therefore it has no caloric value.
No matter how much chitosan you ingest, its calorie count remains at zero. Unlike other fibers, chitosan’s unique properties give it the ability to significantly bind fat, acting like a “fat sponge” in the digestive tract. Table 7 shows a comparison of chitosan and other natural fibers and their ability to inhibit fat absorption.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-19", "d_text": "Application of chitosan, a natural aminopolysaccharide, for dye removal from aqueous solutions by adsorption processes using batch studies: a review of recent literature. Prog Polym Sci 2008;33:399–447. Chiou MS, Li HY. Adsorption behavior of reactive dye in aqueous solution on chemical cross-linked chitosan beads. Chemosphere 2003;50:1095–105. Chiou MS, Li HY. Equilibrium and kinetic modeling of adsorption of reactive dye on cross-linked chitosan beads. J Hazard Mater 2002;B93:233–48. Juang RS, Wu FC, Tseng RL. Use of chemically modified chitosan beads for adsorption and enzyme immobilization. Adv Environ Res 2002;6:171–7. Shimizu Y, Taga A, Yamaoka H. Synthesis of novel crosslinked chitosans with a higher fatty diacid diglycidyl and their adsorption abilities toward acid dyes. Adsorpt Sci Technol 2003;21:439–49. Zhang GQ, Zha LS, Zhou MH, Ma JH, Liang BR. Preparation and characterization of pH- and temperature-responsive semi-interpenetrating polymer network hydrogels based on linear sodium alginate and crosslinked poly(N-isopropylacrylamide). J Appl Polym Sci 2005;97:1931–40. Zhai ML, Zhang YQ, Ren J, Yi M, Ha HF, Kennedy JF. Intelligent hydrogels based on radiation induced copolymerization of N-isopropylacrylamide and kappa-carrageenan. Carbohyd Polym 2004;58:35–9. Guo BL, Gao QY. Preparation and properties of a pH/temperature-responsive carboxymethyl chitosan/poly(N-isopropylacrylamide) semi-IPN hydrogel for oral delivery of drugs. Carbohyd Res 2007;342:2416–22.
Zhang JT, Cheng SX, Zhuo RX.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-6", "d_text": "2 (4 and 5)) has shown a shift of the band at 1530 cm−1 of –NH2 deformation vibrations in comparison with the spectrum of the initial chitosan. An intensive absorbance at 1090 and 1000 cm−1 represents the Si–O stretching vibrations. The absorbance bands at 610 and 660 cm−1 correspond to stretching vibrations of Si–O and are shifted in comparison with the FTIR spectra of the initial minerals. Absorbance bands at 556 and 463 cm−1 and at 518 and 466 cm−1 represent the deformation vibrations of Al–O–Si and Si–O–Si in chitosan-clinoptilolite and chitosan-saponite, respectively. It was observed that the characteristic bands at 1633 and 1645 cm−1 in the FTIR spectra of chitosan-clinoptilolite and chitosan-saponite, respectively, describe azomethine bonds C=N, formed after glutaraldehyde treatment .\nThe influence of the polymeric coating on the thermal properties of the mineral surfaces was studied by DSC-MS analysis. These methods were also applied in order to determine the mass ratio of the chitosan coating on the mineral surfaces.\nFor the TG curve of chitosan (Fig. 3a), two decomposition regions can be found. The initial weight loss of 11% from room temperature (30 °C) up to 190 °C corresponds to the release of adsorbed water . The second recorded decomposition region (190–1000 °C) corresponds entirely to the weight loss of chitosan. Figure 3b, c presents the TG, DTG, and DSC curves of pure clinoptilolite and saponite.\nComparing the thermogravimetric curves of the chitosan-clinoptilolite and chitosan-saponite composites with the curves of the initial chitosan and pure minerals (Fig. 3d, e), one can observe that the maximum of each decomposition region of the composites occurred at lower temperatures than for the corresponding process of the pure minerals.
For instance, coated clinoptilolite and saponite begin to lose water at T max of 123 and 108 °C, whereas the pure minerals do so at 159 and 145 °C, respectively (Table 1).", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-9", "d_text": "No. | Chitins, Chitosans | pH | β × 10³\n-- | Blank | 6-8 | 1.2\n1 | Powdered chitosan (500 mesh under) | 5.5-7.5 | 14.9\n2 | Chlorine decomposed low M.W. chitosan | 6-7 | 11.0\n3 | Nitrite decomposed low M.W. chitosan | 6-7 | 14.4\n4 | Ethyleneglycol chitin | 5.5-6.5 | 6.7\n5 | Phosphated chitin | 6.3-7.3 | 9.7\n6 | Deacetylated (40-60%) chitin | 6-7.2 | 11.0\n-- | Natural saliva | 6-7 | 7\nAs is clear from the results shown in Table 1 and FIG. 1, the artificial saliva compositions according to the present invention containing chitins and chitosans have an excellent buffering capacity within a pH range of 5.5 or more. Contrary to this, the blank composition not containing chitins and/or chitosans has only a poor buffering capacity at a pH of 5.5 or more.\nA 50 g amount of the powdered chitosan listed in Table 1 was ground to a fine powder and 6 g of potassium chloride, 4.22 g of sodium chloride, 0.122 g of magnesium chloride, 0.551 g of calcium chloride, and 1.71 g of potassium phosphate (dibasic) were added thereto.
The mixture was thoroughly mixed to prepare an oral saliva composition in the form of a powder.\nAn artificial saliva composition in the form of granules was prepared from the components used in Example 2, except that 50 g of the chlorine decomposed low-molecular weight chitosan listed in Table 1 was used instead of 50 g of the powdered chitosan.\nThe mixture was granulated by adding an appropriate amount of water, followed by drying.\nAn artificial saliva composition in the form of tablets was prepared from the components used in Example 2, except that 50 g of the nitrite decomposed low-molecular weight chitosan listed in Table 1 was used instead of 50 g of the powdered chitosan. To the mixture, 1 g of crystalline cellulose (i.e., Avisel®) was further added.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "Copolymerization of chitosan selectively grafted with polyethylene glycol was studied in this paper. The graft copolymers can be expected to serve as DNA delivery vectors because of the improved hydrophilicity of chitosan and the preservation of the amino groups on the chitosan chains. Chitosan was selectively grafted with monomethoxy polyethylene glycol (mPEG-OH), whose hydroxyl group was combined with hexamethylene diisocyanate (HDI) to form a novel macromonomer, monomethoxy polyethylene glycol isocyanate (mPEG-NCO), containing an isocyanate group of higher chemical activity, in strictly water-free ethyl glyoxalate solution. The selective graft copolymerization of chitosan with mPEG-NCO was conducted under heterogeneous conditions as a suspension in dimethylformamide. The hydrophilic copolymers of chitosan were prepared by condensation reaction of the isocyanate group on mPEG-NCO with hydroxyl groups on the chitosan chains, because the amino groups on the chitosan chains were protected by complex formation with copper ions. The effect of reaction conditions on the grafting extent was discussed.
Swelling properties of mPEG-g-CS were investigated. The graft copolymer mPEG-g-CS was characterized by infrared spectroscopy. The results showed that copper ions were very effective in protecting the amino groups from the condensation reaction. The degree of swelling in water increases with the grafting ratio; the maximum swelling degree reached above 132% at a grafting ratio of about 270%. The graft copolymer is partially soluble in pure water.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "P. Siva Kumari*, Gowramma A2, P. Ratna Kala1, B. Prathibha1, E. Susanna1, A. Mounika1\n1Jagan's College of Pharmacy, Jangala Kandriga, SPSR Nellore, Andhra Pradesh, India\n2Asst. Professor, Dept. of Pharmaceutics, Jagan's College of Pharmacy, Jangala Kandriga, SPSR Nellore, Andhra Pradesh, India.\nABSTRACT\nChitin and chitosan are unique natural marine polysaccharides that have attracted the interest of many researchers. Chitin and its derivatives exhibit a variety of physicochemical and biological properties. In the present study, the properties of chitin and chitosan are reviewed, together with their potential as promising biomaterials. Chitosan nanoparticles have gained attention as drug delivery carriers because of their better stability, low toxicity, simple and mild preparation method, and versatile routes of administration. Their sub-micron size is not only suitable for parenteral application but also applicable to mucosal routes of administration, i.e., oral, nasal, and ocular mucosa, which are non-invasive routes.\nKeywords: Chitosan, nanoparticles, chitin, applications", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-11", "d_text": "Crini G, Badot PM (2008) Application of chitosan, a natural aminopolysaccharide, for dye removal from aqueous solutions by adsorption processes using batch studies: a review of recent literature.
Prog Polym Sci 33:399–447\nBudnyak TM, Tertykh VA, Yanovska ES (2013) Chitosan and its derivatives as sorbents for effective removal of metal ions: review. Surface 5(Suppl 20):118–134\nWan Ngah WS, Teong LC, Hanafiah Ma KM (2011) Adsorption of dyes and heavy metal ions by chitosan composites: a review. Carbohydr Polym 83:1446–1456\nBudnyak TM, Tertykh VA, Yanovska ES, Kołodynska D, Bartyzel A (2015) Adsorption of V(V), Mo(VI) and Cr(VI) oxoanions by chitosan-silica composite synthesized by Mannich reaction. Adsorpt Sci Technol 6–8:645–657\nBudnyak T, Yanovska E, Ischenko M, Tertykh V (2014) Adsorption of heavy metals by chitosan crosslinked with glutaraldehyde. Visnyk of KNU. Chemistry 1:35–38\nBudnyak TM, Pylypchuk IV, Tertykh VA, Yanovska ES, Kolodynska D (2015) Synthesis and adsorption properties of chitosan-silica nanocomposite prepared by sol-gel method. Nanoscale Res Lett 87:1–10\nBudnyak T, Tertykh V, Yanovska E (2014) Chitosan immobilized on silica surface for wastewater treatment. Mater Sci (Medžiagotyra) 20(2):177–182\nDarder M, Aranda P, Ruiz-Hitzky E. Chitosan-clay bio-nanocomposites. In: Avérous L, Pollet E, editors. Environmental silicate nano-biocomposites. Green Energy and Technology. Springer-Verlag, London, 2012. p.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-1", "d_text": "Results: In-depth characterization revealed a sulfoalkylation of chitosan mainly on its sterically favored O6-position. Moreover, comparably high average degrees of substitution with sulfoethyl groups (DSSE) of up to 1.05 were realized in reactions with NaBES. The harsh reaction conditions led to significant chain degradation; consequently, SECS exhibits molar masses of <50 kDa. During the subsequent microwave reaction, stable nanoparticles were obtained only from highly substituted products because they provide a charge density sufficient to prevent the particles from aggregating.
High-resolution transmission electron microscopy images reveal that the silver core (diameter ~8 nm) is surrounded by a 1–2 nm thick SECS layer. These core-shell particles, and SECS itself, exhibit an inhibiting activity, especially on cofactor Xa.\nConclusion: This model system enabled the investigation of structure–property correlations in the course of nanoparticle formation and of the anticoagulant activity of SECS, and may lead to completely new anticoagulants on the basis of chitosan-capped nanoparticles.\nKeywords: chitosan ethylsulfonate, silver nanoparticles, antithrombotic activity, cofactor Xa", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "It is made from Chitosan, a 100% natural food fibre derived from the exoskeletons (shells) of deep sea Alaska King Crabs and Spiny Crabs in Japan.\nChitosan has been used for over 30 years by water purification plants as a process for detoxifying water.\nChitosan has a strongly positive polarity (electric charge), meaning that it acts like a powerful magnet to attract and bind to negatively charged hazardous substances. The resulting "bound" substance particles will be removed during washing and will not be absorbed by our bodies even if consumed.\nFor All Types of Food, some examples are:\n- Rice – Removes polishing chemicals (e.g.
talc), heavy metal residues, chemical fertilizers & pesticides.\n- Poultry & Meat – Eliminates chemical residues (e.g. antibiotics). Inhibits bacterial growth for longer lasting freshness.\n- Seafood (e.g. fish, prawns, crabs, etc.) – Removes artificial preservatives & heavy metal residues (e.g. copper, mercury, lead, etc.). Eliminates & inhibits bacterial growth for longer lasting freshness. Reduces unpleasant fishy taste & smell.\n- Fruit & Vegetables – Removes chemical pesticides & fertilisers, heavy metal residues, etc. Prevents moisture loss for longer lasting freshness.\n- Dry Food, etc. (e.g. mushrooms, dates, beans, peas, dried shrimp, anchovies) – Removes artificial preservatives, heavy metal residues & other chemical additives.\nDirections for Use:\n- Rinse food with water to remove any soil and solid impurities.\n- Add 1 dash of Ecomax Chitosan Food Purifier to 1 litre of water and mix evenly.\n- Soak food for at least 10 minutes. No rinsing required.\nIngredients: Water-soluble Chitosan, malic acid", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-7", "d_text": "The main decomposition of the composite materials occurred at Tmax 280 and 277 °C, whereas pure clinoptilolite and saponite did not show decomposition at these temperatures, in contrast with the native chitosan, which is characterized by a loss of more than 50% at Tmax 295 °C. Thus, the coated minerals are able to lose water faster than the pure minerals, and the decomposition temperature of the polymer in the hybrid materials decreased by 15 °C (5 %) for the chitosan-clinoptilolite composite and by 18 °C (6 %) for the chitosan-saponite composite.\nComparing the results of the thermogravimetric analysis for the initial minerals and the obtained composites, it was confirmed that all polymers involved in the reaction were successfully introduced into the hybrid materials.
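The temperature shifts quoted in the TGA discussion above reduce to simple arithmetic; a quick check (the Tmax values are taken from the passage, with rounding as in the text):

```python
# Decomposition maxima (Tmax, degrees C) from the TGA discussion above.
T_NATIVE_CHITOSAN = 295.0
T_COMPOSITE = {
    "chitosan-clinoptilolite": 280.0,
    "chitosan-saponite": 277.0,
}

for name, t in T_COMPOSITE.items():
    shift = T_NATIVE_CHITOSAN - t                # absolute lowering, degrees C
    pct = shift / T_NATIVE_CHITOSAN * 100.0      # relative lowering, %
    print(f"{name}: Tmax lowered by {shift:.0f} C ({pct:.0f} %)")
```

This reproduces the figures in the text: 15 °C (5 %) for the clinoptilolite composite and 18 °C (6 %) for the saponite composite.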
Thus, each composite contains 10 % polymer and 90 % mineral (91 mg of chitosan per g of composite).\nFigure 4 presents the nitrogen adsorption/desorption isotherms measured at 77 K for the initial minerals and for the minerals coated by chitosan. The shape of the isotherm corresponds to the Langmuir isotherm, type II of the International Union of Pure and Applied Chemistry (IUPAC) classification. This type of isotherm is commonly observed in nonporous or macroporous materials, in which the steep increase of the adsorbed quantity at low relative pressure indicates unrestricted monolayer and multilayer adsorption. It is seen from the isotherms that the monolayer coverage is completed at relative pressures up to 0.45. The shape of the isotherms confirms the prevalent presence of cylindrical pores. According to the results of the surface area analysis, pure clinoptilolite and saponite have BET surface areas of 22 and 41 m2/g, respectively, which decreased upon modification of their surfaces by the polymer to 5 and 10 m2/g for partially crosslinked chitosan-clinoptilolite and partially crosslinked chitosan-saponite. The presence of mesopores and macropores is confirmed by the pore size distribution diagrams for the initial and modified minerals (Fig. 5), obtained from the adsorption branch of the isotherm using the BJH method. The SEM images showed uniform coating of the surface of the minerals by chitosan (Figs.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-8", "d_text": "Figure 14. Chloride removal efficiency of blended coagulants.\nAlthough many studies have used synthetic water in their experiments, this work chose to use raw water collected directly from the surface source. It is therefore important to consider that natural compounds may cause variations in the water composition, which interfere with the treatment process.
All those factors are taken into account when evaluating the obtained results.\nThe characteristics of the surface water used in this study show that it has apparent color, turbidity, solids, and a relatively high content of compounds absorbing in the UV (254 nm). It is noticeable that the water has high turbidity and color.\nThe effectiveness of alum, commonly used as a coagulant, is severely affected by low or high pH. Under optimum conditions, the white flocs were large and rigid and settled well in less than 10 min. This finding is in agreement with other studies at optimum pH [24,25]. The optimum pH was 7, similar to the results obtained by Divakaran. At high turbidity, a significant improvement in residual water turbidity was observed. The supernatant was clear after about 20 min of settling. Flocs were larger and settling time was lower. The results showed that above the optimum dosage, the suspensions showed a tendency to restabilize.\nThe effectiveness of chitin in the present study in removing various contaminants at varied pH, both individually and in blended form, can be traced to the literature: chitin has been studied as a biosorbent to a lesser extent than chitosan; however, the greater natural resistance of the former, due to its greater crystallinity, could be a great advantage. Besides, the possibility of controlling the degree of acetylation of chitin permits enhancing its adsorption potential by increasing its primary amine group density. Recent studies on the production of chitin-based biocomposites and their application as fluoride biosorbents have demonstrated the potential of these materials for use in continuous adsorption processes.
Moreover, these biocomposites can remove many different contaminants, including cations, organic compounds, and anions.\nChitosan has a high affinity for residual oil and excellent properties such as biodegradability, hydrophilicity, biocompatibility, adsorption capacity, flocculating ability, polyelectrolyte character, antibacterial activity, and a capacity for regeneration in many applications.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "- Nano Express\n- Open Access\nNatural Minerals Coated by Biopolymer Chitosan: Synthesis, Physicochemical, and Adsorption Properties\nNanoscale Research Letters volume 11, Article number: 492 (2016)\nThe Erratum to this article has been published in Nanoscale Research Letters 2017 12:24\nNatural minerals are widely used in treatment technologies, as mineral fertilizers, as food additives in animal husbandry, and in cosmetics because they combine valuable ion-exchanging and adsorption properties with unique physicochemical and medical properties. Saponite (saponite clay) of the Ukrainian Podillya belongs to the class of bentonites, a subclass of the layered magnesium silicate montmorillonite. Clinoptilolites are aluminosilicates with a framework structure. In our work, we have coated the biopolymer chitosan onto the surfaces of natural minerals of Ukrainian origin — Podilsky saponite and Sokyrnitsky clinoptilolite. Chitosan-mineral composites have been obtained by crosslinking the biopolymer adsorbed on the saponite and clinoptilolite surfaces with glutaraldehyde. The obtained composites have been characterized by physicochemical methods such as thermogravimetric/differential thermal analyses (DTA, DTG, TG), differential scanning calorimetry, mass analysis, nitrogen adsorption/desorption isotherms, scanning electron microscopy (SEM), and Fourier transform infrared (FTIR) spectroscopy to determine possible interactions between the silica and the chitosan molecule.
The adsorption of microquantities of the cations Cu(II), Zn(II), Fe(III), Cd(II), and Pb(II) from aqueous solutions by the obtained composites and the initial natural minerals has been studied. The sorption capacities and kinetic adsorption characteristics of the adsorbents were estimated. The obtained results show that the ability of chitosan to coordinate the heavy metal ions Zn(II), Cu(II), Cd(II), and Fe(III) is less than or equal to its ability to retain ions of these metals in the pores of the minerals without forming chemical bonds.\nApplication of chitinous products in wastewater treatment has received considerable attention in recent years in the literature [1–8].", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-5", "d_text": "(ii) Carboxymethyl chitin or chitosan ##STR3## wherein n: >1, preferably 3-100\nR4 : --H or --COCH3\nR5 : --H, --CH2 COOH, --CH2 COONa, --CH2 COOK or --CH2 COONH4\nR6 : --H, --CH2 COOH, --CH2 COONa, --CH2 COOK or --CH2 COONH4\nIn the above definition, R5 and R6 are not --H at the same time, and R4, R5, and R6 may be the same or different in each bonded D-glucosamine skeleton.\nThe carboxymethyl chitin or chitosan can be prepared by reacting alkali chitin or chitosan with monochloroacetic acid under ambient temperature and ambient pressure.\n(iii) Phosphorated chitin or chitosan ##STR4##\nFurthermore, R7, R8, R9, and R10 may be the same or different, respectively, in each bonded D-glucosamine skeleton.\nThe phosphorated chitin or chitosan can be prepared by reacting diphosphorus pentoxide with chitin or chitosan dissolved or suspended in methanesulfonic acid, while cooling.
This method was disclosed in, for example, Norio Nishi, Preparatory Papers II, page 570, at the 48th Autumn Annual Convention Lecture of the Japan Chemical Society.\n(iv) Sulfated chitin or chitosan ##STR5##\nR11, R12, R13, and R14 may be the same or different, respectively, in each bonded D-glucosamine skeleton.\nThe sulfated chitin or chitosan can be obtained by reacting an SO3-pyridine complex salt with chitin or chitosan activated in pyridine [see: M. L. Wolfrom et al., The Sulfonation of Chitosan, J. Am. Chem. Soc., 81, 1764-1766 (1959)].", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Why is Chitosan So Popular?\nChitosan is a food ingredient that has become very popular in recent years because of its favourable price. The reason this product is so popular is that it is easy to manufacture and comes with so many advantages for the skin that a comparable item would otherwise be available in the market only at a much higher price.\nThere are two major reasons why this ingredient is so popular. The first is that it is available on such a large scale that you can usually get a great deal of it for a low price. Chitosan is a water-soluble polymer that contains keratin as one of its main ingredients.\nThis means that it will not be easily depleted, so it can continue to be available for a long time without you having to worry about it running out. There is also no need to buy a large amount, because you can buy very small amounts from the manufacturers who produce it, and therefore there is a low level of waste.\nOne concern is that, because of the way the polymer is formed, you could be left with residual monomers.
This means that you could be left with more molecules of the same basic monomer than when you started. As you continue to use it, you could develop a form of acne around your lips, and it may also cause other skin problems, as well as possible cancer and other health problems.\nAnother possible side effect is the risk of contamination of the product, and how this would affect your health is a matter of opinion. However, there is no scientific evidence that the manufacture of Chitosan is at any kind of risk from contamination.\nChitosan is also known as one of the most hygroscopic compounds available today, and one of the reasons it has been so successful is that it can easily absorb moisture from the air. This is a huge benefit, as it allows it to work on any part of the body from head to toe.\nAlthough it is an incredibly versatile ingredient, there are other factors that are of interest to consumers.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-3", "d_text": "Sokyrnitskiy clinoptilolite of Ukrainian Zakarpattya has the general formula (Ca,Na,K2)Al2Si7O18·6H2O and the chemical content (in mass %): SiO2—76.07; Al2O3—12.4; K2O—2.80; CaO—2.09; Na2O—2.05; Fe2O3—0.90; FeO—0.76; TiO2—0.19; P2O5—0.12; MgO—0.07; MnO—0.07; SO3—0.08. Saponite of Ukrainian Podillya has the general formula (Ca0.5,Na)0.33(Mg,Fe)3(Si,Al)4O10(OH)2·4H2O. Chitosan originated from shrimp (Sigma-Aldrich, No. 417963; molecular weight from 190,000 to 370,000 Da; degree of deacetylation not less than 75%; solubility 10 mg/ml). All chemicals purchased from Sigma-Aldrich were of reagent grade.\nComposites chitosan-saponite and chitosan-clinoptilolite were obtained by impregnating 20 g of mineral (saponite or clinoptilolite) with 285 ml of chitosan solution at a concentration of 7 mg/ml in acetic acid (pH 2.6).
The mixture was placed in a flat-bottom flask and mixed with the magnetic stirrer MM-5 for 2 h. The obtained substance was dried at 50 °C. The obtained composites were then placed in 12.5 ml of a 0.25% aqueous solution of glutaraldehyde and heated at 50 °C for 2 h. This quantity of glutaraldehyde is sufficient for crosslinking 5% of the accessible amino groups of the polymer. The chitosan crosslinked on the surface of the minerals was washed with distilled water and dried at 50 °C. Thus, based on the theoretical mass ratio, the ratio of the organic to the mineral component of the obtained composite was chitosan:silica = 1:10.", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-1", "d_text": "Luis Díaz Soto": Agasut-Q, covered with chitosan (healing properties), and Agasut-QE, covered with chitosan and streptomycin (healing and antimicrobial properties). After pre-clinical and clinical trials were approved, both surgical thread types were introduced and successfully used in several Cuban hospitals.\nThe study, however, was not restricted to biomedicine. In co-operation with the Cuban National Centre for Agricultural and Livestock Health (CENSA), this group worked in "seed coating to boost farming yields as well as in encapsulation of somatic embryos to design artificial seeds".\nIn trials, tomato seeds of variety 1-17(140) were coated with chitosan.
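The chitosan:silica = 1:10 mass ratio stated in the composite synthesis above follows directly from the quantities used (20 g of mineral impregnated with 285 ml of 7 mg/ml chitosan solution); a quick arithmetic check:

```python
# Quantities from the composite synthesis described above.
mineral_g = 20.0                # saponite or clinoptilolite
chitosan_conc_mg_ml = 7.0       # chitosan in acetic acid, pH 2.6
solution_ml = 285.0

# Mass of chitosan deposited on the mineral.
chitosan_g = chitosan_conc_mg_ml * solution_ml / 1000.0   # ~2 g
print(f"chitosan deposited: {chitosan_g:.3f} g")

# Organic:mineral mass ratio, ~1:10 as stated in the text.
print(f"mass ratio chitosan:mineral = 1:{mineral_g / chitosan_g:.1f}")

# Polymer loading per gram of composite, ~91 mg/g as quoted
# elsewhere in the same article for this preparation.
loading_mg_g = chitosan_g * 1000.0 / (mineral_g + chitosan_g)
print(f"loading: {loading_mg_g:.0f} mg chitosan per g of composite")
```

The 91 mg/g loading figure quoted with the thermogravimetric results is consistent with these same quantities.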
Under laboratory conditions, treated seeds showed a significantly higher growth speed and percentage of successful germination when compared to non-treated seeds.\nIn Peniche's words, the research group concluded that "chitosan works as a bio-stimulant in tomato seed treatment by producing better seed germination and greater plant height, stem thickness and dry mass about a week earlier than usual".\nChitosan proved to be a natural polymer with great film-generating capacity, apart from other highly interesting properties: chitosan does not produce polluting substances, and it is non-toxic and bio-compatible.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-3", "d_text": "of kaolinite was added to 1 L of tap water. The suspension was stirred slowly at 20 rpm for 1 hr for uniform dispersion of the kaolinite particles. The suspension was then left to stand for 24 hr. This suspension was used as the stock solution for the preparation of turbid water samples of varying turbidities. Original pH = 7 and temperature 30 °C. The pH was controlled by adding either a strong acid (H2SO4) or a strong base (NaOH). Turbidity of the raw water was 1 NTU.\nFigure 3: Kaolinite structure\nPreparation of chitosan solution\nChitosan (deacetylated chitin: poly-[1-4]-β-glucosamine, (C6H11NO4)n, with minimum 85% deacetylation), prepared from crab shells, was obtained from the ACROS ORGANICS Company. It was in the form of a pale brown powder, soluble in dilute acetic and hydrochloric acids, with a molecular weight of 100,000-300,000. Chitosan powder (5 mg) was weighed into a glass beaker, mixed with 10 mL of 0.1 M HCl solution, and kept for about one hour to dissolve. It was then diluted to 500 ml with distilled water and stirred at 100 rpm with a magnetic stirrer until the solution was completely dissolved.
It was observed that chitosan solutions in acid undergo some changes in properties over time; the solutions were therefore prepared freshly before each set of experiments 13. Stock solution was stored at room temperature (25 °C).\nIn this study, each of the six test beakers was filled with 500 ml of synthetic water. Then kaolinite and chitosan were added (the coagulant was added with rapid mixing for 2 minutes at 100 rpm, followed by slow mixing for 30 minutes at 30 rpm). The mixer was turned off and the flocs were allowed to settle for 30 minutes 14. The samples were taken from the top 4 in of the suspension. Turbidity was measured on the settled water of each beaker, filtered through Whatman 40 (8 µm) filter paper, with a nephelometer; the turbidity became 5 NTU. Hardness measurements were conducted by the EDTA titrimetric method.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-10", "d_text": "Suzuki Y, Okamoto Y, Morimoto M, Influence of physicochemical properties of chitin and chitosan on complement activation. 2000, Carbohydr. Polym. 42, 307-310.\n[17.] Sathirakul K, How N.C, Stevens W.F. 1996, Chandrkrachang S, Application of chitin and chitosan bandages for wound healing, Adv Chitin Sci, 1, 490-492.\n[18.] Loke, W.K.; Lau, S.K.; Yong, L.L.; Khor, E.; Sum, C.K. 2000, J. Biomed. Mater. Res. 53, 8-17.\n[19.] Kim, H.J.; Lee, H.C.; Oh, J.S.; Shin, B.A.; Oh, C.S.; Park, R.D.; Yang, K.S.; Cho, C.S. 1999, J. Biomater. Sci., Polym. Ed. 10, 543-556.\n[20.] Collagens, in: Wound Care, C.T. Hess (Ed), Lippincott Williams and Wilkins, 2005, p. 196.\n[21.] Markets for Advanced Wound Care Technologies, BCC Research, Wellesley, 2009.\n[22.] Rao, S.B.; Sharma, C.P. 1997, J. Biomed. Mater. Res. 34, 21-28.\n[23.] Chandy, T.; Sharma, C.P. 1996, Effect of liposome-albumin coatings on ferric ion retention and release from chitosan beads, Biomaterials 17, 61-66.\n[24.] Aasen, T.; Belmonte, J.C.I. 2010, Nature Protocols 5, 371-382.\n[25.] Chen, G.P.; Ushida, Y.; Tateishi, T.
2002, Macromolecular Biosci. 2, 67-77.\n[26.] Hsieh, W.C.; Chang, C.P.; Lin, S.M. 2007, Colloids and Surfaces B: Biointerfaces 57, 250-255.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "Healthy Chitosan HCl Food Additives Chitosan Hydrochloride (31005)\n|FOB Price:||US $0.01-100 / KG|\n|Min. Order:||25 KG|\n|Min. Order||FOB Price|\n|25 KG||US $0.01-100/ KG|\n|Payment Terms:||T/T, Western Union, Money Gram, Bitcoin|\n- Model NO.: 31005\n- Customized: Customized\n- Suitable for: Adult\n- Purity: >96%\n- Chitosan Hydrochloride Appearance: Pale Yellow Powder\n- Chitosan Hydrochloride Deacetylation: 95%\n- Chitosan Hydrochloride Grade: Food Grade\n- Chitosan Hydrochloride Applications: Food & Medicine & Environment\n- Trademark: YC\n- Specification: 95%\n- HS Code: 30013006\n- Powder: Yes\n- Certification: GMP, HSE, ISO 9001, USP\n- State: Solid\n- Chitosan Hydrochloride Assay: 95%\n- Chitosan Hydrochloride Packing: 25kgs/Drum\n- Chitosan Hydrochloride Standard: Enterprise Standard\n- Chitosan Hydrochloride Catagory: Food Additives\n- Chitosan Hydrochloride Synonym: Chitosan HCl\n- Transport Package: 25kg/Drum\n- Origin: China\nChitosan Hydrochloride Product Name: Chitosan Hydrochloride\nChitosan Hydrochloride Synonym: Chitosan Hydrochloride; Chitosan HCl; ;\nChitosan Hydrochloride Content: 95%\nChitosan Hydrochloride Packing: 25KG/Drum\nChitosan Hydrochloride Appearance: Pale yellow powder\nChitosan Hydrochloride Catagory: Food additives\nChitosan Hydrochloride Applications:\n1. Food Field\nUsed as food additives, thickeners, preservatives fruits and vegetables, fruit juice clarifying agent, forming agent, adsorbent, and health food.\n2.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-0", "d_text": "Access to safe drinking water is important as a health and development issue at national, regional, and local levels. About one billion people do not have healthy drinking water. 
More than six million people (about two million of them children) die because of diarrhea caused by polluted water. Developing countries pay a high cost to import chemicals such as polyaluminium chloride and alum. This is why these countries need low-cost methods requiring little maintenance and skill. The use of synthetic coagulants is not regarded as suitable due to health and economic considerations. The present study aimed to investigate the effects of alum as coagulant in conjunction with bean, sago, and chitin as coagulants on the removal of color, turbidity, hardness, and Escherichia coli from water. A conventional jar test apparatus was employed for the tests. The study was taken up in three stages: initially with synthetic waters, followed by testing the efficiency of the coagulants individually on surface waters, and lastly testing blended coagulants. The experiments were conducted at three different pH conditions of 6, 7, and 8. The dosages chosen were 0.5, 1, 1.5, and 2 mg/l. The results showed that the turbidity decrease also provided a primary E. coli reduction. Hardness removal efficiency was observed to be 93% at pH 7 with a 1 mg/l dose of alum, whereas chitin was stable over the whole pH range, showing the highest removal at 1 and 1.5 mg/l at pH 7. In conclusion, by using natural coagulants, considerable savings in chemicals and sludge handling costs may be achieved.\nKeywords: Alum; Chitin; Sago; Bean; Coagulation; Turbidity\nThe explosive growth of the world's human population and subsequent water and energy demands have led to an expansion of standing surface water. Nowadays, concern about contamination of aquatic environments has increased, especially when water is used for human consumption. About one billion people do not have healthy drinking water.
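The removal figures quoted in the abstract above are percent-removal values; a minimal sketch of the calculation (the 300 → 21 mg/l pair is an invented example chosen to reproduce the reported 93%, not data from the study):

```python
# Percent removal, the standard jar-test metric:
# E = (C0 - C) / C0 * 100, with C0 the initial and C the residual value.
def removal_efficiency(c0, c):
    """Return percent removal given initial (c0) and residual (c) values."""
    return (c0 - c) / c0 * 100.0

# Illustrative: hardness of 300 mg/l as CaCO3 reduced to 21 mg/l
# corresponds to the 93% removal reported for 1 mg/l alum at pH 7.
print(f"{removal_efficiency(300.0, 21.0):.0f}% removal")
```

The same formula applies to turbidity (NTU), color, and E. coli counts reported throughout the study.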
More than six million people (about two million of them children) die because of diarrhea caused by polluted water [2,3].\nIn most cases, surface water turbidity is caused by clay particles, and the color is due to decayed natural organic matter.", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "Chitosan is the active agent in Tricol Biomedical's HemCon and OneStop hemostasis products, and in order to understand how it works, you should first know about chitin. Chitin is the second most abundant naturally occurring biopolymer, after cellulose. In the natural world, chitin functions as a scaffold material that gives structure and strength to insect exoskeletons and crustacean shells, and it is also found in mushrooms. It is often found in association with the mineral calcium carbonate. Our chitin is sourced from shrimp shells of the species Pandalus borealis, the species name being a reference to the Aurora Borealis found in the sky above the pristine waters of the North Atlantic Ocean. The shells received for processing at Primex in the very north of Iceland are caught under carefully regulated quota systems that leave a sustainable balance in the marine environment. These quotas are based on scientific criteria for the sustainable utilization of natural resources and the dedicated work of the Marine Research Institute.\nOnce the chitin is extracted from the shells, it undergoes a process called de-acetylation that molecularly transforms the chitin to chitosan. So now that we know where chitosan comes from, what exactly is it? Chitosan is a natural biopolymer that possesses a positive molecular charge and is the hemostatic component in HemCon bandages and coated gauzes. This positive molecular charge is the basis of chitosan's medical uses.
For bandage and gauze products like ours, the positive charge attracts negatively charged blood cells like a magnet, rapidly creating a tight seal over an injury.\nOften a first reaction to the use of chitosan in medical devices is concern about shellfish allergies. Most allergic reactions to shellfish are caused by the protein part of the shellfish, not by the shells. Any residual proteins are eliminated during the conversion of chitin to chitosan. There have been no reported cases of an allergic reaction to our chitosan products in the 20 years we have made them. Keep an eye out for more detailed information to come on the use of chitosan by people with shellfish allergies!\nDue to chitosan's many attractive properties, such as its natural origin, abundance, and positive charge reactivity, it has a multitude of real-world applications.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "Since its establishment, Chibio Biotech has been engaged in the manufacturing and trading of well-characterized non-animal chitosan (mushroom and Aspergillus niger chitosan), theacrine (natural kucha tea leaf extract and synthetic), pharmaceuticals, cosmetics, food raw materials and other natural ingredients.\nBased on years of research & development and sales of food additives, raw materials for cosmetics, wine and beverages, medicines, health care products, etc., combined with the application of scientific research results on natural product activity, Chibio Biotech is committed to the global health industry, promoting natural ingredients with great commercial potential to the global application fields of medicine, pharmaceuticals, health care, etc., and realizing industrialization and marketization.\nWe are happy to serve the global ingredients community!", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "Low-molecular-weight sulfonated chitosan as template for anticoagulant nanoparticles\nAuthors Heise K, Hobisch M, Sacarescu
L, Maver U, Hobisch J, Reichelt T, Sega M, Fischer S, Spirk S\nReceived 25 April 2018\nAccepted for publication 12 June 2018\nPublished 30 August 2018 Volume 2018:13 Pages 4881—4894\nChecked for plagiarism Yes\nReview by Single anonymous peer review\nPeer reviewer comments 3\nEditor who approved publication: Prof. Dr. Thomas J. Webster\nKatja Heise,1,2 Mathias Hobisch,3,4 Liviu Sacarescu,5 Uros Maver,6 Josefine Hobisch,3 Tobias Reichelt,7 Marija Sega,6 Steffen Fischer,1 Stefan Spirk3,4\nMembers of EPNOE and NAWI Graz\n1Institute of Plant and Wood Chemistry, Technische Universität Dresden, Tharandt, Germany; 2Department of Bioproducts and Biosystems, Aalto University, Espoo, Finland; 3Institute for Chemistry and Technology of Materials, Graz University of Technology, Graz, Austria; 4Institute for Paper, Pulp and Fiber Technology, Graz University of Technology, Graz, Austria; 5“Petru Poni” Institute of Macromolecular Chemistry, Romanian Academy, Iasi, Romania; 6Faculty of Medicine, University of Maribor, Maribor, Slovenia; 7Zentrum für Bucherhaltung GmbH, Leipzig, Germany\nPurpose: In this work, low-molecular-weight sulfoethyl chitosan (SECS) was used as a model template for the generation of silver core-shell nanoparticles with high potential as anticoagulants for medical applications.\nMaterials and methods: SECS were synthesized by two reaction pathways, namely Michael addition and a nucleophilic substitution with sodium vinylsulfonate or sodium 2-bromoethanesulfonate (NaBES). Subsequently, these derivatives were used as reducing and capping agents for silver nanoparticles in a microwave-assisted reaction. 
The resulting silver-chitosan core-shell particles were further evaluated for their anticoagulant activity using different coagulation assays focusing on the inhibition of either thrombin or coagulation factor Xa.
Convert each of the following measurements to meters.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-3", "d_text": "Considering that schistosome infection occurs predominantly in areas of rural poverty in sub-Saharan Africa, Southeast Asia and tropical regions of the Americas a candidate vaccine that could be administered by oral route could offer an economical and effective solution to mass immunization. The main advantages presented by oral vaccine delivery are the target accessibility and enhanced patient compliance owing to the non-invasive delivery method. On the other hand, for effective oral immunization, antigens and plasmids must be protected from the acidic and proteolytic environment of the gastrointestinal tract and efficiently taken up by cells of the gut associated lymphoid tissue (GALT). With this in mind, several studies have been done and showed that the association of antigens with nanoparticles increases the internalization by M cells and prevents the degradation in the gastrointestinal (GI) tract . Another important aspect is that these carrier systems can act as immunostimulants or adjuvants, enhancing the immunogenicity of weak antigens . Biodegradable and mucoadhesive polymeric delivery systems seem to be the most promising candidates for mucosal vaccines. Several polymers of synthetic and natural origin, such as poly(lactic-co-glycolic acid) (PLGA), chitosan, alginate, gelatin, etc., have been exploited for efficient release of mucosal vaccines and significant results have been already obtained .\nChitosan is the deacetylated form of chitin and has many properties suitable for vaccine delivery. It is a mucoadhesive polymer, biodegradable and biocompatible. In particular, its ability to stimulate cells from the immune system has been shown in several studies , , , . 
Nevertheless, the ability of chitosan to induce a Th1, Th2 or mixed response is still controversial, as is the type of immune response induced by different administration routes. Additionally, chitosan is a cationic polymer that easily forms complexes or nanoparticles in aqueous medium, with the ability to adsorb proteins, antigens and DNA and thereby protect them from degradation. The oral administration of antigen-adsorbed nanoparticles is demanding, as processes like rapid antigen desorption from the particles or attack of the antigens by enzymes or acidic substances in the GI fluids may occur. These obstacles may be overcome by coating the antigen-loaded particles with an acid-resistant polymer, like sodium alginate.", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "|Place of Origin:||CHINA|\n|Minimum Order Quantity:||25KGS|\n|Packaging Details:||25KGS per paper drum|\n|Delivery Time:||5-8 work days|\n|Payment Terms:||L/C, D/P, T/T|\n|Supply Ability:||500 tons per year|\n|Appearance:||Off-white Or Light Yellow Powder||pH:||7.0~8.0|\n|Residue On Ignition:||<2.0%||Loss On Drying:||≤15.0%|\n|Heavy Metals:||≤10ppm||DAC Degree:||≥80%|\n|Particle Size:||≥60mesh,80mesh,100mesh||Insoluble Substance:||≤2.0%|\nChitin, chitosan and cellulose have similar chemical structures: where cellulose carries a hydroxyl group at C2, chitin and chitosan carry an acetylamino and an amino group at C2, respectively. Chitin and chitosan have many unique properties, such as biodegradability, cell affinity and biological activity; in particular chitosan, with its free amino groups, is the only basic polysaccharide among the natural polysaccharides.\nExploiting chitosan's solubility and film-forming ability, and the fact that chitosan and chitin can be interconverted, acetic anhydride can be used as a fixative to convert chitosan back into chitin, yielding a chitin-based textile finishing agent that genuinely does not contain formaldehyde (it retains the advantages of the natural polymer chitin while keeping both the finishing agent and the finishing process non-toxic and harmless). Chitosan gels prepared with formaldehyde and acetic acid as cross-linking agents and chitosan as the matrix are insoluble in water, dilute acid and alkali solutions, and common organic solvents. They have excellent mechanical strength and chemical stability.\nThis product is non-toxic, odorless, off-white or light yellow powder; it is soluble in acid and insoluble in water, base and common organic solvents. It decomposes at 185℃.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-1", "d_text": "You can find chitosan used in the medical field, agriculture, food processing, cosmetics, and water treatment. Chitosan is a prime example of how we can use technology to benefit from naturally occurring materials. And as a bonus, HemCon chitosan is made as a byproduct of the shrimp fishing industry, so our material sourcing helps prevent waste and minimizes our environmental footprint.\nNow to answer the question on everyone’s mind: how do you pronounce chitosan? Is it ‘cheeto-san’ or maybe ‘chyto-san’? When you find yourself in a conversation about all the benefits of chitosan, you can confidently pronounce chitosan as ‘kai-tuh-san’.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "• Publisher/Publication date: Mordor Intelligence / May 30, 2021\n|Single User||¥561,000 (USD4,250)||▷ Inquire|\n|Team User||¥627,000 (USD4,750)||▷ Inquire|\n|Corporate License||¥990,000 (USD7,500)||▷ Inquire|\nThe Global Chitosan Market is expected to register a CAGR of 21.3% over the forecast period.\nGlobally, researchers are showing interest in the use of chitosan in the pharmaceutical and biomedical fields as a potential agent for the prevention and treatment of infectious diseases. In December 2019, China became the epicenter of the SARS-CoV-2 outbreak, which has since spread internationally. 
In 2020, the World Health Organization (WHO) declared COVID-19 to be a global health emergency. Countering the impacts of viral disease outbreaks such as COVID-19 required research and development activities by researchers across the globe. Thus, the COVID-19 outbreak is likely to show a positive impact on market growth, because the application of chitosan polymers as viral inhibitors will generate key growth opportunities in the near future.\nFurthermore, biopolymers such as chitosan have intrinsic antiviral properties. Nanostructured drug-delivery system (NDDS)-based carbohydrate-binding agents, such as the sulfated polymers, can also change the virus entry process, blocking the viral cationic surface receptors and preventing the virus's interaction with heparan sulfate proteoglycan on the host cell surface. However, there are no reports of NDDS as a COVID-19 treatment so far. It could be a new approach to COVID-19 treatment in the near future.\nThe other factors driving market growth include growing product application in the biomedical, cosmetics and food and beverage industries, rising water treatment activities worldwide and strong advances in the healthcare/medical industry in developed countries.\nChitosan-composed systems utilize the characteristics of chitosan to achieve great therapeutic effects. For instance, the adhesiveness of chitosan can be used for non-invasive mucosal vaccine vectors. Therefore, chitosan-composed systems have great potential in the therapy of infectious diseases.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-28", "d_text": "Brugnerotto J, Lizardi J, Goycoolea F, Arguelles-Monal W, Desbrieres J, et al. (2001) An infrared investigation in relation with chitin and chitosan characterization. Polymer 42: 3569–3580. doi: 10.1016/s0032-3861(00)00713-8\n- 28. Nakagawa Y, Murai T, Hasegawa C, Hirata M, Tsuchiya T, et al. (2003) Endotoxin contamination in wound dressings made of natural biomaterials. 
Journal of biomedical materials research part B: Applied Biomaterials 66B: 347–355. doi: 10.1002/jbm.b.10020\n- 29. USP (2005) United States Pharmacopeia,. General Chapter 85 - Bacterial Endotoxins Test\n- 30. Moreira C, Oliveira H, Pires LR, Simões S, Barbosa MA, et al. (2009) Improving chitosan-mediated gene transfer by the introduction of intracellular buffering moieties into the chitosan backbone. Acta Biomater 5: 2995–3006. doi: 10.1016/j.actbio.2009.04.021\n- 31. Oliveira C, Rezende C, Silva M, Borges O, Pêgo A, et al. (2012) Oral Vaccination Based on DNA-Chitosan Nanoparticles against Schistosoma mansoni Infection. TSWJ 2012: 11. doi: 10.1100/2012/938457\n- 32. Fernandes V, Martins E, Boeloni J, Serakides R, Goes A (2012) Protective effect of rPb40 as an adjuvant for chemotherapy in experimental paracoccidioidomycosis. Mycopathologia 174: 93–105. doi: 10.1007/s11046-012-9530-2\n- 33. Smithers SR, Terry RJ (1965) The infection of laboratory hosts with cercariae of Schistosoma mansoni and the recovery of the adult worms.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-1", "d_text": "Key Topics Covered:\nChapter One Chitosan Industry Overview\nChapter Two Chitosan International and China Market Analysis\nChapter Three Chitosan Development Environmental Analysis\nChapter Four Chitosan Development Policy and Plan\nChapter Five Chitosan Manufacturing Process and Cost Structure\nChapter Six 2009-2014 Chitosan Productions Supply Sales Demand Market Status and Forecast\nChapter Seven Chitosan Key Manufacturers Analysis\nChapter Eight Up and Down Stream Industry Analysis\nChapter Nine Chitosan Marketing Channels Analysis\nChapter Ten Chitosan Industry Development Trend\nChapter Eleven Chitosan Industry Development Proposals\nChapter Twelve Chitosan New Project Investment Feasibility Analysis\nChapter Thirteen Global and China Chitosan Industry Research Conclusions\n- AK BIOTECH\n- Advanced Biopolymers AS\n- Guang Hao\n- HAIDE BEI\n- Heppe 
Medical Chitosan\n- Hangzhou Fuli\n- Koyo Chemical\n- Kunpoong Bio\n- Qingdao Yunzhou\n- Qingdao Lizhong\n- United Chitotechnologies\n- Weifang Sea source Biological\n- Xianju Tengwang Chitosan Factory\n- Zhejiang Candorly\nFor more information visit http://www.researchandmarkets.com/research/27f257/global_and\nCONTACT: Research and Markets Laura Wood, Senior Manager email@example.com For E.S.T Office Hours Call 1-917-300-0470 For U.S./CAN Toll Free Call 1-800-526-8630 For GMT Office Hours Call +353-1-416-8900 U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716 Sector: Process and Materials\nSource:Research and Markets", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "Manufacturer of a wide range of products which include chitin - flakes & powder.\nChitin - Flakes & Powder\nGet Best Quote\n✓Thanks for Contacting Us.\nApprox. Rs 350 / KgGet Latest Price\nProduct BrochureProduct Details:\nMinimum Order Quantity\nHDPE bag with double polythene inner liner or Suitable Packing\n6.5 - 8\nOff white Colour\nNon Toxic Material\nChitin is the second most abundant natural biopolymer in the world, behind only cellulose. It is also the most abundant naturally occurring polysaccharide that contains amino sugars. This abundance, combined with the specific chemistry of chitin and its derivative Chitosan, make for the array of potential applications. Chitin, a long-chain polymer of N-acetylglucosamine, is a derivative of glucose. It is a primary component of cell walls in fungi, the exoskeletons of arthropods, such as crustaceans (e.g., crabs, lobsters and shrimps) and insects, the radulae of molluscs, cephalopod beaks, and the scales of fish and lissamphibians. The structure of chitin is comparable to another polysaccharide - cellulose, forming crystalline nanofibrils or whiskers. Chitosan is a fine off white, odorless Powder extracted from Crab and Shrimp Shells and its derivatives. 
It has a wide array of commercial and biomedical uses. It can be used in agriculture as a seed treatment and biopesticide, helping plants to fight off fungal infections. In winemaking, it can be used as a fining agent, also helping to prevent spoilage. In industry, it can be used in a self-healing polyurethane paint coating. In medicine, it is useful in bandages to reduce bleeding and as an antibacterial agent. It is also used to help deliver drugs through the skin. We supply Chitosan at varied deacetylation values (80 – 95%) and mesh sizes (35 to 100). We also supply High Bulk Density Chitosan (HBDC), ideally suitable for encapsulation, having a bulk density of 0.8 gm/ml and a mesh size of 0 to 100 Mesh.\nChitin & Chitosan have a wide variety of uses in health care.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-11", "d_text": "It is observed that blended coagulants gave the highest efficiency as compared to the traditional alum coagulants. In this blending process, we reduce the alum dose by up to 80%; thus, we reduce the drawbacks of the alum. Also, we can reduce the cost of the treatment by using natural coagulants instead of the traditional coagulant.\nE. coli is the best coliform indicator of fecal contamination from human and animal wastes. E. coli presence is more representative of fecal pollution because it is present in higher numbers in fecal material and generally not elsewhere in the environment. Results showed the absence of E. coli increases with increasing time. A greater percentage of E. coli was eliminated at higher turbidities. The aggregation and, thus, removal of E. coli was directly proportional to the concentration of particles in the suspension. Chitosan and other natural coagulants showed antibacterial effects of 2 to 4 log reductions.\nAntimicrobial effects of water-insoluble chitin and coagulants were attributed to both its flocculation and bactericidal activities. 
A bridging mechanism has been reported for bacterial coagulation by chitosan. Especially with reference to chitosan, molecules can stack on the microbial cell surface, thereby forming an impervious layer around the cell that blocks the channels, which are crucial for living cells. On the other hand, cell reduction in microorganisms, such as E. coli, occurred without noticeable cell aggregation by chitosan.\nThis indicates that flocculation was not the only mechanism by which microbial reduction occurred. It was found that when samples were stored for 24 h, regrowth of E. coli was not observed at any turbidity. It should be noted that the test water contained no nutrient to support regrowth of E. coli, and chitosan is not a nutrient source for it. Another experiment was designed to check the effect of alum alone. Regrowth of E. coli was not observed for unaided alum after 24 h. The number of E. coli after resuspension of sediment reached the initial numbers after 24 h, which showed that it cannot be inactivated by alum. Such findings have been previously reported by Bina.\nAccess to clean and safe drinking water is difficult in rural areas of India.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-1", "d_text": "Introduction During the past three decades, the pollution from dye wastewater has attracted much attention due to the growing use of a variety of dyes in the textile, leather, paper, rubber, plastic and food industries [1,2]. Among them, synthetic dyes derived from coal-tar-based hydrocarbons such as benzene, naphthalene, anthracene, toluene, and xylene are more stable and more difficult to biodegrade due to their complex aromatic molecular structures. Even tiny amounts of dyes in water may cause significant color change, and many of these dyes are known to be toxic or carcinogenic. They not only affect aquatic life but also traverse the entire food web, resulting in biomagnification. 
Numerous techniques including physical–chemical and biological decolorization methods have been developed to treat wastewater effluents containing dyes. Of them, adsorption using adsorbents is considered an effective and economical method for water decontamination. Many low-cost adsorbents have been proposed and investigated for their ability to remove dyes [6–8]. Recently, special attention has been paid to natural polysaccharide-based adsorbents such as chitosan and its derivatives. Chitosan is an abundant, naturally occurring hydrophilic cationic polysaccharide derived from chitin. It has been widely investigated as a biosorbent for the capture of dyes from aqueous solutions due to its easy availability, ready chemical modification, environmentally friendly behavior, low cost and outstanding adsorption capacities for a wide range of dyes.
Generally, as an effective adsorbent for decoloring applications, chemical crosslinking of chitosan is required to improve the mechanical resistance and to reinforce the chemical stability of the chitosan in acidic media.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-1", "d_text": "- They strengthen the body’s immunity by increasing the activity of T lymphocytes,\n- They inhibit infections with pathogenic microorganisms,\n- They eliminate acidification of the organism, increase the pH by alkaline action,\n- They have anti-cancer properties, inhibit the proliferation of cancer cells and the formation of metastases,\n- They cleanse the body of heavy metals and other harmful substances (mercury, cadmium, pesticides, artificial colors,\n- Has antihypertensive effect – lowers blood pressure, reduces capillary contractility,\n- Lower cholesterol, lowers the absorption of fats and cholesterol in the intestines, cleans blood vessels from cholesterol plaques\n- They improve the functioning of the liver and pancreas\n- They help control blood sugar levels,\n- They protect the mucous membranes of the digestive tract, effectively accelerate the healing of erosions and ulcers,\n- They fight leaky gut syndrome – one of the leading causes of allergies and autoimmune diseases,\n- Eliminate heartburn, acidity and gas,\n- They improve the condition of the intestinal microbiota – they cleanse intestinal villi, improve intestinal peristalsis,\n- Increases skin elasticity and joint mobility, strengthens connective and cartilage tissue\n- with decreased immunity and immune disorders,\n- in cancer prevention, chemotherapy, radiotherapy, intoxication,\n- in the prevention of cardiovascular disease; atherosclerosis, infarcts, strokes,\n- in hypertension,\n- in abnormal liver and pancreas and diabetes,\n- acidity and heartburn,\n- in inflammation, erosions and ulcerations of the gastrointestinal tract,\n- gastrointestinal yeasts,\n- food poisoning\nMajeti N.V. 
Ravi Kumar, A review of chitin and chitosan applications. Reactive & Functional Polymers 46 (2000) 1-27.\nAlemdaroglu C., Zelihagul D., Celebi N., Zor F., Ozturk S., Erdogan D. 2006. An investigation on burn wound healing in rats with chitosan gel formulation containing epidermal growth factor. Burns 32, pp. 219-327.\nIgnacak J., Wiśniewska-Wrona M., Pałka I., Zagajewski J., Niekraszewicz A. 2011. Rola oligomerów chitozanowych w regulacji proliferacji komórek nowotworowych wodobrzusza Ehrlicha in vitro.
CRC Press, Boca Raton\n- Inoue K, Yoshizuka K, Ohto K (1999) Adsorptive separation of some metal ions by complexing agent types of chemically modified Chitosan. Anal Chim Acta 388:209–218View ArticleGoogle Scholar\n- Modrzejewska Z, Kaminski W (1999) Separation of Cr(VI) on Chitosan membranes.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-1", "d_text": "The coagulant may also be a synthetic material or a natural coagulant; a coagulant carries a positive charge, and these positively charged proteins bind to the negatively charged particles in the solution that cause turbidity 3. Coagulants normally come in natural (as shown in Table 2) and inorganic (as in Table 3) forms. Both types of coagulant aim to remove pollutants in physical form (solids and turbidity) or chemical form (BOD and COD).\nTable 2. Natural coagulants Efficiency\nTable 3. Inorganic Coagulants: Advantages and Disadvantages\nSome inorganic coagulants like aluminum and iron are used in most industries. When aluminum is used as a coagulant in water treatment, it can cause several adverse effects on human health, such as intestinal constipation, loss of memory, convulsions, abdominal colics, loss of energy and learning difficulties. In recent years, chitosan and Moringa oleifera have been applied as coagulants in water treatment 4. Chitosan is a derivative of chitin, which naturally occurs in the shells of crustaceans, fungi and insects. Chitosan is obtained by partial deacetylation of chitin, which is the removal of acetyl groups (-CH3CO) on the N-acetyl glucosamine (GlcNAc) units of the chitin polymer to reveal amino groups (-NH2) (as shown in Figure 1).\nFigure 1: De-acetylation of chitin to obtain chitosan, acetyl group (-CH3CO) on acetyl glucosamine monomer on chitin chain is removed to reveal amino group (-NH2) becoming glucosamine monomer making chitosan.\nIt is a long chain carbohydrate that is non-soluble in water but dissolves in most acids, and contains positively charged moieties. 
Possessing properties such as non-toxicity, biocompatibility, and biodegradability, chitosan has been studied for its application in many sectors such as industrial wastewater treatment, pharmaceuticals, cosmetics, agriculture, and biomedical use. Chitosan is a weak base and is insoluble in water and in organic solvents. However, it is soluble in dilute aqueous acidic solutions (pH < 6.5), which can convert the glucosamine units into the soluble protonated form R-NH3+.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-36", "d_text": "at 202-225-5101,202-225-3190; House Majority Leader John Boehner at 202-225-6205, 202-225-0704; Speaker of the House J. Dennis Hastert at 202-225-2976,202-225-0697; and Congressman Nathan Deal, a member of the Energy and Commerce Committee, at 202-225-5211, 202-225-8272.\nDon’t let self-serving political skullduggery take your vitamins and supplements away from you! Visit www.NHA2006.com and take advantage of online tools that allow you to easily send faxes to Congress. Please do not delay—without swift and decisive action, we may all lose our health freedom forever!\n*This editorial is a public service announcement sponsored by the Nutritional Health Alliance (NHA).\nCHITOSAN: The Fiber that Binds Fat\nJune 25, 2005 07:55 PM\nChitosan is a natural product that inhibits fat absorption. It has the potential to revolutionize the process of losing weight and, by so doing, reduce the incidence of some of the most devastating Western diseases we face today. Chitosan is indigestible and non-absorbable. Fats bound to chitosan become nonabsorbable, thereby negating their caloric value. Chitosan-bound fat leaves the intestinal tract having never entered the bloodstream. Chitosan is remarkable in that it has the ability to absorb an average of 4 to 5 times its weight in fat.60\nThe same features that allow chitosan to bind fats endow it with many other valuable properties that work to promote health and prevent disease. 
Chitosan is a remarkable substance whose time has come.\nChitin, the precursor to Chitosan, was first discovered in mushrooms by the French professor Henri Braconnot in 1811.61 In the 1820s chitin was also isolated from insects.62 Chitin is an extremely long chain of N-acetyl-D-glucosamine units. Chitin is the most abundant natural fiber next to cellulose and is similar to cellulose in many respects. The most abundant source of chitin is in the shells of shellfish such as crab and shrimp.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-9", "d_text": "Minami, S.; Okamoto, Y.; Hamada, K.; Fukumoto, Y.; Shigemasa, Y. 1999, EXS 87, 265-277.\n[8.] Muzzarelli, R.A.; Mattioli-Belmonte, M.; Pugnaloni, A.; Biagini, G. 1999, EXS 87, 251-264.\n[9.] Feofilova, E.P.; Tereshina, V.M.; Memorskaia, A.S.; Alekseev, A.A.; Evtushenkov, V.P.; Ivanovskii, A.G. 1999, Mikrobiologia 68, 834-837.\n[10.] Mi, F.L.; Wu, Y.B.; Shyu, S.S.; Schoung, J.Y.; Huang, Y.B.; Tsai, Y.H.; Hao, J.Y. 2002, J. Biomed. Mater. Res. 59 (3), 438-449.\n[11.] Mi, F.L.; Shyu, S.S.; Wu, Y.B.; Lee, S.T.; Shyong, J.Y.; Huang, R.N. 2001, Biomaterials 22, 165-173.\n[12.] Shelma R.; Paul W.; Sharma C.P. 2008, Trends Biomater. Artif. Org. 22, 111-115.\n[13.] Tomihata, K., Ikada, Y. 1997, In vitro and in vivo degradation of films of chitin and its deacetylated derivatives, Biomaterials, 18, 567-573.\n[14.] Abhay, S.P., Hemostatic wound dressing, 1998, US patent 5 836 970.\n[15.] Ueno H, Murakami M, Okumura M, Kadosawa T, Uede T, Fujinaga T. 2001, Chitosan accelerates the production of osteopontin from polymorphonuclear leucocytes, Biomaterials, 22, 1667-1673.\n[16.]", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-7", "d_text": "- Yadav, H., & Karthikeyan, C. (2019). Natural polysaccharides: Structural features and properties. Polysaccharide Carriers for Drug Delivery, 1–17. doi:10.1016/b978-0-08-102553-6.00001-5\n- Kumar, Pranav & Mina, Usha. (2016). 
Life Sciences, Fundamentals, and Practice, Part I.\n- Sagar Aryal (2019). Carbohydrates – Monosaccharides, Disaccharides, Polysaccharides. Retrieved from https://microbenotes.com/carbohydrates/#\n- Cellulose. Retrieved from https://en.wikipedia.org/wiki/Cellulose\n- Cellulose: Plant Cell Structure. Retrieved from https://www.britannica.com/science/cellulose\n- Robert A Meyers (2002). Glycoconjugates and Carbohydrates. Encyclopedia of physical science and technology (3rd edition). San Diego: Academic Press.\n- Cellulose. Retrieved from https://alevelbiology.co.uk/notes/cellulose/\n- Chitin: Structure, Function, and Uses. Retrieved from https://biologywise.com/chitin-structure-function-uses\n- Cellulose. Retrieved from https://www.sciencedirect.com/topics/earth-and-planetary-sciences/cellulose#\n- Polysaccharides.Retrieved from https://en.wikipedia.org/wiki/Polysaccharide\n- Hans Merzendorfer, Lars Zimoch; Chitin metabolism in insects: structure, function, and regulation of chitin synthases and chitinases. J Exp Biol 15 December 2003; 206 (24): 4393–4412. DOI: https://doi.org/10.1242/jeb.00709\n- Starch. Retrieved from https://en.wikipedia.org/wiki/Starch#Properties.\n- Robyt, J. F. (2008). Starch: Structure, Properties, Chemistry, and Enzymology. Glycoscience, 1437–1472. 
doi:10.1007/978-3-540-30429-6_35\n- What is Starch?", "score": 8.086131989696522, "rank": 99}]} {"qid": 20, "question_text": "What are the main educational materials included in the 'Guam Water Kids' program for students aged 9-12?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "State Water Resources Research Institute Program\nProject Id: 2010GU167B\nTitle: Presenting 'Guam Water Kids': Public School Outreach and Teacher Relations Program\nProject Type: Education\nStart Date: 3/01/2010\nEnd Date: 2/28/2011\nCongressional District: N/A\nFocus Categories: Education, Non Point Pollution, Conservation\nKeywords: Water resources, aquifer, groundwater, surface water, watershed, Guam, hydrological cycle, ground water, non-source pollution, non-point pollution, conservation.\nPrincipal Investigator: Card, Arretta Ann\nFederal Funds: $ 7,685\nNon-Federal Matching Funds: $ 0\nAbstract: The environmental educational materials for students age 9-12 about fresh water resource issues on Guam have recently been developed. The \"Guam Water Kids\" materials emphasize the importance of Guam's fresh water as a key resource, explain hydrological concepts, and introduce a sense of stewardship for conserving and protecting Guam's fresh water. These materials include a pre-recorded presentation, teacher's lesson plans and suggested activities, a Chamorro language glossary, and a companion website. The educational materials are correlated to learning standards recognized by the Guam Department of Education and have been approved for use in Guam public schools by the superintendent's office and announced in the GDOE newsletter. The materials were developed, in part, to support outreach efforts by WERI. There is a need to familiarize teachers with the materials and demonstrate the value of incorporating them into curriculum. 
Working directly with these educators will also increase awareness of WERI as a resource for water-related issues and will open opportunities for WERI to engage educators in the future. As materials are employed and as teachers become engaged in water resource issues, an evaluation is needed to assess the effectiveness of the \"Guam Water Kids\" materials and to explore additional needs teachers may report, such as a willingness to participate in water-related courses for educators which may be developed in the future.\nSpecifically, we intend to follow the public schools' chain of approval, schedule a presentation targeted to reaching the 5th graders at each of the 6 elementary schools, and follow the presentations with an evaluation by educators involved in teaching subjects related to water resource issues. Procedures include: 1) contact and present materials to the principals at each of the six elementary schools in the Guam public school system for approval at the school level.", "score": 52.7674064167307, "rank": 1}
The survey will be conducted online with an \"on paper\" option in order to facilitate participation. Contact information will be preserved in order to facilitate future communication with educators; 4) analyze and report survey results; 5) make any appropriate adjustments to existing \"Guam Water Kids\" materials indicated by educators' assessments.\nProgress/Completion Report, 2010, PDF", "score": 49.84351141027582, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "In partial fulfillment of the requirements for the course\nED355, Language Arts Methods\nDivision of the School of Education\nUniversity of Guam\nDr. Jacqui Cyrus\nNovember 21, 2007\nI found a wonderful lesson on the U.S. Environmental Protection Agency’s website, under teacher’s resources “The Quest for Less” that is relevant to current issues and interesting enough to capture the imagination of an eleven-year-old. Armed with a lesson, there were key questions to address. What do I know about my students? What accommodations and modifications will I need to make? What do the students already know that will enable them to complete the lesson? What do I want them to learn? What media and materials will they need to complete the lesson? How will they utilize the media and materials? How will I get them to participate? How will the lesson be evaluated?\nI based the hypothetical assumptions about my student’s SES, ethnicity, and gender on the “Tamuning Elementary School’s Annual Report Card” retrieved from the Guam Public School website linked to the Superintendent’s page. The learning styles are hypothetical. 
After reviewing web-posted Individual Educational Plans examples for an autistic student and a paraplegic student, I specified accommodations and made modifications to the lesson plan to meet the IEP requirements.\nTo determine the prior knowledge of the students regarding natural resources, environmental issues, conservation, and computer technology, I reviewed the Guam Public School System’s Standards. Based on the standards, I was able to make certain assumptions about what the students already know.\nThe overt lesson plan objectives are reflective of the assignment criteria, that is, reading and writing, speaking and listening. This presented a major challenge. While the task of compiling one paragraph of research on a specific topic given the source of the information sounds easy enough to accomplish in a single class period, it did not turn out to be a realistic expectation. In retrospect, having them print, read, rewrite, and recite would have probably been more expedient. However, the research process and collaboration are the real learning experiences, not the regurgitation of given information. In addition, it was necessary to incorporate technology research tools, in this case, specifically related to the National Educational Technology Standard and Performance Indicators for Teachers (NETS*T) “I.", "score": 48.334880826884984, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Educator Publications: Project WET’s award-winning Curriculum and Activity Guide 2.0, and other Educator Guides full of activities about watersheds, water quality, floods and water conservation, plus maps, posters and more.\nBooks for Students: There are over 40 full-color activity booklets about a range of water-related topics in the Kids in Discovery series plus children’s story books—great classroom resources!\nMake a Splash Water Festivals: Make a Splash festivals are a fun way to pack in learning about water.
Learn how to host a Make a Splash festival at your school or find a festival near you!\nProject WET Educator Training Workshop | Bozeman, MT | April 28, 2018\nMaps and Posters: Looking to decorate your classroom with fun, eye-catching educational artwork? Resources like Project WET’s Water Cycle poster or watershed maps are just what you need!\nDiscoverWater.org: A free classroom tool to teach kids about water. Interactive activities, quizzes and activities for students, and Science Notebooking pages, printables and resources for teachers. Correlated to Common Core standards.\nWater Cycle Game: Play The Blue Traveler and learn how water travels around, above and below the surface of our planet through this fun game!\nOnline Training: Self-paced, learner-driven online learning options—including a refresher training to upgrade to Guide 2.0.\nWater Education Portal: Share ideas, lesson plans and join discussions about activities and water related topics. Project WET national and state standard correlations are available here.\nWebinars: Online learning events about a range of water related topics. Attend a live event or view an archived webinar.", "score": 45.46437368043729, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "Included with purchase:\n- Download of the Online Refresher Course\n- Copy of the Curriculum and Activity Guide 2.0\n- Free access to the Project WET Water Education Portal\nIn order to provide educators with flexible training opportunities, Project WET now offers an Online Refresher Course for those who have taken a Project WET workshop and would like to obtain the Curriculum and Activity Guide 2.0\n. The course is available in both Flash and PDF formats, is self-directed and self-paced, and you can begin the course before your book arrives.\nThe award-winning, NSTA-recommended Curriculum and Activity Guide 2.0\ncontinues Project WET’s dedication to twenty-first-century, cutting-edge water education. 
Correlated to the Common Core Standards (preliminary Next Generation Science Standards correlations are in review and coming soon), the Curriculum and Activity Guide 2.0\ngives educators of children from kindergarten to twelfth grade the tools they need to integrate water education into every school subject. The guide also includes numerous extensions for using the activities in Pre-K environments. Featuring 64 field-tested activities, more than 500 color photographs and illustrations, and useful appendices with information on teaching methods, assessment strategies and more, this guide is an essential classroom tool.\n- Includes activities on topics such as national parks and storm water\n- Features fully revised and updated activities from the Curriculum and Activity Guide 1.0\n- Includes the very best activities gathered from all of Project WET’s publications\n- Is suitable for educators at all levels and subjects, including pre-service teachers\n: The Curriculum and Activity Guide 2.0 has been correlated to: Common Core English Language Arts\nand Common Core Math.\nPreliminary Next Generation Science Standards\ncorrelations in review and coming soon. The guide adheres to the National Science Standards Framework\n, STEM Educational Coalition\nobjectives and NOAA Ocean Literacy Standards.\nThe complete list of standards has been compiled here\nThe Curriculum and Activity Guide\nwas awarded a Gold Medal in 2013 by the Independent Publisher Book Awards,\nEducation (Workbook / Resource) category.\nThe National Science Teachers Association\nhas selected this book as part of their NSTA Recommends\nlist of books for science educators. Click here\nto visit NSTA Recommends.", "score": 44.95215589860908, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Moulton Niguel Water District and Orange County Coastkeeper bring water education to MNWD middle schools! 
In-class presentations and local field trips will teach students the importance of water quality, water efficiency and global sustainability.\nIn a first-of-its-kind partnership, Moulton Niguel Water District has partnered with Orange County Coastkeeper to launch an innovative water education program that will get kids involved in water conservation and environmental protection. Through Coastkeeper’s W.H.A.L.E.S. program, Orange County middle schoolers will go on field trips to a local watershed in Moulton Niguel’s service area, take part in interactive learning activities and even conduct a water quality test of their own. Coastkeeper’s W.H.A.L.E.S. program, which stands for Watershed Heroes: Actions Linking Education to Stewardship, was conceived as a way to make it easier for students to understand water efficiency and sustainability.\nInterested in signing up? Check out the program details below!\n- Open to all middle school classes in the Moulton Niguel Water District service area\n- Funding covers the in-class presentation, field trips, and busing to and from a local watershed and creek.\n- Each field trip can accommodate 40 students, but can accommodate for larger groups over multiple days\n- The program is funded by Moulton Niguel Water District and administered by OC Coastkeeper\nEach Class Will Receive:\n- In-Class Presentation discussing their local watershed, urban runoff, water conservation, global sustainability, and watershed function and health. Additionally, students will learn about the District’s water budget-based rate structure and how it can be used as a tool to indicate whether a household is over-watering their landscape. Students will learn how over-watering contributes to dry-weather runoff, which carries pollutants into storm drains and eventually into local creeks and beaches.\n- Two Field Trips:\n- Exploring Your Watershed: Field trip entails a guided hike at a local trail with a naturalist. 
Location Options: Salt Creek Trail, Aliso Creek Trail, and San Juan Creek Trail\n- Water Quality Testing: Field trip entails a water quality lesson and tests at a local water body.\nSign Me Up!", "score": 41.73058976195448, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "Students and their teachers monitor water quality in their local streams or river twice annually, once in October and once in February. They use kits to test for dissolved oxygen, pH, nitrates and turbidity. Students measure both the air and water temperature and many classes collect sterile samples for bacterial analysis. Students at a local high school filter these samples, incubate them and count the fecal coliform colonies after 24 hours. These water quality parameters are important for aquatic organisms and/or have implications for human health.\nWater Quality Manual\nThis manual includes all the details you’ll need for water quality monitoring (including which bottles to use for sampling, directions for collecting your samples and running the tests, data sheets, site surveys, optimal values and much, much more!)
It’s a big document, so we separated it into sections to make downloading easier:\nSection I: Preparing for Monitoring\nSection II: Tools for Monitoring Day\nSection III: Understanding Your Monitoring Results\nSection IV: Appendix\nSection V: The Multiple Intelligences Table\nWater Quality Monitoring Videos\nClick on any of the links below for video instructions for each of the following tests:\n(*Note: This video refers to the appearance of a “floc” after the addition of 8 drops of Alkaline Potassium Iodide Azide, where it is actually a “precipitate”.)\nOptimal Water Quality Standards\nClick here to download a document about the Optimal Water Quality Standards students keep in mind when collecting data at their adopted sites.\nMaterial Safety Data Sheets\nSouth Sound GREEN Water Quality Monitoring Sites\nEach year over forty sites are monitored by about thirty different classrooms around Thurston County.\nWater Quality Data\nStudents in the South Sound GREEN program have collected data on South Sound streams since the mid-1990’s. Please see below for current and past data:", "score": 41.11027159823848, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "This place-based stream ecology curriculum draws connections between aquatic life, water quality and the way we live in our watersheds. By studying the life in local streams, secondary students meet myriad small creatures who depend on clean water for life. Understanding their connection to this intricate web of life empowers students to join community efforts to protect water and habitat where they live.\nThis three-lesson curriculum, suitable for grades 6-12, does not require hours of preparation. On the contrary, Life in Our Watershed: Investigating Streams and Water Quality allows teachers to dive right in with their students!\nSplash would love the opportunity to help you teach your students about the streams that flow through the areas where they live and go to school.
Check out the links below for more information about our streams program for 6th – 12th grade classes.\nSupport from the community makes it possible to continue serving the 80+ classes of 4th/5th graders who want to come to Splash. Please consider making a tax-deductible donation today. Splash is a 501(c)(3) charitable organization. Tax ID #41-2160618.", "score": 40.256156553033534, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "First page number:\nLast page number:\nIn “Just Passing Through! The Water Cycle!,” students use the Forever Earth vessel to begin exploring the importance of Lake Mead by making and recording observations of how water is being used in different ways by plants, animals, and people. Then students view an animated PowerPoint presentation that follows one drop of water through Lake Mead’s water use cycle and then re-create the cycle on a magnet board. Working as scientists, students determine if water is the same in all parts of the lake by comparing water samples from the middle of the lake and from Las Vegas Bay. By examining a number of scenarios, students use scientific reasoning to deduce the major reasons for the current lower lake level. In a culminating activity, students brainstorm ideas for personal actions that they can make to conserve or protect Lake Mead’s water.\nThese pre-visit activities are designed to prepare students for their Forever Earth experience by introducing them to the water cycle and to some of the factors that affect the cycle.\nHydrologic cycle – Study and teaching (Elementary); Teaching – Aids and devices; United States – Lake Mead; Water – Study and teaching (Elementary)\nCurriculum and Instruction | Curriculum and Social Inquiry | Fresh Water Studies | Science and Mathematics Education\nDiscover Mojave: Forever Earth\nJust Passing Through! The Water Cycle! Appear -- Disappear! The Magic of Water! 
Pre-Visit Lesson (Grade 4).\nAvailable at: https://digitalscholarship.unlv.edu/pli_forever_earth_curriculum_materials/4", "score": 38.464878966349126, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Water Quality Extension offers several resources for teachers including training workshops, lesson plans, activities, and materials located in each county.\nNew Lesson Plans:\n- Pave It or Plant It explores the effects of urbanization on storm water runoff. Click here for the lesson plan.\nThe Stream Side Science curriculum is a set of lesson plans designed to teach watershed\nscience to grade students K-12. The program is based on the manual \"Bugs Don't Bug Me (pdf)\" for K-6, and \"Stream Side Science (pdf)\" for 5-12. The curriculum is aligned to national and state core standards and highlights\nsome STEM activities.\nDive into water quality coloring pages, activity books, and games designed specifically\nto help kids have fun while learning about water quality.\nBrowse posters, websites, and macroinvertebrate pictures to help you teach about water quality.\nDiscover the scientist within, using these fun and interactive lesson plans that teach kids and adults of all ages about protecting and maintaining the quality of our water.\nFind out what workshops we offer, when we'll be in your city, or request a workshop near you.", "score": 38.42796553871041, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "By Dana Krejcarek and Jessie Good\nIn this unit students will be introduced to the GLOBE Program and work specifically with the Hydrology unit. GLOBE (Global Learning and Observations to Benefit the Environment) is an international, environmental program where students, teachers, and scientists are able to share information. Students will gain an understanding of the importance of water quality through various hands-on and inquiry-based activities. 
They will learn how to perform the water quality tests and be able to collect, analyze, and compare their data to the data from other schools. The data will be used to develop a model that can be used as a teaching tool for students to share their findings with their classmates. The presentation will be used as an alternative assessment using a given rubric. Both the modeling project and the presentation will help students understand the connection of water quality to their lives.\n- Water chemistry is an important aspect of habitat requirements\n- Temperature can affect other water chemistry factors\n- Water chemistry affects species diversity\n- Instruments can enhance what your senses tell you about what is in water\n- Data are used to pose and answer questions\n- Graphs and maps are valuable tools for visualizing data\n- Accuracy and precision are important when taking measurements\n- The soil stores water, and its water content is related to the growth of vegetation\n- Where rainfall goes depends on your site characteristics\n- Higher temperatures and longer periods of sunshine increase evapotranspiration\n- Water flows can change over time\n- Water balance can be modeled using temperature, precipitation, and latitude data\n- Making observations\n- Applying field sampling techniques\n- Calibrating scientific equipment\n- Following directions in methods and test kits\n- Recording and reporting data accurately\n- Reading a scale\n- Communicating orally\n- Communicating in writing\n- Asking Questions\n- Forming and testing hypotheses\n- Designing experiments, tools, and models\n- Using water quality measurement equipment\n- Using tools to enhance the senses\n- Creating and reading graphs\n- Calculating averages\n- Making comparisons over space and time\n- Analyzing data for trends and differences\n- Using the GLOBE database\nStudent and Teacher Background Information:\nWe do not just drink water; we are water. 
Water constitutes 50 to 90 percent of the weight of all living organisms. It is one of the most abundant and important substances on the Earth.", "score": 37.138623190609, "rank": 11}, {"document_id": "doc-::chunk-1", "d_text": "Non-point sources can be big contributors, and the combined effects of pollution from many small sources can have a huge impact on water quality. Students will design and build their own watershed and learn more about pollution sources. This activity can be adapted to almost any grade level.\n- Let It Rain: Rain is a good thing—except when it’s not. Students experience a hands-on demonstration of how different agricultural practices (cover crops, traditional tillage, no-till fields, and pasture) impact water retention, surface water run-off, and erosion. Students conduct the experiment, record their results and reflect on how those practices impact our waterways.\n- Just Passing Through: Students learn about contributing factors to erosion and different techniques to manage erosion. Students can then release some pent-up energy as they role-play becoming raindrops, plant life, and rocks in and around streams and waterways.\n- Poison Pump: A killer has swept through the streets of London, and hundreds are dead! Could the accomplice to this crime be something that is used every day? Through a series of clues, middle & high school students will solve a mystery to discover that water can also produce negative effects for people. Students will learn about the cholera outbreak in London.\n- Soil Sustainability Sleuths: Students will conduct a variety of hands-on soil tests, including a slake test to determine soil stability, a soil texture test to determine soil components, as well as nutrient content and pH testing.\n- What's In My Water? Older elementary through high school students will create their own “stream” right in the classroom!
They will then conduct biotic sampling and identify macroinvertebrates, perform chemical analysis of water samples, complete a biodiversity index, and reach a conclusion on their stream’s quality.\nWe are so excited to get these programs out and about in classrooms across the county! Be sure to share these with your favorite teacher or community group and let us get them on their schedule for this school year!\nDon't see one of your favorite classroom presentations from years past? Check out our lending library to see if we have those materials available for you to check out and use on your own!\nA Holmes County native, Jane joined the Holmes SWCD staff in 2015 after spending 16 years in public relations for a land-grant university. She holds a BS with distinction in agricultural communication from Purdue University and her master's program in mass communication at Purdue focused on science and risk communication.", "score": 35.60763982426398, "rank": 12}, {"document_id": "doc-::chunk-7", "d_text": "Materials - Culminating Activity\nPer group of 4-6 students:\nOptional: 1 CD-R disc and CD-RW burner\n- 1 copy of Grading Rubric\n- 1 computer with Adobe Premiere editing software\nHave students keep a tally count of all organisms found in the water or soil and research their role in the ecosystem. Are they an indicator of good water/soil or pollution?\nHave students conduct a water quality or soil analysis test on the materials collected and report back the results.\nHave the students look at the sea ice coverage currently shown on the Water in the Earth System Web site. Ask them to research what factors affect this figure.\nHave students write a press release of their findings and the significance of this information to the local community.\n- Encourage students to participate in local watershed education and pollution control programs.
Ask them to contact the local river keeper or water authority for more information on water conditions in the local area.\n- Ask students to attend a community planning board/zoning meeting. They will need to take notes of any proposed building or zoning changes made at the meeting. Then research how these changes will affect the local aquifer or water supply.", "score": 34.081247873737176, "rank": 13}, {"document_id": "doc-::chunk-2", "d_text": "What's in a catchment? What are the differences between the upper, middle and lower regions of a catchment? Students work in teams to construct a giant jigsaw puzzle to study the factors that affect water quality.\nStormwater story (years 3–6)\nHow does litter and rubbish end up in our rivers and creeks? Students act out a journey down the Yarra River from catchment to coast to see how humans impact stormwater quality.\nWater smart city (years 3–10)\nExplore stormwater and pollution in the Lego city! Students are introduced to the impacts of stormwater on our waterways and bays. Working with the model, students address stormwater issues and adapt the city to make it water smart.\nWater recycling model (years 5–10)\nThis model helps students better understand where our water comes from and the ways that recycled water can be used in horticulture, agriculture, industry and recreation. Real water flows over the model's catchment landscape with sound effects and track lighting showing different features of the model in action.\nWetland food chain game (years 3–6)\nWhat happens to our food chain when there are too many predators? Students find out through this interactive game, which illustrates the food cycle and energy transfer in our special wetland environments. The game demonstrates the effects of pollution and predators on the transfer of energy between trophic levels.\nWhat goes where? (years 3–7)\nWhere should household wastes go? What can go onto the compost heap, the recycling bin or the trash? 
What should go into the sewerage and stormwater systems? In this interactive game, student teams actively debate the answer to these questions.\nMicro-organisms at work (years 7–9)\nDiscover the micro-organisms that play an important role in the biological treatment of wastewater at the Western Treatment Plant. Learn about how different environments in the treatment process are managed to encourage different micro-organisms to go to work.\nThe Water Discovery Centre brings the water cycle to life for visitors through engaging displays. Visitors can interact with the displays to learn more about:\nthe urban water cycle\nwater supply from our protected catchments\nwhat happens to the water we flush down the toilet or pour down the sink\nThe centre is open by appointment only. Register your interest by calling us on 131 722.", "score": 33.504844466413616, "rank": 14}, {"document_id": "doc-::chunk-1", "d_text": "- Make a diagram showing how people obtain water in your state.\n- Make a glossary that includes words and pictures and distinguishes different types of bodies of water (e.g., bays, lakes, ponds, oceans, rivers, streams, etc.).\n- Make a model of a watershed.\n5th, 6th, 7th, and 8th Grade\nTips: The intermediate projects are more challenging than the elementary projects and you may expect students to be able to carry them out with more independence.\n- Collect water from different locations (e.g., tap, puddle, pond, lake, ocean, rain), and examine it under the microscope. Identify what you see.\n- Make a 2-D or 3-D map of the ocean floor with labels for main features.\n- Explain the relationships of bodies of water to weather (e.g., Gulf Stream, lake effect, etc.)\n- Diagram the effects of ocean currents.\n- Find a tide table here: http://tidesandcurrents.noaa.gov/ and click through to get a plot of tide predictions for a month.
Print it and annotate it with data about the phases of the moon and how they relate.\n- Write a report on what you find to be the three most interesting bottom dwellers in the ocean.\n- Create a model of a molecule of water.\n- Set an ice cube on a dish. Write 100 observations about it.\n- Pretend you don’t know the boiling and freezing points of water. Design experiments to discover them.\n- Create a poster to illustrate the erosive effects of water.\n9th, 10th, 11th, and 12th Grade\nTips: These projects for high school students can be adapted or extended to fit an advanced curriculum. They can also be altered in mid-stream if the student makes discoveries that suggest reshaping the project would be fruitful. For long projects, plan check-in points to help your student maintain focus.\n- Collect water from different locations (e.g., tap, puddle, pond, lake, ocean, rain), and test it to identify its chemical make-up. Create a chart or graph to show your findings.\n- Create a project to illustrate the importance of the ocean’s salinity to the world. Consider events such as Gandhi’s Salt Satyagraha as well as food fish, the water cycle, etc.
Each student will build a 3-D cloud that includes a labeled diagram of the water cycle, ten important facts about water, the definitions of eight water words, and a writing piece they publish describing how the water cycle works.\n- Understand how the same water has been used over and over again through the water cycle\n- Develop a list of ten important facts about water\n- Define and use words associated with the water cycle\n- Write a summary of how the water cycle functions\n- Follow step-by-step directions to create a three-dimensional cloud with several components\nLesson Plans for this Unit\nLesson 2: Send in the Clouds\nAfter students have assembled their clouds, hang them around the classroom and hallways for all to enjoy. After seeing our display, many of our staff members stop by each year by to tell the class what a great job they did.", "score": 33.00577138255163, "rank": 16}, {"document_id": "doc-::chunk-3", "d_text": "She has provided a valuable primer on hydrology, and a concise overview of the key threats to water quality. She includes lesson plans covering topics such as scarcity and pollution problems abroad, a description of the water cycle, local watersheds, water usage and purification. This is a very well organized unit that will provide students with a basic foundation of knowledge in the field of water resources.\nSharron Solomon-McCarthy teaches sixth grade students in special education and faces exceptional challenges in teaching science to these students with diverse learning disabilities. Her unit, \"Kids Conserve…Water Preserved,\" is designed to be multi-sensory. Students will prepare a PowerPoint presentation that describes a specific water management problem. She will also use anagrams to teach important vocabulary in the field of hydrology and water resources. Each student will also conduct a household survey of water consumption, to help them understand waste and the potential of conservation. 
This is well-organized, articulate and very thoughtful unit that will surely command the students' interests.\nRoberta Mazzucco teaches third grade and designed the unit, \"Water: Our Most Important Beverage.\" With superb organization, Roberta uses a question-based method to teach basic science about global water availability and cycling, the source of local drinking water, treatment options, basic problems of pollution, and the strengths and limits of government attempts to manage water quality. Each section includes relevant activities. She also discusses what the students can do to conserve and protect their water quality. The unit is clearly written and the documentation should help others who wish to strengthen their curricula.\nJoanne Pompano teaches science to visually impaired high school students. Her unit, \"The New Haven Oyster Industry and Water Quality\" includes accurate overviews of hydrology and ecology. Oysters are especially sensitive to some types of pollutants, and since they are commonly eaten raw, they may cause serious illness if contaminated with bacteria or viruses. Joanne reviews the key threats posed by toxic substances and wastes associated with agricultural and industrial activity. She also explores the special hazards posed by sewage treatment plant overflows that commonly occur during storms, flushing microbes and toxic substances into estuarine environments threatening both ecological and human health.\nLaura Pringleton teaches fourth and fifth grade general science and designed a unit entitled, \"Water Will, Water Way.\" She believes the marine environment provides a scientific laboratory to search for new pharmaceutical agents that could treat serious human illness.", "score": 32.65495087714539, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "Search results for 'all'\n$0.99This fun and fact-filled activity booklet features fascinating information and action-packed games about all aspects of the Hudson River. 
PDF ebook edition. Learn More\nContact Coordinator for PricingThe Project WET Curriculum and Activity Guide 2.0 continues Project WET’s dedication to 21st-century, cutting-edge water education. Now in full color, Guide 2.0 offers new activities on topics such as National Parks and storm water, fully revised and updated activities from the original Guide and the very best activities gathered from all of Project WET’s publications. Learn More\n$1.25Children ages eight to twelve explore the fascinating water science of Lake Tahoe in this colorful, engaging, and comprehensive activity booklet. Learn More\nIn order to provide educators with flexible training opportunities, Project WET now offers an Online Refresher Course for those who have taken a Project WET workshop and would like to obtain our new Curriculum and Activity Guide 2.0. The course is available in both Flash and PDF formats, is self-directed and self-paced, and you can begin the course before your book arrives.Learn More", "score": 32.49938506255972, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "Water Planet Challenge Workshop: Out the Spout & Down the Drain to b presented by The Character Education Partnership and Earth Echo International, Nov. 1 from 9 a.m. to noon. To reserve a please visit www.character.org/\nTeachers will learn how to integrate service-learning and citizen journalism into their academic lessons and how to provide students a comprehensive understanding of local water quality issues. The workshop will empower students to become community leaders to benefit the health of our water planet.\nThe informative workshop will be taught by Cathy Berger Kaye, renowned author and service learning consultant, and Kyra Kristof of EarthEcho International. 
During the event, they will highlight:\nOut the Spout--- why filtered tap water is always best; find out how you can become part of the Anti-Bottle movement that helps communities kick their plastic water bottle habit while raising money for water-related projects in their own backyard or across the globe.\nDown the Drain--- tools you can use to investigate what is going down your drain; develop and implement a plan to defend your drain (and others) from toxins.\nCitizen Journalism--- how multi-media documentation of the service-learning process enhances student achievement and gives youth a voice in protecting the environment as citizen journalists.\nAfter completing the event, teachers will have a curriculum they can easily implement, and all attendees will leave feeling energized and ready to take positive action.\nThe workshop is part of the National Forum on Character Education, held Nov 1-4 at the Renaissance Hotel Washington, DC. For more information about the conference, please visit www.character.org/\nThis year’s forum will bring together more than 800 educators, researchers, scholars, and business leaders all looking for the latest information on many of education’s hottest topics – including service learning. If you are interested in this workshop, you may also be interested in attending breakout sessions such as “In Youth We Trust,” also led by Kaye and “Fostering Good Character in a Globalized World,” led by Zoe Weil of the Humane Society.\nCharacter Education Partnership (CEP) is a national advocate and leader for the character education movement. Based in Washington, DC, we are a nonprofit, nonpartisan, nonsectarian coalition of organizations and individuals committed to fostering effective character education in our nation’s schools. 
We provide the vision, leadership and resources for schools, families and communities to develop ethical citizens committed to building a just and caring world.", "score": 32.25664400212248, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "Energy and Water\nSuitable for: Upper KS2\nThese three resource sheets combine to help a school investigate the issues behind energy wastage and include: investigating people's behaviour in the building, interviewing the adults in charge of energy use to discover how the system is managed, and an investigation of the hot and cold spots.\nSuitable for: Primary and Secondary Schools (KS 2/3)\nActionAid provide 10 lessons, each containing suggested lesson plans and resources around energy saving. There are films online. These could be compressed to fit into one half term, six lessons.\nSuitable for: KS2\n7 lesson plans for KS2 pupils covering hot and cold spots in the classroom, energy efficiency in the home, energy investigations and story activities.\nSuitable for KS 1 - 4\nOur planet website provides lesson plans, engaging video clips, quizzes and power points on climate change and renewable energy.\nSuitable for KS2 - 4, this renewable energy challenge puts students' competitive spirits to the test.\nStudents are given minimal materials and asked to design a wind-powered machine that can lift a weighted cup off the floor. Teachers' instruction sheets, related video clips and even certificates are provided.\nSuitable for KS3\nAn exciting challenge and competition to design and build flood-resistant homes: Flooding due to climate change can have a devastating effect on people's lives. 
Set on the fictitious island of Watu, pupils explore how STEM skills can be used to help communities be better prepared for flooding.\nSuitable for: KS2/3\nWorking in teams pupils explore issues around renewable energy in the developing world and build their own wind turbines.\nSuitable for KS2/3/4\nGo to the above link to send for WaterAid's free resource pack.\nSuitable for KS2/3\nA great exercise for learning about renewable energy.\nThe Water Crisis lesson plans are available for a variety of ages.\nSupporting Materials and Projects\nSuitable for: Primary and Secondary\nRange of information and newsletters suggesting ways to embed sustainable energy and water in the curriculum.\nSuitable for: Primary and Secondary Schools (KS 2 - 4)\nActivity cards and accompanying information sheets of various qualities. Good source of short class activities.", "score": 32.04893865799319, "rank": 20}, {"document_id": "doc-::chunk-2", "d_text": "As a class, they participate in an activity in which they discover the amount of drinkable water on Earth and are introduced to the water cycle. In groups, they make their own model of an aquifer and experiment with different objects to see the effects of pollution and how water is purified.\nStudents examine historical sources by analyzing images in a slide-show. In this historical research lesson, students view a PowerPoint presentation of images from the U.S. in pre-Columbian times. Students discuss the imagery among their classmates and analyze the relationship between the Native Americans and the colonists.\nThird graders create a KWL chart about water. In this environmental science lesson, 3rd graders demonstrate how much water on Earth is usable. They act out the different stages of the water cycle.\nThird graders investigate primary documents to explore the history in Ipswich. In this Ipswich activity, 3rd graders observe the Ebsco mural panels and gather information about Ipswich. 
Students work in groups of five to explore the panels. Students complete a worksheet on their panel.", "score": 31.197287252193618, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "It's a dirty job but someone has to do it -- S.K. Worm, the official annelid, or worm, of the Natural Resources Conservation Service helps students explore and learn about soil.\nFun S.K. Worm interactive features and activities\nColorful drawings and cartoon characters make this a hit as you learn about the importance of keeping our hometown water clean and clear. This features characters like Major Mulcher and Not-Till Bill and includes a 10 question water quiz.\nOrder the Resource Packet\nThis is a 32-page booklet aimed at a fifth grade audience and is full of fun facts about farmers and ranchers. It explains where food comes from, how it is produced and the benefits of conservation. (2007)\nDownload the For the Good of the People booklet\nThe NRCS water cycle poster is back by popular demand.\nThe newly-designed poster shows the elements of the water cycle through a diverse landscape. The back of the poster includes a variety of information and activities that teachers can use to get students of all ages engaged in water conservation.\nWatch animated water cycle on YouTube\nLearn more and/or purchase your copy\nNRCS provides educational resources to teachers, students and parents. 
These learning tools feature valuable information and agriculture/conservation resources.\nEducational links and valuable resources", "score": 30.99681436484815, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "- why the UK is under water stress, and the effect on resource management and supply and demand security for water supply,\n- about the environmental implications of water and sewage treatment in the UK,\n- how sustainable drainage solutions and water efficiency strategies link into the UK’s environmental goals,\n- the link between hot water and increased CO2 emissions and how personal water choices impact on climate change,\n- the three P’s\nInitially designed and delivered for Year 10 GCSE Geography students, it has also been adapted in a simpler form for Year 8 and Year 9 students. All pupils receive a shower timer.\nA sustainable water cycle (KS3,4)", "score": 30.938627206128036, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "Clean Watersheds for All Oceania\nFounder, Lisa Hinano Rey has been working with non-profits, community partners and schools on Oahu training youth ages 12 to 20 on the use of scientific equipment and methods for collecting water quality data since 2014. Students learn about water pollution, its impacts and scientific methods for measuring water quality. In addition to raising awareness about the importance of clean fresh water resources in vulnerable island communities, the program increases the interest of Oceaniaʻs youth in the disciplines of Science, Technology, Engineering and Math (STEM), and prepares students ages 12-20 for career possibilities. The Program aims to establish long-term, sustainable connections between researchers, public and private sector businesses and community. 
By using a citizen science approach that includes youth and community participation in the research process, the role of scientists in society and the scientific process can be demystified.\nWhile teaching with the tools and methods of cutting edge Western science, we value and underscore the ancestral knowledge of our tupuna in the Pacific Islands. Without their incredible deep ecological knowledge, our communities and the indigenous peoples of Oceania, would have not thrived in our islands for thousands of years. We make it a priority to center the knowledge of the tupuna in our curriculum by asking questions such as, How did tupuna acquire the knowledge to become good land managers?, How are the tupuna ways similar or different to today’s methods?, and What is the evidence that tupuna were more skilled land managers than we are today?\nCo-founder, Vehia Wheeler, has been establishing partnerships with agencies, business partners and community members in Tahiti and Mo’orea, for over a year now, who are excited to extend this program to French Polynesia. Our goal for 2020 and beyond is to serve the communities of French Polynesia educating the next generation of citizen scientists of Oceania using our ancestral knowledge in partnership with STEM methods.\nContributions to this endeavor help SOS MOOREA to purchase the scientific instruments and supplies for rigorous hands on learning.", "score": 30.611361907262268, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "Request an educational workshop\nWaterAid workshops are suitable for Grades 2 - 8. 
Where possible, activities are curriculum-linked and include educational games, quizzes and exercises to inspire group discussion.\nPlease use the form below to request a workshop, and we'll do our best to meet your requirements.\nAs all of our workshops are run by our small New York team, please allow a minimum of four weeks' notice for us to process your request.\nIf you'd like to find out more about our workshops, simply email Elena from the Community team.", "score": 30.585078124356674, "rank": 25}, {"document_id": "doc-::chunk-6", "d_text": "EVERY DROP COUNTS has been developed by the Alberta Irrigation Projects Association in conjunction with the United Nations Water for Life Decade, Iunctus Geomatics, and Alberta Education. The EVERY DROP COUNTS project includes a complete set of approved curriculum materials and resources for Alberta’s grade 8 teachers and students.\nInternational Development - Check the Youth Zone page on the Canadian government website to learn about support given to a number of water-related projects in developing countries.\nStream to Sea - K to 12 education to understand, respect and protect freshwater, estuarine and marine ecosystems, and to recognize how all humans are linked to these complex environments.\nEarth Force - Earth Force engages young people as active citizens who improve the environment and their communities now and in the future. Earth Force works with communities to support young people in finding their voice while assuming leadership roles in solving local environmental problems.\nThe Water Page - You’re just a kid, what could you possibly do as one person that might make a difference when it comes to saving water . . . 
visit this site for some ideas to get you started!\nCity of Edmonton - Offers curriculum-connected Grade 4, 5 and 8 programs that will help teach students about storm water and wastewater drainage systems.\nEnvironment Canada - Freshwater web site - Learn more about Environment Canada’s role in water management, and how the Department is helping to ensure that our water resources are used wisely, both economically and ecologically.\nThe Groundwater Foundation - Kids Corner - Educational information, games and activities for students and educators on groundwater\nChildren's Water Education Council - Information on how to host a Children’s Water Festival in your community.\nWorld Water Day – March 22 - The international observance of World Water Day is an initiative that grew out of the 1992 United Nations Conference on Environment and Development (UNCED) in Rio de Janeiro.\nWater Conservation Fun and Games for Kids - Move through the house, exploring each room to discover water saving tips and hidden videos\nWetlands (Ducks Unlimited Canada) - Parents - learn about wetland conservation with your family. Educators – action projects, lesson plans, and more for your classrooms.\nSave the Water - nonprofit organization dedicated to solving the world water crisis through excellence in water science research.\nStorm and Waste Water\nYellow Fish Road Program – Trout Unlimited Canada has proven to be a huge success in getting Canada’s youth involved in coldwater conservation.", "score": 29.80972695191915, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "Partners & Schools\nHigh School Volunteers\nEach May, over one hundred Grade 4, 5 and 6 students from area schools attend the Georgian Bay Water Festival at Killbear Provincial Park, a core area of the GBB. Local elementary schools participate in the festival on a rotating basis, with more than 1,800 students attending the water festival since 2007. 
The event depends on the leadership of 35 to 40 Parry Sound High School volunteers.\nThe Festival brings students together to spend a fun, educational day learning about water ecology and conservation, including local issues such as invasive species and water pollution. The activities help students become more aware of how water is used in their home, classroom and community. For example, a classic Water Festival activity is called From the Bay and Back Again. Students pretend to be one of twelve specific parts of a water cycle; from rain, to shower, to treatment plant, to Georgian Bay. Students determine the order of their water cycle, then come up with a sound and action that represents their word. Once the water cycle is acted out completely, students discuss components of the water cycle, how clean, treated water is delivered to homes, and how water is continually reused.\nAll Water Festival activities are hands-on; examples include: Rolling In The Watershed – a watershed-based activity; Murder Handshake – about invasive Phragmites; What a Waste – a water conservation contest; and No Water Off a Duck’s Back – about sustainable development, oil spills and chemical pollutants’ effect on waterfowl.\nKillbear Provincial Park staff lead a biology station where students learn about algae, plants and invertebrates in the food chain and can observe them in an aquarium. Then kids use nets to collect underwater creatures like water beetles, minnows, and dragonfly larvae. Another station is led by special guests from area First Nations who speak to students about Anishinabek culture and the significance of water.\nGBB staff thank all of the partners that help make this event successful, and are extremely grateful for the volunteer time from the Parry Sound High School students and Mr. 
Graham Poole.\nCheck out the Parry Sound Water Festival Guide – a ‘how to’ for running your own Water Festival!\nCheck how much water your family uses and compare it to the Canadian average.", "score": 29.745579610503224, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "From Sierra Crest to the Coastal Ocean Workshop | Manteca, CA | April 29, 2017\nProject WET’s mission is to reach children, parents, teachers and community members of the world with water education that promotes awareness of water and empowers community action to solve complex water issues. We achieve our mission by:\nProject WET USA\nLaunched in North Dakota in 1984, Project WET has developed a nationwide network of organizations and individuals dedicated to providing world-class water education to people of all ages. Our USA Network includes state agencies, municipal utilities, zoos and aquariums, faith-based organizations, colleges and universities and many other organizations with an interest in water and education throughout all 50 states.\nProject WET International\nWater knows no borders. Because water is by its nature international, we have been working around the world since 1995. We now have partners and host institutions in more than 60 countries, and our materials have been customized, localized and translated for countries ranging from China to Uganda. Just as with our U.S.-based water education efforts, our international work varies by area and topic. Many of these resources are available free of charge thanks to partnerships with international and U.S. 
organizations such as USAID and UN-HABITAT.\nOUR CORE BELIEFS\nOur publications, training workshops, global network and community events are grounded in the following core beliefs:\nWater connects us all\nWater moves through living and nonliving systems and binds them together in a complex web of life.\nWater is for all\nWater of sufficient quality and quantity is vital for all water users (energy producers, farmers and ranchers, fish and wildlife, manufacturers, recreationists, rural and urban dwellers).\nWater must be\nWater resources management and education are crucial for providing tomorrow’s children with social and economic stability in a healthy and sustainable environment.\nWater depends on\nAwareness of and respect for water resources can encourage a personal, lifelong commitment of responsibility and positive community participation.", "score": 29.65136161105579, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Water in all its forms is one of the most dramatic of today's arenas in which informed, responsible, and constructive actions are needed. Aquatic WILD uses the simple, successful format of Project WILD activities and professional training workshops but with an emphasis on aquatic wildlife and aquatic ecology.\nThe Aquatic WILD program and curriculum guide is available to formal and nonformal educators who attend an Aquatic WILD training through our Project WILD state partners. For more information, please click on “Get Training.” If you’re already an Aquatic WILD educator, feel free to browse the many resources found to the right for unit planning, WILD Work, In Step with STEM, field investigations, and more!\nWhat’s New for Aquatic WILD?\nOngoing updates to Project WILD and Aquatic WILD materials build upon developments in wildlife conservation needs as well as advances in instructional methodology in PreK through 12th grade education. 
A key milestone in the expansion of Project WILD is the new edition of Aquatic WILD K-12 Curriculum & Activity Guide.\nIncluded in the new edition:\nField investigation activities in Aquatic WILD enable students to learn methods and protocol for conducting scientific investigations, including how to formulate a research question, engaging in systematic data collection, and drawing conclusions.\nIn Step with STEM activity extensions make use of a variety of tools and instruments, from litmus tests to smartphone applications, and involve students in the application of technology, science concepts, and math skills as part of their problem-solving efforts (see the “In Step with STEM” box to the right for more information and resources).\n\"Working for Wildlife\" is a new activity that explores occupations in wildlife conservation through a simulated job fair and interview process.\nWILD Work career components are now included to tie real occupations in the fields of wildlife management and conservation with every lesson (see the “WILD Work” box to the right for links to wildlife occupation videos and resources).\nOutdoor components have been built into activity procedures and/or extensions to maximize student time outdoors.\nActivities on fish conservation and angling include \"Gone Fishing,\" an activity that combines angling and student investigations of local fish species, and \"Conservation Messaging,\" which involves students with video or online media in order to deliver a conservation message about a selected species or issue.\nNew reference information for educators includes planning for teaching units, methods for conducting site inventories, resource pages, and expanded grade level correlations.", "score": 29.628363334730594, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "We then, after discussing the foregoing with the teachers or organizers select the sampling site or sites based on these points; collectively we visit the site(s) before the field 
trip day.\nSect. II: Choosing the Sampling Stations at the Field Trip Site:\nThe number of stations depends on the number of students participating, and what questions you address with your program. Examples:\n- If you want to establish baseline information on the water body’s overall health, select a wide variety of stations to ensure random sampling.\n- If you know where a discharge source comes into the waterway, set up three sampling stations: one control station above that point, a second one immediately downstream from the discharge, and a third further downstream where the water has at least partially recovered from the impact.\nSect. III: Choosing Your Project Goals:\nResearch and obtain (or have the students gather) information about the waterway such as maps and information on the watershed boundaries, geography, geology, recreational uses, fish and wildlife data, water quality data, demographic information, as well as development and employment information.\nThe more information you have at hand, the easier it is to get a clear picture of the watershed. Based on the information you find about your waterway, brainstorm ideas to determine what project the young people want to address.\nSect. IV: Classroom Presentations Prior to the Field Trip\nThe following classroom presentations may assist the teacher or organizer in preparing Sect. III. The teacher or field trip coordinator will present either a printed copy or classroom presentation of the following OWLS™ material.\nPurpose: Bring awareness, prior to the field trip, of the principal purpose of the field study and provide participants with insight into the topics that will be covered. 
The STOKE™ in-classroom follow-up program is optional but recommended.\nK-4 classroom pre-field-trip presentation\n[Time allotted 30 minutes] Please contact us on the form below for the entire Free Teachers Aid\nWhere does our drinking water come from?\nThe Water Cycle:\nTEACHERS NOTE: Field Trip Monitoring Equipment\nOWLS™ provide all equipment that is necessary for the children’s water sample collection and analysis lab. Safety is one of our main concerns and all equipment is top of the industry standard when it comes to your child’s safety.\nSect.", "score": 29.034897994365704, "rank": 30}, {"document_id": "doc-::chunk-2", "d_text": "US Geological Survey: The Water Cycle The Georgia state office of the USGS provides a very basic and kid-oriented site to explain various aspects of the water cycle, including following a single drop of water through the cycle's main stages.\nThe Groundwater Foundation Kid's Corner This site includes a vast array of information on both groundwater issues and how groundwater relates to the hydrologic cycle. The site also provides information and activities aimed at both students and educators.\nUSGS: Science in Your Watershed The US Geological Survey provides a site to help find scientific information on watersheds across the United States. 
This information includes links to help locate a watershed in your area, science and information related to each watershed, and various data measurements to help learn more and understand a particular watershed.\nThe Urban Water Cycle This diagram illustrates how humans affect water cycle functions in an urban city setting, and shows the unique features it encompasses, including sewage, irrigation, and water treatment.\nDATA & MAPS\nEPA: Surf Your Watershed This EPA website has a clickable map of the United States to help locate information about all the watersheds in the nation, with links to additional information regarding the profile and health of each one.\nFOR THE CLASSROOM\nThe Whole Water Cycle This activity, by the University of Washington's Department of Atmospheric Sciences K-12 pages, allows students – both young and old – to perform an experiment using a plant to observe the entire water cycle.\nThe National Health Museum – Activities Exchange: Who Dirtied the Water? Middle and high school students will learn about water pollution and filtration in this hands-on activity modified by Carmen Hood from the SEER Water Project and Ginger Hawhee and Sandy McCreight from Omaha North High School. The activity also improves problem-solving skills by having students create their own filtration techniques and discuss possible solutions for water pollution.", "score": 28.962910225175694, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Curricula For Teachers\nVarious classroom curricula focusing on different water issues are available to teachers throughout Santa Barbara County. There is a small fee associated with obtaining most of these materials. Ordering information is listed for each curriculum guide.\nCare For Our Earth Grants\nK-12 teachers can apply and receive $300 for activities, field trips, or educational supplies that will help you teach your students about water use and conservation, energy efficiency, or traffic reduction. 
The 2018 Care For Our Earth grant application is simple and easy to apply for. Download the Application here. All applications are due Friday, November 9, 2018. To view projects of previous grant recipients and to apply, visit the County Office of Education website.\nCurriculum Workshops and Guides\nProject WET and EEI Hands-on Educator Workshop and Teacher Training\nTeachers can attend the workshop for free and receive free copies of the Project WET Curriculum & Activity Guide, EEI (Education & the Environment Initiative) curriculum units for your classroom, and curriculum from the Department of Water Resources. Project WET Guide 2.0 contains 65 complete lesson plans on water. The curriculum is designed to meet State Common Core Content Standards. The Curriculum Matrix helps educators determine specific skills, topics, and standards. The workshops are taught by trained and certified educators from the CREEC Network. For more info and upcoming workshops, visit the CREEC website.\nLesson Plans from the California Foundation for Agriculture in the Classroom (K-12)\nThe California Foundation for Agriculture in the Classroom (CFAITC) offers a number of lesson plans focusing on various agricultural issues. Lessons and units are correlated to the frameworks for California Public Schools and have been written, field tested and reviewed by educators. Cooperative learning, individual and group problem solving and critical thinking activities encourage students to \"construct\" their own knowledge about agricultural concepts while developing skills in science, mathematics, English/language arts, history/social science, health/nutrition and the visual and performing arts. For more information, including downloadable lesson plans, or to order print copies, please contact CFAITC at (800) 700-AITC or www.cfaitc.org.\nGround Water Education (7th-12th)\nThis package offers lesson plans and materials that focus on groundwater, aquifers, and groundwater pollution. 
Laboratory sessions, classroom activities, reproducible worksheets, overhead masters and background information are included in the Teacher's Guide.", "score": 26.9697449642274, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "If you are looking to inspire children through aesthetic materials, Waseca Biomes is always a winner. After having read about the impressive story of water, my 7 year old learner remembered the “hydrolic” cycle he learned with his primary teacher. They conducted experiments, constructed projects, prepared a dramatization, and sang a song as they danced:\n🎶” Water travels in a cycle, yes it does! It goes up as evaporation, forms clouds as accumulation, and goes down as precipitation, yes it does!” 🎶\nSo here it is, the gorgeous Water Cycle mat from Waseca. It comes, as always, with a teacher’s guide, and several manipulatives. Waseca is also great at displaying in all honesty the content of their materials on the Website (get a $15 off code by entering your email).\nThe guide suggests introducing the materials in this order:\n- Water Cycle facts\n- Water Cycle work\n- Grammar cards\n- Weather and Climate\nThe first presentation on atmosphere contains valuable information such as the etymology of the layers' names, distances between each layer, elements, types of man-made devices found in each layer (jets, satellites…), and the effect of the sun’s rays. My 7 year old cannot read the cards and comprehend them as well as his 9 year old sister. You must present the materials and explain what the children might be able to do independently. That is why I jazzed up the presentation by adding a manipulative from ETC Montessori.\nThe second presentation invites children to explore the four stages of the water cycle (evaporation, condensation, precipitation, collection). 
The Water Cycle mat contains such great details; it is an asset to illustrate each process, and keep learners engaged.\nThe Water Cycle cards also bring awareness of the scarcity of fresh water. I had the children observe as I was drawing a diagram of the water available for us, based on the information we were reading. We read, 96.5% of the water is in the ocean, 1.7% underground, 1.7% is frozen. What’s left to drink, right?!\nFor the third presentation, you have arrows to read and place on the mat. The control of error is the picture on the back of each arrow (genius!). There’s also a control of error located at the back of the teacher’s guide.", "score": 26.9697449642274, "rank": 33}, {"document_id": "doc-::chunk-3", "d_text": "They have programs for K - 12 that encourage your students to discover the wonder of ocean science without leaving the school grounds.\nGame of Life\nWhat does overfishing mean? What are the effects of overfishing on fish stocks? This grade 6-8, standards-based lesson plan tackles these difficult questions.\nLearn how to create and see samples of program evaluation plans and an environmental education literature review. There are also tools and techniques for evaluation, examples of objectives and goals, an evaluation glossary and an online resource guide to evaluation.\nChannel Islands National Marine Sanctuary is located off the coast of Santa Barbara and Ventura counties in California. Download this fun coloring book to discover the sharks and rays of the Channel Islands.\nTeach your students the seven essential principles of ocean literacy with these colorful and engaging cards that can be used following the \"Each One, Teach One\" methodology.\nThe movie \"Nim's Island\" tells the fictional story of an adventurous girl named Nim, who lives on a remote island in the South Pacific. 
Check out the fun educational resources related to this movie.\nThis activity book teaches students about the seabirds and shorebirds that live in and migrate to Hawaii (pdf, 3.7MB). You can also visit the Hawaiian Islands Humpback Whale kid's page to download fun activities and posters.\nLearn more about the National Marine Sanctuaries of the West Coast through this downloadable west coast field guide. Explore the habitats, wildlife and culture of these five sanctuaries, and how they are all interconnected by ocean currents. Also discover how to practice daily conservation and get involved.\nCheck out Jean-Michel Cousteau's \"Ocean Adventure\" as KQED continues to produce short videos appropriate for bringing the ocean into your classroom.\nThe Dive into Education ocean science workshop provides teachers with educational expertise, resources and training to support ocean and climate literacy in the classroom. Workshops have been held in Hawaii, Georgia and American Samoa. Check out the archive to find out more.\nAre you interested in learning about rocky intertidal and sandy beach monitoring techniques? Would you like to set up a field monitoring site with your students? If so, check out the professional development opportunities we have available.\nThe annual Down Under, Out Yonder (DUOY) workshop is for K-12 and college entry educators nationwide.", "score": 26.9697449642274, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "The ACPWQ offers presentations and lessons about water resources for any school or civic group. These presentations can be either hands-on or lecture format. They can be tailored to the water-related subject matter that your group would most like to explore. Example topics include: Conserving Water, What's a Watershed? and Green Landscaping.\nContact us for more information or to schedule a presentation.\nThe City of Fort Wayne offers tours of the Three Rivers Water Filtration Plant and the Water Pollution Control Plant (WPCP). 
These tours are usually offered to students from third grade level upwards. This is a wonderful way to teach young residents where their drinking water comes from and how wastewater is handled when it leaves their homes.\nFor more information about:\n- the Three Rivers Water Filtration Plant (Drinking Water) tours, call 427-1314\n- the Water Pollution Control Plant (Waste water) tours, call 427-2427\nThe St. Joseph River Watershed Initiative (SJRWI) offers presentations about the St. Joseph River Watershed, the source of Fort Wayne and New Haven drinking water. These presentations are offered to groups of all ages.\nFor more information about the St. Joseph River Watershed presentations call 484-5848 ext. 120.", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Condensation. Precipitation. Evaporation.\nOver the past three weeks, Episcopal sixth graders have learned all about water, the water cycle and the science behind this precious natural resource. Science teacher Stacy Hill covered water in the atmosphere, in the ocean and on the surface of planet earth. But this lesson went well beyond science, even including a message of empathy.\nchallenges created from the lack of water. The stories of these children introduce readers to the fact that not everyone has fresh water at the ready. Each chapter closes with a cliffhanger, whether it’s a character becoming sick as a result of drinking dirty water or a character grappling with dangerous wildlife or armed soldiers. Hill says her students are captivated by the book. Each class period they want her to read more and they want to know what happens to Nya and Salva. They also want to help, and ask questions such as:\nThe book also helps students connect and relate to today’s events. Hill says after students heard about the recent water shortage in South Africa she received several questions about rationing. 
Closer to home, students discussed the state of Baton Rouge’s water and the Southern Hills Aquifer it depends upon. In addition, Hill makes use of technology as students use Google Maps to measure the distance from the aquifer to their own home or even Woodland Ridge, and then map the distance to the nearest bottled water supplier. Students were then asked to think about the distance a bottle of water travels and the price tag associated with it, versus simply turning on their Baton Rouge tap. The classes also discussed the importance of water conservation and the saltwater intrusion that is occurring within the Baton Rouge aquifer as the result of the 150 million gallons of water used each day. All of this in a simple lesson on water.\nAt the end of the lesson, students had the opportunity to take action. Classes constructed water filters using a water bottle and materials such as sand, gravel and coffee filters. They formed a hypothesis as to which material would be best at filtering the water. They wrote lab reports. They measured and learned about turbidity.\nUltimately, students gained new scientific understanding. However, that won’t be all they take away from the experience. Likely, they will remember the importance of having clean water and what it’s like for those who do not. They may also remember that even their own water could one day be at risk.", "score": 26.837222306146906, "rank": 36}, {"document_id": "doc-::chunk-2", "d_text": "Students listen to, discuss and reflect on the stories and then create a visual understanding of each.\nSee IDEAS YouTube: Seven Sacred Grandfather Teachings\nWater and the Earth with Chris Rawlings\nGrades K to 8 | performance with up to 350 as duo\nScience: Life Cycles & Systems, Earth & Space, Energy & Matter / Geography / Citizenship Education\nFrom puddles to pebbles, oceans to mountains, children are surrounded by rocks and water no matter where on the earth they live. 
Drawing on both original and traditional songs, students sing along with Chris as they learn about the changes that occur when the forces of nature are applied to rocks and water, the significance of water on our ecosystems and ultimately our wellbeing.", "score": 26.814816938288082, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Water pollution curriculum, Active citizens curriculum\nImagine a classroom where students are mastering social studies and science content as they:\n- create digital public service announcements that educate the local community about how to decrease water pollution.\n- propose solutions at town hall meetings to keep Puget Sound healthy.\nWater, Science, and Civics engages students in these types of lessons. Not only do students master standards, but they also develop 21st century skills related to digital literacy, media literacy, critical thinking, problem solving, collaboration with peers, and taking multiple perspectives. They become thoughtful leaders who participate in problem-solving activities similar to ones they will encounter as active citizens in the future.\nFacing the Future, Western Washington University, \"Water, Science, and Civics: Engaging Students with Puget Sound, An Interdisciplinary Curriculum Recommended for Grades 6–8\" (2011). Facing the Future Publications. 13.\nCopying of this document in whole or in part is allowable only for scholarly purposes. It is understood, however, that any copying or publication of this document for commercial purposes, or for financial gain, shall not be allowed without the author’s written permission.\nCreative Commons License\nThis work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License", "score": 26.609527212328054, "rank": 38}, {"document_id": "doc-::chunk-1", "d_text": "The lowest division is aimed at children in pre-kindergarten through second grade. The next is for third through fifth graders. 
There are also lesson plans for students in middle and high schools.\nHinkle-Sanchez said the program for the youngest children teaches very basic lessons in personal space and the concept of “good touch/bad touch.”\nAs students get older, Hinkle-Sanchez said, they’ll be exposed to a wider range of topics, such as respecting others’ boundaries and concepts surrounding Internet safety.\n“We just wanna create this ability for them to be safe,” she said.\nHinkle-Sanchez added that the program isn’t a sex education class and is focused entirely on promoting safety.\nAnother key issue in the fight against sexual assault is the issue of disclosure.\nFor instance, the data available from the Guam Police Department, by its very nature, only includes the number of rapes that are reported to law enforcement.\nAs a result, it’s unknown exactly how many sexual assaults occur but go unreported.\nNot disclosing abuse\nA survey by the U.S. Justice Department a couple years ago found that only 28 percent of people who said they had been raped also said they reported the attack to law enforcement.\nHinkle-Sanchez said underreporting is an issue here as well.\n“It’s always been an underreported crime because not many victims know where to go to get help, or know what to do or they’re scared to get help,” she said.\nIn many cases, she added, attacks go unreported because they occur between family members.\n“Sometimes they do tell another family member and, because it’s family, they try to do what they can to protect inside without getting law enforcement and the judicial system involved,” she said.\nThat leads to issues of parents who aren’t effectively protecting children against attackers within the family.\n“And so the victim feels hopeless,” she said.\nIn order to address that as well, the proposed curriculum will also involve the issue of disclosing abuse when it occurs.\n“What can you do if something happened to your friend? Who can you go to?” Hinkle-Sanchez said. 
“What can you do to help your friend?”\nTeaching that, she said, would ideally resolve the issues of children finding themselves with nobody to whom they can turn.\n“There’s your teachers, there’s your counselors. Develop a list of people you can trust,” she said.\nIt also instructs teachers on what signs to look for, and how to recognize possible abuse.", "score": 26.28933394828606, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "Here are some fun ways you can teach your children to be good stewards of this precious resource everyday.\nIf you’re taking a car trip this summer, kill some time and keep the kids occupied with a fun “I Spy” game for water.\nAs you drive along, have the kids “spy” people using water. They might see a pool (people playing) or someone washing a car (cleaning).\nSighting water will draw their attention to the many uses of this resource and let them see first hand if the water is being wasted. Count the number of “sightings”\nand see who has a better water-eye!\nAs they spot each use of water, give them an extra point for spotting “water hogs” or those instances where water is being wasted.\nAsk them to think of ways that the water could have been saved in that instance – such as shutting off the water hose while soaping up the car.\nHave them close their eyes and imagine a globe or a map of the world. Ask them to guess how much water covers Earth. (The answer is 70 percent).\nThen ask them how much of that 70 percent we can actually use. They will be surprised to learn that not even one percent is usable freshwater.\nAt home, try setting up a Water Scavenger Hunt. Count the areas or items in your home that provide or use water (sink, tub, washer,\nrefrigerator, coffee maker - don’t forget your outdoor sprinklers or spigots).\nThen send the children off to identify each item in your home. 
When they're done, count how many items they can match to your list.\nThey may find items you’ve missed!\nOfficially proclaim your child the household “Water Detective” and provide them with a kit to do their job.\nPut together a bucket that includes a cup that measures ounces, a kitchen timer and food coloring. Have them search for clues that\na “water hog” may be lurking about by checking to see if your sink, toilets, showers, or outdoor spigots have leaks.\nIf there is a drippy faucet, they can see how much is wasted by placing the measuring cup under the faucet and setting their timer for 10 minutes.\nWhen the timer rings, check the amount of water in the cup. See if they can multiply the number of ounces to illustrate how many gallons are wasted\nin a day.", "score": 26.05206819456215, "rank": 40}, {"document_id": "doc-::chunk-1", "d_text": "|Down the Drain: Conserving Water By Chris Oxlade\nToday, we use water for almost every part of life. Down the Drain is a book for taking action! With the recommendation of a water diary, Oxlade encourages you to put into perspective how much water you use in your daily life in a 24-hour time span, including why our bodies need water to function.\n|A Kid’s Guide to Climate Change and Global Warming: How to Take Action By Cathryn Berger Kaye\nCarbon footprints, alternative energies, deforestation, and water conservation are just some of the issues related to climate change and global warming addressed in this book. Kids explore what others in the world have done and are doing to address the problem, find out what their own community needs, and develop a service project. Includes a lot of inspiration to get out there and make a difference!\n|Water: Reduce, Reuse, Recycle By Alexandra Fix\nRead Water to learn about the importance of water in our lives! Discover how water gets from lakes and rivers to our homes, and where water goes once it is poured down the drain. 
Learn what happens when water is wasted, and ways to reduce your own water intake.", "score": 25.9739670708573, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "In addition to the resources you can find here on DrinkTap, below are links to other sites that have information and materials available to\nhelp you learn more about water.*\n- Drinking Water Week is celebrated by the American Water Works Association and water providers all across North America during the first full week of May every year. Updated materials are available each January.\n- Water's Value AWWA and our partners at the Water Environment Federation have developed a suite of public outreach materials that are available free of charge. The materials, which can be downloaded below, help communicate the value of water and wastewater service and the need for infrastructure investment.\n- Free Classroom Lesson Plans from the American Water Works Association include presentations for elementary and high school children, along with lesson plans, handouts and a quiz to assess learning objectives.\n- Aquapedia is an online water encyclopedia from the Water Education Foundation.\n- Bristol Water’s conservation and education materials.\n- The Discovery Channel School offers STEM educational resources, including activities for water.\n- EPA's Environmental Education Center offers basic and clear information regarding drinking water and the environment.\n- EnviroScape® sells hands-on watershed education products that connect what we do on land to what happens in our rivers, lakes, oceans and groundwater.\n- Getwise.org is a conservation education program.\n- H2O for Life provides opportunities to young people to raise awareness about the water crisis to provide funds for water, sanitation and hygiene education for a partner school in a developing country.\n- H20 University is provided by the Southern Nevada Water Authority.\n- Project WET offers classroom-ready teaching aids for water resources 
awareness.\n- Savingwater.org is sponsored by the Saving Water Partnership, a group of local utilities that fund water conservation programs in Seattle and King County.\n- SAWS Education is provided by the San Antonio Water System.\n- The Water Page offers conservation tips for kids.\n- The Water Resources Education Initiative is the U.S. Geological Survey’s educational outreach program.\n- WET in the City - Water Education for Teachers - seeks to advance water education and environmental stewardship among urban youth.\n- The USGS’s Water Science School is an interactive center focusing on many aspects of water.\nIf you know of a good water website, please let us know!\n*Posting links to these websites does not imply endorsement of their products or services.", "score": 25.65453875696252, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "For the 2018-19 school year, Community Waters is using the Land and Water Science Kit materials with some additional items added (additional bins, sponge pieces and rain jars).\nIf the additional materials aren’t provided with your kit, please contact us at firstname.lastname@example.org and we’ll bring them to you at your planning session. We also provide your school specific teacher’s guide and printed color student maps at your planning session.\nMaterials Used in Each Lesson\nSome of the materials provided as a part of the Land and Waters kit aren’t used for this unit. This list of materials is to help you plan for when you will need materials and which items can be left in the bins. We suggest setting aside the materials you won’t need. You could also move the materials you won’t need until lesson 9 into their own bin.\nSetting up Stormwater Tubs ASAP\nSet up your stormwater tubs before you start the unit so you will have plants growing in three of them by the time you get to lesson 4. 
This process is described in the “Pre-Unit Prep & Take Home Assignment” section in the teachers manual and there is a teacher training video that walks through the tub set up.\nHere is a revised Inventory for the “Land and Water” kit to use when returning your kit (highlighted are the added materials). This year only: please print the sign on the last page of the inventory (or hand write it) and include it with your kit so facilities knows it includes Community Waters materials.\nAll permanent materials should be cleaned, dried and put in the Land and Water bins before your kit pickup date. This includes the sponges and rain jars but not the soil mix and other things like tin foil that you have used (see inventory list if you are unsure what to include).\nThe soil mix will clog up sinks! You can disperse the soil mix around bushes in your school’s landscaped area but not in mowed areas as the gravel can be kicked up by lawnmowers. Plants could go in school compost or also outside under bushes. We recommend rinsing materials into buckets to empty outside so the sand doesn’t clog your school’s pipes.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "Download Free Curricular Materials to Learn More About Ocean Health\nPacific Science Center has developed a K-8 Teacher’s Guide and an Informal Educator’s Toolkit for you to use in your classrooms, museums, or after school enrichment programs. Lessons and activities in these materials are designed to complement the Around the Americas expedition and enhance student learning about ocean health, including ocean acidification, coral ecology and sustainable fishing.\nAll lessons are aligned to the Centers for Ocean Science Excellence in Education (COSEE) ocean literacy principles and U.S. and Canadian standards. 
Activities include extensions, resources and background, and ideas to adapt the lesson to younger students.\nAround the Americas K-8 Teacher’s Guide\nThe Teacher’s Guide includes six inquiry-based lessons designed for use in the classroom. The lessons target grades 6-8 and include suggestions for teaching younger students.\nClick HERE to download the full guide (9 MB .pdf file), or download each lesson (.pdf files) separately by clicking on the links below:\nIntroduction and Route Map\nLesson 1: Coral, Carbon Dioxide and Calcification\nLesson 2: Inquiry into Ocean Acidification\nLesson 3: Ocean Currents and Marine Debris\nLesson 4: Aerosols and Clouds\nLesson 5: Snapping Shrimp\nLesson 6: Sustainable Fisheries\nRecommended Teacher Resources\nAround the Americas Informal Educator’s Toolkit\nThe Informal Educator’s Toolkit includes three activities and is intended for use in museums and after-school programs.\nClick HERE to download the full guide (7 MB .pdf file), or download each lesson (.pdf files) separately by clicking on the links below:\nWe invite you to provide feedback on the lessons by emailing firstname.lastname@example.org.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-1", "d_text": "National Assessment of WASH in Schools-Belize\nData from the 2009 assessment of water, sanitation and hygiene (WASH) in Belizean schools are analyzed and presented with regard to the state of WASH facilities, practices and education, as well as results from the 2011 evaluation of the “WASH Project” in Southern Belize. 
The principal challenges for WASH in Belizean schools and elements of effective school WASH administration are also presented.\nWASH in Schools Monitoring Package\nThis package consists of a teacher’s guide and tool set for the monitoring of WASH in Schools by students, including observation checklists, survey questions and special monitoring exercises. The modules are designed to gather key data on all components of WASH such as water, sanitation and handwashing facilities; hygiene knowledge and practices; waste disposal and operation systems.\nWater, Sanitation and Hygiene for Schoolchildren in Emergencies: A Guidebook For Teachers\nThe guidebook provides the information needed to ensure that every child knows about water, sanitation and hygiene. It provides guidance on safe WASH behaviours that help children, families and teachers stay healthy and avoid life-threatening diseases. The activities can be adapted based on the country's needs.\nFacts For Life\nFacts for Life has been developed as a vital resource for those who need it most. It delivers essential information on how to prevent child and maternal deaths, diseases, injuries and violence.\nResearch Report Series Communication for Development Study: A Culture of Rights\nThis Communication for Development Study centered on a study of the knowledge, attitudes, and practices (KAP) specific to Children’s Rights that will inform the Communication for Development (C4D) Strategy and that will cut across program sectors in Belize. 
It aimed at identifying gaps that act as barriers to the full realization of Children’s Rights.\nThe World Fit for Children\nA document that contains the Convention on the Rights of the Child; Optional Protocols, MDG Goals, A World Fit for Us, A World Fit for Children 202 and a World Fit for Children Plus 5.\nSamuel Haynes Institute of Excellence: Child-Friendly Space\nThe Samuel Haynes School of Excellence is an institution where continuous improvements in the academic achievements of its students and the increased participation from community members are common. It serves as one of the few places in Belize City where children can have a safe space to study and to play; the organisation promotes positive community change and ownership.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "Explore, read, play, invent, build and learn — all about water and the rivers and streams in your community\nWelcome to River Rangers!\nThe best way to get kids learning is to build on their curiosity and interests. The River Rangers program is kid-centered with an emphasis on inquiry and creativity.\nA 5-day program: adapt it to fit your schedule!\nWe’ve designed the program to be user-friendly and adaptable. Use the materials each day for five days in a row, or once a week for five weeks (or any other way you like) to add hands-on learning to your summer programming.\nRiver Rangers Toolkit: National Version\nThis version of River Rangers includes all of the topics, book recommendations, hands-on activities, STEM vocabulary, writing ideas, links to kid-friendly websites and apps, and appendix materials. It does NOT include resources specific to the DC Metro area.\nRiver Rangers Toolkit: DC Metro Version\nThis version of River Rangers includes all of the topics, book recommendations, hands-on activities, STEM vocabulary, writing ideas, links to kid-friendly websites and apps, and appendix materials.\nBonus! 
DC Metro connections: You'll find Potomac River and Anacostia River history, news, and other local resources. We've also shared ideas for outings in the DC area, related to the topics you'll be exploring.\nNote: Be sure to view and print from Adobe Reader (or an alternative PDF reader), not your web browser.\nIf you want to choose individual sections of the DC Metro toolkit, just select any of the links below to download and print a PDF.\n- Day 1: How rivers are formed\n- Day 2: River habitats: who lives here?\n- Day 3: People on the river\n- Day 4: The water in my cup\n- Day 5: Protecting our water\n- Water basics (the water cycle and more)\n- Facts about water and rivers\n- Books about water and rivers\n- Water words\n- Printable templates\n- Reading Rockets tip sheets\nGetting yourself ready\n- You’ll find an introduction to the concepts covered and recommended books for each day, as well as a list of questions to guide explorations and experiments, and a list of \"water words\" that kids might not be familiar with.\n- Start by gathering books from the list provided from your library.\n- Choose fiction and nonfiction books from the list provided.", "score": 25.559852359444374, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Safe Water School\nWelcome to the Safe Water School\nAlmost half of the population in developing countries is currently suffering from water-borne diseases. The burden is extremely high: about two million people die annually,mostly young children.The Safe Water School aims to improve this situation by collaborating with schools in the\nwith an active knowledge transfer to the community.This manual, developed for primary schools in developing countries, is a working tool for teachers, school directors and school staff to turn schools step-by-step into Safe Water Schools. 
It is designed jointly by SODIS and the Antenna Technologies Foundation and isbased on extensive experience with school programmes in Bolivia and Nepal.We recommend the following use of the manual:\nprepare and conduct the lessons is provided at the beginning of the chapter “SchoolLessons”.\nFor a comprehensive understanding of the Safe Water School, we recommendschool directors and school staff to read entirely the chapters “Safe Water School”,“Infrastructure”, “Application” and “From School to Community” and also the back-ground information in chapter “School lessons”.On our websitewww.sodis.ch/safewaterschool", "score": 25.000000000000032, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "Educating Young People About Water (EYPAW) guides and water curricula database provide assistance for developing a community-based, youth water education program. These resources target youth and link educators to key community members to build partnerships to meet common water education goals.\nAlthough last revised in 1995, EYPAW resources still offer helpful strategies for planning and evaluating youth water programs in the 21st century, including checklists, references, partner lists, and community action education materials.\nFind and select water education curricula.\nUse the Educating Young People About Water database to find a curriculum that is appropriate for learners. Curricula listings include education topics and goals, and other unique resources useful in creating a water education opportunity or event.\nWork in partnership with local experts.\nSee A Guide to Program Planning and Evaluation. Use this guide to create a water education program that directly related information and skills to community water issues and inspires personal action to address community needs. 
Learn to forge links with community partners and identify community or school-ground natural settings where students can practice and reinforce skills taught in the classroom.\nDevelop a program strategy appropriate to your situation.\nSee A Guide to Unique Program Strategies. This guide provides brief case studies of 30 unique water education programs that have occurred around the country in a variety of settings including after-school clubs, summer programs, museums, nature centers, festivals, and campaigns.\nSet goals and find materials to match your program plan.\nSee A Guide to Unique Program Strategies with an emphasis on nonformal and school enrichment settings. This guide offers a checklist of goals for youth programs that will lead to their understanding of key water concepts.", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "The Clean Water Program contracts with excellent environmental education organizations to provide free programs to schools within Alameda County. Listed below are the providers selected for school year 2017/18 and summaries of their programs.\nThe Storm Drain Rangers Program educates 3rd through 5th grade students in Alameda county about reducing storm water pollution. Through hands-on investigations, students learn about watersheds, neighborhood-creek-bay-ocean connections, urban runoff pollution and storm water pollution prevention. The Storm Drain Rangers Program consists of three classroom lessons in which students are taught environmental concepts that address California State Science and Social Science Standards. Hands-on activities include creating a clay model of the Bay-Delta Estuary ecosystem, conducting a neighborhood storm drain pollution survey and clean-up, and creating colorful informational posters to teach others about storm drain pollution.\nYour K-3 students loved Froggy’s earlier adventures in Froggy to the Rescue, Froggy Takes a Vacation and last year’s Froggy Talk Radio. 
His campaign to keep Alameda County’s water nice and clean continues in The Froggy Report.\nThe number of programs offered is limited, and slots are allocated by area on a first come first served basis. If you would like to sign up for a program for school year 2017/2018, please reach out to the provider or contact us.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "Resources and Planning\nEducational Resource Packs and Teaching Aids\nThe underwater world supports learning across various subjects headings within the curriculum, including Science, Geography and Art and provides a fantastic opportunity for cross curricular learning in an inspiring environment.\nOur NEW, fresh, fun and interactive resource packs provide teacher guidelines and student worksheets. Download the packs and print one copy for each chaperone. The last two pages of the document contain the student explorer sheet that students can complete during (or after) their self-guided journey at the aquarium.\nBefore your visit to SEA LIFE Arizona, download our classroom teaching aids. These PowerPoint slides aim to inform, encourage discussion and develop your students’ natural interest in the underwater world.\nAll of our educational materials are based on activities and projects that your class can do before, during and after your visit.\nAll education programs and resource packs meet the Arizona State Science Standards and are adapted to the student's grade level. Following the lesson, the class embarks on a self-guided tour of the aquarium where they come face-to-face with thousands of sea creatures in displays that recreate their natural habitats.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "Nutrition and Water\nIn Guatemala, 24% of children under five are underweight and 46% suffer stunted growth caused by chronic malnutrition. 
Many of our students receive little food at home, so healthy and nutritious school meals are vital to their wellbeing; not only do they help with physical development, they also help them stay focused at school throughout the day.\nPrior to our water filter campaign, none of the families we work with had access to safe water. The water they drank and cooked with was dirty and dangerously contaminated and, as a result, the children were frequently ill and absent from classes.\nNow, 95% of our families have water filters at home and we'll continue providing them to new ones.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-1", "d_text": "The “2010 Census in Schools: It’s About Us” program was introduced to kindergarten through 12th grade principals in the United States, American Samoa, Guam, Northern Marianas Islands, Puerto Rico and the U.S. Virgin Islands in a letter in April.\nThe program is designed to provide students in grades K-12 with information about the importance of the 2010 Census.\nThe Census Bureau hopes students will share this information with their adult household members and they, in turn, will answer the census questionnaire.\nThe federal government conducts a census every 10 years to collect information about the people and housing of the United States.\nThe Census Bureau will send Census in Schools program brochures to all grade 9 through 12 principals, social studies department chairs and school service coordinators this summer.\nThe Census in Schools program will offer:\n- Age-specific educational materials, which include maps displaying population counts and other demographic information, and lesson plans grouped by grade and correlated to national standards for math, geography and language arts.\n- Kits for principals, containing maps, a program brochure, information about online lessons, mini-teaching guides and family take-home kits.\n- Online resources for teachers, including lesson plans, family take-home kits, 
event ideas and census data to teach students and their families about the census’ role in American history, current events and more.\nThe interactive, user-friendly Census in Schools Web site features memory games, word finds, state facts, coloring pages and research project ideas.\nAll “2010 Census in Schools: It’s All About Us” program materials will be available online for educators, students, parents and the public in August.", "score": 23.390472787672042, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "This database of water-related educational materials contains information on 160 nationally reviewed curricula for grades K-12. The database can be searched alphabetically, by grade level, or by topic and/or instructional format. The information provided includes a detailed description, costs (if any), and contact information for obtaining a desired curriculum.\nThis description of a site outside SERC has not been vetted by SERC staff and may be incomplete or incorrect. If you\nhave information we can use to flesh out or correct this record let us know.\nSubject: Environmental Science:Water Quality and Quantity, Biology, Geoscience:Hydrology Resource Type: Activities:Classroom Activity Grade Level: Middle (6-8), Intermediate (3-5), Primary (K-2), High School (9-12) Topics: Hydrosphere/Cryosphere, Human Dimensions/Resources, Chemistry/Physics/Mathematics, Biosphere\nCMS authors: link to this resource in your page using [resource $master_id]", "score": 23.030255035772623, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "Judges Award for ‘Education across waters’\nWINNER: Global Action Plan\nWater Explorer is a fun, web-based educational programme that inspires 8-14-year-olds to learn about water issues and develop the confidence to act on them. 
Delivered in 11 countries, the programme has students working together to save their virtual reservoirs by completing a series of innovative Challenges organised into four themes.\nWater Explorer has already inspired 79,000 children to complete a Challenge and saved over 1.3 million cubic metres of water. Through links to international research, it also aims to strengthen and track the changes in intrinsic values such as care for the environment.", "score": 23.030255035772623, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "Toolkits are for interactive presentations on environmental topics. They work well for schools, non-profits, scouts, churches and other community groups. Content is designed for youth ages 5-12, but can be adjusted for groups of any age. When a kit is reserved, you will also receive a presentation outline geared toward your group.\nLearn how home composting works and use props to show what can and can't be composted in a backyard bin.\nHousehold Hazardous Waste\nProducts that say CAUTION, WARNING, DANGER or POISON on the label are hazardous and don't belong in the trash. Learn how to safely use, store and dispose of household hazardous waste.\nLearn which materials you can compost through our organics recycling program. This toolkit is a display that includes rigid posters, size 18” by 24” or 24” by 36”.\nLearn what the items you put in the recycling bin turn into after they’re recycled. This kit has a few examples of products made from recycled materials.\nLearn to reduce waste and save money by making wise choices when grocery shopping. Samples of food packages are used to conduct activities. Suitable for middle elementary grade students and older.\nLearn where your trash goes after it’s collected. This kit includes lesson plans, hands-on activities, books and videos. 
After borrowing a Trash Trunk, tour the Resource Recovery Facility in Newport for a well-rounded educational experience.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "04 March, 2013\nA Sanitation Activity for Kids\nTeaching children about personal hygiene and sanitation not only helps to keep them from getting sick, it also increases their understanding of how much water they use every day. As the world's poorest communities struggle to find clean water, it's essential to teach your children why it's so important to conserve this ever-dwindling resource.\nThe Germ Game\nTeach your child about basic sanitation and hygiene with the “Germ Game.” Divide the children into two groups: the humans and the dreaded germs. Lead the humans to one side of the room and the “germs” to the other. Help differentiate the two groups by providing the “germs” with red T-shirts. Ask the children age-appropriate questions about germs and sanitation. For instance, ask the children if they should wash their hands after blowing their nose. If the children answer correctly with a “yes,” the germs stay where they are. Each time the children answer incorrectly, one of the germs invades. Any subsequent correct answers send the germs away. Complete a round of eight to 10 questions and count the number of “germs” mixed in with the “humans.” Explain to the children that proper hygiene and sanitation measures were able to keep away the germs that make humans sick.\nTracking Water Usage\nHelp children understand the importance of water conservation, and how saving water begins at home. Instruct them to track their family's consumption. Provide your child with a simple chart featuring the average amount of water necessary to perform everyday tasks. According to the Utah Division of Water Resources, the average American uses three gallons of water to brush his teeth. Each time the child performs a task, help him write it down on the chart. 
After one week, discuss how much water was used, and come up with practical ways to cut down on consumption, such as installing low-flow shower heads or cutting back on lawn waterings.\nScrub Your Hands\nUse glitter to show children the importance of washing hands. The Columbus Public Health Administration's website highlights an enjoyable game called “Glitter Germs.” Pour some glitter on each child's hands. Then divide the children into two groups: soap and water users and plain water users. Instruct the children to wash their hands and point out how using soap and water is more effective at removing the glitter germs than water alone.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-4", "d_text": "Presented with a field notebook full of tasks and provided with scientific tools (such as thermometers, study plots and field guides), students will work as a group exploring their schoolyard to find the answer to several ecological questions, such as comparing temperature differences, noting species diversity, and identifying plant organisms. (Grades 5+)\nEcology Takes the Stage (Assembly)\nLooking for a fun, interactive way to introduce your students to science concepts? How about one that is designed to teach, meet, and assess curriculum goals and standards? Students will learn about the ABCs of Ecology®, marine habitats, forest ecosystems, sustainability, and much more. Grades 2-6\nHabitat, Habitat! (Classroom Program or Assembly)\nThis assembly program is designed to introduce the concept of habitats to our younger audiences. Students will come along on a journey through several ecosystems, from a forest, through a farm, pond, salt marsh and finally to the beach. (Grades K-3)\nPlants Around Us (Classroom Program)\nThis program, designed for our younger students, is an excellent introduction to plants: their parts (roots, stems, leaves, flowers, fruits and seeds) and what they need to grow (sun, soil, water and air). 
Students will use their sense of touch to explore various plant parts, play an interactive game to learn what plants need to grow and then make their very own FrankenPlant. (Grades 1 & 2)\nWorld Within Your Watershed (Classroom Program)\nJoin Little Susie and the Blue-Bellied Fleeber Flobber on their journey through the watershed! In this program, students learn the Water Cycle Boogie song, make clouds and test salinity using scientific tools. (Grades 2-4)", "score": 21.933279668351524, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "The Groundwater University is accepting applications for its 2003 session through May 9. Groundwater University will be held June 24-26, 2003 at Platte River State Park located near Lincoln, Neb.\nThis educational summer camp is directed at youth between the ages of 12 and 15. Sponsored annually by The Groundwater Foundation, the camp gives students the opportunity to learn about ground water and other environmental issues through fun and interesting activities, presentations, experiments and field trips. Groundwater University gives participants the chance to experience ground water first-hand, outside of traditional classroom walls.\nStudents will complete water quality tests, construct mini-ground water flow models and experiment with Global Positioning Satellite (GPS) technology. In addition, students will be exposed to various career opportunities in the water industry through a variety of presentations and tours. Also included will be a workshop for community educators who have the desire to sponsor a Groundwater University event locally. Please contact The Groundwater Foundation for more information on this training workshop.\nFor a free brochure and application form, please contact The Groundwater Foundation at 800-858-4844 or email@example.com. 
Online applications are now available at http://www.groundwater.org/ProgEvent/GU.htm.", "score": 21.749839261653996, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "Water Science for Schools\nThis United States Geological Survey site focuses on water and its properties. The site contains activities for teachers and students, along with pictures, data, maps, and an interactive center where users can give opinions and test their water knowledge. Unique aspects of this site include the sections that contain student work and a picture gallery. There are links to other resources for students, teachers and the general public. A Spanish translation is available.\nIntended for grade levels:\nType of resource:\nNo specific technical requirements, just a browser required\nCost / Copyright:\nInformation presented on this website is considered public information and may be distributed or copied. Use of appropriate byline/photo/image credit is requested.\nDLESE Catalog ID: DWEL-000-000-000-005\nResource contact / Creator / Publisher:\nContact: H. Perlman\nUSGS Water Resources", "score": 21.695954918930884, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "GEF Institute offers affordable, online sustainability courses eligible for professional development or academic credit.\nGive your resume a boost!\nOne Well: The Story of Water on Earth By Rochelle Strauss\nAbout 70% of our planet's surface is covered by water. All the water on Earth is connected—a raindrop, lake, groundwater, and glacier are all part of the single global well. This book will teach children the importance of water and how the way we treat it will affect every species on the planet. One Well encourages children to do their part as good citizens and conserve water.\nSaving Water (Green Kids) By Neil Morris\nThis is a fantastic new series introducing children to environmental issues facing the world today. 
Informative and visually exciting, these titles encourage a positive approach to becoming more 'green'.\nTurn Off the Water and Lights! By Joy Wilt Berry\nTurn Off the Water and Lights! discusses the need for water and electricity conservation and offers tips for saving energy inside and outside the home.\nThe Big Green Book of the Big Blue Sea By Helaine Becker\nBased on the idea that knowledge is power, The Big Green Book of the Big Blue Sea shows how the ocean works and why this immense ecosystem needs our protection. Experiments using everyday materials help explain scientific concepts, such as why the ocean is salty, how temperature affects water density and why fish don't get waterlogged. The Big Green Book of the Big Blue Sea is an essential part of any science library.\nOur World of Water By Beatrice Hollyer\nWherever we live in this world, water is vital to our survival on this planet. This book follows the daily lives of children in Peru, Mauritania, the United States, Bangladesh, Ethiopia, and Tajikistan, and explores what water means to them. Where does it come from? How do they use it?\nA Drop of Water: A Book of Science and Wonder By Walter Wick\nFilled with stop-action and close-up photography, this early science book features such images as a single snowflake and a falling drop of water, accompanied by introductions to such concepts as evaporation and condensation.\nDon’t Drink the Water (Without Reading This Book) By Lono Kahuna Kupua A’o\nThis book encourages intelligent decisions about the safety and treatment of our water. 
Along with a plethora of information and resources, it contains a helpful tool for becoming active in water conservation.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-2", "d_text": "The inside unfolds into a full-length poster of our water cycle.\nDownload the Facilitator’s Guide for more suggestions on how to use the brochure, a list of action steps, and facilitator notes for a group activity.\n- Download the Brochure:\nCheck out the accompanying workshop “The New Wave: Greening our Water Infrastructure”\nAnd while you’re at it:\nPlant a Rain Garden!\nThis is a great way to beautify your neighborhood and take some of the burden off your water treatment system. A rain garden slows runoff from big rainstorms so that the sewage system is not overloaded. The deep-rooted plants also act as a natural initial filtration system.\nCheck out this unique model that aims to address the triple bottom line of people, planet and profit. Philadelphia’s Healthy Corner Store Initiative proposes a viable solution to both unequal food access and economic underdevelopment in our communities. How can you replicate this model in your own community?\nThis versatile, easy-to-use multi-disciplinary curriculum uses innovative, non-traditional learning approaches to increase people’s understanding of environmental issues and the green economy, while simultaneously strengthening their academic and labor market skills. No other curriculum on the market provides all of this in one, accessible, affordable package. It is being used in job training programs, high schools, community colleges, universities, re-entry programs, prisons, and jails in 32 states in the US and in Puerto Rico to prepare youth and adults for good green jobs and to advocate for environmental and public health improvements in their communities. It is available in English and Spanish. Download the Roots of Success Brochure to pass out. 
Also download the One-page Curriculum Description for more details on the curriculum.\nRoots of Success is a nine-module curriculum.\nTo teach the curriculum, instructors need to be trained and licensed. This entails attending a one-day Train-the-Instructor training where instructors are introduced to the curriculum and licensed to use Roots of Success teaching materials. Instructors leave the training with an Instructor’s Manual and multimedia DVD, which includes all of the videos and visuals needed to teach the course in both English and Spanish. When teaching Roots of Success, programs must provide each student with a Student Workbook, which includes all of the handouts, exercises, activities, quizzes, and resource materials needed for the course.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "We all use water in many ways that include drinking, bathing, washing and watering our lawns, but water is a limited resource. That is why it is important that we all find ways to conserve water every day in every way. Kids play an important role in making sure that every drop counts and there are things you can do to help your family save water. If everyone saves a little we can save a lot!\nWhat is water? Water is many things. It can come in the form of a raindrop or be as big as the ocean. When frozen, water can be a soft snowflake or as hard as ice. Water can be fresh, salty, boiling, freezing, dirty, clean, refreshing, plentiful, and scarce. Depending on where you live, water can be all around you or very hard to find.\nWhere does the water go? When you flush the toilet, or the water drains out of the clothes washer or the shower or the sink, where does it go? Pipes in your house carry the water into pipes under the ground and streets called sewer pipes, which carry the wastewater to the wastewater treatment plants.\nWhy should we conserve water? 
Even if your community is not experiencing a water shortage today, it may in the future. Getting into the water conservation habit today is a good idea to prepare for tomorrow.\nWhy do we need water? We need water every day to stay healthy, to grow food, for transportation, irrigation, and industry. We need it for animals, plants, for changing colors and seasons.\nWhere can we find water? Over 70 percent of the Earth's surface is covered with water. There is the same amount of water on Earth now as there was when the planet first cooled enough to let the first great rains fall billions of years ago. It keeps moving in an endless cycle called the water cycle.\nEducation Spotlight\nAshley Brown, a member of Girl Scout Troop 753, has attained the highest Girl Scout honor! Ashley was awarded the Girl Scout Gold Award for her educational campaign showing Laguna Beach residents how to reduce their carbon footprint by creating an eco-friendly garden. Using the experience she gained from her internship at Bluebird Canyon Farms, Ashley created five informational brochures covering the topics: Greenhouse, Composting, Aquaponics/Hydroponics, Poultry Keeping, and Beekeeping. The District has copies of the brochures in our Resource Center. Check them out the next time you're here!", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "The California Ground Water poster is also included. $25.00; 1995, package format. 
To order, please call (916) 444-6240 or order it at the Water Education Foundation.\nTurning the Tide on Trash: Marine Debris Curriculum\nThis learning guide on marine debris from the EPA presents a \"How to use...\" section, matrices of activities by learning skill and by subject, and a litter survey called \"Let's Talk Trash.\" Each unit has several activities included, each with a subject activity, an objective, vocabulary list, materials needed, learning skills to be developed by the activity, and an estimated time to complete the activity.", "score": 20.327251046010716, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "\"Water, water, everywhere...\nbut not a drop to drink?\"\n- All 12 points – Deep blue sea!\n- 7 to 11 points – Mighty river!\n- 3 to 6 points – Tumbling stream!\n- 0 to 2 points – Small pond\n\"EcoFriendlyKids was formed to offer a unique reference point on having kids and helping the environment. EcoFriendlyKids is all about kids and their environment. Containing a wealth of information, tips, quizzes and fun games for children, this site is ideal for parents who want to share in their kids' adventures as they start to explore the natural world, and to guide them towards a more in-depth understanding of ecological issues as they grow older. At the same time, the resources on this site are equally suitable for use by schools as a basis for classroom lessons, project work and eco club activities.\"\n\"Topics you will find on the EcoFriendlyKids website include: recycling at home and at school; litter; pollution; biodiversity; climate change; saving energy; renewable energy sources and much more. 
There are also downloadable activity sheets and puzzles suitable for ages 3 and upwards - all with an environmental theme.\"\n\"EcoFriendlyKids is a website you and your kids will want to come back to time after time because it contains so much information on such a wide variety of topics, all presented in a way designed to stimulate youngsters' imagination, encouraging them to take an interest in their environment and what they personally can do to protect it.\"", "score": 20.327251046010716, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "I chose photos that showed the type of toilets, wells and catchment systems that were being built by the people of Nicaragua with the help of WaterAid America.\nWe talked about how diseases can spread if people don’t have a clean and sanitary place to go to the bathroom or if you don’t practice good hygiene. We talked about the need to build more wells and systems so that women and girls could spend their days working and going to school instead of walking for miles to fetch water. We talked about how unsafe water can make people very sick and how water filtration systems could help.\nI also showed photos of kids in school and swimming, baseball players, toothbrushes and children’s artwork. We talked about how while the kids might live differently in Nicaragua, they were still very much like them. They laughed and played and enjoyed things like baseball and drawing. The fourth graders smartly wondered if they could use the wind, water and sun to help power the communities I had visited. They quickly understood that developing countries might not have enough money and resources to replicate what we have in America. 
In both classrooms, we talked about how if we know about the problems in the world and we are already living with solutions, we could share that knowledge with others and help.\nEven elementary school kids can be global citizens.\nThis is an original post written for World Moms Blog by Jennifer Iacovelli of Anotherjennifer.\nDo you think your kids understand how precious our water resources are?", "score": 20.327251046010716, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "Click on a publication to learn more\nAll the Way to the Ocean\nAll the Way to the Ocean is an uplifting story about two best friends, Isaac and James, and their discovery of the cause-and-effect relationship between our cities' storm drains and the world's oceans, lakes, and rivers. It is sure to inspire young and adult readers alike and teach a timeless life lesson: if we all do our part, a cleaner, safer environment is indeed within our reach. All the Way to the Ocean is printed on recycled paper using soy ink.\nSpring Waters Gathering Places\nFive short, beautifully illustrated stories chronicle the use of spring waters by the animal world, native culture, civil war, Oklahoma pioneers and Teddy Roosevelt. Three sections follow with lessons about the water cycle, the creation of springs and our bodies' dependence on healthy water.\nWater for You and Me (free download)\nEcolab, Water for You and Me is currently available in English and can be downloaded free of charge by clicking below and entering your name and email address on the form. Water for You and Me is designed for use by early childhood educators, parents, grandparents and anyone else who wants to help young children understand the importance of water to life as well as water’s use in keeping people healthy. Developed with support from\nCreating or Using a Publication\nInterested in creating or co-branding a publication for your business, organization or event? 
Contact the Publications Manager for more information on partnership or project sponsorship. If you wish to request permission to publish a Project WET Foundation activity, please review our Permissions Policy and download a Permissions Application Form.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "- World Water Week: Schoolwide Global Learning and Action led by Noah Zeichner\n- Student Panel: Redefining the Global Age featuring Isabel Cruz & Zak Malamed\n- Student-Led Conferences: Turn the Tables led by Christine Kha, Jessica Evangelista & Komal Achhnani\n- Develop Global Citizens in the Facebook Era led by Zak Ringelstein\n- Knowledge to Action: Global Ed and Youth Leadership Outside the Classroom led by Amy Westby & Daniel Carlton", "score": 20.040186103659174, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "Popular Educational Resource Marks Third Anniversary\nThe California Section of the WateReuse Association joined forces with the Water Education Foundation to produce a full-color, 16-page booklet about recycled water for upper elementary students. Now, in its third year of availability, this popular educational tool has been ordered by water agencies and recycled water educators from across the nation.\nThe booklet correlates to the California State Science Standards for upper elementary grade levels; however, its use is not limited to California. A Teacher's Guide was produced in 2007 as a companion piece for classroom educators and is available via the Internet. The booklet can be used in classrooms, as a handout at water agency events, and on treatment plant tours.\nThe comic book layout and activity-filled format explain the process to produce recycled water in terminology understandable by young students and adults alike, using whimsical, appealing artwork. 
The many uses of recycled water and its importance as a water supply for communities are emphasized throughout the booklet.\nPacked with puzzles, mazes, Internet resources, and science-based approaches, the booklet keeps students focused on learning about recycled water from the engaging cover to the activity answers on the back page.\nThe publication was initiated by a volunteer committee of California recycled water experts and water educators. It was published by the Water Education Foundation and the WateReuse Association, California Section.", "score": 18.90404751587654, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "Water truly is amazing. But how much do you know about it? Do you know where it comes from? Or how it gets to your tap? What about how we make water safe to drink? Or the best ways to conserve it?\nWe've put together a wealth of interesting information and resources to help you discover the vital role water plays in our everyday lives. It's all linked to the national curriculum, and has been developed with educational experts to stimulate and educate curious minds.\nDŵr Cymru Welsh Water is committed to continuously improving, whether it’s replacing pipes, protecting the environment or looking for new innovative ways of doing things. We think it’s fair to share this knowledge, which is why we want to inspire younger generations by working with schools and teaching them about the value of water.\nOur ambition is to educate and inform as many children as possible across our operational area about water, good water practice and the role of Welsh Water.\nOur education team includes seconded teachers from local schools who are passionate about supporting schools to deliver the curriculum through experiential hands-on learning activities. 
Our lessons are linked to the National curriculum and support the National Literacy and Numeracy framework.\nWe continually develop our resources by expanding the variety of activities we offer, and ensuring they link to the curriculum across all key stages", "score": 18.90404751587654, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "SM BEACH – As part of a state-mandated program to promote environmental literacy in California, Heal the Bay and National Geographic on Tuesday unveiled teacher resources that the groups hope will be adopted by school districts statewide.\nThe Santa Monica-based nonprofit and global education giant are now making environmental literacy guides that cover the topics of fresh water, ocean, energy and climate change available at no cost to all K-8 classrooms throughout California.\nHeal the Bay and National Geographic Education announced the result of their partnership during Heal the Bay’s eighth annual Coastal Cleanup Education Day, a lead-up event to the upcoming Coastal Cleanup Day on Sept. 15.\nApproximately 700 elementary students from underserved communities arrived at the Santa Monica Pier Aquarium for environmentally focused games, lessons and activities. These future environmental stewards – many of whom had never visited the ocean before – explored the coast, got up close and personal with the living species in the aquarium touch tanks and cleaned up the beach, organizers said.\nHeal the Bay sponsored the Education and the Environment Initiative (EEI), a state law enacted in 2003 that requires instructional materials for kindergarten through 12th grade to integrate various environmental principles and concepts with traditional academic standards. 
The California Environmental Protection Agency (Cal/EPA) now manages the program.\nHeal the Bay contracted with National Geographic Education to create teacher guides and videos that provide third- to eighth-grade educators with background knowledge and curriculum on topics from feedback loops in global cycling systems and ocean currents to alternative energy solutions and sustainable fisheries.\n“These extraordinary professional development guides fill a large void in California’s Environmental Education Initiative,” said Mark Gold, former president of Heal the Bay and the guiding force behind their creation. “Thanks to this unusual public-private partnership, there are now visually compelling, teacher-friendly, comprehensive guides on oceans, water, energy and climate change.”\nThe guides have been tested in classrooms in California and Washington, D.C., and were used as the basis for a successful pilot EEI professional development project with teachers in the Santa Monica-Malibu Unified School District and middle schools in the Venice family of the Los Angeles Unified School District. The pilot program was funded by Annenberg Learner, Heal the Bay and National Geographic, with the help of Google and USC Sea Grant education professionals.\nAs part of the One Ocean Program, Santa Monica-Malibu teachers worked collaboratively to develop environmentally focused lesson plans.", "score": 18.90404751587654, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "Living Waters for the World (LWW), a ministry of the Presbyterian Church (U.S.A.) Synod of Living Waters, has produced a Vacation Bible School curriculum that repackages many of the key concepts from the ministry into child-friendly learning units and activities. 
“Water All Around the World” is the latest LWW effort at spreading the message that clean water is an essential component of health and human development.\n“As the name suggests,” says curriculum director Joanie Lukins, “this Bible school takes our children around the world to visit children in other countries to see how they live and how they deal with limited access to safe water.”\nLukins has a long history with LWW, traveling to Mexico with the organization’s founder, Wil Howie, in 2002 to investigate the installation of a water system at a school she’d helped construct the previous year. Since then, Lukins has been a key participant in curriculum development and “Clean Water U” instruction.\nEach day of the 5-day curriculum focuses attention on a single country. The curriculum journeys to Honduras, Cuba, Mexico, Haiti and Ghana with stories of children in each country to set the stage for understanding the specific clean water needs they have. Bible study, mission stories, crafts, games and songs all combine to shed light on the mission needs and opportunities of the day’s country.\n“At the end of the day, the children have a time for reflection,” says Lukins. “It’s not meant just to entertain, but it asks children to reflect on what they learned that day. 
Children are capable of deep thoughts, particularly as it relates to children in other cultures.”\nLukins says the curriculum, designed to be inexpensive and to use the creativity of the people in the church teaching Bible school, is “impactful and respectful of children.”\nA music CD is included with the printed materials, along with a flash drive containing the entire VBS in electronic form in addition to several PowerPoint presentations with photos and maps.\nLukins says the group’s previous VBS curriculum, titled \"Clean Water for All God's Children,\" will still be available; it draws attention to the basics of the world water crisis, whereas “Water All Around the World” focuses on the stories of children in other cultures affected by clean water insecurity.\nThe curriculum costs $49 and is available for order at the Living Waters for the World website.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "You could almost feel the change in the air as parents across Holmes County breathed a sigh of relief and possibly shed a couple of tears as they've sent their little (and not-so-little) ones back into the school-day routines this month. And with those sighs, Holmes County teachers closed their eyes, took a deep breath, and asked for patience, wisdom and a little bit of help.\nNow I'm not professing our office has a limitless supply of patience or wisdom (like anywhere else, some days are better than others), but I am saying we're here to help in the classroom. Once again, Holmes SWCD will be offering a variety of in-classroom educational programs at no cost to Holmes County teachers and community groups.\nAs I shared last month, we are excited that our Holmes SWCD Board has identified two key priority areas for our staff to focus on for the next two years: soil health and water quality. We're using those priorities to refine our educational programs, too. 
In addition to some old favorites like the Incredible Journey (focusing on the water cycle) and Soil Scientists (including rock rubbings, soil smudges, and soil layers), we're excited to add NINE new in-classroom program options for the 2017-18 school year! Here's a quick look at what's new:\n- 8-4-2-1 for All: Upper elementary through high school students will learn about 8 water users, 4 common water needs, and how 1 river serves them all. Students will play the part of different water users and safely carry their water \"downstream\" while navigating 4 simulated water management challenges to reach the next community of water users on the same \"river.\"\n- Ask the Bugs: Benthic bugs (posing as paperclips, buttons & beads) provide clues to assess stream health. Elementary students conduct a simulated bioassessment of a stream by sampling aquatic macroinvertebrates (represented by ordinary materials). Students learn the process by which macroinvertebrates are assessed, record their results, and determine Pollution Tolerance Indexes.\n- Feeding the World: How can we feed 9 billion people? What are the limits to food production? Middle & high school social studies students will examine factors limiting population growth and the impact agriculture has on population trends and vice versa.\n- Garbage Bag Watershed: We all live in a watershed where pollution originates from many different sources.", "score": 18.205644964448382, "rank": 72}, {"document_id": "doc-::chunk-2", "d_text": "One of the key products of all this hard work is the 63-page Waterway Protection toolkit document full of inspiring ideas, engaging activities, and salient advice for school teachers, college students, and prospective environmental educators working with students between the ages of 10 and 15. The toolkit offers communal education activities, and splits them into three categories of engagement: Explore, Identify, and Act.
Activities grouped under Explore are “about [discovering] the beauty and diversity of your local environment.” Identify encourages one to “[delve] deeper into the science of waterway protection.” Act comprises a series of activities about “becoming civically engaged – taking action – from hands-on protection of resources to speaking up for the environment in public forums.”\nThe earlier scenario with the children bearing dipnets is a potential scene from the “Underwater Living” module taken from the Identify section of the toolkit. Because many benthic macroinvertebrates are particularly sensitive to water quality, their presence (or absence) is particularly useful for determining the health of a given body of water. While snails, leeches, and beetles are more tolerant of pollution, crayfish and stoneflies only live in healthy waters, and so searching for macroinvertebrates is a valuable exercise for demonstrating the effects of human meddling on the environment.\nAn activity from the Explore category might have the schoolchildren building 3D topographical maps of the watershed using cardboard, or visualizing the water cycle using a Bunsen burner and a beaker. Exploring could be as simple as taking a walk by a local stream and soaking in the sights! The Act category is more hands-on – it could be direct action like planting trees, cleaning up plastic trash, or having students identify and remove problematic invasive species from local river habitats in the “Stop the Invaders!” activity. Another Act project encourages students to write letters to the editor and to pursue similar advocacy.\nSiira Rieschl drives home the importance of collaborative development of the toolkit: “Understanding the differences in our approaches has been incredibly valuable for me to fully grasp our own drive behind our programs.
Our biggest connection, besides our passion for river ecology and watershed health, is a determination to spread knowledge and passion about these very important topics.", "score": 17.397046218763844, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Water for Health was the topic of the day at St Peter’s Primary School in Moy when NI Water’s education team recently paid a visit.\nNI Water’s educational programme, H2O and the Wonderful World of Water, teaches children about the importance of drinking water and how to identify the symptoms and effects of dehydration.\nThe children were introduced to H2O, a water drop figure and mascot for the programme.\nThey also took part in practical demonstrations and games to explain how much water children should drink each day and how much of the body is made up of water. After discussing both tap and bottled water, pupils took part in a taste test to see if they could tell the difference. Each pupil received a re-usable NI Water bottle.\nNI Water’s Environmental Education Manager, Jane Jackson said: “NI Water places great importance on educating young people in the vital role water plays in our lives. H2O is a fun way for children to learn about why water is essential for good health and how they can help to conserve this precious resource.\n“We are delighted with the positive feedback we have received from schools who have participated in our education programme.
It’s a fantastic way for us to work within the local community and educate future generations of water users!”\nThe programme is aimed at Key Stage 2 and designed to complement a key element of the Northern Ireland Primary Curriculum – the ‘World Around Us’.\nA teachers’ learning pack, with further classroom activities, is available to download from", "score": 17.397046218763844, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "Engaging Latino Youth and Families in Water Resource Issues\nIt has been shown that Latinos are generally interested in environmental issues, and that they are particularly concerned about the health impact of a polluted environment. However, because of language and cultural issues, they are often not engaged in water protection activities.\nWhat Has ANR Done?\nAgua Pura (Pure Water) began in 1999 as a partnership of the University of Wisconsin Cooperative Extension's Give Water A Hand, Santa Barbara County UCCE 4-H Youth Development Program and Santa Barbara City College. Its goal was better understanding of how community educators and youth leaders can involve Latino youth and the Latino community in watershed protection and adaptation of resources to meet the community's needs and interests.\nAgua Pura has been sustained by the Santa Barbara County 4-H Youth Development Program.
It is assisting the county in meeting best practices under NPDES (National Pollutant Discharge Elimination System) guidelines.\nLatino leadership developed around local watershed issues\nAgua Pura has significantly contributed to engaging the Latino community in watershed resource issues:\n--A six-week, hands-on after-school watershed education program that has graduated over 560 Latino children.\n--Incorporation of watershed education into a nine-week summer day camp for over 1,200 Latino children from low-income families.\n--The local Housing Authority, whose leadership is primarily Latino, has led in development and delivery of the ongoing \"Splash to Trash\" watershed education program. Sixty-two Latino young people from public housing have graduated from the program.\n--Publication of the Agua Pura Leadership Institute Planning Manual (available online at: http://www.uwex.edu/erc/apsummary.html).\nAgua Pura has served as a model for Latino leadership development involving watershed resource issues at national conferences (Coastal Zone '99, NAAEE '01 and '02) and in professional journal articles.\nSupporting Unit: Santa Barbara County\nMichael Marzolla, 4-H Youth Development Advisor, Santa Barbara County UCCE, 305 Camino del Remedio, Santa Barbara, CA 93110.", "score": 17.397046218763844, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "Kid Power Programs\nKid Power Programs offers 2 GREAT shows designed to help children develop the skills and behaviors necessary to make health-enhancing choices. Both shows are highly interactive, packed with characters, puppets, clever visuals, and audience participation.\nShow #1: Nutrition & Exercise -\n\"Kid Power’s Operation Lunch Line\" (in 3D) - is a highly interactive 50-minute musical show designed to help children in grades K-6 learn the value of good nutrition and exercise. (Children love the souvenir 3D glasses included.)\nKid Power is the leader of Operation Lunch Line.
He believes kids have the power to FEEL GREAT by eating healthy foods and exercising daily. His base of operations is Mission Control which is uniquely powered solely from audience participation. Today’s operation: Help a child feel great. Using spectacular visual effects in 3D, the entire audience miniaturizes, joining Kid Power on an amazing journey inside the human body of a boy named Max, who feels lousy because he doesn't eat or move properly. The operation, aided by anatomical sidekicks, such as the brain and heart, monitors Max’s inactive behaviors and poor food choices in the school lunch line day after day. Through audience participation the kids not only educate and motivate Max, but in doing so, learn they too are special, filled with all the \"kid power\" needed to develop the knowledge, skills, and behaviors necessary to make health-enhancing choices and FEEL GREAT.\nShow #2: Environmental Conservation -\n“Kid Power & the Planet Protectors” is a highly interactive, 50-minute musical show, designed to help children, in grades K-6, learn about our natural environment and the actions they can take to help protect it.\nKid Power is an environmental crusader here to recruit his team of Planet Protectors. Through participation the kids earn “99 Green Points” to become members of the team. Throughout the performance, kids learn they must do three things to become exemplary Planet Protectors:\n1. LEARN - about the environment and conservation.\n2. DO - specific actions every day to help the environment.\n3. TEACH - others about the Earth and how to help preserve it.\nThe show addresses each of these three elements in the three areas of the environment:\n1. 
WATER - the Hydrosphere: Water Cycle, available water percentages, finding and stopping water leaks, simple ways to reduce water usage.\n2.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-2", "d_text": "Students are taught skills and given information to help them make healthy and positive choices.\nComponents of the H & W program include guest speakers who are experts in their field, interactive workshops and assemblies, lessons and activities that are embedded into the curriculum, opportunities for education and training, and student-centered clubs.\nThe H & W program covers a variety of topics that are developmentally age-appropriate, including, but not limited to, healthy eating habits/nutrition, substance abuse prevention, adolescent and sex education, bullying/cyberbullying, diversity & inclusion, and stress management.", "score": 16.146201559952548, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "The Tampa Water Department joins the American Water Works Association in recognizing the importance of investing in students for the future of our communities and of the drinking water industry.\nAWWA Scholarships for Water Industry Students\nAWWA Drop Savers Poster Contest\nLocal Grants for Teachers\n- Tampa Bay Estuary Program Mini-Grant Program\n- Splash! Mini-Grants from Southwest Florida Water Management District\nFrom the Tampa Water Department\n- Take a Virtual Tour of Tampa's Water Treatment Facility\n- Classroom Materials\n- Classroom Projects\n- History Slideshow\nOther Materials Available to Teachers\n\"Water You Waiting For?\" is a 12-minute video from the United States Environmental Protection Agency showcasing the water profession for high school and/or vocational technical school students. This video highlights the water profession in four areas - the value of water, job responsibilities, career successes, and environmental contribution.
The video is designed so that each of these chapters can either be viewed separately, appealing to that student's curiosity, or can be viewed in its entirety.", "score": 15.758340881307905, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "Once again children across the country are going back to school! If you are a teacher, parent, scout leader, or other educator, The Groundwater Foundation has some great educational items for you! Here are just a couple of items from our catalog—go to www.groundwater.org to check out more fun items!\nAlso, be sure to check out our new Water1der app! Water1der is a free groundwater awareness trivia game app from The Groundwater Foundation. Players are challenged in different areas of groundwater/water basics knowledge through various fun and educational playing formats.\nGo here to download the app: http://www.groundwater.org/get-informed/opportunities/water1der.html", "score": 15.758340881307905, "rank": 79}, {"document_id": "doc-::chunk-1", "d_text": "The URL is: http://www.safeclimate.net/\nIncludes a database of University and K-12 school projects nationwide and internationally, as well as resources for starting your own project. A project of Interstate Renewable Energy Council's PV for YOU Program.\nThe URL is: http://irecusa.org/schools/\nThis software package is designed to meet National Science standards and teach grades 8-14, through simulations and real experiments, about the science of global warming and its abstractions. Purchase information from Seeds Software available. (Mac and PC compatible).\nThe URL is: http://www.seeds2lrn.com/greenLessons.html\nA combined textbook and workbook that focuses on ecological concepts and the interrelationships of living things and their environments. Students examine current environmental problems such as acid rain, toxic waste, tropical deforestation, and destruction of the ozone layer.
Grades 8+.\nThe URL is: http://www.nap.edu/readingroom/books/rtmss/3.31.ht...\nNew Hippoworks.com program designed to teach kids about global warming with activities, lesson plans and printable materials for teachers, and a carbon calculator to help students figure their global footprint.\nThe URL is: http://www.hippoworks.com/animalsearth/\nThe colorful, extremely basic text defines the Air Quality Index, discusses what makes air dirty and which people are most at risk for getting sick from air pollution. From the US EPA Office of Air Quality Planning and Standards. For ages 7-10.\nThe URL is: http://www.epa.gov/airnow/aqikids/\nHow much does your car pollute? Calculate how much carbon dioxide, sulfur dioxide and nitrogen oxide your vehicle emits each year, compared to the national average. Learn why it matters and what you can do to help.\nThe URL is: http://www.environmentaldefense.org/tool.cfm?tool=...\nThis series of educational climate animations for grades 5-9, entitled \"Global Warming and Earth Processes,\" features illustrated moving scenes of how global warming occurs, the carbon cycle and global warming, and the water cycle and global warming, designed to break down complex processes into a series of easy-to-understand components accessible to the intermediate grades.\nThe URL is: http://www.epa.gov/globalwarming/kids/animations.h...\nFor students and teachers in grades 9-12, exercises are adaptable to grade levels.", "score": 15.758340881307905, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "If COVID-19 policies are preventing you from having guests in your classroom, we can also visit virtually via your preferred online platform. We are adapting our popular in-person NGSS-aligned lessons and making them meaningful for virtual visits. You tell us what you want to accomplish and we'll give you a few options to choose from.
(OR...keep it simple and we'll choose for you!)\nAs guidelines permit, our educators are still eager to visit your students at your school! Weather-permitting, our preferred method is to meet you on your playground or sports field where our tried and true activities will be very similar to our traditional lessons taught at the Water Resource Center.\nChoose your grade to review our recommended activities. We can adjust our lessons to fit your needs so don’t hesitate to ask us for something specific.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-2", "d_text": "Water has been and continues to be the vehicle that transports the metals across the landscape to the ocean. And water is the solvent that makes these persistent elements available to move up the marine food chain, to the Viequenses' dinner tables, and into their bodies.\nNine teachers participated in this seminar where we considered cases just described, while they designed the curriculum units presented in this volume. A brief review of their contents may help guide other teachers to topics relevant to their interests.\n* * *\nJoanna Ali developed a unit that explores the history of science and policy regarding acid precipitation for students in eleventh and twelfth grade. Her unit, \"Acid Rain: Causes, Effects and Possible Solutions,\" includes a history of the scientific evidence that led to key federal regulations of power plant emissions. She summarizes the science with precision, and includes a series of laboratory exercises that will certainly engage students. She also designed and created a pollution trading rights game, with a board similar to Monopoly to allow students to trade sulfur dioxide rights in response to federal regulations. 
This is an exceptionally creative product and students will certainly enjoy playing the game while learning about one of the more complex areas of air quality science and law.\nRay Brooks teaches independent studies to grades 5-8, and specializes in helping students develop science fair projects. He designed a unit, entitled \"Think Before You Drink,\" that explores the source, movement and fate of New Haven's drinking water. He provides a critical review of well water vulnerability to pollution from surface activities, and also includes a section critical of bottled water standards. His unit includes extraordinary documentation and websites that will benefit anyone hoping to strengthen science curriculum in the area of water resources.\nWendy Hughes teaches seventh grade science, and prepared the unit, \"The Journey of New Haven Water.\" She developed excellent descriptions of the water cycle, an overview of chemical threats to drinking water (microbes, radionuclides and pollution), treatment options, a comparison of point vs. non-point source pollution, and concludes with practical advice to students and teachers regarding what they can do to conserve water and protect its quality. Students will realize the lengthy journey that water has taken before pouring from their taps, and gain a strong understanding of the problems government faces in its efforts to assure a healthy supply.\nDeborah James created a unit entitled, \"Water, Water Everywhere, Not a Drop to Drink,\" for fifth and sixth grade science classes.", "score": 13.897358463981183, "rank": 82}, {"document_id": "doc-::chunk-1", "d_text": "For older children (10 and above) the Santa Clara Valley Water District provides public tours of its Purification Center where anyone can witness state-of-the-art water purification techniques in action. 
The Purification Center can be found at 4190 Zanker Road, San Jose, CA but be sure to sign up in advance for a slot on the tour at http://purewater4u.org/tour-schedule.\nLarger groups or entire classrooms might also be interested in the appointment-only water education opportunities in the area.\n- The Alamitos Groundwater Recharging Facility offers insight into the natural processes that create clean fresh water. Learn how human intervention can harness these natural processes to create a steady supply of fresh water for thousands of people (5750 Almaden Expressway, San Jose).\n- Coyote Creek Outdoor Classroom (791 Williams Street, San Jose) and Morley Park Outdoor Classroom (615 Campbell Technology Parkway, Campbell) both offer great programming for larger groups focusing on the natural beauty, danger, and utility of natural waterways and systems.\nNo matter how you decide to engage the kids in your life about water issues, the important thing is to get the conversation started. The earlier kids begin conserving water, the stronger their habits will be over their entire lives. It will add up to a more sustainable world as well as a more tenable water future. To stay on top of the latest water conservation and education news, follow the San Jose Water Twitter account.", "score": 13.897358463981183, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "This poster features NASA's Global Precipitation Measurement (GPM) mission. The front features an image of the GPM Core Observatory satellite along with the constellation of satellites that will accompany it. Background information is provided on the reverse side of the poster, including an overview of the mission, details of the satellite, the science and applications of the mission, and information on the constellation missions with which GPM will partner. Also included on the back is a multi-age educational activity on freshwater availability.
See Related & Supplemental Resources to download a PDF of the poster.\nThe total amount of water on Earth, the places in which it is found and the percentages of fresh vs. salt are examined in this lesson. A short demonstration allows students to visualize the percentage differences and a coloring exercise illustrates locations. This lesson uses the 5E instructional model. All background information, student worksheets and images/photographs/data are included in these downloadable sections: Teacher's Guide, Student Capture Sheet and PowerPoint Presentation.\nIntended for use prior to viewing the Science on a Sphere film \"Water Falls,\" this lesson introduces students to Earth's water cycle and the importance of freshwater resources.\nThis activity allows participants to build a paper model of the GPM Core Observatory and learn about the technology the satellite uses to measure precipitation from space. Directions explain how to cut, fold and glue the individual pieces together to make the model. The accompanying information sheet has details about the systems in the satellite including the Dual-frequency Precipitation Radar (DPR), the GPM Microwave Imager (GMI), the High Gain Antenna, avionics and star trackers, propulsion system and solar array, as well as a math connection and additional engineering challenges.\nThese guides showcase education and public outreach resources from across more than 20 NASA astrophysics missions and programs. The twelve guides - one for each month - contain a science topic, an interpretive story, a sky object to view with finding charts, hands-on activities, and connections to NASA science.
The guides are modular, so that educators can use the portions that are the most useful for their audiences/events.", "score": 13.897358463981183, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "Planning for the Future: Water Education for Colorado's Youth\nWhen it comes to water, Colorado’s kids can expect to face a challenging future; a growing population and increasing demand may mean difficult trade-offs. That’s one reason educators and policy-makers say it’s critical to teach young people about water management.\nOn a breezy spring morning in south Denver, a line of about 30 teenagers snakes down a hill at Overland Pond, a little urban park next to the South Platte River. The kids are passing golf balls to each other really fast, and dropping many of them.\nThe Greenway Foundation, a Denver-based river conservation group, organized the outing. Mary Palumbo is the group’s youth development director.\n“We have students up a hill representing the headwaters, and they’re passing little golf balls at different paces, depending on what season we’re representing,” said Palumbo.\nWhen lots of golf balls were “flying everywhere,” Palumbo said that represented flooding. In the summertime, Palumbo said, they pass the balls a little slower. “We’re using this tool to talk about the variables in water, and how it’s not always consistently available to us.”\nIt’s a lesson in stream flow dynamics, and makes high school senior Marcos Morales think of a waterslide.\n“Because [of] how the river moves in different directions and turns a lot of corners,” said Morales. “The sides of the water slide are actually bigger and more protected, so you don’t fall out.”\nPalumbo says it’s important that students like Morales can make that kind of connection between their own lives and the water-oriented activities.\n“Kids want something that is relevant to their lives,” said Palumbo. 
“Just getting them out to a place that feels like nature, even though we’re city kids and we live in the middle of highways, we still have a connection to nature.”\nEighty miles west of Denver, another group of kids is getting ready for a different water lesson at the Keystone Science School, near Breckenridge. Students come from all over the state for an overnight stay and outdoor study experience in and around the school’s 23-acre campus.\nDave Miller, the director of school programs, says they teach students that water is a finite resource. “It’s something that has to be treated with care,” said Miller.", "score": 13.897358463981183, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "We are Leslie and Colleen Smith--We have homeschooled 5 children over the course of more than twenty years. We have studied and sampled a huge variety of curricula, programs and strategies, so we can assist you in finding what is best for your family.\nThe Guam Homeschool Resource Center is an independent resource center for anyone on Guam who is homeschooling, plans to homeschool in the future, or is trying to decide what to do. Our goal is to be an additional source of help to people who are exploring their educational choices.\nWe are a library, a free internet cafe, and a listening and compassionate ear. We have an extensive array of catalogs, books and other printed material, as well as on-line resources that we have collected over the years. Our 20 years of personal experience with home education is another resource that younger families often find helpful.\nGuam has three local homeschool support groups. Two for the military - The Guam Military Christian Homeschoolers Support Group (Formerly Navy Homeschoolers) and the Andersen Homeschool Group - and the local Guam Homeschool Association. All homeschoolers are welcome to utilize our resource center. 
We will offer support, encouragement and input to help you in your decision-making with regards to teaching your kids at home on Guam.\nYou are invited to stop by our modest little resort oasis in the heart of Downtown East Agana. We have a fine, sandy beach to play on or to just relax. We have bathroom and showers, air conditioning and good, hot coffee. We also have computer stations with Wi-Fi connection.\nMy children are a bit shy, but they are not mean, or bullies. They like to play with other nice kids, and they don't care if the other kids are a bit older or younger. And of course we have age-appropriate activities for all children.\nOur business pays the rent, so please be patient if we are taking care of customers. Other than that, we don't have a lot of rules. We will be collecting information about what people (you) want or need to have happen for us to assist you. Are you looking for group activities? Are you looking for support? Want to find the best website for teaching math?\nWe are here to help.", "score": 13.632390536529643, "rank": 86}, {"document_id": "doc-::chunk-4", "d_text": "The Secrets of Soil VIDEO INTERACTIVE for middle school and up (2004, Smithsonian), and Adventures of Herman the Worm, below, for younger kids.\nOcean: The Basics - 'Outline of an introductory course in Oceanography... with material for complementing existing courses.' Covers beaches, coral reefs & atolls, the Gulf Stream, deep sea, global warming & the ocean, ocean creatures, and more; chapters are in PDF format. For advanced middle school students / high school and up (2007, Dr. Wolf Berger, Professor of Oceanography, University of California). Marine Life includes education resources on Aquatic Food Webs, Coral Ecosystems, Life in an Estuary, Marine Mammals, and Sea Turtles (National Oceanic and Atmospheric Administration). 
Ocean Topics offers up-to-date articles on climate change, coastal science, ocean life, marine pollution, polar research, and other subjects (2013, Woods Hole Oceanographic Institution).\nRainforest Resources - Pictures, stories, facts, fun projects, and other resources on rainforest plants and animals; age level varies (Rainforest Alliance).\nRivers AUDIO - Learn all about the basics of rivers, how they are formed, and how they change over time. For elementary and middle school students. (University of Illinois Extension)\nTides and Water Levels - Explains what tides are, what causes them, tidal frequencies, variations & cycles, measurements & monitoring, and more. High school and up. (2008, National Oceanic and Atmospheric Administration)\nWater Cycle for Kids INTERACTIVE - An interactive diagram showing how water is cycled through our environment; choose beginner, intermediate, or advanced version, and click on a label to explore any phase of the cycle. See also The Water Cycle for high school and up (2015, U.S. Geological Survey). Thirstin's Water Cycle INTERACTIVE is an animated interactive lesson on the water cycle; for elementary school students (U.S. Environmental Protection Agency). For information on many aspects of water, along with pictures, data, maps, and activities, see Water Science School; age level varies (U.S. Geological Survey). What Is a Watershed? 
explains watershed basics, including the water cycle, groundwater, stormwater, conservation and more (Borough of South Plainfield and New Jersey Department of Environmental Protection).", "score": 11.600539066098397, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "Make your field trip a learning expedition by using the following downloadable educational materials.\nThis guide is designed to help prepare your class for their field trip and help you lead your group as they explore the exhibits in the Aquarium.\nChaperone Guides for Inquiry\nThese guides are designed to help groups meet certain learning goals by guiding chaperones to ask students critical questions during the self-guided tour. Have suggestions for how to improve these guides or want to request new topics? Email email@example.com!\nTurn your field trip to the Aquarium into a scavenger hunt! Not only will AquaQuest enhance your students’ learning experience, but they will have fun discovering all the answers. Choose the appropriate grade level below and print a copy of the pdf. Please keep in mind, writing against the exhibits or graphic panels may scratch the acrylic so please provide each student with something to write on such as a notebook or clipboard and a pencil.\n|AquaQuest Pre K - K||Pre K - K AquaQuest Answers|\n|1st - 2nd AquaQuest||1st - 2nd AquaQuest Answers|\n|3rd - 5th AquaQuest||3rd - 5th AquaQuest Answers|\n|6th - 8th AquaQuest||6th - 8th AquaQuest Answers|\n|9th - 12th AquaQuest||9th - 12th AquaQuest Answers|\nDichotomous Keys and Food Web Activities\nThis food web will help you understand the diverse and complicated world of the Delta Swamp in River Journey.\n|Delta Swamp Web of Life||Delta Swamp Web of Life Instructions|\n|Delta Swamp Web of Life Answers||Delta Swamp Web of Life|\nThese dichotomous keys will help you identify the fish in the Gulf of Mexico and Nickajack Lake exhibits in River Journey and the Secret Reef exhibit in Ocean Journey. 
Use the blank dichotomous key to make your own.\nThe Ocean we want to know\nFind out how much rain falls in a given area using one of these student work sheets for measuring runoff:\nIMAX Movies & Educator Guides\nOur exciting IMAX movies offer students an additional learning experience. Click here to see current film schedules. Click the following links to find the educator's guide for our current films:\nEducational supplements are available for visually impaired guests. Each sensory bag consists of a duffel bag (one for Ocean Journey and one for River Journey) which has hands-on props and printed information for dictation.", "score": 11.600539066098397, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "If you have any questions, feel free to email or fix a Skype Q&A session. The activities you need to complete are under the \"Homework\" section of parts 1, 2 and 3. More resources are outlined after each part if you wish to learn more about those topics. In order to obtain your attendance certificate for the current course, you will need to submit your answers by March 17th.\nJuan Manuel GARCIA ARCOS (website, homework, evaluation, extended resources), Sonia AGUERA (videos, illustrations, factsheets, presentations), Maria Luisa SERRANO ORTIZ (factsheets, water treatment), Forum SHAH (factsheets)\nBefore starting the course: complete the questionnaire below\nThe objective of this questionnaire is to assess your initial knowledge before the course, please do not research on the answers. Adapted from \"The status on the level of environmental awareness in the concept of sustainable development amongst secondary school students\" (Arba’atHassan et al.) 
and the Environmental Literacy report card from Minnesota Office of Environmental Assistance (https://www.pca.state.mn.us/sites/default/files/p-ee5-06.pdf)\nObjectives of the course\nMore than a year ago, we started an exciting collaboration with the teachers belonging to the high school science program SWIS (science work in schools). Since then, we have improved and developed materials and hardware for their students, available here: http://openscienceschool.org/scienceinschools\nThis course aims at completing the materials we have developed previously, so that the teachers have more background and are able to conduct these activities independently. More importantly, it also aims at giving teachers tips for conducting a science activity in their classroom, from both a pedagogical and a practical point of view.\nPart 1: How much water do we use every day?\nWater is essential for all sorts of life on Earth, from bacteria to humans. We are 70% water, and are able to survive just 3 days without drinking. How much water do you think we use per day? The numbers may surprise you. The water footprint of an individual is defined as the total volume of fresh water used to produce the goods and services consumed by that individual. Water use is measured in water volume consumed (evaporated) and/or polluted per unit of time. Many activities, agriculture among them, pollute large volumes of water.", "score": 11.600539066098397, "rank": 89}, {"document_id": "doc-::chunk-9", "d_text": "Earth Science Course\nES.3 – characteristics of Earth and the solar system (including\nsun-Earth-moon relationships, tides, and history of space exploration).\nES.6 – renewable vs.
non-renewable resources (including energy resources).\nES.10 – ocean processes, interactions, and policies.\nPH.7 – energy transfer, transformations, and capacity to do work.\n2015 Social Studies SOLs\nUnited States History: 1865-to-Present Course\nUSII.8 – economic, social, and political transformation of the United States and the world after World War II.\nUSII.9 – domestic and international issues during the second half of the 20th Century and the early 21st Century.\nCivics and Economics Course\nCE.6 – government at the national level.\nCE.10 – public policy at local, state, and national levels.\nVirginia and United States History Course\nVUS.13 – changes in the United States in the second half of the 20th Century.\nGOVT.7 – national government organization and powers.\nGOVT.9 – public policy process at local, state, and national levels.\nVirginia’s SOLs are available from the Virginia Department of Education, online at http://www.doe.virginia.gov/testing/.\nFollowing are links to Water Radio episodes (various topics) designed especially for certain K-12 grade levels.\nEpisode 250, 1-26-15 – on boiling, for kindergarten through 3rd grade.\nEpisode 255, 3-2-15 – on density, for 5th and 6th grade.\nEpisode 282, 9-21-15 – on living vs. 
non-living, for kindergarten.\nEpisode 309, 3-28-16 – on temperature regulation in animals, for kindergarten through 12th grade.\nEpisode 333, 9-12-16 – on dissolved gases, especially dissolved oxygen in aquatic habitats, for 5th grade.\nEpisode 403, 1-15-18 – on freezing and ice, for kindergarten through 3rd grade.\nEpisode 404, 1-22-18 – on ice on ponds and lakes, for 4th through 8th grade.\nEpisode 406, 2-5-18 – on ice on rivers, for middle school.\nEpisode 407, 2-12-18 – on snow chemistry and physics, for high school.", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-2", "d_text": "In addition to achieving a heightened awareness of environmental stewardship, the PCEP introduces effective communication techniques, initiates thoughts of new and exciting future career paths in the marine, health, or social science employment arenas, and fosters a lifelong interest in science and quality of self, family, and home.\nCONTACT: Lynn Whitley, Education Coordinator, USC Sea Grant, (O) 213-740-1964,\nInteresting and useful information abounds on Hawaii Sea Grant's award-winning Sea Squirt website. First, visitors learn that \"after finding a suitable rock or place to call home, juvenile red sea squirts no longer need their brains, so they eat them.\" Look further into the site, and Sea Squirt offers resources for children and teachers alike.\nOn one page, Shaka the shark doles out advice for kids visiting the beach. \"Don't stand on coral reef,\" and \"Use the restroom, not the ocean,\" are two of his points. Downloads include a marine activity workbook, several coloring and activity books and marine life icons for your computer. Links for teachers, kids and parents, a quiz to test knowledge of Hawaiian sea life and a virtual aquarium are more features on the site. 
To visit the Sea Squirt website: http://www.soest.hawaii.edu/SEAGRANT/kids/indexkids.html\nCONTACT: Priscilla Billig, Hawaii Sea Grant Communicator, (O) 808-956-7410, Email: email@example.com\nTHE CASE OF THE WET INVADERS\nOregon Sea Grant education specialists, along with Sea Grant programs in California and Washington and the U.S. Fish and Wildlife Service, want West Coast residents to be on the lookout for aquatic invasive species like zebra mussels, European green crabs, smooth cord grass and Chinese mitten crabs. To achieve their goal, the team is creating Aquatic Nuisance Species Education Boxes that can travel to middle and high school teachers throughout the Pacific Northwest. The project is based on successful educational tools used in the Midwest, such as Minnesota Sea Grant's \"Exotic Aquatics\" trunk and Illinois-Indiana Sea Grant's \"Zebra Mussel Mania\" traveling trunk.\nThe trunks will provide teachers with curriculum and activities for incorporating aquatic invasive species into their science lesson plans. Written materials, slides, video and specimens will all be part of the effort to teach young people what to look for when they visit a lake, beach or river.", "score": 9.837610665623476, "rank": 91}, {"document_id": "doc-::chunk-1", "d_text": "\"This is a great age, the kids are really interested,\" said Marnie Hoolahan, the mother of a first-grader and a school council member. \"I think the concept is basic enough for them to understand, and the younger you start the more influence it can have.\"\nHoolahan, who works as a corporate development director at Genzyme, came up with the idea for the stormwater initiative program and worked with teachers, local advocates and the school council to implement the idea. 
\"If you teach children early enough, it's something they can embrace for the future,\" she said.\nThe program is being funded by various grants, including $2,000 from the Southborough Arts Council, $1,500 from the Southborough Organization of Schools and $500 from the Southborough Education Foundation.\nEach first-grade student and teacher will receive a copy of Morrison's \"A Drop of Water.\" A copy will also be given to the school library.\n(Abby Jordan can be reached at 508-490-7461 or email@example.com.)", "score": 8.086131989696522, "rank": 92}, {"document_id": "doc-::chunk-1", "d_text": "When asked about where he and his team came up with the idea for their project, Camacho said, “We thought about what would help people the most, and we realized that people without water would need an affordable way to access clean water.”\nThis year’s STEMi winners are as follows:\nK-2ND GRADE DIVISION:\n1st, Kennani Villagomez “Gummy Bear,”\n2nd, Jadine Pangelinan “Rust Chemistry”\n3rd, Isabella Demapan “Pendulum Art”\n3RD-5TH GRADE DIVISION:\n1st, Tomamitsu Aldan “Brine to Beverage”, Maui Silva “Test Your Honey”, and Leah Lansangan and Naomi Matsumoto “Penny Drops”\n1st, Brent Ortizo “Black Sheep”\n2nd, Yuri Sasamoto “Paper from Plants”\n3rd, Brissa Hunter “Burning Calories”\n9th-12th grade division:\n1st, Tommy Cayetano “Video Games & Blood Pressure”\n2nd, Brandee Hunter and Mikee Mendoza “Is Your Bottled Water Really Alkaline?”\n3rd, Kalea Borja “Installation of Water Tank”\n1st, Aldwin Batusin, Ivy Leong, Joanie Paraiso, and Reica Ramirez “CaisNO or Yes”\n2nd, Amy Cabanting and Mild Sripraset “Working Hard or Hardly Working?”\n3rd, Louisa Han, Hoo Lim Cho, and Seoyone Lee “Population Registered”\n1st, Gary Camacho, Jan Bobadilla, and Matt Moran “This Idea is Tubig”\n2nd, Alvin Palacios “Smart Road”\n3rd, William Blake Deleon Guerrero and Steven Li “Reflex Suit”\nOf the students that placed, the following qualified to compete in the CNMI-wide STEM Fair: Kenanni 
Villagomez, Tomamitsu Aldan, Brent Ortizo, Tommy Cayetano, and Aldwin Batusin.", "score": 8.086131989696522, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "Water Campaign Launches in 35 Cities and 8 Countries!\nThe RISE by Grades of Green 2018 Water Campaign has officially launched in 35 cities in 8 countries across the globe! The Water Campaign is Grades of Green’s largest ever semester-long virtual program that gives students the leadership tools and resources that they need to launch a community wide water solution. Starting in September and ending in December 2018, 250+ students will receive customized mentorship, leadership training, a library of water protection resources, and the opportunity to win $500 Eco Grants and one $1,000 grand prize to fund their water solution. Through their efforts, Water Campaign eco-leaders will involve over 100,000 community members in their water protection Campaigns.\nThe Water Campaign has 4 Phases that guide students towards creating and sharing their solution:\n- Phase 1: Research and Discover\n- Research your local water resources, identify key water issues and solutions, and meet your Campaign Partner Team!\n- Phase 2: Water Action\n- Track your personal water use and take action to conserve water or to protect water quality, calculate your gallons of water saved, and use what you learned to select a local water solution to share with your community.\n- Phase 3: Awareness\n- Create a Campaign Video showcasing your local water issue, your solution, and your call to action; then spread awareness by sharing your video with your community.\n- Phase 4: Civic Engagement\n- Launch your community event or presentation to local leadership, compile your Water Campaign results, submit your Final Results Survey to Grades of Green, and celebrate our success!\nThis week, Campaign Teams are launching Phase 1: Research and Discover. 
Our Campaign Teams are researching their local water resources and interviewing experts to identify a water conservation or water quality issue to focus on.\nHere are a few highlights from our Campaign Teams so far:\n1. Grades of Green Youth Environmental Award Winner and Water Campaign alumnus Younes is joining the Water Campaign as the adult lead for the George Washington Academy Green Council in Morocco\n2. PAUD Cemara Kasih, a Campaign Team in Bali, Indonesia, virtually met with their Grades of Green Mentor and brainstormed ways to re-use greywater for their school vegetable garden\n3. The Campaign Team at Thomas Starr King Elementary School in Los Angeles, CA set the goal of increasing drinking water quality at school fountains and decreasing plastic bottle consumption\nFollow @gradesofgreen for more student stories and Campaign updates!", "score": 8.086131989696522, "rank": 94}, {"document_id": "doc-::chunk-1", "d_text": "Most of the science curriculum is guided by the selections in Reading Wonders. Teachers use a variety of texts and resources such as Mystery Science and BrainPOP.\nSCIENCE Grade 6 During the fall semester, the focus is on the Challenger Center, and for the remainder of the year, students use a variety of texts and resources to learn different topics.\nSOCIAL STUDIES Kindergarten - Grade 3 The World Around Us (MacMillan/McGraw Hill © 1995) is an activity/text-based program. It offers solid instruction in history and geography, with strong emphasis on citizenship,thinking skills and economics, a lively connection to the humanities, and an infusion of multicultural perspectives.\nSOCIAL STUDIES Grade 4 The curriculum is focused on Hawaiian Studies present and past via handouts and Hawaiians of Old (© 2002).\nSOCIAL STUDIES Grades 5 (Houghton Mifflin, © 1994). America Will Be presents a chronological and in-depth study of U.S. 
history from its beginnings through the significant events of the Civil War, with concluding material through the present.\nSOCIAL STUDIES Grade 6 A Message of Ancient Days explores the ancient world from the beginnings' of time through early and classical civilizations up to the decline of the Roman Empire. Students also use National Geographic World History's Great Civilizations.\nEligible students receive Special Education services in different environments: inclusion, pull out resource\nThe English Learner Program helps ensure that students from different language and cultural backgrounds have equal access to all educational opportunities at our school. The EL Program's Mission is that students will meet state standards and develop English language proficiency in an environment where a student and his/her family's language and culture are recognized as valuable resources to learning.\nTutoring is provided before, during, and after school to students who do not receive services from other programs. Teachers work with small groups of students on reading foundation skills.\nPalisades Elementary's Resource schedule runs on Mondays, Tuesdays, Thursdays, and Fridays. Students attend classes in computer education, Hawaiiana, STEM, library, physical education, and ukulele. 
Grade levels attend these classes on a rotational basis.\nStudents who meet program requirements also attend Math Enrichment Program.\nCOMPUTER EDUCATION In the computer lab, students learn to be ethical users of technology through projects, games, and exercise.", "score": 8.086131989696522, "rank": 95}]} {"qid": 21, "question_text": "What are the main advantages of using automated RTM (Resin Transfer Molding) over traditional prepreg manufacturing for aircraft seat structures?", "rank": [{"document_id": "doc-::chunk-3", "d_text": "“Both temperature and pressure are increased in the mold, reaching 180°C and a pressure of 6 bar during a 2-hour molding cycle.” The mold is cooled to 50°C, opened and the finished part is removed. The cycle time, including injection and mold cleaning, takes about 7 hours.\nThat automated preforming and RTM have succeeded in cutting seat cost vs. autoclaved prepreg is, perhaps, an obvious outcome, but the source of the 7% weight reduction? That was not so obvious. According to Rosenfeld, “This is due to very accurate plies. We optimized the part design for RTM, so there is not as much overlap in ply location and less material wasted. You also don’t have to factor in manual layup errors, so this reduces extra material as well.” He explains that the manual layup process has inaccuracies almost every time, but the robot is very predictable and less prone to errors.\nPart of what makes the robot so accurate is CAC’s “intelligent automation.” “We integrate a camera into the robotic arm,” says Thimon, “so that vision inspection is completed while preforming operations are ongoing. When you unroll material, it will check fiber orientation. It will also inspect during pick-and-place layup for fine positioning of the ply onto the tool/preform and for other defects in this part of the process, such as missing plies and incorrect fiber orientation.” IAI has worked with CAC to validate the inspection results. 
“It compares the visual images taken by the camera with the computer design file and shows any deviation,” says Rosenfeld. “CAC has developed a very good algorithm for this, and we have verified how well it works.”\nAlthough cycle time has already been reduced to one completed RTM seat per 8.5-hr shift, he believes this robotic inspection can eventually further reduce manual inspection times for even faster production. This automated RTM process also produces net-shaped parts with a high-quality finish on both sides, so the only post-molding operation is drilling a few holes. “We will also keep looking at newer resins that cure more quickly, like in automotive,” notes Rosenfeld.\nThe fabrication and installation of 700+ composite panels has a backstory of detailed design and careful quality assurance.\nThe old art behind this industry’s first fiber reinforcement is explained, with insights into new fiber science and future developments.", "score": 52.977278681651924, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "The increasing use of carbon fibre reinforced polymer (CFRP) composites in aviation and automotive industries has led to the adoption of automated production methods such as pultrusion or resin transfer moulding (RTM) for cost reduction in the production of lightweight structures. These processes, however, offer limited freedom to locally reinforce structures. This paper describes an approach that utilises a basic geometry for several similar parts and adds local reinforcement patches only in regions of load introduction or high local stress. The approach offers the benefit of being able to combine automated production methods with unprecedented design freedom. The specific bearing performance of three different local reinforcement approaches, (1) add-on CFRP patches, (2) surface-mounted steel foils and (3) steel foil interleaving in replacement of 90° plies with foils of the same thickness as the CFRP plies (0.125 mm), is compared by double lap bearing tests.
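The double-lap bearing comparison above rests on the projected bearing stress, computed as load divided by hole diameter times laminate thickness. A minimal sketch of that metric; the function name and all numeric values are made-up assumptions for illustration, not data from the tests described here:

```python
# Projected bearing stress for a bolted composite joint:
# sigma_bearing = F / (d * t). Values below are hypothetical.

def bearing_stress(load_n: float, hole_diameter_mm: float,
                   thickness_mm: float) -> float:
    """Bearing stress in MPa (N divided by mm^2 of projected area)."""
    return load_n / (hole_diameter_mm * thickness_mm)

# e.g. a 12 kN failure load on a 6 mm hole in a 2 mm laminate
print(bearing_stress(12_000, 6.0, 2.0))  # 1000.0 MPa
```

Normalising such stresses by areal weight is what allows the three reinforcement approaches to be compared on an equal-mass basis.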
The bearing strength improves with the addition of patches: it improves with surface-mounted steel foils, more so with co-cured CFRP patches, and most with the interleaved configuration. Quasi-ductile failure of the bearing joints was maintained due to the additional plasticity of the steel foils, producing a joint that fails safely while enhancing the bearing strength. When examining the hybrid laminates, all samples buckled and failed in bearing compression/shear. Brooming was evident on the compressive side of the hole where the bolt indented the laminate. Indentation led to shear kink bands along the washer-supported region, which appear as large compression/shear damage above the washer-confined region of the laminate. When normalised by weight, the three approaches show similar bearing performance. However, each approach has specific advantages with regard to processing, electrolytic potential, or absolute bearing strength, depending on the design of the load introduction.", "score": 48.81616013470559, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "Improving one-piece aerostructures by automating preforming\nIsrael Aerospace Industries (IAI, Tel Aviv, Israel) has produced crashworthy seat structures for aircraft since 1978. Its turnkey service includes design, certification and manufacturing. IAI has developed a reputation for producing lightweight seating through the use of composites, and developed efficient production via RTM but now augments both with automation.\n“The idea was to implement ‘one-shot’ technology, manufacturing a part in a single, automated process,” explains IAI development program manager Hary Rosenfeld. IAI already had acquired expertise in resin transfer molding (RTM) during development of the 2.8m-long, one-piece composite rudder it now produces for the Gulfstream G250 business jet (see “RTM showcase: One-Piece Rudder”).
“We know how to design parts for RTM,” says Rosenfeld, “but now we wanted to automate the preforming as well.”\nFor a test case, IAI selected an in-process composite helicopter cockpit seat. “We chose a helicopter seat because we already had this part designed in prepreg, so we could compare the weight and cost for parts made with prepreg and autoclave cure vs. the consolidated design using automated RTM.” The resulting seat reduces cost by 30% vs. its prepreg predecessor while maintaining critical strength and crash performance, and shaving weight by 7%.\nPart and process design\nRosenfeld points out there is a strong relationship between the design of the automated production line and that of the part. “The part geometry was challenging with regard to creating the prepreg layup,” he explains. “When you try to fold the prepreg material into a corner, it wants to wrinkle. So, you have to cut darts. But these reduce the strength of the part and you have to then compensate for this.” RTM offered improvements by moving away from prepreg layup, but there were new considerations. Forming was still required, but using dry fabrics instead of prepregs improved drapeability, reducing the need for manipulations such as darts. The process was also amenable to automation, but Rosenfeld notes, “we had to keep in mind what a robot can do.”\nA key aspect of composite aerostructures design is certification. “We must certify the part and the production line,” says Rosenfeld. “Most parts are certified together with the airplane they fly on.", "score": 48.5516985885495, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Short description of the technology\nRTM (resin transfer molding) – molding technology via impregnating preforms (blanks made of fibers) with resin under pressure. The RTM process is suitable for medium and big series production. The method consists of impregnating dry fibers (first cut and placed into the mold) with thermoset resin under pressure. 
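The pressure-driven impregnation step just described is commonly estimated with Darcy's law; for one-dimensional flow at constant injection pressure the fill time scales as t = φμL²/(2KΔP). A rough sketch of that estimate, where every numeric material value is an invented assumption rather than a figure from the text:

```python
# 1D Darcy's-law estimate of mold fill time under constant injection
# pressure: t_fill = (phi * mu * L^2) / (2 * K * dP).
# All numeric inputs below are illustrative assumptions.

def fill_time_s(porosity: float, viscosity_pa_s: float,
                flow_length_m: float, permeability_m2: float,
                delta_p_pa: float) -> float:
    """Time (s) for resin to traverse the preform flow length."""
    return (porosity * viscosity_pa_s * flow_length_m ** 2) / (
        2.0 * permeability_m2 * delta_p_pa)

# e.g. 45% porosity, 0.1 Pa.s resin, 0.5 m flow length,
# 1e-10 m^2 preform permeability, 6 bar (6e5 Pa) driving pressure
print(f"{fill_time_s(0.45, 0.1, 0.5, 1e-10, 6e5):.0f} s")
```

The quadratic dependence on flow length is why injection strategy (gate placement, flow distance) matters so much for cycle time in RTM.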
RTM parts have accurate surfaces and high mechanical properties. This technology allows us to produce highly loaded parts with strict tolerances (for example, airplane engine parts).\nAdvantages and restrictions of the technology\nRTM technology requires the use of two-sided molds (punch and matrix). The high cost of mold production, together with a medium-level production rate, makes this technology expensive for part production. Nevertheless, this method is widely used in some industries because of its advantages:\n– The ability to produce complex, integrated parts with inserts or a sandwich structure (foam core), which helps to shorten assembly time;\n– High accuracy (low tolerances) and almost no post-molding processing;\n– The ability to produce parts with a high thickness and complex shape, which is usually impossible with other methods;\n– The high mechanical properties and wear-resistance (lifetime) of produced parts, and a low-porosity surface;\n– Easily painted surface (up to Class A);\n– The quality is reproducible in large-series manufacturing.\nDepending on the customer’s needs, several types of reinforcing materials can be used: various fabrics and mats made of glass, carbon, organic or basalt fibers. Polyester or epoxy resins with low viscosity are usually used as a matrix material.\nRTM parts have smooth, accurate surfaces on all sides. This method offers wide design possibilities with a properly designed mold and careful preform preparation. The produced parts can have a complex profile, with different types of bosses, waves, varying thickness and so on. The possibilities and restrictions of part design with RTM technology are described in detail in the table below.\nTable.
Design of RTM parts\n|Thickness accuracy, mm|\n|Minimum inside radius, mm|\n|Minimum thickness, mm|\n|Maximum thickness, mm|\nWe produce polymer composite parts with high requirements for strength and accuracy using the RTM method.\nFor example, for certain clients we have made prototypes and set up mass production of glass fiber-reinforced exterior bus parts made by RTM.", "score": 46.80928653918977, "rank": 4}, {"document_id": "doc-::chunk-2", "d_text": "This process is generally performed at both elevated pressure and elevated temperature. The use of elevated pressure facilitates a high fibre volume fraction and low void content for maximum structural efficiency.\nResin transfer moulding (RTM)\nA process using a two-sided mould set that forms both surfaces of the panel. The lower side is a rigid mould. The upper side can be a rigid or flexible mould. Flexible moulds can be made from composite materials, silicone or extruded polymer films such as nylon. The two sides fit together to produce a mold cavity. The distinguishing feature of resin transfer moulding is that the reinforcement materials are placed into this cavity and the mould set is closed prior to the introduction of matrix material. Resin transfer moulding includes numerous varieties which differ in the mechanics of how the resin is introduced to the reinforcement in the mould cavity.\nOther types of moulding include press moulding, transfer moulding, pultrusion moulding, filament winding, casting, centrifugal casting and continuous casting. There are also forming capabilities including CNC filament winding, vacuum infusion, wet lay-up, compression moulding, and thermoplastic moulding.\nLow-temperature-cure epoxy tooling prepregs, such as HX50 and HX70 from Amber Composites or LTM series from Advanced Composite Group, have now become benchmark products in aerospace, automotive, marine, industrial and motorsport applications. 
With their reliability long-proven, steady improvement has resulted in materials that are easy to handle and apply, provide Class-A surface finish, and offer surprising longevity of tool life. Some systems now enable 200°C end use temperatures, and recent advancements in materials have enabled composite tools to be compatible with an ever wider variety of processing methods.\nIllsey points to the example of F1 car builders and America's Cup boat builders.\n“F1 teams need a large number of small, complicated parts. Yacht builders may need a single 30 m long part. Both require extremely precise dimensions. You can find both using a similar HX tooling system with great results.”\nToday's high-performance contemporary yacht moulds are frequently made using tooling prepreg, as are the moulds in military and unmanned aerospace applications. Even in the commercial aerospace industry, after a long development cycle, composites are now widely used. Entire wing sections on the new Airbus A350, for instance, will be made from carbon fibre, and a considerable proportion of the tooling will utilise composite prepreg.", "score": 44.43190822204335, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Henkel has developed a polyurethane-based composite matrix resin that cures faster than the traditional epoxy resins. Composites based on carbon fiber or glass fiber are gaining momentum in various application areas due to the opportunity for enormous weight savings over traditional part construction, with no loss in mechanical performance. Starting in aerospace where pre-impregnated fibers (prepregs) are manually laid up and then baked into composites, many different applications are now penetrating into the automotive industry. 
New manufacturing methods like resin transfer molding enable economic processes that are suited for high volume automotive production.\nFor the resin transfer molding process, Henkel has developed a new composite matrix resin based on polyurethane which enables improved economics and throughput in processing. Compared to standard epoxy matrix resins, the new Loctite MAX2 cures significantly faster. During injection, it also enables more efficient impregnation without stressing the fibers due to the lower viscosity of the resin.\nThe composite properties of Loctite MAX2 were specially developed to provide more flexibility as well as much higher impact resistance than traditional epoxy resins. Henkel is confident that these new generations of polyurethane matrix resins deliver significant benefits for fast and efficient manufacturing.", "score": 43.290839344102075, "rank": 6}, {"document_id": "doc-::chunk-4", "d_text": "\"Applying the chopped material robotically right to the shape not only cuts the amount of scrap, but it's cheaper scrap.\" Since the preform is built right on the tool, the process eliminates the step required to transfer the finished preform from a manufacturing station to the part mold, so there is no risk of damage and no extra space is required to store preforms or preform tools.\nRIMFIRE + ZIP RTM\nIn production, RIMFIRE can be used with most available closed molding options, including standard resin transfer molding (RTM) and various methods of vacuum-assisted resin transfer molding (VARTM) and vacuum-assisted resin infusion molding (VARIM). Sea Ray currently uses ZIP RTM (zero injection pressure resin transfer molding) process, licensed from Plastech Thermoset Tectonics (Gunnislake, Plymouth, U.K.) and Plastech's North American partner JHM Technologies (Fenton, Mich.). ZIP RTM motivates resin flow with both vacuum and injection pressure, slightly favoring the vacuum, which results in less stress on the mold. 
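A quick way to see why a vacuum-favored process such as ZIP RTM puts less stress on the mold: if the cavity pressure stays below ambient, the pressure difference clamps the mold shut instead of pushing it open. A toy estimate of the net mold-opening force; the cavity pressure and mold area are assumptions chosen for illustration:

```python
# Net mold-opening force = (cavity pressure - ambient) * mold area.
# A negative result means atmospheric pressure holds the mold closed.
# The 0.8 bar cavity pressure and 4 m^2 area are hypothetical.

ATM_PA = 101_325.0  # standard atmospheric pressure, Pa

def mold_opening_force_n(cavity_pressure_pa: float, area_m2: float) -> float:
    """Positive = mold pushed open; negative = mold held closed."""
    return (cavity_pressure_pa - ATM_PA) * area_m2

f = mold_opening_force_n(80_000.0, 4.0)
print(f"{f / 1000:.1f} kN")  # -85.3 kN: ambient pressure clamps the mold
```

This is the mechanism that lets a vacuum-favored fill tolerate lighter, semi-rigid tooling than a purely pressure-driven injection would.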
This permits the use of less costly semi-rigid tooling. A multi-station inline system, ZIP RTM processes several lower molds simultaneously, but requires only two upper mold halves per line, keeping mold manufacturing costs to a minimum. (photos show construction of Sea Ray's 20-ft/6m 205 sport boat hull.)\nBefore production starts, technicians apply Chemlease (Chem-Trend, Howell, Mich.) or equivalent semipermanent release agents to upper and lower molds. Gel coat is applied to the lower mold and allowed to cure. Meanwhile, the specified fiberglass gun roving is loaded onto pallets behind the robot. Four rovings of continuous glass, typically 112- or 113-yield, are threaded through the end effector.\nIn a pre-loading station ahead of the RIMFIRE preforming cell, multiaxial glass fabric is laid into tight radii in the lower mold, if called for in the FEA, to prevent bridging of the preform spray over the angle. The mold is then latched onto a computer-controlled indexing device that rides on a floor-mounted rail, and is moved into the preform cell.\nAn operator starts the previously downloaded software program.", "score": 42.03800901980871, "rank": 7}, {"document_id": "doc-::chunk-2", "d_text": "The question was, ‘What type of equipment and process steps were needed between cutting and RTM?’ “We looked at the cutting system and what type of communication would be required,” explains Marc-Ruddy Thimon, director of sales for CAC. “The next steps would include automated layup, using a pick-and-place robot and application of plies onto a preforming tool, followed by debulking/compaction where heat and vacuum are applied.”\nIn the solution developed by IAI and TME, a cutting table and single robot are synchronized via a central control unit. 
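The cutter-to-robot handoff described above can be sketched as a simple first-in, first-out queue between the two stations; the ply names and the four-ply sequence below are illustrative placeholders, not IAI's actual layup recipe:

```python
from collections import deque

# Toy model of the synchronized work cell: the cutting table enqueues
# plies in the layup order and the robot dequeues them for placement.
# Ply names and sequence are hypothetical.

LAYUP_SEQUENCE = ["glass_face_up", "glass_face_down",
                  "carbon_face_up", "carbon_face_down"]

def run_cell(sequence):
    cut_queue = deque()           # handoff buffer between the stations
    placed = []
    for ply in sequence:          # cutting-table side: produce plies
        cut_queue.append(ply)
    while cut_queue:              # robot side: consume in FIFO order
        placed.append(cut_queue.popleft())
    return placed

print(run_cell(LAYUP_SEQUENCE))
```

The FIFO discipline is what guarantees the robot always receives the next ply the design calls for, which is the essence of the central control unit's job here.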
Rosenfeld explains: "The robot picks up plies in a predefined sequence per the design: two glass fabric plies (face-up and face-down) and two carbon fabric plies (face-up and face-down), which achieve a symmetric laminate." The synchronized system changes the rolls of fabric automatically as needed to keep the production line running. The system is set to cut a number of "boards," which contain cuts (pieces) from the same roll, according to the specific layup stage. The number of cuts within a "board" varies between four and 30, depending on the layup configuration as defined for each layup stage. The cuts are nested to achieve maximum material usage and minimize waste. The fabric is also coated with thermoplastic powder to aid preform consolidation.

The robot then places the cut plies onto the heated mold to achieve a 3D shape. However, there is an important bit of innovation here: "We have developed specialized end-effectors that operate like hands," explains Thimon, "allowing 2D materials to be folded and formed into complex-shaped areas like corners." As part of the preform building process, periodic vacuum debulks are performed with a reusable membrane. Debulk of prepregs with the same membrane was an established step in the prepreg seat process. IAI found that it also worked well to consolidate the thermoplastic-coated fabrics into a high-quality preform.

The single work cell robot, with its end effector now changed to a trimming head, is then used to trim the preform to achieve a net-shape part. The trimmed preform is manually transferred to the bottom mold in the RTM press. "We apply vacuum and begin resin injection, using the Isojet injection machine," says Rosenfeld.

The global trend towards lightweight construction is particularly evident in the aviation industry.
In the development of new civil aircraft generations, innovative materials such as carbon fiber reinforced plastics (CFRP) are being used in load-bearing structures. With the A350, for example, AIRBUS developed for the first time a civil aircraft consisting mostly of CFRP.

The skin sections from the fuselage area of this aircraft type are often produced using Automated Fiber Placement (AFP). Automated Fiber Placement is one of the leading manufacturing technologies in the field of cost-effective, high-quality series production of lightweight structures. The use of AFP processes significantly reduces the loss of expensive CFRP material.

The carbon fiber strips used, several millimeters wide and impregnated with resin (so-called tows), must be placed on the tool surface as load-free as possible. In contrast to Automated Tape Laying (ATL), the considerably narrower tow widths make it possible to apply the material very close to the desired contour of the component being produced, thus reducing loss to a minimum and increasing the material usage rate.

Often, load-free placement paths are not guaranteed by today's path planning algorithms, owing to a lack of material models and to path planning optimization algorithms that are only inadequately suited to CFRP, especially for complex components. As a result, gaps, overlaps and detachments of the material occur which do not meet the requirements of component design and the structural requirements (design requirements) placed on them.

Due to the increasing geometric complexity of component surfaces and the high structural requirements placed on CFRP laminates, AFP system programming is often still carried out manually and with great effort even today, in compliance with strict rules and regulations.
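The gap and overlap defects mentioned above have a simple geometric core: if adjacent tow centerlines are spaced wider than the tow width, a gap opens; if narrower, the tows overlap. The sketch below illustrates that check in one dimension with purely illustrative numbers; real AFP path planners work on curved surfaces, but the classification idea is the same.

```python
# Simplified 1D sketch (illustrative numbers only): adjacent tow
# centerlines across the tool; spacing wider than the tow width
# leaves a gap, narrower spacing creates an overlap.
def classify_seams(centerlines_mm, tow_width_mm, tol_mm=0.1):
    seams = []
    for a, b in zip(centerlines_mm, centerlines_mm[1:]):
        spacing = b - a
        if spacing > tow_width_mm + tol_mm:
            seams.append(("gap", round(spacing - tow_width_mm, 3)))
        elif spacing < tow_width_mm - tol_mm:
            seams.append(("overlap", round(tow_width_mm - spacing, 3)))
        else:
            seams.append(("ok", 0.0))
    return seams

# 6.35 mm (quarter-inch) tows laid at slightly varying spacing
print(classify_seams([0.0, 6.35, 13.2, 19.0], 6.35))
```

A design rule set would then bound the allowable gap/overlap width per laminate zone, which is exactly the kind of constraint that makes manual AFP programming so laborious.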
Once programming is complete, additional time-consuming testing procedures are used between the individual production steps to locate and classify defects in the laminate. Components that do not pass an inspection go through a large part of the process chain and cause additional costs without contributing to added value. The problem lies on the one hand in locating a defect and on the other in the time it takes to detect it.

A very time-consuming visual inspection, as frequently required within the scope of quality control by process supervisors for every position of the CFRP component, can only partially detect these defects and also causes high inspection costs. The undiscovered and uncorrected defects also cause repair costs at a later point in time, mostly of considerable scope and cost.

Prepreg is a semi-finished composite. The term is shorthand for "pre-impregnated" and is associated with continuous fiber reinforced materials. Prepregs have the greatest strength-to-weight ratio of any composite material. This is a result of having the highest fiber volume fraction relative to other composite manufacturing techniques. Being semi-finished, prepregs must be further processed or fabricated into a fully cured state. Many process options are available, each with distinct advantages. These processes include:
- Press Molding
- Various winding processes
- Automated Tape Placement
- Out of Autoclave

In order to successfully process or fabricate a prepreg, any process must create conditions that allow for good consolidation of the material and apply heat in order to initiate and maintain the cure process. Norplex-Micarta produces prepreg in a solution coating process.
Compared with other prepreg manufacturing techniques, this method assures even resin impregnation and fast production speeds, and normally results in a tack-free prepreg that is well suited to high-volume, automated fabrication.

Norplex-Micarta is always developing new materials. Many structural applications require a speed of processing that is not achievable with many of the standard composite fabrication techniques, such as autoclave. Norplex-Micarta is investing in technologies and techniques that allow for the co-curing of prepreg materials with other thermosetting material forms so that near net shape, complex cross section, and mass production targets can be achieved.

Sheet materials are readily available for fabrication into parts. Manufactured from Norplex-Micarta's prepregs, sheet products provide unique advantages to designers of composite parts and structures. These include:
- Sheet sizes from 36 inches by 48 inches up to 48 inches by 120 inches
- Thicknesses from 0.010 inches to more than 8 inches
- Easily customized with different surface or core materials, including traditional materials like rubber or metals
- High volume, consistent, and predictable

Sheets can be fabricated using various standard machining and stock removal processes, such as:
- Milling, turning, grinding
- Punching, routing, sanding
- Water jet cutting, shearing, sawing

Many structural applications require a more complex shape than can be produced from a sheet.
Nevertheless, these sheet materials allow for a detailed, yet timely, understanding of various material properties to establish a baseline for design work in other geometries.

This discovery is useful, for example, in developing an in-situ grade thermoplastic tape and towpreg that can be processed on an Automated Tape Laydown/Automated Fiber Placement (ATL/AFP) machine at speeds comparable to a thermoset-based tape, with the exception that no post autoclave or oven step is required after laydown. Cost modeling of the part fabrication has shown that 30% of the fabrication cost (recurring) can be saved by eliminating the post-processing (autoclave/oven) step. Furthermore, this discovery will also reduce the initial capital and facility cost investment to produce large composites.

Accordingly, the invention described in detail herein provides, in one aspect, thermoplastic compositions having a core composite layer that includes a fibrous substrate and at least one high performance polymer, and a surface layer polymer chosen from an amorphous polymer, a slow crystallizing semi-crystalline polymer, or combinations thereof, such that the surface layer polymer is applied on at least one surface of the core composite layer and forms a polymer blend with the high performance polymer, and wherein the Tm and Tprocess of the surface layer polymer are at least 10°C less than the Tm and Tprocess of the high performance polymer of the core composite layer.

In another aspect, the invention relates to articles of manufacture made from the thermoplastic composites according to the invention described herein.
Such articles are useful, for example, in the aircraft/aerospace industries among others.

Also provided by the present invention are methods for manufacturing the thermoplastic compositions described in detail herein by impregnating and/or coating the fibrous substrate with at least one high performance polymer, and applying a surface layer polymer as described in detail herein on at least one surface of the core composite layer, thereby forming a polymer blend between the surface layer polymer and the high performance polymer of the core composite layer.

In-situ grade thermoplastic composite tapes for use on an automated tape laydown or automated fiber placement machine are also provided.

These and other objects, features and advantages of this invention will become apparent from the following detailed description of the various aspects of the invention taken in conjunction with the accompanying Figures and Examples.

As summarized above, the discovery provides thermoplastic composites containing a unique resin-rich layer on one or more surfaces of a core composite layer containing a fibrous substrate that is impregnated with one or more high performance polymers.

Furthermore, the consolidation process step using an autoclave or oven requires a "bagging" operation to provide the lay-up with a sealed membrane over the tool to allow a vacuum to be applied for removal of air and to provide the pressure differential necessary to effect consolidation in the autoclave. This process step further reduces the total productivity of the composite part operation. Thus, for a thermoplastic composite it would be advantageous to consolidate in situ to a low-void composite while laminating the tape to the substrate with the ATL/AFP machine.
This process is typically referred to as in-situ ATL/AFP, and the material used in that process is called an in-situ grade tape.

In general, thermoplastic composites have had limited success to date, due to a variety of factors including high processing temperatures (currently around 400°C), high pressures, and prolonged molding times needed to produce good quality laminates. Most of the efforts have been focused on combining high performance polymers with structural fibers, which has only exacerbated the process problems. Because the length of time typically required to properly consolidate the prepreg plies determines the production rate for the part, it would be desirable to achieve the best consolidation in the shortest amount of time. Moreover, lower consolidation pressures or temperatures and shorter consolidation times will result in a less expensive production process due to lowered consumption of energy per piece for molding and other manufacturing benefits.

Accordingly, the fiber-reinforced thermoplastic materials and methods presently available for producing light-weight, toughened composites require further improvement.
Thermoplastic materials having improved process speeds on automated lay-up machines and lower processing temperatures, and requiring no autoclave or oven step, would be a useful advance in the art and could find rapid acceptance in the aerospace and high-performance automotive industries, among others.

The discovery detailed herein provides lower melting, slower crystallizing semi-crystalline polymer films that are applied to a surface (for example via lamination) of a core containing a tape or tow impregnated with a higher melting, faster crystallizing matrix polymer, and which can be initially processed at the melt process temperature of the surface polymer, but upon cool-down crystallize at rates intermediate to the faster crystallizing matrix polymer.

A programming and simulation solution may take the requirements from the engineering design and convert them into instructions that can be processed by a layup machine. The part programs can be post-processed, simulated or directly used by a machine to fabricate a part. The programs may include instructions for fiber placement machines (e.g., path for the head, angular position, and cut and add commands for the different tows), machining, etc.

At block 160, the programs are used to fabricate the part. The layup may be automated or manual, wet or dry, or a combination thereof. The fabric may be deposited by an end effector that performs automated fiber placement (AFP) or automated tape laying (ATL). In other embodiments, the layup may be performed manually. If the layup is dry, resin is then infused. Caul plates may then be placed on the part layup (depending on finish requirements). The part layup is then bagged and cured. Afterwards, the cured part may be machined (e.g., trimmed and drilled).

At block 170, feedback may be provided to validate or modify the rules.
For instance, if wrinkles are detected during laydown, and the rules had indicated that no wrinkles were expected, the rules would be modified.

Reference is now made to FIG. 2, which illustrates a computer 210 including a processor 220 and computer-readable memory 230. A program 240 is stored in the memory 230. When executed in the computer 210, the program 240 accesses an engineering definition of a composite part and applies a set of rules governing material laydown prior to the laydown being performed.

A method and apparatus herein enable the producibility (or manufacturability) of the composite part to be tested before the part is actually fabricated. By considering manufacturability during the design of a part, empirical testing is minimized, thereby speeding up part production. Trial and error are avoided. Multiple iterations of redesigning, refabricating and revalidating a part are avoided. Considerable time and cost are saved by removing the need to physically build validation coupons and follow an iterative process of testing.

A method and apparatus herein also enable manufacturing tradeoffs to be made during the design phase. Trades may be made between potentially different tape-width materials, which provides flexibility in manufacturing where the choice of automated equipment may be limited.

Automated tape laying (ATL) and automated fiber placement (AFP) are processes that use computer-guided robotics to lay one or several layers of carbon fiber tape or tows onto a mold to create a part or structure.
Typical applications include aircraft wing skins and fuselages.

CGTech helps demystify the process of programming automated composite machinery by introducing the key components of machine-independent off-line programming software. A view of the trends in automated fiber placement and automated tape laying from inside the supply chain.

Machinery manufacturer Mikrosam featured its new automated tape laying (ATL) system, developed for a civil airplane parts manufacturer.

Coriolis celebrates aerospace history, anticipates automotive future, at JEC (CompositesWorld): Coriolis Composites (Queven, France) held a press event at its stand to celebrate its 15th year at the JEC Europe event.

Trends in automation: ATL and AFP technologies increase speed, flexibility (CompositesWorld): Many automated production solutions were on offer at JEC Europe 2015, and several companies made announcements.

What is the genesis of automated tape laying technology? (CompositesWorld): CW Editor-in-Chief Jeff Sloan seeks and finds the roots of automated tape-laying technology in the work done by close associates of legendary composites-industry pioneer Brandt Goldsworthy.

Automated composites manufacturing, past, present and future (CompositesWorld): CW guest columnist Rob Sjostedt is a partner and CEO at VectorSum Inc.
He recalls his days at Goldsworthy Engineering and reminds CW readers that today's product requirements are more challenging than ever, and creative new approaches are required.

A need has further arisen for a process of manufacturing composite articles which can be constructed with the strength of additional plies located along predetermined load paths without major changes in existing molds, which parts are fabricated to achieve consistent fiber volumes with a lack of air voids throughout the final part.

The present method for manufacturing fiber reinforced composite articles creates articles of aerospace quality through a combination of siphoning, wicking and capillary action.

In accordance with the present invention, a method of molding an article of fiber reinforced material includes a step of providing a mold in the shape of the article for receiving fiber material. The mold includes a top and a bottom. A source of liquid resin located exterior of the mold is provided. The liquid resin is supplied from the source to the bottom of the mold and contacts the fiber materials. The liquid resin is allowed to wick throughout the fiber material from the bottom of the mold to the top of the mold through capillary action while the source of liquid resin is moved from the bottom of the mold to the top of the mold, thereby saturating the fiber material with the liquid resin. The liquid resin is then allowed to cure, and the article is then removed from the mold.

For a more complete understanding of the present invention and for further advantages thereof, reference is now made to the following Description of the Preferred Embodiments taken in conjunction with the accompanying Drawings in which:

FIG. 1 is a diagrammatic illustration of a mold utilized to perform the present method; and

FIG. 2 is a cross-sectional view taken generally along sectional lines 2--2 of FIG. 1.

Referring simultaneously to FIGS. 1 and 2, a mold, generally identified by the numeral 10, is illustrated for use in carrying out the method of the present invention. Mold 10 is configured for forming an article of manufacture by impregnating a dry fiber reinforcement with a resin according to the present invention.

Disposed within mold 10 is a tool 12 (FIG. 2). Tool 12 has the configuration of the part desired to be fabricated. Dry fiber reinforcement layers of material 14 are placed against tool 12. FIG. 2 illustrates three layers or plies of material 14 within tool 12. Any number of layers 14 and any configuration of layers 14 to provide areas of reinforcement in the part may be utilized with the present invention to manufacture a fiber reinforced article.

The main barrier to recycling has been the type of resin used; thermosetting resins predominate but these cannot be readily recycled.

"We have been working with the AMRC and a series of large trial panels have been produced using an innovative process which can readily be automated. These trials have demonstrated that recyclable composite panels can be produced at a rate and cost to suit many industries.

"The unique feature of the P2T process is the reduced tooling cost and lead time compared to existing metallic or composite solutions."

Traditionally, the composites industry has been based on the supply of rolls of 'pre-preg' (woven fibre sheets pre-impregnated with resin) which customers then lay up in moulds to produce 3D parts, curing through heating and pressurising to fix the final shape. Thermosetting resins are considered convenient materials to support this supply chain but, as tighter end-of-life regulations are introduced, better alternatives are required.

The advantages of composites produced by the P2T process are that they can be recycled multiple times.
The highest mechanical properties are obtained during first use of the virgin fibres, enabling highly-loaded structural items to be manufactured.

At the part's end-of-life, the fibres and potentially the resin can be recycled, supplying much of the raw material for a secondary part, such as a body panel. When the secondary part reaches the end of its life, being thermoplastic, it too can be chopped and remoulded into new parts with properties suitable for 3D solid components. This tertiary part can itself be recycled several times into lower grade parts.

The ongoing research between the AMRC and Prodrive Composites is set to expand considerably over the coming year and is being closely monitored by numerous companies in various industries looking to improve their environmental impact with high performance, light-weight components. (Image provided by AMRC.)

Although the traditional hand lay-up process will remain the process of choice for some applications, new developments in pultrusion, resin transfer moulding, vacuum infusion, sheet moulding compound, low temperature curing prepregs and low pressure moulding compounds are taking the industry to new heights of sophistication, and are now being exploited in high technology areas such as the aerospace industry.

The epoxy is a new formulation designed to fill the fabric when heated and compressed under a vacuum bag without requiring an autoclave.

Vacuum bags are also getting bigger to mold large parts like wind blades without seams. Airtech International Inc., Huntington Beach, Calif., introduced Big Blue L-100, a 12-meter-wide, five-layer vacuum bag with adhesive. A 10.5-meter-wide film will soon be offered by Aerovac in the U.K., sister company of Richmond Aircraft Products.

New RTM consumables

Several companies introduced new transport media for resin transfer molding to use between the peel ply and vacuum bag. PGI Industrial (formerly Polymer Group Inc.) introduced Flowmat, a polyester nonwoven with canals that help resin infuse faster. It is distributed in the U.S. by Nida-Core.

Henkel's new Hysol EA 9895 is a thin polyester wet peel ply, which reportedly needs 30% less peel force than conventional peel plies.

Such a wing preform might have a thickness varying between 0.05 inches and 1.5 inches. The wing preform is quite large, and its surface is very complex, usually a compound contoured three-dimensional surface.

The stitched wing preform is transferred to an outer mold line tool that has the shape of an aircraft wing. Prior to the transfer, a surface of the outer mold line tool is covered with a congealed epoxy resin. The tool and the stitched wing preform are placed in an autoclave. Under high pressure and temperature, the resin is infused into the stitched preform and cured. The result is a cured wing cover that is ready for assembly into a final wing structure.

For textile composite technology to be successful, two barriers must be addressed: cost and damage tolerance. Damage tolerance is achieved by making high quality, closely-spaced stitches on the wing preform. The high quality, closely-spaced stitches add a third continuous column of material to the wing preform. If thread tension is not proper, a large number of stitches on the preform will not be of sufficient quality and will reduce the damage tolerance.
Improper thread path geometry might also degrade the quality of the stitches and, therefore, reduce the damage tolerance.

Even though the stitches are made by a stitching machine that is computer numerically controlled ("CNC"), it is difficult to make stitches having the high quality required for the wing preform. On a compound, contoured three-dimensional surface, thread tension and thread path geometry must be constantly adjusted for an exceedingly large number of stitches. The CNC stitching machine might make eight to ten stitches per inch, in rows that might be spaced 0.1 inches to 0.5 inches apart, over a surface that might be longer than forty feet and wider than eight feet. The total number of stitching points on the wing preform might exceed 1.5 million.

Much manual labor is required. Because the wing preform has many regions of differing thickness, a machine operator must constantly stop the stitching machine when a new region is about to be stitched, adjust the thread tension and possibly the thread path geometry, and restart the stitching machine. Of course, the CNC stitching machine has multiple stitching heads. At any given time, two or more stitching heads might be stitching different regions having different thicknesses.

This new resin family boasts better high-temperature and hot/wet performance than epoxy, closer to bismaleimide (BMI) in performance, but much less expensive. Benzoxazine is also room-temperature stable, not needing refrigeration or freezing.

Benzoxazines (a subclass of phenolics) have been known for decades and used in the electronics industry but were not previously marketed for structural aerospace applications. Henkel's lab name for the resin is Epsilon. It's available as a one-part, semi-solid resin for structural prepregs and in liquid two-part formulations for RTM.
Henkel's first commercial application is for aerospace tooling.

In other new moldmaking materials, Hexcel Corp. presented a high-heat prepreg called HexTool. It consists of carbon fiber and Hexcel's HexPly M61 BMI resin. It is said to be very tough and quasi-isotropic, allowing for complex mold shapes. It heats and cools quickly, and is machinable, so molds are easy to repair or modify.

Molds reportedly can be built faster and with better surface finish using new FSP fiber-filled spray putty from Euromere of France. It is based on isophthalic polyester or vinyl ester and its density is 0.6 to 0.8 g/cc, half that of standard putty.

Several companies showed recently developed water-based mold releases. Henkel introduced Frekote 901WB last year for large composite parts. Chem-Trend offers Chemlease Hydrolease for multiple releases and Class A finish. And Axel Plastics showed XtendW-7837D, for producing wind blades with a matte finish.

Wind blade materials

Other new products are also tailored for the burgeoning field of wind-energy blades. As reported in April, Owens Corning came out with WindStrand single-end rovings and knitted fabrics made of a special glass that is up to 35% stronger and 17% stiffer than E-glass. They allow production of bigger blades without the use of more costly carbon fiber.

Gurit in the U.K. (formerly SP Systems) introduced thicker Sprint Triax prepregs (150 to 200 mm) for greater strength in large parts like the outer shell of wind turbines. Sprint Triax is a three-layer prepreg of epoxy film with fabric on two surfaces.

Unlike filament winding, which requires a comparatively minimal level of human involvement, lay-up manufacturing involves manually stacking layers of resin-impregnated roving cloth, usually of fiberglass or carbon weave, in an open mold, and compressing them to form a solid composite product.
Depending on the application, a mold or other tooling can be used to form the outer profile, while a technician can utilize various instruments such as squeegees and rollers to force the layers together and to impregnate additional resin throughout the stack. Further consolidating forces, such as vacuum bagging or mated molds, can add extra compression if the application warrants it. After the desired geometry has been formed, the composite is placed in an oven to cure.

Flexibility in forming composite structures.

The lay-up manufacturing system allows for a high degree of craft and technical virtuosity on the part of a composites technician. A skilled technician can create complex composite structures that would be much more difficult to produce via other production technologies. Additionally, the technician can manually orient the fiber arrangement for optimal strength and durability. Because of its labor-intensive nature, however, lay-up is primarily suited to low-volume runs or the production of specialty composite items that would be prohibitively difficult or low-volume for filament winding or compression molding techniques.

Wide-ranging potential for diverse composites applications.

Using the lay-up system, ACI has produced custom composite products from fiberglass, carbon fiber and aramid for everything from communications structures to defense armaments and ballistics applications.

Composite Components Manufacturing Services

Our Composite Manufacturing Expertise

We excel at a full range of resin transfer molding composites manufacturing services:
- Composite Design and Engineering
- Composites Manufacturing
- Materials Testing
- Prototyping and Process Development
- Tool Design and Fabrication

Our proprietary processes and materials give us a significant competitive advantage, and we use that advantage to deliver the best composite products to industry leaders and innovators from a variety of demanding global markets.

What Sets Us Apart From the Competition
- State-of-the-Art Composite Manufacturing Processes
- Lean Manufacturing Methods
- Statistical Process Control
- AS9100/ISO-9001 Certification
- Nadcap Accreditation
- Implemented Continuous Improvement Plan
- Intranet-Based Quality System

We've established ourselves as an industry leader in the challenging and growing field of advanced composites products with our comprehensive composites manufacturing and engineering capabilities in a full array of advanced composites processes and manufacturing practices.

Our team of composites experts will help you understand which manufacturing method is right for the product you need. From composite molding processes like Composite Lay-Up to Resin Transfer Molding, we harness the latest technology and techniques to ensure your project's success.

Call 321-633-4480 today to get started on your next project.

CAMX 2018 preview: Matrix Composites

Appears in Print as: 'Composites design, manufacturing services'

Matrix Composites Inc. (Rockledge, FL, US), a designer and manufacturer of a range of high-performance composite components and assemblies, is emphasizing its design, development, tooling, fabrication, testing and integration services.

Long on technology firsts, this optimized, automated manufacturing process produces nothing short of the "perfect" bike frame.

Resin transfer molding makes CFRP passenger cell mass-producible for new model supercar.

When open molders turn to infusion, careful planning, material selection and training precede process efficiency.

Trends In Aerospace Composite Manufacturing
- Use of large, unitized structures is increasing.
- Improved designs and processing are required.
- Large parts are too expensive to be rejected.
- Careful evaluation (and possible repair) is required.
- Current inspection techniques (water- and laser-based) are not cost-effective in a lean environment.

The ideal system has:
- Agility (robot-mounted),
- High sensitivity,
- Effective validation for large structures,
- Capability to lower system cost,
- And the ability to move inspection closer to manufacturing operations, on tool and in the uncured state.
Those benefits, paired with eliminating steps such as edge sealing and X-ray inspections, help to reduce overall part costs.\nThe benefits are not only limited to aerospace designs. The overall mechanical properties, weight savings, and processing efficiencies offer a step-change for the Automotive, Energy, and Consumer Electronics industries as well.\nTri-Mack Plastics Manufacturing Corporation is an innovative engineering and manufacturing company specializing in high temperature thermoplastics and thermoplastic composites. For 40 years, customers have relied on Tri-Mack to identify the best material, part design, and manufacturing process for their critical applications. Engineering expertise, cutting-edge technology and a commitment to quality enable Tri-Mack to support projects from initial concept to commercial production. Tri-Mack’s manufacturing capabilities include automated composite processing, injection molding, tool making, multi-axis machining, bonding and assembly. ISO 9001:2008 and AS9100 certified, Tri-Mack serves the aerospace, industrial equipment, chemical processing and medical industries. For more information, visit www.trimack.com.", "score": 30.678055703376085, "rank": 25}, {"document_id": "doc-::chunk-1", "d_text": "By pre-impregnating the fabric with resin, under carefully controlled conditions with precision machinery, a material is created with known and repeatable engineering characteristics. This is the prepreg we are familiar with today. More recently tooling prepreg has become the standard method for producing precision composite moulds by employing the same proven principles.”\nBy slowly improving the characteristics of the material — including the handleability, the life at ambient temperature, the right amount of tack (stickiness) — the tooling route is now becoming easier and more cost effective. 
A new generation of composites is being used in an increasing number of efficiently engineered applications.\nA small team of experts founded Amber Composites in the UK in 1988 to develop and manufacture high-performance composite prepreg. Prepreg technology was still in its infancy then and the early team was involved in developing some of its first commercial applications. From its corporate headquarters in Nottinghamshire (the setting for the revolution in industrial textiles and a logical place for the birth of the carbon fibre industry), Amber worked closely with F1 racing teams and a number of other high-performance engineers. This demanding customer segment drove the development of the tooling materials that are available today. Today, Amber serves a wide variety of industries worldwide including aerospace, automotive, marine, and communications.\nA rough guide to moulding\nVacuum bag moulding\nA process using a single-sided mould set that shapes the outside surface of the panel. On the lower side is a rigid mould and on the upper side is a flexible membrane or vacuum bag. The flexible membrane can be a reusable silicone material or an extruded polymer film. Then, vacuum is applied to the mould. This process can be performed at either ambient or elevated temperature with ambient atmospheric pressure acting upon the vacuum bag.\nAutoclave moulding\nA process using a single-sided mould set that forms the outer surface of the panel. On the lower side is a rigid mould and on the upper side is a flexible membrane made from silicone or an extruded polymer film such as nylon. Reinforcement materials can be placed manually or robotically. They include continuous fibre forms fashioned into textile constructions. Most often, they are pre-impregnated with the resin in the form of prepreg fabrics or unidirectional tapes. In some instances, a resin film is placed upon the mould and dry reinforcement is placed above. The membrane is installed and vacuum is applied. 
The assembly is placed into an autoclave pressure vessel.", "score": 29.951191883218875, "rank": 26}, {"document_id": "doc-::chunk-6", "d_text": "The lower mold is transported to a demolding station, where the cured hull is demolded and moved to a robotic trimming station. Meanwhile, the empty mold is returned to the pre-loading station for mold prep.\nBIG BOATS AND BEYOND\nHulls for Sea Ray's larger cruisers and yachts are still open-molded in some Tennessee facilities and in three plants in Florida. The company is looking into converting these facilities to RIMFIRE and the Seemann Composites Resin Infusion Molding Process (SCRIMP). The latter is a patented VARIM process licensed by TPI International (Warren, R.I.), which substitutes for the upper tool a special resin distribution medium and silicone vacuum bag.\nSea Ray is now in the process of running tests to scale up RIMFIRE for cruisers in the 25-ft to 34-ft (7.6m to 10.4m) range. In time, it is possible that the company will employ RIMFIRE on its largest yachts (up to 68-ft/20.7m).\nLicenses for RIMFIRE equipment and technology are available to custom molders outside the marine industry. Although RIMFIRE hulls are currently made in concave (female) molds, the process also works with convex (male) molds, and research is underway to incorporate longer fibers (up to 203 mm/8 inches long) and to enable spraying of fibers in directed orientations. Moreover, carbon fibers appear to be viable for RIMFIRE processing, but more R&D would be necessary before carbon preforms could be put into production.", "score": 29.541341883607796, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "RAKU® EI-2510 Structural Liquid Resin System\nWith this resin system, RAMPF has developed a product to match the performance of toughened high Tg prepreg epoxy systems for vacuum infusion and RTM processing. 
RAKU® EI-2510 has a very low viscosity (200 mPa·s at 40 °C) to allow for low-cost infrastructure and tooling while having an excellent pot life of 2 hrs and a short cure time (2 hrs at 120 °C). Fully cured, the system has a dry Tg of 210 °C and excellent hot-wet properties. The system is designed for high fracture toughness and is an excellent candidate for applications that are exposed to a harsh environment and high mechanical, thermal, and vibrational loading. Applications include cascades, control surfaces, and structural components.\nRAKU® EI-2508 FST System for Interior Applications\nThis unfilled epoxy system combines excellent FST properties with the strength and performance of a toughened epoxy system. Its low viscosity and a temperature-activated cure profile allow for fast cycle times (full cure can be achieved within 15 min). The system is ideal for higher-volume aerospace components like seats (structural and non-structural), pack boards, wall and ceiling panels, overhead bins, and lavatory components.", "score": 29.205175796720987, "rank": 28}, {"document_id": "doc-::chunk-3", "d_text": "Epoxy resins have not yet been run, but Lammers anticipates no difficulties, with the right glass and sizing.\nAltair Engineering (Troy, Mich.) or equivalent finite element analysis (FEA) software is used to determine the laminate materials and architecture that best suit the requirements of the entire boat structure. The property values of the materials to be used are factored into that analysis. The preform design is built into a computer-aided design (CAD) file. Once the design file is ready, the robot's deposition rate is set by preprogramming the process software, which calibrates the yield of the roving, the speed of the robot arm/end effector and the servo motor speed on the chopper gun.\nWhile the part tooling is being made, robot programming is done offline using software such as that provided by Delmia Corp. (Auburn Hills, Mich.). 
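The deposition-rate calibration described above ties roving yield, robot arm speed, and chopper servo speed together. A minimal sketch of that arithmetic, assuming illustrative numbers throughout (the yield, feed speed, and spray band values are hypothetical, not Sea Ray's actual parameters):

```python
# Hedged sketch: relate glass roving yield and chopper feed speed to the
# areal weight deposited by a spray pass. All numbers are illustrative.

def chop_mass_rate(yield_tex_g_per_km: float, feed_speed_m_per_min: float) -> float:
    """Mass of chopped glass leaving the gun, in g/min."""
    g_per_m = yield_tex_g_per_km / 1000.0  # tex = grams per 1000 m of roving
    return g_per_m * feed_speed_m_per_min

def areal_weight(mass_rate_g_per_min: float, band_width_m: float,
                 traverse_speed_m_per_min: float) -> float:
    """Deposited areal weight, in g/m^2, for a single spray pass."""
    coverage_m2_per_min = band_width_m * traverse_speed_m_per_min
    return mass_rate_g_per_min / coverage_m2_per_min

# Example: 2400 tex roving fed at 60 m/min through a 0.3 m band at 10 m/min.
rate = chop_mass_rate(2400, 60)   # 2.4 g/m * 60 m/min = 144 g/min
aw = areal_weight(rate, 0.3, 10)  # 144 / 3 = 48 g/m^2 per pass
print(rate, aw)
```

Multiple passes then build the laminate to the thickness the FEA calls for, which is why the process software needs all three speeds calibrated together.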
When the mold tooling is ready, the software can be downloaded into the robot controller for immediate start of production. \"Programming offline saves cost in overtime that might otherwise need to be expended to generate programs after regular working hours, so as not to interrupt production,\" says Lammers.\nSince RIMFIRE production began, early in 2003, Sea Ray's three plants in Tennessee have built more than 5,000 hulls for its 18-ft to 24-ft (5.5m to 7.3m) sport boats. Cost savings have been enormous. To hand assemble a hull preform using dry mat and fabric would take three technicians 45 minutes, or 2.25 worker hours. Using RIMFIRE for a hull of the same size, says Lammers, one part-time operator and one robot can build a preform in about 20 minutes. Two robots can cut the time to 10 to 15 minutes. Use of gun roving reduces material cost by about 50 percent, compared to mat and fabric. Typically, chopped fibers and binder are sprayed directly onto the cured gel coat, with no veil or barrier coat in between. According to Sea Ray, the process has no negative effect on the Class A finish of the gel coat, a standard polyester system from one of several suppliers.\n\"Furthermore, layup of rolled goods usually results in a lot of scrap material that gets trimmed away,\" Lammers adds.", "score": 29.153587743984325, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "Kneath added, “The design flexibility of the materials, along with the technical support from Victrex, enabled us to engineer and manufacture a PAEK bracket that can be produced in minutes compared to the hours it would take for a metal or thermoset equivalent.” This improvement in manufacturing efficiency translates into less processing time, lower energy requirements, and reduced waste for Tri-Mack. Those benefits, paired with eliminating steps such as edge sealing and X-ray inspections, help to reduce overall part costs. The benefits are not only limited to aerospace designs. 
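The labor figures quoted above reduce to simple arithmetic; this small check uses only the numbers given in the text (the roughly 20-minute robot cycle is the article's own estimate):

```python
# A quick check of the worker-hour figures quoted for RIMFIRE preform assembly.
hand_layup_hours = 3 * (45 / 60)  # three technicians at 45 minutes each
robot_hours = 1 * (20 / 60)       # one part-time operator, ~20 minutes per preform

print(hand_layup_hours)                # 2.25 worker-hours, matching the article
print(hand_layup_hours / robot_hours)  # roughly a 6.75x reduction in touch labor
```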
The overall mechanical properties, weight savings, and processing efficiencies offer a step-change for the automotive, energy, and consumer electronics industries as well.\nVICTREX PAEK solutions and information on the hybrid molding process for the aerospace market can be found at www.victrex.com. To learn more about Tri-Mack’s engineering and manufacturing expertise, please visit www.trimack.com.\nAbout Tri-Mack Plastics Manufacturing\nLocated in Bristol, Rhode Island, Tri-Mack Plastics Manufacturing is an innovative engineering and manufacturing company specializing in high temperature thermoplastics and thermoplastic composites. For 40 years, customers have relied on Tri-Mack to identify the best material, part design, and manufacturing process for their critical applications. Engineering expertise, cutting-edge technology and a commitment to quality enable Tri-Mack to support projects from initial concept to commercial production. Tri-Mack’s manufacturing capabilities include automated composite processing, injection molding, tool making, multi-axis machining, bonding and assembly. ISO 9001:2008 and AS9100 certified, Tri-Mack serves the aerospace, industrial equipment, chemical processing and medical industries. For more information, visit www.trimack.com.", "score": 28.523503279179668, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "New Hybrid Molding Process and Material Innovations From Victrex Delivers Next-Generation Aerospace Solutions\nOctober 10, 2014\nVictrex has developed a new PAEK-based polymer and an innovative hybrid molding technology. This allows engineers to overmold a PAEK-based composite with fiber-reinforced VICTREX® PEEK injection molding materials. The polymeric advancement allows engineers to design stronger, lower cost components that are up to 60% lighter than typical metal and thermoset systems. By working together, Victrex and Tri-Mack Plastics Manufacturing Corp. 
have engineered an aerospace bracket using this new polymer and technique with the demanding performance requirements of loaded applications in mind.\nThe development of technologies for the aerospace industry is making significant progress not only from a material perspective, but from a processing standpoint as well. Engineers are requiring lightweight, complex parts at lower costs, higher processing efficiencies, and improved mechanical properties. This can be delivered by using the hybrid molding process. Victrex, an innovative world leader in PAEK (polyaryletherketone) polymer solutions, has collaborated with Tri-Mack Plastics, a distinguished molder of high-temperature thermoplastic resins and composites for the aerospace sector, to deliver unique structural aircraft solutions.\nCommercial aircraft use thousands of brackets from the cockpit to the tail of the plane. The total number of brackets on an aircraft can add a significant amount of weight, especially if they are made from metal. The hybrid-molded VICTREX PAEK-based composite bracket is able to deliver up to 60% weight savings compared to stainless steel and titanium while offering equivalent or better mechanical properties such as strength, stiffness, and fatigue resistance. “These technologies are enabling engineers to design lighter, stronger and more cost-effective solutions like the new bracket,” stated Tim Herr, Aerospace Strategic Business Unit Director for Victrex. Herr says, “This game-changing advantage over metals and thermoset composites is a result of our dedication to the future of flight.”\nTri-Mack has a long-standing relationship with Victrex in the development of high-performance aerospace components using PAEK polymers. 
“Given our expertise in the industry, we were excited to use a new polymer and process to develop an innovative product,” said Tom Kneath, Director of Sales and Marketing for Tri-Mack.", "score": 28.498575208334056, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Munro Tech Spotlight: Presidium Part Two – Benefits of RevoTherm®\nIn part two of our Tech Spotlight: Presidium, we’re highlighting the benefits of the company’s product, RevoTherm®, which stands for revolutionary thermoset. RevoTherm is a polyurethane-based thermoset resin system with material properties capable of replacing traditional composite materials in either a lightweight “neat” or reinforced system.\nRevoTherm® is created in a two-part chemistry using a traditional isocyanate and Presidium’s “secret sauce”, a modified polyol that has structural properties up to five times stronger than typical urethanes. However, it retains the benefits of a urethane system with low viscosity that flows like water and cures into a finished part. This allows the material to easily flow into tools to make parts, so you can use low-pressure tools and processes that are relatively inexpensive compared to other industry equipment.\nThe RevoTherm® system is a clean (non-styrenic) and low-labor process suited for numerous manufacturing applications including closed-molding, pultrusion, RIM, and vacuum infusion using commercially available meter mix equipment and low-cost tooling. Compared to many traditional composite processes, the RevoTherm® system produces minimal waste and low VOCs.\nIn addition, part design freedom is expanded with the ability to add in thick-to-thin areas, embosses and textures with no read-through. 
Being a polyurethane-based material, the end product has optimal adhesion properties for coatings and adhesives.\nTo learn more, visit: https://www.presidiumusa.com/revotherm.\nNext week, we’ll share the last post for this Tech Spotlight series and highlight RevoTherm’s applications.\nTo view part one of this series, click here!", "score": 27.82077316137722, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "Victrex has developed a new PAEK-based polymer and an innovative hybrid moulding technology. This allows engineers to overmould a PAEK-based composite with fiber-reinforced PEEK injection moulding materials. The polymeric advancement allows engineers to design stronger, lower cost components that are up to 60% lighter than typical metal and thermoset systems. By working together, Victrex and Tri-Mack Plastics Manufacturing Corporation have engineered an aerospace bracket using this new polymer and technique with the demanding performance requirements of loaded applications in mind. Victrex will be exhibiting its latest aerospace solutions at the Aircraft Interiors Expo on October 14-16, 2014 in Seattle, Washington, at the Fakuma show on October 14-18 in Friedrichshafen, Germany, as well as delivering a paper on hybrid moulding at ITHEC which is held on October 27-28, 2014 in Bremen, Germany.\nThe development of technologies for the Aerospace industry is making significant progress not only from a material perspective, but from a processing standpoint as well. Engineers are requiring lightweight, complex parts at lower costs, higher processing efficiencies, and improved mechanical properties. 
This can be delivered by using the hybrid moulding process.\nVictrex, an innovative world leader in PAEK (polyaryletherketone) polymer solutions, has collaborated with Tri-Mack Plastics Manufacturing Corp., a distinguished moulder of high-temperature thermoplastic resins and composites for the aerospace sector, to deliver unique structural aircraft solutions.\nCommercial aircraft use thousands of brackets from the cockpit to the tail of the plane. The total number of brackets on an aircraft can add a significant amount of weight, especially if they are made from metal. The hybrid-moulded VICTREX™ PAEK-based composite bracket is able to deliver up to 60% weight savings compared to stainless steel and titanium while offering equivalent or better mechanical properties such as strength, stiffness, and fatigue resistance. “These technologies are enabling engineers to design lighter, stronger and more cost-effective solutions like the new bracket,” stated Tim Herr, Aerospace Strategic Business Unit Director for Victrex. Herr says, “This game-changing advantage over metals and thermoset composites is a result of our dedication to the future of flight.”\nTri-Mack has a long-standing relationship with Victrex in the development of high-performance aerospace components using PAEK polymers.", "score": 27.535649050294914, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "To meet the challenge of producing quality products at a competitive cost, manufacturers long ago began automating as many of their manufacturing processes as they could, with the payoff coming from producing better products, cheaper and faster than their competitors.\nThe primary objective of automating a process is to reduce, or even eliminate, the manual or “touch” labor involved in a manufacturing process. 
The utopian vision is fully automated factories that would use intelligent robots and sophisticated machines to fabricate a variety of customized products quickly, inexpensively and without defects.\nIn businesses that produce a high volume of standardized products, automation can approach that vision. However, industries such as aerospace typically produce relatively small volumes of complex assemblies, in which the manufacturing process simply does not lend itself to fixed automation. Fixed automation systems have yet to reach the level of sophistication and flexibility of the trained human technician so, for example, jet engines are still assembled mostly by hand.\nAs sophisticated as humans are, they are prone to human error. Distraction, fatigue, illness and interruption, not to mention worker substitution because of absence, can all lead to mistakes. As a result, the most sophisticated assembly resource can be the most vulnerable to inaccuracies.\nIn spite of the fact that as much as 40% of manufacturing processes occur beyond the limitations of a fixed automation system, manufacturers can still realize benefits from automation in these manual environments. These benefits are derived not from replacing people with machines but from providing workers with right-on-time information that helps them accomplish their jobs accurately the first time, every time.\nSimply put, right-on-time information augments human performance. Just as just-in-time inventory control avoids receiving material earlier than needed, providing right-on-time information avoids an operator receiving feedback too late to correct an error as it occurs.\nTechnologies such as augmented assembly and verification, for example, combine motion tracking, process tracking and real-time feedback to monitor assembly accuracy, detect mistakes and provide the operator with the feedback needed to immediately correct those errors. 
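The right-on-time feedback described above, catching a mistake as it occurs rather than at a later verification step, can be sketched as an in-process check. The torque spec, record type, and messages below are hypothetical, for illustration only:

```python
from dataclasses import dataclass

@dataclass
class TorqueReading:
    """One reading from an instrumented torque tool (hypothetical record)."""
    bolt_id: str
    torque_nm: float

# Hypothetical spec: each fixture bolt must land inside this tolerance band.
SPEC_NM = (24.0, 26.0)  # illustrative min/max torque, N*m

def check(reading: TorqueReading) -> str:
    """Return immediate operator feedback instead of deferring the error
    to a separate downstream verification step."""
    lo, hi = SPEC_NM
    if reading.torque_nm < lo:
        return f"{reading.bolt_id}: under-torqued, re-tighten now"
    if reading.torque_nm > hi:
        return f"{reading.bolt_id}: over-torqued, inspect fastener"
    return f"{reading.bolt_id}: ok"

print(check(TorqueReading("B7", 23.1)))  # flagged immediately, not at final QA
```

If each result is also logged, the documented accuracy record replaces the duplicative manual verification pass the text mentions.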
If the system also can document the resulting assembly accuracy, manufacturers can avoid duplicative manual verification processes.\nThis blend of man and machine enables the implementation of semi-automated manufacturing processes that leverage the best elements of both manual and fully automated approaches, while protecting against their respective weaknesses.\nMoving Beyond Traditional Control Processes\nThe accuracy of traditional manufacturing processes often varies with the amount of manual labor involved. Take the example of an assembly procedure where human operators are applying fixture bolts with the use of a manual torque wrench.", "score": 27.398258516256973, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Software to Enable Composite and Assembly Development Processes for Modern Airframes 2009-01-3241\nAll trends indicate that composite structures are becoming an increasingly large percentage of modern airframes. Because of this, airframes are becoming both more efficient and more complex. This is due in large part to the fact that composite aircraft assemblies have huge volumes of highly interdependent design information. Creating the initial designs and making subsequent changes to these complex aerostructures is both time-consuming and error-prone. In this article you will learn how a tightly integrated suite of software for aerostructure development greatly increases design and manufacturing efficiency of today's complex composite aircraft assemblies.\nIn the aftermath of the first intensive, large scale, commercial composite airframe programs, exemplified by the Boeing 787 Dreamliner, many aircraft manufacturers are selecting carbon fiber reinforced plastic (CFRP) as the structural material of choice for the fuselage and/or wing of their next project. This is true for all types and sizes of commercial airplanes, including large, regional, business and general aviation aircraft. 
In this transitional period into a new age for the aerospace industry, lessons on composite engineering and manufacturing must be assimilated quickly to transform or adapt the overall development process to this new reality. The complexity of the multiple interactions between material choices, tooling selection, design methodology and manufacturing processes must be understood in order to devise the most robust and efficient approach.\nIn this article we describe how VISTAGY has been able to draw on more than 15 years of leadership in composite design, and its close association with the major actors of the global aerospace industry, to propose a global and flexible product development approach for composite airframe development.\nThe main ingredients of composite airframe development are reviewed in the context of a CAD integrated specialized environment including elements such as the overall fiber orientation strategy, design methodologies such as concurrent design and stress optimization and validation, design for the manufacturing process, design for assembly, and quality control procedures.", "score": 27.31097563130755, "rank": 35}, {"document_id": "doc-::chunk-5", "d_text": "The binder mixing tube conveys the commingled powders through the back of the robot and into the end effector. Chopped 25.4-mm and 50.8-mm (1-inch and 2-inch) fibers and binder are deposited in a random pattern at a pre-programmed rate, as determined by the FEA analysis. This may be done in a single pass, or in multiple passes, as needed to achieve an even surface on a laminate thickness of 6.35-mm/0.25-inch or more. Conditions that influence the number of passes include part geometries, draft design and the amount of glass specified for deposit.\nWhen the preform is complete, the mold progresses to a third station, where engineered fiberglass fabrics are hand placed to reinforce the preform. 
A combination of chopped strand mat and 0°/90° stitched fabric is laid into the bottom of the hull and around the chine, where the outer edge of the hull bottom joins the hull side. Multiaxial stitched or woven materials are added to other areas that will be exposed to high loads. Next, the preform overspray is trimmed off the flange of the lower mold.\nThe lower mold then moves to the mold closure station, where a convex upper tool, mounted in a wheeled trolley, is automatically moved into position over the lower mold by a manipulator arm on a fixed gantry. The edges of the upper and lower molds are joined and sealed, and the tooling set then moves into the resin injection station.\nThe ZIP RTM method feeds resin around the perimeter of the joined molds while a vacuum is pulled through a center vent in the upper mold. Resin flow is controlled by a mechanical vacuum molding pressure control (VMPC) sensor, also mounted in the upper mold. The VMPC reacts to internal pressure changes. In the event the pressure becomes positive, the VMPC sends a signal to interrupt the resin flow until the vacuum re-establishes negative pressure within the tool, and then restarts resin flow automatically.\nMoved to the curing station, both the upper and lower molds are heated electrically, and the part is allowed to cure at 32°C to 38°C (90°F to 100°F) for about one hour.\nIn the next station, a second manipulator arm removes the upper tool, which then is returned to the closure station to be mated with the next lower mold tool.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "Spirit AeroSystems [NYSE: SPR] announced today its Advanced Technology Centre in Prestwick, Scotland, has developed an improved method for manufacturing composite parts.\nSpirit AeroSystems uses some of the world’s largest autoclaves to support the company’s composite fuselage business. 
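The VMPC behavior described in the ZIP RTM passage above (interrupt resin feed when tool pressure turns positive, resume automatically once vacuum re-establishes negative pressure) amounts to a simple on/off control loop. A hedged sketch; the real sensor is mechanical, and the pressure trace here is invented:

```python
def vmpc_flow_states(pressures_kpa):
    """Given gauge-pressure samples inside the tool (negative = vacuum held),
    return whether resin flow is enabled at each sample. Flow stops when
    pressure turns positive and resumes once negative pressure is restored,
    mirroring the VMPC logic described in the text."""
    flowing = True
    states = []
    for p in pressures_kpa:
        if p >= 0.0:   # vacuum lost: interrupt the resin feed
            flowing = False
        else:          # negative pressure restored: resume the feed
            flowing = True
        states.append(flowing)
    return states

# Illustrative trace: vacuum held, brief positive spike, vacuum restored.
print(vmpc_flow_states([-20.0, -5.0, 2.0, 1.0, -1.0, -15.0]))
# → [True, True, False, False, True, True]
```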
The new intelligent heated tool technology can cure composite parts 40 percent faster and at half the cost without using an autoclave.\nIn collaboration with the University of Strathclyde and the Scottish Innovation Centre for Sensor and Imaging Systems (CENSIS), Spirit developed an intelligent heated tool for curing composite components. The new technology can cure composite parts 40 percent faster at half the cost and supports a wide range of composite components across industries, from wind turbine blades to the next generation of composite aircraft.\n\"Instead of curing components at a standard temperature for hours at a time, we can now tailor the cycle time to match individual part geometries,\" Stevie Brown, lead engineer at Spirit's Advanced Technology Centre in Prestwick, explained. \"The autoclave has been a bottleneck in manufacturing lines, and removing it will reduce cycle times for components, cut production costs and decrease energy consumption.\"\nTypically, high-performance composite materials are layered on a specially formed surface, or tool, and then placed in an autoclave, where a combination of heat and pressure accelerate the hardening of the material. Spirit's new technology introduces an intelligent, multi-zone heated tool, removing the need for an autoclave. The tool enables complete control of the curing process through real-time monitoring and feedback.\nCENSIS supported the collaboration with funding and provided project management expertise. 
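The multi-zone heated tool described above tailors heat input per zone through real-time monitoring and feedback. The actual Strathclyde control algorithm is not published; the bang-bang thermostat below is only a stand-in to illustrate per-zone control, with all setpoints and readings invented:

```python
def zone_duty(setpoints_c, readings_c, band_c=2.0):
    """Return an on/off heater command per zone: heat when a zone reads more
    than `band_c` below its setpoint, hold otherwise. Bang-bang control is a
    deliberate simplification standing in for the unpublished algorithm."""
    return [r < s - band_c for s, r in zip(setpoints_c, readings_c)]

# Illustrative 4-zone tool curing at 120 °C, thicker sections lagging behind.
commands = zone_duty([120, 120, 120, 120], [118.5, 112.0, 121.0, 116.0])
print(commands)  # → [False, True, False, True]
```

Running such a loop per zone is what lets the cure cycle match individual part geometries instead of holding one autoclave-wide temperature.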
The University of Strathclyde provided technical support and developed the control algorithm and software for the intelligent tool.\nThe collaboration will continue through 2018, and Spirit has already begun applying the technology in research and manufacturing projects.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-1", "d_text": "To achieve these exceptional strength properties, various plastic composite formulations can be used, which include:\n- Short Glass Fiber-Reinforced Plastic\n  - Improved strength, stiffness and heat deflection properties\n  - Preferred material for cylinder heads and cooling components used in vehicle engines\n- Long Glass Fiber-Reinforced Plastic\n  - Improves the high-performance and lightweight properties of vehicles\n  - More expensive as compared to other plastic materials [3]\n  - Reduces costs as a result of its lower specific gravity\n- Carbon Fiber-Reinforced Plastic\n  - Reduces the weight of large vehicle parts (e.g., side panels)\nEase of Fabrication\nBoth metal and plastic materials require a series of fabrication methods to cut, shape or form the material prior to their incorporation into any product. The type of plastic materials being used plays an important role in determining the type of plastic fabrication method that will subsequently be performed; however, plastic materials offer significant advantages as compared to metals when considering the manufacturing process.\nFor example, the particularly low melting point and high malleability of plastic allows these materials to be easily formed into a wide variety of complex shapes; thereby contributing to the ease of forming this material without requiring the use of any forming or machining procedures. Furthermore, plastic materials also typically exhibit a greater chemical resistance as compared to metals against potentially hazardous chemicals, such as those that cause oxidation or rusting when applied to metals [4]. 
Despite these advantages, plastic materials are not as heat resistant as metals, which can be concerning for industries that require their equipment and other products to be exposed to extremely high heat levels. Overall, plastic materials can be produced at a much faster rate as compared to their metal counterparts at a lower cost.\nWith this information in mind, it is clear why so many different industries have made the transition from metal to plastic. One of the most notable examples of an industry that has actively participated in this material replacement is the automotive industry. Some ways in which automobile manufacturers have replaced metal with plastic materials include:\n- Exterior vehicle parts\n- Front-end modules\n- Trunk lids\n- Deck lids\n- Body panels\n- Floor panels\n- Air-bag containers\n- Seat components\n- Air ducts\n- Chain Tensioners\n- Belt pulleys\n- Oil pans\n- Cylinder head covers\n- Gear components [5]\nThe aerospace industry has also actively looked at replacing its metal aircraft components with high-performance plastic materials.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "Aeronautic & Automotive\nOTEMAN technology offers automation and durable solutions for the aerospace and automotive industry. 
Our versatile machines allow cutting the most difficult shapes and the strongest materials such as carbon fiber, laminated materials, fiber glass, mat, glass reinforcements, woven roving, aramid, nylon, polyamide, foams, honeycomb, carpet and sandwich materials.\nOteman offers powerful solutions to optimize your workflow, saving labor costs.\n- 3D software for the design and simulation of models.\n- Fabric optimization.\n- Spreading and cutting equipment for fabric or leather.", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-8", "d_text": "Ultramid B High Speed is said to flow at least 50% farther than standard PA 6, enabling thinner wall thicknesses and lower weight.\nNovel composite trailer\nA liquid thermoplastic composite technology using Cyclics Corporation’s range of thermoplastic polyesters has been developed that is capable of producing large structural components, including the bed of a 13.5 m (44.3 ft) semi-articulated trailer. CBT® resin, the cyclic form of polybutylene terephthalate (PBT), is being used because of its very low viscosity during polymerisation and negligible volatile organic compounds (VOCs) during processing. Liquid thermoplastics provide toughness, durability, impact resistance and recyclability in low-cost polymers.\nCBT resin is produced from standard PBT resin from BASF, which is partially depolymerised at elevated temperatures. The process converts PBT into a lower molecular weight CBT resin. A reversible reaction enables CBT to repolymerise to PBT, an engineering thermoplastic with excellent stiffness and chemical resistance, Cyclics explains.\nProcessing methods include resin transfer moulding (RTM), vacuum infusion and low-pressure bag moulding. The low-pressure moulding techniques for the trailer bed were developed by EPL Composite Solutions in the UK using prepreg materials from Ahlstrom Glass Fibre, Finland. The trailer was produced with three mouldings, the largest weighing over 1000 kg (2205 lbs). 
Lifecycle testing is continuing in the UK.\nLanxess, at the recent VDI Plastics in Automotive Engineering Conference in Mannheim, had an exhibit focusing on a new polyamide (PA) composites sheet that can be formed in an injection mould in a one-shot process. The company says it has succeeded in linking the simulation of the forming process with mechanical structural analysis, precisely describing the non-linear anisotropic material behaviour of the composite sheet. Reinforced with continuous fibres, the sheet is designed for automotive metal replacement applications.\nIn its booth, Lanxess also highlighted lightweight structural applications of its Pocan® PBT and Durethan® engineering thermoplastics based on PA 6 and 66 for electric and hybrid vehicles. Pocan offers high heat resistance, strength and hardness, low susceptibility to stress cracking as well as excellent slip properties and high abrasion and chemical resistance.", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-4", "d_text": "Made of carbon fiber and epoxy, cyanate ester, or phenolic, Ultraflex has a distinctive shape like nested maple leaves, allowing it to conform to bends and curves without collapsing.\nA different sort of core material is a very low-density syntactic-foam paste of epoxy filled with hollow glass spheres, used to fill the fan blades in the compressor of Airbus A380 jet engines. Huntsman Advanced Materials won a JEC aerospace innovation award for this Araldite 1641 product, together with co-developers Rolls Royce Engines and the Univ. of Sheffield in the U.K. Fan blades are typically filled with honeycomb for sound damping, but the paste is easier to apply and reportedly gives better sound damping and less vibration.\nNew resins and more\nFor styrene-free hand lay-up, Cognis in Germany developed new methacrylate monomers, trade named Bisomer, as styrene replacements in unsaturated polyester. 
Other chemical companies also are starting to market methacrylate monomer diluents to polyester composites fabricators.\nSeveral companies introduced new epoxies with high-temperature or fire-resistance properties. Huntsman showed a new multi-functional epoxy, MY0600, made with meta-aminophenol, to provide long pot life as well as high modulus in aerospace parts.\nMader in France, in a joint venture with United Paint and Chemical Corp., came up with what’s said to be the first paint for composites that meets an M1F1 classification for extremely low smoke and low flame spread. When it burns, the only emission is water vapor. Up to now, the highest flame classification for composite paints was M1F2 or F3. Mader uses a patented process to graft fire-resistant fillers onto polymer. The product is aimed at automotive.\nUbe Industries in Japan recently commercialized (under license from NASA) a polyimide thermoset called PETI-330 with an unusual combination of low viscosity and high-temperature performance. It cures at around 625 F and withstands up to 572 F for RTM molding or carbon-fiber prepregs for aerospace or engine applications.\nHenkel’s Aerospace Group in Bay Point, Calif., has developed a new toughened thermoset resin based on benzoxazine with low viscosity and long pot life for fabricating aerospace parts.", "score": 26.47942504398735, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "Engineers could use carbon fiber reinforced 3D printing to quickly build low-cost, lightweight composite parts using an automated, digitally-run machine.\nThe Additive Advantage\nAdditive Manufacturing Technology and Innovation: 3D Printing Aircraft Parts\nBy Kelly McSweeney\nAdditive manufacturing, commonly called 3D printing, is no longer an emerging manufacturing technology. It has recently become more accessible for consumers, and it’s not uncommon to find a 3D printer in schools or homes.
However, this type of on-demand manufacturing has been evolving in the aerospace industry for decades — and Northrop Grumman has been leading the charge.\nLeveraging Different Materials for 3D Printing\nThe 3D printers most people know involve plastic. The printer heats plastic until it is soft enough to mold into the desired shape. It then hardens as it cools. Advanced manufacturing, however, involves materials that have been designed to meet specific objectives. For example, in some situations, you may need a durable material, while in others, you may prioritize flexibility. In aerospace, engineers evaluate materials based on many factors, such as the ability to handle extreme temperatures and electrostatic discharge.\n“We are now using five different additive manufacturing materials in our products – more when considering tooling,” says Barnes. He adds that they are investigating additional materials that can’t yet be announced.\nEngineers have a variety of options depending on their program’s requirements. “The choice of material depends on the requirements and the cost benefit,” Barnes says. He explains that polymers are a cost-effective option for many applications, but in scenarios such as supersonic aircraft that get hotter than 300 degrees Fahrenheit, a metal such as titanium is often the right choice.\nThe Advanced Manufacturing Methods for 3D Printing\nSome industrial applications use the type of additive manufacturing you see available to consumers, in which a nozzle “prints” the object. This is called fused filament fabrication.\n“We are using some of that, except the materials, the controls and the equipment are much more sophisticated and expensive than what you would use at home,” Barnes explains.\nOther advanced methods are available, including a process called “powder bed fusion” in which powder is heated and sintered, often by a laser.
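The temperature-driven material choice described above can be sketched as a simple selection rule. The 300 °F threshold comes from the article; the function name and the two material classes are illustrative assumptions, not a Northrop Grumman specification.

```python
# Illustrative material-selection rule based on the temperature guidance
# in the article: polymers are cost-effective for most applications, but
# sustained temperatures above ~300 degrees F call for a metal such as
# titanium. Threshold handling is an assumption for illustration.
def select_material(max_temp_f: float) -> str:
    """Pick an additive-manufacturing material class for a part."""
    if max_temp_f > 300.0:
        return "titanium"   # e.g. supersonic aircraft structures
    return "polymer"        # cost-effective below the threshold

print(select_material(250.0))   # polymer
print(select_material(450.0))   # titanium
```

In practice the decision weighs many more factors (electrostatic discharge, cost-benefit), but the shape of the rule is the same.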
Northrop Grumman does a specialized type of powder bed fusion in which the power source is an electron beam that produces parts even faster than a laser.", "score": 25.65932019553901, "rank": 42}, {"document_id": "doc-::chunk-2", "d_text": "An additional disadvantage of a wet resin process is that personnel may come in direct contact with the resin, which is undesirable. Additionally, it is difficult to create uniform resin content free of voids and bubbles. Products fabricated by wet resin processes are usually of higher resin content than similar prepreg-fabricated products in order to ensure freedom from voids within the laminate, and thus such articles are heavier than articles made from prepreg materials.\nAn additional form of wet resin process utilizes vacuum to draw the resin through the fabric. Resin and catalyst systems are mixed in a container, and then introduced from the container to a dry cloth fiber reinforced layer placed in a tool. A vacuum bag is placed over the dry cloth layup with an inlet tube from the resin container to an edge of the layup under the vacuum bag. The vacuum bag outlet to the vacuum source is at the center of the assembly. When a vacuum is pulled, the bag pulls against the layup, and when the resin is released, the resin passes through the tube from the resin container and impregnates the fiber reinforcement or cloth from the edge thereof. Thereafter, resin flow proceeds toward the vacuum outlet at the center of the fiber reinforcement. When the resin reaches the vacuum outlet, the article is impregnated, and the resin inlet is sealed to stop any additional resin flow. The cure cycle is completed with continued vacuum pressure and heat.\nVacuum techniques are deficient due to pressure limitations of the vacuum as well as limitations in the size of the article to be fabricated.
Vacuum techniques do not satisfactorily impregnate close weave fiber reinforcement, such as carbon fiber panels, entirely along the length and width thereof, at useful large sizes. Vacuum techniques further have difficulty achieving low air voids.\nA need has thus arisen for an improved method for manufacturing fiber reinforced composite articles in order to produce aerospace quality composite parts having high fiber volume and low air void content.\nA need has arisen for a process to create composite parts having ply areas with different reinforcement. A process is needed to enable the fabrication of a part in which areas needing reinforcement can be fabricated with extra plies of fabric, to specifically address load paths in the articles, while complementing the capability of composite construction to reinforce areas of stress, or use less material where the strength of thicker layers is not required.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-5", "d_text": "The organic matrices commonly used are broadly divided into the categories of thermoset and thermoplastic; organic matrices commonly used on airframe structures are given below:\nThermoset matrices:\n• Epoxy\n• Polyester\n• Phenolics\n• Bismaleimide (BMI)\n• Polyimides\nThermoplastic matrices:\n• Polyethylene\n• Polystyrene\n• Polypropylene\n• Polyetheretherketone (PEEK)\n• Polyetherimide (PEI)\n• Polyethersulfone (PES)\n• Polyphenylene Sulfide\n• Polyamide-imide (PAI)\nThe relative characteristics of thermosets and thermoplastics include:\nTHERMOSET MATRICES\n• Undergo chemical change when cured; processing is irreversible\n• Low viscosity/high flow; good fiber wetting\n• Long (2 hours) cure\n• Tacky prepreg\n• Relatively low processing temperature\n• Formable into complex shapes\n(Disadvantages)\n• Long processing time\nTHERMOPLASTIC MATRICES\n• Non-reacting, no cure required\n• Post-formable, can be reprocessed; rejected parts reformable; reusable scrap\n• High viscosity/low flow\n• Short processing times possible; rapid (low cost) processing\n• Boardy prepreg\n• Superior toughness to thermosets; high delamination resistance\n• Infinite shelf life without refrigeration\n(Disadvantages)\n• Less chemical solvent resistance than thermosets\n• Requires very high processing temperatures\n• Outgassing contamination\n• Limited processing experience available\n• Less of a database compared to thermoset\nCompared to thermoplastics, thermoset matrices offer lower melt viscosities, lower processing temperatures and pressures, are more easily prepregged and are lower cost. On the other hand, thermoplastic matrices offer indefinite shelf life, faster processing cycles, simple fabrication, and generally do not require controlled-environment storage or post curing.\nThe most prominent matrices are epoxy, polyimides, polyester and phenolics.\nThermoset matrix systems have been dominating the composite industry because of their reactive nature. These matrices allow ready impregnation of fibers, their malleability permits manufacture of complex forms, and they provide a means of achieving high-strength, high-stiffness crosslinked networks in a cured part.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-1", "d_text": "A drawing of bus parts and a photo of the bus with parts made of glass fiber-reinforced material\nWe produce RTM molds (made of glass fiber-reinforced plastic) that enable us to achieve minimum production costs\nAn RTM mold for the production of bus parts\nThe final polymer composite parts (made by RTM) have the necessary inserts to make the assembly process easier.\nAssembly of a bus with polymer composite parts", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "Solution for Automated Drilling and Lockbolt Installation in Carbon Fiber Structures\nISSN: 1946-3855, e-ISSN: 1946-3901\nPublished November 10, 2009 by
SAE International in United States\nCitation: Schwarze, K. and Mehlenhoff, T., \"Solution for Automated Drilling and Lockbolt Installation in Carbon Fiber Structures,\" SAE Int. J. Aerosp. 2(1):188-192, 2010, https://doi.org/10.4271/2009-01-3214.\nManual drilling and Lockbolt installation in carbon fiber structures is a labor intensive process. To reduce man hour requirements while concurrently improving throughput and process quality levels BROETJE-Automation developed a gantry positioning system with high performance multi-function end effectors for this application.\nThis paper presents a unique solution featuring fully automated drilling and Lockbolt installation (inclusive of automated collar installation) for the vertical tail plane (vertical stabilizer) of large commercial aircraft. A flexible and reconfigurable assembly jig facilitates high access of the end effectors and increases the equipment efficiency. The described system fulfils the demand for affordable yet flexible precision manufacturing with the capacity to handle different aircraft model panels within the work envelope.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Reaction Injection Molding, or RIM, can be a great alternative to achieve the mainstream look of molded parts without the high tooling costs or volumes needed for typical thermoplastic parts. Both processes allow incorporation of many features into a mold, but only RIM gives the designer flexibility to produce parts with significant wall thickness variations—typically from .125” to 1.125” in the same part. RIM can also produce high strength large parts at a lower price because mold pressures and costs are significantly lower compared to thermoplastics.\nWhile both processes provide a solution for encapsulating metal, the low temperature, low pressure RIM process is also safe for electronics and other material encapsulation. 
Injection molded parts have a higher quality finish than RIM urethane parts, although RIM parts take paint and silk screening well for improved cosmetics and branding.\nRIM is valuable for producing low volumes at a low cost, but for volumes over 500 per month, thermoplastic injection molding often becomes the more cost-effective processing option. Because RIM molds can be machined from aluminum instead of steel, the up-front tooling costs are typically less than one half that of a comparable thermoplastic mold. This is particularly beneficial when part volume is low. Since RIM tools can be made of softer materials, changes to tooling are also much more cost-effective than changes to thermoplastic steel tools.\nRead more about it in our white paper, “5 Reasons to Use RIM for Complex Parts”. Download the paper at www.exothermic.com", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-6", "d_text": "As a result, areas of air remaining within material layers 14 in the cured articles produced in accordance with the present invention are kept to a minimum. Air is also removed from mold 10 through the process of convection, the property of a gas or liquid to rise when heated. The resin sets, creating an exothermic reaction, and the generated heat from the setting resin drives air bubbles upwardly, displacing air voids.\nDuring the process of resin moving upwardly within mold 10, the source container of resin is also moved upwardly external of mold 10 above the wick line, the boundary between resin impregnated fiber and non-impregnated fiber. When the resin has reached top 10b of mold 10 and saturated the entire layers of material 14, excess resin is removed via conduits 40 and 44.\nThe wicking action utilized in order to move resin from bottom 10a of mold 10 to the top 10b of mold 10 thereby saturating layers of material 14 is accomplished without the use of vacuum or extreme pressures as utilized with previously developed methods.
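The RIM-versus-injection-moulding economics quoted earlier (an aluminium RIM tool at roughly half the cost of a steel thermoplastic tool, with a crossover somewhere around 500 parts per month) reduce to a simple break-even calculation. All dollar figures below are illustrative assumptions, not vendor data.

```python
# Break-even sketch: a cheaper RIM tool traded against a higher per-part
# cost. Solves rim_tool + rim_unit*n == im_tool + im_unit*n for n, the
# part count above which injection moulding becomes cheaper overall.
def break_even_parts(rim_tool: float, rim_unit: float,
                     im_tool: float, im_unit: float) -> float:
    """Part count at which total costs of the two processes are equal."""
    return (im_tool - rim_tool) / (rim_unit - im_unit)

n = break_even_parts(rim_tool=50_000, rim_unit=40.0,
                     im_tool=100_000, im_unit=15.0)
print(round(n))   # 2000 parts, i.e. a few months at ~500 parts/month
```

Halving the tooling gap or narrowing the per-part difference shifts the crossover accordingly, which is why the cost-effective choice depends so strongly on expected volume.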
Any configuration of a part can be fabricated utilizing the present method, as determined by the shape of tool 12 and the number of layers of material 14 utilized. Areas of the part which require reinforcement can be provided within tool 12.\nOnce the resin has risen to the top 10b of mold 10, the supply of resin to mold 10 is terminated. Pressure may be increased within bag 22 in order to squeeze excess resin from layers of material 14. After a predetermined time interval, the resin is allowed to cure. Once cured, the part may be removed from tool 12 and mold 10.\nIt therefore can be seen that the present invention provides for a method of producing a fiber reinforced article utilizing wicking and capillary action of the resin moving through the fiber. High pressures such as those associated with autoclave techniques or vacuum techniques are not utilized with the present method. High fiber volumes with low air voids can be achieved utilizing the present method. The present method has achieved aerospace quality composite parts having fiber volumes of over 50% with less than 2% air voids.", "score": 24.447233273966287, "rank": 48}, {"document_id": "doc-::chunk-1", "d_text": "The operation requires separate manufacturing facilities, and the \"B\" stage material must be stored at low temperature and in sealed containers to avoid contact with moisture. The resins must be conditioned to a specific state of polymerization, and the process must be stopped to retain the \"tack\" condition over an extended period of time.\nIn the wet resin impregnation process, woven cloth or fiber is impregnated with a liquid resin that is catalyzed to process or cure in a short continuous period of time. In this process, the resin is applied by squeegee, ply by ply, to a layup at the site of component fabrication.
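The fiber-volume figures quoted above (over 50% fiber with under 2% air voids) come from standard volume-fraction bookkeeping. A minimal sketch, assuming typical densities for carbon fiber and epoxy:

```python
# Fiber volume fraction from constituent weights and densities, ignoring
# voids. Densities (g/cm^3) are typical handbook values for carbon fiber
# and epoxy, assumed here for illustration.
def fiber_volume_fraction(w_fiber: float, w_resin: float,
                          rho_fiber: float = 1.78,
                          rho_resin: float = 1.20) -> float:
    """Fraction of composite volume occupied by fiber."""
    v_f = w_fiber / rho_fiber   # fiber volume on a per-mass basis
    v_m = w_resin / rho_resin   # matrix volume
    return v_f / (v_f + v_m)

vf = fiber_volume_fraction(w_fiber=60.0, w_resin=40.0)
print(f"{vf:.1%}")   # about 50% fiber by volume
```

A measured void content would reduce the fiber fraction slightly; test methods such as ASTM D3171 account for it explicitly.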
The impregnated material may be handled at room temperature or elevated temperature for a certain period during which the resin gels, followed by final curing either at room temperature or elevated temperature in the same tool or mold.\nA wet resin impregnation process referred to as resin transfer molding (RTM) is a process to saturate the fabric with resin using two-sided tooling that is usually metal to withstand extreme pressures needed to force the resin through the fabric. The tool may be heated in order to lower the viscosity of the resin, and large presses are utilized to hold the mold together. Heat is supplied with hot oil or heating elements placed in the mold. Because of the great hydraulic pressure that is generated as the resin is pushed into the fabric to saturate the fabric, the mold will try to expand outwardly and open the internal clearances when the resin is injected into the mold. If the mold is fabricated correctly, the resin will form a wave front and move across the fabric uniformly. However, if the machining clearances vary even a few hundred-thousandths of an inch, the fabric will be squeezed more in one area than another and the resin will move in the path of least resistance. The areas that are compressed too much will eventually form air pockets as the resin surrounds these areas and air voids form.\nCurrent RTM technology suffers from the unpredictability of the formation of the resin wave. Typically, aerospace grade parts using the RTM process are made with the same number of plies of fabric throughout the part. Placement of the fiber plies in the mold and machining of the mold must match precisely. If the number of fabric plies varies, the placement of ply changes must precisely match thickness changes in the mold itself.
This task is difficult in a production environment.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-2", "d_text": "Integrated fabrics, braids, preforms and prepregs are used in rapid fabrication of door inner, floor, seat back rest, roof, trunk and under-the-hood auto components, wind turbine blades and composite tanks (See Figure 3). Advanced manufacturing techniques such as injection overmolding, stampable preforms, locally stitched preforms and high-pressure resin transfer molding are some examples that reduce composites manufacturing costs and energy consumption and improve component performance and recyclability. Figure 4 illustrates a locally reinforced preform to provide directional properties. IACMI has a partnership with the Long Island, N.Y.-based Composites Prototyping Center (CPC) for prototyping and fabrication.\nComposite recycling is of growing interest to the composites community. The next-generation technologies feature novel and increasingly complex combinations and formulations of fiber-reinforced composites, but these are difficult to recycle using current practices. Since recycled chopped carbon fiber costs 70-percent less to produce and uses up to 98-percent less energy to manufacture compared to virgin carbon fiber, recycling technologies are creating new markets from the estimated 29 million pounds of composite scrap sent to landfills annually. Advances in recycling technologies including pyrolysis, solvolysis, mechanical shredding and cement kiln incineration are enabling recycling, reuse, and remanufacture of products.
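Returning to the RTM mould-filling behaviour described earlier: resin advance through a fibre bed is commonly modelled with Darcy's law, which for one-dimensional filling at constant injection pressure gives a fill time of t = φμL²/(2KΔP). Every number below is an illustrative assumption, not data from the text.

```python
# One-dimensional Darcy-law estimate of RTM fill time:
#   t = porosity * viscosity * length^2 / (2 * permeability * delta_p)
# Porosity, viscosity, preform permeability and injection pressure are
# all assumed values chosen only to show the order of magnitude.
def rtm_fill_time(length_m: float, porosity: float, viscosity_pa_s: float,
                  permeability_m2: float, delta_p_pa: float) -> float:
    """Seconds for the resin front to travel length_m through the preform."""
    return (porosity * viscosity_pa_s * length_m ** 2) / (
        2.0 * permeability_m2 * delta_p_pa)

t = rtm_fill_time(length_m=0.5, porosity=0.5, viscosity_pa_s=0.3,
                  permeability_m2=1e-10, delta_p_pa=5e5)
print(f"{t:.0f} s")   # 375 s for this combination
```

Because fill time scales with 1/K, the local permeability changes caused by uneven fabric compression are exactly why the resin "moves in the path of least resistance" and leaves voids behind.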
IACMI has strategic partnerships in the recycling technologies with the American Composites Manufacturers Association (ACMA) and Composites Recycling Technology Center (CRTC), Port Angeles, Washington.\nAdditive technologies in composites manufacturing offer a high-rate, low-cost alternative to traditional tool-making approaches, and show promise as an effective processing method for printing composite structures from reclaimed structural fibers. Additive approaches have the potential to significantly reduce composite tool-making lead times and increase the recovery and reuse of structural carbon fibers.\nIntegrating advanced thermoplastic resins into current production processes: Thermoplastics have shorter cycle times and are more suitable for recycling. Increasing the use of thermoplastics requires a variety of activities, including developing novel in situ polymerization methods to improve thermoplastic fatigue performance, and establishing design-for-recyclability methods.\nDesign, Prototyping, and Validation (DPV) are integral steps to turning conceptual designs into high-performance components and verifying that these components meet their intended product requirements.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-1", "d_text": "It should be noted that the originators of the ECO or RTM Lite process live in a culture in which a higher level of care is taken by the operator than is common here in North America. This is not a slight on the North American molding operator; we here are focused on VOLUME in our production, while in Europe there is commonly lower volume and thus the culture is not so hurried as is the case here. In any event, lightweight tooling will perform well if very careful attention is paid to all aspects of the molding process.
This means that details such as gel-coat application, fiber loading, core placement, mold halves registration to each other during closure, amount of flange vacuum, and cavity vacuum levels all must be carefully controlled; while this is true of any of the vacuum-assisted closed molding processes, it is especially true of the RTM Lite process when the upper mold half is built to a standard which allows for the resin to be seen as it fills the mold.\nFor the last 4 years JHM Technologies has been offering RTM Lite and ZIP RTM tooling which is built with added strength and design features that perform at a higher level than the conventional “see through” RTM Lite tooling. Our focus has been on tool life and production performance, while we may not have given enough consideration to the fact that many are just entering the closed molding environment and can benefit from the learning one receives from actually watching the resin flow during the injection. Further, the majority of the molders who begin with their conversion into closed molding do so by choosing one of their “lower volume” parts, so the conventional RTM Lite tooling standard will perform well, especially if added care is provided in the molding process.\nIt should be noted however that building a more rigid upper mold, built with conventional tooling materials and added stiffening structure, will OUTPERFORM the lighter “see through” mold.\nRecently, I received a call from one of our long-term customers who called mainly to say hi and to ask what is new in the industry. During our conversation we discussed many tooling standards in use today; one of his comments I thought was rather profound and further supports this article.
He is a longtime closed molder who has multiple plants throughout the USA and has had many different forms of closed molding running in production for many years.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "The Brazilian company Edra Equipamentos has developed a sustainable technology for the molding of composite parts. Named RTM-G or Resin Transfer Molding – Glass, the process combines the injection of resin into the mould with the use of post-consumer waste from tempered glass, which acts as reinforcement material in place of traditional fibreglass or glass mat.\nAs well as allowing the reuse of the glass that would be discarded, the RTM-G only uses colourless resins derived from renewable and recycled sources and the peroxide used for curing or hardening the polymers is free of phthalates, which are substances considered harmful to the environment.\nInitially, RTM-G will be used by Edra Equipamentos to produce furniture; in the initial testing phase, tempered glass waste has given a very interesting appearance to the tables and benches produced.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "- Materials & Industrial Technology\nManufacturing Processes - Reaction Injection Moulding (RIM)\nRIM Fundamentals of Reaction Injection Molding\nReaction Injection Moulding Process (RIM)\nThis is a permanent mould process that fits in the general category of casting as a manufacturing process and is sometimes abbreviated to the RIM Process.\nThe general mode of operation of this process is to combine two feeds of preheated, low molecular mass reactants in a mixing head and inject them at high speed into a split die. The mixing head opens to allow the mixed reactants into the die and closes again when it is full.
The end result is a high molecular mass casting, which can be removed from the mould once set using an ejector arrangement.\nAs with all casting processes, 3D shapes can be produced and if a low modulus material is used, slight re-entrant angles can be included in the design. This process is used mainly for the production of polyurethane, polyamide and composite components, with a wide range of chemically reactive systems being possible.\n- Can produce strong flexible lightweight parts\n- Relatively quick cycle times (limited by the reaction time of the polymer) as compared to vacuum cast materials\n- Has a lower viscosity than thermoplastic polymers\n- Lower pressure means lower clamping forces\n- No waste material; 100% utilization (providing there is no scrap which cannot be recycled)\n- RIM cycle times are slower than standard injection moulding\n- Raw materials are expensive\n- Process is difficult to set up\n- Surface textures are variable\n- Can suffer flaws as a result of premature reaction\nThe process is generally likened to injection moulding with the obvious difference in respect to the materials used and it can be used for the production of large, lightweight and thin components. The automotive industry has adopted this process for the production of rigid foam automotive panels.\nThe use of composites to strengthen components through the introduction into the process of glass fibre or mica is also an attractive option for the production of automobile panels and other large sheet-like components. When this is done, the process is known as Reinforced Reaction Injection Moulding or RRIM.\nR.I.M.
Reaction Injection Moulding\nReaction Injection Moulding - Additional Resources\n- Manufacturing Processes and Methods\nThe selection of a manufacturing process is done very much on the basis of a manufacturer choosing the process that best suits his needs.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "21 - 23 September 2016\nHall 1, booth 335R\nCarbon Composites e.V.\n29 November – 01 December 2016\nHall 8B, Booth F40\nGlobal conference on MW wind blade design, composites manufacturing, performance and maintenance\n12 - 14 December 2016\nMaritim Hotel, Dusseldorf\n14 – 16 March 2017\nHall 5A, booth N6\nSpecially devised to facilitate the vacuum infusion process in intricate components, VAP® 3D ready-mades reliably remove air and gas residues from the matrix around where they are deployed. Thanks also to their semi-permeable characteristics, they support use of VAP® membrane systems over entire component surfaces in exacting fabrication and repair assignments.\nAs part of the AZIMUT and RoCk projects and working in response to calls for automated and optimized process chains, Trans-Textil and Composyst have developed a new made-to-shape approach especially devised for large aviation structures that need to be fabricated in one integral process. Starting with the geometry data of the Airbus A350 pressure bulkhead, this involved compiling the layers in the VAP® process lay-up into a VAP® 3D material kit made up of instant-use textile auxiliaries that have been tailored to the precise shape of the component mold.\nThe large-scale, multi-layer made-to-shape VAP® 3D solutions come with robot grip straps for swift, automatic and precisely-positioned layer stacking in the mold. This almost entirely eliminates the need for laborious cutting work and other preparatory tasks such as joining, positioning, repositioning and fixing the layers by hand.
The VAP® 3D material kit has already proved its worth in a trial involving a full-size Airbus A350 pressure bulkhead in the RoCk project.\nNew membrane products and enhanced fabrication technology at the JEC\nThe VAP® membrane-assisted vacuum infusion process has become a standard in serial production of high-quality fibre-reinforced polymer components. Manufacturers worldwide have come to appreciate the high process stability delivered by the process with the help of semi-permeable membranes from Trans-Textil GmbH and Composyst GmbH. The two partners will be celebrating this success and showcasing related technology and products at the JEC Europe 2015 in Paris (hall 7.3, booth D6).", "score": 24.133048041913046, "rank": 54}, {"document_id": "doc-::chunk-3", "d_text": "The new Boeing 787 Dreamliner is composed of over 50% composites, and some very large production tools are made of composite tooling prepreg.\nMore recently, companies are also able to consider developing very low cost composite tooling through the development of prepreg that can be processed without the need for autoclave pressure and without increasing resin volume. These are commonly known as out of autoclave (OOA) products. This technique is helpful when making large parts and when the integrity of the pattern/master cannot withstand the pressure of an autoclave.\nBenefits of composite tooling\nCompared with traditional metal tooling, composite tooling can provide a lower cost of production and easier handling and storage. For performance parts requiring accurate dimensions, composite tooling offers a CTE closer to the part CTE, helping the part maintain dimensional integrity during cure.\nCompared with a few years ago, composite tooling is more widely available, more user-friendly, and more efficient to process.
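The CTE-matching benefit described above can be quantified: the in-plane mismatch a tool imposes on a part across the cure cycle is delta = (alpha_tool - alpha_part) * L * dT. The CTE values, part length and cure temperature swing below are typical handbook assumptions, used only to show the scale of the effect.

```python
# Dimensional mismatch between tool and part over a cure cycle.
# CTEs are per degree C; result is in millimetres. Assumed values:
# aluminium tool ~23e-6/C, carbon/epoxy tool and part ~3e-6/C,
# 2 m part, 150 C swing between cure and room temperature.
def cure_mismatch_mm(alpha_tool: float, alpha_part: float,
                     length_m: float, delta_t_c: float) -> float:
    return (alpha_tool - alpha_part) * length_m * delta_t_c * 1000.0

L, dT, a_part = 2.0, 150.0, 3e-6
print(round(cure_mismatch_mm(23e-6, a_part, L, dT), 2))  # aluminium tool: 6.0 mm
print(round(cure_mismatch_mm(3e-6, a_part, L, dT), 2))   # composite tool: 0.0 mm
```

Several millimetres of differential movement over a 2 m part is why metal tooling struggles to hold tight dimensions on cured composite components, and why a tool CTE close to the part CTE matters.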
Prepreg is now available with excellent drapeability enabling accurate reproduction of small radii corners, a wide range of tackiness, a wide range of curing temperatures, and a wide range of dimensions.\n“A great example application was the recent development of the ALMA telescopes to be placed in Chile”, says Illsley. “The telescope is comprised of a set of segments, each of which have to be identical and precise. In order to meet these exacting accuracy requirements, they used our HX50 tooling prepreg. And, to address the outlife challenges faced when working with such a large tool, we supplied the tooling prepreg in pre-cut squares.”\nAnother reason composite tooling is gaining popularity is the availability of a proven package solution. Customers can buy a complete set of materials including tooling paste or blocks, adhesive, release agent, primer, sealer, tooling prepreg and component prepreg — all from a single source and all proven to work together seamlessly. Axson Technologies is a full-service supplier that has seen rapid growth from its complete solution packages.\nAmber tooling prepreg\nMultipreg HX42: An epoxy resin system that can be pre-impregnated into high performance fibres such as carbon and glass. It is an exceptional and very well-proven system in aerospace applications that exhibits a high end-use temperature and extended outlife.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-4", "d_text": "The component was fabricated by Cutting Dynamics of Avon, Ohio USA, in collaboration with TenCate Advanced Composites (TC1100 PPS unitapes), Ticona Engineering Polymers (Fortron PPS resin) and A&P Technology of Cincinnati, Ohio, which used a unique, high-speed moulding process to produce a complex-geometry hollow structure from tape. The tape was braided into a biaxial tubular shape around the edge of the seat back.\nTenCate displayed the award-winning component in its booth at JEC 2011 in Paris.
Also on display in the booth was a rudder for Boeing’s Phantom Eye long endurance unmanned vehicle that was fabricated with TenCate Cetex PPS RTL laminates and induction welded by KVE Composites Group, the Netherlands. The thermoplastic rudder replaced a thermoset design, achieving weight savings of 25% and cost savings of 5%, reports TenCate. Welding eliminates the cost and weight of fasteners.\nThe latest product from TenCate is Cetex TC900 Nylon 11, a thermoplastic prepreg system for industrial applications. The Nylon 11 polymer is based on renewable soy bean oil and is said to be resistant to most solvents. TC900 can be processed and formed at moderate 350-400°F (177-204°C) temperatures.\nConcept car demonstrators\nBaypreg® F polyurethane (PU) composite from Bayer MaterialScience was used in Mazda’s MX-0 concept vehicle to demonstrate that a lightweight vehicle could be mass produced using materials technologies that exist today. Baypreg was used to produce a bonded, two-piece monocoque structure, similar to those used in Formula 1 cars, for the safety cell, subframes, body panels and interior surfaces.\nMazda designers, who patterned the structure after the MX-5 sportscar, determined that the manufacturing of the structure could be automated using composite sandwich materials. The result was a structure weighing only 100 lbs (45 kg), compared with 665 lbs (302 kg) for the production MX-5.\nBaypreg F is a two-component PU system used in compression moulding of natural fibre mats alone or fibre reinforced sandwich panels to produce a variety of composite automotive parts.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-4", "d_text": "Layers of material 14 may be in the form of dry fibers, woven cloth, braided cloth, knit cloth, mat, stitched layers of material, tow, yarn, tape, and the like. 
The fiber reinforcement can be composed of various materials, including glass fiber, carbon, graphite, boron, aramid, and such materials as those marketed as Kevlar, and the like. Carbon reinforcement materials are particularly preferred due to their high strength and high modulus.\nTool 12 includes a continuous seal 20 around its perimeter for receiving an inflatable bladder or bag 22. Bag 22 is disposed within mold 10 to provide a cover for the layers of material 14 and is in direct contact with the top layer 14 of the ply of layers 14. Bag 22 engages seal 20 of tool 12 and is held in place utilizing a vacuum created within seal 20 through the use of a vacuum applied via a conduit 30. This seal is in place to create a positive control area for the resin. This seal could be a mechanical or a solid glued seal. The intent is to have control over the resin to allow the process to occur. During the process of fabricating a reinforced article in accordance with the present invention, bag 22 is periodically inflated and deflated through the use of a fluid supply, air or liquid, introduced into bag 22 through a conduit 32.\nResin is introduced into mold 10 for application directly to layers 14 utilizing conduits 36 and 38. The supply source for resin introduced into conduits 36 and 38 is located external of mold 10 and is movable with respect to mold 10 to move upwardly from bottom 10a of mold 10 to the top 10b of mold 10. During the course of manufacturing articles in accordance with the present invention, the top 10b of mold 10 must be elevated with respect to bottom 10a. The resin utilized with the present invention may be selected from various resin systems including, for example, epoxy, epoxy novolacs, and other thermosetting resins, such as, for example, vinyl esters, polyesters, polyimides, both condensation and addition types, phenolic resins, and the like.
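The infusion step described above can be roughly sized with Darcy's law for flow through a porous preform. The sketch below is a simplified 1D model, not the patented process; the permeability, porosity, viscosity, and pressure values are illustrative assumptions:

```python
import math

def flow_front_position(K, dP, phi, mu, t):
    """1D Darcy flow front position: x(t) = sqrt(2*K*dP*t / (phi*mu))."""
    return math.sqrt(2 * K * dP * t / (phi * mu))

# Illustrative values (assumptions, not from the patent):
K = 1e-10   # preform permeability, m^2
dP = 1e5    # pressure differential (~1 atm), Pa
phi = 0.5   # preform porosity
mu = 0.3    # resin viscosity, Pa*s

# Time to fill a 0.5 m flow length: invert x(t) for t.
x = 0.5
t_fill = x**2 * phi * mu / (2 * K * dP)
print(round(t_fill), "s")  # 1875 s, roughly half an hour
```

Doubling the flow length quadruples the fill time, which is one reason resin is introduced through multiple conduits along the mold rather than from a single point.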
The resin system is selected with respect to a particular fiber reinforcement for producing a finished article with the desired mechanical and environmental properties.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-1", "d_text": "The structural foam made of polymethacrylimide, which was unexpectedly created during an experiment in the late 1960s, has some outstanding properties: it is extremely stiff, yet at the same time very lightweight and highly heat-resistant. In short: the ideal core material for composite parts. In a sandwich-design composite part, it can be easily bonded to carbon fiber facings, resulting in an extremely strong component that weighs much less than similar structural parts made of metal. “Such composites can then be processed without any problems at 180 °C (356 °F) and extremely high pressure, so they can be produced very quickly and efficiently - in this respect, ROHACELL® has huge advantages over other core materials,” says Roth. That’s why the polymethacrylimide foam has been used in lightweight construction applications worldwide for almost 50 years: found today in airplanes, helicopter rotor blades and drones. In other words: a material that seems to have been made for air taxis long before they existed.\nThe right composite materials deliver the right performance\nEach air taxi model must prove two things above all: that it can fly safely - and that it can achieve acceptable speeds and distance ranges. Experts estimate that the eVTOL aircraft should reach speeds of at least 100 to 150 kilometers per hour - otherwise it would hardly be possible to save time compared to other means of transport. Currently, most developers are planning cabins that accommodate between two and five passengers. And, of course, the air taxis should be able to fly far enough at full load capacity before needing to refuel or recharge.
Less weight means less energy consumption - or more energy for faster speeds and longer distances. The right composite design and materials will be crucial here: The solution is a lightweight sandwich structure with carbon fiber reinforced plastics on the outside and ROHACELL® as a structural foam core on the inside. In addition, there is the important issue of sustainability: The lighter the air taxi, the stronger the argument that it is an energy-saving, sustainable alternative in the transport mix of the future.\nThe combination of materials and the associated production costs become all the more important when it comes to complex components, complicated shapes or a wide range of force effects. This is where ROHACELL® is a stand-out material solution, since its high temperature resistance allows manufacturers to significantly reduce both production time and cost.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "High-Performance Board and Liquid Materials for First-Class Modeling Products\nRAMPF Group, Inc. is presenting its encompassing range of model, mold, and tool engineering materials for the automotive, marine, and aviation industries at CAMX 2019 from September 23 - 26 in Anaheim, CA – Booth P4.\nThe company, based in Wixom, MI, develops customized solutions that are tailored to meet the specific requirements of customers throughout the entire production chain – from prototyping, model, mold, and tool construction to production. New in the portfolio are liquid resin systems for structural and interior aerospace composites applications.\nThe product highlights include:\nRAKU® TOOL WB-0890 – Epoxy Board for Composite Manufacturing\nThis brand new board has an extremely fine surface structure, which significantly reduces both finishing and the amount of sealer that has to be used.
The surface finish can be transferred from the master model to the prepreg mold, so that no re-sanding of the mold is required and the service life of the prepreg molds is significantly increased. RAKU® TOOL WB-0890 is easy and quick to machine and compatible with all industry-standard paints, release agents, and epoxy prepregs. Tg is 110 °C.\nRAKU® TOOL WB-0950 – High Temperature Epoxy Board for Tools & Molds\nThis epoxy board can be bonded in various shapes and sizes. A special RAKU® TOOL adhesive matched in hardness and color is available. RAKU® TOOL WB-0950 is heat resistant up to 200 °C, has a closed surface structure, and exhibits excellent machinability and good dimensional stability. The board is used, amongst others, for the manufacture of lay-up tools for prepregs and vacuum forming molds.\nRAKU® TOOL Close Contour Paste CP-6131\nClose Contour Pastes are 2-component epoxy systems that are applied to a close contour substructure by hand or using a CNC machine. Many kinds of supporting structures can be used, including RAKU® TOOL SB-0080 styling board, EPS, and cast aluminum. RAKU® TOOL CP-6131 is easy to process and apply. The production process is fast and efficient – direct tooling does not require the production of a model, and the close contour shape facilitates faster milling times. Furthermore, as with all close contour products, less material is used and less waste produced.", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "Integrated assemblies are the Holy Grail for design engineers because they lighten structures, reduce assembly costs, and increase\nIn the new approach being used by Boeing with help from NASA, dry carbon fiber structures are stitched together and then placed in a heated tool.
A vacuum is pulled and epoxy resin is infused into the structure.\n\"The composites technology used in the (787) Dreamliner is 25 to 30 years old,\" says Andy Harber, senior project manager, design engineering for Boeing. \"In the new approach, there is no lay-up and no autoclaves.\"\nThe Dreamliner represented a dramatic increase in aircraft composites' use, with about half of the structure made with carbon-reinforced plastic (CFRP). As reported by Design News in an award-winning series, the concept required development of autoclaves larger than those ever previously used.\nOtherwise, the technology was similar to composites technology long-used in making fiberglass composites. Hand lay-up is an open mold process in which successive plies of reinforcing material, usually fiberglass or resin-impregnated reinforcements, are applied to a mandrel. Curing is accelerated in an autoclave.\nDeliveries of the Dreamliner are running more than three years late because of a variety of problems, including issues with fasteners and attachment points in the composite structure.\nNo decision has been made at Boeing about potential use of the new assembly technology for commercial aircraft. But Harber says Boeing expects to use the technology in a next-generation blended wing body aircraft designed for reduced noise and pollution.\nThe process received its first field test as replacement landing gear doors in C-17s used as transport aircraft in Afghanistan. \"On Sept. 17, 2009, we delivered to NATO eight landing gear doors featuring resin-infused, stitched composites,\" says Harber.\nThe original doors were made with traditional materials and had taken a beating in the rough landing environment in the battle zone.
Tools for the C-17 doors were developed by Process Fab Inc.\nThe first licensee for the Boeing-developed technology is General Dynamics Armament and Technical Products, which was awarded a $17 million contract by Boeing for the production of composite components and spares for the C-17 Globemaster III aircraft. Production and program management is being done at General Dynamics' advanced materials facility in Marion, VA.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-1", "d_text": "The production capacity of the GRP rotor blades was successfully multiplied within a very short time, with a significant increase in quality and a material and weight saving of 25%.\nFrom 1994, the world-famous aerobatic planes from EXTRA-Flugzeugbau (FAR23) were also manufactured with ES-Laminatic® machines. These are still considered (at least in the air) to be indestructible!\nSince the restructuring of Germany’s energy sector, the Energiewende, lightweight construction has become more and more important in other areas, such as the automotive industry and rail technology. Increasingly, composite materials are being used in large aircraft construction in order to save fuel and at the same time increase transport capacity.\nWith increasing component size and quantity, the demands on quality, short cycles, and resilience increase. Meanwhile, several resin manufacturers have responded to these demands of the industry by developing new, faster resin systems that cure in less than 15 minutes. Halogen-free, self-extinguishing resin systems for the commercial aerospace industry according to FAR25/26 as well as for the railway industry are now available.\nHowever, those systems need extremely precise mixing (<0.6 GT). Therefore, ES-Schmid GmbH has proactively developed special, high-precision and extremely reliable matrix flow meters with <± 0.1 GT accuracy.
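To put the metering accuracy in context: GT here denotes parts by weight (German Gewichtsteile, per 100 parts resin). Below is a minimal sketch of checking a measured mix ratio against such a tolerance; the 100:30 ratio and the gram values are hypothetical, not from the article:

```python
def mix_ratio_deviation(resin_g, hardener_g, nominal_parts=30.0):
    """Deviation of the actual hardener content from the nominal mix ratio,
    in parts by weight per 100 parts resin (GT). Ratio values are illustrative."""
    actual_parts = hardener_g / resin_g * 100.0
    return actual_parts - nominal_parts

# Hypothetical batch: 1000.0 g resin metered with 301.5 g hardener at a 100:30 ratio.
dev = mix_ratio_deviation(1000.0, 301.5)
print(f"{dev:+.2f} GT")  # +0.15 GT: outside a +/-0.1 GT spec, inside +/-0.2 GT
```

At a 100:30 ratio, a ±0.1 GT tolerance allows only about ±1 g of hardener error per kilogram of resin, which illustrates why such dosing needs dedicated high-precision flow meters.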
When dosing 2K systems, the maximum deviation at <± 0.2 GT is far below the permissible deviation and is already a big step ahead of the state of the art.\nCompared to the vacuum infusion process, the ES-Laminatic® technology has significant advantages in the production of high-quality sandwich components with honeycomb cores, particularly with FAR25/26 resin systems. Due to the vertical precision impregnation, the honeycomb cores will not overflow and the negative \"filter effect\" in resin systems with fire-retardant additives is completely eliminated.\nRainer Schmid - CEO\nFederal Aviation Regulations (FAR)\nFAR Part 23 - details the airworthiness standards for airplanes with a maximum take-off weight of less than 12,500 lbs., such as the Cessna 172 and Cirrus SR20.\nFAR Part 25 - details the airworthiness standards for airplanes with a maximum take-off weight of 12,500 lbs. or more, such as the Boeing 737 or Airbus A320.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-14", "d_text": "Other press improvements included innovative short-stroke designs that allowed a secondary hydraulic system to drop and lock the upper platen into position just above the tool and a much shorter, high-pressure stroke to be initiated from the lower platen. In addition to faster closing times, the lower position of the locked upper platen increased the effective rigidity of the press, enabling thinner parts to be molded with improved tolerances.54\nDespite all of the improvements made in composites processing since the first Corvette, comparatively long cycle times still limited SMC applications to lower-volume vehicles heading into the 1980s. The five-minute cycle time required in MFG's preform molding in 1953 had only been reduced to three minutes for a typical SMC molding by 1983.
Between 1983 and 1988, however, a series of process improvements were developed to reduce cycle time (Figure 14).55\nVacuum-assisted molding was a key technology developed for shorter cycles. Removal of the air ahead of closing the press allowed use of thinner, higher die-coverage charge patterns without fear of air entrapment and blister formation. Thinner charges allowed for faster mold closing times, which in turn enabled faster chemistries that would otherwise have led to pre-gel. Perhaps even more significantly, vacuum-assisted molding led to improved surface quality and allowed for the elimination of in-mold coating (Figure 16). This alone accounted for a 30 percent cycle time improvement.56\nIn the mid-1980s, improved microprocessor controls gave SMC presses unprecedented control of platen parallelism and closing speed. These presses improved control of part thickness and therefore enabled thinner wall sections to be molded.57 In addition to cost and mass reduction, thinner parts also enabled shorter cycle times. By 1988, press improvements in combination with vacuum-assisted molding enabled SMC productivity to finally meet the elusive 60-second cycle time that translates to better than 250,000 parts per year from a single tool.\nAutomation and robotics entered into the SMC molding process in the mid-1980s. Automated charge cutting, robotic charge placement, robotic demolding, and automated routing/deflashing stations were gradually introduced into SMC plants.58 In terms of annual SMC production per worker, in 1985 the average was 12.5 tons; by 1990, this increased to 18 tons per worker.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "- Vinyl - Window frames.\n- Polyethylene - Grocery bags.\n- PVC - Piping.\n- PEI - Airplane armrests.\n- Nylon - Footwear.\nMany thermoplastic products use short discontinuous fibers as a reinforcement, most commonly fiberglass, but carbon fiber too.
This increases the mechanical properties and is technically considered a fiber reinforced composite; however, the strength is not nearly comparable to that of continuous fiber reinforced composites.\nIn general, FRP composites refers to the use of reinforcing fibers with a length of ¼" or greater. Recently, thermoplastic resins have been used with continuous fiber, creating structural composite products. There are a few distinct advantages and disadvantages thermoplastic composites have against thermoset composites.\nAdvantages of Thermoplastic Composites\nThere are two major advantages of thermoplastic composites. The first is that many thermoplastic resins have increased impact resistance compared to similar thermoset composites. In some instances, the difference is as high as 10 times the impact resistance.\nThe other major advantage of thermoplastic composites is the ability to reform. Raw thermoplastic composites, at room temperature, are in a solid state. When heat and pressure impregnate a reinforcing fiber, a physical change occurs - not a chemical reaction as with a thermoset.\nThis allows thermoplastic composites to be reformed and reshaped. For example, a pultruded thermoplastic composite rod could be heated and remolded to have a curvature. This is not possible with thermosetting resins. This also allows for the recycling of the thermoplastic composite at end of life. (In theory, not yet commercial.)\nBecause thermoplastic resin is naturally in a solid state, it is much more difficult to impregnate reinforcing fiber. The resin must be heated to the melting point, pressure is required to impregnate fibers, and the composite must then be cooled under this pressure. This is complex and far different from traditional thermoset composite manufacturing. Special tooling, technique, and equipment must be used, much of which is expensive.
This is the major disadvantage of thermoplastic composites.\nAdvances in thermoset and thermoplastic technology are happening constantly. There is a place and a use for both, and the future of composites does not favor one over the other.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "Just one layer\nLufthansa Technik Intercoat automates INTERFILL® coating process\nINTERFILL® – a high-performance material\nLufthansa Technik Intercoat specializes in the coating of components damaged by wear or corrosion. With the help of an advanced epoxy coating process, components that would have had to be scrapped in the past can now be refurbished and continue to be used. The material that makes this possible is called INTERFILL®. Specially developed and tested for airworthiness, this material gives aircraft components a new life cycle while also improving their operational characteristics. As a result, the repaired components are even better than the original ones. So that INTERFILL® can be used for different components in the aviation, rail and automotive industries, several versions of the material have been developed that differ in terms of their possible applications, abrasion resistance and surface qualities. Up to now, INTERFILL® was applied manually in several layers with a thickness ranging from 0.05 to 0.5 millimeters.\nAutomation of the coating process\nNow Lufthansa Technik Intercoat has accomplished something that was considered impossible for a long time: It managed to automate part of the INTERFILL® coating process. The component is fastened to a rotating device in a working booth. Then a robot arm applies INTERFILL® to the component surface through a fine nozzle. The entire new system was developed in-house at Lufthansa Technik Intercoat and built by an external service provider.\nHuge time savings and improved quality\nThanks to the partly automated process, components can be coated in a single working step. 
A special applicator ensures that the ideal dose is applied with an even layer thickness. The improved application method prevents air pockets from forming within the surface, which reduces the amount of corrective work that used to be necessary at times. The innovative system provides consistent, high-quality results and saves a considerable amount of time.\nFor example, the time required to coat the component and cure the material could be reduced to a third of the normal processing time. This increases Lufthansa Technik Intercoat’s flexibility, since standardized processes mean the company is now able to process more components.\nThe next step: complete automation\nThe system, which has been running since September 2018, is currently able to process components of up to 500 x 500 x 500 millimeters in size and 25 kilograms in weight.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "Pultrusion picks up speed in automotive applications\nThe automotive industry has been using pultrusion more and more.\nAccording to CompositesWorld.com:\n“Pultrusion is one of the most cost-effective processes for manufacturing high-volume composite parts. Most commonly associated with glass fiber-reinforced profiles used in construction and corrosion-resistance applications, tailored pultrusions for automotive applications — including bumper beams, roof beams, front-end support systems, door intrusion beams, chassis rails and transmission tunnels — were highlighted as a key area for growth by the European Pultrusion Technology Association (EPTA, Frankfurt, Germany) in its 2018 World Pultrusion Conference report.\n“Two commercial launches highlighted at CAMX 2018 (Oct. 16-18, Dallas, TX, US) seem to confirm this technology/market fit. L&L Products, Inc. launched its Continuous Composite Systems (CCS) pultrusions, which use polyurethane resin for automotive applications such as side sills and crash structures. 
Designed to replace traditional metal structures that require bulkheads for necessary stiffness, CCS pultrusions offer light weight — 75% less mass than steel and 30% less than aluminum — at an economic price. Continuous fiber profiles include three variations: CCS Set using glass fiber, CCS Hybrid using a customized mix of glass fiber and carbon fiber, and CCS Extreme using only carbon fiber. A short-fiber version co-extruded with adhesive comprises a fourth product, CCS Co-Ex. The three continuous-fiber products may also be combined with L&L’s adhesives as part of the company’s in-line processing, further reducing manufacturing costs and time-to-delivery. Beyond automotive, CCS products are also aimed at wind turbine blade spar caps and industrial and architectural applications.\n“Shape Corp. (Grand Haven, MI, US) also is developing pultrusions for automotive, but with a curve — literally. The company has the first operational installation of Thomas Technik & Innovation’s (TTI, Bremervoerde, Germany) Radius-Pultrusion system, which was exhibited at CAMX 2018. Shape Corp.", "score": 21.565470526715476, "rank": 65}, {"document_id": "doc-::chunk-1", "d_text": "Description of the Related Art\nReinforced thermoplastic and thermoset materials have wide application in, for example, the aerospace, automotive, industrial/chemical, and sporting goods industries. Thermosetting resins are impregnated into the reinforcing material before curing, while the resinous materials are low in viscosity. Thermoset composites suffer from several disadvantages. Low molding pressures are used to prepare these composites to avoid damage to the fibers. These low pressures, however, make it difficult to suppress the formation of bubbles within the composite which can result in voids or defects in the matrix coating. Thus, most processing problems with thermoset composites are concerned with removing entrained air or volatiles so that a void-free matrix is produced. 
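The quoted mass savings are consistent with a simple equal-volume density comparison using typical handbook densities (assumed here: steel 7.85, aluminum 2.70, and a glass-fiber pultrusion at about 1.90 g/cm³). Real profiles are sized for stiffness rather than volume, so this is only a rough plausibility check:

```python
# Typical handbook densities in g/cm^3 (assumed values, not from the article):
densities = {"steel": 7.85, "aluminum": 2.70, "gf_pultrusion": 1.90}

def mass_saving_pct(metal):
    """Percent mass saved by replacing a metal profile with an
    equal-volume glass-fiber pultrusion."""
    return (1 - densities["gf_pultrusion"] / densities[metal]) * 100

print(f"vs steel:    {mass_saving_pct('steel'):.0f}%")     # ~76%
print(f"vs aluminum: {mass_saving_pct('aluminum'):.0f}%")  # ~30%
```

The results land close to the article's 75% and 30% figures, suggesting the comparison is made against profiles of broadly similar cross-section.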
Thermoset composites made by the prepreg method require lengthy cure times with alternating pressures to control the flow of the resin as it thickens to prevent bubbles in the matrix. While current fabrication of large structures utilizes robotic placement of the thermoset composite material to increase production rate, the overall production rate for the component is limited by the lengthy cure in the autoclave process step and related operations to prepare the material for that process step. Some high volume processes, such as resin infusion, avoid the prepreg step but still require special equipment and materials along with constant monitoring of the process over the length of the cure time (e.g. U.S. Pat. Nos. 4,132,755, and 5,721,034). Although thermoset resins have enjoyed success in lower volume composites applications, the difficulties in processing these resins have limited their use in larger volume applications.\nThermoplastic compositions, in contrast, are more difficult to impregnate into the reinforcing material because of comparatively higher viscosities. On the other hand, thermoplastic compositions offer a number of benefits over thermosetting compositions. For example, thermoplastic prepregs can be more rapidly fabricated into articles. Another advantage is that thermoplastic articles formed from such prepregs may be recycled. In addition, damage resistant composites with excellent performance in hot humid environments may be achieved by the proper selection of the thermoplastic matrix. Thermoplastic resins are long chain polymers of high molecular weight. These polymers are highly viscous when melted and are often non-Newtonian in their flow behavior.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-2", "d_text": "Automated tape placement and complex stamp forming is one area of focus because it holds such great potential for rapid processing.
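The non-Newtonian melt behavior noted above is often approximated with a power-law (Ostwald-de Waele) model, η = K·γ̇ⁿ⁻¹, with n < 1 for shear-thinning melts. The constants below are illustrative, not measured values for any specific polymer:

```python
def apparent_viscosity(shear_rate, K=10_000.0, n=0.4):
    """Power-law melt viscosity eta = K * shear_rate**(n - 1), in Pa*s.
    K (consistency index) and n (power-law index, <1 = shear-thinning)
    are illustrative assumptions, not data for a specific resin."""
    return K * shear_rate ** (n - 1)

# Apparent viscosity falls steeply as the processing shear rate rises:
for rate in (1.0, 100.0, 10_000.0):
    print(f"{rate:>8.0f} 1/s -> {apparent_viscosity(rate):>8.1f} Pa*s")
```

This shear-thinning is one reason high shear rates during impregnation help the melt wet out fibers despite its high low-shear viscosity.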
Yet much remains to be learned in order to achieve consistent quality on an industrial scale, Akkerman observes.\n“Issues involve getting the right quality – no wrinkles, no tears, no delaminations – in the final product,” he says. “To be able to produce proper components requires a lot of experience with thermoplastics processing, which often is lacking in industry. So we are trying to develop the CAE tools and characterisation methods to simulate the process and demonstrate to industry how it should be done.”\nMaterial screening is ongoing at the TPRC.\n“We keep a technology watch on what’s new on the market, looking at material properties and how we can tailor them and possibly improve them,” he continues. “We’re focusing on the ‘high-tech, high-spec’ materials, trying to meet industry’s demand for a material with the performance of PEEK, but at lower cost.”\nDevelopment work on joining technologies is another priority. The centre is planning to start up work with hybrid constructions, incorporating metal and composites.\n“There’s not one solution for all problems,” notes Akkerman. “You will always end up with hybrid constructions. The challenge is to produce them efficiently.”\nRecyclability is high on the centre’s list of priorities.\n“If institutes like TPRC are successful and composites deployment grows rapidly, then we need to face the issue of what happens after end of life. Should we have a desert full of aircraft and car components, or do we do something with them?
I think it’s most relevant to think about either recycling or renewable materials and things like that,” he comments.\nFlooring for helicopters\nFiberforge, Glenwood Springs, Colorado, USA, a manufacturing company that has developed a patented production process for continuous fibre reinforced thermoplastic parts, is in the final stages of development of a project to produce a composite floor for the large Sikorsky CH-53K transport helicopter used by the US Marines.\nThe contract represents about 300 shipsets of panel assemblies, with each shipset incorporating 38 different thermoplastic composite panels that are welded together into an assembly to form the floor, relates Josh Taylor, Director of Business Development. The largest panel is under 1 m2.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "Over 1200 large holes ranging from 3/8\" - 3/4\" in diameter need to be drilled into the side-of-body fittings on the 787 fuselage over four different assembly lines. The material stacks are 4\" thick and consist of several layers of titanium and CFRP which must be drilled for one-up-assembly (meaning no deburr). These parameters stressed the manual drilling process in meeting hole quality and build times. The problem is complicated by the location of the fitting, which is over 4 meters above the factory floor.\nThe main focus on finding an automated solution was stabilizing the drilling process for Titanium/CFRP stacks in the existing assembly lines. The solution was designed around a natural frequency of greater than 10Hz with a machine that weighed less than 8000lbs. Stabilization of the drilling process was achieved by focusing on three aspects: clamping on the fitting with double the drill thrust loads (up to 1200lbs), minimizing overall machine deflection to less than 0.020\", and maintaining the relatively high natural frequency. 
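The production scale implied by the contract is easy to quantify (figures from the text):

```python
shipsets = 300           # contract size stated in the text
panels_per_shipset = 38  # distinct thermoplastic panels welded into each floor assembly
total_panels = shipsets * panels_per_shipset
print(total_panels)  # 11400
```

Over eleven thousand panels to be formed and welded is the kind of volume where thermoplastics' fast forming cycles pay off.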
Each machine is capable of being moved between production lines and both sides of the aircraft, allowing for increased production and maintenance flexibility. The drilling feed and speed are both servo-motor driven which allows for the drilling process to be controlled layer by layer. The on-board controls also monitor drill thrust and drilling profile for each drill bit at each layer to determine drill life. The result is improved hole quality, improved process reliability, and decreased manpower.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "Exothermic Molding is committed to the RIM process.\nReaction Injection Molding is ideal for producing highly designed parts of complex geometry, and strong and lightweight parts which can easily be painted. It also has the advantage of quick cycle times compared to typical cast and vacuum formed materials.\nThe thermoset mixture injected into the mold has a much lower viscosity than molten thermoplastic polymers, therefore large, light-weight, and thin-walled items can be successfully RIM processed. The low temperature, low pressure process allows numerous design freedoms including variable wall thicknesses ranging from .125″ to 1.125″ within the same part, and easy encapsulation of a variety materials including batteries, electrical circuitry and structural members.\nThis thinner mixture also requires less clamping forces, which leads to smaller equipment and ultimately lower capital expenditures for molds. Another advantage of RIM processed foam is that a high-density skin is formed with a lower density core creating robust, durable parts.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-1", "d_text": "GETTING THE REACH RIGHT\nAny robot with the requisite end effector and reach can be employed for RIMFIRE work. Sea Ray has set up robots made by the Manufacturing Automation Div. within Automation Technology Group of ABB Inc. (Auburn Hills, Mich.) 
and Nachi Robotic Systems Inc. (Novi, Mich.) and is currently building a system using a robot from Fanuc Robotics (Rochester Hills, Mich.). \"Reach\" refers primarily to the robot's ability to adapt to part geometry, rather than part size. The most critical elements of reach are the draft (the angle between the side and bottom of the mold) and the aspect ratio of width to depth, or height. Hull length is not a major factor because the robot is positioned in a stationary cell, and the mold moves through the cell while the robot operates. For extremely lengthy molds, multiple robots can be ganged inline to facilitate fast, thorough coverage.\nRIMFIRE robots have successfully applied up to 10 oz/ft2 (0.31 g/cm2) of preform materials onto a 5-ft/1.5m high sidewall having slightly less than 3° draft with an 8.5-ft/2.6m wide hull in a concave mold. \"Once the draft opens up more than 3° or 4°, we can preform about any size the robot can reach,\" Lammers says. \"It's a versatile process with many options. In some cases, we can index the mold sideways, or rotate it so the robot is not spraying onto a vertically oriented surface,\" he explains. For a particularly tight radius, Sea Ray may hand lay glass mat into the angle before the mold enters the preform cell, so the robot spray doesn't bridge the angle and leave the area unreinforced and vulnerable to structural damage.\nThere are exceptions: Some complex geometries with both concave and convex sections or severe aspect ratios present areas the robot simply can't reach. For this reason, Sea Ray has not yet applied the process to decks. 
\"Some of our decks might have sections about 4-inches [102-mm] wide and 2-ft [0.61m] deep, which is nearly impossible for the robot to reach into,\" Lammers explains.", "score": 20.327251046010716, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "When boatbuilder Sea Ray investigated closed molding methods in the late 1990s, existing preforming methods were not cost-effective for building the company's wide range of pleasure boats. The company considered hand layup of preforms too labor-intensive. In addition, preforming materials — conformable mat and stitched fabrics — were expensive. Even more expensive were the dedicated, net-shape preforming molds. The alternative, sprayup of chopped glass preforms, was little more appealing. While faster in production, the sprayup technique would require even greater capital investment in robots, as well as tools made with air-permeable metal screen and the associated vacuum systems, with custom-designed plenums and baffling, and other related equipment. These considerations drove a corporate decision to invest in a more cost-effective preforming system, says Scott Lammers, executive director of manufacturing technology for Sea Ray (Knoxville, Tenn., a business unit of the Brunswick Corp.'s Boat Group). The company needed a more effective method to accommodate its boat size range (from small 18-ft/5.5m sport boats to 68-ft/21m yachts). The result was RIMFIRE.\nShort for Robotic In-Mold Fiber Reinforcement, RIMFIRE (pats. pend.) is a robot-based system, which borrows the concept of robotic application from traditional sprayup preforming, but bypasses entirely the need for preforming tools. 
The system's heart is the working end of the robot arm, called the "end effector." It consists of three integrated components: a dual-sided, servo-motor-driven chopper gun capable of putting out 9 kg/20 lb or more of glass per minute; a binder mixer/spray head; and a heating system for the binder, featuring a pair of flame burners fueled by remote bottled propane gas or natural gas. Temperatures at the burners can be as high as 538°C/1,000°F and the heat zone temperature can be varied from 93°C/200°F to 204°C/400°F. The end effector chops dry gun roving, combines it with a powdered binder, then mixes and sprays the combination directly over the gel coat in the part mold. "You just spray it in the mold and you're done," Lammers says. "You don't touch it again."", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-2", "d_text": "He commented that they had started with the “lite RTM tooling standard,” one where his operators could watch the resin flow; then they moved to what they call “Not so Light RTM,” in which they too have added tooling structure and more conventional tooling materials to their tooling designs.\nKeeping in mind that the MAJORITY of the cost in building the mold is NOT the materials but the labor and time, especially in the proper application of Sheetwax, it is not a major cost impact to add some stiffness and use conventional tooling materials when building RTM Lite tooling.\nThe most profound change I have seen in the closed molding processes in the last 25 years has been the change from center injection to perimeter injection. That is, to create a CONVERGING flow in place of the DIVERGING flow path that had been the standard for the last 30-plus years. This change came about from the introduction of the RTM Lite tool design. What might appear a subtle change is the most radical change in recent closed molding history.
Keeping with that change, and using conventional tooling materials with about 30% of the tooling structure common to previous RTM tooling, we now have a more robust closed molding process, with less scrap and more consistent part thickness than we had with the conventional RTM processes of the past.\nToday, JHM Technologies offers three types of tooling structure to accommodate all levels of experience and production needs of their customers.\nThe first level is a “BASIC” level, one built with all of the latest flow channel details to enhance even and uniform resin flow 360 degrees around the mold perimeter, and which can be fitted with either replaceable “tubes” or the latest in Automatic injection ports or vents. The ability to “see through” the upper is maintained to allow the new user the ability to evaluate the mold process as well as train their molding operators on troubleshooting their molding process. The Basic tool is made of high quality Vinyl Ester gel-coat and hand laid fiberglass reinforced Vinyl Ester laminate. Additional backing structure is added using steel box tubing as needed for stiffening and operator handling.
The Basic tool is capable of producing the same level of part quality as either the “MAGNA” or “CONDITIONED MAGNA” series of tooling; the only compromise is that the BASIC series of tooling requires a bit more care and will not produce at the same rate as the MAGNA series.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-0", "d_text": "SGL Group exec shares his journey in automotive space\nFORT WORTH, Texas–Carbon fibers will not make it into the next generation of single aisle planes–unless manufacturers learn from “the painful lessons the automotive industry suffered from,” Andreas Wüllner, chairman, business unit composites–fibers and materials at SGL Group, said at the Aerodef show here recently.\n“The usage of carbon fibers in the next generation of single aisle planes, which may enter into service during the second half of the next decade, will heavily depend on two factors,” he added. “First, is there a viable business case for the use of CFRP (carbon-fiber-reinforced plastic) in comparison to other materials? And, second, will the required low takt times be achieved with capable manufacturing technologies? Looking at the state-of-the-art autoclave production value chain, I’m skeptical that this will be the path forward.”\nTakt time is the average time recorded between the start of production of one unit and the start of production of the next unit, when these production starts are set to match the rate of customer demand.\nHighly automated, out-of-the-autoclave technologies must be implemented to achieve low takt times, said Wüllner, who has worked at the SGL Group for more than 20 years.\nThat’s because build rates of the single-aisle airliners will grow going forward: “Think about 10 to 12 twin aisles, A350 or B787, aircraft per month. Doing the math with 20 average workdays results in 0.5 aircraft per day or a takt time of something like 2,880 minutes.
Compare this to a takt of approximately 5 minutes, which comes close to the BMW 7-series production!”\nBuilding the Airbus 350 or Boeing 787 requires a production rate of one wing per day by 2020, he said.\n“This is doable with autoclave technologies. But, imagine if six wings per day for the single aisle planes, A320 and B737, had to be manufactured with CFRP. How should this be achieved? Who could and would afford such investments?”\nIn automotive, carbon fiber composites evolved from carbon fiber prepreg monocoques—vehicle structures in which the chassis is integral with the body–for racing and super sports cars cured in autoclaves in the 1980s.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Whether for aerospace components, prototypes in automotive manufacturing or chemical engineering, conventional industrial manufacturing processes come up against their limits wherever small component lots contributing high value are concerned. In situations like this, 3D printing enables considerable cost savings as well as new design options. In future, 3M will be offering precisely that for fully fluorinated polymers such as polytetrafluorethylene (PTFE).\nThe industry has been striving for years to manufacture components in ever bigger lots. Optimised production processes involve very high start-up costs for tools and require complex steps to be programmed. These costs are recovered very quickly, however, owing to the scale effects which result from the manufacture of hundreds of thousands or even millions of the same component. The most common processes are thermal ones in which metals or plastics are fluidised and cast in prefabricated moulds. They are often subtractively machined afterwards. 
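Wüllner's takt-time arithmetic is easy to reproduce. The sketch below assumes round-the-clock operation on 20 workdays per month, as in his example; the function name is invented for illustration:

```python
def takt_time_minutes(units_per_month: float, workdays_per_month: int = 20) -> float:
    """Average calendar time between production starts, in minutes,
    assuming round-the-clock operation spread over the given workdays."""
    units_per_day = units_per_month / workdays_per_month
    return 24 * 60 / units_per_day

# Twin-aisle rate of 10 aircraft/month: 0.5 aircraft/day, i.e. one start every 2 days.
print(takt_time_minutes(10))  # → 2880.0
```

At six single-aisle wings per day, the same formula gives a takt of only a few hours, which is the gap Wüllner argues autoclave processing cannot close.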
Special tools such as drills or milling heads remove material until the desired geometries are achieved.\nLimits of conventional manufacturing\nThese conventional manufacturing processes require a lead time, for example for ordering tools and programming the machining operations. Furthermore, design engineers are obliged to ensure that components can be manufactured using conventional methods. More complex components with several functions therefore generally consist of various individual parts that must be assembled afterwards. This in turn creates sealing points.\nNon-melt-processable materials necessitate subtractive machining from blanks. This produces considerable quantities of unusable production waste – which is very uneconomical when expensive materials are a must. A further disadvantage is that moulded PTFE parts manufactured in the conventional way are almost always solid, thereby increasing the component weight. This is detrimental for aerospace or automotive engineering, where every additional gram counts.\nSmall lot sizes\nIn contrast to established large-scale manufacturing, the trend in Industry 4.0 is moving towards economical production of very small lots. Additive manufacturing is an important link in a digital engineering chain. In future, it will be possible to manufacture lightweight, ready-to-install, multifunctional moulded parts from CAD data in a single step, avoiding tool costs and long lead times.\nIn additive manufacturing without tools, the parts are printed in 3D. Based on digital design data, each component is built up layer by layer through the deposition of material. Unlike in the private sector, additive manufacturing processes are already widely accepted in numerous industrial applications.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "The cost-effective, tool-less production of lightweight components reduces fuel consumption, material costs and CO2 emissions. 
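The waste argument against subtractive machining of PTFE blanks can be made concrete with a simple material-utilization (buy-to-fly style) calculation. The masses below are invented placeholders, not figures from 3M:

```python
# Illustrative utilization calculation for the subtractive-waste point above:
# machining a part from a solid blank scraps everything removed, whereas an
# additive build deposits roughly the part's own mass.
def material_utilization(blank_kg: float, part_kg: float) -> float:
    """Fraction of purchased material that ends up in the finished part."""
    return part_kg / blank_kg

blank, part = 4.0, 0.5  # hypothetical PTFE blank and finished-part masses (kg)
print(f"utilization: {material_utilization(blank, part):.1%}, waste: {blank - part:.1f} kg")
```

For an expensive fluoropolymer, scrapping the bulk of each blank is exactly the economics that tool-less additive builds avoid.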
Engine and turbine parts as well as cabin interior components are typical applications for industrial 3D printing / Additive Manufacturing (AM). This is where the benefits of innovative EOS technology come to the fore: functional components with complex geometries and defined aerodynamic properties can be manufactured quickly and cost-effectively. Material and weight savings lower fuel consumption and CO2 emissions. Manufacturer-specific adaptations and small production runs are further arguments in favour of Additive Manufacturing technology.", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-3", "d_text": "Pat. No. 5,094,883. Thus, for those skilled in the art, there are multiple methods to coat and/or impregnate a fibrous substrate given the available process equipment, and proper selection of polymer product form (flake, fine powder, film, non-woven veil, pellets) and melt viscosity. Known methods for the fabrication of composite articles include manual and automated fabrication. Manual fabrication entails manual cutting and placement of material by a technician to a surface of the mandrel. This method of fabrication is time consuming and cost intensive, and could possibly result in non-uniformity in the lay-up. Known automated fabrication techniques include: flat tape laminating machines (FTLM) and contour tape laminating machines (CTLM). Typically, both the FTLM and the CTLM employ a solitary composite material dispenser that travels over the work surface onto which the composite material is to be applied. The composite material is typically laid down a single row (of composite material) at a time to create a layer of a desired width and length. Additional layers may thereafter be built up onto a prior layer to provide the lay-up with a desired thickness. FTLM's typically apply composite material to a flat transfer sheet; the transfer sheet and lay-up are subsequently removed from the FTLM and placed onto a mold or on a mandrel.
In contrast, CTLM's typically apply composite material directly to the work surface of a mandrel. FTLM and CTLM machines are also known as automated tape laydown (ATL) and automated fiber placement (AFP) machines with the dispenser being commonly referred to as a tape head.\nThe productivity of ATL/AFP machines is dependent on machine parameters, composite part lay-up features, and material characteristics. Machine parameters such as start/stop time, course transition time, and cut/adding plies determine the total time the tape head on the ATL/AFP is laying material on the mandrel. Composite lay-up features such as localized ply build-ups and part dimensions also influence the total productivity of the ATL/AFP machines. Key material factors that influence ATL/AFP machine productivity are similar for a thermoset resin matrix composite when compared with a thermoplastic matrix composite yet there are a couple of key differences. For thermoset resin matrix composites, key factors are impregnation levels, surface resin coverage, and “tack”.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "Each company who enters into the resin transfer molding processes does so with numerous expectations and personal perceptions, as well as, experience. Most who have not been intimately involved in closed molding for many years have many questions of just what is happening inside the mold as the resin flows, the RTM Lite process with the “see through” upper mold half has been very valuable in answering many questions those new to closed molding have asked.\nThe introduction of the light RTM process, as originally adopted in France as the “ECO” process which was a process developed as an environmentally friendly low cost closed molding process, which is well suited for low production volume as so common in the European community. This ECO process was built on the design features of a lite weight “see through” upper mold half. 
Once this tool design and process method was introduced here in North America the name was recognized as the RTM Lite process. The primary promoter of this process in North America has been the Chomarat company from France, as a means to advance the growth of the closed molding processes and the use of their fiberglass reinforcement ROVICORE, so much of the thanks is owed to Chomarat for their efforts in bringing the RTM Lite innovation to North America.\nWhile the advantage of seeing through the mold allows the molder to actually “see” the resin flow during the injection process, which speaks volumes in advancing the understanding of just how the resin actually fills the mold, it does so at a price.\nIn order to see the resin we must build the upper from clear gel-coat; initially this was a MAJOR stumbling block for the RTM Lite process as the original clear tooling gel-coats were very rigid and cracked very easily. Recent innovations from the gel-coat suppliers have dramatically improved this concern, yet the clear gel-coats still do not perform as well as the more conventional solid color tooling gel-coats. Further, the laminate of the see through upper must be generally no greater in thickness than 3/16″ (~.180″ or 4.5mm) in order to still maintain the clarity needed to see the resin flow. Tooling of this thickness will perform well for very low to low volume production, especially on generally flat shapes. Yet complex shapes, especially those with solid core materials, require a molding staff who pays CAREFUL attention to detail, in order to prevent the mold from being damaged during the molding process.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "Analyzing pros and cons of two composite manufacturing methods\n“Frontal polymerization doesn’t use an autoclave at all, so it doesn’t require that huge upfront investment,” said Bliss Professor Philippe Geubelle in the Department of Aerospace Engineering at the U of I.
“It’s a chemical reaction sustained by the release of heat as the front propagates. It can save a lot of energy and it generates much less carbon dioxide, so that’s an environmental benefit.”\nGeubelle said they began comparing the two methods by looking at the thermo-chemical equations in order to model the two polymerization processes. In that way, they could compare the methods for a variety of composite materials and, particularly, the time duration each method takes to manufacture the same part.\n“The key contribution from the theoretical point of view is we've rewritten the reaction-diffusion equations to extract the two most important nondimensional parameters,” Geubelle said. “Using just these two parameters allowed us to look at a wide range of chemical parameters, such as the activation energy and the heat of reaction, and at the impact of the initial temperature of the resin.”\nGeubelle said this method helped to compare the composite manufacturing processes based on bulk and frontal polymerization in terms of the time it takes to manufacture a part. The study found that there were instances when one or the other was faster.\n“Imagine you want to make something that is one meter long. Frontal polymerization will be able to complete the task before bulk polymerization starts to kick in,” Geubelle said. “On the other hand, if you want to make something that is 10 meters long, then bulk polymerization may actually take place before the front reaches the other end of the part. It's the competition between these two processes that we analyzed in this study.”\nHe went on to say there are several ways to speed up the process for frontal polymerization: start the front at both ends so it goes twice as fast, or heat it from the bottom by using a heated panel beneath it.
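The 1-metre-versus-10-metre comparison Geubelle describes can be illustrated with a first-order timing model: a front moving at speed v cures a part of length L in about L/v (halved if ignited from both ends), while bulk polymerization cures the whole part after a fixed induction time. This is a toy sketch with invented numbers, not the study's actual nondimensional analysis:

```python
# Illustrative timing comparison between frontal and bulk polymerization.
# Front speed and bulk cure time below are made-up placeholders.
def frontal_time_s(length_m: float, front_speed_m_per_s: float,
                   dual_front: bool = False) -> float:
    """Time for a propagating front to traverse the part."""
    t = length_m / front_speed_m_per_s
    return t / 2 if dual_front else t

def faster_method(length_m: float, front_speed_m_per_s: float, t_bulk_s: float) -> str:
    """Which process finishes first under this simple model."""
    return "frontal" if frontal_time_s(length_m, front_speed_m_per_s) < t_bulk_s else "bulk"

# With a hypothetical 1 mm/s front and a 2000 s bulk cure time:
print(faster_method(1.0, 0.001, 2000.0))   # 1 m part:  1000 s front  → "frontal"
print(faster_method(10.0, 0.001, 2000.0))  # 10 m part: 10000 s front → "bulk"
```

Igniting from both ends (`dual_front=True`) halves the frontal time, mirroring Geubelle's "twice as fast" remark.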
“That process is so fast, we refer to it as flash curing”, Geubelle said, “but it does use more energy than for a single front”.\nManufacturing composite parts using frontal polymerization instead of bulk polymerization has a lot of advantages.\n“With frontal polymerization, you don't need the large capital investment of the autoclave, making it a very attractive option,” Geubelle said.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-1", "d_text": "“This type of design has a number of advantages: Primarily for this project, it lends itself to FDM technology due to the smooth leading and trailing edges over each half-span.”\nThis configuration allowed all geometry to remain below the critical angles beyond which support material would be required as well having aerodynamic advantages over conventional fuselage and wing designs.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-1", "d_text": "“Having previously outsourced our thermoforming requirements, we found that we were accumulating significant labour costs and having to contend with lengthy lead times. However, since 3D printing these parts ourselves, we’ve reduced lead times in the conceptual phase by approximately 35%. The technology has enhanced our overall manufacturing process, allowing us to evaluate our designs quickly and eliminate those that are not suitable, before committing significant investment towards mass production,” concluded Cademartiri.\n“We are seeing a growing trend among our customers to leverage our additive manufacturing systems as a manufacturing tool for a wide range of applications, in addition to direct prototyping. 
With the development of some of our recent, more durable materials, our customers can now enjoy flexibility in their choice of methods to create their manufacturing tools and test designs in their final production materials, before investing in costly metal tools,” added Nadav Sella, Senior Manager Manufacturing Tools at Stratasys.", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "|Publication number||US5565162 A|\n|Publication type||Grant|\n|Application number||US 08/315,751|\n|Publication date||15 Oct 1996|\n|Filing date||30 Sep 1994|\n|Priority date||30 Sep 1994|\n|Also published as||08315751, 315751, US 5565162 A, US 5565162A, US-A-5565162, US5565162 A, US5565162A|\n|Inventors||Thomas M. Foster|\n|Original assignee||Composite Manufacturing & Research Inc.|\nThe present invention relates to the manufacture of fiber reinforced composite articles, and more particularly to capillary resin transfer molding.\nDue to the high strength to weight ratio, fiber reinforced composite structures have become attractive for aerospace applications such as, for example, parts for airframes and propulsion power plants, and for reinforcing various types of structures. Molding of such parts has been expensive, relatively time consuming and labor intensive because of the need to position elements accurately in the mold, and to carry out the process slowly to avoid porosity, air entrapment and other internal and surface defects during polymerization, cross-linking or hardening of the resin in the fiber material.
Additionally, systems have required the generation of high pressures to uniformly spread the resin in the fiber.\nFiber reinforced organic resin composite structures are fabricated using two basic forms of materials: prepreg ("B" stage, resin impregnated) fiber forms, and wet resin impregnation of fiber forms.\nIn the prepreg process, woven cloth or fabric is impregnated at one facility with a prescribed amount of resin. The resin is staged and dried to a "tacky" or "B" stage, partially cured condition. The material is then packaged between layers of separation film and stored in containers for extended periods of time before the fabric is used and fully cured for final part processing.\nThe prepreg operation has a number of disadvantages.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "The extra material left behind in the mold must be disposed of, since cured thermoset material from sprues and runners cannot be reground and reprocessed, as opposed to thermoplastic injection molding. When the mold cavities are filled, the parts must cure to a solid form. The mold opens for part removal, and parts are ejected and removed by hand or automated equipment.\nThermoset Transfer and Compression Molding Advantages\n- High temperature thermoset materials\n- more dimensionally stable\n- fewer wall thickness variations\n- less expensive tooling", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "The rubber transfer molding process is used to produce a full range of precision molded rubber, rubber-to-metal, rubber-to-fiberglass rod bonded components, etc. The benefits of transfer molding are seen in mid to high volume production, tight tolerance, over-molding and applications with colored or translucent materials.
The transfer molding process combines attributes of both injection and compression molding making it an ideal manufacturing method for many applications.\nRubber Transfer Molding Process Description:\nTransfer molding, unlike compression molding uses a closed mold system. The process begins with a piece of uncured rubber that has been preformed to a controlled weight and shape. The preform is placed into the portion of the mold called a “pot” located between the top plate of the mold and the ram. The ram is closed which distributes (transfers) the uncured rubber into the cavity(s) through a runner and gate system. The material is held under high pressure and temperature to activate the cure system in the rubber compound (rubber is vulcanized). The cycle time is established to reach an optimal level of cure. At the end of the cycle, the parts are removed or ejected from the cavities and the next cycle begins.\nAdvantages of Rubber Transfer Molding\n- Shorter production cycle times compared to compression molding\n- Supports high precision molding applications\n- Supports molding of complex geometries and over-molding\n- Advantages for colored or translucent compounds\nWe’re Here to Help\nRequest a quote today on the Silicone Extrusion & Rubber Molding parts you need, or contact us to further discuss your project.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "Using composite materials is hardly a new phenomenon. A trip to the British Museum will show you that the Ancient Egyptians constructed their tombs using bricks made from a primitive, yet effective, mix of straw and mud.\nToday the moulds used for forming composites, also known as tools, can be made from virtually any material. 
For parts cured at low temperature or for prototyping where tight control of dimensional accuracy isn't required, materials such as fibreglass, high-density foams, machineable epoxy boards or even clay or wood models are often suitable.\nFor parts cured at higher temperature, or if high dimensional accuracy is required, or if the mould is expected to be used for high-volume part production, then higher-performance tools must be used. Materials for these tools include Invar (a nickel steel alloy), steel, aluminium, nickel, and carbon fibre.\nSelection of the tooling material is typically based on the coefficient of thermal expansion (CTE), expected number of cycles, end item tolerance, desired surface condition, method of cure, glass transition temperature of the material being moulded, moulding method, the available curing equipment, and cost.\nSteel and aluminium had traditionally been the materials of choice for high-performance tooling, but they can have major drawbacks when used to make composite parts. During autoclave cure, the CTE mismatch between the tool and the part is often too extreme for compatibility. Higher-priced metal alloys, such as Invar, can offer closer CTE matches but the high cost of machining and, for larger parts, the sheer size and weight of the tools makes them difficult to machine, move and store.\nA new direction\nComposite tooling, made of similar material to the final part, can offer a high-performance result without the high costs of metal. Once an art known only to a few dedicated aerospace and Formula 1 (F1) technicians, composite tooling is now widely in use, from weekend club race car builders to aerospace industry leaders like Boeing and Airbus. 
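The CTE-mismatch concern described above can be quantified: over a cure cycle's temperature rise, tool and part grow by different amounts, and the difference scales with part length. The sketch below uses typical handbook CTE values (in 1e-6/°C) purely for illustration; the function and dictionary are not from any tooling standard:

```python
# Rough sketch of tool/part thermal-growth mismatch over a cure cycle.
# CTE values are approximate handbook figures, for illustration only.
CTE_PER_C = {"steel": 12e-6, "aluminium": 23e-6, "invar": 1.5e-6, "carbon_fibre": 2e-6}

def growth_mismatch_mm(tool: str, part_cte_per_c: float,
                       length_mm: float, delta_t_c: float) -> float:
    """Difference in thermal growth between tool and part over delta_t_c."""
    return (CTE_PER_C[tool] - part_cte_per_c) * length_mm * delta_t_c

# A 2 m carbon part (CTE ~2e-6/°C) cured with a 150 °C temperature rise:
for tool in ("aluminium", "steel", "invar"):
    print(tool, round(growth_mismatch_mm(tool, 2e-6, 2000, 150), 2))
```

Under these assumed numbers an aluminium tool outgrows the part by millimetres while Invar stays within a fraction of a millimetre, which is why close-CTE alloys and composite tooling are preferred despite their cost.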
After decades of development and refinement, composite toolmaking has become less of an art and more of an easily repeatable, high quality process with predictable results.\n“The use of industrial composites has been under development for over 50 years,” explains Jed Illsley, European Sales Manager for Amber Composites. “First, wet lay-up methods were used with resin and dry fabric — but obviously this wasn't an accurate method of controlling the amount of resin or the impregnation of resin in the fabric.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "To keep ahead of the competition, manufacturers of automotive components are looking to move from traditional materials such as steel and aluminium to lightweight composite alternatives.\nLess weight on body and transmission components means that more technology can be added without increasing overall weight – ergo exhaust emissions – helping the manufacturers comply with increasingly strict emissions legislations.\nIn a nutshell, composites reduce weight, therefore keeping emissions down, and paving the way to automotive success.\nCurrent composite technology uses Thermoset Plastics to produce parts that weigh less, but the future lies in Thermoplastic technology that can also be reused and recycled – an innovative new composite product that ticks even more green eco-boxes for car manufacturers.\nPultrex is developing this cutting-edge technology right now, striving to be at the forefront of composite manufacturing, to be able to bring improved Thermoplastic composite products to market. 
This will help companies worldwide manufacture automotive parts that weigh less than ever before, and at the same time manufacture parts with greater recyclability and with the ability to be reformed.\nSuccessful car manufacturers will be the ones who meet and exceed emissions targets – making the switch to composite materials essential for part manufacturers.\nComposites vs Steel and Aluminium\n- Composites exhibit an improved strength to weight ratio compared to steel.\n- Composites are better at handling tension than aluminium, reducing fatigue and maintenance.\n- Composites can be custom-tailored to add strength without adding weight, ideal in critical areas that bend or are prone to wear.\n- Composites are inherently corrosion resistant, able to stand up to severe weather and wide temperature changes.\n- With the exception of carbon-fibre, composites are nonconductive, superior insulators, resisting the flow of electric charge and unresponsive to an electric field.\n- Composites streamline production and reduce maintenance because a single piece can replace an entire assembly of metal parts.\n- Composites' precise weight distribution enables balanced loading, less friction, and greater momentum.\n- Composites are strong yet flexible, able to bend significantly without snapping.\n- Composites absorb and dissipate vibrations, ideal for applications such as equipment mounts.", "score": 15.758340881307905, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "KraussMaffei will present its High Pressure Resin Transfer Molding (HP-RTM) process for lightweight construction applications to automotive customers for the first time in the US. The partners KraussMaffei, Proper Group International Inc. and Kiefel Technologies Inc.
will also present machines and processes for the automotive industry during an Open House on September in Warren, Michigan.\nWith the HP-RTM process, KraussMaffei offers machines and components on a turnkey basis for the entire process chain from handling of semi-finished products through to finishing of the complete composite fiber part.\n“The optimum interaction between all system components, which comprise a press, a mold and a metering machine, is an important criterion in producing highly complex parts with a high fiber volume ratio,” said JP Mead, Vice President of KraussMaffei Corporation, who is responsible for the Reaction Process Machinery Segment in the US. The company has developed special technology as a system partner in the country for multi-process solutions in metering technology for highly reactive resins.\nThe Open House will also focus on direct processing in an injection molding compounder. An injection molding compounder is especially suitable for large-series production, for example of front end and instrument panel supports, door modules, insulating mats and other noise absorbers, explains KraussMaffei.\nThe materials are mixed and the components are produced in one machine. The matrix polymer is initially plasticized in a co-rotating twin-screw extruder and mixed with aggregates. Only then are the reinforcement fibers added to the polymer melt. The actual injection molding process then takes place via the injection piston.\nBecause of this single-stage process, the injection molding compounder ensures good melt homogeneity and a mixing effect, as well as high injection volumes and superb part quality. 
Components can therefore be combined with any fillers, reinforcement materials and substitution materials.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-2", "d_text": "An option for such geometries is to spray the preform over a convex mold and then move the preform into the main mold, a procedure similar to traditional preforming.\nBASICALLY, IT'S THE BINDER\nThe real secret to RIMFIRE success, however, is the binder, a blend of powders that can be tailored to produce desired properties. \"Basically, it is a thermosetting material that has thermoplastic-like properties,\" says Lammers. The material melts when heated in the preforming stage. But once cooled and solidified, it will not re-melt, and it retains its physical and mechanical properties during the heat/cure cycle.\nIn raw form, the different binder powders are stored separately in hoppers. Under computer control, powders are metered into a mixing tube that conveys the blend of mixed resin powders, in the required percentages for the weight of the glass specified for an individual application, to the chopper/spray head. The glass and binder pass through the heat zone together. As it is exposed to the heat, the binder activates, becoming tacky and interspersing with the glass as it exits the gun, causing the fibers to adhere to each other and to the mold surface, where it cools and solidifies.\nFor the system to work properly, all preform fibers and sizings as well as the matrix resins, must be compatible with the binder. Therefore, selection of glass fiber type and fiber sizing are important factors in preform design. \"The type of sizing on the glass can make the material stiffer or softer. This affects how the roving runs through the chopper gun,\" Lammers says. \"So the set up of the rollers and blades may vary for different types of material.\" All incoming raw materials are qualified using a standard set of tests. 
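The computer-controlled metering described above, where binder powders are dosed "in the required percentages for the weight of the glass," amounts to a simple proportional calculation. The sketch below is a hedged illustration: the 9 kg/min throughput is the figure quoted for the end effector, but the two-powder blend and its percentages are invented:

```python
# Hedged sketch of the binder-metering arithmetic: given the chopper gun's
# glass throughput and a target binder-to-glass percentage for each powder,
# compute per-powder feed rates. Powder names and percentages are invented.
def binder_feed_rates(glass_kg_per_min: float,
                      binder_pct_by_glass_weight: dict) -> dict:
    """kg/min of each binder powder for the given glass throughput."""
    return {name: glass_kg_per_min * pct / 100.0
            for name, pct in binder_pct_by_glass_weight.items()}

# 9 kg/min of glass with a hypothetical blend at 4% and 2% of glass weight:
print(binder_feed_rates(9.0, {"powder_a": 4.0, "powder_b": 2.0}))
# → {'powder_a': 0.36, 'powder_b': 0.18}
```

Because the rates are proportional to glass throughput, the same percentages hold as the gun's output is turned up or down for different laminate schedules.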
Before production starts, coupon samples of the proposed layup are tested for mechanical properties. Various types of E-glass from PPG Industries (Pittsburgh, Pa.), Saint-Gobain Vetrotex (Valley Forge, Pa.) and Owens Corning (Toledo, Ohio) have run successfully through the robot. For the matrix resin, a low-profile polyester resin system from AOC Resins (Collierville, Tenn.) is used. The preform system also has worked with vinyl ester and polyurethanes.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-1", "d_text": "For example, multi-part moving parts such as ball bearings can be manufactured in a single production step.\nIn almost every case, additive manufacturing processes are slower and more expensive than established technologies. Nevertheless, there are cases in mass production where the additive approach is superior to conventional methods. The points mentioned show where: Whenever it is about\n- individualized parts\n- very complex parts\n- parts with a very high metal removal rate and expensive raw material, or\n- where a free interior design offers advantages,\nadditive manufacturing is worth considering.\nThe Advantages Outweigh the Disadvantages\nThere are a number of interesting examples for the benefits of additive production, even in the field of conventional manufacturing. Cooling channels in injection molds can only be produced by drilling the parts in the conventional process. Therefore, the manufacturer is limited to drilling straight holes into the block behind the mold cavity; if it is to be around the corner, several holes must be drilled in such a way that they merge with each other and the superfluous parts of the hole are plugged. This is complex and results in a very unfavorable flow shape of the channels.\nIf, on the other hand, a mold is produced in 3D printing, the channels in the mold block can be designed as desired, i. e. 
they can run directly underneath the surface of the cavity, allowing the mold to be cooled extremely efficiently and quickly, thus dramatically reducing cycle times.\nOne of the best-known examples of additive metal fabrication is the fuel injector of the new GE jet engines installed in the A320 Neo or Boeing 737 MAX. The internal design of the nozzles is crucial for optimum fuel distribution — the better the atomization process, the less fuel is consumed by the engine. GE's engineers have known for a long time how the optimum nozzle should look like on the inside, but this complex shape could not be produced and so far, these nozzles have been made up of more than 20 parts that were welded together. This was laborious, expensive and inaccurate.\nToday, GE is printing these nozzles with an optimum geometry and, based on its experience, is embarking on additive metal production. GE plans to produce 40,000 of these nozzles per year by 2020 — 19 of these nozzles are installed per engine. In addition to the significant fuel savings enabled by the optimum design, GE saves 435 kg per engine.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "- Seamless hollow parts: hollow parts are seamless and do not require any bonding.\n- Ideal for manufacturing simple shapes such as bottles: Simple shapes such as bottles can be produced at a fast rate and low cost.\nDisadvantages of blow molding:\n- Part geometry is limited and restricted to basic forms: blow molding is ideal for parts with simple geometry\n- Tooling cost is very high: tooling is high and is approximately 6-10 times the cost of rotational molding tooling.\n- Low volume production quantities are costly: due to setup costs, producing lower quantities drive prices higher.\n- Surface finish is not aesthetic: parting lines can be pronounced and surface texture is not always desirable.\nWhat are the advantages and disadvantages of Rotational Molding?\nAdvantages of rotational 
molding:\n- Low cost tooling: tooling cost is low in comparison to blow molding due to the low pressures involved.\n- Consistent wall thickness: because the process uses gravitational forces to spread the plastic inside a mold, parts form with uniform wall thickness.\n- Double wall construction: double wall parts can be formed which are ideal for cases and containers.\n- High strength and durability: structural features can be designed into parts providing additional strength and support for large flat surface areas.\n- Seamless hollow parts: the parting lines on rotationally molded parts are discrete.\n- Ability to mold complex geometry: design flexibility provides the ability to mold very complex and detailed assemblies.\nDisadvantages of Rotational Molding\n- Cycle times: the only disadvantage of rotational molding vs. injection molding and blow molding is the longer cycle time; however, this can be overcome with multiple low cost tools.\nIs Rotomolding Right for You?\nWhen deciding whether rotomolding is the right process for you, it is important to consider the following.\n- Is low cost tooling an important factor?\n- Does the design have a degree of complexity to it that prevents it from being blow molded?\n- Does the project require a short production run to begin with and then ramp up afterwards?\nIf the answer to any of those is yes, then rotomolding is the right process for your project. 
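The three-question checklist above is effectively an "any of these" rule. As a rough illustration only (the flag names below are hypothetical paraphrases of the source's questions, not terms from the text), it could be sketched as:

```python
# Illustrative sketch of the rotomolding decision checklist above.
# The three flag names are hypothetical paraphrases of the source's questions.
def recommend_rotomolding(low_cost_tooling_matters: bool,
                          too_complex_to_blow_mold: bool,
                          short_run_then_ramp_up: bool) -> bool:
    # Per the checklist: if the answer to ANY question is yes,
    # rotomolding is suggested as the right process.
    return any([low_cost_tooling_matters,
                too_complex_to_blow_mold,
                short_run_then_ramp_up])

print(recommend_rotomolding(True, False, False))   # True
print(recommend_rotomolding(False, False, False))  # False
```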
If you are looking for a rotomolding company in California, contact us today for a free quote.\nBlow molding is ideal for simple tanks; however, when it comes to manufacturing parts with more detail there can be additional tooling and process costs for secondary processes.\nRoto Molding vs.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-1", "d_text": "Resins that offer the highest capabilities are the most expensive, he says, listing the top thermal performers as polyetheretherketone (PEEK) and thermoplastic polyimide (TPI), both costing over $30/lb. That’s why users should not over-specify the thermal requirements of an application, which would needlessly drive up the cost of the composite materials. (See box for Maki’s tips on specifying thermoplastic composites.)\nRTP Company has developed high-performance long fibre compounds for metal replacement applications. Long fibre compounds produce tough, lightweight injection mouldable compounds, with reinforcement fibres running the entire length of the pellets. RTP is making long fibre compounds with fibres of more than 0.5 inch (12.7 mm) in length, Maki notes.\n“We are able to get much better strengths, modulus and impact combinations, also fatigue endurance values than you can with traditional chopped, shorter fibre lengths and aspect ratios,” he says. “With the advent of long fibres, the use of thermoplastic composites has increased dramatically in automotive structures, specifically in front end modules and door modules, which in the past have used metal for structural support.”\nHe feels auto makers are gaining more and more confidence in these materials.\n“They like the energy-absorbing qualities of these composites in crash situations. The material absorbs a lot of energy and provides improved safety,” he adds.\nRTP is also spending a lot of time developing long fibre compounds with bio-based polymers, specifically PLA (polylactic acid). 
Maki discussed these materials at the Bioplastics Compounding and Processing 2011 Conference in Miami, sponsored by AMI Plastics, in March.\nR&D for aerospace\nA public/private partnership at the University of Twente in the Netherlands has been focused on thermoplastic composites since 2008, when Boeing, TenCate and Stork Fokker entered into a partnership to explore the potential of these materials for aircraft applications. At the university’s Thermoplastic Composite Research Centre (TPRC), researchers are working on process modeling and simulation of automated production processes. They are also examining the performance of new materials, as well as advanced joining methods for thermoplastics.\nRemko Akkerman, Scientific Director of the TPRC, calls the partnership a perfect example of a public/private partnership because the members decide together on the directions the research will take.", "score": 13.897358463981183, "rank": 90}, {"document_id": "doc-::chunk-1", "d_text": "However, this part was developed independent of a specific aircraft program, but instead as a retrofit that can be applied to many different types of helicopters.” Certification mandated testing of materials and finished parts. It was completed by the Civil Aviation Authority of Israel (CAAI) and recognized by the Federal Aviation Admin. (FAA, Washington, DC, US) through a reciprocal agreement.\nThus, design began. “We defined design allowables for the two different laminates we used,” says Rosenfeld. A satin weave glass fiber fabric and a satin weave carbon fiber fabric from Hexcel (Stamford, CT, US) were selected, along with Prism EP2400 epoxy resin from Solvay Composite Materials (Alpharetta, GA, US), which meets aircraft/rotorcraft flame, smoke and toxicity (FST) requirements. 
“The epoxy resin is cured at 180°C to meet strength requirements,” he notes, adding, “There are resins that can cure at lower temperatures, such as 130°C, but these have a longer cycle time.”\nAfter materials were characterized, the work of detailing the automated manufacturing process began.\nA key partner in IAI’s development of this new process was composites automation specialist Techni-Modul Engineering (TME, Coudes, France). IAI had been working with RTM equipment supplier Isojet (Corbas, France), which then recommended TME as a toolmaker for RTM. Isojet and TME are part of Composite Alliance Corp. (CAC, Dallas, TX, US), which provides composites manufacturing solutions ranging from single tooling and equipment to automated work cells and complete turnkey systems (see ”Automated preforming: Intelligent automation in pick-and-place systems”). “We completed the RTM tool development and were very satisfied,” Rosenfeld continues. “When we were ready to develop the preforming automation, we decided to keep working with TME. They helped us to define the design and requirements for the production line and suggested automation possibilities.”\nAutomated cutting tables were well integrated into the IAI production flow and cutter operations had been optimized by implementing Plataine Technologies’ (Waltham, MA, US and Tel Aviv, Israel) Total Production Optimization (TPO) software (see ”Optimizing composite aerostructures production”).", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "When producing with transfer technology you load a preformed rubber part in a slot and then the machine press a plunge into that slot. In the bottom of the slot you have holes into the cavity. so the material is pressed through the hole and the part is produced.\nSaves some times compared to compression. 
Both in curing time and in that you can mold bigger rubber parts per cavity.\nA disadvantage compared to injection moulding is that you need to produce the preformed rubber parts.\nThe mould is a little bit more expensive compared to compression moulds.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "Luciano De Oto joined Automobili Lamborghini S.p.A. in 2001 as an engineer for the Body-in-White of the Gallardo and in 2006 became manager for the interiors and exteriors engineering and development for all Lamborghini products. De Oto is now Lamborghini’s division chief for the Advanced Composites Division, which embraces all composites-related activities across products, line-ups and departments in the company.\nCM Interviews spoke with Mr. De Oto about his role to aggressively pursue the implementation of out-of-autoclave technologies to increase production rates and reduce costs for future products.\nWhat are the driving forces for an automotive company like Lamborghini?\nWe are a niche within the automotive industry. We’re not dealing with high volume production, so for us it is more important to focus on the process to achieve good quality results and to match the target weight to power ratio. Weight is the key to our success, which is why we are focusing on composites structures.\nProcessability is often cited as a major adoption challenge for composites. Do you agree?\nProcessability is a current challenge for the composites industry. Compared to other materials, it simply takes too long and therefore is too expensive. For example, when manufacturing composite parts through RTM, there is a huge bottleneck in the preform process, which is why composites are not suitable for high-volume automobiles. SMC has more potential in terms of production rate but has not yet reached the performances of traditional pre-preg or RTM processes. The technology used is strictly dependent on the daily production rate. 
Overall, we are consistently looking at our processability because we know if we can reduce process time that lowers overall costs.\nHow is Lamborghini addressing this issue?\nAt Lamborghini we have developed a patented process – RTM Lambo – that manufactures composite parts under low pressure molding that is repeatable in low volumes and meets the very high quality standards needed for our vehicles. This isn’t the ideal process for manufacturers that have to produce large numbers of vehicles per year. For this reason, we keep an eye on process improvements both in aerospace and high-performance manufacturing because there may be a process adaptable to higher volumes that we can be adapted for those models.\nWhat are the challenges and/or drawbacks to including carbon fiber for automotive parts?\nSpeaking generally within the automotive industry, the main challenge today is the cost.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-1", "d_text": "These topics are solved by Airborne as follows:\nThe injection of thick walled large structures in a one-shot-process that is highly industrialised is a challenging concept. Airborne’s dedicated research & development department (Airborne Technology Centre) is therefore essential in creating the opportunities for our game-changing manufacturing technologies. Through Airborne Technology Centre, we have built up significant experience on a wide range of process technologies, among which the mentioned Resin Injection Technology.\nA major and ultimately decisive topic in today’s development of the growing Tidal Energy market is bringing the Cost of Energy (CoE) down.\nManaging the CoE is a challenge tidal turbine manufacturers are facing today. The CoE of tidal energy should be at least equal to the CoE in today’s offshore wind energy industry.\nTidal turbines are positioned 20 meter below sea level in open sea. Easy and quick repair of it is therefore not straightforward. 
As a result, maintenance costs are an important factor in the total cost of ownership of tidal turbines. The reliability of the turbine system and reliable blades are therefore essential conditions for success.\nAnother factor to bring the CoE down is directly related to the efficiency of the tidal turbines.\nThe efficiency of the turbine depends partly on the performance of the blades of the turbine.\nBecause of this, Airborne Marine focuses its efforts in the tidal turbine blades design and manufacturing on enhancing their reliability and performance.\nAirborne addresses the reliability of the tidal turbine blades from the very beginning of the concept design process.\nMost current tidal turbine blades design is based on two shells bonded together, as is routinely done in wind turbine blade design. Seawater conditions however are dramatically different from the air in which wind turbines perform.\nTo date, it remains unknown how the dynamics that are present in sea water will affect the performance of adhesive bonds over a longer period of time. When applied in structural composite products such as tidal turbine blades, it remains unpredictable what the effect of enduring heavy loads and fatigue on the blades will be on the long term.\nFor this reason, Airborne Marine’s standpoint is that tidal blades must be designed without adhesives in order to maximize the reliability of the blades. Based on hydrodynamic and maritime engineering experience, Airborne Marine has decided to reduce the risks caused by adhesive bonding. 
The application of the sophisticated and new to tidal energy Resin Transfer Moulding technology (RTM) will reduce that risk featuring double-sided tooling for thick-walled structures.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-8", "d_text": "transfer molding|\n|US5266259 *||17 Sep 1992||30 Nov 1993||Ford Motor Company||Molding a reinforced plastics component|\n|US5328656 *||12 Dic 1991||12 Jul 1994||Dag Thulin||Method for vacuum molding of large objects of synthetic resin|\n|US5364584 *||16 Oct 1992||15 Nov 1994||Mitsubishi Kasei Corporation||Process for producing fiber-reinforced resin moldings|\n|DE3234973A1 *||18 Sep 1982||22 Mar 1984||Schiegl Erwin||Process for producing a glass fibre-reinforced enclosure of polyester|\n|JPH04296538A *||Título no disponible|\n|JPS5579117A *||Título no disponible|\n|Patente citante||Fecha de presentación||Fecha de publicación||Solicitante||Título|\n|US6565792 *||11 May 2001||20 May 2003||Hardcore Composites||Apparatus and method for use in molding a composite structure|\n|US7332049||22 Dic 2004||19 Feb 2008||General Electric Company||Method for fabricating reinforced composite materials|\n|US7335012||22 Dic 2004||26 Feb 2008||General Electric Company||Apparatus for fabricating reinforced composite materials|\n|US7431978||22 Dic 2004||7 Oct 2008||General Electric Company||Reinforced matrix composite containment duct|\n|US7867566||26 Oct 2007||11 Ene 2011||General Electric Company||Method for fabricating reinforced composite materials|\n|US7943078 *||16 Feb 2005||17 May 2011||Toray Industries, Inc.||RTM molding method and device|\n|US8420002||9 Oct 2003||16 Abr 2013||Toray Industries, Inc.||Method of RTM molding|\n|US8784092 *||23 Sep 2008||22 Jul 2014||Airbus Operations Sas||Device for the production of a composite material part integrating a drainage system|\n|US9120253||15 Mar 2013||1 Sep 2015||Toray Industries, Inc.", "score": 11.600539066098397, "rank": 95}, {"document_id": "doc-::chunk-0", 
"d_text": "Process Development for the Sequential Preforming of Semi-Finished Products\nClimate change and its consequences are pushing societies worldwide to adopt a more energy- and resource-efficient lifestyle. Global warming can only be curbed in the long term if the emission of greenhouse gases is greatly reduced over the next years. Since the energy consumption of vehicles directly determines the amount of pollutants that are emitted, the strategic relevance of lightweight construction in the automobile industry will increase in the future.\nFiber-reinforced polymers are being used with greater frequency in addition to established lightweight construction materials. These polymers have great potential due to their density-specific material properties. High-performance composites with a high fiber volume are of especial interest because of their excellent mechanical properties. In particular anisotropic layer constructions, oriented according to the load path, enable the production of high-strength structural components suitable for lightweight applications. Automated, robust and quality-assured manufacturing technologies are needed in the future in order to successfully introduce this material system to large-scale automotive production.\nThe Technology Cluster Composites (TC2) focused, among other things, on the industrialization of RTM process chains for the manufacture of structural components for automotive lightweight construction. An outstanding feature of TC2 was an integrated approach to technological challenges that was not based on individual problems but focused on connections and interactions within continuous process chains.\nThe cost structure of the RTM process chain is a strong argument for the implementation of automated preforming. 
The automation of handling operations as well as the reshaping process (draping) within the preforming procedure can drastically reduce production cycle times, which leads to a significant reduction in overall RTM production costs. At the same time, the reproducibility of component quality increases as the degree of automation becomes greater which, in turn, results in a noticeable decrease in the rate of production waste.\nFundamental draping and fixation strategies for preform manufacturing were developed and validated within the framework of TC2 by the participating group partners Fraunhofer ICT, the Institute of Aircraft Design of the University of Stuttgart (IFB), the Institute of Textile Technology and Process Engineering Denkendorf, the Institute of Production Science (wbk) and the Institute of Vehicle Systems Technology of the Karlsruhe Institute of Technology. The sequential reshaping of entire layer constructions using a multiple-stamp mold has proven to be the best method for the automated manufacture of components with complex geometries. In this method, the layer constructions are locally prestressed.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "What the Future of Composite Materials Means for the Aerospace MRO (Maintenance, Repair, and Overhaul) Sector\nWith a constant need for lighter and more efficient aircraft, the aerospace industry has been especially welcoming of developments in composite materials. Over the past several years, the industry has adopted more composite materials, drawn by such benefits as increased durability and heat resistance, better aircraft life cycle, flexibility and more.\nAs composite material production evolves, and design and production processes improve, we can expect to see even greater adoption of composite material in the aerospace industry. 
In fact, the market is expected to grow from $26.87 billion in 2017 to $42.97 by 2022, according to Research and Markets’ aerospace composites market report and global forecast – a CAGR of 9.85%. Boeing and Airbus have both shown increased investment in composite material use, with the 787 and A350 composed of roughly 50 percent composite materials. For the aerospace MRO sector, composites offer several benefits.\nAirbus reported a 60% reduction in fatigue- and corrosion-related maintenance tasks for its A350 XWB since adopting composites, based on the amount of time spent on each check and the amount of checks required during the aircraft life cycle. This translates to faster and less frequent repairs, which can keep aircraft off the ground and in the air for longer periods of time.\nBetter fatigue resistance\nPrepregs are reinforcement materials pre-impregnated with thermoplastic or thermoset resin. These create lightweight, high-strength composite laminates that have increased fatigue and corrosion resistance and more controlled fiber volume fraction and can be used to strengthen internal or external aircraft features.\nTraditional materials can wear down over time, but composites like carbon fiber, epoxy and other structural adhesives help improve the aircraft’s life cycle – again, thanks to their corrosion resistance and ability to withstand high temperatures and pressures.\nLess material waste\nComposites make it easier to conduct on-wing repairs, which means less material waste and, in turn, lower cost.\nLess time spent on MRO tasks\nLess maintenance means less cost and more in-flight time. 
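The growth rate quoted for that market forecast can be checked from its two endpoints with the standard compound-annual-growth-rate formula; the figures below are the ones given in the text, and the computation reproduces the quoted ~9.85% to within rounding:

```python
# Sanity-check the quoted CAGR against the market figures in the text.
start_usd_bn = 26.87   # 2017 market size, USD billions (from text)
end_usd_bn = 42.97     # 2022 forecast, USD billions (from text)
years = 5              # 2017 -> 2022

cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"{cagr:.2%}")   # roughly 9.84-9.85%, consistent with the quoted figure
```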
In addition, the technique of bonded repairs can be more easily performed than standard repairs, and in less time – only 8-12 hours.\nComposites One is the leading supplier of composites for the aerospace industry\nComposites One is the leading supplier of composite materials throughout North America.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-9", "d_text": "||Methods of RTM molding|\n|US9421717 *||23 Sep 2011||23 Ago 2016||Gosakan Aravamudan||Manufacturing a composite|\n|US9463587||15 Mar 2013||11 Oct 2016||Toray Industries, Inc.||Methods of RTM molding|\n|US20050093188 *||23 Ene 2004||5 May 2005||Forest Mark L.L.||Binderless preform manufacture|\n|US20050236736 *||20 Dic 2004||27 Oct 2005||Formella Stephen C||Composite product and forming system|\n|US20050255311 *||15 Abr 2005||17 Nov 2005||Formella Stephen C||Hybrid composite product and system|\n|US20060125155 *||9 Oct 2003||15 Jun 2006||Toray Industries, Inc.||Method of rtm molding|\n|US20060134251 *||22 Dic 2004||22 Jun 2006||General Electric Company||Apparatus for fabricating reinforced composite materials|\n|US20070052136 *||8 Sep 2005||8 Mar 2007||Hiroshi Ohara||Exclusive molding method for producing cloth-based parts of loudspeaker|\n|US20070182071 *||16 Feb 2005||9 Ago 2007||Toshihide Sekido||Rtm molding method and device|\n|US20090142496 *||26 Oct 2007||4 Jun 2009||General Electric Company||Method for fabricating reinforced composite materials|\n|US20100260884 *||23 Sep 2008||14 Oct 2010||Airbus Operations Sas||Device for the production of a composite material part integrating a drainage system|\n|US20110192531 *||1 Abr 2011||11 Ago 2011||Toray Industries, Inc.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-0", "d_text": "Additive Manufacturing Additive Manufacturing - Arrived in the Mainstream\nAdditive manufacturing technologies are becoming a versatile production method for series production. 
However, you should be aware of its advantages and disadvantages if you want to use this technology efficiently.\nSince man has first worked with materials, he has done it by removing parts of a blank. From stone-age man, who carved works of art from mammoth tusks in the caves of the Swabian Alb, to today's computer-controlled multi-axis milling/turning machines: the principle has always been the same: material is removed from a blank until only the desired geometry remains —be it a lion-man or a complex technical component.\nThese processes are quite flexible, as each part is manufactured individually. In mass production, however, automated processes are applied frequently and individually formed parts are considerably more expensive than identical serial parts. A further disadvantage of ablating processes is that the more complex the component is, the more expensive the production process becomes, and the more material is lost. For example, 98 % of the huge raw part made from the finest special aluminium that is used in the production of the wing frames of the Airbus A380 is transformed into inferior shavings, which can only be used to a limited extent.\nLast limitation: The blank can only be machined from the outside. Although modern five-axis milling machines can move their tool very freely, the milling cutter must still be able to reach the point where it is removed — the inside of a workpiece can only be machined to a very limited extent.\nComplexity is for Free\nAdditive processes — however different they may be in detail — are based on the principle that a material is applied layer by layer to create the component. Since the material is not removed, but added at the places where it is needed, hardly any waste is produced. In addition, the printer doesn't care how complex the form is — a cube is created just as quickly as a complex component: \"Complexity is for free\" is the motto. 
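The A380 wing-frame example above implies a striking material-utilization figure. As a minimal arithmetic sketch (the 98% waste fraction is from the text; "buy-to-fly ratio" is standard aerospace shorthand for bought raw mass per unit of finished part, not a term the article uses):

```python
# Convert the quoted machining-waste fraction into a buy-to-fly ratio.
waste_fraction = 0.98                # 98% of the blank ends up as shavings (from text)
part_fraction = 1 - waste_fraction   # only 2% of the bought material "flies"

buy_to_fly = 1 / part_fraction
print(round(buy_to_fly))             # 50 -> roughly 50 kg of blank per kg of finished part
```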
Equally, the printer doesn't care if each part has a different shape, since there is no shape to follow. In this way, individualized components can be produced just as quickly as identical parts.\nCompletely new possibilities are offered by the layered construction for the interior design of the component. Since the part is created layer by layer, the inside of the workpiece can also be designed as desired — with some restrictions.", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-1", "d_text": "CRP Technology has 3D-printed a 1:14 scale model of a yacht in carbon fiber composites in order to demonstrate the possibilities of the material used with its selective laser sintering 3D printing process and to give a boost to boat design. (Source: CRP Technology)\nFDM is a very effective method for producing prototype parts. Process parameters will have a large influence on part performance. Since we are using a different method, the part results are different. Utilizing a Laser Sintering machine we have the ability to fully melt the material during the layering process, with the energy penetrating to the layer below. The material is non-isotropic, as the microfibers align themselves during the re-coating process, but due to the materials properties and machine settings we do not see layer separations. Even when used in high vibration areas such as the parts that CRP USA (our partner in Mooresville NC) produces for NASCAR teams that race the parts every weekend attached to an engine turning more than 9000 rpm.\nWindform compares similarly to several filled plastics utilized in manufacturing. Data sheets are available at Windform.eu as well as charts that rank the materials in the Download section at the website. Ann Thryft has commented that the industry is moving forward to establish testing standards through ASTM Committee F42 (We participated in the kick off of this process a few years back). 
We encourage people who are interested to join and or follow the progress.\nIt is interesting that you are looking toward Aerospace. Windform materials have passed Out Gas testing per NASA screening and are being utilized to produce components as well as structures for Cube Satellites.\nWilliam, I agree. But we're some ways away from that goal. Lots of R&D is proceeding to establish such metrics, as we've reported several times. Meanwhile this yacht model is more of a proof of concept.\nWhat would be really useful is a comparison of the physical properties of the printed composite object compared to those made in the standard manner. Is there a trade-off, and if so, how much? That is the sort of thing that is usefulk in considering as to if a fabrication method is applicable for some part.\nThe new composites manufacturing innovation center is intended to be a source of grand challenges for industry, like the kind that got us to the moon under JFK.", "score": 8.086131989696522, "rank": 100}]} {"qid": 22, "question_text": "What is the Thondrol and why is it significant in Bhutan?", "rank": [{"document_id": "doc-::chunk-1", "d_text": "So blessed is the huge banner (called a Thondrol) that it is believed one achieves liberation simply by viewing it. So there I stood in the cool light of dawn, staring up at the massive image of Rimpoche seated upon his lotus throne. Bhutanese people had come from all over, many walking for days to see the Thondrol. It remains up for only a few hours, on this one day a year, as the monks do not let the 300 year old work of devotional art be touched by direct rays of the sun, least it fade.\nI then made my way to the booth of Mangala Paper, who make the paper we import, and introduced myself to Kezang Udon. Her Eco Friendly hand made paper was doing a brisk business, with many western tourists checking out the beautiful sheets, as well as handsome photo albums and journals crafted from Bhutan paper. 
I told Kezang how proud I was to be part of the introduction of Bhutan paper to the United States and she graced me with the warm smile that is so typical of the Bhutanese people.\nI promised to stop by her factory on my way back from my journey to the remote central valley, and began an arduous day and a half drive on the only paved road in the country.\nI traveled over passes towering above the clouds, through dense forests punctuated by flaming red rhodendrums. I visited the cave where Guru Rimpoche first meditated when he arrived in Bhutan from Tibet. A temple has been built around the cave and a huge cypress tree stands beside it, supposedly having sprung up from Rimpoche's walking stick. I touched the tree with deep respect for the man who is believed to be the second incarnation of Buddha.\nOn the return journey it began to snow. We were heading for a monastery at the top of a pass, and arrived to the sound of deep guttural chanting of monks, accompanied by the plaintive cry of conch shell horns and thudding bass of drums. A healing ceremony for the Lama (Teacher) was in progress and we were allowed to observe.\nThe interior of the monastery was a treasure trove of wall paintings depicting various spiritual heroes battling demons. Bhutan is a Tantric Buddhist nation, thus visualizations of inner states of mind are constantly on view, serving as signposts on the path towards enlightenment.", "score": 52.37738817260331, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Especially significant was the unfurling of the Throngdrel, a three-story-tall tapestry displayed only on important occasions. So revered is the tapestry that simply viewing it is believed to absolve a person's sins. A huge line of people processed around the tower from which it was hung - clockwise, as is the custom - to see the Throngdrel and gain karma.\nOf course, my account of the experience would be incomplete without mentioning one other aspect of the celebration. 
Among the Buddhist practitioners attending the celebration was His Majesty, King Jigme Khesar Namgyel Wangchuck. As monks played horns and drums, he and Her Majesty the Queen walked into the temple, and we stood maybe thirty feet away from them!\nComing from a Western, Christian background, I was fascinated by this Buddhist holiday. On the surface, there is little in common between the two religions, but beneath all the differences, common threads such as community, celebration, and ceremony connect the two traditions to each other. For me, this was the most important part of the celebration of the Parinirvana.", "score": 47.27666389836749, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "A Tshechu is a Buddhist festival in honor of Guru Rinpoche, the saint who brought Buddhism to Bhutan.\nParo Tshechu is one of the most popular events in the country and the best attended by the tourists and local people because of its hospitality and connectivity.\nAll the mask dances on the first day are held inside the courtyard of the Dzong, while on the remaining days they are held in the courtyard outside the Dzong. Festivals featuring dances performed by trained monks and laymen dressed up in vibrant multicolored brocade costumes are one of the best ways to experience the ancient living culture of Bhutan. The highlight of the Paro Tshechu is the unfurling of the silk Thangka – so large it covers the face of an entire building and is considered one of the most sacred blessings in the whole of Bhutan. The 'Thangkha' known as a 'Thongdroel' is only exhibited for a few hours at daybreak on the final day of the festival enabling the people to obtain its blessing.", "score": 45.69447085318928, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Bhutan is well-known for its brilliant, colourful celebrations.\nBhutan is well-known for hosting some of the world's most brilliant and colourful festivals. 
In fact, many visitors plan their journey to Bhutan around the famous Paro and Thimphu Festivals. Bhutan's tsechu celebrations are historic representations of its Buddhist culture. The majority of these celebrations are devoted to Guru Rimpoche, the saint who introduced Buddhism to Bhutan in the early eighth century. They celebrate with mystical dances, captivating performances, daring fire events, intriguing naked dances, and informative re-enactments. Behind the scenes, the monks prepare for the coming weeks via intense prayer and meditation. The monks execute specific masked dances that are inspired by historical enlighte", "score": 45.14789055872011, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "Bhutan's Tsechu Festivals\nIf you have heard about Bhutan, surely you have heard about Bhutan's Tsechu festivals. Tsechu, which literally means 'day ten' in Dzongkha language, are annual celebrations held in the Dzong fortresses and prominent temples on the 10th day of the Tibetan-Bhutanese calendar.\nCham dances are the major attractions of Tsechu and are performed by monks and common people who wear traditional costumes and masks and portray the incidents from the life of 8th century Buddhist teacher Padmasambhava, who is also known as Guru Rinpoche in Bhutan and Tibet. The tradition of the tsechu goes back to the 8th or 9th century when Guru visited Bhutan to cure the ailing king and performed rituals and rites involving a series of dances.\nThe other attraction of Tsechu is the thongdrel display. A thongdrel is a giant silk embroidered painting of Guru Rinpoche, Zhabdrung and Buddha. Thongdrel display ceremony usually takes place on the last day, before dawn and stays out for public audiences for two or three hours and then it's folded back and concealed inside for a year until the next Tsechu. As the thongdrel is raised, people cheer, hum the Buddhist prayers and offer Khada scarves to the sacred painting. 
People feel blessed by joining the thongdrel display and expect good fortune for the year.\nBhutanese people join the celebration with their friends and families and witness religious mask dances, receive blessings and socialize. Everyone wears national costume made of the finest hand-woven textiles. You may also request your guide to arrange a dress for you so that you don't look odd in the crowd.\nBesides formal mask dances, there will be folklore singing and dancing by locals and people enjoy outdoor picnics with their friends and families. It's okay to crash into a group, if you wish. People are friendly and welcoming. Share your lunch with them and try theirs! For sure, you will love their home-cooked meals, while you will find them curious about the food you bring from your hotel or restaurant.", "score": 43.9832753153823, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Bhutan is one of the most religious countries in the Tibetan Buddhist world. And like in all Buddhist nations, festivals have a special place in the hearts of its residents. Most of the Bhutanese festivals commemorate the deeds of the Buddha, or those of the great masters of the past associated with one Buddhist tradition or another.\nBhutanese culture is characterized by religious celebrations. Its people love socializing, attending festivals, joking, playing, and doing all the things that help them to be in the spirit of celebration. Religion and social life are so intrinsically linked in the culture that some festival appears to be taking place somewhere in Bhutan throughout the year. Among these festivals, one of the most recognized and attended by the masses is the Tsechu festival ('Tse' means 'date' and 'Chu' means 'ten'; i.e. '10th day'). 
This festival is celebrated to commemorate the great deeds of the 8th century Tantric Master Guru Padmasambhava.\n'Guru Rimpoche', or simply 'Guru' as he is referred to, introduced the Nyingma school of Buddhism into Tibet and Bhutan. Each 10th day of the lunar calendar is said to commemorate a special event in the life of Padmasambhava and some of these are dramatized in the context of a religious festival. Most of the festivals last from three to five days - of which one day usually falls on the 10th day of the lunar calendar. It is not just the time for people to get together, dress up and enjoy a convivial light-hearted atmosphere, but also a time to renew one's faith, receive blessings by watching the sacred dances, or receive 'empowerment' from a lama or Buddhist monk.\nAn auspicious event of many of the Tshechus is the unfurling of the Thongdrol from the main building overlooking the dance area. This is done before sunrise and most people rush to witness the moment. Thongdrols are large Thangkas or religious pictures that are usually embroidered rather than painted. The word itself means 'liberation on sight.' It is believed that bad karmas are expiated simply by viewing it. Spring is one of the best times to visit Bhutan; it is also at this time that the local inhabitants of Paro celebrate the spring festival, one of the most popular festivals.", "score": 42.792775829072546, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "A tsechu is a Buddhist festival in celebration of Guru Rinpoche. Each district in Bhutan has its own tsechu, but the Thimphu Tsechu is the largest of these and takes place each fall, usually in September or October. An outdoor festival ground was built in 2008 at Thimphu's Trashi Chhoe Dzong, which allows for the comfortable seating of 25,000 people. Although tourists do visit the Tsechu, the attendees are overwhelmingly Bhutanese. 
The festival lasts for four days and features dances depicting different stages in the life of the Guru Rinpoche.", "score": 42.15248847030117, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "Bhutan cultural tours are designed to bring clients into close contact with an unspoiled land that is home to a vibrant Buddhist way of life, and as close in spirit to Shambala – or paradise – as an earthbound kingdom can be.\nBhutan is known for its unique culture and traditions. Bhutan Cultural tours offer a unique insight into the culture and tradition of the nation. They provide an opportunity to interact with the native people. Cultural tour highlights are Tshechus (annual festivals), where Buddhist teachings are dramatized and shown through mask dances. They also take you to visit Dzongs (fortresses), monasteries and other religious monuments that were built 300 years ago", "score": 38.92544951589449, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "The lofty barriers had wrapped the country with a cloak of mysticism. For centuries, the country did not have a name for the outside world. Some Tibetan chronicles referred to it with exotic names such as the 'Southern Valley of Medicinal Herbs' or the 'Lotus Garden of the Gods.' To the Bhutanese, the country was always 'Druk Yul', literally meaning 'The Kingdom of the Thunder Dragon.'\nThe name Bhutan appears to have derived from the ancient Indian term 'Bhotanta' which means the end of the land of the Bhots. Bhot was the Sanskrit term for Tibetans. Bhutan's distant past is surrounded by mystery, as books and documents were lost in a series of fires and earthquakes which destroyed important Dzongs where the historical records had been stored. The prominent event in what little exists of Bhutanese history is the legendary flight of Guru Padmasambhava from Tibet in 747 AD. Guru Rimpoche, as he is today popularly referred to, is considered the second Buddha. 
Guru Rimpoche arrived in Paro valley at the Taktshang (Tiger's Nest). Today a monastery exists perched precariously on the cliff's face as an indelible mark of the Guru's visit. Guru Rimpoche is the founder of the tantric strain of Mahayana Buddhism practiced in Bhutan. He is also worshipped as the father of the Nyingmapa School of religion.", "score": 38.405556560956335, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Bhutan is a Buddhist kingdom on the Himalayas' eastern edge. The only country in the world that practices the tantric form of Buddhism, it is a land of monasteries, fortresses (dzongs) and dramatic topography ranging from subtropical plains to steep mountains and valleys. In the high Himalayas, peaks such as the 7,326m high Jumolhari are a destination for serious trekkers. The landlocked nation abounds in sacred sites like the Taktsang Monastery or the Tiger's Nest, and is bountiful in flora and fauna, making it one of the world's top hotspots for visitors.\nThe Bhutanese, a homogeneous group, are friendly and hospitable and fall linguistically into three sub-groups comprising the Sharchops, Ngalongs and Lotshampas. There are also a number of smaller groups in the country with their own distinctive language. These groups form about one percent of the population. Some of these groups are the Tsanghos in the east, Layapas in the north-west, Brokpas in the north-east and Doyas in the south-west.\nLongish robes called ghos, tied around the waist by a cloth belt known as the kera, are worn by Bhutanese men. The women wear ankle-length dresses known as kira. Both ghos and kiras are made of bright colored fine woven fabrics with traditional patterns and designs.\nArchery is the national sport of Bhutan. Other traditional sports popular in the Kingdom include various kinds of shot-put, darts and wrestling. 
International sports such as soccer, basketball, volleyball, taekwondo, cricket, tennis, badminton and table tennis are also extremely popular.\nThe currency of Bhutan is called Ngultrum. The G is silent when you pronounce it. Introduced in 1974, the Ngultrum is pegged to the Indian Rupee.\nThe rectangular Bhutanese flag is divided into two parts with a white dragon in the middle. The dragon symbolizes the name Druk Yul – meaning land of the thunder dragon – and its white color is a representation of purity and loyalty. The yellow upper half signifies the country's secular authority of the King in the affairs of religion and state. The lower saffron orange half signifies the religious practice and spirituality of Buddhism as manifested in the Drukpa Kagyu and Nyingma traditions.", "score": 38.33607659551566, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "Bhutan, a jewel between India and China, is about the size of Switzerland with a population of around 750,000. Within its small boundaries the ecological diversity is amazing. Tropical jungles in the south with elephants, rhinoceros, and tigers, coniferous forest in the mid region with leopards, mountain goats, bears, and a variety of bird life, and blue sheep and snow leopards in the high-altitude zones. Through centuries of self-imposed isolation Bhutan has been able to preserve its spectacular environment and nurture its unique culture. Drawing inspiration from its neighbour, Tibet, Tantric Buddhism has flourished and influenced art, crafts and architecture for hundreds of years, and has shaped the Bhutanese way of life. The early 1960s saw Bhutan's first cautious opening to the outside world. Tourism began for the first time on June 2nd 1974, the coronation day of the nineteen-year-old King Jigme Singye Wangchuck, the fourth monarch.\nThe idea of happiness and wellbeing as the goal of development has always been a part of Bhutanese political psyche. 
While this has influenced Bhutan's development endeavors during the early part of the modernization process, it was not pursued as a deliberate policy goal until His Majesty the Fourth King of Bhutan introduced Gross National Happiness (GNH) to define the official development paradigm for Bhutan.\nReligious festivals are perfect occasions to glimpse what might be termed Bhutanese culture. Celebrated throughout the country, they occur in a host of differing forms, depending upon the scale, the nature of the ceremonies performed or the particular deity being revered. The best known are the Tshechus, festivals which honor Guru Rinpoche and celebrate one of his remarkable actions, and the most popular of these take place annually in or around the great dzongs, attracting both tourists and large numbers from the surrounding districts. Lasting several days, the central focuses are the series of prayers and religion-inspired dances. These dances, made especially striking by the spectacular costumes of the dancers - bright silks and rich brocade, ornate hats or extraordinary masks - may either depict morality tales, invoke protection from demonic spirits or proclaim Buddhist victories and the glory of remarkable saints.", "score": 37.756751404270275, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "Bhutan is known in Bhutanese as Druk Yul, or Thunder Dragon. The population of around 800,000 is made up of several ethnic groups, including the Ngalop and the Sharchop. 
Ngalop means ‘the first risen’ and this group is the largest in the country.\nBhutan - the land of the Thunder Dragon\nBhutan’s government is known for measuring prosperity in terms of Gross National Happiness instead of Gross National Product, to focus on the overall happiness and well-being of the Bhutanese population, rather than only their material wealth.", "score": 37.482564656852645, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "Bhutan, the last bastion of Tantric Buddhism, is synonymous with ‘happiness’ to most foreigners who have been to and know about the country.\nThis small Himalayan kingdom aspires to become a role model to the rest of the world, and it does so by believing that the ultimate quest of all humans is to find happiness. That’s why Bhutan rejects the conventional belief that economic gains alone result in happiness.\nHowever, the country does not undervalue the importance of economic growth. What it seeks is a fine balance between material wealth and spiritual well-being. Thus, Bhutan firmly believes that Gross National Happiness is more important than Gross National Product. And the ideals of Gross National Happiness are very much part of the country’s development programs and policies.\nBhutan’s untainted culture and traditions have been the hallmark of its uniqueness in the world.\nThe country therefore consciously strives to preserve its unique cultural identity.\nOne of the recent developments the country witnessed was its transition from monarchy to parliamentary constitutional democracy. Amid resistance from the people, the fourth King, His Majesty Jigme Singye Wangchuck, voluntarily abdicated the Golden Throne a year before the historic elections.\nKnown as the Father of Democracy, the fourth King told his people that a king is chosen by birth, not by merit, and the future of the Bhutanese people hinges on a healthy democratic system. The country is for the people, not for the king, he said. 
The country thus embraced a multi-party democracy following its first-ever parliamentary elections in March 2008.\nA tiny nation squeezed between the two rising Asian giants – China in the north and India in the south – Bhutan has always taken cautious steps to determine its own future. The country does not believe in economic emulation.\nBhutan's fewer than 700,000 people are widely scattered. And its largest city, Thimphu, is home to about 100,000 people. Most of them are government and corporate employees and business players. The rest of the population lives in far-flung towns and villages, some in extremely remote and inaccessible pockets.\nMore than 72% of the country's total area is under forest cover, and its rich flora and fauna have earned the country a place among the world's top 10 biodiversity hotspots. And scattered settlements remain adorned with aesthetic temples, monasteries, and fluttering prayer flags.", "score": 36.13179342552089, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "The dzongs or fortresses seen across the country with their large courtyards and beautiful galleries are among the finest examples of Bhutanese architecture. Housing large monasteries inside and sitting on hilltops or at the confluence of rivers, these fortresses are also the administrative centers of their districts. However, the most common architectural sights in Bhutan are the chortens or small shrines built to house sacred relics.\nBhutan being an agrarian society, agriculture and livestock rearing have traditionally been the mainstay of the Kingdom's economy, contributing about 45% to the GNP. 70% of the Bhutanese populace lives on subsistence farming – growing rice, barley, millet, buckwheat, potatoes, mustard, chilies and vegetables. 
While hydropower contributes a major amount to the GNP, forestry adds another 15%.\nIn commemoration of the accession of Gongsar Ugyen Wangchuck (the first King of Bhutan) to the throne in Punakha Dzong, December 17 is celebrated as the National Day.\nSince 1974, Bhutan has followed a policy of cautious growth, a "high value, low impact" tourism policy; our government has actively managed visitors in keeping with the policy. Tourists have been required to travel with licensed Bhutanese tour operators accompanied by licensed guides. We have consistently sought to ensure that the number of tourists admitted to Bhutan has been within the capacity of our socio-cultural and natural environment to absorb visitors without negative impact.\nUnder Bhutanese law, 60% of the kingdom must remain covered by forest for all time to come. His Majesty the Fourth King and the people of Bhutan received the "Champions of the Earth" Award for 2005. The Champions of the Earth Award was established by the United Nations Environment Programme (UNEP) in 2004 to recognize outstanding achievements of individuals and organizations in protecting and improving the environment. The current forest coverage is 72.5% of the total landmass.\nBhutanese dzongs have played a significant role historically and continue to do so even today. Bhutan's extraordinary architecture is best represented by the immense dzongs that stand tall throughout the country. Dzongs are large castle-like structures either perched on hilltops overlooking broad river valleys or built alongside river banks for protection from marauding Tibetan armies back in the day.", "score": 35.208823561999594, "rank": 14}, {"document_id": "doc-::chunk-3", "d_text": "Oct 5, – Explore WDT's board "Bhutan", followed by people on Pinterest. 
One of the oldest temples in Bhutan, Kyichu Lhakhang, dates back to the 7th century. Tiger's Nest Monastery is a Mahayana Buddhist sacred site and temple.\nA place that measures the success of the country, not on outputs or financials, but on the Gross National Happiness of the people who live there. Here, in the land of the Thunder Dragon, culture and spirituality are at the core of everyday life. People wear traditional dress, colourful prayer flags adorn the hills, monasteries perch precariously on cliff faces, and monks in crimson coloured robes wander the streets.\nThis forward-thinking little country has already banned plastic bags and tobacco. In some ways Bhutan is a country lost in time, in other ways it is so far ahead. A trip to Bhutan is an eye-opening, life-changing experience.", "score": 34.68616535854622, "rank": 15}, {"document_id": "doc-::chunk-2", "d_text": "A dzong is used for religious as well as for secular purposes. Bhutan’s official language “Dzong-kha” originated from the languages spoken in the Dzong in the olden days. ‘Kha’ is the Bhutanese word for language.\nZhabdrung Ngawang Namgyel, who came to Bhutan from Tibet in 1616, built most of these historic structures. The first dzong he built was the Simtokha Dzong in Thimphu in 1627. He was also the one who unified Bhutan during a time of chaos and disorder, and served as the administrative as well as spiritual leader of Bhutan. Known as the historical king of Bhutan, he is one of the most revered figures in the country even today.\nArts & Crafts\nAll Bhutanese art, dance, drama and music have their roots in the Buddhist religion. 
And almost all representation in art, music and dance is a dramatization of the struggle between good and evil.\nThe thirteen aspects of Bhutanese arts and crafts called Zorig Chusum include Shinzo (woodwork), Dozo (stone work), Jinzo (clay crafts), Shazo (wood turning), Parzo (wood, slate and stone carving), Lazo (painting), Lugzo (bronze casting), Garzo (blacksmithing), Troeko (silver and goldsmithing), Tsharzo (bamboo and cane crafts), Dhezo (papermaking), Thagzo (weaving) and Tshemzo (tailoring).\nThe skills of the local craftsmen are manifested in the statues of the deities, doors and windows of traditional houses, and in religious artifacts like bells, trumpets and drums. The country also has a rich and diverse range of carpets and traditional textile designs whose colors, weaves and textures have evolved over centuries.\nTshechus are very special events celebrated throughout Bhutan by every Bhutanese. The term 'Tshechu' literally translates to the 10th day of the Bhutanese calendar, which is considered auspicious. During tshechus, chhams (religious masked dances) are performed by monks and laymen alike. Besides the religious songs and dances, there are Atsaras (clowns) who usually wear masks with big red noses. To most, Atsaras are the soul of the tshechus.", "score": 34.553148926707856, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "Punakha Dzong is arguably the most beautiful dzong in the country, especially in spring when the lilac-coloured jacaranda trees bring a lush sensuality to the dzong's characteristically towering whitewashed walls. This dzong was the second to be built in Bhutan and it served as the capital and seat of government until the mid-1950s. All of Bhutan's kings have been crowned here. 
The dzong is still the winter residence of the dratshang (official monk body). The carved woods here add to the artistic lightness of touch, despite the massive scale of the dzong.\nBhutan's most treasured possession is the Rangjung ('Self-Created') Kharsapani, an image of Chenresig that is kept in the Tse Lhakhang in the utse of the Punakha Dzong. It was brought to Bhutan from Tibet by the Zhabdrung and features heavily in Punakha's famous dromchoe festival. It is closed to the public.\nAfter you exit the dzong from the north you can visit the dzong chung and get a blessing from a wish-fulfilling statue of Sakyamuni. The building marks the site of the original dzong. North of the dzong is a cremation ground, marked by a large chorten, and to the east is a royal palace.", "score": 34.22777046356444, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "The Punakha Dzong, often referred to as the ‘Palace of Happiness’, is the second oldest and second largest dzong in Bhutan. This spectacular emblem of Bhutanese religious architecture sits right at the confluence of the Mo Chhu and Pho Chhu rivers and is perhaps the obvious key to unlocking Punakha’s secrets. 
PUNAKHA DOMCHOE is an annual festival held at the Dzong, which is largely attended by people from all villages and far places of the district.\nThe Theranghung “self-created” image of Avalokitesvara enshrined in the utse of the dzong (brought by the Zhabdrung from Tibet) is displayed during the festival.\nDistance – 2km from Khuruthang\nOpening Time – 6am\nClosing Time – 5pm", "score": 33.654322605766865, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "|100 ngultrum - front|\n|100 ngultrum - back|\nThimphu is the capital city of Bhutan, situated in the western part of the country.\nWith around 80,000 inhabitants, it is also the largest city in Bhutan.\n|the buddhist temple Tashichho Dzong - city of Thimphu|\nIn 1641 Zhabdrung built the Tashicho Dzong (Fortress of the auspicious religion) in place of the Dho Ngon (Blue stone) Dzong built by Lama Gyalwa Lhanangpa.\nIn 1698, the dzong caught fire and was restored. The dzong caught fire for a second time during the reign of the 16th Desi and 13th Je Khenpo. In 1869, the dzong once again caught fire. His Majesty the Second King initiated the renovation of the Dzong in 1962. Today, Trashichho dzong houses the secretariat, throne room, and offices of the King of Bhutan. The northern section is the Je Khenpo and Central Monk Body’s residence.\n|500 ngultrum - front|\nKing Ugyen Wangchuk reigned from December 17, 1907 till August 21, 1926.\n|500 ngultrum - back|\nConstructed by Zhabdrung (Shabdrung) Ngawang Namgyal in 1637–38, it is the second oldest and second largest dzong in Bhutan and one of its most majestic structures. The Dzong houses the sacred relics of the southern Drukpa Kagyu school including the Rangjung Kasarpani, and the sacred remains of Zhabdrung Ngawang Namgyal and Terton Padma Lingpa. Punakha Dzong was the administrative centre and the seat of the Government of Bhutan until 1955, when the capital was moved to Thimphu.\n|Punakha dzong - city of Punakha|\n... 
to be continued ...", "score": 33.400836663646515, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "- Thimphu; 35,000\n- 46,500 square kilometers (17,954 square miles)\n- Dzongkha, Tibetan and Nepali dialects\n- Lamaistic Buddhist, Hindu\n- Ngultrum, Indian rupee\n- Life Expectancy:\n- GDP per Capita: U.S. $1,300\n- Literacy Percent:\nBhutan Facts Flag\nBhutan is a tiny, remote, and impoverished country between two powerful neighbors, India and China. Violent storms coming off the Himalaya gave the country its name, meaning "Land of the Thunder Dragon." This conservative Buddhist kingdom high in the Himalaya had no paved roads until the 1960s, was off-limits to foreigners until 1974, and launched television service only in 1999. Fertile valleys (less than 10 percent of the land) feed all the Bhutanese. Bhutan's ancient Buddhist culture and mountain scenery make it attractive for tourists, whose numbers are limited by the government.\n- Industry: Cement, wood products, processed fruits, alcoholic beverages\n- Agriculture: Rice, corn, root crops, citrus; dairy products\n- Exports: Electricity (to India), cardamom, gypsum, timber, handicrafts\n—Text From National Geographic Atlas of the World, Eighth Edition\nBoyd Matson investigates Bhutan's policy of Gross National Happiness.\nGuided by a novel idea, the tiny Buddhist kingdom tries to join the modern world without losing its soul.", "score": 33.21848899029493, "rank": 20}, {"document_id": "doc-::chunk-2", "d_text": "After a brief walk down from the museum you will have reached Rinpung Dzong (‘fortress of a heap of jewels’). It serves as the seat of the Paro district administration and residence for the monastic school. 
Rinpung Dzong, like all other dzongs in Bhutan, is adorned with wall murals that symbolize the lives of the Bodhisattvas and other prominent saints, drawings from Buddhist parables in which the country’s culture and traditional life are intricately represented, and holy symbols that signify their own individual religious meanings.\nDrukgyel Dzong, or the fortress of the victorious Drukpas, will be the next stop. History recalls the dzong as being built to celebrate the victories over several Tibetan invasions which were successfully countered from this defence point. The fort was gutted by a fire disaster in the 1950s but has been left in the form of historical ruins to this day to pay homage to what it stands for. You can also see the white dome of Mount Jomolhari (mountain of goddess) from the location.\nDay 3 – Paro to Thimphu, Thimphu sightseeing\nBreakfast and proceed to Thimphu, the capital city. The journey will be for about two hours and on the way you will be able to visit the Semtokha Dzong, which is the oldest dzong in the kingdom built by Zhabdrung Ngawang Namgyel. The dzong has been serving as a school for Buddhist studies for a long time now and continues to do so. As you warm up to the core regions of Thimphu, you can visit the Memorial Chorten called Gongzo Chorten or Gyaldren Chorten. The chorten was built in memory of the Third Druk Gyalpo Jigme Dorji Wangchuck. Later in the list of places to visit are the Handicrafts Emporium, the Textile Museum and the local market. You shall also have the opportunity to visit Trashichhodzong (fortress of glorious heritage), which houses the office of His Majesty the King Jigme Khesar Namgyel Wangchuck. It is the seat of the government while it also houses the monks of the central monastic body.", "score": 33.07137989686075, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "The Jambay Lhakhang is one of the oldest temples in Bhutan and it was founded by Songtsen Gampo, a Tibetan King in the 7th century A.D. 
The king was reputed to have built 108 temples known as Thadhul – Yangdhul (temples on and across the border) in a day to subdue the demoness who was residing in the Himalayas. The Jambay Lhakhang is located in the Bumthang valley in central Bhutan on the way to the Kurje Lhakhang. It’s a ten-minute drive to the temple from Chamkhar town.\nLegend has it that Guru Rinpoche visited the site several times and deemed it exceptionally sacred. Chakhar Gyalpo, the king of the Iron Castle of Bumthang, renovated the temple in the 8th century AD.\nThe first king of Bhutan, Gongsa Ugyen Wangchuck, constructed the Dus Kyi Khorlo (Kala Chakra – Wheel of Time) inside the temple to commemorate his victory over his rivals Phuntsho Dorji of Punakha and Alu Dorji of Thimphu after the battle of Changlingmithang in 1885. Later, Ashi Wangmo, the younger sister of the second king of Bhutan, built the Chorten lhakhang.\nThe main relics include the future Buddha, Jowo Jampa (Maitreya), from whose name the present name of the temple is derived. The lhakhang also houses more than one hundred statues of the gods of Kalachakra built by the first king, in 1887.", "score": 33.02218092432073, "rank": 22}, {"document_id": "doc-::chunk-1", "d_text": "The temple, precariously perched on a hair-raising ravine about 1,000 metres above the valley floor, is considered sacrosanct as it was in a cave within this temple that the eighth century tantric saint, Padmasambhava, subdued the evil spirits who obstructed the teachings of the Buddha. The saint is believed to have come to Taktshang in a fiery wrathful form riding a tigress. Over the years, many Buddhist saints have meditated in and around the temple and discovered numerous hidden treasure teachings.\nVisit the ruins of Drugyel Dzong en route. The fortress, known as the “Castle of the Victorious Drukpa”, is a symbol of Bhutan’s victory over the Tibetan invasions in the 17th and 18th centuries. 
We can also get a view of the sacred mountain, Jumolhari, along the way. On the way back to our hotel, we will visit the 7th century Kyichu Temple, believed to have been built on a place that resembled a knee of a giant ogress. Overnight at hotel.\nDay 03: A Sojourn in Thimphu\nHighlights: The power centre and the capital city of the Happy Kingdom. Also the hub of commerce and culture.\nThe drive from Paro to Thimphu is just under an hour. There are a great many places to see in Bhutan’s capital. In the morning we will drive to Buddha Point, which provides a spectacular 360 degree close-quarter view of the entire Thimphu valley and the adjoining areas. This is the site of the world’s tallest statue of Shakyamuni Buddha. Our next destination is the 12th century Changangkha Temple, Takin Zoo and the viewpoint at Sangaygang. On our way back, we stop over at a nunnery, the Folk Heritage Museum and the Textile Museum.\nAfter lunch, we will proceed to Tashichho Dzong, a 17th century castle-fortress which today houses the offices of the King, Chief Abbot and government ministries. We will also take the opportunity to see the nearby parliament complex, the School of Arts and Crafts, the vegetable market, and then spend the rest of the day watching an archery match and strolling around the town.", "score": 32.37339398297998, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "Thimphu Festival Tsechu: Be A Part Of The Festive Spirit Of Bhutan\nTsechu is undoubtedly the most celebrated festival in the country of Bhutan. It is a religious festival which is held annually in all the districts or dzongkhags of Bhutan on the tenth day of a month according to the lunar calendar. The time of Tsechu varies from district to district. The Tsechu is a social gathering of sorts that strengthens the bond between people staying in remote and far-off places in Bhutan.\nThe central highlights of the tsechus are the cham dances.
These dances are performed by dancers wearing costumes and masks and are typically moral vignettes. The dances are also based on incidents from the life of the 9th century Nyingma teacher Padmasambhava and other saints.\nIt is common for most tshechus to unfurl a thongdrel, which is a large appliqué thangka. The thangka usually depicts a seated Padmasambhava encircled by holy beings. It is believed that if a person merely catches a view of the thangka, he or she is cleansed of sin. According to tradition, the thongdrel is raised before dawn and rolled down by morning.\nThe festival of tsechu mainly depends on the availability of dancers since the ritual dance is the main highlight of the festival. Therefore registered dancers are fined if they refuse to perform during the festival.\nTsechus are not mere entertainment events which are held as attractions for tourists. They are the authentic manifestations of religious traditions dating back hundreds of years. It is a privilege that today people from outside are given a chance to witness these sacred rituals and rites. The people of Bhutan sincerely hope that by offering outsiders the privilege of an insight into their culture, they would refrain from infringing on the sanctity or exquisiteness of the ritual.\nOur trip is based on the Thimphu Tashichhodzong where we let our clients witness and be a part of the Annual Thimphu Tshechu. The Thimphu Tshechu is a four-day festival which was initiated by Tenzin Rabgye, the fourth Desi or temporal Ruler of Bhutan, in the year 1687.", "score": 31.82810113675701, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "Would you believe me if I told you that yetis exist in Bhutan and tigers did fly to sacred places? Or maybe if I told you mermaids inhabited the lakes around Bhutan for many years? Well, I would understand your doubts too but legends claim they all did exist.
A nation so mystifying, full of myths and legacies about great saints and fearless warriors, I am sure is enough to entice any wanderers to visit this mysterious land of the Thunder Dragon. Yes, Bhutan is also known as the land of the Thunder Dragon and even the national flag has a Dragon across it symbolizing the purity and fierceness of the country. This was so because of the ferocious storms that rolled in from the mighty Himalayas into the mountains of Bhutan which made it seem like a dragon roaring in the sky flying through dark and downy clouds. Anyhow this article is not about these mystical features about Bhutan but instead, it’s about the facts and specifics. So here are 10 things that you probably didn’t know about Bhutan.\n1. Chilies are Vegetables\nYou might already know about Bhutan’s love for fiery chilies. That can be validated by the endless lines of stringed chilies that are hung from every Bhutanese house’s windows or from the fact that every house has a huge pile of chilies on the roofs kept for drying. So yes, Bhutanese eat a lot of chilies and not in the way other cultures do, as in people around the world treat chilies as a spice but Bhutanese consume chilies as a vegetable. Don’t be surprised, it is the most consumed “vegetable” compared to other food groups. This fact is quite humorous and very accurate too.\n2. No Traffic Lights\nAnother amazing fact about Bhutan is that it does not have a single traffic light in the entire country and no, not even in the capital city of Thimphu. Instead of a robotic light that signals traffic, Bhutanese feel that a human traffic cop managing the traffic feels more personal and closer than a machine. Rightfully so, vehicular traffic is much simpler and pedestrian friendly in Bhutan.\n3. 
Smoking in Public Places and Sale of Tobacco is banned\nNow, this is an absolute delight for the non-smokers as Bhutan has banned the sale of tobacco since 2004.", "score": 30.981709705488523, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "History of Simtokha Dzong\nThe name Simtokha signifies “on a Demon” and the legend associated with the Dzong’s construction tells us that it was built in order to subdue an evil spirit that was harassing travellers in the region.\nThe Simtokha dzong houses countless statues and paintings of various Buddhas, deities and religious figures. Another fascinating aspect of the dzong is that it contains the bedchambers of both Zhabdrung Ngawang Namgyel and Jigme Namgyel, two of the most important figures in Bhutanese history.\nZhabdrung was the leader who first unified Bhutan as a nation, and Jigme Namgyel was the father of the first King of Bhutan, Ugyen Wangchuck.\nThe Simtokha dzong was modelled on the Gyal Gyad Tshel Institute of Ralung (Tibet) and is quite distinctive, as its Utse or central tower has 12 sides.\nA large statue of Yeshay Gonpo (Mahakala), the chief protective deity of Bhutan, is housed inside the Utse.\nThe dzong’s many statues and paintings include The Eight Manifestations of Guru Rimpoche, Jampelyang the Bodhisattva of Wisdom, Shakya Gyalpo the Buddha of Compassion and many more, all carved and painted in stunning detail.\nRecommended Bhutan Tours:", "score": 30.701636321096807, "rank": 26}, {"document_id": "doc-::chunk-6", "d_text": "16 (b,l,d) Tongsa Festival\nWe will spend a full day at the Tongsa Tsechu, the annual religious dance festival that takes place in Tongsa Dzong.
Built in 1647, it is the largest dzong in the country. It is also the ancestral home of the Royal Family, and both the first and second kings ruled the country from Tongsa. The Dzong sits on a narrow spur that sticks out into the gorge of the Mangde-Chu River and overlooks the routes east, west and south. It was built in such a way that in the olden days, it had complete control over all east-west traffic. This helped to augment the strategic importance of the Dzong which eventually placed its Penlop (regional ruler) at the helm of a united country when His Majesty Ugyen Wangchuck became the first king of Bhutan. To this day, the Crown Prince of Bhutan becomes the Penlop of Tongsa before ascending the throne, signifying its historical importance.\nElaborate, spellbinding masked dances at the festival are performed by specially trained monks. From the roof of the temple, monks blow on a pair of long horns, and the sound of cymbals, drums and trumpets fill the air. These dance festivals revive the people spiritually and in many ways refine them culturally because the dances communicate moral lessons, and both the performer and the observer benefit from the exchange. The Bardo dances, the main event of the festival, serve as a reminder to people of their future destiny depending on their past and present deeds. The dance of Noblemen and Ladies tells the story of flirting princesses who are punished for their indiscretions. The dance of the Stag enacts the tale of a hunter who was converted to Buddhism and gave up hunting.\nThis festival is also an occasion for seeing people and for being seen. In olden times it provided the most important opportunity for unmarried men and women to find their life partners. People dress in their finest clothes and wear their most precious jewels. Men and women joke and flirt.\nDay 11 Dec. 
17 (b,l,d) Tongsa to Wangdi (altitude: 4,500 feet)\nAfter breakfast, we’ll drive back westward to Wangdi Valley, crossing once more the Pelela Pass, and spend the night in a small resort by the beautiful river.", "score": 30.39395502320821, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "This extraordinary monastery convinces every visitor that it deserves to be called one of the best places to visit in Bhutan. To visit Tiger’s Nest you must register at the security gate and deposit your baggage and cameras. Taktsang has become one of the most consequential monuments in the Himalayan Buddhist world. At Taktsang, the guru revealed the Mandala of Pelchen Dorje Phurpa and gave inspirational teachings to his devotees. Taktsang Lhakhang is Bhutan's most famous landmark and religious site. The name Taktsang means \"The Tiger's Nest\". This sanctuary is one of the most sacred sites in the kingdom and clings impossibly to a sheer cliff face 900 meters above the Paro Valley.\n2. Dochula Pass (Thimphu)\nDochula Pass is situated at 3,100 m. It is one of the best places to visit in Bhutan, located on the way to Punakha from Thimphu. September to February is considered the best time to capture the panoramic views of the snow-laden Himalayas. Dochula Pass is also known for the Druk Wangyal Chortens, a grouping of 108 memorial stupas commissioned by Her Majesty Ashi Dorji Wangmo Wangchuk. The pass, around 20 km from Thimphu, is not only a place of historical and religious significance but also a well-known attraction that any traveller would want to witness on their journey through Bhutan.\n3. Buddha Dordenma (Thimphu)\nBuddha Dordenma is the tallest bronze Buddha statue in the world.
The 169 ft tall bronze sculpture was cast in Nanjing, China and assembled in Bhutan. This statue is also one of the best places to visit in Bhutan for a spiritual experience. The statue is made of gold-plated bronze, which symbolizes indestructibility. The Buddha Dordenma is situated on a slope in Kuenselphodrang Nature Park and overlooks the southern entrance to Thimphu Valley.", "score": 29.734087056083144, "rank": 28}, {"document_id": "doc-::chunk-2", "d_text": "It was the Drukpa lama, Ngagi Wangchuk (1517-54), the great grandfather of Shabdrung Nawang Namgyel, who founded the first temple at Trongsa in 1543. The landscape around Trongsa is spectacular, and for miles on end the Dzong seems to tease you so that you wonder if you will ever arrive. The view extends for many kilometers and in former times, nothing could escape the vigilance of its watchmen.\nThe Bumthang region encompasses four major valleys: Choskhor, Tang, Ura and Chhume. The Dzongs and the most important temples are in the large Choskhor valley, commonly referred to as Bumthang valley. There are two versions of the origin of the name Bumthang. The valley is supposed to be shaped like a Bumpa, a vessel that contains holy water, while Thang means flat place. The religious connotation of the name aptly applies to the sacred character of the region. It would be difficult to find so many important temples and monasteries in such a small area anywhere else in Bhutan.\nThe Mongar district is the northern portion of the ancient region of Kheng. Hardly more than a stopping place surrounded by fields of maize, it was also the first town built on a mountainside instead of in a valley, a characteristic of eastern Bhutan where the valleys are usually little more than riverbeds, and the mountain slopes, which rise abruptly from the rivers, flatten out as they approach their summits.
Shongar Dzong, Mongar's original Dzong, is in ruins and the new dzong in Mongar town is not as architecturally spectacular as others in the region. Dramtse Goemba, in the eastern part of the district, is an important Nyingmapa Monastery.\nLhuentse is an isolated district although there are many sizeable villages in the hills throughout the region. It is very rural and there are fewer than five vehicles, including an ambulance, and not a single petrol station, in the whole district.\nFormerly known as Kurtoe, the region is the ancestral home of Bhutan's Royal Family. Though geographically in the east, it was culturally identified with central Bhutan, and the route over the Rodung-la was a major trade route until the road to Mongar was completed.", "score": 29.28392022307478, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "There are several dzongs in the country and these structures were historically used both as a military fortress and administrative centres. All the dzongs are still vibrant with the presence of monks and government offices. Punakha dzong is one of Bhutan's most impressive displays of intricate architecture. Since Punakha has a much warmer climate than Thimphu, it is still considered the winter home of the Je Khenpo (Bhutan's Chief Abbot) and Bhutan's Central Monastic Body. Built at the confluence of two glacial rivers, mo-chu (female river) and pho-chu (male river) the dzong has frequently been damaged by floods but has always been restored. The most recent restoration was completed in 2004 and the results are spectacular.\nPart 2 Trekking, Shana to Thathangkha\nDrive from hotel towards north of Paro to trek starting point Shana. You will pass through the Army camp. This is a fairly long day with an undulating trail. You will be walking along parts of trail used as an ancient route to Tibet. The trail continues its gradual climb alongside the Paro chu (river) through forest of blue pine, oak, rhododendrons, ferns, maple and larch. 
Camp at Thathangkha.\nThathangkha - Jongothang\nThis is not a long day but you make a significant gain in elevation. You will pass another army post and a few village houses, crossing the tree line and entering yak country. Camp in open meadows at Jongothang with spectacular views of the sacred mountain Jhomolhari (meaning Goddess of the mountain pass) standing at 7314m.\nJongothang to Tshophu Lakes\nToday's trek to the twin lakes is not long, so there is an opportunity to further engage in the festival before resuming trekking. We pass through a few village houses and enjoy breathtaking views of Jhomolhari, Jitchu Draky (6989m) and the Tshering Gang Mountains. Catch a glimpse of the elusive blue sheep along the route above the lakes.\nTshophu Lakes to Soe Yaksa Village\nCross Bonte la pass at 4890m, which is the highest pass of the trek. Enjoy the beautiful scenery from the pass.", "score": 28.794182775046778, "rank": 30}, {"document_id": "doc-::chunk-2", "d_text": "Built by the Guru Rinpoche in 1652, it houses a rock with his body imprint. Legend has it that Guru Rimpoche manifested as a Garuda to defeat the demon Shelging Karpo who had taken the form of a white lion.\nWe will also visit Jambay Lhakhang, built in 659 by Tibetan King Songtsen Gampo to pin down a demoness who was obstructing the spread of Buddhism. Come October, the Jambay Lhakhang Drup is one of the most colourful festivals in Bhutan.\nThe Matsutake Festival gives the cheerful Uraps a reason for celebration and to have some fun. Spend an exciting day picking mushrooms with the people of Ura, sampling some truly delicious meals, and learning about their art and crafts, their traditional lifestyle, folk songs and dances, and regional food and drink. Participate in the song and dance, glitter and gaiety as the villagers gather in the festival arena in full costume, and cultivate a deeper insight into the rhythms of Bhutanese village life.
Sample freshly gathered mushrooms; try some wild honey or high altitude medicinal herbs and potions, along with other local dishes of wheat and barley. Shop for textiles, cane, bamboo and other regional products.\nThe Valley of Phobjikha is well known as the winter home of the Black Necked Crane (Grus Nigricollis). Bhutan is home to around six hundred black-necked cranes, with Phobjikha being one of the popular places that the birds migrate to in the winter months from the Tibetan plateau. The elegant and shy birds can be observed from early November to the end of March. Overlooking the Phobjikha valley is the Gangtey Goempa. This is an old monastery that dates back to the 17th century.\nGangtey Monastery is one of the main seats of the religious tradition based on Pema Lingpa’s revelations and one of the two main centres of the Nyingmapa school of Buddhism in the country. The well-known Terton Pema Lingpa founded the Monastery in the late 15th century.\nThimphu, the modern capital of Bhutan, is made up of just three main streets. It is one of only two capitals in the world without traffic lights. As the capital of Bhutan, Thimphu offers a rich cultural heritage with places of interest as listed below.", "score": 28.77314331497224, "rank": 31}, {"document_id": "doc-::chunk-1", "d_text": "Living Museum – Simply Bhutan – Dedicated to connecting people to the Bhutanese rural past through the exhibition of artefacts used in rural households.\nThimphu Dzong – The largest Dzong, it is also the seat of the office of the King of Bhutan.\nNational Memorial Chorten – Built in honor of the late King Jigme Dorji Wangchuk.\nDochula Pass – The 108 chortens were built by the present Queen Mother of Bhutan, Ashi Dorji Wangmo Wangchuck, to commemorate Bhutan’s victory over Indian militants and to liberate the souls lost.\nPunakha Dzong – Built in 1637, the dzong continues to be the winter home for the clergy, headed by the Chief Abbot, the Je Khenpo.
It is a stunning example of Bhutanese architecture, sitting at the fork of two rivers, portraying the image of a medieval city from a distance. The dzong was destroyed by fire and glacial floods over the years but has been carefully restored and is, today, a fine example of Bhutanese craftsmanship.\nChhimi Lhakhang – A 20-minute walk across terraced fields through the village of Sopsokha from the roadside to the small temple located on a hillock in the centre of the valley below Metshina. Ngawang Chogyel built the temple in the 15th century after the ‘Divine Madman’ Drukpa Kuenlay built a small chorten there. It is a pilgrimage site for childless women.\nPassing Wangdue Phodrang, one of the major towns and the district capital of Western Bhutan. Located south of Punakha, Wangdue is the last town before central Bhutan. The district is famous for its fine bamboo work and its slate and stone carving. We will pause to view the Wangdue Phodrang Dzong. Built in 1638, Wangdue Dzong is dramatically perched on the spur of a hill and overlooks the confluence of the Tsang Chu and Dang Chu rivers.\nIn the morning, we will hike to the Tamshing Goemba, built in 1501 by the Buddhist saint Pema Lingpa. We will also visit Kurjey Lhakhang, one of the most sacred monasteries in Bhutan.", "score": 28.369341656035786, "rank": 32}, {"document_id": "doc-::chunk-2", "d_text": "Black-Necked Crane – The crane is a wildlife creature that comes to the Phobjikha Valley every winter. It is an endangered species and is celebrated by the Bhutanese annually with the Black-Necked Crane Festival. This 9-day event is intended to bring awareness to the bird’s importance. The Crane Festival is held on the King’s birthday, November 11th.\n- 9. Traditional Textiles – Traditional textiles are generally created by Bhutanese women. The material is handwoven and dyed with intricate patterns making each garment unique. The National Textile Museum is in Thimphu.\n- 10.
Gom Kora – The Gom Kora or Gomphu Kora is a beautiful temple that is covered with Buddhist carvings. This is the site where the Guru Rinpoche meditated. He left his impressions on a rock. It is said that Rinpoche was meditating in a cave and was so startled by an approaching demon that he left his imprint on a rock. Once he turned into a garuda, he left an imprint of his wings on nearby rocks. Guru Rinpoche struck a deal with the demon to allow him to finish his meditations. The deal was sealed with two fingerprints left on the rocks that can still be seen.", "score": 27.42061165814954, "rank": 33}, {"document_id": "doc-::chunk-1", "d_text": "DAY 3: THIMPHU - HIKE TO TANGO AND DRIVE TO PUNAKHA.\nAfter breakfast, we will check out of the hotel. You will then proceed towards Tango Monastery, which is about a 45-minute drive from the capital city. From the mountain base, you will hike for about 30-45 minutes to the Tango Monastery. It is located at a mountain top and today houses a monastic school. The word Tango literally means “Horse head”.\nTango Monastery is built in the Dzong (fortress) style and has a semi-circular shape outside and a prominent main tower with recesses. It covers the caves where originally meditation and miracles were performed by saints from the 12th century onwards. Behind the series of prayer wheels are engraved slates. Inside the courtyard is a gallery, illustrating the leaders of the Drukpa Kagyupa lineage.\nThe monastery was founded by Lama Gyalwa Lhanampa in the 13th century and built in its present form by Tenzin Rabgye, the 4th Temporal Ruler, in 1688. In 1616, the Tibetan, Shabdrung Ngawang Namgyal, meditated in its cave. The Monastery belongs to the Drukpa Kagyu School of Buddhism in Bhutan. According to local legend, the location of this monastery is the holy place where Avalokiteshvara revealed himself as “the self-emanated form of the Wrathful Hayagriva”.\nLate afternoon, we will proceed towards Punakha.
On the way, we’ll pass by Dochula pass, from where a beautiful panoramic view of the Himalayas can be seen on a clear day. There are 108 stupas built by the eldest queen, Ashi Dorji Wangmo Wangchuck, to benefit all sentient beings.\nOvernight at the hotel in Punakha.\nDAY 4: PUNAKHA TOUR\nPunakha is at an altitude of 4420 feet and it served as the capital of Bhutan till 1955. It is the winter seat of the Je Khenpo (Chief Abbot) and the monk body. It has a temperate climate and its rich fertile valley is fed by the Phochu (male) and Mochu (female) rivers.", "score": 27.12489277246142, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Places & Attractions\nLocated at 2230 meters above sea level, Thimphu is the capital city of Bhutan; it combines a natural small-town feel with a new commercial exuberance that constantly challenges the country's natural conservatism and Shangri La image. Thimphu contains most of the important political buildings in Bhutan, including the National Assembly of the newly formed parliamentary democracy, the official residence of the King and Dechencholing Palace of the former king located to the north of the city.\nThe stupa was built in 1974 in memory of the third monarch Jigme Dorji Wangchuck, who passed away in 1972. The tantric images and paintings inside the monument provide a very rare insight into Buddhist philosophy.\nOriginally the dzong was built in 1661 by Je Sherab Wangchuk and renovated in the 1960s by the third King Jigme Dorji Wangchuck. The fortress houses the main secretariat building with the throne room for the King, offices for ministers and the residence for the central Monk Body. The Thimphu Dromche and Tsechu festivals are held once a year in the dzong in autumn. The dzong can be visited before and after the official hours.\nKuensel Phodrang-Buddha Point\nThe Buddha Dordenma, located on a hilltop overlooking Thimphu City, is one of the largest Buddha statues in the world.
The statue is made of bronze and gilded in gold, and can be seen from different parts of the city. The statue houses over one hundred thousand smaller Buddha statues. At a height of 51.5 meters, the statue gives visitors a good overview of the Thimphu valley from Buddha Point (Kuensel Phodrang).\nA single clock tower rests in the heart of Thimphu city, with long dragons facing towards the clock and colorful Bhutanese designs carved onto the surface of the tower. It is located in the center of Thimphu, taking only about 10 minutes to explore; however, the shops themselves merit another hour, as there is much to purchase as mementos of Bhutan.\nThe oldest and first dzong, built by Zhabdrung in 1629, stands on a lofty ridge overlooking the Thimphu valley. At present the dzong is established for higher monastic studies.", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-3", "d_text": "It was in 1222 that the place again got its recognition when Phajo Drugom Zhipo, the propounder of the Drukpa Kagyu School of Buddhism, witnessed the cliff in the form of God Tandin (horse head) or Hayagriva. Tango is the highest center of Buddhist learning in the country; almost every Je Khenpo (religious head of Bhutan) completed the 9-year program there.\nZhabdrung Ngawang Namgyal established the monastery in 1620 with the first monk body. His father's ashes were interred in a richly decorated silver chorten inside the upper goemba after the body was smuggled here from Tibet. Cheri is still an important place for meditation retreats, with 30 or so monks here for the standard three years, three months and three days.\nLiterally means The Palace of Great Bliss. Since 1971 it has housed the state monastic school, and a long procession of monks often travels between here and the dzong. A team of 15 teachers provides an eight-year course to more than 450 students.
The 12th-century paintings in the goemba's Guru Lhakhang are being restored by a United Nations Educational, Scientific and Cultural Organization (UNESCO) project. The upper floor features a large figure of Zhabdrung Ngawang Namgyal as well as the goenkhang (chapel devoted to protective and terrifying deities). The central figure in the downstairs chapel is the Buddha Sakyamuni.\nLocated at 3100 meters above sea level, Dochula Pass is one of the most scenic locations in the entire kingdom, offering a stunning panoramic view of the Himalayan mountain range and some of the highest mountains in the world. The many colorful prayer flags of good fortune and the marvelous sight of the 108 stupas all together add to the exotic scenery of the pass.\nThe Druk Wangyal Lhakhang\nThis temple was built over a period of four years (2004-2008) under the vision and patronage of Her Majesty the Queen Mother Ashi Dorji Wangmo. The Lhakhang honors the courageous service of the Fourth King, who personally led the troops against the insurgents, as well as the regular Armed Forces of the country.\nDochula Meditation Caves\nDiscover the meditation caves tucked into the hills just above the pass.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-1", "d_text": "The statue itself is mentioned in an ancient terma of Guru Padmasambhava himself, said to date from approximately the 8th century, and recovered some 800 years ago by Terton Pema Lingpa (Religious Treasure Discoverer). For me, I am just happy to be blessed with such an amazing view. The Buddha Dordenma overlooks the Southern entrance to Thimphu Valley, and visitors can enjoy a vantage view of Thimphu nestled in the valley below. Thimphu, being the capital city, is the most developed and densely populated area in Bhutan, so this sight of closely-packed buildings is not the norm in other parts of the country, which are mostly mountains, forests and farmlands.
With urbanization, Bhutanese youths are increasingly migrating to Thimphu in search of white-collar jobs and a better life. I wonder how many dreams these buildings hold? One thing I know for sure, Bhutan is not ready to give up its unique cultural identity for modernization, and the little kingdom is gingerly treading the waters of urbanization, step by step, without compromising on the values which they have held closely for centuries. If there is a country left in the world that can find a delicate balance between culture and economic progress, it would be Bhutan.\nOpening Hours: 9:00AM to 5:00PM daily\nMore of my travel adventures in Bhutan\nOne of the most interesting things that you can do in Bhutan is to get your own personalised stamps at the National Post Office in Thimphu. Costing about USD4 for 12 stamps (including the value of the stamp), imagine your friends’ and family’s pleasant surprise when you send home a postcard with your face on the stamp :) Thimphu’s Post Office is right in the city centre.\nSource: Wikicommons (cos’ I was too excited, I forgot to take a photo, bleah)\nYou can have your photo taken on the spot by the friendly post office staff, or bring your own (glamour) photos on a thumbdrive. It takes less than ten minutes to make your very own stamps. Now that’s what I call exclusive edition. While you are there, visit the Bhutan Postal Museum that was recently opened in November 2015 to celebrate the 60th Birth Anniversary of the Fourth Druk Gyalpo His Majesty King Jigme Singye Wangchuck.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Map Courtesy CIA World Factbook\nThe Kingdom of Bhutan is a small, landlocked nation of South Asia, located in the Himalaya Mountains, sandwiched between India and the People's Republic of China. The local name for the country, Druk Yul, means \"land of the dragon\".
It is also called Druk Tsendhen, \"land of the thunder dragon\", as the thunder there is said to be the sound of roaring dragons.\nA Buddhist theocracy was established in Bhutan in the early 17th century. In 1865, Britain and Bhutan signed the Treaty of Sinchulu, under which Bhutan would receive an annual subsidy in exchange for ceding some border land. Under British influence, a monarchy was set up in 1907; three years later, a treaty was signed whereby the British agreed not to interfere in Bhutanese internal affairs and Bhutan allowed Britain to direct its foreign affairs. This role was assumed by independent India after 1947. Two years later, a formal Indo-Bhutanese accord returned the areas of Bhutan annexed by the British, formalized the annual subsidies the country received, and defined India's responsibilities in defense and foreign relations.\nA refugee issue of (reportedly) some 100,000 Bhutanese in Nepal remains unresolved; 90% of the refugees are housed in seven UNHCR camps.\nThis article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article \"Bhutan\".", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-2", "d_text": "It was here in 1907 that Bhutan’s first king was crowned. After lunch, enjoy a walk to Chimi Lhakhang, temple of the Drukpa Kuenly, who is also known as the Divine Madman. He inherited the Divine Madman title since he revolted against orthodox Buddhism in his time. He taught the people that religion is an inner feeling and it’s not necessary that one should be an ordained monk. He is also considered a symbol of fertility and most childless couples go to his temple for blessing. Overnight at your hotel in Punakha.\nDAY 04: PUNAKHA – THIMPHU – HAA\nIn the morning drive to Yabesa village and hike through rice fields up to Khamsum Yueley Namgyal Chorten, built by Her Majesty the Queen Ashi Tshering Yangdon Wangchuk.
Perched high on a hill on the bank of the river, the Chorten houses paintings belonging to the Nyingmapa tradition. Drive back to Thimphu, where you will visit the National Library, housing a collection of Bhutanese scriptures dating back to the 8th century, and a fascinating replica of a medieval farmhouse at the Folk Heritage Museum. You will also have an opportunity to visit handicraft and souvenir stores. If your visit to Thimphu coincides with the weekend, you can walk through the Thimphu Market to see the variety of food of Bhutan, including basket upon basket of fiery chillies, fresh cheese and a variety of fresh greens. In addition, many stalls contain Bhutanese handicrafts and household items. Afterwards, we will depart for Haa, the westernmost valley in Bhutan. This is a beautiful drive that is relatively free of traffic. The road takes us back to Chuzom (river confluence), where we catch a glimpse of the three shrines in Nepali, Tibetan and Bhutanese style which were built to ward off evil spirits, and then traverses left past Dobje Dzong, an ancient prison which now houses a monastery.

Paro was only a transitory stop, and the next morning we moved on to Thimpu, the capital of this 700,000-strong kingdom. Bhutan means 'Dragon Kingdom', 'Land of the Fire Dragon' and 'Land of High Mountains'. The latter is certainly true. The altitude means that, until you acclimatise, even a flight of stairs leaves you breathless.
Another feature of the country is that some of the big population centres have hordes of stray dogs, which sleep all day and bark all night, so earplugs are essential.
There is surely a gap in the market for a daytime dog-waking service to reverse this behaviour.
Our first morning in Thimpu included a visit to the National Memorial Stupa, a monument to the third king, who died at just 49, leaving the kingdom to his then 16-year-old son, who proved a successful leader, introduced democracy to the country, and then abdicated in favour of a new king for a new-style kingdom.
It's said that making three circuits around the stupa brings good luck; needless to say, I put it to the test and trust it will rub off on my books.
The afternoon saw the first of two visits to the annual Tshechu Festival, with huge crowds in their finery turning the main square into a stadium to watch a series of dances, each with its own religious significance and all performed exclusively by monks. We saw the Dance of the Stags and the Hounds (Shawo Schachi); dancers in knee-length skirts and dog and stag masks represent the conversion of a hunter to Buddhism by the saint Jetsun Milarepa.
The next day kicked off with a visit to a 169-foot-tall golden Buddha, still under construction and due to be closed to the public for two years the next day. It's beautiful and awesome, and you can walk around inside, seeing the ornate pillars, other representations of Buddha and the necessities of religious celebration. There is a peaceful atmosphere you don't want to drag yourself away from.
We then returned to the festival, to see more dances from another vantage point. Most of our group expected it to be too much of a good thing, but we really got caught up in the carnival atmosphere, the noise, the colour and the spectacle.

Legend has it he subdued the demoness of the Dochu La with his 'Magic Thunderbolt of Wisdom.' A wood effigy of the lama's thunderbolt is preserved in the lhakhang, and childless women go to the temple to receive a wang (blessing) from the saint.
(B, L, D) Hotel Zangthopelri or Punastangchu
DAY 6: BUMTHANG
We drive to central Bhutan via Pele La (11,152') for views of snow-clad peaks, including Chomolhari (24,000'), Bhutan's 'sacred mountain.' Descending the pass, we'll visit the Chendebje chorten, then we cross another pass, the Yotong La (11,234'), and possibly do a little shopping with the weavers of the Chummi Valley before arriving in Bumthang (9,000'). Padma Sambhava (known as Guru Rinpoche, 'precious master') introduced Buddhism to Bhutan here in 746, and the area continues to thrive as a spiritual center.
(B, L, D) Wangdicholing or Jakar Village Lodge
DAY 7: BUMTHANG
This is the cultural heart of Bhutan and its most beautiful valley. Many monasteries and pilgrimage sites are located here, making it the cultural and historic center of the country. One could spend weeks exploring this fascinating valley, where much dates from the 8th century. The Treasure-Finder, Terton Pema Lingpa, found the sacred Ters of Buddhist texts after diving into Membartsho Lake, which we can visit on an optional hike. We will visit the Jakar Dzong, and the Jambey and Tamshing Lhakhangs, two of the oldest temples, dating as far back as the 8th century, and Kurje Lhakhang. (B, L, D) Wangdicholing or Jakar Village Lodge
DAY 8: TRONGSA
After breakfast we drive to Trongsa through a valley where we will see numerous waterfalls. Trongsa is dominated by the large Trongsa Dzong, built in 1644 by Chhogyel Minjur Tenpa, the official who was sent by the Shabdrung to unify eastern Bhutan.

Thimphu: Thimphu, Bhutan's capital, occupies a valley in the country's western interior. In addition to being the government seat, the city is known for its Buddhist sites. The massive Tashichho Dzong is a fortified monastery and government palace with gold-leaf roofs.
The Memorial Chorten, a whitewashed structure with a gold spire, is a revered Buddhist shrine dedicated to Bhutan's third king, Jigme Dorji Wangchuck.
Paro: Paro is a valley town in Bhutan, west of the capital, Thimphu. It is the site of the country's only international airport and is also known for the many sacred sites in the area. North of town, the Taktsang Palphug (Tiger's Nest) monastery clings to cliffs above the forested Paro Valley. Northwest of here are the remains of a defensive fortress, Drukgyel Dzong, dating from the 17th century.

Monks as well as laymen, dressed in brilliant costumes and wearing masks of both wrathful and peaceful deities, re-enact the legends and history of Buddhism in the Dragon Kingdom. The festival culminates in the spectacular showing of the four-storey-high, 350-year-old Thangkha (Buddhist religious scroll) celebrating the deeds of Padmasambava, who is credited with introducing Buddhism to Bhutan.
The Wangdue and Thimphu Tsechus are in the fall, and they too are most impressive. These festivals are very popular with western tourists. The festivals in Bumthang and East Bhutan attract fewer tourists, and those who want a more authentic flavor of Bhutan's cultural and religious extravaganza will be well rewarded.
Apart from its religious implications, the Tshechu is also an annual social gathering where people dress in their finest clothing and jewellery. A small fair may be organized outside the Dzong for those looking for variety entertainment. Locals attending the festival enjoy a picnic lunch with an abundance of locally brewed alcohol. After the festival they traverse west to east along Bhutan's lateral highway, enjoying the great biodiversity, ranging from conifer forests to banana trees and cactus plants.
Along the route one catches glimpses of various birds and wild animals, and experiences the ancient tradition and culture of the Bhutanese way of life.
The dances performed at this event honoring the 'Guru', known as Cham, are performed to bless onlookers and to teach them the Buddhist dharma, in order to protect them from misfortune and to exorcise all evil. The dancers take on the aspects of wrathful and compassionate deities, heroes, demons, and animals. Zhabdrung Ngawang Namgyal and Pema Lingpa were the main composers of many of the dances. It is believed that merit is gained by attending this religious festival. The dances invoke the deities to wipe out misfortunes, increase luck and grant personal wishes. Onlookers rarely fail to notice the Atsaras, or clowns, who move through the crowds mimicking the dancers and performing comic routines in their masks with long red noses. A group of ladies perform traditional Bhutanese dances during the intervals between mask dances.
No one should visit Bhutan without going to a Tsechu.

Kuensal Phodrang, overlooking the picturesque Thimpu Valley, is one of the iconic landscapes of Thimpu, Bhutan. Also known as Buddha Point, Kuensal Phodrang is famous for housing the Great Buddha Dordenma. The gigantic golden figure of Sakyamuni Buddha has made Kuensal Phodrang a must-visit place in Bhutan and a favoured sightseeing spot in the Thimpu Valley.
Perched at an altitude of 8711 ft, Kuensal Phodrang is the most visible tourist destination seen from Thimpu city. Surrounded by alpine forest, the road leading to Kuensal Phodrang is known to be one of the most picturesque routes in Thimpu.
According to Bhutanese legend, the site for laying the foundation of the Great Buddha Dordenma of Kuensal Phodrang was foretold during the 8th century in an ancient terma of Guru Padma Sambhaba.
Later, after 800 years, the terma prophesying the making of the Great Buddha Dordenma of Kuensal Phodrang was discovered by the Terton Pema Lingpa.
Containing thousands of identical figurines of Sakyamuni Buddha, the Great Buddha Dordenma of Kuensal Phodrang was inaugurated during the celebration of the 60th anniversary of the Fourth King, Jigme Singye Wangchuck, on 25th September 2015.
At a height of 169 ft, the statue of Sakyamuni Buddha at Kuensal Phodrang is one of the world's largest statues.
Kuensal Phodrang is located on the outskirts of the Thimpu Valley. Visitors can reach Kuensal Phodrang by hiring a car from Thimpu. It is an approximately 10-minute drive from Thimpu to Kuensal Phodrang; the distance between the two is 5 km.
To visit Kuensal Phodrang, Bhutan Holidays offers tailor-made package tours with a stay at Thimpu at an affordable price.
There are various hotels and resorts available in Thimpu, from budget to deluxe level. Bhutan Holidays is happy to offer hotel booking assistance to tourists, travellers and visitors.

Bhutan is one of the most popular travel destinations in the world. Bhutan is often referred to as the last Shangri-La on earth, nestled deep in the valleys of the Himalayan mountains south of Tibet. To travelers the country is known as Bhutan, but its people proudly call themselves Drukpa and their country Druk or Druk Yul – the land of the thunder dragon. The number of tourists visiting Bhutan each year is deliberately kept low to protect the country from excessive influence and make the trip a unique experience for every visitor.
A Rich Culture
The people of Bhutan are a rich mosaic of lifestyles and languages. However, they are also united by their friendliness and a unique cultural heritage rooted in Mahayana Buddhism that has remained isolated from western influence.
They also treasure their environment and live in harmony with its elements. The population is mainly concentrated in small towns and villages, and it is in these fascinating places that you can really discover the true spirit of the Bhutanese people.
Bhutan is a place where the mountains, rivers and valleys are abodes of the gods. The constant scenes of hills dotted with ancient temples, monasteries and prayer flags are testament to this, whilst in streams prayer wheels powered by the natural water flow turn day and night. Some sites are amongst the most sacred in the Himalayas, such as Taktsang Monastery in Paro, and the many ancient Buddhist sites in Bumthang, Bhutan's spiritual heartland.
The Bhutan monarchy was formed in 1907 under the leadership of the First King, Gongsar Ugyen Wangchuk. The King of Bhutan is formally known as the Druk Gyalpo, the Dragon King. Bhutan's current King, Jigme Khesar Namgyel Wangchuck, was crowned in 2008. The legacy of the Wangchuck dynasty is one of peace and progress. This includes initiating the drafting of Bhutan's first Constitution.
The stunning Bhutan Himalayan peaks are permanently capped with snow, mostly unclimbed, and tower over dense forests, alpine meadows, lush valleys and rushing streams. Bhutan hosts peaks that reach between 5,000 and 7,000 m (16,000-23,000 ft) in height, neighbors to Mount Everest. The best way to really experience these landscapes is to incorporate a trek into your itinerary.

On a clear day, the pass offers spectacular views over the greater eastern Himalayas, including the highest peak of Bhutan. Dochula Pass is about a one-hour drive from Thimphu city; we stop here for one hour near the spectacular 108 chortens at the top of the pass. On the way to Punakha you will stop for a short hike of about an hour to the temple of the Divine Madman, also well known as the "temple of fertility".
This temple was built by Lama Drukpa Kuenley, the "Divine Madman", who was fond of women and adopted an unorthodox way of teaching Buddhism. He is also the saint who advocated the use of the phallus symbol. Legend has it that couples wishing to have a baby make a wish here in this temple. Many couples from Bhutan and overseas are said to have been blessed with a child after visiting the temple. Visitors can also receive blessings from a replica of Lama Drukpa Kuenley's iron bow and arrow, his scripture, and the phallus, which is a symbolic representation of fertility.
After breakfast start the drive to the ancestral home of the royal family, Trongsa. En route we will take wonderful breaks at Pelela pass at 3300 m and Chendipji before reaching Trongsa. On arrival in Trongsa, we will visit Trongsa Dzong. Overnight in Trongsa, altitude 2200 m.
After breakfast, check out from the hotel and visit the Ta-dzong (watch tower), recently converted into the Trongsa Museum. En route we will stop at the Chumey Yathra Shop, which is famous in the region. In the evening we will explore the tiny Chamkhar town. Overnight in Bumthang, altitude 2650 m.
Bumthang is a very lush green valley with the highest concentration of the most sacred temples in Bhutan. Morning: visit Kurjey Lhakhang, where Guru Rinpoche meditated during the 8th century; Jambey Lhakhang, built in the 7th century by the Tibetan King Songtsen Gempo; and Tamshing Lhakhang, founded in the 15th century by Pema Lingpa, the treasure revealer. We will also visit Kenchosum Lhakhang.

The Buddha Dordenma is an iconic monument sitting atop a forest hill overlooking Bhutan's capital city of Thimphu. Viewable from any part of the city, the massive statue of Shakyamuni is sited amidst Kuensel Phodrang, where the palace of Sherab Wangchuck (the thirteenth Desi Druk, who ruled the country from 1744 to 1763) once stood.
It is one of the largest Buddha Rupas (or statues) in the world, measuring 51.5 metres in height. Made of bronze and gilded in gold, the statue alone cost USD$47 million. Manufactured in China, the statue was cut into pieces and then transported to the site through Phuentsholing (imagine the awe of wide-eyed Bhutanese villagers seeing the gigantic head of Buddha at the back of a moving lorry, priceless).
This is part of a greater whole, which includes the Kuensel Phodrang Nature Park, a 943-acre nature park inaugurated in 2011 to preserve the forests surrounding the statue. The entire project, which took about 10 years and was completed on 25 September 2015, cost over USD$100 million. Locals and tourists alike embrace the park, which is popular for weekend family outings and its biking, hiking and nature trails. The park also hosted the Peling Tsechu, a three-day festival held in May 2016 to commemorate the birth of His Royal Highness Gyalsey Jigme Namgyel Wangchuck. The three-storey base houses a large chapel, while the body itself is filled with 125,000 gold statues of Buddha. The statue is expected to be a major pilgrimage centre and a focal point for Buddhists all over the world to converge, practice, meditate, and retreat. Apart from commemorating the 60th birth anniversary of Bhutan's fourth king, Jigme Singye Wangchuck, it fulfills two prophecies. In the twentieth century, the renowned yogi Sonam Zangpo prophesied that a large statue of either Padmasambhava, Buddha or a phurba would be built in the region to bestow blessings, peace and happiness on the entire world.

(Note, advance notice of at least a week is required)
Colourful Farmers Market, probably the largest domestic market in the country. (Open on Friday, Saturday and Sunday). Followed by Changlimithang Archery Stadium.
On certain occasions, holidays or weekends, many Bhutanese men play archery with friends and colleagues for leisure, and tournaments are sometimes held. Buddha View Point has the gigantic statue of Buddha Dordenma. Buddha Point, a 51.5 m / 169 ft tall giant statue located on a hilltop overlooking the Thimphu valley, is made from bronze and gilded in gold. The throne that the Buddha sits upon is a large meditation hall. Some believe it is the largest of its kind ever to be built in the world. Takin Preserve (closed on Monday): the takin is the national animal of Bhutan. Memorial Chorten, which was dedicated to the third king of Bhutan. This is an impressive shrine, with shining gold spires, tinkling bells, and an endless procession of devotees around it. Enjoy the rich Bhutanese arts and crafts at the Painting School, a.k.a. Zorig Choesum (closed on government holidays and Sunday). Authentic Craft Bazaar, the first ever of its kind, aims to promote Bhutan's craft industry by creating a viable market, which in turn acts to preserve and promote Bhutan's unique culture. The initiative is also expected to bring about equitable socio-economic development in the country. Visitors will find an interesting assortment of genuine Bhutanese handicrafts and textiles available for sale here.
(Optional) Thimphu Post Office. In 2011, following the Royal Wedding, a senior Japanese aide to the Bhutan Post Office introduced a scheme enabling anyone to print their own postal stamps, literally with your picture on them. You can then use them to send postcards, letters and so on back home to your family and friends anywhere in the world, with the stamp itself worth just 30 cents. It costs about $7 for 10 stamps.

The National Symbols of Bhutan
The National Flag is divided diagonally into two equal halves.
The upper yellow half signifies the secular power and authority of the King, while the lower saffron-orange symbolizes the practice of religion and the power of Buddhism manifested in the tradition of Drukpa Kagyu. The dragon signifies the name of the country, and its white colour signifies purity. The jewels in its claws stand for the wealth and perfection of the country.
The National Emblem of Bhutan is a circle that projects a double diamond thunderbolt placed above a lotus. The thunderbolt represents the harmony between secular and religious power, while the lotus symbolizes purity. The jewel signifies the sovereign power, while the dragon (male and female) represents the name of the country, Druk Yul, or the Land of the Thunder Dragon.
The national anthem was first composed in 1953 and became official in 1966. The first stanza can be translated: In the kingdom of the dragon, the southern land of sandalwood, long live the king, who directs the affairs of both state and religion.
Celebrated on 17 December, the National Day commemorates the ascension to the throne of Gongsar Ugyen Wangchuck, the first king of Bhutan, at Punakha Dzong on 17 December 1907.
The national language is Dzongkha. The name literally means language of the Dzong (the fortresses that now serve as the religious and administrative centres of the districts).
The national bird is the Raven. It adorns the royal crown. The raven represents the deity Gonpo Jarodongchen (raven-headed Mahakala), one of the chief guardian deities of Bhutan.
The national animal is the Takin (Budorcas taxicolor), which is associated with religious history and mythology. It is a rare mammal with a thick neck and short muscular legs. It lives in groups and is found above 4000 meters in the north-western and far north-eastern parts of the country. The adult Takin can weigh over 200 kgs.
The national flower is the Blue Poppy (Meconopsis grandis).
It is a delicate blue or purple-tinged blossom with a white filament. It grows to a height of 1 meter, and can be found above the tree line (3500-4500 meters).

Tashichho Dzong in Thimphu with Bhutanese flag (Bhutan)
Dochula pass is located on the way to Punakha from Thimphu, forming a majestic backdrop to the tranquility of the 108 chortens gracing the mountain pass.
Atop a hill in Thimphu sits a massive golden Buddha atop a gilded meditation hall. Inside the 169-foot Buddha Dordenma statue there are 125,000 miniature Buddhas, ranging from 8 to 12 inches tall, encapsulated inside its enlightened bronze chest.
This massive statue of Shakyamuni measures in at a height of 51.5 m, making it one of the largest statues of Buddha in the world. The statue is made of bronze and is gilded in gold.
Buddhism, the official and most prominent religion of Bhutan, gives this country its cultural and religious heritage.
An old Bhutanese woman with a prayer wheel
Elderly citizen of Bhutan
Bhutanese girls ramming mud during a house construction
The Tibetan-style stupa was built in 1974 as a memorial to the third king, Jigme Dorji Wangchuck (1928–72).
Phallus art made of wood in Bhutan
Phallus art in Bhutan
Dochula Pass – decorated with small chortens on the lush green hillside, this place tells an interesting story of spirituality, bravery, and Bhutanese culture.
The Punakha Dzong, also known as the Pungtang Dechen Photrang Dzong, literally translates to the 'palace of great happiness or bliss'.
The Druk Wangyal Lhakhang was built as a memorial to celebrate 100 years of monarchy in Bhutan.
Chorten or Stupa is an important religious monument in Buddhism, symbolizing Buddha's presence.

And then there are the atsaras - clowns sporting fiendish masks, making lewd gestures and cracking
salacious jokes - who mingle on the periphery of the performance, are entitled to mock both spiritual and temporal subjects, and through their distractions infuse a lighter side into otherwise serious matters. The whole gathering begins to resemble a country fair, as the jolly and convivial assembly - many turning out in their vibrant finery - further entertains itself in lively conversation, the playing of an assortment of games and the imbibing of copious amounts of food and alcohol. Tshechus may end with the bestowing of powerful blessings, delivered orally by a high lama or visually with the unfurling of a huge appliqué thangka representing Guru Rinpoche and his Eight Manifestations. The commanding backdrop of a monastic fortress, the visual extravagance of the dances, the cacophony of musical accompaniments, the solemnity of chanting mantras, the artistic splendor, the unfamiliar smells and the overall exuberance of the diverse crowd lend the scene an extremely exotic air.
Bhutan is the only country to maintain Mahayana Buddhism in its Tantric Vajrayana form as the official religion. The main practicing schools are the state-sponsored Drukpa Kagyupa and the Nyingmapa. Buddhism transects all strata of society, underpinning multiple aspects of the culture. Indeed, religion is the focal point for the arts, festivals and the daily lives of a considerable number of individuals. The presence of so many monasteries, temples and stupas, monks and tulkus (reincarnations of high lamas) is indicative of the overarching role religion plays throughout the nation.
Although Buddhism and the monarchy are critical elements, it is the general extensive perpetuation of tradition that is possibly the most striking aspect of Bhutan's culture. This is most overtly reflected in the nature of dress and architecture.
All Bhutanese continue to wear the traditional dress: for men and boys the gho, a long gown hitched up to the knee so that its lower half resembles a skirt; for women and girls the kira, an ankle-length robe somewhat resembling a kimono. The apparel is generally colorful, and the fabrics used range from simple cotton checks and stripes to the most intricate designs in woven silk.

Description - Itinerary
FIX Departure Tour
02 Nights - Thimpu
01 Night - Punakha
02 Nights - Paro
DAY 01 ARRIVAL AT PARO AIRPORT – THIMPHU
On arrival at Paro Airport, and after completing your Visa / Permit formalities, you will be received by our Bhutan representative, who will assist you in boarding your vehicle for transfer to Thimphu (2,320 m / 7,656 ft; 65 km / 1½ to 2 hrs). Thimphu is the capital town of Bhutan and the centre of government, religion and commerce; it is a unique city with an unusual mixture of modern development alongside ancient traditions. Although not what one expects from a capital city, Thimphu is still a fitting and lively place. Home to civil servants, expatriates and the monk body, Thimphu maintains a strong national character in its architectural style. On arrival in Thimphu, check in to the hotel. Evening free at leisure. Overnight at Hotel.
DAY 02 THIMPHU
After breakfast go for Thimphu sightseeing covering – Memorial Chorten - The Chorten was built in 1974 to honor the 3rd King of Bhutan, Jigme Dorji Wangchuck (1928–1972), and is a prominent landmark in the city with its Golden Spires and Bells. In 2008, it underwent extensive renovation. It is popularly known as "the most visible religious landmark in Bhutan". Tashichho Dzong (All tourists visiting Dzongs and temples must be dressed appropriately.
No half pants, sleeveless shirts, floaters, etc. are allowed) – Also known as the "Fortress of the Glorious Religion", Tashichho Dzong, Thimphu, was initially built in 1641 and later rebuilt in its present form by King Jigme Dorji Wangchuk in 1965. The Dzong houses the Main Secretariat Building, which contains the Throne Room of His Majesty, the King of Bhutan. The National Assembly Hall is housed in a modern building on the other side of the river from the Dzong. During the warmer summer months, the monk body, headed by His Holiness the Je Khenpo, makes its home in the Dzong.

Many people have never heard of Bhutan, the country that values Gross National Happiness over Gross National Product! Bhutan is a landlocked little country roughly the size of Switzerland. It is bounded on the north and northwest by Tibet, with India touching its remaining borders and Nepal a bit to the west. Virtually the entire country is mountainous, peaking at 24,777 ft. From north to south it features three geographic regions: the high Himalaya of the north, the hills and valleys of the centre, and the foothills and plains of the south.
For centuries Bhutan has remained isolated from the rest of the world. Since its doors were opened in 1974, visitors have been mesmerized by the beautiful and pristine country and the hospitable and charming people. The best time to visit is October and November and during major festivals. The climate is best in autumn, from late September to late November, when skies are clear and the high mountain peaks are visible. It's not unusual to experience rain no matter what the season, but I recommend avoiding the monsoon season, June-August, when buckets of rain come down.
Buddhism was probably introduced in Bhutan around the 2nd century although, traditionally, its introduction is credited to the first visit of Guru Rinpoche in the 8th century.
Before that the people followed a shamanistic tradition called Bon, which still exists today, merged with their Buddhist traditions.
Guru Rinpoche is the most important figure in Bhutan's history, regarded as the second Buddha. His miraculous powers included the ability to subdue demons and evil spirits, and he preserved his teachings and wisdom by concealing them in the form of terma (hidden treasures) to be found later by enlightened treasure discoverers known as tertons. One of the best known of these tertons was Pema Lingpa; the texts and artifacts he found, the religious dances he composed, and the art he produced are vital parts of Bhutan's living heritage.
The largest and most colorful festivals (tsechus) take place at Bhutan's dzongs and monasteries once a year, in honor of Guru Rinpoche. Tsechus consist of up to five days of spectacular pageantry, masked dances and religious allegorical plays. These festivals play a large part in the Buddhist teachings and are also social gatherings.

The National Emblem of Bhutan is a circle that projects a double diamond thunderbolt placed above a lotus. There is a jewel on all sides, with two dragons on the vertical sides. The thunderbolt represents harmony between secular and religious power, while the lotus symbolizes purity. The jewel signifies sovereign power, while the dragons (male and female) stand for the name of the country, Druk Yul, or the Land of the Dragon.
The national flag is rectangular in shape and divided into two parts diagonally. The upper yellow half signifies the secular power and authority of the king, while the lower saffron-orange symbolizes the practice of religion and the power of Buddhism, manifested in the tradition of Drukpa Kagyu. The dragon signifies the name and purity of the country, while the jewels in its claws stand for the wealth and perfection of the country.
The national flower is the Blue Poppy (Meconopsis grandis).
It is a delicate blue or purple-tinged blossom with a white filament. It grows to a height of 1 meter on the rocky mountain terrain found above the tree line of 3500-4500 meters. It was discovered in 1933 by a British botanist, George Sherriff, in a remote part of Sakteng in eastern Bhutan.
The national tree is the cypress (Cupressus torulosa). Cypresses are found in abundance, and one may notice big cypresses near temples and monasteries. The cypress is found in the temperate climate zone, between 1800 and 3500 meters. Its capacity to survive on rugged, harsh terrain is compared to bravery and simplicity.
The national bird is the raven. It ornaments the royal crown. The raven represents the deity Gonpo Jarodongchen (raven-headed Mahakala), one of the chief guardian deities of Bhutan.
The national animal is the Takin (Budorcas taxicolor), which is associated with religious history and mythology. It is a very rare mammal with a thick neck and short muscular legs. It lives in groups and is found in places above 4000 meters altitude, in the north-western and far north-eastern parts of the country. They feed on bamboo. The adult takin can weigh over 200 kgs.
Bhutan is a multi-lingual society. Today about 18 languages and dialects are spoken all over the country.

The museum's collections include displays of spectacular thangkas, bronze statues, Bhutan's beautiful stamps, and the Tshogshing Lhakhang (Temple of the Tree of Wisdom), with its carvings depicting the history of Buddhism. (B, L, D) Tenzinling Resort
DAY 3: THIMPHU
After breakfast, we will drive above Bondey to a trailhead where we can walk about 50 minutes to Dzongdrakha, a small monastery overlooking the Paro Valley built in the 16th century by the first local king, Chogay Dragpa. This monastery is one of five in the area. There is also a large stupa similar to that of Bodhnath in Kathmandu, Nepal. After lunch, we drive to Thimphu.
The rest of the day is free to rest and relax. (B, L, D) Hotel KISA\nDAY 4: THIMPHU\nAfter breakfast we drive along the north road and hike 2 hours up a steep hill to visit Cheri Monastery, which was built in 1620 by Shabdrung Ngawang Namgyal. More than 80 monks are undergoing their three-year, three-month retreat here. Inside is a silver chorten holding the ashes of the Shabdrung's father. On our return we'll visit Tango Goemba, founded in the 12th century. The present building was constructed in the 15th century. It is now a Buddhist institute for higher learning. (B, L, D) Hotel Kisa\nDAY 5: PUNAKHA\nWe drive across the Dochu La pass, where we might see the snow-covered peaks of the eastern Himalaya if the weather permits. The pass is marked by hundreds of colourful fluttering prayer flags and is an awesome sight with its one hundred and eight stupas. We continue to Punakha, winter seat of the highest lama in Bhutan until the 1950s. We will visit the ancient Punakha Dzong dating back to the 17th century, which is spectacularly situated at the confluence of the Mo and Phu Rivers.\nWe will also have an optional short hike across the rice paddies to visit nearby Chimi Lhakhang, built by Lama Drukpa Kunley.", "score": 23.642463227796483, "rank": 55}, {"document_id": "doc-::chunk-2", "d_text": "She added another new structure to the temple called the Guru Lhakhang. As one of the oldest Lhakhangs, it houses many important relics. One of the most important relics of the temple is a 7th century statue of Jowo Sakyamuni, which is believed to have been cast at the same time as its famous counterpart in Lhasa, Tibet.\nThere are 2 orange trees located in the courtyard of the temple; there is a belief amongst the locals that these orange trees bear fruit all year long. 
This site is one of the most sacred holy sites in all of Bhutan, and our company's travel consultants recommend every traveler visit this sacred temple.\nDay 02: Paro Tshechu (Festival)\nAfter an early breakfast, attend the first day of the Tshechu, which is held in the courtyard of Paro Rinpung Dzong. This is the main Secretariat Building, where the government offices and the living quarters of the monk body and its Chief Abbot are housed. Witness the festival for the entire day.\nThe festival includes the Dance of the Black Hats (Shana / Ngachamp), the Dance of the Noblemen and Ladies (Phole Mole), the Dance of the Drum Beaters from Dramitse (Dramitse Ngachamp), and the Dance of the Stag and the Hounds (Shawa Shachi). Evening: Thimphu exploration; visit the National Memorial Chorten.\nDay 03: Paro to Thimphu: After a 50-minute drive, check into the hotel and, after tea, visit the National Memorial Chorten, conceived by Bhutan’s third king, H.M. Jigme Wangchuck, who is also known as the ‘’Father of Modern Bhutan’’. He wanted to erect a monument carrying the message of world peace and prosperity. However, he was unable to give shape to his idea in his lifetime due to the pressures of state and other regal responsibilities. After his untimely demise in 1972, the Royal Family and Cabinet resolved to fulfill his wishes and erect a memorial that would perpetuate his memory and also serve as a monument to eternal peace, harmony and tranquility. The National Memorial Chorten is located in the centre of the capital city, Thimphu, and is designed like a Tibetan-style chorten. The chorten is patterned on the classical stupa design, with a pyramidal pillar crowned by a crescent moon and sun.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "\"If ever there was a place where nature and man conjured to create their dearest image, it must be the Paro Valley. 
To the north, Mount Chomolhari (mountain of the Goddess) reigns in white glory, and the glacier waters from its five sister peaks plunge torrentially through deep gorges, finally converging to form the Paro River that nourishes the rice fields and fruit orchards of the Paro valley.\nTakshang, literally meaning Tiger's Nest, built around a cave in which Guru Rimpoche (Padmasambhava) meditated, clings seemingly impossibly to a cliff of rock 3,000 feet (900m.) above the valley floor. For local people it is a place of pilgrimage, but for a tourist, the hike to the viewpoint opposite the monastery is exhausting, thrilling and mystical. \"\nThimpu, the capital of Bhutan, lies at an elevation of 7,600 feet in a valley traversed by the Thimpu River. Tashichho Dzong, the main secretariat building, houses the Throne Room of the King of Bhutan, the summer residence of the Central Monk Body and the National Assembly Hall. The city of Thimpu is nothing like what a capital city is imagined to be. Nevertheless, for Bhutan it is a fitting and lively place. The shops vie with each other, stocked with varieties of commodities ranging from cooking oil to fabrics. Old wooden houses stand side by side with newly constructed concrete buildings, all painted and constructed in traditional Bhutanese architectural style.\nBlessed with a temperate climate and drained by the Phochu (male) and Mochu (female) rivers, the fertile valley of Punakha served as the capital of Bhutan, and even today it is the winter seat of the Je Khenpo (Chief Abbot) and the Central Monk Body. In 1637, Shabdrung Ngawang Namgyal built Punakha Dzong at the junction of the Phochu and Mochu rivers to serve as both the religious and administrative center of Bhutan. 
Punakha Dzong houses many sacred temples, including the Machen, where the embalmed body of Shabdrung Ngawang Namgyal lies in state.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-1", "d_text": "Overnight at your hotel in Thimphu.\nDAY 02: THIMPHU\nAfter breakfast, a short drive to Kuenselphodrang, a popular vantage point with one of the biggest Buddha statues in the world. This site offers a panoramic view of the capital below and also has several walking trails, which range from leisurely to moderate. Then visit the National Memorial Chorten, built in the memory of the Third King and for world peace. Continue on to the picturesque 12th century Changangkha Temple and the Nunnery at Zilukha.\nVisit the Folk Heritage Museum, featuring an exhibition of items and artifacts of Bhutanese villages and rural households. After visiting the museum we will walk to the School for Arts & Crafts, which is located close to the museum. This is one of the interesting schools where young boys & girls learn 13 different skills of arts & crafts in Bhutan.\nStroll around T-town in the evening.\nOvernight at your hotel in Thimphu.\nDAY 03: THIMPHU – PUNAKHA\nDrive over the Dochu-La pass (3,100 meters), which on a clear day offers an incredible view of Himalayan peaks, before descending into the balmy Punakha valley (about 3 hrs total driving time). The drive through the countryside affords a glimpse of everyday life in this most remote of Himalayan kingdoms. In the Dochu-La area there are vast rhododendron forests that grow to tree size and bloom in late April/early May, covering the mountains in a riot of glorious spring colour.\nPunakha was the ancient capital of Bhutan. On arrival, visit Punakha Dzong, the “Palace of Great Happiness” built in 1637 by the Shabdrung, the ‘Unifier of Bhutan’.
It is situated at the confluence of the Mo Chu and Pho Chu (Mother and Father Rivers) and is the winter headquarters of the Je Khenpo and the hundreds of monks who move en masse from Thimphu to this warmer location. The three-story main temple of the Punakha Dzong is a breathtaking example of traditional architecture, with four intricately embossed entrance pillars crafted from cypress and decorated in gold and silver.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "Bhutan: Hopscotching an unsullied land among the Himalayas\nWedged between the world's most populous countries, China and India, the Himalayan kingdom of Bhutan is famous for measuring its Gross National Happiness instead of Gross National Product. This, in light of the global economic meltdown, seems particularly auspicious.\nSettled by Tibetans, the Land of the Thunder Dragon is the size of Switzerland and has a population of about 600,000. Called the best preserved Asian society, Bhutan has safeguarded its identity and traditions against Western influence. It’s never been colonized. The first road wasn’t built until 1960; today, there is just one stoplight in the country. TV didn’t arrive until 1999. There are no chain stores. Males and females both sport the handsome national costume, the kira for women and the gho for men. The houses are gingerbread-like, made of wood and stone.\nYet there is a palpable tension between the old and new, brilliantly captured in the film Travelers & Magicians. Both English and Dzongkha, the native language, are taught in school. There are no billboards and cigarettes are illegal, yet promiscuity is widespread and practiced without judgment. Crowned in November 2008, the handsome new king is the continuation of a beloved 100-year dynasty. 
He has slicked-back black hair and sideburns; he looks like a cross between Bruce Lee and Elvis.\nPerhaps the most effective hedge against the kind of cultural corrosion that has occurred in Nepal and Tibet is the provision that tourists must spend a minimum of $200 a day and hire a local guide. The government has a virtual velvet rope around the country, issuing only a limited number of visas annually.\nOnly the well-heeled visit Bhutan; you won’t find any backpackers here. As a result, it can be difficult to penetrate the protective veil. Visiting Bhutan is equal parts captivating and maddening. Still, it is a singular experience in a rapidly homogenized world.\nParo in Western Bhutan\nIt’s Sunday Market Day in Paro, Western Bhutan. Flies buzz around a severed cow’s head and vendors squat in the open air, selling apples and turnips and spices from burlap bags. Juniper incense wafts through the air.", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-1", "d_text": "Next is the "takin reserve" to see the national animal. The takin is an odd-looking beast that might result from the union of a goat and an antelope. The takin is revered for its resilience in harsh conditions (even though it is endangered) and is also a part of Bhutanese mythology.\nYou will also make quick visits to several other places, including the National Library (which houses ancient manuscripts) and the School of Arts and Crafts, which teaches thirteen different crafts, from weaving and embroidery to carpentry and blacksmithing. We will also squeeze in a stop at the main post office for Bhutan's collectable stamps, which make great souvenirs and gifts.\nDay Two: Wangduephodrang, a Tea Break and a Panorama of the Himalayas\nDepart Thimpu this morning on a three-hour drive to Wangduephodrang. We take a tea break along the way at Dochu La (3,100 meters), where on a clear day you can get spectacular views of the Himalayas. 
This is a strategic pass that connects the eastern and western halves of the country. The remarkable natural beauty and the many prayer flags make this seem a very serene place.\nAfter lunch there is a short detour to Punakha, the former capital of Bhutan. Here you will visit the Punakha Dzong, which dates to 1637 and is considered one of the most beautiful in Bhutan.\nOn the way back from Punakha, you will stop at Metshina Village and take a 20-minute walk through the rice fields to Chimi Lakhang, a fertility temple, where you may receive a special fertility blessing if you think you need one. This temple is dedicated to a saint who is popularly known as "the Divine Madman." Although he was not a monk, he taught a revolutionary alternative to orthodox Buddhism and is remembered for his sexual prowess. Many homes and temples in this region are decorated with flying phalluses as a symbol of fertility and as a gesture of respect for the Divine Madman.\nOvernight in Wangduephodrang or neighboring Punakha.\nDay Three: Phobjikha, an 8th Century Monastery, and its Legendary Cranes\nToday's excursion is to Gantey, which takes you through dense forests of oak trees and rhododendrons.", "score": 22.27027961050575, "rank": 60}, {"document_id": "doc-::chunk-1", "d_text": "Kuensel Phodrang (Buddha Point) - You can pay your obeisance and offer prayers to the Buddha, the largest statue in the country, and then walk around and take a glimpse of the valley. Changangkha Lhakhang (Monastery) (All tourists visiting Dzongs and temples must be dressed appropriately. No half pants, sleeveless shirts, floaters, etc. are allowed) - This popular fortress-like temple perched on a ridge above central Thimphu regularly hums with pilgrim activity. It was established in the 12th century on a site chosen by Lama Phajo Drukgom Shigpo, who came from Ralung in Tibet. 
Parents traditionally come here to get auspicious names for their newborns or blessings for their young children from the protector deity Tamdrin (to the left in the grilled inner sanctum, next to Chenresig). Don't leave without taking in the excellent view from the back Kora (pilgrim path), with its lovely black and gold prayer wheels. Next, visit the Motithang Takin Preservation Centre to see the rare "Takin", which was declared the National Animal by the Royal Government of Bhutan on 25th November 1985. Why Bhutan selected the takin as its national animal is bound up with Bhutanese religion and mythology: the story dates to the time of Lama Drukpa Kuenley (1455 – 1529), the Divine Madman and one of Bhutan's favorite saints, known for his outrageous antics. One day his devotees gathered to witness his magical power and asked him to perform a miracle. However, the saint, in his usual unorthodox and outrageous way, demanded that he first be served a whole cow and a goat for lunch. Having devoured both and leaving only the bones, he stuck the goat's head on the bones of the cow. To everyone's amazement, upon a command uttered by Lama Drukpa Kuenley, the animal came to life, arose, ran to the meadow and began to graze. The animal came to be known as “Dong Gyem Tsey” and can still be seen grazing in the mountain meadows of the Kingdom. In this centre you can see takin, barking deer & sambar deer. To see the takin, one needs only to walk around the enclosure.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "Although the interiors of some temples, monasteries and Dzongs [fortresses] are forbidden to foreign travelers at present, tourists can still get a good insight into the unique cultural heritage of the Kingdom. The closure of religious institutions is to ensure that monastic life can continue unhindered.\nOne of the best agricultural regions of the country, Paro is also one of the most affluent. 
Fields cover most of the valley floor, while hamlets and isolated farms dot the countryside. The houses of the Paro valley are considered to be among the most beautiful in the country. Paro is also the site of one of Bhutan's most impressive buildings – Paro Dzong. The famous monastery of Traktang and the ruins of Drukyul Dzong are nearby.\nThimphu lies in a wooded valley, sprawling up a hillside on the west bank of the Thimphu Chhu [Chhu means river]. Thimphu is unlike any other world capital. Small and secluded, the city is quiet, and there are never the traffic jams familiar in other Asian capitals. It is often said that Thimphu is the only world capital without traffic lights. Thimphu's main shopping street is a delight not so much for what you can buy there, but for the picturesqueness of the architecture and national costume. Beautiful weaves in wool, silk and cotton, basketwork, silver jewellery, thangkas and other traditional crafts of the Kingdom are available in various handicraft emporiums.\nPunakha plays a primordial role in the history of Bhutan; it was the country's winter capital for 300 years. Punakha Dzong, or Punthang Dechen Phodrang, was built in 1637. The Dzong resembles a gigantic ship exactly covering a spit of land at the confluence of two rivers. The history of Punakha Dzong dates back to the year 1328, when a saint named Ngagi Rinchen built a temple there which can still be seen today opposite the great Dzong. Shabdrung Nawang Namgyel, a key figure in the history of Bhutan, built Punakha Dzong, and his body is preserved in one of the Dzong's temples, Machen Lhakhang.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "The museum exhibits live weaving, with the major weaving techniques, styles and textiles made by the different regions of Bhutan.\nJungshing Handmade Paper Factory: Traditional handmade papers made from the bark of daphne species are used to this day to print and handwrite Buddhist scripts. 
The process of making desho paper is on display, with various products for sale.\nFarmer’s Market: Sitting on the banks of the Wangchu (river), the Centennial Farmer’s Market is the hub for farmers from 20 districts. Everything from local organic vegetables, fruits and rice to dried meat, local honey, homemade food and incense of all kinds is available for the residents of Thimphu. Across a cantilever bridge on the other side of the river, a variety of handicrafts from Bhutan and the South Asian region are on display.\nCraft Bazaar: This line of temporary sheds is specially dedicated to authentic Bhutanese arts and crafts from all over Bhutan.\nBuddha Dordenma: The 51-meter-tall statue of Buddha Dordenma sits physically and metaphorically overlooking the Thimphu valley.\nDAY 3 - THIMPHU TO PUNAKHA (1,300 MASL)\nPunakha valley is the ancient capital of Bhutan and the seat of the first secular and religious head of Bhutan, the Zhabdrung. It now serves as the winter residence of the religious head of the state, the Je Khenpo.\nDochula: A very dramatic drive over the mountain pass of Dochu La (3050 masl), with its 108 Druk Wangyal Chortens (stupas) and meditation caves. Weather permitting, the view of the Himalayan ranges is spectacular from this point. With a café and meditation caves around the area, spend some time here, then start the hike down toward the Botanical Park for a picnic lunch.\nChime Lhakhang: The legendary Chime Lhakhang is a sacred temple built in honour of the Divine Mad Man, Drukpa Kuenley, and the legend of the phallus that conquered all the evils obstructing the spread of Buddhism in Tibet and Bhutan. 
On a more profound level, the temple and its legendary icon expose the mediocrity and hypocrisies of the mind often cloaked in religious sanctity.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "Top 10 Best Places to Visit in Bhutan\nBhutan - Officially the Kingdom of Bhutan (འབྲུག་རྒྱལ་ཁབ་, Druk Gyal Khap) is one of the most exciting and adventurous tourist destinations in South Asia and is also known as the Land of the Thunder Dragon. This Buddhist Thunder Kingdom is a small but happy and unique country in South Asia with a ‘must go’ tag. Its sheltered, amazing destinations, the best places to visit in Bhutan, make the country all the more stimulating. The name Bhutan is thought to be derived from the Sanskrit word ‘Bhoṭa-anta’, which means 'end of Tibet'. Bhutan is a landlocked country with a population of about 0.7 million.\nThe official language of Bhutan is Dzongkha, or Bhutanese, a Sino-Tibetan language spoken by over half a million people in Bhutan. Bhutan is encircled by Tibet in the north and India in the south. School education and healthcare are absolutely free in Bhutan. It is a marvelous spiritual place. Politically, Bhutan is a democratic and constitutional monarchy. Buddhism is the foremost religion, with Hinduism the second most prevalent faith. Most of the people are involved in agriculture, with rice, fruit and dairy industries.\nBhutan has the world’s highest unclimbed peak, Gangkhar Puensum, and many fascinating things the world should know about. Bhutan is an isolated country renowned for its culture, beauty, happiness, peacefulness, and society. National dress is obligatory in Bhutan. Being the only nation whose citizens have a constitutional compulsion to preserve the environment, Bhutan is 60% covered with forest. The “Takin” is Bhutan’s national animal, which looks like a cow or a goat. 
One of the interesting things about Bhutan is that it is the only nation where tobacco is banned.\nThere are many places, explored and unexplored, in Bhutan. Of these, the top 10 explored tourist destinations are mentioned below:\nTable of Contents\n1. Tiger’s Nest (Paro Taktsang)\nTaktsang Monastery, also known as Tiger’s Nest Monastery, is located in Paro, Bhutan.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "The bustling frontier trading town of Phuentsholing in the south is the gateway to Bhutan for overland travellers from India and Sikkim. It is Bhutan’s second largest town, and is located next to the Indian town of Jaigon. Karbandi Monastery is a popular temple for those wishing to have children, after an Indian pilgrim became pregnant after praying there. It also provides wonderful views over Phuentsholing and the Bengal Plain. From Phuentsholing, the road winds north over the southern foothills, through lush forested valleys and around the rugged north-south ridges of the inner Himalayas to the western valleys of Thimphu and Paro. Hairpin corners on this breathtaking six-hour drive are, to reassure the traveler, marked with tall, colourful sculptures of the Tashi Tagye, the eight auspicious signs of Buddhism.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-3", "d_text": "One of the most distinct features of the Chorten is its outward-flaring rounded part, which makes the Chorten look more like a vase than the classical dome. The interior of the Chorten has a large number of paintings of Tantric deities, in explicit sexual poses that can sometimes be a little disconcerting to visitors.\nOvernight at hotel in Thimphu\nDay 4: Thimphu sightseeing.\nEarly breakfast, then visit the following:\nThe Tashichho Dzong is a Buddhist monastery cum fortress at the northern edge of Thimphu, the capital city of Bhutan. 
The Dzong was built on the western bank of the river Wangchuk, and has historically served as the seat of the Druk Desi, or Dharma Raja, of Bhutan's government. After the king assumed power in 1907, this post was combined with that of the King, and Thimphu served as the summer capital of the kingdom before becoming the full-time capital.\nThe original Thimphu Dzong (the Doe-Ngye Dzong) is said to have been constructed in 1216 by Lama Gyalwa Lhanangpa, and was later taken over by Lama Phajo Drukgom Shigpo before the Dzong was conquered by Shabdrung Ngawang Namgyal, who found the Dzong to be too small and expanded it into what is now known as the Tashichho Dzong, called the ‘’fortress of glorious religion’’. It was erected in 1641 and was subsequently rebuilt by King Dorji Wangchuck in the 1960s.\nThe Dzong has been the seat of the Royal government since 1952 and presently houses the Throne Room and the King's secretariat. The Tashichho Dzong is also home to several ministries of the Bhutanese government and the Central Monk Body, which is the apex organization of the country’s main spiritual order. The monument welcomes visitors during the Thimphu Tshechu festival, which is held in autumn each year. The Dzong's main structure is a two-storied quadrangle with a three-storied tower on each of its four corners.\n-Folk Heritage Museum:\nThe Folk Heritage Museum was opened to the general public in 2001 upon completion.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "Thimphu, the capital of Bhutan, is a city like no other. One of the first things to strike you will be the fact that there are no traffic signals and a human cop stands at all junctions manning the traffic in perfect order.\nLined with monasteries and dzongs, all set in the beautiful valley of the Raidak river, there are quite a lot of things to do in Thimphu.\nPhoto: A traffic cop at his Kiosk / CCo\nThimphu Travel Guide\nThe capital city has a lot to take in. 
The splendid Trashi Chhoe Dzong gives a monastic air to the place, and it is also host to many colorful Tsechu festivals.\nYou can even take a stroll by the weekend markets, which allow you to taste some of the finest dried fish along with pork and other items too.\nPhoto: Streets of Thimphu/CCo\nThings To Do In Thimphu\nTashichho Dzong is a remarkable architectural structure which currently houses the throne room and the seat of the government. More commonly known as the Thimphu Dzong, it is also called the ‘fortress of glorious religion’.\nThe impressive structure, with whitewashed walls, golden, red and black wood, and lush gardens against a backdrop of blue sky and flourishing green valleys, emanates a look of splendor.\nPhoto: Tashichho Dzong /The Art of Travel Partners\nOne of the largest Buddha statues in the world, Buddha Dordenma is a masterpiece of architectural wonder. The massive bronze statue is more than 50 meters in height and covered in gold. Apart from this, it also houses 125,000 other Buddha statues that are placed surrounding the Buddha Dordenma statue.\nSitting royally atop a great meditation hall, this Buddha statue exudes peace and tranquillity. It is considered one of the best places to visit in Bhutan.\nPhoto: The statue of Buddha sitting royally on top of the peak/CCo\nNational Folk Heritage Museum\nIf you want to catch a glimpse of the traditional Bhutanese lifestyle, values, and heritage, the Folk Heritage Museum is the ideal place for you. The museum was established in 2001 and is set in a 3-storied timber and rammed-earth building which still holds its authenticity.\nThe museum artifacts change seasonally.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "A visit to Bhutan is something one must do in one's lifetime. Besides the scenic beauty and the people, there are little things in Bhutan that make the place even more beautiful and also curious in a way. 
For instance:\n*All buildings look similar in Bhutan. That is because a Royal Decree issued by the King in 1998 has made it mandatory for all buildings to be constructed with multi-coloured wood frontages, small arched windows, and sloping roofs. In Bhutan, during construction, (usually) no architectural plans are drawn, nor are nails or iron bars allowed in the construction.\n*There are no traffic lights in most parts of Bhutan, especially Thimpu, its capital. Traffic cops man the traffic from a cozy and traditional structure right in the centre of a crossroad. There are approximately 50,000-60,000 cars in Bhutan.\n*You will almost never hear a vehicle honk in Bhutan unless absolutely necessary.\n*There are more than 50,000 stray dogs in Bhutan and the number rises every year. A few years ago, there was an uproar within society as a decision was taken to sterilise the dogs to curb their growth in numbers. Locals and stray dogs get along fine.\n*Bhutan's national dish is Emma-datchi - made of chillies. Most Bhutanese eat it every day and often twice a day. Naturally, ulcers are what many Bhutanese suffer from.\n*There is one major zoo in Bhutan, in Thimpu, and it houses only one animal species - the national animal Takin (a goat-antelope). While the zoo, which is more of a natural reserve, spreads over hundreds of acres, there are only a handful of Takin inside and very few visitors. A visit to the zoo is not always part of a tourist's itinerary.\n*Quite a few tourist hotels in Bhutan are run by women only. Not only are the women 'bell-girls' who carry luggage to the room and back, they are also the ones in charge of Room Service.\n*It is a common sight to see phalluses painted outside Bhutanese homes. It is believed they bring good luck and are a symbol of fertility. 
Also, phalluses in the form of key chains or wooden toys are a common sight in the markets.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "Immediately we could tell the difference between Bhutan and Nepal. Bhutan is clean, uncrowded, and very peaceful. No more honking horns and crowded city streets, at least not until we get to India. We met our guide, Kinga, and our driver, Sonam. Kinga and Sonam would be our friends and teachers for the week as we saw the best sights of Bhutan.\nWe first toured the capital city of Thimpu. We visited a zoo where we saw the Takin, Bhutan’s national animal, as well as a school where students were learning crafts such as woodworking, painting, weaving, and embroidery.\nOne of the highlights for us was touring the Trashi Chhoe Dzong. A dzong is a fortress with both religious and political purposes. These are large, impressive buildings built over 400 years ago and still in use today. We loved seeing the architecture and the Buddhist monks walking through the courtyards. That’s not something you see every day in the US!\nOur timing was right to be able to see the famous weekend market. Produce from Bhutan and India was on sale, as well as meat, cheese, and incense by the bagful.\nFood in Bhutan\nWe ate a lot of traditional food, more than Tyler or Kara really wanted. Lunch and dinner would consist of red rice (it really is red), noodles, chicken or beef, cabbage, vegetables, and green chillies cooked with yak cheese. The food here can be spicy, and the people eat chillies here as a dish, not as a seasoning.\nAlso, there are no stoplights here. Traffic police direct traffic in more congested areas of Thimpu from a gazebo-type structure like this one.\nNational Memorial Chorten\nAnother highlight of Thimpu was visiting the National Memorial Chorten. Throughout the day many people come here to spin the giant prayer wheels and circumambulate the chorten. 
In Buddhism it is believed that the more you spin the prayer wheels, the more good karma you accumulate, which is extremely important for a favorable rebirth in the next life.\nFrom atop the chorten, this is another view of Thimpu.\nAnother highlight near Thimpu was a visit to the Cheri Monastery (Cheri Goemba).", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-1", "d_text": "Every time you visit, you will get to see something unique and new.\nJungshi Handmade Paper Factory\nCalled Deh-sho, Bhutanese paper is made from local Daphne and dhekap trees. The Jungshi factory is reputedly the best place that manufactures this type of paper, made using an age-old method instead of the modern way.\nVisitors to the paper factory are given a demonstration of the papermaking process, and they are also invited to try their hand at some of the unique instruments.\nPhoto: Different types of paper at the Factory/CCo\nRoyal Textile Academy\nWeaving is one of the oldest Bhutanese crafts, and the Academy, housed in a beautiful contemporary building, attempts to preserve this heritage and also works for the empowerment of Bhutanese women by training them in this much-valued craftsmanship.\nThe exhibitions here show the various elaborate clothes made for the royal family by the workers here. The place is another hidden gem and one of the top things to do in Thimphu.\nPhoto: Display at the Textile Museum/CCo\nThe National Library of Bhutan, known as NLB, was established back in 1967 and holds one of the largest collections of literature on Tibetan Buddhism anywhere in the world.\nIt has as many as 6100 books and manuscripts in Tibetan and Bhutanese and holds over 9000 printing blocks used to create religious texts. The pretty two-story building is definitely a must-visit for bookworms.\nClock Tower Square\nThe clock tower, located in the heart of the city of Thimphu, features four different clocks on the four sides of the rectangular column.
The walls are carved with beautiful hand-carved dragons and floral designs, all relating to Bhutan’s reputation as the Land of the Thunder Dragon and its many floral valleys.\nMost open-air concerts in the city are held here at the Clock Tower Square, so try attending one if you happen to be there at the right time.\nPhoto: A concert taking place at the clock tower/CCo\nPerched on top of a ridge above Thimphu, Changangkha Lhakhang is the oldest temple in Thimphu. The temple was built in the 12th century and houses the central statue of Chenrizig, a manifestation of Avalokiteshvara with eleven heads and a thousand arms.\nIt offers a stunning view of the surrounding Thimphu valley from the top.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-1", "d_text": "Chuzom means the confluence of two rivers, here the Paro and Thimphu Chu rivers; the place is believed to be inauspicious, thus there are 3 very unique chortens (stupas) to ward off this inauspiciousness.\nBefore reaching Chuzom you can see Tachogang Lhakhang.\nIt is a private temple founded by Thangthong Gyalpo, located a few kilometers before Chhuzom, the confluence of the Paro and Thimphu rivers.\nThangthong Gyelpo had a vision, when he meditated here, of the excellent horse Balaha, an emanation of Avalokiteshvara; thus, as a good omen, he built this temple.\nBelow this temple he built an iron bridge; Thangthong Gyelpo, the iron man, is said to have built 108 iron bridges throughout Bhutan and Tibet.\nAfter a 30 minutes’ drive from this place you will reach Thimphu city, the capital city of Bhutan.\nDay 04: Sightseeing at Thimphu\nThimphu is located at an altitude of 2320m. Capital of the Kingdom, the population of Thimphu is about 100,000. 
The population is increasing each year due to rural-urban migration.\nThimphu became the capital during the 3rd King’s time in the early 1960s.\nThis place has a special magical atmosphere with its prayer flags fluttering in the wind and its feeling of rising above the bustle of the city.\nChangangkha Lhakhang is one of the oldest lhakhangs in the Thimphu valley.\nIt was built in the 12th century by Nyima, the son of Phajo Drugom Zhigpo, the founder of the Drukpa School in Bhutan.\nLocated on the way to the BBS tower, a trail leads to a large fenced area.\nIt used to serve as a zoo with various wild animals, which were later released into the wild as per the order of the present King.\nThe King decided that such a facility was not in keeping with Bhutan’s environment and religious convictions.\nThe zoo has different kinds of deer, including barking deer and black deer, and most importantly the national animal, the Takin.\nVisit Kuensel Phodrang to see the 8th largest sitting Buddha statue in the world.\nAfternoon: stroll around Thimphu city on your own.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-2", "d_text": "10-11 (b,l,d) Thimphu (altitude: 7,700 feet)\nAfter breakfast, the short drive to Thimphu takes us past traditional farmhouses, small villages and terraced fields. We have one and a half days to explore Thimphu, Bhutan’s exotic capital city—a fascinating combination of traditional and contemporary life. We’ll attend a special prayer ceremony with red-robed monks and a Lama (Buddhist Teacher). Together with the monks we’ll participate in a butter lamp and rice Mandala offering. We may also receive an introductory talk on Buddhism and an empowerment or initiation of one of the important Buddhist prayer mantras from the Lama.\nDuring the rest of our time in Thimphu, we’ll hope to visit some of the following key sights (there will be no time for all):\nA visit to Mothithang Takin Preserve for a chance to see the takin, Bhutan’s national animal.
The takin resembles a cross between a gnu and a musk deer. It has an immense face and a tremendously thick neck.\nThe farmers’ vegetable market (open Friday through Sunday) is where Thimphu residents mingle with villagers in an interesting urban and rural blend. People come from outlying rural villages to this market to sell vegetables and exotic fruits, and other items including dried fish, chili peppers, spices, tea (in bricks), butter (wrapped in leaves), hats, jewelry, and masks. You will also find all kinds of items that the local people use at home, including ritual and religious objects, and wonderful textiles.\nA walk to the Memorial Chorten, a sacred shrine built in honor of the current King’s grandfather. The Chorten is an impressive three-story monument with Tantric statues and wall paintings of three different cycles of Nyingma teachings of Mahayana Buddhism. You will find many elderly people making the Kora (pilgrimage circuit).\nA beautiful hike to Cheri Gompa Meditation Center. Shabdrung Ngawang Namgyel, the man who unified Bhutan, built this gompa (monastery) in 1619 and established the first monk body in Bhutan at this monastery. A silver chorten (stupa) inside the gompa holds the ashes of the Shabdrung’s father.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-0", "d_text": "Bhutan – A Kingdom Of Clouds and Thunderous Sound.\nBhutan is known as the destination of clouds because of its breathtaking location; it is situated on the eastern slopes of the still-growing and hence tallest mountain range in the world, the Himalayas. The country touches the bed of clouds, since its lowest valley is 2,200 feet above sea level and one of its roads often merges with the clouds at about 14,000 feet and sometimes even more, which makes traveling to this country even more interesting.\nSome ancient Buddhist monks referred to this picturesque destination as Bhotanta, which means the realm at the frontiers of Tibet.
Followers of one tradition of the Buddhist sect, known as the Drukpa, named it Druk Yul, or the land of the peaceful Thunder Dragon. This name arose from a very strong belief: in the mountains thunderstorms were very frequent, and the sound of the thunder was considered the voice of the dragon, so they named this place after the dragon.\nThese days, Bhutan is the last independent Himalayan Buddhist kingdom. Sikkim was taken over by India and Mustang came under the rule of Nepal. When Tibet was taken over by China, the grandfather of the present king, Jigme Dorji, decided that the country would no longer remain in isolation, and it allied with India, which was then under the influence of the British; Bhutan, however, kept away from this policy of imperialism and maintained a safe distance from China.\nIn 1960, there was a marked upliftment in the economic and social policy of Bhutan. As a result, the construction of the very first road in Bhutan was laid down just a year later, and soon the country became a favorite halt for tourists from all over the world; Bhutan now welcomes more than 12,000 tourists every year. Traveling to this region does not lead to any adversity.\nThis upgradation recalls the condition of Bhutan that existed in the 8th Century, when this destination lay under the charms of the mountains.
No one knew of such a destination, and thus its doors always remained closed to tourists; the only ones who used to visit were monks who wanted to escape the impositions of the so-called civilized countries.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-1", "d_text": "The drive takes around 1½ hours / Thimphu has a population of around 98,000 / the town is made up of just three lines of shops and is the only capital in the world without traffic lights / in the evening visit the Handicrafts Emporium / all types of Bhutanese handicrafts for sale / overnight at hotel.\nDay 04 : Thimphu valley sightseeing\nAfter breakfast drive to Sangaygang / a field of Bhutanese prayer flags, perched high above the city / coloured flags send prayers to the heavens and white flags honor the dead / visit the Motithang Mini Zoo and visit the National Animal Takin Research Centre where these odd animals graze peacefully in a small protected park / walk to the Zilukha Nunnery and then drive to the Tango Monastery for a picnic lunch / the hike to the monastery takes about 45 minutes / it is a 13th Century structure and today is home to about 150 monks studying Buddhist Philosophy and meditation / Thimphu sightseeing continues with the Folk Heritage Museum (a beautifully restored Bhutanese farmhouse from the last century) and the National Painting School / in the evening visit the National Memorial Chorten built in honor of our 3rd king Jigme Dorji Wangchuck (a wonderful opportunity to mix with the local population) / all the buildings in Bhutan conform to national building principles and are beautifully carved and decorated.
Dinner at The Royal Golf Club / overnight at hotel.\nDay 05 : Thimphu – Punakha\nAfter breakfast drive over the Docchu-La pass (3050 m) and into the beautiful valley of Punakha / the teahouse at the pass offers beautiful views of the Himalayan range in the distance / this is a 3 hour drive with stops en route / afternoon visit the magnificent Dzong spanning the Mo Chu and Po Chu rivers / the winter residence of the monastic body and the Je Khenpo (chief Abbot). If time permits, a hike to Khamsum Yule Chorten is worthwhile / dinner and overnight at hotel.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-2", "d_text": "Then visit the new Drupthob Goemba / Zilukha Nunnery - Perched on a promontory overlooking the picturesque Trashichho Dzong and golf course, it is the only nunnery in the capital, known as Zilukha Anim Dratsang; it once belonged to the Drubthob (Realized One) Thang Thong Gyalpo, often referred to as The King of the Open Field (in the early 15th century, with his multiple talents, he popularly became the Leonardo da Vinci of the Great Himalayas). You may interact here with some of the nuns who have devoted their lives to spirituality and Buddhism. The National Library (closed on Saturday, Sunday & national holidays) is a major scriptural repository and research facility dedicated to the preservation and promotion of the rich literary, cultural and religious heritage of Bhutan. The scripture and document collection held in our library and archives is a national treasure and a fundamental source for Bhutanese history, religion, medicine, arts and culture. Back to the hotel. Overnight at Hotel.\nDAY 03 THIMPHU – PUNAKHA / WANGDI\nAfter breakfast drive to Punakha (1200Mts / 3936Fts, 77 Kms / 03 to 3½ Hrs) / Wangdi (Wangdiphodrang) (1350Mts / 4430Fts, 70 Kms / 03 to 3½ Hrs). Punakha / Wangdi is the last town on the highway before entering Central Bhutan.
The drive is over the Dochu La pass (3080Mts / 10102Fts), which is very scenic with fascinating views of the mountains of Bhutan. We stop briefly here to take in the view and admire the Chorten, Mani wall, and prayer flags which decorate the highest point on the road. If skies are clear, the following peaks can be seen from this Pass (Left to Right): Mt. Masagang (7,158Mts / 23478Fts), Mt. Tsendagang (6,960Mts / 22829Fts), Mt. Terigang (7,060Mts / 23157Fts), Mt. Jejegangphugang (7,158Mts / 23478Fts), Mt.", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-1", "d_text": "The city has been undergoing massive development lately, with tree-lined streets being laid down and a park-cum-open-air theatre being constructed in the heart of the city, in what used to be the clock tower area, to create a platform for live cultural performances. 2008 saw the completion of the national stadium along with a new river-side park. The national stadium is also where the coronation of the King was held, coinciding with the centenary of the Wangchuk dynasty’s establishment.
The Bhutanese just love to celebrate and bask in their rich culture!\nA bustling farmers’ market takes place on weekends on the banks of the Wang Chu River; it is not to be missed, and one must visit for fresh, organic produce as well as Buddhist religious paraphernalia like prayer flags and horns.\nBhutan’s capital and headquarters of the government works towards the national objective of ‘GNH’ or Gross National Happiness in correspondence with the growth of Gross National Product (GNP).", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "Thimpu, with a population of a little over 100,000 inhabitants including the Bhutanese Royal Family, is the largest and most cosmopolitan city in the country. Sprawling over the western slopes of the Wang Chu river valley, it affords residents and visitors immensely picturesque views from an altitude of 2,348m.\nA centre for religion, commerce and government, it houses several important political buildings within its city limits like the Dechencholing Palace, the official residence of the King, and the National Assembly of the newly formed parliamentary democracy.\nIn terms of amenities, it is quite a modern city but it comes with some quirks. It is one of the few cities in Bhutan with ATM machines, so here’s your chance to stock up on some currency! It’s also the perfect place to give you a glimpse into contemporary Bhutanese culture as it is teeming with shopping malls, internet cafés and restaurants.\nHowever, quite strangely for a capital city, it does not have an airport within the city boundaries, instead using the airport in Paro, about 54km away. Another quirk of Thimphu is that there are no traffic signals here! That’s right: only the main intersections have policemen standing in small, intricately decorated pavilions, gesticulating exaggeratedly to keep the traffic flowing smoothly.
This is quite a sight to foreigners unaccustomed to this unique sort of traffic control.\nThimphu’s journey of development only really began in 1961, though, after being declared the capital of the country by the third King Jigme Dorje Wangchuk, replacing the ancient capital of Punakha. The Thimphu valley has nursed several settlements over the centuries and a dzong has existed here right from 1216, but Thimphu being named capital turned a new chapter in the region’s life. Vehicles took to the roads in the next year and the small town slipped into the role of being the country’s capital quite seamlessly after that, with rapid expansion and mushrooming suburban development following quickly thereafter. Thimphu is now a city where apartment blocks, small family homes and family-owned stores mingle with government buildings, all establishments adorned with traditional Buddhist motifs and paintings.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-1", "d_text": "Through the British East India Company, the British entered Bhutan in the 18th century as part of its colonial expansion into India, China and the region as a whole. The British remained involved in the country's affairs for the ensuing two hundred-plus years. After centuries of internal struggles, civil wars and conflicts with neighboring peoples, during which the British backed certain leaders and factions, Bhutan chose a king in 1907. The nation settled into an agreement that gave the British control of Bhutan's foreign affairs.\nIn 1953, the king established the 130-member National Assembly in order to promote a more democratic form of governance, and the country eventually changed from an absolute to a constitutional monarchy.\nIt was only in 1999 that the country lifted its ban on television and the Internet and began to open up more to the outside world. In 2005, Bhutan began to implement a new constitution, and in 2007 and 2008, it held its first national parliamentary elections.
After opening its country to commercial tourism in 1974, Bhutan welcomed 287 tourists that year. Every year, as the draw of this vibrant, unspoiled country increased, more and more people made the journey. Today, over a quarter-million people visit Bhutan annually.\nDue to its relative isolation from the rest of the world and strong adherence to its traditions and religion, Bhutan has maintained its customs over the last several centuries. Experiencing the nation’s unique culture is one of the strongest draws for Western visitors.\nBhutan is officially the only Buddhist kingdom still in existence, and the large majority of the country adheres to the state religion of Vajrayana Buddhism. (By and large, Hindus make up the remaining people.) The religious leaders have always had a strong say in Bhutan’s political affairs and have been political leaders in the past as well. The lifestyle of the people, therefore, largely conforms to these religious beliefs. The country controls the influence of foreign culture, which is also why the local traditions have remained so strong. However, this is slowly changing with the continued presence of tourism, as well as the curiosity of much of Bhutan's younger population. The juxtaposition of young Bhutanese tapping their smartphones while traditionally dressed Buddhist monks walk by is becoming a more common sight in Thimpu, the capital city.\nNo rigid clan systems exist in the nation, and men and women share equal rights.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-1", "d_text": "#4- Thimpu Tsechu Festival: Date: 11th–13th October 2016\nTsechu means the tenth day. This festival is held throughout Bhutan every year to pay homage to Guru Rinpoche (Padmasambhava). Guru Rinpoche introduced the tantric form of Buddhism to the country in the 8th century.
In Thimpu, Tsechu is celebrated with great aplomb over the course of 3 days, during which thousands of people from across Bhutan and all over the world flock to the capital to witness this amazing festival. This festival takes place in the courtyard of the Tashichho Dzong. The Tsechu festival involves religious activities and symbolic dances.\nRemember, this festival is considered to be the most anticipated and important festival in the Bhutanese calendar, so do not forget to include it while planning a festival tour package from Bhutan Buddha Travellers. We will make sure to plan an exceptional and well-guided festival tour as per your requirements and budget, one that will make your Bhutan trip a memorable one.", "score": 17.397046218763844, "rank": 79}
The annual Rhododendron festival is held here at the park.\nThe temple was built in 1499 by the cousin of Lama Drukpa Kunley, the Divine Madman, in his honor after the lama subdued the demoness of the nearby Dochu La with his 'magic thunderbolt of wisdom'. A wooden effigy of the lama's thunderbolt is preserved in the lhakhang, and childless women go to the temple to receive a wang (blessing or empowerment) from the saint.\nThis little town is located at the junction of the roads from Wangdi and Punakha. Farmers from nearby places come to this little town to sell their farm products and to buy necessities for their homes.\nThe Fortress of Great Happiness is the most beautiful dzong in the country, especially in spring when the lilac-colored jacaranda trees bring a lush sensuality to the dzong's characteristically towering whitewashed walls. This dzong, the second oldest built by the Zhabdrung, was constructed at the confluence of the Male and Female Rivers in 1638. Most of the important ceremonies, like the King’s coronation and royal weddings, are conducted here at this dzong.", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-1", "d_text": "The state language is Dzongkha, which in the olden days was spoken by people who worked in the Dzongs, which were the seats of temporal and spiritual power. Later, Dzongkha was introduced as the national language of Bhutan.\nThe national anthem was first composed in 1953 and became official in 1966. It is known as Druk Tshenden Kepay Gyalkhab Na (In the land of the Dragon Kingdom, where cypress grows).\n17th December is celebrated as the National Day of the country, coinciding with the crowning ceremony of Gongsa Ugyen Wangchuck as the first hereditary king of Bhutan in Punakha Dzong on 17 December 1907. It is a national holiday; every Bhutanese celebrates the day with pomp and festivity throughout the country.\nBhutan is a multi-lingual society. There are 19 different languages and dialects spoken in the country.
Dzongkha, meaning the language of the fort, is the national language of Bhutan. It is widely spoken in the western region.\nEma Datshi, a chili and cheese stew, is Bhutan’s national dish. Bhutanese use either dried red chilies or green chilies to make this dish. It is very simple and fast to cook.\nChanglimithang Stadium in Thimphu serves as the National Stadium. It is mostly used to celebrate national events, football and archery games. It was built in 1974 and refurbished in 2007. It can accommodate up to 25,000 people.\nThe Ta-dzong in Paro, which was established in 1968, is the National Museum of Bhutan. It houses extensive collections of over 3,000 works of Bhutanese art covering more than 1,500 years of Bhutan’s cultural heritage.\nNational development philosophy\nBhutan believes in the philosophy of Gross National Happiness. Sustainable development and happiness are emphasized more than Gross Domestic Product. Each and every policy of Bhutan first has to go through a checklist that qualifies it to be passed as a Gross National Happiness policy.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "The largest cities are the capital Thimphu and the trading town of Phuntsholing on the border with India.\nUntil the 1980s, Bhutan had very high infant mortality rates, and residents had little access to health care compared to neighboring countries. Life expectancy was also very low. From the 1980s, health services were improved, which among other things led to a sharp reduction in the spread of infectious diseases.\nWomen and men are relatively equal in Bhutan. They have the same legal rights, and the inheritance law has traditionally favored women. The main obstacle to women’s economic and social development is the lack of access to health services, education and work.\nThe Bhutanese follow the Tibetan form of Buddhism (see Tibet, religion).
Bhutan is today the only country in Asia where Mahayana Buddhism is the state religion.\nAccording to Bhutanese tradition, Buddhism was introduced in the 7th century AD by the Tantric master Padmasambhava, but reliable accounts of Buddhism in Bhutan date from the first century AD.\nThe head of the Drukpa school, Ngawang Namgyal (1594–1661), united the country and became both the worldly and spiritual head. He divided the country into provinces governed by fortified monasteries (dzong). His successors, who ruled until 1907, were regarded as rebirths of Ngawang Namgyal (in the same way that the Dalai Lamas succeeded one another in Tibet). In the southern part of the country lives a Nepali Hindu minority.\nThe official language is Dzongkha, a dialect of Tibetan. Tibetan is otherwise spoken in a number of dialects that are partly mutually unintelligible. Nepali is used in southern and southwestern parts of the country.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-5", "d_text": "This has resulted in the exchange of many royal visits. HRH Crown Prince Maha Vajiralongkorn, HRH Princess Maha Chakri Sirindhorn, HRH Princess Chulabhorn Walailak and the late HRH Princess Galyani Vadhana have all visited Bhutan. From Bhutan, among others, Thailand was visited by our King when he was Crown Prince, together with his wife Queen Jetsun Pema and the Queen Mother, Ashi Tsehering Pem Wangchuk.\n“We also have a very good relationship on the government level, with many top officials from both sides making visits. Bhutan and Thailand have an agreement called the Comprehensive Framework Agreement for Cooperation and it covers many areas, mainly trade, commerce, investment, human resources development, culture, religion, health, agriculture, tourism, civil aviation and consular matters. We just had a very successful annual meeting in March under this framework.
Bilateral cooperation is also managed through a diverse array of consultations at the ministerial level.\n“Assistance from Thailand has mainly been in the field of human resource development. Thailand’s assistance began in the 1980s under the ‘Thai International Development Cooperation Agency’ (TICA) in the areas of rural development, agriculture extension, health, education and private sector development. Thailand ranks third, after India and Australia, in assisting Bhutanese students,” Mr Wangdi said.\n“The Royal Civil Service in Bhutan and the TICA signed an agreement to provide scholarships to Bhutanese students for short and long-term training and study in Thailand at institutions like Naresuan University, which offers 10 scholarships annually for Bhutanese students. The School Agriculture Programme is a collaborative project for youth development between Thailand and Bhutan, under the patronage of HRH Princess Maha Chakri Sirindhorn. Another project, launched in 2007 by the Ministry of Education and World Food Program, assists school food projects in Bhutan.\n“There is also an exchange of volunteers between the two countries. Thai volunteers have been fielded in remote corners of Bhutan and have worked in vocational institutes, and Bhutan has also sent vocational teachers to Thailand.\n“Thailand is the third largest trading partner for Bhutan. A trade agreement currently under negotiation is expected to boost bilateral trade, which has increased since Druk Air started flights between Bangkok and Paro in 1988.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-3", "d_text": "He taught the people that religion is an inner feeling and it’s not necessary that one should be an ordained monk. 
He is also considered a symbol of fertility and most childless couples go to his temple for blessing.\nOvernight at your hotel in Punakha\nDAY 04: (15 DEC) PUNAKHA – PARO\nIn the morning drive to Yabesa village and hike through rice fields up to Khamsum Yueley Namgyal Chorten, built by Her Majesty the Queen Ashi Tshering Yangdon Wangchuk. Perched high on a hill on the bank of the river, the Chorten houses paintings belonging to Nyingmapa Traditions. Take a picnic lunch on a picturesque riverside before exploring the Wangduephodrang Dzong. Built in 1639, the strategically located Dzong is perched on a spur at the confluence of two rivers.\nDrive back to Thimphu where you will have an opportunity to visit handicraft and souvenir stores. Afterwards proceed to Paro, visiting Semtokha Dzong en route. The Dzong, built in 1627, is the oldest in Bhutan. It now houses the Institute for Language and Culture studies. On arrival in Paro, check into the hotel.\nOvernight at Hotel in Paro.\nDAY 05: (16 DEC) PARO (TAKTSANG MONASTERY HIKE)\nAfter breakfast we hike to Taktsang Monastery. The trail is broad and the walk of approximately 2 hours uphill takes us to Taktsang Monastery. Built on a sheer cliff face 900 metres above the valley floor, the monastery is a spectacular sight. It is also an important pilgrim site for Buddhists. Legend has it that the great Guru Rimpoche flew here on the back of a tigress when he brought the teachings of the Buddhist Dharma to Bhutan in the 8th Century. He then meditated in a cave there for three months where the monastery was later built.
Nearby there is a teahouse where you can stop for refreshments.\nIn the afternoon, we drive to the ruins of the 17th Century Drukgyel Dzong, an historic monument built by the Shabdrung to commemorate his victory against marauding Tibetans in 1644.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "Tang Valley and family life in Bumthang, Bhutan\nTang Valley is located only a short distance from Jakar Valley, but the unpaved rocky road made the drive there much longer.\nThe narrow road is not well maintained, since the Tang Valley has only a few remote monasteries and tiny villages.\nThe landscape is heavily forested, with clear patches of land that are planted with corn, potatoes and cabbages.\nThere were herds of cattle in the fields, looking healthy and lazily grazing in the sun.\nThe villages were built half a century ago, and I could see many houses with a thunder-bolt-penis hanging from each corner of the house.\nThe thunder-bolt-penis is made of wood.\nIt is sharpened like a pencil on one end, and on the other end it is shaped like a penis.\nThey always have a wooden wing attached at the top.\nIt is believed to protect the house from all directions.\nWe passed by a wide clean river with beautiful boulders in it.\nOn one side of the river was a sacred temple built into the rocks.\nIt is called Kunzang Drak and it was the retreat center of the treasure discoverer Pema Lingpa.\nOn the front of the monastery opposite the river, there is an indentation in the rocks in which Pema Lingpa is said to have taken baths.\nWe took a leisurely hike up the hills to a large family estate that has been converted into a museum, showing the way life was conducted in rural Bhutan in Feudal times, up until the 1950’s.\nThe Ogyemchoeling Palace was built by the Dorje Lingpa family in the 1400’s.\nWe were lucky to arrive on the one day of the year when an annual festival was being celebrated.\nThe whole village had gathered in the courtyard of 
the Palace.\nWe saw kids and families sitting on the ground, watching the annual festival with joy.\nThe monks were chanting prayers while warriors with swords and flags performed a traditional dance and rituals.\nAt the end of the rituals, we followed the whole congregation in circumambulating the temple three times.\nThe daughter of the current family who owns the Palace came to attend the annual festival.\nShe is a writer who has published a few books about Bhutanese culture.\nShe was delighted to take the time to show us her museum, and with warmth and patience answered our many questions.", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-1", "d_text": "The Dzong was damaged six times by fire, once by floods and once by an earthquake. The coronation of Ugyen Wangchuk, the first king of Bhutan, took place at Punakha Dzong on 17th December 1907.\nWangdue Phodrang means ‘the palace where the four directions are gathered under the power of the Shabdrung'. However, the popular story has it that the Shabdrung arrived at the river and happened to see a boy building a sand castle. He asked for the boy's name, which was Wangdue, and thereupon decided to name the Dzong Wangdue Phodrang or 'Wangdue's Palace.' Wangdue Phodrang Dzong is perched on a spur at the confluence of two rivers. Its position is remarkable as it completely covers the spur and commands an impressive view over both the north-south and east-west roads. The main road climbs the length of the spur and on the left, across the river, comes the first glimpse of the picturesque village of Rinchengang, whose inhabitants are celebrated stonemasons.\nThis small modern town in the south is the gateway of Bhutan for overland travellers. Like all other border towns, it is also a prelude. Phuntsholing is a fascinating mixture of Bhutanese and Indian, a lively center where people, languages, customs and goods mingle.
On top of a low hill at nearby Kharbandi, a small Gompa situated in a garden of tropical plants and flowers overlooks the town and surrounding plains.\nThe Amo Chu, commonly known as the Torsa river, flows alongside this town, and it is a favorite spot for fishermen and picnickers. From Phuntsholing, the road winds north over the southern foothills, through lush forested valleys and around the rugged north-south ridges of the inner Himalayas to the central valleys of Thimphu and Paro. It is a scenic journey; forests festooned with orchids cover the mountains on the other side and exciting hairpin curves greet travellers with colourful sculptures of Tashi Tagye (the eight auspicious signs of Buddhism).\nTrongsa means 'the new village' and the founding of Trongsa first dates from the 16th century, which is indeed relatively recent for Bhutan.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-6", "d_text": "Imports by Bhutan range from basic items like processed foods and garments to electronic and aircraft parts. Bhutan’s export sector is still at a nascent stage.\n“The two kingdoms have collaborated in the health sector since 1987, and there is significant cooperation in agriculture between the two countries. Bhutan participated in the International Horticultural Exposition in Chiang Mai from November 2006 to January 2007. His Majesty the King of Bhutan visited the exposition site during his visit to Thailand as the Crown Prince in November 2006. Bhutan also participated in the Royal Flora Ratchaphruek 2011, Chiang Mai, from December 2011 to March 2012.\nTHE Kingdom of Bhutan is a small country with a total area of 38,394 square kilometers (about the size of Switzerland), squeezed between China and India. It is located in the heart of the high Himalayas. The country is divided into 20 districts and 205 counties. The altitude varies from about 100 meters above sea level in the south to over 7,500 meters in the north.
Gangkhar Puensum, at around 7,570 meters, is Bhutan’s highest mountain and the world’s highest unclimbed mountain since the Bhutanese declared it off limits for climbers in 1994. The northern part of the country is under snow for the whole year.\nThe population of Bhutan is over 700,000 people, with about 79 percent living in rural areas. Thimphu is the capital and largest city. The official language is Dzongkha, and English is widely spoken.\nBhutan has a democratic constitutional monarchy. With a development philosophy based on Gross National Happiness (GNH), the Kingdom is becoming increasingly known for its dynamic leadership. It is an extraordinary country with architecture and fortresses the likes of which cannot be seen anywhere else. The people are hospitable, most of them devoted Buddhists, and the unique cultural heritage has been kept intact.\nModernization has had a relatively late start in Bhutan, but there have been some striking achievements. Bhutan is now connected with roads, electricity is much more available and communications systems span most of the country and connect it to the outside world.\n“Tourism is another important sector for cooperation. Thailand is one of the top countries visited by Bhutanese people.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "BHUTAN 'The enchanted kingdom where happiness is more than a dream' is a small Himalayan landlocked country situated on the eastern slopes of the Himalayan range, with China in the north, the Indian states of Bengal and Assam in the south, Sikkim in the west and Arunachal Pradesh in the east. It is the world's youngest democratic country, offered to the citizens of Bhutan by the 4th King with the vision \"GROSS NATIONAL HAPPINESS\", as the country's progress and growth is measured through a happiness index rather than monetary terms.\nThe state religion of the country is Drukpa Kagyud, a branch of Mahayana Buddhism.
The holy monasteries perched on sheer cliffs, the fluttering prayer flags lining the high ridges, the red-robed monks chanting religious scripture throughout day and night, and the old people circumambulating chortens chanting \"Om Mane Padme Hung\" give this kingdom an atmosphere of happiness and peace.\nBhutan is known to the world for its rough and mountainous terrain, diverse flora and fauna, natural landscape, architecture, wonderful biodiversity and well-preserved unique culture. It is divided into three physical zones: \"the greater Himalayas in the north, the inner Himalayas in the center and the southern foothills in the south\".\nThe greater Himalayas include Laya, Lingshi, Lunana, Gogona and Merak Sakten. This area is covered with snow, glaciers, glacial lakes and barren rocks. The mountains of this region rise to 7000 meters above sea level. Tall trees and shrubs cannot grow. It is the grazing place for yaks and sheep. Short junipers, primulas, rhododendrons, the Himalayan blue poppy and many more medicinal plants and flowers grow here. The animals found are takin, musk deer, Himalayan bear, blue sheep and snow leopard. Birds like the black-necked crane, ravens and magpies are found here.\nThe inner Himalayas lie to the south of the greater Himalayan zone and rise to a height of 3000 meters. The mountains have steep slopes on both sides. The main valleys are Paro, Thimphu, Punakha, Wangduephodrang, Trongsa, Bumthang, Zhemgang, Lhuntse, Trashigang and Dagana. All the major rivers flow through these valleys.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-11", "d_text": "Dorji’s office sits in a gigantic monastery in Thimphu known as Tashichho Dzong.
Buddhism unites and brings people together, Dorji said, explaining that the social life of a village revolves around its dzong (monastery).\nDorji said India’s multi-religious society had led to tensions and bloodshed.\n“India can survive riots and unrest,” he said, “but Bhutan may not, because it is a small country between two giants [India and China].”\nWith leaders who have been proud that they have not allowed it to be colonized, Bhutan historically has been keenly concerned about its survival. Bhutan’s people see their distinct culture, rather than the military, as having protected the country’s sovereignty. And it is no coincidence that Dorji’s portfolio includes both internal security and preservation of culture.\nThe constitution, adopted in July 2008, also requires the state to protect Bhutan’s cultural heritage and declares that Buddhism is the spiritual heritage of Bhutan.\nA government official who requested anonymity said that, as Tibet went to China and Sikkim became a state in India, “now which of the two countries will get Bhutan?”\nThis concern is prevalent among the Bhutanese, he added.\nSikkim, now a state in India’s northeast, was a Buddhist kingdom with indigenous Bhotia and Lepcha people groups as its subjects. But Hindus from Nepal migrated to Sikkim for work and gradually outnumbered the local Buddhists. In 1975, a referendum was held to decide if Sikkim, then India’s protectorate, should become an official state of the country. Since over 75 percent of the people in Sikkim were Nepalese – who knew that democracy would mean majority rule – they voted for its incorporation.\nBhutan and India’s other smaller neighbors saw it as brazen annexation.
And it is believed that Sikkim’s “annexation” made Bhutan wary of the influence of India.\nIn the 1980s, Bhutan’s king began a one-nation-one-people campaign to protect its sovereignty and cultural integrity, which was discriminatory to the ethnic Nepalese, who protested.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Bhutanese royals pay respects to King\npublished : 16 Oct 2016 at 12:52\nwriter: Online Reporters\nKing Jigme Khesar Namgyel Wangchuck and Queen Jetsun Pema of Bhutan on Sunday morning paid respects to the royal urn of His Majesty the King at the Grand Palace.\nThey arrived with their son at about 9.48am and entered the Grand Palace via the Wiset Chaisri Gate. They were greeted on arrival by a large number of people.\nThe Bhutanese king and queen left at 10.17am.\n\"To the Incomparable, Visionary and Most Precious Jewel King, the King of Thailand, who attained Parinirvana, I would like to offer my deepest respect, and my heartfelt prayers. May Your Majesty always be born as Dharma Raja, to the benefit of all sentient beings,\" he wrote on his Facebook account on Sunday.\nThe king and queen of Bhutan arrived on Saturday evening.\nKing Jigme Khesar Namgyel Wangchuck is at the Kuenra of the Tashichhodzong in Thimpu lighting candles next to a portrait of His Majesty the King on Thursday. 
(AFP/Royal Office for Media Bhutan photo)\nThe relationship between the royal families of the two countries has been warm, especially after the Bhutanese king, then the crown prince, joined the celebrations in Bangkok marking the 60th anniversary of His Majesty's accession to the throne in 2006, along with royals from 25 countries.\nThe Bhutanese king and the royal family led a group of clergy, senior government officials and the Thai community in the South Asian country in offering a thousand butter lamps and prayers in memory of the Thai King at the Kuenra of the Tashichhodzong in Thimpu on Thursday.\nKing Jigme Khesar Namgyel Wangchuck also ordered Bhutan's flags to fly at half-mast for three days in honour of the late King.", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "By Maxmilian Wechsler\nHE Kesang Wangdi, the ambassador of Bhutan to Thailand, is a fitting emissary for a nation which measures its progress not by GDP, but by GNH (Gross National Happiness).\nThe first time I came in contact with him was on December 12, 2012, at a reception at the Swissotel Nai Lert hosted by the Royal Bhutan Embassy to mark the Kingdom's 105th National Day. The reception was attended by about 500 distinguished guests including Thai cabinet ministers, senior government officials, diplomats and business people.\nThe event was a collaborative effort by the embassy, the Tourism Council of Bhutan, Druk Air (Royal Bhutan Airlines) and various travel agencies to showcase the country's culture, cuisine and textiles.
In his welcoming speech Mr Wangdi eloquently underscored his country’s rapid socioeconomic development and expressed his own happiness at the existing good relations between Bhutan and Thailand.\nFrom farm boy to diplomat\nNot long after the event, the ambassador gladly accepted an invitation for an interview, which took place under the portraits of two monarchs, HM Jigme Khesar Namgyel Wangchuck and HM King Bhumibol Adulyadej, at the tastefully appointed hall of the Bhutanese embassy chancery at the mission compound in Bangkok’s Huay Kwang district.\nAs is his custom, the ambassador was wearing the traditional national dress for men called gho, and also traditional handmade boots embroidered with beautiful coloured designs.\n“I came from a small, modest and ordinary farming family in Bhutan,” the ambassador told me, when asked about his background. “I am one of the products of a government which holds education to be very important. In Bhutan it is up to you if and how you succeed, because the government creates enabling conditions for people to reach their aspirations and to achieve whatever they want. There are no limitations.\n“I went through the education system and got selected for government service. I was educated in Bhutan, India and elsewhere. I entered government service in the early 1980s. When I started I worked in the capital of Bhutan, Thimphu. My first posting abroad was to India, and from there I went to the United States, Nepal and now to Thailand.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-2", "d_text": "These pillars embody national and local values and aesthetics, as well as spiritual traditions. The philosophy of GNH is now being taken up by the United Nations and some other countries.\n“We are coming out [of relative isolation] slowly and we are learning from everybody. 
They say, ‘if you copy from one it is plagiarizing; if you copy from ten, it is research.’ So, we are doing a lot of research and taking good points from many countries. So, things in Bhutan are going very well,” the ambassador said.\nMr Wangdi did not formally request a posting in Thailand but he is delighted to be here. “In our system, ambassadors are nominated by the government, and confirmed and approved by the King. Even before the constitution was in place, this has always been the procedure. The ambassadorial term is usually three years, but it could be longer − four or even five years.\n“Basically, I do what every ambassador does, and this consists of three basic functions. One is to promote friendship, cooperation and good will between our two countries − that is perhaps my main mandate, and it brings me to my second function, which is to deepen and broaden cooperation in the fields of commerce, trade, tourism, people-to-people contact, cultural exchange and so on. My third, also very important, function is to look after the interests of the Bhutanese people in Thailand, whether they are residents, students or just passing through.\n“Of course, as I am here among the diplomatic community in the metropolis of Bangkok, my purpose is also to build awareness about Bhutan, our culture and our GNH development philosophy − not only in this great nation but also with everyone in the diplomatic and expat communities here, as well as the other countries that I am assigned to.\n“All of this keeps me very busy from morning to evening, and after the official workday I normally go out to interact and mingle – which is always a pleasure. In the evenings, I attend to social obligations like national day receptions,” the ambassador said.\nMr Wangdi said he wears the gho instead of a suit most of the time because it is part of the Bhutanese culture and tradition.
“Every country has a culture which is unique, but Bhutan is a country where the culture is not only unique but living. It is practical and relevant in the modern context. Therefore, in Bhutan a great number of people wear the gho.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "Best things to do in Thimphu and sightseeing in Thimphu\nDochula - Thimphu\nA mountain pass with a wow factor any time one visits it!! During the winter one can experience snowfall, or else, as in my case during the summer, one can witness superb views of the valleys around!\nThe weather was cool in Thimphu but it became colder as we ascended to Dochula Pass. At an elevation of 3,100m after a 12-km drive, you are in the lap of nature with snowy Himalayan ranges looming in the background. Even at noon it gets foggy, and chilly weather can deprive you of a clear view. But then it is an amazing feeling to stand in a valley filled with hills, cypress trees and thousands of fluttering flags all across you. Buddhists believe that mantras inscribed on prayer flags will be transmitted across the land by the wind and spread goodness, so even at the highest of the mountains you can see colourful flags at every pass, valley, bridge and house. Predominantly in five colours, they represent the five colours of the elements — blue (sky), white (clouds), red (fire), green (water) and yellow (earth). Isn't it amazing?\nBreathtakingly beautiful sight with 108 chortens in the backdrop of snow-capped mountains.\nKuenselphodrang - Thimphu\nTashichhodzong - Thimphu\nThis Dzong is part monastery and part administrative block - the King's Offices are here.\nStanding on the right side of the Wangchu River, Tashichho Dzong, popularly known as Thimphu Dzong, is an impressive structure that houses the Bhutanese government. The Dzong originated with the building of Dho-Ngon (Blue Stone) Dzong by Lama Gyalwa Lhanangpa on a hill above the Thimphu River, where Dechenphodrang stands.
In the 17th century, the followers of Lama Gyalwa Lhanangpa were completely crushed by Zhabdrung Ngawang Namgyal and the Dho-Ngon Dzong fell into the hands of the Zhabdrung. Zhabdrung rebuilt the fortress in 1641 and renamed it Tashichho Dzong. Best visited with a guide.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "Regarded as the “crown jewel of the Himalayas,” the Kingdom of Bhutan is the last remaining independent country to support Buddhism as the official state religion. Photographed over the course of three years, Bhutan: Land of the Thunder Dragon transports us to colorful festivals and religious traditions, continuing to the remote communities along the roof of the world. This book encompasses a wide range of landscape, portrait, and editorial photographs sure to impress and please any reader interested in travel, photography, and/or Himalayan culture.\n“A glorious large-format coffee-table book par excellence! Next best thing to actually going to Bhutan. Berthold has captured this vast and stunning land and its people with the piercing eye of an artist-photographer. His spare, coherent text tells a fascinating tale that adds to your pictorial journey.”—Mandala", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-1", "d_text": "I have been very fortunate, as have all those of my generation, because of a very fair system where people are given the opportunity to receive an education and pursue their dreams.\n“I am married with two children; one is studying medicine in the United States and the other is a financial analyst for the investment arm of the Bhutanese government. My family also includes a small dog. I like reading, music and physical activities such as running, cycling, soccer and golf − but I am not the greatest golfer,” said the ambassador, smiling.\nComing to Thailand\nMr Wangdi arrived in Thailand for the first time when he assumed the position of ambassador in June 2011.
He is also acting ambassador to Australia and Singapore. Asked how he can manage to represent his country in Australia all the way from Thailand, Mr Wangdi said: “I travel there as often as I can because we have a lot of work to do. Cooperation between our countries is growing. Australia is one of our earliest development partners. We have had technical cooperation with Australia for a long time and they have offered assistance and human resources in agriculture and other areas. There are also a lot of Bhutanese students studying in Australia.\n“Bhutan recently established diplomatic relations with other Asian countries, including Indonesia, Myanmar and Vietnam. The government will soon nominate an ambassador to these nations. This embassy will perhaps look after them as well but this is still under discussion.\n“My last assignment before coming here was as Director General of the Tourism Council of Bhutan. We give a lot of importance to tourism in Bhutan, and this position is directly under the prime minister. The tourism council consists of many cabinet ministers and also members of the private sector. We have many development options in tourism as well as in hydropower, but we are very careful in selecting which ones go forward. Our development philosophy hinges on the concept of gross national happiness, which we believe is more important than gross domestic product.”\nThis fascinating and enlightened perspective is officially pursued only in Bhutan, and it is reflected in all areas of government, business and society. It is a development model that measures the mental as well as material well-being of citizens. The objective is balanced development with the ultimate goal being the happiness of the people. 
The concept consists of four pillars: Socioeconomic Development, Promotion and Conservation of Culture, Environmental Protection and Good Governance.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-4", "d_text": "It is a very inspiring project, teaching how you market and package produce, and so on. We need to know all of this. For us it is not only a curiosity. We are very happy because the priorities are similar to our own, to assist our poor people. In Bhutan, agriculture is a very important priority to promote sustainable development based on GNH.”\nRelationship between Bhutan and Thailand\nFormal diplomatic relations between Bhutan and Thailand were established in November 1989, although contacts between the royal families of the two countries were established much earlier.\nA resident embassy was established in Bangkok in the Sukhumvit area in July 1997. “We moved to this location about ten years ago. We bought this large – by Bhutanese standards – property because we believed that our bilateral relations will go from strength to strength, that collaboration on trade and commerce will flourish and there will be many functions to attend. We also bought ten good houses not far from the embassy where our staff live. I live outside the embassy as well, but we are going to construct an ambassadorial residence here in the future. There are about two dozen people working at the embassy, nine Bhutanese and the rest Thais. In fact, this is Bhutan’s largest embassy.\n“At this time there is no Thai embassy in Bhutan. This is the responsibility of the Thai ambassador to Bangladesh. There is an honorary Thai consul in Bhutan, Mr Dasho Ugen Tsechup Dorji, a Bhutanese businessman who is the Vice Chairman of the Singye Group of Companies. We collaborate with him closely. We used to have an honorary consul in Thailand but he finished his duty.
We are planning to appoint one or two in the future.\n“The foundations of the relationship between Bhutan and Thailand are very strong, largely because there are some striking similarities between our two countries. One is that we have a common spiritual heritage. The second is the deep reverence for the institution of the monarchy. Thirdly, in both countries tradition and culture are very important. Finally, both countries have never surrendered sovereignty to another country. Because of that historical basis of always being independent, our peoples are peace loving, forward looking, confident and very friendly. These four similarities underpin our relations.\n“Moreover, our respective royal families have always held very high esteem, affection and good will for each other.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-1", "d_text": "The story goes that a giant demoness lay across Tibet and the Himalayas, which was preventing the spread of Buddhism. To overcome her, King Songtsen Gampo decided to build 108 temples in a single day to pin the ogress to the earth forever in 659 AD. Of these 108 temples, 12 were built in accordance with precise plans at key points. The temple of Jokhang in Lhasa was built over the very heart of the demoness and Kichu is said to have been built on the left foot.\nDungtshe Lhakhang: Dungtse Lhakhang was constructed by the great bridge-builder Thangtong Gyelpo in 1433. It is said to have been built on the head of the demoness, who was causing illness to the inhabitants. The building was restored in 1841 and is a unique repository of Kagyu lineage arts. You may or may not be permitted inside but can walk around this three-storey Chorten-type building.\nTa Dzong (Sun, Mon Govt holiday closed): once a watchtower, built to defend Rinpung Dzong during inter-valley wars of the 17th century, Ta Dzong was inaugurated as Bhutan’s National Museum in 1968.
It holds a fascinating collection of art, relics, religious thangkha paintings and Bhutan’s exquisite postage stamps. The museum’s circular shape augments its varied collection displayed over several floors.\nOvernight at hotel\nParo-Thimphu (55 Kms/ 01 Hr)\nDrive to Thimphu (55 Kms/ 01 Hr). En route visit Simtokha Dzong, the oldest fortress of the Kingdom, which now houses the School for religious and cultural studies.\nThere are a good many things to see in the capital, which has a very relaxed, laid-back feel about it. Thimphu is relatively small, having a population of approximately 90,000 people, and the streets are wide and tree lined.\nVisit the indigenous hospital specializing in herbal medicine, the handmade paper factory, and the nunnery at Zilukha.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-1", "d_text": "Masked, barefoot dancers were leaping and whirling, brandishing knives, and beating tambourines to subdue evil spirits and celebrate the teachings of Buddha. Clowns cavorted and cracked ribald jokes, brass cymbals clanged, bells rang, bald monks in burgundy robes chanted, horns trumpeted, bronze-skinned children with pink cheeks frolicked, and everyone showed off their best national costumes and jewelry.\nWe admired the beautiful women in their brilliant, tubular kiras of heavy silk brocade. Dorje whispered, \"Some of these dresses cost over $20 thousand.\"\nAmong the crafts scattered on the ground for sale were wooden phalluses. Dorje explained, \"We hang these inside our houses for fertility and good luck.\"\nA foreign tourist picked up a large carved penis shape and quipped, \"I already have one of these.\"\nThe salesgirl smiled, \"Yes, but do you have two?\"\nRight from the beginning, we saw no discrimination against women. In fact, women are more equal than men! Traditionally, daughters inherit their parents' property.
Where there are several husbands, the offspring inherit equally because they don't know who the father is.\n\"Several sisters may take one husband,\" said Dorje. \"I know one woman who had four husbands. They all worked in her restaurant.\" He added, \"But this isn't so common anymore. Here in town there are only about 10 women having more than one husband.\"\nAnd the King?\nKing Jigme Singye Wangchuck was crowned in 1972 when he was 16. He is married to four beautiful sisters. Each has her own house. He lives very simply in a log cabin. He's a good king. When he visits schools, he eats lunch with the children. He encourages them to study, telling them that they're Bhutan's future.\nDorje leaned forward earnestly, \"You see, we started our development process so late we've had a chance to observe other countries and see the mistakes they've made. We emulate those who we think have gone in the right direction. Instead of a Gross National Product, we measure Gross National Happiness, the King's idea.\"\nThe 65-mile drive from Paro to the capital, Thimpu, takes more than two hours on steep winding roads through eerie forests of grey-bearded trees interspersed with rhododendron, magnolia, pines, and larch.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-5", "d_text": "On the drive to the statue, the steep winding hill road offers an unparalleled view of the City of Thimphu and is an excellent place to capture a view of the City, especially after dark. A journalist once described it as 'seeing an oasis of light in the desert of darkness', as the city lights of Thimphu shine very bright in an otherwise dark Thimphu valley. After breakfast drive to Punakha with a short stop at Dochula pass (3,050m) that heralds the most enchanting views of Bhutan. Night halt at hotel Taj in Thimphu.\nDay 05: Thimphu to Punakha (77km, 3hrs).\nEarly breakfast and visit Simtokha Dzong. This is one of the oldest fortresses in Bhutan. It was built in 1629 A.D.
Drive to Punakha via Dochula Pass at 3,150m. The drive takes about 3 hours. On arrival, drive to Punakha Dzong, the “Palace of Great Happiness”. Built in 1637 by the Shabdrung, the ‘Unifier of Bhutan’, Punakha Dzong is situated at the confluence of the Mo Chu and Pho Chu (Mother and Father Rivers). It is the winter headquarters of the Je Khenpo and hundreds of monks who move en masse from Thimphu to this warmer location. The three-storey main temple of the Punakha Dzong is a breathtaking example of traditional architecture, with four intricately embossed entrance pillars crafted from cypress and decorated in gold and silver. It was here in 1907 that Bhutan’s first king was crowned. Overnight hotel in Punakha.\nDay 6: Punakha – Bumthang\nEarly breakfast and then you can drive into the beautiful valley of Bumthang, the religious heartland of Bhutan. It was the first valley to receive Buddhism in Bhutan, and many important religious figures in Buddhism have visited and blessed this valley. First visit Jakar Dzong, built in the 17th century; it defended against many enemies from outside and from within the country. The name Jakar originated from this place.", "score": 8.086131989696522, "rank": 99}]} {"qid": 23, "question_text": "What percentage of new non-residential buildings were considered 'green construction' in 2010 compared to 2005?", "rank": [{"document_id": "doc-::chunk-1", "d_text": "These numbers are getting the attention of developers and driving the growth of eco-construction. Nationally, about 2% of today's new, nonresidential construction is considered green, estimates the MHC study.
From last year's $3.3 billion, green office and retail construction is expected to surge by up to 10% a year, to $20 billion, by 2010.\nCosts Backing Down\nGiven New York's love affair with tall towers and the unmatched scale of the city's real estate market, it's no surprise that Manhattan is emerging as the epicenter of green high-rise construction. Bank of America's tower will be the greenest of a surge of eco-friendly office buildings sprouting up in midtown and near Wall Street.\nFirst to open was 7 World Trade Center, the start of new construction at Ground Zero. Next was Foster + Partners' stunning addition to Hearst's headquarters in midtown. And nearing completion just a few blocks west of Bank of America Tower on 42nd Street is Renzo Piano's New York Times Building.\nThe flurry of eco-construction reflects builders' growing confidence that the extra costs of building green are a good investment. The upfront costs of building green are still higher than those of conventional materials, but that premium is shrinking. Just a few years ago, green construction could cost 10% or more above standard construction.\nToday that margin has fallen into the range of single digits. As the market for green materials and design expertise has grown and matured, \"the supply of materials and services is going up and the price is coming down,\" says Taryn Holowka, communications manager at the U.S. Green Building Council (USGBC).\nWaterless Water Closets\nTo be sure, green projects can still throw up costly delays or surprising snafus. Until recently, for example, New York-area carting companies charged extra to sort and recycle construction debris.\nBut as haulers have come to recognize the market value of the refuse, they've lowered their prices and become more cooperative. At the Bank of America site, explains Cook+Fox's Lisa Storer, carting companies presorted scrap steel, concrete, and other debris on site for removal.
Ultimately some 90% of construction waste was recycled.\nSometimes the latest, greenest technology just isn't approved yet.", "score": 53.139718851571885, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "A new study from McGraw Hill Construction suggests a number of reasons why builders should get up to speed on green building.\nFirst, the green building market is growing. According to a summary of the report, the green share of the single-family residential market has grown from 2% in 2005 to 23% in 2013 and should reach between 26% and 33% of the market by 2016. That would be worth as much as $105 billion.\nGreen building grew even during the recent recession. The overall housing market took a nosedive between 2005 and 2008, declining from a value of $315 billion to $122 billion. But the green housing market showed a healthy increase over the same period, from $6 billion in 2005 (2% of the market) to $10 billion (8% of the market) in 2008.\nOver the next few years, the industry bumped along, reaching a value of $97 billion by 2011. But green building continued a steady climb to reach $17 billion by 2011.\nBoth builders and remodelers see green growth ahead\nNew-home builders seem to be adopting green practices faster than remodelers. In five years, 62% of builders expect to be doing green on more than 60% of their projects, McGraw Hill reports, and 30% expect to be building green on more than 90% of their projects.\nBy 2018, the study says, 32% of remodelers expect that more than 60% of their projects will be green.\nFinally, many builders find customers are willing to pay more for green features.\n“Considering the economy, it is notable that over 68% of builders and 84% of remodelers report their customers are willing to pay more for green,” the study says. “Both of these are an increase over the reported results from 2011 where 61% of builders and 66% of remodelers reported the same. 
The big jump for remodelers may be due to the improving economy and the increased awareness and accessibility to affordable green building products and practices.”\nOn average, builders say consumers will pay 3% more for green homes and 5% more for green remodeling.\nMcGraw Hill said findings on the green home marketplace and other results from this study would be published in April in partnership with the National Association of Home Builders.", "score": 50.21603328917158, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "At first, Baldwin Homes didn't build green. Then it dipped its corporate toe in — one home here, another there.\nNow the Gambrills company is constructing an entire green neighborhood. It's the story of U.S. home building writ small.\nGreen accounted for 2 percent of the new-home market in 2005, according to a report by industry data provider McGraw Hill Construction. By last year it had ballooned to 23 percent — nearly a quarter.\n\"I don't think green is a niche market anymore,\" said Michele A. Russo, director of green content at McGraw Hill Construction.\nMore builders have jumped in with both feet. Others are giving green techniques a try, to the point that more than 60 percent of single-family home builders say they're doing at least a moderate amount of work in the field, according to McGraw Hill.\nEven strictly conventional homes are built in a more environmentally — and budget — friendly way than they were five years ago because energy-efficient appliances are so common, Russo said.\nSome builders get a green stamp of approval from a certifying body, such as the U.S. Green Building Council. 
Others just incorporate some of the ideas, which can range from designing a home with sunlight in mind to building \"green roofs\" in urban areas so dirty stormwater doesn't run off into the Chesapeake Bay.\nThe level of activity and acceptance is a sea change, said Michael Furbish, president and founder of Furbish, a green product supplier in Baltimore that specializes in green roofs and walls.\n\"Green building has become the norm in 15 short years,\" he said. \"Instead of being a novelty, it's become a presumed logical way of building.\"\nIt's not that everyone's become an environmentalist. The \"conversation is changing,\" said Kevin Morrow, director of sustainability and green building at the National Association of Home Builders. Techniques once sold as environmentally friendly are increasingly plugged as \"high performance,\" because the two frequently go hand-in-hand.\nTake, for example, a highly insulated house. It uses less energy, a plus for outdoor air quality and the owner's utility bills. It's also much more comfortable than a drafty property.\nThe goal is \"making sure that the home is a pleasant place to be,\" Morrow said.\nMike Baldwin, president and owner of Baldwin Homes, is working on National Green Building Standard-certified homes at the Preserve at Severn Run in Gambrills.", "score": 48.22415218662923, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "More Builders Going Green\nAccording to a new McGraw Hill Construction report, the number of builders dedicated to green practices will double by 2018\nIncreased buyer interest in green building is helping to make it a more important part of the housing market, according to a new study by McGraw Hill Construction.\nThe percentage of builders of single-family homes who said that more than 90% of their projects are green grew from 12% in a 2011 study to 19% this year, and is projected to rise to 38% by 2018. 
The percentage of builders who said that less than 16% of their projects were green shrank proportionally, from 63% in 2011 to 38% in the most recent study. That number should fall to 16% by 2018.\n\"These findings demonstrate that among home builders, and increasingly among single-family remodelers, green is becoming the standard way to build,\" the report said. \"This wider adoption of green may help push the single-family home market to become even greener in the future, with homes increasingly needing to be green to be competitive.\"\nBy 2016, the report says, green single-family homes will represent between one-quarter and one-third of the market, \"translating to an $80 billion to $101 billion opportunity based on current forecasts.\"\nIn multifamily construction, the number of builders making more than 90% of their projects green was much lower than on the single-family side, remaining at a flat 6% between 2011 and 2013 and projected to grow to an estimated 18% by 2018. Those making less than 16% of their projects green dropped from 69% in 2011 to 46% in 2013. That was expected to decline to 21% by next year.\nThe authors concluded that while more multifamily builders are becoming experienced with green practices, they are not choosing to specialize in it.\nThese results are based on an online survey of 116 single-family home builders, developers, and remodelers, McGraw Hill said, along with 38 multifamily builders, developers, and remodelers. The survey was conducted over a four-month period from December 2013 to March of this year.", "score": 46.751440962505946, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "Whether green buildings will be voluntary or mandated in New Jersey, the overall green construction trend is booming despite the US housing slump.\nGreen home building has already gone mainstream and is expected to be worth $12 to $20 billion (6%-10% of the market) this year, according to a new report from McGraw-Hill. 
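The dollar ranges quoted in these market reports are simple share-times-total arithmetic. A minimal sketch of the back-solving, using the figures in the sentence above ($12–20 billion at a 6%–10% share; the notion that both endpoints should imply the same total is my check, not the report's):

```python
def implied_total_market(segment_value_bn: float, share: float) -> float:
    """Total market size (in $bn) implied by a segment's value and its share."""
    return segment_value_bn / share

# $12bn at a 6% share and $20bn at a 10% share both imply a total
# homebuilding market of roughly $200bn for that year.
low_total = implied_total_market(12, 0.06)   # ~200
high_total = implied_total_market(20, 0.10)  # ~200
```

The same arithmetic reproduces the later single-family figures: a 25%–33% green share of a roughly $300–320 billion construction forecast brackets the "$80 billion to $101 billion opportunity" cited elsewhere in this document.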
The report said that 40% of builders think green building helps them market their homes in a down market.\n“We have hit the tipping point for builders going green. This year has seen an 8% jump over last year, and we expect another 10% increase next year.”\nThe Environmental Protection Agency (EPA) said it is also implementing new green building strategies in order to spread the adoption of green building practices nationwide.\nWhile major businesses such as Kohl’s, Roche, and Chevron, among many others, announced their green building investments, chemical companies, on the other hand, are trying to cash in on this trend with new environment-friendly/energy-saving construction materials.\nBerry Plastics said its formaldehyde-free Thermo-ply protective sheathing is a non-toxic 100% polyvinyl alcohol, which reduces home energy use by lessening air infiltration. The sheathing is also said to be constructed with 100% recycled materials (80% post-consumer waste and 20% post-industrial material), and is 99% recyclable.\nWith asbestos and lead paint litigations still nipping at some chemical firms’ heels, and with green building potentially being mandated in some states, it is advisable for both the construction and chemical industries to look for friendlier alternatives.\nIn a related issue, the consulting firm EH&E recently released a white paper to help building owners, developers, and others minimize the looming risks and costs caused by the discovery of PCBs in their construction materials.", "score": 45.51183565384787, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Green building, the use of construction processes that are environmentally responsible and resource efficient throughout a building's life cycle, has become a big business. It is fundamentally changing the nature of how today's construction industry must operate in order to remain competitive. 
The value of the green building market is expected to grow from between $55 billion and $71 billion in 2010 to $135 billion by 2015, with commercial green building expected to represent 40% to 48% of the market by 2015. As a result, requests for certification of a project's \"greenness\" by objective third-party verifiers are being sought for various reasons, including perceived clout and marketing power to generate higher sales and rental prices, compliance with local building code requirements, and assurance to owners and tenants of lower future operating costs.\nCertification versus Performance\nAlthough there are several third-party verifiers market-wide, the U.S. Green Building Council's (\"USGBC\") Leadership in Energy and Environmental Design (\"LEED\") has been at the forefront domestically and internationally. As a voluntary, four-level (Certified, Silver, Gold, Platinum), point-based certification system, LEED has fostered a culture of expected energy efficiency in commercial real estate. The federal government has led the way, both by adopting LEED Silver as a baseline for its building inventory and by encouraging pursuit of LEED-certified projects in the suppressed economy through financial incentives offered under the American Recovery and Reinvestment Act. Many state governments have followed suit, either by requiring energy-efficiency standards for public projects or by empowering local governments to adopt LEED or other expressions of sustainable development as part of their building code standards. The private sector has embraced LEED projects as well, seeing them as good not only for the environment and its social conscience, but also for the wallet.\nDespite strength and support on the design side, the desired results of LEED projects may not be matched by actual performance. 
The USGBC is fending off a legal and public relations attack in the form of a class action lawsuit alleging that the USGBC fraudulently represents the performance of LEED buildings as more efficient than standard construction. The principal plaintiff has amended the complaint to characterize the USGBC's representations of the energy efficiency of LEED-certified buildings as not based on verified energy performance, thereby fostering a false sense that the end product will perform better than yesterday's new construction.\nIndividual projects have seen challenges as well.", "score": 43.00436245638415, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "The demand for LEED® certification has increased dramatically since the rating system’s initial release in 1998. In the current economic, environmental and social construction climate, green is the way to go and LEED® certifications outline the path. The U.S. Green Building Council has worked diligently to respond to market demand for green products.\nThree primary factors have contributed to the growth of Sustainable Development in the United States:\n• Historic increase of government-backed green initiatives calling for LEED® ratings.\n• Increase in green residential projects.\n• Expansion of available green building product lines.\nGovernment definitely helped spur the growth of the green movement. As of 2003, more than 50% of green building was endorsed by federal, state and local governments. Only 33% of green projects were undertaken by the private sector.\nBetween 2004 and 2008, the increase in energy costs reinforced the green building movement and the value of the LEED® rating system. Public and private commercial and residential developers began to look at sustainable projects with new interest.\nAs interest rose, the U.S. 
Green Building Council expanded the LEED® program to include:\n• LEED® for New Construction – LEED®-NC\n• LEED® for Existing Buildings: Operation and Maintenance – LEED®-EB\n• LEED® for Commercial Interiors – LEED®-CI\n• LEED® for Core & Shell – LEED®-CS\n• LEED® for Schools – LEED®-S\n• LEED® for Retail – LEED®-R\n• LEED® for Healthcare – LEED®-HC\n• LEED® for Homes – LEED®-H\nThese eight LEED® programs now account for the three criteria addressed by the U.N. sustainability report of 2005: economic, environmental and social sustainability.\nConsumers have joined the sustainable movement. While the initial cost of green construction may be slightly higher than traditional construction, the operating savings and reduction in environmental impact outweigh the initial outlay. Additionally, the marketability of LEED® certified commercial or residential property exceeds the value of traditional construction.\nLEED® certification is now synonymous with the highest and best use of properties. This significant reality attests to the validity of the eight LEED® programs. In 2009, responsible development and real estate ownership now begins with LEED® ratings.", "score": 42.20306737683022, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "Facing increasing and unpredictable energy costs, buildings capable of significantly reducing energy consumption have become increasingly attractive to those in the market for real estate. The increased public awareness and concern for the environment, coupled with an increasing consumer demand for the cost savings generated by energy efficient buildings, have operated to propel the green movement into the real estate industry in the form of green buildings.\nThere are many benefits associated with constructing, retro-fitting, and occupying green buildings, beyond those involving the environment. 
Studies show that buildings certified by Leadership in Energy and Environmental Design (LEED), an internationally recognized green building certification system, have an almost four percent higher occupancy rate, as well as increased retention rates. Such buildings may also experience an almost 10 percent decrease in operating expenses, and a similar increase in building value. Subsidies, incentives, and tax credits may also be counted as potential benefits associated with green buildings.\nAll of these benefits have operated to grow the green building industry despite the otherwise underperforming real estate market. According to one study, construction of green office space has increased by approximately 25 percent over the past decade, with significant continued growth predicted for the near future. Moreover, according to the U.S Green Building Council, green building construction is expected to reach $60 billion, with approximately 10 percent of new commercial construction starts expected to be green. These numbers confirm that building green has become, and is predicted to remain big business.\nGiven the increasing popularity and value of green construction, the insurance industry has entered the field by developing policy endorsements geared specifically toward green properties. These policies recognize that green buildings contain unique features, in the form of materials and designs, which are typically more expensive than those found in traditional buildings. Thus, in the event of a covered loss, a typical insurance policy may not cover the extra expense and procedures ordinarily associated with green buildings. That is why it is important to understand the manner in which insurance companies are catering to those property owners seeking to become, or remain, green.\nGiven the relative novelty of insuring green buildings, many companies are routinely adjusting their products to accommodate this developing industry. 
Nevertheless, there are a few commonalities among the varying products in terms of coverages, including:\n- Green Rebuilding: Green coverage will cover many of the costs related to rebuilding a covered property to its budgeted level of green certification. Some companies offer policies that cover the costs of replacing standard materials with a green equivalent in the event of a covered loss.", "score": 40.597354303706275, "rank": 8}, {"document_id": "doc-::chunk-3", "d_text": "\"It's just a home that works well, to its maximum efficiency.\"\nA by-the-numbers look at the green homebuilding market:\n•37: Percent of single-family home builders in 2011 who said green projects represented at least a moderate amount of their work\n•62: Percent of single-family home builders in 2013 who fell in that category\n•84: Percent of single-family home builders who project that green will represent at least a moderate amount of their work in 2018\n•$6 billion: Value of the green new-home market in 2005\n•$37 billion: Value of the green new-home market in 2013\nSource: McGraw Hill Construction\nCopyright © 2015, The Baltimore Sun", "score": 39.81594866798927, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "The building sector is the biggest source of greenhouse gas (GHG) emissions in the U.S. Energy used in U.S. buildings produces about 43 percent of carbon dioxide emissions. However, buildings certified under the Leadership in Energy and Environmental Design (LEED) have less impact on the environment, according to the Green Building Impact Report 2008. The report assessed non-residential construction because it accounts for 40 percent of the environmental burden from the buildings sector.\nLEED certified buildings have less impact on the land and water, and use 25 percent less energy. They prevent an amount of GHG emissions equivalent to 400 million vehicle miles traveled from entering the environment. They also use seven percent less water. 
The equivalent of 2008 water savings could fill enough 32-ounce bottles to encircle the globe 300 times.\nThe Architecture 2030 Challenge\nThe Architecture 2030 Challenge calls for a 50 percent reduction in the energy use and GHG emissions of all new buildings and major renovations by 2010. The Challenge also calls for an increasing reduction of both energy use and GHG emissions in increments every five years so all new buildings will be carbon neutral by 2030.\nThe Challenge also calls for new buildings to be designed to cut fossil fuel energy usage in half, and for existing buildings to be renovated to cut their fossil fuel energy usage in half.\nThe following organizations have adopted the Challenge: the American Institute of Architects, the U.S. Conference of Mayors, U.S. Green Building Council, National Association of Counties, California Public Utilities and Energy Commissions, and individual cities, counties and states.\nWill the recession affect the green building sector?\nThe green building sector has grown considerably in the 21st century. Despite the current economic recession, the construction and certification of green buildings will continue to increase, according to a recent report, How Green a Recession? 
– Sustainability Prospects in the US Real Estate Industry.\nThe report, commissioned by RREEF, a member of the Deutsche Bank Group, stated that the green building market will continue to accelerate, which will increase the “green share” of the building sector, and then will speed markets “to the tipping point where green buildings become the standard for quality real estate product.”\nThere is a potential for tremendous growth in the green building sector, according to the report, for four reasons:\nThe report cited four ways that the Obama administration can influence the green building market:\nAct local!", "score": 39.60039253160062, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "Canadian Consulting Engineer\nStudies of 1,300 buildings demonstrate advantages of going green\nEngineering\nTwo recently released studies, one by the New Buildings Institute (NBI) and one by CoStar Group, have shown that third party 
certified buildings outperform their conventional counterparts across a variety of metrics, including energy savings, occupancy rates, sale price and rental rates.\nIn the NBI study, the results indicate that new buildings certified under the U.S. Green Building Council’s LEED certification system are, on average, performing 25-30% better than non-LEED certified buildings in terms of energy use. The study also showed that Gold and Platinum LEED certified buildings have average energy savings approaching 50%.\nEnergy savings under EPA’s Energy Star program were also impressive. Buildings that have earned the Energy Star label use an average of almost 40 per cent less energy than average buildings, and emit 35 percent less carbon.\nThe results from both studies also strengthened the “business case” for green buildings.\nAccording to the CoStar study, LEED buildings command rent premiums of $11.24 per square foot over their non-LEED peers and have 3.8 percent higher occupancy.\nRental rates in Energy Star buildings represent a $2.38 per sq. ft. premium over comparable buildings and have 3.6 percent higher occupancy.\nAlso, Energy Star buildings are selling for an average of $61 per square foot more than their peers, while LEED buildings command $171 more per square foot.\nThe group analyzed more than 1,300 LEED Certified and Energy Star buildings representing about 351 million square feet in CoStar’s commercial property database of roughly 44 billion square feet, and assessed those buildings against non-green properties with similar size, location, class, tenancy and year-built characteristics to generate the results.\nThe NBI study was funded by USGBC with support from the U.S. 
Environmental Protection Agency and can be accessed at:\nFor more information on the CoStar study: http://www.costar.com/News/Article.aspx?id=D968F1E0DCF73712B03A099E0E99C679.", "score": 37.638796329952946, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "The results of the first systematic study of green buildings are in and they look good! Specifically, the study filtered sample data to Class A office buildings larger than 200,000 sf, 5 stories or more, built since 1970, and multi- tenanted. To compare green versus non-green, they used Energy Star and non-Energy Star buildings, and therefore, the sample contained 223 Energy Star buildings (111.7 million square feet) and 2,077 non-Energy Star buildings (889.1 million square feet). The results: (1) HIGHER occupancy rates, (2) HIGHER rental rates, and (3) HIGHER sales prices psf for Energy Star buildings.\nThe study also contains some interesting nuggets of information with respect to LEED and green development. You’ll notice that in terms of green developments in process, California is followed by the ambivalent, oil/wind-loving state of Texas. You’ll also notice that Hines is not only the leading developer of green buildings, but it’s the leading owner of green buildings (raising the inference that it’s good to be both the owner and developer of green buildings). And the top two types of tenants in green buildings are financial institutions and law firms. Lawyers aren’t that bad now are they?\nThe study drafters note that some changes need to be made to further accelerate acceptance of green buildings. They also note that the real barriers to going green are \"mostly a lack of planning and education.\" Well, we’re hoping to change that here at Jetson Green. Via Yudelson Associates.", "score": 37.38762865885435, "rank": 13}, {"document_id": "doc-::chunk-3", "d_text": "Satish B. 
Mohan; Benjamin Loeffert\nWednesday, Jun 14 10:30-11:30/Celebration 9-10\nAbstract: Buildings account for 40% of the nation’s CO2 emissions, 68% of electric consumption, 41% of total energy use and 14% of water consumed in the United States; these trends are unaffordable and must change. Studies have shown that green buildings can save approximately 30% in energy usage and lead to increased worker productivity and occupants’ health benefits. Also, green buildings have the potential for lower insurance premiums, lower waste disposal charges, reduced water and sewer fees, and increased rental rates. However, their initial cost can be 1 to 5% higher than conventional buildings. These additional initial costs are recouped in energy savings over a few years. This paper gives the design and construction steps of green buildings, and presents the results of a few studies done on the short-term and long-term costs and savings of green features. Actual data from four green buildings, built in various regions of the USA, have also been included. All the sampled green buildings cost an additional 2-3% and consume up to 33% less energy.", "score": 34.96946050051874, "rank": 14}, {"document_id": "doc-::chunk-1", "d_text": "A \"green\" home was defined as \"one that incorporates environmentally sensitive site planning; resource efficiency; energy and water efficiency; improved indoor air quality; and homeowner education or projects that would comply with the ICC 700 National Green Building Standard 
or other credible rating system.\"\nA variety of factors aiding the move to green\nBuilders and developers said that the three most important factors in their adoption of green building strategies were increasing energy costs, changes in codes and regulations, and wider availability and lower prices for green building products.\nBuilders also found that more buyers were willing to pay higher prices for a green home in 2013 than in 2011. The percentage of single-family builders who said that customers would pay more grew from 61% in 2011 to 73% in 2013. On the remodeling side, that grew from 66% to 79%.\nThe number of builders incorporating renewable energy systems into their projects also is on the rise. In 2013, 8% of builders surveyed said they included renewables on all of the projects, which was expected to grow to 20% by 2016. The proportion of builders offering renewables as an option was 34% in 2013 but expected to increase to 40% by 2016.\nOne source of frustration for both homeowners and builders has been that green features aren't always recognized by real estate appraisers or lenders, a situation that may be slow to change. The report says that appraisers now have better tools at their disposal, but adds that a relatively small number of multiple listing services — 185 of the country's nearly 850 total — provide fields where information about green building features can be added.\nOne Florida banker was quoted as saying that many appraisers show limited interest in making green features part of their reports.\nThere are many other details available in the 64-page report, which can be downloaded for free. 
(If you don't already have an account with McGraw Hill you'll have to create one to get access to the report.)\n- McGraw Hill Construction\nJun 15, 2014 8:13 PM ET\nJun 16, 2014 7:35 AM ET", "score": 34.49823148319154, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "What is Green Building?\nGreen building is the practice of creating structures and using processes that are environmentally responsible and resource-efficient throughout a building’s life-cycle from siting to design, construction, operation, maintenance, renovation and deconstruction. This practice expands and complements the classical building design concerns of economy, utility, durability, and comfort. Green building is also known as a sustainable or high performance building.\nThe built environment has a profound impact on our natural environment, economy, health, and productivity.\nIn the United States alone, buildings account for:\n- 72% of electricity consumption,\n- 39% of energy use,\n- 38% of all carbon dioxide (CO2) emissions,\n- 40% of raw materials use,\n- 30% of waste output (136 million tons annually), and\n- 14% of potable water consumption.\nEnvironmental benefits:\n- Enhance and protect ecosystems and biodiversity\n- Improve air and water quality\n- Reduce solid waste\n- Conserve natural resources\nEconomic benefits:\n- Reduce operating costs\n- Enhance asset value and profits\n- Improve employee productivity and satisfaction\n- Optimize life-cycle economic performance\nHealth and community benefits:\n- Improve air, thermal, and acoustic environments\n- Enhance occupant comfort and health\n- Minimize strain on local infrastructure\n- Contribute to overall quality of life", "score": 33.33702331522074, "rank": 16}, {"document_id": "doc-::chunk-2", "d_text": "Because tenancies may require attaining specific tiers, and because rental amounts may be contingent on achieving those tiers, whether a post-construction audit will confirm that a given building achieves its desired tier has already become, and will undoubtedly grow, 
a matter of concern across the country.\nEstimates of the cost of building \"green\" vary greatly. Generally speaking, such costs are estimated at between 2% and 15% more than the cost of conventional construction. How much of a premium is required to build green depends largely on the experience of the contractor. First-time \"green\" construction tends to be on the expensive side, while subsequent construction tends to be closer to the cost of conventional construction. This difference appears to be based on the contractor and subcontractors getting accustomed to using new, different techniques during the construction process. Thus, it is worthwhile to seek experienced contractors, or to use the same contractor over the course of multiple, similar jobs.\nAlso, there can be tremendous differences of opinion between the landlord and the tenant, or the landlord and its contractors, as to what constitutes acceptable green space. There are many open questions about the long-term viability of green building technologies: if a green technology fails during a tenancy, who is responsible for remediating it? If it cannot be remediated, such that the space is no longer \"green\", is the lease voidable by the tenant? If promotional information for a green technology claims a certain degree of efficiency, but the technology does not actually achieve such efficiency, who is responsible for the resulting failure? Considering these questions, it is clear that boilerplate contract language most likely will not adequately resolve the green issues that will arise, and when that happens, litigation will ensue.\nFor example, in Shaw Development v. 
Southern Builders (filed in Maryland Circuit Court in 2007, but presumably settled), it was alleged by Shaw Development that it lost $635,000 in tax credits from Maryland's Energy Administration because the builder failed to achieve LEED Silver rating for a condominium project within the eligibility period to obtain the credits. Remarkably, while Shaw was apparently counting on receiving those credits, its building contract failed to address who would be responsible if delays caused them to be lost.", "score": 32.94066286622833, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "Green building refers to a sustainable, resource-efficient, and environmentally responsible process of building. It includes siting, design, construction, operation, maintenance, renovation, and demolition of a structure. Green buildings re-use existing materials; conserve water; reduce the need for artificial lighting, heating, cooling, and ventilation; and provide optimal air quality for building occupants. New construction and additions or upgrades can all incorporate green construction techniques. There are basic building efficiency upgrades that can be done to improve building performance, especially in older buildings. Multiple organizations offer financial incentives for completing home improvements that result in energy efficiency and conservation upgrades. For information on available programs, please visit the Government Funding Programs page.\nAdditionally, there are multiple non-profit organizations that support green construction, including the following:\nU.S. Green Building Council’s Leadership in Energy and Environmental Design (“LEED”) program. LEED is a green building tool that addresses the entire building lifecycle and provides third-party verification of green buildings. Building projects satisfy prerequisites and earn points to achieve different levels of certification. 
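The prerequisites-plus-points scheme just described can be sketched in code. The thresholds used here (40/50/60/80 points on a roughly 110-point scale, as in the published LEED 2009 and v4 systems) are assumptions drawn from general knowledge of the rating system, not from the text above:

```python
# Sketch of LEED's points-to-tier mapping. Thresholds are assumed
# (LEED 2009/v4: 40/50/60/80 points out of ~110); all prerequisites
# are mandatory before any points count toward a certification level.
def leed_level(points: int, prerequisites_met: bool = True) -> str:
    if not prerequisites_met:
        return "Not certifiable"  # missing any prerequisite blocks certification
    if points >= 80:
        return "Platinum"
    if points >= 60:
        return "Gold"
    if points >= 50:
        return "Silver"
    if points >= 40:
        return "Certified"
    return "Not certified"
```

For example, a project scoring 63 points with all prerequisites met would land in the Gold tier under these assumed thresholds.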
Points may be earned in categories including sustainable sites, water efficiency, energy and the atmosphere, materials and resources, indoor environmental quality, and innovation in design.\nBuild it Green’s GreenPoint Rated (“GPR”) program. GreenPoint Rated focuses on residential projects in California. GPR verifies that a home has been built or remodeled according to proven green standards. The GPR program has certificates for whole homes and for elements of homes (e.g., remodeling a kitchen). GPR awards points for projects across different categories that include energy efficiency, resource conservation, indoor air quality, water conservation, and community.\nThe International Living Future Institute's Living Building Challenge program. Living Building Challenge is a building certification program, advocacy tool, and philosophy that defines the most advanced measure of sustainability in the built environment possible today and acts to rapidly diminish the gap between current limits and the end-game positive solutions we seek. The Challenge comprises seven performance categories called Petals: Place, Water, Energy, Health & Happiness, Materials, Equity and Beauty. Petals are subdivided into a total of twenty Imperatives, each of which focuses on a specific sphere of influence. This compilation of Imperatives can be applied to almost every conceivable building project, of any scale and any location—be it a new building or an existing structure.\nThe City of Lafayette recognizes green building efforts through the annual Lafayette Awards of Environmental Excellence, more commonly known as the Lafayette Green Awards.
Green building, also known as sustainable construction, is becoming a popular concept worldwide. And there’s good reason for that. It is estimated that 39% of total energy consumption and 65% of electricity consumption in the US can be attributed to buildings. So it’s not surprising that lawmakers, both at state and federal level, are trying to encourage green construction. If you own a construction company and are wondering whether you should adopt sustainable construction, consider these benefits.\nGreen construction consistently emphasizes the use of non-toxic materials that have a favorable effect on occupant well-being. The U.S. Green Building Council (USGBC) uses the LEED (Leadership in Energy & Environmental Design) certification system when evaluating green buildings. It’s currently working on LEED’s fourth update, which addresses such issues as air quality and the overall effect on human health. This is likely to spur further interest in green buildings from people with certain health conditions and those wishing to improve their general well-being.\nIt might come as a surprise, but building sustainably isn’t always more expensive than building the traditional way. Even when it is slightly more expensive, the difference is more than offset in the long run. Green buildings have much lower running costs and utility bills. Thus, more and more end customers realize sustainable buildings are worth the extra initial investment. Moreover, the prices of green buildings remain more stable over time when compared to traditional buildings. Finally, a factor worth considering for commercial buildings: studies show that occupants of green buildings are more productive, and their labor therefore more cost-efficient.\nIncentives for Builders\nOn a federal level, the Energy Policy Act of 2005 allowed for some tax deduction from the costs of making commercial buildings more energy efficient. 
The incentive was initially planned for just a year (from 2006 to 2007), but was later extended until the end of 2013. Similar pieces of legislation are very likely to be adopted in the future as well. As for cities and states, many of them have already put in place their own incentives for promoting sustainable construction. Currently, California and Texas appear to be the national leaders in LEED projects.", "score": 32.34029229578367, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "The cost to build green remains on par with standard construction costs, according to a study published in July by cost management consulting firm Davis Langdon. Conducted in 2006, the study, “The Cost of Green Revisited,” is an update to the firm's 2004 study that found that sustainable building was not necessarily more expensive than non-sustainable building.\nAccording to the new study, while average construction costs have risen 20-30% across the board since 2004, most of the LEED-seeking projects studied achieved sustainability goals within the initial budget, with only a few projects needing minimal supplemental funding.\nA total of 221 building projects were analyzed, 83 with a LEED goal and 138 without. The types of buildings studied included 60 academic facilities, 70 laboratories, 57 libraries, 18 community centers, and 17 ambulatory care facilities.\nDownload the study at: www.davislangdon.com/usa/research
Census data and includes commercial and institutional buildings certified under LEED, through which approximately 2.2 billion square feet of space has been certified worldwide through 2012.\n“Securing a spot on this list is a remarkable achievement for everyone involved in the green building movement in these states,” said Rick Fedrizzi, President, CEO & Founding Chair, USGBC. “From architects and designers to local chapter advocates, their collective efforts have brought sustainable building design and use to the forefront of the national discussion on the environment, and I applaud their efforts to create a healthier present and future for the people of their states.”\nOnce again, the District of Columbia tops the ranking, with 36.97 square feet of LEED space certified per resident in 2012.\nMeanwhile, Virginia moved into the position as the top state, with 3.71 square feet certified per resident in 2012, overtaking Colorado, with 2.10 square feet certified per person.\nOther top states include Massachusetts, which moved up three positions from 2011, with 2.05 square feet per person; Illinois, with 1.94 square feet; and Maryland, with 1.90 square feet of LEED space certified per resident in 2012.\nReflecting the ongoing trend of LEED existing buildings outpacing their newly built counterparts, in 2012 the LEED for Existing Buildings: Operations & Maintenance rating system accounted for 53% of total square footage certified in these states, compared to 32% certified under LEED for New Construction.\nThe full ranking, which includes 10 states plus Washington, D.C., is as follows:\n\"Buildings are a primary focus of our Mayor's Sustainable DC initiative,\" said Keith Anderson, Interim Director, District of Columbia Department of the Environment. \"We are indeed thrilled to be leading the nation in per-capita LEED certified space. 
Our private and public building sectors are boldly leading with the development of high performing green buildings, and we have aligned governmental policies to support such innovation.\"", "score": 31.390411514956995, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "LEED projects are responsible for diverting more than 80 million tons of waste from landfills, and by 2030 that number is expected to grow to 540 million tons.9\n- In the United States alone, buildings account for almost 40 percent of national CO2 emissions and out-consume both the industrial and transportation sectors, but LEED-certified buildings have 34 percent lower CO2 emissions, consume 25 percent less energy and 11 percent less water, and have diverted more than 80 million tons of waste from landfills.10\n- The market is responding to these cost savings and environmental benefits at a dramatic rate. According to a Dodge Data & Analytics World Green Building Trends 2018 SmartMarket Report, global green building activity continues to rise and nearly half of survey respondents expect the majority of their projects in the next three years to be green buildings.11\n1 McGraw Hill Construction (2012). World Green Buildings Trends: Business Benefits Driving New and Retrofit Market Opportunities In Over 60 Countries.\n2 U.S. Department of Energy (2011). Re-Assessing Green Building Performance: A Post Occupancy Evaluation of 22 Buildings.\n3 McGraw-Hill Construction (2012). World Green Buildings Study. Accessed Nov. 29, 2012.\n4 Booz Allen Hamilton and the U.S. Green Building Council (2015). 2015 Green Building Economic Impact Study.\n5 National Trust for Historic Preservation (2011). The Greenest Building: Quantifying the Environmental Value of Building Reuse.\n6 U.S. Geological Survey (2000). 2000 data.\n7 McGraw Hill Construction (2010). Green Outlook 2011: Green Trends Driving Growth.\n8 U.S. Environmental Protection Agency. Green Building, Green Homes, Conserving Water. 
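The per-capita figures in the ranking above are a simple ratio: total LEED-certified square footage divided by the state's Census population. A minimal Python sketch; the Virginia population is the 2010 U.S. Census count, but the certified-footage figure is an illustrative assumption back-calculated from the article's quoted 3.71 square feet per resident, not an official USGBC number:

```python
# Per-capita LEED space = certified square feet / state population.
def per_capita_leed(certified_sq_ft: float, population: int) -> float:
    return certified_sq_ft / population

VA_POPULATION_2010 = 8_001_024   # 2010 U.S. Census count for Virginia
VA_CERTIFIED_SQ_FT = 29_700_000  # assumed 2012 total, implied by 3.71 sq ft/resident

print(round(per_capita_leed(VA_CERTIFIED_SQ_FT, VA_POPULATION_2010), 2))  # 3.71
```

The same ratio, applied state by state, is all that is needed to reproduce a per-capita ranking from raw certification totals.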
Water Use and Energy.\n9 Watson, Rob. Greenbiz Group (2011). Green Building and Market Impact Report.\n10 U.S. Department of Energy (2011). Re-Assessing Green Building Performance: A Post Occupancy Evaluation of 22 Buildings.\n11 Dodge Research and Analytics (2018). World Green Building Trends 2018 SmartMarket Report.", "score": 31.18271216611033, "rank": 22}, {"document_id": "doc-::chunk-1", "d_text": "The latest version of LEED places more focus on energy efficiency and reduction of carbon dioxide; identifies existing credits for bonus points based on specific regional issues to reflect local priorities; and tightens the energy savings requirements to 10 percent over the American Society of Heating, Refrigerating and Air-Conditioning Engineers standard for 2007 for new buildings, and 5 percent for renovations. In addition, LEED 2009 was designed to help the USGBC obtain a better understanding of the relationship between credits and building performance. With this goal in mind, LEED 2009 requires that teams agree to report post-occupancy energy and water use as part of project registration. By monitoring energy and water usage, the USGBC is attempting to ensure energy savings goals are being met and to determine ways to constantly make the standard more effective.\nDue to constant improvement to the rating system and the increasing trend toward sustainable building practices, LEED certifications have been on the rise.\nWhile the number of LEED certifications has steadily increased and many buildings strive to attain LEED certification, LEED certification is voluntary and, as a result, many buildings are not LEED certified. Even when buildings are certified, under LEED, building owners may choose to address certain aspects of energy efficiency, such as lighting, but leave other aspects out. 
Furthermore, when given a choice, some building owners are choosing not to follow these guidelines at all in order to reduce their initial investment, without considering the total cost of ownership and the building’s impact on the environment.\nTo address this issue, the U.S. has approved its first national green building code. The International Green Construction Code (IgCC) was passed after two years of development. The code applies to all new and renovated commercial buildings and residential buildings higher than three stories. The goal of IgCC is to set enforceable minimum standards for all aspects of building design and construction, including energy efficiency, water efficiency, site impacts, building waste and materials. The code also gives voluntary certifications more flexibility to set the high-performance requirements even higher so that buildings are awarded for going above the minimum standards of performance, instead of being awarded for meeting what have increasingly become expected standards.\nOne of the mandatory requirements is related to site development and land use. The IgCC eliminates greenfield development, with some exceptions based on existing infrastructure.", "score": 31.093490999846132, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "Does a green building cost less to create and maintain than a conventional building? The General Services Administration (GSA) says yes.\nAn in-depth study of 22 green federal buildings across the U.S. showed that on average, the representative buildings chosen from GSA’s portfolio regularly outperformed national averages of building performance data. These buildings use less energy and water and cost less to maintain, as well as emit less carbon dioxide and have more satisfied occupants than conventionally designed buildings.\nBut is this true for private sector facilities? The gap between the costs of green vs. non-green has certainly narrowed considerably, but has that gap closed yet? 
Building professionals weigh in on creating, remodeling, and maintaining green facilities.\nGreen Building Performance\nThe study, Green Building Performance: A Post Occupancy Evaluation of 22 GSA Buildings, compares each building’s energy use intensity, energy cost, carbon dioxide emissions, maintenance costs, water use, and occupant satisfaction against widely accepted industry and GSA baselines. Sixteen of the buildings were LEED-NC certified or registered, with the remaining six meeting the requirements of other sustainable building programs, including ENERGY STAR and the California Title 24 Energy Standard.\nThree of the five LEED Gold buildings performed better than industry baselines, but the other two – a Department of Homeland Security (DHS) facility in Omaha, NE, and the Census Bureau office complex in Suitland, MD – earned unexpectedly low scores in some areas.\nThe DHS building bested the industry baseline scores in all categories except water use, which not only exceeded the national average but was also much higher than when the building was previously assessed. This raised suspicion about leaks, unexpected use, and other concerns, but the GSA ultimately realized the spike was due to a shift in occupancy as the 10,000 square feet of space left vacant when the building opened was filled, according to Eleni Reed, chief greening officer for GSA’s Public Buildings Service.\nThe Suitland facility earned low scores in three of the eight categories, but further investigation revealed that its size and densely populated spaces contributed to its scores relative to the industry average. 
Building occupants were intensely focused on the 2010 Census when the building study was conducted, Reed notes, adding that buildings of this size are uncommon and are likely not represented in the industry averages that the study compared with GSA’s 22-building sample.", "score": 31.020076330830932, "rank": 24}, {"document_id": "doc-::chunk-2", "d_text": "EIA Weighs In on the Report\nEven with the undeniable increase in energy-efficient buildings, various factors led EIA to scale back its 2005 projections in the 2011 report, and it's difficult to determine the exact role greener buildings played, said Owen Comstock, an EIA research analyst.\nTruthout doesn't take corporate funding - this lets us do the brave reporting and analysis that makes us unique. Please support this work by making a tax-deductible donation today - click here to donate.\nThe Architecture 2030 report \"is not looking at the macroeconomic picture,\" he said.\nThe EIA anticipates that the recent economic recession—which lasted from late 2007 to mid-2009, but whose effects are still being felt—will lead to a long-term slowdown in the construction sector, reducing the amount of \"floor space\" that gets built each year for several years.\nThe electricity that powers U.S. buildings will also get cleaner over the years, as more renewable energy and natural gas-burning plants replace fossil fuel-powered facilities, helping to shrink the sector's carbon footprint. Further, more Americans are expected to migrate from colder northern states to the milder U.S. South and West, reducing the amount of energy used to heat homes, the EIA said.\nWorsening climate change will also play a role.\nIn past reports, EIA's weather projections predicted cooler temperatures, Comstock said. In the 2005 report, analysts determined future weather by using average temperatures over the last 30 years. 
The 2011 report, however, uses the average of the last 10 years to better reflect the warming trend that climate scientists are observing today. Although air conditioning use is expected to rise as a result of higher temperatures, Comstock said that won't offset energy reductions from heating systems.\nIn an interview, Mazria acknowledged that the EIA projections are based on an amalgam of factors—\"everything across the board,\" he said—including state policies targeting greenhouse gas reductions and renewable portfolio standards that require utilities to source a certain percentage of power from cleaner sources. In its report, Architecture 2030 does point out that the revised projections are partially due to the fact that fewer buildings will get built.\nBut that doesn't negate the fact that the green-building surge is producing stunning results, said Vincent Martinez, director of research for Architecture 2030. \"It's not just that we're building less buildings.", "score": 30.668138705798782, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "By Rona Fried, Ph.D.\nOver the past few months, as the commercial construction sector shows signs of rebounding, there have been some major advances in green building policy and measurement. Luckily, they didn't require Congress to pass legislation, or they never would have passed.\nEnergy Conservation Codes Upgraded\nThe International Energy Conservation Code was upgraded for 2012 – it now requires homes and buildings to achieve energy savings 30% higher than the 2006 code. Since homes and buildings produce fully half of US greenhouse gases and use over 75% of the electricity generated from power plants, the new code is a very significant energy policy decision. In fact, the changes represent the largest single-step efficiency increase in the history of the national energy code.\nAbout 500 state, county and city building and fire code officials from around the US voted to upgrade the code.
The changes – which affect new construction and retrofits for homes, businesses, schools, churches and commercial buildings – were sought by the U.S. Department of Energy, the U.S. Conference of Mayors, the National Association of State Energy Officials, various governors, American Institute of Architects and the broad-based Energy Efficient Codes Coalition (EECC).\nLocal building codes across the country are based on these national model standards. The new codes address all aspects of residential and commercial building construction, laying a strong foundation for efficiency gains.\nIn the residential sector, improvements will:\n- Better seal new homes to reduce heating and cooling loss\n- Improve the efficiency of windows and skylights\n- Increase insulation in ceilings, walls, and foundations\n- Reduce wasted energy from leaky heating and cooling ducts\n- Improve hot-water distribution systems to reduce wasted energy and water in piping\n- Boost lighting efficiency\nIn addition to those features, commercial building codes include continuous air barriers, daylighting controls, use of economizers in additional climates, and a choice of three paths for designers and developers to increase efficiency: renewable energy systems, more efficient HVAC equipment, or improved lighting systems. It also requires commissioning of new buildings to ensure that actual building energy performance meets the design intent.\n\"It’s notable that the votes that will have the most profound impact on national energy and environmental policy this year weren’t held in Washington or a state capital, but by governmental officials assembled by the International Code Council in Charlotte, North Carolina,\" said EECC Executive Director William Fay.", "score": 30.573918963381008, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "Sources: U.S. 
Green Building Council, Washington, D.C.; CP staff\nWith 136 projects representing 3.73 square feet of certified space per resident, Massachusetts heads the new Top 10 States for LEED, the U.S. Green Building Council’s annual ranking of states showing significant strides in sustainable building design, construction and transformation.\nNow in its seventh year, the ranking assesses the total square feet of LEED-certified space per resident based on U.S. Census data and includes commercial and institutional green building projects certified during 2016. This year’s list has the highest average (2.55 square feet) per capita of LEED-certified space among the top 10 states since 2010. Four of the nine states included in the 2015 list increased the square feet of space they certified per resident in 2016 (Massachusetts, Colorado, California and Virginia).\n“LEED guides our buildings, cities, communities and neighborhoods to become more resource- and energy-efficient, healthier for occupants and more competitive in the marketplace,” contends USGBC President Mahesh Ramanujam. “The green building movement continues to evolve with advancements in technology, benchmarking and transparency, and the states on this list are leading the way toward a more sustainable future.”\n|TOP 10 STATES FOR LEED|\n|Rank|State|Certified Gross Square Footage|Per-Capita Certified GSF|Project Count|", "score": 30.44383842715626, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "The amount of recycled materials used in the building is also taken into account while measuring the carbon footprint. These measurements account for the energy expended in the creation of new materials for use in the building.\nLeadership in Energy & Environmental Design (LEED) is a globally recognised green building certification system. 
It grants independent, third-party verification that a building or large property was designed and built using methods aimed at improving energy savings, water efficiency, CO2 emissions reduction and indoor environmental quality. The LEED Certification process was developed by the US Green Building Council (USGBC) and provides a concise framework for recognising and executing practical and measurable green building design, construction, operations and maintenance solutions.\nThe LEED-India Green Building Rating System is an India-specific and accepted benchmark for the design, construction and operation of high-performance green buildings.\nIn India, the Indian Green Building Council (IGBC) provides LEED ratings to structures and aims to make the country one of the leaders in green buildings by the year 2015. The Green Rating for Integrated Habitat Assessment (GRIHA) is the National Rating System of India. It has been conceived by TERI (The Energy and Resources Institute) and developed jointly with the Ministry of New and Renewable Energy, Government of India. It is a design evaluation system for green building and is intended for all kinds of buildings across every climatic zone in India.\nThanks to the gradual spread of awareness about eco-friendly construction, there has been a considerable rise in the number of registered green buildings in India.\nToday, more and more developers are taking up green construction, as it benefits them in both construction time and material costs. 
Chennai is considered a green city, with more green buildings than most other parts of India.\nApart from many residential green buildings, there are also corporate examples such as the Turbo Energy Office Complex in RA Puram, Menon Eternity in Alwarpet and the Shell Business Service Centre.\nChennai takes pride in having more than 45 structures certified as eco-friendly green buildings by the Indian Green Building Council (IGBC).\nAmong Chennai's green landmarks is Olympia Tech Park, Guindy, which was rated the largest green building in the world when certified. A few of the gold-rated buildings in Chennai include Anna Centenary Library, Express Avenue Mall and the New Tamil Nadu Assembly building.", "score": 30.238584579959454, "rank": 28}, {"document_id": "doc-::chunk-1", "d_text": "\"The great majority of environmental organizations had invested in keeping companies on the other side of a fence,\" says Richard Fedrizzi, the current CEO of the council. \"David [Gottfried] thought that we could do things differently. If we could invite business to the table, we could develop standards relative to building performance, buy in at the very top, and be able to transform the marketplace toward sustainable buildings.\"\nThe result, introduced in 2000, was LEED. The LEED rating system is simple in concept. Architects and engineers shoot for points in six categories: siting, water use, energy, materials, indoor air quality, and \"innovation in design.\" Once a building is complete, a representative from the Green Building Council reviews the documentation—plans, engineers' calculations—and awards points out of a possible 69: certified (at least 26 points for new construction), silver, gold, or platinum (at least 52 points).\nWatson says the point system was specifically constructed to entice builders and drive the market in a green direction. \"One definable action equals one point,\" he says. Bike racks, one point; recycling room, one point. 
\"We threw a few gimmes in there so people could get into the low 20s ... and say, 'We can do this.'\"\nAnd it worked. Power-suited developers and hard hats have signed on. More than 6,500 projects have registered for LEED certification since 2000, and new categories such as commercial interiors and existing buildings have been added to the original LEED for new construction. Forty-two thousand people have paid $250 to $350 and passed exams to become \"LEED-accredited professionals.\"\nThe council's revenue has been growing at 30% or better a year, with close to 20% coming from certification. Getting the LEED plaque is not cheap. In February, the mayor of Park City, Utah, told a building-industry publication, \"On the Park City Ice Arena [$4.8 million project cost], we built it according to LEED criteria, but then we realized that [certification] was going to cost $27,500. So we ordered three small wind turbines instead that will power the arena's Zamboni.\"\nMuch of this growth is credited to Fedrizzi, a former marketing executive for an air-conditioning company who became CEO in 2004.", "score": 29.53980266223512, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Architecture 2030, a building sector research and advocacy group, issued a report last week asserting that the greening of the U.S. building sector is on track to deliver far more energy savings than government officials predicted only a handful of years ago, with important implications for the country's energy and climate picture.\nThe report looked at data released without fanfare almost a year ago by the Energy Information Administration (EIA), the analysis arm of the Department of Energy, which publishes projections for U.S. energy supply and demand each spring. Architecture 2030 compared EIA's 2005 and 2011 projections and found something that surprised them. 
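The 69-point scale described above maps cleanly to a threshold lookup. A short Python sketch: the Certified (26+) and Platinum (52+) cutoffs come from the article, while the Silver (33) and Gold (39) cutoffs are assumptions drawn from the original LEED-NC v2 scale and may differ in later LEED versions:

```python
# Certification thresholds for the original 69-point LEED-NC scale,
# checked highest first. Certified (26) and Platinum (52) are from the
# article; Silver (33) and Gold (39) are assumed from LEED-NC v2.
THRESHOLDS = [
    (52, "Platinum"),
    (39, "Gold"),
    (33, "Silver"),
    (26, "Certified"),
]

def leed_level(points: int) -> str:
    """Map a 0-69 point total to a certification level."""
    for cutoff, level in THRESHOLDS:
        if points >= cutoff:
            return level
    return "Not certified"

print(leed_level(27))  # Certified
print(leed_level(55))  # Platinum
```

The "one definable action equals one point" design means a builder can read this table directly as a to-do list: each extra action nudges the total toward the next cutoff.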
The EIA had quietly, but dramatically, lowered long-term projections for energy use and carbon emissions from America's homes, office buildings and other commercial properties.\nEnergy consumption from buildings will increase by 14 percent from 2005 to 2030, the EIA said, down from the 44 percent spike it predicted seven years ago. Architecture 2030 says it amounts to eliminating the electricity output from 490 500-megawatt coal-fired plants over the same 25-year period.\nThe new projections mean Americans will save an additional $3.7 trillion on energy bills through 2030.\nGreenhouse gas emissions from buildings are slated to increase by less than 5 percent, compared to an estimated 53 percent rise in 2005, the data also revealed. Currently, buildings account for 40 percent of both U.S. electricity consumption and heat-trapping gas emissions.\nEdward Mazria, founder and CEO of Architecture 2030, which advocates for a carbon- neutral building sector, said his group's analysis of the data is the first to publicize the green building movement's contributions to national energy use so far and into the future. \"This is a huge national snapshot of where we've been and where we're heading,\" he told InsideClimate News.\nMazria says the main driver of the new projections is the hike in building energy standards. \"It's policies, it's building codes, it's better building design and more efficient technologies. We're building to a better standard,\" he said. 
The group wants the report to provide policymakers and builders a reason to continue on this path.\nBut EIA says there is more to it than that.", "score": 29.45875977481348, "rank": 30}, {"document_id": "doc-::chunk-1", "d_text": "Agency analysts told InsideClimate News that Architecture 2030's report downplays factors that have nothing to do with making buildings greener like the huge slowdown in construction from the recession that means fewer buildings have been, and will get, built.\nFor the most part, though, EIA officials agree with the bottom line of Mazria's report that green improvements are saving energy. \"Over the years, our projections for buildings' energy consumption have decreased, and a lot of that is due to increases in efficiency,\" Erin Boedecker, the lead building analyst at EIA, said.\nThe Rise of the Green Building\nSince 2005, the federal government and most states have adopted various building codes and efficiency tax incentives that have helped spur a green-building boom.\nMost notably, a mandate under the 2007 Energy Independence and Security Act requires all federal buildings to reduce energy use by 30 percent in 2015 compared to 2003 levels. In 2010, California passed the nation's first mandatory statewide green building code, which took effect last year. Meanwhile, more than half of all states have adopted the 2009 International Energy Conservation Code, a standard created by the International Code Council, a U.S.-based nonprofit, which requires buildings to meet efficiency standards for heating units and air conditioners, water heaters and lighting.\nAs a result of these and other policies, efficiency fixes such as energy-efficient lighting and appliances, insulation, tightly sealed windows and rooftop solar panels, have become common in new and renovated buildings. 
Green construction jumped from two percent of new non-residential buildings in 2005 to nearly 30 percent in 2010, according to the latest figures from McGraw Hill Construction, a publisher of construction information.\nMarket forces are also at play. While it can be expensive to make buildings energy efficient, the return on investment, mainly from avoided electricity or heating costs, is nearly 10 percent higher than from conventional buildings, according to McGraw Hill.\nGreen builders can also receive thousands in rebates and tax incentives from state governments, allowing them to recoup upfront costs even faster. An additional lure: A building with green features generally has higher value, occupancy rates and can fetch higher rents than it would otherwise.", "score": 29.19715047643736, "rank": 31}, {"document_id": "doc-::chunk-1", "d_text": "Solar power use in buildings will accelerate with the extension of solar energy tax credits for buildings through 2016 and the prospect of increasing utility focus on renewable power goals for 2015 and 2020. As before, third-party financing partnerships will continue to grow and provide capital for large rooftop systems.\n7. Local governments will increasingly mandate green buildings from both themselves and the private sector. While concern over economic impacts of green buildings mandates will be present, the desire to reduce carbon emission by going green will lead more government agencies to require green buildings.\n8. Zero net energy designs for new buildings will gain increasing acceptance in both public and private buildings. “I've shown that you can get building energy use down to low levels with better design,” said Yudelson, “and that makes it easier and more cost-effective to buy green power to displace the remaining energy use.”\n9. Green homes will come to dominate new home developments in more sections of the U.S., as builders increasingly see green as a source of competitive advantage. 
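The return-on-investment argument above reduces to simple-payback arithmetic: divide the upfront green premium by the annual savings it produces. A sketch with purely illustrative numbers (none of these figures appear in the article):

```python
# Simple payback = upfront premium / annual savings.
# Example assumptions (hypothetical): a 5% premium on a $2M build
# that cuts $25,000/year in energy costs.
def simple_payback_years(premium_cost: float, annual_savings: float) -> float:
    return premium_cost / annual_savings

premium = 0.05 * 2_000_000  # $100,000 extra up front
print(simple_payback_years(premium, 25_000))  # 4.0
```

Rebates, tax incentives and higher rents shorten the payback further, since they reduce the numerator or add to the denominator.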
“This trend was foreseen in my 2008 book, Choosing Green (New Society Publishers), which for the first time documented the large number of new green housing developments in the U.S. and Canada.”\n10. European green building technologies will become better known and more widely adopted in the U.S. and Canada. “My forthcoming 2009 book, Green Building Trends: Europe (Island Press), will be out in the spring and will help accelerate this trend, along with more European architects and engineers opening offices in the U.S.”\nYudelson Associates is an international firm engaged in sustainability planning and green building consulting. Yudelson is widely acknowledged as one of the nation's leading experts on green building and green development. He is the author of eight green building books and serves as research scholar for real estate sustainability for the International Council of Shopping Centers, a 70,000-member international trade organization. He is a frequent green building speaker at industry and professional conferences and chaired the annual Greenbuild show from 2004 through 2008.\nAdditional information is available at www.greenbuildconsult.com.", "score": 29.11443185063363, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "It's hard to overstate the potential (and necessity) of greening our existing building stock. Buildings account for 73% of electricity consumption in the U.S. and 38% of CO2 emissions. Can you imagine the dent we can make with added efficiency in this sector - environmentally and economically? Plus, existing buildings are all around us. Chances are, you're inside of a building (an existing one) as you read this blog entry - talk about an accessible opportunity.", "score": 28.526814286668884, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "LEED Green Building Rating System… California Title 24… ASHRAE 90.1... 
Not only is the building industry learning a new vocabulary, but the green revolution is creating a complete new set of expectations, standards, regulations, codes, and, in short, a new way of doing things for electrical contractors. The environmental shift is upon us because the energy consumed by buildings in the United States is staggering. According to the Office of the Federal Environmental Executive, buildings account for 37 percent of primary energy use and 68 percent of all electricity use. They demand 60 percent of non-food/fuel raw materials use, generating 136 million tons of construction and demolition debris per year. That translates into 40 percent of nonindustrial solid waste and 31 percent of mercury in municipal solid waste. Buildings use 36 billion gallons of water per day, which is 12 percent of potable water, and in many urban systems, they create 20 percent loss of potable water due to leakage. They also produce 35 percent of all carbon dioxide emissions and 49 percent of all sulphur dioxide emissions.\nIf green building trends have not yet affected your part of the industry and the way you do business, then they will soon. Insiders in the green building industry are loudly proclaiming to anyone willing to listen that their way of doing business is the wave of the future. Those who are willing to do green business early on will qualify for and will win business on the front end of this revolution; contractors who drag their feet will not get the job. Insiders further boldly claim the perpetual naysayers who refuse to ever comply will not survive the green transition.\nThose are pretty strong words to a trade that proudly wears its conservative, established way of doing things as a badge of honor. 
But the green proponents back up their claims by pointing out, among other things, that many communities around the country, including some large cities such as San Francisco; Boston; Seattle; Scottsdale, Ariz.; and Washington, D.C., now require some or all of their new public buildings to be green by some codified standard.\nAnd, it is not just governments that are going green; private corporations are weighing in as well. For example, Fireman’s Fund Insurance Cos. announced in October 2006 that it is the first and only insurer to offer specific coverage for green commercial buildings and to address the unique risks associated with sustainable building practices.", "score": 27.830530475160394, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "NEW YORK - New construction starts for May were at a seasonally adjusted annual rate of $556.1 billion, up 1 percent compared to the previous month, according to figures released by McGraw-Hill Construction.\nNonresidential building registered its strongest performance so far in 2004, outweighing a modest retreat for housing and a more substantial decline for \"nonbuilding\" construction, such as public works and electric utilities.\n\"The construction industry has picked up the pace in recent months,\" said Robert A. Murray, vice president of economic affairs for McGraw-Hill Construction. \"Single-family housing continues to be very strong, and now nonresidential building is beginning to see more sustained improvement, marking a change from its weakening trend over the past three years.\"\nNonresidential building jumped 11 percent in May to $164.7 billion. School construction edged up 3 percent, while health care facilities and public buildings reported gains of 15 percent. Transportation terminal work increased 19 percent.\nMcGraw-Hill also reported that residential building, at $312.7 billion, settled back 1 percent in May.
Single-family housing was down 4 percent in dollar volume, although the amount was still 10 percent above the average pace in 2003. Multifamily housing grew 14 percent.\nNonbuilding construction dropped by 9 percent to $78.6 billion. According to McGraw-Hill, much of the nonbuilding retreat was the result of an 88 percent plunge in electric utilities, following the strong amount of new power plants started during March and April. The public works categories registered a mixed pattern. Highways and bridges increased 12 percent after a weak April, and water-supply systems were up 4 percent. Sewers and riverfront developments showed respective declines of 5 percent and 9 percent.\nDuring the first five months of 2004, total construction on an adjusted basis was up 10 percent relative to the same period in 2003. Residential building led with a 21 percent gain. Nonresidential building in the January-May period was down 2 percent from a year ago, while nonbuilding construction was down 5 percent.", "score": 27.70140649935147, "rank": 35}, {"document_id": "doc-::chunk-1", "d_text": "firms, but so is the rapid rise of green in many of the developing countries,” said Stephen Jones, Senior Director of Industry Insights, Dodge Data & Analytics. “Expertise from experienced green designers, builders and manufacturers from the U.S. is likely to be essential to support the aggressive green building expectations revealed by the study respondents.”\nIn the U.S., the highest percentage of respondents report that they expect to work on new green institutional projects (such as schools, hospitals and public buildings), green retrofits of existing buildings and new green commercial construction (such as office and retail buildings) in the next three years. When compared with global averages, it becomes clear that the U.S. is a leader in new green institutional construction and green retrofits of existing buildings.\n• 46 percent of U.S. 
respondents expect to work on new green institutional buildings, compared to 38 percent globally;\n• 43 percent of U.S. respondents plan to work on green retrofits of existing buildings, again well above the global average of 37 percent.\nThe U.S. is also distinguished from the global findings in terms of the importance it places on reducing energy consumption as an environmental reason for building green. Over three quarters (76 percent) of U.S. respondents consider this important, nearly double the percentage of the next most important environmental factor, which is reducing water consumption. While the other 12 countries in the study prioritize the reduction of energy consumption, only Germany, Poland and Singapore do so to the same extent.\n“The survey shows that global green building activity continues to double every three years,” said United Technologies Chief Sustainability Officer John Mandyck. “More people recognize the economic and productivity value that green buildings bring to property owners and tenants, along with the energy and water benefits to the environment, which is driving the green building industry’s growth. It’s a win-win for people, planet and the economy.”\nThe study demonstrates the benefits of building green, with median operating cost decreases for green buildings of 9 percent expected in just one year globally. Building owners also report seeing a median increase of 7 percent in the value of their green buildings compared to traditional buildings, an increase that is consistent between newly built green buildings and those that are renovated green. These business benefits are a critical driver for the growth of green building anticipated globally.\nThe U.S. is also notable for having the lowest percentage of respondents who report that their company uses metrics to track green building performance.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "Green buildings, as represented by the U.S. 
Green Building Council's Leadership in Energy and Environmental Design (LEED) Green Building Rating System, are an undisputed market success. In the eight years since the launch of LEED, green has firmly established itself among mainstream leaders in the building sector, representing tens of billions of dollars in value put in place and materials sales. LEED was created to reduce the environmental impacts of the built environment, but so far no comprehensive evaluation of the overall impact of LEED has been conducted. Until now.\nThis Green Building Impact Report is the first-ever integrated assessment of the land, water, energy, material and indoor environmental impacts of the LEED for New Construction (LEED NC), Core & Shell (LEED CS) and Existing Building Operations and Maintenance (LEED EBOM) standards. (We did not include Commercial Interiors due to concerns about double-counting, which we hope to have resolved before the release of next year's report.)\nIn this report we attempt to answer whether commercial green buildings live up to their name -- that is, that they are engendering demonstrable environmental improvement.\nOur findings are both encouraging and cautionary. Overall, we believe that LEED buildings are making a major impact in reducing the overall environmental footprint of individual structures. However, significant additional progress is possible and indeed necessary on both the individual building level and in terms of market penetration if LEED is to contribute in a meaningful way to reducing the environmental footprint of buildings in the U.S.
and worldwide.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-2", "d_text": "The source energy use intensity of commercial buildings has three phases: slight decrease for building dates up to the second world war, followed by four decades of increase and a decrease for buildings constructed over the last two decades.\nFigure 1.3 shows source energy consumption of residential buildings by decade (see Appendix 2 for what is included). Most of the recent construction is clustered specifically in the year 2000, so data from 2000–2010 may be spurious, and are not used. The expected trend of a reduction of energy use with more modern constructions is justified, at least up to 1990.\nIt is difficult to identify the specific type of building (if any) that may be responsible for this recent upturn. Indeed three quarters of residential buildings are \"single family\" and the two available subcategories of \"attached\" and \"detached\" together make up only 5% of them (the lion's share being \"uncategorized\").\nFigure 1.3: Box plot (median and quartiles) for source energy use intensity of residential buildings by decade of construction.\nThe trend for residential buildings since the war is the opposite of commercial buildings, which exhibit an increase up to the eighties followed by a twenty-year reduction of energy use, as shown in Fig. 1.4. Since buildings constructed before 1950 make up only 9% of residential buildings (14% of commercial ones), old buildings are clustered into just two ranges in Fig.
1.4 (the dashed lines indicate that the increments are then more than ten years).\nFigure 1.4: Average source energy use intensity of residential (red, left axis) and commercial (blue, right) buildings by decade of construction.\nWhile a third of the residential buildings date back to the seventies and another 22% to the eighties, those built since 1990 account for only 9% of the total (against 47% for commercial buildings), and this even includes the suspicious 2000 cluster. Either residential (but not commercial) construction activity has recently dropped or the database has a delay in including buildings — a delay which may not be random. This said, the systematically opposite trend between residential and commercial buildings is too clear to be explained away by the available data not being completely random.\nA smaller building has a greater ratio of wall and roof area to floor area, which means greater heat losses (but number of floors and shape —e.g. compact vs.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-1", "d_text": "Currently, Hayes is involved in designing a net-zero office building.\nThe Seattle-based non-profit Bullitt Foundation, of which Hayes is president and CEO, is breaking ground this spring on the Cascadia Center for Sustainable Design and Construction. Once completed, the Cascadia Center will be one of the first mid-rise commercial buildings to achieve “living building” status.\nLiving buildings- net-zero and beyond\nLiving buildings go beyond LEED with the Living Building Challenge. Version 2.0 of the Challenge incorporates a 20-point standard including 100 percent renewable energy production on-site, water consumption from 100 percent harvested rainwater, and super-efficient electrical and mechanical systems. 
The Living Building standard begins with net-zero and extends sustainability to all aspects of how a building interacts with and impacts its surrounding environment; from initial site selection, building design and construction to aesthetics, human health, well-being, and transportation.\nThe Cascadia Center will be built to last 250 years, adaptable to changing needs and emerging technologies throughout its lifetime.\nEmerging policy standardizing sustainable development\nProjects like the Cascadia Center remain the ideal, the rare exception – for now. But Stewart considers Hayes’ work as a focal point for emerging public policy – both then and now. Even if the mainstream media doesn’t take notice, there are significant stirrings afoot pointing toward a shifting policy landscape. Stewart cites three examples as the core of the emerging trend:\n- Energy Independence and Security Act (2007)\nRequires 3 percent per year energy reduction for federal buildings relative to 2003 Commercial Buildings Energy Consumption Survey (CBECS), or 15 percent by 2010 and 30 percent by 2015. Reduction of fossil fuel use reaching 55 percent in 2010, 65 percent by 2015 and 80 percent by 2020. “This is an aggressive mandate,” Stewart explains, that serves as a “green proving ground,” incorporating the principles of Building Information Modeling (BIM) and the latest design technologies.\n- Executive Order 13514 (2009)\nDirects that 15 percent of existing federal buildings and leases with more than 5,000 gross square feet will meet Guiding Principles for Federal Leadership in High Performance and Sustainable Buildings by 2015.
Commercial buildings that earn the Energy Star must perform in the top 25 percent of buildings nationwide compared to similar buildings and be independently verified by a licensed professional engineer or registered architect each year. Energy Star certified buildings use 35 percent less energy and emit 35 percent less carbon dioxide than average buildings. Fourteen types of commercial buildings can earn the Energy Star, including office buildings, K-12 schools, and retail stores.\nMore information on the top cities in 2010 with Energy Star certified buildings:\nMore information on EPA’s real-time registry of all Energy Star certified buildings:\nMore information about earning the Energy Star for commercial buildings:", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "Many green building practices are fairly routine now across the country, but some companies and nonprofits have gone far beyond the standards to reduce the environmental impact. Could this become the new normal?\nImage source: Flickr CC user bobarcpics\nA few years ago, companies would issue news releases when they installed energy-saving lights and appliances, loaded solar panels on the roof, or recycled the lumber and carpeting left over on job sites. Environmentally friendly building practices are now routine, and in California, they are required.\nAcross the country, it is becoming more newsworthy when a company fails to adopt some “green” building techniques than when they do. In California, implementing eco-friendly building practices has been a non-issue since 2011 when the state adopted the nation’s first statewide building code. Among other things, that act required developers to take steps to reduce the amount of water used in construction, to recycle construction waste, and to install energy-saving systems and appliances.\nNationwide, the U.S. 
Green Building Council’s LEED (Leadership in Energy and Environmental Design) remains the standard for new construction. While the standards have been recently updated, LEED remains a points-based system, and a company can shoot for one of four levels of certification. But the new normal means that these days, not only are companies frequently meeting the highest LEED certification level–they are exceeding it.\nSome Green Buildings Have Gone Beyond LEED\nSome companies and non-profits are now aiming for “net zero” use of water, waste and materials. They are attempting to create buildings that waste virtually nothing and produce more energy than they use. The Bullitt Center in Seattle has been called the greenest building in the world. That is up for debate, but the building does push green practices to the extreme. The 50,000-square foot, six-story building collects rainwater on its roof that is filtered down into a 58,000-gallon storage tank in the basement. A solar array on the roof — which resembles a giant paper airplane overhanging the street — generates enough energy for the building. Tenants are given energy budgets and have to pay for overages, as the building is designed to produce all the electricity used in the building.\nOther projects have also gone beyond the norm without breaking the bank. In San Francisco, the 19-story One Bush Street obtained LEED’s highest certification level with a less-than-$100,000 investment.
(NYSE:UTX) and its UTC Climate, Controls & Security business. The U.S. is also one of the global leaders in the percentage of firms expecting to construct new green institutional projects and green retrofits of existing buildings.\nThe global study, which received additional support from Saint-Gobain, the U.S. Green Building Council and the Regenerative Network, positions the U.S. as a strong participant in the global green movement. Responses from more than 1,000 building professionals from 60 countries place the U.S. green industry in context. The study also provides specific comparisons with 12 other countries from which a sufficient response was gained to allow for statistical analysis: Australia, Brazil, China, Colombia, Germany, India, Mexico, Poland, Saudi Arabia, Singapore, South Africa and the United Kingdom.\nAccording to the report, U.S. construction should see an increase in the share of green work in the next few years, largely as a result of companies intensifying their involvement in the green building industry. An increasing percentage of respondents projected that more than 60 percent of their projects would be green projects - from 24 percent of respondents in 2015 to 39 percent in 2018. Respondents projecting that fewer than 15 percent of their projects would be certified green plummeted from 41 percent in 2015 to 27 percent by 2018.\nWhile this increased share of green building is impressive, it is significantly less than many developing countries included in the survey. For example, Brazil expects six-fold growth (from 6 percent to 36 percent) in the percentage of companies conducting a majority of their projects green; five-fold growth is expected in China (from 5 percent to 28 percent); and fourfold growth is expected in Saudi Arabia (from 8 percent to 32 percent).\n“The strong U.S. 
industry for green building projects is clearly an opportunity for U.S.", "score": 25.65453875696252, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "Green building emerged as somewhat of a proactive or “feel good” movement. While some may dismiss “green” as a passing trend, regulators from local to federal levels are creating laws and regulations that will ensure green building is around to stay for at least the near future. Perhaps most significantly, recent EPA action will regulate nearly every commercial building by 2016.\nProbably the most established and best known of the standards used for green building are the Leadership in Energy and Environmental Design (LEED) standards. Instituted in 2000 by the U.S. Green Building Council (USGBC), the LEED system awards credits for buildings based on certain factors in specified categories. Based on the number of points earned, a rating level is awarded. This is a voluntary system. A building owner applies to obtain this certification.\nInternational Green Construction Code\nCurrently, most building codes do not address energy and water issues, material waste, impact on construction sites, and other environmental concerns. In fact, some building codes are so outdated they restrict advances in green building.\nTo address the issue and to avoid patchwork regulations, a coalition of building standards organizations has united to create a draft International Green Construction Code. The first draft was released in early 2010 and will be final by early 2012. The uniform code will make it easier for municipalities, especially smaller ones, to adopt comprehensive and functioning green building codes.\nState and Municipal Regulations\nA variety of states and municipalities have started to require some green or sustainable construction on both new projects and remodels. California has created its own standards with the CalGreen regulations.
Utah and Arizona, among other states, have adopted requirements that certain public buildings comply with some level of LEED certification.\nThe regulations are a potpourri of forms and functions. Some regulations only apply to public buildings, while others apply to both public and private buildings. About 40 percent only apply to new construction while the other 60 percent apply to existing buildings. Notably, a handful of regulations apply to existing buildings on which no construction is occurring.\nThe majority of states and municipalities use the LEED system as a guide. LEED is an easy choice because it offers an established set of standards to entities without the manpower or expertise to create their own. Once the green code is finalized, regulators will likely shift away from regulatory reliance on LEED and use the code instead.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "We have less than seven years to dramatically alter our future. Under our year-long Vision 2020 program, we have conferred with building industry experts to establish a timeline critical goals and metrics we must establish and meet by the year 2020 in order to preserve our environment and meet large-scale goals such as those of the 2030 Challenge. Scroll over points in the timeline below, which has been updated for 2013, to learn more about the path ahead in green building. 
Then, visit Ecobuildingpulse.com/Vision-2020 to learn more about what we uncovered in this year's program.\nArchitecture 2030 releases the 2030 Palette, a free online tool to foster low-carbon building principles.\nOpen Building techniques, separating a home's shell from its interior systems and components to extend the building's lifespan, gains traction in the U.S.\nDOE Challenge Home Version 2 released.\nNationwide professional certification program established for building science experts.\nA minimum of five builders offer net-zero home designs as options in the top 50 housing markets.\nAmerican buildings cut energy use by an average of 2.5% annually, and reduce energy use by 20% since 2011.\n2027: Net-zero energy homes become a national standard.\nMarket expands availability of affordable energy retrofit systems.\nIECC 2012 adopted by all 50 states.\nTwenty U.S. cities operate energy benchmarking and reporting programs.\nHome energy-management systems become a standard residential feature on new homes.\nWater and energy policies addressed jointly on a consistent basis.\nEnergy modeling becomes standard for 50% of all new homes.\nThe majority of U.S. states adopt the IECC 2015.\n100 MW of installed renewable capacity across federally subsidized housing stock achieved.\nRenewable electricity generation doubled from 2013 level.\nFederal government gets 20% of its electricity from renewable sources.\nAtmospheric CO2 concentrations pass 400 parts per million (ppm). (Recommended threshold: 350 ppm.)\nThe U.N. reports scientists are 95% sure that humans are causing global warming.\nU.N.
climate talks to establish 2020 global climate treaty goals.\nArctic scientist predicts collapse of Arctic sea ice from global warming.\nAmerica reduces its greenhouse gas emissions to 17% below 2005 levels.\n50% of all new single family housing starts and 80% of new multifamily housing starts are green certified.\nU.N.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-2", "d_text": "It also provides guidelines for site disturbance, irrigation, erosion control, transportation, heat island mitigation, graywater systems, habitat protection and site restoration. In terms of materials conservation and efficiency, the IgCC requires that at least 50 percent of construction waste be diverted from landfills, and at least 55 percent of building materials be salvaged, recycled content, recyclable, bio-based or indigenous.\nRegarding energy conservation and efficiency, a building’s total efficiency must be 51 percent of the energy allowable in the 2000 International Energy Conservation Code (IECC), and building envelope performance must exceed that by 10 percent. It also sets minimum standards for lighting and mechanical systems, and requires certain levels of submetering and demand-response automation. The IgCC also sets standards for maximum consumption of fixtures and appliances, as well as for rainwater storage and graywater systems. Regarding indoor air quality, it addresses radon, asbestos, VOCs, sound transmission, and daylighting. In addition, the IgCC requires extensive pre and post-occupancy commissioning, as well as education of building owners and maintenance employees.\nLocal governments and states have the choice of adopting the code, but if adopted, it becomes enforceable. Local governments and states can also add their own requirements, on top of the code, that address local concerns. Although many jurisdictions have already adopted IgCC, the final code was to be published in March 2012. 
The new enforceable requirements are thus expected to support growth in the green building market.\nBecause it significantly reduces or eliminates the negative impact of buildings on the environment and on the building occupants, the green building sector has been gaining momentum in the United States. This sector is expected to continue to grow further given the increasing adoption of the LEED certification and the new national green building code.\nAlejandra Lozano is an environmental and building technologies research analyst at Frost & Sullivan.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-4", "d_text": "For example, the UK Climate Change Act of 2008 is targeting a 34% reduction in greenhouse gas emissions by 2020 (compared with 1990 levels) and an 80% reduction by 2050. New buildings have been identified as the sector where radical cuts can be achieved. The UK targets are for all new homes to be net Zero Carbon in use by 2016, and all new non-domestic buildings by 2019. As energy-in-use decreases, attention will shift to reducing the energy embodied in the construction of new buildings.\nHowever, it is estimated that up to 87% of the homes that will exist in 2050 have already been built (note 6), so it is also crucial to retrofit existing buildings with energy efficiency measures.\n- View our low energy buildings photo collection on Flickr\n- Good introduction to the basics of building environmental design including\n1. Carbon emissions - growth in a low carbon economy, Richard Miller\n2. New-Housing Energy Saving Trust note regarding the Nottingham Declaration on Climate Change\n3. Sustainable Homes: Embodied energy in residential property development, 2000\n4. Department for Energy and Climate Change (DECC) Quarterly Energy Prices: September 2010\n5. Report on carbon reductions in new non-domestic buildings, UK Green Building Council\n6.
Home Truths: a low-carbon strategy to reduce UK housing emissions by 80% by 2050, Brenda Boardman, University of Oxford, 2007\n7. Good source of basic information on PassivHaus\n8. AECB, The Sustainable Building Association\n9. Sustainable New Homes – the road to zero carbon, DCLG, 2009", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Most consumers – 74% – who responded to a recent survey believe that less than a quarter of their home can be categorized as \"green.\" The survey was conducted on behalf of Whirlpool Corp. and Habitat for Humanity International by the NAHB Research Center to report on opinions from consumers and builders on various topics related to green home building.\nWhat is a \"green\" home?\n• 34% of consumers felt that the most common definition for a green home is that it reduces energy and/or water consumption by a significant percentage\n• 23% of consumers felt that a home can be considered green when the entire home is green.\nResponses for this question were similar across all income levels.\nBuilders responding to the survey had similar ideas of what a green home should be:\n• 35% of builders preferred the definition of reducing energy and/or water consumption\n• an additional 35% of builders defined homes as green if they are built to certification standards.\n\"These survey results demonstrate that many consumers recognize their homes can be more environmentally sound,\" said Tom Halford, general manager, contract sales and marketing for Whirlpool Corp.\nConsumer opinions about green certification programs:\n• 78% of consumers responded that Energy Star qualification is important for residential builds\n• 44% of consumers considered the National Green Building Standard important\n• 40% of consumers considered state certification programs important\nBuilder opinions about green certification programs:\n• 75% of builders felt Energy Star qualification was important for residential builds\n• 57% of
builders considered the National Green Building Standard important\n• 59% of builders indicated that they sometimes or always certify homes they build to the specifications of a green certification program.\nLarry Gluth, senior vice president of U.S. and Canada for Habitat for Humanity, said the organization wants to help prove the affordability of energy-efficient home building. Its goal is to build all its homes to minimum Energy Star standards by 2013. Such homes will also benefit their occupants with lower energy costs.\nWhere do consumers get their "green" information? The survey found:\n• 60% of consumers answered that they get their green information from the Internet\n• 54% from TV/radio\n• 42% from magazines/periodicals\nHabitat for Humanity is a nonprofit organization that builds, rehabilitates, and repairs houses. Whirlpool donates a range and Energy Star qualified refrigerator to every Habitat home built nationally, totaling more than 125,000 appliances to date.", "score": 25.627193091429866, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "New construction starts decreased 8% in March, according to a recent report by McGraw-Hill Construction, New York. The association also reported that non-residential construction was down compared to the increases reported in January and February. Residential building continued to decline with the continued correction in single-family housing.\nNon-building construction, such as public works and electric utilities, showed greater activity with the start of several large power plants. During the first three months of 2008, total construction on an unadjusted basis was down 19% from the same period a year ago.
Excluding residential building from year-to-date figures, new construction starts were still down 2%", "score": 24.749004938147408, "rank": 48}, {"document_id": "doc-::chunk-2", "d_text": "70% of homebuyers are more or much more inclined to buy a green home over a conventional home in a down housing market, according to McGraw-Hill Construction’s 2008 SmartMarket Report, “The Green Home Consumer.” That number is 78% for those earning less than $50,000 a year, showing the increasing accessibility of green buildings to all members of our society. In fact, 56% of respondents who bought green homes in 2008 earn less than $75,000 per year; 29% earn less than $50,000.\nMore than 80% of commercial building owners have allocated funds to green initiatives this year, according to “2008 Green Survey: Existing Buildings,” a survey jointly funded by Incisive Media’s Real Estate Forum and GlobeSt.com, the Building Owners and Managers Association (BOMA) International and USGBC. Some 45% plan to increase sustainability investments in 2009.\nLEED-certified projects are directly tied to more than $10 billion of green materials, according to a Greener World Media study on green building. That could reach more than $100 billion by 2020, contributing to a vibrant industry that could drive an economic recovery.\nThe Center for American Progress and the Political Economy Research Institute at the University of Massachusetts Amherst, in a September 2008 study, found that a national green economic recovery program investing $100 billion over 10 years in six infrastructure areas would create 2 million new jobs. 
The investments would include retrofitting existing buildings to improve energy efficiency and investing in wind power, solar power and next-generation biofuels.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "While worldwide efforts are underway to contain global warming, every global citizen can contribute in their own way to reducing carbon footprints. The housing sector, despite its recent slump in the USA owing to the mortgage market crisis, is nonetheless a buzzing area of growth that, if it applies environment-friendly concepts to construction, could help in developing a greener planet. According to a recent study by the Montreal-based Commission for Environmental Cooperation, the cheapest way to reduce greenhouse gas is to build green buildings.\nOver one-third of the global greenhouse gases emanate from North American buildings – residential and office buildings alike. By constructing new buildings on energy-efficient concepts and by upgrading the existing ones through better insulation and windows, greenhouse gas emissions could be reduced by a whopping 1.7 billion tons. Green buildings are defined in the report as ‘environmentally preferable practices and materials in the design, location, construction, operation and disposal of buildings.’ Only 2 percent of US buildings are environment-friendly, and residential houses comprise only 0.3 percent of the total green buildings pie. Europe, on the other hand, has a greater number of green buildings in place.\nBuilding materials used in typical green buildings are renewable plant materials, sustainable lumber, dimension stone, recycled stone, recycled metal and other non-toxic, reusable, renewable and recyclable products. Insulation is made from low volatile organic compound-emitting materials such as recycled denim or cellulose insulation. Organic or milk-based paints are used.
To minimize energy load, the buildings are designed to take maximum advantage of the wind and sunlight.\nThe high cost of constructing more energy-efficient and water-efficient systems makes builders reluctant to develop green buildings. With green construction being only 4 percent of the entire construction sector, a future rise in market share would help bring down the cost of building green houses.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-2", "d_text": "(Source: New Buildings Institute, PR, 15 May, 2019) Contact: New Buildings Institute, www.newbuildings.org\nMore Low-Carbon Energy News New Buildings Institute, Net Zero Energy, Energy Efficiency,\nThe survey of nearly 2,000 facility and energy management executives from 20 countries found that 57 pct of organizations in the U.S. and 59 pct of global organizations plan to increase investment in energy efficiency, smart building measures including building controls and building systems integration at a greater rate than more traditional energy efficiency measures over the next 12 months.\nToday, organizations identify greenhouse gas footprint reduction, energy cost savings, energy security and enhanced reputation as key drivers of investment fueling growth in green, net zero energy and resilient buildings. Building controls improvements were cited as the most popular energy efficiency-related investments.\nIn Johnson Controls' 2008 survey, under 10 pct of respondents had a certified "green" building and only 34 pct planned to certify new construction projects to a recognized green standard. In 2018, 19 pct of U.S.
organizations have already achieved voluntary green building certification for at least one of their facilities, and 53 pct plan to in the future.\nGlobally, 14 pct of organizations have achieved voluntary green building certification for at least one of their facilities and 44 pct plan to in the future.\nThe survey also notes a significant year-over-year increase in net-zero energy goals, with 61 pct of U.S. organizations extremely or very likely to have one or more facilities that are nearly zero, net zero or positive energy/carbon in the next ten years. (Source: Johnson Controls, Building Design & Construction, Nov., 2018) Contact: Johnson Controls, Clay Nesler, VP Global Sustainability, www.linkedin.com/in/clay-nesler-171a133, www.johnsoncontrols.com/corporate-sustainability/reporting-and-policies/business-and-sustainability-report/environmental-leadership/sustainable-solutions, www.johnsoncontrols.com\nMore Low-Carbon Energy News Johnson Controls, Energy Efficiency,\nLed by the C40 Cities Climate Leadership Group, the Net Zero Carbon Buildings Declaration requires cities to: establish a roadmap to reach net zero carbon buildings; develop a suite of supporting incentives and programmes; and report annually on progress. The pledge is part of the World Green Building Council's Net Zero Carbon Building Commitment for Businesses, Cities, States and Regions.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "The American Institute of Architects (AIA) has named the winners of the 2013 COTE Top Ten Green Projects. Now in its 17th year, the program sets the standard for recognizing sustainable design excellence within the A&D community.\nThis year the AIA added a new element to the program: the Top Ten Plus Program, in which a team of jurors chose a single project from a pool of previous winners for its outstanding achievement in sustainable design.
The winner of this inaugural award is 355 11th Street: The Matarozzi/Pelsinger Multi-Use Building located in San Francisco.\nThe LEED-NC Gold adaptive building is a reuse of a turn-of-the-century industrial building and is the work of Aidlin Darling Design. The building, which now serves as a multi-tenant office building, houses a LEED-CI Platinum restaurant on the ground floor and boasts a living roof, passive cooling, and ample bicycle parking for the building’s occupants.\nThe selection process to decide the winner of the new program was rigorous and, according to juror Gail Vittori, co-director of the Center for Maximum Potential Building Systems, was based on a number of quantifiable metrics, such as the percentage of space that could be lit during daylight hours, the percentage of usable outdoor space, the amount of rainwater collected to offset total water usage, and how much energy was used per square foot of space.\n“The winner represents a non-traditional green building. It’s important to stretch the sense of green so that it spans to many different sectors and buildings,” Vittori says. “I think green building is becoming the new normal.”\nAll of the winners will be recognized at the 2013 National Convention and Design Exposition in Denver June 20 through 22. See slideshows of all the winners here:\nRelated: The Myths and Facts of Building Locally Adobe's New Mega Office: Green Facts Snapshot of the Industry: Recent and Future Green Builds Seattle's Bullitt Center: The World’s Greenest Office Building", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "Energy use in the buildings sector (both residential and commercial) is responsible for about 40% of final energy consumption and about 36% of all greenhouse gas emissions in the EU. The cost-efficient energy savings potential is estimated to reach 28% by 2020.
The construction sector is striving towards increasingly energy-efficient buildings as well as reducing their environmental impact.\nOn 22 January 2014, the European Commission unveiled its 2030 Strategy for Climate and Energy Policies. The framework presented by the European Commission proposed energy and climate objectives to be met by 2030 in order to drive the continued progress towards a low-carbon economy. This communication is intended to replace the “EU 2020 Package\".\nThe 2030 targets are:\n• 40% cut in greenhouse gas emissions (compared to 1990 levels)\n• Achieving at least a 27% share of renewable energy consumption\n• Improving energy efficiency by at least 27%\nOn 25 October 2012 the European Union adopted the Directive 2012/27/EU on energy efficiency. Member States had until 5 June 2014 to transpose it into their national laws. EBC welcomes this text which includes a strategy for the renovation of the national stock of both public and private residential and commercial buildings.\nOn 19 May 2010 the European Union adopted a directive on energy performance of buildings (Directive 2010/31/EU), which recasts the 2002 directive. Under this Directive, Member States must establish and apply minimum energy performance requirements for new and existing buildings. The Directive also requires Member States to ensure that by 2021 all new buildings are so-called “Nearly Zero-Energy Buildings”.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "Greening the workplace\nVirginia ranks fourth among the states in terms of new sustainable office and institutional space per capita\n- March 28, 2012\nIn a largely dormant commercial construction market, green has become a growth industry. Although sustainable buildings have been around for decades, their proliferation had long been limited by the perception that construction was complicated and costly.\nNot anymore.
According to McGraw-Hill Construction, 35 percent of architectural, engineering and contracting positions are now green. It defines “green” jobs as those in which at least 50 percent of the work involves projects that focus on sustainability. By 2015, McGraw-Hill expects that figure to reach 45 percent, with the commonwealth among the forward ranks of the movement.\nIn fact, the U.S. Green Building Council (USGBC) placed Virginia fourth among the states last year in the amount of new office and institutional green space per capita — more than 19 million square feet or 2.42 square feet per capita.\nThe reasons for the green buy-in are chronicled in a variety of reports from real estate research firms CoStar, CBRE and McGraw-Hill, which all track trends in the commercial market. Using USGBC’s Leadership in Energy and Environmental Design (LEED) standards, and, to a lesser extent, the Environmental Protection Agency’s Energy Star ratings as a basis for determining greenness, they document numbers that have a most agreeable crunch for commercial developers.\nFor example, according to the latest figures from McGraw-Hill for 2011, LEED buildings reduce operating costs — by 13.6 percent for new buildings and 8.5 percent for retrofits. They have higher lease rates, too — $30.16 per square foot versus $27.62 per square foot for non-green space — and outperform market occupancy rates by more than 2 percent.\nBuilding sales? Also sweet. LEED buildings sell for an average of 11 percent to 13 percent more per square foot than their non-LEED competitors. 
Not easily quantifiable, but still a critical consideration for owners and occupants of these buildings, is the cachet that going green adds to their image.\nIn a salute to green work spaces, Virginia Business previews a few from across the state:\nCommercial developer BPG Properties Ltd.", "score": 23.030255035772623, "rank": 54}, {"document_id": "doc-::chunk-1", "d_text": "- 1 Reducing environmental impact\n- 2 Goals of green building\n- 3 Cost and payoff\n- 4 Regulation and operation\n- 5 International frameworks and assessment tools\n- 6 See also\n- 7 References\n- 8 External links\nReducing environmental impact\nGlobally, buildings are responsible for a huge share of energy, electricity, water and materials consumption. The building sector has the greatest potential to deliver significant cuts in emissions at little or no cost. Buildings account for 18% of global emissions today, or the equivalent of 9 billion tonnes of CO2 annually. If new technologies in construction are not adopted during this time of rapid growth, emissions could double by 2050, according to the United Nations Environment Program. Green building practices aim to reduce the environmental impact of building. Since construction almost always degrades a building site, not building at all is preferable to green building, in terms of reducing environmental impact. The second rule is that every building should be as small as possible. The third rule is not to contribute to sprawl, even if the most energy-efficient, environmentally sound methods are used in design and construction.\nBuildings account for a large amount of land. According to the National Resources Inventory, approximately 107 million acres (430,000 km2) of land in the United States are developed. 
The International Energy Agency released a publication that estimated that existing buildings are responsible for more than 40% of the world’s total primary energy consumption and for 24% of global carbon dioxide emissions.\nGoals of green building\nThe concept of sustainable development can be traced to the energy (especially fossil oil) crisis and environmental pollution concerns of the 1960s and 1970s. The Rachel Carson book, “Silent Spring”, published in 1962, is considered to be one of the first initial efforts to describe sustainable development as related to green building. The green building movement in the U.S. originated from the need and desire for more energy efficient and environmentally friendly construction practices. There are a number of motives for building green, including environmental, economic, and social benefits. However, modern sustainability initiatives call for an integrated and synergistic design to both new construction and in the retrofitting of existing structures. Also known as sustainable design, this approach integrates the building life-cycle with each green practice employed with a design-purpose to create a synergy among the practices used.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-2", "d_text": "Presently, green buildings comprise a relatively small share of global construction – global investments accounted for $423 billion of the $5 trillion spent on building construction and renovation in 2017. However, the market is expected to grow at a compound annual rate of more than 10% between 2017 and 2023, according to the United Nations’ Principles for Responsible Investment.\nGeorgina Smit, GBCSA head of Sector Development and Market Transformation, adds that the value of certified green buildings is expected to become even more pronounced as the world navigates through the challenges presented by Covid-19. 
“Post the Covid-19 crisis, many companies are likely to review health measures put in place in their offices. Harvard University recently found that long-term exposure to air pollution is associated with an 8% increase in the Covid-19 death rate, and the improved internal environment quality from increased ventilation, temperature and lighting control, the use of natural light, and the absence of toxic materials result in the improved health, comfort and wellbeing of green building occupants. In a post-pandemic world, these factors can no longer be overlooked. Internal environmental quality is a key consideration within the GBCSA’s Green Star green building frameworks,” she explains.\nAllison believes that the business case for green buildings is getting stronger all the time, and screening in favour of sustainability, resilience and future-proofing will become more and more common. “As the bar shifts, it will become the new standard.\nThe day will come when no one will want to own or occupy the ‘brown’ buildings and infrastructure. Then those will increasingly be sold at a discount – or updated at substantial cost. Buildings that are designed with the future in mind, with smart building technology and interactivity integrated throughout, making them as responsive and adaptable to the needs of the workforce in a post-Covid-19 world today as those in the future, are more resilient and less likely to become stranded assets that no asset manager or building owner wants.”", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "The U.S.
Green Building Council (USGBC) has announced that several World Cup soccer stadiums have achieved Leadership in Energy and Environmental Design (LEED) certification, including South America’s largest stadium, Maracanã in Rio de Janeiro.\nThe U.S. Green Building Council (USGBC) has launched a new online resource that highlights real-time green building data for each state in the United States. The state market briefs highlight Leadership in Energy and Environmental Design (LEED) projects, LEED-credentialed professionals, and USGBC membership in each state.\nThe U.S. Green Building Council (USGBC) has announced that 3 billion square feet of green construction space has earned Leadership in Energy and Environmental Design (LEED®) certification around the globe.\nASHRAE, the National Association of Home Builders (NAHB), and the International Code Council (ICC) have agreed to jointly develop the 2015 edition of the ICC/ASHRAE 700 National Green Building Standard. This is the third edition of the standard and the first time that ASHRAE has partnered in its development.\nNinety-four percent of green homeowners responded that they would recommend a green home to a friend, according to a national survey of homeowners who purchased a National Green Building Standard (NGBS)-certified green home built within the past three years.\nThe Green Building Initiative (GBI) announced that it has named Jerry Yudelson as its president to accelerate growth of the nonprofit and further leverage its green building assessment tools, including the Green Globes® rating system, as it attempts to bring green building options to a larger audience.\nHindSite Software has announced the release of its second annual Green Industry Benchmark Survey. 
The survey asks a series of questions related to business practices, marketing practices, human resources practices, educational practices, and business results.\nASHRAE has announced the newly published fourth edition of its GreenGuide, which contains updated guidance that reflects how green building practices as well as the industry have changed, according to the organization.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-4", "d_text": "“But by far, the most significant driver that will fuel greater expansion in the marketplace is the revival in the institutional sector, especially with growing demand for new healthcare and education facilities, which alone traditionally account for a third of spending on new building construction.”\nThe improvement trend is expected to strengthen even further in 2016, with the AIA Consensus Construction Forecast projecting an 8.2% increase in nonresidential construction spending.\nSource: AIA Consensus Construction Forecast, calculated as an average of all forecasts provided by the panelists that submit forecasts for each of the above building categories: McGraw-Hill Construction, IHS-Global Insight, Moody’s Economy.com, FMI, Reed’s Construction Data and Associated Builders and Contractors.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "U.S. Nonresidential Construction Spending Dips in May\nJuly 1, 2010 | www.abc.org\n\"Construction spending growth, to the extent that it exists, continues to be the domain of publicly financed projects, particularly those attached to the stimulus package passed in February 2009.\" —ABC Chief Economist Anirban Basu\nIn a sign that the nation’s economic recovery continues to stumble, private nonresidential construction spending decreased 0.6 percent in May, according to the July 1 report by the U.S. Census Bureau. On a year-over-year basis, private nonresidential construction spending is down 24.8 percent. 
Total nonresidential construction spending – which includes both private and public – slipped 0.1 percent from last month and 15.2 percent from May 2009, and now stands at $571.7 billion. (See graph below).\nEight of the 16 nonresidential construction subsectors increased spending for the month, including water supply, up 5.5 percent; religious-related construction, up 4.2 percent; and highway/street construction, up 2.7 percent. Five subsectors reported higher construction spending compared to May 2009, including conservation and development, up 23 percent; transportation, up 13.8 percent; and highway/street construction, up 5.6 percent.\nIn contrast, those subsectors that had decreases in construction spending in May include lodging, down 3.9 percent; amusement and recreation, down 2.5 percent; and transportation, down 2.3 percent. On a year-over-year basis, lodging is down 62.1 percent, office construction is down 33.8 percent, commercial construction is down 31.8 percent, and manufacturing is down 31.4 percent.\nPublic nonresidential construction was up 0.4 percent for the month, but is still down 3.7 percent from one year ago. Residential construction spending fell 0.4 percent for the month of May, but was up 11.9 percent from the same time last year. Overall, total construction spending – which includes both residential and nonresidential – was down 0.2 percent from April 2010 and down 8 percent from May 2009.", "score": 23.02509321661231, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "Benefits of green building\nBuildings are responsible for an enormous amount of global energy use, resource consumption and greenhouse gas emissions.
As the demand for more sustainable building options increases, green construction is becoming increasingly profitable and desirable within the international construction market.\nGreen building is profitable & cost-effective\n- Upfront investment in green building makes properties more valuable, with an average expected increase in value of 4 percent. By virtue of lowered maintenance and energy costs, the return on investment from green building is rapid: green retrofit projects are generally expected to pay for themselves in just seven years.1\n- Green buildings reduce day-to-day costs year-over-year. LEED buildings report almost 20 percent lower maintenance costs than typical commercial buildings, and green building retrofit projects typically decrease operation costs by almost 10 percent in just one year.2, 3\n- Between 2015 and 2018, LEED-certified buildings in the United States are estimated to have $1.2 billion in energy savings, $149.5 million in water savings, $715.2 million in maintenance savings and $54.2 million in waste savings.4\n- By 2021, in the U.S., green activity is expected to grow, with those doing the majority of their projects green increasing from 32 percent to 45 percent. Client demand remains the top trigger driving the market and encouraging occupant health and well-being is the top social reason for building green.\nGreen buildings lower utility bills and benefit the environment\n- Buildings are positioned to have an enormous impact on the environment and climate change. At 41 percent of total U.S.
energy consumption, buildings out-consume the industrial (30 percent) and transportation (29 percent) sectors.5\n- Buildings use about 14 percent of all potable water (15 trillion gallons per year), but water-efficiency efforts in green buildings are expected to reduce water use by 15 percent and save more than 10 percent in operating costs.6, 7 Retrofitting one out of every 100 American homes with water-efficient fixtures could avoid about 80,000 tons of greenhouse gas emissions, which is the equivalent of removing 15,000 cars from the road for one year.8\n- Standard building practices use and waste millions of tons of materials each year; green building uses fewer resources and minimizes waste.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "Green building (also known as green construction or sustainable building) refers to both a structure and the application of processes that are environmentally responsible and resource-efficient throughout a building's life-cycle: from planning to design, construction, operation, maintenance, renovation, and demolition. This requires close cooperation of the contractor, the architects, the engineers, and the client at all project stages. The Green Building practice expands and complements the classical building design concerns of economy, utility, durability, and comfort.\nLeadership in Energy and Environmental Design (LEED) is a set of rating systems for the design, construction, operation, and maintenance of green buildings which was developed by the U.S. Green Building Council. Another certification system that confirms the sustainability of buildings is the British BREEAM (Building Research Establishment Environmental Assessment Method) for buildings and large-scale developments.
Currently, World Green Building Council is conducting research on the effects of green buildings on the health and productivity of their users and is working with World Bank to promote Green Buildings in Emerging Markets through EDGE (Excellence in Design for Greater Efficiencies) Market Transformation Program and certification. There are also other tools such as Green Star in Australia and the Green Building Index (GBI) predominantly used in Malaysia.\nAlthough new technologies are constantly being developed to complement current practices in creating greener structures, the common objective of green buildings is to reduce the overall impact of the built environment on human health and the natural environment by:\n- Efficiently using energy, water, and other resources\n- Protecting occupant health and improving employee productivity (see healthy building)\n- Reducing waste, pollution and environmental degradation\nA similar concept is natural building, which is usually on a smaller scale and tends to focus on the use of natural materials that are available locally. Other related topics include sustainable design and green architecture. Sustainability may be defined as meeting the needs of present generations without compromising the ability of future generations to meet their needs. Although some green building programs don't address the issue of retrofitting existing homes, others do, especially through public schemes for energy-efficient refurbishment. Green construction principles can easily be applied to retrofit work as well as new construction.\nA 2009 report by the U.S. General Services Administration found 12 sustainably-designed buildings that cost less to operate and have excellent energy performance. In addition, occupants were overall more satisfied with the building than those in typical commercial buildings.
These are eco-friendly buildings.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "Here in British Columbia, green buildings are quickly moving from niche to norm. All across the province, Passive House apartment buildings, LEED Platinum certified office space, and even green development plans for entire neighbourhoods are demonstrating the market demand for high-performance buildings. The green building industry is now estimated to employ 31,700 people in B.C.\nWe’ve been tracking the growth in green building construction. First released in 2015, the Pembina Institute’s B.C. Green Buildings Map has just been updated with all-new data for the past two years. The results show that the green building sector continues to be an important employer and source of economic activity in B.C. Let’s take a deeper look at the numbers.\nA number of larger green buildings have been completed in the past two years and several are currently under construction. These include commercial projects such as Metro Vancouver’s Annacis Research Centre in Delta and Vancity credit union’s Mount Tolmie community branch in Victoria, both certified to LEED Platinum. Ground-breaking projects include The Heights in Vancouver and the Dik Tiy Independent Living Facility in Smithers — multi-unit residential buildings that will be certified under the rigorous Passive House standard.\nThere has been a 38% increase in investment in larger green buildings — up from an estimated $10.6 billion in 2014 to more than $14.5 billion in 2016. Job-wise, while there were around 7,000 people working on green building projects in 2014, there were 4,000 more (11,000) in 2016.\nThe green home market has also grown over the past two years. We consider “green homes” to include houses that are certified by Natural Resources Canada as being better than B.C.
Building Code, Energy Star, or R-2000, and those that meet Passive House, Living Building, LEED (Leadership in Energy and Environmental Design), or Built Green standards. This brings the total cumulative number of green homes we’ve been able to represent on the map from 18,200 to 18,700. The total number of jobs in green home construction remained steady at around 6,000.\nAs the B.C. Energy Step Code launches, we expect the growth in green home and green building construction to accelerate even further in the next few years.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-0", "d_text": "From Environmental Leader, published 20 June 2014\nThere are as many as 150,000 LEED-certified green housing units worldwide, a number that more than doubled between 2011 and 2012 and continues to grow, according to the US Green Building Council’s (USGBC) LEED in Motion: Residential report.\nThe report also details the US states with the most LEED-certified homes, with California in the no. 1 spot followed by New York and Texas.\nThe report is the latest in USGBC’s LEED in Motion series designed to make the case for sustainable building practices worldwide. LEED-certified homes provide 20 to 30 percent savings in energy and water use compared to code-built homes, and they maximize fresh air indoors while minimizing exposure to airborne toxins and pollutants, USGBC says.\nThe report explores the multiple LEED rating systems for different types of homes, including new single-family homes as well as new and existing low-rise, mid-rise and high-rise multifamily buildings. 
USGBC is also developing a rating system for existing single-family homes.\nReport highlights include:\nCanada tops the list of the top 10 countries for LEED outside of the US with 17.74 million gross square meters of LEED space, according to a USGBC report published last month.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "U.S. builders increased their spending in May by the largest amount in five months. A surge in homebuilding and an increase in nonresidential projects helped offset a fifth monthly decline in government building projects.\nThe Commerce Department says construction spending rose 0.9% in May, following a 0.6% rise in April. It was the biggest percentage gain since December.\nThe May increase pushed spending to a seasonally adjusted annual rate of $830 billion. That is 11.3% above a 12-year low hit in February 2011. Still, the level of spending is roughly half of what economists consider to be healthy.\nResidential construction rose 3% to an annual rate of $261.3 billion, further evidence that housing has finally started to mount a modest recovery.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-2", "d_text": "Only 57 percent of U.S. respondents report using metrics, compared to a 75 percent average globally. This may be linked to the fact that the U.S. 
is also the country with the highest level of concern reported about higher perceived first costs for green building, notably more than the percentage who consider this an important challenge to green building in other developed countries with active construction markets like Germany and the U.K.\nTo download the full study “World Green Building Trends 2016: Developing Markets Accelerate Global Green Growth SmartMarket Report,” visit\nTo download the U.S. report, visit\nAbout UTC Climate, Controls & Security\nUTC Climate, Controls & Security is a leading provider of heating, ventilating, air conditioning and refrigeration systems, building controls and automation, and fire and security systems leading to safer, smarter, sustainable and high-performance buildings. UTC Climate, Controls & Security is a unit of United Technologies Corp., a leading provider to the aerospace and building systems industries worldwide. For more information, visit www.CCS.UTC.com or follow @UTC_CCS on Twitter.\nAbout Dodge Data & Analytics\nDodge Data & Analytics is North America’s leading provider of analytics and software-based workflow integration solutions for the construction industry. Building product manufacturers, architects, engineers, contractors, and service providers leverage Dodge to identify and pursue unseen growth opportunities and execute on those opportunities for enhanced business performance. Whether it’s on a local, regional or national level, Dodge makes the hidden obvious, empowering its clients to better understand their markets, uncover key relationships, size growth opportunities, and pursue those opportunities with success. The company’s construction project information is the most comprehensive and verified in the industry. Dodge is leveraging its 100-year-old legacy of continuous innovation to help the industry meet the building challenges of the future. 
To learn more, visit www.construction.com.", "score": 21.479043724918927, "rank": 65}, {"document_id": "doc-::chunk-2", "d_text": "Buildings in the United States are responsible for 39% of CO2 emissions, 40% of energy consumption, 13% water consumption and 15% of GDP per year, making green building a source of significant economic and environmental opportunity. Greater building efficiency can meet 85% of future U.S. demand for energy, and a national commitment to green building has the potential to generate 2.5 million American jobs.\nThe U.S. Green Building Council's LEED green building certification system is the foremost program for the design, construction and operation of green buildings. Over 100,000 projects are currently participating in the LEED rating systems, comprising over 8 billion square feet of construction space in all 50 states and 114 countries.\nBy using less energy, LEED-certified buildings save money for families, businesses and taxpayers; reduce greenhouse gas emissions; and contribute to a healthier environment for residents, workers and the larger community.\nUSGBC was co-founded by current President and CEO Rick Fedrizzi, who spent 25 years as a Fortune 500 executive. 
Under his 15-year leadership, the organization has become the preeminent green building membership, policy, standards, education and research organization in the nation.\nFor more information, visit www.usgbc.org.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "There is no doubt that sustainable building materials have become more affordable—the cost differential between green and nongreen materials and products has diminished to less than 3 percent, according to green building experts. In most cases, the price for green materials is comparable to nongreen materials. The market for green building materials is expected to reach $4.7 billion by 2011, a 17 percent increase from today, according to a report issued by SBI, a Rockville, Md.-based research firm.\nFor more information on these and other products, visit ebuild.com", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "In 2001, India had just one building conforming to the Indian Green Building Council (IGBC) green building rating, spanning 20,000 sq ft. Today, the country has 4,452 IGBC-registered green projects spanning 4.79 billion sq ft!\nIn 2007, the Griha Council defined norms for green buildings, further spurring the green building movement, especially as municipalities in cities like Noida, Gurugram, Mumbai and Chandigarh allow builders to build an additional 5 per cent floor-area ratio for Leadership in Energy and Environmental Design (LEED) Gold and Griha 4-Star certifications, observes Vijay Mattoo, Senior Consultant MEP, Egis India Consulting Engineers. 
'Also it has been made mandatory for new government buildings to go for minimum 3-Star Griha ratings.'\nConsequently, the past five years have seen great growth in the availability of green building materials for GRIHA and IGBC-certified structures.\nSrinivas S, Deputy Executive Director, IGBC, and Accredited Professional, Confederation of Indian Industry (CII), mentions the various materials and technologies that have gained traction because of the green building movement: Fly-ash blocks, fly-ash cement, insulation materials like extruded polystyrene and glass wool, cool roof materials, paver blocks, roof garden trays, high-performance glazing, solar photovoltaic, light-emitting diode (LED) lighting, waterless urinals, bagasse-based furniture, efficient HVAC technologies, recycled roofing materials, recycled tiles, low-volatile organic compound paints and CO2 sensors. While most of these materials are being made in India, a few technologies like photovoltaic cells, building management systems, magnetic levitation chillers and screw chillers with a capacity of over 200 TR and CO2 sensors are being imported, according to Srinivas.\nIn the future, affordable housing promises to create opportunities for sustainable building materials and technologies, observes Sanjay Seth, Senior Fellow & Senior Director, Sustainable Habitat Division, The Energy and Resources Institute (TERI), and CEO, GRIHA Council. Recently, the Building Materials and Technology Promotion Council released the second edition of its 'Compendium of Prospective Emerging Technologies for Mass Housing' containing details of 16 emerging technologies.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "A new study conducted by GuildQuality and commissioned by the National Association of Home Builders (NAHB) shows that homeowners who purchased a National Green Building Standard-certified home in the past three years are happy they did. 
The survey found:\n- 94 percent of the homeowners would recommend a green home to a friend;\n- 92 percent would purchase another green home; and\n- 71 percent believe green homes are of higher quality than non-green homes.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-3", "d_text": "Residential building in February grew 5% to $245.7 billion (annual rate), making a partial rebound after an 8% decline in January. Multifamily housing registered a strong February, jumping 46%. There were nine multifamily projects valued in excess of $100 million that reached groundbreaking in February, led by the following – the $500 million Flushing Commons apartment complex expansion in Queens NY, a $300 million apartment high-rise in New York NY, and the $262 million condominium portion of the Four Seasons mixed-use tower in Boston MA. Through the first two months of 2015, the top five metropolitan areas ranked by the dollar volume of multifamily starts were as follows – New York NY, Boston MA, Miami FL, Washington DC, and Houston TX. Single family housing in February slipped back 7%, as severe winter weather in the Northeast led to a 24% plunge for that region. Single family declines were also reported for the South Atlantic, down 12%; the South Central, down 10%; and the Midwest, down 1%; while the West posted a modest 3% gain.\nThe 34% increase for total construction starts on an unadjusted basis during 2015, relative to 2014, was the result of greater activity for all three major construction sectors. Nonbuilding construction year-to-date soared 89%, with electric utilities and gas plants up 944% while public works retreated 7%. Nonresidential building year-to-date increased 22%, with manufacturing buildings and institutional buildings each up 26% while commercial buildings climbed 15%. Residential building year-to-date improved 7%, with single family housing up 7% and multifamily housing up 9%. 
By geography, total construction starts for the January-February period of 2015 revealed this behavior compared to last year – the South Central, up 126%; the Northeast, up 12%; the South Atlantic, up 9%; the West, up 3%; and the Midwest, down 5%.\nAdded perspective is obtained by looking at twelve-month moving totals, in this case the twelve months ending February 2015 versus the twelve months ending February 2014, which lessens the volatility inherent in comparisons of just two months.", "score": 20.327251046010716, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "How Green Is Your Building?\nThe United States accounted for 20 percent of global energy consumption in 2008, which is the largest share of world energy consumption by any country. And buildings account for 40 percent of all energy use, consuming more energy than the industrial or transportation sectors. The U.S. is also responsible for 20 percent of the world’s carbon dioxide emissions, with energy use in buildings responsible for 8 percent. With increasing focus on the energy, carbon and environmental footprint of buildings, new methods of construction are being considered. Due to their potential to reduce energy consumption, decrease greenhouse gas emissions, reduce water usage, and add to the building value, green buildings have been gaining attention in the United States.\nGreen buildings have been defined by the Environmental Protection Agency as structures that are built using processes that are environmentally responsible and resource-efficient throughout a building’s lifecycle from siting to design, construction, operation, maintenance, renovation and deconstruction. These buildings embody increased efficiencies in resource utilization, sustainable site planning, reduced wastage and environmental impacts, enhanced economic performance, and an overall positive impact on the quality of such spaces and on their occupants. 
Green buildings thus address design concerns of economy, utility, durability and comfort.\nThe U.S. Green Building Council (USGBC) has led the movement toward high-performance buildings. In 1998, the Council undertook the task of developing and subsequently administering the Leadership in Energy and Environmental Design (LEED) suite of certification systems, focused on evaluating sustainable building achievements with an integrated, whole-building approach. This approach combines the design, construction, and operation aspects of a building to get an aggregate performance quotient that would make it a sustainable building or project. LEED promotes a whole-building approach to sustainability by focusing on the key performance areas:\nSustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, locations and linkages, awareness and education, innovation in design and regional priority.\nSince the first version was introduced in 2000, the LEED system has become synonymous with sustainable design, not only in the United States, but in many other countries that have developed regional chapters of the Green Building Council to represent local dynamics, including Canada, Brazil, Mexico and India. While LEED was originally based on design and construction requirements and benchmarks rather than performance, the USGBC has taken steps to move closer to a performance-based system.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "“We blew away the LEED minimum, which is 10% recycled content,” says Gergits. 
“We had almost 16.5%.”\nThe 33,000 tons of steel and the reuse of concrete from site demolition for construction roadways and building slab underburden accounted for the majority of points, and were aided by other items with recycled content, including low-VOC furniture, finish materials, and carpeting.\n“We made it green without doing anything painful or building in extraordinary systems,” says Gergits. “It was a surprisingly uneventful process.”", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-0", "d_text": "Non-profit organization US Green Building Council (USGBC) recognized the UAE as one of the ten countries with the largest number of green buildings in the world.\nIt is the USGBC that assigns countries a position in the Leed (Leadership in Energy and Environmental Design) green building ranking. According to this rating, the UAE ranked eighth in the world, outside the United States, by the number of buildings certified according to Leed green building standards. The total floor area of these buildings in the UAE is 1.3 million square meters.\nThe first places in the ranking, outside the United States, belong to Canada, where there are 26.6 million square meters of buildings certified according to Leed standards, and China — 22 million square meters of green buildings.\nMoreover, the number of buildings in the UAE constructed in full compliance with all the requirements of the green building system and certified by Leed ranks the UAE fifth in the world. There are 990 such buildings in the UAE, with most of them located in Dubai and Abu Dhabi. 
At the same time, the number of professionals working in the field of green building ranks the UAE fourth in the world.\n“The UAE has become an increasingly important centre for the global green building movement, a development that will help provide greater environmental health and increased economic opportunity for its citizens and will hopefully help to inspire a robust green building market throughout the Middle East,” said Rick Fedrizzi, the chief executive of USGBC.\nAmong the most well-known green buildings in the UAE are the Rosewood hotel and International Tower office center in the UAE capital Abu Dhabi, the Dubai Chamber of Commerce building in Dubai Creek and the Standard Chartered Bank headquarters in Dubai.\nThere are official and mandatory regulatory standards for sustainable and green building applied today in Dubai and Abu Dhabi: in 2014 Dubai Municipality introduced its own mandatory Green Building Regulations, and Abu Dhabi has the long-existing Estidama system, a combination of mandatory regulations for developers and a "pearls" rating system designed to encourage developers by awarding them one or more "pearls" for compliance with green building standards.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Total of LEED ACCREDITED PROFESSIONALS recognized by the U.S. Green Building Council, as of September 2011.\nLEED APs with specialty – 64,060\nLEED APs without specialty – 83,985\nLEED Green Associates – 23,226\nTotal LEED Professionals – 171,271\nThe cost per square foot to build a DAYCARE CENTER in San Francisco in 2011, according to RSMeans.\nThe approximate U-FACTOR (Btu/hr-sf-°F) of currently available aluminum frame windows and doorframes. Also the number of AIA/CES DISCOVERY LEARNING UNITS that can be obtained by studying “High-Performance Windows + Doors” and passing the 10-question exam (80% score required). 
Source: BD+C\nThe percentage a GEOTHERMAL HEAT PUMP can reduce energy consumption when compared to a conventional HVAC system, according to the EPA. Additionally, the U.S. Department of Energy reported that geothermal heat pumps can cut HVAC energy demand by 50% and overall energy demand by 35%. Geothermal heat pumps are expected to gain market share as recent government mandates require newly constructed buildings to be zero net energy. Energy-efficient retrofits will also increase market demand for the pumps. Source: EPA\nThe September 2011 AIA ARCHITECTURE BILLINGS INDEX, following a score of 51.4 in August 2011. The monthly ABI index scores are based on a score of 50, with scores above 50 indicating an aggregate increase in billings and scores below 50 indicating a decline. In regard to September’s 46.9 score, “It appears the positive conditions seen last month were more of an aberration,” said AIA Chief Economist Kermit Baker, PhD, Hon. AIA. Source: AIA\nBuildings taller than 420 feet are now required to include an EXTRA EXIT STAIRWELL OR ELEVATOR that occupants can use for evacuations, according to the National Institute of Standards and Technology. Source: NIST\nTotal put-in-place CONSTRUCTION DOLLARS in billions predicted for 2012, according to the latest FMI forecast, which calls for 2% growth in 2011, and 6% for 2012.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-1", "d_text": "The new HQ kept 95% of the existing building’s structural walls, slab, and roof deck intact, plus most of the curtain wall. Diverting, reusing, and recycling materials reduced waste into the landfill stream by more than 75%. DPR’s HQ achieved net-zero energy use within its first six months of operations. During this time, it used approximately 110,000 kWh of electricity, while the PV system generated 230,000 kWh, which provided a sizable margin to achieve annual net-zero energy. 
DPR is pursuing LEED for New Construction v2.2 Platinum certification for its new HQ.\n> DPR Construction, California, United States\n> Duration: February 2009 - May 2010", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-3", "d_text": "We're actually making all the buildings more efficient," he told InsideClimate News.\nFor instance, in its 2005 report, EIA said that a one percent rise in floor space would create a roughly one percent rise in CO2 emissions. That's no longer true, Martinez explained. According to the 2011 report, it now takes an eight percent increase in floor space to create a one percent uptick in emissions.\nFigures Inspire Industry, More Work Ahead\nSkip Laitner, an economist at the American Council for an Energy-Efficient Economy (ACEEE), a Washington-based nonprofit, said he analyzed EIA's data extensively for a Jan. 12 report he co-authored and believes that Architecture 2030's overall claim that green buildings will produce national efficiency gains has a sturdy foundation.\n"We're beginning to see evidence of that," he told InsideClimate News.\nFor many in the green building industry, the EIA data is inspiring.\nChris Pyke, vice president of research at the nonprofit U.S. Green Building Council, which developed the LEED green-building rating system, said the new figures likely will intensify the push for more low-impact buildings. "It makes us even more excited to go out and redouble our efforts to get real data ... and show that we're making the turn" toward a greener building sector, he said.\nKirk Teske, chief sustainability officer at HKS Architects in Dallas, a 1,000-employee global firm that has signed on to Architecture 2030's carbon-neutral challenge, agreed that the data should wake up the building industry. "It's very encouraging and motivating to see progress being made. 
Now we just need to maintain that level of reduction going forward."\nStill, Architecture 2030's goal to make all buildings carbon neutral in less than two decades remains a long shot.\nGreen retrofits and renewable energy systems like rooftop solar panels are still costly, for starters. But most efficiency advocates agree there's a bigger obstacle of perception they have to surmount. Americans aren't used to thinking of buildings as a vast drain on the energy supply, and thus don't see the major impact that efficiency could have in meeting the nation's energy and climate challenges.\nLaitner of ACEEE cited figures from a 2009 publication by economists Robert Ayres and Benjamin Warr, which found that nearly 90 percent of the energy Americans consume is wasted due to inefficiencies.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-1", "d_text": "In 2012, the construction industry will return a level of construction in current dollars that is comparable to levels recorded in 2003. Source: FMI\nThe percentage of AIR REDUCTION possible following the installation of an air barrier system in a commercial or industrial building, according to the National Institute of Standards and Technology. The installation of an air barrier can also shrink gas bills by more than 40%, and reduce electrical use by 25%. Source: NIST\nTotal funds currently invested in the energy-efficiency financing initiative known as the BILLION DOLLAR GREEN CHALLENGE. The fund aims to get colleges, universities, and other nonprofits to invest $1 billion in self-managed funds to be used to finance energy-efficiency upgrades. Source: BD+C\nOwners of every New York City commercial and residential building larger than 50,000 sf will have to post each building's ENERGY USE ONLINE, starting with commercial buildings in 2012, followed by residential buildings in 2013. 
Architects and environmentalists believe the measure will prompt owners to invest in cleaner, more sustainable designs. Source: BD+C\nThe number of sides to the Octagon House in Washington D.C., the original home of the AIA. Built between 1798 and 1800, the Octagon House was designed by Dr. William Thornton, the architect of the U.S. Capitol. Adapted to an irregular-shaped lot, the design of the three-story brick house combines a circle, two rectangles, and a triangle, resulting in a six-sided structure. Source: National Park Service\nSubmit your “By the Numbers” item to: Tim Gregorski, Senior Editor, [email protected].\nYou must include documentation showing the source of your entry. Readers whose items are chosen will receive credit in the magazine and a $10 Amazon gift certificate. Decision of the editors of BD+C is final.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "Atlanta’s commitment to going green has not gone unnoticed. The city came in as the third-most environmentally friendly city in the 2017 National Green Building Adoption Index.\nThe report looked at the growth of Energy Star and LEED-certified office spaces since 2005 in the 30 largest cities in the U.S.\nAtlanta has ranked in the top 5 of the survey every year, but this is the highest spot it has taken so far. The report found that 55 percent of all space reviewed currently has an Energy Star or LEED certification.\nApproximately 24 percent of the city’s buildings have an Energy Star label, which is double the national average and the second-highest market total, only behind Manhattan. The total of LEED-certified buildings currently sits at 26.3 percent, which is also second behind Manhattan.\nThe study found that 10.3 percent of all buildings surveyed are Energy Star–labeled while 4.7 percent are LEED-certified, which is above last year’s totals. It also found that nine of the top 10 cities have implemented benchmarking ordinances. 
Those cities have 9 percent more certified buildings and a 21 percent higher certified square footage. Atlanta recently announced its goal to run 100 percent on clean energy by 2035.\n“While it is still too early to make a definitive correlation between benchmarking ordinances and the rate of growth in ‘green’ buildings, this year’s findings do begin to establish a link that will be studied closely in the future,” said David Pogue, CBRE’s Global Director of Corporate Responsibility.\n|Atlanta Q4 2014||% of Buildings||% sq. ft of Buildings|\n|Core and Shell||0.2||0.5|", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "The worldwide green building materials market should nearly double over the next five years, reaching $523.7 billion by 2027, according to recent analysis by Future Market Insights, Inc. The climb would represent 11.06% annual growth from 2021’s $280.5 billion—a rate expected to continue through 2032.\nInsulation is forecast to be the fastest-growing application, rising at a pace of 11.7% over the period due to its excellent energy efficiency and increased emphasis on installing interior insulation solutions.\nNorth America owns a significant green building material market share, with growth substantially driven by strict rules on the use of environmentally friendly products in the construction sector.\nWith exposure to extreme climate conditions and support from government initiatives, the Asia Pacific and Latin America are also seeing increased demand for green building materials.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-1", "d_text": "- The project has achieved a combined recycled content value of 30.01% of the total materials.\n- 50% of the total project’s materials, based on cost, were manufactured within 500 miles of the project.\n- 89.12% of wood-based materials are certified in accordance with FSC Principles and Criteria.\nIndoor Environmental Quality\n- The design achieves air change 
effectiveness of 0.9 or greater in each ventilated zone as determined by ASHRAE 129-1997.\n- A construction IAQ plan was followed and implemented.\n- A two-week building flush-out was conducted with 100% outside air from December 1, 2006 to December 15, 2006, and the reference standard’s IAQ testing protocol was followed.\n- Low-emitting materials, adhesives and sealants were used.\n- All paints, including topcoats and primers, meet the VOC requirements of Green Seal.\n- The project uses carpeting that complies with the CRI Green Label Program.\n- All composite wood and agrifiber products used in the project do not contain added urea-formaldehyde, and a list of products has been included.\n- The project has been designed to maintain indoor comfort within the ranges established by ASHRAE 55-1992, Addenda 1995.\n- A permanent temperature and humidity monitoring system that operates during all seasons has been installed. The system is to permit control of individual building zones to maintain thermal comfort within the ranges defined in ASHRAE 55-1992, Addenda 1995.\nInnovation and Design\n100% of parking is underground, meeting the required threshold for exemplary performance of SSc7.1.\nWater use was reduced by 49%, exceeding the threshold of 40% savings for exemplary performance of WEc3.\n100% of the building’s regulated load is supplied by renewable power that meets the definition of Green-e, meeting the requirements for exemplary performance of EAc6.\nAn innovation credit for green cleaning and housekeeping has been submitted.", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "A compound annual growth rate (CAGR) of 17.9% has been forecast for the green building materials market through 2016, driven by the need for enhanced cost reduction and the increase in initiatives by government bodies.\nThe use of green building materials has remained relatively consistent during the global recession, and recent figures indicate 
that demand is only going up.\nSolar power products saw explosive growth between 2002 and 2012, driven by increasing installation of rooftop-based solar power modules connected to electricity distribution systems.\nGoing forward, favourable tax incentives and strong interest in the use of renewable energy sources will promote demand for LEED-eligible solar power products.\nIn recent years, energy efficiency has become a major global concern, particularly as a result of global warming and the rapid depletion of non-renewable power resources. It is currently estimated that buildings account for 40% of total global energy consumption.\nThis high consumption has prompted several governments across the globe to form policies to improve energy efficiency in buildings.\nFor instance, the US government offers a tax reduction of US$1.80 per square foot to building owners who use green materials and techniques such as building envelopes, interior lighting, and hot water systems that reduce the energy consumption of buildings by up to 50%.\nAlso, many states in the US offer incentives for the usage of recyclable items such as windows, doors, roofs, and insulation.\nThe US is already seeing an uptick in sustainable construction. By 2015, an estimated 40 to 48% of new non-residential construction by value will be green, equating to a $120-145 billion opportunity, according to the US Green Building Council.\nA major challenge currently facing the industry is the lack of awareness regarding the benefits of green building materials. The lack of awareness in developing countries has had a negative impact on the growth of the market.\nThe green building materials market is dominated by a number of players including E.I. du Pont de Nemours and Co. 
(DuPont), Lafarge, Owens Corning, and Saint-Gobain S.A.
TUCSON, ARIZ. — Green building consultant Jerry Yudelson has published his Top 10 list of green building trends for 2009. Yudelson says that green building will continue to grow in spite of the global credit crisis and the ongoing economic recession in most countries.
Consulting engineer Yudelson is author of the books Green Buildings A to Z and The Green Building Revolution, which the editors of CONTRACTOR believe should be required reading for any mechanical or plumbing contractor getting into green and sustainable construction and service. Yudelson also authored a white paper for the Mechanical Contracting Education and Research Foundation of the Mechanical Contractors Association of America entitled, "European Green Building Technologies."
"What we're seeing is that more people are going green each year, and there is nothing on the horizon that will stop this trend," explained Yudelson, the principal of Tucson-based Yudelson Associates. "In putting together my Top 10 trends for 2009, I'm taking advantage of conversations I've had with green building leaders in the U.S., Canada, Europe and the Middle East over the past year."
Yudelson's Top 10 trends include the following:
1. The green building industry will continue to grow more than 60% in 2009, on a cumulative basis.
"We've seen cumulative growth in new LEED projects over 60% per year since 2006, in fact 80% in 2008, and there's no sign that the green wave has crested," he said.
2. Green building will benefit from the Obama presidency, with a strong focus on green jobs in energy efficiency, new green technologies and renewable energy. This trend will last for at least the next four years.
3. The focus of green building will begin to switch from new buildings to greening existing buildings. "The fastest growing LEED rating system in 2008 was the LEED for Existing Buildings program, and I expect this trend to continue in 2009," said Yudelson.
4. Awareness of the coming global crisis in fresh water supply will increase, leading building designers and managers to take further steps to reduce water consumption in buildings with more conserving fixtures, rainwater recovery systems and innovative new water technologies.
5. LEED Platinum-rated projects will become more commonplace as building owners, designers and construction teams learn how to design for higher levels of LEED achievement on conventional budgets.
6.
85% of architects report a positive impact on health and well-being from their green school projects, and 85% of them are reflecting student mobility and health concerns into the design of their buildings.
The study was produced with the support of the U.S. Green Building Council Center for Green Schools, Lutron, Project Frog and Siemens.
Survey and data partners included the Council of Educational Facility Planners International, The American Institute of Architects, Associated General Contractors of America, Green Schools National Network, National Association of Independent Schools, National Building Museum, Society for Colleges and University Planning, and Second Nature.
This report is being released with the opening of the new Green Schools exhibition at the National Building Museum (www.nbm.org). A press preview of the exhibition is scheduled for Thursday, February 28, 2013 from 10 am to noon at the Museum in Washington, DC.
To download the full New and Retrofit Green Schools SmartMarket Report, go to http://analyticsstore.construction.com/index.php/new-and-retrofit-green-schools-smartmarket-report-2013.html.
For the fourth consecutive month, private nonresidential construction has increased, up 1.8 percent in June, according to the August 1 report by the U.S. Census Bureau. However, spending in this sector is down 1.3 percent compared to one year ago. Total nonresidential construction spending - which includes both privately and publicly financed construction - was $528.4 billion in June, up 0.5 percent for the month, but down 5.5 percent year-over-year.
Eleven of the sixteen nonresidential construction subsectors posted increases for the month, including manufacturing, up 4.1 percent; communication, up 3.9 percent; power, up 3.1 percent; and health care, up 2.4 percent. Four subsectors have had increases in construction spending from the same time last year: power, up 13.8 percent; commercial, up 4.2 percent; communication, up 2.7 percent; and health care, up 2.4 percent.
Five nonresidential construction subsectors posted decreases for the month, including conservation and development, down 6.6 percent; educational, down 3.2 percent; amusement and recreation, down 3.1 percent; highway and street, down 1.6 percent; and religious construction, down 1.4 percent. Twelve subsectors saw decreases in spending year-over-year, led by religious, down 25.7 percent; lodging, down 24.3 percent; and water supply, down 14.3 percent.
Public nonresidential construction spending slipped 0.7 percent for the month and is down 9.2 percent compared to the same time last year. Residential construction spending fell 0.3 percent in June and is down 2.1 percent over the last twelve months.
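The month-over-month and year-over-year figures quoted in this report are simple percent changes of put-in-place spending totals. A minimal sketch of that arithmetic (the helper name and the back-calculated May level are ours; only the $528.4 billion June total and the 0.5 percent monthly change come from the report):

```python
def pct_change(new: float, old: float) -> float:
    """Percent change from old to new, as used for put-in-place spending."""
    return (new - old) / old * 100.0

# The report gives June nonresidential spending as $528.4 billion,
# up 0.5 percent for the month; the implied May level is therefore:
june_total = 528.4               # $ billions (from the report)
may_total = june_total / 1.005   # back-calculated, not a reported figure

print(round(pct_change(june_total, may_total), 1))  # 0.5 by construction
```

The same helper reproduces the year-over-year comparisons when fed the same month from the prior year instead of the prior month.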
Total construction spending - which includes residential and nonresidential construction - inched up 0.2 percent for the month, but was down 4.7 percent from June 2010.
"Some may be surprised with today's report that construction spending edged higher in June given the overall weakness of the U.S. economy in recent months," said Associated Builders and Contractors Chief Economist Anirban Basu.
target for binding climate treaty and creation of $100 billion fund to assist poor countries with compliance.
Net-zero new homes mandated in California, Illinois, Massachusetts, Colorado, Oregon, and Washington, D.C.
Rapid prototyping patents expire, allowing widespread use.
IECC 2015 requires outdoor air monitoring to address indoor filtering requirements.
All electric appliances come equipped with radio-frequency identification chips for smart-grid management.
Bio-based mapping becomes common practice in design firms, fostered by the U.S. Department of Agriculture's BioPreferred Program.
All homes equipped with smart meters.
50% of materials used in new homes are made from recycled content.
A standardized, far-reaching digital database of product ingredients becomes available.
2025: SIPs and insulating concrete forms account for 50% of residential construction.
San Francisco launches the Alliance for Innovation in Urban Water Systems to foster collaboration on water efficiency practices.
Beijing recycles 100% of its wastewater.
Prerequisites mandate collection, treatment, and use of alternative water sources.
Major U.S.
cities begin creating designated recycled-water use areas that require dual plumbing in projects.
All public utilities offer demand response/time-of-day rate programs, as well as peak and off-peak season rate programs.
Residential water consumption reduced to 20 gallons/person/day.
California reduces water use by 20%; Santa Monica eliminates 100% of imported water.
All homes meet EPA WaterSense criteria.
2021: Lake Mead goes dry.
2021: Irrigated turf landscaping limited to 40% of property footprint.
2022: National graywater use standard established.
2022: Water subsidies removed, and block water pricing implemented.
LEED v4 passed by USGBC membership.
NGBS ICC 700-2015 released with progressive performance guidelines.
National certification program created for residential energy efficiency professionals.
LEED v5 released, featuring an increased association between energy and water.
Environmental Product Declarations (EPDs) and Health Product Declarations (HPDs) included in Federal Trade Commission Green Guides.
NGBS ICC 700-2018 released with new performance guidelines.
All of the Leading Home Builders of America certify 90% of homes under independent green certification.
LEED v6 released, addressing regenerative and energy- and water-positive design and occupancy.
Majority of U.S. building codes shift from prescriptive to outcome-based codes.
2023: Solar permitting and installation requirements are standardized.
The Green Appraisal Addendum from the Appraisal Institute aims to better value the performance of green homes.
Environmental UPDATE November 2009
Green Buildings: Planning for the Future
Lately, everyone is talking about green buildings and green construction.
The prevailing wisdom is that these green technologies are necessary to conserve energy, thereby (1) reducing the cost of energy, (2) reducing the need for additional electrical generation capacity and (3) reducing emissions of greenhouse gases. Notwithstanding these common beliefs, however, there seems to be a fair amount of confusion regarding green buildings. This article briefly examines some of the issues that New Jersey businesses will face in the coming years, as we adapt to this new green paradigm.
Will green buildings become prevalent? The December 15, 2008 draft Global Warming Response Act Recommendation Report strongly suggests that the answer is "Yes." New Jersey has set aggressive targets for reductions in annual emissions of greenhouse gases by 2020 and by 2050. In order to reach the 2020 target, all the State's current plans will need to be fully implemented and will need to work exactly as planned - a tall order indeed. Furthermore, the State does not have a good notion of how to reach the 2050 target emissions, but has determined the key sources of those emissions. Residential and commercial buildings account for 21% of the greenhouse gas emissions in New Jersey, which places building occupancies as the State's third-largest contributor to emissions. Consequently, the State will need to aggressively target building occupancies in order to meet its targets.
1. Modify building codes to require that all new construction is at least 30% more energy efficient than conventional construction.
2. Set new minimum efficiency standards for new appliances and other equipment.
3. Provide technical assistance to commercial and industrial entities to develop strategies for reducing energy demand, particularly peak demand. Also, provide reductions in electricity rates for customers who permit utilities to control their usage during peak demand periods.
4.
Reduce tax barriers and provide financial incentives to promote the installation of renewable energy and energy-efficient technologies.
Recent federal case law, however, suggests that New Jersey's plan to require new construction to be 30% more energy efficient than conventional construction may be preempted by federal law. The federal Energy Policy and Conservation Act governs energy efficiency and energy use for residential, commercial and industrial appliances and equipment, including heating, ventilation, air conditioning, and water heaters.
According to a newly published report from CBRE and USGBC, as Chinese builders move in accordance with the nation's 13th Five Year Plan, green building space is expected to reach two billion square meters by 2020, up from current estimates of 600 million square meters of green building space spread across more than 300 cities.
Between 2006 and 2016, LEED-certified projects had a compound annual growth rate of 77 percent, making China the global leader for LEED projects outside of the United States.
The "2017 China Green Building Report: From Green to Health" notes, additionally, that as of August 2017, more than 48 million square meters of projects across 54 Chinese cities have been LEED-certified.
The report also ranks the top 20 cities for LEED certification in 2017:

| City | 2017 LEED-certified Space (10,000 square meters) | Cumulative Growth Since 2014 |
| --- | --- | --- |

Additional findings related to LEED:
- In the past four quarters, the average occupancy rate of LEED projects in China was 81.7 percent, which is 1.5 percent higher than that of traditional offices. By comparison, the average occupancy of LEED Platinum projects in China was 86.7 percent, 10 percent higher than traditional offices.
This indicates a significant increase in occupant demand for LEED Platinum structures.
- Since 2014, LEED Platinum spaces have grown more than 200 percent and now account for 22 percent of all LEED-certified space in China, up from 14 percent.
- As of Q2 2017, over 2.22 million square meters of quality office space had earned LEED Platinum certification.
The report is the third in a series of studies conducted by CBRE and USGBC. In 2016, "Towards Excellence: Market Performance of Green Commercial Buildings in the Greater China Region" found that LEED-certified Grade A office buildings exceeded 5.6 million square meters across 10 major cities in greater China, an increase of 7.4 percent from the previous year, accounting for 28 percent of the total market.
The Cromley Lofts in Alexandria, Virginia have earned a Gold level certification, making them the first condos in Virginia to become LEED certified. These 8 beautiful units are located in The Old Town area of Alexandria in a vintage warehouse building circa 1910 — a fitting example of the environmental benefits of adaptive reuse.
I want to shout out to a project one of our contributors, Sarah Roe, is working on. If you're interested in helping out, please make sure to get in touch with her as outlined below.
Green Hope Community is a newly-formed non-profit organization whose goal is to establish an eco-friendly community for internationally orphaned children. This community will be based in the United States; sites in North Carolina, Georgia, New Mexico, Arizona and Oregon are currently being considered.
The goal of the Green Hope Community is to give orphaned children a chance not only to survive, but to thrive, and to give them the tools to become leaders in the fields of environmental progress and social justice.
The Chicago FBI Headquarters has become the world's first LEED EBOM project to earn Platinum level certification. Under the prior iteration for certifying existing buildings, what we refer to as LEED-EB, approximately 14 projects received LEED Platinum; however, FBI Chicago Headquarters is the first to receive the USGBC's highest level of certification under LEED for Existing Buildings: Operations & Maintenance (EBOM). To date, though, only about six projects have been certified under EBOM since its inception in early 2008.
Last month USGBC posted Green Buildings by the Numbers, a three-page, bite-size State of the Green Building Union that simply brings together some useful stats. This palatable little report helps a person wrap their head around the realities and opportunities for green building. The authors seem to have attempted a sort of realistic optimism with a series of facts and percentages that say 'there's been progress in gaining market share for green buildings and buildings stand to make huge gains in the struggle to create a more sustainable human existence, but we're not there yet.' Included are a couple of specific statements on the expectations for green building market penetration (see one of the more intriguing quotes below), but the authors shied away from detailing market penetration thus far.
Last year I talked about five green building trends and most of that, generally speaking, was spot on. This year's going to be a little tougher nut to crack, however, because things are changing every day.
In 2014, private nonresidential construction saw the biggest gains, with spending achieving a level of $337 billion, a 10.5% increase over 2013.
Residential construction was $355.2 billion, a 3.4% increase.
The nonresidential building market was hamstrung by weather-related delays during the first part of 2014, but conditions improved dramatically throughout the rest of the year to finish with greater than anticipated spending levels. Overall nonresidential construction was $606.2 billion, an increase of 6.6%. Public nonresidential spending was $269.2 billion, a 2.1% increase, demonstrating a modest turnaround after posting a 2% decrease in 2013. Below are five major nonresidential construction markets. All showed increases in total put-in-place construction in 2014 except for healthcare (-6.2%). The office market grew 18.9%, manufacturing 15.1%, commercial 11.9% and education less than 1%.
"For the first time in nearly a decade there was growth in all three major construction segments - public, private nonresidential and residential," said Ken Simonson, the association's chief economist. "If the president and Congress can work out a way to pay for long-term investments in our aging infrastructure, there is a good chance this pattern will repeat in 2015."
Current Construction Indicators
As a leading indicator of construction activity, the American Institute of Architects' (AIA) Architecture Billings Index, or ABI, reflects the approximate 9- to 12-month lead time between architecture billings and construction spending.
With the exception of March and April, the ABI stayed above 50 throughout the year, indicating an increase in billings and signaling an expansionary market for design services. The year ended on a positive score of 58.2. Regionally, the South (56.8), West (52.9) and Midwest (50.8) had a more optimistic outlook than the Northeast (45.5). The multifamily residential (55.7), institutional (52.5) and commercial/industrial (51.2) indices suggested growth, while the mixed-practice sector rating (45.8) suggests contraction.
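Because an ABI reading above 50 signals growth in billings and a reading below 50 signals decline, the regional and sector readings above can be bucketed mechanically. A minimal sketch (the 50-point threshold is the AIA's published convention; the function name and labels are ours):

```python
def abi_signal(score: float) -> str:
    """Classify an AIA Architecture Billings Index reading.

    Readings above 50 indicate an increase in billings (expansion);
    readings below 50 indicate a decrease (contraction).
    """
    if score > 50.0:
        return "expansion"
    if score < 50.0:
        return "contraction"
    return "flat"

# Regional readings quoted above:
regions = {"South": 56.8, "West": 52.9, "Midwest": 50.8, "Northeast": 45.5}
for region, score in regions.items():
    print(f"{region}: {abi_signal(score)}")
```

Applied to the sector indices, the same rule reproduces the article's reading: multifamily residential (55.7), institutional (52.5) and commercial/industrial (51.2) expand, while mixed-practice (45.8) contracts.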
The new projects inquiry index was 58.2.
Figure 6.3: Source energy use intensity against weekly operating hours for small retail (purple) and large retail (yellow). The colored crosshairs give medians. Source: bpd.lbl.gov
In order to avoid cluttering the article with methodological details (especially what is included or not in the analysis), these are gathered here. The need arises in part because the data have some drawbacks.
Figure A.1 shows that there were (supposedly) over fifteen times as many commercial buildings constructed between 1900 and 1905 as in the seventeenth, eighteenth and nineteenth centuries combined (the first bar is 1650–1900). The year 1900 itself is vastly overrepresented, so it is likely treated as a sort of NA. 1901–1910 is thus used as a decade instead of 1900–1910. Moreover, the mean energy consumption of 1900 is twice as high as that of the next few decades (the median is even higher), so these commercial buildings without a construction year are particularly inefficient.
Also, the 2010 decade has a tenth of the commercial buildings of the previous one, so this decade is ignored too (its energy consumption was pretty close to the 2000 decade anyway).
Figure A.1: Histogram and median of the construction date of commercial buildings (a bar represents five years). Source: bpd.lbl.gov
One of the categories of residential buildings is buildings with "5+ units". These are not very numerous but (i) their median source EUI is twice as much as the overall median for residential buildings and (ii) many of them were supposedly built in 1895, 1905, 1915, etc. Apparently these buildings spontaneously sprout every ten years.
Also "mixed use" accounts for only 464 buildings with energy consumption data. This is both marginal and too imprecise (does it genuinely belong with residential buildings?). In Fig.
1.4, 'residential' thus means 'residential (not mixed-use) excluding 5+ units'.
The larger residential buildings are of course rarer, but only up to 50 000 ft2 (4 650 m2). There are 360 of them between 40 000 and 50 000 ft2, but 1600 in 50 000–60 000 ft2.
Since then, more examples have emerged. In an article titled "Another Green School Gets Failing Grades," found on the Washington Policy Center's Blog Web site, a Tacoma school built under the state's green building law passed in 2005 has failed to meet projected green building performance goals. The article reads: "Rather than putting funding where it will make a difference, Washington is chasing nonexistent benefits which are likely to do actual harm to students especially as the state deals with a $2.5 billion budget shortfall." This example raises great concern over how the current economic situation and green building legislation will exacerbate an already problematic situation: State passes poorly drafted green building legislation based on poorly drafted green building rating system; building incurs additional costs to incorporate green building rating system requirements in meeting new legislative requirements; building does not perform as promised in green building legislation/rating system; money dries up; litigation ensues.
In a recent article by Joe Lstiburek titled "Prioritizing Green-It's the Energy, Stupid," the author pokes fun at the USGBC and its attempt to pull the wool over our eyes by using creative statistical reporting.
Lstiburek includes a sidebar in the article about a March 2008 USGBC report titled "Energy Performance of LEED for New Construction Buildings" in which a statistical sleight of hand is employed to show that LEED certified buildings perform better than non-LEED certified buildings.
In explaining the statistical error made in the report, he proves conclusively and persuasively that this is not the case.
A NEW GREEN BUILDING DAWN?
There may be hope on the horizon for green building under the Obama administration. The Obama/Biden energy plan calls for an investment of $150 billion in green technologies over the next 10 years. Buildings are a prominent feature of this plan, which calls for the implementation of the American Institute of Architects 2030 challenge to make all new buildings carbon neutral with zero net emissions by 2030. Additionally, the plan calls for a 40 percent increase in energy efficiency in all new federal buildings within five years and carbon neutrality for all new federal buildings by 2024. In the residential sector, the plan calls for increasing energy efficiency of at least one million low income houses each year for the next decade.
Whether or not this plan will be enacted as law remains to be seen but there is strong bipartisan support for green collar jobs and the passing of a quick economic stimulus bill.
McGraw-Hill Construction defines "green jobs" as those involving more than 50% of work on green projects (defined by McGraw-Hill Construction as projects meeting LEED or another credible green building certification program, or one that is energy- and water-efficient and also addresses indoor air quality and/or resource efficiency) or designing and installing uniquely green systems. Focusing on the construction professions exclusively, this definition excludes support or administrative professionals and manufacturing, production or transportation-related services.
This growth of green may help draw more young professionals into the industry. For example, the study also reveals that 62% of trade firms are concerned their profession does not appeal to the younger generation and 42% of architects report the same.
However, the younger generation reports a strong commitment to sustainability, with 63% of architecture students saying they would engage in sustainable design out of a personal responsibility. This suggests that as green rises, so too may interest by young professionals in the design and construction fields of practice.
In the 1990s, the owners removed asbestos and aggressively recycled and composted waste materials, a move that has saved a reported $25,000 annually. The building uses nearly 80 percent less water than the surrounding buildings. Much of the savings were achieved when the building's staff began to water the plants manually rather than by using an automatic irrigation system. Also, a yearly average of $75,000 was saved on 11 energy improvements to the HVAC, electrical, and lighting systems.
In Cincinnati, another company installed a 3,500-foot pellet stove on the second story of their building, further implementing a hand-washing station over the toilets so the water could be reused in the toilet bowl. This building also made a habit of composting its scrap wood and shipping any carpet scraps to a recycler in Seattle. The building also clipped off the ends of old aluminum blinds to fit them onto new windows. For this project, energy-saving features have cut costs by an estimated 25 percent.
Green Buildings Can Provide a Glimpse of the Future
Companies have also moved into buildings as tenants, making changes in the rest of the building that exceed expectations. In Atlanta, the design firm Cooper Carry took over two floors of a 50-story downtown skyscraper. In an effort to save energy, the lights automatically turn off when natural light fills the room - that move alone saved an estimated 35 percent on lighting costs.
When developers go beyond the existing standards, they reveal the way of the future, and that future shouldn't intimidate anyone.
The foundation that owns the Bullitt Center admits that the building was more expensive to build (roughly 23 cents more per square foot). They also conceded that it would have been more profitable to construct the building to code, and then flip it to a real estate investment trust. However, once fully leased, the building will turn a profit, and it has been designed to last for the next 250 years. So, this building — on the far extreme side of green commercial construction — can be profitable now, and stands to be profitable for decades.
Green features present intriguing opportunities for developers. As we have seen, some companies are saving tens of thousands of dollars each year on energy costs. These improvements may also ultimately make the building more valuable and marketable.
Further perspective comes from looking at twelve-month moving totals, in this case the twelve months ending March 2017 versus the twelve months ending March 2016. On this basis, total construction starts were up 2%. By major sector, nonbuilding construction decreased 8%, with electric utilities/gas plants down 40% while public works increased 6%. Residential building rose 3%, as a 4% drop for multifamily housing was outweighed by a 7% gain for single family housing. Nonresidential building advanced 7%, with institutional building up 14%, commercial building up 7%, and manufacturing building down 26%.
[00:04:48] Zach White: So you think about agriculture as a big one, or we think about, the actual production of energy as a, as a big one. Like what, where does green buildings fit, and is there any, hey, this is, it's huge. That's why it's getting so much more attention now. I dunno, what's your thought on that?
[00:05:04] Charlie Cichetti: Yeah.
You know, buildings, the scientists say, account for 39% of our global greenhouse gas emissions, so, that's a lot, you know, to build 'em and then to run 'em. So buildings and, and homes and you know, kind of the built environment.
[00:05:20] and so that's why it's just needed this focus, right? It's so important. Now there are other things you alluded to, right? There's renewables, there's going all electric with our buildings. There's definitely even some legislation getting passed just where, Corporates Fortune 100, Fortune 500 have to start to be transparent and be like, Hey, here's our impact in, in our operations and supply chain.
[00:05:42] but I'd say 39% of global greenhouse gas emissions the studies have showed for many years, are related to buildings in the United States, construction. $10 trillion I believe. It's a tremendous amount of our GDP and so just know that there, we might need to fact check that, but I'm pretty sure that is close to the number.
[00:06:05] there's a lot happening in and around here. So if you, you were to go to the biggest impacts, Gosh, if we made that better and less impact on the environment, we followed some rules here and everyone got it and they were educated. I think we can make some pretty big change. That is, that's
[00:06:22] Zach White: a huge number.
[00:06:23] so full disclosure, I have not fact checked the 10 trillion either, but like 39%, that's a, that's bigger than I expected, Charlie.
A recent federal decision from New Mexico looked to the legislative history and found the following explanation: \"the building code exception [is] intended to 'ensure that performance-based codes cannot expressly or effectively require the installation of covered products whose efficiencies exceed … the applicable Federal standard ….\"\nThe import of this quoted language is remarkable. Green buildings achieve their efficiencies by using a collection of technologies, designs and strategies. The Court, however, decided that unless a building could achieve 30% energy efficiencies by using only products covered by federal law, then the proposed state regulation was impermissible due to federal preemption. (New Mexico, like New Jersey, was seeking to impose 30% efficiencies.) Consequently, because 30% efficiencies cannot be achieved with just covered products, the court determined that the proposed New Mexico code was preempted. While it is not certain that a federal court in New Jersey would rule the same way, this decision suggests that New Jersey's strategy for implementing the Global Warming Response Act may be in peril. States may need to ask Congress to amend the federal law before they are able to proceed with their intended building code amendments.\nRegardless of whether green buildings are mandated, people will continue to construct them because tenants demand them. For instance, a Colliers ABR study of commercial leasing in New York City for the first quarter of 2009 found that three of the four largest leases were for space in green buildings. So, the next logical questions are: how does one go about building \"green\"? And, how much more will it cost as compared to conventional construction?\nAs to how to build \"green,\" there are numerous organizations that provide guidelines for achieving defined standards. 
Two of the better-known organizations are Leadership in Energy and Environmental Design (LEED) and Green Globes, each of which provides standards for four tiers of \"green\" building efficiencies. Integral to both of these programs, and of inherent importance regardless of the standard selected, is the requirement that compliance with the standard be audited. Each of the available programs has benefits and detriments, which should be evaluated carefully by a prospective building owner in selecting a desired standard. Then, such building owner should state clearly in all building construction contracts the standard to follow, and the tier of compliance desired.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-13", "d_text": "Hoboken, New Jersey: John Wiley & Sons Inc.\n- \"GSA Public Buildings Service Assessing Green Building Performance\" (PDF). Archived from the original (PDF) on 2013-07-22.\n- \"Presentation\" (PDF). www.ipcc.ch.\n- \"Howe, J.C. (2010). Overview of green buildings. National Wetlands Newsletter, 33(1)\".\n- Goodhew S 2016 Sustainable Construction Processes A Resource Text. John Wiley & Son\n- Mao, Xiaoping; Lu, Huimin; Li, Qiming (2009). \"A Comparison Study of Mainstream Sustainable/Green Building Rating Tools in the World\". 2009 International Conference on Management and Service Science. p. 1. doi:10.1109/ICMSS.2009.5303546. ISBN 978-1-4244-4638-4.\n- Carson, Rachel. Silent Spring. N.p.: Houghton Mifflin, 1962. Print.\n- U.S. Environmental Protection Agency. (October 28, 2010). Green Building Home. Retrieved November 28, 2009, from http://www.epa.gov/greenbuilding/pubs/components.htm\n- WBDG Sustainable Committee. (August 18, 2009). Sustainable. Retrieved November 28, 2009, from http://www.wbdg.org/designsustainable.php[permanent dead link]\n- Life cycle assessment#cite note-1\n- Hegazy, T. (2002). Life-cycle stages of projects. Computer-Based Construction Project Management, 8.\n- Pushkar, S; Becker, R; Katz, A (2005). 
\"A methodology for design of environmentally optimal buildings by variable grouping\". Building and Environment. 40 (8): 1126. doi:10.1016/j.buildenv.2004.09.004.\n- \"NREL: U.S. Life Cycle Inventory Database Home Page\". www.nrel.gov.\n- \"Naturally:wood Building Green with Wood Module 3 Energy Conservation\" (PDF). Archived from the original (PDF) on 2012-07-22.\n- Simpson, J.R.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-14", "d_text": "Energy and Buildings, Improved Estimates of tree-shade effects on residential energy use, February 2002. Retrieved:2008-04-30.\n- California Integrated Waste Management Board. (January 23, 2008). Green Building Home Page. Retrieved November 28, 2009, from .... http://www.ciwmb.ca.gov/GREENBUILDING/basics.htm\n- Jonkers, Henk M (2007). \"Self Healing Concrete: A Biological Approach\". Self Healing Materials. Springer Series in Materials Science. 100. p. 195. doi:10.1007/978-1-4020-6250-6_9. ISBN 978-1-4020-6249-0.\n- GUMBEL, PETER (4 December 2008). \"Building Materials: Cementing the Future\" – via www.time.com.\n- \"Green Building -US EPA\". www.epa.gov.\n- \"Sustainable Facilities Tool: Relevant Mandates and Rating Systems\". sftool.gov. Retrieved 3 July 2014.\n- Lee, Young S; Guerin, Denise A (2010). \"Indoor environmental quality differences between office types in LEED-certified buildings in the US\". Building and Environment. 45 (5): 1104. doi:10.1016/j.buildenv.2009.10.019.\n- KMC Controls. \"What's Your IQ on IAQ and IEQ?\". Archived from the original on 16 May 2016. Retrieved 5 October 2015.\n- \"LEED - Eurofins Scientific\". www.eurofins.com. Archived from the original on 2011-09-28. Retrieved 2011-08-23.\n- \"HQE - Eurofins Scientific\". www.eurofins.com.\n- \"LEED - Eurofins Scientific\". www.eurofins.com. Archived from the original on 2011-09-28. Retrieved 2011-08-23.\n- \"BREEAM - Eurofins Scientific\". www.eurofins.com.\n- \"IAQ Green Certification\".\n- \"LEED - U.S. 
Green Building Council\". www.usgbc.org. Archived from the original on 2013-12-19.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-2", "d_text": "Year-on-year rates help underline what is a healthy rate of growth in construction spending, up 5.8 percent overall with residential spending up 6.7 percent and both private nonresidential and public categories showing low to mid single digit gains. Nevertheless, reports out of housing have been uneven and are clouded further by the declines in single- and multi-family homes in this report.", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-1", "d_text": "Announces Top Ten Green Building Projects for 2011\nThe Committee on the Environment (COTE) from the American Institute of Architects (AIA) announced the 2011 top 10 green building projects.\nBuilding Owners Get Help Tracking Performance\n\"The Building Performance Tracking Handbook\" was developed by the California Commissioning Collaborative with funding from California's Energy Commission and can be applied to commercial buildings throughout the country.", "score": 8.086131989696522, "rank": 100}]} {"qid": 24, "question_text": "How many variety shows and sitcoms were on television in Fall 1969?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "Looking exclusively at the numbers of shows that were on in the Fall of 1969, one might be excused for thinking that television was intended as a delivery system for sitcoms and variety shows. There were fourteen variety shows on the air, including four new hours featuring Andy Williams, Jimmy Durante, Jim Nabors and Leslie Uggams. There were also two sketch comedy shows, and twenty-five sitcoms, eight of which were new. There were seven westerns (and this is stretching the definition of \"western\" to include shows like Daniel Boone and Here Come The Brides), but nothing new.
As for the genre that would dominate the 1970s and beyond, the cop or detective series, there were only eight of those – none of them new either. A \"wheel\" series – a show that rotated several unrelated dramas – included a cop series as well as a medical drama (one of three that debuted in 1969) and the only lawyer show of the year. And then there were shows that didn't really fit into any conventional genre. Predictably, several of these were on the weakest of the big three networks, ABC.\nTo be sure there were some notable shows, and I'll get to them shortly, but as is often the case failures can be more interesting than successes. And that was the case with ABC's Monday night schedule. There were four shows, the most conventional of which was the only real success, the sketch comedy Love American Style. The rest of the line-up was full of shows that can best be described as before their time. Take Harold Robbins' Survivors for example. The show was an adaptation of Robbins' novel of the same name featuring the lifestyle of the rich and famous. The producers said in the TV Guide preview, \"Our stories are about human beings who have the same kind of problems as you or I.\" That is, they're the sort of problems you have if you're a woman with an illegitimate teenage son that you have to protect from your world, a philandering embezzler husband, a playboy half-brother, and a tyrannical but dying father. The show had a star-studded cast – Lana Turner, Ralph Bellamy, George Hamilton, Jan-Michael Vincent (billed by TV Guide as Michael Vincent), and Rossano Brazzi.", "score": 52.41683909451985, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "First, there were nearly a dozen variety shows at that time -- that type of show had not yet begun to die out, though it would do so shortly.
Alan would have been competing with, among others: Andy Williams, Steve Lawrence, Red Skelton, Danny Kaye, Dean Martin, Jimmy Dean, the King Family, the Smothers Brothers, Jackie Gleason, Lawrence Welk, and Ed Sullivan.\nSitcoms, however, were dominating the ratings, with \"The Lucy Show\", \"The Andy Griffith Show\", \"Bewitched\", \"Gomer Pyle\", \"Hogan's Heroes\", and \"The Beverly Hillbillies\" all in the top 10. Others in the top 20 included \"My Three Sons\", \"Get Smart\", \"Green Acres\" ---- and, of course, \"The Dick Van Dyke Show\" (on Wednesday nights).\nConsidering the fact that many of these shows can still be seen in syndicated re-runs almost 50 years later, it's safe to say that the '65-'66 season was a strong one -- and that Rob, Sally, and Buddy were doing very well if they were keeping an obsolescent program type in 7th -- or even 17th -- place.", "score": 48.713151620825634, "rank": 2}, {"document_id": "doc-::chunk-2", "d_text": "Popular American variety shows that began in the 60s include a revival of The Jackie Gleason Show (1960-1970), The Andy Williams Show (1962-1971), The Danny Kaye Show (1963-1967), The Hollywood Palace (1964-1970), The Dean Martin Show (1965-1974), The Carol Burnett Show (1967-1978) and The Smothers Brothers Comedy Hour (1967-1969). 1969 saw a flurry of new variety shows with rural appeal: The Johnny Cash Show (1969-1971), The Jim Nabors Hour (1969-1971), The Glen Campbell Goodtime Hour (1969-1972) and Hee Haw (1969-1992).\nIn 1970 and 1971, the American TV networks, CBS especially, conducted the so-called \"rural purge\", in which shows that appealed to more rural and older audiences were cancelled as part of a greater focus on appealing to wealthier demographics. Many variety shows, including long-running ones, were cancelled as part of this \"purge,\" with a few shows (such as Hee Haw and The Lawrence Welk Show) surviving and moving into first-run syndication. 
Variety shows continued to be produced in the 1970s, with most of them stripped down to only music and comedy.\nPopular variety shows that ran in the 1970s include The Flip Wilson Show (1970-1974), The Sonny & Cher Comedy Hour (1971-1976, in various incarnations), The Bobby Goldsboro Show (1973-1975), The Midnight Special (1973-1981), Don Kirshner's Rock Concert (1973-1981), The Mac Davis Show (1974-1976), Tony Orlando and Dawn (1974-1976), Donny & Marie (1976-1979) and Sha Na Na (1977-1981).\nEntertainers with weekly variety shows that ran for one season or less in the 1970s include Captain & Tennille, The Jacksons, The Keane Brothers, Bobby Darin, Mary Tyler Moore, Julie Andrews, Dolly Parton, Shields and Yarnell, The Manhattan Transfer, Starland Vocal Band, and the cast of The Brady Bunch.", "score": 47.21773317480449, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Fall Season: Networks Strike Out Again\nSummer is the broadcast television networks' equivalent of baseball's spring training. In each case, it's a hopeful time, when new shows have the potential to make a splash in the upcoming season, just as rookies do; and broadcast networks dream of finishing atop the final standings, just as teams do.\nThe major difference is that in baseball, there's a winner for every loser, whereas in today's broadcast network game, almost everyone is losing. The statistics tell much of the story.\nDuring the week of September 21, the networks rolled out not only the bulk of their debuting series, but also the vast majority of the fall premieres of their returning shows. As always, they had extensively promoted this programming, and magazines like TV Guide had devoted considerable space to the new season, further widening and deepening public awareness of the supposed small-screen delights ahead.\nHow did the public react? 
Unfortunately for ABC, CBS, and the rest of the broadcast networks, plenty of viewers were in front of their sets - but tuned to basic cable channels, the audience for which increased 14 percent over the corresponding week last year. Meanwhile, after all the hype, the broadcast webs were down a staggering 8.5 percent from their 1997-'98 premiere week.\nThe results capped a splendid third quarter of '98 for basic cable, which increased both its rating and its share 13 percent over the third quarter of '97. Clearly, part of the growth is attributable to Monicagate; CNN, MSNBC, and Fox News Channel each drew far more viewers than it did last summer. But family-oriented channels performed strongly as well - Nickelodeon was up 11 percent; the Cartoon Network, 23 percent - suggesting yet again that the broadcast networks' obsession with attracting twenty- and thirtysomethings, to the exclusion of practically everyone else, is a mistake.\nThe terrible numbers for the broadcast networks confirm they are in a disastrous free fall. During '97-'98, they lost almost two full rating points (i.e., almost two million households) from the previous season.\nWhat to do? Some industry bigwigs have a clue.", "score": 45.57795600267099, "rank": 4}, {"document_id": "doc-::chunk-8", "d_text": "(In fact, CBS voiced its own favor for NBC’s Emmy-winning mishandled series by rerunning select episodes in the summer of ‘72.) Yet, wherever it was, My World And Welcome To It would have eventually been cancelled if it couldn’t be a ratings winner — even on CBS — but at least comedies of its smarts, charm, and following were slowly being given space. 
The 1969-’70 season was the year that foretold the future – mostly in its failures: what wasn’t working, and what wasn’t being allowed to work.\nI’ve written verbosely about the turbulent nature of 1969-‘70 to illustrate just how creatively jumbled the era was, and hopefully help explain why My World And Welcome To It was both an outlier, not destined to triumph, and a direct participant, an attempt by NBC to project this half-practiced directive. But let’s talk about comedy in this period. In general, the sitcoms of the late ’60s were clear-cut — they were either broad, loud romps (Here’s Lucy) or “warmedys,” like Julia, an NBC sitcom that also had a socially progressive nature to give it extra cachet. (There was rarely any middle ground for a show that aimed to be logical and realistic, but also wanted to deliver grand guffaws.) Simultaneously, the ratings war led to an increased experimentation that made possible the groundbreaking shows that would soon arrive in the early ‘70s, which The Governor And J.J., Julia, and Room 222, for example, all presaged. In these, there existed some slight genre-muddying, whereby a show marketed as a half-hour comedy didn’t feel the need to go for a laugh on every page, and felt freer to engage in heavier, mildly relevant motifs — the kind that would suggest a self-aware reality that could indeed appeal to young urban consumers (the targeted demo). It’s clear watching today that My World And Welcome To It embraces the above notion that, as a comedy, it doesn’t need to reach a laugh quota, for it considers character drama, sometimes founded on warmth and sometimes on the absence of it, to be just as rewarding. 
And with Thurber as its source material, the series has a natural ability to weave together all sorts of elements — burlesque laugh-eliciting comedy, warm smile-inducing charm, and honest self-aware commentary.", "score": 43.81249213844451, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Variety shows, also known as variety arts or variety entertainment, is entertainment made up of a variety of acts including musical performances, sketch comedy, magic, acrobatics, juggling, and ventriloquism. It is normally introduced by a compère (master of ceremonies) or host. The variety format made its way from Victorian era stage to radio and then television. Variety shows were a staple of anglophone television from the late 1940s into the 1980s.\nWhile still widespread in some parts of the world, the proliferation of multichannel television and evolving viewer tastes have affected the popularity of variety shows in the United States. Despite this, their influence has still had a major effect on late night television whose late night talk shows and NBC's variety series Saturday Night Live (which originally premiered in 1975) have remained popular fixtures of North American television.\nThe live entertainment style known as music hall in the United Kingdom and vaudeville in the United States can be considered a direct predecessor of the \"variety show\" format. Variety in the UK evolved in theatres and music halls, and later in Working Men's Clubs. Most of the early top performers on British television and radio did an apprenticeship either in stage variety, or during World War II in Entertainments National Service Association (ENSA). In the UK, the ultimate accolade for a variety artist for decades was to be asked to do the annual Royal Command Performance at the London Palladium theatre, in front of the monarch.\nIn the United States, former vaudeville performers such as the Marx Brothers, George Burns and Gracie Allen, W. C. 
Fields, and Jack Benny honed their skills in the Borscht Belt before moving to talkies, to radio shows, and then to television shows, including variety shows.\nVariety shows were among the first programs to be featured on television during the experimental mechanical television era. Variety shows hosted by Helen Haynes and Harriet Lee are recorded in contemporary newspapers in 1931 and 1932; because of technical limits of the era, no recordings of either show have been preserved. The genre proliferated during the Golden Age of Television, generally considered to be roughly 1948 to 1960. Many of these Golden Age variety shows were spinoffs of previous radio variety shows.\nFrom 1948 to 1971, The Ed Sullivan Show was one of CBS's most popular television series.", "score": 41.144801644637845, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "By Lisa Henderson\nLet’s be honest – it’s the consistent existence of entertaining television shows that gets us through long work weeks. Without sympathetic tears and laughs at the expense of likeable fictional characters, we’d all be just a bit more tightly wound. This fall promises an exceptional lineup of television shows, some new and some returning to the small screen for the first time in almost a decade.\nWhether it’s for lack of creative ideas for new shows or simply due to popular demand, a large handful of programs from our childhood days have returned to television. This summer, Nick at Nite introduced what is known as The 90s Are All That, a portion of late-night airtime during which they run some of the most popular Nickelodeon programs of the 1990s, including All That, Kenan and Kel, Clarissa Explains It All and Doug. Now, it seems that MTV and VH1 are both following in Nick’s footsteps and hopping on the train to Throwback Town.\nIt’s 1996. 
You’ve just returned home from a long day at elementary school, stationed yourself in front of the family room TV and are now catching your favorite music videos on VH1’s Pop Up Video. Not only were music videos the main focus, they were also accompanied by “bubbles” that would appear sporadically and feature interesting facts about the video and the artist. It may be true that music videos have taken a backseat to reality programs, but not everyone has lost hope in the once-prevalent art-form. Pop Up Video returned to VH1 on Oct. 3, much to the delight of 90s enthusiasts.\nMTV has followed suit. Premiering Oct. 27, Beavis and Butt-head will make a comeback. New episodes of this crude, hilarious animated series have not aired since 1997, yet the two delusional teens from the fictional town of Highland, Texas will still appear to be the same age as they were almost 15 years ago.\nNostalgic fans of the ’90s can always expect a new bunch of quirky reality stars on The Real World each fall. This season is sure to be cause for talk, as the cast has already taken note of their extreme differences; no one was sure of Sam’s sex until she revealed herself to be a woman, and Frank has yet to show us his alcoholic, emotional tendencies.", "score": 40.37186199364684, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "Using his no-nonsense approach, Ed Sullivan allowed many acts from several different mediums to get their \"fifteen minutes of fame\". Sullivan was also partially responsible for bringing Elvis Presley and The Beatles to prominence in the United States.\nIn the UK, The Good Old Days—which ran from 1953 to 1983—featured modern artists performing dressed in late Victorian/Early Edwardian costume, either doing their own act or performing as a music hall artist of that period. The audience was also encouraged to dress in period costume in a similar fashion.\nOn television, variety reached its peak during the period of the 1960s and 1970s. 
With a turn of the television dial, viewers around the globe could variously have seen shows and occasional specials featuring Dinah Shore, Bob Hope, Bing Crosby, Perry Como, Andy Williams, Julie Andrews, The Carpenters, Olivia Newton-John, John Denver, John Davidson, Mac Davis, Bobby Goldsboro, Lynda Carter, Johnny Cash, Sonny and Cher, Bob Monkhouse, Carol Burnett, Rod Hull and Emu, Flip Wilson, Lawrence Welk, Glen Campbell, Donny & Marie Osmond, Barbara Mandrell, Judy Garland, The Captain & Tennille, The Jacksons, The Keane Brothers, Bobby Darin, Sammy Davis, Jr., Mary Tyler Moore, Dean Martin, Tony Orlando and Dawn, The Smothers Brothers, Danny Kaye, Des O'Connor, Buck and Roy, Roy Hudd, Billy Dainty, Max Wall, Manhattan Transfer, Starland Vocal Band, or The Muppet Show. Even \"The Brady Bunch\" had a variety show. Variety shows were once as common on television as Westerns, courtroom dramas, suspense thrillers, sitcoms, or (in more modern times) reality TV shows.\nDuring the 1960s and 1970s, there were also numerous one-time variety specials featuring stars such as Shirley MacLaine, Frank Sinatra, Diana Ross, and Mitzi Gaynor, none of whom ever had a regular television series.\nModern U.S. variety shows", "score": 39.49800387111738, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "A variety show, also known as variety arts or variety entertainment, is entertainment made up of a variety of acts (hence the name), especially musical performances and sketch comedy, and is normally introduced by a compère (master of ceremonies) or host. Other types of acts include magic, animal and circus acts, acrobatics, juggling and ventriloquism. The variety format made its way from Victorian era stage to radio to television.
Variety shows were a staple of anglophone television from its early days (late 1940s) into the 1980s.\nWhile still widespread in some parts of the world, the proliferation of multichannel television and evolving viewer tastes affected the popularity of variety shows in the United States. Despite this, their influence has still had a major effect on late night television—where late night talk shows and NBC's comedy series Saturday Night Live (which originally premiered in 1975) have remained popular fixtures of North American television.\nThe format is basically that of music hall in the United Kingdom or vaudeville in the United States. Variety in the UK evolved in theatres and music halls, and later in Working Men's Clubs. Most of the early top performers on British television and radio did an apprenticeship either in stage variety, or during World War II in Entertainments National Service Association (ENSA). In the UK, the ultimate accolade for a variety artist for decades was to be asked to do the annual Royal Command Performance at the London Palladium theatre, in front of the monarch.\nIn the United States, former vaudeville performers such as the Marx Brothers, George Burns and Gracie Allen, W. C. Fields, and Jack Benny honed their skills in the Borscht Belt before moving to talkies, to radio shows, and then to television shows, including variety shows. In the 1960s, even a popular rock band such as The Beatles undertook this ritual of appearing on variety shows on TV. 
In the United States, shows featuring Perry Como, Milton Berle, Jackie Gleason, Bob Hope, and Dean Martin also helped to make the Golden Age of Television successful.\nFrom 1948 to 1971, The Ed Sullivan Show was one of CBS's most popular television series.", "score": 39.26865541409185, "rank": 9}, {"document_id": "doc-::chunk-1", "d_text": "Using his no-nonsense approach, host Ed Sullivan was instrumental in bringing many acts to prominence in the United States, including Elvis Presley and The Beatles. The Lawrence Welk Show (1955-1982) would go on to become one of U.S. television's longest-running variety shows; based on the concept of the big band remote from the old-time radio era, it was already one of the last shows of its kind when it debuted and far outlasted all other big-band centered broadcast series by the end of its run.\nOther long-running American variety shows that premiered during this time include Texaco Star Theatre (1948-1956), Cavalcade of Stars, later titled The Jackie Gleason Show (1949-1955), The Garry Moore Show (1950-1967, in various incarnations), The Colgate Comedy Hour (1950-1955), Your Show of Shows (1950-1954), The Red Skelton Show (1951-1971), The Dinah Shore Show (1951-1957), The George Gobel Show (1954-1960) and The Dinah Shore Chevy Show (1956-1963). Perry Como also hosted a series of variety shows that collectively ran from 1948 to 1969, followed by variety specials that ran until 1994.\nIn the UK, The Good Old Days--which ran from 1953 to 1983--featured modern artists performing dressed in late Victorian/Early Edwardian costume, either doing their own act or performing as a music hall artist of that period. The audience was also encouraged to dress in period costume in a similar fashion. 
Other long-running British variety shows that originated in the 1950s include Tonight at the London Palladium (1955-1969), The Black and White Minstrel Show (1958-1978), The White Heather Club (1958-1968) and Royal Variety Performance (an annual event televised since the 1950s).", "score": 38.89438638552363, "rank": 10}, {"document_id": "doc-::chunk-3", "d_text": "By the late 1970s, nearly every variety show had ended production, in part because of audience burnout; the highest-rated variety show of 1975, Cher, was only the 22nd-most watched show of the year.\nBy the early 1980s, the few new variety shows being produced were of remarkably poor quality (see, for instance, the infamous Pink Lady and Jeff), hastening the format's demise. Since Pink Lady, only a few traditional variety shows have been attempted by major networks: these include Dolly (starring Dolly Parton), which ran for 23 episodes on ABC during the 1987-'88 season; a revival of The Smothers Brothers Comedy Hour from 1988 to 1989; a revival of The Carol Burnett Show, which was broadcast by CBS for nine episodes in 1991; and the first incarnation of The Wayne Brady Show, which was telecast by ABC in August 2001.\nBy the 21st century, the variety show format had fallen out of fashion, due largely to changing tastes and the fracturing of media audiences (caused by the proliferation of cable and satellite television) that makes a multiple-genre variety show impractical. Even reruns of variety shows have generally not been particularly widespread; TV Land briefly telecast some variety shows (namely The Ed Sullivan Show and The Sonny & Cher Comedy Hour) upon its beginning in 1996, but within a few years, reruns of most of those shows (with the notable exception of The Flip Wilson Show) stopped. Similarly, CMT held the rights to Hee Haw but telecast very few episodes, opting mainly to hold rights to allow them to air performance videos from the show in its video blocks.
The current rights holder of Hee Haw, RFD-TV, has been more prominent in its telecasts of the show; RFD-TV also airs numerous other country-style variety shows from the 1960s and 1970s up through the present day, in a rarity for modern television. Another notable exception is The Lawrence Welk Show, which has been telecast continually in reruns on the Public Broadcasting System (PBS) since 1986.", "score": 36.922109413618344, "rank": 11}, {"document_id": "doc-::chunk-1", "d_text": "It was an era of heady independence, when as mainstream an actor as Hal Holbrook could portray a gay father in That Certain Summer with dignity—and without reprisals from skittish advertisers.\nComedy-variety experienced a Renaissance with The Flip Wilson Show and the zaniness of The Carol Burnett Show. And when NBC decided to turn over a traditional ratings wasteland to producer Lorne Michaels and his young, censor-busting comedians, America rushed home for Saturday Night Live.\nBut breakout freedom was also vulnerable. The decade's top programmer, Fred Silverman, had brought All in the Family and its cousins to CBS. But he's now mostly remembered, unfairly, for jumping to ABC and then NBC and shrewdly targeting the youngsters of the Baby Boom generation for more silliness. He sent a sine wave of \"jiggle\" rippling through the land—Aaron Spelling and Leonard Goldberg's Charlie's Angels soared to ratings heaven and led inexorably to shows like Three's Company, which made the airwaves safe for innuendo.\nThe decade that began with Mary and Lou, Edith and Meathead, went out with Laverne and Shirley, Joanie and Chachi. Norman Lear decided to go on sabbatical. Dallas heralded the emergence of the prime-time soap opera as TV's new dominant form. In the next decade, Dynasty, Falcon Crest, etc. would make heroes out of villains—and do wonders for the careers of fading movie types like Joan Collins and Jane Wyman. 
As television, along with the nation, retreated from confrontation in search of comfort, some viewers began to look back at the early '70s as the medium's second Golden Age and to agree with Archie and Edith: Those were the days...\nGolfing in mine fields and bedding every nurse they could, the medicos of M*A*S*H survived 11 seasons of grand tragicomedy in Korea but were decimated by plot twists: From left, Hawkeye (Alan Alda) had a nervous breakdown in the last episode; Klinger (Jamie Farr) wore dresses for nearly a decade in a gamy attempt to look mentally unfit; Colonel Blake (McLean Stevenson) was lost at sea on his flight home; and Major Burns (Larry Linville) went AWOL when his five-year romp with Hot Lips Houlihan (Loretta Swit, inset) ended.", "score": 36.49592176492524, "rank": 12}, {"document_id": "doc-::chunk-2", "d_text": "(May 2013)|\nVariety shows began to fade from popularity during the 1970s, when research began to show that variety shows appealed to an older audience that was less appealing to advertisers; over the course of the so-called \"rural purge\", several of the early era variety shows were canceled, though newer ones (fewer in number nonetheless, and generally stripped down to music and comedy) continued to be created and broadcast for several years after. By the late 1970s, even these generally celebrity-driven variety shows had mostly ended production, in part because of audience burnout; by the early 1980s, the few new variety shows being produced were of remarkably poor quality (see, for instance, the infamous Pink Lady and Jeff), hastening the format's demise. 
Since Pink Lady, only a few traditional variety shows have been attempted on television: Dolly (starring Dolly Parton), which ran for 23 episodes on the ABC television network during the 1987–'88 season; a revival of The Carol Burnett Show, which was broadcast by CBS for nine episodes in 1991; and the first incarnation of The Wayne Brady Show, which was telecast by ABC in August 2001.\nBy the 21st century, the variety show format had fallen out of fashion, due largely to changing tastes and the fracturing of media audiences (caused by the proliferation of cable and satellite television) that makes a multiple-genre variety show impractical. Even reruns of variety shows have generally not been particularly widespread; TV Land briefly telecast some variety shows (namely The Ed Sullivan Show and The Sonny and Cher Comedy Hour) upon its beginning in 1996, but within a few years, those reruns stopped. Similarly, CMT held the rights to Hee Haw but telecast very few episodes. The current rights holder of Hee Haw, RFD-TV, has been more prominent in its telecasts of the show; RFD-TV also airs numerous other country-style variety shows from the 1960s and 1970s up through the present day, in a rarity for modern television. Another notable exception is The Lawrence Welk Show, which has been telecast continually in reruns on the Public Broadcasting System (PBS) since 1986.", "score": 35.929580264021034, "rank": 13}, {"document_id": "doc-::chunk-2", "d_text": "Both shows were cancelled at the mid-season mark along with a third new series, a TV version of Mr. Deeds Goes To Town starring Monte Markham and Pat Harington, and ABC's venerable variety show Hollywood Palace. The other networks didn't face such obvious problems, but then again they weren't being as daring in programming. The only show that either of the other two networks cancelled at the midseason point was The Leslie Uggams Show, which was the first hour-long variety show to be hosted by an African-American.
It only lasted ten episodes.\nThe networks reacted to the cancellation of these shows in a way that would surprise people today – they moved established shows. Today this sort of thing would be regarded with horror by fans. Conventional wisdom is that moving a series to a new night at any time, let alone during the season, is not unlike getting a kiss from a Mafia Don – you won't survive – but it was what was done in 1969. ABC moved It Takes A Thief and the Wednesday Night Movie to Monday night (the latter show got a name change of course). The only survivor of the Monday night line-up, Love American Style, replaced Jimmy Durante Presents The Lennon Sisters, which in turn moved to replace Hollywood Palace (which in turn followed Lawrence Welk, the show where the Lennon Sisters originally debuted). To replace the Wednesday Night Movie ABC revived a series that had run the previous summer – The Johnny Cash Show. The Flying Nun moved to replace Mr. Deeds Goes To Town, and was in turn replaced by Nanny And The Professor. I haven't been able to identify the show that followed Johnny Cash, or what went into the time slot vacated by It Takes a Thief. Over at CBS the time slot after Ed Sullivan, which had been held down by The Leslie Uggams Show, became the new home of The Glen Campbell Goodtime Hour, while Campbell's old time slot was taken over by a new series called Hee Haw, which everybody hated, except of course the public (or at least that part of the public that the critics disliked).\nThe successful dramas that debuted in 1969 seem to have one thing in common. That is a mentor-protégé relationship. In ABC's Marcus Welby M.D. the mentor was the title character, a caring general practitioner who works out of his home and actually made house calls (!", "score": 34.92813612718031, "rank": 14}, {"document_id": "doc-::chunk-4", "d_text": "Remember, CBS had long been the most-watched network (total viewers) of the three, which gave it a legitimate claim for being #1.
By the ’68-’69 season, NBC was catching up, and almost eked out a victory. Following this failure, the peacock network’s response was essentially, “Oh, well, we’re #1 in the demographics that matter to advertisers.” (“Nanny-nanny-boo-boo!” was the theme of the correspondence between NBC’s Paul Klein, of “least objectionable program” fame, and CBS’ furious Mike Dann.)\nThere’s some truth, though, to the claim that because #2 and #3 weren’t the most-watched, they had already been attempting throughout the mid-to-late ’60s to skew younger and “hipper” (with shows like Laugh-In and The Mod Squad) than CBS, which only half-heartedly tried to keep up by programming The Smothers Brothers Comedy Hour (which secured both good totals and the supposedly desired demo, but gave everyone a headache – including relevance-seeking Wood, who cancelled it after its initial renewal for ’69-’70). Yet there’s little doubt that nearly all programming decisions — cancellations and renewals — were still based, for every network, on total viewership: winning. So, CBS entered ’69-’70 with a new president, but business as usual… until it looked like NBC was actually going to win the season in total viewers. This is where the split occurred. CBS’ Senior Programming VP, Mike Dann, who’d essentially been the chief creative decision-maker throughout the decade’s revolving door of presidents, launched a campaign called Operation 100, in which he rejuvenated the CBS schedule by pre-empting low-performing shows and replacing them with events — specials, films, one-offs — in a calculated effort to win the year. His tactic worked (depending on when you choose to view the season’s end and start dates), but it didn’t matter anymore.
Dann left CBS that summer following a creative coup staged by Bob Wood early in Operation 100, in which he persuaded William Paley that NBC’s “demographics that matter to advertisers” argument was worth pursuing, especially if CBS was going to be #2 with its “business as usual” in the ’69-’70 season; this could be a way for them to win again — inexpensively.", "score": 34.28291297726984, "rank": 15}, {"document_id": "doc-::chunk-2", "d_text": "''You have to be right more often,'' explains Ted Harbert, ABC's vice president of prime-time television who plays a key role in designing that network's schedule. ''Not that many years ago, you could be wrong with a show and still deliver a 20 percent rating and thus your advertisers could still get enough for their buck and the network would make money. Now, if you make a mistake, you get a 9 rating. It's become a one-network economy, where one network gets enough points, and there's not enough for the other two.'' The Way It Was, The Way It Is\nUntil relatively recently, in fact, scheduling was pretty simple. There were two television seasons: the fall season that began in September once children grumpily returned to school and mid-season in February, a good time to get rid of flops and try again. Outside of those times, schedules remained fairly static.", "score": 33.77765714386048, "rank": 16}, {"document_id": "doc-::chunk-3", "d_text": "So, with less time to program each night, the networks knew they would eventually have to drop shows and cut their own expenses, all the while (hopefully) maximizing profits. This meant, first and foremost, nixing old stars and pricey properties to make room for product that could be more cheaply offered. 
That’s when rural shows like Green Acres and The Beverly Hillbillies – now both out of the Top 30 – were axed at the end of ’70-’71, along with un-winning corn like Family Affair and To Rome With Love (both also off the good charts) – in the second and biggest wave of the so-called “Purge.”\nWhether this had a lot to do with the emerging successes of cheaper, newer, more modern programs like The Mary Tyler Moore Show (1970-1977, CBS) and All In The Family (1971-1979, CBS), which indeed aimed to be socially present and urban-friendly, cannot be said with certainty. However, it’s my conclusion that demo-targeting was not yet the proven science most now claim it was when CBS crafted the ’71-’72 schedule. In the fall of ’70, the initial wave of “relevant” programming for which CBS’ new network president Bob Wood advocated (and, to be fair, he did intentionally move CBS in this direction for ’70-’71) seemed less commercially desirable than expected. To wit, the ’71-’72 season saw a return to old forms — prior stars, updated Westerns, goofy comedies (including one with a chimp) — before each of the aforementioned classics began to assert their superiority in both ratings and gold, making sure that by ’72, all three networks had a new cultural mandate. That is, when the cancellation decisions were being made, between the two November ’70 and February ’71 Sweeps periods, the “new order” was not strong enough to be an order. (Les Brown’s book on this history concludes at the end of 1970 and reveals that many CBS execs, including Mike Dann’s replacement, Fred Silverman, then believed the ’70 schedule was not the new normal.) This may all seem like a tangent, but ’69-’70 is key, because it put in motion changes that led to what we saw in the 1970s.", "score": 32.99625465757354, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "Jonny Quest and Television\n© 2001, Craig Fuqua, Lyle Blosser\nJonny Quest started Friday, Sept. 18, 1964, at 7:30 p.m.
Eastern then moved to Thursday\nevenings at 7:30 starting Dec. 24, 1964, switching timeslots with \"The Flintstones.\"\nNew television shows that debuted the same week as Jonny Quest included \"Voyage\nto the Bottom of the Sea,\" \"Shindig,\" \"Bewitched,\" \"Addams Family\" and \"12 O'Clock\nHigh.\" The next week's debuts included \"Gilligan's Island,\" \"Mr. Magoo,\" \"Flipper,\"\n\"Peyton Place\" and \"The Man from UNCLE.\"\nA summary of the shows appearing during\nthe 1964-65 prime time television season. (Thanks\nto Mike Esaia for providing this info.)\nAfter the last show 9/9/65, Jonny Quest's timeslot on ABC was taken over by the\ndance program \"Shindig.\" During the season, it was replaced 6/10/65 by a \"Donna\nReed\" rerun and 6/17/65 by a health care program.\nSaturdays in the '60s and the '70s\nJonny Quest's first Saturday morning runs were on CBS from Sept. 9, 1967, to Sept.\n5, 1970, and on ABC from Sept. 13, 1970, to Sept. 9, 1972. It was also included\nin NBC's \"Godzilla Power Hour\" starting Sept. 8, 1978, then got its own timeslot\non Sept. 
8, 1979.", "score": 32.64641779198879, "rank": 18}, {"document_id": "doc-::chunk-12", "d_text": "Lost Kid Shows / Movie Stars on TV / Saturday Morning Shows / Video Vault / Classic Christmas Specials / Fabulous Fifties / Unseen Scenes / Game Shows / Requested Forgotten TV Shows / The Super Sixties / More Modern TV Shows / The New * * Shows / 1980's Wrestling / TV Blog\n|TV's Embarrassing Moments / Action Shows of the Sixties / TVparty Mysteries and Scandals / Variety Shows of the 1970s / The Eighties / The Laugh Track / 1970's Hit Shows / Response to TVparty / Search the Site / Add Your Comments|\n|1960's TV Seasons: 1961 / 1964 / ABC 1966 / 1967 / 1968 / 1969 / Fall Previews / The UN Goes to the Movies / Life With Linkletter / Matt Weiner Interview / 1961 CBS Fall Season / The Good Guys / James Drury of The Virginian / Pat Buttram & Green Acres / 1960's Nightclub Comic Rusty Warren / That Girl / TV Shows to Movies / Batman Season 2 / Supermarionation / The Virginian's Clu Gulager / William Windom / Court Martial / Cast Changes on Bewitched and Green Acres / Sammy Davis Jr. Show / Sunday Morning Cartoons / Joe E. Ross / Alan Young Interview / Sherwood Schwartz Interview|\n|Classic TV Commercials / 1950's TV / 1960's TV / Punk Book / / 1970's TV / TV Games / Honey Boo Boo / Lucy Shows / 2012 Emmy Awards / Classic Cars / John Wayne / Gene Roddenberry / Rockford Files / Sea Hunt / Superman on DVD / Toy Gun Ads / Flip Wilson Show / Big Blue Marble / Monty Hall / Carrascolendas / Mr. Dressup / Major Mudd / Chief Halftown / Baby Daphne / Sheriff John / Winchell & Mahoney / Fireball X-L5 / Mr.", "score": 32.26995356871663, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "Television programming through the\n1940s and 1950s was very basic. The stations that existed at the time, NBC CBS, ABC, and DuMont all had similar programming\nwhich stemmed from existing means of communication available at the time. 
These included radio, cinema, and theater.\nOne of the most popular types of shows\nextracted mainly from radio and theater was the variety show. From the late '40s up to even today, variety\nshows provided compilations of short acts and sketches that were relatively easy to put together and provided much\nentertainment. One of these shows was the Texaco Star Theater (pictured below). From the cinema\ncame longer-form television movies. The industry primarily concerned itself with adaptation until the late\nWith these early programs, also came early attempts to\ncommercialize the industry. Most of the shows through these two decades were sponsored much like the great syndicated\nradio shows. But these weren't the only commercials present at the time. There were also blocks of space like\nwe have today sold for advertising. Ironically, some of these slots were filled by companies trying to sell their newest\nmodels of television sets to an increasingly interested general public.\nHowever, the programming wasn't all based on previous forms of\nentertainment. As the years went on and the industry became more and more popular, the programming became more customized\nto fit the new form of mass communication. One of these new forms of entertainment was the half-hour television program.\nThese included the sitcom and other such pre-recorded shows made possible by the new videotape recording devices. Perhaps\nthe most recognized of these new half-hour shows was I Love Lucy, which premiered in 1951 with much acclaim. (pictured", "score": 32.006552000977386, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "hmmm, i rarely check out this section, which explains my lateness in showing up here.\nmy top 10 sit coms of all time\nall in the family\nthe odd couple\ndick van dyke\ni love lucy\nthe bob newhart show (the old one from the 70's)\nthe satirical issues explored on all in the family make that show (imho) the far away best sit com ever produced.
the show continued to be relevant and quite funny through its 8-year run. even the final years, when most shows that last that long have long since lost their appeal, aitf kept it going strong and true to its original concept ideals.\nthe odd couple presented some of the sharpest comedic writings, performed by quite possibly the best-matched duo ever in television history. Jack klugman and tony randall played off of each other like no one else in sit com history. the comedic timing of these two was amazing to watch. the 1st two years were pretty lackluster due to the show being scripted and taped in a closed studio. it wasn't until the 3rd year, when they changed the format to be filmed before a studio audience, that the show really took off, and the talents of messrs. klugman and randall were fully shown.\nwhen watching monty python, i can at times either take it or leave it. some of the skits were absolutely brilliant, while others left me flat. not so with fawlty towers. every one of the waaaaay too few episodes is fabulous! john cleese, along with the rest of the cast, performed the zany episodes flawlessly\ncheers displayed some of the finest comedy scripts of all time. the development of each cast member over the years grew stronger and stronger. although there are some dud episodes along the 11-year run of the show, as a whole, i was rarely disappointed for the entire run\ni usually lump cheers and taxi together, as they were both created by the same team. the characters and stories were well thought out. reverend jim could possibly be my favorite television character of all time\ndick van dyke set the tone, and raised the bar for how sit coms should be produced.
possibly the funniest ensemble ever put together for a show.", "score": 31.75327121724412, "rank": 21}, {"document_id": "doc-::chunk-3", "d_text": "The Spanish language variety show Sabado Gigante, which began in 1962, and then moved from Chile to the United States in 1986, will continue to produce and broadcast new episodes on Univision until its pending cancellation in September 2015.\nHowever, though the format had faded in popularity in prime time, it thrived in late night. The variety shows of this daypart eventually evolved into late-night talk shows, which combine variety entertainment (primarily comedy and live music) with the aspects of a talk show (such as interviews with celebrities). The Emmy Awards organization considers the two genres to be related closely enough that it awards the Primetime Emmy Award for Outstanding Variety, Music or Comedy Series to any of these types of show. Although only one network (NBC, with its The Tonight Show Starring Johnny Carson and later Late Night with David Letterman) had a successful late-night talk show until 1992, the field greatly expanded beginning with Carson's retirement and the controversial selection of Jay Leno as Tonight’s new host. Within ten years, all of the \"Big Three\" networks, along with several cable outlets, had late night variety talk shows being shown nightly. After ceding his hosting role on The Tonight Show to Conan O'Brien in 2009, Leno began hosting The Jay Leno Show, a late-night styled program broadcast by NBC in the final hour of primetime. In early 2010, after poor viewership and a dispute with Conan O'Brien surrounding a plan to move The Jay Leno Show into late night and push back the remainder of NBC's late-night lineup, the series was cancelled and Leno returned to The Tonight Show (with Conan leaving NBC entirely to host a new self-titled late night show on the cable channel TBS). 
As of 2014, late-night talk shows vary widely in their resemblance to the original variety format, with Jimmy Fallon's incarnation of The Tonight Show putting heavy emphasis on sketches and stunts, while shows such as Late Night with Seth Meyers focus more heavily on the talk aspects.\nSketch comedy series such as Saturday Night Live, In Living Color, Almost Live!, MADtv and SCTV also contain variety show elements, particularly musical performances and comedy sketches (though only the first of these remains telecast as of 2010). The most obvious difference between shows such as Saturday Night Live and traditional variety shows is the lack of a single lead host (or hosts) and the use of a large ensemble cast.", "score": 31.42611444539446, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "Found 52 Resources containing: Sitcoms\nOn September 19, 1970, “The Mary Tyler Moore Show” premiered: a mainstream sitcom about women in the workplace that millions of Americans could relate to. Today, its star, a feminist icon in her own right, Mary Tyler Moore, died. She was 80 years old.\nThough “The Mary Tyler Moore Show” ran for seven seasons and became one of the most decorated shows of all time, it almost didn’t make it past its first season. The reason was its time slot, explains Jennifer Keishin Armstrong in her definitive book on the series, Mary and Lou and Rhoda and Ted: And all the Brilliant Minds Who Made the Mary Tyler Moore Show a Classic.\nThe show, Armstrong writes, was initially slated to run on Tuesday nights on CBS. The competitive lineup would have spelled doom for the fledgling sitcom. But then, CBS’ head of programming Fred Silverman got his hands on the pilot. What happened next changed the show's fate. Silverman was so impressed that after he finished screening the episode, he immediately called up his boss. “You know where we’ve got it on the schedule?
It’s going to get killed there, and this is the kind of show we’ve got to support,” he said, as Armstrong reports.\n“The Mary Tyler Moore Show” got moved to Saturdays at 9:30, and the rest was history.\nIt's not hard to see why the pilot episode had Silverman hooked. Just take the scene where Moore's character, Mary Richards, gets hired as an associate producer for a Minneapolis television station—it's one of the most famous job interviews in television history.\nDuring it, news producer Lou Grant (a loveable Ed Asner) gives Richards a hard look. “You know what? You’ve got spunk,” he says grudgingly.\nMoore, wearing a long brown wig to differentiate herself from the character she played on “The Dick Van Dyke Show,” nods graciously. “Well, yes.”\nGrant’s face then does a 180. “I hate spunk,” he says, his eyes bugging out.\nThe scene is played for laughs, but it also served as an important mission statement for what “The Mary Tyler Moore Show” would be.", "score": 31.303252572207203, "rank": 23}, {"document_id": "doc-::chunk-1", "d_text": "“The Jackie Gleason Show” with that fabulous melody to open and the gentlemen so solid and musical announcing “…with the June Taylor Dancers” from Miami. The Red Skelton Show, the “I Love Lucy” Show, Dean Martin’s very clever variety show and every night the “Tonight Show with Johnny Carson.”\nSo, television, here are a few I wouldn’t have missed.\nThe incredible “The Man from Uncle” and “The Girl from Uncle”\nThe very fabulous British export, “The Avengers”, John Steed and Emma Peel.\nThis one scared and fascinated me. “David Vincent on a lonely country road….” I remember the entire opening dialogue about how this man stumbles on the information that aliens have landed and are trying to take over the world. VERY well done show, “The Invaders” with Roy Thinnes. Made a huge impression on me. Rod Serling’s “The Twilight Zone” and the very eerie “The Outer Limits” actually taking control of your television sets!
Such a scary thought, that seems to have occurred in our lifetimes?\nWho didn’t purr along with the amazing Julie Newmar? Girl crush for days, I thought she was the “cat’s meow”. Come to think of it, she was very operatic.\nDon’t ask me why, I loved this show, and “Petticoat Junction”, “Green Acres” and “The Beverly Hillbillies”, “Hawaii 50″ and later on “Happy Days”, and so many others for HIGH SCHOOL times.\nAnd I used to dream about this one. “The Wild, Wild West” with the handsomest Mr. Conrad, who reminded me so much of my Dad.", "score": 30.94047837107973, "rank": 24}, {"document_id": "doc-::chunk-1", "d_text": "Some fans probably felt a serious case of deja vu!\n39. Monstrous Coincidence\nTV’s two classic monster family-themed sitcoms, The Addams Family and The Munsters, not only both had similar subject matter—they also both ran for the exact same two year period as one another from 1964 to 1966. Who knows what it was about those two years that made the public so temporarily interested in monster families?\n38. A Future With a Future\nThe Simpsons creator Matt Groening also had a second hit cartoon sitcom, Futurama, which was surprisingly canceled after a short initial run—but its persistent popularity and fan pushback over its cancellation eventually led to revivals in various formats, and ultimately a few new seasons.\n37. Know When to Fold ‘Em\nOne of America’s earliest hit TV sitcoms, The Honeymooners, was stopped after 39 episodes by its own star, Jackie Gleason, to avoid a decline in quality due to running out of good ideas. A famous legend about Gleason was that he never rehearsed. It turns out that this is true, and the main reason was that he felt his performance would be more fresh and funny if he let it happen spontaneously. It seems like in this case, the public concurs. 
Over 100 previously forgotten and unavailable sketches from Jackie Gleason’s variety show featuring the characters and settings from the Honeymooners were recently uncovered and released as the “lost episodes”—a dream come true for fans of any show.\n36. That What?\nDespite the understandable misconception that the short-lived That ’80s Show was a spinoff of the popular That ’70s Show, there is actually no connection between the characters or storylines of the two series (except that the main character was designed as Eric Forman’s cousin), and their connections go no farther than title, basic theme, and production team. Of course, this list is about beloved shows, so let’s move on quickly, shall we?\n35. Multidimensional Antics\nThe Addams Family series and its classic characters were actually based on a comic strip from The New Yorker.\n34. Long Time for a Short Run\nThe 1980s British TV adaptation of Sherlock Holmes is noteworthy for its loyalty to the original books, but also for another reason—despite a run of just 41 episodes, the series took 10 years to be completed!\n33.", "score": 30.270653435925105, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "TV From There\nIt's the fall season, a time when television networks prove their freshness and creativity. Fox offers That '70s Show, a drug-laced, coming-of-age story set in suburban Wisconsin. The WB network gives us Felicity, a coming-of-age story set in New York City. ABC, in a bit of contrarian programming, invites us to watch Malcolm McDowell age gracelessly on Fantasy Island.\nHere in South Florida, we are blessed with our very own network. WAMI-TV (Channel 69) is the first of what mogul Barry Diller hopes will be a national chain of locally programmed stations. It was launched in June with a promise of eight hours per day of \"network-quality\" local shows. Four months later WAMI showcases only two-and-a-half hours of new local production each weekday. Prominent original shows died off. 
The dance program Barcode lasted a week. The gabfest Out Loud folded last month. And two advertised series never materialized: the evening cabaret Lincoln Lounge and the true-crime tales of Edna Buchanan.\nInstead, WAMI's roster is filling up with such fresh and creative fare as Charlie's Angels, Baretta, and T.J. Hooker. As of last week, WAMI officials were using their self-described "state-of-the-art Lincoln Road studios" to broadcast M*A*S*H as often as four times a day.\nIf ever a station needed to make a splash in the fall, it's WAMI. So what are its new shows?\nThe Munsters and ¿Qué Pasa USA?\nThe Munsters ain't set in the subtropics. ¿Qué Pasa USA? is local, all right, but production took place more than 15 years ago, and the show is already seen regularly on Galavision and WPBT-TV (Channel 2). Thanks to WAMI, cable viewers can now spend Sunday night watching Steven Bauer on two channels at the same time.\nAs network officials consider expanding the WAMI concept into Los Angeles and New York, Diller and company might want to ready a defense against criticism that the bulk of their programming isn't really "TV from here." As the following digest shows, WAMI is local. One just needs to look a bit below the surface.\nThe Six Million Dollar Man\nSaturday 8 p.m.", "score": 30.112993014020326, "rank": 26}, {"document_id": "doc-::chunk-11", "d_text": "I was really a club comedian with a limited ... so I didn't have the depth of experience in my matrix to call on ... I was writing superficially ... and it was thin ... I hated doing these variety shows ... I got other ones. I got the Perry Como Special in Hawaii. I got Jimmy Rodgers. I did Roger Miller's variety show.\"\nCarlin finally followed in the footsteps of Jack Burns and entered the world of sitcoms. He appeared on That Girl as the agent of aspiring actress Ann Marie, played by star Marlo Thomas.
Marlo was the daughter of famed comedian and television producer Danny Thomas, a cigar toting nightclub comic that belonged fully to the type of show business that Carlin would eventually reject. He was also the executive producer of Burns' late employer The Andy Griffith Show. The Griffith sitcom was in reality a spin-off of Thomas' comedy The Danny Thomas Show (also known as Make Room for Daddy). Carlin in later years said of the episode \"My acting really sucked.\"\nThe Hollywood Palace was a bizarre variety show that tried to please everyone by combining Lawrence Welk style acts and Catskill comedians with unwashed psych and garage bands. Carlin appeared on the show at least four times, and he was straddling both of these worlds. Palace was essentially created to bring some of Ed Sullivan's audience to a different network and the format featured many of the same kinds of acts you'd see on Ed's show (but nowhere else) such as plate spinners, acrobats, tumblers with Italian names and other throw backs to vaudeville. One appearance had George introduced by Jimmy Durante, another was with host Van Johnson and another spot introduced by Martha Raye in which he performed for an audience of servicemen about to go die in Vietnam.\n1967 was an incredibly busy year. Carlin scored another full time television job as a staff writer and regular performer on a summer replacement program. CBS' Away We Go was named for one of Jackie Gleason's catchphrases as it was filling the time slot of The Jackie Gleason Show. The title had no bearing on the show itself and could just as easily have been called To The Moon or One of These Days, Alice. The job set Carlin up nicely for two stand-up appearances on the great one's program later that year.", "score": 29.921844228751056, "rank": 27}, {"document_id": "doc-::chunk-7", "d_text": "The Spanish-language variety show known as Sábados Gigantes (forerunner of the U.S. 
Sábado Gigante) began in 1962 with Don Francisco and lasted into the 1990s. His daughter, Vivianne Kreutzberger, currently hosts the program under the title Gigantes con Vivi, while Don Francisco has hosted the U.S. version since 1986.\nMany television special continue to resemble the variety show format to this day.\n- Variety Artists Club of New Zealand, a club for variety performers and entertainers\n- Variety, the Children's Charity, widely known as the Variety Club, a charity operated by variety performers\n- Japanese variety show\n- \"Television in the United States\". Encyclopædia Britannica Online, 2011. Web. 06 Jun. 2011.\n- \"Sid Caesar.\" Encyclopædia Britannica Online, 2011. Web. 06 Jun. 2011.\n- Carter, Bill (May 12, 2014). Overextended, Music TV Shows Fade. The New York Times. Retrieved May 12, 2014.\n- \"Maya Rudolph is reviving the variety show – but is there still a place for it?\". The Guardian. Retrieved 25 May 2014.\n- \"Forget Donny & Marie. Maya Rudolph, NBC Bid To Revive TV’s Variety Show\" from Variety (May 17, 2014)\n- \"TV Ratings Monday: 'Bones' & 'Mike & Molly' Dip for Finales, 'The Voice' Rises + '24: Live Another Day' Slides & 'The Bachelorette' Ties Premiere Low\" from TV By The Numbers/Zap2It (May 20, 2014)\n- Gerow, Aaron (2010), \"Kind Participation: Postmodern Consumption and Capital with Japan's Telop TV\", in Yoshimoto, Mitsuhiro, Television, Japan, and Globalization, Ann Arbor: Center for Japanese Studies, University of Michigan, pp. 117–150", "score": 29.638975658021504, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "And now for a look at the most exciting upcoming television shows this fall:\nWhat's on TV? This flamethrower remote control, apparently.\nSarcastic Homosexual Man and Sassy Black Woman! 
(Comedy) - Hollywood's two favorite stereotypes, the sarcastic homosexual man and the sassy black woman, have been combined to form one hilarious 30-minute block of wacky hijinks which were previously only possible in 70, maybe 80 different shows. Jack Sterling is an intelligent gay man who works in a cutting-edge architect's office and deals with a myriad of offbeat co-workers like the gossiping secretary, the hare-brained elderly white guy who runs the company, and the colorful rival gay man who works at the firm down the street. Jaqwuannatry Shamequela is a quick-tongued African American woman who deals with the trials and tribulations of a normal black person's life, at least according to Hollywood standards. She has to deal with racist police officers, racist next door neighbors, racist moon astronauts, and racist deep sea starfish in each episode of this hilarious and cutting-edge show which TV Guide recently called \"a... TV show... which is... not... so terribly fucking bad that nobody should ever watch, even under the threat of physical violence against members of their family!\" Watch in hilarious amazement as Jaqwuannatry calls a white bigot some colorful and sassy name, followed by Jack making a highly dry, acerbic comment and rolling his eyes while the laugh track plays at a million decibels and the show cuts to a commercial for constipation pills. So how did these two unique and different characters end up meeting each other? Simple: they're both incarcerated for vehicular manslaughter!\nGame Show: The Game Show (Game Show) - The public's highly non-insatiable desire for primetime game shows hasn't stopped a few renegade production studios in Yekaterinburg from creating new televised contests which challenge guests to name the capitol of New York (New Jersey) under the penalty of being artificially impregnated with experimental horse semen. 
Game Show: The Game Show takes elements from some random popular show, combines them with elements from a completely different yet somewhat similar show, and produces something that is nothing like anything previously unlike it! Host Tony Danza, star of the smash TV show \"Who's the Boss?\"", "score": 29.600294038685004, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "The Dick Van Dyke Show is an American television sitcom that aired on CBS from October 3, 1961, to June 1, 1966, with a total of 158 half-hour episodes spanning five seasons. It was produced by Calvada Productions in association with the CBS Television Network and Desilu Studios; the show was created by Carl Reiner and starred Dick Van Dyke, Rose Marie, Morey Amsterdam, Larry Mathews, Mary Tyler Moore. It centered on the home life of television comedy writer Rob Petrie; the show was produced by Reiner with Sam Denoff. The music for the show's theme song was written by Earle Hagen; the series won 15 Emmy Awards. In 1997, the episodes \"Coast-to-Coast Big Mouth\" and \"It May Look Like a Walnut\" were ranked at 8 and 15 on TV Guide's 100 Greatest Episodes of All Time. In 2002, the series was ranked at 13 on TV Guide's 50 Greatest TV Shows of All Time and in 2013, it was ranked at 20 on their list of the 60 Best Series; the two main settings show the work and home life of Rob Petrie, the head writer of a comedy/variety show produced in Manhattan.\nViewers are given an \"inside look\" at how a television show was produced. Many scenes deal with his co-writers, Buddy Sorrell and Sally Rogers. Mel Cooley, a balding straight man and recipient of numerous insulting one-liners from Buddy, was the show's producer and the brother-in-law of the show's star, Alan Brady; as Rob and Sally write for a comedy show, the premise provides a built-in forum for them to make jokes. Other scenes focus on the home life of Rob, his wife Laura, son Ritchie, who live in suburban New Rochelle, New York. 
Also seen are their next-door neighbors and best friends: Jerry Helper, a dentist, and his wife Millie. Many of the characters in The Dick Van Dyke Show were based on real people, as Carl Reiner created the show based on his time spent as head writer for the Sid Caesar vehicle Your Show of Shows. Carl Reiner himself portrayed the Sid Caesar character.", "score": 29.04981924544131, "rank": 30}, {"document_id": "doc-::chunk-1", "d_text": "With the knowledge gained from tackling feature-length shows on a weekly basis, Wasserman in 1964 led the studio to its next logical step: made-for-television movies. \"See How They Run,\" starring John Forsythe, was the first.
Around this time, Revue was officially incorporated as Universal's television arm. Though the studio did a little of everything, drama was its forte.
\"I think the approach they took was a very slick formula that they knew well, from decades of theatrical features, and it adapted well to television,\" said Walter Podrazik, Castleman's co-author on \"Watching TV.\"
\"In effect, think of [many of the Universal dramas] as second-tier star vehicles,\" Podrazik went on. \"So for instance, by the time Rock Hudson is doing 'McMillan and Wife,' he's not boffo at the box office, but he's still a strong name. So they find a way to showcase him with solid, dependable scripts, solid dependable productions. You're not going to find something that's going to push you off the edge of your seat, but you're going to say, that was a good, entertaining show. And they just were able to do that based on what was almost in the blood, after decades of work.\"
In 1966, when Universal and NBC signed an agreement that would put a specific number of TV movies on the schedule for the next few years, the public was receptive. 
\"Doomsday Flight,\" an airplane disaster saga written by Rod Serling, received very good Nielsen ratings, as did \"Fame Is the Name of the Game,\" which would return in 1968 as the regular 90-minute drama \"The Name of the Game.\" On an alternating basis, series leads Robert Stack, Tony Franciosa and Gene Barry starred as key figures in the magazine publishing world.\nContinuing the rotating, longform format, Universal's \"The Bold Ones\" premiered on NBC in 1969, consisting of three series, \"The New Doctors,\" \"The Lawyers\" and \"The Protectors.\"\n\"Four-In-One\" followed in 1970, featuring \"Night Gallery,\" \"San Francisco International Airport,\" \"The Psychiatrist,\" and \"McCloud.\"\n\"McCloud,\" starring Dennis Weaver, was loosely based on Universal's 1968 Clint Eastwood film \"Coogan's Bluff.\"", "score": 28.716989210969228, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Paramount // 2008 // 220 Minutes // Not Rated\nReviewed by Appellate Judge James A. Stewart (Retired) // January 23rd, 2008\n\"There must be a better way of making a living than this.\" -- Jack Paar\nMost of us grew up with television. Those of us born before the explosion of cable TV know at least a little bit about television's past, thanks to all those afternoon reruns of The Dick Van Dyke Show, The Flintstones, and Lost in Space. The sitcoms and dramas played endlessly -- and they often made mention of famous shows that didn't turn up, like Howdy Doody.\nStill, it's hard to consider that, in 1948, \"overall, the new medium was struggling,\" as Pioneers of Television puts it. The four-part PBS documentary looks at genres and shows that helped to make television a part of everyone's lives.\nPioneers of Television has four episodes on one disc:\n* \"Late Night.\" Want to watch something light and funny before going to sleep? NBC exec Pat Weaver thought so. That's why he sent over Broadway Open House and Steve Allen's Tonight. 
Much of the hour, though, concentrates on Johnny Carson.\n* \"Sitcom.\" CBS thought Jackie Gleason should stick to variety shows, but he gave us a season of The Honeymooners instead. Other sitcoms featured are I Love Lucy, Make Room for Daddy, The Andy Griffith Show, and The Dick Van Dyke Show.\n* \"Variety.\" Once the toast of the town, this format no longer is a source of star theater, Texaco or otherwise. Ed Sullivan, Milton Berle, Red Skelton, Arthur Godfrey, Perry Como, Andy Williams, Pat Boone, Nat King Cole, Sid Caesar and Imogene Coca, Carol Burnett, the Smothers Brothers, Laugh-In, Flip Wilson, and Tony Orlando and Dawn are featured here.\n* \"Game Shows.\" Deal or No Deal? It's the sort of question people have been asking since the radio days of the 1920s. Among the games covered here are Truth or Consequences, What's My Line?, You Bet Your Life, Password, The Price is Right, Wheel of Fortune, The Dating Game, The Newlywed Game, and Art Fleming's Jeopardy!.\nYou've heard most of the stories here.", "score": 28.10374755133511, "rank": 32}, {"document_id": "doc-::chunk-1", "d_text": "His character works as a writer for The Alan Brady Show, a variety hour run as a dictatorship by an egotistical comic played (mostly in absentia) by the show’s creator, Carl Reiner. When Van Dyke isn’t at home tripping over ottomans and buttering up wife Mary Tyler Moore to get her to go along with his crazy schemes, he’s in the writers’ room thinking up gags for his boss alongside cynical veterans Morey Amsterdam and Rose Marie. At the end of the show’s fifth season, Van Dyke finally finishes the memoir that provides a framing device for the entire series, only to see it rejected by a publisher. Is this the end of our meta-television comedy? Never fear; Carl Reiner suggests that the memoir be transformed into a situation comedy, starring himself playing Van Dyke’s character. And so the cycle begins anew.\n4. 
The Mary Tyler Moore Show\nIt’s a long way from Moore’s flustered, capri-sporting wife on The Dick Van Dyke Show to her groundbreaking role as an associate producer at Minneapolis’s WJM-TV on her own series. And not just because she’s the one bringing home the bacon and frying it up. As the center of a group of newsroom eccentrics, Moore held together a show about a woman doing something with her life other than looking for a man, though she did go on plenty of dates. She’s stuck with an evening newscast hosted by handsome bubblehead Ted Knight, a writer who delights in making the anchor look foolish, and an old-school newsman boss whose pride in his female protégé requires masking under a heavy layer of paternalism. Although the show’s seven seasons contain plenty of episodes only tenuously related to Moore’s job, her news-show-within-a-show remained at the heart of the show’s comedic universe, serving as the setting for such classic episodes as “Chuckles Bites The Dust.”\n5. Sports Night\nIn the magical world of writer Aaron Sorkin, everyone is doing their best. That tack got him into some trouble with Studio 60 On The Sunset Strip, a short-lived series that treated the creation of a Saturday Night Live-like comedy series with the same dewy-eyed awe that Sorkin brought to the leaders of the free world in The West Wing, but it worked well for his first short-lived series, Sports Night.", "score": 27.756290313373274, "rank": 33}, {"document_id": "doc-::chunk-2", "d_text": "Now, I’m not going to bother with the clichéd “it was ahead of its time” argument, because I think it’s foolish to discount what happens in the medium to allow any specific series to organically exist within its own era in the first place. (My World And Welcome To It is very of its period!). 
But I do believe that if a similar show had premiered with the same notices on NBC three years later, when the genre seemed to have an evolved creative mission statement, it would have had a greater chance of getting a second season. Heck, it may have even had a better fate (however slight) premiering on NBC three years earlier, when the network’s competition with CBS wasn’t quite so fierce and executive faith saved shows.\nEven more telling though: I believe if the show had premiered on CBS instead of NBC, it wouldn’t have been a one-season wonder. I’ll explain, but first it’s imperative to have a brief understanding of the era. As you know, 1969-’70 found the medium at the precipice of change — not only would the following season see the premieres of both The Mary Tyler Moore Show and All In The Family, but the mindset of top network executives also seemed to undergo a quick transformation led by a few key figures. When written about today, the programming shift is often associated with the “Rural Purge,” which is said to be a result of the networks, CBS in particular, deciding to no longer target total viewers, but rather key demographics — the ones advertisers were claimed to most desire: young urbanites. The truth is more complicated. In my study of the history, I’ve found that the “Rural Purge” had as much to do with demo-targeting as it did the FCC’s 1970 passage of the Prime Time Access Rule, which limited the number of primetime hours that networks could broadcast (in the hope that affiliate stations would become more creatively emboldened). This would be implemented starting in the fall of ’71, and while the networks could choose which 21 hours to broadcast, by 1972-’73, 8:00-11:00 was the set standard except for Sunday, which ran 30 minutes earlier. Ultimately, four whole hours of weekly programming were lost in between the falls of ‘69 and ‘72. 
(This arrangement stuck until ’75, when the networks were granted a full extra hour on Sunday.)", "score": 27.41909179304473, "rank": 34}, {"document_id": "doc-::chunk-4", "d_text": "The Danny Kaye Show returned to television in 2017 with reruns on Jewish Life Television (and, in the case of a one-off Christmas special, the Christian-leaning network INSP); JLTV also carries other classic comedy/variety series by Jewish comics in its schedule. Digital multicast network getTV shows variety shows on an irregular basis. The Spanish language variety show Sabado Gigante, which began in 1962, and then moved from Chile to the United States in 1986, continued to produce and broadcast new episodes on Univision until its cancellation in September 2015.\nAt least one national variety show continues on national radio: Live from Here, a musical variety series hosted by Chris Thile. It has followed three formats over the course of its history, the first and longest being that of A Prairie Home Companion; founding host Garrison Keillor created the series in 1974 as an homage to rural radio variety shows, featuring sketch comedy based on radio dramas of the old-time radio era, complete with faux commercials. (The other format, The American Radio Company of the Air, also hosted and created by Keillor, was set in a more urban environment and likewise was based on old-time radio; its short run in the late 1980s eventually morphed into a revival of A Prairie Home Companion). Live from Here, which was established in its current format in 2016 and took on its name a year later after losing the rights to the A Prairie Home Companion name, focuses mainly on musical acts.\nFox's Osbournes Reloaded, a variety show featuring the family of rocker Ozzy Osbourne, was canceled after only one episode had been telecast in 2009. More than two dozen affiliates refused to telecast the first episode of the show. 
This series had been slated for a six-episode run.
NBC has made repeated attempts at reviving the variety format since the late 2000s (its last successful series in this genre, Barbara Mandrell and the Mandrell Sisters, left the network's schedule in 1982). A pilot episode for Rosie Live was telecast the day before Thanksgiving Day in 2008 and, after receiving middling ratings and extremely poor reviews, was not picked up for its originally planned run in January 2009. In May 2014, NBC aired The Maya Rudolph Show, a variety show starring SNL performer Maya Rudolph.", "score": 27.40028124136025, "rank": 35}, {"document_id": "doc-::chunk-22", "d_text": "(And of course, “demos” had become the go-to response when a veteran show was cancelled – despite the fact that winning and profitability were always more important, especially in ’70.)
We can’t yet confirm whether PETTICOAT JUNCTION had higher total ratings than GREEN ACRES in the year the former was cancelled. But it hadn’t made the Top 30 since before the ’67-’68 season, when it moved to Saturdays and started suffering from Benaderet’s ill health. In fact, PETTICOAT JUNCTION was allegedly floated for cancellation at the end of the ’68-’69 season, its second year out of the Top 30 (when GREEN ACRES was #19). Why? Aside from dropping in position and skewing older (I believe it was the oldest-leaning of Henning’s three), it also endured the death of its star. So, PETTICOAT JUNCTION was forever in more danger than GREEN ACRES thereafter, and frankly, I think it entered its last year already *in* the cancellation queue, simply because the network hadn’t been ready to drop it during the season prior.
’68-’69 was almost as rocky as its successor. The first half of the TV year had been a wake-up call for CBS; as NBC rose, Dann was almost fired, network president Tom Dawson was replaced with Bob Wood, and Nabors was leaving his #2 sitcom for a variety show that would likely not fare as well. 
Programming changes were in order — and poaching NBC’s cancelled GET SMART was obviously not a real solution. But since there weren’t many good, different shows in development during late ‘68 to truly address what was necessary for a big shake-up, the network had to prolong its boldness. And Wood needed time to gain some cachet with Paley.\nThus, the new network president made sure CBS was slightly better prepared for change in the following year’s greenlighting period – when an upheaval was even more financially imperative.", "score": 27.1201151855931, "rank": 36}, {"document_id": "doc-::chunk-6", "d_text": "This changed with Carson's retirement, and other networks began to air their own talk show competitors, starting with Late Show with David Letterman on CBS in 1993. As of 2014, late-night talk shows vary widely on their resemblance to the original variety format, with Jimmy Fallon's incarnation of The Tonight Show putting heavy emphasis on sketches and stunts, while shows such as Jimmy Kimmel Live! and The Late Show with Stephen Colbert put more emphasis on desk chat.\nThe Richard Bey Show combined the variety show with the tabloid talk show, not only having its guests talk about their problems but also having them participate in absurdist games, and Sally Jesse Raphael was known for occasionally having music and fashion in the show, especially drag and gender-bending performances.\nSketch comedy series such as Saturday Night Live, In Living Color, Almost Live! (and its successor Up Late NW), MADtv, and SCTV also contain variety show elements, particularly musical performances and comedy sketches. The most obvious difference between shows such as Saturday Night Live and traditional variety shows is the lack of a single lead host (or hosts) and a large ensemble cast. SNL has used different guest hosts ever since its inception.\nTelevised talent shows have a variety show element, in that they feature a variety of different acts. 
Examples of talent shows that feature entertainers from a broad variety of disciplines include Star Search, which had a run in the 1980s in syndication and a run on CBS in the early 2000s during the reality television boom; The Gong Show, which reached its peak in the 1970s but has had occasional revivals since then; and the worldwide Got Talent franchise.
The variety show format also continued in the form of the telethon, which features variety entertainment (often music) interspersed with appeals for viewers to make donations to support a charity or cause. The Jerry Lewis MDA Telethon was one of the best known national telethons, but it too was eventually canceled after several years of shortening (originally over 21 hours; by the time of its last telecast in 2014, by which point Lewis had been gone from the telethon for several years, it was down to two hours). Another popular telethon, for United Cerebral Palsy, ended its run in 1998 shortly after the death of its founder and figurehead, Dennis James. Likewise, only a handful of long-established local telethons remain.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Whatever happened to the variety/comedy shows? You know, the ones where they played before a live audience, mixing comedy skits with singers, guest stars, dancers and an hour of laughter. The best ever, in my humble opinion, was the Carol Burnett Show. She had it all: Harvey Korman, Tim Conway, Vicki Lawrence, Lyle Waggoner and, best of all, herself. Every show began with a monologue, questions from the audience with ad lib quips, that Tarzan screech, and the washer woman character that no one could imitate.
As the saying goes, they don't make them like they used to. The younger folks don't know what they missed without seeing these wonderful shows. 
Yes, I’m glad I was around when they were big hits, but they also sadden me when I watch the poor excuses we have today for comedy in comparison.\nNo canned laughter for these folks. But it wasn’t needed. Just watch any episode of “Burnett” or “Jackie Gleason” and you’ll find plenty of that authentic laughter coming from the people in the studio. Something about fake laughter that I resent. It’s as though the producers of these shows don’t trust us to know what’s funny.\nAnyway, here’s my top ten for the greatest variety shows of all time. (With a few links to spice it up)\nThe Carol Burnett Show\nThe Dean Martin Show\nSonny And Cher\nThe Smothers Brothers\nThe Colgate Comedy Hour\nThe Show Of Shows\nThe Ed Sullivan Show\nRed Skelton Show\nMilton Berle Show\nWhat say you…my fellow senior citizens?", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "Cultural highlights of 1949\n• The Aldrich Family (NBC, from 2 October). Soap, transferred from radio.\n• Arthur Godfrey & His Friends (CBS, from 12 January). Runs for next seven years.\n• Cavalcade of Stars (DuMont, from 4 June). Variety.\n• Colgate Theatre (NBC, from 3 January). Drama.\n• The Goldbergs (CBS, from 17 January). Sitcom, transferred from radio; runs for four years (to 25 June 1951).\n• Hopalong Cassidy (NBC network, from 26 June). Cowboy adventures, starring William Boyd.\n• Kukla, Fran and Ollie (networked from 12 January, see 1947). Children's.\n• The Lone Ranger (ABC, from 15 September). Western, first of 169 episodes specially made for television.\n• Red Barber's Clubhouse (CBS, from 2 July). Sports, later transfers to NBC.\n• Suspense (CBS, from 1 March). Thriller; runs for five years (to 17 August 1954).\n• These Are My Children (NBC Chicago, from 31 January). First daytime soap.\n• They Stand Accused (CBS, from 18 January). Legal drama, later transfers to DuMont.\n• The Voice of Firestone (from 5 September). 
Classical and light classical music (on radio since 24 December 1928).\n• Blue Hills (ABC). For rural dwellers, runs until September 1976.\n• The Billy Cotton Band Show (BBC Light Programme, from 6 March).\n• A Book at Bedtime (BBC, weeknights at 23:00 from 31 January). Readings. Still running in 2005.\n• Dragnet (NBC's KFI in Los Angeles, from 7 July). Crime.\n• Yours Truly Johnny Dollar (CBS). Drama series about insurance investigation. It runs until 1962.\n• Arthur Miller: Death of a Salesman (on Broadway from 9 February)", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "Merlin,” “Three’s Company,” “McMillan and Wife,” “Get Smart,” “The Dick Van Dyke Show,” “F Troop,” “Hank,” “Mister Roberts,” “Omnibus,” “The Saturday Night Revue,” and “The Colgate Comedy Hour.”", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "For the most part when newspapers or magazines reported the Nielsen ratings in decades past only the Top Twenty or Top Forty programs were listed. Whether this was due to restrictions enforced by Nielsen (these days the company is very protective of its data) or an editorial decision is beside the point. What is important is that on October 12th, 1965 The New York Times reported both the Top Forty and the Bottom 15 programs for the first two weeks of the 1965-1966 season. The period in question ran from Monday, September 13th through Sunday, September 26th. All the way at the bottom with a 4.8 Nielsen rating was CBS Reports. Topping the chart was NBC’s Bonanza with a 31.1 rating.\nOne reason for the poor performance of these shows was their time slot. Killer competition on the other two networks would result in low ratings regardless of quality. ABC’s The Donna Reed Show, for example, was up against Gilligan’s Island (14, 22.3) on CBS and Daniel Boone (37th, 19.3) on NBC. 
The combined rating for the three programs was a 52.5, meaning more than half of all television households in the country were watching the networks (their combined share of the audience, that is, the percentage of televisions actually in use that were tuned to one of the three programs, was likely close to 90%).
Here's the complete list of the Bottom 15 programs. New shows are marked:

83. The Steve Lawrence Show (New) – CBS – 12.6
86. The Trials of O'Brien (New) – CBS – 11.9
86. The King Family – ABC – 11.9
88. Ozzie and Harriet – ABC – 11.4
89. Camp Runamuck (New) – NBC – 10.9
89. The Donna Reed Show – ABC – 10.9
94. Shindig II (New) – ABC – 9.6", "score": 26.691380579814325, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "As a kid in the early 1960's, the tv shows I watched were part of a group decision. There was only one set for a family of four kids (for the time being) and two parental units. My three sisters had their preference for girlie shows, but we all always wanted to watch what we thought must have been meant for adult viewing. As if we got extra credit for trying to make ourselves seem more sophisticated than we were.
In those days, if a show got good ratings, it stayed on the air for several years. Good ratings meant that they were drawing lots of people on a regular basis. Of course, this was before cable and dishes that give you anywhere from 50 to 150 channels to choose from. In the big cities, you were glad to have five or six channels. In 1973, All in The Family one week drew a 33.7 rating which translated into a 54 share. This meant that fifty-four percent of all tv's in use were watching Archie Bunker pontificate. 
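The rating-and-share arithmetic above follows a standard identity: a rating is the percentage of all TV households tuned to a program, while a share is the percentage of households actually using their sets (HUT) that are tuned to it. A minimal sketch of the conversion (the function name and the implied HUT figure are illustrative assumptions, not Nielsen terminology):

```python
def share(rating_pct: float, hut_pct: float) -> float:
    """Share = rating rescaled against households using television (HUT)."""
    return rating_pct / hut_pct * 100

# The All in The Family example: a 33.7 rating alongside a 54 share
# implies that roughly 62% of TV households had a set in use that week.
implied_hut = 33.7 / 54 * 100       # ~62.4

print(round(implied_hut, 1))        # 62.4
print(round(share(33.7, 62.4), 1))  # 54.0
```

Run as-is, the sketch simply confirms the article's figures: a 33.7 rating against roughly 62% of sets in use works out to the reported 54 share.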
Today, the most watched shows are ecstatic to get up to 25 percent of the sets in use.", "score": 26.58575325109042, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "So without further ado, let's begin...
Number 10 - The Addams Family
In 1964, the ABC-TV network created The Addams Family television series based on Charles Addams' cartoon characters. The series was shot in black and white and aired for two seasons in 64 half-hour episodes (September 18, 1964 – September 2, 1966). During the original television run of The Addams Family television series, The New Yorker editor William Shawn refused to publish any Addams Family cartoons.
The television series featured a memorable theme song, written and arranged by longtime Hollywood composer Vic Mizzy. Check it out...
Number 9 - Rod Serling's Night Gallery
The series was introduced with a pilot TV movie that aired on November 8, 1969, and featured the directorial debut of Steven Spielberg, as well as one of the last acting performances by Joan Crawford.
Check out this TV Promo Spot...
Number 8 - Dexter
Dexter aired on Showtime from October 1st, 2006 to September 22nd, 2013, with 8 seasons altogether. Michael C. Hall has received several awards and nominations for his portrayal of Dexter, including a Golden Globe award.
Let's go inside the kill room (spoiler alert!)...
Number 7 - Kolchak: The Night Stalker
Kolchak: The Night Stalker is a television series that aired on ABC during the 1974–1975 season. The series was preceded by two television movies, The Night Stalker (1972) and The Night Strangler (1973). The series only lasted a single season. The main character originated in an unpublished novel, The Kolchak Papers, written by Jeff Rice. 
After the success of the TV movie and its sequel, the novel was initially published in 1973 by Pocket Books as a mass-market paperback original titled The Night Stalker, with Darren McGavin on the cover to tie it to the movie. Moonstone Books continues to produce Kolchak comic books.\nHere's the TV intro...\nNumber 6 - Tales from the Crypt\nThe title is based on the 1950s EC Comics series of the same name and most of the content originated in that comic or the four other EC Comics of the time (Haunt of Fear, Vault of Horror, Crime SuspenStories, and Shock SuspenStories).", "score": 25.703854003797073, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "Amidst a rough economic time, the entertainment business has taken a big hit. People are cutting back when it comes to entertainment, from downloading fewer songs on iTunes to going to the movies less often. Consequently, television has become an alternative form of entertainment for families.\nNinety-nine percent of households own at least one television set, according to the Sourcebook for Teaching Science. Therefore, TV programs have thrived more than ever. From the 1950s to the present, each decade has had its defining shows that shaped entertainment for many Americans.\n1950s- In the 1950s television struggled to be a mass media source. There were three channels to choose from: NBC, CBS and ABC. \"I Love Lucy\" was one of the most popular shows and still has a following today. \"Gunsmoke,\" a Western show set in Kansas, remains the longest running prime-time show to this day, with a staggering 20 seasons and 635 episodes.\n1960s- Television of the 1960s was considered to reflect old-fashioned views that were safe for the whole family to watch. Controversial shows were not even considered at this time, and having a TV in the house was becoming more common. 
Shows like \"The Andy Griffith Show\" and \"The Addams Family\" were great examples of clean, family-oriented entertainment that defined 1960s television.\n1970s- The 1970s were a lot different from previous decades of TV. It was the decline of family entertainment and the rise of socially hip sitcoms. Color TV became permanent and the term \"couch potato\" was coined. \"All in the Family\" broke through the socially acceptable barrier at the time and paved the way for future popular shows. \"Happy Days\" and \"Three's Company\" were also examples of this.\n1980s- Television was revolutionized in the 1980s. It was the birth of the remote control, the invention of cable and TV's fourth network, FOX. \"Dallas\" was the No. 1 show in the early 80s and made things like death on TV more widespread. \"The Cosby Show\" was one of the few popular family-oriented shows that still existed.\n1990s- TV in the 1990s might be more familiar to today's generation. ABC's \"Thank God It's Friday\" slogan promoted Friday nights on the couch with the family.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-3", "d_text": "— Some sources list 13 seasons for “All in the Family” and 12 for “Make Room for Daddy” and for “The Lucy Show.” However, that involves including four seasons of “Archie Bunker’s Place” or one of “Make Room for Granddaddy” or six of “Here’s Lucy.”\n— Coming close with 11 seasons were “Cheers,” “Frasier,” “Happy Days,” “The Jeffersons,” “Married With Children” and (counting its one-year revival) “Murphy Brown.”\n— Still in the running is “Will & Grace,” which has done 10 seasons so far.\n— “Friends” also had 10 seasons and many others had nine — “Seinfeld,” “Everybody Loves Raymond,” “Coach,” “Alice,” “Drew Carey,” “Family Matters,” “The Facts of Life,” “The King of Queens,” “Night Court” and the original “One Day at a Time.”\n— If you counted sitcoms not taped before an audience, “Ozzie & Harriet” would win with 14. 
If you included animation, it’s “The Simpsons” with 30. And if you go with comedy-themed variety shows, there’s Red Skelton in primetime (20) and “Saturday Night Live” overall (44).\n(Sources: “The Complete Directory to Prime Time Network and Cable TV Shows” (Ballantine Books) and www.imdb.com.)\nTrivia break II (name game)\n— “Sheldon” and “Leonard” are no accident. Sheldon Leonard – who started as an actor in tough-guy roles – produced many top comedies (with Dick Van Dyke, Andy Griffith and Danny Thomas) and a pioneering drama, “I Spy.” Chuck Lorre, the “Big Bang” co-creator, called the names “a little hero worship.”\n— Then there’s “Howard Wolowitz,” the friend of Sheldon and Leonard. Before becoming a comedy writer, “Big Bang” co-creator Bill Prady had a computer job, working on a system developed by Howard Wolowitz.\n— The show’s title is a bit of a play on words, as both a science and sexual term, but Prady adds one more: “The third thing, besides the physics reference and the sexual reference, is a coming together of forces.” In this case, it’s the opposite forces of Penny and the guys across the hall.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "Where Everybody Knows Your Name\nWarren: I arrived at NBC in December 1979, hired by Brandon Tartikoff to work in the comedy department. I was manager of comedy development, the junior member of the department. Brandon was a newly minted vice president of development at the network, which was mired in last place. I was twenty-seven years old, and though I had watched a lot of it, I knew next to nothing about network television. Brandon, my boss, was all of thirty.\nIn what was just a three-way race for audience (there’d be no Fox Broadcasting until 1987), NBC was jokingly derided as number four. CBS had ten comedies on its schedule, including M*A*S*H, WKRP in Cincinnati, The Jeffersons, Alice, and One Day at a Time. 
ABC could boast fourteen sitcoms, among them Happy Days, Laverne & Shirley, Barney Miller, Soap, Taxi, and Three’s Company. At NBC, we had Diff’rent Strokes and Hello, Larry.\nIn terms of general viewership, CBS led the way with about sixteen million households. ABC was a close second with fifteen million. NBC lagged well behind at twelve million. For the 1980 season, Little House on the Prairie was our top-rated show at sixteenth. We placed only four shows in the top thirty. There was nowhere to go but up.\nWorse still, NBC’s head of programming at the time was a man named Paul Klein. He had a background in audience research and had come up with the strategy of LOP, which stood for Least Objectionable Programming (I’m not kidding). The object was to piss off as few viewers as possible. The network product line was largely geared toward big events, so we became the Big Event network.\nA TV critic once asked Paul Klein, “How do you know when you’ve got a big event?”\nKlein said, “We sit around a table, and people throw out ideas, and somebody says, ‘That’s a big event,’ and that’s when we know.”\nIt was an insane form of programming, expensive and not in the least bit habit--forming. NBC had essentially abandoned weekly series as the spine of the network. As a remedy, the legendary Fred Silverman had been brought over from ABC to turn things around.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-10", "d_text": "Saturday morning shows featuring the Hanna-Barbera laugh track:\n- Harlem Globetrotters (CBS, 1970–71; second season only)\n- Help!... It's the Hair Bear Bunch! 
(CBS, 1971–72)\n- The Pebbles and Bamm-Bamm Show (CBS, 1971–72)\n- The Funky Phantom (ABC, 1971–72)\n- The Roman Holidays (NBC, 1972)\n- The Amazing Chan and the Chan Clan (CBS, 1972)\n- The Flintstone Comedy Hour (CBS, 1972–73)\n- Josie and the Pussycats in Outer Space (CBS, 1972–74)\n- The New Scooby-Doo Movies (CBS, 1972–74)\n- Yogi's Gang (ABC, 1973)\n- The Addams Family (CBS, 1973–74)\n- Inch High, Private Eye (NBC, 1973–74)\n- Jeannie (CBS, 1973–75)\n- Speed Buggy (CBS, 1973–75)\n- Goober and the Ghost Chasers (ABC, 1973–75)\n- Hong Kong Phooey (ABC, 1974)\n- Partridge Family 2200 A.D.", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "(l to r) Top Doris Day in The Doris Day Show; Andrew Duggan, Wayne Maunder, and James Stacy in Lancer;\nMiddle: Diahann Carroll and Marc Copage in Julia; Phyllis Diller in The Beautiful Phyllis Diller Show;\nBottom: Robert Morse in That’s Life; Otis Young and Don Murray in The Outcast\nHirschfeld captioned this work “Some of the new shows scheduled (between commercials) for this fall on your groan box.” In the era before cable television, the new television season on the three networks were subjects of great anticipation. The television mirrored the theater season in that it ran from September to the end of May, only the networks would then repeat the shows (commonly referred to as “reruns”) over the summer.\nThese shows represent their time. The Outcasts was a story of two cowboys, one Black, and one White, who work together even if they aren’t friends, reminiscent of The Defiant Ones. Julia was a groundbreaking series, the first to star a Black woman in a non-stereotypical role, but at the time, some critics thought the show was too apolitical. Gil Scott-Heron’s classic “The Revolution Will Not Be Televised) name checks the series as something you won’t see when the uprising comes. 
Robert Morse starred in an innovative series that charted the ups and downs of a young couple, in the form of a Broadway-style musical. It was filmed before a live audience and featured new songs, standards, and even pop hits of the day. Despite an Emmy nomination and guest stars that ranged from Liza Minnelli to Sid Caesar, it ran only one season. Like the theater, many of these shows had very short runs.\nWhile it only ran for two seasons, Lancer has continued to play a role in American culture. The 2019 movie Once Upon a Time in Hollywood incorporates a fictionalized account of the filming of Lancer's pilot episode, with Leonardo DiCaprio appearing as a villain in the episode. Additional scenes are featured in the film’s novelization.", "score": 25.623100437790146, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "This segment of Ask Steve explains how television in the 1960's failed to reflect upon the turmoil and confusion felt during this decade of change. Television in 1968 lagged behind in social change because there were only three networks. Since only three networks existed, they represented a large variety of people and did not want to alienate themselves from the public. 1968 television mostly consisted of escapist programs, such as Bewitched, Get Smart, and I Love Lucy. However, the first interracial kiss on television took place in a 1968 episode of Star Trek. The Mod Squad also aired, appealing to young people, but with a very traditional message of solving crime. Laugh-In also aired in 1968, and was one of the first shows to poke fun at the establishment. It was not until the 1970's that there were more networks, and you had a variety of shows like Mary Hartman, Mary Hartman.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-1", "d_text": "Law|\nNote: On NBC, the sitcom Grand premiered at 9:30 on January 18, 1990.\nFriday\n|ABC||Full House||Family Matters||Perfect Strangers||Just the Ten of Us||20/20|\n|ABC||Mr.
Belvedere||Living Dolls||The ABC Saturday Mystery: B.L. Stryker, Columbo, Kojak, Christine Cromwell|\n|CBS||Paradise||Tour of Duty||Saturday Night with Connie Chung|\n|FOX||COPS||The Reporters||Beyond Tomorrow||Local|\n|NBC||227||Amen||The Golden Girls||Empty Nest||Hunter|", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-8", "d_text": "NBC, once a broadcasting network, will be shelving 30 ROCK until midseason. Uh, it wins the Emmy for Best Comedy every year. Wouldn’t you sort of want that show on the air? Watch. The minute their first new show tanks it’ll be back on the schedule. So I’m guessing October 1st.\nMy best wishes to all the writers of all these shows. Keep the torch lit. The foundation of television has always been comedy (whether they know it or not). In success it's Jed Clampett hitting oil. And now more than ever, boy do we need to laugh.\nMonday, May 25, 2009\nThe Upfronts are over, the smoke has cleared, and the final tally for fall pickups is 32 new scripted series – 21 dramas and 11 comedies. So percentage-wise, that’s 33% sitcoms. Clearly an improvement over past years but remember, NBC has officially thrown in the primetime towel and scheduled Jay Leno every weeknight at 10. There go two or three potential new nurse dramas (leaving us with only three).\nA couple of the networks have given up on Saturday night completely, airing reruns of dramas rather than new product. (What a contrast to the 70s when CBS Saturday night was the biggest night of the week with ALL IN THE FAMILY, MASH, MARY TYLER MOORE SHOW, BOB NEWHART SHOW, and CAROL BURNETT SHOW. Today it’s NCIS reruns.)\nSAMANTHA WHO? would have been picked up but ABC insisted producers cut $500,000 off of each episode’s budget… thus making it SAMANTHA HOW? The producers declined. Too bad. It was a decent show.\nA couple of shows switched networks. 
MY NAME IS EARL apparently will move to FOX (I'm waiting for final confirmation but TBS is also interested) and MEDIUM will join the CBS lineup. NBC wanted to give MEDIUM a smaller order so the producers happily jumped to CBS who offered more. And now of course NBC is saying the show under-performed and they didn’t want it anyway. Had the producers agreed to stay at NBC they’d be claiming it’s the crown jewel of the network.\nGreg Garcia, creator of EARL, was hardly miffed over leaving NBC.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "\"The Brady Bunch Hour\" began as a 60 minute special titled \"The Brady Bunch Variety Hour\" produced by Sid and Marty Krofft which aired on ABC on November 28, 1976. The success of this special led to a semi-regular series of which eight additional 60 minute episodes were produced and aired from January to May 1977. None of the installments, known herein as \"The Brady Bunch Hour\" were ever repeated on network television. It was not until twenty years later in 1997 that all nine episodes were rerun (in their entirety and commercial-free) on the Australian cable network \"TV1, Television's Greatest Hits\". The American based cable networks Nick-at-Nite and TV Land also aired a handful of episodes during the 1990's, but with large portions edited out. Overall, \"The Brady Bunch Hour\" is by far the most rarely seen Brady series.\nThe premise of \"The Brady Bunch Hour\" is hard to understand. The Brady family was chosen to star in a variety show on ABC. They left their familiar two", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-2", "d_text": "With Paul Reiser and James Spader (BOSTON LEGAL).\nETHEL IS AN ELEPHANT – MR. ED with very wide master shots. Starring Todd Sussman who, during that period, starred in fifteen or twenty failed pilots. 
Ethel’s career never recovered from this project.\nTHE FESS PARKER SHOW – The man who played Davy Crockett starred in a comedy.\nFRANKIE & ANNETTE: SECOND TIME AROUND – You loved them in the Beach Party movies and wondered how long could they remain a couple before they finally had sex? According to this pilot, twelve years and counting.\nFRAUD SQUAD – from Jack Webb productions. Frank Sinatra Jr. as the head of the LAPD Fraud Squad. Not intended to be a comedy but ohhh mannn…\nFROM CLEVELAND – Featuring Bob & Ray and the brilliant cast of SCTV.\nGHOST OF A CHANCE – Shelley Long, pre-CHEERS, as a zany ghost.\nGOOBER & THE TRUCKERS’ PARADISE – The title alone should have warranted a pick-up. This is a spin-off of THE ANDY GRIFFITH SHOW and marks the very first appearance of Gomer Pyle.\nGOOD PENNY – Billed as a comedy about an emotionally disturbed woman (that must’ve been a helluva pitch). Well cast with Rene Taylor in the starring role.\nGREAT DAY – another premise chock full of comedic possibilities. Skid row derelicts in Los Angeles. Featured Al Molinaro (HAPPY DAYS) and as “Jabbo “– Spo-De-Odee.\nHARRY’S BATTLES – Dick Van Dyke and Connie Stevens did not have the magic of Dick and Mary Tyler Moore, or even Dick and Hope Lange.\nHIGH SCHOOL USA – After his “Garden Party-take-me-seriously-as-an-artist” period Rick Nelson starred as the principal in a series that featured a ton of 50’s and 60’s family sitcom cast members including Harriet Nelson, Jerry Mathers, Ken Osmond, Paul Peterson, Dick York, and Barbara Billingsley. Also Crystal Bernard (WINGS) who must’ve been 9 then.\nHOW TO SUCCEED IN BUSINESS WITHOUT REALLY TRYING – Adaptation of the Broadway smash. Written by Abe Burrows. NOT directed by James Burrows.", "score": 24.296145996203016, "rank": 53}, {"document_id": "doc-::chunk-6", "d_text": "The Nabors show, which also featured his Gomer Pyle USMC co-stars Ronny Schell and Frank Sutton, was built on Nabors's singing and comedy skills. 
In 1971 it was a victim of the CBS \"rural purge,\" presumably because of Nabors's Alabama accent and the fact that he had starred on Gomer Pyle and before that The Andy Griffith Show (I'm being facetious; the reason that was cited for much of the rural purge was that the shows either weren't doing well in the overall ratings – most of the \"rural purge\" shows had fallen below 30th in the ratings – or the fact that they did not draw the \"youth\" demographic, which seems to have been the case with Nabors, whose show was 29th in the annual ratings). The Andy Williams Show was a return to TV for the extremely relaxed Mr. Williams who had headlined a weekly series for NBC from 1962 to 1967. Indeed Brooks and Marsh in their Complete Directory To Prime Time Network And Cable Shows 1946-Present don't split this show off from the older show (or from the two summer series he did for ABC and CBS in 1958 and 1959). Certainly TV Guide treated the show as a new one (not that I could tell when I saw it never having knowingly seen the original). The magazine pumped up the wide range of the guests and even stated that one episode \"paired Lawrence Welk with Tiny Tim.\" That might have been something to see. Then again it couldn't have been much stranger than some of the things on the show, like \"the Walking Suitcase,\" or \"The Cookie Mooching Bear.\" The Andy Williams Show also ran from 1969-71 but in his case he seems to have jumped rather than having been pushed, preferring occasional specials (most famously at Christmas) to the weekly grind of a series. The best of the variety shows was probably the replacement series, The Johnny Cash Show simply because it was less a variety show and more a pure music show with a genuine passion for country music.\nThe 1969-70 season was the beginning of a transition in TV. 
The westerns were nearly gone – CBS would cancel Lancer at the end of the season, leaving only three in the line-up for 1970-71 when no new shows in the genre were announced.", "score": 24.214298464321303, "rank": 54}, {"document_id": "doc-::chunk-7", "d_text": "Some fondly remembered comedies would also leave the air: Get Smart (which had been picked up by CBS for the less than highly regarded fifth season, which saw the birth of Max and 99's twins), I Dream of Jeannie, The Flying Nun, and The Ghost & Mrs. Muir. And in truth little of substance would come out of the season. After all, the show we remember most from the year is the story of a lovely lady who was bringing up three very lovely girls and a man named Brady who was busy with three boys of his own.\nAs a special bonus I've got a playlist set up featuring ABC's 1969 Fall Preview Show. It probably has the best clips from The Music Scene (although it focuses on the comedy group The Committee – including Howard Hesseman and Peter Bonerz – rather than the music), The New People and the other 1969 new ABC shows that didn't last long. By the way, they're wrong about The Ghost & Mrs Muir – it debuted in the 1968 season.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-4", "d_text": "It was the half-hour drama Room 222 starring Lloyd Haynes, Denise Nicholas, Michael Constantine and Karen Valentine as teachers at a Los Angeles high school. Haynes played history teacher Pete Dixon, while Nicholas played his girlfriend Liz McIntyre, the school's guidance counsellor. Constantine played the school's well liked if long-suffering principal Seymour Kaufman, and Valentine played Alice, a young English teacher. There were a number of recurring student characters as well as a number of actors who made guest appearances on the show and would later go on to fame, including Bruno Kirby, Cindy Williams, Teri Garr, Rob Reiner, Anthony Geary, Richard Dreyfuss, Chuck Norris, Kurt Russell, and Mark Hamill.
The show lasted for five seasons, despite nearly being cancelled after its first – apparently ABC relented when the show was nominated for five Emmy Awards and won three, including Emmys for Constantine and Valentine as Supporting Actor and Actress in a Comedy and the now vanished category of Outstanding New Series. In the Fall Preview issue TV Guide raved about the show saying, \"...in this half hour comedy-drama the life he leads has the feel of reality despite scripts that are shrewdly calculated to entertain. It shows up in things like the refreshingly natural man-woman relationship of Pete and Liz McIntyre.\"\nOf course comedy, and in particular situation comedy, was the life blood of TV in the late 1960s. In all honesty 1969 was not a good year for sitcoms. Of eight introduced, only two ran for more than two seasons. These were The Courtship Of Eddie's Father which ran three seasons and appears to have ended primarily because of a dispute between star Bill Bixby and director and co-star James Komack about the direction the series was taking (this seems to be a running theme with Komack; both Gabe Kaplan and Marcia Strassman have spoken of a difficult relationship with Komack on Welcome Back Kotter where he apparently played the actors off against each other), and The Brady Bunch which ran for five, and of course lives on in perpetual reruns. Sketch comedy Love American Style ran for four and a half seasons and was the only show from ABC's Monday line-up to not only survive for more than half a season but to thrive. ABC's mid-season replacement Nanny And The Professor managed 54 episodes, being cancelled halfway through its second season.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-1", "d_text": "Enter veteran actor Bill Cosby (“I Spy”) and his modern family comedy, “The Cosby Show.” More than any other sitcom of its time, NBC’s “The Cosby Show” broke through color and class barriers and into the living rooms of the American people.
Millions tuned in each week of the show’s eight seasons to watch comedian Bill Cosby as Cliff Huxtable, the patriarch of a large, upper-middle class Brooklyn family.\nNot only was Bill Cosby the era’s most prominent man in standup, he was also the country’s most lovable dad, raising a generation of viewers on a mix of stern looks and silly one liners. From its inception until the show’s finale in 1992, Bill Cosby and The Cosby Show cast revived network television ratings, drastically changed the face of American comedy and left a lasting mark on the sitcom genre.\nIn September of 1968, NBC premiered the first episode of what promised to be a truly revolutionary sitcom. “Julia” starred African-American actress and singer Diahann Carroll in the title role, making waves as one of the first television shows to resist placing African-American characters in stereotypical parts. Carroll’s Julia was a mother, a widow and a professional nurse, living with her young son in a nice suburban home. What made Julia different was that it was the first sitcom to portray an African American woman with a college degree thriving in a professional position.\nDespite its progressive design, “Julia” was critiqued for not engaging deeply enough in the highly charged Civil Rights-era politics of the time. Although “Julia’s” broadcast made waves during the turbulent 1960s, the characters and their creators stayed far away from any associations with activism. For Carroll, just the presence of a person of color on television was enough of a contribution. “There was nothing like this young successful mother on the air,” Carroll explained, emphasizing the show’s social impact. 
“And we thought that it might be a very good stepping stone.” Though “Julia” was only on the air for three seasons, its treatment of African-American characters represented a significant turning point in television history.\nStar Trek: The Original Series\nWhen “Star Trek” premiered in 1966, the beloved science fiction series imagined a distinctly diverse future.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-5", "d_text": "C-Bear and Jamal\n- Saturday morning cartoon about Jamal and his teddy-bear C-Bear.\nC-Bear takes Jamal through a variety of imaginary adventures. Tone Loc\nprovides the voice for C-Bear and produces the cartoon. Premiered\nSeptember 7, 1996.\n- Sam Kinison is Tim Matheson's guardian angel. Sam was about 6 inches\ntall or so when Tim could see him. Tim had a wife and a few kids, who\nmysteriously disappeared after a few episodes. November 9 to December\nThe Chevy Chase Show\n- Late night talk show hosted by Chevy Chase. Winter 1993/4.\nThe Class of '96\n- Soap opera about college freshmen. Fall of 1992 to Summer of 1993.\n- Follow real life firemen and medics around as they save people.\nComic Strip Live\n- Saturday night stand up comic show. I think 1989 to Winter 1993.\n- Follow real life cops around as they arrest people. On since at least 1990.\n- Sit-com about Miami based Regency Air, focusing on Jess and Maggie, two\nair stewardesses who are roommates. Other characters include Randy, an air\nsteward, the arrogant pilot, the bitchy head stewardess, and the gay air\n- Fox picks up the discarded ABC show, where Jon Lovitz is the voice of\nJay Sherman, host of Coming Attractions, where he reviews\nparodies of recent films. Began March 5, 1995.\nPrograms That Begin with the Letter D\n- Richard Lewis stars as a psychologist whose Archie Bunker-type father (Don\nRickles) moves in after separating from his wife (Renee Taylor). Lewis also\nincorporates his neurotic schtick into the character.
Fall 1993.\n- A very stupid show that contained two 15 minute very stupid\naction/adventure skits parodying Hawaii 5-0, and other cheesy adventure\nshows. July 11 to August 22, 1993.\n- Weekday cartoon mini-series. 91/92? Troy Kearney says: \"The cartoon\nPirates of Dark Water was a mini-series and was just called Dark Water.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "Captain Kangaroo(1955 - 1984)\nHosted by Bob Keeshan (at one time, he played Howdy Doody's friend, Clarabell) from \"The Treasure House\", the Captain was so named because he always wore an overcoat with large, kangaroo-like pouches. Each show featured stories, skits, vaudeville acts, songs, games and other educational activities. C...\nMatch Game/Hollywood Squares Hour(1983 - 1984)\nThis short-lived blend of \"Match Game\" and \"Hollywood Squares\" had a workable premise: two new contestants played the \"Match Game\" portion of the show, with the winner going on to the \"Hollywood Squares\" portion against the previous day's champion. The winner of the \"Hollywood Squares\" section was th...\nThe Berenstain Bears Show(1985 - 1987)\nBased on the popular series of kids' books, this series follows the adventures of the Bear family (Mama, Papa, Sister and Brother) living in Bear country.\nEastEnders(1985 - Current)\nEastenders is a soap from The BBC. It is set in Walford, London. Their local pub is the Queen Vic.\nWhose Line Is It Anyway?(1998 - Current)\nOriginally began in 1988 in the UK and hosted by Clive Anderson, Whose Line Is It Anyway? first came to the U.S. in 1998. The show consists of a panel of four performers who create characters, scenes and songs on the spot, in the style of short-form improvisation games. Topics for the games are base...\nThe Adventures of Shirley Holmes(1996 - 1999)\nShirley Holmes is the grand-niece of the great detective Sherlock Holmes.
She, with help from her friend Bo, sets out to solve crimes that they come across.\nPolice Squad!(1982 - 1982)\nFrom the creators of Airplane!, the Hot Shots! movies, and Top Secret!, Police Squad! began as the brain-child of Jim Abrahams, David Zucker and Jerry Zucker. The series would later spawn three follow-up movies under the new Naked Gun titles. The series was set up to spoof the Quinn Martin Productio...\nConan: The Adventurer(1992 - 1998)\nThe Hyborian Age: A time of wizards, warriors and kings.", "score": 22.87988481440692, "rank": 59}, {"document_id": "doc-::chunk-5", "d_text": "Sketch comedy Hee-Haw was also cancelled by CBS after its second season, but got the ultimate revenge by running for 21 more years in syndication. One show that ran only two years was The Bill Cosby Show, his first solo effort, after being teamed with Robert Culp in I Spy. Cosby played high school basketball coach Chet Kincaid. The situations on the show revolved around his work as a teacher and his dealings with his family. One rather unique feature of the show was that it didn't feature a laugh track. I was a big fan of Bill Cosby when this show came out, in part of course because of I Spy but primarily because of his records, which were everywhere, and of course from his appearances on various variety shows. In my opinion Cosby's 1969 show had a lot in common with his stand-up act, and I don't think that extensive use of a laugh track would have done a service to the show.\nBesides Mr. Deeds Goes To Town, the two sitcoms that didn't last out the year were both from NBC: Debbie Reynolds Show, and My World And Welcome To It. The two shows couldn't have been more different. Debbie Reynolds's series was a standard domestic comedy with a very familiar hook to most episodes.
A bored housewife desperately wants to break into her husband's line of work, a process which usually involves harebrained schemes involving the lead character and her best friend – or in this case her sister – much to the distress of both of their husbands. It sounds exactly like I Love Lucy, which is no real surprise since the show was created by Jess Oppenheimer who also created I Love Lucy (and before that Lucille Ball's radio show My Favourite Husband). The other show couldn't have been more different. It was My World And Welcome To It starring William Windom in a role based on James Thurber. Indeed Thurber's writings provided the plot for many of the stories on the series while his cartoons were the basis for a number of fantasy sequences. These were animated by the DePatie-Freleng animation studio, then most famous for the opening credits of the Pink Panther movies. The series was perhaps too innovative for 1969.\nOf the four variety series introduced in 1969, the successes were The Jim Nabors Hour on CBS, and The Andy Williams Show on NBC.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-3", "d_text": "The premise of the single working woman's life, alternating during the program between work and home, became a television staple.\nAfter six years of ratings in the top 20, the show slipped to number 39 during season seven. Producers argued for its cancellation because of falling ratings, afraid that the show's legacy might be damaged if it were renewed for another season. To the surprise of the entire cast including Mary Tyler Moore herself, it was announced that they would soon be filming their final episode. After the announcement, the series had a strong finish and the final show was the seventh most watched show during the week it aired. The 1977 season would go on to win an Emmy Award for Outstanding Comedy Series, to add to the awards it had won in 1975 and 1976.
All in all, during its seven seasons, the program held the record for winning the most Emmys – 29. That record remained unbroken until 2002 when the NBC sitcom Frasier won its 30th Emmy. The Mary Tyler Moore Show became a touchpoint of the Women's Movement because it was one of the first to show, in a serious way, an independent working woman.\nDuring season six of The Mary Tyler Moore Show, Moore appeared in a musical/variety special for CBS titled Mary's Incredible Dream, which featured Ben Vereen. In 1978, she starred in a second CBS special, How to Survive the '70s and Maybe Even Bump Into Happiness. This time, she received significant support from a strong lineup of guest stars: Bill Bixby, John Ritter, Harvey Korman and Dick Van Dyke. In the 1978–79 season, Moore tried the musical-variety genre by starring in two unsuccessful CBS variety series in a row: Mary, which featured David Letterman, Michael Keaton, Swoosie Kurtz and Dick Shawn in the supporting cast. CBS canceled the series. In March 1979, the network brought Moore back in a new, retooled show, The Mary Tyler Moore Hour, which was described as a "sit-var" (part situation comedy/part variety series) with Moore portraying a TV star putting on a variety show. Michael Keaton was the only cast member of Mary who remained with Moore as a supporting regular in this revised format. Dick Van Dyke appeared as her guest for one episode.
The program was canceled within three months.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-6", "d_text": "By the time they left their luxury high rise in 1985 after 11 seasons, they had quietly achieved something special: They had made black faces at home—and familiar—on TV.\nLoni Anderson (right) was one smart cookie in a cheesecake body, Howard Hesseman (seated), who went to the Head of the Class, was a patter-mad deejay, and WKRP in Cincinnati zanily jounced in and out of 12 time slots in four years.\nHerve Villechaize got his feathers ruffled on Fantasy Island, a Love Boat dream-alike, but viewers were tickled by the dramatic values: wish fulfillment, bikinis, guest stars up the creek and suave Ricardo Montalban as a surefire magic-maker.\nRobert Blake's Baretta was one of the new kind of cop: weird, really weird. He had wit, chic ethnicity, a seedy pal (Tom Ewell), a cheap hotel room, a cockatoo named Fred and a bundle of tattered disguises. Fred, repeat this: Original.\nMaude (Bea Arthur, middle, with Bill Macy, Conrad Bain and future co-Golden Girl Rue McClanahan) began as Edith Bunker's unzippable cousin and quickly got her own show. Loud, biting and the proud owner of a facelift, she helped TV's middle-aged women grow up.\nThe Six Million Dollar Man smooches The Bionic Woman! Lee Majors was an astronaut who was reassembled after his spacecraft crashed, his love goddess was Lindsay Wagner, and they created one of TV's shortest-lived genres.\nThe notoriously raunchy sitcom Three's Company starred John Ritter as a faux gay living with busty Suzanne Somers (right) and cute Joyce DeWitt. Also starring were leers and scanty costumes. It was roundly condemned and wildly popular.\nBonnie Franklin (right) was a divorcée with two pesky daughters (Valerie Bertinelli is shown) and a super (Pat Harrington Jr.)
who figured himself for Cary Grant, but, taking things One Day at a Time in Indiana, they had eight good years.\nAs alien Mork, Robin Williams one-upped writers with his Mork & Mindy ad-libs.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-23", "d_text": "It started out very\nstrongly with the Laurence Fishburne episode, but\nwent somewhat downhill from there. Fall of '92\nsounds right to me. It only lasted a few months.\n- Sit-com about a white mother who marries a black widower dentist. The\nactor who played the father changed in the middle of the second season. Fall 1990 to\n- Michael Moore's former NBC wacky, left-leaning documentary show moves to\nFOX. Now features the Corporate Crime Fighting Chicken!\nTwenty-one Jump Street\n- Young looking undercover cops are sent into high-schools and colleges to\nbust young criminals. Winter 1987 to Summer 1991? Maybe 1990? I can't\nremember because it continued in syndication afterwards.\nPrograms That Begin with the Letter U\nThe Ultimate Challenge\n- I don't know\nPrograms That Begin with the Letter V\nThe Vicki Lawrence Show\n- After a bunch of hosts, the axing of Bob the Puppet, and other changes between\nAugust 1996 and August 1997, Fox After Breakfast was renamed to\nThe Vicki Lawrence Show. Vicki Lawrence had been host since\nJuly 1997. The live show now features an in studio audience and comedienne\nNancy Giles as announcer. Premiered August 19, 1997.\nVinnie and Bobby\n- A second attempt at Top of the Heap, only with more emphasis on the son\nand his friend, who now live in an apartment together. May 30 to July 11, 1992.\n- The military shoots down a UFO, and the only survivor is Adam McCarther, an Air\nForce pilot who disappeared over the Bermuda Triangle in 1947.
He claims to have been\nabducted by aliens, and that he stole the UFO to escape because he is on a mission,\nand that he must \"interfere before it is too late.\" He has not aged since 1947, and\nhas various abilities, touching those with whom he comes in contact. The NSA and the FBI\nare chasing him, but do not appear to be cooperating with each other. Premiered\nSeptember 19, 1997.\n- Sydny Bloom, a lineman for TelCal, using her computer made from spare\nparts, is able to use Virtual Reality to interface with people's minds.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-3", "d_text": "The majority of the actors and writers went on to create Filthy Rich & Catflap, which lasted only six episodes. Bottom, also with Rik Mayall and Adrian Edmondson, did rather better, three seasons and a total of 18 episodes (as well as five stage shows).\n- Blackadder is actually 4 different six-episode series (set at different periods of history but including Identical Grandson characters), each one launched with no expectations of making another. In fact, each series was picked up a year after its predecessor had ended.\n- The Good Life was a big enough deal during its run that the Queen herself attended a taping. There are 30 episodes (4 series of 7 episodes apiece, one Christmas special, and the Royal Command Performance).\n- Ever Decreasing Circles, written by the same duo as The Good Life, also ran for four seasons plus a Grand Finale for a total of 27 episodes.\n- Hi-de-Hi! – 58 episodes in 9 seasons over 8 years, and there almost certainly would have been more if the real-life holiday camp used for all the location shooting hadn't been closed, sold off and bulldozed for housing.\n- Dad's Army – 80 episodes in 9 seasons over 9 years. And a feature film.\n- Are You Being Served? – 68 episodes in 10 seasons over 13 years.
And a feature film.\n- The Prisoner was originally planned as 7 episodes, but extended to 17 at the request of Lew Grade to make the series more attractive to overseas (i.e. American) markets. Star Patrick McGoohan just couldn't see it stretching to a full 26.\n- Channel 4 Sitcom Spaced had seven episodes in each of its two series. Many fans clamored for some sort of concluding special, with the expectation of seeing the two main characters finally hook up, but never received it. The writers did send a little kiss to the fans in the form of the last minute of the Skip To The End Documentary – check out the DVD and, erm... skip to the end.\n- Primeval had six episodes in its first season, and seven in the second, giving it a grand episode count of thirteen episodes. It got a surprisingly larger 10 episodes in its third season, while Series 4 and 5 had seven and six episodes respectively, for a total of 36 episodes.", "score": 21.43673747588885, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "As of 2016, it was the 17th-most watched television finale.\nThe show featured over five cast changes within the first year. The only cast members to remain from the pilot until the finale were Harry Anderson, John Larroquette, and Richard Moll. Night Court received a number of awards and nominations. Both Selma Diamond and John Larroquette earned Golden Globe nominations. The show became part of NBC’s semi-legendary “Must See Thursday” along with The Cosby Show, Family Ties, Cheers, and later A Different World.\nThe ABC sitcom series Growing Pains aired from September 24, 1985 to April 25, 1992, with 166 episodes produced spanning 7 seasons. Behind-the-scenes turmoil shaped the series’ later years. Leonardo DiCaprio was brought on in a last ditch effort to pump new life into the show, but ratings did not improve. DiCaprio’s character was dropped and the show canceled.
In 2000, the cast reunited for The Growing Pains Movie, followed by Growing Pains: Return of the Seavers in 2004.\nWho’s the Boss?\nIn early development, the series was titled You’re the Boss. Before the fall 1984 premiere, the producers changed it to Who’s the Boss?, an open-ended title which hinted that any one of the leads could get their own way and be the “boss”. Who’s the Boss? was nominated for more than forty awards, including ten Primetime Emmy Award and five Golden Globe Award nominations, winning one of each.\nALF (an acronym for Alien Life Form) premiered in September of 1986 and ran for four seasons and produced 99 episodes, including three one-hour episodes that were divided into two parts for syndication. The ALF puppet was operated from various “trap doors” hidden within the set. This made filming the show somewhat more hazardous than a normal sitcom as the cast had to remember where each of the doors was so that they could avoid them.", "score": 20.327251046010716, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "The Top...Um...4 (?) NBC Sitcoms of the 1970s\nIt's hard to imagine it now, with the relentless success and excellence consistently delivered from the National Broadcasting Company when it comes to sitcoms -- from The Cosby Show to Seinfeld to The Office, but there was a time when NBC's comedic television offerings were as unbelievably piss poor as they were limited in number. That era would be the 1970s, a decade when NBC was clearly not that interested in funny, 30-minute vehicles.\nTwo of the very few bright spots for '70s NBC comedy: Eddie Albert (left) and Freddie Prinze.\nWhile I, admittedly, was too young to enjoy shows prior to 1975, I watched nearly every show on this list either during its run or in syndication, something that allowed many kids to grow up with shows that were on TV when their parents were young.
But even with all the TV watching I did as a kid, I was hard-pressed to find a list of 10 sitcoms from NBC during the '70s, so I gave up after four. I guess it's possible that The McLean Stevenson Show, a two-year bit for the guy who walked off M*A*S*H (he managed to make the list anyway), or a weak attempt at putting one of the greatest American actors of any era on TV with The Jimmy Stewart Show might make the list, but when you consider that NBC didn't deliver a successful comedy that lasted more than six seasons, what's the point?
Note that to qualify for the list, shows had to have spent the bulk of their lifespan in the decade of the '70s, but given the paltry list of shows to choose from, that wasn't a difficult task.
4. Hello, Larry (1979-1980)
Premise: One Day at a Time with a single dad.
After McLean Stevenson left the hyper-successful M*A*S*H because he wanted a more prominent role in a show, it was natural that other networks would court him. His turn on the wartime comedy was brilliant. Unfortunately, he was never able to come close to his role as Henry Blake. In this, probably his most notable failure, he plays a dad/radio talk show host who moves to Portland from LA with his daughters after a divorce. Unlike One Day at a Time, he didn't have powerhouse creator Norman Lear backing him, and the show died after two seasons.

New Series 1973
Barnaby Jones (1973-1980). Starring veteran actor Buddy Ebsen after a break from his long run as Jed Clampett on The Beverly Hillbillies. The role of Barnaby Jones was that of an aging private detective, assisted by his daughter-in-law, played by Lee Meriwether. The series was a spin-off of a character from the Cannon series. It has since come to be known as "THE old people's show".
The Tomorrow Show (1973-1982). Hosted by strangely straight Tom Snyder.
Following the Tonight Show with Johnny Carson at 1am, it consisted of one-on-one interviews with cigarettes and no audience. Snyder would book some of the most over-the-top and controversial celebrities (Charlie Manson, Iggy Pop, the Plasmatics), creating some of the most unusual television ever aired. His persona of an unhip hipster worked. He was replaced by David Letterman in 1982.
Police Story (1973-1978). This was an anthology series created by one-time police officer and author Joseph Wambaugh, who had a nonfiction best-seller the same year with The Onion Field. It was the first police series done in "realism" and was responsible for subsequent acclaimed crime series such as Hill Street Blues, NYPD Blue and Homicide: Life on the Street.
Last of the Summer Wine (1973-present)
The Young and the Restless (1973-present)
The $10,000 Pyramid
Most Watched 1973 - 1974
1 All in the Family
Television News 1973
Amid many complaints, daytime television was interrupted over most of the summer to televise the Watergate hearings. The three networks ABC, CBS and NBC rotated the broadcasts every third day. Those who watched got to know and admire a previously little-known Southern senator, Sam Ervin, pictured in the middle in the above photo.
Irene Ryan, who played Granny on The Beverly Hillbillies, died of a brain tumor, not tobacco related.
After a 14-year run, the hit adult Western Bonanza closed shop, caused primarily by the death of Dan Blocker, who played Hoss Cartwright, and by being moved from its famous Sunday slot to Tuesdays, up against the then-hip show Maude.

September 1988 to December 1989.
Big Bad Beetleborgs
- Three kids are dared into a haunted house and accidentally free Flabber, a benevolent ghost, who grants them a wish.
They choose to become Big Bad Beetleborgs, characters from their favorite comic book. Unfortunately, the gateway to comic books is left open, and the other occupants of the house, Count Fangula, Frankenbeans, Mums, and Wolfgang Smith, plus the villains from the comic book, use it to cause trouble in the town. Weekday program. Began September 9, 1996.
- This weekly audience participation game show featuring musical and celebrity guests is taped before an arena-sized audience. The hour-long show, hosted by Mark DeCarlo (STUDS), involves audience members vying for big prizes by participating in outrageous games and stunts. Players can keep their prizes or trade them for something else, sometimes better and sometimes worse. The games escalate in size and scope, until the two top winners can risk everything they've won for a chance to win the BIG DEAL. Premiered September 1, 1996.
Bill and Ted's Excellent Adventure
- Sitcom based on the movies. Two high school 'dudes', Bill and Ted, travel through time in a phone booth. Summer 1992. There was also an animated version on Saturday mornings.
- Jason Bateman is the college-educated younger brother of a NJ auto mechanic, who returns to help his brother's business after losing his job on Wall Street. The pilot aired Thursday, June 30, 1994. In it he tries to convince his brother that he can help him.
- Saturday morning cartoon about Howie Mandel's four-year-old character Bobby Generic, and the imaginative adventures he has. On since Fall of
- Spin-off from 21 Jump Street. Ex-cop Booker (Richard Grieco) becomes an insurance investigator, but always gets caught up in a bigger mystery. Aired from Sep 24, 89 to May 6, 90.
Programs That Begin with the Letter C
- Saturday morning cartoon based on the comic. Casper the friendly ghost vs the world. Began January 1996.

Votes: 30,762 | Gross: $0.11M
15.
Maude (1972–1978)
30 min | Comedy
This "All In The Family" spin-off centers around Edith's cousin, Maude Findlay. She's a liberal, independent woman living in Tuckahoe, NY with her fourth husband Walter, owner of Findlay's ...
18. ABC Afterschool Specials (1972–1997)
60 min | Adventure, Comedy, Drama
"ABC Afterschool Specials" was the umbrella show name for various educational shows that were shown in the afternoon, occasionally. Each episode was produced by a separate company.
21. Mighty Aphrodite (1995)
R | 95 min | Comedy, Fantasy, Romance
When he discovers his adopted son is a genius, a New York sportswriter seeks out the boy's birth mother: a ditzy prostitute.
Votes: 31,616 | Gross: $6.70M
24. Another World (1964–1999)
TV-14 | 90 min | Drama
The continuing story of life in the Midwestern town of Bay City, and the love, loss, trials, and triumph of its residents, who come from different backgrounds and social circles. Those who ...
25. The Romantics (2010)
PG-13 | 95 min | Comedy, Drama, Romance
Seven close friends reunite for the wedding of two of their friends. Problems arise because the bride and the maid of honor have had a long rivalry over the groom.
Votes: 10,057 | Gross: $0.10M
26. And the Band Played On (1993 TV Movie)
PG-13 | 141 min | Drama
The story of the discovery of the AIDS epidemic and the political infighting of the scientific community hampering the early fight with it.
27. Kate & Allie (1984–1989)
TV-PG | 30 min | Comedy
When Allie Lowell divorces her husband and gets custody of their two children, she moves to New York City and moves in with her best friend, Kate McArdle, also divorced and raising a ...
30.

- A guy who dies in the 21st century is sent back to the late 1980s to help shape himself as a young man.
Later the title was changed to Boys Will Be Boys, and the show only concentrated on the teenager. Fall of 1987 to Summer 1988? Maybe more than one season, anyone?
- Matt Frewer stars as the father in this sitcom about a family. Winter 1992 to Spring 1993.
- Tom Delany leaves his position as the head writer for Jay Leno and becomes the head writer for the Wilson Lee Show, a black variety comedy program. He's nervous about being seen as an outsider, the black writing staff is upset about having a white head writer, and Wilson, who has his own entourage of yes-men, finds it all amusing. Premiered March 17, 1996.
- Half-hour program that interviewed people who claimed to have had paranormal things happen to them. Edwin Yuen says, "Each episode dealt with a certain subject which was always referred to in the title, such as SIGHTINGS: UFOs or SIGHTINGS: GHOSTS."
- An animated sitcom of a working-class family featuring jerky dad Homer, worrying mom Marge, smart-alecky Bart, over-achiever Lisa, and baby Maggie. Homer works in Springfield's nuclear power plant, run by Mr. Burns. Spun off from The Tracey Ullman Show. Began Christmas 1989.
The Sinbad Show
- Sinbad, a bachelor video-game designer, takes in two foster children. Fall 1993 to Summer
- Quinn Mallory, a physics student, opens up a gateway to alternate Earths, and due to an accident is stuck traveling between different realities in an attempt to get "home." He is joined by his friend Wade Wells, his physics professor Dr. Maximillian Arturo, and "Cryin' Man" Rembrandt Brown. Began March 22, 1995.
- Drama about a black family.
Began winter/spring 1994.
Space: Above and Beyond
- In the year 2063, Earth is threatened by an alien invasion.

When television syndicators reminisce about the good old days, their minds naturally wander to the heady 1980s, when the situation comedy "The Cosby Show" sold on average for more than $4 million an episode.
That, of course, was the high-water mark for the ever-changing syndication business. Since then, prices for off-network sitcoms in the re-run market have dropped considerably. And no one can foresee a time when programs of the genre will command those kinds of dollars again.
But those same execs are optimistic about the future prospects for sitcoms in syndication. The market may not be as robust as it once was, but in a business where the failure of new programs is almost as common as power lunches, sitcoms offer the best chances for TV success, year after year.
"The most successful genre in television is the sitcom," declared Bob Jacquemin, president of Buena Vista Television, pointing out that since 1987, the highest-rated new shows in syndication every year have been sitcoms. This season, for example, "Roseanne" reigned as the No. 1 new strip in syndication.
"A great sitcom is still the end of the rainbow for a producer/distributor," said Greg Meidel, president of Twentieth Television domestic TV.
But the market is getting crowded.
One source estimated that by fall 1996, there may be as many as 25 half-hour sitcoms competing for space in the syndication marketplace.
Making their national sales debuts at the recently concluded 30th annual conference of the National Association of Television Program Executives in San Francisco was a lineup of off-net sitcoms from the major studios.
Buena Vista was hawking "Blossom," "Empty Nest," "Dinosaurs" and "Home Improvement"; MCA, "Coach" and the hour-long "Northern Exposure"; Twentieth Television, "The Simpsons" and "Doogie Howser, M.D."; and Warner Bros., "The Fresh Prince of Bel Air" and "Family Matters."
Although the advertising recession is lifting, and with it the prospects of TV stations are improving, few observers expect all these shows to easily find a home on the TV dial. That's particularly so in light of the slew of first-run talk shows competing for early-fringe time periods.
Business patterns altered
Some television syndication industry execs believe that patterns of the business may have permanently changed.

In May 2014, NBC aired The Maya Rudolph Show, a variety show starring SNL performer Maya Rudolph. The broadcast was intended as a one-off special, but with the possibility of additional episodes depending on its performance. This was the first attempt in nearly six years by NBC to bring this concept back to television (its last successful series in this genre, Barbara Mandrell and the Mandrell Sisters, left the network's schedule in 1982), having struck out back in 2008 with Rosie Live. The special pulled in 7.23 million viewers in its 10 p.m.
(ET/PT) timeslot, making it the third most-watched program of the night.
The prime time variety show format was popular in the early decades of Australian television, spawning such series as In Melbourne Tonight, The Graham Kennedy Show, The Don Lane Show, and Hey Hey It's Saturday, which ran for 27 years. Recent prime time variety shows include the short-lived Micallef Tonight and The Sideshow.
Among today's variety shows in Asia are Taiwan's Guess and 100% Entertainment. East Asian variety programs are known for their constant use of sound effects, on-screen visuals and comedic bantering. Many of the shows are presented in a live-like format in a fast-paced setting, with scenes repeated or fast-forwarded.
Another popular variety show in Taiwan is Kangxi Lai Le, a talk show with variety show elements. Its hosts and guests were associated with variety shows, and it is famous for its bantering, which was written before tapings.
The first Chinese variety show to become a major success was Hong Kong's Enjoy Yourself Tonight, which first aired in 1967 and ran for 27 years. In Hong Kong, variety shows are often combined with elements of a cooking show or a talent competition, with varying results.
Variety programming has remained one of the dominant genres of television programming. While Japanese variety shows are famous abroad for their wild stunts, they range from talk shows to music shows, from tabloid news shows to sketch comedy. The prominent use of telop on screen has created a style that has influenced variety programming across Asia. One of the most popular variety shows in Japan is Downtown no Gaki no Tsukai.

Weird super-hero The Tick and his companion save The City from evil.
Began September 1994.
Tiny Toon Adventures
- Weekday cartoon about adolescent Warner Brothers characters going to cartoon school, where they are taught by the old WB greats.
Tom and Jerry Kids
- Weekday cartoon about adolescent Tom and Jerry. On since at least
- Roommates and friends Donny Reeves and Eric McDougal work in the mail room of Crown, Pink & Wagner. Donny really wants to be a photographer, while Eric wants to be a writer. They have a problem, though: they won't show anyone their work. (It's all about trying to make your dreams come true.) They rent a room to Daisy, a bitter woman, and have a friend Evelyn who owns a clothing store. Began Fall 1995.
Top of the Heap
- A spin-off from Married with Children. A lower-class father has his son work at a country club so that he can meet and marry a rich girl. See also Vinnie and Bobby. April 7 to May 19, 1991.
Totally Hidden Video
- A Candid Camera-inspired show.
- Comedian-filmmaker Robert Townsend hosts this hour of skits, spoofs and musical performances. Fall of 1993.
The Tracey Ullman Show
- British comedienne stars in a series of skits. The Simpsons started off here as filler between commercials and skits. Winter/Spring 1987 to 1989?
- I have no idea... I think this existed sometime in Fall/Winter 1993.
Edwin Yuen says this:
This was the Robert De Niro-produced story about life in NYC. There was a NYPD officer, a black father (I think played by Laurence Fishburne) and some others. It got good reviews and FOX kept hyping the De Niro connection.
Julie Prince adds:
The info you have on Tribeca is a bit misleading. Laurence Fishburne was only in the first episode. Tribeca was an anthology series, focusing on different people each time. I think the black cop (not Fishburne; don't know who played him) and the owner of a coffee shop were the only recurring characters.
It was pure drama.

TV Comedy Shows A-Z
By Josh Bell, About.com Guide
Check out thoughts and analysis of show debuts and significant episodes.
- 'The Goodwin Games' Premiere Episode
- 'Family Tree' Premiere Episodes
- 'Maron' Premiere Episodes
- 'Zach Stone Is Gonna Be Famous' Premiere Episodes
- 'Family Tools' Premiere Episodes
- 'Your Pretty Face Is Going to Hell' Advance Episode
- 'How to Live With Your Parents (For the Rest of Your Life)' Premiere Episodes
- 'The Ben Show' Premiere Episodes
- 'Legit' Premiere Episodes
- 'Burning Love' Season 1
- 'Real Husbands of Hollywood' Premiere Episode
- 'Second Generation Wayans' Premiere Episode
- '1600 Penn' Premiere Episodes
- 'Malibu Country' Premiere Episode
- 'The Neighbors' Premiere Episodes
- 'The Mindy Project' Premiere Episode
- 'Guys With Kids' Premiere Episode
- 'The New Normal' Premiere Episode
- 'Bullet in the Face' Season 1
- 'Go On' Premiere Episode
Although the days of Must-See TV are long gone, NBC still brings a lineup of funny and edgy sitcoms to prime time, carrying on a long legacy of classics.
CBS has been a sitcom powerhouse for years now, with a strong lineup of family-oriented shows that appeal to a wide audience.
ABC's sitcom lineup is small these days, a far cry from when their TGIF block dominated the world of family comedies.
But the shows they have now are more diverse, and more exciting.
- 'How to Live With Your Parents (For the Rest of Your Life)'
- 'Malibu Country'
- 'The Neighbors'
- 'Family Tools'
The fourth broadcast network has been light on the sitcoms lately, but it still has a handful of shows to check out.
In addition to its edgy, acclaimed dramas, FX has developed some pretty outrageous comedies, pushing boundaries while getting laughs.
With its tagline "Very Funny," TBS has established itself as basic cable's home for traditional sitcoms.
TV Land Shows
In addition to its reruns of classic shows, TV Land has a growing stable of old-school sitcoms that recall the classic style.
Adult Swim Shows
Known for its raunchy oddball animated fare, Adult Swim has added live-action shows in the same spirit.

An unlikely emcee, the ex-newspaperman is famous for having signed both no-name acts and famous personalities for his shows, and for facilitating television debuts, such as that of The Beatles in 1964. Originally telecast as Toast of the Town, The Ed Sullivan Show became an institution in itself.
- The Ernie Kovacs Specials (1961, ABC). The comedic genius of Kovacs had many manifestations on various network shows that ran sporadically from 1951 to 1962. Comprising satirical sketches and sight gags, often devoid of dialogue, this episode from the Collection is representative of Kovacs' unique vision.
- The World of Jacqueline Kennedy (1961, NBC). This documentary is both a biography of the then First Lady and a comparative analysis of her predecessors. Filmed at the height of her popularity prior to the assassination of her husband, this program attests to the public's fascination with Jackie Kennedy, whose charm and glamour made such an indelible impression.
Here, she is photographed at work on her "Campaign Wife" newspaper column in the summer of 1960.
- The Lawrence Welk Show (1968, ABC). Noted for the "champagne music" which he himself dubbed, Lawrence Welk was both bandleader and host of this music variety show that ran from 1955 to 1971.
- The Phil Donahue Show (1971, Ohio: Avco Broadcasting). Focusing on a single topic and eliciting audience involvement, Donahue originated the contemporary talk-show format and won a Peabody for his work in 1980. In this show broadcast from Ohio—before the program was syndicated nationally—the host moderates a discussion on penal reform issues with inmates.
- Wide World of Sports (1973, ABC). This program was first broadcast in 1961, won a Peabody in 1966, and continued its coverage of international sporting events until 1998.
- Saturday Night Live (1977, NBC). This unconventional comedy variety show has filled the late-night spot since it debuted in 1975, and is notable for having launched the careers of several big-name comedians and writers. Featured in this photo are all the original performers, minus Chevy Chase, who left the series in 1976.

There you go, now you know as much as I do. Except that I also know the show had Dwight Schultz in it.
- "Good Grief": This one made it into the list simply because watching the intro gave no information about the show. My best guess was that it was about a pro golfer who befriends the angel of death. I was kind of close: it's about the wacky employees of a funeral home. Not recommended.
- "Q.E.D." (1982): I'm just gonna quote the IMDB user summary: "The tales of Quentin E. Deverill, an eccentric expatriate American professor who uses his unique skills to solve mysteries in Edwardian London." Starring Sam Waterston.
AWESOME.
- "No Soap, Radio" (1982): Another intro that gives no information about the show premise (they just put the camera on a roller coaster and showed that footage). However, it does give you enough information to stay away: "Starring Steve Guttenberg." IMDB says this five-episode wonder is about "The third-generation owner of a seedy hotel in Atlantic City." It took guts to name the show after a joke that's funny because it's not funny.
- "Bring 'Em Back Alive" (1982): Bruce Boxleitner is 1930s Steve Irwin! Not as awesome as it sounds.
- "Square Pegs" (1982): Sarah Jessica Parker's first TV show, a kind of proto-"Freaks and Geeks". The introduction is extremely well done.
- "The Devlin Connection" (1982): Someday I'll find it, the Devlin Connection. I did not expect to see a crime show starring Rock Hudson, but IMDB says this wasn't even the first crime show starring Rock Hudson. Not that interesting; we put it down because an extended Apple II sequence in the intro made us think Rock Hudson's character might have a sideline in computer programming, but no such luck.
- The New People (1969): All the excitement of "Lost", in the 1960s.
- We saw promos for the fall of 1969, and they were already using the moon footprint video as shorthand for "important news".
- Also in 1969, sketch comedy group The Committee had a TV show that doesn't show up on IMDB; maybe it never really aired.

- Ran from 9/90 to 12/90 and was done by Mark Frost and David Lynch. Larry Virden says it "was a series that basically was news items - think TV Nation..." Richard Dreyfuss was the narrator.
America's Most Wanted
- Profiles America's most wanted criminals. Fall of 1987 to September
- Weekday cartoon featuring the Warner Brothers, Yakko and Wakko, and the Warner Sister, Dot. Also has Pinky & The Brain, The Goodfeathers, and other goofy characters.
Fall of 1993 through August of 1995. Moved to The WB Network on September 9, 1995.
- Sports writer Jack Cody, recently fired by his boss and former girlfriend Melissa Peters, transforms himself into a woman and is hired to write his former paper's advice column, "Ask Harriet." The advice he gives out is refreshingly blunt and from a man's point of view, which makes the column very popular, but wreaks havoc with his macho-man lifestyle. Premiered January 4, 1998.
Attack of the Killer Tomatoes
- Saturday morning cartoon based on the movies of the same name. I think Fall 1990 to Summer 1991.
Programs That Begin with the Letter B
- Sit-com about three overweight women who live together. I think it was on for two seasons: Fall of 1989 to Summer of 1991.
- Paul (a half-Italian black played by Giancarlo Esposito) is an intelligent, by-the-book cop from Washington, D.C., who moves to Bakersfield, CA, where he's teamed with Wade (Ron Eldard), a nutty free spirit who likes to wear roller skates when he chases crooks and wishes he were black. They're the odd-couple heroes of this uneven but pleasantly off-beat sitcom. Chris Mulkey plays the best supporting role, as a dimwitted macho officer who hates the day shift. Fall 93. Four unaired episodes began airing July 7, 1994.
Batman: The Animated Series
- Weekday cartoon -- shouldn't need explaining. Fall of 1992 to 1994.
The Ben Stiller Show
- A skit show starring mostly Ben Stiller. Fall/Winter 1992/3.

Fat Ladies... Game
BBC East Anglia
Cheapo Cartoon Man
Mike Der Da-Da
Return of the Doughnut
Mind Your Foreigners
Stiff Actors Five-O
The Gag Trade
The Two Quasimodos
A Fine Romance
The much-loved stage and film actress Judi Dench made her TV comedy debut in this series, playing opposite her husband Michael Williams and performing the signature tune.
As Laura and Michael, they're singletons who feign interest in each other to ward off the interference of Laura's match-making younger sister Helen (Susan Penhaligon). But their relationship blossoms into a happy marriage by the end of the fourth and final series.
1976 - 1977
The Fosters holds a unique place in the history of British TV, in that it was the first sitcom written for and starring black actors. It was also the making of 17-year-old Lenny Henry, who had "arrived" the previous year in ATV's New Faces. Here he supported Norman Beaton, Isabelle Lucas and Carmen Munro. There were two series of 13 episodes, plus a 1977 New Year special, all adapted by Jon Watkins from the US sitcom Good Times (CBS 1974-79).
The Galton and Simpson Comedy: An Extra Bunch of Daffodils - 1969
Frank Muir was LWT's original comedy guru. He lured genius writers Ray Galton and Alan Simpson from the BBC to create a series of six plays in the same style as their 1962 Comedy Playhouse, from which Steptoe & Son became a hit. This sharp and delightful two-hander tells the story of a pair of avaricious poisoners, played by Stratford Johns and Patsy Rowlands. They fall in love, with funereal consequences.
LWT blew a fortune on lavish props when Graeme, Bill and Tim switched to ITV for their ninth series. Executives believed that spending so much money on what was deemed to be "just a kids' programme" was not viable, so just six episodes and a Christmas special were made. The theme is funkier than the Beeb version, and the titles show the chaps larking about in the Festival gardens on the South Bank.

Its inaugural program was a late-night talk show, The Late Show, which was hosted by comedian Joan Rivers.
After a strong start, The Late Show quickly eroded in the ratings; it was never able to overtake NBC stalwart The Tonight Show – whose then-host Johnny Carson, upset over her becoming his late-night competitor, banned Rivers (a frequent Tonight guest and substitute host) from appearing on his show (Rivers would not appear on Tonight again until February 2014, seven months before her death, when Jimmy Fallon took over as its host). By early 1987, Rivers (and her then-husband Edgar Rosenberg, the show's original executive producer) quit The Late Show after disagreements with the network over the show's creative direction; the program then began to be hosted by a succession of guest hosts. After that point, some stations that affiliated with Fox in the weeks before the April 1987 launch of its prime time lineup (such as WCGV-TV (channel 24) in Milwaukee and WDRB-TV (channel 41) in Louisville) signed affiliation agreements with the network on the condition that they would not have to carry The Late Show due to the program's weak ratings.\nThe network expanded its programming into prime time on April 5, 1987, inaugurating its Sunday night lineup with the premieres of the sitcom Married... with Children and the sketch comedy series The Tracey Ullman Show. Fox added one new show per week over the next several weeks, with the drama 21 Jump Street, and comedies Mr. President and Duet completing its Sunday schedule. On July 11, the network rolled out its Saturday night schedule with the premiere of the supernatural drama series Werewolf, which began with a two-hour pilot movie event. Three other series were added to the Saturday lineup over the next three weeks: comedies The New Adventures of Beans Baxter, Karen's Song and Down and Out in Beverly Hills (the latter being an adaptation of the film of the same name). 
Both Karen's Song and Down and Out in Beverly Hills were canceled by the start of the 1987–88 television season, the network's first fall launch, and were replaced by the sitcoms Second Chance and Women in Prison.

- Karen (1964-1965) – Girl "Leave It To Beaver"
- Tammy (1965-1966) – Loosely based on the movies
- Gidget (1965-1966) – Sally Field: Surfer girl
- My Mother The Car (1965-1966) – Freudian nightmare number 146
- Captain Nice (1967) – Proto Greatest American Hero
- The Pruitts Of Southampton (1966-1967) – Riches to rags
- The Ugliest Girl In Town (1968-1969) – Cross-dressing love story
- Occasional Wife (1966-1967) – It featured a fire escape!
- Love On A Rooftop (1968-1969) – Newlywed wackiness
- He & She (1967-1968) – Writer, actor, & wife make comedy!
- Good Morning World (1967-1968) – Proto WKRP
You can see full episodes of a lot of these on YouTube, and all 30 episodes of "My Mother The Car" can be watched on Hulu. TV Guide voted it the second-worst show ever made, so every self-respecting MSTie should tune in!
On a personal note, even though these are almost all before my time, I actually watched and liked both "Captain Nice" and "Gidget" in reruns. Also, I was watching the theme song video on my phone at work and two friends popped into the break room while it was playing. We formed a theme-song-singing lounge act that would put the Mandrell sisters to shame! 'Twas nice to finally check "shaming Mandrells" off my bucket list!

|Art by Ted and Pat Michener. The Toronto Star, September 11, 1976.|
None of the shows spotlighted on Star Week's cover had staying power. Clockwise from top left:
Bill Cosby - Cos. Sketch comedy/variety show. Cancelled November 1976.
Tony Randall - The Tony Randall Show. Sitcom about a widowed judge.
Only show featured on this cover to last more than one season, surviving until March 1978.
Nancy Walker - The Nancy Walker Show. Sitcom about an L.A.-based talent agent. Cancelled December 1976. Walker quickly resurfaced as the star of Blansky's Beauties in February 1977.
Jim Bouton - Ball Four. Sitcom inspired by Bouton's controversial best-selling book about life as a pro baseball player. Cancelled October 1976.
David Birney - Serpico. Drama inspired by the Al Pacino movie. Cancelled January 1977.
John Schuck and Richard B. Shull - Holmes and Yoyo. Sitcom about a cop and his robot partner. Cancelled December 1976.
Dick Van Dyke - Van Dyke and Company. Sketch comedy/variety show whose cast included Andy Kaufman. Cancelled December 1976.
Robert Stack - Most Wanted. Crime drama. A Quinn Martin production. Last wanted in August 1977.
Here's the full Saturday preview page.
Doctor Who wasn't the only British import TVO added discussion points to. As shown here, the 1968-70 ITV drama Tom Grattan's War was supplemented with bonus material featuring Andrea Martin, then appearing on another show which debuted in September 1976: SCTV. I'd love to see how Martin illustrated particular points about a young Londoner's adventures set against the backdrop of the First World War. I'm guessing Edith Prickley didn't make an appearance.
What aired against the Time Lord's TVO debut? For Toronto viewers, music, music, music. Hee Haw (channel 2) featured Tammy Wynette, Waltons star Will Geer, and Kenny Price. CFTO (channel 9) ran Canadian Stage Band Festival, featuring big bands from schools and post-secondary institutions across the country.
Dolly Parton's short-lived Dolly!", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-0", "d_text": "Yes, the show's trademark innuendo and misunderstandings could be funny, but what makes the show inherently funny is that the situation driving this situational comedy was a landlord who was tolerant of homosexuals but not unmarried cohabitation. Was there really a point in American culture when this was plausible? See also 70's, Homosexual, Sitcom.\nWith his trusty NES Zapper and Power Pad holstered in his belt, Kevin Keene and his band of unlikely heroes clashed again and again with the forces of King Hippo, Dr. Wily, and the infamous Mother Brain throughout the domain of Videoland. See also 80's, Cartoons, Video Games.\nA CIA agent and housewife team to save the world. See also 80's.\n1980's TBS television show in which the Beaver (still portrayed by Jerry Mathers) is now divorced and living in the suburbs with his mother (still portrayed by Barbara Billingsley). See also 80's, Television.\nTelevision game show in which midwestern housewives try to fit as many Butterball turkeys as possible into a shopping cart and push it around at a high rate of speed. See also 90's.\nHey! A little help here! Add your own funny TV show.\nFor example, MC Hammer's short-lived Hammerman featuring a magical pair of talking dance shoes and the Michael Jordan, Wayne Gretzky, and Bo Jackson collaboration and breakfast-cereal-tie-in ProStars which, incidentally, was \"all about helping kids\". See also 90's, Cartoons, Television.\nWith contestant names like Storm and Nitro, plus the chance to beat each other up with foam covered staffs, American Gladiators was the stuff with which childhood ambitions were formed. 
See also Television.
Not to be confused with Blue Thunder, this movie/television show starred Jan-Michael Vincent (as Stringfellow Hawke) and Ernest Borgnine as two pilots of an advanced battle copter that was basically a flying KITT car (Knight Industries Two Thousand) without wheels. See also Television.
Apparently someone thought that turning the comic strip Cathy into a TV show was a good idea. See also Sitcom.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "I often call this current period of the television sports calendar the black hole of sports programming.
The time between the end of the Super Bowl and the beginning of televised Spring Training baseball games is an empty void when I'm looking for something to watch on traditional television. I don't watch the NBA and the NHL on TV holds my interest for maybe a period. College basketball I can't watch until the tournament.
This didn't use to be as much of a problem back when I could turn instead to my favorite sitcoms in February. Do you remember when February was "sweeps month"? (Maybe it still is, I don't know). Networks would make sure that every top show aired original episodes that month, no reruns. So you'd always have something to view during the week even when the sports scene was boring.
(I know, people have multiple streaming viewing options now. But I find myself going weeks sometimes before I see something I want to view on Netflix or Amazon).
Sitcoms were a big part of my life in the '80s and I thought I'd look back and provide my top 10 weekly shows from the '80s. These were the shows that entertained me the most back then. Most were sitcoms, but not all:
1. Cheers; 2. It's Garry Shandling's Show; 3. The A-Team; 4. Family Ties; 5. Benson; 6. The Wonder Years; 7. The Cosby Show; 8. Moonlighting; 9. The Greatest American Hero; 10.
Too Close For Comfort.\nThe '80s covers a wide maturation period for me, from age 14 to 24, so my interests varied wildly (Too Close For Comfort is basically there because 14-year-old me couldn't wait to stare at Lydia Cornell).\nFor you '80s fans, I'll throw in a few more shows I liked to watch, some long-forgotten: Hill Street Blues, Mr. Belvedere, Valerie (what became Hogan's Family after they killed off Valerie Harper's character), My Two Dads, Night Court, Empty Nest and It's a Living.\nNotice there is no St. Elsewhere, Dynasty or thirtysomething on there. Nine times out of 10, my entertainment needs to make me laugh.\nOK, so enough of that.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "Here are the first ten programs from the seventeenth week of the 1972-1973 television season, which ran from Monday, January 1st, 1973 through Sunday, January 7th. If you’ve been paying close attention you’ll notice that the fifteenth week ran from December 18th to December 24th. So where’s the missing week? It was a Nielsen “black week,” which meant readings weren’t recorded. You can read more about black weeks here.\nThere were a total of 62 programs broadcast during the week and The Los Angeles Times published the complete Nielsen report on January 22nd, 1973. Not a single movie night or special was able to crack the Top Ten, unless you count McCloud, a segment of the NBC Sunday Mystery Movie. The week was topped by All in the Family on CBS. For the week as a whole, CBS averaged a 21.0 rating, NBC was just a hair behind with a 20.8 rating and ABC brought up the rear with a 17.5 rating.\nHere’s the Top Ten, complete with Nielsen ratings:\n|1.||All in the Family||CBS||35.4|\n|2.||Sanford & Son||CBS||29.5|\n|6.||NBC Sunday Mystery Movie (McCloud)||NBC||26.0|\n|7.||Bridget Loves Bernie||CBS||25.8|\n|9.||The Flip Wilson Show||NBC||24.6|\nHere’s how the networks fared on Thursday, January 4th. 
ABC aired The Mod Squad, The Men and Owen Marshall, Counselor at Law. CBS broadcast The Waltons and The CBS Thursday Movie, which was the first half of The Sand Pebbles. NBC filled its schedule with The Flip Wilson Show, Ironside and The Dean Martin Show.
|8:00PM||17.4/26 (avg)||18.9/29 (avg)||19.5/30 (avg)|
|9:00PM||11.2/17 (avg)||18.5/30 (avg)||29.2/45 (avg)|
|10:00PM||17.7/30 (avg)||18.5/30||17.9/30 (avg)|
NBC easily won the night.", "score": 14.73757419926546, "rank": 84}, {"document_id": "doc-::chunk-6", "d_text": "It
became PODW when ABC picked it up for its Saturday morning line-up in
- Fictional show about DEA officers, but carried out in a documentary
style. I remember they used to have "bleep-outs" of foul language to make
it seem more real. Fall/Winter 1990?
The Dirty Dozen
- Serialization of the famous movies. I think that the
characters were different, but it was the same idea. Summer/Fall 1987?
Maybe into Winter 1988.
- Saturday morning cartoon. Larry Virden says, "Henson production mix of
muppets and animation following an animator and his tv series. The catch
is the 'people' are all dogs." Half cartoon and half muppet.
Down and Out in Beverly Hills
- Series based on the movie. Winter/Spring 1987.
Down the Shore
- Sit-com about three single guys and three single gals who share a
weekend beach house in Seaside Heights, NJ. Summer 1992 to Winter/Spring
- The HBO series, now on FOX. Martin Tupper is a NY book editor who
suddenly finds himself divorced and living the single life. January 8,
1995 to April 16, 1995.
- Sitcom about a single father (Dabney Coleman) who is an elementary
school teacher. Originally concentrated on him in school, but later shifted
focus to his family life. This show lasted less than one season, but had
three different opening credits. Fall/Winter 1991?
- Not sure how to describe this one as I didn't watch it. See also Open
House. Winter/Spring 1987 to Spring/Summer 1989?
Maybe 1990, I can't
Stewart Mason says this about it:
It starred Matthew Laurence and Mary Page Keller as Ben and
Laura and followed the stages of their romance from first meeting to
marriage and children. Basically, it was the precursor to Mad About
You, right down to Laura's goofy scatterbrain sister (played by the
genuinely appealing Jodi Thelen) and their annoying married-couple friends,
played by Chris Lemmon and Alison La Placa. It just wasn't as funny as
Mad About You.", "score": 13.897358463981183, "rank": 85}, {"document_id": "doc-::chunk-13", "d_text": "Aired
June 3, 1996 to July 8, 1996.
The Late Show
- I think that this was the talk show originally hosted by Joan Rivers,
and then by Arsenio Hall. It ran from 1987 to something like 1989?
- Brian Bosworth is Jon Lawless, an ex-CIA agent who's now a PI in
Miami. His partner is Reggie Hayes, an ex-military pilot who saved his
life during a mission. The two use extremely high-tech gadgetry to
solve cases. Aired March 22, 1997, and was promptly cancelled.
Life with Louie
- Cartoon based on Louie Anderson's experiences growing
up in the Mid-West. A few prime-time specials during the 1994 - 95 season.
Saturday mornings since September of 1995.
- Saturday morning cartoon. Based on the musical and movie The
Little Shop of Horrors about a man-eating plant that talks.
- Sit-com about three young black women living together. Fall 1993 to Jan 1, 1998.
- The exploits of four twenty-something guys in Pittsburgh trying to
avoid adulthood by maintaining their high school ways. Stosh is
currently a cab driver, Jake is the pretty-boy ex-quarterback, Eddie
lives in his mother's basement, and Mert has to marry his high school
sweetheart Bonny when they get $10,000 saved up. Premiered March 17,
- Ratzenberger from Cheers is the narrator of this show set in
a small Port Ellen, Rake Capital of the World. His sister marries a NYC
cab driver whose first experience outside NY is moving here. 
In the pilot,
his new brother-in-law tried to accept and be accepted by the town. Pilot
June 23, 1994.
Love & Marriage
- Parents Jack, the owner of a parking garage, and April Nardini, a
waitress at a gourmet restaurant,
have a mere 15 minutes a day together in which to get their household
in order, make sure their three street-savvy kids are still breathing
and maintain their still-passionate relationship. Their children are
12-year-old son Christopher, alterna-chick teenager Gemmy, whose life dream
is to pierce for a living, and Michael, who attends junior college and has
five part-time jobs.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-4", "d_text": "49. Late Show with David Letterman (1993– )
60 min | Comedy, Music, Talk-Show | Active
The Late Show with David Letterman is an hour-long weeknight comedy and talk-show broadcast by CBS from the Ed Sullivan Theater on Broadway in New York City.
50. Providence (1999–2002)
60 min | Drama
Dr. Sydney Hansen, a successful plastic surgeon in Hollywood, California, quits her private practice and returns to her hometown in Providence, Rhode Island after the sudden death of her ... 
", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-95", "d_text": "[garbled daytime TV-listings grid; recoverable only as channel lineups for A&E, HALL, FX, CNN, TNT, NIK, SPIKE, MY-TV, DISN, LIFE, USA, BET, and ESPN]", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "|Cagney & Lacey|
One's blonde. One's brunette. One's strong. One's sensitive. They're both women. They're both cops. They're partners.
|The Carol Burnett Show|
One of tv's greatest variety shows featuring the great cast of Carol, Tim Conway, Harvey Korman, and Vicki Lawrence.
A miniseries following the lives of the citizens of Centennial, Colorado, the poor treatment of the Native Americans by colonists, and a murder mystery that took over 100 years to solve.
“Once upon a time, there were three little girls who went to the Police Academy. 
And they were each assigned very hazardous duties. But I took them away from all that and now they work for me. My name is Charlie.”\nThis sitcom follows the lives of the bar staff and regulars at the Boston bar Cheers, where everybody knows your name.\nThis western follows the adventures of Cheyenne Bodie after the Civil War as he looked for bad guys to beat up and women to pursue.\nCHiPs follows the adventures of members of the California Highway Patrol on the job.\nThis anthology series featured a different case and different dramatic story each week. One such episode was Casino Royale, the James Bond story that later became a movie of the same title.\nColgate Theater was a dramatic anthology series in half hour format.\nLos Angeles homicide detective Lieutenant Columbo may be rumpled and a bit eccentric, but he uses that to his advantage to throw murderers off guard and bring them to justice.\nThis show follows the missions of an American infantry squad at the front of the battle lines of Europe in World War II.\n|The Cosby Show|\nThis show follows the every day life of the successful African-American family, the Huxtables, headed by obstetrician Dr. Heathcliff Huxtable and attorney Clair Huxtable.", "score": 12.364879196879162, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Best TV Shows of 1986\nThere were a lot of great shows on TV in 1986. Here are some of our favorites. Did we miss any of your favorites? Let us know.\nThe show won multiple awards, including three consecutive Emmy Awards for Michael J. Fox as Outstanding Lead Actor in a Comedy Series. The show had been sold to the network using the pitch “hip parents, square kids.” Originally, Elyse and Steven were intended to be the main characters. However, the audience reacted so positively to Alex during the taping of the fourth episode that he became the focus on the show. 
Fox had received the role after Matthew Broderick turned it down.\nThe Cosby Show\nThe Cosby Show spent five consecutive seasons as the number-one rated show on television. The Cosby Show and All in the Family are the only sitcoms in the history of the Nielsen ratings to be the number-one show for five seasons. It spent all eight of its seasons in the top 20. It helped make possible a larger variety of shows with a predominantly African-American cast, from In Living Color to The Fresh Prince of Bel-Air.\nWith a total of 275 episodes over eleven seasons, Cheers became one of the most popular series of all time. Nominated for Outstanding Comedy Series for all eleven of its seasons on the air, it earned 28 Primetime Emmy Awards from a record of 117 nominations. Believe it or not, it was nearly canceled during its first season when it ranked almost last in ratings for its premiere.\nMurder, She Wrote\nStarring Angela Lansbury as mystery writer and amateur detective Jessica Fletcher. The series aired for 12 seasons with 264 episodes and was followed by four TV films. It’s among the most successful and longest-running television shows in history. The title comes from Murder, She Said, which was the title of a 1961 film adaptation of Agatha Christie’s Miss Marple novel 4:50 from Paddington.\nThe Golden Girls\nIn 2013, TV Guide ranked The Golden Girls number 54 on its list of the 60 Best Series of All Time. After six consecutive seasons in the top 10, and the seventh season at number 30, The Golden Girls came to an end when Bea Arthur chose to leave the series. The series finale was watched by 27.2 million viewers.", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-1", "d_text": "Although it has, over the years, attained an almost cult status, only 39 episodes were ever broadcast. The program ended after one season, but lived on via syndication.\n1957 – U.S. 
B-52 bombers in the Strategic Air Command went on 24-hour alert status because of the perceived threat of an attack from the Soviet Union.\n1961 – New York Yankee Roger Maris became the first major-league baseball player to hit more than 60 home runs in a single season. Babe Ruth had set the record of 60 in 1927. Maris and his teammate Mickey Mantle spent 1961 trying to break it.\nAfter hitting 54 homers, Mantle injured his hip in September, leaving Maris to chase the record by himself. Finally, in the last game of the regular season, Maris hit his 61st home run off Tracy Stallard of the Boston Red Sox.\n1962 – Johnny Carson replaced Jack Paar as host of the late-night talk program The Tonight Show. Carson went on to host The Tonight Show Starring Johnny Carson for three decades, becoming one of the biggest figures in entertainment in the 20th century.\n1962 – The Lucy Show – Lucille Ball’s follow-up to I Love Lucy – premiered on CBS. It lasted six years.\n1970 – Scores of people were crushed or battered to death in Cairo as millions of people crowded onto the streets for Egyptian President Abdel Nasser’s funeral.\nSoldiers protecting the cortege were overwhelmed when a mass of men and women swarmed around the entourage. Soldiers used rifle butts and batons to repel the crowd in the ensuing pandemonium.\n1975 – The “Thrilla in Manila” – the third and final boxing match between Muhammad Ali and Joe Frazier – was fought at the Araneta Coliseum in Quezon City, Metro Manila, Philippines. The bout is consistently ranked as one of the best in the sport’s history and proved to be the culmination of the bitter rivalry between Ali and Frazier.\nEddie Futch, Frazier’s trainer, decided to stop the fight before the start of the 15th round. Frazier, who by then could barely see, protested stopping the fight, shouting “I want him, boss,” and trying to get Futch to change his mind. 
Futch replied, “It’s all over.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-1", "d_text": "To the modern-day “source” …that being Wikipedia. Whether it is accurate or inaccurate, it certainly provided a (shockingly) thorough listing of “reality shows” and even categorized them.
I know you are sitting on the edge of your chairs to hear what I found out, so:
There are 69 “Docustyle” shows…this is including The Real Housewives of Orange County, New Jersey and Atlanta and of course, the infamous Jersey Shore. I have spent more time defending the real New Jersey Shore since that show first aired than I have ever spent defending anything in my entire life.
There are 70 “Dating” shows. All the Bachelors/Bachelorettes (which are aired in several countries) and Matchmaker shows, not to mention a salty list of some x-rated titles that I never knew existed, nor do I want to know when they are on TV.
Sadly, only four reality shows fall into the category of “Science”…Mythbusters is a favorite among some of my grandchildren.
You can have a style “Makeover” (or your home, your car, your life) on any one of the 35 shows that air.
Hopefully, you will never appear on any of the 20 “Law and Order” type reality fests…and I don’t even think they listed America’s Dumbest Drivers among those, a MOTNSO favorite. “Military” themed shows are also included in this category.
History is relived in 18 programs, but there are only 11 shows categorized as “spoofs.” (My Big Fat Obnoxious Fiance would be considered a “spoof.”)
Game shows? There are 63 “reality” game shows and “play-offs.” I never realized it, but Beat the Clock, which I remember watching back in 1958, was probably the first of these. Wipeout, I blushingly admit, is one that I do watch, always yelling “OH! that’s going to leave a mark!” when the contestants crash into walls.
Are you ready for this? There are 119 talent search reality shows. You thought there was only American Idol and The Voice? 
Sorry, they are just the most well-known. Another bit of television history? You may remember The Original Amateur Hour and Arthur Godfrey’s Talent Scouts from the late ’40s and early ’50s. They were the forerunners of the big time.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-2", "d_text": "According to analysis carried out in 2015 by the TV network FX, there was an unprecedented rise in programming from all TV networks and content producers in the previous couple of years. In 2009 there were 211 unique scripted series, 217 in 2010, 267 in 2011, 288 in 2012, 343 in 2013, 376 in 2014, and 409 in 2015.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-73", "d_text": "[garbled daytime TV-listings grid; recoverable only as channel lineups for A&E, HALL, FX, CNN, TNT, NIK, SPIKE, MY-TV, DISN, LIFE, USA, BET, ESPN, ESPN2, SUNSP, DISCV, TBS, and HLN]", "score": 8.750170851034381, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "Friday, October 30, 2009
Where Were You in 1979?
Since 1979, Kitchen Magic has provided quality kitchen refacing and remodeling to our customers at an affordable price. You've helped us become the largest kitchen refacer in the U.S. We will celebrate our 30th anniversary in November of this year, and to celebrate, we'll look back at some of what made 1979 special.
Earlier we looked back and reminisced about the top music hits for 1979. We also had a look at the top movies of that year.
In 1979, there were just over 76 million households with televisions, but viewers were pretty much confined to the three broadcast networks, ABC, NBC, and CBS (Fox was many years in the future) or a few local channels. Just one out of every five homes had access to cable TV, and Time magazine marveled that cable viewers might be able to see as many as 36 channels on it some day.
Let's count down the thirty most popular shows in 1979. Last week we looked at shows #6 through #10. The week before that #11 through #20, and before that, #21 through #30.
Here are shows #1 through #5. How many of these do you recall? 
Congratulations to reader Cynthia Dennis in the Philadelphia area, who guessed the number one show for that year.\n1) 60 Minutes\n2) Three's Company\n3) That's Incredible\nThis blog hosted/created by Kitchen Magic.", "score": 8.086131989696522, "rank": 95}, {"document_id": "doc-::chunk-0", "d_text": "|ABC||Life Goes On||Free Spirit||Homeroom||The ABC Sunday Night Movie|\n|CBS||60 Minutes||Murder, She Wrote||The CBS Sunday Night Movies|\n|FOX||Booker||America's Most Wanted||Totally Hidden Video||Married... with Children||Open House||The Tracey Ullman Show||It's Garry Shandling's Show|\n|NBC||The Magical World of Disney||Sister Kate||My Two Dads||NBC Sunday Night Movie|\nNote: On ABC,America's Funniest Home Videos premiered as a series at 8:00 on January 14, 1990.\nOn FOX, The Simpsons premiered as a series at 8:30 on January 14, 1990 after airing its Christmas special on December 17, 1989 in the same timeslot.\n|ABC||MacGyver||Monday Night Football|\n|CBS||Major Dad||The People Next Door||Murphy Brown||The Famous Teddy Z||Designing Women||Newhart|\n|FOX||21 Jump Street||Alien Nation||Local|\n|NBC||ALF||The Hogan Family||NBC Monday Night at the Movies|\n|ABC||Who's the Boss?||The Wonder Years||Roseanne||Chicken Soup||thirtysomething|\n|CBS||Rescue 911||Wolf||Island Son|\n|NBC||Matlock||In the Heat of the Night||Midnight Caller|\nNote: On ABC, Coach aired its second season at 9:30 beginning November 21, 1989, as a replacement for Chicken Soup, which declined in audience numbers each week and lost much of the lead-in audience of the #1 hit, Roseanne.\n|ABC||Growing Pains||Head of the Class||Anything but Love||Doogie Howser, M.D.||China Beach|\n|CBS||A Peaceable Kingdom||Jake and the Fatman||Wiseguy|\n|NBC||Unsolved Mysteries||Night Court||Nutt House / Dear John Jan.)||Quantum Leap|\nNote: On NBC, the sitcom Dear John moved to 9:30 on January 17, 1990.\n|ABC||Mission: Impossible||The Young Riders||Primetime Live|\n|CBS||48 Hours||Top of the Hill||Knots 
Landing|", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-8", "d_text": "The movies, not counting 1971's And Now for Something Completely Different (a compilation of older TV sketches re-shot on film for the American market, who had yet to see the original series) were released in 1975, 1979, and 1983 respectively.
- The Wallace & Gromit series has been active since 1989. In those twenty-plus years, there have been a grand total of six full-length instalments, only one of which was a film (though there have also been a number of >5min shorts).
- The iconic '70s post-apocalypse drama series Survivors had three series of 13, 13 and 12 episodes – a middling example of this trope, but outdone by its 2008-10 Remake, which consisted of two six-episode series.
- ITV 1's detective show Vera has thus far had two series of four episodes.
- BBC 2's police show Vexed had a debut series consisting of three episodes, though a second series of six episodes followed.
- The Fades had one season of six episodes.
- Cuckoo, the BBC Three comedy, had a first series consisting of only five episodes.
- Sirens, the Channel Four comedy focused around a group of ambulance-men, lasted a total of 6 episodes.
- KYTV had three seasons, each with six episodes, making for a total of eighteen episodes plus pilot. Its radio predecessor, Radio Active, clocked in at an impressive 54 episodes, over seven seasons, including a pilot and a later one-shot special.
- BBC 2 standup/sketch variety series Victoria Wood As Seen On TV ran for two series of six episodes each in 1985 and 1986 and a Christmas special in 1987 for a total of 13 episodes.
- Victoria Wood's 1998-2000 BBC 1 sitcom dinnerladies (sic) featured many of the same performers and the same producer as As Seen On TV. 
It ran for two series, one of six episodes and one of ten episodes (the second one deliberately designed to wrap up the plot rather than lead into a third series), for a total of 16 episodes.
- Wire in the Blood has five seasons and a total of 19 episodes (3 in the first season, 4 in the rest), each 90 minutes long.
- Jonathan Creek has four seasons of 5-7 episodes, and a couple of specials.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-19", "d_text": "The stories were entirely predictable, and the writing was
average only at the best of times. Most weeks, the high point of the
episode was the catchy opening theme, "Workin' For a Livin'"."
- Semi-sleazy one-hour news journal. I think this was 1989/1990.
- A loosely historical action-drama set in 400AD Ireland. Conner is the youngest son of
a Celtic tribal king. When his tribe is wiped out by a rival tribe with the aid of Roman
forces, only he and the royal champion Fergus survive. An ancient wizard, Galen, convinces
Conner that he must unite the warring tribes against their common Roman enemy. Now he leads
a small army of tribeless Celts and escaped slaves against the Roman princess Diana and her
consort Casius Longinus, a 400-year-old Centurion who is cursed with immortality for spearing
Jesus on the cross. Conner's friends include Fergus, Catlain, an escaped Celtic slave, and
Tully, a young North African escaped slave. Premiered July 14, 1997.
- Comedy Drama about a black family in Baltimore. Fall of 1991 to Spring
1994? Started broadcasting live during the second season I think.
Round the Twist
- A live-action weekday program produced by Children's Television Workshop of
Australia. The Twist family moves from the big city to live in a lighthouse, where
many bizarre and surreal events take place. 
Premiered July 7, 1997.
The Ruby Wax Show
- Ruby Wax interviews celebrities in a quirky and spontaneous manner,
which includes going shopping with them, lying in bed with them, going
in the jacuzzi with them, and just plain having fun. The segments are
taken from the British show Ruby Wax Meets.... Aired
June 9 to July 7, 1997.
Programs That Begin with the Letter S
Saturday Night Special
- Roseanne's experimental comedy skit program. Cast includes Jennifer
Coolidge, Kathy Griffin, Warren Hutcherson, Heath Hyche, Laura
Kightlinger, C.D. LaBove, Rob Rubin, and Jason Davis. Aired April 13, 1996
to May 18, 1996.", "score": 8.086131989696522, "rank": 98}]} {"qid": 25, "question_text": "What are the different personality types of horses and how do they respond to stress?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "When a horse is being trained,
analysis of the horse's character type must be sought before instructions
are put forth for training/teaching.
What is your horse's type?
Lazy - stubborn
Fighter - fight
Nervous - flight
Willing - bargainer
Lazy horses can become
chums with you very easily and the person may think that the training is
going to be a breeze. That is until the person decides to start teaching
the horse. The horse goes from lovey-dovey to a refusing uncooperative
Fighters don't want
anything to do with a person from the start. It may be called a rebel.
Ready and on alert all the time. The horse will let you know that he wants
to fight by stamping or snorting in an aggressive manner. A person should
never approach this type of horse thinking that it will not harm you.
Nervous horses need to be
trained with much more patience and slower progress than the other
types. A nervous horse may see a candy wrapper and go into a frenzy
while the other types will not think much of this.
Willing horses use their
minds to the greatest possibility. They rationalize the best of the 4 types
of horses. They are thinkers with good courage. 
A willing personality
type will possess the other personality types on occasion, but not as a
With each of these types the
stages of training and the application of the technique will vary. The
lazy horse will have different responses to an application than the
Actions of a horse out in the
pasture cannot determine his/her personality when it comes to training.
A horse of course has the inborn
personality of fight or flight. But a horse is shaped into a deeper self
personality by the surroundings that it is in.
Semi-human surrounding
Horses that are in a nonhuman
surrounding are known as unhandled horses that are not exposed to
handling by humans from birth or the first couple of months of life.
They will take quite some
confidence building to prove that you, the human, are not going to
harm them. The longer the horse is away from the hands-on exposure
to humans the longer it will take to get the confidence built.
To see in the horse's eye what
they see humans as would be like a human seeing an alien.", "score": 49.99860597977194, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "What I’m reading: The Girl in The Box by Janet Miller – Futuristic tale about true love – what else?
In my last blog three weeks ago I talked about different horse personalities, particularly the Extrovert Thinker as typified by my horse Star and how this type relates to Alpha Heroes. Today, I’d like to discuss the Extrovert Reactor and the smart-ass, quirky heroine.
First, a quick note. These personality types are on a continuum, of course. Some are more extroverted than others, some are less reactive. Some can change—become less introverted or more of a thinker. But their basic type remains and influences their actions.
My mare Portia, a grey Anglo-Arab (half Thoroughbred and half Arabian), is a typical Extrovert Reactor. She’s very sensitive to stimuli and hyper-aware of her environment. 
Even at age twenty-nine and retired, she can be challenging and needs an experienced handler. Not that she’d ever deliberately hurt someone; she just tends to react first and think later.\nShe’s also a horse that really enjoys life. She loves to play and will try her best to please. She’s the one who yells a greeting when she sees me and comes running up to the gate eager for a treat or an outing. In the show ring or a parade, when she “turned on,” all eyes were on her. She also used to fly down a new trail with her incredible walk, eager to see what was around the next corner. Even though she can be a pain in the butt, her exuberance is a lot of fun.\nWhen I first got her as a seven-year-old, she was ready to spin and bolt at the slightest provocation—a rock that looked funny, a horse scratching its ear with a hind leg, a COW on the trail! She soon learned bolting wasn’t acceptable behavior so she tried others. Like teleporting halfway across the arena or jittering in place or jumping straight up. I eventually discovered that part of the reason for her reactivity was that she was in pain. She needed chiropractic care (just starting with horses at that time and not widely accepted) and a correctly fitted saddle (which proved to be almost impossible to find). Once those problems were solved, she settled down a lot.\nBut she still retained her quirky personality.", "score": 47.56225288221755, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "There's a very interesting post by Laura over at Equestrian Ink on what is a solid-minded horse, with further interesting discussion in the comments. Laura makes a distinction between a horse that is soft - e.g. 
responsive and obedient, and with you, even under stressful circumstances and even if the horse is scared - and a horse that is solid-minded - a horse that is self-confident, not worried by much and able to cope, even if the rider is less experienced or the horse is put in a situation where many horses might be afraid. I guess the distinction she is making - Laura, correct me if I've got this wrong - is that the soft horse looks to its rider for guidance and direction and trusts its rider enough to follow those directions, whereas the solid-minded horse is confident on its own. She also makes the point that the solid-minded horse may in fact not be particularly physically soft, or even mentally soft in the sense of always immediately responsive to the rider.\nI guess what I'm looking for in my horses is what Laura calls soft - responsive and willing no matter the circumstances even if worried - plus that further intangible self-confidence that comes from a combination of innate disposition, good training and handling and also physical softness and mental willingness - I guess I'd call this softness+. And I agree with Laura that there are good solid-minded horses out there who aren't particularly soft physically or always willing and will pack people around - but I think solid-minded horses can also be soft.\nSo how do you get from physical softness to mental softness to all of that plus solid-mindedness - the whole package - softness+? There was some interesting discussion in the post and in the comments of the role the horse's training and job experience played, and about how horses should be best managed to grow into solid-mindedness. I do think it requires a horse to grow up and mature and have lots of miles under the saddle. 
I also believe that there are some horses who just can't get there, or who are so difficult to get there that it might as well be impossible, at least for ordinary, reasonably skilled horse people.", "score": 46.64318920459989, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Carey Williams, Ph.D., Extension Specialist in Equine Management\nWhat is Stress?\nStress is the body’s response to anything it considers threatening. For a horse this could be anything, including trailering and traveling, showing, poor nutrition, feeding at irregular times, changes in other routines, environmental toxins, interactions within their social environment, variations in climate, and illness.\nSome types of stress include various physical stresses that are based on the physical makeup of the animal and its ability to respond to changes in diet, injury, etc. Psychological stresses are based on a horse’s personality and its perception of life. For example, some horses are more stressed than others by being in a stall for long periods of time.\nHow do Horses Cope with Stress?\nThe horse’s basic stress response starts with a change in behavior, either by moving away from a stimulus, swishing its tail, bucking, tensing up, etc. This stress will then cause activation of the sympathetic nervous system, called the “Fight or Flight” response. The sympathetic nervous system will create an involuntary action of the intestines (diarrhea), endocrine glands (production of adrenaline and cortisol), and heart (increase in heart rate). 
Next, the neuroendocrine system will be activated, allowing the horse’s system to increase its energy utilization.\nEach horse deals with stress in a different way depending on its personality.\nDemonstrative, Confident Horse\n• Lets you know when it is stressed!\n• Bucks, kicks, bites, is very curious, mouthy, a troublemaker, etc.\nDemonstrative, Fearful Horse\n• Worries about everything!\n• Shies the first time it sees things and needs time to relax.\nPassive, Confident Horse\n• Usually wonders, “What’s everyone worried about?”\n• Not normally stressed, internalizes stress, shows little change even when stressed.\n• Usually is the last one in the field to take off running if something runs out of the woods.\nPassive, Fearful Horse\n• Wants to please!\n• Seems willing to do anything, but will tighten muscles and lips when stressed.\n• Won’t show fear until pushed over the limit.\nDuring exercise heat production will increase up to 50%. This can create a problem when exercising under extremely hot and humid conditions.", "score": 46.59345711006833, "rank": 4}, {"document_id": "doc-::chunk-3", "d_text": "Let me say that again, it takes the RIGHT personality using the Right technique for the Right personality. In other words, the person doing the training has to be of the right frame of mind, and MUST understand, and train to, the personality of the trainee. This may be a little confusing so, as I often do, I’ll use my wife’s horse training to clarify for us humans.\nA good horse trainer will always use the first lesson to get to know the horse. Horses, like people, have distinct personalities. Unlike people, horses are big and strong and can hurt you if you push the wrong buttons. Because we respect their size and our ultimate goal is to produce a good horse, we pay attention to who we are dealing with and train accordingly. Some will respond to training like second nature and some require warm-up and praise to build confidence just to leave the stall. 
And the sooner you begin to understand the horse, the sooner the real training can begin.\nWhen it comes to personality, horses, like people, can generally be grouped into four categories or styles:\n• Challenging; Prideful, Territorial, Strong Sense of Self.\n• Social; “Official Greeter”, Interactive, Easy to train.\n• Fearful; Guarded, cautious, looks to another horse or a person for strength.\n• Aloof; Independent, Delayed or Dull responses, Tends to ignore commands.\nOur Challenging horse was Faith. Faith would meet you head-on in the round pen but once she understood you were serious, would perform better than her peers to prove her superiority. In the alternative, our Social horse Miss Fitz will go along with just about anything we ask her to do as long as she is getting the attention. We might use different adjectives to describe people's personalities but you get the idea. Each personality has specific preferences and tendencies that we look for and use in our training techniques. Each has its advantages and challenges but understanding the horse allows us to set expectations and get the most we can out of a session and, ultimately, out of the horse. This applies to our people too!\nSo how does this change our approach to training?", "score": 46.44705907655215, "rank": 5}, {"document_id": "doc-::chunk-2", "d_text": "If your horse doesn’t learn as fast as everybody else’s, maybe you’re training him slowly. Or maybe he’s not a fast learner. Some horses are smarter than others, some are more willing than others, some are lazier than others, and some are moodier than others. Some horses have a more active flight response than others. Some have a greater tendency towards aggression. Some are more sensitive, some are more stubborn. There’s a reason why sport horse breeders select horses for temperament; not all horses are created equal in terms of trainability. 
Things go a lot better once I can admit to myself that the horse has weak points, and then learn to work around those weak points instead of butting my head against them.\nWhat about the abused horse that wants to run a mile when you go near him? Or the horse that had a gentle groom and a rough rider, who is perfect until you get on and then randomly throws you into next week? Is it your fault that somebody else beat him half to death? Of course not. His problems are not always your problems. The horse is not merely a mirror of his rider; he is a flesh-and-blood creature, unique, responding differently to any other horse. Only a great rider who has worked with a group of horses for a long time will stamp each horse with his trademark, and even then each of those horses will be different.\nMaybe that horse that leans on his rider’s hands until his jaw gapes open is 100 times better than he was six months ago. Maybe that horse that just threw his rider a mile doesn’t have a sore back; maybe he had a bad hair day and didn’t feel like carting people around today. Maybe that rider with the one heel in the air has ripped a ligament and can’t put that heel down no matter how much they want to. I try to view every problem in my own training objectively (with varied success, I admit), with the goal to solve the problem, not to lay the blame. Blaming anyone achieves nothing in the end.", "score": 45.24114770251593, "rank": 6}, {"document_id": "doc-::chunk-4", "d_text": "Regardless of personality style, you must always deal with the behavior that your horse presents: If he is distant and removed, you need to get his attention; if he is confrontational, you need to establish your authority; if he is distracted and inattentive, he needs to pay attention and respond to you; if he is worried and mistrustful, he needs to be reassured. 
And we adjust our approach, not our goal, by assessing our trainee before we begin.\nEach personality style is unique and some horses (and people) exhibit a combination of traits. None are bad but this IS where the personality of the trainer comes in. Not all trainers are well suited for all trainees. And, we prefer to train the horses we get along with and wish we could avoid the ones we don’t. Sound familiar? But effective training is really about bonding so it is important that we know ourselves well enough to understand what WE need to do to bond with our trainee. A good trainer understands themselves and who they are dealing with and THEY adjust accordingly. We don’t expect the horse to change color, we change color to fit the horse. Is it tough to do this? You betcha! And many good trainers fail miserably from time to time when they are working with a challenging fit. But in the end, our goal is to produce a good horse, not to prove that we are superior and if that means changing OUR approach so be it. The alternative is to risk ruining a potentially good horse, not to mention the time and aggravation we could go through for nothing. The same could be said for our “potentially” good people.\nFortunately for our people training we have some great tools that we use to clearly identify who we are, who we are working with and how to maximize our training effort with them. I use DiSC assessment in coaching and training to help individuals and teams understand how they can work most effectively with others. The assessment identifies the personality preferences and tendencies we exhibit in our interactions with team members, family members, each other. It shines light on who we are and how we are likely to interact by defining our styles as: Dominant, influencing, Steady, and Conscientious or some combination of the four.\nThe DiSC assessment is inexpensive and easy to perform. 
The results clearly outline the trainee's strengths and challenges and provide direction for improving performance.", "score": 42.74419223081899, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "The origin and makeup are, however, different for each specific warmblood breed.\nWhy Are Horses Called Cold Blooded, Hot Blooded Or Warm Blooded?\nTemperament descriptions for hot, cold, and warm-blood horses:\n- Hotbloods tend to be hot-headed, feisty, and sensitive.\n- Most coldblood horses are calm, cool, and less reactive.\n- Warmbloods fit in between and can range from more energetic and sensitive to more laid-back and lazy. A warm temperament perhaps.\nEnvironment origins for hot, cold, and warm-blood horses:\n- Coldbloods originated from colder, harsher climates.\n- Hotbloods were developed in warm to desert-like areas.\n- Warmbloods were developed in the European regions but were influenced by both hotblood and coldblood horses.\nTypes Of Horses\nHorses and ponies fall into two groups: horse breeds and types.\nThere are different ways to group horses by type. But there are four main horse types that all horses can fall under, and then subcategories under those.\nWhile the actual origins of horse types are unknown, there are many theories as to where they originated from.\nThe two earliest types of horses, according to current knowledge, are hot-blooded and cold-blooded. 
Warmblood, pony, and miniature breeds are said to have branched from these two throughout time.\nFour Main Horse Types\nSearch on the internet, and you will find different information on the main horse types depending on your source.\nHowever, after years of learning and falling down rabbit holes, I have come to the conclusion that these are the four main types of horses.\nThe four main types of horses are heavy horses, light horses, ponies, and miniature horses, which are all categorized by size (height and body build).\nThe other types are subcategories that can fall under these.\n- Heavy Horse\n- Light Horse\n- Pony\n- Miniature Horse\nWhere Do Hot, Cold, and Warm Blooded Horses Fit Among The Four Main Horse Types?\nColdbloods fall under the "heavy horse" type. Warmbloods and hotbloods are both light horse types.", "score": 41.32843622057288, "rank": 8}, {"document_id": "doc-::chunk-2", "d_text": "If we don't make allowances for this species-specific behavior in our training approach, we're doomed to fail. We can help them learn to react to their fears in a "safer" way, but we cannot train their fear out of them. Their sensory modalities are different than ours, and we may not have the physiologic ability to perceive much of what they react to. However, there is usually a legitimate, underlying reason for the flight reaction, whether or not we're actually aware of it.\nWhile it's true that there are behavioral patterns that prevail in most horses, it's also important to recognize that each horse is an individual with a unique personality. Instead of struggling to make horses fit into a mold, accept and embrace each horse's individuality as a mystery you have the privilege of trying to understand. Riders have a role much like that of a school teacher in that we must keep our pupils feeling positive about the training process by allowing them room to express their uniqueness. 
You’ve probably watched a jumper or two that likes to throw in a little buck stride here and there on course that seems to say, “I’m feeling good and having a great time!” Others just go quietly and uneventfully about the same business. They’re all doing the same work, yet each horse’s individuality shines through. If a rider is attuned to her horse, she’ll know the difference between ordinary behavior stemming from her mount’s unique personality quirks and the tense, rigid, sullen behavior of an unhappy horse. This is a very important distinction to make, because the former is acceptable, while the latter should alert the rider that it’s time to slow down and get the horse back on the team.\nEven when we truly love and cherish our horses, it’s very easy to succumb to the ever-present pressure to perform. Who out there doesn’t want to be the one with the glorious horse that can do it all? What show rider doesn’t have her sights set on the blue ribbon? The problem is that sometimes the desire to excel makes riders try to rush things that really take time to develop.\nWinning was everything to one longtime campaigner I knew several years ago. He tried out new horses as easily as he changed clothes each show season. If the horse didn’t win, down the road he went, and another prospect was purchased to take his place.", "score": 41.23401058984171, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Horses are majestic animals who respond in the moment. They are honest and never hide how they feel, giving immediate, non-judgemental feedback to the energies, behaviours and emotions around them.\nThey are unique with each horse having his/her own personality, likes/dislikes, behaviours, strengths, quirks and vulnerabilities. They are prey and herd animals which means they are highly aware of everything that is going on around them, on a physical and energetic level. 
They are highly social and empathic, relying on their herd for safety, comfort, play, interaction and survival.\nBecause of all of the above characteristics, horses can teach us humans about ourselves – about how we interact with others and the environment, how we behave within relationships and about deeper belief patterns and attitudes we may hold – by responding to our attitudes, behaviours, feelings and energy. Horses can lead us into connection, learning and growth.\nCONNECT. LEARN. GROW.", "score": 40.0631360033564, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "Horses, like humans and pets, tend to have a variety of behavioral problems. These behavioral issues can occur suddenly after a traumatic experience, or can be deeply rooted in the psyche of the horse from previous experiences. Anxiety in horses is a common condition that can affect every aspect of managing a horse, as well as the horse's appearance. What causes anxiety in the horse? The anxiety will almost always have a root cause. The cause may be physical (caused by a physical factor – such as a snake or a banging barn door) or psychological (an abusive past, or stress and separation anxiety).\nIt can also be hereditary – so it is important to determine the cause of the problem. All horses that exhibit sudden anxiety should undergo a complete checkup at the vet. Other causes of anxiety may include: * A horse in a new or stressful environment * Fear of other horses that share the stable * Lack of a close relationship * Lack of training or experience * Traumatic abuse. Remember that horses are fight-or-flight animals, so when put in stressful situations they have a tendency to become anxious and want to ‘run’. Some horses will be less nervous than others because of how they manage stress. Also, always check for physical things that can cause anxiety – such as a barn door knocking, firecrackers in a neighboring field, etc. 
Diagnosing anxiety in the horse: Because anxiety comes in varying degrees, it is important to know your horse and watch for physical and behavioral changes:\n* Hiding in the corner of the barn\n* Widened eyes\n* Restlessness or trembling\n* Twitching\n* Jogging or stepping forwards and backwards\n* Rearing\n* Sleep disturbances\n* Loss of appetite\nRelated physical conditions include: constipation or stomach upset, colic, and eczema or other skin problems and hair loss.\nHelp: Many horse anxiety medications exist to help with nervousness, excessive anxiety or stress in horses. Unfortunately these medications are not without side effects, and while they may help relax the horse, their long-term effects are unknown. In addition, sedative and painkilling drugs can numb a horse's senses – making competitive events as well as training more difficult, because both require concentration and vigilance. Talk to your vet about other alternatives. Many natural remedies calm the horse while still keeping it alert. Homeopathic ingredients such as Cham. and Kali. have traditionally been used for centuries, promoting the appropriate levels of salts in living cells necessary for physical and mental health.", "score": 37.973711826208856, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "As with humans, anxiety may affect certain horses and can have consequences for the horse's physical, emotional and physiological condition. In this article, we will explain how you can spot signs of anxiety in your horse and how to deal with your horse's anxiety when such signs appear during competition. 
Finally, we will explain how the use of Seaver connected devices can help detect and measure the stress level of the horse.\nIn a competition horse, anxiety and fear can have an immediate influence on performance, reducing the ability to focus and triggering unusual muscle tightening that makes the horse harder to ride and ultimately reduces performance.\nSome stress can be useful, providing good adrenaline for the horse or the rider. The horse is then said to be "alert": his senses are heightened and he pays great attention to the new environment in which he has to work and compete.\nHowever, when the stress level is too high or when the horse gets too sensitive to external factors, stress turns into a negative and detrimental emotion that needs to be detected and cared for.\nWhen a horse feels anxious on a regular basis and over relatively long periods of time (hours, days…), he is very likely to develop signs of ulcers, weight loss or various tics. Such disorders take various shapes, but they all have immediate and long-term consequences for the horse's physical condition.\nMeasuring the stress level of a human athlete is not easy, but men and women at least have the ability to analyze and express their emotions. 
Anxiety is not easily measurable in a human body, and it is even more complicated to measure in a horse.\nResearch has shown a link between a horse's cortisol level and its anxiety.\nFaecal and plasma cortisol concentrations have turned out to be indicators for the measurement of equine unease and well-being.\nMeasuring cortisol in a horse's saliva, or by taking a sample of the horse's blood, yields comparably accurate results for identifying stress and serves as a useful tool for improving equine welfare.\nSuch a method has numerous constraints (availability, shipment of the sample, time required to get the results, etc.) and may also cost a great deal of money.\nOther signs can help you read your horse's emotional condition to identify anxiety, fear, stress or discomfort.\nVices such as weaving, cribbing, wood chewing, wall kicking and fence walking are all signs of stress. Unusual sweating or a decrease in appetite are other indicators of the horse's stress level.", "score": 36.25393893608794, "rank": 12}, {"document_id": "doc-::chunk-1", "d_text": "Licking and Chewing\nNatural horsemanship information has suggested that licking and chewing is a sign that a horse is accepting new information, such as during training. This action may be more like yawning in its function, as a way to release any stress it may have felt.\nColic symptoms can be caused by stress. A new herd mate or changes in routine, weather or handler can be enough to make some horses mildly colicky. Chronic stress can lead to EGUS, which, of course, can cause colic symptoms.\nAny number of stressful situations can cause a horse to tremble. Just the appearance of the veterinarian, farrier or the arrival of a trailer in the yard can cause some horses to start shaking. Usually, as soon as the cause of the stress disappears, the trembling stops.\nHigh Pulse and Respiration\nWhen a horse becomes stressed, its pulse and respiration rates can increase, sometimes drastically. 
It’s important because of this to know your horse’s basic TPRs (temperature, pulse and respiration).\nSweating\nAs a horse’s pulse and respiration may increase when stressed, it may start to sweat (and tremble). Work stress tends to show up between the horse’s legs and under the saddle area, and can eventually cover the horse’s whole body. It depends on how hard and long the horse works. A stressed horse may sweat in patches, however. Patches of sweat can also show the location of old injuries.", "score": 34.36236565806531, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "I meet people who label a behavior of a horse as dominant but find that it is often a lack of manners. Many people do not know how to teach their horses and children to be polite, willing, respectful and have good focus and work ethics.\nDominance is not a personality trait – it is a relationship state between two animals to decide who is going to control resources like food, water and mates.\nHorses cannot dominate people; they learn what works and what does not work in their efforts to get what they need or want.\nAggression and dominance are not the same.\nHorses can be aggressive in certain situations, and there can be many reasons for that. It can be complex, but aggression towards humans and dominance towards humans are different.\nIf a horse is aggressive towards you, dealing with it by being aggressive back may stop the behavior, or it may not.\nAggressive behavior can be suppressed – that means it does not appear anymore. You will have suppressed the behavior. But suppressed behavior does not mean changed behavior. The aggressive behavior is still in there. Depending on what caused the aggression, and given the right conditions, it may even come back.\nI think it is important to take the time to learn the complexities of horses' social lives, life in the horse herd and how roles change in the herd. 
We need to keep in mind that behavior in a wild herd, with a lot of space to roam, is very different from that of domesticated horses kept in limited spaces.\nHere are some videos that I have taken of different behavior I have observed in my herd:\nVideo 1 – My lead mare Darling schooling new ones\nVideo 2 – Introducing a new herd member (in a small space)\nVideo 3 – Gaia likes Jack …..\nWhat do you see?\nLet me know in the comment field and let's talk about herd behavior.\nKind regards, Stina and Jonna\nYurumein, St. Vincent, West Indies", "score": 34.29114153318846, "rank": 14}, {"document_id": "doc-::chunk-1", "d_text": "Do you feel comfortable with creatures that think for themselves and are confident about their actions? Or do you prefer shy, gentle beings that depend on others for guidance? Think about your own personality and how others would describe you. Because both horses and humans are social mammals, we match up best with similar temperaments. In other words, "like attracts like". Quiet, calm people can certainly have a positive effect on "hot" horses, helping to settle them down, but for long-term friendship, a calm horse gets along well with the same type of person. Although pairing a high-energy person with a high-energy horse may not be the safest combination, these matches usually do well too, especially if they are competitors.\n4. How comfortable are you around horses?\nAre you ever intimidated or afraid? What situation would make you feel unsafe, if any?\nIf you feel unsafe around horses, the best match is usually a horse that likes to take care of people and is very confident of her own ability. It always surprises me when a horse that has been donated to a handicap riding program because of bad behavior ends up as the "best horse". Why does this happen? Because the horse was smart and confident and didn't tolerate bad trainers who tried to make her do something she did not want to do. 
The same horse excelled at being able to take care of humans who value her intelligence.\nJust because a horse has more training, this does not necessarily make him a better partner for a novice. Often, horses are trained to be "bomb proof" so as not to react to scary stimuli. But they can be time bombs waiting to happen. They will not look people in the eye and have turned their awareness inward in an attempt not to show fear. They continue to hide their fear until something comes along that tips them over the edge – and then, without warning, they explode.\n5. What qualities are you seeking in the horse – conformation, gait, breed, color, education, experience?\nPrioritize these qualities from most to least important. You may not be able to find the perfect horse to meet your budget, so highlighting the top three qualities is helpful.\n6. What physical limitations can you accept?\nThere are no perfect horses, just as there are no perfect people. Physical handicaps can be the result of poor conformation, injury and/or stress, lack of proper conditioning, poor rider balance or bad training.", "score": 34.19144651701972, "rank": 15}, {"document_id": "doc-::chunk-2", "d_text": "If you're working the lunge, try going in the other direction to add some variety for the horse. You can change out the stall toys from time to time. You can also let the horse stay out a little later or go out a little earlier to satisfy the need for variety.\nStubbornness in the Oldenburg Breed and What It Means\nOldenburgers are generally even-tempered and easy to control, especially for their size, but there is a certain stubbornness that you can find within this breed. The horse may suddenly not want to train any more. There may be a rejection of the tack. 
There may even be a certain aggressiveness, appearing suddenly, that targets other horses or the owner.\nWhen this occurs in a horse that has been owned for some time, it is generally an indication that there is a health problem present. Check the horse for cuts, bruises, and skin conditions that the coat may be hiding. You'll also want to check on the health of the hooves, especially if an active horse becomes inactive and aggressive.\nColic can also be a concern within this breed, especially if nothing seems wrong and there is a sudden personality change.\nIf you've just obtained the horse, then the aggressiveness may be due to the change in environment. Aggressiveness can also be present in horses that are used to being in an alpha role and feel like they need to establish or re-establish that role.\nTo get the most out of an Oldenburg, focus on consistency. This breed prefers to interact with experienced owners, riders, and trainers, and they can have little patience for beginners or novice riders. Over time, when a relationship is formed with the horse, you will be able to see the positive traits of the Oldenburg horse temperament.", "score": 33.498304145977635, "rank": 16}, {"document_id": "doc-::chunk-7", "d_text": "I'm talking about horses, but it's funny how relevant this all is to humans!\nCarmen said that any horsenality can be a nightmare or a dream. And, like humans, horsenalities change. Our overall goal as horsemen is to produce confident horses (and humans!) that are not limited by excuses such as 'It's in my nature to be scared/controlling/unmotivated/unfocused...' Horsenality is not meant to categorize, limit, or box the horse. Carmen spoke about how our belief in our horses determines what they'll be. We need to believe that our horses are capable of being who they're meant to be.\nShe also spoke about focusing on the goal or end result in your horse. 
A right-brained horse needs to become more calm and confident, whereas a left-brained horse may need to become more willing and motivated. She said that instead of focusing on what horsenality the horse is, we should focus on which one of these four aspects is missing. If we're asking ourselves \"Does my horse need to be more willing? More confident? Motivated?\" we can adjust to fit each moment, and won't be limited by their innate horsenality.\nWednesday, September 8, 2010\nJohn Baar gave us a demo about playing with our horses online. There are 3 reasons that we play with our horses online:\n1. To prepare the horse or rider\n2. To establish communication\n3. To teach something new\nIt's amazing how knowing why we do something enables us to find a new quality. All too often I find myself purposelessly doing things with my horse, and then I wonder why I haven't got anywhere in that session?! Establishing leadership with a horse is very important, and one of the things that defines a leader is that they have a PLAN! Now if I'm aware that I'm playing with my horse online for one of the above reasons, it will help me to have a plan... hmmm.... how interesting!\nMonday, September 6, 2010\n1. Heart & Desire\nOne could expand on any one of these qualities, but I'm going to focus on the first 4. Pat Parelli regularly talks about having your horse fit in 3 different ways, and in the following order:\nMental fitness is respect. 
Emotional fitness is also impulsion.", "score": 33.20373828527063, "rank": 17}, {"document_id": "doc-::chunk-2", "d_text": "Even if temperament changes over time and is not, by itself, the only condition for the adequacy of an equine to a specific situation (it can be influenced by the animal's hormonal condition, its history, its learning mechanism – I advise you to read this article on the temperament of the horse (in French), very interesting), there are certain constants found in a horse and in a pony that constitute a difference between the two animals.\nThe French expression that defines a “hot-blooded” or “cold-blooded” equine is not biological and has nothing to do with the animal's body temperature. It is used in the equine domain to designate a temperament with a strong or calm tendency. The origin of these terms appeared as early as the end of the 18th century, when the “purity” of animal blood was important and a classification of horses by blood was common.\nThere are therefore “hot-blooded” and “cold-blooded” horses, defined by their breed and personality, among other things. “Hot bloods” are used as saddle and sport animals and are known for their emotional explosiveness and liveliness. The “cold bloods” have a calm temperament, are patient and are more robust than the “hot bloods”. As explained above, they are usually used for agricultural work, to pull a plow or heavy carts for example. There are also “hot-blooded” and “cold-blooded” ponies. In general, ponies are defined by their calmness and docility (characteristic of “cold bloods”) and horses by their agility, their liveliness and sometimes their obstinacy (specific to “hot bloods”). Every rule has its exception, but these are traits that are stable over time, encompassing all equine breeds and taking into account only the innate character of each animal.\nThe name of the breed can help you: a small but effective trick\nSome equine breeds can tell you whether you are looking at a pony or a horse. 
Example: the Landes pony is a pony even if its size is just over 14.2 hands. Quarter horses are horses even if, conversely, they can pass below the 14.2 hand mark.\n- Other examples of pony breeds: Australian Pony, German Saddle Pony, French Saddle Pony…\n- Other examples of horse breeds: Paint Horse, Rocky Mountain Horse, Florida Cracker Horse…\nHere we enter the extraordinary: extremes of longevity or size that defy comprehension.", "score": 32.499176929527955, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "Story from Anne Koletzke at Glen Ellen Vocational Academy, Inc\n“You like potāto, I like potahto”\nHorses are like people – they have different personalities. They can be nice, friendly and hardworking, or awkward, difficult and lazy. If horses were people, some would be on the dole, and others would be entrepreneurs. ~Tony McCoy\nI always tend to think of Mikey and Rio together, probably because they are the only two Quarter Horses at Glen Ellen Vocational Academy, Inc (GEVA), an Equine Retirement and Rehabilitation Foundation and Sanctuary. But I do them such a disservice when I do this, because they are really quite different horses.\nThey do, of course, have similarities: As far as anyone knows, Mikey and Rio were once Western pleasure or show horses—they most certainly were never racehorses like most of the other horses at GEVA. They are both also on the smallish side, and are, in fact, the two smallest horses on the farm. Mikey, a handsome buckskin, has a dorsal stripe and “primitive” stripes on his front legs, and Rio, a registered Paint as well as a registered Quarter Horse, also sports a dorsal stripe and leg stripes. And they are both gentle, unaggressive horses who usually find themselves near or at the bottom of the herd pecking order.\nBut other than such superficial similarities, these two horses are so very different. 
Mikey, for example, is seldom far from the side of his paddock-mate, Chunky, whereas Rio, who lives with six other horses, often seems to prefer spending much of the day enjoying his own company.\nThey’re quite different in how they relate to people, too. When you enter Mikey and Chunky’s paddock, Mikey can be counted on to immediately come over to say hello and to demand attention. Should you be foolish enough to think just giving him a quick pat on the neck as you pass by on your way to clean manure from his paddock will do, you’ll find him waiting for you back at the RTV, so completely blocking your way you can’t possibly empty your shovel into the truck without making him move first. And good luck with that, as by this time it’s as if his hooves have grown roots. Deep roots.", "score": 32.29490485721134, "rank": 19}, {"document_id": "doc-::chunk-1", "d_text": "While some researchers have developed behavioral testing to identify which horses are good for which kinds of work, these tests fall short of addressing specific psychological factors, they said. Ideally, scientists will be able to come up with psychological testing for horses that will help place them in the right careers, which will help prevent economic losses and improve horse welfare.\nMood testing could also be useful during training and competition days, to assess what the horse is realistically capable of at that moment. And while some breeders are taking temperament into consideration when matching stallions and mares, it's far from being a priority on the whole.\nResearch is currently lacking in these critical areas, Mills said. \"Whilst there is a growing literature on temperament in horses, there is still very little scientific work on the emotional reactions of horses and almost none on the assessment of moods,\" the team stated in its paper. 
\"It is nonetheless important to appreciate that although it is difficult to study these phenomena, this does not mean that they are not important and certainly that they do not exist.\"\nIn the meantime, however, owners will have to wait. According to Mills, guessing a horse's mood or temperament isn't reliable without scientific grounds. \"A bad test can be worse than no test since it leads to false expectations,\" he said.\n\"People need to recognize that it's not about getting a good horse but about getting the right partnership--in other words, the right horse for them,\" Mills told The Horse. \"What works for one may not work for another, so always try out and feel how the horse goes for you.\"\nThe study, \"Psychological factors affecting equine performance,\" appeared in the journal BMC Veterinary Research in September. The abstract is available online.\nDisclaimer: Seek the advice of a qualified veterinarian before proceeding with any diagnosis, treatment, or therapy.", "score": 32.21521553311736, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "Ever feel like your horse is in a bad mood? Well, according to a British equine behavior research team, you could be right. In fact, team members said, paying attention to all of a horse's main psychological factors--temperament, moods, and emotional reactions--is key to ensuring their mental well-being and their success.\n\"To attain optimal individual performance within any equestrian discipline, horses must be in peak physical fitness and have the correct psychological state,\" said Daniel S. Mills, BVSc, PhD, MRCVS, Dipl. ECVBM-CA, European and RCVS Recognized Specialist in Veterinary and Behavioral Medicine at the University of Lincoln in the United Kingdom. 
Mills and Sebastian McBride, PhD, associate lecturer in equine science at the Institute of Biological, Environmental and Rural Sciences at Aberystwyth University in Wales (U.K.), recently published a paper on the topic.\n\"Psychological state\" is made up primarily of three psychological factors: temperament, mood, and emotional reaction, the team said. Temperament is a basic, long-term attitude. Developed from genetic factors and life experiences, a horse's temperament usually remains essentially consistent throughout his life.\nMood, on the other hand, is a short-term attitude. Just like people, they said, horses can have good moods and bad moods, motivation and grumpiness. And as for emotional reactions, these are the most immediate kind of attitude. They are the way a horse reacts to a specific situation, whether it's an open umbrella or a separation from his pasture buddies.\nAlthough emotional reactions are primed by the kind of temperament and mood a horse has, they are still considered individual psychological factors, they added.\nWhat the \"correct psychological state\" is for each horse varies, however, according to the kind of work that horse does, they said. For example, a dressage horse should be cool and calm so as not to be distracted by his environment, but \"flightiness\" in a racehorse can be a good thing.\nLikewise, the barrel horse and cutting horse need to have significantly different psychological states than the endurance horse and carriage horse. Each discipline has its precise \"preferences\" for the psychological state of the equid doing the work, they said.\n\"Just as a Shetland will never win the Derby, so a horse of inappropriate temperament will generally never succeed within a certain discipline,\" Mills and McBride stated.", "score": 32.161591442337944, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "Your horse gets to use their instincts productively while gaining confidence in their survival skills.\n4. 
Stress can cause heightened senses.\nJust picture the last time you've been stressed out and the sound of someone slurping coffee sent you right over the edge.\nHorses can feel the same way: when your horse is anxious, something as simple as the radio playing too loudly can send them over the edge.\nTaking an assessment of the sights, sounds, and smells in the area will show you things that may be contributing to their overwhelm.\n5. Horses need to feel their concerns are being heard in order to overcome them.\nBlowing off your horse's worries takes a dig at the trust in your relationship.\nImagine telling two friends that you're afraid of ghosts. One tells you to shut up, ghosts aren't real, you're being an idiot. The other one asks why you're afraid, then offers to help you face your fear.\nNaturally, you'll feel more comfortable with the friend that offers to help without judgment.\nNext time your horse starts stirring with worry, have the patience to be the second friend and help them explore what's going on.\n6. Telling your horse to STOP feeling anxious doesn't help them to feel better.\nWe've probably all done it. Saying calm down, relax. The problem with this approach to anxiety is that it doesn't give the horse any instruction on things they can DO.\nRemember, anxiety is an involuntary reaction; if they could just calm down, they would (they have no desire to feel anxious and are probably just as annoyed about it as you are).\nInstead, give them more guidance with small, specific action steps that they're capable of doing.\nBe sure you're strategic with these action steps, working toward lowering stress, not just making your horse busy.\n7. Overcoming anxiety works best with a multifaceted approach.\nTrying one method and getting lackluster results doesn't mean that the method is faulty. 
It usually means that it needs to be paired with additional support to be more effective.\nFor example, let's say you're using systematic desensitization to slowly help your horse feel comfortable being separated from his buddy, but you aren't making any consistent progress.\nNo matter how many baby steps you take, you're not getting anywhere near having a horse that is confident on his own.", "score": 31.403836627741434, "rank": 22}, {"document_id": "doc-::chunk-1", "d_text": "This loyalty can present itself through protective actions, especially for stallions, and this can be mistaken for a negative behavior by someone who might be targeted by the animal.\nThese horses also enjoy social activities, especially with their human counterparts. They may be bred to be exceptional animals and have high levels of energy, but that doesn't mean there isn't a desire to have laid-back moments, like a lazy trail ride with their favorite person. As long as you can manage the energy of this breed in some way, you'll have a horse that is supportive and willing to work.\nTheir willingness does have some limits. Oldenburg horses tend to prefer familiar environments or circumstances. If you put them into a new environment or situation, it becomes much easier to spook them because their situational awareness is extremely high. It is not uncommon for a spooked Oldenburg in a new situation to throw their rider, then get spooked again because they threw their rider.\nBecause there is such variability within the registry, it is necessary to look at the papers of the horse to determine what you're likely getting in terms of temperament. Because there are several cross-breeds brought into the Oldenburg line, including Thoroughbred blood, you can find everything from almost coldblooded personalities to extremely hotblooded personalities that seem to border on mania.\nHow to Get the Most Out of an Oldenburg Horse\nOldenburg horses prefer to have a daily routine that is followed exactly. 
They do not like having disruptions. They are happiest when they can do the same thing every day. This desire for “sameness” has made them one of the most successful dressage breeds in the world today. Only Dutch Warmbloods and Hanoverians have consistently higher standings in dressage compared to the Oldenburg breed.\nFrom a home environment perspective, consistency means following the same feeding, turning out, and exercise rituals. Oldenburgers like having the same stall mates next to them, having the same amount of time out in the pasture, and working in the same way with their owners or trainers. You just need to watch out for the development of boredom, however, because you can be doing everything right and maintaining your routine and then suddenly the horse becomes rebellious.\nHow can you have a backup plan for a horse that likes to have the same routine, but then suddenly doesn't want the same routine? By having some variations available.", "score": 31.002734314060582, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "Guest post by Becky from Insightful Equine.\nHaving an anxious horse is exhausting.\nYou just want to focus on reaching your goals, but it's hard to accomplish anything when your horse is constantly distracted by worry.\nThe first step is to stop fighting for their attention and start understanding what's going on.\nHorse anxiety is a complex beast.\nIt's impossible to cover everything in one post, so I'm touching on eight points that will at least get you one step closer to finding relief.\nHere's what your horse wants you to know about anxiety:\n1. 
Horses aren't using their anxiety to get out of work.\nWhen our horses become anxious during an exercise, it's easy to think they're acting up to avoid work.\nLooking at the bigger picture, you'll find that your horse isn't trying to avoid the work itself but rather something about the work that's troubling them.\nMaybe the way they're being cued has them on edge, or perhaps their confidence took a nosedive when the exercise became physically difficult.\nIf you find that your horse is looking for a way out of an activity, stop to pinpoint the exact moment that the evasive behavior started and you'll find the source of the problem.\n2. Anxiety is impacted by more than just the environment.\nIt's usually pretty easy to spot environmental factors that cause horses to get stirred up.\nWe all know that horse that loses its marbles every time the wind starts howling.\nBut what we don't usually pick up on are the more subtle factors impacting their emotions, such as vitamin and mineral deficiencies, immune system weakness, physical pain, hormones, genetics, and nutrition.\nIf you're looking to ease anxiety, I encourage you to approach it with as much openness as possible.\nBeing open to making mental, physical, emotional, and environmental changes will be what ultimately brings balance to your horse.\n3. Natural, healthy stress is good for horses and helps to reset their response system.\nCreating a stress-free environment will ultimately make your horse become more sensitive to being triggered.\nBy sheltering our horses from every little thing and arranging their lives so they don't have to think on their own or use any natural survival skills, they will become more helpless and more easily overwhelmed.\nThe key here is setting up their environment with natural stressors that they are capable of working through.\nOne idea would be to put out food in different areas of the pasture every day, then let them figure out how to find it. 
This simulates seeking and finding food the way they are designed for survival in nature.", "score": 30.282983746680188, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "That's why they require an experienced person to handle and work on their excitable and fired-up temperament. Additionally, hot blood horses are vulnerable to stormy weather.\nThis is due to their Middle Eastern origin, where they are more at home in desert conditions. For many years they were used as a symbol of power and wealth by the North African tribes.\nExamples of hot breeds are the Akhal-Teke, Barb, the Thoroughbred, and the Arabians. When talking of hot blood horses, many people just refer to the Thoroughbreds and Arabians.\nBreeders selected for a breed that would be suitable for pulling wagons, carriages, and plows. The breeding effort over a long time resulted in an animal that is large, strong, muscular, and resilient.\nGiven the nature of work they were intended to do, cold bloods had to be calm, gentle, and patient. Medieval soldiers preferred the cold bloods because they were strong.\nThey would carry heavy armor as well as the soldier and travel long distances. Today, cold bloods are the most popular breeds for riding.\nThey were bred in harsh climates and are hardy with very heavy bone and feathering. They were bred by crossing the Arabians and Thoroughbreds with carriage or war horses.\nHot bloods are spirited horses with high speed and endurance. Warm bloods are a mixed breed of the cold and hot blooded horses.\nIf you've spent enough time in the barn, you have possibly heard the terms warm, cold, or hot blood mentioned.\nHot-Blooded: These horses are the nervous and energetic types. 
Some ponies can even fall in the category of cold blood.\nDepending mainly on the agility and the body size of horses, there are two main types, known as cold blood and hot blood.\nThis article explores those differences in relation to their major characteristics. Warm bloods have a unique combination of characteristics, including size, substance, and refinement.\nAn ideal warm blood stands 162–174 centimeters tall at the withers, and its top line is smooth from poll to tail.", "score": 30.190124662739972, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "Equine Anxiety 101\nYou're hacking peacefully along when you feel a tremor go through your horse's body. His previously floppy ears snap forward, and his head rises up. As you wonder when your horse turned into a giraffe, his steps become slower and shorter, his back drops, and he emits the emphatic horse-in-jeopardy snort.\nYou look left and right, desperately trying to see the flesh-eating monster that must have just emerged from the bushes. But you see nothing. Nothing but rocks, trees and grass. The same rocks, trees and grass that have always been there.\nBut wait, what's that? There, in the tree, a tiny shimmer of white. It looks like a piece of a grocery bag, caught on one of the branches. And just as you think, well, it can't be that, your horse wheels, leaving you hanging in space for a moment as he hightails it back to the barn, not noticing whether you're still attached to him.\nEvery person who rides encounters something they dread while they're working with their horse. Maybe Dobbin has a thing about the trash truck. Maybe he's convinced that whitetail deer are masquerading as peaceful, grass-eating creatures but are really waiting for the chance to pounce on a delicious meal. 
Or perhaps what really unhinges your horse is being alone.\nWhatever your particular issue, equine anxiety is the No. 1 training and management issue for every rider and trainer.\nNo matter what the cause or expression of your horse's anxiety, all riders need to accept that all horses will be afraid of something periodically. It's the nature of a prey animal to always be “on the lookout,” and it's a behavior we must accept.\nThe first step, and this is often harder than you would think it should be, is to determine what's causing your horse to be anxious and thus unruly or disobedient. The very thing that makes horses such fabulous animals to train, their incredible memories and ability to extrapolate from previous experiences, also causes them to hold on to negative memories and makes them difficult to convince that future situations won't be negative.\nSeven Types Of Fear\nThe causes of equine anxiety usually fit one of seven categories:\n1. Objects. The objects that horses most commonly find terrifying include: rocks, farm equipment, cars, buildings, jumps, garbage cans and pretty much anything they consider out of the ordinary.\n2.", "score": 30.070466379972082, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "I have received some really great feedback about the various types of therapeutic horsemanship programs available. I have also received some great questions. This week, I will take the opportunity to answer one of the recurring questions: How do horses communicate?\nWhen thinking about the way horses communicate, it is important to keep in mind certain equine characteristics:\n• Horses are prey animals.\n• Horses create strong social bonds. They prefer to live together in herds.\n• Young horses learn by imitating older horses' behavior.\n• Horses possess the five basic senses: Vision; Hearing; Smell; Taste; Touch. 
They utilize these senses when they communicate.\nHorses communicate primarily through body language, but also use eye contact, voice and smell to get their point across. Observing the horse can often provide clues to his emotional state or temperament, whether he is in a state of conflict, aggression or dominance or if he perceives a threat in his environment.\nA good way to begin recognizing the way horses communicate is to observe specific body parts of the horse and identify what they could be telling us. The following list is in no way exhaustive, but provides some good examples of communication that I have observed in my own horses.\nThe position of the horse’s ears can be a very good clue to what is going on inside his head. I refer to them as the antennae. They give me an idea of what has captured the horse’s attention.\n• Held Forward – The horse is alert, listening to something ahead of him. He may be simply paying attention or it could be a sign of concern.\n• Held to the Side – The horse is relaxed. He may be dozing or asleep.\n• Turned back, but not flat to head – The horse is listening to something behind him.\n• Pinned back, flat to head – The horse is angry and in an aggressive state. This could be in response to a perceived threat or a pain response.\nWith large eyes located on each side of the head, horses have nearly 360 degree monocular vision, meaning each eye is used separately. Remember, the horse is a prey animal and this wide range of vision makes him very difficult to sneak up on. The horse has two blind spots, however. One is located directly in front of him and the other is directly behind him. 
The eyes of a horse can be a great source of information about the horse's state of mind and where he is focusing attention.", "score": 29.393691956664856, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "The videos were all different disciplines — ridden dressage, natural horsemanship, in-hand dressage, bridle-less riding, Western reining, and behavioural rehabilitation — but at no point were we judging the disciplines or handlers themselves, just the participants' perceptions of the horses' body language featured in each. We asked participants which of 13 different emotional states they believed the horse to be experiencing: these options were angry, anxious, conflicted (defined in the survey as “experiencing two emotions at the same moment in time”), enjoying it, excited, fearful, frustrated, playful, relaxed, stressed, stubborn, submissive, and switched off (or “resigned”). We also asked whether participants would be happy to see their own horses experiencing the handling, and gathered various pieces of demographic information such as preferred discipline and equestrian experience.\nOur experiences of working with clients had taught us that recognising the subtle signs of fear and stress is a rare skill, and we hypothesised that the data would confirm this. As is so often the case, things were not quite so straightforward. Our participants (opportunistically recruited via Facebook, so not a representative sample of the equestrian industry, just the best we could do at the time) seemed to be very good at recognising the body language in some of the videos, but not all. 
Our statistical analysis found that the videos featuring natural horsemanship and bridle-less riding were particularly misinterpreted as positive experiences for the horses.\nThis is something we commonly see in our consulting as well: Euphemisms and flowery language result in many people believing that such training practices are benign, with the horses described as “playing” and engaging with the handler in a manner analogous to equine herd behaviour. Of course, a little more consideration results in the recognition that the training at best invokes negative reinforcement and punishment, and often stretches to flooding. This results in some very stressed horses whose body language is mistaken for “relaxed.”\nAnother interesting finding was that increased owner experience did not lead to increased likelihood of interpreting the body language correctly. Even working with horses professionally was no guarantee that the participant could recognise equine fear or stress. Deep down we are probably not surprised by this — there are plenty of horse professionals whose empathy for those horses is lacking.", "score": 29.124642276410555, "rank": 28}, {"document_id": "doc-::chunk-1", "d_text": "Situations. Many horses are uncertain about dark or enclosed places (like an indoor arena), and even more are genuinely scared of being alone (they are herd animals). Often this fear will be expressed by being buddy-sour or barn-sour, and sometimes they don't want to go in a ring, either at home or in a competition.\n3. Sounds. Highly strung horses are easily unglued by loud, unexpected noises (a car back-firing, a garbage can falling over). Others can't stand hissing noises (like from a leaky hose coupling), and others don't like rustling noises (in leaves or under something). Both probably sound like a snake.\n4. Clipping or other grooming/handling. Some horses are genuinely afraid of clippers, either the sound or the sensation. 
Some don't like to receive shots, and others are anxious about being shod.\n5. New places. This can be as obvious as moving to a new home or going to a competition. Or it could just be moving to a new stall or riding in a new trailer. Anxiety could even be caused by more subtle changes around the barn (the jumps were moved in the ring, for instance).\n6. Type of work/type of rider. Horses often prefer a certain type of rider. And often horses with a strong desire to please become anxious because they don't understand what's being asked of them, either because the exercise isn't clear to them or the rider's aids are confusing.\n7. Other animals. Horses are often afraid of birds, cows, goats, sheep, donkeys, deer or other wildlife. And some are afraid of other horses.\nThe only way to deal with most things that cause equine anxiety is repetition, because they're things you just can't change or that your horse just has to deal with. | Photo Courtesy of EQUUS\nRemove The Cause?\nOnce you've isolated the cause of your horse's consternation, the big question is what can you do about it? And that's where you have to be creative, confident and even willing to do an unusual thing or two.\nThe first thing to determine is the degree of your horse's fear. Is he genuinely terrified? Can you feel his heart pounding? Is he shaking? Does he bolt blindly away?", "score": 28.891019694018343, "rank": 29}, {"document_id": "doc-::chunk-3", "d_text": "Now this can be done with or without roughness - I think one of the reasons some horses survive fairly rough handling is that their dispositions are basically stolid and less reactive - they just shrug their shoulders (so to speak) and ignore rough handling and get on with things. 
Rough handling can be a form of adversity that develops a horse's ability to deal with things - not that I think that's a good excuse for rough handling - but it can also, in a horse that's sensitive or nervous, cause great damage and even permanent impairment of a horse's ability to work together with people (this came close to happening with Dawn). I don't believe roughness is necessary to produce a solid-minded horse - working softly but effectively with a horse does the job better and can produce a horse that is both soft, mentally and physically, as well as solid-minded - the combination I'm calling softness+.\nI did a post a while ago - \"An Ode to a Good Working Horse\" - that addressed some of the things that I believe are needed to produce a good working horse. You'll notice if you read the post that most of those things aren't about the horse or the specifics of the situation the horse is working in - they're about the rider. I've had the privilege of knowing several good working horses - our Norman the pony (now at Paradigm Farms in retirement) was one, as was my mare Snow when I was a teen and my mare Promise when I was showing hunters (just goes to show that mares can be good working horses too). We believe that Norman may have been abused before we got him - he was somewhat dangerous on the cross ties or in his stall, particularly with children, but was a pro in the hunter show ring - completely unflappable and safe under all circumstances - he was one of those ponies who really cared about winning but would also stop cold if his rider was about to fall off or he thought the distance to a jump was wrong. When we brought him to our barn after his show career was done, he could be ridden on the trails in a halter or driven in a cart, and other than an inexplicable fear of large boulders (go figure?), was reliable. He was never dull, or quiet or dead to the world, just reliable.\nMy mare Snow was like that too. 
She was a QH, and I don't know her background.", "score": 28.766366983455917, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "You may have problems with regulating yourself and your emotional states. You may also have pain issues. All these difficulties can combine and be very hard to untangle. This clip is a little bit about my horses shifting between anxiety and curiosity as they explore leaving the barn for the yard near my horse trailer. They knew the plan was likely about going riding somewhere. The two horses have different attitudes about coming out into the yard and staying near the trailer.\nI shortened a 45 minute period into 5 and a half minutes, so you can see a little bit about using the idea of stress ramping up and stress ramping down, approach and retreat concepts, and some of the idea of pacing. I let my horses take a long time to decide that they wanted to come out into the yard. This was an exercise for my horses in exploration within a certain amount of boundary.\nPeople also experience a dynamic between anxiety and retreat, and curiosity and exploration when facing situations that are challenging. A therapeutic approach for people and horses too, is to find a way to scale the challenge to a manageable amount.", "score": 28.71594678485208, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Perfectionism drives human performance to elite levels, often helping a person achieve unequaled accomplishments. However, the push to be great can also have a deleterious effect on a person’s mood, and certainly, relationships. Often those around the perfectionist feel disregarded or inadequate. But what if the relationship with the perfectionist involves a horse?\nHorses have a unique way of telling us the truth about ourselves, at times revealing parts of our character that are hidden or overlooked. 
For a perfectionist, a person who spends a great deal of time ensuring that character deficits are avoided, this can often be a little disconcerting. On the other hand, the horse’s response to a perfectionist can also be relieving.\nWhen a person imposes this need to be perfect on a horse, the animal often responds with tension. The horse may appear tense, hypervigilant, and essentially on edge, and may resist the handler in a number of ways. The horse may overtly object to the person by balking, acting stubborn, or attempting to flee. On the other hand, the horse may simply shut down, tuning the person out.\nThere could also be another response that the horse gives. In a natural environment with a horse, where the horse is loose, and allowed to approach the person in any way he wants, often he will vacillate between protective behavior — circling around the person, nudging him/her gently — and herding behavior — moving the person around the arena.\nWhat the horse’s behavior is reflecting is the underlying inadequacy that every perfectionist experiences. Often in human relationships, the expression of inadequacy by a perfectionist is received with question, or even judgement; he/she can struggle with feeling as though it is unacceptable to be anything but perfect, now or ever. Of course, this only perpetuates the fear of failing and anger about the rigidity required to prevent it.\nBut the horse never questions what is presented to him. Whatever feeling, behavior, or thought surfaces, just is, right or wrong.
So while the perfectionist may screen his/her inadequacy from others and from himself, and fear the judgement that comes with it, the horse only encourages the expression of it, as a part of the person becoming more whole, more readable to the horse.\nWhat the horse does with a perfectionist really benefits not just the perfectionist, but the horse as well.", "score": 28.61490352477393, "rank": 32}, {"document_id": "doc-::chunk-2", "d_text": "- Cold-blood- heavy horse\n- Hot-blood – light horse\n- Warmblood- light horse\nHot, Cold And Warm-blood Comparison Chart\n|Type||Temperament||Environment||Characteristics||Most Common Uses||Breed Examples|\n|Hotblood||hot head, feisty and sensitive||warm, desert-like climates||Slender athletic build, long neck and long limbs, smaller hooves.||Racing, endurance||Arabian\n|Coldblood||calm and cool, laid back||cold, harsh climates||Large stocky build. Thick strong neck.\nBig boned legs and large hooves. Thick coat, mane and tail. Sometimes feathering on legs.\n|Farming & Forestry work, hauling heavy loads||Clydesdale\n|Warmblood||ranges from energetic and sensitive to laid back and lazy||originated in Europe||Good sized bone. Medium sized hooves. Strong hind quarters. Thick mane and tail.||All levels and types of equestrian sports||Hanoverian\nWhat Are Hot Blooded Horses?\nHot-blooded horses are known to be high-energy and bred for speed and stamina. They have straight or dished heads, slender bodies, thin necks, dainty legs but strong tendons, smaller hooves, and a high-set tail.\nThey are thin skinned and have a very short coat. Even in the winter it doesn’t grow out very much. Horses of this type tend to be very smart, quick learners, and adaptable, but they also tend to be more nervous, sensitive and spirited than other horses.\nThey are light riding horses ranging from 800 to 1,200 pounds. 
In terms of height, they are in the medium range, measuring 14 to 17 hands at the withers.\nBefore moving on to the origin of hotbloods, you should know there is another meaning of the term “hot” with horses.\nHotblood is a term used for the breed type, but “hot” is also a term used to talk about a horse’s behavior.\nWhen a horse is acting “hot” that means they are being difficult to handle and are full of extra energy.\nA certain horse could also be a “hot horse” as their general temperament, despite the fact that they may not be a hotblood breed.\nIn another instance, horses can act “hot” even if it is not their typical behavior, due to different changes.", "score": 28.051328412620578, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "The application of many natural horsemanship methods operates on the foundation that the horse is being naughty, bossy or spoiled. They teach that if the horse isn’t returning the desired reaction, use more energy, get bigger, push/pull harder, etc. until the horse does return the desired reaction. This seems to work well with horses that ARE being naughty, bossy or spoiled and horses that are mentally and emotionally able to make the connection between reacting quickly how they think the human wants them to and being left alone more. However, I’ve found that it creates what I call “hop to it horses.” These are horses that react, rather than thinking and returning a thoughtful response. What about other horses? What about horses that are responding out of fear, misunderstanding, confusion, playfulness, etc.? How do these horses cope with the application of most natural horsemanship methods? In my experience, not well, and in the best cases, they become horses that develop coping methods that hinder them from becoming the best horse they can be.
These are the horses that end up coming to someone like me, someone on the fringes of the horse world who is seen as weird, offbeat and “too soft.”\nI specialize in positive reinforcement training, specifically, clicker training. I believe that the foundation of ANY good training is monitoring the horse’s mental and emotional state. If you treat a horse that is being naughty like it is scared or confused, your communication won’t work. However, similarly, if you treat a horse that is scared or confused like they’re being naughty, you’re going to create a lot of problems for that horse, and therefore, for yourself or someone like me. When you ask your horse bigger and harder when they are not in a mental or emotional place to be able to listen and understand, it’s the equivalent of yelling at someone in Spanish who only speaks German, thinking that yelling louder will make them understand. If you yell louder, and get angry or frustrated that they don’t understand, odds are, they will just yell back. I see this play out metaphorically between horse and rider all the time. When I am having a challenge with a horse, much of the time, this is the source. It is our job to do our best to understand and empathize with how our horse feels and to speak in a way that our horse can best understand; yelling louder usually isn’t the answer.", "score": 27.524555227255508, "rank": 34}, {"document_id": "doc-::chunk-1", "d_text": "• Eyes are soft with no white showing in the corners – The horse is in a relaxed state.\n• The upper eyelid is wrinkled – The horse is tense, worried or surprised. Think of how we raise our eyebrows when startled or concerned.\n• The whites of the eyes are showing – The horse is showing increasing concern or anger. His anxiety level may escalate to panic.\nHead & Neck\nHorses use their head and neck to communicate in various ways.
Observing this area can provide some good clues into what is going on inside the horse’s head at that moment.\no Head held high – The horse is alert or excited.\no Head lowered – The horse is relaxed, submissive or in grazing mode.\no Chewing – If not eating, the horse is showing signs of submission.\no Flehmen response – This refers to the horse curling his upper lip back. Some call it a “horse smile.” The response tends to occur in response to a new scent or a scent he is attempting to identify.\no Arched neck – The horse may be in a playful state or possibly an aggressive state. When ridden, horses are often asked to travel in a frame that requires them to arch their necks.\no Weaving the head and neck back and forth – also called “snaking.” This is an aggressive behavior and should be considered to be a warning sign. However, it is distinguishable from the stable vice called weaving, in which the horse weaves his neck back and forth in rhythmic fashion in an attempt to relieve boredom.\nLegs & Hooves\nHorses use their legs and feet not only to ambulate, but to express themselves. They also use their legs and feet as a means of protection, or even attack, when they feel threatened.\n• Pawing at the ground – The horse is expressing anxiety, anger or perhaps boredom.\n• Stomping with front leg– The horse is expressing increasing frustration or anger. However, during the summer months, stomping may be the horse’s attempt at fly control.\n• Striking with front leg – This is a very aggressive action. 
The horse is either in attack mode or taking an extremely defensive position.\n• Hind leg cocked with front of hoof resting on the ground – The horse is probably in a relaxed state.\n• Hind leg raised – This may be a sign of irritation, pain or a warning that the horse is preparing to kick.", "score": 27.377355657813883, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Animals, equestrian sports, equine massage, equine rescue, horses, Pets.\nI spoke at a fundraiser for a local equine rescue today. It was a discussion about horses that ‘misbehave’, and how to methodically eliminate triggers. I put together a handout with a number of possible causes for conflict behaviours. It’s not comprehensive, and I don’t think any checklist ever could be. But it at least gives a number of common triggers to check a horse for when there are problems. I thought I’d post it here on the blog in case anyone else is interested. I’d welcome additions to the list as well, so feel free to comment.\nHorses do not like conflict. They don’t like it with other horses and they like it even less with predatory species like humans. Conflict behaviours during handling or riding (rearing, bucking, bolting, propping, balking, biting, striking, kicking, etc.) indicate that the horse is unable to cope with the training situation. When assessing…\nView original post 391 more words", "score": 27.1201151855931, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "A number of things can make horses stressed, like being alone, loading and riding in a trailer, veterinary care, farrier work, preparing for and going to shows, changes in weather, changes in the people caring for them, changes in routine such as a new stall or differing feeding schedule, stall rest due to injury or illness, and a stressed handler or rider. Horses express psychological stress in a number of ways.\nHorses that are chronically psychologically stressed can start to lose weight. 
Since there are many reasons, such as heat stress, parasites, poor feed and health problems, it’s necessary to look at all aspects of the horse’s care to troubleshoot weight loss.\nStall Walking and Other Vices\nStall walking is when a horse walks around a stall or walks back and forth along one wall repetitively. Weaving, cribbing, wood chewing, wall kicking and fence walking are all signs of stress.\nMost of us yawn when we are tired. It’s the way our bodies inhale a little extra oxygen to fuel our sleepy brain. Horses, however, don’t yawn for the same reason, nor is it an appeasement gesture, as in dogs. A University of Guelph study found that yawning may be a way for a horse to release endorphins. Yawning, which most horses will do several times in a row, is a sign that the horse was feeling stressed and, by yawning, is releasing the stress.\nSome horses grind their teeth while stabled, some while ridden. Tooth grinding can be a sign of physical or physiological stress. If the horse has no other dental issues, it’s important to check for things like EGUS and other sources of chronic pain or stressful situations.\nMany examples of poor behavior while ridden can be caused by physiological or physical stress. Stress can be expressed through pawing, pulling, tail wringing, bucking, rearing, bolting, or being cold backed.\nMany performance horses suffer from equine ulcers. This can be in response to a stressful show schedule or other stresses.\nManure and Urination\nA horse that is stressed can produce copious amounts of manure in a short time. Some may produce very runny manure.
Horses will often urinate if stressed, and if they can’t relieve themselves because they can’t relax, such as in a trailer or when being ridden, they can become antsier.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-1", "d_text": "Anxious-emotional riders are often overprotective and show elevated emotions when interacting with their horses and sometimes refer to their horses as “fur-children,” to the quiet horror of those passers-by unfortunate enough to overhear that which can not be unheard.\nStockhausen thinks that the completely undisciplined, random observations she has made over time suggest that horses with warm or restrictive riders spend less time acting like monkeys. And she was not surprised to see the impact of anxious-emotional riders. She includes this dimension based on past experience, which noted that horses of anxious-emotional riders tend to have more problems fitting in to polite horse society. The biggest takeaway for riders is to set limits and be more calmly detached in the relations with their horses.\n“If riders want to reduce the amount of monkey-like horse behavior, they should be warm when dealing with them, but somewhat restrictive at the same time, and set rules, and those rules will work,” Laczniak originally said about people and Stockhausen completely agrees when applied to horses. “For riders who are more anxious, the rules become less effective and those horses are going to act like monkeys.”\nMy grateful apologies to Dr. Laczniak for allowing me to ride the coat tails of his excellent work to help me express what I have noticed in my mere anecdotal equestrian observations. If I have some thing to add to the Universe it is only because I stand on the shoulders of giants.\nThank you Dr. Laczniak.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-5", "d_text": "Some horses will also snort, rub their head on objects, and display an anxious expression. 
Most commonly, horses shake up and down. There are five grades to this problem: 1) intermittent signs, mainly facial twitching; 2) moderate signs with noticeable shaking that can interfere with riding; 3) advanced stage, and difficult to control; 4) uncontrollable and unrideable horse; and 5) dangerous behavior with bizarre patterns. In most cases, the horse looks as if it has nasal mites or is being attacked by biting flies. Many medical conditions can cause head shaking (eg, seizures, respiratory tract diseases and parasites, ear and eye disease, GI disorders, pain, trauma, nasal foreign bodies), and these must be excluded. Behavioral causes of head shaking include an improper bit, an incompetent rider, fear and anxiety, dressage leading to extreme cervical flexion, and compulsive disorders. Geldings seem to be affected more frequently than stallions or mares. Management should include treating any underlying medical problem, desensitization and counterconditioning, and potentially use of selective serotonin reuptake inhibitors.\nSome horses hurt themselves by biting or kicking the abdomen with their hindlegs. Some of these horses also vocalize. Underlying causes include displacement behavior, self-reinforced behavior, and redirected behavior. Skin diseases and pain can also lead to self-mutilation and must be excluded. This problem seems more common in young males (< 2 yr old) and may possibly be triggered by environmental stressors. Management should include sufficient stimulation and exercise and increased social contact.\nFear and Phobia\nLike dogs, horses can have fears and phobias. The two main presentations are noise and location or environment phobias. Horses have an innate fear of new things (neophobia) that explains some behavior issues such as trailer-related problems (see below).
The management is similar to that in dogs and cats (see Treatment of Fears, Phobias, Anxiety, and Aggression).\nThere are two main presentations of trailer-related problems: loading into the trailer and travelling.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "The funny thing is that even though he is gentle with humans, he is the direct opposite with other horses. He came to me as a stallion at 6 years of age (now gelded), so I am sure that some of his aggression has to do with that; but I also see he lacks some social skills and rules the herd with an iron hoof. I am guessing he has been isolated for much of his life. He is the smallest horse I have (less than 15hh) and even the 17hand Percherons move when he twitches his ear or nods his head. He really wants horse companions, but doesn't understand mutual grooming or even just how to play around.\nI was getting very frustrated after 6 months of having him because he really didn't seem to be breaking through that trust barrier. He was still hard to 'catch' and he was still slamming against the far wall of his stall when I entered (open corral type 12x12). Just moving a hand would send him into panic. On top of all that he is an asthmatic, which is triggered by an allergy to dry hay. This is a real management issue since I don't have a pasture yet (just a very large pen open to a 40x50 pole shed). I also free feed my horses with round bales. I had to keep Brego separated from the rest of the herd (he was right next to them, but not in with them). I also couldn't put in a buddy for him because he would beat them up. His hay had to be dunked in water.
As long as I did that, he was fine; but what a hassle, especially up here in WI.\nA few things happened. First a new friend of mine, who practices homeopathic medicine, came over and treated Brego homeopathically. The change was rather dramatic and after about a month I had a different horse. Still scared, jumpy, but less. He 'looked' happier and more at peace. I could enter his stall without such a dramatic response and he was becoming easier to catch. Brego was also getting into the taste of sweet feed. When he first came to me he would not eat anything but hay. I could leave sweet feed in with him for days and he would ignore it. He would eat ground corn and plain oats, but not enthusiastically.", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-143", "d_text": "If they have a good horse, they are good. If they have a scared horse, then they are scared, if they have a horse that fights, then they fight. All horses do what they do because of what is done to them. Bad horses are not born, they are made.\nBad horses are never born, they are made: All horses are born a horse, knowing nothing but instincts, survival and they are just horses. From that moment on all interactions with humans will either teach good or bad. Depending on the handling, all horses can be either good or bad. But there are many bad horses that were made and now they are labeled. Which is why so many horses have a past (bad handling) and no future.\nRelease teaches or A horse learns on the release: Horses hate pressure, they are comfort seeking animals; they avoid stress, danger and threats (pressure). A horse will always choose easy over hard, it will walk rather than run, it will avoid conflict and seek comfort. A horse looks for release from pressure or release from being uncomfortable. So by making the right thing easy and the wrong thing hard, we use the horse's natural instinct to seek comfort and avoid pressure and this helps the horse find the right answer.
So by stopping or releasing the pressure or discomfort, we tell the horse what the right answer is. Release with bad timing does not work and only teaches the wrong thing, but understanding how important release is and what it teaches enables us to better communicate with our horses in a language they can understand. If you don't understand how release teaches, you can't talk horse and the horse will know it before it happens. Rearing horses are a great example: when people don't understand release and a horse rears, most people will stop what they are doing and move away from the horse so they don't get hurt. So by giving release (moving away and stopping) the horse thinks and learns, by rearing I get release, therefore rearing is the right answer, to stop pressure. Soon he knows that when he rears, you will stop pressure and then he knows what is going to happen, before it happens.\n\"A horse knows what is going to happen before it happens\": Horses are Kings of observation and they miss nothing and see everything. You cannot fool them or fake them, they know if you know and they know if you don't know. They are exceptional observers.", "score": 26.550840861975253, "rank": 41}, {"document_id": "doc-::chunk-4", "d_text": "Don’t just try to walk calmly around, because it usually doesn’t work. Make him work to force him to pay attention to you, using circles and leg yields, to get his mind off his friends heading toward the barn. Then, when he’s settled and answering your aids correctly, walk, reward him with a pat or two, and walk to rejoin the other horses or to the barn. But be prepared to go back to work, right away.\nBarn- and buddy-sour horses usually balk or refuse to move forward, away from their friends or home. Balking can evolve into the extremely dangerous behavior of rearing and is not to be tolerated. When you ask the horse to go, he must GO.
If your horse balks, you must IMMEDIATELY become far scarier to him than the cause of his initial anxiety.\nUse your legs, spurs, whips and voice (growl and scold, don’t scream) and GO FORWARD. Having to gallop out of the barnyard for a week to get past this problem is worth it, if it prevents the horse from eventually rearing.\nNoise anxiety is extremely tough to school. How do you prepare a horse to stay calm in the midst of a backfiring engine or gunshots until it happens? The most useful advice is to hang on and stay calm. And immediately return to whatever work you were doing, so that the horse sees that you weren’t fazed by the sound. He should learn from your example.\nBut if your horse is unusually anxious about noise, you can condition him with a sensory-overload type of training, like they use in police-horse training. Shake, rattle and bang pots and pan, bells, rattles, plastic bags or other common items around him while you reassure him (with your voice, stroking or food) until he accepts the sounds.\nIt’s rare that you can’t convince a horse who’s afraid of clipping or other care requirements to relax. But it can take considerable time and repetition, repetition, repetition. There’s nothing wrong with using some Acepromazine or other mild tranquilizer to settle his mind, if you don’t have the time or the situation is too urgent to take a slow, proper training route.\nFor most horses, 1 to 3 ccs of Ace (depending on his size and temperament) will do the trick.", "score": 25.703854003797073, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "Stress isn’t healthy for horses or humans. Stress in your horse can result in anxiousness and can cause physical symptoms such as ulcers and colic. Getting to the root of your horse’s anxiety issues may take some time and detective work. 
The more time you spend with your horse, the more you will observe his behavior and find clues about what causes his stress.\nHorses can be naturally fearful and certain breeds can demonstrate stress more than others, such as Thoroughbreds and Arabians. Recognizing that your horse is stressed and understanding what is causing their anxiety could help to improve your horse’s quality of life. There are common signs that your horse could be stressed.\nLoss of appetite\nWeaving and stall walking\nStress can be grouped into four different categories for horses:\n1. Behavioral or psychological\nWays to help:\nHorses are designed to roam and graze for up to 14 hours a day, so keeping them confined to their stable can increase stress levels. So it’s important to give your horse space and regular exercise.\nMany horses tend to thrive on a consistent schedule. Establishing a schedule creates a situation where your horse knows what to expect during the day. Stable management, feeding and exercise, and minimizing any changes to your horse’s routine and environment will help to reduce stress.\nProvide continuous access to hay\nHaving full availability to hay throughout the day can help to distract and reassure your horse. When horses are bored, they can become stressed. Consider using a small hay net.\nKeep other horses nearby\nHorses are social animals, and in the wild their safety depends on the presence of a herd. You can give your horse that same security at home by keeping several horses in your barn and by turning your horse out in a group.\nKeep your horses mind occupied\nSometimes it's not always possible to turn your horse out so if your horse must spend time in his stable, provide mental stimulation in the form of toys or by hiding food so that he has to search for it.\nYou can minimize stress for your horse by making sure that their environment has minimal possible stressful stimuli. 
This includes regular turnout, and plenty of access to quality and varied food, water and nutrition. Minimizing changes to your horse’s environment as much as possible will also help, and this includes changes to their routine.\nAnother thing that will help your horse’s stress would be Benefab’s Rejuvenate SmartScrim.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "horse lingo help\nI am currently trying my hand for the first time at working with a very stubborn horse. I'm not subscribing to any one particular method so far but I don't like smacking, shouting, whipping, etc. I know I need to be consistant so I chose one method (Clinton Anderson) and have been sticking with it just because that is what I have access to and like I said I want to stay consistent.\nSo the main problem is, having not grown up around horses and not been around many trainers before I don't always know what people mean. For example, I think my horse falls into the left brain introvert category (parelli) but I don't know how to judge personalities yet. How do I know if he's playful- how do I play with a horse? How do I know if he is bored or just resistant? What I do know: he is very stubborn. Shuts down when pushed, has very little respect for people on the ground, only wants what he wants. He seems to be a very smart boy, but that too could be my biased opinion. Oh and he is extremely food motivated. Any help or tips are greatly appreciated. Please don't recommend I watch Parelli videos. I can't afford them and have no other access to them other than buying.\nI guess my question here is; how do I keep him from being bored and how do I motivate him? I keep reading that I need to out think him and try reverse psychology. How do you do this if you can't get inside his head to begin with??\nI can't help with the brain personality thing, but I would actually recommend trying a blend of various trainers. 
Stay consistent with your horse, yes, but explore methods.\nI don't really recommend Parelli. A lot of people have had success with him, but unless you really want to get into all that scene and spend all that money. I'm not saying Parelli doesn't necessarily work - but I think it's very much sold out.\nI think the best way to deal with horses is to use common sense. I wouldn't really buy into the \"left brain\" and \"right brain\" type thing, but be open to things. I've heard mixed reviews about Clinton Anderson, so approach him with caution, use your common sense.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-1", "d_text": "- Asymmetry where problem localised to only one side.\n- Specific ‘triggers’ e.g. particular movements in horse, specific elements of work including particular styles of jumps or individual dressage movements or changes in balance in rider and topography (e.g. down or uphill).\n- Consistency of ‘misbehaviour’ i.e. happening to same extent in same way each time.\n- Sudden onset of ‘misbehaviour’ where no warning given.\n- Sudden onset aggression towards all people, all horses and other animals.\n- Stereotypic behaviour (‘stable vices’) either suddenly starting or increasing in frequency (due to rising levels of natural painkillers) or a reaction to frustration or distress in genetically susceptible horses.\nOther signs due to horses' natural instinctive behaviour patterns\n- Horse initially moving away from the source of the fear or pain when space is available to do so, including turning away and standing head down in the corner when tack is brought.\n- Defensive aggression – kicking out with one hind leg at a time.\n- Bolting where the horse goes as fast as he can until he slips, falls, collides with something or is exhausted is nearly always a sign of genuine fear, pain or a physical problem.\n- Most horses which rear to the point of falling over do so out of fear or due to pain or physical problems.\n- Rushing due to 
difficulty reining back or turning in confined spaces.\n- Rushing, flattening or crashing fences or slow careful ‘determined’ refusals. Horse may jump better cross-country than show-jumping.\n- Continued ‘misbehaviour’ even when it results in predictable unpleasant consequences.\n- Any slip, trip or fall, however minor, may lead to physical problems and major behaviour problems a few days later, even though the horse may have seemed fine at the time and performed well immediately afterwards.\n- Unusual body shapes, e.g. ‘grass bellied’ or ‘herring gutted’ or appearing ‘overdeveloped’ in front and ‘underdeveloped’ at rear, may be long term postural adaptations in response to discomfort from poorly fitting tack or physical problems.\n- Repeatedly and persistently knocking poles and reluctance to move onto different surfaces.\n- A history of poor performance before the behavioural crisis or unexplained ‘lethargy’.\n- Stilted, short, choppy strides, not taking the maximum stride conformation and fitness allow, and difficulty bending, feeling stiff despite correct schooling.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "The funny thing is that even though he is gentle with humans, he is the direct opposite with other horses. He came to me as a stallion at 6 years of age (now gelded), so I am sure that some of his aggression has to do with that; but I also see he lacks some social skills and rules the herd with an iron hoof. I am guessing he has been isolated for much of his life. He is the smallest horse I have (less than 15hh) and even the 17hand Percherons move when he twitches his ear or nods his head. He really wants horse companions, but doesn't understand mutual grooming or even just how to play around.\nI was getting very frustrated after 6 months of having him because he really didn't seem to be breaking through that trust barrier.
He was still hard to 'catch' and he was still slamming against the far wall of his stall when I entered (open corral type 12x12). Just moving a hand would send him into panic. On top of all that he is an asthmatic, which is triggered by an allergy to dry hay. This is a real management issue since I don't have a pasture yet (just a very large pen open to a 40x50 pole shed). I also free feed my horses with round bales. I had to keep Brego separated from the rest of the herd (he was right next to them, but not in with them). I also couldn't put in a buddy for him because he would beat them up. His hay had to be dunked in water. As long as I did that, he was fine; but what a hassle - especially up here in WI.\nA few things happened. First, a new friend of mine, who practices homeopathic medicine, came over and treated Brego homeopathically. The change was rather dramatic and after about a month I had a different horse. Still scared, jumpy - but less. He 'looked' happier and more at peace. I could enter his stall without such a dramatic response and he was becoming easier to catch. Brego was also getting into the taste of sweet feed. When he first came to me he would not eat anything but hay. I could leave sweet feed in with him for days and he would ignore it. He would eat ground corn and plain oats, but not enthusiastically.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-11", "d_text": "Petting, cookies, and hugs do not mean much to a horse. They just want to be a horse. They do not want to be a person or a dog; they only know how to be a horse and that is all they want to be. In order to allow them to be a horse and be comfortable with you, you must understand them. They are flight animals. They are prey and we are predators. Mother Nature told them: at the first sign of trouble, run - do not think, do not look, do not ask what or why, just use your legs and run to live. Their best defense to survive is speed. 
Run from danger at high speed and you live. Stand around and think and you die. It is that simple for the horse. So when a horse gets scared or tries to run, do not get mad at them, do not be rough with them and do not label them as mean, dumb or stupid. They are simply being a horse, trying to survive in our world. A horse is pure horse and that is all they know. Run when in trouble to stay alive. In the horse world and in a herd, horses are always talking. Most people do not see it or know it, but horses are great communicators. The way they move, how they hold their ears, where their head is, how their body is positioned, what their tail is doing, the speed they are moving, whether they are looking with one eye or two - all send messages to other horses. If you don't learn this language, then your horse will be talking to you and you will not hear him or understand him, so he will either stop talking or get more aggressive to try and get you to listen.\nIn the picture below, you will see two Prey Animals bonding. They find comfort and safety in numbers; they are herding and bonding - what those who do not know better call buddy sour or herd sour. This is just prey animals seeking comfort and safety by being together; they are using numbers to survive. They have doubled their ability to hear or see Predators. To the unknowing it may look cute or nice, but these are basic survival skills, and if you don't see that then you will misunderstand the horse, since it is also a prey animal.\nA horse communicates to let you know he understands what you are saying or to make you do something. 
This process is often misconstrued.", "score": 24.99303940707155, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "Signs, causes and how to manage stress in horses.\nStress is a normal physiological process that occurs in all animals, and just as it affects us humans, stress can also affect your horse.\nShort-term stress is actually an adaptive mechanism and is not always bad, because it helps horses escape a threat or cope better with their environment. Basically, if horses didn’t get stressed sometimes, they would be more likely to get into tricky situations. However, the problem comes when horses are stressed for longer periods of time, without being able to get away from what is causing it. This is when stress begins to have damaging effects on your horse’s body, including reduced immunity, inhibited performance and, in particular, gastric ulcers (EGUS).\nSo, how can you tell if your horse is stressed, what are the most common causes and what can you do to help ensure your horse’s stress levels are kept to a minimum for optimum health and performance?\nSigns of stress in horses\nWhen your horse is exposed to something that elicits a stress response, commonly known as a ‘stressor’, their body releases a hormone called cortisol which promotes behavioural and physiological changes that result in some common signs and symptoms of stress, including:\n- Vocalisation or ‘whinnying’\n- Tail swishing\n- Reduction of appetite\n- Stereotypies e.g. crib biting or box walking\n- Flared nostrils\n- Elevated cortisol levels\n- Elevated heart rate\n- Gastric ulcers\nCauses of stress in horses\nIf your horse shows any or a combination of these signs, then it is likely they are experiencing stress. 
Once you know your horse is potentially stressed, the best plan of action is to find out what is causing it and take appropriate action to reduce stress levels.\nThere are many causes of stress linked to environment, routine and management, but some of the most common ones include:\nJust as we can get stressed and fearful of performance, so can our horses. In addition, the noise, atmosphere and other horses at a competition or show are major stressors which can affect your horse. Research has shown that cortisol levels rise at competitions prior to performance in a range of disciplines and that higher or prolonged levels of stress negatively affect performance (Becker-Birk et al. 2013; Lewinski et al. 2013; Christensen et al. 2012).", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "My mare is the talk of the barn. She's extremely affectionate and loves to get attention. She is more on the \"hot\" side when it comes to personality. But, she is still young and energetic. She's always the first to greet you in the field, and cuddles as often as she can.\nShe's extremely smart, but horribly stubborn. She often knows what you want/need, and if it is something that she is not fond of, believe me, she will try to find a clever way out of it. Just keep in mind that they are very intelligent!\nLove my ASB. :)", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-1", "d_text": "Change of environment\nHorses in an established herd have access to familiar resources, safety, social relationships and the opportunity to be a horse, which is essential for their health and well-being. Changing your horse’s environment by moving to a new home challenges all of these factors simultaneously. 
Horses thrive when in familiar surroundings with familiar horses, so taking them out of this and putting them into an unfamiliar environment can be one of the most stressful things a horse can experience.\nIntroduction of new horses\nIntroducing new horses into an established herd can be challenging, not only for the new horses, but also for other herd members. Within this herd, a distinct hierarchy will exist that allows all members to know their place, often forming bonds with similarly ranked horses and enabling everyone to live well together as a group. When a new horse arrives, this hierarchy is disrupted, and each horse needs to re-establish their place within the herd. This upheaval and uncertainty can be stressful for all horses, but particularly those of a lower ranking that will need to defend their position over the new horse.\nWhat can you do to help reduce stress in your horse?\nThe good news is that stress can be reduced with the right management and support. Understanding your horse’s mental and physiological needs, and giving them as natural a life as possible that reflects these needs, will help manage any potential stress.\nIn horses who suffer from performance stress, good management can help keep them calm and focused. Research has shown that more experienced horses are much less stressed at competitions, so try and ensure your horse is as ‘experienced’ and well prepared as can be for future competitions. Getting your horse out and about from a younger age is therefore essential to help them become familiar with all the sights and sounds of a competition and acclimatised to what is expected of them.\nCreating similar noise, stimuli and the general excitement that you find at shows by taking your horse to clinics and group lessons is a vital step in the competition horse’s education. And getting your horse familiar with signs and the many strange objects that they might see at a show, so it becomes part of their routine, can also help to reduce stress. 
Routine is very reassuring to horses, so it’s important to try to stick to the same routine at the show as you would at home – use the same tack and feed and create a pre-performance warm up routine that is the same at every competition, to help keep your horse calm and prepared before entering the ring.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "Elevated Sensitivity to Tactile Stimuli in Stereotypic Horses\n| Contributor(s):: Sabrina Briefer Freymond, Déborah Bardou, Sandrine Beuret, Iris Bachmann, Klaus Zuberbühler, Elodie F. Briefer\nAlthough stereotypic behaviors are a common problem in captive animals, why certain individuals are more prone to develop them remains elusive. In horses, individuals show considerable differences in how they perceive and react to external events, suggesting that this may partially account for...\nIn-Person Caretaker Visits Disrupt Ongoing Discomfort Behavior in Hospitalized Equine Orthopedic Surgical Patients\n| Contributor(s):: Catherine Torcivia, Sue McDonnell\nHorses have evolved to show little indication of discomfort or disability when in the presence of potential predators, including humans. This natural characteristic complicates the recognition of pain in equine patients. It has been our clinical impression that, whenever a person is present,...\nEvaluation of several pre-clinical tools for identifying characteristics associated with limb bone fracture in thoroughbred racehorses\n| Contributor(s):: Anthony Nicholas Corsten\nCatastrophic skeletal fractures in racehorses are devastating not only to the animals, owners and trainers, but also to the perception of the sport in the public eye. 
The majority of these fatal accidents are unlikely to be due to chance, but are rather an end result failure from stress...\nHorses Failed to Learn from Humans by Observation\n| Contributor(s):: Maria Vilain Rørvang, Tina Bach Nielsen, Janne Winther Christensen\nAnimals can acquire new behavior through both individual and social learning. Several studies have investigated horses’ ability to utilize inter-species (human demonstrator) social learning with conflicting results. In this study, we repeat a previous study, which found that horses had...\nOct 06 2022\nAnimal Assisted Interventions Webinar Series - Decoding equine behavior for wellbeing management: An introduction for AAI professionals\nThe Welfare of Traveller and Gypsy Owned Horses in the UK and Ireland\n| Contributor(s):: Rowland, Marie, Hudson, Neil, Connor, Melanie, Dwyer, Cathy, Coombs, Tamsin\nTravellers and Gypsies are recognised ethnic groups in the UK and Ireland. Horse ownership is an important cultural tradition; however, practices associated with poor welfare are often perceived to be linked to these horse owning communities.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-42", "d_text": "Many times all three of these will be different and before you know it, you are saying my horse kicked me for no reason...\nHorses always do better if they are together with other horses. A horse's instincts prevent him from being equal to another horse. They must be higher or lower, so when people try to treat a horse like a pet, the horse will see this weakness and then think he has to be in charge; it is what horses do - the strongest has to take charge. When putting new horses together they may have some issues at first, but they will work them out in a few days and once it is done, it is done, except for minor ear pinning and squeals. The bigger the herd, the safer horses feel. More horses equal more eyes and ears to spot danger. 
In larger herds you may have smaller sub-herds where strong leaders have branched off and taken a few mares with them. A lead horse and stud can only keep control of so many horses, so as the herd grows, it is easier for a few mares to be taken by another leader.\nIf you ever watch horses in a pasture, most will be eating, but one or two will have their heads up looking around for danger. Even while eating they are all aware and looking for danger. Horses move slowly, eat slowly and look around slowly. However, the second one horse spots danger or something unknown, his head will shoot up in the air fast and high to focus on the threat. This quick movement is a warning to all others. Within a second, all horses will raise their heads to check out the danger and will look in the direction that the horse who warned is looking. Sometimes a stud or head horse will snort loudly or blow to warn the herd and to warn the threat. When horses spot something new, they always conclude that it is danger until they prove otherwise. So when you hear someone call a horse spooky or scared all the time, that shows they do not understand horses. All horses are spooky; it is how they stay alive. Why do wild horses have such good feet? Because the horses with bad feet are never leaders, don't last as long, can't keep up with the herd, are usually eaten first by predators and don't get to reproduce.", "score": 24.296145996203016, "rank": 52}, {"document_id": "doc-::chunk-43", "d_text": "(Natural Selection 101) Only wild horses with good feet survive to reproduce and pass on the \"good feet\" gene. Why are horses so spooky? Exactly the same reason: the non-spooky horses are too slow to react, too calm, do not run or react fast enough and are eaten first, so they are not around long enough to pass on the \"calm and non-spooky\" gene. Fear, alertness and fast reactions keep horses alive. 
Which is why you should never blame a horse for being scared or fearful; it is their nature.\nIf you understand a horse you will know that you can't make a request halfway and accept half-right answers. There can only be one right answer in order to be consistent and so you don't confuse the horse. Sounds and words have to mean the same thing every time or they lose meaning and will mean nothing. A kiss means canter to my horses, period. I see so many people who want a kiss to mean run, move, come, or look at me; people kiss to say hello to a horse or to get a horse's attention, and then get mad at the horse if he does not know what a kiss means. One sound for one action. The same thing with a click. One well-known clinician (JL) uses clicks for everything. He clicks to move a horse, he clicks to get a horse's attention, he clicks to trailer load, he clicks and clicks and the horse still does what he wants. Then people who watch him copy what he does, and it does not work. Horses learn to adapt, so they will learn to ignore cues that mean nothing or that mean too much and have no meaning. Be aware of what you do and what you are teaching; awareness will improve your horsemanship skills.\nThis picture below is a good example of a herd with a lookout horse. The dark brown horse who is standing (Big T or Tanner) is my mustang.\nTanner is standing guard and watching over the (his) herd. The horse on the left, which looks dead, is my other horse, Buddy. It is easy to see that Buddy is very relaxed and calm with his \"dead horse\" imitation. Tanner, on the other hand, is alert, not relaxed and ready to react to any sign of danger. 
You will not see Tanner sleep like Buddy.", "score": 23.277216273274004, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "Today’s horses are experiencing more stress than ever!\nWhether they are show horses, endurance horses or even horses used for camping, all are subject to added stressors.\nWith the show season in full swing it’s a good time to stop and take a look at how stressors may affect competitive horses.\nHorses are very emotional creatures who are adversely affected by stress.\nUnderstanding this is imperative to having a healthy and happy competitive equine. How individual horses respond to potentially stressful situations differs, but many health ailments originate from stress of one kind or another.\nStress can be defined as a general term which describes the combination of psychological and biological responses of an animal during real or perceived threatening circumstances. While the physiological response to stress is a highly complex subject, and certainly is not completely understood, scientists agree that there are two types of stressors.\nPhysical stressors are things such as injury, over-exertion or a change in the environment. Psychological stressors typically include situations that make the animal anxious or fearful. Uncertainty and fear of the unknown can be categorized as two of the major psychological stressors. Competing horses, and even horses who travel for seemingly leisurely activities such as camping, are exposed to both physical and psychological stressors.", "score": 23.030255035772623, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "Our horses possess unique qualities, just like people. We are so lucky to have a variety of personalities to work with.\nPeachy (Peachy Keen) is a 16 year old appendix thoroughbred. She has been with Britt for 7 years. Her history is somewhat unknown, but we know she is a rescue from a major seizure of neglected horses some years back. 
She has CAO (Chronic Airway Obstruction) and takes daily allergy medication. With the right environment and diet, Peachy’s disease is under control. She is one of our very reliable riding and relationship horses. She can be very low energy and sets anxious people at ease, but may require a person to increase his/her energy to achieve connection.\nJesse (Jesse’s girl) is a 16 year old quarter horse. Jesse has been formally trained with natural horsemanship principles, and has historically been the reliable “go-to” horse at her old home. Jesse is another reliable riding horse, and she is learning daily how to stay present instead of dissociate with her relationship partners.\nMonty is a 17 year old Tennessee walking horse. She had a hip surgery as a young two year old, and has never been able to be ridden as much as most horses because of this. She is a gentle giant and loves attention from everyone at the farm. Monty is learning that she cannot fit in our laps.\nSusannah is an 8 year old Tennessee walker/racking horse cross. She seems to really love the children that come to the farm in particular. She is more suited for advanced riders, as her gait makes her fast and energetic. She is also a great relationship partner but can push your boundaries. She is also learning to think things through rather than react.\nTully is a 22 year old racking horse. She is Susannah’s mom. She and Susannah have lived together with their past owner until she needed to donate them due to health concerns. Tully is also fast and fun, but can also be very gentle and kind.\nGabriel is a 6 year old appendix thoroughbred. He has hock issues that have made it impossible to race or be a show jumper, so he has found his home with us. Gabriel is a sweet, passive gelding who really lets everyone run all over him. 
He could use some assertiveness training with horses.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-33", "d_text": "Studies have shown that the domesticated horse did not differ substantially from the wild horse, such as Przewalski's horse, either physically or psychologically.\n\"Horses need physical contact with other horses, and social isolation prohibits the horse from engaging in mutual grooming, play, and simply just being near other horses they are bonded with.\"\n\"Most domestic animals are social animals. That is almost a requirement for being domesticated.\"\nHe discussed ways horse owners and managers could meet the species-specific needs of the horse in a modern world, including group housing alternatives and pasture enrichments, such as dirt to roll in, trees and branches to forage on, and early socialization in mixed sex/age herds.\n\"I hope I've made it pretty clear that what we need is much more information on how horses are housed, how much they get out either alone, and with other horses, and how much they are ridden,\" Ladewig said.\nHe implored those attending the conference to send research students out to acquire much-needed data in this area.\nRelease Teaches: Horses learn on the release of pressure. This takes time to learn, understand and master. Release or retreat has a lot to do with timing and feel. Whenever you stop applying pressure, the horse thinks that whatever he was doing at that moment is the right answer. That is why you never stop pressure during bad behavior. I see too many people tie a horse and when it starts pulling or freaking out, they rush over and untie it. This is bad timing and bad training. Only reward a horse (that is, release or stop the pressure) when you get the right response, not during the wrong response. This is also called advance and retreat, advance meaning pressure and retreat meaning release of pressure. Soft hands make soft horses. Nervous owners make nervous horses. 
Owners that do not understand pressure and release confuse a horse, and the horse never learns the right answer with timing and feel.\nStarting Horses: In the old days, horses were not started until they were in their 4s or 5s; however, they were handled a lot at two and then put out to pasture to learn herd behavior and respect from horses, not humans. Good horsemen would teach everything on the ground, so when they got in the saddle there were no surprises to the horse.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-1", "d_text": "Ultimately these definitions recognize this ‘problem’ as a mental health affliction for the animals, as do Hothershall & Casey (2012) who term the behavior ‘pathological’.\nStereotypies in horses also highlight a welfare concern, as it is widely recognized within the literature that horses displaying these behaviors do so because there is some deficiency in their environment, living or work conditions, either past or present, that has triggered this behavior, often associated with management risk factors and stress (McGreevy, 2004, Hothershall & Casey, Mason, 1991, Hockenhull & Creighton, 2014). In other words, I see these behaviors as red flags that the horse is a) a possible welfare risk, b) psychologically ‘ill’ for want of a better word, c) a horse in distress who needs urgent assistance with his living situation, and d) usually with an owner in need of practical and emotional support.\nThe other side of that coin is the approach of horse-owners. 
I certainly don’t say that in condemnation, because attitudes are formed on the basis of learning from others, social influence and the words used to create perceptions of what is going on.\nFor the vast majority of horse owners who consult for help, these behaviors are known as ‘stable vices’.\nTo see the problem, it’s worth looking at the Oxford English Dictionary’s definition of a ‘vice’:\nVice: “depravity or corruption of morals; evil, immoral, or wicked habits or conduct; indulgence in degrading pleasures or practices”.\nIn this simple exchange of words our horse has gone from suffering a pathological, welfare-based mental health problem to being an immoral, indulgent being who is deliberately engaging in wicked practices (and, many humans believe, probably deliberately to annoy us).\nThis creates a problem for the behavioral practitioner, because immediately you have beliefs and perceptions contradictory to those of the owner (often, but obviously not always). In compassion for the owners, it’s also worth noting that many owners who have their horses on livery yards are under huge social pressure to resolve the problem. A stigma is often attached to these behaviors (McBride & Long, 2001), often related to the anecdotal belief that horses imitate these behaviors, probably augmented by the belief they’re being ‘naughty’.", "score": 22.87988481440692, "rank": 57}, {"document_id": "doc-::chunk-5", "d_text": "But, remember, tranquilizers are not training substitutes, and some horses won’t learn anything while under their influence. Plan a proper training session in the near future. Note: If you use tranquilizers to facilitate care, be sure it’s under veterinary guidance and far enough ahead of competition to avoid breaking the show or event’s rules for using performance-enhancing substances.\nConvincing horses to not be afraid of other animals is usually an uphill struggle. 
If they’re afraid of cows, pigs, goats, dogs or wildlife, often there isn’t much you can do, except try to avoid them and hang on if you can’t.\nThe anxiety you have the best chance of changing is that caused by horses who are worried about the work they’re doing or the ride they’re getting. Everyone who’s trained more than a few horses has come across horses who don’t like to do certain things (such as jumping) but love to do certain other things (such as herding cattle).\nYour job as rider or trainer is to determine if they can be convinced to do the job you want them to do or if the horse (and you) would be better off by selling him to someone who wants to do the same job. Almost always, both horse and rider are far, far less anxious if they’re both doing a job they like.\nFrom a training perspective, it’s often extremely challenging to meld a partnership between a horse and rider who aren’t suited. Perhaps it’s a mismatch in style - the horse is hot-blooded and his rider is so busy in the tack that the rider is, in effect, shouting at him all the time. Or perhaps the horse and rider are too similar - each is green, nervous or unambitious.\nOne, or both, has to change, and sometimes that’s not possible. And, although it’s always far preferable for riders to truly work to improve their skills and suppleness and to expand their experience, sometimes trainers just have to admit that a change needs to be made. Sometimes, though, riders can’t bear to make the change.\nAs trainers, it’s our job to tell our students when things are going wrong. Make riders aware of the tremendous challenges they’ll face with their current mount given their respective personalities. Honestly explain what changes will have to take place to achieve harmony. But, ultimately, it’s up to the student to decide.", "score": 22.516485873526644, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "The competitive Wood horse temperament loves to play rough, even if it means he hurts himself. 
Most horses will back off when in pain but the Wood horse’s instinct is to power through. In the case of the horse with the torn shoulder, he did not slow down when he snagged his skin on an exposed gate latch. Instead he pushed on through the partially open gate, collapsing it and several posts.\nMy Wood mule did similar harm to himself when he refused to slow down while galloping over sharp rocks. The result was a broken coffin bone. The competition Wood temperament horse will continue to run or jump even when injured. They feel the pain but love competing so much they keep going. Think Barbaro and the injuries he sustained racing because he could not be pulled up.\nSo what to do to protect the Wood horse temperament from himself:\nScan stalls and pastures for any possible danger\nAvoid overfeeding and underworking\nConsider relaxing herbs such as Relax Blend before turning out\nTurnout with more sedate horses\nProvide safe toys like giant rubber balls (expect to replace these regularly)\nKeep free-choice hay available to keep your Wood horse occupied\nThe other side of the self-destructive behavior of the Wood horse temperament is what makes him so fun to compete on. For the Wood horse there is no race too long or jump too high. He loves the challenge. Madalyn", "score": 22.516485873526644, "rank": 59}, {"document_id": "doc-::chunk-2", "d_text": "Being able to adapt and therefore survive as a species shows us that even as the horse grows, it can be socialized to new environs. Socialization is being adapted to new environments, a task at which the horse is master. How this is done is through the equine’s ability to assimilate.\nOverriding associated memories by using assimilated imprinting is possible because of the generally friendly, gregarious and curious nature of the horse. The feverish foal can be taught to have no fear of the veterinarian in the white coat if its handler wears a white coat during the daily routine of feeding and taking care of the horses. 
Over time, with patience and careful handling, the foal soon learns there is nothing to fear about the person in the white coat.\nOne might ask: “Why go to all the trouble? Wouldn’t it be easier to have the veterinarian remove the white coat when taking care of the foal and other horses?” Yes, it would, but the trigger of unwanted behavior still remains in the associated memory of the foal and you don’t want it to re-fire three years later when the foal is grown and about to run in the most important race of its life. Unresolved triggers of bad behavior can have a profound impact on the racehorse, resulting in it acting up in the paddock or balking on entering the starting gate.\nSometimes behavior is not a reaction to a trigger but is merely a tendency or trait of the horse’s Individual Horse Personality. The key to knowing the difference between personality traits and triggers of bad behavior lies in one’s ability to recognize the triggers that dictate certain behaviors or stress. …\n…Often, horse owners and trainers do not know the difference between an expression of the Individual Horse Personality and behavior triggers. They only see the effect those triggers have on the horse. In the example of the horse balking at entering the starting gate, it is logical to assume that the starting gate is the trigger of the horse’s behavior. But, in reality, the white windbreaker worn by the gate handler is the trigger of behavior causing the horse to balk at entering the gate. Long ago, when the horse was a feverish foal with an abscess, it learned to fear anyone wearing a white coat.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-37", "d_text": "However, horses are big, flighty, scared, and reactionary animals that move very fast and with a lot of power. Enjoy your horse but be aware that he can hurt you very badly, very quickly, and even though he will not mean to hurt or kill you, it can happen. A horse will kill himself trying to save himself. 
An old horse saying: "There are two kinds of horses, those that are hurt and those that will be hurt."\nYoung horses are very curious. This is good. Many people stop this and mistake this curiosity for disrespectful or bad behavior. Let your horse check out things and be curious. Horses love to play and this is all learning to them. If you only have one horse, I would suggest you get a sheep, cat, goat, or another horse. Horses are herd animals and when they are kept isolated, they develop bad habits and health issues caused by stress. A horse can't sleep well and will not relax if alone. They need to know there is an extra set of eyes and ears to look out for danger so they can sleep and relax. Horses, like dogs, need a job. A horse in a herd is either a leader or a follower. This keeps them mentally fit and will help them stay safe and healthy.\nThinking like a horse is the key to understanding and having a good relationship with them. We have to think like a horse and not expect the horse to think like us. We are predator-type animals and they are prey; they live in fear of being hurt, eaten or captured. Their primary defense in life is their speed to run. Controlling that is not easy and it cannot be done with strength; it must be done by thinking and understanding. Horses do three things really well: they run, eat and poop. Horses are scared and flight animals that run for safety and security. They will not eat unless they are relaxed and feel safe. If you just keep feeding a horse, they will eventually warm up to you. Do not confuse their comfort with you for respect or love. If you are feeding them twice a day, it would be better to feed them smaller portions 3 or 4 times a day; the more they see you as a food provider and get used to you, the quicker they will feel safer with you.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-1", "d_text": "Status and how the outside world thinks about us is important to people. 
The horse couldn’t care less, but to us it’s a different story: you may not think about it every day, but it is part of the way we communicate.\nTake away the pressure\nFollowing orders comes more readily when a reward is in store. If certain behaviour or movements are rewarded according to the standards of a horse, he will repeat them. Unfortunately, this is just as true for bad behaviour. So, if he sees an opportunity to tear himself loose and disappear into a juicy green pasture, you can be sure he’ll try again. If he finds you scary, then leaving you is already a feeling of reward, and he will want to repeat it.\nBehaviour that yields something is repeated\nFor a horse, removing something that he finds unpleasant is a feeling of reward. You should take that “unpleasant” very broadly. Pressure on his side with your calf is not painful, but reducing that pressure is a reward for him. Reducing pressure on the bit certainly feels like a reward to him.\nReward in this way is most effective in training, but it is important that you are aware of what you reward. It is not helpful if you unconsciously release pressure in response to something you don’t want. A well-known example is a horse that reacts anxiously to the sound of electric clippers. If you turn the clippers off when he spooks, you will teach him that the scary machine will go away if he reacts fearfully…\nCommunication between people is not always easy. If you know that there are four types of behaviour, each with a different motive, you realize misinterpretation happens a lot. We use spoken language with all kinds of words to argue, negotiate or mitigate something, but it is often unclear what our real intentions are.\nHorses like simple language. They like black and white. They don’t do much with grey areas. That is why it is important to be clear and above all consistent when dealing with horses. If you want something from him, always ask the same way.\nMake sure you always respond to something in the same way.
If you put your legs on to ask him to go forward and he doesn’t go, give that same aid a little harder. If he still doesn’t go, you may even have to touch him with a whip.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "Her research, which has been published in the Journal of Comparative Psychology and the Journal of Equine Veterinary Science, shows that horses are much more complex and intelligent than they're given credit for.\nBut how do you find out what a horse thinks?\nHanggi has designed non-invasive tests that use positive reinforcement, like food rewards, to see what horses know.\nFor instance, Hanggi wanted to explode the myth that horses can't transfer information from one side of the brain to the other because the two sides aren't connected. The theory didn't make sense to Hanggi because anatomy shows that the two sides of the horse's brain are connected. So she set up a board with two openings in it that revealed different shapes. Then she blindfolded one of the horse's eyes.\n\"We [taught] the horse to choose one of the two different shapes until it consistently chose that shape,\" Hanggi explains. \"Then we switched eyes and tested it, and it chose the right one right away.\"\nAnother aspect Hanggi tested was \"categorization learning\".\n\"That looks at how animals sort their world,\" Hanggi explains. \"It's how they put objects and events in classes or categories.\"\nWith the same board, Hanggi and her team would show the horses different shapes, some solid and some with an opening in the centre. That type of sorting allows an animal to categorize food and non-food plants, or predators and non-predators, or can be applied in the training of horses.\nHorses need mental challenge and time out of their stalls\n\"We found they were very good at classifying objects,\" Hanggi says.
\"That helps them deal with all sorts of external stimuli coming to them. Instead of being inundated with all sorts of different things, they can sort them into classes so that their learning mechanisms are not overused.\"\nHanggi also administered a test to assess how well the horses could \"learn to learn.\"\n\"We used reversal discrimination learning,\" Hanggi says. \"That's where the horse looks at a circle and a square, and it [is supposed to] always choose the circle. And it takes a hundred or so cycles before it learns to pick the circle. Then you reverse the contingencies, so it has to choose the square [to get the reward]. At that point it will still go to the circle, but eventually it will switch to the square.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-3", "d_text": "Available in an easy liquid dose, EquioPathics Pre-Performance Stress promotes relief from the fear of performing, changes of stable or environment, the introduction of new animals, and other stressful situations that cause signs of anxiety or fear.\nWhilst you cannot completely eliminate stress from your horse’s life, taking steps to reduce the impact of potential stressors and promote calmness can help mitigate the challenges of different situations and keep those stress levels to a healthy minimum.\nLisa Elliott, MSc Equine Science, BSc Biology", "score": 21.43673747588885, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "Many behavioral problems are associated with confinement. Under free-ranging circumstances, horses wander and spend >60% of their day foraging. The remainder of their time is spent resting (standing or lying down), grooming, or engaging in another activity. This same pattern is seen under barn conditions; even with free choice of grain, horses will eat many small meals a day. Horses are highly social animals that require contact with others for normal daily maintenance and well-being.
Isolating horses can lead to the development of problems. The main goal of managing behavior problems in horses is to identify the deviation from normal equine behavior and correct it.\nAggression is a common problem in horses and includes chasing, neck wrestling, kicks and bites, and other threats. Signs of aggression include ears flattened backward, retracted lips, rapid tail movements, snaking, pawing, head bowing, fecal pile display, snoring, squealing, levade (rearing with deeply flexed hindquarters), and threats to kick. Submissive horses respond by avoiding, lowering the neck and head, clamping the tail, and turning away from the aggressor.\nAggression to People:\nThis behavior is seen mostly in stalls, in which the horse feels confined in a small space that is also easily defended. The varieties of aggression toward people include fear-based, pain-induced, sexual (hormonal), learned, and dominance-related. Some horses, especially young ones, play with each other while showing signs of aggression such as kicking and biting. Although benign to other horses, this can be dangerous to people.\nThe first step in managing equine aggression is identifying the cause and, if possible, removing it. Training and positive reinforcement to establish control over the horse are also used, along with desensitization and counterconditioning. Dominance-related aggression in horses is different from canine status-related aggression (also known as dominance aggression) in that it is not context-dependent. Environmental management is important as well; good management should include sufficient resources such as space, food, and water. Some horses are considered to have pathologic dominance aggression; they will attack other horses and people that are near them.
These horses should be separated completely from people and other horses, and they have a poor prognosis.\nAggression Toward Other Horses:\nAggression toward other horses is mostly associated with sexual competition, fear, dominance, or territory (protecting the group and resources).", "score": 20.327251046010716, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "Now that I’ve ruined your girlhood fantasies: I wanted to be rescued by the knight’s white steed, little did I know. If we don’t hug their faces, maul their noses, and suck the air out of their nostrils, what can we do? Redefine liberty in all endeavors. Do more than sit and read. How about a relationship on his terms? What I am learning from you, Anna, is helpful as I work to understand the horse. What I am left with is that anyone who truly wants their horse to be content will let them “be a horse”. So, from that I conclude that we shouldn’t ride them or handle them but instead ensure their safety and physical needs are met and then let them alone. Any other intrusion, and you’ve made this clear, is anxiety-inducing and stressful. So why do you continue to own horses? With a barn and tack and expectations? I will continue to be a devoted reader, but I am struggling to decode your real message. Humans are extremists. On top of that, we take everything personally. Every moment isn’t forever, it’s just a snapshot-second. We are looking for a tendency on the continuum. Can I give you my definition of stress? Being alive. Do I think horses should be abandoned? No, they’d hate that. Do we over-handle them? Yes. Ride with Funktionslust. Horses are always on a tendency between boredom and overwhelm. Because just being alive is stressful, times of stress are when I can help the most.
The lifting of the withers should be visible. Some horses will only be able to do this after limbering up or after a chiropractic session. Other horses can’t do this at all, which may or may not affect his rideability.\n- Looking for this reaction is only one piece of a much larger puzzle! If this horse moves onto the PPE stage, it’s worth having your vet check on this reaction and others in the hind end.\nNotice how a horse reacts to touch and pressure!\nHow does your possible new horse react to pressure on his muscles? Especially the saddle area?\n- A horse’s reaction to massage and grooming can show you more about his personality or indicate muscle soreness or discomfort. Or tell you about his skin sensitivity.\n- Ill-fitting saddles, spine issues, and a hard day’s work are all possible reasons.\nWhat’s the reaction to handling a horse around his ears, sheath/udders, and other sensitive parts?\n- There can be a lot of past trauma, as is the case for ear-shy horses that have been ear-twitched. Which is an absolute NO, by the way. Injuries can also prompt a horse to protect certain areas.\n- Other horses might decide that certain parts of their bodies are off-limits. This speaks more to temperament and trainability or general sensitivity to handling.\nHow does this horse stand to have his temperature taken?\n- By now, you are sensing a pattern – what’s the reaction, and what is the possible reason for this reaction?\n- Is this merely a case of horses training humans not to do things, or a horse that’s never had this done before? Or is there something wrong with his tail?\nWatch his overall demeanor while he’s groomed.\n- Does he seem calm, agitated, distracted, angry, or engaged? Maybe you see a quiet horse, and you want a fire-breathing dragon! Or vice versa.\nWatch it all if you can!\nHow is the horse during the tacking-up process?\n- It might be helpful for you to see a prospective horse being tacked up! 
Look for three main things:\nWhat is the reaction to placing the saddle on the horse’s back?", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "Horses are positively interesting characters! Like people, they have their own personalities and appearances. They like to be with other horses and are upset when they are separated from those they care about. They like to be included in a group and will follow each other. Sound familiar?\nEarly Thursday morning, after sleeping on the Maple Creek rodeo grounds in the living quarters of our horse trailer, I awoke to a warm Saskatchewan summer sun painting the prairie with light, the scent of silver sage circling through the fresh air, and the sounds of several horses calling and whinnying, snorting and blowing.\nBeing next-door neighbours to a penned-up herd of rodeo horses (what we don't do for the ones we love), I decided to head over for a visit. These professional horses are used for saddle bronc and bareback riding. How often does a gal get to admire about one hundred horses at a time that are lounging and grazing, following and rolling, cuddling and teasing, visiting and discussing their previous nighttime performances and preparing strategy for their next rodeo?\nWhen the four-legged beauties spotted my husband, my dog and me, they naturally, out of curiosity, gathered near to check us out. There were bays, chestnuts, greys, red roans, whites, and paints. Would you like to meet a few of the interesting characters?\n|Totally Radical, Dude!|\nIt seems horses are no different than people. They tend to hang around others who are similar to themselves. In their spare time, these two rebellious, teen-aged, long-side-banged horses enjoy an afternoon of skateboarding and can be spotted doing kick flips! You should see them; it is quite something.
Like totally radical, Dude!\nThese two white faced romantically involved horses are nuzzling each other and are wondering if they will ever have a real future together? At the moment, they only have eyes for each other and tend to ignore what others around them are doing!\n|Planning Their Next Move|\nThis small troupe of proud sentinels are standing on guard forming a fortress wall around the two horses chowing down on the freshly scattered hay. All directions are allocated and scouted!\nIn fact, what they are really doing is planning their next showboating stunts that they intend to do on the end line of the next Canadian Football League game when the Saskatchewan Roughriders kick some Calgary Stampeders' butt!", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "Despite this, empirical studies on the welfare...\nAssociations between driving rein tensions and drivers’ reports of the behaviour and driveability of Standardbred trotters\n| Contributor(s):: Hartmann, Elke, Byström, Anna, Pökelmann, Mette, Connysson, Malin, Kienapfel-Henseleit, Kathrin, Karlsteen, Magnus, McGreevy, Paul, Egenvall, Agneta\nHorses’ resting behaviour in shelters of varying size compared with single boxes\n| Contributor(s):: Kjellberg, Linda, Sassner, Hanna, Yngvesson, Jenny\nFemale horses are more socially dependent than geldings kept in riding clubs\n| Contributor(s):: Górecka-Bruzda, Aleksandra, Jastrzębska, Ewa, Drewka, Magdalena, Nadolna, Zuzanna, Becker, Katarzyna, Lansade, Lea\nMultiple handlers, several owner changes and short relationship lengths affect horses’ responses to novel object tests\n| Contributor(s):: Liehrmann, Océane, Viitanen, Alisa, Riihonen, Veera, Alander, Emmi, Koski, Sonja E., Lummaa, Virpi, Lansade, Léa\nAcute changes in oxytocin predict behavioral responses to foundation training in horses\n| Contributor(s):: Niittynen, Taru, Riihonen, Veera, Moscovice, Liza R., Koski, Sonja E.\nThe Relevance of Internal Working Models of 
Self and Others for Equine-Assisted Psychodynamic Psychotherapy\n| Contributor(s):: Kovács, G., van Dijke, A., Leontjevas, R., Enders-Slegers, M. J.\nAnalysis of Trunk Neuromuscular Activation During Equine-Assisted Therapy in Older Adults\n| Contributor(s):: de Mello, E. C., Diniz, L. H., Lage, J. B., Ribeiro, M. F., Bevilacqua Junior, D. E., Rosa, R. C., Cardoso, F. A. G., Ferreira, A. A., Ferraz, M. L. F., Teixeira, V. P. A., Espindula, A. P.\nDoes Hippotherapy Improve the Functions in Children with Cerebral Palsy?", "score": 19.944208417965356, "rank": 69}, {"document_id": "doc-::chunk-1", "d_text": "As with aggression toward people, some horses may be pathologically aggressive toward other horses. The first step is separation of aggressive horses from other horses, and keeping subordinate away from dominant horses. Separation is achieved by solid walls or two fences to avoid kicks through the fence. Horses should have sufficient resources, and desensitization and counterconditioning is the best treatment approach. In cases of sexually related aggression, castration and progestins (eg, medroxyprogesterone 70–80 mg/300 kg/day) can help. Adverse effects of such treatment should be weighed carefully, and the horse should be monitored closely. Adding tryptophan to the daily ration or administering selective serotonin reuptake inhibitors (SSRIs) may be helpful in some cases. Punishment should be avoided.\nAggression by mares toward people is normal during the first few days after parturition. This behavior is hormonally driven and usually wanes with time. Mares should be familiarized with their caretakers before delivery and have minimal contact with other people after delivery. No treatment is required in most cases.\nAggression While Breeding:\nStallions that are aggressive when used for breeding are often overused or used out of season. Stallions can develop preferences for mating and may not be compatible with the chosen mare; changing the mare may help. 
If stallions were stabled with mares when they were colts, they may have some social inhibition for mating, and forced mating can result in aggression. The goal of treatment is to treat the main cause of aggression; changing the mare (because of preferences) or artificial breeding can also be attempted. Physical restraint (eg, hobbles) and desensitization can help as well. Clicker training has been used successfully to desensitize stallions with this problem.\nCompulsive behaviors in horses can be divided into movement-related behaviors and oral behaviors. They can be called stereotypic because they are repetitive, occupy a large part of the daily activity, and serve no function. Confinement and poor management practices are the primary contributing factors. In addition, bedding, feed, and social contact influence stereotypic behaviors. Horses that have more social contacts, are fed more roughage and more than one type, are fed two or more times daily, and are bedded on straw are less prone to these behaviors.", "score": 18.90404751587654, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "Inclement weather and a crisis at work have been a roadblock to working with the horses in the last couple of weeks. They get hugs and scratches from me, during evening chores – and seem happy to see me on those occasions – but that has been the bulk of my interactions. So, it is probably no wonder that while I was contemplating the leadership crisis I’m struggling with, my mind immediately wandered to equine metaphors for the situation. I thought that sharing those might be entertaining (and perhaps a little enlightening) for you, and therapeutic for me!\nI have long said, in these pages and elsewhere, that when first I had to lead a human team it was the horses who guided me. My experiences in observing them, learning to read the signs of trouble or success, and adjusting to the feedback I received from them, laid the groundwork for doing the same with my employees. 
At the same time, I’ve spent a good deal of time watching how others deal with specific situations, and noting what does and does not work. So, here are just a few of those equine lessons that are pertinent to the current crisis I am observing in my workplace.\nThis is the horse who goes forward willingly and energetically. It can also refer to the one who boldly tackles whatever obstacle comes up. This is the enthusiastic individual who is every horseman’s dream.\nCommon mistakes – there are two mistaken approaches I have seen to this type of horse. The first is the insecure rider who will try to slow the goer by hanging on the reins. The results of this can vary: the horse gets dull to the pressure on the reins, and just bolts; the horse reduces its speed, but in frustration at not being able to move, dances on the spot, basically becoming an emotional wreck and “spinning its wheels”; the horse eventually gives up in a case of learned helplessness, becoming dull and losing all value as a mount.\nThe right way – the value the goer brings is the eagerness and willingness to tackle anything. The downside is that they can be so eager that they go until they burn themselves out. So, the proper way to be a leader for a goer is to let them go, providing light guidance in direction or speed. Recognize when fatigue is approaching, early enough to get them to slow and recover before going at speed again.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "How a horse responds to being treated like they are feeling differently than they are depends on the horse. Ciara, our 6-year-old APHA pony, used to respond by shutting down. She was so tuned out and “lazy” that it was nearly impossible to have any sort of interaction with her aside from standing still and possibly walking. The second she’d get scared or confused, she’d stop and couldn’t be moved. We have been together for two years now.
We know each other quite well and trust each other, but we are certainly still growing, so run into challenges. Now that she has had what she considers fair treatment, being treated based on how she’s truly feeling, she does NOT tolerate unfair treatment. If she feels like you’re treating her unfairly, she is more than happy to buck you off, and she has bucked me off, several times. Mainly around cantering under saddle when MY dis-ease has caused me to treat her like she’s being naughty, when she was actually scared.\nCiara has taught me an important lesson about myself and traditional horsemanship. Traditional horsemanship taught me that when things aren’t going right, try harder. Although this can be valuable advice for riding and life, I’ve found that I’ve reached a point in my horsemanship, as have many of my clients, where it gets me into trouble. Therefore, not only do I practice monitoring the horses mental and emotional space, I very carefully monitor my own. By watching my mental and emotional processes, I am slowly decreasing the number of unconscious responses in my work with horses and people that treat them unfairly. I wish I could say that I have eradicated such responses, but I think it will be a life long journey of decreasing them. The moment I feel that hardness inside my body, mind and heart, that grit, I’m learning to stop. Breathe. That hard, gritty feeling has become my clue that I’m probably treating a horse or person unfairly. The horse is probably trying to tell me how they’re feeling, and I’m ignoring them, I’m probably treating them like they’re being naughty or spoiled and they’re not.\nI am doing my best to become a soft, thoughtful and responsive human being, so that I can give horses the space to become soft, thoughtful and responsive horses.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-3", "d_text": "That way, he progressed in small steps but was not bored and never lost his eagerness to learn. 
We can still see this attitude today as he continues improving with his new rider, Helen Langehanenberg (a 2012 German Olympic team member).\nFoster the Horse’s Personality\nFor my father, there was a close and inseparable connection between the right training and the development of a horse’s personality. He always, rightly, used to say: “We not only need to strengthen the muscles but, in particular, the personality of the horse.” What we want as riders are self-confident horses that are reliable and attentive but also ones that love to show off in a positive way. This is particularly valid for dressage, where such expression earns us higher marks.\nMy father’s most famous and successful dressage horse was the Westfalian gelding Ahlerich. He was a good example of the “Here I am” expression my father looked for in his horses. If the right training has taken the horse to the point where he feels strong and masters the tasks with ease, he will find it delightful to work with us and show himself to the world. Many remember Ahlerich’s exuberance at the 1984 Olympics in Los Angeles, where the pair won the individual gold medal and performed 75 consecutive one-tempi changes during the victory lap.\nSometimes to foster a horse’s personality also means to accept that we can influence but never dominate him. We have to find subtle ways to control exuberant horses. The first that comes to my mind is Dresden Mann. When I got him—a just-backed horse and a licensed stallion—he was very strong-minded and really dominant. He was not so much fixed on mares as he was on other stallions, which he strongly considered his rivals. Though I never felt unsafe on him, in hindsight there were a few rather dangerous situations when warming him up at shows with other stallions in the ring. As a result, Alfi, as we nicknamed him, often had problems focusing on me.\nFirst, we just tried to avoid tricky situations.
Then we worked with a renowned German horsemanship trainer, but Alfi just did not give in. He controlled us rather than letting us control him. Finally, we realized that struggling with him was useless. Luckily, his owners agreed to geld him, and the situation improved.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-2", "d_text": "It will make maybe 150 mistakes before it chooses the square. Then you reverse it again.\"\nThe results were impressive. Each time they reversed the test, the horse made fewer and fewer mistakes, until it got down to just one mistake. It would choose the circle, get the reward, choose it again, not get the reward -- then immediately choose the square. In other words, they learned very quickly.\nVinur is being asked to \"touch\" the color \"blue\".\nDo horses enjoy jumping? We may never know\nBut how intelligent is that, relative to other animals, or even humans? Hanggi is reluctant to make such comparisons, saying it's like comparing apples and oranges: animal species have different strengths depending on what they need to survive.\nBut she does say that horses who are not given enough mental challenge -- horses kept in stalls all day, or given such similar show tasks every day that they become mind-numbingly bored -- develop problem behaviours.\n\"When you compare horses that have an open range or even an acre for grazing, to those that are always in stalls, you see all sorts of bad behaviours popping up in the horses that are stall-bound,\" Hanggi notes. \"They develop stereotypic behaviour, like pacing around in the stall, chewing on the walls, kicking, biting, to cope with the stress. Horses were meant to be out on the open range, not locked up in little boxes all the time.\"\nHanggi advocates something called \"natural horsemanship\" to avoid these problems.\n\"It's a way of training and working with horses so that you fit yourself into their understanding of the world,\" Hanggi explains.
\"The normal way of training is to say, 'do what I want or I'll force it on you.' We use understanding and communication instead of fear and intimidation.\"\nCan horses ever enjoy the work we put them to?\n\"You hear people say, 'my horse really enjoys this or that,'\" says Hanggi, \"but that gets kind of tricky. It gets into animal emotions, and there's no real way we know what their emotions are. But some horses do exhibit behaviours that look like they're enjoying what they're doing.\"\nHanggi's study, \"Categorization Learning in Horses,\" appears in volume 13, no. 3 (1999) of the Journal of Comparative Psychology.", "score": 18.37085875486836, "rank": 74}, {"document_id": "doc-::chunk-1", "d_text": "Select obstacles that may require the goer to stretch their ability – but never let them be overfaced, since their confidence can be damaged by a catastrophic failure.\nThis is the horse who may lack some talent for their purpose, or lacks the enthusiastic energy of the goer – but it honestly tries to accomplish anything put in front of it. This horse may never light the world on fire, but can provide great value, given the right job and leadership.\nCommon mistakes – the trier is often overlooked for lacking the flash of the goer. This can leave the trier relegated to a role that does not fulfill the talent they do have, leading to boredom and dullness. Thus, a horse who might have found their spark becomes an unnatural plodder. On the other end of the spectrum, the willingness of a trier may lead someone to believe there is more talent there than there actually is. This can lead to the trier being overfaced with challenges that eventually lead to either breakdown or acting out.\nThe right way – when working with a trier, you have to balance keeping them interested and engaged against the risk of pushing them past their true capability.
A trier will stay enthusiastic when given slowly increasing challenges, where success encourages them to reach a bit further each time. This is my favorite type to work with, both in horses and people. They are more challenging than the goer, in the process of determining where their talent lies, and in finding just how deep that talent might be. The puzzle of finding the right formula of challenge and caution appeals to my sensibility – and watching the joy as they achieve more than they had ever thought of trying is priceless.\nThis is the horse who is not necessarily unwilling, but who generally lacks energy, enthusiasm, and often specific talent. Although slow in development, given the right support this horse is willing to do those jobs that would frustrate the goer, and even bore the trier. Even in those seemingly dull jobs, the plodder can be very happy with clear cues and appreciation for their effort.\nCommon mistakes – the first mistake is often to overlook or undervalue the plodder. There are places in this world for plodders, and they should be valued for all that they bring. But if they are dismissed and ignored, we lose that potential value, and they can eventually lose any sense of trying they might have had.", "score": 17.397046218763844, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "Identify your Personality Type\nWhy your response to stress can prevent you living life to the full\nStress is an intrinsic part of life, and much of our activity is a response to it, or an attempt to pre-empt it. 
Hunger, for example, represents stress at a physical level which we can anticipate and respond to in a more or less healthy way, depending on what habits we have developed and what other stressors are in play at the same time.\nOur habits and typical patterns of responding to stressful experiences become part of our personality, but if we restrict ourselves to a small range of responses we can start to miss out on important parts of life.\nGerda Boyesen, pioneer of Biodynamic Psychotherapy, talked about four personality types - Rock, Warrior, Sunshine and Princess on the Pea (from Hans Christian Andersen's tale). Each of them manifests both positive and negative responses to stress, but none of them allows us to live life to the full.\nRock is solid, dependable, always there for you, immovable, hard, cold, unfeeling\nWarrior is aggressive, challenging, hates injustice, takes sides, despises tender feelings\nSunshine is warm, puts others at ease, says \"I'm fine\", ignores unpleasant realities\nPrincess on the Pea is sensitive, discriminating, vulnerable, easily upset, complaining\nIf you can identify strongly with just one of these personality types, it's likely that you're missing out on some of what life has to offer, finding relationships difficult or failing to attend to your own authentic needs.\nIf you'd like to know how Biodynamic Psychotherapy could help you expand your capacity to respond to both stressful and pleasant experiences, and so have a fuller and more satisfying experience of life, please contact me.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "- Horses have the ability to elicit strong feelings and emotions in people. They provide the opportunity to explore these feelings/emotions when they arise.
Emotions such as fear, anxiety, panic, sadness, grief, anger, rage, happiness and joy may be evoked when being around the horses.\n- Horses also model awareness, safety and survival.\n- Horses offer unique feedback on how clients behave around them.\n- Horses are very sensitive to people’s behaviour and respond differently to subtle changes in an individual’s mood and/or behaviour.\n- Horses model relationships through their interactions with one another.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-4", "d_text": "When this is going well, ease into a few steps of soft feel, then holding the soft feel at one gait, then through transitions.\nBehind the bit. The horse that travels “behind the bit” can be traveling freely forward without being truly “forward” in the dressage sense. For this kind of horse, I go easy on exercises that squelch forwardness, such as the soft feel exercise and one-rein stops. Instead, I focus on upward transitions with energy, perhaps going uphill, if the warm-up area is hilly. I’d also pay close attention to my hands to make sure I’m providing a reliable, soft connection for him to trust–not a waffling, flimsy contact that the horse can’t find.\nFussy, fretful. This horse benefits from 10 minutes in the Virtual Round Pen before tacking up, then a gradual opportunity to migrate to the warm-up area from a quieter area of the showgrounds. I rely on one-rein stops and modified one-rein stops to create safe zones for a fretful mount. Careful though; it’s easy to overdo this exercise and end up badgering or overconfining the horse when he might just need to stride out on a loose rein for a while.\nIf the horse is fearful, I’ll offer lots of quiet stroking encouragement. Verbal encouragement – “Eassssy, eassssy, easssy” – seems to have the opposite effect, so I just keep quiet, try to have a sense of humor about it, and ride as confidently as an electric horse will allow.\nStiff through the body.
A few minutes of halter-rope work, breaking the hindquarter over in several different ways, sets the stage for success under saddle. You won’t see many straight lines in my warm-up with this horse. We’ll ride 10-foot-diameter circles on loose reins and change the bend through the center of the circle. We’ll ride deep wiggly lines on leg aids, looking for a clear lateral step of the hindquarters. We’ll do turns on the forehand with the reins dropped. We’ll back in circles.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "Different behaviour as a horse ages\nBehaviour refers to the actions or reactions of an animal. In this prezi we will examine how the behaviour of a horse can, or will, change with age. Take into account that every horse is different, and different breeds will all have different behaviours. \"Some behaviour does depend on age, but there are also other characteristics that come into play, along with that.\" (Mel) An animal's behaviour is influenced by many factors: environment, physiology, genetic predisposition, experience and learning, and age.\nWeaning happens around four to six months of age, and is one of the most traumatic events in the horse's life.
Weanlings are in the process of overcoming much of their awkwardness, and need just as much guidance as they learn about how to interact with other horses and humans. Through brief daily workouts with a trainer and careful handling, the weanling learns what is expected of it and begins to adapt to the requirements it will need to meet as it grows older. A young horse's bones and joints are fragile, so the intense work doesn't start until they are at least two years old. During this period of growth too much food can be as bad as not enough.\nYearlings are over the age of one year, and they spend more time standing rather than lying down. They are very curious and enjoy mouthing anything they can find in their stalls or pastures. During this phase adult characteristics and behaviours begin to emerge. These young horses will test the boundaries with both herd mates and human handlers. They do lots of play fighting, galloping, bucking, and running with bursts of speed.", "score": 16.666517760972233, "rank": 79}, {"document_id": "doc-::chunk-1", "d_text": "These particular horses are also not kept in stalls, when in fact they need to walk or run for miles every day, and not tortured in the various ways most horses are, including harsh bits and horseshoes that are hard on their feet. These EAGALA horses, even more than most, have been cared for “holistically.” They have large pastures where they can behave like horses, and are never struck. Above all, they have been doing EAGALA work for years, working face to face with humans who want or need to connect with them, all of which means they have been free to develop their own personalities and reactions.\nSecond, the participants. Just as the horses’ personalities have been free to become quite diverse, our human participants are of course very diverse, too. All HSPs are NOT the same. The herd of animals and people are highly respectful of that. Indeed, often workshop participants identify with a particular horse.
Many of them have been rescued from difficult circumstances and during part of the last half of the second day we have sometimes told their histories. On occasion a horse has a history much like that of the person who felt some kinship with it.\nThe Entire Herd Is Highly Sensitive\nWhatever their personality, all horses and all of these people are all highly sensitive (although some are even more so than others). For horses, sensitivity is required for survival. As you watch them, you begin to see how much it is an advantage for you, too. They show all four of DOES (the four aspects of being highly sensitive): Depth of processing and being easily Overstimulated, more Emotionally responsive, and sensitive to Subtle stimuli.\nHow do they show these? Although they can react quickly due to a subtle cue of threat or opportunity, as HSPs can, you can often see them processing the situation before they make a move. They can be very slow to decide what they want to do next, like we can be.\nThat they are easily overstimulated becomes obvious during the course of the workshop. We humans, with our feelings and actions, do bring them to a point when they clearly need (and get) down time.\nTheir emotional responsiveness and empathy means they can react quickly when one of them senses danger or an opportunity. But this responsiveness also gets applied in a more subtle way all the time, for example in these workshops.\nTheir sensitivity to subtle stimuli is even more intense than ours.", "score": 15.758340881307905, "rank": 80}, {"document_id": "doc-::chunk-1", "d_text": "Even as horses grew larger, they were still a staple on the diet of whatever preferred meat to grass. So the horse, over millions of years, learned a key lesson: Don’t think, react. If something with a jawful of Bowie knives is bearing down on you, you do not pause to ratiocinate. 
You kick, bite, or run instantly.\nThis is why so large and powerful an animal is subject to strange fits and vapors, and is so easily spooked by so many seemingly harmless things. It has the mentality of a New York City subway rider who has learned to see mortal danger everywhere.\nAnd so we come to our first piece of horse sense. If Old Thunderdent is dreaming, tied to the picket rope, and you come blundering up behind him, he is likely to present you with a hoofprint in the sternum, not out of malice, but simply as a matter of racial memory. Horses don’t like surprises, so don’t give them any. When you walk up behind a horse, let him know you’re there before you get within hoof range. If you’re near his head, don’t make any sudden gestures, because until you’ve been bit by a horse, you haven’t been bit.\nAll horses are individuals. Some are smart; some are stupid. Some are levelheaded; some are crazy. Some are gentle; some are mean. (A digression: If you check into the background of most "mean" horses, you’ll find that they suffered from treatment that Dr. Josef Mengele would have blanched at. All too many people delight in brutalizing horses in the name of "discipline.")\nIt’s a matter of horse sense, before you climb on board, to ask if the critter has any quirks, mannerisms, or pet peeves you should know about. Years ago, I rode a horse whose previous owner had tied its head to a tree and beaten it. The poor creature’s face was scarred, and its vision was affected. As a result, the rancher told me, it carried its head low to the ground, simply in order to see where it was going.
Now if I hadn’t known this, I would have been hauling on the reins, trying to get his head up, making an already bad situation worse.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-2", "d_text": "Their sight, hearing, sense of touch, and no doubt their sense of taste and smell, are all finely tuned for their life in the wild (and some of these horses grew to adulthood in the wild).\nHorses as Teachers\nOf all of their traits, their emotional responsiveness and true empathy is what impresses us all the most. They sense what you’re feeling, whether you want them to or not, and respond with pure authenticity. Sometimes they mirror you, so that if you are confused, they are as well. If you’re relaxed, they are as well. Sometimes they are annoyed when you are not your authentic yourself, just as we can be by others who are faking it, and you can tell they are annoyed. They do not really know human politeness or shame. Yet they forgive and forget quickly if they sense you have changed.\nThey particularly do not like us when we are doing one thing and feeling another. To feel safe, horses need a clear leader. If we try to direct them but are ambiguous, they get nervous or might even drive us off. When we know what we want and are honest about it, they often choose to let us lead. And they show great empathy, especially when we are distressed. They are uncanny in knowing what the issue is. It makes you think they are operating on another dimension.\nSometimes we have a discussion right in the corral. One or more horses might join us, coming into our circle, provided our conversation is lively and honest. If we start intellectualizing, I have seen them turn their back ends to us!\nA funny example happened when a group got tangled up while doing an activity. 
They just could not figure out how to accomplish it, which is always okay because we emphasize that there is no right way to do anything in the workshop, including getting it done in a certain amount of time or even doing it at all. But the humans were tense, having difficulty deciding whose suggestion to follow. We had put two other horses in a separate corral and as the workshop members struggled, all the time being super nice and polite yet clearly also miffed, these other two horses became more and more agitated (something we had never seen before), sensing the confused state of the humans.\nSince many HSPs have taught ourselves to hide our feelings, it is good for us to have them brought to the surface by these creatures who miss nothing and hide nothing themselves.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-2", "d_text": "I'd have some of the riders keep going, and some go with my horse and I back towards home. We'd continue with that until he was comfortable leaving the other horses, going towards home and going away from home. Then we'd just go riding with one other horse, and split ways. Again going away from home and towards home alternatively. The idea would be that he learns he is perfectly safe away from the group, and controlling that herd bound instinct.\nSituation 5 -- I would start at the beginning. Again. Re-introducing the wash area, the wash appliances, the hose with no water running, the hose with water running, and the noise it makes. I'd take it slow but continue moving along when the horse is comfortable. Then I would have the horse tied in a quick release or a person hold him if they were available. I'd start the hose going, preferably luke warm water, and hose off the hoof of one of his front legs. If he reacts, I will keep the hose on his hoof until he stops moving, for just a second, and take the hose away. Depending on his stress level, I'd try again or wait until the next day. 
I'd gradually move the hose up his leg and all over his body, waiting for him to take a second to relax before relieving the pressure.\nSituation 6 -- I've had a lot of horses do this, it's one of my biggest pet peeves. I'd make sure I had a good strong halter on, preferably a rope halter, or a be-nice halter. I'd catch the horse with a lunge line, so I have more rope, and if the horse bolts, catch him with the lead and turn around, heading back out to the pasture. I think it's important to turn the horse AWAY from you. Respecting your space and all that. We'd continue doing this, leading him to the stall and if he goes even a tad faster than you want him to, turn him back around. Eventually he should learn that going fast means it takes longer. At least that's the idea.\n\"Great spirits have always encountered violent opposition from mediocre minds.\"", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "Understanding Your Stress Levels\nIn order to understand stress better, it is a good idea to understand that there are different stress levels. These levels vary in the form of stress they take and they can often provide an indication of how to treat the stress. Furthermore, there are tests available that can help people understand their own, particular brand of stress and, with this knowledge, they can also understand themselves better. Then, with this information, a complete stress management method can be constructed. So, when you examine your own stress, keep these stress levels in mind so that you can come to grips with yourself and learn the proper methods for keeping your mind balanced.\nThese stress levels were found and characterized by Dr. Hans Selye and Dr. Richard Earle of the Canadian Institute of Stress. Thus, the names and types are theirs.\nType 1 - The Speed Freak\nThis stress level is characterized by an incessant need to be giving 110% at all times. 
They are often perfectionists, they tend to speak quickly, and they are very impatient. Generally, Speed Freaks have learned that it is necessary to work hard in order to succeed, so they figure that, if they are working hard all the time, they are certain to succeed. This, of course, is not necessarily the case, since running full-bore all the time will only lead to stress over minor issues.\nSpeed Freaks need to learn how to relax and they need to clarify their goals so that they will work hard on things that really matter, while relaxing while they are working on more mundane tasks. By doing this, they can get up to speed when they need to put in the effort and conserve energy the rest of the time.\nType 2 - The Worry Wart\nThe Worry Wart stress level is characterized by an inability to stop thoughts, but an equal inability to put thoughts into action. They tend to overanalyze things to the point that they paralyze themselves. Thus, they simply end up spinning their wheels as they get nowhere. True to the name, Worry Warts tend to spend a lot of their time worrying and this only leaves them even more incapable of action.\nWorry Warts need to think very specifically about the problems they are facing, write down every possible thing that can go wrong, then think about just how likely these events are.", "score": 14.73757419926546, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "Understanding Horse Behavior\nTo understand horse behavior, the best place to start is to observe them in a natural setting where their true natural horse behavior can be seen.\nIn this environment, where man does not impede upon their will, we discover the social order of a prey species animal.\nDefining Prey vs.
Predator:\nA prey animal species means that it is an herbivorous grazing animal and is killed and eaten by predator animals such as a lion or a wolf (humans are considered a predator species, after all we do eat grazing animals).\nHerbivorous grazing animal means that it is a vegetarian that gets its food by grazing.\nAmongst the prey species you basically have two behavior types as well. The best example of distinguishing the two is to notice that cattle and sheep of the prey species will bunch together for safety while horses and deer will choose to flee or take flight from danger.\nThis is very important to know because “flight animals” are much more skittish and will startle more easily than a prey animal that bunches together. As a result a flight animal is more easily “traumatized”.\nWithout the ability to flee, horses will turn and fight with their hind feet, teeth or strike!\nWhat will set off both kinds of prey animals is sudden, unfamiliar and fast movement the animal isn’t expecting. In horse behavior terms, we often refer to this as spooking.\nThey rely on vision first to detect danger before other senses such as smell. What is important to note is to a horse “danger” really means “being killed”!\nWith this in mind have you noticed many horses are more skittish when the wind blows and even familiar objects become scary things?\nUnfamiliar sudden movements will trigger natural FEAR in horses. FEAR leads to PANIC and panic leads to FIGHT or FLIGHT.\nIn a natural herd environment herds usually live in small groups.
A stallion might live with only four to six mares (depending on the territory) and the other stallions will live in bachelor groups.", "score": 13.897358463981183, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "Although humans came into contact with horses about 50,000 years ago, they were originally herded for meat and skins. Evidence suggests that they were domesticated about 5,000 years ago - substantially less than many other farm animals including goats, sheep and cattle. Horses are believed to have first been domesticated in the steppes of the eastern Ukraine and central Russia, as people started to lead a more nomadic lifestyle. Historians debate over whether people first rode horses or attached carts to them, but the latter is thought to be the most likely.\nThe heavy draft horse gives an impression of weight combined with strength. The body is wide and the back broad, often accompanied by rounded withers, which in some breeds, in the interest of increased pulling power, may be higher than the croup. The body is heavily muscled, particularly over the loin and quarters. The shoulders are relatively upright to accommodate the collar, and the legs are thick and short. The heavy horses usually stand between 16 and 18hh. Their actions are short, giving maximum traction.\nEvery horse is different in how it learns and reacts to outside stimuli. Just because training can be accomplished using certain methods for some horses, this doesn't mean that those techniques will work just as well on every horse. We don't teach all children the same way, and all horses don't learn exactly the same way either. In each case, there are issues past and present that we need to keep in mind, as they may impact the effectiveness of our training. The first thing that we must take into account is that no animal or human learns well when they are stressed.
Take a test, or try to meet the deadline at work while your teacher or boss stands over you with a whip, yelling and screaming, and occasionally prodding you with a sharp spur to get his point across, and I think you will understand. The only difference between horses and humans is the reaction we get when the teaching method breaks down.\nBoth human and horse will shut down under stress, both sometimes leave the area to reduce the stress, both resist against stress, and still others will fight if the stress is great enough. Humans may yell at each other, but horses can't talk and, therefore, often resort to a more physical response. Sometimes they run away hard and fast dragging their owner with them, or just leaving their owner behind.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-1", "d_text": "If a human saw an alien would we be so willing to be relaxed and\nHorses that are in a human surrounding are horses that have or have had handling from a human.\nHorses that are handled properly from early in life will not fear humans. It is like the human is scenery. It's there and no big deal.\nThe horse that is in a semi-human surrounding has both characteristics of the nonhuman and human\nOne can have a horse in the pasture and not handle or try to catch the horse until later on in the horse's life. This horse will keep the distance just in case the human is going to "eat" the horse.\nThere is curiosity in these\nConfidence building will not take as long as for the nonhuman surrounding horses.\nMental impressions made by either nonhuman surroundings or human surroundings have a great deal to do with how an animal will react.\nIf a horse was mishandled in the human surroundings, either by abusive action or by not teaching manners, the teaching process will be much harder than if the horse had no human contact.\nHorses in the nonhuman surroundings will have different experiences depending on the environment within the surrounding. One horse may live in pastures
One horse may live in pastures\nof green year round while another will live in more range conditions\nwhere they need to find food and watch for predators.\nEndocrine System needs balance.\nDuctless glands can determine\nhow the nervous system is used along with the mental capabilities.\nNutritional needs in a horse are a\nmust for a health active thinking horse.\nWithout proper nourishments a\nhorse can not function properly. The horse will not be using the\nbrain nor the body normally.\nPhysical capabilities of the horse\nmay limit or heighten a horse.\nIf a horse is in pain it will\nnot do what you want comfortably. Thus the horse will tell you in\nthe horses own way that something is wrong. Such as bucking, shaking\nthe head, pawing, laying down and limping to name a few. But this\ndoes not mean that when the horse bucks, paws or lays down that the\nhorse is in pain. This may also be a horse way of avoiding training.\nIf a horse possess physical\ncharacteristics that are limiting for certain disciplines then the\nhuman should not push the horse or compare the horse to any other\nhorse.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-3", "d_text": "You are working hard, it was busy on the road, you fly into the stable to ride your horse and you have exactly one hour before you have to go to your next appointment. Your horse is in the field with his friends and there you come charging in, with an aura full of stress around you.\nAnother scenario: You love horses, you want to ride but deep down you find those big animals quite scary, but you definitely don’t want your friends to notice how scared you are. So you put on a brave face and you step into the field to get your horse.\nI can think of a few more examples but I think my point is clear. Something is not going well here. Horses communicate non-verbally. They can read your body language better than any fortune teller and they mirror behaviour. 
You can try to hide your real feelings, but a horse will see through that.\nPeople are not so aware of body language and that can cause a lot of confusion. Take the situation where someone walks backwards in a lunging circle, which is an invitation for a horse to follow you, while at the same time chasing him away with a whip… for a horse that is completely incomprehensible.\nEven when you are in the saddle a horse is very aware of your mood. I am convinced that horses can show compassion. So, if you feel sad and you have a bond with your horse, in many cases he will respond kindly and gently. Anger, stress, impatience or rushed behaviour makes a horse insecure or even afraid, which will cause him to want to get away from you.\nIn order to train properly you must have the right focus. That does not work with one eye on the clock. Who decided that a lesson should last exactly one hour? Not the horse. Put your phone away. You should focus on your horse and not be half distracted when a message comes in. He deserves your full attention. You need your brain to think about what you are feeling, what is happening below you and what you can do to improve it. To find solutions that make it more clear to your horse what you want from him.\nIf you are in a rush, impatient or afraid, do not mount. Pay attention to him in a different way. Give him a nice grooming session, take him for a walk or do in-hand work. Enjoy his company.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "These animals can potentially injure themselves or their handlers.\nThere are numerous stressful routine events that cattlemen may not consider stressful – weaning, transportation, social mixing, and vaccination. These routine management practices have been shown to increase the secretion of cortisol and epinephrine.
Cortisol and epinephrine are stress-related hormones.\nThe concern with increasing stress in livestock is that stress can negatively affect growth, reproduction, welfare, and immune function – predisposing cattle to infectious intestinal and respiratory diseases that cost U.S. cattle producers an estimated $500 million per year. Reducing adverse consequences of stressful incidents and identifying animals that may react differently to stressors may benefit cattle's growth and health.\nResearchers are studying interrelationships of stress and cattle temperament with transportation, immune challenges, and production traits. They have found that, depending on temperament, cattle respond differently.\nThe scientists looked at exit velocity from the chute (the rate at which an animal exits a squeeze chute or scale box where it's been restrained or held after transport; a fast exit indicates the animal is showing fear and is stressed by handling and human activity) and pen scoring (a subjective measurement in which small groups of cattle are scored based on their reactions to a human observer; scores range from 1 – calm, docile, and approachable – to 5 – aggressive and volatile). The exit velocity and pen score for each animal were then averaged together to come up with a temperament score.\nIn the study, Brahman calves were classified by temperament and transported 478 miles from Overton to Lubbock. After the trip, blood samples and body temperatures were taken before, during, and after administration of an endotoxin to simulate illness. Sickness behavior was scored on a 1-to-5 scale that measured the severity of calves' behavioral responses to the challenge. A score of 1 indicated normal maintenance behaviors, and 5 indicated the greatest amount of sickness behaviors, such as head distension, increased respiration, and labored breathing.\nScientists could immediately tell that the calm animals had been given an immune challenge, because they showed visual signs and became ill.
The more temperamental animals continued to act high-strung and flighty after the endotoxin challenge. If a temperamental animal doesn't show signs of illness, managers might not realize that the animal is sick and needs treatment.", "score": 12.364879196879162, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "As a general rule of thumb, it is fairly safe to say that when a horse gets bigger, it will become more difficult to handle if it is not a draft or coldblooded breed. One of the exceptions to that rule is the Oldenburg. As a breed, these horses were initially developed to be a coach horse that could take on some farm work if called upon. This required the temperament of the horse to be balanced, flexible, and accommodating, which are the key personality traits that you’ll still see in the modern Oldenburg.\nBecause this breed is a warmblood, there is still a certain fire to their personality that comes out from time to time. Oldenburgers like to be active, so a horse without an activity is going to be a horse that causes trouble. They are a tall sport horse, with an excellent jumping ability and lengthy gait, so it is often necessary to work this breed every day to maintain the evenness of their temperament.\nWhy Do the Personalities of Oldenburgers Vary So Much?\nAs a breed, the Oldenburg horse is expressive and willing to work. The breed societies have a very liberal approach to developing the modern Oldenburg, however, so there are varying degrees of “hotness” that come into play with this breed. For this reason, identifying the temperament of an individual horse often means looking at the lineage and parentage of each specific animal.\nBecause the Oldenburg is such an excellent sporting horse in regards to show jumping, there has been a movement within this breed to transition it from being a warmblood to a hotblood. 
This has caused a certain sensitivity to come into the breed, where the horse will not tolerate an inexperienced rider.\nThis has led to specific temperament testing requirements as part of many breed association registrations. Stallions and mares are scored on their character, constitution, willingness to work, rideability, and temperament. Each is given a score so that owners can know what they are getting with their Oldenburg since temperaments exist with such a great variety.\nThat variety does create a certain amount of uncertainty, but it also means that finding an Oldenburg horse with the right elements for an owner’s specific needs is not a difficult process.\nLoyalty is the Trademark of the Oldenburg Breed\nThere is a certain honesty to the Oldenburg personality that is present, no matter how hotblooded the horse may be. They are extremely loyal to their owner, trainer, and herd.", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-2", "d_text": "Racing officials globally ponder over 1, 2 or 3 differences in the handicap process whereby they attempt to get equality for betting purposes. If 1, 2 or 3 pounds can make a difference in the race outcome, then recognize the difference it would make for a horse to empty out 10-20 pounds of fecal material as they flee the charging predator.\nAs horses evolved, evidence suggests that the faster ones lived to reproduce while the slower ones were generally harvested before reproducing slow horses. While it’s true that we have been interrupting Mother Nature for 6000 years, earlier patterns are still in place. It seems that this particular phenomenon was well established for millions of years before we began to genetically manipulate Equus for our own desires. I am pleased to have the opportunity to complete this exercise. 
I should have written about this characteristic many years ago.\nIt is interesting to note that I have paid close attention to the frequency of defecation as I bring horses to the round pen for their first saddle and rider. Regardless of their mental appearance, if they defecate with unusual frequency I tend to regard them as hyper-nervous individuals. This slightly alters my approach. I will require less and push less hard on those that repeatedly defecate. I have found this to be an effective way to deal with these individuals.\nCertain individuals extremely sensitive to the perceived rights of animals in general may well take the position that if it's stressful we shouldn't be doing any of these things with horses in the first place. That is certainly a separate issue but I feel strongly that that would be a major mistake. Stress is a part of every biological entity and, properly attended to, can provide a strength instead of a weakness. The flight animal inherently is looking for a friend.\nThe horse is a herd animal. They do much better physiologically if they can exist in a tranquil environment with trusted individuals as life partners. Trust is the definitive word and it is with that goal in mind that I discovered and quantified Join-Up in the first place. In order to bring about a trusting partnership a certain level of stress must be experienced in order to justify an outcome of trust. We must realize that horses are extremely flighty animals and in order to bring them to a level of trust with the human they must pass through portals on their journey that can be stressful to a degree.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "Difficulty: Hard (but we're here to help)\nWhat you'll need: Patience\nThink back to when you first started riding — the barn was a joyful place and every ride ended with a kiss on the muzzle.
But as we get older, balancing work, home life, and barn time is tough, and setting riding goals can add in a little extra pressure. So we SmartPakers put together some of our favorite tips, tricks, and smart solutions to help both you and your horse keep your cool all year long.\nTips for you\n“Breakfast on show days is tricky because I never feel like eating but I know I need to, so I always have a drinkable yogurt. It makes me feel full (but not too full) and calms my stomach.”\n–Susan, Barn Sales\n“Whenever I take my horse somewhere new, I make sure to leave plenty of time for him to adjust and relax. Plus, this helps my nerves because I don’t have to rush.”\n“Sometimes making it to my lesson on time, or making it to the barn at all, can be a source of stress. When that happens, I take a deep breath and remind myself how lucky I am to have such a wonderful horse, and that this is supposed to be fun!”\n“Understand that your horse has a unique personality, just like you and every person you know. Don’t expect him to change who he is just because you want him to.”\nTips for your horse\nHe’s occasionally nervous and tense, looking around and fidgeting as you get on. Slightly on edge, he is sometimes unfocused on the job at hand.\n“I establish as much of a routine as possible with my nervous guy. Amazingly, he also seems to like it when I sing to him, perhaps because it helps me breathe and relax!”\nShe’s cranky and irritable, kicking out and pinning her ears. Thanks to her roller coaster of mood swings, you never know which attitude you’re going to get.\n“I pay attention to whether my sensitive mare actually wants to be groomed. When she’s not into it, I respect that and let her be dirty, and she’s much happier for it!”\nHe’s overreactive and excitable, dancing in the aisle and jigging in the ring. 
A bundle of nerves and energy, he’s always looking for a “monster” to spook at.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-1", "d_text": "A total of 1088 injuries were recorded of which 597 were present on day 0, before the horses were put into their groups for the study. 188 new injuries were recorded after grouping day 1 and 303 new injuries were recorded after 4 weeks. The majority of injuries were minor (80% category 1, 18% category 2). Throughout the whole study, a total of 3 injuries were recorded as category 3 and 2 injuries were recorded as category 4. The two category 4 injuries occurred to young warmblood horses.\nThere was no difference between mixed age and similar age groups, mixed sex and same sex groups or the stable and dynamic groups in the mean number of new injuries recorded. The main predictor of injury actually turned out to be breed, with Warmblood horses significantly more likely to get injured than Icelandic horses. The researchers hypothesise this could be because the Icelandic horses have been bred for a calmer temperament, and they also tend to have more body fat and thicker coats which better protect them from kicks and bites.\nThe main predictor of reactiveness was breed, with Warmbloods being the most reactive breed and Icelandic horses being the least reactive. Mixing older horses with younger horses or just having groups of older or younger horses did not have any significant effect on reactiveness.\nThis study suggests that horses are unlikely to get severely injured from being turned out in groups and their welfare is more likely to be compromised if they are not allowed to socialise with other equines and exhibit normal herd behaviour. One thing to note that could affect rates of injury is competition for resources. 
In this study, all horses were in paddocks of at least 0.5 hectares and at feed time each horse was given its own pile of feed.\nKeeling, L.J., Bøe, K.E., Christensen, J.W., Hyyppä, S., Jansson, H., Jørgensen, G.H.M., Ladewig, J., Mejdell, C.M., Särkijärvi, S., Søndergaard, E. and Hartmann, E., 2016. Injury incidence, reactivity and ease of handling of horses kept in groups: A matched case control study in four Nordic countries. Applied Animal Behaviour Science, 185, pp.59-65.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "Horse training refers to a variety of practices that teach horses to perform certain behaviors when asked to do so by humans. Horses are trained to be manageable by humans for everyday care as well as for equestrian activities from horse racing to therapeutic horseback riding for people with disabilities.\nHistorically, horses were trained for warfare, farm work, sport and transport. Today, most horse training is geared toward making horses useful for a variety of recreational and sporting equestrian pursuits. Horses are also trained for specialized jobs from movie stunt work to police and crowd control activities, circus entertainment, and equine-assisted psychotherapy.\nThere is tremendous controversy over various methods of horse training and even some of the words used to describe these methods. Some techniques are considered cruel, other methods are considered gentler and more humane. However, it is beyond the scope of this article to go into the details of various training methodology, so general, basic principles are described below. 
The see also section of this article provides links to more specific information about various schools and techniques of horse training.\nBasic goals of horse training\nThe range of training techniques and training goals is large, but basic animal training concepts apply to all forms of horse training. The initial goal of most types of training is to create a horse that is safe for humans to handle (under most circumstances) and able to perform a useful task for the benefit of humans.\nA few specific considerations and some basic knowledge of horse behavior help a horse trainer be effective no matter what school or discipline is chosen:\n- Safety is paramount: Horses are much larger and stronger than humans, so they must be taught behavior that will not injure people.\n- Horses, like other animals, differ in brain structure from humans and thus do not have the same type of thinking and reasoning ability as human beings. Thus, the human has the responsibility to think about how to use the psychology of the horse to lead the animal into an understanding of the goals of the human trainer.\n- Horses are social herd animals and, when properly handled, can learn to follow and respect a human leader.\n- Horses, as prey animals, have an inborn fight or flight instinct that has to be adapted to human needs. Horses need to be taught to rely upon humans to determine when fear or flight is an appropriate response to new stimuli and not to react by instinct alone.", "score": 8.750170851034381, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "Every person has a natural behavioural preference. A way of doing things. That is anchored in our brains. A person is able to adapt this behaviour to whatever a situation requires. We behave differently in the supermarket, at work, at school or at home on the couch. In times of stress our natural behavioural preference always comes to the fore. 
If you have to adapt too often or for too long at any time, it takes energy.\nThese preferences can be divided into four types. We all have these four types in us, with one more strongly represented than the other. People who are dominant are bold, focused on performing tasks, but are quick to blame others if things don’t go as planned. Conscientious people are perfectionists and focused on details, but they set the bar very high for themselves. They are extremely critical of themselves when something goes wrong. You also have people who are very focused on others or on stability and harmony.\nThis affects the way you are with horses. A horse is a prey animal that lives in a herd. All his motives and reactions are based on that. If you frighten him, hurt him or he becomes insecure, he wants to flee. If you stop him it creates a sense of panic. He will want to run away, but that is not possible. In such a situation he is not open to learning anything, let alone that you can change anything that you think needs improving.\nIf a horse is frequently blocked from fleeing when something worries him, you create a state that behavioral scientists call “learned helplessness”. That is a kind of soulless subjection in which a horse does its work slavish and seemingly obedient, but without any “joie de vivre” (enjoyment of life). He shuts himself off to the outside world. He is far from that beautiful animal that works in harmony with his rider, with a happy and sparkly appearance. Yet you see it regularly, even with people who call their horse their “dearest friend” and seemingly would do anything for it.\nHorses are herd animals by nature and have no problem following orders, even from us. Provided the question is asked fairly and without aggression, so they don’t get scared and provided they understand what you want from them. However, that can be a problem, because how do we understand each other? 
We are predators and ego driven.", "score": 8.086131989696522, "rank": 95}, {"document_id": "doc-::chunk-1", "d_text": "These muscular talents can really kick up a storm both in the rodeo ring and on the football field! You should see the cowboys/Stampeders fly! Ouch! They are goin' down! Go Green!\n|Going in Different Directions|\nHave you ever felt like you just don't belong? If so, you know how this pair feels. They hang out occasionally with each other, but seem to be going in different directions. They are off in their own little worlds and stand out because of their colour. Even though they have their colour in common, which naturally draws them together, they don't have much else in common. They tend to be loners, and actually don't mind being alone most of the time!\nThese two greys, on the other hand, want to fit in and muscle their way into a crowd to surround themselves with just about anyone to feel included. They will do and say just about anything to fit in! Recognize the type?\n|An Oreo Cookie|\nIt is believed that these three creative types enjoy Oreo cookies so much that they always stand with the whitish buddy in the middle! Paint, Grey/White, Paint! What fun and how yummy!\nIn reality, they keep the grey in the middle because it is different, an underdog, and they like to not only watch out for and protect underdogs, but they like to be different and stand out themselves! You know the type!\n|Roll Over Grover|\nIn every crowd there are characters like this guy! Ahhhhh, taking a break in the shade and rolling about in the sand how refreshing?! Or, does this horse secretly wish he was a dog and someone would come over and scratch his belly? There are some people and horses that are on a quest for attention!\nThere are some people who do what everybody else is doing. Horses are no different! 
After the first horse performed the doggy roll over, five others decided that that looked like fun and took turns rolling in the shaded sand!\n|In Blaze of Glory|\nIn order to hang around this horse herd, you need to have a white blaze on your face! They are wondering if the performing doggy show is ridiculous or if they should join in on the fun. They are the thinkers of the group and need to weigh all of their options before joining in!", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-2", "d_text": "A weaver with an anti-weave grill on his stable, for example, is likely to learn that he can step back inside his stable, and can continue to weave there, often with much greater intensity than when he could do it over his stable door.\nSo what is so Funny About Stereotypies?\nI don’t think I have ever come across someone at a zoo who thinks that watching an animal who is pacing back and forth across the front of their cage hour after hour is funny. Most of us these days are educated well enough to know that this type of behaviour is closely related to stress and few of us will go home from that zoo believing that the pacing animal was either ‘happy’ or having a good time. A huge number of us would find that image distressing and may find it difficult to watch at all. I know I would.\nAnd yet, if I want to find a video of a horse demonstrating a stereotypic behaviour, all I have to search for on YouTube is ‘Funny Horse’ and I am sure to find some in the results. On many horse-related Facebook pages it is common to see videos of horses showing extreme stereotypic behaviour with a caption from the owner of something along the lines of “Look how funny my horse is”. 
Even when well-meaning and well-educated horse people respond with information explaining the concern about stereotypic behaviours, they are often drowned out by hundreds of responses telling them to lighten up, and that the horse “isn’t stressed – he’s clearly just having a good time!”\nSo what is it about equine stereotypies that makes people want to believe that they are funny? Why are so many people unable to understand that their horse is displaying an extreme stress response, no different to the pacing animal in a zoo? Answers to that one on the back of a postcard please!", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-0", "d_text": "We all have heard the stereotype: All horse people are crazy people. From experience, I would be surprised if anyone who has been involved in the horse industry could say anything to refute the complete accuracy of this stereotype.\nI am sure a lot of competitive industries breed a culture of wackos, but the equine seems to attract all different brands. There is the somewhat expected: narcissistic, hyper-competitive types who try and step on others’ reputations to ease their own insecurities. The rampant gossip and rumor mill of the equine industry is a direct result of these types feeling threatened by the success of others in the industry, regardless of the fact that there are plenty of horses to ride, ribbons to win, and people out there to teach.\nHowever, despite the competitive “make it or break it” atmosphere of the equine world, I have personally found an abundance of fellow riders with depression, anxiety, crippling self-doubt, OCD, and the like. Every new horse person I meet has something they are dealing with that seems in stark contrast with how difficult it is to float in a fast-paced, ruthless sport. 
Either they are very open about their struggles and tend to overshare about their dosage (I fall into this category, because I find my issues nothing to be ashamed of), or you hope they are in therapy or on meds and don’t advertise it, or they are so obviously messed up that you make a mental note to keep yourself and your horses far away from them because they are clearly in denial that they should be institutionalized asap.\nI can’t be the only one who is at a horse clinic and finds themselves consistently surrounded by crazy. I am not complaining at all, because it makes me feel right at home. Have any of you noticed the same trend when you think on the individuals in your piece of the horse world?\nWhat is it about these enormous creatures that attracts personalities so riddled with emotional inadequacies? It’s more than just a birds of a feather phenom. An outsider might assume that a sport involving a 1200-pound animal who sometimes tries to kill you would attract only the very bravest of adrenaline junkies. Certainly no place for someone who is far from secure about their own self.\nI am clearly no expert on the matter, but I have been reading a lot about equine-assisted therapy and how partnering with a non-judgemental animal such as the horse can heal.", "score": 8.086131989696522, "rank": 98}]} {"qid": 26, "question_text": "What are the main symptoms of classical cystic fibrosis?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "Cystic fibrosis (CF) is a chronic genetic condition involving multiple organ systems. Classical CF primarily involves the respiratory and digestive systems, and may have a range of clinical severity. Pulmonary symptoms often include lower airway inflammation, chronic cough, chronic sinusitis, and recurrent infections. Digestive symptoms often include meconium ileus, pancreatic insufficiency resulting in malabsorption and/or failure to thrive, diabetes mellitus, and hepatobiliary disease. 
Congenital bilateral absence of the vas deferens (CBAVD) is seen in men without pulmonary or digestive symptoms of CF, and results in azoospermia. CBAVD is a significant cause of male infertility.\nCF is caused by mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene. Individuals with mutations in the CFTR gene may also present with milder or atypical symptoms such as pancreatitis or chronic sinusitis.\nThe incidence of CF is approximately 1 in 2500 live births among Caucasians and is inherited in an autosomal recessive pattern. The carrier frequency is estimated to be approximately 1 in 25 in the Caucasian population, 1 in 24 in the Ashkenazi Jewish population, 1 in 61 in the African American population, 1 in 58 in the Hispanic population and 1 in 94 in the Asian population.\nThe current recommendation from the American College of Obstetricians and Gynecologists [2, 3] and the American College of Medical Genetics Subcommittee on Cystic Fibrosis is that screening for cystic fibrosis be offered to all patients, regardless of ethnicity, by a minimum panel of 23 common mutations.\nThis test offers an expanded panel of 142 mutations to account for mutations more common in non-Caucasian ethnic groups, as well as rarer mutations across all ethnic groups.\nClick here for a complete list of CFTR mutations.\nFor those providers wishing to order only the ACOG/ACMG recommended 39-mutation panel, see test code CF.\nVisit www.ThinkGenetic.com for patient-friendly information on cystic fibrosis.\n1. Xu et al. Cystic fibrosis transmembrane conductance regulator is vital to sperm fertilizing capacity and male fertility (2007). PNAS. 104(23):9816-21.\n2. 
ACOG Committee Opinion.", "score": 53.54748775580793, "rank": 1}, {"document_id": "doc-::chunk-2", "d_text": "Symptoms of Cystic Fibrosis\n- Frequent lung infections\n- Chronic and persistent cough, sometimes producing phlegm\n- Shortness of breath, wheezing\n- Poor growth and weight gain\n- Frequent slimy, large stools or difficulties with bowel movements\n- Salty-tasting skin\nOutlook for Children with Cystic Fibrosis\nThanks to advancements in both genetics and pharmacology, children with CF can live longer and more comfortable lives. Scientists have spent the past 10 years researching cystic fibrosis. Because of their work, doctors have a better understanding of the disease, allowing them to develop new treatments. While children with CF were once condemned to live short lives, usually not making it past the 20 or 30-year-old mark, recent discoveries have lengthened life expectancy to 50 years. There is still hope that researchers may one day find a cure.\nLactose intolerance is a condition in which children cannot properly digest lactose. Lactose is a sugar found in dairy products such as milk, soft cheeses and ice cream. Children who suffer from lactose intolerance are unable to make enough of an enzyme known as lactase. This enzyme is normally produced by the intestines and is necessary for digestion. Without sufficient lactase, any undigested lactose remains in the intestines and results in the symptoms commonly associated with the disorder.\nBecause the symptoms of lactose intolerance are similar to those caused by a variety of digestive problems, diagnosing children with the condition may prove tricky. The first step in evaluating whether a child is lactose intolerant involves gradually removing dairy products from their diet for several weeks to see if their symptoms improve.
The next step is to talk with the child’s pediatrician and request that they perform what is called a “lactose breath test.” During this test, a lactose solution is ingested so that the subsequent hydrogen levels in the child’s breath can be measured. Undigested lactose remains in the intestinal tract and ferments, producing hydrogen that is then exhaled; thanks to this process, doctors are able to accurately test for lactose intolerance.\nSymptoms of Lactose Intolerance in Children\nSymptoms typically appear anywhere from 30 minutes to two hours after consuming foods that contain lactose. Common symptoms include:\n- Stomach cramps\nTreatment for Lactose Intolerance\nUnfortunately, there is no remedy for lactose intolerance.", "score": 49.65484249514996, "rank": 2}, {"document_id": "doc-::chunk-2", "d_text": "The most common symptoms include:\n- Frequent coughing that brings up thick sputum, or phlegm (flem).\n- Frequent bouts of bronchitis and pneumonia. They can lead to inflammation and permanent lung damage.\n- Salty-tasting skin.\n- Infertility (mostly in men).\n- Ongoing diarrhea or bulky, foul-smelling, and greasy stools.\n- Huge appetite but poor weight gain and growth. This is called "failure to thrive." It is a result of chronic malnutrition because you do not get enough nutrients from your food.\n- Stomach pain and discomfort caused by too much gas in your intestines.\nCF can also lead to other medical problems, including:\n- Sinusitis » The sinuses are air-filled spaces behind your eyes, nose, and forehead. They produce mucus and help keep the lining of your nose moist. When the sinuses become swollen, they get blocked with mucus and can become infected. Most people with CF develop sinusitis.\n- Bronchiectasis » Bronchiectasis is a lung disease in which the bronchial tubes, or large airways in your lungs, become stretched out and flabby over time and form pockets where mucus collects. The mucus provides a breeding ground for bacteria. 
This leads to repeated lung infections. Each infection does more damage to the bronchial tubes. If not treated, bronchiectasis can lead to serious illness, including respiratory failure.\n- Pancreatitis » Pancreatitis is inflammation in the pancreas that causes pain.\n- Episodes of intestinal blockage, especially in newborns.\n- Nasal polyps » Growths in your nose that may require surgery.\n- Clubbing » Clubbing is the widening and rounding of the tips of your fingers and toes. It develops because your lungs are not moving enough oxygen into your bloodstream.\n- Collapsed lung » This is also called pneumothorax.\n- Rectal prolapse » Frequent coughing or problems passing stools may cause rectal tissue from inside you to move out of your rectum.\n- Liver disease » Due to inflammation or blocked bile ducts.\n- Low bone density » Because you do not get enough vitamin D.\nThere is still no cure for cystic fibrosis (CF), but treatments have improved greatly in recent years. The goals of CF treatment are to:\n- Prevent and control infections in your lungs.", "score": 49.41445608439479, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "About Cystic Fibrosis\nWhat is Cystic Fibrosis?\nCystic fibrosis is a genetic disease that affects over 70,000 people worldwide.\nIn people with CF, a defective gene causes a thick buildup of mucus in the lungs, pancreas, and other organs. In the lungs, the mucus clogs the airways and traps bacteria leading to infections, extensive lung damage, and eventually, respiratory failure. 
In the pancreas, the mucus prevents the release of digestive enzymes that allow the body to break down food and absorb vital nutrients.\nSymptoms of CF\nPeople with CF can have a variety of symptoms, including:\n- Very salty-tasting skin\n- Persistent coughing, at times with phlegm\n- Frequent lung infections including pneumonia or bronchitis\n- Wheezing or shortness of breath\n- Poor growth or weight gain\n- Frequent greasy, bulky stools or difficulty with bowel movements\n- Male infertility\nDiagnosis and Genetics\nCystic fibrosis is a genetic disease. People with CF have inherited two copies of the defective CF gene — one copy from each parent.\nPeople with only one copy of the defective CF gene are called carriers, but they do not have the disease. Each time two CF carriers have a child, the chances are:\n- 25 percent (1 in 4) the child will have CF\n- 50 percent (1 in 2) the child will be a carrier but will not have CF\n- 25 percent (1 in 4) the child will not be a carrier and will not have CF\nThe defective CF gene contains an abnormality called a mutation. There are more than 2000 known mutations of the disease. Most genetic tests only screen for the most common CF mutations. Read more about diagnosis.\nAccording to the Cystic Fibrosis Foundation Patient Registry, in the United States:\n- More than 30,000 people are living with cystic fibrosis (more than 70,000 worldwide).\n- Approximately 1,000 new cases of CF are diagnosed each year.\n- More than 75 percent of people with CF are diagnosed by age 2.\n- More than half of the CF population is age 18 or older.", "score": 47.94370291203685, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "What is cystic fibrosis?\nCystic fibrosis (CF) is a life-threatening genetic disease. A child with CF has a faulty gene that affects the movement of sodium chloride (salt) in and out of certain cells.\nThe result is thick, heavy, sticky mucus; salty sweat; and thickened digestive juices. 
The thick mucus secretions can clog the lungs, making a child with CF very prone to breathing difficulties, lung infections (the mucus provides a rich environment for bacteria), and, eventually, severe lung damage. And when thickened digestive fluids from the child's pancreas can't get to the small intestine to break down and absorb nutrients from the food he eats, he may also have digestive and growth problems.\nWhat are the signs and symptoms of cystic fibrosis in children?\nPoor growth is one of the first signs of CF. Parents may also notice a nagging cough and wheezing. Coughing and wheezing are hardly unique to children with CF, of course. These symptoms could be caused by viral bronchiolitis (an inflammation of the small breathing tubes), asthma, or even a dusty, smoky environment. Each of these conditions is far more common than any genetic disease.\nStill, cystic fibrosis is the most common life-shortening genetic disease among people of Northern European descent. Other symptoms include salty skin, a big appetite with no weight gain, and large, greasy stools.\nSometimes the condition doesn't become apparent until a child has had a series of repeated lung infections or severe growth problems. If your child has any of these symptoms, talk with her doctor.\nCystic fibrosis can't be cured, but there are new treatments that can not only prolong a child's life but may also help make that life more normal. 
And the earlier cystic fibrosis is diagnosed, the more effective those therapies will be.\nHow common is cystic fibrosis?\nThe chances of your child having CF are about 1 in 3,000 if your child is Caucasian, 1 in 9,000 if he's Hispanic, 1 in 11,000 if he's Native American, 1 in 15,000 if he's of African heritage, and 1 in 30,000 if he's Asian American.\nChildren get cystic fibrosis when they inherit one defective CF gene from their mother and another from their father.", "score": 46.541587676826694, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Cystic fibrosis is a genetic condition that affects the production and flow of mucus in the respiratory and digestive system. The lungs and intestine become clogged up with a thick, sticky mucus which causes symptoms such as a persistent cough and recurrent infections.\nThe symptoms of this incurable condition usually manifest within the first year of life, although they can develop later. Recurrent infections and malnutrition often affect growth and the lifespan is significantly shortened. Symptoms vary in intensity and severity from person to person and are managed using physiotherapy, physical therapy and nutritional therapy.\nCystic fibrosis can be a difficult condition for a patient and their family members to cope with. Education and support groups are available to help patients and families manage the emotional repercussions of living with the disease as well as helping patients to lead a life that is as full and active as possible. Educational programs and materials are tailored to suit patients of different age groups as well as their caregivers and members of family.\nPatients and their families are also provided with genetic counselling to help them understand the genetic basis of cystic fibrosis, which can be particularly helpful to parents who may be struggling to come to terms with how their child has become ill. Individuals are also educated about the nontransmissible nature of the disease. 
In addition, the patient’s family members are encouraged to undergo genetic testing to find out whether or not they are carriers of the condition.\nParents and caregivers are educated with regard to helping the patient grow and develop as normally as possible in both a physical and emotional sense as well as helping the patient take care of themselves and live independently.\nPatients and their families need to understand the importance of attending clinical follow-up appointments and receiving routine vaccinations, chest physiotherapy and nutritional support. Life expectancy among individuals with this condition is increased with better nutrition and improved lung function.\nAnother complication of cystic fibrosis is infertility. Malnutrition leads to delayed puberty, hormonal imbalances and infertility. In men, the tubes that carry sperm do not develop normally. In women, the cervical mucus is thicker and more difficult for sperm to penetrate.", "score": 45.05473793962309, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "Symptoms of Cystic Fibrosis\nCystic fibrosis is a genetic disease that affects cells that secrete mucus, sweat, and digestive juices.\nIn normal cases, the fluid being secreted is thin and slippery, but in cystic fibrosis, a defective gene causes the secretions to become thick and sticky.\nSo, instead of acting as lubricants, these secretions block the tubes, ducts and pathways, especially in the lungs and pancreas.\nSymptoms of cystic fibrosis\nCystic fibrosis signs and symptoms vary depending on the severity of the disease, and symptoms may improve or worsen over time in the same person.\nIn some children, symptoms begin to appear during infancy; others may not experience symptoms until adolescence or puberty.\nSalt in sweat is usually higher than normal in cystic fibrosis.\nParents often taste the salt when kissing their children.\nMost signs and symptoms of cystic fibrosis affect the respiratory system or the digestive system.\nSigns and 
symptoms of the respiratory system\nThe thick, viscous mucus associated with cystic fibrosis blocks the ducts that carry air to and from the lungs. This can cause:\nPersistent cough that produces thick sputum (mucus).\nReduced ability to exercise.\nRecurrent lung infections.\nSinusitis or nasal obstruction.\nSigns and symptoms of the gastrointestinal tract\nThick mucus can also block channels that carry digestive enzymes from the pancreas to the small intestine.\nWithout these digestive enzymes, the intestines will not be able to absorb the nutrients in the food you eat. This often results in:\nPoor weight gain and growth\nBowel obstruction, especially in newborns (meconium ileus)\nFrequent straining during bowel movements may result in part of the rectum, the end of the large intestine, protruding outside the anus (rectal prolapse).\nWhen this happens in children, it may be a sign of cystic fibrosis, so parents should consult an expert doctor on cystic fibrosis.\nPediatric rectal prolapse may require surgery.\nWhen should I visit a doctor?\nTalk to your doctor if your child has the following:\nPersistent cough that produces mucus\nFrequent infections of the lungs or sinuses\nFrequent foul-smelling, greasy stools\nRectal prolapse\nIf your child has trouble breathing, seek immediate medical attention.", "score": 43.203169110064266, "rank": 7}, {"document_id": "doc-::chunk-9", "d_text": "Cystic fibrosis (CF) is caused by a genetic mutation that disrupts the cystic fibrosis transmembrane conductance regulator (CFTR) protein, resulting in poorly hydrated, thickened mucous secretions in the lungs, pancreas, liver, intestines, sinuses, and sex organs. Or to put that in English. Mucus is normally watery. It keeps the linings of the organs listed above moist and prevents them from drying out or getting infected. But in CF, an abnormal gene causes mucus to become thick and sticky.\nThis thick, sticky mucus builds up in your lungs and blocks the airways. 
This makes it easy for bacteria to grow and leads to repeated, serious lung infections. Over time, these infections can cause serious damage to your lungs. The thickened, sticky mucus can also block tubes and ducts in your other organs, compromising their ability to function.
Medical treatment will often involve antibiotics to deal with infection, mucus thinners, and bronchodilators, all of which have side effects.
What you can do about cystic fibrosis
Let's be clear here. As a progressive, inherited condition, there is no treatment (alternative or otherwise) that is going to make cystic fibrosis go away. It's all a question of managing symptoms. And here there are alternatives that may indeed work as well as (or, in some cases, better than) traditional medications, and with far fewer side effects. We're talking about:
- Proteolytic enzymes to thin the mucus, kill bacteria and viruses, and reduce inflammation in the bronchial passages.
- Immune enhancers.
- Pathogen destroyers.
- Digestive enzymes to improve nutrient absorption in the intestinal tract, an important issue since in CF digestive enzymes are often blocked from leaving the pancreas by the thickened mucus.
- A syrup made from elecampane root to help clear mucus from the lungs.
Pulmonary fibrosis literally refers to scarring (fibrosis) throughout the lungs. It can be caused by many conditions, including chronic inflammatory processes; chronic diseases such as lupus and rheumatoid arthritis; infections; environmental agents such as asbestos and silica; radiation therapy used to treat tumors of the chest; and even certain medications.
Medical treatment options for pulmonary fibrosis are very limited.

Cystic fibrosis is a genetic disease in which a mutation causes bodily secretions, such as mucus, digestive juices and sweat, to have a thick, sticky consistency rather than a thin, slippery one, explains Mayo Clinic. In healthy individuals, these secretions serve as lubricants for the lungs and digestive system. In patients with cystic fibrosis, the secretions instead clog important passageways in the lungs and pancreas.
Individuals with cystic fibrosis often experience breathing difficulties due to the thick, sticky mucus blocking their airways, notes the National Heart, Lung, and Blood Institute. They are prone to lung infections, as bacteria can grow easily in areas of mucus buildup. These infections damage lung tissue over time, and the leading cause of death among cystic fibrosis patients is respiratory failure. Doctors sometimes prescribe pulmonary rehabilitation therapy in an attempt to improve breathing quality.
Cystic fibrosis patients also have sweat that contains an unusually high salt content, states the NHLBI. This can cause patients to develop mineral imbalances when they sweat. Additionally, a buildup of mucus in the digestive system makes it more difficult for the enzymes that help digest food to travel to the small intestine.
As a result, cystic fibrosis patients are at an increased risk of vitamin deficiencies, malnutrition, constipation, gas and abdominal swelling.

Almost everyone has had a cold and knows how it can make breathing difficult. The body makes a great amount of mucus to flush dust and germs out of the lungs. For most of us, a cold is only uncomfortable for a short time, but for those with cystic fibrosis, breathing is a daily struggle. The mucus caused by cystic fibrosis is thick and sticky. "If you have cystic fibrosis, your mucus becomes thick and sticky. It builds up in your lungs and blocks your airways." The mucus builds up and causes problems in many of the body's organs, especially the lungs and the pancreas. According to the National Institutes of Health, National Heart, Lung, and Blood Institute, "the build up of mucus makes it easy for bacteria to grow." Not only does this buildup lead to lung infections, it can also lead to digestive issues (malnutrition), because the mucus can block the ducts of the pancreas, which creates the enzymes that help food digest in the intestines.
The Cause and Diagnosis
Cystic fibrosis is an inherited disease. Both parents must be carriers of the faulty gene for the child to develop cystic fibrosis. All states test for cystic fibrosis during newborn screenings. The screening includes a blood test which looks for the gene associated with cystic fibrosis (the CFTR gene); that blood test also shows whether the infant's pancreas is working properly. If the blood test indicates the presence of cystic fibrosis, the physician may order a sweat test to confirm the diagnosis. Other tests may include a chest X-ray, a lung function test and/or a sputum culture.
(See What is Cystic Fibrosis? by the National Institutes of Health for an overview.)
Even though there is no cure for cystic fibrosis, a number of treatments can lessen the symptoms. These treatments include pulmonary rehabilitation, exercise, different medicines, oxygen therapy and nutritional therapy. Because cystic fibrosis can also cause a special kind of diabetes, that will be treated too.
Social Security Disability
The Commissioner of Social Security has "listed" cystic fibrosis as a disease which can cause disability.

What Is Cystic Fibrosis? This Genetic Disease Affects Every System in the Body
The life-threatening disorder causes shortness of breath, a persistent cough, and digestion issues.
Cystic fibrosis (CF) is a complex, life-threatening disease that affects many organs in the body, including the lungs, kidneys, and gastrointestinal tract. It is caused by a genetic mutation that makes a certain protein stop working. When this happens, chloride, a mineral that is vital to cellular hydration, cannot move to the surface of cells, prompting mucus to become thick and sticky in a variety of organs.
"The predominant way CF affects the body is in the gastrointestinal and respiratory system, but it causes a wide variety of complications that affects every system in the body," Zubin Mukadam, MD, assistant professor of pulmonary and critical care at Wayne State University and Detroit Medical Center, tells Health.
In the lungs of a CF patient, mucus becomes clogged in the airways, and bacteria and other germs can get trapped. In the pancreas, mucus buildup prevents the release of digestive enzymes that help the body absorb food and nutrients, resulting in malnutrition. In the liver, thick mucus can block the bile duct, causing liver disease.
And in men, CF can affect the ability to have children, according to the Cystic Fibrosis Foundation.
"When a channel to certain organs, specifically the lungs, becomes dehydrated, this can lead to chronic inflammation and chronic infection," Don Hayes, Jr., MD, medical director of lung and heart-lung transplant programs for Nationwide Children's Hospital and The Ohio State University Wexner Medical Center, tells Health. "This is why CF is a progressive disease that causes persistent lung infections and limits the ability to breathe over time."
Cystic fibrosis and the family tree
More than 30,000 people are living with CF in the United States, according to the Cystic Fibrosis Foundation, and more than 75% are diagnosed by age 2.
"We do a lot of newborn screening," says Dr. Mukadam.

Cystic fibrosis is an inherited illness that causes serious damage to the lungs, digestive system, and other organs in the body. It targets the cells that produce mucus, sweat, and digestive juices; these secreted fluids are normally thin and slippery. In people with cystic fibrosis, a defective gene causes the secretions to become sticky and thick. Instead of working as lubricants, the secretions plug up tubes, ducts, and passageways, especially in the lungs and pancreas. Although cystic fibrosis requires regular care, people with the condition are generally able to attend school and work, and often have a better quality of life than people with cystic fibrosis had in earlier decades. Improvements in screening and treatment mean people with cystic fibrosis now may live into their mid-to-late 30s on average, and some are living into their 40s and 50s.
Symptoms of Cystic Fibrosis
Newborn screening for cystic fibrosis is now done in every state in the United States. As a result, the condition can be diagnosed within the first month of life, before symptoms develop. For people born before newborn screening was introduced, it is important to be aware of the signs and symptoms of cystic fibrosis.
Symptoms differ depending on the severity of the disease. Even in the same person, symptoms may worsen or improve over time, and some people may not have symptoms until adulthood.
People with cystic fibrosis have a higher than usual level of salt in their sweat; parents sometimes notice a salty taste when they kiss their children. Most of the other signs and symptoms affect the respiratory system and the digestive system. Adults diagnosed with cystic fibrosis, however, are more likely to have atypical symptoms, such as recurring bouts of pancreatitis, infertility and recurrent pneumonia.
Causes of Cystic Fibrosis
In cystic fibrosis, a defect in a gene alters a protein that regulates the movement of salt in and out of cells. The result is thick, sticky mucus in the respiratory, digestive and reproductive systems, as well as increased salt in sweat. Many different mutations can occur in the gene, and the type of mutation is related to the severity of the condition.

Cystic fibrosis is an inherited condition that causes serious damage to the digestive system and lungs. It affects the cells that produce sweat, mucus, and digestive juices, which are fluids that are normally thin and slippery.
Defective genes cause these secretions to become sticky and thick, so that they plug up ducts, tubes, and passageways instead of acting as lubricants, especially in the lungs and pancreas.
Although this disease requires daily care, people with cystic fibrosis can attend work and school while having a decent quality of life. Improvements in treatment and screening now allow patients to live into their 50s. The good news is that screening of all newborn babies for cystic fibrosis is performed in all states, so this condition is easy to diagnose at the very beginning, before any dangerous symptoms develop.
It is important to know the symptoms and signs of cystic fibrosis, but they may vary considerably with severity. Even in the same patient, symptoms may both improve and worsen, and some people may have no symptoms at all until adolescence or adulthood. Patients with cystic fibrosis have a high level of salt in their sweat, and other widespread symptoms usually affect the digestive and respiratory systems. Adult patients are more likely to develop atypical symptoms, including diabetes, pancreatitis and infertility.
Respiratory symptoms and signs include wheezing, reduced exercise tolerance, breathlessness, persistent coughing, regular lung infections, thick and sticky mucus, and inflamed nasal passages. Important digestive symptoms and signs include greasy, foul-smelling stools, severe constipation and intestinal blockage.
Patients should know when to see their doctors: once you notice any of the above symptoms, go to the hospital to be properly examined and diagnosed. In conclusion, different defects may occur in the gene, and severity depends on the mutation. People need to inherit one copy of the defective gene from each parent to develop this disease. If they have only one copy, they will be carriers.

Cystic fibrosis is a serious, heritable disorder that results in the buildup of thick mucus around several organs, including the lungs. It is caused by mutations in the CFTR gene (cystic fibrosis transmembrane conductance regulator), which encodes a protein responsible for transporting salts in and out of cells. The mutations cause the transport protein to be made incorrectly, or not at all.
People who have a single copy of a disease-causing mutation are considered carriers of the disease; they most likely will not develop symptoms, but may pass the disease-causing mutation to their children. Children who inherit two copies of disease-causing mutations will develop cystic fibrosis.
While there is no cure for cystic fibrosis, treatments are available that can ease the symptoms. Quality of life is significantly better for patients who receive early care, so early diagnosis is very important.
What is a newborn screen?
In the U.S., Canada, Australia, and most of Europe, newborn infants are tested for a variety of health conditions before leaving the hospital. One of the diseases tested for is cystic fibrosis.
How is a newborn screen performed?
In the first 24 to 48 hours after birth, a few drops of blood are collected from the infant with a heel prick (a quick jab with a small needle in the baby's heel). The blood is placed on a card and analyzed to determine how much immunoreactive trypsinogen (IRT), a protein commonly elevated in cystic fibrosis, is present. A positive test does not mean the baby has cystic fibrosis, as several conditions can cause elevated levels of this protein.
If the first test is positive, the IRT test is repeated three weeks after birth.
If the levels of IRT are normal, the infant may be a carrier of cystic fibrosis (meaning they have a single copy of a CFTR mutation), but probably does not have the disease.
If the second test is also positive, it may mean the infant will develop cystic fibrosis. The next step is a sweat test to determine whether they are secreting salts normally, or a genetic test to determine whether the child has mutations in the CFTR gene.
What happens next?

CF isn't a growth in your lungs that can be taken out. It is a genetic disease that for many affects the lungs, kidneys, pancreas, liver, and other organs. Does he have a history of allergies, asthma, pneumonia, bronchitis, etc.? Most people who are diagnosed later in life have usually exhibited some sort of symptoms throughout their lives.
Cystic fibrosis (CF) is a common genetic disease which affects the entire body, causing progressive disability and early death. Shortness of breath is the most common symptom and results from frequent lung infections, like pneumonia, that are treated, though not always cured, by antibiotics and other medications. A multitude of other symptoms, including sinus infections, failure to thrive, diarrhea, and infertility, result from the effects of CF on other parts of the body.
CF is one of the most common fatal inherited diseases. It is most prevalent among Caucasians and Ashkenazi Jews; one in 25 people of European descent carries one gene for CF. Individuals with cystic fibrosis can be diagnosed prior to birth by genetic testing, or in early childhood by a sweat test. There is no cure for CF, and most individuals with cystic fibrosis die young, many in their 20s and 30s, from lung failure. Ultimately, lung transplantation is often necessary as CF worsens.
Among males who have CF, 95-98% are infertile because the vas deferens either did not develop or is blocked.
Post Edited (Chaser) : 7/10/2006 6:09:16 PM (GMT-6)

Cystic Fibrosis Carrier Screening
What Is Cystic Fibrosis?
Cystic fibrosis (CF) is an inherited disease caused by a change (mutation) in the cystic fibrosis transmembrane regulator (CFTR) gene. It is a chronic, progressive disease that causes mucus to become thick and sticky. The mucus builds up and clogs passages in many of the body's organs, but mostly in the lungs and the pancreas. In the lungs, the mucus can cause serious breathing problems and lung disease. In the pancreas, the mucus can cause digestive problems and malnutrition, which can lead to problems with growth and development.
What Causes CF?
Cystic fibrosis (CF) is a genetic disorder. A child must inherit two defective CF genes (one defective gene from each parent) to have the disease. A person who has inherited only one defective CF gene is a carrier of CF: this person does not have the disease but can pass the defective gene, and carrier status, on to his or her children.
eMedicineHealth Medical Reference from Healthwise
To learn more visit Healthwise.org
© 1995-2014 Healthwise, Incorporated.

For patients who have cystic fibrosis, it is important to understand that this condition is inherited and causes serious damage to the digestive system and lungs.
Besides, it affects the cells that produce sweat, mucus, and digestive juices, which are fluids that are normally thin and slippery.

What Causes Cystic Fibrosis?
It is a genetic disease: the most common cause is inheriting a mutated gene from both parents.
- Findings vary according to the affected organ, and the disease may present in many ways.
- In early infancy there may be recurrent, treatment-resistant lower respiratory tract infections, cough and wheezing.
- Patients may have difficulty gaining weight, with oily, foul-smelling and frequent stools, and growth and developmental delay.
- When kissed by their mother, their sweat may taste noticeably salty.
- They may present with loss of fluid and salt, weakness, loss of appetite and exhaustion, especially in hot weather.
- In older children, symptoms may include excessive sputum production, frequent lung infections, enlargement of the fingertips (clubbing), nasal polyps and recurrent attacks of sinusitis.
- Rarely, the disease may present with findings such as diabetes.
- In adulthood, delayed puberty, attacks of pancreatitis, and hepatic and pancreatic insufficiency can be seen.
How Is Cystic Fibrosis Diagnosed?
The complaints, like the symptoms of cystic fibrosis, can be varied. Anaemia, elevated liver function tests, abnormal chloride values and changes in blood gases, as well as chest film findings, are among the methods used to support the diagnosis. The definitive diagnosis is made by demonstrating an elevated chloride level in the sweat test: values of 60 mmol/L and above indicate cystic fibrosis.
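The sweat-test cut-off quoted above can be expressed as a simple decision rule. The sketch below is illustrative only, not clinical software: the function name is an assumption, and the only threshold it encodes is the value of 60 (conventionally reported in mmol/L) mentioned in the text.

```python
def interpret_sweat_chloride(chloride_mmol_per_l: float) -> str:
    """Coarse interpretation of a sweat chloride result (illustrative sketch).

    Encodes the single cut-off quoted in the text: values of 60 mmol/L
    and above are taken as indicating cystic fibrosis.
    """
    if chloride_mmol_per_l < 0:
        raise ValueError("chloride concentration cannot be negative")
    if chloride_mmol_per_l >= 60:
        return "consistent with cystic fibrosis"
    return "below the cystic fibrosis cut-off"


print(interpret_sweat_chloride(72))  # consistent with cystic fibrosis
print(interpret_sweat_chloride(25))  # below the cystic fibrosis cut-off
```

Real laboratories report additional intermediate ranges below the cut-off; this sketch deliberately encodes only the figure stated in the text.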
At the same time, gene mutation analysis can determine the type of mutation through which the disease occurs.
How Is Cystic Fibrosis Treated?
Cystic fibrosis is treated with a combination of medication, physical therapy and exercise.
Drugs: There are three main groups of drugs used in the treatment of cystic fibrosis: bronchodilator drugs that keep the airway open, antibiotics that fight infections, and steroids that alleviate inflammation. To help digestion, patients with cystic fibrosis also need to take pancreatic enzymes with each meal.
Physiotherapy: Daily physical therapy is essential for clearing the mucus layer in the airways and minimizing infection. This physiotherapy can be administered by a parent or caregiver at home; when children are old enough, they can apply the necessary daily therapies themselves.

The clinical consequences include:
- recurrent respiratory infection, inflammation, bronchial damage and bronchiectasis, resulting ultimately in respiratory failure
- exocrine pancreatic dysfunction, with approximately 85% of patients suffering from insufficiency in pancreatic enzyme production, malabsorption and, if not adequately controlled, malnutrition
- bowel obstruction, with approximately 15% of infants suffering from meconium ileus at birth, and recurrent bowel blockages in older patients due to distal intestinal obstruction syndrome
- impaired glucose tolerance or CF-related diabetes
- liver disease and portal hypertension
- reduced bone mineral density
- gut motility problems
- infertility in most males
- CF arthropathy
- behavioural and psychological problems as a result of severe, life-limiting disease
Newborn screening for CF was introduced across the UK in October 2007. All infants born since this date are offered screening for CF on day 5 of life.
There are best practice guidelines defining the protocol for screening and diagnosis, and guidelines for the management of infants picked up through screening [5, 6]. The majority of CF diagnoses are made through this programme and should be confirmed by sweat test and/or genetic mutation analysis by 4 weeks of age. Early diagnosis through newborn screening has been shown to confer significant benefits, especially in terms of growth and nutrition [7, 8].
The majority of cases picked up through screening have typical CF; however, the nature of the screening programme will identify some cases where the diagnosis is uncertain because of anomalies in the genotype, phenotype or sweat test result. These cases should still be managed in a CF specialist centre.
Some cases of CF will not be picked up through the screening programme, either because the patients were born before October 2007, were born outside the UK, or missed the screening process. In these patients a clinical diagnosis is made, confirmed through sweat test and genetic mutation analysis.
Chronic respiratory disease
The lungs are normal at birth but may become affected early in life [5, 6]. The abnormal secretions obstruct the small airways, with secondary infection and inflammation. Recurrent infection with organisms such as Staphylococcus aureus, Haemophilus influenzae and Pseudomonas sp. leads to damage of the bronchial wall, bronchiectasis and abscess formation.

Other symptoms are large, bulky stools from passing undigested food, pneumonia caused by bacteria in the lungs, and salty-tasting skin caused by a high concentration of sodium and chloride in the sweat glands.
"We've had several emergency room visits," Buck said. "Hannah would get croup really bad to the point where she couldn't breathe.
When we would get to the ER, they would do an epinephrine breathing treatment, give her a steroid shot, and it would help."
Last Memorial Day weekend, the Bucks took Hannah to the ER for breathing problems, and this time the treatment didn't help. Hannah was admitted for two days.
"The admission diagnosis was croup with underlying asthma," Buck said. "Her physician is pretty thorough and very cautious, so when he discharged her he wanted her to see a pediatric pulmonologist, to make sure our treatment regime was appropriate."
The appointment with the pulmonologist did not go the way Buck expected.
"We had a 10 o'clock appointment in Kalamazoo, and I assumed I'd be back at work by noon," Buck said. "I was naive as to CF. He took the history, and some things I shared with him were classic signs and symptoms of CF. He sent us right away to Bronson Hospital for the sweat test, the definitive test for CF, and asked that we come back to his office. At 1 o'clock, they had the results and they were positive.
"It was devastating news, but we were lucky to find out," Buck said. "A lot of people say it's strange that she was six before she was diagnosed, but that happens often, that symptoms aren't noticed until later."
Brian and Sarah Moore found out there was a problem when Sarah was pregnant.
"There was some kind of anomaly on the ultrasound," Sarah said. "We went to Bronson and they did special tests. The tests told us it could be three or four things. We knew (CF) was a possibility, but she wasn't diagnosed until she was four months old.
"As soon as they diagnosed Hannah, she started taking enzyme pills with everything she eats, which helps her body digest the food. We still battle the weight gain, but she's holding her own."
If undiagnosed, a number of things could happen.
"That sticky mucus harbors bacteria that can lead to a lung infection and inflammation," Buck said.

- Loosen and remove the thick, sticky mucus from your lungs.
- Prevent blockages in your intestines.
- Provide adequate nutrition.
Treatment for Lung Problems
The main treatments for lung problems in people with CF are:
- Antibiotics » Most people with CF have ongoing, low-grade lung infections. Sometimes these infections become so serious that you may need to be hospitalized. Antibiotics are the primary treatment.
- Chest Physical Therapy » Also called chest clapping or percussion, it involves pounding your chest and back over and over again to dislodge the mucus from your lungs so that you can cough it up. CPT for cystic fibrosis should be done three to four times each day. CPT is often combined with postural drainage: sitting, or lying on your stomach with your head down, while you do CPT, which allows gravity to help drain the mucus from your lungs.
- Exercise » Aerobic exercise helps loosen the mucus, encourages coughing to clear the mucus, and improves your overall physical condition. If you exercise regularly, you may be able to cut back on your chest therapy.
- Other Medications » Anti-inflammatory medications may help reduce the inflammation in your lungs that is caused by ongoing infections. Mucus-thinning drugs reduce the stickiness of mucus in your airways.
- Oxygen Therapy » If the level of oxygen in your blood is too low, you may need oxygen therapy.
Oxygen is usually given through nasal prongs or a mask.
- Lung Transplantation » Surgery to replace one or both of your lungs with healthy lungs from a human donor may help you.
Treatment for Digestive Problems
Nutritional therapy can improve your growth and development, strength, and exercise tolerance. It may also make you strong enough to resist some lung infections. Nutritional therapy includes a well-balanced, high-calorie diet that is low in fat and high in protein.
Living With Cystic Fibrosis
If you have cystic fibrosis (CF), you should learn as much as you can about the disease and work closely with your doctors to learn how to manage it. Ongoing medical care is important. You should seek treatment from a team of doctors, nurses, and respiratory therapists who specialize in CF. These specialists are often located at CF Foundation centers in major medical centers.

Air passages become clogged with mucus, and there is often widespread obstruction of the bronchioles. Expiration is especially difficult. More and more air becomes trapped in the lungs, which results in obstructive emphysema. Atelectasis can occur, leaving small areas collapsed. Eventually the chest assumes a barrel shape. The right ventricle, which supplies the lungs, may become strained and enlarged. Clubbing of the fingers and toes may occur as a compensatory response to the chronic lack of oxygen.
Cystic fibrosis affects the pancreas. The mucus clogs the duct and blocks the transfer of enzymes from the pancreas to the intestines. These enzymes are needed to break down the food that is necessary for proper growth and weight gain. The mucus in the digestive tract blocks the absorption of necessary nutrients. This is why there is often no weight gain despite a good appetite. This can be associated with failure to thrive. The buttocks...

Respiratory clinic - girl with known cystic fibrosis
Dr Louise Selby, paediatric registrar, and Dr Donna McShane, paediatric cystic fibrosis consultant, guide you through a case of a young girl with known cystic fibrosis.
A seven-year-old girl with known cystic fibrosis (CF) presents with coryza and sore throat. She has also had intermittent abdominal pain before bed for the past four nights, although she has been eating and drinking well. Her bowels have been opened every two to three days. She has been attending school, denies any increase in cough or breathlessness, and feels well in herself, which is why her parents have come to see you rather than attending hospital. Spirometry was performed two months previously (FEV1 of 87% and FVC of 92%). Her parents tell you that she was treated for pseudomonas infection about a year ago, which was eradicated and has not been grown on cough swab since.
Her current treatment includes inhaled salbutamol; vitamins A, D, E and K; Creon capsules with food; hypertonic saline nebulisers; and nebulised DNase.
On examination she is afebrile and of slim build. There are no signs of respiratory distress, and her chest is clear with a mildly wet huff (a crepitation heard at the end of a short, sharp expiration). ENT examination shows mildly inflamed nasal turbinates and an erythematous throat, but no visible pus. Abdominal examination is unremarkable. The GP suspects a viral infective exacerbation, but does not feel referral to hospital is warranted at this stage.
CF is an autosomal recessive condition with an incidence of approximately one in 2500 births, affecting around 9000 people in the UK. It is caused by an autosomal recessive defect in the cystic fibrosis transmembrane regulator gene, resulting in reduced clearance of lung secretions, progressive infection, inflammation, and declining lung function.
The defective gene also causes development of viscous secretions in the pancreas, causing obstructive damage to its exocrine function and malabsorption, requiring enzyme replacement (Creon) in around 85% cases.\nThe microbiology of CF is complex and challenging.", "score": 31.4731205795139, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "Cystic fibrosis (CF; OMIM®: 219700) is autosomal recessive, multisystem disease leading to significant morbidity and early death. Characteristic manifestations include recurrent lung infections, malabsorption, malnutrition, and infertility (especially in males). Cystic Fibrosis is caused by thick and sticky mucus due to the disturbances of salt homeostasis in cells. Biochemical hallmark of the disease is elevated sweat chloride concentration.\nCF is caused by mutations in the cystic fibrosis conductance regulator gene (CFTR; OMIM®: *602421; HGNC number: 1884), located on chromosome 7. CFTR functions as a chloride channel and controls the regulation of other transport pathways. Asymptomatic carrier parents, who have no physiological or biochemical outcome that enables routine identification, typically have one CFTR mutation; whereas diseased progeny carry at least two mutations, one on each CFTR gene allele. CF has a high incidence in people of Northern European descent, occurring in approximately 1 in 2500 live births.\nThe most common mutant allele is the F508del mutation , which is a deletion of three basepairs at the 508th codon causing the deletion of a phenylalanine residue and subsequent defective intracellular processing of the CFTR protein that is an important chloride channel. Worldwide, the F508del mutation is responsible for approximately two-thirds (66%) of all CF chromosomes; however, there is great mutational heterogeneity in the remaining one-third of all alleles . 
The next most frequent mutation is G542TER, a G-to-T change at nucleotide 1756 in exon 11 that produces a stop codon at position 542.
Not only is there heterogeneity in the mutations causing cystic fibrosis, but the pathogenetic mechanisms also vary. Deletion of phenylalanine-508 appears to cause disease by abrogating normal biosynthetic processing, thereby resulting in retention and degradation of the mutant protein within the endoplasmic reticulum. Other mutations, such as the relatively common G551D mutation, appear to be normally processed and, therefore, must cause disease through some other mechanism. The G551D mutation, which lies within the first nucleotide-binding fold of CFTR, is the third most common CF mutation, with a worldwide frequency of 3.1% among CF chromosomes.", "score": 30.87678958280496, "rank": 24}, {"document_id": "doc-::chunk-2", "d_text": "The major objective of chest treatment is to prevent infection, remove secretions and thereby delay the rate of lung damage and maintain respiratory function. This is done by:
- regular surveillance of secretions and prompt and aggressive treatment of infection with antibiotic therapy, either oral, aerosol or intravenous
- oral prophylactic antibiotic therapy in younger children
- regular chest physiotherapy to aid removal of bronchial secretions
- use of bronchodilators and oral steroids to open the airways and reduce inflammation
- encouragement of an active lifestyle and physical exercise
Pancreatic insufficiency (PI) is the most common gastrointestinal defect in CF, affecting approximately 95% of patients in northern Europe. Approximately 92% of infants will be pancreatic insufficient by the age of 12 months, although the introduction of newborn screening and early identification of patients with milder mutations may result in the proportion of patients with known pancreatic sufficiency increasing in the future.
PI develops when more than 90% of acinar function is lost. 
Damage begins in utero and there is ongoing destruction of acini and replacement with fibrous and fatty tissue. Consequently, the secretion of digestive enzymes and bicarbonate is reduced or absent, resulting in malabsorption of fat, protein, bile, fat-soluble vitamins and vitamin B12. Treatment is with pancreatic enzyme replacement therapy (PERT).
Diagnosis of PI is made by faecal elastase measurement. In the 10%−15% who are defined as having normal pancreatic function at diagnosis, enzyme secretion may be diminished; however, it may be sufficient for digestion of nutrients without the need for PERT. A proportion of those with normal faecal elastase will go on to develop PI; faecal elastase should be rechecked periodically and if symptoms of malabsorption develop.
Meconium ileus is the presenting feature in up to 25% of infants with CF. It becomes apparent within the first days of life and often before the result of newborn screening is available. It is caused by a blockage of the terminal ileum with thick meconium and develops in utero. Management of meconium ileus may be conservative; however, some patients will require surgical intervention, with or without intestinal resection and stoma formation.", "score": 30.367657483857577, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "Learn about cystic fibrosis, a genetic disorder that affects the lungs, pancreas, and other organs, and how to treat and live with this chronic disease.
CF is a rare genetic disease found in about 30,000 people in the U.S. If you have CF or are considering testing for it, knowing about the role of genetics in CF can help you make informed decisions about your health care.
If you or your child has just been diagnosed with cystic fibrosis, or your doctor has recommended testing for CF, you may have many questions.
Diagnosing CF is a multistep process. 
A complete diagnostic evaluation should include a newborn screening, a sweat chloride test, a genetic or carrier test, and a clinical evaluation at a CF Foundation-accredited care center.\nRaising a child with cystic fibrosis can bring up many questions because CF affects many aspects of your child’s life. Here you’ll find resources to help you manage your child’s daily needs and find the best possible CF care.\nLiving with cystic fibrosis comes with many challenges, including medical, social, and financial. By learning more about how you can manage your disease every day, you can ultimately help find a balance between your busy lifestyle and your CF care.\nPeople with CF are living longer, healthier lives than ever before. As an adult with CF, you may reach key milestones you might not have considered. Planning for these life events requires careful thought as you make decisions that may impact your life.\nPeople with cystic fibrosis are living longer and more fulfilling lives, thanks in part to specialized CF care and a range of treatment options.\nCystic Fibrosis Foundation-accredited care centers provide expert care and specialized disease management to people living with cystic fibrosis.\nWe provide funding for and accredit more than 120 care centers and 53 affiliate programs nationwide. The high quality of specialized care available throughout the care center network has led to the improved length and quality of life for people with CF.\nThe Cystic Fibrosis Foundation provides standard care guidelines based on the latest research, medical evidence, and consultation with experts on best practices.\nAs a clinician, you’re critical in helping people with CF maintain their quality of life. 
We’re committed to helping you partner with patients and their families by providing resources you can use to improve and continue to provide high-quality care.", "score": 30.35272011705871, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "Cystic fibrosis: Clinical
The CFTR protein is a channel protein that pumps chloride ions into various secretions. Those chloride ions help draw water into the secretions, which helps to thin them out.
Without CFTR protein on the epithelial surface, chloride ions aren’t pumped into the secretions, and that leaves the secretions thick and sticky.
In some countries, diagnosis of cystic fibrosis is done with newborn screening. Usually it’s done by detection of a pancreatic enzyme called immunoreactive trypsinogen (IRT), which is released into the fetal blood when there’s pancreatic damage from CF.
A confirmatory test is quantitative pilocarpine iontophoresis, better known as the sweat test, which detects high levels of chloride in the sweat.
If chloride levels in the sweat are high, meaning over 60 mmol/L, a CF diagnosis is very likely, while intermediate levels, from 30 to 59 mmol/L in infants below 6 months or 40 to 59 mmol/L in older infants, children, and adults, mean a CF diagnosis is possible.
In both cases, DNA testing is done to detect the most common cystic fibrosis-related mutations.
If two or more of these mutations are detected, one on each chromosome, then the diagnosis of CF is confirmed.
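The sweat-test interpretation described in this chunk (over 60 mmol/L means CF is very likely; 30−59 mmol/L in infants under 6 months, or 40−59 mmol/L in older patients, means CF is possible) amounts to a simple threshold rule. A minimal sketch, assuming a hypothetical function name and an explicit "CF unlikely" category below the intermediate band, which the text only implies:

```python
def interpret_sweat_chloride(chloride_mmol_l: float, age_months: float) -> str:
    """Classify a sweat chloride result using the thresholds quoted in the text.

    >= 60 mmol/L       -> CF very likely
    intermediate band  -> 30-59 mmol/L (infants under 6 months),
                          40-59 mmol/L (older infants, children, adults)
    below the band     -> CF unlikely (assumed category, implied by the text)
    """
    if chloride_mmol_l >= 60:
        return "CF very likely - confirm with DNA testing"
    lower_bound = 30 if age_months < 6 else 40
    if chloride_mmol_l >= lower_bound:
        return "intermediate - CF possible, DNA testing needed"
    return "CF unlikely"

print(interpret_sweat_chloride(72, age_months=3))   # CF very likely
print(interpret_sweat_chloride(45, age_months=24))  # intermediate
print(interpret_sweat_chloride(35, age_months=24))  # CF unlikely
```

Note how the same 35 mmol/L reading falls in the intermediate band for a 3-month-old but below it for a 2-year-old, because the lower cut-off shifts from 30 to 40 mmol/L at 6 months of age.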
", "score": 30.24189547575833, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "Learn about cystic fibrosis, a genetic disorder that affects the lungs, pancreas, and other organs, and how to treat and live with this chronic disease.
CF is a rare genetic disease found in about 30,000 people in the U.S. If you have CF or are considering testing for it, knowing about the role of genetics in CF can help you make informed decisions about your health care.
If you or your child has just been diagnosed with cystic fibrosis, or your doctor has recommended testing for CF, you may have many questions.
Diagnosing CF is a multistep process. A complete diagnostic evaluation should include a newborn screening, a sweat chloride test, a genetic or carrier test, and a clinical evaluation at a CF Foundation-accredited care center.
Raising a child with cystic fibrosis can bring up many questions because CF affects many aspects of your child’s life. Here you’ll find resources to help you manage your child’s daily needs and find the best possible CF care.
Living with cystic fibrosis comes with many challenges, including medical, social, and financial. By learning more about how you can manage your disease every day, you can ultimately help find a balance between your busy lifestyle and your CF care.
People with CF are living longer, healthier lives than ever before. As an adult with CF, you may reach key milestones you might not have considered. 
Planning for these life events requires careful thought as you make decisions that may impact your life.\nPeople with cystic fibrosis are living longer and more fulfilling lives, thanks in part to specialized CF care and a range of treatment options.\nCystic Fibrosis Foundation-accredited care centers provide expert care and specialized disease management to people living with cystic fibrosis.\nWe provide funding for and accredit more than 120 care centers and 53 affiliate programs nationwide. The high quality of specialized care available throughout the care center network has led to the improved length and quality of life for people with CF.\nThe Cystic Fibrosis Foundation provides standard care guidelines based on the latest research, medical evidence, and consultation with experts on best practices.\nAs a clinician, you’re critical in helping people with CF maintain their quality of life. We’re committed to helping you partner with patients and their families by providing resources you can use to improve and continue to provide high-quality care.", "score": 30.195554901987283, "rank": 28}, {"document_id": "doc-::chunk-2", "d_text": "About 1,900 CFTR mutations were reported, but one single mutation (F508del), associated with severe CF, accounts for ~70% of CF chromosomes worldwide . Despite impressive advances in our understanding of the molecular basis of CF, life expectancy and quality of life for CF patients are still limited .\nFor the vast majority of patients, the diagnosis of classic forms of CF is established early in life and suggested by one or more characteristic clinical features, a history of CF in a sibling or, more recently, by a positive newborn screening result , . 
Such a diagnosis is usually supported by evidence of CFTR dysfunction through identification of two CF disease-causing mutations, two abnormal sweat Cl− tests (≥60 mEq/L), and/or distinctive transepithelial nasal potential difference (NPD) measurements.
However, depending on the ethnic background of the populations tested, a fraction of patients escape these diagnostic criteria by presenting “non-classic” symptoms, i.e., milder disease and often inconclusive evidence of CFTR dysfunction from the available diagnostic tools. For such individuals, whose clinical phenotypes do not fully meet the CF diagnostic criteria, it is also difficult to exclude CF. These cases are usually described as “CFTR-opathies” or CFTR-related disorders (CFTR-RD). Moreover, the recently implemented extensive newborn screening programs identify increasing numbers of asymptomatic CF patients merely identified by elevated serum concentrations of immunoreactive trypsinogen (IRT), posing new challenges to the CF diagnosis paradigm, especially when associated with borderline sweat [Cl−] and/or inconclusive CFTR genotypes. To confirm or exclude a CF diagnosis in these increasing numbers of individuals, besides close clinical follow-up, further laboratory support is required; in particular, there is a need for robust methods relying on the functional assessment of CFTR.
Assessment of CFTR (dys)function in native colonic epithelia ex vivo, as we previously reported, constitutes a good approach to this end. However, since those data were reported, other groups have investigated the abnormalities in electrogenic Cl− secretion in the intestinal epithelium of CF patients using Ussing chamber measurements with different protocols. 
Unfortunately, the final composite parameter used by some groups results from a combination of experimental readouts that do not all rely on direct measurement of CFTR-mediated Cl− secretion, leading to conflicting results and precluding good correlations with clinical symptoms.", "score": 30.083878078801302, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Advances in CF care, screening and treatment mean that length and quality of life are improving. Currently, about half of cystic fibrosis patients will live longer than 40 years, and children born with the condition now will probably live longer than this.
However, treatments can be time-consuming and may have side-effects. Treatments can include physical therapy and (inhaled) medicines. Low digestive efficiency can also be managed with a high-calorie diet (individuals with low digestive efficiency may have to consume more than 1,000 extra calories a day), which can pose particular challenges, especially for young children.
In severe cases of CF, the lungs may stop working properly and medical treatments may no longer help, a state known as respiratory failure. Here, a lung transplant may be recommended. This serious operation carries significant risks (and is only possible if suitable donor lungs can be obtained) but can greatly improve quality and length of life for people with CF.", "score": 29.730410407599273, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "ICD-10-CM Code E84.0
Cystic fibrosis with pulmonary manifestations
Billable Code: billable codes are sufficient justification for admission to an acute care hospital when used as a principal diagnosis.
E84.0 is a billable ICD code used to specify a diagnosis of cystic fibrosis with pulmonary manifestations. 
A 'billable code' is detailed enough to be used to specify a medical diagnosis.
The ICD code E84 is used to code cystic fibrosis.
Cystic fibrosis (CF) is a genetic disorder that affects mostly the lungs but also the pancreas, liver, kidneys, and intestine. Long-term issues include difficulty breathing and coughing up mucus as a result of frequent lung infections. Other signs and symptoms include sinus infections, poor growth, fatty stool, clubbing of the fingers and toes, and infertility in males, among others. Different people may have different degrees of symptoms.
Specialty: Medical Genetics, Pulmonology
ICD-9 Code: 277.0
Clubbing in the fingers of a person with cystic fibrosis
Coding Notes for E84.0 (info for medical coders on how to properly use this ICD-10 code)
Additional Code Note:
Use Additional Code: this note means a second code must be used in conjunction with this code. Codes with this note are etiology codes and must be followed by a manifestation code or codes.
- Code to identify any infectious organism present, such as:
- Pseudomonas: see code B96.5
- DRG Group #177-179 - Respiratory infections and inflammations with MCC.
- DRG Group #177-179 - Respiratory infections and inflammations with CC.
- DRG Group #177-179 - Respiratory infections and inflammations without CC or MCC.
Related Concepts (SNOMED CT)
- Cystic fibrosis of the lung (disorder)
Coding Advice (SNOMED CT)
- Consider additional code to identify specific condition or disease
ICD-10-CM Alphabetical Index References for 'E84.0 - Cystic fibrosis with pulmonary manifestations'
The ICD-10-CM Alphabetical Index links the below-listed medical terms to the ICD code E84.0. Click on any term below to browse the alphabetical index.", "score": 29.479630026212526, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Cystic fibrosis management: A partnership with primary care providers
Diana Quintero, MD, Nicole Brueck, MSN FNP-BC, and
Diana R. 
Quintero, MD, is a pediatric pulmonologist and program director of Cystic Fibrosis at Children’s Hospital of Wisconsin. She also is an associate professor of Pediatric Pulmonary and Sleep Medicine at the Medical College of Wisconsin.
Nicole L. Brueck, MSN, FNP-BC, is a pediatric pulmonary advanced practice provider at Children's Hospital of Wisconsin and the Medical College of Wisconsin.
Theresa E. Kump is a clinical research coordinator at Children's Hospital of Wisconsin.
Cystic fibrosis (CF) is an autosomal recessive disease that affects the lungs and digestive system of about 30,000 children and adults in the U.S. (70,000 worldwide). A defective gene and its protein product cause the body to produce unusually thick, sticky mucus that:
- Obstructs the airway and leads to life-threatening bronchiectasis
- Affects the pancreas and stops enzymes from helping the absorption of food
- Impacts other organ systems, affecting quality of life
In the 1950s, few children with CF lived to attend elementary school. Today, advances in research and medical treatments have further enhanced and extended life for children and adults with this disease. Many people with CF can now expect to live into their 30s, 40s and beyond.
The Cystic Fibrosis Foundation is a nonprofit donor-supported organization dedicated to the development of new drugs to fight the disease, improve the quality of life for those with CF and ultimately find a cure. The foundation has supported a better understanding of the disease and improved patient care through the development of a national database, Port CF. More than 110 institutions accredited by the foundation (including Children’s Hospital of Wisconsin) gather important information from every patient at every visit. This data is entered into the database and outcomes are tracked and trended. 
The database provides the ability to compare one center to another and to indicate which areas of care need improvement.\nNewborn screening for CF in Wisconsin began in 1994. Although many newborns will have a positive newborn screen for the disease, fewer than 10 percent will actually have it. Research has proven that infants diagnosed with CF through newborn screening and treated before symptoms begin are healthier.", "score": 29.32311977639105, "rank": 32}, {"document_id": "doc-::chunk-8", "d_text": "Children and adolescents should be encouraged to exercise for 20−30 minutes three times per week and exercises should include high impact weight bearing exercises.\nGlucocorticosteroid use should be minimised and CF pulmonary exacerbations should be promptly treated to minimise the systemic inflammatory effect on bone. Pubertal delay should be recognised and treated.\nMalnutrition in cystic fibrosis\nThe malnutrition seen in CF is multifactorial and is determined by three main features: energy loss, increased energy expenditure and anorexia.\nPancreatic insufficiency in the majority of patients with CF results in pancreatic exocrine secretions containing fewer enzymes and bicarbonate; pancreatic secretions have a lower pH and are a smaller volume than in those with pancreatic sufficiency. The consequence, when untreated, is foul smelling frequent loose stools. Malabsorption of fat and nitrogen is severe in untreated patients; however, carbohydrate malabsorption is minimal. Malabsorption can be controlled with the use of PERT although, even when treated, many patients still have a degree of fat malabsorption. Estimates suggest that stool energy losses account for up to 11% of gross energy intake .\nPancreatic bicarbonate deficiency results in reduced buffering of gastric acid in the duodenum, resulting in decreased efficiency of pancreatic enzymes. 
Mucosal ion transport abnormalities affect water and electrolyte transport and there may be impaired mucosal uptake of nutrients. Altered motility may affect intestinal transit time and impact on absorption of nutrients.\nEnergy losses may be further exacerbated by previous gastrointestinal surgery for meconium ileus which may result in shortening of the bowel, strictures at the site of anastomoses, malrotation and adhesions. Energy may be lost due to vomiting following coughing and GOR. Untreated CFRD will cause energy loss through glycosuria. CFLD may increase malabsorption.\nIncreased energy expenditure\nMany investigators describe an increase in resting energy expenditure (REE) in patients with CF, with a large range of requirements. Impaired lung function significantly increases REE and can double REE above that of controls . Increased REE is closely associated with declining pulmonary function and subclinical infection [51–54].\nContinuous injury to the lungs leads to progressive fibrosis and airway obstruction, with increased work of breathing.", "score": 28.39209219987214, "rank": 33}, {"document_id": "doc-::chunk-3", "d_text": "Pancreatic enzymes –People with CF who have blockage of the pancreas (also called ''pancreatic insufficiency') need to take digestive enzymes in capsule form. These enzyme capsules need to be taken before each meal or snack. The enzymes will help your child digest food properly and allow him or her to gain weight and grow at a healthy rate.\nBabies with CF can sometimes have ‘failure to thrive’, a condition in which their weight and height is far below that expected for their age. Pancreatic enzymes, along with a carefully planned diet, will help treat failure to thrive and will help your baby to grow at a healthier rate.\n2. Diet and Vitamins:\n- Vitamin supplements: People with CF have trouble absorbing some vitamins, especially fat-soluble vitamins such as vitamin A, D, E and K. 
Specific supplements may be suggested for your child.\n- A higher-calorie diet: Many babies and children with CF need more food than typical in order to stay healthy. Some children with CF need up to twice the normal number of calories to grow appropriately. A dietician who has experience with CF can help you come up with a good nutrition plan for your child.\n- Extra fluid: Your child may need to drink more water and liquids than other children in order to help loosen the thick mucus and to prevent dehydration. Children with CF lose more salt than others, especially during exercise or in hot weather.\n3. Airway clearance therapy\nAirway clearance therapy is done to break up and move mucus that has settled in the lungs and bronchi so that it can more easily be coughed up. It is usually performed several times a day and takes up to 20 to 30 minutes for each session. There are a number of ways to perform airway clearance therapy. Your doctor will recommend a method that will be most effective for you and your child. Some common types of airway clearance therapy are:\n- Chest percussive therapy: Some people with CF have a parent or caregiver tap or clap on their chest and back to break up and move mucus. Some people use a handheld machine that causes vibrations on the chest and back.\n- ThAirapy vest: Some people use a special vest that vibrates to break up the mucus.\n4. Medications: Your doctor may recommend special medications to treat the lung symptoms of CF.", "score": 27.067208735816497, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Cystic fibrosis (CF) is an inherited disease characterized by an abnormality in the body's mucus glands. It is chronic, progressive, and is usually fatal. Due to improved treatments, most people with CF live into their late 30s, and many even into their 50s.\nChildren with CF have an abnormality in the function of a cell protein called the cystic fibrosis transmembrane regulator (CFTR). 
CFTR controls the flow of water and certain salts in and out of the body's cells. As the movement of salt and water in and out of cells is altered, mucus becomes thickened.\nIn the respiratory system, mucus is normally thin and can easily be cleared by the airways. With CF, mucus becomes thickened and sticky and results in blocked airways. Eventually, larger airways can become plugged and cysts may develop.\nLung infections are very common in children with CF, because bacteria that are normally cleared, remain in the thickened mucus. Many of these infections are chronic. Pseudomonas aeruginosa is the most common bacteria that causes lung infections.\nChildren with CF also have involvement of the upper respiratory tract. Some individuals have nasal polyps that need surgical removal. Nasal polyps are small protrusions of tissue from the lining of the nose that go into the nasal cavity. Children with CF also have a high rate of sinus infections.\nSymptoms that may be present due to the effects of CF on the respiratory system include the following:\nClick here to view the\nOnline Resources of Respiratory Disorders", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "A defect in the CFTR gene causes cystic fibrosis (CF). This gene makes a protein that controls the movement of salt and water in and out of your body's cells. In people who have CF, the gene makes a protein that doesn't work well. This causes thick, sticky mucus and very salty sweat.\nResearch suggests that the CFTR protein also affects the body in other ways. This may help explain other symptoms and complications of CF.\nMore than a thousand known defects can affect the CFTR gene. The type of defect you or your child has may affect the severity of CF. Other genes also may play a role in the severity of the disease.\nEvery person inherits two CFTR genes—one from each parent. 
Children who inherit a faulty CFTR gene from each parent will have CF.\nChildren who inherit one faulty CFTR gene and one normal CFTR gene are \"CF carriers.\" CF carriers usually have no symptoms of CF and live normal lives. However, they can pass the faulty CFTR gene to their children.\nThe image below shows how two parents who are both CF carriers can pass the faulty CFTR gene to their children.\nClinical trials are research studies that explore whether a medical strategy, treatment, or device is safe and effective for humans. To find clinical trials that are currently underway for Cystic Fibrosis, visit www.clinicaltrials.gov.\nVisit Children and Clinical Studies to hear experts, parents, and children talk about their experiences with clinical research.\nThe NHLBI updates Health Topics articles on a biennial cycle based on a thorough review of research findings and new literature. The articles also are updated as needed if important new research is published. The date on each Health Topics article reflects when the content was originally posted or last revised.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "Cystic fibrosis is an autosomal recessive trait on chromosome 7. This\ndisorder affects chloride transport resulting in abnormal mucus production.\nThis lifelong illness usually gets more severe with age and can affect both\nmales and females. Symptoms and severity differ from person to person. Cystic\nfibrosis is the most common fatal inherited disease among whites and the major\ncause of chronic lung disease in children. 50% of people are expected to live\nto be 30, but a majority die before age thirteen. 1:2000 whites have cystic\nfibrosis, 1:17000 blacks, 1:6000 live births, 1:2500 Americans, and 1:20 is a\nThe genes are inherited in pairs, with one gene coming from each parent\nto make the pair. Cystic fibrosis occurs when both genes have mutations. 
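The recessive inheritance pattern described here (a child is affected only when both copies of the gene carry mutations, while two carrier parents are themselves symptom-free) can be checked with a small Punnett-square enumeration. This is an illustrative sketch with hypothetical names, where "C" stands for a working copy of the gene and "c" for a CF-causing copy:

```python
from itertools import product

def offspring_odds(parent1: tuple, parent2: tuple) -> dict:
    """Enumerate genotype outcomes for one child of two parents.

    Each parent is a pair of gene copies: 'C' = working, 'c' = CF-causing.
    Returns the probability of each outcome category.
    """
    counts = {"affected": 0, "carrier": 0, "non-carrier": 0}
    outcomes = list(product(parent1, parent2))  # one copy from each parent
    for genotype in outcomes:
        n_mutant = genotype.count("c")
        if n_mutant == 2:
            counts["affected"] += 1      # two mutant copies -> has CF
        elif n_mutant == 1:
            counts["carrier"] += 1       # one mutant copy -> carrier
        else:
            counts["non-carrier"] += 1   # two working copies
    return {k: v / len(outcomes) for k, v in counts.items()}

print(offspring_odds(("C", "c"), ("C", "c")))
# {'affected': 0.25, 'carrier': 0.5, 'non-carrier': 0.25}
```

For two carrier parents the enumeration reproduces the classic 1-in-4 affected, 1-in-2 carrier, 1-in-4 non-carrier split; with one carrier and one non-carrier parent, no child is affected but half are carriers.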
A\nperson with cystic fibrosis receives one cystic fibrosis gene from each parent.\nThe parents of a child, with cystic fibrosis, each carry one nonworking copy of\nthe gene and one working copy of the gene. The parents are called cystic\nfibrosis carriers, and because they have one working gene they have no symptoms.\nCarrier parents have 1:4 chance to have a child who is a noncarrier of cystic\nfibrosis, a 1:2 chance to have a child who carries the gene, and a 1:4 chance\nwith each pregnancy to have an affected child. If you have a son or daughter\nwith cystic fibrosis, then you have a 1:1 chance of being a carrier. If you have\na brother or sister with CF, you have a 2:3 chance of being a carrier. If you\nhave a niece or nephew with CF, you have a 1:2 chance of being a carrier. If\nyou have an aunt or uncle with CF, you have a 1:3 chance of being a carrier and\na 1:4 chance if you have a 1st cousin with CF.\nCystic fibrosis affects the lungs in particular. The secretions are\nthick and sticky rather than thin and watery. This interferes with the removal\nof dust and germs. It can lead to lung infections and even chronic lung damage.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-25", "d_text": "It is useful to remember that increase in respiratory symptoms and signs in CF may be related to the upper respiratory tract which may be obstructed by the huge tonsils and adenoids. In such cases adenotonsillectomy may be followed by dramatic improvement even in young children not only in overnight oxygen saturations but also in weight increase and general wellbeing.\nM, Zenorini A, Perobelli S, Zanolla L, Mastella G, Braggion C. Prevalence of\nurinary incontinence in women with cystic fibrosis. BJU Internat 2001; 88: 44-48. [PubMed]\nOf 176 women with CF, 35% were occasionally incontinent of urine but 24% were regularly incontinent. 
As urine loss is likely to be an under-reported problem, particularly in a CF clinic devoted to mainly chest problems, the authors suggest that women with CF should be asked directly about urinary incontinence as part of their routine follow-up. Pelvic floor muscle exercises were said to help. Also there was a similar report from Manchester Adult CF clinic (Orr A et al. BMJ 2001; 322:1521[PubMed]) and a further one showing some response to pelvic floor muscle exercises (McVean RJ et al. J Cyst Fibros 2003; 2:171-176.[PubMed]).\nThese are really important reports which would improve the recognition of a distressing and relatively common symptom in women with CF which may go unreported and cause considerable distress for many years.\n2006 Prasad SA,\nBalfour-Lynn IM, Carr SB, Madge SL. A comparison of prevalence of urinary incontinence\nin girls with cystic fibrosis, asthma and healthy controls. Pediatr Pulmonol\n2006; 41:1065-1068. [PubMed]\nAnother study on urinary incontinence - this time on younger patients. In recent years the physiotherapists have taken an increasing interest in bladder dysfunction in CF. Girls with CF aged 11 to 17 years were studied and urinary incontinence was reported by 17/51 (33%) girls, compared with only 4/25 (16%) of those with asthma and 2/27 (7%) healthy controls. The problem was associated with increasing severity of lung disease.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-6", "d_text": "Recent uncontrolled trials suggest that once daily injections of intermediate or long acting insulin improve weight and lung function and are useful in CFRD and early insulin deficiency .\nExperience in the use of insulin pumps for the treatment of CFRD is in its infancy; however, initial studies suggest improvements in blood glucose control, body weight and lean body mass .\nCystic fibrosis associated liver disease\nApproximately 5%−10% of patients develop multilobular cirrhosis during the first decade of life. 
Most patients later develop signs of portal hypertension with complications such as variceal bleeding. Liver failure develops later usually in adult life, but is not inevitable. Cystic fibrosis associated liver disease (CFLD) is associated with more severe CF phenotypes .\nAnnual screening for liver disease is important to detect pre symptomatic signs; the bile acid ursodeoxycholic acid, which may halt progression of liver disease, should be commenced. Liver disease should be considered if the patient has two of the following: persistently abnormal liver function tests, abnormal physical examination or abnormal liver ultrasound. A liver biopsy should be performed if there is doubt over the diagnosis. Patients should have annual assessment of liver function.\nThe development of liver disease has a negative impact on nutritional status , often as a result of worsening fat absorption, and aggressive nutritional management is important to reverse this trend . Liver transplantation remains the only option for those who develop end stage liver disease and is considered to be effective, initially stabilising the decline in pulmonary function which is a feature of the development of CFLD. One year survival post transplant is estimated at 92.3% and 84.1% after 5 years . Studies suggest, however, that nutritional status does not improve post liver transplant and pre transplant body mass index (BMI) does not alter survival .\nReduced bone mineral density (BMD) was first described in CF in 1979, but the full extent of the problem was not apparent until the 1990s when detailed studies were performed. 
Patients may develop low BMD through either osteoporosis or vitamin D deficiency osteomalacia.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-3", "d_text": "Distal intestinal obstruction syndrome\nDistal intestinal obstruction syndrome (DIOS) is a common complication in CF and estimates suggest a prevalence of 5−12 episodes per 1000 patients per year in children . It is characterised by frequent abdominal pain, accompanied by complete or partial intestinal obstruction. Faecal material and mucus gathers in the distal ileum and there is often a palpable mass in the right lower abdomen.\nDIOS is more common in older patients, in those with PI (although it can be seen in pancreatic sufficient patients) and in those who have had previous gastrointestinal surgery. It usually responds to medical management including rehydration combined with stool softening laxatives or gut lavage. In extreme cases surgical intervention may be necessary.\nGastro-oesophageal reflux (GOR) has been reported in approximately 20% of infants with CF , although in a review of screened infants at one UK centre incidence was as high as 43% . The mechanisms are unclear; however, it is considered to be mainly due to inappropriate relaxation of the gastro-oesophageal sphincter. GOR is most prevalent in young children with CF and improves with age. GOR may result in poor weight gain, feeding disturbances, abdominal pain, and respiratory symptoms including wheeze and reflex bronchospasm. Treatment includes thickening feeds and the use of anti-reflux medication .\nAbdominal pain has been reported in approximately 30% of patients . It is particularly common in children with poorly controlled malabsorption or constipation, but may be due to other disorders including intussusception, appendicitis, Crohn’s disease and cow’s milk protein enteropathy. 
Abdominal pain may negatively impact on nutritional intake and nutritional status.\nOther gastrointestinal problems\nOther problems include rectal prolapse, acute pancreatitis, coeliac disease and Crohn’s disease.\nOther clinical problems\nCF related diabetes\nCF related diabetes (CFRD) is the most common comorbidity affecting approximately 19% of adolescents and up to 50% of adults . The prevalence is increasing with the increased survival of people with CF. It is a distinct type of diabetes with features of both type 1 and type 2 diabetes. There is progressive fibrous and fatty infiltration of the exocrine pancreas resulting in destruction of the islet cells, leading to loss of endocrine cells and reduced insulin production. In addition there may be insulin resistance.", "score": 26.403121326253427, "rank": 40}, {"document_id": "doc-::chunk-2", "d_text": "That’s the only take home message for today.\nIf they arrive feeling rubbish, with a productive cough, crackles, no appetite, weight loss, and looking terrible then get in touch with your CF centre as they’re likely to need an admission for IV antibiotics for their chest exacerbation. They may have tachypnoea, low sats, and a fall in lung function – try and get spirometry (or a peak flow reading) if you have access to this. If they are wheezy, or fulfil the diagnostic criteria, we might consider a diagnosis of allergic broncho-pulmonary aspergillosis (ABPA). This is an allergic reaction to inhaled aspergillus spores, and treatment is focused on the trigger (antifungals e.g. intraconazole) and the problem (misbehaving T-cells are treated with steroids).\nAbdo pain is very common in CF. Reflux/epigastric pain is common, but usually well controlled on a proton pump inhibitor. More troublesome is constipation and lower abdominal pain, with or without “distal intestinal obstruction syndrome” (DIOS). 
DIOS describes the clinical picture of a sticky lump of viscous faeces and mucous causing partial or complete blockage of the bowel, often around the terminal ileum. The child may have diarrhoea, or obstruction, with colicky abdominal pains, including right iliac fossa pain and possibly a palpable lump in the right iliac fossa – a difficult differential diagnosis that includes appendicitis, or adhesions from previous bowel surgery (e.g. neonatal meconium ileus). Treatment is hydration – oral unless they’re vomiting, in which case iv fluids – and softening up the lump (with movicol, oral gastrografin, or in severe cases with a macrogol such as Klean-prep). Speak to your local CF team, as severe cases may need transfer across.\nA normal sweat chloride is <30 mmol/L, and in CF it may be >90 mmol/L.\nThe skin is a big organ, so these children can lose lots of fluid and salt across the skin – particularly in hot weather. Extra fluids and salts are very important in CF.", "score": 26.385178212426254, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "Cystic Fibrosis and the Reproductive System\nHow does CF affect the reproductive system?\nChildren with CF have an abnormality in the function of a cell protein called the cystic fibrosis transmembrane regulator (CFTR). CFTR controls the flow of water and certain salts in and out of the body's cells. As the movement of salt and water in and out of cells is altered, mucus becomes thickened.\nIn the reproductive system, the thickened secretions can cause obstructions and affect the development and function of the sexual organs.\nMost males with CF have obstruction of the sperm canal known as congenital bilateral absence of the vas deferens (CBAVD). Women also have an increase in thick cervical mucus that may lead to a decrease in fertility. 
This condition has not been reported to affect sexual drive or performance.\nSymptoms that may be present due to the effects of CF on the reproductive system include:\nDelayed sexual development\nAbsence or stopping of menstruation\nIrregular menstrual periods\nInflammation of the cervix\nInfertility or sterility\nBoth men and women should consider the added demands of parenthood and how it might affect their own health. The decision is personal, and you should talk with your health care team if you are considering parenting or having a baby.", "score": 25.659958061070686, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "It is possible that the main title of the report Cystic Fibrosis is not the name you expected. Please check the synonyms listing to find the alternate name(s) and disorder subdivision(s) covered by this report.\nCystic fibrosis is a genetic disorder that often affects multiple organ systems of the body. Cystic fibrosis is characterized by abnormalities affecting certain glands (exocrine) of the body especially those that produce mucus. Saliva and sweat glands may also be affected. Exocrine glands secrete substances through ducts, either internally (e.g., glands in the lungs) or externally (e.g., sweat glands). In cystic fibrosis, these secretions become abnormally thick and can clog up vital areas of the body causing inflammation, obstruction and infection. The symptoms of cystic fibrosis can vary greatly in number and severity from one individual to another. Common symptoms include breathing (respiratory) abnormalities including a persistent cough, shortness of breath and lung infections; obstruction of the pancreas, which prevents digestive enzymes from reaching the intestines to help break down food and may result in poor growth and poor nutrition; and obstruction of the intestines. Cystic fibrosis is slowly progressive and often causes chronic lung damage, which eventually results in life-threatening complications. 
Because of improved treatments and new treatment options, the outlook and overall quality of life of individuals with cystic fibrosis has improved and nearly 50 percent of individuals with the disorder are adults. Cystic fibrosis is caused by mutations to the cystic fibrosis transmembrane conductance regulator (CFTR) gene and is inherited as an autosomal recessive trait.\nCystic Fibrosis Foundation\n6931 Arlington Rd.\nBethesda, MD 20814\nAmerican Lung Association\n1301 Pennsylvania Ave NW\nWashington, DC 20004\nCanadian Cystic Fibrosis Foundation\n2221 Yonge Street\nOntario, M1B 4G8\nCystic Fibrosis Trust\n11 London Rd\nKent, BR1 1BY\nNIH/National Institute of Diabetes, Digestive & Kidney Diseases\nOffice of Communications & Public Liaison\nBldg 31, Rm 9A06\n31 Center Drive, MSC 2560\nBethesda, MD 20892-2560\nCystic Fibrosis Research, Inc.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-3", "d_text": "Cystic fibrosis (CF) is a genetic (inherited) disease that causes sticky, thick mucus to build up in organs, including the lungs and the pancreas.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-1", "d_text": "(More than 10 million Americans carry the CF gene – most without knowing it.) Children with the disease are usually diagnosed before the age of 2, but about 10 percent of cases go undetected until adulthood.\nHow can I find out whether my child has CF?\nThe definitive test for CF is something called a sweat test. This test is quick and painless: A drug called pilocarpine stimulates a spot on the arm to sweat. The doctor places a piece of filter paper on the area to absorb the sweat and then tests its sodium and chloride content. Higher than normal levels strongly suggest cystic fibrosis. A family history of CF and other tests, such as a chest X-ray and blood or saliva genetic tests, may add to the evidence.\nHow is the disease treated?\nChildren with cystic fibrosis need ongoing medical care. 
This is best provided at a specialized CF center with a team of doctors, nurses, and others who have expertise with the disease. Symptoms vary greatly from child to child, even when they are siblings with the same genetic defect. Often symptoms come and go – they may be relatively mild or frighteningly severe.\nThe vast majority of young children with CF can be treated as outpatients, but they need to be seen frequently to make sure the disease is being treated properly. At each visit a sputum (saliva or mucus) sample is taken to help determine which germs are causing lung infections. Occasionally, if symptoms flare up, the child has to be admitted to the hospital to get intravenous antibiotics.\nIf your child has cystic fibrosis, he should receive routine childhood vaccines against such common illnesses as Hib and pertussis and an annual flu shot.\nMost children with CF are given prescription medications, including antibiotics to treat infection, medicines that help break up mucus in the lungs, and drugs that reduce the inflammation that causes lung destruction. Each child responds best to a different combination of physical and drug therapies, and it's the job of the doctors and the parents to find the right mix. Parents play an important role by watching how their children respond to different drugs.\nResearchers are working to develop other treatment options as well, including gene therapy (to replace the gene that causes the disease), drugs that help move salt in and out of the cells properly, and new drugs to prevent and treat infections.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "The World Health Organisation estimates that in the European Union one in 2,000-3,000 newborns is found to have CF, while the incidence of CF is reported to be one in every 3,500 births in the United States of America.\nCF mutations can be categorised as falling into one of five categories based on how they affect the CFTR protein. 
Class I to III mutations generally result in a more severe CF phenotype, while those belonging to classes IV-V enable some functional CFTR protein to be produced and thus result in a milder form of the disease. The characteristics of each class can be summarised as follows:\n- Class I: Shortened/absent protein\n- Class II: Protein does not reach the cell membrane\n- Class III: Channel incorrectly regulated\n- Class IV: Lowered chloride conductance\n- Class V: Incorrect gene splicing\nIt has been estimated that around 85% of CF patients have Class II mutations; it is within this class that the most common mutation, F508del, occurs. In this mutation, three DNA nucleotides that comprise the codon for phenylalanine are deleted, resulting in a lack of this amino acid at the 508th position on the protein and the consequent mis-folding of the protein. The percentage of patients who have this type of mutation is thought to be around 70%.\nIt is not yet clear how some of the mutations in CF result in particular health effects and much remains to be discovered. Interestingly, there is some evidence that the severity of CF can be influenced by other genetic and environmental factors, such as mutations in other genes besides the CFTR gene. However, again, little is known about this.\nDespite there being some gaps in knowledge, CF can be regarded as one of the more well understood genetic disorders and research has paved the way to vast improvements in life expectancies over the past 30 years; while it used to be common for a person with CF not to live beyond childhood, many now live to nearly 40 years of age, as advances in treatments and technologies have made the potential for a cure ever more likely.
However, the scope for improvement is vast and there is room in the market – and indeed, a high need – for many more disease-modifying drugs.\nThe main method for treating the lung problems for which CF is particularly well known is via chest physical therapy, exercise and medicines that clear away mucus.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-1", "d_text": "Staphylococcus aureus can be cultured early in infancy and continual anti-staphylococcal antibiotic prophylaxis is recommended for all children under the age of three years. With increasing age, Pseudomonas aeruginosa becomes the most common pathogen, and can often be cleared with early, aggressive antibiotic therapy, but may be isolated sporadically following eradication. Preventing pseudomonas infection from becoming chronic can have a positive effect on long-term survival. With increasing age, other gram negative organisms such as Burkholderia cepacia can chronically infect the lung – usually later in disease progression – and is associated with a high mortality rate.\nIn most cases, CF will be diagnosed antenatally or by newborn screening. Following positive identification, diagnosis is confirmed by genetic mutation analysis and a positive sweat test. In a study completed in East Anglia over a 30-year period, a total of 730,730 babies were screened, resulting in 296 screen positive cases of CF and 29 false negatives – including 10 false negatives with meconium ileus. In the study, 10 missed CF cases were pancreatic insufficient, but all were diagnosed before their first birthday, highlighting that some children will be missed by newborn screening. Screening for CF does however detect 95% of unsuspected CF cases presenting before three years of age.\n· Children missed by screening may be diagnosed by symptoms (faltering growth, persistent respiratory or gastrointestinal symptoms) and require a sweat test and genetic analysis. 
Education of the child and parents should be provided by the CF multidisciplinary team within seven days.\nWith all colds, accompanied by a cough or other lower respiratory symptoms, an oral antibiotic that will cover infection from Staphylococcus aureus and Haemophilus influenza should be started, for example co-amoxiclav. Children are not often able to expectorate sputum, therefore a throat swab and cough swab should be sent to the local microbiology department and the results followed up, and copied to the child’s CF team. The method of obtaining a cough swab is similar to that of a throat swab, but no direct contact is made with the oropharynx – the child coughs onto the swab placed at the back of the oropharyngeal cavity. It is often possible to email the CF specialist nurse to inform them that a result is pending, and antibiotic therapy should be altered as necessary.", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "Cystic Fibrosis affects myself and 70,000 other people across the world who fight it every day. It's shortened the lives of thousands, including my amazing sister, Angela, who bravely lived with cystic fibrosis for 16 years. Please take a moment to read the info below and click on each these links so you can help each of these wonderful organizations and memorials.\nWhat Is Cystic Fibrosis? ~ Taken from the Cystic Fibrosis Foundation website\nCystic fibrosis is an inherited chronic disease that affects the lungs and digestive system of about 30,000 children and adults in the United States (70,000 worldwide). A defective gene and its protein product cause the body to produce unusually thick, sticky mucus that:\n- clogs the lungs and leads to life-threatening lung infections; and\n- obstructs the pancreas and stops natural enzymes from helping the body break down and absorb food.\nIn the 1950s, few children with cystic fibrosis lived to attend elementary school. 
Today, advances in research and medical treatments have further enhanced and extended life for children and adults with CF. Many people with the disease can now expect to live into their 30s, 40s and beyond.\nPeople with CF can have a variety of symptoms, including:\n- very salty-tasting skin;\n- persistent coughing, at times with phlegm;\n- frequent lung infections;\n- wheezing or shortness of breath;\n- poor growth/weight gain in spite of a good appetite; and\n- frequent greasy, bulky stools or difficulty in bowel movements.\n- About 1,000 new cases of cystic fibrosis are diagnosed each year.\n- More than 70% of patients are diagnosed by age two.\n- More than 45% of the CF patient population is age 18 or older.\n- The predicted median age of survival for a person with CF is in the mid-30s.\nMy Blog Universe\nGo to the tab on right side of my site to read all of the blogs that I read, many of which are CF-related. They cover all the bases: CFers from teens to veterans, CF Parents, transplant recipients, and more! You'll know which ones are CF focused because they are very upfront about EVERYTHING!", "score": 25.54295898221551, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "Updated on December 10, 2018. Medical content reviewed by Dr. Joseph Rosado, MD, M.B.A, Chief Medical Officer\nCystic fibrosis (CF) is one of the many chronic conditions that can be treated with medical marijuana. This inherited disease can lead to pulmonary and sinus infections, along with gastrointestinal issues. It can also cause lung damage, which makes it difficult to breathe and creates pain in patients’ chests and lungs. Other parts of the body cystic fibrosis affects include the pancreas, liver, digestive tract and reproductive tract.\nAbout 30,000 people in the country have CF, and around 70,000 people worldwide have been diagnosed with the life-threatening condition. Research, education and improved care is changing the way CF affects people. 
Most children who were diagnosed with CF 60 years ago didn’t live to see adulthood, but today’s life expectancy for patients with CF is close to 40 years old.\nThough CF affects the patient’s entire body, the lungs are the organs that are heavily affected. Cystic fibrosis transmembrane conductance regulator, or CFTR, is a protein that facilitates and manages the movement of sodium, chloride and water in the body. When you’re diagnosed with CF, that means you either have too much or too little CFTR.\nThe malfunction of CFTR creates a thick mucus that gets in the patient’s airways, making it painful and hard to breathe. As a result of the mucus not being cleared, bacteria can grow and infect the airways — this essentially causes a vicious cycle of airway blockage, mucus and infections. After many years of this cycle, the airways and lung tissue are eventually damaged.\nAs of now, there’s no cure for CF. And, since the symptoms can vary so drastically from patient to patient, doctors typically treat the disease on a case-by-case basis with a goal of relieving the symptoms rather than curing them. Infections, blockages and mucus build-up are some of the main symptoms they aim to relieve.\nCurrently, the following drugs are being employed as treatment methods for cystic fibrosis:\nOther surgery, therapy and treatment options include:\nWhile it can’t cure the life-threatening condition, medical cannabis can help to treat the lung pain you’re experiencing as a symptom of CF. The benefits a cystic fibrosis patient can get from medical cannabis could be enough to restore some of their quality of life.", "score": 25.291691024157856, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "Cystic fibrosis causes thick and sticky mucus throughout the body, leading to serious, chronic and systemic health problems. Along with breathing and lung problems, bowel symptoms are a leading health complication related to cystic fibrosis.
This sticky mucus blocks tubes and ducts leading to the pancreas, which prevents essential enzymes from reaching the intestines. These enzymes are essential for the absorption of protein and fats in the body. Therefore, someone with untreated cystic fibrosis has an abundance of fats and proteins in their bowel. This is the cause of gastrointestinal or bowel symptoms in people with cystic fibrosis.\nThe most common bowel symptom related to cystic fibrosis is diarrhea. Stools also commonly appear greasy, bulky and foul-smelling. Stools may also appear pale and clay-colored [source: PubMed Health]. Another common symptom is blockage of the intestines, which leads to excessive gas, constipation and stomach pain. Intestinal blockage is especially common in newborns with cystic fibrosis. A distended, or swollen, abdomen is the result of severe constipation. In newborn infants, not passing their first bowel movement within 24 to 48 hours of birth is characteristic of cystic fibrosis. Children with untreated cystic fibrosis also show poor weight gain and growth. As a result, infants may exhibit a condition called failure to thrive. Poor weight gain and growth are the result of nutritional deficiencies commonly seen in people with cystic fibrosis. Children with cystic fibrosis simply cannot get enough nutrients from the food they eat.\nBowel-related complications are also fairly common in adults with cystic fibrosis. Left untreated, cystic fibrosis can lead to pancreatitis. Pancreatitis is an inflamed pancreas and can be painful. Rectal prolapse, whereby rectal tissue detaches and moves up the gastrointestinal system, is another complication.
In severe cases, people with cystic fibrosis can also develop liver disease, diabetes, and gallstones [source: NHLBI].", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "The cystic fibrosis (CF) or mucoviscidosis is a genetic, chronic, developmental disease and it hit one newborn on\n|Fig.1- Organs and systems involved in CF|\n2500-2700 live births. In affected individuals, a biological fluids (mucus, sweat, spit, semen, digestive fluid) secreted by exocrine glands are much more dense and viscous than normal. The most frequent (Fig.1) clinical conditions interest a respiratory tract (chronic bronchitis), pancreas (digestive problems), rarely the intestine (stercorale perforation), of the liver (cirrhosis) and reproductive system (infertility, especially in male individuals). Abnormal secretions cause a progressive damage to involved organs.\nThe disease occurs especially within the first few years of life, sometimes later, and it can express with greater or lesser seriousness in different individuals.\nThere is a only symptomatic treatment and it consists in a bronchial drainage, in the respiratory infection antibiotics giving, exams to evaluate the pancreas function, in the vitamins supplements giving for digestive and nutritional problems. In the last few years the progress and prognosis of cystic fibrosis are significantly improved, especially if the disease is diagnosed before its time.\nThere are also atypical disease forms, characterized by a pancreatic sufficiency and simple respiratory problems;sometimes the disease can affect exclusively one organ as in the case of male infertility or occur in both sexes, with cyclic pancreatitis.\nCystic fibrosis is a monogenic disease, transmitted as an autosomal recessive disorder caused by mutations, DNA alterations occurring in the CFTR gene (Cystic Fibrosis Transmembrane Regulator). 
The frequency of the healthy carriers (asymptomatic carriers) in the general population is about 1 in 25 individuals. Two healthy carrier parents will have a 25% probability of having a child with cystic fibrosis.\nTheir children, in turn, will have a 50% probability of being healthy carriers, like their parents.\nThe CFTR gene produces a protein controlling the transport of salts through the cell membrane; a mutation causes an abnormal protein that modifies the salt composition of glandular secretions, making them dense and viscous.\nThe only way to identify healthy carriers is to perform a DNA test for CFTR gene mutations. The analysis is complicated because there are over 1900 mutations.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-13", "d_text": "Moreover, CF patients with residual values of CFTR-mediated Cl− secretion had intermediate values in those parameters, namely: sweat-Cl− values (109.80±8.34 mmol/l); FEE concentrations (444.57±78.05 µg/g), age at diagnosis (19.6±3.4 yrs), SK scores (mean1 = 90; mean2 = 77±14; mean3 = 63±12; mean4 = 55±7); and FEV1 (mean1 = n.a.; mean2 = 82±26; mean3 = 61±9; mean4 = 60±3).
Moreover, patients with classic CF and ≤5% CFTR-mediated Cl− secretion consistently presented faster decline rates of pulmonary function (Fig.3-D, dashed line) than Non-Classic CF patients retaining residual (Fig.3-D, dotted line) or normal CFTR functions (Fig.3-D, solid line, p = 0.001 by Kruskal-Wallis test). Regarding BMI (Fig.2-C, Table 1), only a modest trend was observed for lower BMI values in CF patients with absence of Cl− secretion, not related to age differences (Fig.S4-A, S3-C, Table 1).\nScatter-plot summarizing the distribution of Isc-CCH(IBMX/Fsk) against (A) age at diagnosis (in years); (B) SK clinical scores distributed by groups of ages; and (C) FEV1 distributed by groups of ages. Other details as in Fig.2. (D) Mixed regression model for decline rates in FEV1 vs. Age (n = 232 measurements) for Classic CF (y = 93.15–0.73x), Non-Classic CF (y = 92.23–0.52x), and Non-CF (y = 96.86–0.34x) groups, as described.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "Introduction: Cystic fibrosis (CF) is a common lethal genetic disorder. The aim of this study was to determine the common chest radiograph (CXR) patterns in adult CF, and correlate disease distribution on CXRs with genotype, age, and gender.\nMethods: One hundred nine CF patients treated at Baylor Adult Cystic Fibrosis Center were identified. The intake CXR was reviewed and characterized as diffuse bilateral (DB), unilateral, upper lobe (UL), and lower lobe (LL) disease, or relatively normal. Lack of intake CXR, and/or genotype excluded 41 patients from analysis.\nResults: Of 68 patients, 38 were homozygous for ΔF508 and 30 were heterozygous. Mean age of the population was 30 ± 8 years (± SD) [range, 18 to 48 years].
The most common CXR pattern was DB; 62% had DB, 28% had UL, and 7% had LL predominance. This is in contrast to the UL-predominant CXR pattern commonly described in the pediatric population. In 18 DB patients, archived pediatric films were available, and the average patient age was 15.7 years. DB pattern was present in 16 of 18 CXRs that antedated adult intake CXRs by an average of 12.7 years. Homozygous ΔF508 genotype was identified in 56% of patients and did not distinguish radiologic phenotypes. There was no association between radiograph pattern and identified infecting/colonizing organisms and percentage of predicted FEV1.\nConclusions: CF has commonly been reported as an UL disease. However, in this study of adult patients, the common pattern observed was DB. A small subgroup analysis suggests that DB disease was not a pattern of disease evolution but may be present from disease onset.", "score": 24.345461243037445, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "Learn about cystic fibrosis, a genetic disorder that affects the lungs, pancreas, and other organs, and how to treat and live with this chronic disease.\nCF is a rare genetic disease found in about 30,000 people in the U.S. If you have CF or are considering testing for it, knowing about the role of genetics in CF can help you make informed decisions about your health care.\nIf you or your child has just been diagnosed with cystic fibrosis, or your doctor has recommended testing for CF, you may have many questions.\nDiagnosing CF is a multistep process. A complete diagnostic evaluation should include a newborn screening, a sweat chloride test, a genetic or carrier test, and a clinical evaluation at a CF Foundation-accredited care center.\nRaising a child with cystic fibrosis can bring up many questions because CF affects many aspects of your child’s life. 
Here you’ll find resources to help you manage your child’s daily needs and find the best possible CF care.\nLiving with cystic fibrosis comes with many challenges, including medical, social, and financial. By learning more about how you can manage your disease every day, you can ultimately help find a balance between your busy lifestyle and your CF care.\nPeople with CF are living longer, healthier lives than ever before. As an adult with CF, you may reach key milestones you might not have considered. Planning for these life events requires careful thought as you make decisions that may impact your life.\nPeople with cystic fibrosis are living longer and more fulfilling lives, thanks in part to specialized CF care and a range of treatment options.\nCystic Fibrosis Foundation-accredited care centers provide expert care and specialized disease management to people living with cystic fibrosis.\nWe provide funding for and accredit more than 120 care centers and 53 affiliate programs nationwide. The high quality of specialized care available throughout the care center network has led to the improved length and quality of life for people with CF.\nThe Cystic Fibrosis Foundation provides standard care guidelines based on the latest research, medical evidence, and consultation with experts on best practices.\nAs a clinician, you’re critical in helping people with CF maintain their quality of life. We’re committed to helping you partner with patients and their families by providing resources you can use to improve and continue to provide high-quality care.", "score": 23.913775821255335, "rank": 55}, {"document_id": "doc-::chunk-1", "d_text": "Without digestive enzymes, food in the small intestine cannot be broken down properly and nutrients cannot be absorbed. This often leads to poor growth and poor weight gain. It can also cause sluggishness and anemia. 
Because fat is not absorbed well, it ends up in the stools and causes them to be bulky, lighter in color and have a stronger odor.\nWhat causes CF?\nCF is an inherited condition that occurs when a particular cell protein is either missing or not working well. This protein is called “cystic fibrosis transmembrane conductance regulator” (CFTR). CFTR is normally made by the body and is not something we get by eating. One of CFTR’s jobs is to let chloride (a molecule found in salt) in and out of the cells of the body. Researchers are still trying to find out more about why the lack of CFTR causes the health problems seen in people with CF.\nCF is not contagious. You cannot get CF from living with, touching, or spending time with a person with CF.\nWhat are the symptoms of CF?\nCF is variable and causes minimal effects in some people and more serious health problems in others. Symptoms usually start in early childhood. In fact, most children with CF show effects before one year of age. There are some people who do not find out they have CF until adulthood.\nThe first things parents often notice when a child has CF are:\n- Salty sweat; many parents notice a salty taste when kissing their child\n- Poor weight gain and growth, even when a baby or child eats a lot. This is sometimes called ‘failure to thrive (FTT)’\n- Constant coughing or wheezing\n- Thick mucus and phlegm\n- Many lung and sinus infections (pneumonias and bronchitis)\n- Greasy, smelly stools that are bulky and pale colored\n- Intestinal problems (diarrhea or constipation, pain, gas)\n- Polyps in the nose\nAbout 15-20% of newborns with CF have a blockage of their intestines called meconium ileus. This is caused by thick stool that gets stuck in the intestines.\nAbout 15% of children with CF have lung effects but do not have problems with digestion. 
About 85% of children have problems with both lungs and digestion.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-7", "d_text": "Most of them (68%) showed mono-symptomatic features, including: respiratory symptoms in 29% (nasal polyps, chronic cough/bronchitis, pneumonia, bronchiectasis or pansinusopathy); 35% had abnormal gastrointestinal signs (nutrients malabsorption; failure to thrive; hepatobiliary disease; chronic diarrhoea; recurrent pancreatitis; diabetes and glucose intolerance; others presented osteopenia/osteoporosis, liver disease or male infertility. Fourteen percent of these “CF suspicion” individuals presented relatively mild lung disease and nutrition abnormalities, one presenting GI and another azoospermia. A group of 9 patients with severe phenotype (both respiratory and gastrointestinal) were initially included in this CF suspicion group (18%) due to their recent identification and while waiting for confirmation of a CF diagnosis (Table S1).\nAssessment of CFTR-mediated Cl− Secretion in Rectal Biopsies\nCFTR-mediated Cl− secretion was assessed in rectal biopsies from the above two sub-groups of CF patients (Classic and Non-Classic CF) as CF reference. As non-CF control group, we also analysed CFTR function in age-matched individuals undergoing routine colonoscopy for non-CF related reasons. Finally, measurements were carried out in the CF suspicion individuals to establish/exclude a CF diagnosis, following comparison with values from the CF patients and control groups.\nAs shown previously , , , , application of carbachol (CCH) under basal conditions elicited lumen-positive responses in CF and lumen-negative in non-CF tissues (Fig.1A, B, C). Nevertheless, due to variable levels of endogenous prostaglandins, lumen-positive responses can also be observed in non-CF control tissues , . 
Thus, when CCH was applied for a second time, now under indomethacin to completely inhibit endogenous cAMP (and thus CFTR-mediated Cl− secretion), all tissues presented lumen-positive responses corresponding to potassium (K+) exiting the cell (Fig.1A, B, C).", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-2", "d_text": "Medical advancement has brought about helpful treatments that can ease the symptoms of Cystic Fibrosis. Consult your doctor to prevent the disease from causing further complications.\nIt is better to diagnose it at its early stages. Note the symptoms and follow regular health check-ups.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-18", "d_text": "This leads us to conclude on the importance of this approach for patients in the “grey” zone (Fig.2, 3) for the establishment/exclusion of a final diagnosis of CF or CFTR-RD.\nInterestingly, our data are also highly informative for correlating CFTR function with CFTR genotypes (Fig.5). For instance, we were able to detect very low function (~5% CFTR function vs non-CF controls) in a patient bearing S549R, recently described as Class III, whereas significant CFTR-mediated Cl− secretion was found in two patients bearing 4428insGA and D1152H (84 and 64%, respectively). Thus, we classify the former here as CF-disease causing and the two latter as CF-RD mutations.\nIndicated values are speculative. *D1152H may belong either to CFTR-RD or CFTR-disease causing mutations. CBAVD, Congenital Bilateral Absence of the Vas Deferens; ABPA, Allergic Bronchopulmonary Aspergillosis; PS, Pancreatic Sufficiency; PI, Pancreatic Insufficiency; MI, Meconium Ileus; DIOS, Distal Intestinal Obstruction Syndrome.\nIn the present study, we have also used a statistical discriminant analysis to identify which clinical and functional parameter(s) better reflect the differences among Classic CF, Non-Classic CF and Non-CF groups for prognosis. 
Results show that Isc-CCH(IBMX/Fsk) measurements are more discriminative than Isc-IBMX/Fsk, evidencing the highest discriminant power (90.4%) among the three groups studied. The second and third parameters with higher discriminant power are FEE and sweat-Cl− concentration, respectively. Using such analysis (Fig.4), one can for example predict that individuals in the “Non-Classic CF” group with values lying closer to the “Non-CF” cluster, have better prognosis than those with values closer to the “Classic CF” cluster. In fact, the only four cases with “Non-Classic CF” which were misclassified as “Classic CF” by this analysis, correspond to individuals with moderate PI and/or low levels of CFTR function. It may, thus be expected that these four patients develop severe CF earlier than most Non-Classic ones.", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-19", "d_text": "Interestingly, and corroborating such prediction among these four, there is a 19-year old patient with the F508del/G85E genotype, moderate PI (FEE = 103.89 µg/g) and ~12% of CFTR function (Fig.S2-C, Table S1) who recently (last 1½ yr) started to progress from a relatively mild to moderate-severe lung disease, with five pulmonary exacerbations plus surgery to remove nasal polyps associated with a strong sinusitis.\nRecently, we demonstrated that the current approach may be used in pre-clinical assessment of therapeutic compounds efficacy directly on native tissues and similarly it may be used to identify CF patients (and CFTR mutations) who will respond to innovative therapeutic strategies, namely those aimed at increasing the residual CFTR activity and already approved for clinical use for other mutations towards a predictive personalized-medicine approach , –. For example, the patient with S549R could be tested ex vivo for the correction with the FDA-approved potentiator Ivacaftor, as suggested by the in vitro data . 
Moreover, based on our data showing good correlations of CFTR-mediated Cl− secretion with absolute FEV1 values and FEV1 decline rate (Fig. 3), as well as taking into account our discriminant analyses (Fig.4) demonstrating that CFTR-mediated Cl− has the highest discriminant power (90.4%) to distinguish among Classic CF, Non-Classic CF and Non-CF groups, we propose that colonic CFTR-mediated Cl− secretion, perhaps together with sweat-Cl−, may be the best biomarker in clinical trials aimed at modulating CFTR , .\nWhat is the Functional CFTR Threshold to Avoid CF?\nFinally, in an attempt to further answer the old question “how much functional CFTR would be enough to avoid CF?” we have put together the current and previous findings, including those concerning mRNA levels, to establish the threshold for CF , – (Fig.5). Indeed, we previously showed that ~5% of normal CFTR transcripts (relative to non-CF individuals) is sufficient to attenuate CF severity –. From our current and previous data, we propose that CFTR values above ~10% of normal CFTR function are required for a better CF prognosis since all “Non-Classical CF” patients had values for CFTR-mediated Cl− secretion higher than 10%.", "score": 23.030255035772623, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "What is Cystic Fibrosis?\nCystic Fibrosis, or CF, is a genetic disease that affects the exocrine glands. These glands control the production of sweat, digestive fluids, and mucus. 
Cystic Fibrosis causes the body to produce thick mucus that blocks airways, prevents normal digestive functions, and leads to bacterial infections.\nCystic Fibrosis Facts:\n- Cystic Fibrosis is a recessive gene mutation, meaning both parents must carry the defective gene for their children to have CF.\n- Advances in newborn screening help diagnose cystic fibrosis at birth.\n- CF patients produce heavy mucus that builds up and damages many organs in the body.\n- In the past decade, research and medical advancements have led to an increase in life expectancy for CF patients.\n- CF is most commonly found in the Caucasian population of the United States.\n- A sweat test is the easiest way to test for CF.\n- Common symptoms and complications among CF patients include:\n- Cystic Fibrosis-Related Diabetes (CFRD)\n- Chronic sinus and lung infection\n- Gastroesophageal Reflux Disease (GERD)\n- Constant coughing; asthma, wheezing, and shortness of breath\n- Bone Disease (fracture, osteopenia or osteoporosis)", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "Cystic fibrosis (CF) was first recognized as a clinical entity in 1938. Its genetic nature and autosomal recessive inheritance pattern were described in 1946. In 1948, patients with CF were observed to lose excess salt in their sweat which led to development of the chloride sweat test (a diagnostic test still in use). Documentation of clinical manifestations (pancreatic insufficiency and bacterial endobronchial infections) over the next 3 decades resulted in earlier diagnosis. In the 1980s, problems in epithelial chloride transport were linked to CF. In the late 1980s, elevations in pancreatic immunoreactive trypsinogen (IRT) in newborn blood were associated with CF. Finally, in 1989, DNA mutations associated with CF were identified on chromosome 7. The gene product was called the cystic fibrosis transmembrane conductance regulator (CFTR). 
During the 1990s, major insights were gained into the function of CFTR and the pathophysiology of CF. Dramatic improvements in early diagnosis and treatment have followed close behind.\nInitial screening of newborn bloodspots measures IRT. This pancreatic exocrine product is significantly elevated in over 90% of affected newborns. An elevated IRT should prompt additional genetic evaluation or sweat testing to confirm the diagnosis. If the patient being screened had meconium ileus or other bowel obstruction, IRT screening is not reliable and additional screening or diagnostic tests should be considered as indicated.\nEarly diagnosis by newborn screening has allowed earlier combined anti-inflammatory and antibiotic therapies to combat upper respiratory infections and nutritional supplementation to avoid nutritional deficits. Dramatic progress has been made in improving quality of life for these newborns.\nBecause the diagnosis and therapy of cystic fibrosis is complex, the pediatrician is advised to manage the patient in close collaboration with a consulting pediatric pulmonologist. It is recommended that parents travel with a letter of treatment guidelines from the patient’s physician.\nThis disorder most often follows an autosomal recessive inheritance pattern. With recessive disorders affected patients usually have two copies of a disease gene (or mutation) in order to show symptoms. People with only one copy of the disease gene (called carriers) generally do not show signs or symptoms of the condition but can pass the disease gene to their children. When both parents are carriers of the disease gene for a particular disorder, there is a 25% chance with each pregnancy that they will have a child affected with the disorder.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-0", "d_text": "1. Research Cystic Fibrosis (CF) and answer the following questions in your own words:\n· How common is CF?\n· Is CF common among all racial groups? 
Is there a predisposition for some ethnic groups to have CF over others?\n· How likely is it for one CF carrier to meet/marry another CF carrier?\n· What are the signature symptoms of CF?\n· How does CF affect an individual’s quality of life?", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-14", "d_text": "Significant differences (Table 2) were found between the groups under study for the distribution of lung pathogens (p = 1.75×10−6, 88% of CF patients with pathogens), with predominance of Pa, and MI (p = 0.0015, 24% of CF patients with MI).\nAltogether, these data indicate that our approach to measure the level of CFTR (dys)function in rectal biopsies provides data evidencing good correlation with the CF severity.\nEvaluation of the Best Tool for Discriminating CF Patients from Non-CF Individuals\nFollowing the above findings, we attempted to establish a CF diagnosis tool which could also serve for disease prognosis. We thus used a stepwise discriminant analysis to evaluate which one(s) among the clinical and laboratory measurements available (sweat-Cl−; FEE; BMI; SK score; FEV1; Isc-IBMX/Fsk; and Isc-CCH(IBMX/Fsk)), constitutes the best discriminator factor between patients with Classic and Non-Classic CF and also between these groups and non-CF individuals (see Methods). As shown in Fig.4, discriminant function 1 (x-axis) corresponding to CFTR-mediated Cl− secretion measurements in rectal biopsies under CCH (Isc-CCH(IBMX/Fsk)) is the best option to explain the differences between the three groups in 90.4% of cases (Tables S4, S5). Additionally, FEE concentration and sweat-Cl− correspond to discriminant function 2 (y-axis) enabling separation of an additional ~6% of the remainder individuals (Tables S4, S5). Notwithstanding, four Non-Classic CF patients were still misclassified as Classic CF (Fig.4, black diamonds with dotted circles). 
Furthermore, to translate these discriminant analyses into a diagnosis algorithm, we have obtained the respective Fisher’s linear classification functions as:\nAnalysis performed for individuals having complete information about Isc-CCH(IBMX/Fsk), FEE concentration in stools and sweat-Cl− (n = 75): Classic CF (filled triangles, n = 47); Non-Classic CF (filled diamonds, n = 11); and non-CF (open circles, n = 17). Misclassified cases are marked with dotted circles. Grey squares represent the group centroids and grey lines the barriers between each group. * according to consensus criteria.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-22", "d_text": "Vertical dashed grey line represents addition of one STD of the mean value calculated for Isc-IBMX/Fsk in the reference sub-group of Non-Classic CF patients (ΔIsc = −22.60 µA/cm2). Classic CF (filled triangles, n = 55); Non-Classic CF (filled diamonds, n = 12); CFTR-RD (star, n = 2) and Non-CF (open circles, n = 26) individuals.\nCorrelations between CF clinical features and CCH-induced short circuit currents following IBMX/Fsk application (Isc-CCH (IBMX/Fsk)). Scatter-plot summarizing the distribution of Isc-CCH (IBMX/Fsk) against (A) Body Mass Index; (B) Body Mass Index distributed by age group; (C) Shwachman–Kulczycki clinical scores; and (D) forced expiratory volume in 1 second (FEV1) expressed as a percentage of predicted normal values for sex, age, and height (% predicted). Vertical dashed black line represents subtraction of one standard deviation (STD) of the mean value calculated for CCH-induced ΔIsc following IBMX/CCH application (ΔIsc = −78.77 µA/cm2) in non-CF controls. Vertical dashed grey line represents addition of one STD of the mean value calculated for CCH-stimulated ΔIsc following IBMX/CCH application (ΔIsc = −39.55 µA/cm2) in the reference sub-group of Non-Classic CF patients. 
Classic CF (filled triangles, n = 55); Non-Classic CF (filled diamonds, n = 12); CFTR-RD (star, n = 2) and Non-CF (open circles, n = 26) individuals.\nOverview of data for all CF patients in the reference group (Table S1a) and in the CF-suspicious group (Table S1b): genotypes, clinical phenotypes and Ussing chamber measurements, leading to confirmation/exclusion of a CF Diagnosis.\nCFTR mutations found in individuals under study. Gene and protein localization, mutation classification and frequency from the present study are designated. Traditional and HGVS standard nomenclature for CFTR mutations are also indicated.", "score": 21.695954918930884, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "Cystic Fibrosis\nWhat Is Cystic Fibrosis?\nCystic fibrosis (CF) is an inherited disease of your mucus and sweat glands. It affects mostly your lungs, pancreas, liver, intestines, sinuses, and sex organs.\nA young cystic fibrosis patient undergoes a sweat test, which is an important diagnostic and therapy-monitoring tool. Photo courtesy of Singeli Agnew for the Cystic Fibrosis Foundation and the National Center for Research Resources.\nNormally, mucus is watery. It keeps the linings of certain organs moist and prevents them from drying out or getting infected. But in CF, an abnormal gene causes mucus to become thick and sticky. The mucus builds up in your lungs and blocks the airways. This makes it easy for bacteria to grow and leads to repeated serious lung infections. Over time, these infections can cause serious damage to your lungs. The thick, sticky mucus can also block tubes, or ducts, in your pancreas. As a result, digestive enzymes that are produced by your pancreas cannot reach your small intestine. These enzymes help break down the food that you eat. Without them, your intestines cannot absorb fats and proteins fully. 
As a result:\n- Nutrients leave your body unused, and you can become malnourished.\n- Your stools become bulky.\n- You may not get enough vitamins A, D, E, and K.\n- You may have intestinal gas, a swollen belly, and pain or discomfort.\nThe abnormal gene also causes your sweat to become extremely salty. As a result, when you perspire, your body loses large amounts of salt. This can upset the balance of minerals in your blood. The imbalance may cause you to have a heat emergency.\nCF can also cause infertility (mostly in men). The symptoms and severity of CF vary from person to person. Some people with CF have serious lung and digestive problems. Other people have more mild disease that doesn't show up until they are adolescents or young adults. Respiratory failure is the most common cause of death in people with CF. Until the 1980s, most deaths from CF occurred in children and teenagers.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "Genetic Fact Sheets for Parents\nDisorder name: Cystic Fibrosis\n- What is CF?\n- What causes CF?\n- What are the symptoms of CF?\n- What is the treatment for CF?\n- What happens when CF is treated?\n- What causes the CFTR protein to be absent or not working correctly?\n- How is CF inherited?\n- Is genetic testing available?\n- What other testing is available?\n- Can you test during pregnancy?\n- Can other members of the family have CF or be carriers?\n- Can other family members be tested?\n- How many people have CF?\n- Does CF happen more often in a certain ethnic group?\n- Does CF go by any other names?\n- Where can I find more information?\nThis fact sheet has general information about cystic fibrosis (CF). Every child is different and some of this information may not apply to your child specifically. Certain treatments may be recommended for some children but not others. 
If you have specific questions about CF and available treatments, you should contact your doctor.\nWhat is CF?\nCystic fibrosis (CF) is an inherited condition that causes problems with lung function, and also, often, with digestion. CF causes thick, sticky mucus and fluids to build up in certain organs in the body, especially the lungs and the pancreas. When glands and organs in the body become blocked, their normal functions slow down or stop working well. This results in chronic health problems. In people with CF, the thickened mucus that lines the lungs and bronchioles can lead to repeated lung infections. In people who do not have CF, thin slippery mucus normally lines the nose and the tubes leading to the lungs. This mucus has the job of picking up bacteria, viruses and dirt from the air we breathe and moving them up and out of the lungs. The thick, sticky mucus found in people with CF can no longer do this job well. CF also reduces the immune cells’ ability to fight infections. People with CF develop chronic coughing and recurrent lung infections.\nIn addition to lung problems, many children with CF also have 'pancreatic insufficiency'. The pancreas is an organ behind the stomach. One of its jobs is to make special digestive enzymes that break down the food we eat into nutrients small enough to get into the blood. If the pancreas is blocked, the enzymes cannot get to the small intestine to do their job.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "
Clinical features are dominated by respiratory symptoms, but there is variable organ involvement thus causing diagnostic dilemmas, especially for non-classic cases.\nTo further establish measurement of CFTR function as a sensitive and robust biomarker for diagnosis and prognosis of CF, we herein assessed cholinergic and cAMP-CFTR-mediated Cl− secretion in 524 freshly excised rectal biopsies from 118 individuals, including patients with confirmed CF clinical diagnosis (n = 51), individuals with clinical CF suspicion (n = 49) and age-matched non-CF controls (n = 18). Conclusive measurements were obtained for 96% of cases. Patients with “Classic CF”, presenting earlier onset of symptoms, pancreatic insufficiency, severe lung disease and low Shwachman-Kulczycki scores were found to lack CFTR-mediated Cl− secretion (<5%). Individuals with milder CF disease presented residual CFTR-mediated Cl− secretion (10–57%) and non-CF controls show CFTR-mediated Cl− secretion ≥30–35% and data evidenced good correlations with various clinical parameters. Finally, comparison of these values with those in “CF suspicion” individuals allowed to confirm CF in 16/49 individuals (33%) and exclude it in 28/49 (57%). Statistical discriminant analyses showed that colonic measurements of CFTR-mediated Cl− secretion are the best discriminator among Classic/Non-Classic CF and non-CF groups.\nDetermination of CFTR-mediated Cl− secretion in rectal biopsies is demonstrated here to be a sensitive, reproducible and robust predictive biomarker for the diagnosis and prognosis of CF. The method also has very high potential for (pre-)clinical trials of CFTR-modulator therapies.\nCitation: Sousa M, Servidoni MF, Vinagre AM, Ramalho AS, Bonadia LC, Felício V, et al. (2012) Measurements of CFTR-Mediated Cl− Secretion in Human Rectal Biopsies Constitute a Robust Biomarker for Cystic Fibrosis Diagnosis and Prognosis. 
PLoS ONE 7(10): e47708.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "Recent studies also show that even though Cystic Fibrosis is lethal when you inherit two of these mutated genes, there may be some evolutionary benefit to the carriers of this gene because if there were no selective advantage to the gene, it would have disappeared thousands of years ago.\nAnd necessity is the mother of invention and humor as parents have turned to innovative ways to raise funds for research. Another infant aged 18 months showed widening of the pancreatic ducts and a necrotic parenchyma. The cases where pancreatic disease seems to have been a major factor are reviewed; in particular the author draws attention to a paper by Nakamura.\nTen children with gastroenteritis were found at autopsy to have histological changes in the pancreas with inflammation causing sclerosis with possible blockage of the main pancreatic duct of Wirsung.\nCystic fibrosis is a disease in which a mutated gene causes a thick and sticky mucus to be produced, resulting in mucus build up in the lungs and digestive tract.A Brief History of Cystic Fibrosis 0.\nThough European folklore and literature from the 18th century warned, “Woe to the child who tastes salty from a kiss on the brow, for he is cursed and soon must die,” the condition was not formally described until the s. Cystic fibrosis is most common among people of Northern European heritage, affecting one of every 3, newborns.\nIt is least common in people of African or Asian descent and affects women only slightly more than men. 
Cystic Fibrosis Chapter 7 What we are learning about this disease Pathophysiology, causes: genetic, environment, microbes Cystic fibrosis was referred to in medieval folklore, which mentions infants with salty skin who were considered “bewitched” because they routinely died an early death.\nSalty skin is now recognized as a sign of CF.\nThe early years - "cystic fibrosis" by any other name From the midth century there were many reports of infants who may well have had cystic fibrosis by the nature of their clinical signs and course. There are approximately 30, Americans that have CF with about 1, new cases per year.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-11", "d_text": "Overall, based on CFTR-mediated Cl− secretion in the colon and subsequent confirmatory CF diagnostic testing (sweat Cl−, CFTR genotypes and clinical outcomes (data on Table S1-b), we could classify the individuals in the “CF suspicion” group (n = 49) as: Classic CF (n = 9), Non-Classic CF (n = 7), CFTR-RD (n = 2) and Non-CF (n = 26). As to the 5 individuals showing inconclusive Ussing chamber measurements, one individual had one CF-disease causing mutation (G542X) and two individuals had RD-related mutations (V562I and G576A). 
As we could not confirm/exclude CF in these 5 individuals, they were not included in the hereunder correlations.\nFunctional classification of rarer mutations also results from these analyses, namely (Table S1): 3120+1G>A as class I (2 siblings with 3120+1G>A/R1066C, absence of CFTR-function and severe phenotypes); 1716+18672A>G as class V (2 other siblings with F508del/1716+18672A>G, residual CFTR function −28–34%- and mild CF); I618T as class IV (in a patient with G542X/I618T, 37% CFTR function and mild disease); and L206W as class IV or CFTR-RD mutation (in a patient with F508del/L206W and the highest CFTR function −57%- and very mild disease).\nCorrelation between CFTR-mediated Cl− Secretion and Clinical Outcomes\nIn order to assess the value of CFTR-mediated Cl− secretion in rectal biopsies as a predictive tool for CF, we attempted to statistically correlate these values with the accepted CF-characteristic parameters (Table 1, see also Methods S1), namely: sweat [Cl−] (Fig.2A, S3A); FEE (Fig.2B, S3B); BMI (Fig.2C, S3C, S4A); and age at diagnosis (Fig.3A, S3D).", "score": 20.327251046010716, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "What is Cystic Fibrosis?\nCystic Fibrosis is a genetic disorder that causes severe damage and infection to the lungs, gastrointestinal tract and sweat glands. It is a condition that affects the cells that produce mucus, sweat and digestive juice. Usually, the mucus is thin and slippery, but due to the condition of Cystic Fibrosis, the mucus becomes thick and sticky.\nThe thick mucus results in trouble breathing as the mucus blocks the airways. Over time the mucus fills inside the airways leading to the cause of infection.\nWhat causes Cystic Fibrosis?\nCystic Fibrosis disorder occurs because of a defect in the gene. It is hereditary. The defect or mutation in CFTR (cystic fibrosis transmembrane conductance regulator) gene causes Cystic Fibrosis. 
The defective gene affects the cells that produce mucus, making it thick and sticky.\nWhat are the symptoms of Cystic Fibrosis?\nThe symptoms vary with the severity of the disorder. Pay attention to the following symptoms:\n- Trouble breathing\n- Frequent lung infections\n- Blockage in the nasal passage\n- Sinus infection\n- Persistent coughing\n- Constipation or Diarrhoea\nHow is Cystic Fibrosis diagnosed?\nDiagnosing Cystic Fibrosis involves several tests, including the following.\nIn newborns, CF is diagnosed through newborn screening so that treatment can begin immediately. Sweat tests and blood tests are the two tests used to check for Cystic Fibrosis in newborn children.\nA blood test helps to check the immunoreactive trypsinogen (IRT). People affected with Cystic Fibrosis have a high level of IRT.\nSweat tests can also determine whether the individual has Cystic Fibrosis. They measure the salt in sweat. The sweat test is the most commonly used test to diagnose Cystic Fibrosis. In people with Cystic Fibrosis, the level of salt in sweat is higher than normal.\nA genetic test is carried out to check the genes that cause Cystic Fibrosis. Blood samples will be collected and sent for further examination to check for Cystic Fibrosis. A genetic test can also be performed to check for mutations in the CFTR gene.\nWhat are the treatments for Cystic Fibrosis?\nThere is no cure for Cystic Fibrosis.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "
This causes the thick, sticky mucus and very salty sweat that are the main features of CF.\nImage courtesy of Celia Hooper, Journal of NIH Research, Nov-Dec. 1989.\nEach of us inherits two CFTR genes, one from each parent.\n- Children who inherit an abnormal CFTR gene from each parent will have CF.\n- Children who inherit an abnormal CFTR gene from one parent and a normal CFTR gene from the other parent will not have CF. They will be CF carriers.\n- Usually have no symptoms of CF\n- Live normal lives\n- Can pass the abnormal CFTR gene on to their children\nWhen two CF carriers have a baby, the baby has a:\n- One in four chance of inheriting two abnormal CFTR genes and having CF.\n- One in four chance of inheriting two normal CFTR genes and not having CF or being a carrier.\n- Two in four chance of inheriting one normal CFTR gene and one abnormal CFTR gene. The baby will not have CF but will be a CF carrier like its parents.\nWho is at Risk\nAbout 30,000 people in the United States have cystic fibrosis (CF).\n- It affects both males and females.\n- It affects people from all racial and ethnic groups but is most common among Caucasians whose ancestors came from northern Europe.\nCF is one of the most common inherited diseases among Caucasians. About 1 in every 3,000 babies born in the United States has CF. CF is also common in Latinos and Native Americans, especially the Pueblo and Zuni. CF is much less common among African Americans and Asian Americans.\nAbout 12 million Americans are carriers of an abnormal CF gene. 
Many of them do not know that they are CF carriers.\nImage courtesy of NHLBI.\nSigns and Symptoms\nMost of the symptoms of cystic fibrosis (CF) are caused by the thick, sticky mucus.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-15", "d_text": "Classic CF = 0.317(Sweat Chloride) +0.005(Fecal Elastase) +0.012(Isc-CCH(IBMX/Fsk)) −18.909.\nNon-Classic CF = 0.338(Sweat Chloride) +0.028(Fecal Elastase) −0.015(Isc-CCH(IBMX/Fsk)) −28.485.\nNon-CF = 0.202(Sweat Chloride) +0.026(Fecal Elastase) −0.102(Isc-CCH(IBMX/Fsk)) −23.414.\nThese allow classification of a new CF suspicion case by substituting into these 3 equations the values from laboratory measurements obtained for a given individual. The function giving the highest value will correspond to the CF classification group best describing the individual.\nThe wide spectrum of CF phenotypes, high variability of CF lung disease, the uncertain (dys)function of many rare CFTR mutations together with increasing numbers of asymptomatic patients identified in recent newborn CF screens, have posed major challenges to clinicians for the establishment of CF diagnoses and prognosis. Such hurdles make it difficult for caregivers to provide adequate genetic counselling and medical care, risking worsening of symptoms and organ damage.\nGood Correlations between CFTR-mediated Cl− Secretion and CF Parameters\nTo evaluate the robustness of colonic CFTR-mediated Cl− secretion as a diagnosis/prognosis biomarker and thus help overcome such difficulties, we assessed CFTR (dys)function ex vivo in 524 rectal biopsies from 118 individuals, including the largest cohort of CF patients ever analysed by this approach (n = 51), a non-CF (control) group (n = 18) and individuals with clinical CF suspicion to confirm/exclude a CF diagnosis (n = 49).
The functional data, demonstrating good correlations with most CF-defining parameters, have also provided key information to adjust the clinical judgment of a CF diagnosis and prognosis.\nOur approach consists of direct measurements of colonic CFTR function assessed through both cAMP-dependent and cholinergic Cl− secretion which, as previously shown, are strictly dependent on the presence of functional CFTR.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-3", "d_text": "There are no data to weigh the significance of particular symptoms or construct receiver operating curves, but the common diagnostic features of PCD are listed in table 1 as a guide. A combination of upper and lower airway symptoms is usual.\nConventional diagnostic clues\nHeterotaxy (usually mirror image organ arrangement) on antenatal ultrasound scanning should lead to consideration of the diagnosis, although most babies with mirror image arrangement do not have PCD.\nPresentation in the newborn period\nContinuous rhinorrhoea from the first day of life; this is highly likely to be due to PCD\nRespiratory distress or neonatal pneumonia with no obvious predisposing cause\nMirror image arrangement (but see “Antenatal presentation” above)\nDiagnosis by screening because of a positive family history\nPresentation in childhood\nChronic productive or “wet” cough. Clearly, other causes such as cystic fibrosis (CF) will likely need to be ruled out first by appropriate testing, before referral for ciliary function, unless the child also has mirror image arrangement or other features suggestive of PCD. Isolated dry cough is not likely to be due to PCD.\nAtypical “asthma”, non-responsive to treatment, especially if a wet-sounding cough is present\n“Idiopathic” bronchiectasis. Diagnosis of bronchiectasis by screening because of a positive family history of PCD. This accounted for 10% of cases in one series.2\nRhinosinusitis.
Daily rhinitis is typical, without remission, and sometimes in older children severe sinusitis despite multiple surgical procedures; nasal polyps are rare, and more commonly due to CF.\nOtitis media with effusion (OME). Typically, if tympanostomy (ventilation tubes) are inserted, there is a prolonged smelly ear discharge for weeks not responding to treatment and with no improvement in hearing.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-19", "d_text": "He had suffered chronic ill health from childhood with poor growth and bronchiectasis, situs inversus and sinusitis – a syndrome described by Kartagener in 1933 (Kartagener M. Beitr Klin Tuberk 1933; 83:489).\n1974 Burnell RH,\nRobertson EF. Cystic fibrosis in a patient with Kartageners Syndrome. Am J Dis\nChild 1974; 127:746-747. [PubMed]\nA report from Adelaide Children's Hospital, Australia. Said to be the third report of this combination with Kartagener's syndrome of situs inversus totalis, bronchiectasis, and recurrent sinusitis with or without nasal polyps (figure 1). A white boy aged six months had “bronchopneumonia and neglect”. The lowest sweat electrolyte values were sodium 96 and chloride 80 meq/l. The duodenal biopsy showed partial villous atrophy, which was rather puzzling and not discussed by the authors. The similarity of the symptoms of the two conditions, CF and Kartagener's, is discussed. The two previous cases were doubtful. The authors advise excluding CF in all people with Kartagener's syndrome. (Also Brown & Smith, 1959 above),\n1993 Phillips RJ,\nCrock CM, Dillon MJ, Clayton PT, Curran A, Harper JI. Cystic fibrosis presenting\nas kwashiorkor with florid skin rash. Arch Dis Child 1993; 69:446-448. [PubMed]\nTwo infants with a florid erythematous rash and generalised oedema, hypoalbuminaemia, and anaemia were found to have cystic fibrosis. This rare presentation is associated with false negative sweat tests, delays in diagnosis, and a considerable mortality.
The authors suggested that this presentation represents a manifestation of kwashiorkor secondary to intestinal malabsorption.\n1986 Orenstein DM,\nWasserman AL. Munchausen syndrome by proxy simulating cystic fibrosis. Pediatrics\n1986; 78:621-624. [PubMed]\nProfessor Sir Roy Meadow described Munchausen by proxy in 1977 (Meadow R. Munchausen syndrome by proxy. The hinterland of child abuse.", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "Cystic Fibrosis is an inherited disorder that causes severe damage to the lungs, digestive system and other organs in the body.\nThe disorder affects the cells that produce mucus, sweat and digestive juices. These secreted fluids are normally thin and slippery, but in people with cystic fibrosis a defective gene causes the secretions to become sticky and thick. Instead of acting as a lubricant, the secretions plug up tubes, ducts and passageways in the lungs and pancreas.\nThis disorder requires daily care, but most patients can attend school, work and have a better quality of life than they would have decades ago.\nScreening newborns for cystic fibrosis is now performed in every state in the United States.\nPeople with cystic fibrosis have a higher than normal level of salt in their sweat. Adults diagnosed with cystic fibrosis are more likely to have atypical symptoms such as recurring bouts of inflamed pancreas (pancreatitis), infertility and recurring pneumonia.\nSome symptoms are a persistent cough that produces thick mucus, wheezing, breathlessness, exercise intolerance, repeated lung infections, and inflamed nasal passages or a stuffy nose.\nDigestive symptoms are foul-smelling greasy stools, poor weight gain and growth, intestinal blockage and severe constipation.\nThe cause is hereditary, and the disease is most common in people of Northern European ancestry.\nComplications of this disorder include damaged airways, chronic infections, growths of polyps in the nose, coughing up blood, pneumothorax, respiratory failure, acute exacerbations, Nutritional
deficiencies, Diabetes, Blocked bile duct, intestinal obstruction, Distal intestinal obstruction syndrome, Osteoporosis, electrolyte imbalance and dehydration.\nThere is a new product invented by Marten Devlieger: the vest he invented allows patients with Cystic Fibrosis to receive treatment on the go. @thecfsorceress, www.hill-rom.com The name of the vest is the Monarch.\nMarten’s journey with Cystic fibrosis and how he is helping others live better with their disease.\nMarten lives life to the fullest and wants others to do the same. He was empowered by his love for life and wanted to do all the things he enjoyed, but there was one problem: he had to stop for treatments. So Marten invented the Monarch, a portable vest that allows Marten and others to go on their adventures and give themselves treatment portably.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "- Patient Comments: Cystic Fibrosis - Describe Your Experience\n- Patient Comments: Cystic Fibrosis - Symptoms\n- Patient Comments: Cystic Fibrosis - Risk\n- Cystic fibrosis facts*\n- What is cystic fibrosis?\n- What are other names for cystic fibrosis?\n- What causes cystic fibrosis?\n- Is cystic fibrosis inherited?\n- Who is at risk for cystic fibrosis?\n- What are the signs and symptoms of cystic fibrosis?\n- How is cystic fibrosis diagnosed?\n- How is cystic fibrosis treated?\n- Living with cystic fibrosis\n- What is the outlook for cystic fibrosis?\nWhat is the outlook for cystic fibrosis?\nThe symptoms and severity of CF vary. If you or your child has the disease, you may have serious lung and digestive problems. If the disease is mild, symptoms may not show up until the teen or adult years.\nThe symptoms and severity of CF also vary over time. Sometimes you'll have few symptoms. Other times, your symptoms may become more severe.
As the disease gets worse, you'll have more severe symptoms more often.\nLung function often starts to decline in early childhood in people who have CF. Over time, damage to the lungs can cause severe breathing problems. Respiratory failure is the most common cause of death in people who have CF.\nAs treatments for CF continue to improve, so does life expectancy for those who have the disease. Today, some people who have CF are living into their forties or fifties, or longer.\nEarly treatment for CF can improve your quality of life and increase your lifespan. Treatments may include nutritional and respiratory therapies, medicines, exercise, and other treatments.\nYour doctor also may recommend pulmonary rehabilitation (PR). PR is a broad program that helps improve the well-being of people who have chronic (ongoing) breathing problems.\nMedically reviewed by James E Gerace, MD; American Board of Internal Medicine with subspecialty in Pulmonary Disease\n"Cystic Fibrosis." NIH National Heart, Lung, and Blood Institute. 1 June 2011.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-6", "d_text": "Chemicals and Compounds\nAll chemicals (highest available purity) were from Sigma-Aldrich® (St Louis, MO, USA) or Merck® (Darmstadt, Germany) except for culture media (GIBCO®/Invitrogen, Carlsbad, CA, USA). The Fecal Elastase E1 test was from Schebo® Biotech AG (Giessen, Germany).\nFor statistical analyses, the SPSS software/v.19 (Chicago, IL, USA) was used and a p-value <0.05 was considered statistically significant. Unless otherwise stated, data are mean ± STD (n, number of individuals studied).
Details of the statistical analysis for Pearson correlations, Chi-square and discriminant analyses are described under Methods S1.\nSubjects Under Study and Overview of Clinical Data\nThe clinical diagnosis of CF was established based on consensus clinical criteria, namely: 1) presence of one or more characteristic phenotypic features (chronic sinopulmonary disease; gastrointestinal/nutritional abnormalities; obstructive azoospermia or salt-loss syndrome) and 2) evidence of a CFTR abnormality (increased sweat [Cl−] (>60 mEq/L) and/or detection of two CF-disease causing mutations). Using these criteria and the current CF classification terminology, two different sub-groups of CF patients (n = 51) were established (detailed clinical data in Table S1): (a) Classic CF patients (n = 46) presenting a severe phenotype and classic disease manifestations (high sweat-Cl−; PI; nutrition deficiencies; chronic sinopulmonary disease); and (b) Non-Classic CF patients (n = 5) with an atypical phenotype and showing a milder disease (pancreatic sufficiency-PS; adulthood diagnosis; less serious lung involvement; and at least one organ with CF phenotype).\nA third group included individuals with a clinical suspicion of CF (n = 49) of which 27% had only one abnormal sweat-Cl− value; others presented borderline (22%) or normal (20%) sweat-Cl− values (see Methods); and 31% had not been tested for sweat-Cl− at the time.
Most of these individuals (69%) had inconclusive genetic testing (20% had only one CF-disease causing mutation identified) and the remainder had not been CFTR-genotyped at the time (Table S1).", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-1", "d_text": "Bayside Business Plaza\n2670 Bayshore Parkway\nMountain View, CA 94043\nCochrane Cystic Fibrosis and Genetic Disorders Group\nInstitute of Child Health, University of Liverpool\nAlder Hey Children's NHS Foundation Trust\nLiverpool, L12 2 AP\nCystic Fibrosis Worldwide\nEvka 3 Mahallesi\n127/19 SOKAK NO/25\nGenetic and Rare Diseases (GARD) Information Center\nPO Box 8126\nGaithersburg, MD 20898-8126\nGlobal Fibrosis Foundation\n250 Main Street\nCambridge, MA 02142\nChildhood Liver Disease Research and Education Network\nc/o Joan M. Hines, Research Administrator\nChildren's Hospital Colorado\n13123 E 16th Ave. B290\nAurora, CO 80045\nFor a Complete Report\nThis is an abstract of a report from the National Organization for Rare Disorders (NORD). A copy of the complete report can be downloaded free from the NORD website for registered users. The complete report contains additional information including symptoms, causes, affected population, related disorders, standard and investigational therapies (if available), and references from medical literature. For a full-text version of this topic, go to www.rarediseases.org and click on Rare Disease Database under \"Rare Disease Information\".\nThe information provided in this report is not intended for diagnostic purposes. It is provided for informational purposes only. NORD recommends that affected individuals seek the advice or counsel of their own personal physicians.\nIt is possible that the title of this topic is not the name you selected. 
Please check the Synonyms listing to find the alternate name(s) and Disorder Subdivision(s) covered by this report\nThis disease entry is based upon medical information available through the date at the end of the topic. Since NORD's resources are limited, it is not possible to keep every entry in the Rare Disease Database completely current and accurate. Please check with the agencies listed in the Resources section for the most current information about this disorder.\nFor additional information and assistance about rare disorders, please contact the National Organization for Rare Disorders at P.O.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-0", "d_text": "Fibrosis is the formation of excess fibrous tissues or scar tissue, usually because of injury or long-term inflammation. The two most well-known types of this condition are pulmonary fibrosis, which affects the lungs; and cystic fibrosis (CF), which affects the mucus glands. There also are many other types, including those that affect the heart, skin, joints and bone marrow. Cirrhosis of the liver also is a type of this condition.\nThere are many potential causes of this condition. It is sometimes caused by disease or the treatment of a disease. Other causes include injuries, burns, radiation, chemotherapy and gene mutation. Some types of this condition are idiopathic, which means that the causes are unknown.\nFibrosis causes the affected tissues to harden. They sometimes also swell. These changes can make the tissues unable to function properly. For example, the flow of fluids through the affected tissues is often reduced. When the condition is present in the lungs, they are unable to expand as normal, causing a shortness of breath.\nIn the lungs, this condition is called pulmonary fibrosis, and it involves the overgrowth, hardening and/or scarring of lung tissue because of excess collagen. 
In addition to shortness of breath, the common symptoms include a chronic dry cough, fatigue, weakness and chest discomfort. A loss of appetite and rapid weight loss also are possible. This condition usually affects people between the ages of 40 and 70, and men and women are equally affected. The prognosis for patients with this disease is poor, and they usually are expected to live an average of only four to six years after diagnosis.\nAnother common form of this condition is CF, a chronic, progressive and often fatal genetic disease of the body's mucus glands. Symptoms sometimes include abnormal heart rhythms, malnutrition, poor growth, frequent respiratory infections and breathing difficulties. This condition can also cause other medical problems, including sinusitis, nasal polyps and hemoptysis (coughing up blood). Abdominal pain and discomfort, gassiness, and rectal prolapse also are possible.\nCF primarily affects the respiratory and digestive systems in children and young adults. The symptoms are often apparent at birth or shortly thereafter; only rarely do signs first appear as late as adolescence. It is most commonly found in Caucasians, and the prognosis is moderate, with many patients living for as long as 30 years after diagnosis.", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "September 27, 2007\nOne out of every 3500 newborns will be diagnosed with CF or cystic fibrosis at some point in their life. Affecting primarily the respiratory and GI tracts, it's a disease usually identified in young infants and children. But many times a patient can reach adulthood before ever getting the diagnosis. Dr. Mom brings us the story of one family this happened to.\nWhat is Cystic Fibrosis (CF)?\nWhat are the symptoms of CF?\nCF is an inherited disease that can affect almost every system in the body, especially the lungs and digestive system. Approximately 1,000 new cases of CF are diagnosed each year.
In the early 1950s very few children lived to school age. However, with early diagnosis and advances in treatment, the median age of survival is now 37 years.\nOther symptoms may also include:\n- Very salty tasting skin\n- Persistent cough, at times with phlegm\n- Frequent chest and sinus infections with recurring pneumonia or bronchitis\n- Wheezing or shortness of breath\n- Poor growth, poor weight gain\n- Frequent greasy, malodorous, bulky stools\n- Blockage in the bowels\n- Enlargement or rounding (clubbing) of the fingertips and toes. Although clubbing eventually occurs in most people with cystic fibrosis, it also occurs in some people born with heart disease and other types of lung problems.\n- Protrusion of part of the rectum through the anus (rectal prolapse). This is often caused by stools that are difficult to pass or by frequent coughing.\n- Growths (polyps) in the nasal passages\n- Cirrhosis of the liver due to inflammation or obstruction of the bile ducts\n- Displacement of one part of the intestine into another part of the intestine (intussusception) in children older than age 4\n- Aspermia or the lack of sperm in males\nTesting for CF\n- Genetic carrier testing - This test can help detect carriers of the defective CF gene. To have CF, a child must inherit one copy of the defective gene from each parent.\n- Sweat Test - This is a simple, painless test that measures the concentration of salt in a person's sweat.\n- Newborn screening - many states require newborns to be screened for CF. However, Texas does not require newborn screening for CF.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-0", "d_text": "Jan 19, 2020 | by Nancy Newbrough\nMy 16-year-old son seemed normal in every way until he was about four years old. He had stopped gaining weight and had a cough. Doctors could not find anything, but our concerns continued to grow.\nAt six years, he would have bad coughing fits and then sleep for some time.
I took him to the ER after a particularly bad episode, but the doctors had no answers. So we took him to an asthma/allergy specialist. During a visit, the nurse looked at his fingers and asked questions. Moments later the doctor came in, looked at his nails and suggested that he may have cystic fibrosis. Abnormal rounding of the nail beds is one of the signs.\nCystic fibrosis is a rare genetic disease for which there is no cure. It primarily affects the lungs, but also the digestive system. CF occurs when there is a recessive gene from both the father and the mother. Neither of us knew of anyone in our families having the disease.\nA specialist confirmed the diagnosis. This disease affects about 30,000 people in the United States. Current life expectancy is only 37-40 years old, though some live much longer.\nNebulizer breathing treatments began, one in the morning and two different ones in the evening. A machine called a Vest provides a treatment that basically vibrates his lungs. Each breathing treatment takes approximately 20 minutes and the Vest treatment is an additional 20 minutes in the evening. The goal is to loosen the mucus in the lungs so it can be coughed up. Mucus in the lungs that never goes away and is constantly growing is the main symptom of CF.\nAt the time of diagnosis, my son had not gained any weight in several years and his BMI was about 25%. Because CF also affects the digestive system, the doctor put him on enzymes. CF does not allow the body to produce proteins that break down the fat and vitamins in food so they can be absorbed -- which is why he had not been gaining weight. The enzymes enable his body to do that.\nMy son was hospitalized for the first time in May, 2014. His lung function had dropped to 64% and he needed IV treatment. Called “tune-ups,” this is a common occurrence for people with CF. 
Since then, my son has been admitted four times.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-1", "d_text": "Children need to inherit one copy of the gene from each parent to have this disease. If children inherit only one copy, they will not develop cystic fibrosis. However, they will be carriers and may pass the gene on to their own children.\nCystic Fibrosis Complications\nThere are many complications of Cystic Fibrosis. They are given below.\nDiagnosis of Cystic Fibrosis\nThe methods for diagnosing cystic fibrosis are given below:\n1) Newborn screening and diagnosis\n2) Testing of older children and adults\nCystic Fibrosis Treatment\nThere is no cure for cystic fibrosis, but treatment can ease symptoms and reduce complications. Close monitoring and early, aggressive intervention are recommended. Managing cystic fibrosis is difficult, so consider obtaining treatment at a center staffed by doctors and other staff trained in cystic fibrosis. Doctors may work with a multidisciplinary team of medical professionals trained in cystic fibrosis to evaluate and treat the condition. The aims of cystic fibrosis treatment include:\n1) Preventing and controlling infections that occur in the lungs\n2) Loosening and removing mucus from the lungs\n3) Treating and preventing intestinal blockage\n4) Providing sufficient nutrition", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-1", "d_text": "doi:10.1371/journal.pone.0047708\nEditor: Dominik Hartl, University of Tübingen, Germany\nReceived: June 2, 2012; Accepted: September 14, 2012; Published: October 17, 2012\nCopyright: © Sousa et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.\nFunding: This work was supported by grants TargetScreen2 (EU/FP6/LSH/2005/037365), PIC/IC/83103/2007; PTDC/MAT/118335/2010; PEstOE/BIA/UI4046/2011 (to BioFIG) and PEstOE/MAT/UI0006/2011 (to CEAUL) from FCT (Portugal); and FAPESP (SPRF, Brazil), CNPq (40.8924/2006/3, Brazil) and Mukoviszidose e.V. S02/10 (Germany). MS and IU are recipients of SFRH/BD/35936/2007 and SFRH/BD/69180/2010 PhD fellowships (FCT, Portugal), respectively. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\nCompeting interests: The authors have declared that no competing interests exist.\nCystic Fibrosis (CF), the most common severe autosomal recessive disease in Caucasians, is caused by mutations in the CF transmembrane conductance regulator (CFTR) gene – which encodes a cAMP-regulated chloride (Cl−) channel expressed at the apical membrane of epithelial cells to control salt and water transport. Clinically, CF is characterized by multiple manifestations in different organs, but is dominated by the respiratory disease, the main cause of morbidity and mortality. Airway obstruction by thick mucus and chronic infections, especially by Pseudomonas aeruginosa (Pa), eventually lead to impairment of respiratory function. Other CF symptoms include pancreatic insufficiency, intestinal obstruction, elevated sweat electrolytes and male infertility.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "The classic presentation of pulmonary fibrosis is slowly progressing shortness of breath (“dyspnoea”) with a chronic non-productive (dry) cough.
The shortness of breath is typically worse on exercise and exertion.\nThere are various other signs and symptoms, including:\n> Fatigue & lethargy\n> Loss of appetite\n> Weight loss\n> Chest pain\n> Low grade fever & muscle pains\nDiagnosing pulmonary fibrosis\nVarious tests can be performed in order to diagnose pulmonary fibrosis; these may include:\nSpirometry (‘breathing and blowing tests’): these simple breathing tests may show a restrictive pattern with a reduced total lung volume.\nComputed Tomography (CT) scan: this may also be called a HRCT, or high resolution CT scan. A CT scan shows more detail than a chest x-ray, but may not always be required.\nLung biopsy: A lung biopsy is an invasive procedure and not always required if there is clear evidence of pulmonary fibrosis on x-ray or CT.", "score": 15.758340881307905, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "Cystic fibrosis is a serious genetic disease that affects about 1 baby in 2,500 in Italy. The disease is triggered by a defect in the CFTR gene, which is carried by about 1 person in 25 in Italy. For each pregnancy, a pair of carriers has a 25% chance of having an affected child.\nThere are various levels of disease severity, which may vary from person to person, depending on age of diagnosis and the type of CFTR gene mutation. Cystic fibrosis alters the secretions of the organs, making them more dense and less fluid. This damages the organs and gradually causes loss of function. The most affected organs are the lungs and bronchi: the mucus stagnates in them, which causes more and more serious infections and inflammations. Over time, the accumulated damage leads to organ insufficiency.\nCystic fibrosis also affects other organs, in addition to those of the respiratory system. It seriously harms the pancreas, which is responsible for releasing digestive enzymes. The malfunctioning of the pancreas leads to digestive problems, malabsorption of nutrients and consequent growth problems.
Often pancreatic problems evolve into a form of diabetes, with all the additional problems that arise. In addition, the malfunctioning of CFTR can damage the intestine, the liver and, in men, the vas deferens.\nOne of the major symptoms leading to the diagnosis of cystic fibrosis is salty sweat, caused by abnormally high levels of chloride. Affected children also suffer from persistent cough, shortness of breath, frequent lung infections and failure to gain weight. In these cases, suitable DNA testing and analysis are carried out to check for the presence of the disease.\nFor now there is no real cure for cystic fibrosis. There are treatments for the symptoms and, in the case of early diagnosis, to slow the progress of the disease. Compared to 50 years ago, patients enjoy a higher quality of life and better survival prospects. Once, a child with cystic fibrosis struggled to reach school age, while average survival today is around 40 years. The road to finding a cure is still long, but scientific research is making big steps forward.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "Cystic fibrosis (CF) is the most common recessively inherited life limiting disease in the UK. Incidence rates are around 1 in 2500 live births and approximately 1 in 25 of the population is a carrier for the disease. There are approximately 9000 patients currently in the UK. CF mostly affects Caucasian populations; however, there are a significant number of cases amongst families from the Indian subcontinent and the Middle East.
It is rare in Chinese, South East Asian and African-Caribbean ethnic groups.\nThe outlook for patients with CF has steadily improved over the years as a result of early diagnosis; more aggressive treatment; agreed standards of care which define assessment, monitoring, detection and treatment of complications; and the recommendation that care is delivered in specialist centres by a multidisciplinary team of trained health professionals experienced in the management of the condition. The proportion of adults with CF has increased dramatically and currently stands at 56% of the CF population. Median survival is 41.4 years and is predicted to be 50 years of age for infants born in 2000.\nCF is caused by a mutation of the cystic fibrosis transmembrane conductance regulator gene (CFTR). This results in dysfunction in the regulation of salt and water across the cell membranes of secretory epithelial cells and in thickened secretions in all organs with epithelial cells; hence CF is a multi-organ condition. Approximately 1500 different genetic mutations have been identified. The most common genotype, which accounts for around 52% of cases, is homozygous ΔF508, with ΔF508 plus one other genotype accounting for approximately 39% of cases. Class I, II and III mutations are the most severe, giving rise to more typical presentations of CF. Class IV and V mutations give rise to milder or atypical CF.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-1", "d_text": "The treatment and medication aim to treat the symptoms and reduce the risk of developing other health complications.\nMedications for Cystic Fibrosis\n- Patients with Cystic Fibrosis are given antibiotics to treat lung infections and prevent them from recurring.\n- Anti-inflammatory medicine works to reduce inflammation in the nasal passage and lungs.\n- Mucus-thinning medications make the mucus thinner so that it can leave the lungs more easily.
The medication also helps to increase lung function.\n- Bronchodilators relax the muscles around the tubes that carry air to the lungs, helping to open the airways and increase airflow.\n- Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) modulators improve the function of the defective CFTR protein.\nSurgical treatments for Cystic Fibrosis\nThe surgical treatments for Cystic Fibrosis are as follows.\nNasal and sinus surgery clears blockages in the nasal passages and sinuses. Sinus surgery is performed to avoid the risk of developing chronic sinusitis.\nBowel surgery is necessary if there is a blockage in the bowel; it is performed to remove the blockage.\nA lung transplant is necessary if there are severe breathing problems or lung infections. Cystic Fibrosis is not cured by lung transplantation, but transplantation helps ease the symptoms and complications of Cystic Fibrosis.\nOxygen therapy becomes necessary when the oxygen level declines. The treatment involves inhaling supplemental oxygen to reduce the risk of high blood pressure in the lungs (pulmonary hypertension).\nNoninvasive ventilation (breathing through a face mask) provides positive pressure in the lungs and airways to aid respiratory function. The treatment is usually carried out at night while sleeping and often combined with oxygen therapy. Here, positive pressure is provided by a mouth or nasal mask while breathing, thus helping to clear the airways.\nWhat are the complications of Cystic Fibrosis?\nCystic Fibrosis is a life-threatening disorder. If untreated, the complications of Cystic Fibrosis can damage the pancreas. The thick mucus can block the pancreatic ducts, resulting in poor digestion and, in the worst cases, diabetes.\nIn the same way, mucus can block ducts in the liver, small intestine, large intestine, kidney, bladder and other organs.
The consequences cause severe problems and hazards.\nPrevention of Cystic Fibrosis is hard because the disease is inherited.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-28", "d_text": "Am J Physiol 275(6 Pt 1): G1274–81.\n- 32. Schaedel C, de Monestrol I, Hjelte L, Johannesson M, Kornfält R, et al. (2002) Predictors of deterioration of lung function in cystic fibrosis. Pediatr Pulmonol 33: 483–491. doi: 10.1002/ppul.10100\n- 33. Cleveland RH, Zurakowski D, Slattery D, Colin AA (2009) Cystic fibrosis genotype and assessing rates of decline in pulmonary status. Radiology 253: 813–821. doi: 10.1148/radiol.2533090418\n- 34. Pratha VS, Hogan DL, Martensson BA, Bernard J, Zhou R, et al. (2000) Identification of transport abnormalities in duodenal mucosa and duodenal enterocytes from patients with cystic fibrosis. Gastroenterology 118: 1051–1060. doi: 10.1016/s0016-5085(00)70358-1\n- 35. Mall M, Kreda SM, Mengos A, Jensen TJ, Hirtz S, et al. (2004) The DF508 Mutation Results in Loss of CFTR Function and Mature Protein in Native Human Colon. Gastroenterology 126: 32–41. doi: 10.1053/j.gastro.2003.10.049\n- 36. Knowles M, Gatzy J, Boucher R (1981) Increased bioelectric potential difference across respiratory epithelia in cystic fibrosis. N Engl J Med 305: 1489–1495. doi: 10.1056/nejm198112173052502\n- 37. Ho LP, Samways JM, Porteous DJ, Dorin JR, Carothers A, et al. (1997) Correlation between nasal potential difference measurements, genotype and clinical condition in patients with cystic fibrosis. Eur Respir J 10: 2018–2022. doi: 10.1183/09031936.97.10092018\n- 38.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "- By Cynthia Weiss\nMayo Clinic Q and A: Cystic fibrosis and COVID-19\nDEAR MAYO CLINIC: My cousin has cystic fibrosis. After graduating from college last December, she moved across the country to take a new job. Now she lives with me. 
I have always been worried about her, but I am more concerned now that COVID-19 cases are rising in our area. Although I know coughing is common with her condition, every time I hear her cough, I worry it's COVID-19. Complicating the situation, since moving she has not found a health care provider who is comfortable treating cystic fibrosis. How can I help her remain well?\nANSWER: Cystic fibrosis is a genetic disease that causes abnormalities of mucus, sweat and digestive juices. Common signs include wheezing, recurrent lung or sinus infections, and difficulty gaining weight. It can lead to damage to the lungs, digestive system and other organs in the body.\nIt is estimated that 30,000 people are living with cystic fibrosis in the U.S. But, like your cousin, most of these people are usually able to attend school and work.\nAlthough your cousin may be accustomed to her disease, it could be frightening for you, especially since cough and shortness of breath are signs of COVID-19 infection. She is lucky to have a family member who is concerned for her safety.\nEncourage your cousin to identify an accredited cystic fibrosis center to establish care. Since cystic fibrosis is a disease that affects many organ systems, having access to a multidisciplinary group of health care providers who are well-versed in the disease can ensure she will have access to the latest therapies and best practices in treatment. This includes being evaluated for a liver or lung transplant, should her disease ever progress to that point.\nOver the past few years, new medications have been developed that can dramatically improve lung function and overall well-being for most patients with cystic fibrosis. Also, these cumulative medical advances, coupled with attentive multidisciplinary care, have helped increase the life expectancy for patients with cystic fibrosis to an all-time high. 
This trend is anticipated to continue, and with consistent and appropriate care, your cousin can have a long, healthy life ahead of her.\nAlso, a cystic fibrosis center can help your cousin address any concerns, especially as the COVID-19 pandemic continues.", "score": 13.897358463981183, "rank": 90}, {"document_id": "doc-::chunk-2", "d_text": "There are also some people who have been diagnosed with CF because of genetic test results, but who have very few symptoms of CF.\nOver time, people with CF can have chronic health issues such as:\n- Repeated bouts of bronchitis or pneumonias leading to permanent lung damage\n- Collapsed lung, bleeding from the lungs, or lung failure\n- Poor growth and poor weight gain due to malnutrition\n- Chronic diarrhea\n- Fatigue and anemia\n- Males are usually sterile due to blocked or absent vas deferens (the tubes carrying the sperm from the testes to the penis). There are now techniques which allow some men with CF to father their own children.\n- A small number of people with CF develop high blood sugar and may need insulin\n- Some people with CF have bouts of pancreatitis, a painful inflammation of the pancreas\n- Some people with CF develop liver disease over time\n- Bone thinning, which can lead to osteoporosis, is seen in some people with CF\n- Lung infection or permanent damage to the lungs is the main cause of death in people with CF\nIf treated appropriately, CF does not affect intelligence or the ability to learn. People with CF can attend regular school and should be able to achieve the same level of education as people who do not have CF. Many people with CF have finished college and have full-time jobs.\nIf left untreated, CF can cause serious chronic health effects that could lead to early death. Many of the symptoms of CF can be controlled with proper medication and treatment. 
It is important that you see your doctor and follow a treatment plan tailored for your child’s needs.\nWhat is the treatment for CF?\nChildren and adults with CF are usually treated by a team of doctors and other health care providers who have experience with cystic fibrosis. These teams are often located in special CF treatment centers. There are many CF treatment centers located throughout the US. You can find a center in your area through the Cystic Fibrosis Foundation (www.cff.org)\nThe main goal of treatment is to keep your child’s lungs clear of thick mucus and to provide your child with the correct amount of calories and nutrients to keep him or her healthy.\nCertain treatments may be advised for some children but not others. When necessary, treatment is usually needed throughout life. The following are treatments sometimes suggested for children with CF:\n1.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-23", "d_text": "Mean and standard error of the mean values for clinical parameters among the 3 groups of individuals analyzed in this study: Classic CF; Non-Classic CF and Non-CF.\nDiscriminant Functions: Eigenvalues and Wilk’s Lambda statistics.\nCanonical Discriminant Functions used in the Analysis.\nCl- secretion in Rectal Biopsies.\nAuthors are grateful to all CF patients and healthy volunteers that kindly agree to participate in this study. Authors would also like to thank Célia Linhares (FibroCis) for coordinating patients’ contacts, Jaqueline for technical assistance with sweat test, Drs. Luciana Meirelles and Rita Carvalho for histological evaluation of the biopsies.\nConceived and designed the experiments: MS MFS ASR CSB AFR KK MDA. Performed the experiments: MS MFS AMV LCB ASR VF MAR IU AK SRC. Analyzed the data: MS. Contributed reagents/materials/analysis tools: FAM LS JDR CSB KK AFR MDA. Wrote the paper: MS MDA. Interpreted data: MFS ASR KK AFR MDA. Revised the article: MFS AFR KK.\n- 1. 
Rommens JM, Ianuzzi MC, Kerem B, Drumm M, Melmer G, et al. (1989) Identification of the cystic fibrosis gene: chromosome walking and jumping. Science 245: 1059–1065. doi: 10.1126/science.2772657\n- 2. Kerem B, Rommens J, Buchanan J, Markiewicz D, Cox T, et al. (1989) Identification of the cystic fibrosis gene: genetic analysis. Science 245: 1073–1080. doi: 10.1126/science.2570460\n- 3. Riordan JR, Rommens JM, Kerem B, Alon N, Rozmahel R, et al. (1989) Identification of the cystic fibrosis gene: cloning and characterization of complementary DNA. Science 245: 1066–1073. doi: 10.1126/science.2475911\n- 4. Rich DP, Anderson MP, Gregory RJ, Cheng SH, Paul S, et al.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "One of the first symptoms of bronchiectasis is a continuous, persistent cough that can last for months, even years. This cough is the body's self-defense against the enormous amount of sputum produced, which may contain mucus, other particles and, if an infection is present, pus.1\nOther symptoms may include changes in the smell of the breath, paleness, shortness of breath that worsens with exercise, wheezing, chest pain, and clubbing of the fingertips. Clubbed fingertips result from tissue beneath the nail thickening, so that the fingertips become rounded and bulbous.\nMore serious symptoms can develop with time, possibly as the result of a serious lung infection. 
These may include a bluish skin color and a blue tint to the nails and the lips, confusion, rapid breathing, and running a high temperature (38C /100.4F).2\nWhen listening to the lungs with a stethoscope, a doctor will hear abnormal lung sounds such as a distinctive crackling noise when breathing in and out.3 These sounds reflect a buildup of fluid, mucus or pus in the small airways of the lungs.\nChildren with bronchiectasis may not grow at the same rate as other children, and/or may lose weight.\nAs the condition develops, or an infection sets in, people usually feel very tired and blood may be present in the sputum.\nSigns of an infection include a more severe cough, with more mucus that is greener in color than usual and often with an unpleasant smell. People may also feel more tired and feverish, cough up blood, and experience sharp chest pain that worsens while breathing.\nIf lung infections are not treated on time and become severe, they may require hospitalization.\nBronchiectasis News Today is strictly a news and information website about the disease. It does not provide medical advice, diagnosis or treatment. This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or another qualified health provider with any questions you may have regarding a medical condition. Never disregard professional medical advice or delay in seeking it because of something you have read on this website.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-9", "d_text": "Let us briefly list the symptoms of pulmonary hypertension;\n- Feeling powerless\n- Shortness of breath\n- Chest Pain\n- Squeezing in the chest\n- Skin and lips turn bluish\n- Oedema of the ankles\n- Liver growth\n- Abdominal swelling\nDiagnosis of Pulmonary Hypertension\nPulmonary hypertension, which is usually diagnosed in the late stages of the disease, shows the same symptoms as many diseases. 
Therefore, it is one of the most difficult diseases to diagnose. As in all diseases, early diagnosis is very important in pulmonary hypertension. Because pulmonary hypertension shows the same symptoms as many diseases, a very detailed examination is required. As a result of the detailed investigations and examinations, the correct diagnosis should be made and treatment should be started as soon as possible.\nExaminations for Diagnosis of Pulmonary Hypertension\n- Blood analysis\n- Transesophageal echocardiography\n- Lung tomography\n- Pulmonary function test\n- Lung ventilation-perfusion scintigraphy\n- 6-minute walk test\n- Heart Catheter\n- Pulmonary arteriography\nPulmonary Hypertension Treatment\nThere is no definitive treatment for pulmonary hypertension. However, the treatments applied aim to minimize the patient's complaints and to prevent the progression of the disease. If another disease is contributing to the development of pulmonary hypertension, that disease is treated first.\nIt is very important that the patient protects himself against respiratory infection. Therefore, it is recommended that patients with pulmonary hypertension stay away from people suffering from respiratory tract infection. Drugs used in the treatment process are aimed at preventing problems such as blood clotting and heart contractions. However, if the patient does not respond to all these medications and his condition worsens, heart or lung transplantation is the last resort.\nWhat is it?\nCystic fibrosis (CF) is a genetic disease that is caused by the transmission of defective genes from the mother and father to the child. It is not contagious, nor is it an illness acquired later in life.\nSince the symptoms are similar to common disorders of childhood, diagnosis may be delayed. If the disease is not managed well, mild symptoms may give way to organ involvement and may even result in death. 
Although lung involvement is an important factor determining the prognosis of the disease, malabsorption due to pancreatic involvement is also one of the factors affecting survival.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "Hannah Buck, 6 and Hannah Moore, 2, both of St. Joseph, were diagnosed with cystic fibrosis, a chronic, progressive disease.\n“Hannah is actually unaware of her prognosis,” said Vicki Buck, Hannah’s mother. “To her, it’s a pesky thing she lives with. My concern is how we deal with talking about the fact that it is life shortening, and we have to think about that.”\nFor two-year-old Hannah Moore, the realization that she’s different from other children won’t come for some time.\n“She has to take medicine every time she eats, she takes breathing and physical therapy treatments twice a day, and she’s handling it fine,” said Sarah Moore, Hannah’s mom. “I don’t think she’ll know she has it until she’s in school and realizes everybody’s not taking those pills and everyone doesn’t have to stop playing and do their treatment.”\nCF is a genetic disease that affects about 30,000 children and adults in the United States. Like most parents, the Bucks and the Moores could not find anyone with CF in their family’s past.\n“Eighty percent of the time, people experience what we did,” Buck said. “They find out their child has CF, and they have no known cases in their family history. In some families, they have more than one child with CF. We have another daughter, and she tested negative for CF, but she could be a carrier. My husband and I are obviously carriers.”\nBuck explains that it’s the luck of the draw when it comes to passing on an abnormal CF gene. CF occurs when children inherit two abnormal copies of the gene, one from each parent.\n“One in 31 Americans is a symptom-less carrier,” she said. 
“If they started a family with another carrier, every child would have a one in four chance of having CF, a one in four chance of not having the CF gene, and a 50 percent chance of being a carrier.”\nThe disease has a number of symptoms. The most common is persistent coughing and breathing problems caused by the body’s production of thick, sticky mucus that clogs the lungs and harbors bacteria. The mucus also affects the pancreas, preventing enzymes from digesting food. Because of this, CF children are usually underweight.", "score": 11.600539066098397, "rank": 95}, {"document_id": "doc-::chunk-1", "d_text": "“However, a lot of kids get missed because there are over 1,700 genetic mutations that could prompt CF, and most genetic tests only screen for the most common mutations.”\nIn order to have CF, a person needs to have inherited two copies of the defective gene—one copy from each parent—and both parents must have at least one copy of the defective gene.\nRELATED: 13 Worst Jobs for Your Lungs\nIf people only have one copy of the defective CF gene, they are a carrier and can pass the disease onto their children, but they won’t develop it themselves. About 1 in 30 people in the United States are carriers, according to the American Lung Association.\n“I’ve had a couple of patients who want to have children,” Dr. Mukadam says. If someone has known CF, doctors will screen their partner to see if they are a carrier for the disease. \"If their partner doesn’t have a known CF gene mutation on carrier screening, their child will have a very low likelihood of having severe CF,\" he says. \"However, their children will be carriers of the disease.”\nCystic fibrosis symptoms\nCF can cause a wide range of symptoms, including breathing difficulties like wheezing and shortness of breath. 
According to the Cystic Fibrosis Foundation, people with CF can also have salty-tasting skin, as a result of sodium and chloride imbalances in the body.\nOther symptoms of CF include chronic coughing and chest colds (which can include phlegm), frequent lung infections (including pneumonia or bronchitis), and digestive issues—like greasy and bulky stools, constipation, or other types of difficulty with bowel movements.\nChildren with CF may not meet growth or weight-gain milestones, despite having a good appetite. And most adult males with CF (about 98%) are infertile, meaning they’re not able to produce children without assisted reproductive technology.\nHow doctors treat CF right now\nThere is no cure for cystic fibrosis, but daily treatments can help patients manage symptoms and delay long-term damage. However, these treatments are extremely time-consuming: They often include taking 25 to 35 pills a day, as well as inhaled medications that relax and clear the airways.\n“Patients might be on inhaled antibiotics that can take 45 minutes to one hour of treatments twice a day,” says Dr. Mukadam.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-24", "d_text": "(1990) Expression of cystic fibrosis transmembrane conductance regulator corrects defective chloride channel regulation in cystic fibrosis airway epithelial cells. Nature 347: 358–363. doi: 10.1038/347358a0\n- 5. Rowe SM, Miller S, Sorscher EJ (2005) Cystic Fibrosis. N Engl J Med 352: 1992–2001. doi: 10.1056/nejmra043184\n- 6. Collins FS (1992) Cystic fibrosis: molecular biology and therapeutic implications. Science 256: 774–779. doi: 10.1126/science.1375392\n- 7. Rosenstein BJ, Cutting GR (1998) The diagnosis of cystic fibrosis: a consensus statement. Cystic Fibrosis Foundation Consensus Panel. J Pediatrics 132: 589–595. doi: 10.1016/s0022-3476(98)70344-0\n- 8. Farrell PM, Rosenstein BJ, White TB, Accurso FJ, Castellani C, et al. 
(2008) Guidelines for Diagnosis of Cystic Fibrosis in Newborns through Older Adults: Cystic Fibrosis Foundation Consensus Report. J Pediatrics 153: S4–S14. doi: 10.1016/j.jpeds.2008.05.005\n- 9. Bobadilla JL, Macek Jr M, Fine JP, Farrell PM (2002) Cystic fibrosis: a worldwide analysis of CFTR mutations-correlation with incidence data and application to screening. Hum Mutat 19: 575–606. doi: 10.1002/humu.10041\n- 10. De BoeK, Wilschanski M, Castellani C, Taylor C, Cuppens H, et al. (2006) Cystic fibrosis: terminology and diagnostic algorithms. Thorax 61: 627–635. doi: 10.1136/thx.2005.043539\n- 11. Boyle MP (2003) Nonclassic cystic fibrosis and CFTR-related diseases.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-25", "d_text": "Curr Opin Pulm Med 9: 498–503. doi: 10.1097/00063198-200311000-00009\n- 12. Noone PG, Knowles MR (2001) “CFTR-opathies”: disease phenotypes associated with cystic fibrosis transmembrane regulator gene mutations. Respir Res 2: 328–332.\n- 13. Paranjape SM, Zeitlin PL (2008) Atypical cystic fibrosis and CFTR-related diseases. Clin Rev Allergy Immunol 35: 116–123. doi: 10.1007/s12016-008-8083-0\n- 14. Bombieri C, Claustres M, De Boeck K, Derichs N, Dodge J, et al. (2011) Recommendations for the classification of diseases as CFTR-related disorders. J Cyst Fibros 10 (S2): S86–S102. doi: 10.1016/s1569-1993(11)60014-3\n- 15. Parad RB, Comeau AM (2005) Diagnostic Dilemmas resulting from the Immunoreactive Trypsinogen/DNA Cystic Fibrosis Newborn Screening Algorithm. Pediatrics 147: S78–S82. doi: 10.1016/j.jpeds.2005.08.017\n- 16. Taylor CJ, Hardcastle J, Southern KW (2009) Physiological Measurements Confirming the Diagnosis of Cystic Fibrosis: the Sweat Test and Measurements of Transepithelial Potential Difference. Paediatr Respir Rev 10: 220–226. doi: 10.1016/j.prrv.2009.05.002\n- 17. Mall M, Hirtz S, Gonska T, Kunzelmann K (2004) Assessment of CFTR function in rectal biopsies for the diagnosis of cystic fibrosis. 
J Cyst Fibros 3 (S2): 165–169. doi: 10.1016/j.jcf.2004.05.035\n- 18. Hirtz S, Gonska T, Seydewitz HH, Thomas J, Greiner P, et al.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-1", "d_text": "Type 1 diabetes is the form most commonly found in children and accounts for almost 95 percent of all cases. However, this percentage may be misleading, as childhood diabetes is not a very common disease. While the origin of diabetes is still unclear, it most likely results from a combination of environmental triggers and a genetic predisposition to high blood sugar.\nSymptoms of Childhood Diabetes\n- Excessive thirst\n- Frequent urination\n- Weight loss\n- Stomach pains\n- Behavioral problems\nTreatment for Childhood Diabetes\nBecause childhood diabetes requires specialized care, most children are treated in a hospital setting. Insulin therapy is the most effective form of treatment for Type 1 diabetes in kids. Most children require both daily and nightly insulin injections. Young children typically start with just one injection per day, while older children have the option of forgoing injections in place of a continuous insulin pump.\nCystic fibrosis is a genetic disorder that primarily affects the lungs and digestive tract. Because of CF’s effect on the body’s respiratory system, children with this disease are prone to chronic lung infections. The organs and cavities in the human body are lined with epithelial cells. In a healthy body, the lungs and digestive organs are covered with a thin coating of fluid and mucus, meant to trap and expel any germs that enter the lungs. However, in children with CF, the diseased gene causes the body’s epithelial cells to create defective proteins called CFTR. Because of these defective proteins, the epithelial cells are unable to control the passage of chloride through the cell membranes. 
As a result, the mucus coating the lungs and digestive organs becomes thick and sticky, trapping germs instead of clearing them out. These trapped germs result in the repeated lung infections experienced by children with CF. This thick mucus also obstructs the pancreas, preventing the passage of vital digestive enzymes into the intestines. Without these enzymes, the body cannot break down and absorb the nutrients in food, causing children with CF to lose weight regardless of their diet.", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-0", "d_text": "Exploring the need for mutation-specific treatments in cystic fibrosis\nPosted: 1 July 2016\nHere, Caroline Richards, Editor of European Pharmaceutical Review, discusses cystic fibrosis and the treatment options that are currently available or in development…\nDespite the fact that over 1,900 mutations of the cystic fibrosis transmembrane conductance regulator (CFTR) are known to cause cystic fibrosis (CF), a debilitating inherited genetic disorder that progressively reduces organ function over time, only two disease modifying treatment options exist – Kalydeco (ivacaftor) and Orkambi (a combination of ivacaftor and lumacaftor), both developed by Vertex Pharmaceuticals.\nAll other treatments on the market address the symptoms of the disease. In addition, Vertex’s drugs are only effective in CF patients with particular mutation types. There is, therefore, a substantial need for new therapies which target more of the mutations involved in CF. Meanwhile, specific CF symptoms can be linked to certain mutations, with the severity and types of symptoms varying hugely according to the specific defects with the CFTR protein involved. For example, nonsense mutations give rise to particularly severe phenotypes; people who have these mutations are unable to produce any CFTR protein, which results in worse symptoms than those who produce partial amounts. 
Currently, relatively little is known about the natural history of nonsense mutation CF (nmCF).\nIntroduction to CF mutations\nThe CFTR gene, found on human chromosome 7, is a large gene that encodes a protein of 1,480 amino acids, which is responsible for transporting chloride ions through the plasma membrane of cells and thus regulating the flow of water in tissues. It is the impairment of this process, or absence of the protein entirely, which results in the classic symptoms seen in CF: extremely salty-tasting skin, persistent coughing, wheezing or shortness of breath, an excessive appetite but poor weight gain, and greasy, bulky stools. Symptoms result from the lack of water movement across membranes, causing a build-up of mucus in tissues. Much of the morbidity and mortality in CF is down to the occurrence of pulmonary exacerbations, which typically develop in CF patients from early age.\nAs an autosomal recessive disease, mutations must arise in both copies of the CFTR gene for CF to occur.", "score": 8.086131989696522, "rank": 100}]} {"qid": 27, "question_text": "What safety precaution should be taken when hanging a curtain rod above a bed in earthquake-prone areas?", "rank": [{"document_id": "doc-::chunk-1", "d_text": "Slide the center bracket and ring onto the rod. Install the remaining half of the curtain or ring clips. Slide the end brackets onto the rod and install the rod finials onto the ends of the rods.\nAsk your assistant to hold the curtain rod against the ceiling. Use the screws that came with the anchors to secure the brackets to the ceiling.\nTighten the setscrews on the side of the drapery rod brackets to keep the rods from moving. Repeat the steps to hang the rod on the opposite side of the bed.\nAsk your assistant to hold the curtain rod between the two mounted rods at the foot of the bed. Mark the mounting bracket locations as you did with the brackets on each side of the bed. 
Use the anchors and repeat the steps to mount the rod at the foot of the bed.\nHang curtains from the drapery ring clips if applicable.\nThings You Will Need\n- Tape measure\n- Curtain rods\n- Ceiling-mount drapery rod brackets\n- Carpenter’s level\n- Number two Phillips screwdriver bit\n- Cordless drill\n- Self-tapping zinc-plated hollow-wall anchors\n- Drapery ring clips\n- Curtain rod finials", "score": 52.228930014014495, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "DO’S & DON’TS IN EARTHQUAKE, FIRE, CYCLONE\nEarthquakes are caused by natural tectonic interactions within the earth’s crust and are global phenomena. They strike suddenly, unleashing enormous energy with virtually no warning, and are unpredictable. Therefore, preventive measures for ensuring the safety of buildings, structures, communication facilities, water supply lines, electricity, and life are of utmost priority.\nA.1 PRE-DISASTER: PREVENTIVE MEASURES\nWhat should people do before an earthquake? (Awareness)\n· Keep in mind that most problems from a severe earthquake result from falling objects and debris (partial or complete collapse of building, ceiling plaster, light fixtures, etc.) rather than from ground movement. Ground movements do occur when the ground is susceptible to failure by liquefaction or landsliding.\n· Shelves and bookcases, etc., should be firmly fixed to the walls. Remove heavy objects from shelves above head level. Do not hang plants in heavy pots that could swing free of hooks. Bookshelves, cabinets, or wall decorations can topple over and fall.\n· Locate beds away from the windows and heavy objects that could fall. Don’t hang mirrors or picture frames directly over beds and benches.\n· Secure appliances that could move, causing rupture of gas or electrical lines. 
Know the location of master switches and shut-off valves.\n· Make sure that overhead lighting fixtures are well secured to the ceiling and move heavy unstable objects away from exit routes.\n· Replace glass bottles with plastic containers or move them to the lowest shelves.\n· Be aware that with a severe earthquake, all services, such as electricity and water, will probably be down. Emergency services may be extremely limited for a few days.\n· Store or have easy access to emergency supplies (water, long-lasting, ready-to-eat food, first aid kit, medicine, tools, portable radio, flashlight, fresh batteries, blankets, warm jacket, fire extinguisher) in a secure place at your residence, or in your car.\nA.2 What to Do during an Earthquake\nStay as safe as possible during an earthquake. Be aware that some earthquakes are actually foreshocks and a larger earthquake might occur. Minimize your movements to a few steps to a nearby safe place and, if you are indoors, stay there until the shaking has stopped and you are sure exiting is safe.", "score": 47.11597110141349, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "A simple blog which shares safety measures to be taken in case of any form of emergency\nSafety tips on how to survive an earthquake threat before, during and after the earthquake event.\nAs unpredictable events of nature, earthquakes can cause much damage and destruction and much loss of life. During an earthquake, injury and death to persons are usually caused by falling objects and collapsing buildings. In order to minimize the impact that earthquakes can have, it is most important that safety precautions are observed and adhered to. The following safety precautions take into consideration:\na. Before an earthquake.\nb. During an earthquake.\nc. After an earthquake.\nBEFORE AN EARTHQUAKE\n1. Potential earthquake hazards in the home and workplace should be removed and corrected. 
Top-heavy furniture and objects, such as bookcases and storage cabinets, should be fastened to the wall and the largest and heaviest objects placed on lower shelves.\n2. Supplies of food and water, a flashlight, a first-aid kit and a battery-operated radio should be set aside for use in emergencies.\n3. One or more family members should have a working knowledge of first-aid measures because medical facilities are nearly always overloaded during an emergency or disaster, or may, themselves, be damaged beyond use.\n4. All family members should know what to do to avoid injury and panic. They should know how to turn off the electricity, water and gas. They should know the locations of the main switches and shut-off valves. This is particularly important for teenagers who are likely to be alone with smaller children.\nDURING AN EARTHQUAKE\n1. The most important thing to do during an earthquake is to remain CALM. If you can do so, then you are less likely to be injured. Also, those around you will have a greater tendency to be calm if you are calm.\n2. Make no moves and take no action without thinking about the possible consequences. Any irrational movement may be an injurious one.\n3. If you are inside, stay there. Stand in a doorway or crouch under a desk or table, away from windows or glass fixtures.\n4. If you are outside, stay there. Stay away from objects such as light poles, buildings, trees and telephone and electric wires, which could fall and injure you.\n5. If you are in an automobile, drive away from underpasses/overpasses, and stop in the safest place possible and stay there.", "score": 45.469965330974645, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Guidelines for Earthquake Readiness\nEarthquake readiness is crucial if you live in populated areas, because earthquakes can cause damage to your property, and injury or death on a large scale. 
Every region is at risk, and therefore you need to know what to do to be prepared.\nAnchor Furniture and Large Objects\nKeeping things pinned down is important, so that furniture and other large, heavy objects don’t roll around the room or fall, causing injury. Use fasteners and braces to secure items to the walls or floors. Bolt down heavy appliances such as your refrigerator or water heater. Anchor large fixtures on the ceiling, such as lights.\nYou can prepare in advance by anchoring furniture and large objects. You won’t get any warning before an earthquake, and therefore it’s important to do the work now.\nPractice Readiness Drills\nDo you and your family know what to do in an earthquake? Earthquakes happen so suddenly that you won’t have time to yell instructions. Prepare now by practicing what to do when one strikes. Your drill should include:\n- What rooms to run to\n- How to take cover under a sturdy table\n- Dropping to the ground, rolling against an inside wall and crouching in the corner, and covering your body\n- A warning to stay inside until it’s over\nDon’t just know what to do. Schedule time monthly or quarterly to practice what you’ll do if there’s an earthquake.\nStore Food and Water\nIt may take a while for things to return to normal after a severe earthquake. You don’t want to be stuck waiting for food and water. Store what you need ahead of time to last two or more weeks after an earthquake. This includes:\n- Canned food\n- Long-term canned food (#10 cans)\n- Ready-to-eat meals or MREs\n- Energy food bars\n- Vitamins and supplements\nIt may become necessary to stay home during the aftermath of the earthquake, so store extra food and water for you and your family, and even neighbors.\nShut off Gas and Water\nYou’ll need a quick and easy way to shut off your water and gas immediately after the earthquake. Consider buying an emergency gas and water shut-off tool that you can use to cut off your utilities.
You can fit one in your emergency survival kit.", "score": 44.87358042607093, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "The roof design on medieval buildings didn’t do much to keep out pests, so upright posts on canopy beds were used to drape curtains across the top of the posts and down each side. Over time, the curtains on the canopy began to be used more for privacy than for utility. Today, draping curtains around the bed can add a touch of drama to the room, but you do not need to purchase a canopy bed to create this effect. You can hang curtains from the ceiling to surround your bed with soft drapes for a bit of romance.\nMeasure the length and width of your bed. Include the headboard and footboard in the measurements if applicable.\nPurchase curtain rods to match your bed’s length and width measurements. You need a rod for each side of the bed and one for the foot of the bed. The rods can be slightly longer and wider than your bed if you want a bit of space between the curtains and the bed.\nBuy ceiling-mount drapery rod brackets to fit your curtain rods. The standard bed length can be between 75 and 84 inches, so purchase three rod brackets for each side of the bed. For queen- and king-size beds, purchase three brackets for the width.\nPlace marks on the ceiling where you want the curtain rods to hang on each side of the bed. Ask an assistant to hold the rods against the ceiling. Use a carpenter’s level to ensure the rods are straight and will hang parallel to the length of the bed. Mark the center of the rod on the ceiling. Place a mark at each end of the rod. Ensure the end marks are 4 inches from the rod ends.\nCheck the distance between each mounting point to ensure they are equal distances apart. This ensures the rods will hang straight.
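The bracket-layout arithmetic in the steps above (three brackets per rod: one at the centre and one 4 inches in from each rod end, with equal spacing between the marks) can be sketched in a few lines. This is only an illustrative sketch; the function name `bracket_marks` and the `overhang_in` parameter are my own labels, not part of the original instructions.

```python
def bracket_marks(bed_side_in, overhang_in=0, end_inset_in=4):
    """Positions (inches from one rod end) for three ceiling brackets:
    one 4 inches in from each rod end, plus one at the centre."""
    rod_len = bed_side_in + overhang_in  # the rod may be slightly longer than the bed
    marks = [end_inset_in, rod_len / 2, rod_len - end_inset_in]
    # Equal gaps between neighbouring marks mean the rod will hang straight.
    gaps = [b - a for a, b in zip(marks, marks[1:])]
    assert gaps[0] == gaps[1], "marks are not evenly spaced"
    return marks

# Example: an 80-inch bed side with 2 inches of overhang at each end.
print(bracket_marks(80, overhang_in=4))  # [4, 42.0, 80]
```

The same helper covers the foot-of-bed rod by passing the bed width instead of the length.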
Check the distance from the back wall to each mounting point to ensure the brackets sit at the same points on each side of the bed.\nHold the ceiling-mount drapery rod brackets against the ceiling at each mounting point. Mark the bracket mounting holes on the ceiling.\nInsert a number-two Phillips screwdriver bit into a cordless drill chuck. Insert the end of the bit into the open end of a self-tapping zinc-plated hollow-wall anchor.\nScrew the anchor into the ceiling. If you hit a ceiling joist, push against the drill to screw the anchor into the joist.\nInstall half of the curtain or drapery ring clips onto the curtain rod.", "score": 43.75458533891018, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Oklahoma has had an increased number of earthquakes in recent years, including the record-breaking 5.6-magnitude earthquake that occurred on Saturday, November 5, 2011. To stay safe before, during and after an earthquake, take the following precautions:\nBefore an earthquake\n- Assemble an emergency preparedness kit for your home and your vehicle.\n- Have a family emergency plan and identify a safe place to take cover, such as under a sturdy table or desk.\n- Teach your family how to “Drop, Cover and Hold” during an earthquake.\n- Check for hazards inside or outside your home or office. Heavy objects and falling hazards such as bookcases, hanging picture frames and other items can be dangerous if they are unstable and not anchored securely to a wall or the floor.\n- Know emergency telephone numbers.\n- Contact your insurance agent to review existing policies and to inquire about earthquake insurance.\n- Sign up for Earthquake Notifications on the USGS site, and learn about other products and services they offer.\nDuring an earthquake\n- “Drop, Cover and Hold” - DROP to the floor; take COVER under a sturdy table or other piece of furniture.
If there isn’t a table or desk near you, seek cover against an interior wall and protect your head and neck with your arms. HOLD ON until the shaking stops.\n- Stay away from glass, bookshelves, mirrors or other items that could fall.\n- If outside: stand in an open area away from underpasses and overpasses, buildings, trees, and telephone and electrical lines.\n- If on the road: drive away from underpasses and overpasses; stop in a safe area; stay in your vehicle.\nAfter an earthquake\n- Check for injuries and provide first aid if necessary.\n- Do a safety check: check for gas and water leaks, downed power lines and electrical shorts. Turn off the appropriate utilities. If you shut off the main gas valve, do not turn it back on yourself; wait for the gas company to check for leaks and make repairs.\n- Turn on the radio and listen for instructions on safety or recovery actions.\n- Use the telephone for emergencies only.\n- When safe, follow your family emergency plan.\n- Be cautious when opening cabinets.\n- Stay away from damaged areas.\n- Be prepared for aftershocks.\n- If you are able to, log onto the USGS site and fill out a “Did you feel it?” form.", "score": 43.36038591085817, "rank": 6}, {"document_id": "doc-::chunk-1", "d_text": "Carefully and tightly staple the fabric to the back and you’re ready to pin up pictures, notes and your children’s artwork. It is also a great place to post menus and shopping lists.\nFor an easy curtain-fabric “headboard”, mount a drapery rod (the same width as the bed) several inches above the top of the bed and simply hang your curtain. It adds softness to the room and also gives the illusion of a window behind the bed. For an instant “real” custom headboard, stretch fabric pieces around a large canvas the width of the bed. Pull the material taut, staple it firmly at the back, then set it up behind the bed.\nThe size of the curtain will affect several aspects of its design.
The longer the curtains, the more weight the curtain rod must be able to support. Longer curtains are typically formal, but you can find long informal curtains as well. Short curtains are typically informal; however, they can be used as accents to more elaborate formal curtains.", "score": 42.367528496039014, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "How to safely hang curtains\nBare windows can look super chic in a minimalist interior - but we think it’s best left to the magazines. Curtains are great: they make your home cosier, offer much-valued privacy (of course), and can help you introduce wonderful textile patterns and colours into your space! If you want to upgrade your boring blinds, or you’ve just moved into a new home - let us guide you through the types of rods, as well as a few tips and tricks for hanging!\nHang them well\nRight from the start, you should know what kind of curtain you want to hang. Will it be made of a lightweight fabric such as muslin, cotton or linen? Or do you want a heavy velvet curtain for a luxury boudoir vibe? Well, whatever you choose - make sure to think about your curtain rod. It will need to be able to take the weight of whatever you put on it. Also worth thinking about is whether you'll be pulling the curtains every day, or whether they'll serve mainly as decoration.\nThen you have a choice:\n- Conventional rods - where the rod goes through the curtain itself. Get them in iron, wood, PVC, acrylic… the possibilities are endless! We recommend iron or wood for a sophisticated, and sturdy, finish.\n- Traverse rods - this is when the curtain is clipped below the rod, rather than going through it. Looks super stylish, and you can choose from an interesting variety of clip styles!\n- Dauphine rods - flat-faced, and therefore hidden from view. More suited to lightweight materials.
Good for curved windows!\n- Electrical rods - yes, thanks to technology, you don’t even need to get out of bed to shut these curtains. Neat!\nIt’s always best to get experienced people to fit the rod, as with anything. We don’t want it wonky, or falling down after two weeks! If you’re braving it alone, remember to get the proper measurements, as well as triple-checking that everything is level and that you’re drilling into studs in the walls! Stud-finders and good-quality electric drills are easy to get online, or you can borrow one from a handy friend.\nGet to work!\nWith the rod in place, all that's left to do is hang them!", "score": 40.29268431912554, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "Disaster Prevention Measures for Your Daily Life - 2\nPerhaps you've seen a horror movie where strange forces cause furniture to fly across the room or thump around, moving across the floor. However, the occurrence of a large-scale earthquake can take these kinds of things off the silver screen and into your home. The tremors of an earthquake can cause bedside tables to move about, utensils to fly out of the cupboard, furniture to shift and rattle about, and drawers to fly out and onto the ground. You don't have to be at the mercy of the tremors, however, as securing your furniture can reduce the level of damage caused during a quake. First, you can secure large furniture using L-shaped brackets. Make sure to get the right type, however, as the wrong bracket may not have any effect.
You can find a wide range of brackets and stoppers at your local home center, and you can even get help if you are unsure how to use them.\n· Refrain from placing tall furniture on carpets or tatami (hard surfaces are best)\n· Secure all large furniture to walls, the ceiling, etc., with L-shaped brackets\n· Make sure furniture or items standing on top of other furniture are secured\n· Use rubber stops to prevent items such as TVs and computers from shifting about\n· Place shatter-resistant film on glass surfaces", "score": 39.25843458276197, "rank": 9}, {"document_id": "doc-::chunk-2", "d_text": "If the ground begins to shake, you should:\n- If you are inside, stand in a doorway or under a sturdy piece of furniture such as a desk.\n- Try to stay away from windows, hanging objects, or anything that looks like it could fall over or off a wall.\n- If you are in a sturdy part of a building, stay where you are until the shaking stops.\n- Do not use an elevator after the shaking stops. Evacuate the building.\n- If you are outside when the shaking begins, move away from any trees, poles and buildings.", "score": 38.494320903241395, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "If you live in an apartment in an earthquake-prone area, it is a good precaution to learn where the safest spot is when a major earthquake hits. You should examine your home beforehand so you are aware of what dangers are present in each room, and so you can react more quickly when the time comes.\nAccording to research, the safest place to be in an apartment during an earthquake is under something sturdy, such as a table, that can protect you from falling objects and debris.\nTry to cover your face, head, and neck using one hand and use the other one to hold onto your shelter firmly. If the table or the desk has legs, hold on to them and don’t let go even if the table moves around.
(Source)\nNot all rooms in your home may have such a piece of furniture, so we’ll provide some basic guidelines to help you determine the best and worst places to seek shelter depending on the room you’re in when an earthquake begins.\nStay Inside Your Apartment\nAs scary as it may seem, try to stay inside your home during the earthquake, where it’s safer. Modern apartments are designed to withstand earthquakes, and the chances of them collapsing are very small.\nThe earthquake that struck Japan in 2011 was a magnitude-9.1 earthquake, the strongest recorded in the country’s history. Of the 13,135 known casualties, over 90% were the result of drowning from the tsunami, and around 4% were caused by injuries from being crushed. (Source)\nAs we mentioned before, the safest spot is under something sturdy, because you’re more likely to be injured or killed by debris. Parts of the exterior of the apartment, such as the walls and the roof, could collapse onto you if you were to run outside during the earthquake.\nStay in Your Bed If You’re Already There\nIf you wake up to an earthquake in the middle of the night, stay in your bed as long as you’re not in any danger of tall furniture, such as bookcases, falling on you. Use your pillow to protect your face and neck until the shaking stops. If there is a window or any glass nearby, cover your body with a blanket to help keep it safe.\nSee also our article: What To Do During An Earthquake In A Tall Building\nPlaces in Your Apartment to Avoid\nNow that you know where you should stay in your apartment, we’ll explain some of the places you should avoid and why.", "score": 38.16133996154656, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "Earthquakes in Ontario are rare but not unexpected.
Make sure you are prepared and know how to protect yourself and your family should an earthquake occur.\nWhen building your family emergency plan review and discuss these safety tips with your entire household to make sure everyone understands what to do.\nIf you are indoors:\n- Drop to the ground. Take cover by getting under a sturdy table or other piece of furniture. Hold on until the shaking stops. If there is not a table or desk near you, cover your face and head with your arms and crouch in an inside corner of the building.\n- Stay away from windows to avoid being injured by shattered glass.\n- Stay indoors until the shaking stops and you are sure it is safe to exit. If you must leave the building after the shaking stops, use stairs rather than an elevator in case of aftershocks, power outages or other damage.\n- Be aware that fire alarms and sprinkler systems frequently go off during an earthquake, even if there is no fire.\nIf you are outdoors:\n- Find a clear spot (away from buildings, power lines, trees, streetlights) and drop to the ground. Stay there until the ground stops shaking.\n- If you are near unstable slopes or cliffs, watch out for falling rocks and other debris.\n- Review and discuss these safety tips with your entire household to make sure everyone understands what to do in an earthquake.\n- Designate safe places in each room of your home, workplace and/or school. 
A safe place could be under a piece of sturdy furniture or against an interior wall away from windows, bookcases or tall furniture that could fall on you.\n- Practice drop, cover and hold with your entire household.\n- Bolt bookcases, china cabinets and other tall furniture to wall studs.\n- Hang heavy items such as pictures and mirrors away from beds, couches and places where people sleep or sit.\n- Brace overhead light fixtures.", "score": 37.086839672205265, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "An earthquake is the sudden, rapid shaking of the earth, caused by the breaking and shifting of subterranean rock. While earthquakes are sometimes believed to be a West Coast phenomenon, there are actually 45 states and territories throughout the United States that are at moderate to high risk for earthquakes including the New Madrid fault line in Central U.S. Since it is impossible to predict when an earthquake will occur, it is important that you and your family are prepared ahead of time.\nWhat to Do During an Earthquake\nStay as safe as possible during an earthquake. Be aware that some earthquakes are actually foreshocks and a larger earthquake might occur. Minimize your movements to a few steps to a nearby safe place and if you are indoors, stay there until the shaking has stopped and you are sure exiting is safe.\nDROP to the ground; take COVER by getting under a sturdy table or other piece of furniture; and HOLD ON until the shaking stops. If there isn’t a table or desk near you, cover your face and head with your arms and crouch in an inside corner of the building.\nStay away from glass, windows, outside doors and walls, and anything that could fall, such as lighting fixtures or furniture.\nStay in bed if you are there when the earthquake strikes. Hold on and protect your head with a pillow, unless you are under a heavy light fixture that could fall. 
In that case, move to the nearest safe place.\nUse a doorway for shelter only if it is in close proximity to you and if you know it is a strongly supported, load-bearing doorway.\nStay inside until the shaking stops and it is safe to go outside.\nBe aware that the electricity may go out or the sprinkler systems or fire alarms may turn on.\nDO NOT use the elevators.\nMove away from buildings, streetlights, and utility wires.\nOnce in the open, stay there until the shaking stops. The greatest danger exists directly outside buildings, at exits and alongside exterior walls. Ground movement during an earthquake is seldom the direct cause of death or injury. Most earthquake-related casualties result from collapsing walls, flying glass, and falling objects.\nIf in a moving vehicle\nStop as quickly as safety permits and stay in the vehicle. Avoid stopping near or under buildings, trees, overpasses, and utility wires.\nProceed cautiously once the earthquake has stopped. Avoid roads, bridges, or ramps that might have been damaged by the earthquake.\nIf trapped under debris\nDo not light a match.", "score": 36.362317366806806, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "- Pay attention to instructions from an instructor, Building Captain, or other authority.\nBEFORE AN EARTHQUAKE\n- Determine ahead of time the safest location for you to duck, cover and hold. Individuals in wheelchairs should not attempt to duck, cover, and hold. Rather, position yourself against a wall and away from windows, if possible, and lock the wheelchair brakes.\n- Look for items placed on shelves or elsewhere above you that are heavy and/or loose and might fall if there is shaking or a sharp jolt.
Secure such items, or report them to your instructor or other authority, and move to another area.\n- Note Emergency Exits.\n- Keep emergency exits clear of boxes and other items that may shift and fall and block your exit in an earthquake.\nIF AN EARTHQUAKE OCCURS\n- DUCK: Immediately duck down close to the floor and seek cover.\n- COVER: Take cover under a table, desk, other sturdy furniture, or stay close to an interior wall and cover your head and neck with your arms.\n- HOLD: If you are under something, hold onto it and be prepared to move with it.\n- Windows/Glass - Stay clear of windows and glass to reduce the risk of being injured by flying broken glass.\n- Remain in the HOLD position until all of the shaking has stopped!\n- Aftershocks are likely; be prepared to duck, cover, and hold again.\n- NOTE: Do not run for a doorway for protective cover. Ducking under a sturdy surface is safer. If the doorway is your only option, drop down to the floor and brace yourself so your back is to the doorjamb, where the door is hinged to the frame. Watch for moving objects.\nAFTER THE SHAKING STOPS\n- Keep calm. Do not go outdoors, unless told to do so by emergency officials, or unless there is immediate danger from fire, the smell of natural gas, or signs of severe structural damage. You are in greater danger outside from falling glass and debris.\n- Check the area for hazards, including broken glass and objects that might fall in an aftershock; consider such hazards in choosing your exit route.\n- Provide help to those who need assistance.\n- If trained, render first aid. If not trained, assist those rendering first aid.\n- Cooperate with your instructor or other emergency authority.
Check out this fun interactive to see the difference that these preparations can make!\n- Secure heavy furniture to the wall or floor\n- Put heavy items near floor level\n- Put strong catches on cupboard doors\n- Check that your chimney is secure\n- Secure your hot water cylinder\n- Check your house is well secured to its foundations\nYou should also consider that three days is the likely minimum time that it will take for help to reach you from outside, and you may wish to stock up on a larger supply of food and water.\nIn addition, don't forget to think about including other items that could be important for you, such as:\n- A first aid manual\n- Family documents such as birth and marriage certificates, insurance policies, drivers' licences, passports\n- Family photos", "score": 34.76471070597338, "rank": 15}, {"document_id": "doc-::chunk-1", "d_text": "A.2.1 If indoors\nDROP to the ground; take COVER by getting under a sturdy table or other piece of furniture; and HOLD ON until the shaking stops. If there isn’t a table or desk near you, cover your face and head with your arms and crouch in an inside corner of the building.\n· Stay away from glass, windows, outside doors and walls, and anything that could fall, such as lighting fixtures or furniture.\n· Stay in bed if you are there when the earthquake strikes. Hold on and protect your head with a pillow, unless you are under a heavy light fixture that could fall. In that case, move to the nearest safe place.\n· Use a doorway for shelter only if it is in close proximity to you and if you know it is a strongly supported, load-bearing doorway.\n· Stay inside until the shaking stops and it is safe to go outside.
Research has shown that most injuries occur when people inside buildings attempt to move to a different location inside the building or try to leave.\n· Be aware that the electricity may go out or the sprinkler systems or fire alarms may turn on.\n· DO NOT use the elevators, as the power may have failed.\n· Do not run for the staircase, since staircases often sustain more damage than level surfaces. Exits may also be affected or blocked.\n· Get under a desk or a sturdy table, or brace yourself within a narrow hallway or doorway, making sure that the door cannot close on your hands. If unable to move, cover your head and body with your arms, pillows, blankets, books, etc. to protect yourself from falling objects.\n· Avoid high bookcases, mirrors, cabinets or other furniture that might topple.\nA.2.2 If outdoors\n· Stay there. Stay in an open area until the tremors stop.\n· Move away from buildings, streetlights, and utility wires.\n· Once in the open, stay there until the shaking stops. The greatest danger exists directly outside buildings, at exits and alongside exterior walls. Many of the 120 fatalities from the 1933 Long Beach earthquake occurred when people ran outside of buildings only to be killed by falling debris from collapsing walls.\n· Most earthquake-related casualties result from collapsing walls, flying glass, and falling objects.\nA.2.3 If in a moving vehicle\n· Stop as quickly as safety permits and stay in the vehicle. Avoid stopping near or under buildings, trees, overpasses, and utility wires.", "score": 34.10933627383322, "rank": 16}, {"document_id": "doc-::chunk-1", "d_text": "Get down and touch the floor, looking for shelter under a table or bed that is strong enough, and wait until the shaking stops.\n3. If you are in bed, cover your head with a pillow. If at all possible, move under the bed or go to the nearest safe spot close to an inside corner of the building.\n4.
Stay away from glass, mirrors, items hanging on walls, or other items that could easily fall.\n5. Do not touch electrical switches or outlets because of the possibility of short circuits.\n6. Remain indoors until the shaking stops and it’s safe.\nWhat should we do if we are outside the home?\n1. Stay away from buildings, trees, street lights, electricity and telephone poles, billboards and so on.\n2. Try to find an open area and remain outside in a safe place until the shaking stops and it’s safe.\n3. If you’re in a car or on a motorcycle, immediately pull over and stop. Avoid stopping near or under trees, buildings, bridges, street lights, electricity and telephone poles, billboards and so on. Resume driving only after the shaking stops and it is safe, avoiding bridges or other structures that may have been damaged by the earthquake.\n4. Do not use the elevator if you are in an office building, shopping center, theater or other place that has an elevator.\n5. If trapped in an elevator, press all the buttons and get out when the elevator stops. If an intercom is available in the elevator, immediately contact the building manager.\n6. If you are on a train, hold on to a pole so you will not fall if the train stops suddenly. Do not panic; follow the instructions and information given by the railway officials.\nWhat should we do after the earthquake?\n1. Stay alert in case of a second earthquake; sometimes the second quake is more powerful than the first.\n2. Monitor the latest situation on television or radio, and listen for emergency-response instructions, if any.\n3. Use the phone for emergency calls only.\n4. Stay away from damaged or cracked areas.\n5. Stay away from locations that smell of hazardous liquids like gasoline, kerosene or other chemicals.\n6. Check for leaking gas; if there is a smell of gas, immediately get out of the house or building.\n7. Help injured victims, especially children, the elderly and the disabled.
Give first aid correctly.", "score": 33.83283880571462, "rank": 17}, {"document_id": "doc-::chunk-1", "d_text": "- Photographs of contents in every room.\n- Photographs of items of high value, such as jewelry, paintings and collectors’ items.\nThere are things you can do, even while an earthquake is happening, that will lower your chances of being injured:\n- Take cover under a heavy desk or table.\n- Inner walls or door frames are the least likely to collapse and might also shield against falling objects.\n- Stay away from glass and hanging objects, bookcases, china cabinets or other large furniture that could fall.\n- Use a blanket or pillow to shield your head and face from falling debris and broken glass.\n- If you are in the kitchen, quickly turn off the stove.\n- An earthquake may break gas, electrical and water lines.\n- If you smell gas: open windows, shut off the main gas valve, do not turn any electrical appliances or lights on or off, go outside, report the leak to authorities and do not re-enter the building until a utility official says it is safe.\n- If wiring is shorting out, shut off the electric current at the main box.\n- If water pipes are damaged, shut off the supply at the main valve.\nWhat should you have in your earthquake safety supply kit?\n- Keep at least a three-day supply of water per person (a gallon per person per day)\n- Store at least a three-day supply of non-perishable food\n- Have a first aid kit available\n- Battery-powered radio\n- Extra batteries\n- Infant formula and diapers\n- Fire extinguisher\n- Prescription medications\n- Pet food\n- Extra clothes\n- Bedding/sleeping bags\nTo learn more, visit: https://www.onedisclosure.com", "score": 33.2528587412163, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "When you feel an earthquake, duck under a desk or sturdy table. Stay away from windows, bookcases, file cabinets, heavy mirrors, hanging plants, and other heavy objects that could fall.
Watch out for falling plaster and ceiling tiles. Stay under cover until the shaking stops, and hold onto your cover. If it moves, move with it. Below are some additional tips for specific locations:\n- If you're in the KITCHEN, move away from the refrigerator, stove, and overhead cupboards. (Take time NOW to anchor appliances, and install security latches on cupboard doors to reduce hazards.)\n- If you're in a STADIUM OR THEATER, stay in your seat and protect your head with your arms. Do not try to leave until the shaking is over, then leave in a calm, orderly manner. Avoid rushing toward exits.\n- If you are in a HIGH-RISE BUILDING, and not near a desk or table, move against an interior wall and protect your head with your arms. Do not use the elevators. Do not be surprised if the alarm or sprinkler systems come on. Stay indoors. Glass windows can dislodge during the quake and sail for hundreds of feet.\n- If you're OUTDOORS, move to a clear area away from trees, signs, buildings, electrical wires, and poles.\n- If you're on a SIDEWALK NEAR BUILDINGS, duck into a doorway to protect yourself from falling bricks, glass, plaster, and other debris.\n- If you're DRIVING, pull over to the side of the road and stop. Avoid overpasses, power lines, and other hazards. Stay inside the vehicle until the shaking is over.\n- If you're in a CROWDED STORE OR OTHER PUBLIC PLACE, do not rush for exits. Move away from display shelves containing objects that could fall.\n- If you're in a WHEELCHAIR, stay in it. Move to cover, if possible, lock your wheels, and protect your head with your arms.\nTHE EARTHQUAKE CHECK LIST\n- Be prepared for aftershocks, and plan where you will take cover when they occur.\n- Check for injuries. Give first aid, as necessary.", "score": 32.993917865675996, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "Using cloth hangings as wall art dates back to medieval times when tapestries decorated stone walls while helping to insulate them.
Hanging a curtain above your bed gives the illusion of a window and can also make your ceiling look taller. It softens the lines of a simple room and adds texture and movement, as well as color.\nDecide how high you want your headboard curtain to be. There’s no standard rule for this, but don’t forget to look at the room as a whole to make sure your placement isn’t out of balance. Mark the height with pencil and measure down from the mark to the floor. This is the length of the curtain you need. When you are sewing your own, add 6 inches for the top and bottom hem.\nMeasure the width of the space you want the headboard to cover and mark either side at the height you want, so you know where the brackets will go. Add 2 inches for the side hems if you are making a curtain. This measurement gives you a curtain that hangs flat against the wall. Add one-half to three times the width for a curtain that falls in folds or pleats.\nHold the rod brackets up to your pencil marks and trace the screw holes onto the wall with a pencil.\nDrill the holes with a bit that is slightly smaller than your drywall anchors to ensure a tight fit.\nTap the drywall anchors into the holes you drilled.\nChange the drill bit and attach the brackets to the walls with screws, but do not tighten them all the way. Brackets that close around the curtain rod to hold it in place are best when hanging a curtain above the headboard of a bed.\nCheck the brackets with a carpenter’s level to ensure that they are straight and then tighten the screws.\nFold your long side edges over 1/4 inch if you are making a curtain, and press the fold. Fold them in again 3/4 inch and sew straight seams to hold them in place. Repeat for the hem, making a 1/4-inch fold, then a 1 3/4-inch bottom hem. You can use any type of fabric for your curtain, but your sewing machine needle and thread type must be appropriate to the material. 
For example, you need a ballpoint needle for knits and a larger needle such as a 14/90 or 16/100 size for heavier fabrics like denim or upholstery fabrics.", "score": 32.94185415192114, "rank": 20}, {"document_id": "doc-::chunk-1", "d_text": "Dangerous Spots in an Apartment\n- Near windows\n- Near tall furniture (shelves, bookcases, etc.)\n- Near cabinets\n- Inside doorways\n- Narrow spaces (closets, bathrooms, etc.)\nWindows and Other Glass\nWindows and other forms of glass are very likely to shatter during an earthquake. Your first priority should be to get away from any windows in your apartment as quickly as possible. If you take shelter under furniture that is near a window, try to face away from it in case it breaks.\nMake sure to keep your eyes closed as you take cover, to prevent any broken glass from entering them. After the earthquake ends, be careful not to step on any shards of glass that may be on the floor.\nMany injuries during an earthquake are a result of furniture collapsing on victims. If you cannot get under a sturdy piece of furniture, avoid any open spaces where bookshelves, TV stands, mirrors, etc. could fall on you. As a precautionary measure, you should consider rearranging any rooms that may have multiple pieces of tall furniture in them so you can create a safer space if they were to collapse.\nThe kitchen is one of the most dangerous places to be, because dishes, pots, and pans can fall from the shelves and cabinets. If you are in the kitchen when an earthquake begins, get under your kitchen table. If you don’t have a table, then try to crawl out of the kitchen as quickly as possible.\nAlso, get away from the stove if you are cooking. Anything on the stove could fall over and burn you. Most stoves in Japan are designed to automatically turn off during an earthquake, so do not go near the stove in an attempt to shut it down. Don’t panic if hot food starts to spill across the floor. 
Continue to crawl away to a safer location where it can’t reach you.\nMany people believe that doorways are a safe place to stand during an earthquake, but this is simply not true. The door could swing and injure you if you are near it and it is left open. Many door frames are constructed from wood, and these have a tendency to become stuck due to damage from the shaking.\nAs a precaution, take note of which doors in your apartment have a wooden frame versus a metal frame.", "score": 32.74761463725532, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "I'm sure you've heard it before, but here's your official reminder to always hang your curtain rods high and wide.\nDo hang your curtains close to the ceiling.\nDecide on placement: choose whether you wish to hang curtain rods directly from the ceiling or near the tops of the walls above the windows. If you have a higher ceiling, or even if you just want to add a little interest or height to your space, consider hanging curtains from the ceiling. If you are hanging curtains from the ceiling without drilling, you next need to mark the part of the ceiling where you want to install the curtain.\nUsing a step ladder, mark each corner of the room ceiling based on your bed size. Using the ceiling also makes it easy to divide a space.\nDecide how high you want to hang your rod. The panel should be cut out of a complementary fabric and cut to match the width of the curtains.\nBuy a curtain or curtains to hang from the ceiling before buying ceiling-mount curtain rods, since the weight of the curtain affects the size and thickness of the curtain rods. Curtains can be hung at varying heights. 
If you hang your rods to look like they're hugging the window, it makes the window, and as a result pretty much the entire room, feel smaller.\nThe general rule is to mount the rod anywhere between halfway and two-thirds of the way between the top of the window frame and the ceiling or ceiling molding. People usually expect curtains to hang just above the tops of windows. In this process you might need a hand to help with measurement; you should also do it carefully.\nWhile the task of hanging curtains is simple, deciding how to hang them takes some thought. If you are using standard pinch pleat curtains and the curtains are to hang freely from the ceiling, it may be necessary to add a panel to the bottom of the curtains to make them reach the floor. The extra height this adds can make your ceilings look higher.\nYou can achieve this look in any room with a few simple steps. More specifically, hang curtains high enough that you give off the illusion of hanging curtains from the ceiling. Hanging curtains from the ceiling may seem daunting, but it has many benefits.\nHang curtains at least 4 to 6 inches above the top of the windows.", "score": 32.12955020425727, "rank": 22}, {"document_id": "doc-::chunk-1", "d_text": "Thread should be upholstery weight or all-purpose as fits the fabric. Check your sewing machine’s manual for advice on setting thread tension.\nFold the top of the fabric over 1 inch and press the seam, if you are sewing your own. Fold it over another 3 inches to form a rod pocket, or a finished hem for ring tape or clip rings.\nSew on ring tape or clip rings evenly across the top of the curtain if you are using them. Run the rod through the rings or just the rod pocket.\nPlace the curtain rod into the brackets and close them. 
Arrange the folds of the curtain.\nThings You Will Need\n- Measuring tape\n- Rod brackets\n- Drywall anchors\n- Carpenter’s level\n- Sewing machine\n- Curtain rod\n- Purchase or make a plain curtain panel and decorate it with fabric markers, appliques or stencils for a look that perfectly suits your decor.\n- Do not hang a heavy curtain rod over a bed without drywall anchors and locking brackets if you live in an area that is prone to earthquakes.", "score": 31.64177985702548, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "» If you are indoors:\nSTAY THERE and move only a few steps to the closest previously identified safe place in the room. DROP, COVER, and HOLD ON. Get under a table and hang on to it. Protect your eyes and head. Get away from windows, heavy furniture or appliances that might fall on you. Get out of the kitchen, which is a very dangerous place. DO NOT run downstairs or rush outside while the building is shaking.\n» If you are outside:\nGet into the OPEN, away from buildings, power lines, trees, and anything else that might fall on you. If you are in the city, seek shelter under archways or doorways but do not enter the building. DO NOT try to walk through narrow roads or gullies.\n» If you are driving or in a vehicle:\nSTOP SLOWLY. Move your car as far out of traffic (and buildings) as possible. DO NOT stop on or under a bridge, trees, light posts, power lines, or signs. STAY INSIDE your car until the shaking stops. 
Should you resume driving, watch for cracks in the roads, fallen buildings, stones, trees, etc.\n» If you are in a mountainous area:\n– WATCH OUT for falling rock, landslides, trees, and other debris that could be loosened by the earthquake.", "score": 31.224144124375872, "rank": 24}, {"document_id": "doc-::chunk-1", "d_text": "- Designate an “emergency area” or a room in the house where your family should gather in the event of an earthquake.\n- Make sure to keep an emergency go-bag stocked at all times. To find out what to put in it, read this article.\n- Once the shaking starts, DROP DOWN and TAKE COVER. Choose a sturdy desk or table and hold on until the shaking stops!\n- Do not move until the shaking stops. Make sure you are safe before leaving your home or building.\n- Stay away from shelves that aren’t bolted down or other furniture that’s not secure.\n- If you live in a high-rise, keep clear of windows! (The fire alarms or sprinklers might be set off, too. Don’t panic.)\n- If you are driving when an earthquake strikes, drive to a place where no trees or power lines can fall on you. Park there until the shaking stops.\n- If you are walking outdoors when an earthquake strikes, drop down in a clear spot away from trees, buildings, and power lines.", "score": 30.58406197396984, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "Living in a rented apartment may present decorating challenges if your landlord will not allow holes in the walls from window-covering hardware. Fortunately, you can get creative and install curtain rods over windows in a rented apartment. You have several options for hanging the rods. As long as your window coverings are not too heavy, whatever method you select should be adequate for holding up the hardware.\nInstall large temporary hooks at the left and right corners of the window using the adhesive stickers that stick to the wall and to the back of the hooks. 
Press the hooks firmly into the wall surface with your fingers for at least 10 seconds.\nAllow one hour to pass while the adhesive sets on the hooks. Do not attempt to use the hooks until this time elapses.\nInstall the curtain onto the curtain rod. Ensure that the weight of the curtain will not exceed the combined weight limit of two hooks, probably between 2.27 and 4.54 kg.\nSet the curtain rod onto the hooks, centring the rod between the two hooks, and arrange the curtain over the window.\nInstall a curtain rod holder at each upper corner of the window. These rod holders sit over the window trim, enabling you to attach them to the window securely. You may find two brackets at each corner, enabling you to hang double curved rods over the window.\nPlace the curtains onto the curved curtain rod, threading the rod through the upper pocket of the curtain.\nAttach the curved curtain rod to the rod holder at each corner by placing the end of the rod over the brackets on the rod holders.", "score": 30.252253694002196, "rank": 26}, {"document_id": "doc-::chunk-1", "d_text": "Earthquake Preparedness Tips\n• Plan to hold earthquake drills for your family and business.\n• Develop a family reunification plan.\n• Make your home and business earthquake safe with such actions as:\no Strapping water heaters and large appliances to wall studs\no Anchoring overhead light fixtures\no Fastening shelves to wall studs and securing cabinet doors with latches\n• Learn how to shut off gas, water and electricity in case the lines are damaged.\n• Assemble a disaster kit with supplies that will sustain your family for at least 72 hours, with such items as water, non-perishable food, a first aid kit, flashlight, battery-operated radio, batteries and other necessities.\nSafety Tips During an Earthquake\n• Stay calm and expect an earthquake to last for a few seconds up to a few minutes.\n• If inside a building, stay there until the shaking stops (drop, cover and hold). 
If outside, move away from buildings, since bricks from chimneys or other ornamental stonework may be shaken loose.\n• When driving, stop safely as soon as possible. Stay in the vehicle until shaking stops. Do not stop vehicles under overpasses or on bridges.\nTips for After an Earthquake\n• Check for injuries and render first aid.\n• Avoid other hazards (fire, chemical spills, etc.).\n• Check utilities (gas, water, electricity). If safe, shut utilities off at the sources.\n• Turn on a battery-powered radio and listen for public information broadcasts from emergency officials. Stay tuned for updates.\n• Do not use matches, candles or lighters inside.\n• Do not use vehicles unless there is a life-threatening emergency.", "score": 30.236177629344937, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "Without warning, an earthquake comes suddenly. The shaking of the earth may cause damage to buildings. Buildings that are not built properly may collapse, resulting in serious destruction.\nWhat to do in case of an earthquake when you are at school: When an earthquake comes during school time, ask the students not to panic but to stay calm and protect themselves. When the students are outside the school, such as in the playground, ask them to stay away from windows, glass, outside walls and doors, and from everything that might fall, such as ceiling fans, hanging flower pots, heavy mirrors and frames, and bulletin boards. Also, ask them to stay away from breakable objects or glass placed on open or high shelves. When they are in a classroom, ask them to get under a desk or study table and do the “DUCK, COVER and HOLD”.\nAt home: If you are at home or otherwise inside, stay inside; get under a table, hold on tightly, and stay away from the nearest wall. 
During an earthquake, the kitchen is considered the most dangerous area of the house, so if you are in the kitchen, try to move outside as early as possible and quickly turn the stove off. Don't stand in an entrance; many portions of the house are stronger than the doorways. If you are in your bedroom, stay there, hold your bed tightly, and cover your head with any fabric; don't use the elevators at that time.\nWhat to do in case of an earthquake when you are at work: When you are in the workplace, choose a “safe place”. This can be something like a desk or table; don't go near bookshelves or cabinets. Drop, cover your head, and hold onto the safe place. Wait in the safe place until the shaking of the earthquake has stopped. If you want to leave the building, use the stairs, not a lift or elevator. If you are outside your workplace, it is good to stay outside and keep away from tall buildings, street lights, power lines and trees. Street lights, trees and tall buildings might fall, resulting in serious damage and destruction. And if your workplace building is in very bad condition, such as having a number of cracks or being very old, leave the building as soon as possible. By taking these safety precautions you can protect yourself from serious injury.", "score": 30.136453312288396, "rank": 28}, {"document_id": "doc-::chunk-1", "d_text": "You should be very careful and stay away from exterior walls, glass and heavy furniture. At the time of an earthquake, the most vulnerable place in the house is the kitchen, and you should never take refuge in your kitchen.\nIf you’re fortunate enough to be outside the house at the time of the earthquake, find an open ground and get away from the surrounding buildings. If you are in an office, move away from the exterior windows and heavy furniture, and do not use the elevators. 
", "score": 30.088284357874848, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Contrary to popular belief, earthquakes are not unique to California or the West Coast. In fact, there are 45 states in the U.S. that are at risk for an earthquake, which makes most of us vulnerable. Like all natural disasters, earthquakes can be a frightening event, but perhaps more frightening is that they are almost impossible to predict.\nSince an earthquake can strike at any moment, it’s important to take precautions ahead of time to be well prepared and lessen your chances of being harmed. The following are five simple steps you can take to protect yourself and your family.\n- Keep an emergency supplies kit handy. Purchase or create your own emergency kit with essential items. Depending on the seriousness of the event, it may take hours or even days for rescue workers to reach you. Some items to include in your kit are: first aid kit, battery-powered radio, flashlight with extra batteries, a whistle to call out for help, non-perishable food and water to last for three days.\n- Arrange your furniture carefully. During a strong earthquake, it’s possible that furniture pieces and other objects may move around and potentially cause injuries. Secure tall furniture such as bookshelves or china cabinets to the wall to prevent them from falling over. Also, be sure to hang picture frames, mirrors and other heavy objects away from beds and sofas where they can easily fall on people.\n- Choose a safe go-to spot in your house. Many people think the safest spot during an earthquake is the doorway. In reality, it’s safer to be under a sturdy piece of furniture such as a table or desk. Another option is to stand against a wall that’s located away from windows or furniture that may fall on you.\n- Learn how to shut off your gas valve. 
It’s possible that the shaking may damage the gas pipes and cause a natural gas leak, which can lead to a fire or even an explosion. Become familiar with your house’s gas valve and learn how to shut it off. Be sure to have a wrench handy to do this.\n- Practice an earthquake drill. Make sure to practice how to “drop, cover and hold on.” If you have a sturdy table or desk, drop under it. Use one hand to cover your head and the other to hold on to the table or desk.\nThough earthquakes are never a pleasant experience, it’s better to be prepared for them as much as possible. For more tips on earthquake preparedness, visit redcross.org/prepare.", "score": 29.10819237506826, "rank": 30}, {"document_id": "doc-::chunk-4", "d_text": "Learn additional ways to protect your home and reduce the potential for damage.\nCommunity Education Ideas\n- Request stronger building codes. The building code is a community's first defense against damage from an earthquake. These codes determine the level of earthquake that each structure must be designed to withstand.\n- Publish emergency information in your local newspaper. Print the phone numbers for your local emergency service, hospitals, and the American Red Cross.\n- Publish a newspaper series dedicated to locating earthquake hazards within the home.\n- Work with the American Red Cross and your local emergency service to create a special report for individuals with physical disabilities.\n- Offer advice about home earthquake drills.\n- Conduct interviews with representatives of the water, electric, and gas companies about turning off utilities.\nWhat You Can Do During an Earthquake\n- Drop, cover, and hold on! Move to the nearest safe place. Try not to move any more than five feet to avoid injury. It is extremely dangerous to leave any building during an earthquake because you may be injured by falling objects. Fatalities often occur when people exit buildings and run. 
In the United States, it is typically safer to remain in a building.\n- If an earthquake begins while you are in bed, stay there. Use a pillow to protect your head and hold on. Staying where you are is safer than trying to move. If you try to roll to the floor or get to a doorway, you may become injured by broken glass on the floor.\n- If an earthquake begins while you are outside, find a safe place away from power lines, streetlights, trees, and buildings. Crouch down and remain stationary until the earthquake ends. Building debris, power lines, streetlights, and falling trees can all cause injuries.\n- If an earthquake begins while you are driving, pull your vehicle to a safe location, stop the car, and remain stationary with your seatbelt on until the earthquake ends. Overhead items, street signs, poles, power lines, and trees may fall during an earthquake. If you stop your car, the risk of injury will be reduced. Hardtop vehicles will also protect you from flying or falling objects. Drive carefully after the earthquake ends, and avoid ramps and bridges that may have sustained damage.\n- If you are inside when an earthquake begins, don't exit until the earthquake is over. Injuries are more likely to occur when you try to move during an earthquake.", "score": 28.835525099784963, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "5 Ways Homeowners Can Protect From Earthquakes\nThe devastation in Haiti and California's recent earthquake are reminders that preparedness is vital for all homeowners in earthquake zones. From the proper insurance to securing your home, we will take you through what you should consider in advance of an earthquake.\nEarthquakes can strike at any time. Make sure you're prepared.\nThink you don't live in an earthquake zone? Don't be so sure. Do a Google search for "earthquake preparations" and one of the first hits you'll get, as unlikely as it might seem, will be the website for the state of Kentucky. 
Turns out four of the largest earthquakes in North America happened in this fault area between 1811 and 1812, with the shaking measuring as high as 8.0. Seismologists say it's possible another big one could hit there.\nAre you in an earthquake zone without even knowing it? You might want to find out and then take the following precautions:\nOne of the basics is insurance, but most people don't have earthquake insurance, even in California.\nNote that most basic home insurance will not cover earthquake damage, not even in the Golden State. However, in California it is mandatory that home insurance carriers offer earthquake insurance to would-be customers. There are a lot of options in these policies. So check around and you should be able to find the plan that suits your needs and your wallet.\nCheck For Hazards in Your Home and Business\nAre your heavy light fixtures well fastened? Have you anchored tall bookcases and propane tanks? Is your electrical wiring secure and up to date? These and other changes will increase your property's odds of surviving an earthquake intact.\nIt's important to make your home as secure as possible. For example, if you have heavy pictures hanging over your bed or sofa, or other items that could do damage overhead, you might want to position them somewhere else. Seemingly small changes can make a big difference in safety and damage.\nTake Pictures of Your Property\nUpdate your property pictures when you remodel or get new expensive items. Even when you are insured, having photos can potentially expedite your claim process and you want to do all you can ahead of time. While it seems time consuming to go around taking pictures of your home, it can save a lot of time and money later.", "score": 27.765153757814343, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "Living in San Francisco, earthquake central, I'm wary of hanging artwork above my bed. 
Anything framed in glass is pretty much out of the question, so a canvas painting or a textile would be a much safer option. I am loving the arrangement of photography on Sally Hershberger's wall in Elle Decor, though — I wish I could recreate this in my home! Then again, I probably couldn't afford the Herb Ritts photograph of the Dalai Lama's hands that she owns in any case.\nDo you hang artwork above your bed?", "score": 27.360917634652413, "rank": 33}, {"document_id": "doc-::chunk-1", "d_text": "Meanwhile, the magnetic system of these rods makes everything work perfectly without nails, so no holes in walls will be left behind.\nA firm tug at the point of attachment is all you need to move magnetic curtain rods from one setting to another without leaving behind any slips or scratches on your metal surfaces.\nThe magnets that hold the curtain rods vary in their strength. So make sure you pick a rod with stronger magnets if you wish to hang very heavy curtains, to avoid any unwanted incidents.\nFor those users who enjoy utilizing magnetic curtain rods without a metal surface to install them on, some curtain rods even offer adhesive metal strips as a bonus.\nStill, the market for household products is bombarded with tons of curtain rod models, and plenty of them are not as good as the ads say they are. So, if you have developed an interest in this item, consider skipping all of them and just get the Rod Desyne Magnetic Curtain Rod. Made from solid steel, this adjustable rod can make itself comfy on all iron surfaces.\n3. Command Hooks\nCommand hooks are probably the easiest way to install curtain rods without a drill. They work on a variety of indoor surfaces, such as painted drywall, finished wood, tile, metal, glass, and more. This flexibility makes them a perfect window treatment idea.\nCommand hooks are inexpensive and easy to use. 
They are also paintable, so you can change the way they look to stay in line with the home decor style of your choice. However, keep in mind that there is a limit to the amount of weight they can handle, so massive curtains are a big no in this case.\nNow, selecting the ideal hook can be quite a tricky journey. This particular method of hanging curtains can either make the job ten times quicker and simpler or end up a disaster if you bring home a flimsy product. That is the reason why you should only opt for some sturdy ones, like the Command Metal Hooks. Ten times stronger than the flimsy plastic hooks you often come across, these little fellows will hold your curtain nicely.\n4. No Drill Curtain Rod Brackets\nA no drill curtain rod holder is another perfect way to hang curtains without drilling holes in your wall. You can simply place these brackets on the top corners of your window frame and tap them in with a hammer, and you're done; the brackets come out just as easily as they go in, with absolutely no damage left afterward on the wall.", "score": 26.9697449642274, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "This site is designed to help you to identify potential hazards in the home, and recommends methods for fixing them.\nThe first step to earthquake safety is to look around your home and identify all unsecured objects that might fall during shaking.\nSTART NOW by moving heavy furniture, such as bookcases, away from beds, couches, and other places where people sit or sleep. Also make sure that exit paths are clear of clutter.\nSimple and inexpensive things you can do now will help reduce injuries and protect belongings in a quake. Most hardware stores carry earthquake safety straps, fasteners, and adhesives you can easily use to secure your belongings.\nWe have divided the measures you can take into categories that are listed in the column to the left. The list is organized by areas in the home, and by types of objects. 
Click on these for detailed tips and suggestions for simple solutions to situations in your home that could be dangerous during earthquake shaking.\nWe have also included a listing of many online resources which provide additional information about earthquake preparedness.", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-1", "d_text": "Hold the mounting bracket on the pole against the wall at the level where you want to hang drapes. Mark the screw holes with a pencil. Remove the mounting bracket and drill holes in the marked positions. Place the mount back into position, align the screw holes with the pilot holes, and then drive the screws through these holes to keep the mount in place. Do the same with the other brackets. Hang the curtain rings on the rod. If your drapes hang with tabs or grommets instead of rings, slide the wooden curtain rod through these tabs or grommets. Place the curtain rod onto the mounting brackets. If you hang the curtains from rings, then secure the curtains to the curtain rings.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "Virtually everything I read about arranging bedroom furniture insists that the bed should face the bedroom entrance, and especially that you don't want to present a side view of the bed.\nI am not willing to be shredded by shards of falling glass when the next earthquake comes, and I have windows opposite the bedroom door. If I put the bed against one of the side walls, then my husband and I have to play leapfrog to get in and out of bed. 
What is a SFan to do?\nYou might find it interesting that Feng Shui adherents prefer to keep the doorway within sight while sleeping, but (and this is important) the end of the bed (and the soles of the feet when lying down) must be out of doorway alignment. Lying in bed with your soles directly facing the door is called the coffin position... because the dead were traditionally carried out feet first.\nWe're not slaves to Feng Shui or any other space planning philosophy, but we do find that the "coffin position" can give us the heebie-jeebies. It seems your problem is its own solution: sleep sideways, be earthquake safe, and don't worry about anyone's rules.\n(Incidentally, if you have the opposite problem and must lay out your bedroom so that you sleep with your soles facing the door, Feng Shui recommends "a solid object at the foot of the bed.")\nAnyone else have a strong opinion on this one?", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Child Safety for Blinds and Shades\nHanging and exposed window cords can be dangerous to children and pets and are considered strangulation hazards. The Window Covering Safety Council recommends ONLY cordless window treatments in homes with small children. If you have a home with older window treatments, we recommend that you upgrade to today's safer options.\nGo Cordless Blinds & Shades\nInstalling window treatments with no cords is the safer choice for homes with children and pets. Some window treatments, like shutters and vertical blinds, are inherently cordless; with others, a cordless lift must be added while choosing options for your blinds or shades.\nOut of Sight & Out of Reach\nCordless options are safer for children and the best options for households with kids or pets. 
If you cannot replace an older window treatment right away, order a FREE retrofit kit for corded window coverings. All window blinds, shades, and cords should always be out of reach of small children.\nPrevent Access to Cords\nBeds, cribs, boxes, and climbable items should never be placed near a window or door. Children and pets can access cords and other mechanisms of your window coverings from these elevated surfaces, creating a choking (and falling) hazard for curious little ones.\nPull It Tight!\nHold downs (anchors for continuous cord loop mechanisms) are available to permanently secure the looped cords of shades, drapes, or some blinds to a window or door. It is important to remember that ANY exposed cords are considered unsafe for children.\nBe absolutely certain that all window treatments are installed correctly, even cordless options. Children and pets may be able to reach lower areas of a window treatment, and improperly installed or fastened blinds or shades could fall and seriously injure an adult or child.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "I am looking for a standard on the recommended safe distance between the bottom of window curtains or drapes and an electric baseboard heater. Any assistance would be greatly appreciated.\nUnderwriters Laboratories has UL 1042, Standard for Electric Baseboard Heating Equipment:\nUL - 1042 Standard for Electric Baseboard Heating Equipment | Standards Catalog\nSection 41 of the Standard has a curtain drape test.\nEvaluation of the UL Standard 1042 is summarized in this Consumer Product Safety Commission document and includes the drape test:\nExcerpt from the above document:\nThe baseboard heaters under test were draped, partially covered, or had the space between the heating element fins and the exhaust vent stuffed with cotton cloth. During some tests, the temperature limiting control (TLC) was bypassed to simulate a component failure. 
The heater was mounted to a wall and instrumented with thermocouples at various air inlet and exhaust areas, heating element and fin locations, and wiring routes. In all of the tests, including those tests with the TLC bypassed, the temperature of the cotton cloth never reached temperatures near its ignition point (about 360 °C). However, some tests resulted in temperatures above the maximum rated temperature of the wiring insulation. The temperature rise was not more than 10 °C over the rated temperature for the wire. This would not manifest itself as a failure during the 7 to 8 hours of a UL test duration but could have long-term effects on the insulation integrity of the internal wiring of a baseboard heater and lead to a shock or fire hazard.\nTesting Summary\nIn summary, the laboratory testing showed that baseboard heaters, even when the TLC was bypassed, did not generate temperatures capable of ignition of combustibles, but did result in temperatures capable of damaging the internal wiring of the heater. Forced-air heater testing showed that abnormal operating conditions, even for a short time, were capable of heating the internal wiring insulation above its maximum rated value. In some cases, the branch wiring insulation was overheated. Testing on radiant heaters did not support the hypothesis that the thermal response of terrycloth is different from curtain material when draped over a heater. 
The dust testing showed that a short exposure time to a dusty ambient environment could result in easily observable operational changes in a heater, and testing can be used as a means of evaluating whether the heater design might lead to hazardous conditions when in use.\nAfter you have read the report, I will let you draw your own conclusions from the information.", "score": 26.76320955676111, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "- Secure all pull cords out of reach by using a permanent cord cleat.\n- Retrofit kits are available to enhance the safety of looped pull cord window treatments.\n- Place cribs and other low-standing furniture (beds, bookshelves, toy boxes, chairs, etc.) away from windows to prevent young ones from accidentally tumbling from windows.\n- Install window guards. Don’t rely on screens designed to keep bugs out to keep children and pets in.\nParents don’t have to sacrifice style for safety! We have a variety of options for families to consider.", "score": 26.60181924000165, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "What to Look For…\nChildren and window cords don’t mix. When window cords are accessible to small children, these seemingly harmless products may become strangulation hazards.\nThis is especially important with older window coverings that may not meet the latest national standard for window cord safety. If at all possible, use only non-corded window coverings in homes where infants and young children are present.\nIf you have corded window coverings and can’t replace them with today’s safer products, check them for the following hazards and order our free retrofit kits as needed.\n- Move all cribs, beds, furniture and toys away from windows and window cords, preferably to another wall.\n- Keep all window cords well out of the reach of children. 
Eliminate any dangling cords.\n- Make sure that tasseled pull cords are as short as possible.\n- Check that cord stops are properly installed and adjusted to limit the movement of inner lift cords.\n- Continuous-loop cords on draperies and vertical blinds should be permanently anchored to the floor or wall.\nLearn how to retrofit older window coverings by clicking here.\nBetter Yet… replace older corded window blinds, shades and draperies with today’s safer products. And use only non-corded window coverings in homes with infants and young children.", "score": 25.827840781634013, "rank": 41}, {"document_id": "doc-::chunk-2", "d_text": "1) 0.040 MM Standard rods should be supported at every 2 to 2 1/2 feet by center brackets\n2) 0.062 MM Thicker rods should be supported at every 3 to 3 1/2 feet by center brackets\n3) 0.125 MM Standard rods should be supported at every 4 1/2 to 5 feet by center brackets\nMost people use 1 1/4\" diameter brass tubing for their closet rods; these work best for standard clothes hangers. 1\" or 1 1/2\" diameter work fine as well.\n3. What are the types of curtain rods and How to Choose Right Curtain Rod?\nWhen choosing curtains, it is just as important to select the right curtain rods (and finials). Throughout this guide Types of Curtain Rods and How To Choose Right Curtain Rod, we'll answer all of your curtain rod questions. You'll find everything you need to know in this guide, including which curtain rods to pick and what size to choose. Read Now.\n4. How To Install Curtain Rod on Drywall?\nTo hang curtains, you need curtain rods. The installation of curtain rods in your master bedroom, dining room, home office or any other room can be handled by a professional, but you can do it yourself easily. This guide will show you 2 Easy Steps to Install Curtain Rod on Drywall in your home. 
Read Now.", "score": 25.703854003797073, "rank": 42}, {"document_id": "doc-::chunk-7", "d_text": "No rules can eliminate all earthquake dangers, but the following rules can greatly reduce injuries and damage.\nBefore an Earthquake\n1. Support local safe building codes with enforcement for schools, offices, homes. 2. Support and encourage earthquake drills and training for schools, work areas and homes. 3. As a homeowner or tenant: Fasten shelves to walls. Remove heavy objects from upper shelves unless they are restrained. Place breakable or valuable items in a safe place. Remove or securely fasten high, loose objects, as well as heavy objects above beds. If you have defective wiring or leaky gas connections, replace them. You could thereby save your home. Bolt down water heaters and other gas appliances. 4. Teach members of your family how to turn off electricity, gas and water at main switches and valves. 5. Maintain at least three days' supply of storable food and bottled water. Maintain an up-to-date medical kit. Provide responsible family members with basic first-aid instruction because medical facilities could be overwhelmed immediately after a severe quake. Keep at hand a flashlight and a battery-powered radio in the house. 6. Conduct calm family discussions about earthquakes and related problems. Do not tell frightening stories about disasters. 7. Think about what you would do if an earthquake struck when you were at home, in a car, at work, in a store, in a public hall or outside. Your prior planning will help you to act calmly, safely and constructively in an emergency and enable you to help others.\nDuring an Earthquake\n1. Remain as calm as possible. Think through the consequences of any action. Calm and reassure others. 2. 
If indoors, watch for falling plaster, bricks, light fixtures and other objects, Stay away from windows, mirrors, chimneys and outer walls, If in danger, get under a table, desk, bed or a strong doorway, School children should be taught to get under desks, Usually it is not best to run outside, The one exception may be if you are in a heavy, poorly constructed old building, 3. In a high-rise office building, get under a desk, Do not dash for exits; stairwells may be jammed with people or broken, Power for elevators may fail. 4. If outside, avoid high buildings, walls, power poles and objects that could fall.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-2", "d_text": "The Kwik-Hang rod bracket will be a good match for windows around 30 inches widthwise, allowing for flexibility in your placement, and will fit any curtain rod up to 1 inch in diameter.\nWhile other nailless hanging methods mostly work well for lightweight curtains, Kwik-Hang brackets are an exception as they can hold up to 20 lbs of weight, giving you a lot of versatility when it comes to curtain and rod choices.\nFurthermore, Kwik-Hang brackets are visually appealing as they are available in a range of colors: antique white, black, gold, and silver, so you will be able to find one matching your décor smoothly.\nWith their smart design, the brackets offer a reliable option for hanging your curtain rods without damaging your walls.\n5. Coat Hooks\nCoat hooks are an inexpensive tool that makes hanging curtains extremely easy and more importantly they free you from the pain of drilling holes in the walls. All you need to do is use a strong and double-sided mounting tape with high-performance adhesive and press the hooks to the wall. Make sure to let the adhesive on the hooks dry and secure before you start hanging curtains.\nIn comparison with the traditional types, you can hang coat hooks easily; they don’t even leave holes in your walls. 
However, taking curtains down from there is a pain as you hang curtains by looping the holes through each hook. Therefore, they are best for decorative windows that don’t need to be adjusted often. The bathroom window where you want light and privacy at the same time would be a good example.\nSince the invention of curtains, there have been two major obstacles: measuring them accurately and avoiding damage to your walls. With these solutions, no drilling or nailing is involved when you put curtains up, and there are no holes to fill or cover up when they are taken down, so curtain hanging can be done over and over again with no hassle.\nYou might also want to read:\n- Important Things to Consider When Buying Bedroom Curtains\n- 2 Best Ways to Remove Wrinkles from Curtains", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "More on window treatments\nI'm familiar with the saying \"measure twice, cut once\" but there is no saying for someone who buys a curtain only to realize the molding above her window precludes her from installing a standard curtain rod. Doh!\nThis falls into the category of acts I have committed that I would rather not admit to but I figure I'd come clean in exchange for some advice.\nThis is my dining room/kitchen\nIf you recall, I'm putting those patterned restoration hardware chairs in here along with yet to be found wood chairs. My plan was to add to the drama of the dining room side by putting up a beautiful drapery. I lusted after these Restoration Hardware curtains for my bedroom but because I needed 6 of them, they were certainly out of the budget. However, I happened to find a single drape on ebay for 1/4 of the retail price. So I pounced on it and was very proud of myself for finally winning something on ebay (I never win the auctions I really want!).\nThat was until I took a closer look at the area above the window. 
I had planned to install a rod in the smooth wooden frieze area between the larger crown and the window frame. See the key word in that sentence? Planned.\nIt's hard to see here (apologies for the bad photo) but the molding in this room is set up in such a way that there is a center piece of molding that juts out right where one side of the curtain rod bracket would be installed. So installing a regular curtain rod is a no-go. I also can't install a rod above the molding altogether because the molding sticks out considerably from the wall and the wall itself is curved into a cove so the curtains wouldn't hang straight.\nMy first thought was using some sort of tension rod but the weight of this drape is pretty substantial. I don't think a little rod meant for cafe curtains is going to hold up. Not to mention, I think a thin rod here would look out of place.\nI'm most definitely NOT wise in the way of window treatments so perhaps there is a good solution to this problem. Anyone know what that magical solution might be? If there isn't, should I just give up on having my curtain and go with a shade instead?", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "Secure your home and office electronics from toppling in an earthquake. You can prevent damage and injury. These state-of-the-art multi-use nylon contour straps with quick release buckles will protect your expensive investment and save stored data from complete loss. One package will secure an item up to 50 lbs by a simple peel and press into place installation. These straps are safe on wood furniture and exceed industry standards. For use on computers, CPUs, printers, copy and fax machines, microwaves, stereos, CD/VCR players and more. Featured in the California Handbook on Earthquake Safety. QuakeHold! 
is your best defense against earthquakes.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Good Housekeeping has long cautioned parents about the risk of infants and toddlers becoming strangled when playing with window blind cords. It's essential to keep cords out of reach of children. Never put cribs, beds, toys, or furniture of an infant or toddler beneath or near a window or window treatment. Cords should be looped up and away from the reach of little ones at all times. Other safety suggestions from Good Housekeeping: buy Mericon's Cord Wraps ($9.95 for a 12-pack), which enable you to wrap excess cord around cleats, or install blinds or shades that have a built-in lift system operated by push-button or remote control.\nShades and blinds are required to have a strangulation warning attached to the cord, but nothing takes the place of being cautious. And when it comes to your child, you can never be too safe.", "score": 25.08659732190574, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "Spectrum Blinds and Curtains takes child safety very seriously when fitting our blinds. One death caused by a blind cord is one death too many. That is why we are heavily involved with the “make it safe” and “safety in mind” campaigns.\nAny questions about the products? Please do not hesitate to ask.\nBlind Cord Safety\nYou can take a number of practical and simple precautions to significantly reduce the risk.\nBy securing looped cords or chains with one of the many safety devices available.\nBlind cords and chains can pose a risk for babies, children and vulnerable people who could injure or even strangle themselves on the hanging looped cords.\nMake it safe\nBy always consulting a British Blind and Shutter Association member. Each one is committed to the BBSA’s “make it safe” code. They will give you the best possible advice on the most appropriate choice of blind and the ways to make it safer. 
You will also be able to find out about blind options that are designed to operate without hanging cords, such as wand or motor operated. Find your local member by visiting www.bbsa.org.uk", "score": 24.76243385873196, "rank": 48}, {"document_id": "doc-::chunk-3", "d_text": "Debris may fall from the building and injure you if you try to enter. Power lines, streetlights, and trees may also fall.\nPut Together a Supply Kit for Disasters\nAn earthquake supply kit should include basic disaster supplies, a flashlight, and sturdy shoes for each family member.\nProtecting Your Property\n- Bolt tall furniture to wall studs. Anchor or brace top-heavy objects. Such items may fall during an earthquake and cause injury or damage.\n- Anchor other items capable of falling, such as computers, books, and televisions. Items that fall may cause injuries.\n- Install bolts or strong latches on all cabinets. The items inside of a cabinet may move around during an earthquake. Latches and bolts will stop the doors from opening, which would allow the contents to fall out.\n- Move fragile items, heavy objects, and large objects to the lowest shelves. When these items are on the lowest shelves, there is less chance for damage and injury.\n- Keep china, glass, bottled foods, and other breakable items in closed cabinets with latches.\n- Keep flammable items, pesticides, and weed killers in low cabinets with latches. In confined locations, chemical products aren't as likely to cause hazards.\n- If you hang mirrors, pictures, and other heavy items, keep them away from places people are likely to sit, such as couches and beds. During an earthquake, items may fall off walls, which can cause injuries and damage.\n- Secure overhead lights. During an earthquake, lights may fall and cause injuries or damage.\n- Secure the water heater by strapping it to wall studs. Water heaters are often the best source for water after an earthquake.\n- Secure gas appliances. 
Gas lines may cause fire hazards following an earthquake.\n- Prevent water and gas leaks by installing flexible pipe fittings. These fittings won't break as easily.\n- Repair cracks in foundations and ceilings. If you find signs of a structural defect, get expert advice. An earthquake can cause an existing rupture or crack to grow.\n- Make sure that your house is securely bolted to the foundation. When a home is bolted to its foundation, it won't be as likely to sustain damage during an earthquake. When a home is not bolted, it may slide off its foundation.\n- Ask a structural design engineer to evaluate your home. Inquire about strengthening tips and home repair.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-1", "d_text": "Remember those short blue curtains from the first picture? Check out how much length I was able to add by attaching curtain rings and redoing the bottom seam!\nWhat is the proper placement of curtain rods?\nAs a general rule, drapes will be open during the day, so make sure the curtain rod extends at least four inches on each side of the window’s inside frame. To create the illusion of a wider window, extend the rod up to 10 inches beyond the window’s frame.\nCan short curtains ever look good?\nShort Curtains\nVisually speaking, high-water style is not the most appealing way to hang curtains. The shorter length can appear dated. Also, it can cut the visual height of your room in half. From a purely practical standpoint, however, short curtains are sometimes the best option.\nCan you use long curtains on short windows?\nHanging long drapes on a short window is one of the easiest ways to increase the importance of the window and bring it into proportion to the room. Short drapes on a short window call attention to the size of the window and reduce the significance of the room, window and drapery style.\nHow much should curtains touch the floor?\nYou should aim for your curtain hem to be about 3/8″ to 1/2″ above the floor. 
Not only is this an easier length to measure for, but it also makes it simple to vacuum and sweep. It’s a great option if you plan to open and close your curtains a lot since you won’t need to rearrange them each time.\nWhat type of curtains are in style in 2020?\nIn 2020, designers prefer natural materials, such as silk and linen cotton, as well as natural prints. In addition to flax occupying the top positions, bamboo curtains are also in trend. Especially in favor are plain curtains that can perfectly fit into many design styles and give the rooms a finished look.\nHow high is too high for curtains?\nA rule of thumb (from Architectural Digest) is that curtains should be hung between four to six inches above the window frame, so install your curtain rod accordingly. When you hang the curtain rod high, it will make the window appear taller.\nShould curtains match wall color?\nThe general rule of thumb is the curtains and walls should be either one shade lighter or darker than each other or of complementary colours to each other.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "The subject said it...... HELP?\nI really really want to hang my curtains in my living room. Let's keep in mind we have tried: getting the long rods from Ikea along with hardware (screws and screw anchors). Problem being these walls just plain crumble whether we try the anchors, nails, screws, anything! We did get one rod to at least stay in the wall, but adding the weight of the curtains, the anchors just slipped right through the wall and the whole thing came crashing down. :( Maybe I'm missing something.\nOur only other option at this point I can think of is maybe a tension rod. It would be far from ideal...but maybe would at least work? 
In that case....where, if anywhere, could I find one long enough?", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "Accidents can happen at any time in many ways - all it takes is one time to lose a sentimental and priceless treasure to understand the value of this product. Clear gel is so easy and clean to use. It's removable, reusable, and non-toxic. Superior invisible protection that works great not only on glass shelves, but also on finished wood, porcelain, china and much more. Use clear gel for your transparent glass and crystal items. This gel transforms into a solid film to create a secure bond; it is completely removable and reusable. One 4 oz. jar will secure up to 300 small crystal figurines. We recommend this product for use on objects and surfaces that are completely waterproof, such as glass, crystal, porcelain, laminated plastic, tile, polished granite, certain varnished wood, and metal.\nThe Preparedness Center can supply your home and business with the strongest earthquake fastening devices available today.\nWe stock items such as QuakeSaver earthquake safe cabinet latches, decor matching furniture safety straps, water heater strapping kits, automatic gas shut off valves, TV, appliance and electronics safety straps, picture hangers as well as artwork and mirror hangers. All products are suited for residential, commercial and industrial applications.\nYou will also find QuakeSaver gel, wax and putty to secure vases, collectables and valuable crystal pieces. Also see our VCR, stereo, computer and other electronics strapping and fastening kits. 
Our line of earthquake safe products will safely secure tall and heavy furniture like china cabinets and dresser drawers.\nThese products are essential for securing furniture and electronics to wall studs and entertainment centers so in the event of a major earthquake they don't become flying objects that can cause personal harm and property damage that could block emergency exits. Our user-friendly fasteners offer easy installation to help you securely fasten any type of furniture, electronics and equipment to walls and structures.\nEvaluate your home or office today for potential earthquake hazards and take the necessary steps to protect your safety and security and decrease your potential for loss of life and property in the event of a major earthquake.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-2", "d_text": "A curtain pole is a good indicator as to where the curtains will finish: either at sill length, below the sill, or touching right down to the floor. We recommend as a rough guide that sill-length curtains finish 1.5 cm above the sill. If you would like your curtains to fall below the sill, we recommend that they finish 15 cm below it. If you want your curtains to flow from top to bottom, from the rod to the floor, we would then recommend a gap of 1.5 cm above the floor.\nToni Sweet Curtains, 2017-03-21 23:34:43. Curtain tracks and curtain poles can be hand operated so that many different curtains can operate on one window. This allows curtains of different widths and drops to stack at different locations on the window. In addition they can also be operated as a blind with a cord to avoid handling the curtains or electrically for the extra security and a touch of luxury.\nAny content, trademarks, or other material that might be found on the Homedepotblog website that is not Homedepotblog’s property remains the copyright of its respective owner/s. 
In no way does Homedepotblog claim ownership or responsibility for such items, and you should seek legal consent for any use of such materials from its owner.", "score": 24.296145996203016, "rank": 53}, {"document_id": "doc-::chunk-3", "d_text": "Hang heavy pictures and mirrors away from beds, couches, and anywhere people sit. Keep breakables or heavy objects on bottom shelves.\n3.) Stay clear of windows, fireplaces, or appliances if a quake hits\nAs a top priority, stay out of the kitchen -- it's a dangerous place, with large appliances that could fall over or be pushed violently from walls and floors; knife sets that could be knocked from counters and natural gas lines (if your appliances are powered by natural gas) that could suddenly sprout leaks and fill your kitchen with explosive gas fumes (if a spark occurs, your kitchen would be the first place to erupt in flames and the possible ground zero of an explosion that levels your home.)\n4.) Stay away from anything that could conceivably fall on you.\nDon't run downstairs or rush outside while the building is shaking, or while there is a danger of falling or being hit by falling glass or debris.\n5.) Secure a water heater by strapping it to wall studs and bolting it to the floor.\n6.) Before and after a quake, repair any deep cracks in ceilings, chimneys, or foundations.\nGet expert advice if there are signs of structural defects. Unnoticed damage could cause a fire - or worse.\n7.) Repair defective electrical wiring and leaky gas connections.\nThese are potential fire risks.\n8.) Keep batteries in smoke and carbon monoxide detectors fresh.\nAt the least, make sure you have a properly installed and working smoke detector in your home/apartment.\n9.) Secure all chemicals, fuel, and bleach.\nStore weed killers, pesticides, and flammable products securely in closed cabinets with latches and on bottom shelves.\n10.) 
Keep food and water supplies on hand.\nYou should be prepared to take care of yourself and loved ones for a period of 72 hours (and possibly longer, depending on the severity of the earthquake). 72 hours under normal circumstances is how long it is estimated for help to arrive, as they have to deal with the same predicaments as you.\n11.) Create a family disaster plan.\nDiscuss with your family the types of disasters that could occur. Explain to your kids how to prepare and respond to each type of disaster. Print the plan for everyone.\n12.) Post emergency telephone numbers by every phone.\nTeach children how and when to call 911, police, fire department, and which radio station to tune to for emergency information.\n13.)", "score": 23.030255035772623, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "Blind and Curtain Cords\nHave you ever thought about the risk posed by Blind and Curtain Cords?\nUntil recently I hadn’t. . . even though I have put up many sets of blinds over the years.\nThe Queensland Office of Fair Trading has reported that since 2000 in Australia at least 12 children have died from strangulation by blind or curtain cord.\nSome Installation Advice\n- When buying blinds look for products that use ‘wands’ instead of cords to operate the blinds\n- Make sure your children cannot reach any blind or curtain cord (the loops should be at least 1.60 m above the floor).\n- Move any beds, cots, chairs or playpens away from windows with blind or curtain cords to prevent your child climbing on furniture to reach blind or curtain cords.\n- Wrap blind cords securely around a hook attached as high as possible on the wall.\n- Install a securely fixed cord tensioning device for vertical blinds. 
(see photo)\n- Use ‘Safety Tassels’ to join the ends of blind cords together, as they split when pressure is applied.\nFor More Information\nFor information from Government websites, follow these links:\n- Product Safety Australia\n- Trade Practices (Consumer Product Safety Standard — Corded Internal Window Coverings) Regulations 2010\n- Competition and Consumer (Corded Internal Window Coverings) Safety Standard 2014", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "Position a stepladder near one side of the window. Place a curtain rod bracket at the desired location at one end of the window frame. Use the machined holes in the bracket as guides and mark the locations where holes will be drilled with a felt-tip marker.\nPeople also ask\nHow to hang a curtain rod without drilling into the wall?\nIn an effort not to drill into the wall to hang a curtain rod, my next best option was to use Command Hooks. But not every Command Hook is created equal. Some are too small to hold a rod, and others can hold the weight of fabric panels. 
To do this, you need a power drill and a cobalt drill bit that’s slightly smaller in diameter than the screws for the brackets.\nHow do you level a curtain rod with a bracket?\nSet the curtain rod in the bracket, tighten the other bracket loosely onto the other end of the rod using the setscrew in the bracket, and lift the rod and level it with a level while you move the bracket the required longitudinal distance from the frame using a tape measure to measure this distance.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-2", "d_text": "Where a suspended ceiling is used to support equipment, it should be positively fixed to the ceiling suspension system, not supported by the ceiling panels or tiles.\nFlexible connections should also be used between ceiling-supported equipment and ducts, pipes or cables that are supported by the structure.\nAll lighting fixtures that are mounted on a suspended ceiling, including detachable accessories (such as diffusers and light controllers), should use a positive locking mechanism to prevent them inadvertently disengaging during an earthquake.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "Curtains can add to the decor of any room, but if the rods aren't measured and hung correctly, you can ruin their effect.\nStep 1: Measure the width Use the tape measure to measure the width of your window. Measure both the top and bottom widths and use the larger measurement.\nTIP: Do not include room for decorative finials or end pieces in your measurement.\nStep 2: Add the stacking width Add the stacking width. Stacking width refers to the amount of drapery that will be bunched together on either side of the window when the curtains are open. 
The wider the window, the more stacking width you'll need.\nTIP: Include the amount of the window panel that you want to remain covered by the open curtains when you calculate the stacking width.\nStep 3: Measure the rod height Measure the rod height -- ideally a few inches above the window to block sunlight over the curtains. Tab tops, clip rings, and tie tops will hold your curtains below the rod, so make up the difference in your measurement.\nStep 4: Measure according to your desired curtain height Measure according to your desired curtain height. If you want your curtains to hang higher, allow space for tops or rings at the ceiling.\nStep 5: Check the size of your draperies Check the size of your draperies. Ready-made draperies typically come in 84-inch lengths to fit he most common doors and windows, which are usually 80 inches high.\nStep 6: Check your measurements Check your measurements by marking the wall and having helpers hold the draperies at the marks before you permanently install your curtain rods. Enjoy your new decor.\nFACT: Early curtains were made from heavy silks and velvets to minimize drafts.", "score": 22.87988481440692, "rank": 58}, {"document_id": "doc-::chunk-1", "d_text": "- For window coverings that use a size #10 bead chain you can replace your bead chain loop with a rigid bead chain restraining device.\n- It is safest to replace corded window coverings with cordless window coverings with inaccessible cords (look for “Best for Kids” certification label), or remove corded window coverings.\n- Move all cribs, beds, furniture, and toys away from corded window coverings, preferably to another wall. Children can climb furniture to reach cords.", "score": 21.71391671791717, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "Earthquake Preparedness Guide\nStay as safe as possible during an earthquake. Be aware that some earthquakes are actually foreshocks and a larger earthquake might occur. 
Minimize your movements to a few steps that reach a nearby safe place and stay indoors until the shaking has stopped and you are sure exiting is safe.

DROP to the ground; take COVER by getting under a sturdy table or other piece of furniture; and HOLD ON until the shaking stops. If there is no table or desk near you, cover your face and head with your arms and crouch in an inside corner of the building.

Protect yourself by staying under the lintel of an inner door, in the corner of a room, under a table or even under a bed.

Stay away from glass, windows, outside doors and walls, and anything that could fall (such as lighting fixtures or furniture).

Stay in bed if you are there when the earthquake strikes. Hold on and protect your head with a pillow, unless you are under a heavy light fixture that could fall. In that case, move to the nearest safe place.

Use a doorway for shelter only if it is in close proximity to you and you know it is a strongly supported, load-bearing doorway.

Stay inside until the shaking stops and it is safe to go outside. Research has shown that most injuries occur when people inside buildings attempt to move to a different location inside the building or try to leave.

Be aware that the electricity may go out or the sprinkler systems or fire alarms may turn on.

If outdoors

Do not move from where you are. However, move away from buildings, trees, streetlights, and utility wires.

If you are in open space, stay there until the shaking stops. The greatest danger exists directly outside buildings, at exits, and alongside exterior walls. Most earthquake-related casualties result from collapsing walls, flying glass, and falling objects.

If in a moving vehicle

Stop as quickly as safety permits and stay in the vehicle. Avoid stopping near or under buildings, trees, overpasses, and utility wires.

Proceed cautiously once the earthquake has stopped.
Avoid roads, bridges, or ramps that might have been damaged by the earthquake.

If trapped under debris

Do not light a match.

Do not move about or kick up dust.

Cover your mouth with a handkerchief or clothing.

Tap on a pipe or wall so rescuers can locate you. Use a whistle if one is available. Shout only as a last resort.

There are so many things to worry about your kids getting hurt on; between the things that we think of and the things that never cross our minds, it's hard to keep track of everything. When was the last time you thought about the safety of your window treatments? I know I hadn't, until I saw a story on the news about a child getting hurt. Unfortunately, it turns out there is a significant number of injuries each year due to window treatment hazards being overlooked.

While doing some research on the issue I came across the WCSC – Window Covering Safety Council – basically a group of manufacturers and retailers that have pulled together to educate consumers on the issues. I put together a couple of the common tips:

– Always Install Anchors/Cord Cleats!
– They come with the blinds for a reason – safety! By just drilling in the small attachment and looping the cords around it, you are preventing loose cords from hanging and removing a choking/strangling risk.

– Use Cordless Shades Whenever Possible!
– Especially in kids' bedrooms! Luckily, as styles have become more contemporary, a sleek look is desired, so the less cords, the cleaner the look.
Both cellular and roller shades are available cordless, and then there is no worry about kids getting to the cords.

– Keep All Beds/Cribs/Furniture Away From Windows!
– Again, especially in the bedroom, but anything kids can climb on near the window always gives them the opportunity to get their hands near the cords.

Luckily, since 2001 a lot of new updates have been put in place requiring inner and outer working parts to ensure child safety. And even better, the WCSC has made it as easy as possible to retrofit any blinds that may be unsafe.

– Looped Pull Cords
– Old blinds and shades had multiple strings attached with one tassel at the bottom (basically so you only had to pull one thing). Unfortunately, by attaching them, they were creating a loop, and therefore a strangling hazard. But as a quick fix, you can cut the strings and just attach new tassels to each individual cord.

– Cord Stops
– On treatments made prior to 2001 there were no inner cord stops, so the blinds could extend down longer than expected, causing a risk of a heavy product falling on a child playing with the cords. Cord stops can be installed at the top of the cords to prevent this risk.

If you don't already have curtain rods, sliding tracks, or ceiling brackets set up in your room, here's how you can install them before you hang your curtains.

Wall Curtain Rods and Tension Rods

Curtain rods are pretty much the classic option for hanging any kind of room divider curtains. However, they usually require a vertical wall. Because of that, they're perfect for curtains that cover windows, walls, or open doorways.

In order to install a traditional curtain rod, you'll need a rod, which is usually about 1.5 inches thick, and two holders. Plain hooks like these ones from AmazonBasics will do just fine. You can also get double hooks if you want to hang up two pairs of curtains.
But of course, that's not the only way to do that – you can also get a double curtain rod set like this one.

Ultimately, installing these is pretty easy. You just screw the hooks in, then push the rod through the curtain grommets or tube and hang it up on the hooks. It's the easiest way to do this – but, as I have mentioned, it only works if you have a wall. And, if you're hanging room divider curtains, you may not want to put them over a wall.

On the other hand, if you want to close off a room from wall to wall, you can use a tension rod. It doesn't require hooks or screws of any kind, and it can come in several colors. It just expands until it touches the wall on both ends and uses that tension to hold the drapes. So if you want to use curtains to essentially create a new room, this might be the tool you're looking for.

If you want to use a curtain rod on a ceiling, you'll need ceiling mounts – or brackets. Once again, you'll probably spend more time measuring than actually installing these mounts. After you figure out where you want to put the brackets, everything is the same as it was in the previous method.

You'll put the curtain panels onto the rods and hook them into the ceiling brackets. These kinds of brackets look great pretty much anywhere. You can put them over windows, walls, doorways, or even in the middle of the room. I've even seen them used to create a faux bed canopy – in this case, anything goes.

The first and second support columns 28, 30 are also provided with respective lower mounting brackets, 28b and 30b, for attaching the support columns to an upward extending edge 16a of the building structure's base, or floor, 16. The building structure itself within which the roll-up curtain assembly 10 is installed is not shown in the figures for simplicity.

Respective upper edges of the upper and lower curtains 12, 14 are each provided with a hem.
Inserted within the upper hem of the upper curtain 12 is a first rod 18, while inserted through the upper hem of the lower curtain 14 is a second rod 20. Each of the first and second rods 18, 20 is fixedly coupled to the first and second curtain support columns 28 and 30 by conventional means such as mounting brackets, which are described below. The lower edge of the upper curtain 12 is also provided with a hem in which is inserted a third rod 22. Similarly, an intermediate portion of the lower curtain 14 is provided with a hem into which is inserted a fourth rod 24. Finally, the lower edge of the lower curtain 14 is provided with a hem into which is inserted a fifth rod 26. Each of the rods is preferably comprised of a high-strength, lightweight material such as aluminum or plastic and extends the full length of the curtain within which it is disposed. In addition, each of the rods is preferably in the form of a hollow tube to reduce its weight. In the embodiment shown in

The ends of each of the upper and lower curtains 12, 14 are further connected to a support-drive mechanism 40, which is shown in greater detail in the perspective view of FIG. 3. Support/drive mechanism 40 includes a support frame 42 comprised of first and second vertical side frame members 42b and 42c and an upper frame member 42a connecting the upper ends of the side frame members. A lower frame member 42d connects adjacent lower ends of the first and second side frame members 42b, 42c.
Support/drive mechanism 40 further includes third and fourth side frame members 50a and 50b disposed adjacent to and spaced from the first and second side frame members 42b and 42c, respectively.

Before buying a curtain rod, take a tape measure and measure the following parameters:

- Window width.
- Distance from the wall to the curtains, including radiators and pipes.
- Distance from side walls to window.
- Distance from ceiling to window.

The cornice should not only fit into the overall interior of the room, but also flatter the curtains. In this regard, the question arises of what to focus on when choosing. There are several guidelines.

A safe and convenient option is to match the color of the cornice to the color of the walls. In this case, it will merge with the main color and will not attract much attention to itself. This is especially suitable for bright curtains with an interesting design.

If you need to visually stretch the room and add height to it, choose cornices in the color of the curtains. This will not only change the perceived volume of the room, but also smooth out an interior overflowing with details.

A very common option is cornices in the color of the ceiling. Here you can pay attention to transparent options. They do not draw attention to themselves and fit perfectly into the overall design.

For lovers of risky decisions, cornices that contrast with the main finish are suitable.
Let's consider some options.

- A classic combination, although the interior is very bright – white and black.
- Light beige and cream tones against rich shades of dark chocolate, wenge, walnut and cherry.
- For dark shades of walls (black, purple, burgundy, brown), as well as for rich green and red, metallic cornices are perfect – silver, gold, bronze, brass.

Cornices include the following components:

- A pipe, string or rail – the element that supports the curtain.
- Fittings – finials, decorative plank.
- Functional elements – side caps, corner roundings.
- Fasteners for the cornice – brackets, holders.
- Fasteners for curtains – hooks, rings.

Additionally, you can purchase a rod for sliding the curtains, grips or clothespins for the cornice (relevant for static fastening of curtains, for example on an inclined window), and a divider.

By place of attachment, cornices come in two types: those mounted on the ceiling (ceiling cornices) and those attached to the wall (wall cornices).

Wall cornices are a very common option. The element that supports the curtain is attached to the wall with two brackets, sometimes three if heavy curtain fabric is to be used.

You shouldn't stand in the doorway regardless of the material, but knowing what it is made from can give you a better idea of whether you're likely to be trapped in that area if the door is closed or not.

Bathrooms and Closets

Bathrooms and closets pose the risk of entrapping you if the door frame becomes damaged and there's no other way out. If you become trapped, try to help rescuers locate you by banging on the wall or a pipe. Use a whistle if you have one, and use shouting only as a last resort.

Additionally, bathrooms and closets usually have shelves with items that could easily fall on your head, while leaving you with very little space to crouch and try to protect yourself.
You should crawl to the nearest safe location to avoid these dangers.

How You Can Prepare Your Apartment for an Earthquake

There are some easy ways that you can make your apartment significantly safer with little investment. Here are some suggestions for changes you could quickly implement:

- Move heavy objects to low shelves so they won't fall over.
- Secure tall furniture by using anti-tip tension springs.
- Use anti-slip mats to prevent items on shelves and in cabinets from slipping out.
- Practice Drop, Cover, and Hold until it becomes natural.
- Create a family emergency plan and meeting place.
- Buy or prepare an Emergency Survival Kit.
- Make sure your insurance covers earthquakes.

If you haven't already acquired an Emergency Survival Kit, it is imperative that you do so. Please read our article "When the Next BIG QUAKE Hits, You'll Need This Grab Bag" to learn more about where you can buy one, or how to make your own.

You can secure the contents of your home or office to reduce hazards. You should secure anything heavy enough to hurt you if it falls on you.
Here are steps you should take to secure your possessions.

Securing Tabletop Objects
- TVs, stereos, computers, lamps and chinaware can be secured with buckles and safety straps attached to the tabletop (which allows for easy movement of the units when needed) or with hook-and-loop fasteners glued to both the table and the unit.
- Glass and pottery objects can be secured with non-drying putty or microcrystalline wax.

Securing Kitchen Items
- Use child-proof latches, hook-and-eye latches or positive-catch latches, designed for boats, to secure your cabinet doors.
- Make sure your gas appliances have flexible connectors to reduce the risk of fire.
- Secure your refrigerator to prevent movement.

Anchoring Your Furniture
- Secure the tops of all top-heavy furniture such as bookcases and file cabinets to the wall. Be sure to anchor to the stud, not just to the plasterboard. Flexible fasteners such as nylon straps allow tall objects to sway without falling over, reducing the strain on the studs.

Protecting Yourself from Broken Glass
- Replace your windows with ones made from safety glass or cover them with a strong shatter-resistant film. Be sure you use safety film and not just a solar filter.

Securing Overhead Objects
- Ceiling lights and fans should be additionally supported with a cable bolted to the ceiling joist. The cable should have enough slack to allow it to sway.
- Framed pictures, especially glass-covered ones, should be hung from closed hooks so that they can't bounce off. Only soft art such as tapestries should be placed over beds and sofas.

However, if you have curtains that slide, they can be a nuisance.

There are many ways to stop curtains from sliding. You can use weights on the bottom of the curtain or you can use hooks on the top of the curtain rod.
You could also use Velcro strips on both sides of the curtain rod, or you could use magnets on both sides of the curtain rod.

The Best Way to Prevent Curtains From Sliding

The best way to prevent curtains from sliding is to use a curtain rod that is long enough to go all the way across the window. This will keep the curtains in place and prevent them from sliding.

Now that you know the rod height, you need to determine where to mount your end brackets. On a typical two-way rod, you will center the rod both left and right on the window. Subtract the width of the window opening from the rod width. Divide that number by two and you have the measurement from the edge of the window to the end of the rod. Measure out from the window edge on both sides to find your end bracket position. Be sure to measure between the two end bracket positions before you mount the brackets to verify that you have the right rod width.

If you are installing a one-way rod, all of the stack space will be to one side or the other. If the window is near a corner, measure from the corner of the wall to the rod width to find the placement of your end bracket. If the window is not near a wall, balance the rod width so that about 4” overlaps one side of the window and the remaining rod width is on the other side of the window.

Once your end brackets are in place, you need to install your center supports. Center supports are mounted at the rod height, spaced no wider than 30” apart.

When installing brackets on the wall, be sure to get a solid mount to the wall surface. If possible, try to mount into a wall stud for best support. If it is not possible to screw into a wall stud, use a high-quality drywall anchor such as a Molly bolt or toggle bolt.
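The end-bracket and center-support arithmetic just described can be sketched as a small helper. The 30-inch maximum spacing follows the text; the function names and sample numbers are illustrative assumptions:

```python
import math

def end_bracket_offset(rod_width, window_width):
    """Two-way rod: distance (inches) from each edge of the window
    opening out to the end bracket, so the rod is centered."""
    if rod_width < window_width:
        raise ValueError("rod width must be at least the window width")
    return (rod_width - window_width) / 2

def center_support_count(rod_width, max_spacing=30):
    """Minimum number of center supports so that no unsupported
    span along the rod exceeds max_spacing (inches)."""
    return max(0, math.ceil(rod_width / max_spacing) - 1)

# A 72" rod centered over a 60" window opening: each end bracket
# sits 6" past the window edge, and the 72" span needs two supports.
print(end_bracket_offset(72, 60))  # → 6.0
print(center_support_count(72))    # → 2
```

Measuring between the computed bracket positions before mounting, as the text advises, is a cheap sanity check that the arithmetic matches the actual rod.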
Do not use plastic wall expanders, as they will tend to pull out of the wall when weight is put on the rod.

Hanging and Balancing the Rod

Once the brackets are in place, insert the rod into the brackets by pushing the prongs at the ends of the rod into the end brackets. Center supports are fastened by twisting the cams with a flat-bladed screwdriver to clamp the top of the rod to the supports. Mount the cord guide to the wall at the sill height so that it will be hidden behind the drapery when it is hung.

Now that the rod is mounted, it needs to be balanced. To balance the rod, pull on the draw string until both master carriers slide back to the open position, then reach behind the minor master carrier and pull the cord loop running through the carrier over the locking hook (see illustration).

Once the cord is locked in place, you need to pull the extra cord out of the rod and tie it off.

One of the advantages of this method is that by adjusting the position of the buttonholes you can adjust the height of the curtain from the ground.

Peggy Hewitt, Curtains, 2017-09-22 11:55:40. The top side of the curtain can have hooks or a continuous loop, depending on the curtain rod you have. To make the loop, fold the top part of the curtain backwards and then sew it. Make sure the rod fits well into the loop before stitching it. For a curtain with hooks, there are two options to choose from. You can have hooks made from the curtain cloth using buttons, or you can opt for ready-made steel or plastic hooks that are readily available in the market.

Peggy Hewitt, Curtains, 2017-09-21 12:39:20. Acoustic blankets can help out greatly in areas where it's hard to hear yourself think over the car noise. Unfortunately, no set of curtains will make a room completely soundproof, even if you help out the curtains with acoustic blankets.
If you are looking for a completely soundproof room, there are alternatives like double-pane windows, special insulation in the walls, or even different types of drywall. However, if you're looking just to lower noise and get a better sleep while keeping the costs low, then curtains are the way to go.

Marianne Compton, Curtains, 2017-09-20 15:11:46. Long curtains are often selected to lend a distinguished, smart, and formal look to the room under consideration. The longer the length, the more stately the look. Window decorating can be a fun and effortless job if you are certain about the mood of the room where the windows are located. The discussion below will give you a clear idea of the benefits of long curtains and the kinds of rooms and windows that should be treated with this option.

Jeannine Castro, Curtains, 2017-09-19 23:15:50. Prolong the life of your drapes by regularly removing dust and dirt. Once a week you should vacuum the curtains using the upholstery attachment. Some curtains are light and delicate; in these cases vacuuming could damage them, so give them a good shake instead to dislodge dust particles.

Belinda Carson, Curtains, 2017-09-20 18:29:03.

Bolting down large appliances is considered a best practice because this will prevent the earthquake from toppling your expensive appliances. However, if this is not an option, what you can do is strap them to the wall. You should obtain the fasteners and straps/rope now, before an earthquake strikes.

6. Most houses, especially those that are ten years old or older, have minor structural problems such as cracks in the walls and the foundation. Small cracks in the foundation can become massive fissures during an earthquake.

If you're aware that your house has such defects, it is best to have these repaired as soon as possible by a professional.
A strong earthquake can easily destroy a small home if it is in a neighborhood situated near the epicenter of a quake.

7. Perform regular checks of the foundation of your home. Checking the foundation every two to three months is a good practice.

8. Most homes have a fairly large supply of insecticides and other household chemicals. These chemicals must be stored securely in a bolted-down cabinet that can be closed with a latch or lock in the event of an earthquake.

9. Remember the DCH code during an earthquake: D (Drop), C (Cover), and H (Hold On).

If you're using a generator during a disaster, make sure you use it safely by:
- Not overloading it by powering too much
- Not using it indoors, even if it's portable
- Following the safety recommendations for storing the generator and fuel
- Connecting and running your generator correctly

To learn more about generator use click here

Securing your home does not have to cost a lot of money or take a lot of time. Strapping down large pieces of furniture and making sure that wall hangings are secured into studs and hung in locations that don't create risk (like at the head of your bed) are just two steps you can take to reduce your risk of injury. Check out the links below for more detailed instruction on how to make your home safer.

Cabinet Safety (PDF)
Securing wall hangings (PDF)

Home Hazard Hunt

Take 30 minutes to go on a "Hazard Hunt". Imagine that the ground is shaking and you are in your "Quake Safe" location. Look around and think about the items in your house and whether they are likely to cause injury or damage.
The following checklist will give you some questions to answer and focus your efforts to ensure your home is as safe as it can be.

Was your home built before 1980?

Prior to 1980, building codes did not require builders to secure houses to their foundations. This does not mean that every house built before 1980 is "unsecured", only that it was not a requirement. If your home is not properly secured, it may be at increased risk of "slipping" off the foundation during a major earthquake. Retrofitting involves bolting your home to its foundation and providing shear/pony wall strength. The goal is to increase the structural integrity, but this does not mean that your home is "earthquake proof" (there is no such thing). Retrofitting does increase the chance that your home will withstand the shaking and be intact following an earthquake. The good news is that wood-framed construction, which is the primary building material in our area, performs well during earthquakes. If you are considering retrofitting your home, you may want to check out a class offered by the Seattle Office of Emergency Management and the Department of Planning and Development.

Tie top curtains are embellishments or small fabric strips attached to the curtain heads. They have the functional value of holding the curtain in place around the curtain rods. The outcome: an informal look, normally employed with lightweight fabrics.

Jeannine Castro, Curtains, 2017-08-10 23:46:47. Are you wondering how to safely and correctly clean your curtains? The curtains are an important feature of a room. You might have spent hours choosing just the right colour, style and design that perfectly fits your home, so you really should know the best methods for maintaining clean curtains. Your curtains will start looking a bit shabby after a year or more of use.
Taking good care of the curtains is essential to keeping your home beautiful and welcoming.

Belinda Carson, Curtains, 2017-08-08 23:17:01. The curtain rod passes through the holes in the curtain, holding it in place. Although the curtain panel looks immaterial or demure, this main feature of grommet curtains makes them fully functional. Because of this you can easily slide the curtains from one side to another with ease. Moreover, the curtains come with natural folds that make hanging them very easy.

Peggy Hewitt, Curtains, 2017-08-09 06:14:06. Purchasing ready-made curtains can lead to compromises on quality and design. You are also provided with a limited range to choose from, and therefore it is very hard to find curtains that suit your needs. You may find a curtain with your preferred fabric, but the pattern or design is not satisfactory. You can overcome these hurdles by making your own curtains. Sewing your own curtains does not necessarily require professional knowledge. Below are some steps you can follow to make your own curtains.

Marianne Compton, Curtains, 2017-08-09 14:53:37. To develop your room's style and feel, complement the room with curtains that have a similar look and feel, or bring life to an area with your chosen curtains. This means that if your room is a dark space due to lack of sunlight, it is advisable to use bright-colored curtain fabric to add life to the room. If you have a room with plenty of sunlight, choose a curtain fabric that can offer strong resistance to fading over time.

Drill a starter hole for each screw, or if you're using rosettes with a screw attached, drill one hole at each position mark. Install each rosette.

Large drapery hooks can hold most curtains, including heavy drapes. Push the pointed end through the top of the curtain in the back, and use the hook to hang the curtain from the rosette's post.
Curtains with large tab tops or tie tops can hang over the rosette posts after the rosettes are screwed in. Clip-on curtain rings provide an option to convert curtains with a rod pocket into curtains that will hang from rosettes. Position the drapery hooks at regular intervals corresponding with the rosettes, such as 8 to 12 inches apart. Slide a ring over each mounted rosette post to hang the curtain before screwing the rosettes into the posts.

Bed & Bath: Curtains & Accessories

Curtains & Curtain Rods: Drapes for Your Home!

Needless to say how important curtains are for every household. They magically change the entire look of a room simply by hanging from the windows and reflect unbeatable elegance on the whole. In fact, interior decoration is nothing without the proper drape arrangements for the windows and doors. A good variety of drape rods will always answer your needs and preferences when you are planning to buy one. These drapes look prettier if you bring matching ones to complement your room's interior decoration.

Perfect Decor Aid!

Curtains and curtain rods come in many types and designs, including crystal, brass, glass, wooden, copper, wrought iron and ceramic. The most durable ones are generally made of wrought iron. You won't feel any difficulty while installing them in your home. Moreover, they cast a royal spell all around your room. The latest design available among these drape hardware options is the swinging rod. As the name suggests, these are meant to hold two differently patterned drapes together so that you can see two different designs both from inside and outside of your house. Cafe-style windows, French doors, bathroom doors, etc. are the best places where you can use these stylish rods.
Other types of curtain holders or rods are café rods, sash rods, wide curtain rods and narrow curtain rods. Of course, each one has its own advantages and elegance. But always keep in mind the style you love the most and the design that best suits your room decor. For a fair selection, consider specific aspects like the style of the furniture you have, shades on the wall, window frame color, floor coloring and the tidbits of your room. The right one always enhances the beauty of the room you are living in!

What's hanging over your bed?

Earthquake experts advise against framed art that could fall on you in an earthquake and either shatter or bonk you out of consciousness.

But that leaves a major focal point in any bedroom totally blank.

Our advice is to think about this conundrum in terms of what might even be a little pleasant if it fell on you in a shakeup. Maybe a series of graphically striking wall hangings that come with their own hangers, in a deep, shimmery silk that's soft on the head.

(You can get these at Viva Terra for $98 - $155.)

The foundation types are:

❏ slab-on-grade with integral footing (the footing and slab are a single unit);
❏ crawlspace or basement foundation wall system consisting of a continuous concrete or reinforced masonry wall system;
❏ crawlspace or basement foundation wall system consisting of a wood stud cripple wall or un-reinforced masonry wall;
❏ pier or pile foundation system consisting of wood, concrete, or steel.

Note: A professional engineer should always be consulted when any structural improvements are being considered.

Refer to the following checklist for additional actions that should be taken, regardless of your building's configuration, to protect your employees, customers and visitors, as well as your building, contents and inventory.
Check for:

❏ Windows, skylights and doors with either tempered glass or safety film applied to the interior surface of the glass to reduce the chances of the glass shattering. Check for etching in the corner of the window that says "tempered" or "laminated." Safety film is an adhesive film applied to the inside of the glass.
❏ Natural gas lines with flexible connections and an automatic shut-off valve. A flexible gas line is not rigid. It is made of a material such as rubber or plastic that you can bend yourself. This reduces the chances of the line rupturing, resulting in a fire. The automatic shut-off valve is typically installed near the gas meter.
❏ Flexible supply line to toilet(s).
❏ Flexible couplings on fire sprinkler system.
❏ Major appliances, such as boilers, furnaces, and water heaters, braced to the wall and/or floor such that the appliance will not overturn or shift in the event of an earthquake.
❏ Hangers (usually strips of sheet metal or stiff steel rods) less than 12 inches long that support your mechanical and plumbing systems. Longer hangers may allow too much sway during a tremor.
❏ Computer and other electronic equipment secured to the floor or desk with braces, Velcro, or some other means of attachment so it will not overturn.
❏ Suspended ceilings braced to the structure to limit the amount of displacement during an earthquake.
❏ File cabinets with locks or latches that must be released manually in order to open the drawers. Locks or latches will keep cabinet drawers from swinging open during an earthquake and spilling contents.

When choosing, take into account the weight of the material from which the curtains are made. For example, for a bedroom people usually choose curtains made of dense material that does not let sunlight through. Here it is better to opt for pipe cornices with three brackets. Such structures have sufficient strength.
Eaves with a turn will effectively close the space between the wall and the curtain.\nFor tapestry, which is gaining popularity as an interior item, the cornice will be the easiest and most convenient way to hang it. Trumpet cornices in the desired style and strings are also suitable. Filament curtains or muslin can be hung on different structures and in different ways. The simplest of them, perhaps, will be hanging the threads directly on the pipe.\nIf you sew the threads to the curtain tape, then the choice becomes much wider - profile and baguette cornices are suitable here, and filament curtains on a string cornice will look great.\nThe LED lighting of the cornice, which creates the effect of curtains floating in the air, has become especially chic. Cornice lighting will only look good on ceiling cornices. Particularly interesting is the variant of hidden illuminated cornices.\nFor the living room and hall, due to the huge number of styles, a baguette is suitable.\nThe airiness of curtains on a string is perfect for curtains on a balcony or loggia. However, in the case of a double-glazed window on the balcony or even the kitchen, you can do without a cornice at all. Roller and roman blinds are gaining popularity; if they are used, you can do without drilling.\nIt is much more difficult to choose a curtain rod for a roof window than a regular one. A roof window is understood to be a tilted, oblique or trapezoidal window. For windows of this type, round and rail options are best suited. The main problem is fixing the canvas. In the case of slanted windows, the curtains will hang in the middle of the room, not performing their function. And with a sloped top, they will simply roll down to the bottom edge.\nOn sloped windows, in the case of round cornices, a second, additional rod is used. It is installed on the lower edge of the window.
The length of the fabric is chosen so that it does not press against the window and does not sag too much.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "Years ago when I posted a tutorial for stenciled drop cloth curtains, I included my alternative method for hanging them. For some reason this little idea struck a chord with a lot of readers (and appalled one commenter on Apartment Therapy who apparently takes window dressing VERY seriously 🙂 ).\nI often get inquiries about how the curtains are hung, so I thought I'd break it down into its own little tutorial for those of you who might have a similar dilemma.\nThis post may contain affiliate links for your convenience. Read my full disclosure policy here.\nI absolutely love all the light from our two bay windows in our dining room, but they are super awkward. The windows are actually different styles and one of them has a bunch of funky wall angles, plus a built-in cabinet under it, that make using bay window rods with corner joints a bit of a nightmare.\nI could have just gone with roman shades on all the windows but I didn't want to block any natural light in here, plus I figured curtains might disguise the weird mismatched window situation, and I'm a big fan of simple full length curtains anyway.\nMany years ago I saw a similar idea in a design show house, and have since used it in our last three houses: curtain rings hung on cabinet knobs.\nThis method allows you to hang the rings on various walls with an angle situation like I have, and lets the curtain curve as needed.\nA quick tutorial:\nI took a knob with me to the hardware store to find a hanger bolt that would fit. These 8-32 x 1-1/2 ones were perfect for the cabinet knobs I bought.\nScrew the knob and the hanger bolt together.\nMOUNT THE KNOBS\nUse a level to draw a very light line with a pencil,\nthen measure, mark and drill pilot holes for your hanger bolts.
Make sure the pilot hole is smaller than the screw size. Erase your guide line.\nHANG THE CURTAINS | RING CLIP METHOD\nWhen I hung the stenciled drop cloth curtains a few years back, I used rings with clips, folding down the top of the panel so it was the right length.\nThe clip method is quick and easy, but if you want a more finished look, keep reading.\nThe even better way….", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-1", "d_text": "Sara Holder Curtain, 2017-08-19 21:37:27. Don't get scared off and quit yet - this is all easier than you might be thinking. You'll probably just know what rod is right when you see it. After all, it will probably catch your eye because it's your style. So hang in there 'cause now's the fun part: shopping!\nAudra Odom Curtain, 2017-08-23 02:21:53. Cafe Curtains - Traditional Home: If a traditional home look is what you want, the brass rods with finial will be great for you. Or you can use spring loaded rods for that matter; it all depends on your personal taste. Brass rod is usually hung on small brackets, which some find cute, but you must know that they will be visible on your window frame. So if you don't like that, skip it. Choose spring loaded rods because they will fit on your window frame without being visible. The pocket on your cafe curtain will make sure of that.\nNona Oneill Curtain, 2017-08-21 04:28:17. Or you can individually select each part for your curtain rods. Thanks to our access to online stores today you can easily customize your curtain rods. Just mix and match rods, finials, rings, pinch clips, brackets, holdbacks, tiebacks. And don't hold back (oops!) - go for it all!\nAudra Odom Curtain, 2017-08-21 09:06:16. Blinds can need weekly to monthly cleaning, so it's important to consistently keep on top of them. This applies whether you have chosen white fabrics or louvolite fabrics, which can also add energy efficiency.
Blinds can attract large levels of dust, making it difficult for those with dust allergies, so a weekly cleaning session will benefit them massively.\nMarquita Sanford Curtain, 2017-08-22 12:29:54. While sewing your curtains make sure that they are not too long-they should just clear the ground. Although long, flowing curtains are beautiful to look at, the extra length will pick up dust and will need cleaning often. Fabrics differ so be sure to check if your choice drapes well.", "score": 16.666517760972233, "rank": 79}, {"document_id": "doc-::chunk-0", "d_text": "Given that we live in a zone where someday, eventually, all of our lovely things are likely to fall off the walls and shelves, we've always avoided hanging art above the bed. Even making use of picture moldings, like Mark and Sunrise Ruffalo do in their bedroom pictured above, strikes us as pretty risky, even if it looks great. (Maybe celebrities are surrounded by forcefields to protect them from falling objects.) But surely there are ways to decorate the space above the bed without risking your life...\nThis string of family photos (spotted on Decor8) looks pretty, though we're not entirely sure if we'd feel right snoozing under all of their watchful gazes. It's an idea easily translated to other artwork, however, like little affordable prints from a site like Tiny Showcase, or photos you've taken in your travels.\nThe right piece of art, of course, might need no frame. Somehow this antique poster, which we came across on I Suwannee, looks perfect simply tacked into the wall; with a frame, the rustic vibe might be lost. (Of course, the chandelier poses an earthquake risk as well, but even if you imagine it gone the room is lovely.)\nA mural or wallpaper headboard is another good choice. Now if only we could frame our bed with floor-to-ceiling books without fear (of earthquakes and dust allergies)!\nWhat do you think? 
What are your favorite choices for earthquake-safe bedroom art?\n(Images: Domino, Decor8, I Suwannee)", "score": 15.758340881307905, "rank": 80}, {"document_id": "doc-::chunk-1", "d_text": "After a site is chosen, the following additional construction techniques should be utilized.\n1. A solid continuous foundation must be poured to a depth at least 12 inches below grade for the base of the home. The FEMA Homebuilder’s Guide to Earthquake Resistant Design and Construction explains that an effective foundation provides the following:\n- Continued vertical support\n- Friction and passive bearing at the soil-to-foundation interface to minimize movement and damage\n- Anchorage at the foundation-to-house interface to minimize movement and damage (see Figure 3)\n- Strength and stiffness sufficient to resist both horizontal loads and vertical loads resulting from racking and overturning of bracing walls within the house.\n2. The walls provide the primary lateral resistance to earthquake loads (see Figure 4).\n- Use foundation bolts to anchor the exterior walls to the foundation through the wall’s bottom plate and foundation sill plate at a minimum of two bolts per plate.\n- Install metal connectors to transfer overturning forces from walls above to walls below.\n- Install hold-downs at each end of the wall to prevent uplift and overturning.\nAs Figure 4 shows, during a seismic event, forces are applied to a home at several points. Strengthening one part of a home may put added stress on another. Walls are subject to racking, sliding, and overturning forces. Anchors and hold-downs can help mitigate these potential risks.\n3. Secure appliances and equipment within the home. In addition to tying down the components of a building’s structure, it’s also important to secure the equipment within the building. 
See the guides listed below under Equipment for information on securing fuel tanks, appliances, water heaters, and cabinets, and installing an automatic shutoff for the gas line.\nRetrofitting a Home to Increase its Earthquake Resistance\nRetrofitting an existing home to withstand earthquakes starts at the foundation. Some older homes are not bolted to their foundation and could slide off the foundation during an earthquake. Preventing the likelihood of this horizontal displacement by bolting the home to the foundation should be the first retrofit priority. Walls can be affected by seismic activity when bracing materials are lacking. Bracing materials for walls can include plywood, OSB sheet panels, prefabricated shear panels, and high-density spray polyurethane foam. Some homes with crawlspaces have cripple walls, which are short walls that bridge the gap between the foundation and the first floor of the house.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-0", "d_text": "Now I receive some great e-newsletters from them with tips and information. I always knew you were supposed to hang curtains higher but in this article they give you the why's behind it and some real measurements to use in case you are not good at judging the height for yourself.\nMy living room curtains did look like this.\nNow they look like this!\nRudy hung them higher and I purchased larger rods so that I could pull the panels away from the window. It does let more light in and make the room seem larger. They are too short...but that is another project!\nHere is the information on the why's and how to's behind hanging curtains from Calico Corners:\nFor answers to this question, we've gone to Julie Morris, director of custom products and programs for the Calico stores--and one of the top window treatment experts in the industry. "In general, we like to install rods high so that there is a longer sweep of fabric," she notes.
\"This makes the windows appear taller and the room looks as if the ceilings are higher.\"\nThe standard mounting for a curtain rod is at least 4-inches above the window and a minimum of 2 to 4-inches beyond either side of the window frame.\nHowever, you can mount the rod as high as you like--to the crown molding or to the ceiling, if you wish. \"If there is decorative crown molding in the room, consider splitting the distance between the top of the window frame and the bottom of the crown molding for where to mount your rod,\" advised Julie. \"In most cases this will add height but show off the crown molding nicely as well.\"\nTo make a narrow window look wider, add 6 to 8-inches to each side as a rule of thumb for rod width, suggests Julie. \"Usually, the reason to extend the width for drapery stackback is to allow as much light as possible into the room,\" she notes. To calculate the stackback for pleated draperies, add 1/3 of the frame-to-frame width measurement to the window width to find the appropriate rod size.\nFor example, if the window measures 48\" from frame-to-frame, 1/3 of that is 16-inches; added to 48-inches, that totals 64-inches for a rod size that allows the draperies to stack off the window.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "How to Use Sheer Curtains to Frame a Bed in a Girl's Room\nThe timeless beauty of the bed canopy began with function. Bed canopies were once used to create privacy, provide warmth and protect the sleeping area from insects and debris. Over time, canopies have departed from their functional beginnings, and are now generally used solely for aesthetics. Sheer curtains can be hung to frame a bed using several methods to create a look of serene luxury that is well-suited to a girl’s bedroom. 
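The stackback rule of thumb quoted above (rod width = frame-to-frame window width plus one-third of that width) can be sketched as a quick calculation. This is a minimal illustration of the arithmetic in the Calico Corners passage; the function name is invented, and the 1/3 ratio is that passage's rule of thumb, not a universal standard:

```python
def rod_width_for_stackback(frame_to_frame_width):
    """Estimate the rod size that lets pleated draperies stack off the glass.

    Rule of thumb from the passage: add one-third of the frame-to-frame
    window measurement to the window width itself.
    """
    return frame_to_frame_width + frame_to_frame_width / 3.0

# Worked example from the text: a 48-inch window calls for a 64-inch rod.
print(rod_width_for_stackback(48))  # 64.0
```

The result is in the same units as the input, so the sketch works equally for inches or centimeters.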
The hanging style you select will have a major impact on the overall look and feel of the room.\nCanopy of Cuteness\nA bed frame with an attached canopy makes it easy to surround the sleeping area with sheer curtains to add style and beauty to a girl's room. Sheer curtains can be draped or wrapped around the canopy to create several different looks. By loosely draping the sheer panels over the top of the canopy and gently twirling them down along each vertical pole you can create a frame that helps emphasize the design of the canopy and the bedding. Long, narrow, sheer panels crossing over the top and hanging down over each pole create an elegant look that emphasizes each corner of the bed. When sheer curtains are hung over the long sides of the bed and swept open with decorative tie backs, it creates a classic frame that imparts a sense of mystery by partially concealing the sleeping area.\nSheer curtains mounted to the ceiling can help create a versatile canopy over any bed. There are several types of drapery hardware available that can be used to hang the sheer curtains over a girl's bed. For the most versatile look, ceiling mounted L-shaped rods can be joined to follow the rectangular outline of the bed, with sheer curtains hung from rings. This fully draped bed allows you to easily adjust the sheer canopy from completely enclosed to open, with the drapes drawn back along the headboard. Another dramatic way to frame a bed is to use similar rods that form a smaller rectangle centered over the bed. The sweeping sheer panels will create a dramatic look that works well in a more formal girl's room.\nFit for a Princess\nWhen mounted to the wall over the headboard, a bed corona canopy can crown a girl's bed to impart the look of a royal resting place.
Bed coronas are crown-shaped fixtures that come in a wide array of materials and styles.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-3", "d_text": "Safer alternatives, such as PEVA (polyethylene vinyl acetate) or EVA (ethylene vinyl acetate), are available and provide a similar water-repellent function.\nTo prevent the growth of mold and mildew, it is advisable to set shower curtains outside of the shower area after each use to allow them to dry thoroughly. Ideally, look for quick-drying fabrics that are naturally breathable, such as linen or cotton. In spaces with high moisture, like bathrooms, these materials are recommended as they are less prone to mold development.\nSimilar quick-drying fabrics can also be used for under-sink curtains. These curtains can help conceal clutter or storage areas and should be secured at the top and bottom if needed to maintain a neat appearance.\nIn conclusion, curtains not only provide privacy, control light, and protect furniture but also significantly impact the aesthetic appeal of a room. When choosing curtains, consider how they complement the existing fabrics in the space and opt for natural materials whenever possible. Practical considerations, such as curtain length and alternative window treatments, should also be taken into account to ensure optimal functionality and style. 
Lastly, when selecting shower curtains, prioritize safety and quick-drying materials to maintain a healthy and attractive bathroom environment.\n- Consider safety and ease of maintenance when choosing shower curtains.\n- Opt for untreated and PVC-free curtains and liners.\n- Safer alternatives like PEVA or EVA are available.\n- Set shower curtains outside of the shower area to allow them to dry thoroughly.\n- Look for quick-drying fabrics like linen or cotton.\n- Use similar quick-drying fabrics for under-sink curtains.\n- Secure under-sink curtains at the top and bottom for a neat appearance.\nCheck this out:\nFrequently Asked Questions\nIs it OK if your curtains don’t touch the floor?\nYes, it is perfectly acceptable if your curtains do not touch the floor. While the general recommendation is to have curtains that reach the floor, shorter curtains can also be stylish and functional. The decision ultimately depends on personal preference and the overall aesthetic you wish to achieve in your space. Whether you choose longer curtains for a more formal and elegant look or opt for shorter curtains to create a more casual and contemporary feel, both options can work well and enhance the overall design of your room.\nHow close should curtains be to the floor?", "score": 14.73757419926546, "rank": 84}, {"document_id": "doc-::chunk-2", "d_text": "Thank you to Ana and the Red Cross for the information! Special note: if you're in the top bunk of a bunk bed during an earthquake, stay in bed and put a pillow over your head until the shaking is over. On the other hand, if you're on the bottom bunk, get off and find cover elsewhere, just in case the top bunk collapses. 
If you didn't know, now you know.\nPosted by Daniel Domaguin at 9:01 PM", "score": 13.897358463981183, "rank": 85}, {"document_id": "doc-::chunk-3", "d_text": "Use framing anchors with adequate shear strength to connect the cripple wall to the rim joist above and to the sill plate below.\n- Reinforce roof-ceiling systems.\n- Add shear bracing members to roof trusses if there are not bracing members already in place or if there is insufficient shear bracing.\n- Add a roof-wall connection system with strapping if there is not one already in place. The primary load carriers within the roof-ceiling system are the roof sheathing and the fastening. Normally the roof-ceiling system will deflect the load from an earthquake horizontally and transfer the energy to the supporting walls, and then to the foundation.\n- Use metal connectors to connect every truss or rafter to the top plates of the exterior walls.\n- The metal connectors from the roof structure to the top plate and the top plate to the stud must be on the same side of the wall.\n- Replace any damaged wood framing.\nThe following guides provide information on making the components and assemblies of a home more resistant to earthquakes.\n- Bracing of Roof for Hurricanes, High Winds, and Seismic Resistance\n- Retrofit of Existing Roofs for Hurricane, High Wind, and Seismic Resistance\n- Seismic and Insulation Retrofits of Solid Masonry Walls\n- Wall Bracing for Hurricane, High Wind, and Seismic Resistance\n- Insulated Concrete Forms\n- Structural Insulated Panel or panel construction walls\n- Seismic and Thermal resistance in Slab-on-Grade foundations with Turned-Down Footings\n- Continuous load path provided with connections from the roof through the wall to the foundation\n- Seismic Resistance and Thermal Performance of Basement, Crawl Space, and Crawl Space with Cripple Wall Foundations\n- Retrofit Existing Crawl Space Foundations with Cripple Walls to Increase Seismic Resistance and Thermal Performance\n- 
Storage space provided for emergency supplies\n- Water heater elevated and secured\n- Automatic gas shutoff valves\n- Appliances anchored for disaster resistance\n- Self-locking cabinets, drawers, & doors\n- Fuel tank anchored\n- Outdoor HVAC Equipment Elevated and Secured\n- Design for earthquakes\nIn addition to these guides, the Solution Center contains an extensive collection of references, training videos, images, and webinars on disaster-resistant construction related to earthquakes. These can be found by searching “earthquake” in the Solution Center.\nAccess to some references may require purchase from the publisher. While we continually update our database, links may have changed since posting.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-3", "d_text": "This way, once we hung each of the curtain panels, we could cheat them both over to the right (blocking a bit of the window on the left side, but adding a ton of balance and polish to the room):\nAnd we also mentioned in our shopping post that we snagged our simple oil-rubbed bronze curtain rod along with two packs of curtain rings on clearance at Target for less than $12 total. We love the height and the elegance that the shot of dark color brings to the wall, and love that it echoes everything from the mocha finish on the floor to a few of the darker wood accents that we’ll be bringing in to keep things from getting too sugary sweet and matchy-matchy.\nPlus the clip-on curtain rings are actually something of a safety feature. Remember how we mentioned that someone could hang on those curtains without the rod coming down thanks to the use of some heavy duty anchors? Well we also realized that using clip-on curtain rings would allow for just the fabric panels to pull down if anyone got too rowdy and tried to swing from them (while the rings and the rod would most definitely stay put). 
We even tested them out by tugging on them a bit, and although it took pretty much all of my pregnant adult weight, sure enough the fabric was released from the rings and fluttered lightly to the floor while the rod and the rings stayed nice and securely in place on the wall.\nAnd as someone who has never used curtain ring clips before I just have to sing their praises. Not only are they nice little secret safety features, they also create such perfect little “waves” in the panels which result in such an amazingly high end look (and best of all, there’s no rod-pocket required, so you can hang any panel of fabric without worrying about extra sewing or loop-making).\nUpdate: This P Kaufmann fabric seems to be discontinued now, but here’s an affiliate link to another fun oversized floral print on amazon for anyone looking for something similar.\nOh and we can’t forget our tiny little blue closet (thanks to John’s cute idea to bring the aqua color from the ceiling into the mini enclave for fun). Doesn’t the curtain panel add some nice pattern and sweetness to a closet that was formerly pretty bleak looking?\nMaybe we should refresh your memory with a before pic:\nIt’s looking better already, right?", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "Curtains being an essential part in home design process, modern net sheer curtain is one of them. Curtain hanging need to pay attention to its geomancy. How much do you know about geomancy? It is said that curtain hanging occasion is good or bad for our heath and emotion. So to be more clear about curtain geomancy, here are some knowledge for reference.\nEffectively alleviate light and reduce noise\nThere are many noise reducing curtains in the market use to reduce the most unwanted noise. high buildings everywhere in the big city, there is building in front your home is commonly. And glassed window reflect lots of light is a big problem. 
So blackout curtains are necessary for home decoration, and a modern net sheer drape can effectively soften the light.\nSaving you from the sharp evil\nPaying attention to curtain hanging geomancy is said to save you from the "sharp evil" problem, which might break your family apart through quarreling. Some people do not believe this and consider it superstition. Believe it or not, just keep it in mind as a precaution.\nThird: wealth protection\nMost of us ignore the hallway wind, meaning wind that blows in from outside the home and then goes out through the open window. It is said this is bad luck for your wealth, since the wind blows all your money out. So a curtain hung in front of the window can protect your wealth from loss.\nAll of the above is curtain geomancy knowledge; keep it in mind and apply it to your window treatment. The same goes for a net sheer curtain, even in a small room. Geomancy is said to be important for warding off bad luck, and for your home's pretty decoration.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "Some fabrics are too heavy and impossible to manoeuvre, while some can be too light and just don't fall very well. Picking the right fabric is all about understanding what is important to you.\nSome of the better fabrics that you can choose include linen, faux silk and velvet. People love the feel of silk as well, but it tends to wear out pretty easily and doesn't last long. Some fabrics like velvet and suede provide natural insulation. This is useful in colder climates. In any case, you can always line any fabric with an extra layer of insulation if needed.\nSelecting the right fabric also has a lot to do with the look of your bedroom. Lighter fabrics allow more natural light into the room. Heavier fabrics, on the other hand, are good for blocking light and will further darken a dull room.\nColours are not the only things that define how a curtain will look in a room.
Just like wearing a blazer is not the same as wearing a tuxedo, there are many different styles of curtains. The way you hang your curtains will go a long way in defining how they look in your bedroom.\nYou need to consider the ambiance of your home. Are you going for a chic modern feel or do you like the traditional elegant styles. Whichever choice you make, and there are plenty, just make sure that it fits with the theme. You don’t want to end up with a Victorian style curtain adorning a room filled with IKEA furniture.\nLength and Width\nTypically, the length of your curtains should be just enough so that they hang half-an-inch above the floor. This works well if you plan on opening and closing them often. If you plan to keep your curtains closed all the time, you can have longer curtains that puddle up on the floor. This can add a nice, sensual feel to your bedroom.\nAs for the width, it depends on the kind of look and functionality you want. You can go for a compact look by keeping the width at 1.5 times the width of the window. This is best for stationary curtains. For an elegant look with ample fabric for you to drape easily, go for a width around 2 times the window width.", "score": 12.364879196879162, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Safety Of Bunk Beds In Earthquake – Jump to Bunkbeds and earthquake safety – Making bunk beds safer for earthquakes. Yesterday’s (10/19) earthquakes really spooked us and the kids insisted on sleeping in our bed. 
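The width guideline quoted above (about 1.5 times the window width for a compact look, around 2 times for ample, easy draping) reduces to a one-line calculation. A minimal sketch of that rule; the function name and the example window size are invented for illustration:

```python
def curtain_fabric_width(window_width, fullness=1.5):
    """Total curtain fabric width for a given window width.

    fullness=1.5 gives the compact look suited to stationary curtains;
    fullness=2.0 gives ample fabric that drapes easily.
    """
    return window_width * fullness

# Hypothetical 60-inch-wide window:
print(curtain_fabric_width(60))       # 90.0  (compact look)
print(curtain_fabric_width(60, 2.0))  # 120.0 (fuller, elegant drape)
```

Split the result across however many panels will hang on the rod to get a per-panel width.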
The beds are the all-wooden type that can separate into two beds.\nThey're secured with four-inch metal pins between the foot and top post of each upper and lower bunk post.\nWhile some places are safer than others, unless you're in flight when an earthquake strikes, there's no absolute safe place on the ground, bunk beds notwithstanding. Surviving an earthquake has more to do with magnitude and happenstance than anything else.\nWhat would sleepovers and summer camps be without bunk beds? According to a new national study, they might be a little safer. For the first\nMany families use bunk beds because they are an easy way to save space. However, an average of 36,000 bunk bed-related injuries occur every year to\nThe SAFE-T-PROOF Bunk Bed Fastening System easily secures all types of bunk beds, ensuring your children are kept safe in the event of an earthquake.\nMy brother's bunk would fall on me. Most bunk beds at that time, mid 70's, did not require or provide safety braces, I believe it was a state\nI'm curious what I should do if an earthquake happens and I'm in my bed. You mean like a bunk-bed where the bed is on very long legs?\nNew research suggests bunk beds may be more dangerous than many In 2000, the U.S. Consumer Product Safety Commission addressed\nGood gravy, there are some lousy answers here. I've lived in California my entire life (I'm 53), and have felt a lot of quakes.
If you live in", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-6", "d_text": "Earthquakes are a natural phenomenon that occurs when large sheets of rock under the soil suddenly release accumulated pressure and force.\nYou see, the Earth as a planet is always in a state of change. There are several layers of rock under the soil that we know so well; over time, shifts occur in these rocky layers, and when pressure has to be released, the ground that we walk on can shake so fiercely that buildings, roads and bridges crumble and collapse.\nCountries like Japan are only too familiar with the devastating power of earthquakes. We can never know exactly when a powerful earthquake is about to strike a town or city, so it is best to be prepared for its sudden manifestation.\nAgain, if you have never experienced such an occurrence before, don’t rest easy – because according to studies, 45 states are actually at moderate risk for earthquakes in the United States. Though this phenomenon has historically been associated with the West Coast, facts show that other states are equally at risk.\nIf a possible earthquake has been announced in your area, you must start preparing for the potential disaster:\n1. Make sure that you have an emergency kit ready. They are an invaluable item and will form the basis of your well being during the first 72 hours of any disaster\n2. Shelves and other wooden furniture should be secured to prevent them from toppling over during the actual earthquake.\nHeavy objects should be placed as close as possible to floor level. During a quake, those heavy items will most likely fall, and that can be dangerous to you and your family. Imagine vases and heavy books falling all around you; earthquakes can easily do that to your home.\n3. If you have frames and other hanging decorations near couches, remove these immediately. Place these near the floor level and away from beds, seats, couches, etc.\n4. 
Lighting fixtures should also be secured and braced in the advent of an earthquake. If you don’t have to brace your fixtures at home, remove the light bulbs instead. It’s one thing to clean up fallen fixtures – it’s a completely different thing when you have to clean up shattered and powdered light bulbs. The dust from shattered light bulbs is poisonous – cover your nose and mouth if you have to clean up stuff like this.\n5. Furnaces and large appliances should be fastened or bolted down to the floor if possible.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-1", "d_text": "– Tie Downs\n– Old verticals can have a loose hanging cords and chain – again a strangling risk. A simple tie down can have both a cord and chain inserted in, and attached down.\n– Roman Shade\n– The exposed outer cords on the back of the shade can pose a danger. Clips can easily replace the cords so no one can get caught in them. (Luckily – new chord innovations have made it impossible to get stuck – the cords don’t stretch/move to allow any harm!)\nThe best part about all of these retrofit fixes is that they are available online – www.windowcoverings.org – for free! The more we know about it, the more we can help keep our kids safe!", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-2", "d_text": "Then it was time for my trusty Heat N’ Bond iron on hem tape (I always grab the “ultra hold” variety). You may have seen us using it in this crib skirt tutorial from a while back and we’ve also used it to hem all the white Ikea curtains that we have hanging in the rest of the house. In short: I’m a hem tape black belt (the irony is that John does all the clothes-ironing in the house). Anyway it’s great stuff for leaving a polished and clean-looking edge (a lot more reliable then me with a sewing machine!) and it’s even washable and super cheap (we grab ours for a few bucks a roll at Michael’s). 
So I whipped out the ironing board, fired up the iron, laid out my big eight foot long fabric panel and had my scissors and hem tape on hand.\nAll it took was an easy-iron hem on each of the four sides of my fabric (for step by step hem tape instructions, just check out this post). Then I had a nice finished panel (without any rod loops or tabs) that I could clip up using my cheap-o oil-rubbed bronze curtain rings and rod from Target. Just look at how seamless and perfect that edge is! Much more even and less bunchy than anything I could sew…\nThen I tagged John to get to work hanging the curtain rod with heavy duty anchors (so it’ll never come toppling down, even if over 100lbs of force is used) while I created a third curtain panel for the closet (this one only needed to be seven feet long). I also made a little rod pocket at the top of this panel (I just positioned hem tape about 4 inches below the edge of the fabric and ironed the fabric to that line of hem tape, which created a nice loop of fabric). Meanwhile John was already executing my let’s-cheat-our-off-centered-window-so-it-looks-more-balanced plan.\nThis angle gives you a better idea of what we were dealing with. See how the window is shifted a bit too much to the left? Well it’s nothing a curtain rod and some billowy floor length curtains can’t totally solve. I asked John to hang the left curtain rod support hook only about four inches wider than the trim on the left side of the window but requested that he hang the right curtain rod support hook about fifteen inches wider than the trim on the right side of the window.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-5", "d_text": "Upper curtain 12 further includes a lower hem 12 b within which is inserted the third rod 22. Similarly, lower curtain 14 includes upper, intermediate and lower hems 58 a, 58 b and 58 c within which are respectively disposed the second, fourth and fifth rods 20, 24 and 26. 
A pair of threaded coupling pins 70 a and 70 b fixedly attach the third rod 22 to the lower hem 12 b of the upper curtain 12. Thus, when the third rod 22 is rotationally displaced, the upper curtain 12 is either rolled up onto or is unrolled from the third rod. Similarly, threaded couplers are used to fixedly attach the fourth rod 24 to the intermediate hem 58 b of the lower curtain 14 to ensure that when the fourth rod is rotationally displaced, the upper and lower sections 14 a and 14 b of the lower curtain 14 are either rolled up onto or unrolled from the fourth rod. Attached to the fifth rod 26 as well as to the lower hem 58 c of the lower curtain 14 is a protective sleeve 60. Protective sleeve 60 is attached to the fifth rod 26 and the lower hem 58 c by means of threaded coupling pins 62 a and 62 b. Protective sleeve 60 is preferably comprised of a lightweight, semi-rigid and durable material such as PVC to afford protection for the lower edge of the curtain. Also shown is the manner in which drive shaft 54 b is securely coupled to an end of the fourth rod 24. The narrowed end of the drive shaft 54 b is telescopically inserted in an adjacent end of the fourth rod 24 and the connection between these shafts is maintained by means of threaded coupling pins 68 a and 68 b. A similar connection arrangement to an upper drive shaft is provided for attaching the drive shaft to the third rod 22, but details of this connecting arrangement are not shown in\nThe lower curtain 84 is comprised of an upper curtain section 84 a and a lower curtain section 84 b. An upper edge of the upper curtain section 84 a is provided with a hem along the length thereof into which is inserted a third rod 90. 
Similarly, the lower end of the lower curtain section 84 b is provided with a hem into which is inserted a fifth rod 94.", "score": 8.750170851034381, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "The freezer door was closed, though, and I couldn't come up with a scenario that would dislodge and spray that much ice from a frost-free freezer, and close the door, regardless of any conceivable shaking pattern. I'm glad that I stood still and thought about it instead of rushing into the room: the \"ice\" was the disintegrated remains of what had been a large, heavy, lead crystal vase that had been on top of the fridge. I regret the loss of the vase, but it taught me a valuable lesson.\nThat sparkling carpet of crystal shards was between me and the apartment door. Escape from the building in the event of a really serious quake would not have been made easier by starting with a barefoot walk over broken glass--glass that I might not even have seen had it been midnight with the power out, rather than early morning. Now I don't put potentially dangerous things where they might be knocked down, and I know very well why it's a good idea to keep footwear with strong soles near the bed. However, I might still have that vase if I'd given earthquake preparedness the careful thought it deserves, a lot earlier.\nSome things one ought to do, such as keeping a flashlight near your bed or trying to situate the bed away from windows that might break in a quake (or in a typhoon, for that matter), are pretty obvious, and may not require much effort. Securing furniture so that it's less likely to topple or disgorge contents (crockery, for example, or your wine or whiskey collection), may involve more trouble and expense but is nonetheless worth it for the damage you can avoid to yourself and your property. 
Wedges to place under the front edges of wardrobes and bookcases, or hardware to secure them to walls, are inexpensive and easily available at do-it-yourself shops or hardware stores. This is particularly important if you're unable to situate such furniture away from where it might fall on you...small rooms limit the placement options, but \"crushed by a chest of drawers\" isn't what I'd want for an epitaph.\nComputers, TVs, and other items of electronic equipment are certainly expensive enough to justify the rather small investment in time and money required to prevent them from being knocked off your desk or racks (and don't forget to secure the racks, too).", "score": 8.086131989696522, "rank": 95}, {"document_id": "doc-::chunk-0", "d_text": "One of the questions I get asked most frequently is how I hung the canopy in my bedroom. I'm going to try my best to explain how I did it. I will admit it's a little ghetto, but it works and has held up for a few years now. On a side note, looking at this picture makes me realize I need to update my home tour, I'll work on updating that soon.\nI started with four curtain panels (2 pairs) from IKEA. I believe I used the Ritva.\nThen, I purchased two basic curtain rods, also from IKEA. You really aren't going to see these so you can go with something pretty inexpensive and plain, like this one from IKEA as well.\nNow, you will need to sew an additional rod pocket on the curtain panel so that it will not move. A picture is worth a thousand words, so here's a little diagram.\nThen, you will thread your two rods through the two pockets.\nNow, here's how I mounted the rods. 
Mount one to the wall just like you normally would if you were going to hang curtains.\nAnd this is where it gets a little ghetto, rigged up, whatever you want to call it. If you have a better method, by all means go with it; this has held up for a few years for us and is still going strong, but I don't want to be blamed for any curtain rods falling on anyone ;)\nI mounted a hook to the ceiling, making sure it was anchored well. Then I used picture hanging wire to attach the rod to the hook. I wrapped it around several times and tied several knots in it to make sure it was secure. I could have used the same mounting hardware that I mounted to the wall, but the gap between the ceiling and the curtain was larger than I would have liked. If you are ok with the way that looks, that method would work too. Also, I should mention that the curtain covers the wire and hook so you don't see it.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-4", "d_text": "Or if your house is more extravagant and has a ballroom, you should use long golden and navy blue velvet curtains with white tulles. You can also use a curtain instead of your bedroom door, if it is suitable, for example, if you live alone.\nHow To Hang The Curtains Over The Doorway?\nIn this part we will share the steps to hang a curtain instead of a door with you:\nStep 1: Remove the door\nThe first step is to remove the door by using a cordless drill or screwdriver. But doors are generally very heavy, so it is best to get help from someone.\nStep 2: Measure the area\nThis step is very important whether you buy or sew your own curtain. Generally, the curtains are hung inside the door frame by using a rod. So you should measure from the top of the inside of the frame down to the floor. But if you choose to use a curtain pole, which is a better choice for wider doorways, you should measure by starting a little bit above the door frame. 
By the way, if you will sew your own curtain, it is important to measure the width of the doorway and double it.\nStep 3: Choose the curtain\nIt is the most enjoyable step, because now you can use your imagination and creativity. Look at your room, and find out which color works best. Then go to the store or visit a website and buy it, or sew your own curtain, or ask your tailor to sew it. There are lots of ways in this step. You should choose the most appropriate way for you.\nStep 4: Use the rod or the curtain pole\nYou decided whether to use a rod or a pole in the previous steps; now you should mount the rod inside the frame or mount the pole on the wall above the door frame. Again, you can use a power drill.\nStep 5: Hang the curtain\nAnd now you are finalizing the process. You should hang the curtain you chose before. And that's it! You now have a curtain instead of a door.\nStep 6: Use a curtain holder (optional)\nThis step is optional. For better use of your curtain, you can choose to add a holdback inside the frame. This will hold the curtain tidy.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-0", "d_text": "Andena Hayden Home Curtain, 2019-05-01 10:41:00. The traditional length isn’t any more than half an inch (a centimetre) above the ground, hence the curtains move readily, and fall into place with no effort, which is good for curtains that are likely to be opened and closed a lot. It’s possible to alter the length of your curtains and add some pizzazz at the same time. If you like, you will be in a position to choose Tension Curtain Rods Extra Long 120 to make it happen.\nMadra Graham Home Curtain, 2019-05-01 10:41:47. Additionally, a Priscilla look could be made by employing another panel underneath the very first panel and tying them back to opposite sides. It is not easy to get right but it makes a custom-made, classic and tailored look. 
The above-mentioned pictures of window treatments are only a start.\nAlyssa Keith Home Curtain, 2019-05-01 10:41:03. It is possible to put in a valance on a metallic rod that’s been curved to fit the arch. By itself a valance can be quite attractive, but you may also put in a curtain in the straight lower portion of the window. If you can find a valance that’s long enough to cover the straight portion of the window, it would create the effect that the panels actually extended all the way to the top of the arch.\nCarynn Tate Home Curtain, 2019-05-01 10:41:24. A blackout curtain lining may also reduce outside sound. Depending upon where you buy your curtains, they can be rather costly. Be certain to keep in mind that you would like your curtains to extend past the window so as to get the best results. Eyelet curtains are a breeze to install and they’re easy to wash too.\nCarynn Tate Curtain Rod, 2019-05-01 10:41:25. With so many distinctive kinds of curtains to think about, you could be left feeling confused. When it comes to nursery curtains, new parents will adore the way blackout panels allow a wholly dark atmosphere for baby’s nap time. Curtains come in various fabrics and thicknesses.", "score": 8.086131989696522, "rank": 98}]} {"qid": 28, "question_text": "What is the main difference between Ganoderma lucidum and Ganoderma tsugae fungi in terms of where they grow?", "rank": [{"document_id": "doc-::chunk-3", "d_text": "- Ganoderma tsugae - A polypore which grows on conifers, especially hemlock, giving it its common name, hemlock varnish shelf. Similar in appearance to Ganoderma lucidum and a close relative, which typically grows on hardwoods.\n- Kirk PM, Cannon PF, Minter DW, Stalpers JA (2008). Dictionary of the Fungi (10th ed.). Wallingford: CABI. p. 272. ISBN 978-0-85199-826-8.\n- Liddell, Henry George & Robert Scott (1980). A Greek-English Lexicon (Abridged ed.). United Kingdom: Oxford University Press. 
ISBN 0-19-910207-4.\n- Karsten, P. 1881. Enumeratio Boletinarum et Polyporarum Fennicarum systemate novo dispositorum. Rev. Mycol. 3:16-18.\n- Murrill, W. A. 1902. The Polyporaceae of North America, genus I Ganoderma. Bull. Torrey Bot. Club 29:599-608.\n- Patouillard, N. 1889. Le genre Ganoderma. Bull. Soc. mycol. Fr. 5:64-80.\n- Atkinson, G. F. 1908. Observations on Polyporus lucidus Leys and some of its Allies from Europe and North America. Botanical Gazette 46:321-338.\n- Adaskaveg, J. E. 1986. Studies of Ganoderma lucidum and Ganoderma tsugae (Delignification, Mating Systems, Root Rot, Cultural Morphology, Taxonomy). Dissertation. The University of Arizona.\n- Murrill, W. A. 1908. Agaricales (Polyporaceae). North Amer. Flora 9:73-131.\n- Karsten PA. (1881). \"Enumeratio Boletinearum et Polyporearum Fennicarum, systemate novo dispositarum\". Revue mycologique, Toulouse (in Latin). 3 (9): 16–19.", "score": 50.726356772756205, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "However, most consider Ganoderma lucidum and G. curtisii to be the same species because of their similar appearance and habitats; they both prefer to grow on hardwoods. In \"North American Polypores,\" practically the bible for wood-decaying poroid fungi, Gilbertson and Ryvarden do not consider G. curtisii a species separate from G. lucidum. Another fungus that resembles G. lucidum is Ganoderma oregonense, which, like G. tsugae, grows on conifers, but is found in the Pacific Northwest and New Mexico. G. oregonense can get up to 1 meter across and has slightly larger spores than G. lucidum and G. tsugae. There are arguments that these four separate species (G. lucidum, G. tsugae, G. curtisii, and G. oregonense) should be considered one species. The reasons for keeping them apart are primarily due to the host specificity of each fungus. 
Interestingly, if given only either hardwood or conifer wood in culture, each of the four species can grow and produce fruiting bodies, despite their natural occurrence on only one of those substrates.\nIn 1995, Moncalvo, Wang and Hseu isolated the DNA of G. tsugae and G. lucidum and found that it was hard to tell the difference between the two species. An even more recent study in 2004 by Hong and Jung found that G. lucidum from Asia was in its own group, whereas G. lucidum from Europe and the Americas was more closely related to G. tsugae. Clearly, further investigation into the molecular makeup of these two species is needed. For more information about the relationships of Ganoderma species, see these papers: Moncalvo, J.M., Wang, H. H. & Hseu, R. S. (1995). Phylogenetic relationships in Ganoderma inferred from the internal transcribed spacers and 25S ribosomal DNA sequences. Mycologia 87: 223-238, and Hong, S.G., Jung, H. S. (2004) Phylogenetic analysis of Ganoderma based on nearly complete mitochondrial small-subunit ribosomal DNA sequences.", "score": 48.96223541333681, "rank": 2}, {"document_id": "doc-::chunk-1", "d_text": "Until recently, the genus was divided into two sections – Ganoderma, with a shiny cap surface (like Ganoderma lucidum), and Elfvingia, with a dull cap surface (like Ganoderma applanatum).\nPhylogenetic analyses using DNA sequence information have helped to clarify our understanding of the relationships amongst Ganoderma species. The genus may now be divided into six monophyletic groups:\n- G. colossus group\n- G. applanatum group\n- G. tsugae group\n- Asian G. lucidum group\n- G. meredithiae group\n- G. resinaceum group\nWith the rise of molecular phylogenies in the late 20th century, species concept hypotheses were tested to determine the relatedness amongst the nuanced morphological variabilities of the laccate Ganoderma taxa. 
In 1995, Moncalvo et al. constructed a phylogeny of the rDNA, which was the universally accepted locus at that time, and found five major clades of the laccate species amongst the 29 isolates tested. It turned out that G. lucidum was not a monophyletic species, and further work needed to be done to clarify this taxonomic problem. They also found that G. resinaceum from Europe and the North American 'G. lucidum', which Adaskaveg and Gilbertson found to be biologically compatible in vitro, did not cluster together. Moncalvo et al. reject biological species complexes as a sole tool to distinguish a taxon, and suggested using a combination of biological and phylogenetic species concepts to define unique Ganoderma taxa.\nIn 1905, American mycologist William Murrill delineated the genus Tomophagus to accommodate the single species G. colossus (then known as Polyporus colossus), which had distinctive morphological features that did not fit in with the other species. Historically, however, Tomophagus has generally been regarded as a synonym for Ganoderma. Nearly a century later, phylogenetic analyses vindicated Murrill's original placement, as it has been shown to be a distinct, taxonomically appropriate genus.", "score": 47.14177776486829, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Purple Reishi (Ganoderma sinense) is a polypore mushroom that is native to Asia, specifically to China and Japan. It grows on both broadleaf trees and conifers but is very rare and notoriously difficult to find in the wild. Its modern scarcity may be partly due to industrial deforestation and the dramatic loss of biodiversity in some of its native forests, although historically it was never found in any great abundance.\nTraditionally it was often used interchangeably with Red Reishi (Ganoderma lucidum). It is for this reason that both species are referred to as ‘Lingzhi‘ in traditional Chinese medicine. 
They both play a very similar ecological role within the forest and share a broad range of health benefits, but in truth the two species are notably different in a number of ways.\nWhile this mushroom usually exhibits a dark purple hue, it also commonly presents dark brown to almost black colouring. It is typically much darker than Red Reishi, has a less bitter, slightly sweet taste and a different arrangement and concentration of biologically active compounds.\nAs already mentioned, this species was a rarity throughout history. Its therapeutic potency was formidable, yet before it was cultivated the wild mushroom was sparse and foraging it was a challenge. Thankfully, the cultivation of this species has developed into a fine art when done thoroughly and conscientiously.\nPurple Reishi can be grown using the same methods as Red Reishi cultivation, but in an attempt to save time and money, many growers will cut corners and yield small, medicinally inferior fruiting bodies. However, if wild-simulated Di Tao practices are adhered to, mushrooms exhibiting the qualities and properties of the finest wild specimens can be produced. To learn more about these practices, read the Methods of Cultivation chapter on our main page about Reishi.\nAs with Red Reishi (Ganoderma lucidum), Purple Reishi contains hundreds of different compounds that collaborate synergistically to produce a collective ‘entourage effect‘. There are many proteins and fatty acids, sterols, polyphenols and trace elements, but the compounds understood to be the most powerful and to have a ‘directing’ influence over the medicinal activity are the fungal polysaccharides and triterpenes.\nComparisons between G. sinense and G. 
lucidum show that overall, their polysaccharide and especially β-d-glucan content is very similar with no striking differences between the two.", "score": 44.58071384350087, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "REISHI Mushroom \"The plant of immortality\"\nUpdated: Dec 28, 2020\nGanoderma Lucidum (gan=shiny; derm=skin; lucidum=shining), Ling-Zhi or the more common name Reishi is the subject of my focus today. This unique looking mushroom has \"roots\" in Eastern medicine that date back as far as 4000 years, and it was used for a wide variety of ailments. This mushroom has only been successfully commercially cultivated for about 20 years now; prior to that it was considered rare to find in the wild. In fact, in Japan it typically grows on plum trees, and when 100,000 trees were checked only 1% of them were found to produce any Reishi mushrooms. Here in the United States you'll find them on Hemlock, Maple, Oak and fruit trees (mainly Cherry) but just like their Asian counterparts they can be a rare find. Here on the East coast the main fruiting season is from May to July.\nUp until recently the rarity of Reishi also made it very expensive, although wild Reishi can still fetch between $80-$100 a pound (dried). That being said, even dried cultivated Reishi can still cost between $20-$30 a pound, even more for a particularly sought after strain. There are thought to be 6 different types of Reishi mushrooms, classified according to their color, and all are thought to have different healing properties (in truth the environment and growth stage is what has the most effect on the mushroom's properties). Blue, red, yellow, white, black and purple Reishi are all said to have different flavors ranging from sour, to sweet, to even hot. The different colors are said to have different healing effects, with the red typically being the most sought after because it is regarded as the most potent and medicinal. 
I mostly use the red Reishi, primarily because that's the one that I found the most of this year.\nI could go into talking about polysaccharides and triterpenes of Reishi, but I have just the very basic understanding of organic chemistry and would mostly be talking out of my ass like I had a deeper understanding. What I will say though is over the years more and more medical research has been conducted, proving that the folk medicine of thousands of years ago...isn't so much folk.", "score": 43.81906924122604, "rank": 5}, {"document_id": "doc-::chunk-2", "d_text": "Today, let us focus on two medicinal mushrooms – the Reishi and the Chaga mushrooms, and talk about their health benefits so that you can decide and act on that knowledge if you're in the middle of deciding whether to supplement with them or not.\nReishi (Ganoderma lucidum)\nIt's a visually attractive mushroom with shiny skin and woody texture. Sometimes, it may even seem as if it is a varnished work of art to those seeing it for the first time.\nThe mushroom typically grows on plum, oak and other hardwood trees, to name a few. Specimens range between 2 and 20 centimeters in size and have a kidney-shaped cap.\nWhat are the health benefits of Reishi? There have been extensive studies of the herb, especially among the Japanese and Chinese Reishi, and this research found that Reishi, the spirit herb, offers multiple benefits for the body.\nReishi improves the mind, body and spirit, hence its title “spirit herb.” While it sounds like foreign magic of some sort, this herb is actually also known to give its users a “longer life.”\nBasically, it has bioactive compounds, including polysaccharides and triterpenes, which have anti-tumorigenic, hypolipidemic and anti-inflammatory properties. 
Ganoderic acid is the main active triterpene that’s known to fight cancer – and it’s in the mushroom.\nIn addition, it has also been shown to help in reducing phlegm, toning the blood, promoting better sleep and strengthening visceral organs.\nAnother benefit concerns athletic performance, as it helps in enhancing blood oxygenation and preventing altitude sickness (for mountain climbers).\nFor heart health, Reishi may help in lowering bad cholesterol levels, and reducing plasma and blood viscosity, especially among patients suffering from high cholesterol and high blood pressure.\nIt is also proven to aid in immune system enhancement, as it works potently against sarcoma and it aids in the prevention and treatment of various ailments.\nTo discuss its ailment-fighting and immune-system-boosting properties further, Reishi is also known for its high content of complex sugars, which are essential for immunity, especially in terms of increasing the DNA and RNA in the bone marrow, where immune cells are born, and for its stimulating effect on antibody production.", "score": 43.24803007539104, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "About Reishi Mushroom\nThe reishi mushroom (Ganoderma lucidum) is a fungus that predominantly grows in tropical and subtropical woodlands in the Asian continent. The name Ganoderma lucidum was derived from the Greek words ‘Gan’ which means shiny and ‘Derm’ meaning skin, and the Latin ‘Lucidum’ with the meaning ‘brilliant’. This term describes the glossy appearance of the mushroom. The reishi mushroom bears the nickname “queen of mushrooms” as it has been revered by Chinese people for ages. In China, it is popularly known as “Ling Zhi” which means a spiritual or miraculous mushroom, “Rui-Zhi”, and “Chi-Zhi”. In Japan, it is referred to as “Reishi”, “Sachitake” or “Mannatake”. Americans commonly refer to it as Reishi, the Koreans named it “Youngzhi”, and it is widely referred to as the “mushroom of immortality”. 
This mushroom belongs to the family Ganodermataceae and the genus Ganoderma. The term reishi is usually used when naming a group of closely related species of mushrooms that grow in woodlands, particularly Ganoderma tsugae, Ganoderma sinense and Ganoderma lucidum. The species Ganoderma lucidum is the one in highest demand in the human health realm.\nReishi mushrooms thrive best in the wild, particularly in temperate and humid forests found in the mountains in Asia. The forests are usually located in very remote areas that are free from the pollution that has clouded a majority of cities today. The reishi mushroom is a polypore, one of a sizeable group of mushrooms that form on dead and decaying hardwood trees. They have tubes on the underside of their cap with white pores that produce a large number of spores within a short time. A single fungus can give rise to not more than three fruiting bodies. The fruiting body can be flat, shaped like a kidney and grow over 8cm in width. Alternatively, it can be tiny and assume the shape of a cloud. The skin of reishi mushrooms has a reddish-brown glossy appearance. However, the colour ranges from black to purple in certain locations. Mature reishi and growing reishi feel soft, like a sponge.", "score": 42.73253861482597, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "Habitat and origin\nReishi is a woody fungus that grows mainly on decomposing trunks of wild plum, sometimes those of oak or hemlock. Very rare in the wild, it grows only in the mountains, in deep forests. Nowadays, it is also grown in an artificial environment in China, elsewhere in Asia and North America, including Quebec.\nDosage of reishi\nIn Japanese and Chinese medicine, dosages typically range from 1.5 g to 9 g per day of dried fungus or the equivalent in the form of tablets, capsules or solid or fluid extract. 
Consult a trained practitioner for personalized treatment.\nHistory of reishi\nThis fungus has had, for over 2000 years, an unusual reputation in Asia. It is mentioned in the oldest written Chinese pharmacopoeia (the classical herbarium of Seng Nong – published in 56 BCE) and it is believed that Asians knew reishi for centuries, even millennia, before this time.\nTraditional Chinese Medicine (TCM) and, after it, Kampo medicine in Japan regard the flesh of reishi as a valuable tonic for Qi (or Chi), the vital energy that supports the entire body. Therefore, it is assigned a global, adaptogen-like action in traditional medicine. For TCM, such substances have the power to strengthen the whole body and help maintain an optimal state of health and balance.\nReishi is particularly appreciated as it is extremely rare in the wild. It grows, in effect, only in deep mountain forests, usually on decomposing plum trunks, and only 2 or 3 specimens are found per 10,000 dead plum trees.\nAlthough the Chinese have tried for centuries to grow reishi, it was not until the early 1970s that Japanese researchers succeeded. From that moment, the fungus became easily accessible to ordinary mortals. It was previously reserved for the privileged few who could afford the luxury of such a rarity.\nTraditionally, there are 6 different varieties of reishi depending on the color (red, purple, blue, yellow, black or white). It was understood much later (in 1972) that these color differences are due to specific conditions of growth and not genetic variation within the species. 
It seems that fans prefer red reishi.\nNowadays, the fungus enjoys relative popularity among cancer patients.", "score": 41.36742205533332, "rank": 8}, {"document_id": "doc-::chunk-12", "d_text": "Gupta et al. (2015) reported that Marchantia species are used in traditional Chinese medicine for protecting the liver and for treating hepatitis.\nReishi Mushroom is botanically known as Ganoderma lucidum and it belongs to the Ganodermataceae family. Reishi Mushroom is most prevalent in Asian countries such as Japan, China, North and South Korea, Taiwan, Indonesia and several others. Reishi Mushroom is a broad, dark mushroom distinguished by its glossy exterior and woody texture. This suggests the origin of its name lucidum, which is derived from the Latin word “lucidus”. Lucidus simply connotes brilliant or shiny, and this refers to the shiny surface of the Ganoderma mushroom.\nDifferent Asian countries have different names for Ganoderma lucidum; for example, the Japanese people refer to it as reishi (霊芝), the Chinese people call it língzhī or Ling Zhi (靈芝), while the Vietnamese refer to it as linh chi. Ganoderma is considered a miraculous plant, which is the reason it is mostly used for medicinal purposes, especially in the Asian continent. Ganoderma lucidum produces a group of triterpenes known as ganoderic acids. Ganoderic acids possess a molecular structure that is related to steroid hormones.\nOther sterols found in reishi mushroom are: lucidadiol, ganoderol, ganodermadiol, ganoderiol and ganodermanontriol. Ganoderma also contains adenosine, alkaloids, coumarin, organic germanium, mannitol and polysaccharides (e.g. beta-glucan). 
Ganoderma lucidum has a strong bitter taste, so it is traditionally prepared as a hot water extract.\nLi and Wang (2006) showed that a ganoderic acid isolated from Ganoderma lucidum exhibited inhibitory effects on the replication of hepatitis B virus (HBV) in HepG2215 cells (an HBV-producing HepG2 cell line) over 8 days. This suggests that Ganoderma is effective for treating hepatitis.\nMilk thistle is botanically known as Silybum marianum and belongs to the Asteraceae family. This plant originated in Asia and Southern Europe before spreading to other parts of the world.", "score": 40.40386375957915, "rank": 9}, {"document_id": "doc-::chunk-2", "d_text": "
Several species of Ganoderma contain diverse phytochemicals with undefined properties in vivo, such as triterpenoids and polysaccharides, an area of investigation under basic research.\nAlthough various Ganoderma species are used in traditional medicine for supposed benefits and have been investigated for their potential effects in humans, there is no evidence from high-quality clinical research that Ganoderma as a whole mushroom or its phytochemicals have any effect in humans, such as in treating cancer.\n- Ganoderma applanatum - Also known as the artist's conk. An infestation of this species was the main factor in the loss of the Anne Frank Tree.\n- Ganoderma lingzhi - Also known as red reishi, a mushroom used extensively in traditional Asian medicine.\n- Ganoderma lucidum - A polypore with limited distribution in Europe and parts of China, often misidentified on products labeled reishi or lingzhi that actually contain Ganoderma lingzhi, because of the persistence of outdated naming conventions.\n- Ganoderma sinense - Also known as black reishi or zizhi.", "score": 39.95953561788139, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "Ganoderma spore oil is a lipid extract of Ganoderma lucidum spores, which accounts for 20% to 35% of the total weight of the spores, and is rich in biologically active ingredients, such as triterpenoid ganoderic acids, unsaturated fatty acids, and steroids. Ganoderma spore oil has special physiological effects and has broad market prospects in the field of health products and medicine.\nGanoderma lucidum is a polypore fungus of the genus Ganoderma in the family Ganodermataceae (Basidiomycetes). It is a mushroom parasitic on the roots of oaks and other broad-leaved trees.
The fruiting body is umbrella-shaped, hard and woody, kidney-shaped or semicircular, and purple-brown with a lacquer-like luster.\nThe ancient Orientals praised Ganoderma lucidum as a “magic panacea” that could bring the dead back to life and a “celestial herb” that cures all diseases. Modern scientific and clinical studies have shown that Ganoderma lucidum contains rich nutrients and biologically active substances and has a tonic and strengthening effect, making it an ideal natural medicine and health food.\nGanoderma lucidum spore powder\nGanoderma lucidum spore powder is the seed of Ganoderma lucidum: extremely tiny egg-shaped reproductive cells, only 4-6 micrometers across, ejected from the gills of the mushroom as it grows and matures. Each spore is a living organism.\nThe spore powder contains all the active essences of Ganoderma lucidum. It contains 18 kinds of active amino acids, more than 20 kinds of polysaccharides, alkaloids, organic germanium, terpenoids, various vitamins, trace elements, and other pharmacologically active ingredients. It is reputed to promote longevity and can significantly improve the body’s immune function; it has antiviral, anticancer, tumor-suppressing, hypoglycemic, and other functions. Its medicinal value is much higher than that of the Ganoderma lucidum fruiting body and mycelium.", "score": 38.743943904022224, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "What is Ganoderma Lucidum (Lingzhi mushroom/ Reishi mushroom)?\nGanoderma lucidum, an oriental fungus, has a long history of use for promoting health and longevity in China, Japan, and other Asian countries. It is a large, dark mushroom with a glossy exterior and a woody texture. The Latin word lucidus means “shiny” or “brilliant” and refers to the varnished appearance of the surface of the mushroom. In China, G.
lucidum is called lingzhi, whereas in Japan the name for the Ganodermataceae family is reishi or mannentake.\nGanoderma Lucidum (Latin), commonly known as Lingzhi mushroom (Chinese) or Reishi mushroom (Japanese), is a well-known representative of the mushrooms that have been used in traditional Chinese medicine for many centuries for the prevention and treatment of many human diseases.\nTraditionally, the fruit body of Ganoderma Lucidum has been used; it has recently become possible to obtain large quantities of the spores produced by the fruit body, and the spores have been recognized as possessing a more potent effect than the fruit body.\nWhat is Ganoderma Lucidum Spores Powder?\nWhy Ganoderma Lucidum Spores Powder is Better?\nThe Ganoderma Lucidum spore has a thick, two-layered chitin outer wall that is difficult for human stomach acid to digest. Only once these two layers are cracked or broken can the active ingredients of the spore powder be optimally absorbed by the human body.\nTwo ways to break the wall\nPhysical wall breaking destroys the spore wall by mechanical force; however, with this method a sizable proportion of the active ingredients oxidize and deteriorate on exposure to air during processing.\nEnzymatic breaking uses biological enzymes to slowly hydrolyze the cell wall of the spores. The process is mild, minimizes damage to the active ingredients, resists oxidation and degradation, introduces no harmful substances, and achieves a high wall-breaking rate.\nMost spore powders available on the market are produced by the physical wall-breaking method.
highest class of tonics. It is particularly renowned for its use in promoting longevity and supporting liver function, and is reputed to reduce hypertension. It has been used in China as a panacea for everything from everyday nuisances like the common cold or skin disorders to terminal conditions such as cancer. In recent years the focus of research on Reishi has been its apparent immuno-enhancing and anti-tumor activities.\nWhile the exact mechanism is still unknown, a large number of studies have shown that Reishi may modulate many components of the immune system, such as antigen-presenting cells, NK cells, and T and B lymphocytes, which are pivotal to initiating a primary immune response. It is believed that it may be through this immune modulation that reishi’s apparent anti-tumor effect is achieved.\nClick here for a research article written by a Leukemia Research organization. We have also provided links for a number of other research studies below, including studies published on PubMed.gov. Another website with some very interesting research on Ganoderma Lucidum is ClinicalTrials.gov. This website includes studies on the effect Ganoderma Lucidum has on Cancer in Children, Rheumatoid Arthritis and Early Parkinson’s Disease.\nGanoderma for Man’s Best Friend\nToday, new uses for reishi mushrooms are being discovered, even in veterinary medicine. Doctors and scientists are revealing the power of ganoderma reishi in the human body and for dogs. As more and more professionals consider homeopathic treatments for pets, one major herbal contender is knocking out all the competition. And the winner, as usual, is Ganoderma lucidum. Sometimes, you just have to say “Wow — how can one little mushroom do so much?!”\nUniversity of Florida Veterinary Scholar Dr. R.M.
Clemmons suggests in his article "Integrative Treatment of Cancer in Dogs" shoring up your pup's immune system with regular doses of Ganoderma lucidum, the powerful little mushroom that enriches all our GanoCafe products! In addition to Ganoderma's seeming positive effects in fighting canine diseases, Dr.", "score": 34.991717070833474, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "What is Ganoderma?\nGanoderma is probably the biggest mystery in the East that has been revealed after years of scientific research. Considered China's natural treasure, ganoderma plantations have harvested its power and become the home of 100% organic Ganoderma.\nIn Cantonese, Reishi or Lingzhi refers to one of the types of Ganoderma Lucidum mushrooms. It keeps a dominant and honorable place in Asia, a rich region where it has been used as a powerful medicinal mushroom for over 4,000 years. Also known as the Miraculous King of Herbs, Ganoderma has earned wide popularity due to the advanced health benefits it brings. Its scientific name was derived from a Greek term that refers to bright and shiny skin. In Chinese, the word Lingzhi pertains to an herb known for spiritual potency and has been recognized as the mushroom of immortality. The mere origin of its name tells a lot about its role in the field of traditional and modern medicine.\nThanks to modern developments in science and technology, ganoderma companies and their strategic partners have created 100% organic Ganoderma Lucidum capsules, Ganoderma Mycelium Healthy Beverages, Ganoderma Spore Powder, and skin care products. With these companies' dedication to maximizing the potential of Ganoderma, their research and development teams have come up with an expanding line of clinically tested Ganoderma-based products.\nThe main processing plant used for manufacturing Ganoderma-based products submits to the highest GMP standards in the world.
With an exclusive breakthrough called Advanced Micro-Particle Technology, they have applied this discovery to improve the process of breaking spore cell walls. This makes these companies leading brands producing products infused with 100% certified Ganoderma Lucidum.\nGanoderma's home may be China, but these companies and their growing team of skilled distributors are ready to share the wonders of Ganoderma with the rest of the world.", "score": 33.223893842209115, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "The polysaccharide content of reishi mushroom is responsible for possible anticancer and immunostimulatory effects.\nHailed in ancient Eastern medicine as the "mushroom of immortality" and the "medicine of kings," you'd expect reishi to offer you some pretty astounding health benefits, right?\nGanoderma is not often used in cooking because it is hard and has a bitter taste.\nFor over 2,000 years, the Reishi mushroom (Latin: Ganoderma lucidum; Chinese: Lingzhi) has been revered as one of the most sought-after and powerful tonics.\nIn China and Japan, Reishi mushroom has been used for more than 4,000 years to treat a variety of ailments.\nReishi mushroom is a fungus that some people describe as "tough" and "woody" with a bitter taste.\nThe reishi mushroom actually brings your candida index down. So, if you're battling candida, like so many of us are -- this herb may be an excellent addition to your healing.\nThe water-soluble polysaccharides are active ingredients found in the red reishi mushroom.\nIn China, reishi mushroom is considered one of the best herbs to increase longevity. This adaptogenic mushroom has been shown to:\nI asked an elderly Japanese friend why she looks so young for her age.
She said she drinks tea made with reishi mushrooms.\nYou have probably heard them being touted as a great medicinal mushroom and a wonderful healer.\nWidely known as an immune modulator, this gorgeous mushroom is also a superb nervous system tonic, both soothing and restorative.", "score": 32.95752055540928, "rank": 15}, {"document_id": "doc-::chunk-1", "d_text": "The six reishi colors are:\n- Akashiba (red)\n- Aoshiba (blue)\n- Kishiba (yellow)\n- Kuroshiba (black)\n- Murasakishiba (purple)\n- Shiroshiba (white)\nIt is believed to be the oldest medicinal mushroom and is said to help build the immune system, maintain one’s physical shape, and promote longevity. Reishi is commercially grown in China, Japan, Korea, North America, and Taiwan. In order to decide whether the Ganoderma or Reishi Mushroom is Hype or a Healing Miracle, one must take a look at the health benefits, side effects, case studies performed, and personal testimonies.\nChinese Scientists Research Reishi - Lingzhi - Ganoderma\nGanoderma Lucidum Benefits\nLike many alternative herbal medicines, this magical mushroom works with the body to support the natural immune system and helps the body balance and harmonize itself, which in turn treats a wide range of health problems. Ganoderma or reishi possesses many “anti” properties: it is anti-allergen, anti-fungal, anti-inflammatory, anti-oxidant, anti-parasitic, anti-tumoral, anti-viral and analgesic. An increased level of antioxidant compounds within one’s body is believed to help protect against aging and disease.\nGanoderma is often used as a cancer treatment by assisting in eliminating impurities and toxins to cleanse the body and help make the immune system stronger. It stimulates cell regeneration by promoting the detoxification of the liver to improve its functions.
It also contains germanium and polysaccharides, anti-cancer agents that contribute to its anti-tumor effectiveness and protect skin cells against degeneration, reducing the appearance of aging.\nReishi or Ganoderma has 23,500 I.U.s and is considered to be one of the most potent antioxidants known. These antioxidants neutralize the oxidant effect of free radicals and assist in promoting normal metabolism and normal cellular respiration. Such qualities make it versatile, with many uses such as the treatment of allergies, altitude sickness, chemotherapy support, diabetes, fatigue, hepatitis, hypertension and high triglycerides, and as a kidney cleanser, nerve tonic, and sexual enhancer.
"d_text": "Ganoderma is a genus of polypore fungi in the family Ganodermataceae that includes about 80 species, many from tropical regions. They have a high genetic diversity and is used in traditional Asian medicines. Ganoderma can be differentiated from other polypores because they have a double-walled basidiospore. They may be called shelf mushrooms or bracket fungi.\nThe genus Ganoderma was established as a genus in 1881 by Karsten and included only one species, G. lucidum (Curtis) Karst. Previously, this taxon was characterized as Boletus lucidus Curtis (1781) and then Polyporus lucidus (Curtis) Fr. (1821) (Karsten 1881). The species P. lucidus was characterized by having a laccate (shiny or polished) pileus and stipe, and this is a character that Murrill suspected was the reason for Karsten's division because only one species was included, G. lucidum . Patouillard revised Karsten's genus Ganoderma to include all species with pigmented spores, adhering tubes and laccate crusted pilei, which resulted with a total of 48 species classified under the genus Ganoderma in his 1889 monograph. Until Murrill investigated Ganoderma in North America in 1902, previous work had focused solely on European species including, for example, G. lucidum, G. resinaceum Boud. (1890) and G. valesiacum Boud. (1895).\nGanoderma are characterized by basidiocarps that are large, perennial, woody brackets also called \"conks\". They are lignicolous and leathery either with or without a stem. The fruit bodies typically grow in a fan-like or hoof-like form on the trunks of living or dead trees. They have double-walled, truncate spores with yellow to brown ornamented inner layers.\nThe genus was named by Karsten in 1881. 
Members of the family Ganodermataceae were traditionally considered difficult to classify because of the lack of reliable morphological characteristics, the overabundance of synonyms, and the widespread misuse of names.", "score": 32.20818741113336, "rank": 18}, {"document_id": "doc-::chunk-3", "d_text": "Another friend has had a pretty serious skin discoloration for over 25 years, and since taking these products his skin is beginning to even out to one skin tone. Many individuals I know have reported improvements in digestion and bowel movements, while others report being able to move with ease without joint pain and getting restful, quality sleep at night.\nResearch and Case Studies\n- Reishi mushroom (Ganoderma lucidum) - Integrative Practitioner\nReishi mushroom (Ganoderma lucidum), or as the Chinese call it ling zhi, grows wild on decaying logs and tree stumps. Reishi occurs in six different colors, and can be found\n- Ganoderma Information on Healthline\nThere have been many case studies performed on the fungus Ganoderma lucidum. This article on Healthline talks about studies done in Taiwan and at Columbia University\n- case studies on ganoderma - Google Scholar\nHere's a link to Google Scholar, where one can find numerous case studies performed on ganoderma, in PDF form, with scientific language and conclusions.\nMore Reishi or Ling Zhi Products to Try\nAbout Organo Gold and Recap of the Reishi Mushroom\nAs we grow older, while living longer in today's society, it becomes important to develop a healthy lifestyle that will assist in having a fuller, more productive life, free from illness whenever possible. While the side effects mentioned sound irritating and uncomfortable, they are usually temporary and minor. Keep in mind that many alternative/natural products work to cleanse and eliminate toxins during the overall healing process.
For my family and me, it has become important to make small lifestyle changes such as using alternative and natural medicines or TCM instead of prescription drugs whenever possible.\nNowadays, prescription drugs in my opinion may assist in smoothing over one ailment only to sometimes cause a more severe ailment to occur. The more I research, the more I'm learning the importance of honoring one's body by providing it with foods and herbs from nature as much as possible in order to create balance.", "score": 31.938261568075525, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "Bio Reishi / Ling Zhi\nGlossy Ganoderma extract\nReishi, known in China as Ling Zhi or the 'mushroom of immortality', has been used very successfully in traditional Chinese medicine for about 4,000 years. The Reishi is now attracting more and more attention in Europe as well.\nIn the wild here, however, it is found rather rarely. Because of its particularly shiny cap it is also known as Ganoderma. The taste of Reishi is rather bitter, and it is therefore rather unsuitable as a pure edible mushroom.\nIts range of applications is very wide.\nThe Reishi medicinal mushroom contains vitamins and minerals, including iron, magnesium, calcium, zinc, copper, manganese and germanium. It also contains important polysaccharides and triterpenes.\nThe Reishi medicinal mushroom has been cultivated organically and under strict controls for about 20 years and is now affordable for everyone.
Ling Zhi is available in dried form, as powder, or as extracts; extracts of the red Ling Zhi are of the highest quality.\nOur organic medicinal mushrooms are vegan, processed in Germany without artificial additives, and tested for heavy metals and pesticides.\nThrough gentle drying of the whole fruit body and pulverization at a very low temperature, you get very good quality.\nMushroom extracts are up to 20 times more effective than "only" mushroom powder!", "score": 31.922150143423302, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "The premiere tonic herb in Classical Chinese Medicine, Reishi extracts support immune and liver health.\nHot water extracts contain 25-80 times more polysaccharides than mycelium grown on rice, liquid alcohol extracts or unextracted mushrooms. Validated by two thousand years of traditional herbalism, hot water extracts are the only form of mushroom supplement ever used in studies in scientific research.\nOne capsule contains:\nOrganic Reishi (Ganoderma lucidum, Reishi Gano 161®️) 400 mg *\nhot water/alcohol extract, 14% polysaccharide, 4% triterpene\n* Daily Value not established\nVegetarian capsules (cellulose), Ganoderma lucidum mycelium\n1-3 capsules twice daily, AM and PM, or more as recommended by your health care professional. Take on an empty stomach.\n*This statement has not been evaluated by the FDA. This product is not intended to diagnose, treat, cure, or prevent any disease.
Traditional Chinese medicine – Daoist herbalism in particular – has perceived this mushroom as a ‘supernatural medicine’, rare, sacred and potent.\nRed Reishi boasts exponentially more scientific data, partly because it has proven easier to obtain, but research into Purple Reishi is ongoing and offering important medical revelations year after year. We have not yet arrived at a place where eastern and western medical traditions can seamlessly interface, as western medicine – although brilliant in many respects – is still in the juvenile stages of maturity and unable to verify or appreciate many of the subtle components of Asian medicine and the emphasis placed on Nature and its dynamic energetic forces.\nFrom the historic perspective of Chinese medical herbalism, Purple Reishi has more than proven its value, not only as a rejuvenating medicine that can strengthen the joints and sinews of the body, but as a protector from infectious diseases and as a support for the emotional mind and human spirit.\nAnima of the forest – subject of folklore; Purple Reishi emits an aura of magic and mystery, and perhaps it always will. We can never understand everything there is to know about such complex organisms, but we can mindfully and conscientiously ally ourselves to them, and flourish all the better for it.\nComparison of Polysaccharides from G. sinense & G. lucidum\nTriterpenoids from G. sinense & G. lucidum\nUnique Terpenoids in G. sinense & Their Properties\nTwo Novel Lanostane Triterpenes from G. sinense\nGanoderma sinense in the Treatment of Arthritis\nGanoderma sinense as Adjuvant in Chemo & Radiation Treatment\nImmune-Modulating Activity of G. sinense Fruiting Bodies", "score": 31.277362119016733, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "Chinese tradition refers to the reishi mushroom (Ganoderma lucidum) as ‘the lucky fungus’, for its powers in alleviating complaints such as arthritis, insomnia and chest tightness. 
And 2,000 years after its first use, the ancient remedy is still proving somewhat of a charm, after researchers from the University of Haifa announced their success in preliminary efforts to slow the growth of prostate cancer cells using reishi extracts.\nThe scientists, led by Dr Ben-Zion Zaidman of the university’s Institute for Evolution, found that molecules extracted from Ganoderma lucidum blocked the action of androgen – the male sex hormone – upon cancerous cells. Without intervention, said Zaidman, the hormone otherwise works as a chemical ‘switch’, stimulating the cells – especially in early stages of the disease – to multiply uncontrollably into tumor tissue.\n“These results give rise to hope about developing new medications to treat prostate cancer,” said Zaidman.\nWhile the research is still in the petri-dish stages, stresses Zaidman, one day it could lead to a new way to combat the disease, which every year is diagnosed in nearly 700,000 men worldwide, including Israeli Prime Minister Ehud Olmert.\nExisting drugs, such as Flutamide, also work by interfering with the action of androgen receptors. But the mushroom extract molecules, said Zaidman, had a far more dramatic effect.\n“The extracts worked in a different way. We think they actually prevented androgen receptors from binding to the DNA,” he told ISRAEL21c.\nThe reishi, a fungus native to densely wooded areas in North America, Europe and Asia, has been used in traditional medicines for centuries – and is currently experiencing renewed popularity among the health-conscious, popping up on the shelves of health food stores and organic supermarkets around the world.\n“Recently, mushroom extracts of every kind have become available,” notes Zaidman. “But the preparations are water soluble and very weak.
You’d have to eat a lot of mushroom soup to get any kind of effect,” he laughed.\nEschewing the kitchen in favor of the laboratory, his team has spent the last three years cooking up active metabolites using alcohol-based solvents. They will now seek to pinpoint the precise relationship between the chemical structures and their cell-based activity, refining the crude extracts into isolated molecules.\nIf they succeed in their research, then it will confirm the long-standing notion that the reishi mushroom is very lucky indeed.", "score": 31.174167797979237, "rank": 23}, {"document_id": "doc-::chunk-2", "d_text": "The basidiocarp, mycelia and spores of Ganoderma lucidum contain approximately 400 different bioactive compounds, mainly including triterpenoids, polysaccharides, nucleotides, sterols, steroids, fatty acids, proteins/peptides and trace elements, which have been reported to have a number of pharmacological effects including immunomodulatory, anti-atherosclerotic, anti-inflammatory, analgesic, chemopreventive, antitumor, chemo- and radio-protective, sleep-promoting, antibacterial, antiviral (including anti-HIV), hypolipidemic, anti-fibrotic, hepatoprotective, anti-diabetic, anti-androgenic, anti-angiogenic, anti-herpetic, antioxidative and radical-scavenging, anti-aging, hypoglycemic, estrogenic and anti-ulcer properties. Ganoderma lucidum has now become recognized as an alternative adjuvant in the treatment of leukemia, carcinoma, hepatitis and diabetes.\nThe macrofungus is very rare in nature and not available in quantities sufficient for commercial exploitation to meet vital therapeutic needs; therefore, cultivation on solid substrates, in stationary liquid medium, or by submerged fermentation has become essential to meet the increasing demands of the international market.
Present review focuses on the pharmacological aspects, cultivation methods and bioactive metabolites playing a significant role in various therapeutic applications.\nResearch and Development Centre, Bisen Biotech and Biopharma Pvt. Ltd., Biotech Research Park, M-7, Laxmipuram, Transport Nagar, Gwalior- 474010 (M.P.) India. email@example.com.\nTaking care of the whole you includes your heart.\nThe same healthy basics that reduce your risk of other diseases also help to keep your heart strong.\n* A healthy heart starts with eating right. Heart-healthy foods like fruits, vegetables, healthy proteins (such as fish, beans, chicken, nuts and low-fat dairy), and whole grains will help keep your heart and blood vessels in good shape.\n* Drinking too much alcohol can raise your total cholesterol levels and your blood pressure. Limit alcohol to no more than 1 drink a day for women or 2 drinks a day for men.\n* Brisk walking, swimming, or cycling are especially good for the heart, but choose any activity you enjoy for a fun exercise routine. Start at your comfort level, and build up gradually.", "score": 31.172443671029228, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "Using PNW Mushrooms in Skin Care\nFruiting bodies protrude from their hosts throughout our forests, the Ganodermas are a sight to behold and entirely hard to ignore. Ganoderma in itself means “shiny skin” of course referring to the varnished crust on many of the species in this genus, but how can we not apply this to our own, human skin. Following is research that has been done on Ganoderma lucidum, Ganoderma tsugae, and Tremella fuciformis and their uses in skin care. I am postulating that we can use out Northwest analogs, Ganoderma applanatum, Ganoderma oregonense, and Tremella mesenterica, the same way.\nSacchachitin and Polysaccharides for Wound healing\nThere is a product made, called Sacchachitin that is used as a wound dressing. 
It is made from the pulp of the Ganoderma fruiting body and, when used, significantly speeds up the healing process of skin wounds (Hung 2004). This product of course is not manufacturable by the general public, yet it is easy enough to chop up the Ganoderma into small pieces, place it in a blender with a little water and create a pulp that is then simmered for about an hour. The simmering is not necessary for a styptic effect, but you want to extract the polysaccharides to see the anti-inflammatory, antioxidant and wound-healing effects. Faster wound healing was also observed when Ganoderma polysaccharides were applied to the wounds of diabetic mice: the polysaccharides accelerated wound healing by inhibition of mitochondrial oxidative stress and improved wound angiogenesis (Tie 2012).\nHealing from UVB damage\nTremella fuciformis has been used in skin care in Asia for decades, yet there is little research on our local species of Tremella, Tremella mesenterica. The polysaccharide content is comparable, so I am treating the research and traditional uses of Tremella fuciformis as analogous to the potential uses of Tremella mesenterica. Tremella is known to be a potent antioxidant and anti-inflammatory fungus. The Tremella polysaccharide extract was tested on hydrogen peroxide-induced injury of human skin fibroblasts.", "score": 30.899082603412886, "rank": 25}, {"document_id": "doc-::chunk-1", "d_text": "Polysaccharides from G. Lucidum also increase concentrations of interleukin 2 (IL-2), interleukin 6 (IL-6) and interferon (IFN) and reduce the concentrations of interleukin 1 (IL-1) and tumor necrosis factor (TNF). G. Lucidum's extract shows inhibitory effects on the release of histamine from mastocytes, thereby contributing to alleviating allergy symptoms.\nSubstances from G. Lucidum have special effects on people infected with HIV.
Ganoderiol and ganodermanontriol inhibit the cytopathic effect of HIV and slow its growth and progression in the body.", "score": 30.694895994797673, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "Reishi (Ganoderma lucidum), Chaga (Inonotus obliquus), Shiitake (Lentinula edodes), Maitake (Grifola frondosa), Organic Cane Alcohol 35-60%\nMushrooms strengthen the lungs and increase liver function. Mushrooms are known to protect the body against cancer-causing agents. Reishi mushrooms are recognized in Chinese medicine as the “king of herbs.” They are grounding and assist us in cultivating inner stillness and increased concentration. This formula works on the whole person and is an excellent daily remedy for increased vitality.", "score": 30.650439990643275, "rank": 27}, {"document_id": "doc-::chunk-3", "d_text": "The triterpene content can also be used as a measure of the quality of different ganoderma samples.\n6 Scientifically Studied Benefits of Ganoderma Lucidum\n1. Boost the Immune System\nOne of the most important effects of the reishi mushroom is that it can boost your immune system. While some details are still uncertain, test-tube studies have shown that reishi can affect the genes in white blood cells, which are critical parts of your immune system. What’s more, these studies have found that some forms of reishi may alter inflammation pathways in white blood cells.\nResearch in cancer patients has shown that some of the molecules found in the mushroom can increase the activity of a type of white blood cell called natural killer cells. Natural killer cells fight infections and cancer in the body.\nAnother study found that reishi can increase the number of other white blood cells (lymphocytes) in those with colorectal cancer. Although most immune system benefits of reishi mushroom have been seen in those who are ill, some evidence has shown that it can help healthy people, too.
In one study, the fungus improved lymphocyte function, which helps fight infections and cancer, in athletes exposed to stressful conditions.\nHowever, other research in healthy adults showed no improvement in immune function or inflammation after 4 weeks of taking reishi extract.\nOverall, it is clear that reishi impacts white blood cells and immune function. More research is needed to determine the extent of the benefits in the healthy and ill.\n2. Anti-Cancer Properties\nMany people consume this fungus due to its potential cancer-fighting properties. In fact, one study of over 4,000 breast cancer survivors found that around 59% consumed reishi mushroom. Additionally, several test-tube studies have shown that it can lead to the death of cancer cells. Yet the results of these studies do not necessarily equate to effectiveness in animals or humans. Some research has investigated if reishi could be beneficial for prostate cancer due to its effects on the hormone testosterone. While one case study showed that molecules found in this mushroom may reverse prostate cancer in humans, a larger follow-up study did not support these findings. Reishi mushroom has also been studied for its role in preventing or fighting colorectal cancer.\nSome research showed that one year of treatment with reishi decreased the number and size of tumors in the large intestine.", "score": 30.607502937339348, "rank": 28}, {"document_id": "doc-::chunk-2", "d_text": "They can be utilised to spice up your immune system and may well in addition enhance the overall performance of antioxidant nutritional nutrients although the 2 chemical compounds are utilized alongside one another.\nReishi mushrooms are considered toward suppress tumor progress and are often applied within cancer avoidance initiatives for this purpose. 
Russian researchers at the leading cancer research center in Moscow have had positive results using reishi extracts to boost the immunity of cancer patients.\nAnti-tumor and immunoregulatory activities of Ganoderma lucidum and its possible mechanisms.\nActa Pharmacol Sin. 2004.\nPossibly the most attractive character of this medicinal fungus is its effect on the immune system and its anti-tumor activities. Numerous studies have shown that reishi modulates many components of the immune system, such as antigen-presenting cells, NK cells, and T and B lymphocytes. The water extract and the polysaccharide fraction of reishi displayed significant anti-tumor effects in several tumor-bearing animals, mainly through their immune-enhancing activity. Recent studies also showed that the alcohol extract or the triterpene fraction of reishi possessed anti-tumor effects, which appeared to be related to direct cytotoxic activity against tumor cells. Preliminary studies indicated that an antiangiogenic effect may be involved in the antitumor activity of reishi.
The paper gives an overview of the most important medicinal mushrooms and summarizes the current knowledge about the chemistry and pharmacology of Lentinula edodes (Shiitake, Golden Oak Mushroom), Ganoderma lucidum (Reishi, Ling Zhi), Agaricus brasiliensis (Royal sun agaricus), Grifola frondosa (Maitake, Hen-of-the-Woods) and Hericium erinaceus (Yamabushitake, Lion's Mane, Monkey's Head).\nMistaking toxic mushrooms for edible ones is common and sometimes deadly. Even for experienced foragers it can be difficult to tell what is poisonous and what is not.\nAmanita Mushroom - Amanita is the most recognizable toxic mushroom. Amanita phalloides mushrooms, commonly known as \"death caps,\" can easily be confused with other common species such as parasols. Eating them can cause severe damage to the liver and kidneys, followed by death within five to 10 days.\nGyromitra mushroom or false morel (monomethylhydrazine) poisoning may be partly treated with pyridoxine.\nDifferent varieties of\nAgaricus mushroom include:\nAgaricus bisporus mushroom is a common, edible, cultivated mushroom also known as white mushroom. The lectin from the common mushroom Agaricus bisporus, the most popular edible species in Western countries, has potent antiproliferative effects on human epithelial cancer cells, without any apparent cytotoxicity. This property confers on it an important therapeutic potential as an antineoplastic agent. Agaricus campestris - also known as meadow mushroom\nAgaricus blazei is an edible and medicinal mushroom. Agaricus blazei is also known as the Brazilian sun mushroom or himematsutake.\nAgaricus subrufescens Peck was first cultivated in the late 1800s in eastern North America. Once a popular market mushroom, this agaricus species faded from commerce in the early 20th century.
Found in ancient Chinese medicinal books, it is ranked number 1, above all other herbs. Because it earned this high position, it is often called “the King of Herbs” and “The Mushroom of Immortality”. Though early information is sparse, it is believed that it was highly sought after by ancient Chinese emperors because of its reputation for effectiveness. Due to its rarity in nature, they tried to keep the mushroom's benefits as secret as possible. It is written that punishments were given to royal court doctors who spoke of Antler-shaped Reishi's medicinal value!\nWhat is Reishi?\nIn simplest terms, Reishi, or Ganoderma Lucidum, is a rare type of mushroom, which belongs to the family of herbs. Known by several names, in Chinese it’s often referred to as Ling Chih or Ling Zhih. It’s extremely rare, and in order to produce enough to meet the demands of all people, it must be cultivated and scientifically processed. Very few companies in the world have the technology, equipment, or resources to do this properly, so finding the best Reishi is essential to achieve all the potential health benefits it can offer.\nAmong Reishis, there are six different types, identified by their color. Among them, Red Reishi offers the most health benefits. Red Reishis in general are quite rare, and among Red Reishis, there is one that is far more effective than all of them. This very special Reishi is called “Antler-shaped”, so named because its shape resembles deer antlers. But Antler-shaped Reishi is also extremely rare, since it is estimated that only 1 in 100,000 Red Reishis found in the wild will be Antler-shaped. This is one reason why Antler-shaped Reishi must be cultivated using highly specialized processes. While most manufacturers find that producing Antler-shaped Reishi is too cost-prohibitive, Genius Legendary’s massive volume allows us to produce this very desirable Reishi at economical prices.
In addition, Genius Antler-shaped Reishi is 500% concentrated, 100% organically grown, and 100% pure, which makes ours the world’s best Reishi!\nWho can benefit?", "score": 29.251645526386984, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Ganoderma lucidum (Reishi) is one of the oldest mushrooms known to be used medicinally by humans. It is highly prized as a major tonic in Chinese herbalism, with a powerful broad spectrum of health benefits. Often referred to as “the mushroom of immortality”, it has been valued as a longevity herb in Asian cultures for over 2,000 years.\nGanoderma lucidum (Reishi) is the foundation for Vida Divina’s existence. Our founder has a personal long-standing history with the health benefits of this powerful mushroom. We are proud to share it with the world.\n- One of the oldest mushrooms to be used for health and well-being\n- Asian cultures refer to it as “the mushroom of immortality”\n- Highly prized as a tonic in Chinese herbalism\n90 Capsules per Bottle\nSuggested Use: Take 1 capsule daily or as directed by a health care professional.", "score": 28.10967349388793, "rank": 32}, {"document_id": "doc-::chunk-92", "d_text": "Throughout Asia, especially China as well as Japan, mushrooms have belonged to the culinary experience but most importantly, they have actually harnessed the largest wellness advantages of mushroom extracts for hundreds of years.\nThe western perspective and lack of nutritional details of mushrooms has actually landed our fungal good friends in our salads as well as sliced up in our kitchen areas to be utilized in food meals. The Asian culture nonetheless, has for a number of thousand years, made use of specific mushrooms to boost and also preserve better wellness. 
Many of these mushrooms have known health benefits, ranging from anti-aging properties to attacking cancer cells, and although relatively new to the Western world, they continue to be used in Eastern cultures for their medicinal properties.\nWith advances in mushroom farming in just the last two decades, and with improved availability, research, and affordability, use of mushrooms for their health benefits has increased. For example, the Reishi mushroom (Ganoderma Lucidum) was originally available only to Asian royalty and was commonly described as the mushroom of immortality. Reishi can suppress cell adhesion and migration of certain cancer cells, potentially reducing malignant effects while boosting the immune system.\nThe tumor-inhibiting capability of certain mushrooms comes from their immune-stimulating polysaccharides. Medical practitioners in China and Japan have used mushrooms alongside herbs in anti-aging and cancer medicines. Certainly, the holistic community has expounded on the health benefits of using such extracts.\nWhen selecting mushroom extract products for human consumption, it is important to select alcohol-based extracts, as they extract the non-water-soluble components of the mushroom, maximizing the available triterpenoids. This is essential for accurate analysis of levels of ganoderic acids to determine potency. Reishi is the only known source of a group of triterpenes called ganoderic acids, which have a molecular structure similar to steroid hormones. It has among the most active polysaccharides of medicinal plant sources. In addition to their other healing qualities, some believe these ganoderic acids may even reduce high blood pressure and cholesterol levels.
Various mushrooms continue to be the focus of specialized research in cancer and immune disorders.\nNutr Cancer. 2005;53(1):11-7.
It can control cell division and differentiation, regulate cell growth, and be used against cancer, hepatitis, cardiovascular disease and diabetes, and it has unique biological activities such as lipid-lowering and anti-aging effects. At present, in countries around the world, and especially in Japan, the United States and Russia, β-D glucan is widely used in the food, health care, beauty and skin care industries.\nBrazilian mushroom β-D glucan features:\n1. Excellent immune activator\n2. Strong free radical scavenger\n3. Activates macrophages, neutrophils, etc.
In the following year, the pilot project succeeded, and the technology was transferred to Yung Kien Factory for mass production. We also went on to apply for a patent for the production process, with the objective of making Shuang Hor Group the only legitimate business entity to produce and market Yung Kien Ganoderma 2. This patent application is entitled “Ganoderma Extract with High Amounts of Ganoderic (Lucideric) Acids and Method for Producing the Same”.
sinense had pronounced anti-inflammatory and analgesic effects in cases of rheumatoid arthritis as well as reducing related edema, with no noticeable side effects.\nThe protective and regenerative effects Purple Reishi could offer to the liver were a common reason for prescribing it throughout the ages, according to traditional Chinese medicine. Scientifically speaking, it is often believed that the triterpenes from Red Reishi are primarily responsible for the many advantages to the liver, and Purple Reishi contains only a limited number of triterpenes in comparison.\nThere are of course many contributing factors towards the ‘entourage effect’ of this medicine – a phenomenon that can be observed through scientific methods, but not yet clearly understood or easily delineated. While the precise mechanisms of action regarding the liver remain unclear, a novel triterpene called Ganolucidate F was discovered in 2011 and was able to regulate the activity of Cytochrome P450 – a liver enzyme responsible for the metabolism of toxins and drugs and their safe detoxification from the body.\nPurple Reishi Products\nLike the wild mushroom itself, high quality Purple Reishi products are few and far between. As with Red Reishi it is the fruiting body that offers the most complete source of medicine. The most common form Purple Reishi is found in, is either whole dried fruiting bodies or slices of the fruiting bodies. Traditionally these are used to decoct into a tea, added to soups and broths (and removed prior to consumption) or macerated/slow cooked in wine. Pure grade extract powders are not yet available, although some manufacturers are selling liquid extracts of the fruiting body.", "score": 27.142030910056718, "rank": 36}, {"document_id": "doc-::chunk-2", "d_text": "Mycologia 96: 742-755.\nGanoderma lucidum, considered rare and hard to find in nature in China and Japan, is now commonly cultured. It can be cultured on logs that are buried in shady, moist areas. G. 
lucidum can also be inoculated onto hardwood stumps. Under commercial cultivation conditions, G. lucidum is normally grown on artificial sawdust logs, as shown to the right. We'll cover this cultivation method in more detail in a later Fungus of the Month. Other methods of cultivating G. lucidum and many other fungi can be found in Paul Stamets' book, \"Growing Gourmet and Medicinal Mushrooms\"\nNow let's look at the medicinal uses of G. lucidum or Reishi. If you feel the fruiting bodies, you'll notice that they're very hard, so no one tries to eat them like most (softer) mushrooms-- too tough! It has been used as an herbal remedy for such things as health, recuperation, longevity, wisdom and happiness for centuries in Asian traditional medicine. The first historical mention of G. lucidum was during the rule of the first Chinese emperor, Shih-huang of the Ch'in Dynasty, when the fungus's medicinal uses were first described. In fact, a Reishi Goddess, known as Reishi senshi, was worshipped because she would bestow health, life and eternal youth.\nThere are two different types of Reishi, the traditional wide, shelf-like fruiting body and the antler shape, known as Rokkaku-Reishi. The antler shape arises from varying carbon dioxide levels and low light. These two types are rumored to have different healing characteristics. Recently, there have been a large number of scientific papers published with experiments attempting to quantify the effect of G. lucidum on the human body. The fungal extract has been shown to act on immune system cells, to work against herpes virus, to lower cholesterol and stop cell proliferation.\nUnfortunately, while reading these papers it seems important to remind you that we are still not sure if G. lucidum and G. tsugae are separate species.
lucidum have been characterized.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Name: anoderma Lucidum Spore Powder\nOrigin : China\nGrade : Good\nWeight : 500 grams\nUsage: 3-5 grams each time, once in the morning and once in the afternoon. Pour in warm water and drink it.\nSealed packaging OR Refrigerator freezer room. Dry and no direct sunshine\nSpores are released from Ganoderma Lucidum when maturity. They are very tiny, which cannot be seen with our naked eyes, only be seen through a microscope.\nGanoderma Lucidum Spores powder are formed by numerous spores. The Ganoderma Lucidum spore can be up to 75 times more therapeutic than the whole original Ganoderma Lucidum of the same weight.\nThe essential nutrients of Ganoderma Lucidum is in its spore. Ganoderma Lucidum itself has been used in China for many centuries for its various healing properties. It is well known for its function in strengthening the body immune system. With better body immune system, it helps to support in the management of chemotherapy and radiatherapy treatment eg. helps to ease the side effects resulted from the treatment. Continuously taking, Ganoderma Lucidum spore powder helps to improve quality of our whole body system ie. in promoting the blood circulation,digestion, metabolism and bowel regularity, reducing fatigue, enhancing alertness, improving sleeping,protecting liver, etc.\n5 Stars (0)\n4 Stars (0)\n3 Stars (0)\n2 Stars (0)\n1 Star (0)\n|Shipping Company||Shipping Cost||Estimated Delivery Time|\n|3 - 7 days|\n|3 - 7 days|\n|5 - 14 days|\n|Post Air Mail||Free Shipping||15 - 45 days|\n|Return Policy||If the product you receive is not as described or low quality, the seller promises that you may return it before order completion (when you click \"Confirm Order Received\" or exceed confirmation timeframe) and receive a full refund. The return shipping fee will be paid by you. 
Or, you can choose to keep the product and agree the refund amount directly with the seller.\nN.B.: If the seller provides the \"Longer Protection\" service on this product, you may ask for refund up to 15 days after order completion.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-3", "d_text": "These include adenosine, said to have an analgesic effect, R,S-ganodermic and ganasterone that have an antihepatoxic effect, and glucans and polysaccharides that are responsible for the anti-inflammatory and antitumor properties of G. lucidum.\nSomething else to keep in mind is that all these experiments were done in cell lines, mice, rats and hamsters. So far no large scale unbiased human trials have yet been performed, and the FDA does not yet approve use of Reishi as medical treatment. In order to gain FDA approval, purified compounds from G. lucidum would have to go through an intensive amount of screening in cell lines and animals; much of this pre-clinical testing has already been performed. The next step would be a phase one clinical trial, which assesses the potential drug's safety. Healthy volunteers are paid to take the drug for a certain amount of time, and the compound is studied for its absorption into the body, its metabolism, and its excretion. Once the potential drug passes phase one, which can take up to several months, it moves on to phase two. In phase two, several hundred patients participate in what is called a double blind clinical trial, in which both the patient and the physician are unaware of whether the patient is receiving the potential drug or a placebo . Phase two can last from several months up to several years. If the potential drug is proven effective after phase two, it moves to phase three. Phase three also consists of blind clinical trials, but on a much larger scale. This phase is used to understand the drug's effectiveness, benefits, and the range of possible adverse reactions. Without a doubt, G. 
lucidum and its researchers have a long road ahead of them before they can prove the mushroom's healing powers.\nI hope you enjoyed learning about Ganoderma lucidum, its relatives and potential uses. Although these species of fungi can produce beautiful fruiting bodies, who would have expected them to hold so many potentially useful chemicals?\nThis month's co-author is Kathleen Engelbrecht, one of my students in Mycology, Medical Mycology, and Advanced Mycology. She's working on an interesting project looking for potentially useful compounds from fungi.\nIf you have anything to add, or if you have corrections, comments, or recommendations for future FotM's (or maybe you'd like to be co-author of a FotM?
According to ancient records, Cordyceps has the effects of prolonging and strengthening life, slowing aging, aiding lung function and breathing, and is very effective in treating erectile dysfunction.\nCordyceps contains Cordyceps polypeptide, which is the most precious active ingredient in Cordyceps. It is extracted using high technology. Even a small amount of Cordyceps polypeptide has remarkable results because its tiny molecules allow it to be easily absorbed into our body.\nSome of Cordyceps' medicinal efficacies are...\nCynomorium Songaricum Extract\nCynomorium is known in Chinese as suoyang, which is based on the herb's medicinal effects, \"locking the yang.\" It is obtained mainly from the East Asian species, Cynomorium songaricum, though the similar C. coccineum is sometimes utilized as a substitute.\nThe plant harvested for Chinese medicine grows at high altitude, mainly in Inner Mongolia, Qinghai, Gansu, and Tibet. Songaria Cynomorium Herb is the dried fleshy stem of Cynomorium songaricum Rupr. (Fam. Cynomoriaceae).
Applanatum\nGanoderma Applanatum Extract Tincture Properties and Health Benefits\n“Like other species of the Ganoderma genus, Ganoderma Applanatum comprises roughly 400 different phytochemicals. It also contains bioactive triterpenes and polysaccharides that are being investigated for possible:\nTraditional Use Of Ganoderma Applanatum\nGanoderma Applanatum has been used in a number of traditional medicine systems. In traditional Chinese medicine (TCM), it is used to treat:\nand reduce heat\nThe mushroom is traditionally consumed as a tea or a water-based extract. Its flavor can change depending on its host tree. Consuming Ganoderma Applanatum has been said to provide energetic warmth.\nA novel mero-triterpenoid in Ganoderma Applanatum was discovered in 2015. It has been named Applanatum A and it has a unique chemical composition. In preliminary studies, Applanatum A has been observed to demonstrate strong anti-fibrotic effects. Anti-fibrotic agents are said to block or prevent tissue scarring, causing regression of fibrosis.\nLike Ganoderma Lucidum, this species of mushroom may have benefits for cancer prevention or treatment. However, research is still in the very preliminary stage. Ganoderma Applanatum has been observed to exhibit anti-tumor effects.
Fungi are able to decompose organic matter very quickly. They literally eat up the available nutrients, as in the case of an uneaten fruit that’s left in the open at regular temperature. After quite some time, this fruit will become inedible because molds start to surround and eat it up. They are ‘decomposers’ that bring something back to earth (changing organic into inorganic).\nThey are described as both symbiotic and parasitic. They try to feed themselves using the carbon from other organisms. Take note, almost everything is structurally based on carbon: humans, plants, insects, and most animals.\nThere are some fungi that are known to be edible, like Portobello mushrooms, straw mushrooms and oyster mushrooms. Blue cheese is also a popular food item that is derived from fungi. Despite this, there are still many fungi that are deemed dangerous for ingestion. As such, you are advised not to eat mushrooms that you simply come across elsewhere, such as in the forest and the woods. Often, these mushrooms are poisonous in nature. Nevertheless, although fungi aren’t classified as animals, they are without a doubt still not plants. In fact, they are classified into a big group separate from plants and animals: Kingdom Fungi.\nAlgae (singular alga) are different because they are plant-like. This means that they also perform photosynthesis, using light energy for nourishment on top of all the minerals they get from the water surrounding them. Thus, they convert the inorganic into an organic material. In addition, algae are said to be the origin of the primitive plant types. They were the biological ancestors of many of today's higher plants. Like fungi, some algae (i.e. seaweeds) are edible.
(Shen, Gusman) Oxidative stress is one main reason our skin becomes wrinkled as we age, so using these polysaccharides topically could be beneficial in protecting our skin from wrinkles. The polysaccharides, which make up about 90% of this species of mushroom, also assist the skin in its ability to retain moisture, an ability that decreases as we age. Tremella polysaccharides have also been researched for lightening spots in sun-damaged skin and have been shown to inhibit melanin formation. Another study explored Ganoderma polysaccharides and determined that these compounds protect against “photo-aging” by eliminating UVB-induced reactive oxygen species (Zeng 2016). One Ganoderma local to the PNW is Ganoderma oregonense, an analog of the Ganoderma tsugae of the Eastern states. In one study, lanostane terpenoids extracted from Ganoderma tsugae fruiting bodies protected human keratinocytes from photodamage. (Lin 2013)\nTriterpenoids and Polysaccharides for Atopic Dermatitis\nAtopic dermatitis is a type 1 hypersensitivity reaction, which means it is an IgE-mediated immediate hypersensitivity reaction, like an immediate allergic response. Researchers explored a beta-glucan based cream for mild to moderate atopic dermatitis. Topical application resulted in significant improvement. In this study, the people with dermatitis put the cream on half their body, and nothing on the other half. The half of the body that the cream was applied to showed a significant decline in dermatitis. (Jesenak 2015) This benefit would come from the water-soluble constituents of the mushrooms, while another study looked at the lipophilic triterpenes for type 1 hypersensitivity reactions. It found that the triterpene extract inhibited histamine release from rat mast cells induced by IgE. (Rios 2010) This is a great example where a cream made from both the water and oil extracts of the mushroom could be extremely beneficial for these skin conditions. 
Another example of a type 1 hypersensitivity reaction is the inflammation and itch that we get in response to mosquito bites.", "score": 25.698191084144707, "rank": 43}, {"document_id": "doc-::chunk-18", "d_text": "This strain also differs from temperate strains by a significantly higher temperature optimum, and different growth rates, thus supporting a hypothesis suggesting that tropical species of wood-decay fungi are characterized by a higher temperature optima for growth (30-40°C) than temperate wood decay fungi (20-30°C) (Magan 2008). Growth rates have been shown to be species-specific in Fomes (McCormick et al. 2013), which is further supporting our hypothesis that this lineage represents a distinct species.\nSpecies delimitation is an important issue in medicinal fungi, because the production of bioactive secondary metabolites is first species-specific, and then depending on strain characteristics. Physiological properties, e.g. growth rates and optimum temperature, are generally useful characters for species delimitation in fungi (Jaklitsch 2009) and have been applied for corticoid (Hallenberg and Larsson 1991) and poroid (McCormick et al. 2013) wood-inhabiting Basidiomycota. Also pigments and other metabolites are good indicators for the distinction of fungal taxa (Breheret et al. 1997; Frisvad et al. 2008). The European name Ganoderma lucidum (= Reishi mushroom), for example, has been widely applied on a global scale, but represents several distinct lineages or species (Wang et al. 2012), which differ in their secondary metabolite profiles (Lv et al. 2012). Also Laetiporus sulphureus, which is another important medicinal polypore, includes several lineages representing distinct species with different geographic origin and from different hosts (Banik et al. 2012). Ecological adaptation through host shifts and substratum specialization is likely to be an important mode of speciation and adaptive radiation in polypores (Skaven Seierstad et al. 2013). 
A successful screening for bioactive secondary metabolites must therefore be based on two stable pillars: first, a reliable species identification, followed by an extensive screening for a suitable or ‘best’ strain.\nAgafonova SV, Olennikov DN, Borovskii GB, Penzina TA (2007) Chemical composition of fruiting bodies from two strains of Laetiporus sulphureus.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "Reishi TaiwanTCM is a powdered reishi mushroom (Ganoderma lucidum) that meets the strictest standards of TCM. The powder is prepared by grinding and specially processing selected parts of natural fruiting bodies from the pure mountain environment of central Taiwan.\nIt does not contain artificial additives or preservatives. The taste of the powder is naturally bitter, later sweet. The capsules are gelatin. Capsule weight 420 mg +/- 5%. Recipe and material: SunGertain BIO-RD, Taiwan\nKey to the quality of the product are the place and method of cultivation. Environmental cleanliness is especially important for fungi. Many studies have demonstrated the ability of fungi, even in a relatively clean environment, to accumulate heavy metals and radionuclides at many times their concentration in the surroundings. Central Taiwan, where Reishi TaiwanTCM is grown, is an extensive mountain area rising to more than 4000 m above sea level, with a minimal concentration of impurities in the environment.\nAnother problem with fungal material is genetic degeneration, to which cultivated mushrooms are gradually subject and which leads to reduced efficacy. This can be prevented by chemical and genetic control of the fungal material. In this area Taiwan again has an unquestionable advantage - it is no coincidence that the reishi genome was first deciphered in Taiwan.\nReishi fruiting bodies are tough. They can only be consumed in powder, liquid or tablet form. The liquid form includes tea and extract. Reishi TaiwanTCM is supplied as powder in capsules. The capsules are taken directly and washed down with lukewarm water. The powder can also be consumed without the capsules - its bitter taste is pleasant once you get used to it: open the capsule, pour the powder into a glass of lukewarm water and stir with a spoon. The result is a healthy drink whose bitterness resembles bitter coffee.\nThe usual dosage is 1-3 capsules three times a day, starting in the morning, taken with lukewarm water (or with the powder dissolved in lukewarm water). TCM recommends a short warm-up exercise after ingestion. Even at lower doses, however, Reishi TaiwanTCM has significant effects. At a preventive dose of 1 capsule per day, one package lasts for half a year.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "Lingzhi Spores 80%\nLingzhi Extract 20%\nTake 3 capsules twice daily with lukewarm water before meals\nNot recommended for consumers with liver health concerns, pregnant women and children. Consult a physician before use.\nGanoderma lucidum (Reishi or Ling-zhi), a popular medicinal mushroom, is regarded by the Chinese as the “mushroom of immortality”. It is believed that regular consumption of Ganoderma lucidum in the form of tea or mushroom powder can preserve human vitality and promote longevity1. Scientific investigations have been conducted to show its anti-cancer, hypoglycemic and immuno-modulating effects. Its bioactive components comprise triterpenes, polysaccharides and immune-modulatory proteins.\n1. Shiao MS, Lee KR, Lin LJ, Wang CT. Natural products and biological activities of the Chinese medical fungus, Ganoderma lucidum. In: Ho CT, Osawa T, Huang MT, Rosen RT, editors. Food phytochemicals for cancer prevention. II. Teas, spices, and herbs. Washington, DC: American Chemical Society; 1994. p. 
342–54.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-2", "d_text": "Not only the mushrooms themselves but also their underground part (the mycelium) possesses curative properties. Extract from maitake mycelium strengthens immunity, and the caps of these mushrooms can regulate blood pressure and the levels of glucose, insulin and bilirubin in the blood, restore the balance of sex hormones in endocrine diseases, reduce cholesterol and promote the loss of excess weight. Maitake can also be used for the prevention and treatment of infectious and bacterial diseases. Maitake mushrooms are rich in minerals (potassium, calcium, magnesium), vitamins (B2, D2), fiber and amino acids. According to the newest research, maitake may prevent the development of tumours and reduce the side effects of chemotherapy.\nReishi (Ganoderma lucidum) is considered the mushroom of immortality and a powerful means of giving spiritual strength. There are many subspecies of reishi, many of which have not yet been studied. Reishi mushrooms contain only 75% water, while many other mushrooms consist of 90% water. Unlike many other mushrooms, the medicinal properties of reishi are concentrated in the stems rather than in the caps. Reishi mushrooms were first described about 2000 years ago. Ancient Chinese doctors assigned them to the first category of medicines - those that restore the balance of the organism and have no side effects.\nReishi, like other mushrooms in Eastern medicine, is used to strengthen immunity and to fight cancer and the many chronic diseases attributed to disturbed circulation of fluids in the organism. Reishi mushrooms stabilise blood pressure, cholesterol and blood sugar, prevent infectious diseases and even treat such complaints as insomnia and the common cold. Reishi mushrooms are very effective in treating allergies. 
Reishi extract can also be used to treat disorders of the nervous system.\nCordyceps sinensis – the Chinese caterpillar mushroom – grows in the mountains at altitudes of up to 5000 m. This mushroom is interesting in that it “eats” caterpillars, killing them and growing out of their bodies. Cordyceps is not easy to cultivate, so for a long time it remained very expensive and unknown outside of China. In Chinese medicine this mushroom is used to balance and tone Yin and Yang.", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "In February 2020, Anti-Cancer Agents in Medicinal Chemistry published the results of a study by Professor Li Peng of the School of Medicine, Fujian Medical University. Through cell and animal experiments, this study confirmed that the neutral triterpenoids in Ganoderma lucidum can significantly inhibit the growth of colon cancer, and that the mechanism is related to “promoting cancer cell apoptosis”.\nSince the triterpenoids of Ganoderma lucidum were first discovered in 1982, they have not only provided a scientific explanation for “why the fruiting body of Ganoderma lucidum is so bitter”, but also given researchers a direction for studying “why Ganoderma lucidum can resist tumors”.\n“Ganoderma triterpenoids” is a collective name, which generally refers to the active components of Ganoderma lucidum with a terpenoid structure.\nAccording to their different chemical structures, they can be divided into two groups: one group is the “acid triterpenoids” such as the ganoderic acids, the other is the “neutral triterpenoids” such as the ganoderma alcohols; when the two groups are combined, they are called “total triterpenes”.\nFor the antitumor effects of the total triterpenoids and acid triterpenoids of Ganoderma lucidum there is plenty of research evidence, whereas research on the neutral triterpenoids of Ganoderma lucidum is rather scarce, which is why Professor Li Peng’s team focused on this area.\nThey used the fruiting body of Ganoderma lucidum as the experimental material, first extracted the total triterpenoids with ethanol, and then further separated the neutral and acid triterpenoids in order to examine their inhibition of colon cancer.", "score": 25.562672958167408, "rank": 48}, {"document_id": "doc-::chunk-6", "d_text": "Some of them are found everywhere, others grow in certain countries.\nSometimes groups of mushrooms grow in the forests in the form of a circle, which is popularly called a “witch’s circle”. Previously, many associated this phenomenon with magic. Science, however, has provided a logical explanation for it. Sometimes the mycelium grows equally quickly in all directions. When the main fungus growing in the center dies, new ones grow along the edges of the mycelium, forming a circle and absorbing all nutrient compounds from the soil. 
As a result, in a place untouched by people, a circle is formed that looks as if trampled underfoot (and in the Middle Ages there was no doubt that a witch was responsible), with mushrooms growing along its edges like an arena barrier.\nGanoderma, maitake (the curly grifola, also known as the ram's mushroom) and kombucha possess therapeutic properties. In oncology, the red camphor mushroom, also called camphor antrodia, is widely used. It grows in Taiwan and is treated as a national treasure. It contains substances that destroy tumors; it not only helps fight cancer, but also eliminates toxins.\nOf interest to doctors is the exotic shiitake (the Japanese mushroom). It can be grown in the garden or in the greenhouse. Japanese and Chinese doctors have long been aware of its healing properties. At home it is called the \"elixir of youth\" and is used to treat various diseases.\nMuer, black mushrooms that grow on trees, are also popular in the modern world. They are rarely found in Russia. The dried black fruit bodies look like charred paper. Their use in cooking does not differ from the preparation of forest mushrooms. Black mushrooms taste like seafood.\nRed Book Mushrooms\nThe Red Book lists the hedgehog comb (\"grandfather's beard\"). The mushroom body consists of many thin, long processes hanging down. The shaggy white caps grow on trees. After heat treatment, dishes made from it have a chicken flavor. This is not the only protected species.", "score": 25.000000000000068, "rank": 49}, {"document_id": "doc-::chunk-5", "d_text": "Torreya. 5: 197.\n- Furtado JS. (1965). \"Ganoderma colossum and the status of Tomophagus\". Mycologia. 57 (6): 979–84. doi:10.2307/3756901. JSTOR 3756901.\n- Ling-Chie Wong, Choon-Fah J. Bong, A.S. Idris (2012) Ganoderma Species Associated with Basal Stem Rot Disease of Oil Palm. American Journal of Applied Sciences 9(6): 879-885 (ISSN 1546-9239)\n- Loyd, A. L; Held, B. W; Linder, E. R; Smith, J. A; Blanchette, R. A (2018). 
\"Elucidating wood decomposition by four species of Ganoderma from the United States\". Fungal Biology. 122 (4): 254–263. doi:10.1016/j.funbio.2018.01.006. PMID 29551199.\n- \"FBRI: New Enzymes for Biopulping\". Archived from the original on 2009-01-04. Retrieved 2008-11-15.\n- Matos AJ, Bezerra RM, Dias AA (September 2007). \"Screening of fungal isolates and properties of Ganoderma applanatum intended for olive mill wastewater decolourization and dephenolization\". Lett. Appl. Microbiol. 45 (3): 270–5. doi:10.1111/j.1472-765X.2007.02181.x. PMID 17718838. S2CID 20255731.\n- Wachtel-Galor, Sissi (2011). \"Chapter 9 Ganoderma lucidum (Lingzhi or Reishi)A Medicinal Mushroom\". Herbal Medicine: Biomolecular and Clinical Aspects. 2nd edition. CRC Press Taylor and Francis. ISBN 978-1-4398-0713-2. Retrieved 2017-02-22.\n- Hennicke, F., Z. Cheikh-Ali, T. Liebisch, J.G.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "High Quality Ganoderma Lucidum P. E. Reishi P. E. Reishi Mushroom P. E. Polysaccharide Extract\nGanoHerb has different kinds of organic Ganoderma extract powder for you to choose. We have water extraction, alcohol extraction and mix extraction ganoderma extract powder. Those extract’s polysaccharides range from 5-40%, triterpene range from 0-12%, and you can also tell us your needs, we can manufacture according to your requirements.\nGanoHerb use single effect, double effect, and ball-type vacuum concentration, compared with traditional concentration, it is more efficient. It can protect the active ingredients at the greatest extent with the advantage of concentrated time and low temperature. Adopt spray drying or microwave vacuum drying method. Spray drying method in which the drying time is shorter, products are uniform fine, with good mobility and high solubility. 
While microwave vacuum drying method with low drying temperature , high efficiency ,which can reduce the damage of the active ingredient, and improve the quality of heat-sensitive products.\nGanoHerb has following extracts in stock:\nGanoderma Extract A1: Polysacchride>10 %, triterpene>8%,\nGanoderma Extract A2: Polysacchride>15%,\nGanoderma ExtractA3: Polysacchride>25 %,\nGanoderma Extract A4: Polysacchride>5%,\n[Source] Ganoderma lucidum extracts are extracted from the top-grade Ganoderma, via advance modern technology.\n[Product Property] Powder between light yellow and brown, with a special fragrance.\n[Product Content] Crude Polysaccharides 5-40 %\nTotal Triterpenes 0-12%\n[Testing Method] Ultraviolet-visible Spectrophotometry (UV)\n[Effective Component] Crude Polysaccharides, total Triterpenes.\n[Medical Functions] It can help to nourish user’s vitality and relieve malaise, cough, asthma, palpitation, and anorexia.\n[Specification] 80 mesh\n[Storage] Sealed and placed in a cool and dry place\n[Shelf-life] 2 year", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "1-. That it is true Reishi: in the market there are reishi extracts, reishi mycelium (mycelium of the reishi mushroom) and reishi spores (spores of the reishi mushroom). They are really different things!! For it to be reishi, the bottle or container must say: reishi mushroom powder. Look at the ingredients!\n2-. That the reishi is micro ground: This technique breaks the mushroom cell walls and allows our gastric, intestinal juices, etc, to access the interior of the cells during digestion and consequently take out and use all their content. 1 gram of micro ground reishi can be equal to 3 g of another not micro ground reishi! (Depending on the grinding size)\n3-. 
That it is clear of pollutants: the eco-label of the product will ensure much cleanness; however, the absence of heavy metals such as lead and cadmium, main pollutants of mushrooms, can only be found in some reishis in the market. A polluted reishi has the virtues of the reishi, but it will pollute us gradually too.\nPhoto of reishi (Ganoderma lucidum) located in a Mediterranean oak forest, in the province of Palencia.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-1", "d_text": "Unlike other mushrooms, research shows maitake to be almost as effective administered orally as intravenously. One polysaccharide fraction called MT-1, or simply the D-fraction, showed significant activity when administred orally. It is an alkali-soluble, hot water extractable compound that contains approximately 30% protein, or a ratio of beta-glucan to protein of 7:3. It occurs in a concentration of .04 mg. per gram of dried maitake mushrooms. Dried maitake mushrooms and taplets provide a safe, non-toxic, therapeutic effect. Garuda's maitake extract is made from high quality cultivated mushrooms grown naturally without pesticides or chemicals of any kind. We have the maitake extract available in two different concentrations,10:1 and 4:1.\nMushroom extracts can be considered some of the first nutraceuticals - food concentrated into medicinal form. Garuda's extracts are carefully produced according to cGMP standards from select raw materials. Many of our extracts are standardized to contain guaranteed levels of active compounds.\nAoki, T. 1984. Lentinan. In Immune Modulation Agents and Their Mechanisms. R.L. Fenichel and M. A. Chirgis, eds. Immunology Studies. 25:62-77., Bo, L. and Bau Yun-sun. 1980. Fungi Pharmacopoeia (Sinica). Oakland: Kinoko Co., Chang, H.M. and P. Pui-Hay But. 1987. Pharmacology and Applications of Chinese Materia Medica. Vol. 2. Singapore: World Scientific., Chang, H.M., ed. et al. 1984. Advances in Chinese Medicinal Materials Research. 
Singapore: World Scientific., Huidi, F. and W. Zhiyuan. 1982. The clinical effects of Ganoderma lucidum spore preparations in 10 cases of atrophic myotonia. J. Trad. Chin. Med. 2:63-65.,Miller, D. 1994. Current clinical protocol submitted to the N.I.H. Scientific Director Cancer Treatment Research Foundation, Arlington Heights, IL.,Nanba, H. 1994a.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-9", "d_text": "Thereafter, 20 μL of spore solution were added to each well, except to the broth sterility control wells. Micro plates were incubated at 25°C for at least 48 h. Then the wells were checked for fungal growth using a stereo microscope. A subsequent protein staining of the plates with Coomassie Blue G250 was carried out as described by (Troskie et al. 2012) to allow discrimination between fungal biomass and precipitates of fruit body extracts.\nCandida krusei micro plates were prepared based on the layout and protocol for filamentous fungi. However only liquid PDB was used, and micro plates were shaken periodically (for 5” every 30’) to avoid on-surface growth.\nHosts, isolation, and cultivation of fungal strains\nIn sum, nineteen fungal strains were isolated for this study: five strains of F. fomentarius, 11 strains of F. pinicola, and three strains of P. betulinus. Strains were isolated from a narrow geographical area spanning a maximum of hundred kilometres, with a few control strains isolated form the southern side of the Alps, and one strain from the mediterranean area. F. fomentarius was isolated from Fagus sylvatica, Picea abies and Quercus pubescens; F. pinicola was isolated from Abies alba, Alnus incana, Larix decidua and P. abies. P. betulinus was isolated from Betula pubescens only.\nHPLC analysis of fruit body extracts\nHigh-performance liquid chromatography (HPLC) of fruit body extracts showed that polypore strains differ in both, qualitative and quantitative secondary metabolite production. F. 
fomentarius strain IB20130019 was significantly different by showing an extremely poor production of secondary metabolites. F. pinicola strains showed a very high variation in the quantity of secondary metabolites produced (Additional file 1: Figure S1).", "score": 23.642463227796483, "rank": 54}, {"document_id": "doc-::chunk-2", "d_text": "Buy a Variety of Reishi Products Here\nGanoderma Lucidum Side Effects\nIf you’ve ever taken natural or herbal medicine especially those that qualify as TCM (Tradition Chinese Medicine), there is a risk that some minor discomforts might occur during the healing process. An example would be if you've experienced acne, you know there is usually a phase where it seems the acne is getting worst before it gets better. Another example would be how a healing wound begins to excessively itch while repairing itself. Ganoderma can cause negative side effects that cause temporary reactions within the body during the healing process.\nCommon side effects that occur when Ganoderma is ingested includes increased breakouts of acne, reddish pimply dots, and/ or other skin rashes, dizziness, digestive issues, higher blood pressure, stomach ache, or swollen legs. However, an individual may experience some or none of these temporary adverse reactions as circulatory blockage or toxins are being removed from the body.\nWarnings: If you are a diabetic taking hypoglycemic drugs please do not take Ganoderma or any other TCM without consulting with a medical professional before doing so. Ganoderma can possibly cause fluctuations in blood sugar that could cause a problem with diabetes. 
Individuals on blood pressure medications should monitor pressure levels consistently as well, since this product may cause temporarily higher blood pressure until the end result of lowered pressure is attained.\nGanoderma or Reishi Mushroom – Hype or a Healing Miracle?\nCase Studies and Personal Experiences\nAs the first video stated, the Chinese have been conducting research on the ling zhi since the ’50s. However, other eastern and western studies are being conducted on this mushroom. Most of the studies will not give a ‘concrete finding’, as is true of many alternative medicines. However, many state that ‘it appears’ to assist in all the benefits named before, such as reducing skin inflammation, boosting the immune system, treating blood and circulatory conditions, and preventing some cancers. You can find more reading on these case studies at the end of this section.\nI cannot provide much of a personal experience with ganoderma because I’ve only been drinking the coffee for a short time. However, I can share personal testimonies of friends and family that have been using the products of Organo Gold for some time now. One individual suffers from psoriasis and she has seen an improvement in her overall skin condition.", "score": 23.41447125460623, "rank": 55}, {"document_id": "doc-::chunk-6", "d_text": "Maciá-Vicente, H.B. Bode, and M. Piepenbring. 2016. “Distinguishing commercially grown Ganoderma lucidum from Ganoderma lingzhi from Europe and East Asia on the basis of morphology, molecular phylogeny, and triterpenic acid profiles.” Phytochemistry 127:29–37.\n- Hapuarachchi, K., T. Wen, C. Deng, J. Kang, and K. Hyde. 2015. “Mycosphere Essays 1: Taxonomic Confusion in the Ganoderma lucidum Species Complex.” Mycosphere 6:542–559.\n- Jin, X; Ruiz Beguerie, J; Sze, D. M; Chan, G. C (2016). \"Ganoderma lucidum (Reishi mushroom) for cancer treatment\". Cochrane Database of Systematic Reviews. 4: CD007731. doi:10.1002/14651858.CD007731.pub3. PMC 6353236. 
PMID 27045603.\n- \"Horse chestnut tree diseased\" (Press release). The Anne Frank House. Archived from the original on 3 October 2006. Retrieved 2006-11-17.\n- Kuo M., MushroomExpert.Com, Ganoderma tsugae. (2004, February). Retrieved June 15, 2007.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-4", "d_text": "- Growing its Ganoderma using the natural log method so it can manufacture Ganoderma spores which are 17 times more powerful than the Ganoderma Lucidum mushroom itself.\n- Offering more friendly American products such as brewed coffee and less sugar.\n- Creating a more generous compensation plan that puts more money into all distributors pockets, including the oft-forgotten part-timer.\n- Provides a more dynamic, comprehensive, and accessible training system\n- Developed a better replicated web site system.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "Ganoderma Lucidum mushroom tea is an indispensable partof traditional Chinese medicine- and as such is present in treating many diseases.\nHowever, preparing tea in the traditional way carries with it several problems. Traditional preparation loses many active substances. Also, in order to achieve a satisfactory concentration of active substances, tea must be cooked for at least an hour!\nTo avoid these difficulties and make it easier for our customers to use tea from this mushroom, we have developed ImuninTea!\nImunin tea allows for quick cleaning in 5 minutes- as with regular teas. Imunin tea allows you to ingest far more active substances than traditionally prepared tea, and retains all the beneficial effects of this kind of tea without additives or artificial ingredients.\nToss with boiling water and wait 5 minutes, consumed as regular tea, by choice, with honey!\nThe substances from the Ganoderma Lucidum mushroom have numerous effects on human health. It's fair to say that G. 
Lucidum is one of the most potent natural means of strengthening and improving health. The substances in this fungus primarily act on the immune system, supporting its normal functions, improving and strengthening them when defending against infection, while normalizing it in the case of allergies. By regularly ingesting substances from G. Lucidum, we support the immune system in the fight against infections, cancers, allergies and inflammation. In people with autoimmune diseases, it soothes and stabilizes the condition.\nSubstances from G. Lucidum have a particular effect on people infected with HIV, slowing the progression of the virus through the body.\nStudies have shown that polysaccharides from Ganoderma Lucidum modulate immune functions in vitro and in vivo.\nThe immunomodulating effects shown by G. Lucidum's polysaccharides are extensive, including the functions of antigen-presenting cells, the mononuclear phagocytic system, humoral immunity and cellular immunity. Components from G. Lucidum also showed an impact on gene expression in monocytes, a type of white blood cell, related to inflammation and immune response processes, and macromolecule metabolism.\nStudies on cancer patients have shown that components from G. Lucidum support the immune system in fighting off some types of cancer by boosting the function of natural killer cells (NK cells), whose primary function is to fight tumor cells.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "Maitake (Grifola frondosa)\nIt’s said that the maitake, or “dancing mushroom,” got its name because Japanese villagers would dance for joy when they found the rare and valuable species in the deep mountain valleys of Japan.\nOther commonly known names for maitake are Hen of the Woods and Ram’s Head. Its scientific name is Grifola frondosa.\nThe maitake goes through drastic changes as it matures. It starts out growing in clusters of gray overlapping mounds.
These mounds are the caps of the mushroom, which share the same stem and look similar to a cauliflower. As the entire specimen continues to grow, the caps spread and fan out becoming a dark gray or brown color.\nWhen the maitake reaches maturity its color fades. Sometimes maitake turn into a light gray, yellowish color. The edges of the caps become brittle and fragile while the base and stems remain firm.\nFor centuries in Japan and China, maitake mushrooms were harvested specifically for their medicinal and healing capabilities.\nNorthern temperate areas are the most common place for maitake to grow. They also prefer midrange temperatures, which is why they grow most fruitfully in the fall season of the northeastern regions of Japan and North America.\nMaitake are generally found in the wild on stumps, or at the base of dying hardwood trees. It is common for Maitake to grow at the foot of decaying oaks, elms, maples, beech, blackgum, and larch trees.\nNatural Cultivation Methods\nThe most successful outdoor cultivation methods for maitake are the log and stump methods. For these methods, the user should use hardwoods to achieve the greatest fruiting.\nGenerally, the stump cultivation technique will take a maitake-inoculated hardwood stump at least 2-3 years before it begins to fruit. However, it will produce in large volumes once fruiting begins.\nFruiting Cycles & Yield Potentials\nIf exercising the indoor cultivation method, you should inoculate a combination of sterilized sawdust, chips and bran for best results. This is the combination of ingredients that we use for our Maitake Mushroom Farms.\nMushroom Shack’s Maitake Mushroom Farm will produce between ½ and 1 lb of mushrooms throughout the entire fruiting cycle. 
The fruiting cycle will begin between 6 and 8 weeks after inoculation.", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-2", "d_text": "Differences Between True And False Shiitakes\nAn irregular/bent shape\nWavy and lobed\nFreely hanging from the stem\nWhen you slice it, a false morel isn't hollow and filled with cotton like fibers inside.\nSmooth uniform shape\nLight-dark brown color\nCovered in pits and ridges\nDirectly attached to the stem\nOnce you slice your shiitake, it should be hollow from the bottom of the stem to tip of the cap\nHistory Of Shiitake Mushrooms As Medicine\nShiitake mushrooms helped people in Medieval Asia to prevent common colds, fight hunger, and boost energy levels (link). In East Asia, Shiitake Mushrooms were a delicacy preserved for royalty because of their benefits, taste, and cost.\nThey mostly grew in East Asia, particularly Japan, China, Korea, and Taiwan. Farming of shiitake mushrooms spread to other regions in the 1970s like USA, Singapore, and parts of Europe because of their nutritional and medicinal benefits and commercial viability.\nHow To Use Shiitake Mushrooms As Medicine\nShiitake mushrooms contain high fat, protein, carbohydrate, mineral, and vitamin content. 
Some of the medicinal and therapeutic benefits include:\nAntitumor Properties - Can prevent and inhibit the growth and formation of tumors (link).\nAntimicrobial Properties - May inhibit the growth and reproduction of microorganisms (link).\nAntibacterial Properties - Shown to interfere with the growth and reproduction of bacteria (link).\nAntiviral Properties - Could help in the prevention and treatment of viral infections (link).\nImmunomodulation Capabilities - Helps modulate the immune system to desired levels (link).\nCardiovascular Health Properties - Helps in maintaining sound heart health (link).\nHepatoprotective Properties - Could prevent liver damage (link).\nHemagglutination Capabilities - Could prevent the clumping of red blood cell particles (link).\nShiitake Dosage - The appropriate dosage of shiitake for you depends on a number of factors such as your age, health, and allergies. No conclusive evidence establishes the right dosage.\nPharmacists advise users to take it according to the directions on the product and consult your physician before use.\nSide Effects of Shiitakes - The information about the side effects of consuming shiitake medicine remains scarce.", "score": 22.27027961050575, "rank": 60}, {"document_id": "doc-::chunk-4", "d_text": "- Smith BJ, Sivasithamparam K (2003). \"Morphological studies of Ganoderma (Ganodermataceae) from the Australasian and Pacific regions\". Australian Systematic Botany. 16 (4): 487–503. doi:10.1071/SB02001.\n- Ryvarden L. (1985). \"Type studies in the Polyporaceae 17: species described by W. A. Murrill\". Mycotaxon. 23: 169–198.\n- Hibbett DS, Donoghue MJ. (1995). Progress toward a phylogenetic classification of the Polyporaceae through parsimony analysis of mitochondrial ribosomal DNA sequences. Can J Bot 73(S1):S853–S861.\n- Hibbett DS, Thorn RG. (2001). Basidiomycota: Homobasidiomycetes. The Mycota VII Part B. In: McLaughlin DJ, McLaughlin EG, Lemke PA, eds. Systematics and evolution. 
Berlin-Heidelberg, Germany: Springer-Verlag. p 121–168.\n- Hong SG, Jung HS (2004). "Phylogenetic analysis of Ganoderma based on nearly complete mitochondrial small-subunit ribosomal DNA sequences". Mycologia. 96 (4): 742–55. CiteSeerX 10.1.1.552.9501. doi:10.2307/3762108. JSTOR 3762108. PMID 21148895.\n- Moncalvo, J.-M., Wang, H.-F., and Hseu, R.-S. 1995. Gene phylogeny of the Ganoderma lucidum complex based on ribosomal DNA sequences. Comparison with traditional taxonomic characters. Mycological Research 99:1489-1499.\n- Adaskaveg, J. E., and Gilbertson, R. L. 1986. Cultural studies and genetics of sexuality of Ganoderma lucidum and G. tsugae in relation to the taxonomy of the G. lucidum complex. Mycologia:694-705.\n- Murrill WA. (1905). "Tomophagus for Dendrophagus".", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-3", "d_text": "Whereas wild ones are dark in colour, the cultivated Japanese varieties are grown in low-light conditions and have pale flesh and skin.\nThis mushroom fruits in cold conditions. Fruiting bodies are small, but delicious. This mushroom has been eaten for centuries in parts of Asia. It grows naturally on wood and can be cultivated on sawdust.\nLentinus edodes, the Shiitake mushroom, is the most important cultivated mushroom in Japan. It is grown on logs of Fagaceae trees (e.g. oaks) and various other trees. The shiitake mushroom is said to possess many health benefits, including the presence of many polysaccharides and polysaccharide-protein complexes that have been isolated and utilised for therapeutic purposes (it has been reported as promoting health due to immunity stimulating properties against cancer, viral infection and high cholesterol). This mushroom is usually sold fresh or dried. 
There is potential for commercial shiitake mushroom cultivation, although many markets have high quality standards that must be met.\nSeveral species of this genus are edible and have the potential for cultivation commercially. Pleurotus ostreatus is perhaps the most commonly cultivated species, known as the oyster mushroom, due to its appearance. The oyster mushroom naturally grows on dead wood, but can be cultivated on any cellulose material. Wood shavings, cellulose fibre, and waste hulls from agriculture are commonly used. This mushroom can even be cultivated on toilet rolls!\nThis species of mushroom, sometimes known as the ‘Garden Giant’, has been grown commercially in Germany, and grows wild in parts of Europe. It is cheap and easy to grow, but yields are variable.\nIt is not generally suited to commercial production, but is well suited to outdoor culture in the home garden. Indoor fruitings are possible but the King Stropharia requires an unsterile casing to stimulate mushroom development and is slow to fruit.\nThe edible Straw Mushroom originates from the tropics and sub-tropics, and has been cultivated and eaten for centuries in China and other Asian countries. This mushroom is traditionally cultivated on fermented rice straw. Due to the nature of traditional cultivation, yields have typically been low and variable. Modern cultivation practices utilising industrial waste from cotton processing have allowed increased yields and further development of the Straw Mushroom industry.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-0", "d_text": "Maintaining quality and purity, MIGU Adaptogen Bio-tech Co., Ltd now introduces fungus extract powders of Ganoderma Lucidum Extract, Cordyceps Sinensis Extract and Agaricus Blazei Extract.\nXi'an, China, March 3, 2018 – There are several kinds of high quality fungus extract powders that can have enormous health benefits for mankind. 
China-based MIGU Adaptogen Bio-tech Co., Ltd specializes in extracting powder from different herbs that have great medicinal value. These fungus extracts can cure several types of diseases and can also boost the immunity and vitality of humans in a natural manner.\nThe company has introduced the Ganoderma Lucidum Extract, which is extracted from the fruiting part of the Reishi mushroom. MIGU Adaptogen Bio-tech Co., Ltd cultivates Reishi mushroom in its own farmland, and it is far better in quality than the wild Reishi mushroom. They pick the mushroom at the proper time and dry it properly to maintain its potency. According to some research, the Reishi mushroom extract comes with cancer-healing properties, and MIGU Adaptogen Bio-tech Co., Ltd supplies the best quality extract for cancer patients around the world. The medicinal agents of the extract, called phytonutrients, work wonders in healing cancer patients.\nAnother important addition is the Cordyceps Sinensis Extract, which comes with amazing anti-aging and anti-inflammatory properties. The extract can significantly improve learning ability and memory functions. This fungus is mainly parasitic in nature, and MIGU Adaptogen Bio-tech Co., Ltd cultivates it industrially for its large-scale production. This extract also has an anti-tumor potency to cure tumors naturally, and can be recommended to cancer patients who are undergoing chemotherapy. Besides cancer treatment, the extract is a powerful anti-oxidant and is also very useful in diabetes for controlling the blood sugar level.\nMIGU Adaptogen Bio-tech Co., Ltd also brings the Agaricus Blazei Extract, which is extracted from the fruiting body. This is a new medicinal mushroom that contains beta glucans that stimulate the natural killer cells, which are a vital part of our immune system. It is believed that these polysaccharides are potent enough to promote cellular health. 
Besides, it also has a strong anti-tumor effect and is beneficial for cancer patients after chemotherapy.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "“Fast shipping nice contact, always answers.\nHappy to help and high quality cultures. Thanks for the goodies. -Victoria A., Funginomi Customer\nHardwood, mostly, beech, oak rarely on coniferous wood, annual, regionally rare, easy to cultivate.\nThis mushroom has been used in Chinese and Japanese medicine for a long time and has been shown to help with many conditions. The most researched components of this mushroom are its glucans, called polysaccharides.\n-accompanying therapies, transplants and after cancer therapies\n-for nervous weakness, sleep disturbances, arthritis, allergies, liver cirrhosis, cardiovascular problems, muscle diseases, dizziness, asthma, hepatitis A, B, C, obesity\n-preventive or concomitant chronic inflammatory diseases of the kidneys, stomach, spleen, liver, lungs and heart\nBuy ANY 3 Cultures – We GIFT you 1 of them\n11,99 € – 39,99 €\nLiquid Cultures (10mL) are a mix of nutrients, dissolved in water and pressure sterilized. Ideally, a monoculture is then inoculated, to prevent competition. Some species need extra nutrition, so the mycelium grows healthy prior to inoculation.\nIt is very effective and just a small amount is enough to inoculate a big amount of spawn or soil.\nThese Slants (50mL) can keep your cultures viable for many years, if treated right. Also there is no more issue with condensation.\nIf you can work with petri dishes you will have no problem with these. Petris are better suited for short term storage or to multiply strains.\nWe will set a culture on a fresh Petri Dish (Ø90mm) for you. It will then grow out and if it withstands quality control, it will be sent. Petris are perfectly suited for short term storage, to multiply strains, to cross species and to clean up cultures. 
You may use a small cut of these cultures to inoculate substrates.\nOur Grain Spawn (1L) contains a variety of different nutrient sources, which means we use different types of seeds. It is soaked and then sterilized at >121°C to kill any competing organisms. We then inject a species. After proven purity, we ship it fresh to you.\nHow long does it take? Check the Spawn Run Time @Parameter.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "It’s that season again, and although the going rate is not profitable, the mushroom man can’t resist a hike in the woods to uncover a buried treasure. Chanterelles are on the left, matsutakes on the right.\n“The Matsutake grow under trees and are usually concealed under fallen leaves and/or the duff layer. It forms a symbiotic relationship with the roots of a limited number of tree species. In Japan it is most commonly associated with Japanese Red Pine. However in the Pacific Northwest it is found in coniferous forests made up of one or more of the following: Douglas Fir, Noble Fir, Shasta Fir, Sugar Pine, Ponderosa Pine and Lodge Pole Pine. Further south, it is also associated with hardwoods, namely Tanoak and Madrone forests. The Pacific Northwest and other similar temperate regions along the Pacific Rim also hold great habitat producing these and other quality wild mushrooms.\n- C. subalbidus: In California and the Pacific Northwest of USA there is also the White chanterelle, which looks like the golden chanterelle except for its off-white color. It is more fragile and found in lesser numbers than the golden chanterelle, but can otherwise be treated as its yellow cousin.\n- C. formosus: The Pacific golden chanterelle (C. formosus) has recently been recognized as a separate species from the golden chanterelle. It forms a mycorrhizal association with the Douglas-fir and Sitka spruce forests of the Pacific Northwest. 
This chanterelle has been designated Oregon’s state mushroom, due to its economic value and abundance.” (Wiki)", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-1", "d_text": "Their terpenoid profile, however, shows considerable variance; G. lucidum possesses a significantly larger quantity and variety of triterpenes than G. sinense. This is very interesting because G. sinense was, when available, used interchangeably with G. lucidum throughout history due to offering many of the same potential benefits.\nFrom a scientific standpoint, many of these benefits are attributed to the triterpenes of G. lucidum, such as anti-inflammatory, liver-protective and tumour-inhibiting properties, yet G. sinense contains a comparatively lower percentage of these compounds. This is also why G. sinense tastes nowhere near as bitter as G. lucidum, offering a very mild taste by contrast.\nThat said, Purple Reishi contains a notable amount of the thoroughly researched Ganoderic Acid A, and some novel terpenoids have been established from ethanol extractions of the fruiting body. Two lanostane triterpenes unique to this species were discovered in 2007, followed by nine new terpenoids that were isolated in 2011. Amongst these compounds, some exhibited anti-cancer mechanisms and others were influential in regulating liver enzymes responsible for the metabolism of toxins, drugs and their safe elimination from the body.\nHowever, G. sinense contains a more dominant presence of β-d-glucans, in particular the complex, branched polysaccharide GSP-6B, which has shown impressive in vitro immune-modulation activity. Other biologically active polysaccharides named GSA and GSW from G. 
sinense have been analysed via both in vitro and in vivo research and have revealed anti-tumour properties as well as the ability to regulate macrophage activity and even modulate populations of gut microbiota.\nAccording to the system of Chinese medicine, both Purple and Red Reishi were synonymous with the moniker ‘Lingzhi’. In this context, ling means ‘divine’ and zhi means ‘mushroom’. Both were regarded as supernatural medicines that conferred immortality on those who ingested them. However, despite their shared status there were times when the two species needed to be distinguished from one another, and so the name ‘Zizhi’ was assigned to Purple Reishi, simply meaning ‘sacred purple mushroom’.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "They have essential amino acids, fatty acids, B vitamins, vitamins E and K, minerals, and trace elements.\nThey also have similar bioactive substances, including beta-glucans and enzymes.\nThe benefits are similar in that they can both support the health of the heart, kidneys, liver, and lungs, and both support healthy aging and the regulation of your sleep cycle.\nBoth have anti-inflammatory and immune-modulating benefits.\nThe differences are primarily in the research that’s been done on both species so far.\nFor example, in animal studies, the cordyceps sinensis species has been found to reduce oxidative stress and improve exercise endurance.\nIn human studies, there is research on exercise performance and wellness in older people, and in sedentary adults, this species of cordyceps has been linked to energy metabolism.\nThe militaris species of the fungus has been researched for supporting high-intensity exercise in younger people and cell-mediated immunity.\nThe cordyceps sinensis species is rare and expensive in its wild form, but the cultivated form is easier to access and less expensive.\nOverall, you could take either species of cordyceps and get similar benefits. 
Some supplements on the market combine both species into one product.\nCordyceps militaris is sometimes called the cultivated alternative, but both species can be similarly cultivated.\nCordyceps sinensis is the most studied species, with militaris coming in behind it.\nTraditional Chinese Medicine Uses\nIn Traditional Chinese Medicine (TCM), this mushroom is called Chinese caterpillar fungus.\nOne of the uses of TCM is treating chronic kidney disease.\nIn traditional Eastern medicine, along with helping kidney function, cordyceps is also used for heart health, and it’s long been used to improve energy, vitality, and stamina.\nWhy Are So Many People Taking Cordyceps?", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-5", "d_text": "- Zeng, Qinghai, Fang Zhou, Li Lei, Jing Chen, Jianyun Lu, Jianda Zhou, Ke Cao, Lihua Gao, Fang Xia, Shu Ding, Lihua Huang, Hong Xiang, Jingjing Wang, Yangfan Xiao, Rong Xiao, and Jinhua Huang. “Ganoderma Lucidum Polysaccharides Protect Fibroblasts against UVB-induced Photoaging.” Molecular Medicine Reports 15.1 (2016): 111-16. Web.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-4", "d_text": "- Jesenak, Milos, Slavomir Urbancek, Juraj Majtan, Peter Banovcin, and Jana Hercogova. “β-Glucan-based Cream (containing Pleuran Isolated From Pleurotus Ostreatus) in Supportive Treatment of Mild-to-moderate Atopic Dermatitis.” Journal of Dermatological Treatment 27.4 (2015): 351-54. Web.\n- Kurtipek, Gulcan Saylam, Arzu Ataseven, Ercan Kurtipek, Ilknur Kucukosmanoglu, and Mustafa Rasid Toksoz. “Resolution of Cutaneous Sarcoidosis Following Topical Application of Ganoderma Lucidum (Reishi Mushroom).” Dermatology and Therapy 6.1 (2016): 105-09. Web.\n- Lin, Kai-Wei, Yen-Ting Chen, Shyh-Chyun Yang, Bai-Luh Wei, Chi-Feng Hung, and Chun-Nan Lin. “Xanthine Oxidase Inhibitory Lanostanoids from Ganoderma Tsugae.” Fitoterapia 89 (2013): 231-38. Web.\n- Rios JL. “Effects of triterpenes on the immune system”. 
J Ethnopharmacol. 2010;128(1):1-14\n- Shen, Tao, Chao Duan, Beidong Chen, Meng Li, Yang Ruan, Danni Xu, Doudou Shi, Dan Yu, Jian Li, and Changtao Wang. “Tremella fuciformis Polysaccharide Suppresses Hydrogen Peroxide-triggered Injury of Human Skin Fibroblasts via Upregulation of SIRT1.” Molecular Medicine Reports (2017): n. pag. Web.\n- Tie, Lu, Hong-Qin Yang, Yu An, Shao-Qiang Liu, Jing Han, Yan Xu, Min Hu, Wei-Dong Li, Alex F. Chen, Zhi-Bin Lin, and Xue-Jun Li. “Ganoderma Lucidum Polysaccharide Accelerates Refractory Wound Healing by Inhibition of Mitochondrial Oxidative Stress in Type 1 Diabetes.” Cellular Physiology and Biochemistry 29.3-4 (2012): 583-94. Web.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-9", "d_text": "Turkey tails also are known as trametes versicolor, coriolus versicolor, polyporus versicolor, yun zhi, and kawartake.\nThe polysaccharides krestin and peptide in turkey tail offer cancer-fighting properties. Studies also found that a daily dose of turkey tail improved the immune function in women with breast cancer.\nTurkey tail has krestin and polysaccharide peptide to boost immunity and suppress inflammation.\nIn a study of patients with gum disease, those who received turkey tail and reishi mushrooms showed positive results.\nThe prebiotics in turkey tail help grow good bacteria such as bifidobacterium and lactobacillus while limiting bad bacteria like staphylococcus and clostridium.\nTurkey tail mushroom also is used to treat HPV, increase energy, and recover from long-term illnesses.\nUnlike many other mushroom species, oyster mushrooms are relatively new, having been cultivated for around 100 years. Nonetheless, they have a higher concentration of antioxidants than any other commercial mushroom. A popular addition to many dishes, the oyster mushroom often is served on its own thanks to its mild taste and anise- or licorice-like flavor. 
This popular mushroom grows easily and does not spoil quickly after being harvested, instead maintaining its freshness. In addition to its popularity as a food item, oyster mushrooms are used in the manufacturing of cosmetics, pharmaceuticals, and paper. Not surprisingly, oyster mushrooms rank third in the amount produced.\nOyster mushrooms grow wild in forests, causing their host tree to decompose, and like the cordyceps, oyster mushrooms are carnivorous, eating parasitic animals on their host tree. They don’t stop there, however, and will eat debris from food to petroleum. As such, they are a great resource for mycorestoration – the use of fungi to restore damaged habitats.\nOyster mushrooms grow in a range of colors including white, tan, yellow, brown and pink, and are also known as Pleurotus ostreatus, tree oyster, oyster shelf, straw mushroom, tamogitaki, and hiratake.\nTests on human cells found that oyster mushroom extract may help to suppress breast and colon cancer. In-vivo tests also revealed therapeutic effects in fighting colorectal tumors and leukemia cells. An isolated lectin showed anti-tumor activity in mice.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "Lucid ganoderma contains protein, various amino acids, polysaccharides, fats, terpenes, ergosterol, organic acids, alkaloids, adenine, uracil, various enzymes and various trace elements such as Ge, etc.\nMain Pharmacological Actions:\n3. Strengthen the heart\n4. Improve immune function\n5. Improve adrenocortical function\n7. Reduce the excitability of parasympathetic nerves\nTumor Restriction Mechanism\nBy stimulating the body-intrinsic effector cells with cytotoxicity, ganoderma lucidum spore promotes the biological activity of retothelium and kills the tumor cells, restricting the growth of tumor or eliminating tumor.\n1. 
Activates the body's effector cells, enhances the body's ability to kill and lyse tumor cells, and raises tumor-damaging activity by over 80%.\n2. Induces the secretion and synthesis of cytokines, enhances the body's ability to kill tumors, activates effector cells, and protects normal cells.\n3. Restrains immunosuppressive factors produced by tumor cells, exposing tumor cells to killer cells.\n4. Restrains tumor cells from growing.\n5. Repairs and enhances the body's sensitivity to cancer effectors, enhancing the body's adaptability.\n6. Eliminates free radicals, sustaining strong superoxide dismutase activity in the body.\n7. Improves the synthesis of marrow-cell DNA, RNA and protein, accelerates the division and propagation of marrow cells, and enhances the blood-producing function of the marrow.\n8. Reduces the surface potential of cancer cells, inducing cancer-cell genes to mutate again and initiate their decline program.\n9. Kills tumor cells, though the mechanism remains unknown so far.\nGanoderma Lucidum is a rare fungus in the traditional Chinese medicinal treasure house, and it has been worshipped and deified for 3000 years in East Asia. Zhongke Ganoderma Lucidum Spore is the best of the modern ganoderma lucidum products and has been widely used as a BRM (biological response modifier) to help resist tumors; excellent effect has been achieved, which has brought the hope of life and recovery to those who suffer from cancer or other diseases.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-2", "d_text": "It was listed among the six primary types of Ganoderma in the seminal pharmacopoeia of Chinese medicine, the Shennong Ben Cao Jing. In this text it was regarded as one of the truly superior medicines and long-term use was understood to “make the body light, prevent senility and prolong life so as to make one an immortal.”\nIt was prescribed to fortify the circulation of qi within the body; it was deeply restorative and could promote radiant vitality. 
It was commonly used to treat conditions relating to the kidneys and the sense of hearing; it strengthened the bones, joints and connective tissue, and was understood to protect the user on a subtle, spiritual level.\nPurple Reishi was understood throughout the ages to be an exceptional aid in the replenishment of spent vitality and deep rejuvenation of the body and mind after exhaustion, burnout, childbirth, sickness and injury. Our energetic storehouse of vitality is known as ‘Jing’ in Chinese medicine, and this species of Reishi is ranked at the very top of natural medicines that help to restore this vital essence.\nIt is a tonic to the kidneys, liver and heart, and was traditionally used as a support for meditation and spiritual practice due to the subtle yet profound effect it was believed to have on stabilising the emotional mind. Purple Reishi is one of the genuine treasures of Taoist herbal medicine, and its rarity only magnified its value.\nHow is it Different from Red Reishi?\nAs already mentioned, the two species are so similar that they were often used interchangeably or one was substituted in the absence of the other. However, as we have seen, there are differences such as the overall profile of active constituents, the taste and obviously the colour/appearance.\nWhile both of these majestic organisms are exceptional, well-rounded tonics, Purple Reishi seems to emphasise a deeper restorative quality than its crimson relative. Throughout history G. sinense was always used in favour of G. lucidum in cases of arthritis, weak bones/joints and damage to cartilage and other connective tissue. It was employed as a preventative of these conditions as well as a treatment for them.\nIn cases of exhaustion and burnout both species were often used, although again, the deeply regenerative qualities of G. 
sinense would be preferred if it was available.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-15", "d_text": "The influence of substrate on bioactivity and metabolites from fruit bodies\nMetabolite production is often substrate dependent, and selected bioactive metabolites may be produced on selected substrates only (Shang et al. 2012), or produced in significantly different quantities (Sørensen and Giese 2013). Even minor variations in environment or nutrition have the potential to affect the quantity and diversity of fungal metabolites. The deliberate elaboration of cultivation parameters to influence the secondary metabolism of a strain has been called the OSMAC (one strain, many compounds) approach (Bode et al. 2002). Substrate modifications have already been used to enhance secondary metabolite production and bioactivity of the most popular medicinal polypores, i.e. Reishi (G. lucidum) (You et al. 2012) and Chaga (Inonotus obliquus) (Xu and Zhu 2011). In F. fomentarius, submerged culturing processes were optimized for the production of bioactive polysaccharides (Chen et al. 2008, Chen et al. 2011) and for laccase production (Neifar et al. 2011). Up to now, we are not aware of studies focusing on the effect of substrate on the bioactivity of medicinal polypore fruit bodies. Future studies based on fruit bodies cultivated under controlled conditions and on different substrates will help to elucidate this issue.\nThe medicinal potential of F. fomentarius, F. pinicola and P. betulinus\nOur bioactivity screening generally confirms the potential of F. fomentarius, F. pinicola and P. betulinus as medicinal polypores. Because of their bioactivity against gram-positive bacteria, and their potency as an antifungal agent, we especially consider F. pinicola to be worth further investigation on a molecular level. 
This fungus was widely applied in traditional European medicine, but its benefits and utilisation were forgotten after the introduction of synthetic drugs. Due to the renaissance of naturopathy and also due to increasing bacterial and fungal resistances, working with traditional medicinal fungi is becoming increasingly interesting and rewarding.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Ganoderma Lucidum Spirulina Capsule [Large Bottle]\n[Ingredients] 100% organic Ganoderma lucidum powder, spirulina. A kind of natural, organic food for your health.\nIt adopts log-cultivated Ganoderma lucidum and spirulina as main ingredients, using modern bio-technology. It richly contains Ganoderma lucidum polysaccharide, triterpene Ganoderma lucidum acid, protein, γ-GLA, vitamins, micro-minerals and so on. Long-term usage will relieve fatigue, facilitate fat metabolism, and renew and activate brain cells to improve overall immunity.\n1. A natural product for beauty care: facilitates metabolism, provides anti-oxidation and nourishes skin deeply.\n2. For computer operators and workers exposed to radiation: helps counter free radicals, relieves fatigue and regulates the endocrine system.\n3. Has good efficacy in the treatment of high blood pressure, high blood fat and high blood sugar, for immune support and resistance to disease.\n[Usage & Dosage]\n2-4 Ganoderma Lucidum Spirulina Capsules each time, 3 times daily before meals, or take as recommended.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-5", "d_text": "Interest in Phellinus linteus was renewed after World War II when it was used to treat victims of the Nagasaki and Hiroshima bombings. Continued research led to Korean FDA approval in 1993 for the use and sale of Phellinus linteus as an anti-tumor medicine.\nPhellinus linteus mushrooms grow primarily on mulberry trees, but also on other closely aligned species, mainly in China, Korea, and Japan. 
It has a dark, hoof-like shape that resembles the bark of the branches and stems on which it grows.\nOther names include mesimakobu (Japan), song gen (China), and sang hwang (Korea).\nHispidin, a phenolic compound, along with acid polysaccharides and the polysaccharide-protein complex in Phellinus mushrooms, increases immune cell activity, modulates the proliferation of bad cells, and helps inhibit tumor growth.\nPhellinus linteus mushrooms have two polysaccharide-protein complexes, beta D-glucan and lectin, to help control the immune system.\nPhellinus linteus extracts prepared with CHCl3, n-BuOH and H2O have been shown to fight bacteria.\nCambodian Phellinus linteus was tested to determine its appropriateness as an ingredient in cosmetics and was found to have anti-lipid peroxidation and anti-wrinkle effects.\nExopolysaccharides extracted from Phellinus may ward off autoimmune diabetes through regulation of the cells involved in the immune response. Furopyranone compounds may help treat diabetic complications, and interfungins can help regulate blood sugar levels.\nPhellinus linteus may also help gastroenteric dysfunction, eczema, heart disease, and high blood pressure.\nPoria filaments have been used in Chinese medicine for two thousand years for their healing properties. Due to its sweet taste it is said to target the heart, spleen, lung, and kidney meridians, which affect the body’s spirit and qi (energy). Because it was expensive to produce, it originally was a delicacy reserved for the royal family. Now it is commonly available and widely-used, not only as an alternative medicine but also in many Chinese patent medicines. 
Known as the “medicine of immortality”, poria traditionally functioned to get rid of dampness and disinhibit water, for which it is honored as a dampness-eliminating panacea.", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-1", "d_text": "The superior Enzyme breaking process was patented and commercialized in 1998, producing Ganoderma Lucidum spore powder of the highest quality and efficacy. It is estimated that only 2% of the Lingzhi spore powder in the market is produced using the patented Enzyme “cracking” process.\nDifferences Between Spore Powder and Spore Oil\nGanoderma Lucidum spore oil is a lipid active substance extracted from the broken Ganoderma Lucidum spore powder by supercritical CO2 fluid extraction technology, and is the essence of Ganoderma Lucidum extract. The main components of Ganoderma Lucidum spore oil are Ganoderma Lucidum triterpenoids, ergosterol, unsaturated fatty acids, etc. High-grade Ganoderma Lucidum spore oil also contains precious components such as peroxyergosterol.\nAt present, Ganoderma Lucidum spore oil extraction by supercritical CO2 fluid technology generally extracts the oil from spores broken by mechanical means, or from intact spores softened by high temperatures, followed by granulating and extraction. Ganoderma Lucidum spore oil extracted from spores broken by mechanical means has low physiological activity, and thereby poor quality, because a part of the bioactive substances obtained are spoiled by oxidation during the mechanical process.\nMajor Bioactive Components - Triterpenes and Polysaccharides\nFungi are remarkable for the variety of high-molecular-weight polysaccharide structures that they produce, and bioactive polyglycans are found in all parts of the mushroom. Polysaccharides represent structurally diverse biological macromolecules with wide-ranging physicochemical properties. 
Various polysaccharides have been extracted from the fruit body, spores, and mycelia of lingzhi; they are produced by fungal mycelia cultured in fermenters and can differ in their sugar and peptide compositions and molecular weight (e.g., ganoderans A, B, and C). Ganoderma lucidum polysaccharides (GL-PSs) are reported to exhibit a broad range of bioactivities, including anti-inflammatory, hypoglycemic, antiulcer, antitumorigenic, and immunostimulating effects.

…2007) and also for the production of bioactive secondary metabolites (Lv et al. 2012; Sørensen and Giese 2013). We therefore recommend that strain selection and preliminary bioactivity testing should become common practice for all studies focusing on constituents and biological effects of medicinal polypores. Fungal strain selection is routinely applied for a wide range of species used for food (Chiu et al. 1999; Liu et al. 2012; Terashima et al. 2002), biotechnology (Geiger et al. 2012), and biocontrol (Cui et al. 2014).
All strains used in this study were isolated from those fruit bodies which were later used for extraction and bioactivity testing. Their colony growth rates were tested under standard conditions in order to investigate strain-specific properties without the influence of different substrates or environmental factors. When considering only strains of F. fomentarius belonging to the same lineage, the quantity and quality of secondary metabolites produced by temperate strains showed clear differences.
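Colony growth rates of the kind compared above are typically reported as radial extension per day, estimated from repeated colony measurements. A minimal sketch of that calculation, using an ordinary least-squares slope on purely hypothetical measurements (not data from the study):

```python
# Radial colony growth rate (mm/day) from repeated radius measurements,
# estimated with a least-squares line fit. All numbers are illustrative only.

def growth_rate(days, radii_mm):
    """Slope of colony radius vs. time via ordinary least squares."""
    n = len(days)
    mean_t = sum(days) / n
    mean_r = sum(radii_mm) / n
    cov = sum((t - mean_t) * (r - mean_r) for t, r in zip(days, radii_mm))
    var = sum((t - mean_t) ** 2 for t in days)
    return cov / var

# Hypothetical measurements for two strains on the same medium:
days = [0, 2, 4, 6, 8]
strain_a = [0.0, 3.0, 6.0, 9.0, 12.0]   # faster strain
strain_b = [0.0, 1.8, 3.6, 5.4, 7.2]    # slower strain
print(growth_rate(days, strain_a))  # 1.5 mm/day
print(growth_rate(days, strain_b))  # ~0.9 mm/day
```

Comparing such slopes across strains grown under identical conditions is what allows strain-specific differences to be separated from substrate or environmental effects.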
One strain isolated from Fagus (IB20130019) exhibited significantly different growth characteristics compared to all other strains isolated from Picea: it grew significantly more slowly and had a lower optimal temperature. This strain also differed markedly in metabolite production and bioactivity.
F. pinicola strains isolated from Picea also had different growth rates and optimum temperatures than strains isolated from other substrates (Abies, Alnus, or Larix), even when originating from the same geographical area. Bioactivity tests confirmed these strain-specific differences, because all strains from other substrates had stronger antifungal activity than strains isolated from Picea. This suggests that substrate is a strong trigger for, or may at least influence, metabolite production. In any case, the strain-specific differences remained stable under standardized conditions (in vitro). Additional studies comparing extracts from wild-grown fruit bodies with extracts from fruit bodies cultivated in vitro (under standard conditions) from the same strain will further elucidate the influence of strain characteristics on the metabolite profile and the respective bioactivities. Testing different environmental parameters and substrates is a suitable method to pre-select strains of medicinal polypores and their growth conditions for upscaling to increase the biosynthesis of bioactive secondary metabolites.
Ganoderma RG & GL
Our founder, Dr. Lim Siow Jin, has spent more than a decade in the research and development of mushroom cultivation technology. Through this R&D, he has developed a special technology that is capable of producing a 100% yield of high-quality mushrooms. Furthermore, of the 3,000-plus mushroom species known, 2,000 are edible, 200 have therapeutic effects, and only six of them have the highest quality. Our special cultivation technology has consistently produced these six species, namely Kimshen Gano, Peacock Gano, Heart Gano, Liver Gano, Brain Gano and Ruyi Gano. These facts account for the consistent and superior efficacy of our RG/GL.
By consuming Ganoderma regularly, the natural immune system of the body gets regulated and becomes effective. This prevents infectious diseases and toxin accumulation in the body. Though one may appear healthy today, dormant diseases can strike at a later date, when the natural immune system fails. Ganoderma also balances the biochemistry and stabilizes all functions of the body. Therefore, Ganoderma is advised for a healthy person as a PREVENTIVE measure. A maintenance dosage of 1-2 pairs of RG/GL is recommended throughout life.
No. Never. Ganoderma is NOT a replacement or substitute for medicine. Ganoderma should be consumed along with medications, as a food supplement only. Medications should always be taken as per the advice of one's own physician. In actual experience, health improves due to the consumption of Ganoderma, and therefore the dependence on medications comes down.
Toxins may originate from internal and external causes.
They can be produced internally as metabolic and body waste, or originate from external sources like polluted air, smoke, food, water and various micro-organisms that we may come into close contact with. Our daily food may contain additives like preservatives, antibiotics, flavoring and coloring agents, stabilizers, and other chemicals used in agriculture.
Ganoderma does not lead to any side-effects whatsoever. There will be a phase of detoxification in the body, for a few days, within the first 30 weeks. During this time the body discharges all the toxins and is cleansed.

These fungi have been extensively applied in traditional Chinese medicine (TCM) up to the present day (Chang 2000; Chang and Wasser 2012; Hobbs 1995; Ying et al. 1987) and are becoming more and more popular also in other parts of the world, where they are used as a source for medicinal compounds and therapeutic adjuvants, or as health food supplements. They show a wide range of bioactivities including anti-cancer, anti-inflammatory, antiviral and immuno-enhancing effects (Grienke et al. 2014; Molitoris 2005; Paterson 2006). The large number of scientific studies focusing on their bioactivity (Khatun et al. 2011; Lo and Wasser 2011; Xu et al. 2010; Zhao 2013; Zhong and Xiao 2009; Zhou et al. 2011), secondary metabolite production (Hwang et al. 2013; You et al. 2013) and genomics (Floudas et al. 2012; Liu et al. 2012) is therefore not surprising.
Secondary metabolites of medicinal polypores have been the focus of many studies (Grienke et al. 2014; Zjawiony 2004), but the importance of fungal strain for bioactivity and metabolite production has not been investigated so far. Strain-specific differences in bioactivities (on the function of macrophages) were detected for different strains of the ascomycete Ophiocordyceps sinensis, whose fruit bodies are highly praised as a traditional remedy (Meng et al. 2013).
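Strain-to-strain differences in antimicrobial activity like these are commonly quantified as minimum inhibitory concentrations (MICs) determined by two-fold serial dilution assays. A minimal sketch of that readout, using hypothetical inhibition results rather than values from any study:

```python
# Broth-microdilution-style MIC readout from a two-fold serial dilution.
# All inhibition results below are hypothetical, for illustration only.

def dilution_series(start_ug_ml, steps):
    """Two-fold dilution series starting at start_ug_ml (highest first)."""
    return [start_ug_ml / (2 ** i) for i in range(steps)]

def mic(concentrations, inhibited):
    """MIC = lowest tested concentration that still inhibits visible growth."""
    inhibiting = [c for c, hit in zip(concentrations, inhibited) if hit]
    return min(inhibiting) if inhibiting else None

concs = dilution_series(1000.0, 6)      # 1000, 500, 250, 125, 62.5, 31.25 ug/mL
inhibited = [c >= 125 for c in concs]   # suppose growth stops at >= 125 ug/mL
print(concs)                  # [1000.0, 500.0, 250.0, 125.0, 62.5, 31.25]
print(mic(concs, inhibited))  # 125.0
```

Reporting the MIC as the lowest inhibiting concentration in the series is why published values cluster on two-fold steps such as 31, 125, or 1000 μg mL−1.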
Distinct strain differences are also known for anamorphic ascomycetes, e.g. for the secondary metabolite production of Fusarium avenaceum, where substrate plays an additional important role in the regulation of secondary metabolite biosynthesis (Sørensen and Giese 2013).
Species delimitation of polypores was long based upon macro-morphological and ecological characters. However, these classical morphological species concepts have proven to be too wide for many important medicinal polypore species, e.g. for Ganoderma lucidum, G. applanatum, or Laetiporus sulphureus (Banik et al. 2010; Banik et al. 2012; Hseu et al. 1996).

Am J Exp Agric 2(1):47–73, doi:10.9734/AJEA/2012/492
Lemieszek MK, Langner E, Kaczor J, Kandefer-Szerszen M, Sanecka B, Mazurkiewicz W, Rzeski W (2009) Anticancer effect of fraction isolated from medicinal birch polypore mushroom, Piptoporus betulinus (Bull.: Fr.) P. Karst. (Aphyllophoromycetideae): in vitro studies. Int J Med Mushrooms 11:351–364, doi:10.1615/IntJMedMushr.v11.i4.20
Liu X-T, Winkler AL, Schwan WR, Volk TJ, Rott M, Monte A (2010) Antibacterial compounds from mushrooms II: lanostane triterpenoids and an ergostane steroid with activity against Bacillus cereus isolated from Fomitopsis pinicola. Planta Med 76:464–466, doi:10.1055/s-0029-1186227
Liu D, Gong J, Dai W, Kang X, Huang Z, Zhang H-M, Liu W, Liu L, Ma J, Xia Z, Chen Y, Chen Y, Wang D, Ni P, Guo A-Y, Xiong X (2012) The genome of Ganoderma lucidum provides insights into triterpene biosynthesis and wood degradation. PLoS ONE 7:e36146, doi:10.1371/journal.pone.0036146
Lo H-C, Wasser SP (2011) Medicinal mushrooms for glycemic control in diabetes mellitus: History, current status, future perspectives, and unsolved problems (review).
Int J Med Mushrooms 13:401–426, doi:10.1615/IntJMedMushr.v13.i5.10
Lv GP, Zhao J, Duan JA, Tang YP, Li SP (2012) Comparison of sterols and fatty acids in two species of Ganoderma. Chem Cent J 6:10, doi:10.1186/1752-153x-6-10
Magan N (2008) Chapter 4 Ecophysiology: Impact of environment on growth, synthesis of compatible solutes and enzyme production.

Chem Nat Compd 43(6):687–688, doi:10.1007/s10600-007-0229-4
Banik MT, Lindner DL, Ota Y, Hattori T (2010) Relationships among North American and Japanese Laetiporus isolates inferred from molecular phylogenetics and single-spore incompatibility reactions. Mycologia 102(4):911–917, doi:10.3852/09-044
Banik MT, Lindner DL, Ortiz-Santana B, Lodge DJ (2012) A new species of Laetiporus from the Caribbean basin. Kurtziana 37:15–21
Bode HB, Bethe B, Hofs R, Zeeck A (2002) Big effects from small changes: possible ways to explore nature's chemical diversity. Chembiochem 3(7):619–627, doi:10.1002/1439-7633(20020703)3:7<619::aid-cbic619>3.0.co;2-9
Breheret S, Talou T, Rapior S, Bessiere JM (1997) Volatile compounds: A useful tool for the chemotaxonomy of Basidiomycetes. Cryptogam Mycol 18:111–114
Chang STMK (2000) Ganoderma lucidum - Paramount among medicinal mushrooms. Discov Innovat 12:97–101
Chang ST, Wasser SP (2012) The role of culinary-medicinal mushrooms on human welfare with a pyramid model for human health. Int J Med Mushrooms 14:95–134, doi:10.1615/IntJMedMushr.v14.i2.10
Chen W, Zhao Z, Chen S-F, Li Y-Q (2008) Optimization for the production of exopolysaccharide from Fomes fomentarius in submerged culture and its antitumor effect in vitro.
Bioresource Technol 99:3187–3194, doi:10.1016/j.biortech.2007.05.049
Chen W, Zhao Z, Li Y (2011) Simultaneous increase of mycelial biomass and intracellular polysaccharide from Fomes fomentarius and its biological function of gastric cancer intervention.

- Original article
- Open Access
Fungal strain matters: colony growth and bioactivity of the European medicinal polypores Fomes fomentarius, Fomitopsis pinicola and Piptoporus betulinus
AMB Express volume 5, Article number: 4 (2015)
Polypores have been applied in traditional Chinese medicine up to the present day, and are becoming more and more popular worldwide. They show a wide range of bioactivities including anti-cancer, anti-inflammatory, antiviral and immuno-enhancing effects. Their secondary metabolites have been the focus of many studies, but the importance of fungal strain for bioactivity and metabolite production has not been investigated so far for these Basidiomycetes. Therefore, we screened several strains from three medicinal polypore species used in traditional European medicine: Fomes fomentarius, Fomitopsis pinicola and Piptoporus betulinus. A total of 22 strains were compared concerning their growth rates, optimum growth temperatures, as well as antimicrobial and antifungal properties of ethanolic fruit body extracts. The morphological identification of strains was confirmed based on rDNA ITS phylogenetic analyses. Our results showed that species delimitation is critical due to the presence of several distinct lineages, e.g. within the Fomes fomentarius species complex. Fungal strains within one lineage showed distinct differences in optimum growth temperatures, in secondary metabolite production, and accordingly, in their bioactivities. In general, F. pinicola and P. betulinus extracts exerted distinct antibiotic activities against Bacillus subtilis and Staphylococcus aureus at minimum inhibitory concentrations (MIC) ranging from 31-125 μg mL−1; the antifungal activities of all three polypores against Aspergillus flavus, A. fumigatus, Absidia orchidis and Candida krusei were often strain-specific, ranging from 125-1000 μg mL−1. Our results highlight that a reliable species identification, followed by an extensive screening for a 'best strain', is an essential prerequisite for the proper identification of bioactive material.
Polypores are a diverse group of Agaricomycetes (Basidiomycota) with tough poroid hymenophores.

Ganoderma-Aloe Cream 100 ml
The Ganoderma lucidum or Reishi, an ancient Asian fungus considered a source of eternal youth, has active ingredients which act on various organs, including the skin, providing minerals, antioxidants and vitamins with rejuvenating, anti-stress, oxygenating and revitalizing actions. Its combination with Aloe Vera and Argan and Rosehip oils gives the epidermis the desired beauty and youthfulness.
Mode of use: Apply Ganoderma-Cream in the morning and/or evening on face and neck with a gentle massage until completely absorbed.
For best results, we recommend applying the Ganoderma Essence first.

Polysaccharides are normally obtained from the mushroom by extraction with hot water followed by precipitation with ethanol or methanol, but they can also be extracted with water and alkali. Structural analyses of GL-PSs indicate that glucose is their major sugar component. However, GL-PSs are heteropolymers and can also contain xylose, mannose, galactose, and fucose in different conformations, including 1–3, 1–4, and 1–6-linked β and α-D (or L)-substitutions. Branching conformation and solubility characteristics are said to affect the antitumorigenic properties of these polysaccharides. The mushroom also consists of a matrix of the polysaccharide chitin, which is largely indigestible by the human body and is partly responsible for the physical hardness of the mushroom. Numerous refined polysaccharide preparations extracted from Ganoderma lucidum are now marketed as over-the-counter treatments for chronic diseases, including cancer and liver disease.
Terpenes are a class of naturally occurring compounds whose carbon skeletons are composed of one or more isoprene C5 units. Examples of terpenes are menthol (a monoterpene) and β-carotene (a tetraterpene). Many are alkenes, although some contain other functional groups, and many are cyclic. These compounds are widely distributed throughout the plant world and are found in prokaryotes as well as eukaryotes. Terpenes have also been found to have anti-inflammatory, antitumorigenic, and hypolipidemic activity.
Terpenes in Ginkgo biloba, rosemary (Rosemarinus officinalis), and ginseng (Panax ginseng) are reported to contribute to the health-promoting effects of these herbs.
Ganoderma lucidum is clearly rich in triterpenes, and it is this class of compounds that gives the herb its bitter taste and, it is believed, confers on it various health benefits, such as lipid-lowering and antioxidant effects. However, the triterpene content is different in different parts and growing stages of the mushroom. The profile of the different triterpenes in Ganoderma lucidum can be used to distinguish this medicinal fungus from other taxonomically related species, and can serve as supporting evidence for classification.

In: Boddy L, Frankland JC, van West P (eds) British Mycological Society Symposia Series, vol 28. Academic Press, pp 63–78, doi:10.1016/S0275-0287(08)80006-9
McCormick MA, Grand LF, Post JB, Cubeta MA (2013) Phylogenetic and phenotypic characterization of Fomes fasciatus and Fomes fomentarius in the United States. Mycologia 105:1524–1534, doi:10.3852/12-336
Meng L-Z, Lin B-Q, Wang B, Feng K, Hu D-J, Wang L-Y, Cheong K-L, Zhao J, Li S-P (2013) Mycelia extracts of fungal strains isolated from Cordyceps sinensis differently enhance the function of RAW 264.7 macrophages. J Ethnopharm 148:818–825, doi:10.1016/j.jep.2013.05.017
Molitoris HP (2005) Fungi: companions of man in good and evil. Int J Med Mushrooms 7:49–73, doi:10.1615/IntJMedMushr.v7.i12.70
Neifar M, Kamoun A, Jaouani A, Ellouze-Ghorbel R, Ellouze-Chaabouni S (2011) Application of asymmetrical and Hoke designs for optimization of laccase production by the white-rot fungus Fomes fomentarius in solid-state fermentation. Enzyme Res 2011:368525, doi:10.4061/2011/368525
Ota Y, Hattori T, Banik MT, Hagedorn G, Sotome K, Tokuda S, Abe Y (2009) The genus Laetiporus (Basidiomycota, Polyporales) in East Asia.
Mycol Res 113:1283–1300, doi:10.1016/j.mycres.2009.08.014
Paterson RRM (2006) Ganoderma - A therapeutic fungal biofactory.

Ganoderma or Reishi Mushroom – Hype or a Healing Miracle?
For over a year now I've seen the words ganoderma or reishi, and I decided to perform my own research to see whether the Ganoderma or Reishi mushroom is hype or a healing miracle, as it is being advertised. If you are like me, MLMs come your way quite often, and many of them you should not pay a dime for. Being somewhat skeptical yet an inquisitive individual, I'm not easily impressed by the words "healing miracle." After all, these words have been used over the years to sell everything from snake oil to placebo pills, only to take money from trusting individuals. With that said, please allow me to share with you the information I've found.
What is Ganoderma?
The Ganoderma mushroom is not recommended for cooking; however, for over 2,000 years it has been used in traditional Chinese medicine (TCM) for the medicinal purposes of promoting health and longevity. Ganoderma lucidum is sometimes referred to by the Chinese as the "Miraculous King of Herbs" and is highly regarded for its medicinal properties that assist in helping to improve the body's healing abilities.
The ancient Taoists believed this mushroom to be the "elixir of eternal youth." Ganoderma, commonly known as reishi, is a hard, bitter mushroom that is said to relieve fatigue, lower high blood pressure, reduce cholesterol, and reduce inflammation, while building stamina and improving the immune system.
Japanese and Chinese emperors for years drank herbal tea and mushroom concoctions to achieve virility and vitality. Because it was rare and very expensive, the commoners did not get to partake of this natural herb until later years, when growing and cultivating it became easier.
This mushroom is recorded in the 2,000-year-old Chinese medical texts and is credited with curing cancers and various diseases such as arthritis, hypertension, and liver disorders.
This medicinal fungus is commonly known by many names such as Ling zhi (meaning "the herb of spiritual potency"), Mannentake, and Reishi. Ganoderma is a folk medicine that has been in existence since ancient times and can be found in six different color hues, with red being the most commonly used.

This also allowed for a comparison of strains based on both the bioactivities of fruit body extracts and the physiological properties of cultures in vitro.
Physiological properties are often used for species delimitation because they reflect genetic distance: Fomes strains belonging to different species have different physiological properties, like colony growth responses to temperature (McCormick et al. 2013). The question whether strains within one species, e.g. F. fomentarius, also differ in growth responses to temperature was not addressed so far.
The main aim of this work was to study the influence of fungal strain (isolate) of three selected medicinal polypores on their antimicrobial bioactivity and physiological properties. The bioactivity was assessed for ethanolic extracts from the same fruit bodies which had been used for isolation of mycelial cultures. Bioactivity was tested against potentially pathogenic fungi and bacteria. Physiological properties of mycelial cultures (colony growth rate and optimum temperature) were investigated under standard in vitro conditions. All polypore strains used were characterized by rDNA ITS sequences, and phylogenetic analyses were carried out to verify species identification. The potential influence of geographic provenance and substrate on strain evolution and properties is discussed.
Material and methods
Organism collection, identification, culture
F. fomentarius, F. pinicola, and P. betulinus fruit bodies were mainly collected in Tyrol, Austria in 2013. Fruit body vouchers were deposited in the Mycological Collection of the Herbarium Innsbruck (IB), cultures were deposited in the Jena Microbial Resource Collection (JMRC), and sequences were deposited in GenBank. Voucher numbers, GenBank accession numbers, JMRC collection numbers, plant hosts, as well as habitat information are given in Table 1.
Sterile techniques were used to obtain cultures from the context tissue. Small pieces of context tissue (2.0 mm3) were excised from each basidiome, plated on 2-3 % w/v malt extract agar plates (MEA, Art. Nr. X976.2, Roth, Karlsruhe, Germany) and incubated for one to three weeks at a temperature of 20°C. Cultures were checked regularly for contaminations. Mycelial plugs of 1-3 mm in diameter were taken from the edge of the mycelium and transferred to new plates to establish pure cultures and carry out further growth experiments.

A polysaccharide named GSP-2 with a molecular size of 32 kDa was isolated from the fruiting bodies of Ganoderma sinense. Its structure was elucidated, by a combination of chemical and spectroscopic techniques, as a β-glucan with a backbone of (1→4)– and (1→6)–Glcp, bearing terminal- and (1→3)–Glcp side-chains at the O-3 position of (1→6)–Glcp. Immunological assays showed that GSP-2 significantly induced the proliferation of BALB/c mouse splenocytes, targeting only B cells, and enhanced the production of several cytokines in human peripheral blood mononuclear cells and derived dendritic cells.
In addition, fluorescently labeled GSP-2 was phagocytosed by RAW 264.7 cells and induced nitric oxide secretion from the cells.
Han, Xiao-Qiang, Gar-Lee Yue, Cai-Xia Dong, Chung-Lap Chan, Chun-Hay Ko, Wing-Shing Cheung, Ke-Wang Luo, Hui Dai, Chun-Kwok Wong, Ping-Chung Leung, and Quan-Bin Han. "Structure elucidation and immunomodulatory activity of a beta glucan from the fruiting bodies of Ganoderma sinense." PLoS ONE 9.7 (2014): e100380.

Ganoderma lucidum is an interesting shelf fungus that is important as a medicine in the Far East, in places such as China, Japan and Korea. G. lucidum is of particular interest because it has been portrayed as a "fix-it-all" herbal remedy for maladies such as HIV, cancer, low blood pressure, high blood pressure, diabetes, rheumatism, heart problems, paralysis, ulcers, asthma, tiredness, hepatitis A, B, and C, insomnia, sterility, psoriasis, mumps, epilepsy, alcoholism, and the list goes on. These claims are mostly made by the people who are selling G. lucidum herbal supplements, but G. lucidum, also known as Reishi, ling chih, and ling zhi, has a long history of being used as an herbal remedy. We will get to that later.
First, how do you identify Ganoderma lucidum? Ganoderma is a member of the polypores, a group of fungi characterized by the presence of pores, instead of gills, on the underside of the fruiting body. G. lucidum, considered by many mycophiles to be one of the most beautiful shelf fungi, is distinguished by its varnished, red surface. When it is young it also has white and yellow shades on the varnished surface, differing from the dull surface of Ganoderma applanatum, the artist's conk. G. lucidum is a saprophytic fungus that tends to grow more prolifically in warm climates on decaying hardwood logs and stumps.
This feature helps to distinguish it from the similar-looking Ganoderma tsugae, which also has a beautiful red varnished surface but only grows on the stumps and logs of conifers, especially hemlock (as you might guess from the name). Another distinguishing characteristic is that the flesh of G. tsugae is white, whereas the flesh of G. lucidum is brown. Besides the shelf form, both G. tsugae and G. lucidum can be stalked. The spore prints of both species are brown, and the spores are very similar in size and shape.
Ganoderma curtisii is considered by some mycologists to be a different species because of its brighter yellow colors and geographic restriction to the southeastern United States.

Mycol Res 112:231–240, doi:10.1016/j.mycres.2007.08.018
Geiger M, Gibbons J, West T, Hughes SR, Gibbons W (2012) Evaluation of UV-C mutagenized Scheffersomyces stipitis strains for ethanol production. J Lab Automat 17:417–424, doi:10.1177/2211068212452873
Grienke U, Zöll M, Peintner U, Rollinger JM (2014) European medicinal polypores – A modern view on traditional uses. J Ethnopharmacol 154(3):564–583, doi:10.1016/j.jep.2014.04.030
Guler P, Akata I, Kutluer F (2009) Antifungal activities of Fomitopsis pinicola (Sw.:Fr) Karst and Lactarius vellereus (Pers.) Fr. Afr J Biotechnol 8:3811–3813, doi:10.5897/AJB09.719
Hallenberg N, Larsson E (1991) Differences in cultural characters and electrophoretic patterns among sibling species in four different species complexes (Corticiaceae, Basidiomycetes). Mycologia 83(2):131–141
Hobbs C (1995) Medicinal mushrooms: An exploration of tradition, healing and culture. Botanica Press, Santa Cruz
Högberg N, Holdenrieder O, Stenlid J (1999) Population structure of the wood decay fungus Fomitopsis pinicola.
Heredity 83:354–360, doi:10.1046/j.1365-2540.1999.00597.x
Hseu RS, Wang HH, Wang HF, Moncalvo JM (1996) Differentiation and grouping of isolates of the Ganoderma lucidum complex by random amplified polymorphic DNA-PCR compared with grouping on the basis of internal transcribed spacer sequences.

* There are over 30 large human trials conducted in China alone, in addition to many in vitro and in vivo studies.
Nuliv Science's proprietary fermentation technology uses specific carbohydrates, soybean extracts, a special blend of trace elements, and other proprietary ingredients as a growth medium. Additionally, specific temperature and pH parameters are maintained through various stages of the fermentation process to consistently produce the highest quality Cordyceps.
Our PhytoMonitor™ Quality Assurance System guarantees CordycepsPrime™ to meet stringent quality standards.
Request a product fact sheet and technical product specifications here.
Maitake mushroom is a fungus of the species Grifola frondosa. It is also known as Sheep's Head, Hen-of-the-Woods, Ram's Head, and other names. Maitake is the mushroom's Japanese name; it means "dancing mushroom." This mushroom grows in clusters at the base of trees, especially oak trees. Grifola frondosa is used in both Japanese and traditional Chinese medicine as a powerful medicine that may balance body systems and boost the immune system.* This mushroom is also frequently eaten because of its pleasing flavor and texture, especially in Japan.
Maitake mushrooms are sold in health food stores around the world and online at websites that sell herbs, especially Chinese or Asian herbs. They typically grow at the base of the same trees for years, and they can be very large when full grown. Maitake can grow to 50 pounds in Japan. Maitake mushrooms contain many minerals such as potassium, magnesium, and calcium.
They are also full of fiber and amino acids and various vitamins such as D1, B2, and niacin.
Reishi mushroom is a part of the Ganoderma lucidum family.
Regarded as one of the most vital herbal medicines in China for centuries, Reishi mushrooms are used for many purposes, ranging from immune-related concerns to promoting proper energy levels. It is also used as a sports enhancer in parts of China. The active ingredients in the fungus are triterpenoids. Reishi is generally not eaten whole or raw; it is usually made into tea or taken as a powder mixed into another drink.
Coriolus Mushroom (Coriolus versicolor) is a fungus that has been used for many centuries in Traditional Chinese Medicine.

Our Mediterranean strain (IB20130033) was placed within another, distinct clade of F. fomentarius sequences originating mostly from Southern European countries (Italy, France, Portugal, Slovenia) and other substrates (Platanus x acerifolia, Populus spp., Quercus spp. and Abies). The only exceptions are F. fomentarius sequences from the Slovak Republic. This Southern European clade is sister to a distinct clade of F. fomentarius from China.
Within-clade sequence divergence was small, with 0-3 base pair differences between the different strains of F. fomentarius from Northern European countries. Sequence divergence between the Northern and Southern European clades was 9-18 base pairs, and sequence divergence to the outgroup F. fasciatus was 41-62 base pairs.
Phylogenetic placement of F. pinicola strains
F. pinicola is a polypore with very little sequence divergence between strains. Only sequences from the USA or Asia showed some sequence divergence, in addition to F. ochracea, which forms a distinct clade within F. pinicola, thus making this clade paraphyletic (Figure 5). We found very little sequence divergence between all our strains: sequence divergence to other isolates of F. pinicola was 1-5 base pairs out of 504 positions, and to other species of Fomitopsis (F. meliae, F. palustris) 15-21 base pairs.
Phylogenetic placement of P. betulinus strains
P. betulinus is a polypore species with little rDNA ITS sequence variation between different strains from a broad geographic area (Europe, Russia, East Asia) and Betula spp. or Acer saccharum as substrate (Figure 6): within-species sequence divergence was very low: P. betulinus sequences differed in only 1-2 out of >500 base pairs from each other, but had 44-45 differing base pairs when compared to the sister taxon P. soloniensis.
Fungal strain matters
Our results confirmed our hypothesis that medicinal polypore strains belonging to the same lineage differ significantly in their qualitative and quantitative secondary metabolite production, and in bioactivity. This confirms earlier studies focusing on other medicinal polypore species showing that the fungal strain used is important for the chemical composition of fruit bodies (Agafonova et al.

Gourmet Mushrooms, Inc. has pioneered specialty mushroom cultivation for almost 30 years and is considered a world leader in this field. Based on the research of Dr. Tsuneto Yoshii of Japan and the continuing close relationship, GMI was the first to commercially cultivate Shiitake in the Western Hemisphere and to develop cultivation of the Pompon Blanc™ (Pom Pom) mushroom. The company supplies the finest chefs in North America.
Along with fresh cultivated mushrooms, GMI has a selection of seasonal wild mushrooms, truffle oils, dried mushrooms (including Morels, Black Trumpet, Shiitake and Porcini) and nutraceuticals. Since its inception, the company has produced a line of mushroom mycelial biomass products for the nutraceutical industry.
The mushroom nutraceutical products are marketed worldwide.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-27", "d_text": "Mycol Res 106:34–39, doi: 10.1017/S0953756201005007\nTroskie AM, Vlok NM, Rautenbach M (2012) A novel 96-well gel-based assay for determining antifungal activity against filamentous fungi. J Microbiol Methods 91:551–558, doi:10.1016/j.mimet.2012.09.025\nTurkoglu A, Duru ME, Mercan N, Kivrak I, Gezer K (2007) Antioxidant and antimicrobial activities of Laetiporus sulphureus (Bull.) Murrill. Food Chemistry 101:267–273, doi:10.1016/j.foodchem.2006.01.025\nWang X-C, Xi R-J, Li Y, Wang D-M, Yao Y-J (2012) The species identity of the widely cultivated Ganoderma, ‘G. lucidum’ (Ling-zhi), in China. PLoS ONE 7:e40857, doi:10.1371/journal.pone.0040857\nWebster D, Taschereau P, Belland RJ, Sand C, Rennie RP (2008) Antifungal activity of medicinal plant extracts; preliminary screening studies. J Ethnopharmacology 115:140–146, doi:10.1016/j.jep.2007.09.014\nXu X, Zhu J (2011) Enhanced phenolic antioxidants production in submerged cultures of Inonotus obliquus in a ground corn stover medium. Biochem Eng J 58–59:103–109, doi:10.1016/j.bej.2011.09.003\nXu K, Liang X, Gao F, Zhong JJ, Liu JW (2010) Antimetastatic effect of ganoderic acid T in vitro through inhibition of cancer cell invasion. Process Biochem 45:1261–1267, doi:10.1016/j.procbio.2010.04.013\nYing J. MX, Ma Q., Zong Y. and Wen H.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-11", "d_text": "I\nignored them for four years until she died last year.\nAt present time, I have no intimate relationships. However, I decided to try ‘vigamaxx’ in the hopes of improving my sexual health in the future. I have noticed some very definite physical changes. I do have a great increased incidence of spontaneous erection which used to occur two to three times a week and now occurs twice or more in a day, especially upon waking. They are full, long lasting erections. 
This is a completely new event and very encouraging to me for my future.\nLawerence Wang, Age 58\nGanoderma lucidum is one of the most beautiful mushrooms in the world. When very young its varnished surface is Chinese red, bright yellow, and white (see the bottom illustrations). Later the white and yellow shades disappear, but the resulting varnished, reddish to reddish brown surface is still quite beautiful and distinctive.\nIt is a fungus of extremely precious medicinal value. Traditional Chinese health texts record that Ganoderma can calm the nerves and strengthen the lungs and liver.\nWhile Ganoderma lucidum is annual and does not actually grow more each year like some polypores, its fruiting body is quite tough and can last for months. This magical mushroom is also known as Reishi or Ling zhi, which means "herb of spiritual potency".\nReishi has been added to the American Herbal Pharmacopoeia and Therapeutic Compendium. Once rare and expensive, this mushroom is now effectively cultivated and is readily available.\nModern research on Ganoderma lucidum started in the early 70s; many clinical reports have revealed that Ganoderma has beneficial effects against many kinds of diseases.\nSome of its actions and properties include: Anti-allergen, antioxidant, analgesic, antifungal, anti-inflammatory, anti-tumor, antiviral, antiparasitic, cardiovascular, antidiabetic, immunomodulating, hepatoprotective, hypotensive and hypertensive, kidney and nerve tonic, sexual potentiator. Inhibits platelet aggregation. Lowers blood pressure, cholesterol and blood sugar. Bronchitis prevention.
It is thought to have anti-aging features, the ability to nourish lungs, and benefits for a healthy complexion.\nIn Japanese the Tremella fuciformis is called shiro kikurage, or “white tree jellyfish”. It is also known as silver ear mushroom and white wood-ear mushroom.\nPolysaccharides consisting of D-mannan, glucuronic acid, xylose and fucose in Tremella were linked to improvement in subjective memory complaints and cognitive performance.\nThe antioxidants in Tremella fight damage from free radicals, offering anti-aging properties.\nThe polysaccharide glucuronoxylomannan in Tremella stimulates DNA synthesis in vascular endothelial cells, which contribute to atherosclerosis, hypertension, and thrombophlebitis.\nExopolysaccharides extracted from Tremella may help control diabetes and reduce inflammation.\nTremella fuciformis also may be effective in fighting high cholesterol, cancer, and cough-related conditions.\nUse of the turkey tail, or cloud mushroom, goes back to the Ming Dynasty in 15th century China. Brewed as a tea, its cloud-like appearance is thought to symbolize longevity, health, spiritual attunement, and infinity in Asian cultures. Turkey tail mushrooms are loaded with antioxidants, including 35 different phenolic compounds and the flavonoids quercetin and baicalein. So well-recognized is the use of turkey tail mushrooms that in the 1980s, Japan approved a cancer drug derived from it. In the U.S., the FDA approved the use of turkey tail mushrooms in 2012 to be used in combination with conventional chemotherapy treatments in clinical trials for several types of cancer.\nTurkey tails are leather-like and wavy, decorated with tan and brown concentric circles, with a soft, velvet-like texture that gives them a similar appearance to the tail feathers of a turkey. Turkey tails are a polypore species, and if you have walked in the woods you likely have seen them growing on one of 70 different types of fallen trees and branches.
Because they grow easily in any area where there are trees, they are among the most common mushrooms in the world.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-244", "d_text": "Since only three nucleotide sequences of this species (COI, 16S rRNA, and 18S rRNA) appear in the GenBank database so far, no analysis of the molecular mechanisms underlying E. fullo's resistance to insecticide and environmental stress has been accomplished. We reported a de novo assembled and annotated transcriptome for adult E. fullo using the Illumina sequence system. A total of 53,359,458 clean reads of 4.8 billion nucleotides (nt) were assembled into 27,488 unigenes with an average length of 750 bp, of which 17,743 (64.55%) were annotated. In the present study, we identified 88 putative cytochrome P450 sequences and analyzed the evolution of cytochrome P450 superfamilies, genes of the CYP3 clan related to metabolizing xenobiotics and plant natural compounds, in E. fullo, increasing the candidate genes for the molecular mechanisms of insecticide resistance in P450. The sequenced transcriptome greatly expands the available genomic information and could allow a better understanding of the mechanisms of insecticide resistance at the systems biology level.\nYu, Guo-Jun; Wang, Man; Huang, Jie; Yin, Ya-Lin; Chen, Yi-Jie; Jiang, Shuai; Jin, Yan-Xia; Lan, Xian-Qing; Wong, Barry Hon Cheung; Liang, Yi; Sun, Hui\nBackground Ganoderma lucidum is a basidiomycete white rot fungus and is of medicinal importance in China, Japan and other countries in the Asiatic region. To date, much research has been performed in identifying the medicinal ingredients in Ganoderma lucidum. Despite its important therapeutic effects in disease, little is known about Ganoderma lucidum at the genomic level. In order to gain a molecular understanding of this fungus, we utilized Illumina high-throughput technology to sequence and analyze the transcriptome of Ganoderma lucidum. 
Methodology/Principal Findings We obtained 6,439,690 and 6,416,670 high-quality reads from the mycelium and fruiting body of Ganoderma lucidum, and these were assembled to form 18,892 and 27,408 unigenes, respectively. A similarity search was performed against the NCBI non-redundant nucleotide database and a customized database composed of five fungal genomes.", "score": 8.086131989696522, "rank": 99}]} {"qid": 29, "question_text": "What are the main challenges India faces in its power sector regarding electricity access and supply?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "India's looming power crisis: An analysis\nThere are many roadblocks in unleashing the full potential of India's power sector\nElectricity is critical to fuel the economic growth of India. The country is on a fast trajectory of development, but to keep the momentum of growth high, availability of uninterrupted power supply is a must. India needs electricity to fuel the growth of every industry, be it large-scale or small-scale, manufacturing, healthcare or education.\nThere are many roadblocks in unleashing the full potential of India's power sector. One is the fuel availability concern faced by the industry. Coal supply by Coal India Ltd (CIL) is restricted to around 65% of the actual coal requirement of coal-based thermal plants, leading to increased dependence on imported coal. This results in increasing power generation costs due to limited fuel availability. Increasing operational inefficiencies and outstanding debts have led to poor financial health of state discoms. Then there are other concerns such as land acquisition, which has made purchase of land for power projects very expensive.
Installation of power plants, in any case, requires huge investments, and the land acquisition cost pushes the capex to unprofitable and unsustainable levels.\nIndia has been dependent to a large extent on energy imports to meet its national energy requirements. As per the estimates of the Planning Commission, Government of India (GoI), to ensure a sustained 8% growth of the economy, by 2031-32 India needs to increase its primary energy supply by three to four times and its electricity generation by five to six times the 2003-04 levels. To limit the dependency on energy imports and contribute to meeting this energy challenge, the government is also laying a lot of emphasis on energy efficiency and demand side management. Efforts are being made to increase supply from renewable sources of energy and promote energy conservation in various consumption sectors through appropriate policy interventions.\nEnergy efficiency is extremely important and can be promoted by setting appropriate prices, and this is particularly important where energy prices are rising. However, appropriate prices by themselves may not suffice, and non-price incentives/disincentives are therefore also required. This includes standards of energy efficiency that are forward looking, i.e., anticipate future price changes or pollution penalties. These standards should be determined on the basis of rational considerations and must be set in an expanding range of applications, with continuous dynamic adjustment of these standards. The standards should also be effectively enforced.", "score": 51.77707124124504, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Electricity Grid Substation Automation\nTo meet the needs of India’s growing economy, providing reliable, affordable and sustainable energy requires exploring a range of options. One of the most important requirements is reinventing the grid with the introduction of grid intelligence and communication systems for its secure and efficient operation.
This paper is an attempt towards grid redesign to meet the requirements of the transforming energy sector in India...\n- Er. K S Sidhu\nIndia has significant challenges in the power sector. The country is home to about 25% of the worldwide total of 1.4 billion people who lack access to electricity, apart from growing centres of electricity consumption. There is also a massive demand-supply gap aggravated by delays in capacity addition and inefficiencies, especially in network segments. For fulfilling huge power demands, a number of generating stations – hydro, thermal, atomic (conventional) and solar, wind, etc. (non-conventional) – are being created. Depending upon the availability of resources, these stations are constructed at different places. So, it is necessary to transmit these huge power blocks from generating stations to their load centres. The power transmission system is a complex network. Power generated at voltage levels of 11 to 33 kV has to be stepped up to high/extra-high voltages (220/400/800 kV AC) and then again reduced in stages to the lowest distribution voltage level of 240/415 volts. A typical power system network is shown in the figure.\nFor maintaining these voltage levels and for providing stability, a number of transformation and switching stations have to be created between the generating stations and consumer ends. These transformation and switching stations are known as substations, grid substations or electricity grids. It is these grids that need to be developed to achieve a reduction in system malfunctions as well as a reduction in the mean time to repair. Consequently, outage times shall be reduced, leading to a significant decrease in energy losses.\nThe electricity grid has grown and changed immensely since its origins, when energy systems were small and localized.
With the passing of time, rising electricity consumption, new power plants and increasingly decentralised generation (DG) of electricity from renewable energies require grid expansion. However, simply expanding the grid, as it is constructed now, would be highly inefficient. The wildly fluctuating power feed-in from renewable energies (sun, wind) into the entire power grid occasionally leads to unforeseeable power flows, which can affect grid stability.", "score": 49.791294444402325, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "India recently experienced one of the world’s worst blackouts, with 670 million citizens directly impacted. While media reports have focused on the repercussions from two days of outages, this incident illustrates a much larger, more systemic problem: the need for improved electricity governance.\nIndia’s History of Power Problems\nIndia has the world’s fifth-largest electrical system, with an installed electric capacity of about 206 gigawatts (GW). India initiated power sector reforms in the early 1990s through a range of legal, policy, and regulatory changes. Over the last two decades, some of these reforms have been impressive, but several others weren’t taken. This lack of follow-through has resulted in a growing gap between electricity demand and supply throughout the country. Recent blackouts may have shined a spotlight on this gap, but it’s a situation that’s widespread in India: Not only do 400 million Indians lack access to electricity, but electricity supply is unreliable and of poor quality even in large parts of “electrified” India. In addition to the existing demand, Indian consumers, businesses, and industries seek more electricity to power appliances, processes, and products, further exacerbating the demand-supply gap. 
By 2035, India’s power demand is expected to more than double.\n3 Holes in India’s Electricity Governance\nLoss and Theft: The predominant solution proffered by policymakers in India is to increase generation capacity—unmindful of the substantial technical losses due to outdated power lines, leaky cables, and poor maintenance; commercial losses due to poor metering and local politicians’ populist “free power” policies; and rampant electricity theft caused by people hooking up wires to overhead electricity cables. Building more power plants without fixing these problems is akin to pouring water into a bucket with large holes. With India’s power plants close to 70 percent dependent on coal and natural gas, the current coal shortages and non-availability of gas haven’t helped the situation. The Planning Commission’s Working Group on Power for the 12th Plan, among other studies, has pointed to these “holes,” but so far there’s been no serious effort made to address electricity losses or theft. No doubt, there are vested interests that benefit from the business-as-usual scenario. But there is an urgent need to hold decision-makers accountable for solving these issues on a priority basis and not allow them to obfuscate the situation by announcing another round of new power plants.", "score": 49.568078492669436, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Current plans to provide “Power for All” in India via the country’s utility or distribution companies (known as discoms), through main-grid extension and utility-scale generation projects, are largely polluting, slow to build, and expensive.
The central utility grid is 70% coal-powered, and the proportion of fossil fuels is still expected to be greater than 50% of the energy mix in 2040, despite high targets for renewable-energy generation capacity and heavy investment in it.\nGrid power projects and extension, such as laying high-voltage transmission lines, take years from conception to completion, incurring enormous time-linked opportunity costs for underserved communities, in addition to the already extremely high capital costs of such infrastructure. However, the weakest link in the power value chain is distribution. This challenge\nis ultimately political and economic in nature, since with few exceptions discoms are chronic loss-makers and perpetually stressed financially. Providing power for all under the existing paradigm will therefore be a drawn-out and hugely expensive enterprise, despite the successes of existing government electrification programmes.\nMini-grids powered by decentralised renewable energy (DRE), and operated by distributed energy service companies (DESCOs), which provide a utility-like service on a for-pro t basis, can offer a long-term, solution for the underserved, which can expand rapidly and easily along with demand. DRE-powered mini-grids are quickly deployed and reasonably priced. Furthermore, if done in the right way, such mini-grids can be integrated with the main grid at a later date. Equally significant, DRE power is environmentally cleaner than coal – or diesel-generated alternatives.\nMini-grids and DESCO business models, particularly those based on solar photovoltaic (PV) cells with battery storage, have largely met with success in eld tests, providing both reliable and significantly cleaner power than the hundreds of thousands of diesel generators that provide electricity across many rural villages. 
DESCOs give households a “grid experience” unmatched by localised solar home systems (SHS), in addition to providing household and commercial customers with economic opportunities for energy-intensive appliance upgrades.\nDESCO-operated mini-grids are not yet viable on a purely commercial basis, due to high up-front capital expenditures, low levels of initial effective demand, and high levels of uncertainty among investors as to the sector’s long-term viability.", "score": 47.91144436445543, "rank": 4}, {"document_id": "doc-::chunk-3", "d_text": "Going forward, this landmark progress could result in a significant overhaul of the power sector, encompassing deregulation, decentralisation and efficient price discovery. Policy interventions in the form of renewable purchase obligations (RPO) for DISCOMs, accelerated depreciation benefits and fiscal incentives such as viability gap funding and interest rate subvention will have to go through a rethink/need review. Reforming retail distribution of electricity while reducing commercial, technical and transmission losses remains a key challenge. The end of cross subsidisation by industry for other sectors, and closing the gap between average cost of supply (ACS) and average revenue realised (ARR) will require speedier/accelerated DISCOM reforms (including privatisation and competition). A nationwide Grid integration that can take supply from renewable sources as and when generated is needed to take care of daily/seasonal peaks and troughs associated with renewable sources. These dynamic shifts in renewables could help increase India’s per capita electricity consumption, currently among the lowest in the world. Here too, Indian industry has a crucial role to play.\nIII. Leveraging Information and Communication Technology (ICT) and Start-ups to Power Growth\n11. Information and communication technology (ICT) has been an engine of India’s economic progress for more than two decades now. 
Last year, the ICT industry accounted for about 8 per cent of country’s GDP and was the largest private sector job creator across both urban and rural areas. In 2019-20, software exports at US$ 93 billion contributed 44 per cent of India’s total services exports and financed 51 per cent of India’s merchandise trade deficit during the last five years.\n12. These headline numbers, however, understate the contribution of the sector to the economy. IT has revolutionised work processes across sectors and has generated productivity gains all around. The ICT revolution has placed India on the global map as a competent, reliable, and low-cost supplier of knowledge-based solutions. Indian IT firms are now at the forefront of developing applications using artificial intelligence (AI), machine learning (ML), robotics, and blockchain technology. This has also helped to strengthen India’s position as an innovation hub, with several start-ups attaining unicorn status (USD 1 billion valuation). India added 7 new unicorns in 2019, taking the total count to 24, the third largest in the world1.\n13.", "score": 44.47628646394599, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "In India, 46 million, mostly rural, households lack access to electricity – over 50% of those are in the states of Bihar and Uttar Pradesh. While India has set an aggressive goal of extending the grid to all households by 2019, grid extension does not necessarily imply reliable electricity access. Indian utilities face a financial disincentive to supplying reliable electricity in rural areas because of subsidized tariffs and low consumer willingness-to-pay. Tariff subsidies for full household electrification in these states would be about Rs 15,000 Cr per year, which is two-times the existing subsidies and equivalent to 20–30% of their annual utility revenues. 
We find that superefficient lamps, TVs, and fans can reduce the energy consumption of a rural household by over 70% costeffectively, resulting in a net reduction in the total subsidy burden. Reduced consumption offers an opportunity to raise consumer tariffs while ensuring consumers’ monthly electricity bills reduced. We also argue that superefficient appliances make consumer-side storage cost-effective, leading to greater consumer willingness-to-pay. We recommend adoption of super-efficient appliances as part of the electricity access initiative in India, and electricity service based tariff setting as the next policy steps towards providing a reliable and sustainable electricity access.", "score": 43.95487071274805, "rank": 6}, {"document_id": "doc-::chunk-1", "d_text": "Author of the report Sheoli Pargal who is also an economic adviser says, “As the government’s 24x7 power initiative builds up momentum, utilities need to make sure they are well prepared to use the funds that will become available—to strengthen their transmission grids and distribution infrastructure, create robust corporate governance mechanisms, enhance billing and collection systems, institutionalise regular tariff reviews, and position themselves for extending reliable electricity service to all.”\nProjections show that even if tariffs rise six per cent per year to keep up with the cost of supply, annual losses in 2017 will likely amount to Rs 125,300 crore (US$ 27 billion).\n“The need of the hour is to improve operational and financial efficiency in distribution through a multi-pronged approach. 
This could include use of IT for transparent energy audits across the value chain; adopting some of the innovations in reducing Aggregate Technical and Commercial (AT&C) losses pioneered by private sector entities; ring fencing supply to the agricultural sector with a transparently determined and administered subsidy so that rural areas receive reliable power, while revenue maximising models are implemented in urban areas; and improving institutional accountability, following examples like Gujarat and West Bengal,” said Mani Khurana, energy specialist, World Bank.", "score": 43.55538630183113, "rank": 7}, {"document_id": "doc-::chunk-6", "d_text": "Electricity Shortages and Diesel Substitutes\nBecause farmers do not pay for the electricity to pump water from the ground or from one river to another, the revenues of state utilities in Haryana, Punjab, and every other Indian state are perennially in deficit, relying on the national government for financial assistance.\nFurthermore, the expansion and modernization of India’s national transmission grid is slow and uneven, say state power executives. Currently, India’s transmission grid wastes 25 percent of the energy it transports.
In contrast, China’s transmission grid loses just 5 percent of the power it carries.\nNot only are the funds funneling through the system inadequate, but the electricity itself is in such short supply that state utilities in this region and across much of the country set schedules for their customers. In Punjab, homes are supplied with electricity from late afternoon through the evening, as well as most mornings. Farmers, meanwhile, get power during the night and intermittently during the day. Power can — and often does — go out during an allotted time, however.\nBut there are ways to get around the electricity schedule, as well as any unscheduled electricity outages: intermittent power supply is so embedded in the Indian way of life that any farmer — or anybody else for that matter who can afford it — purchases and operates a diesel-fueled generator.\nFor instance, when the power goes off on Sekhon’s farm, diesel generators provide the electricity needed to run the irrigation pumps in the field. At his dinner table, the only subtle change is a slight dimming of the lights as the system seamlessly switches from state-provided electricity to at-home diesel generators.\nThe steadily growing number of generators reflects two market events:\n- More people can afford them as middle class incomes rise.\n- 40 percent of the $US 18 billion in annual energy subsidies that India provides for farmers, businesses, and drivers is devoted to reducing the price of diesel to well below market rates.\nPetroleum consumption has climbed more than 40 percent since 2000 to 3.2 million barrels per day in 2012, 70 percent of which is imported, according to the International Energy Agency. 
More than 40 percent of India’s petroleum is consumed as diesel fuel, according to India’s Ministry of Energy.", "score": 42.35838592747368, "rank": 8}, {"document_id": "doc-::chunk-3", "d_text": "The importance of electricity as a prime driver of growth is very well acknowledged and in order to boost the development of power system, the Indian government has participated in a big way through creation of various corporations such as, State Electricity Boards (SEB), NTPC Limited, NHPC Limited and Power Grid Corporation Limited (PGCL), etc. However, even after this the country is facing power shortage.\nThere are many problems faced by the power sector and these need to be addressed. One of the issues plaguing the power sector in a big way is shortage of equipment. This has been a significant reason for India missing its capacity addition targets. While the shortage has been primarily in the core components of Boilers, Turbines and Generators, there has been lack of adequate supply of Balance of Plant (BOP) equipment as well. There is a shortage of construction equipment also.\nThe current power infrastructure in India is not capable of providing sufficient and reliable power supply. Some 400 million people have zero access to electricity since the grid does not reach their areas.\nAnother problem is unstable power supply. There are frequency fluctuations caused by load generation imbalances in the system and this keeps happening because consumer load keeps changing. Frequency is the most crucial parameter in the operation of AC systems. The rated frequency in India is 50.0 Hz. While the frequency should ideally be close to the rated frequency all the time, it has been a serious problem in India. Poor power quality control has knock-on effects on equipment operation, including large-scale generation capacity. 
Equipment damage can, of course, further compromise supply and aggravate the effects of chronic fuel shortages.\nTo summarize, Indian power sector has made considerable progress in the last decade and has evolved from a nascent market to a developing market led by policy reforms and increased private sector participation. Challenges do exist in the sector, which India has to overcome to evolve from a developing market to a matured market. Meanwhile, the gap between what can be achieved and what is currently present, uncovers a number of possibilities and opportunities for growth.\nOVER 50 YEARS WITH INDIAN POWER SECTOR\nTHE DEMAND FOR POWER IN INDIA CONTINUES TO INCREASE LEADING TO THE NEED FOR HIGH-END POWER TURBINE & GENERATORS, WHICH IS ONE OF TOSHIBA'S CORE EXPERTISE\nToshiba has been an integral part of India's power sector for decades.", "score": 42.16089949352951, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "It is often said that sectoral reforms have a very non-linear path. Electricity in India is a good example. The central challenges identified were inadequate generation capacity, high distribution losses, and free (and, in case of many states, unmetered) farm power.\nThe first generation of power reforms, initiated at the turn of the century, involved unbundling integrated utilities by separating generation, transmission, and distribution. This, coupled with a series of regulations mandating competitive power purchases by distribution companies, attracted private investment into generation and installed capacity rose dramatically. Discoms had to purchase power from the market at competitive prices and also had to make immediate payments (and not run up dues) for those purchases. 
Oftentimes, especially in the summers, the prices of spot purchases rose by multiples, leaving discoms with an even bigger gap between cost of services and revenue recovery.\nThis disciplining constraint on the cost side forced the discoms to get their acts together on the revenues side or figure out ways to bridge the gap. But addressing the revenues side meant lowering technical and commercial losses. Given pervasive theft of supply, billing and collection deficiencies, and unmetered (and free) agriculture supply, the latter was 2-4 times the former. This required addressing challenges on state capacity (in billing and collection), political economy (free farm power), and a combination of both (theft of supply).\nNow these challenges are not easily addressed. So discoms came to rely on a combination of strategies to bridge the deficit. Primarily, they resorted to large-scale rationing of supply and suppression of demand, through what are called load-shedding or power-cuts. In fact, in India, “peak demand” is deceptive since it only conveys demand of the discoms and not of the consumers. Across vast swathes of North India, consumers are off-grid more hours than they have supply. Rural areas bear the brunt of these power cuts.\nAnother strategy was to finance the deficit with government subsidies (to the extent of government policy mandated subsidised supply, mainly to farmers) and borrowings. In many states, the government subsidies, while sanctioned, were never disbursed, forcing the discoms to borrow even more and continuously roll them over. The Government of India periodically undertook discom debt restructuring schemes with ostensibly strict conditions on loss reduction.", "score": 40.22031106564329, "rank": 10}, {"document_id": "doc-::chunk-1", "d_text": "
Furthermore, a well-functioning electricity grid is a prerequisite for integrating large amounts of variable renewable energy sources such as wind and solar. The improved ability of SEBs to finance power purchase agreements could also increase opportunities for renewable electricity generators to sell power to the grid in a more efficient and transparent process.

Given the severity of the crisis facing India's electricity sector, SEBs should be monitored closely in the coming years to ensure that they implement the reforms necessary for a secure, reliable, and affordable electricity supply throughout the country. Improvements in electric utility governance and infrastructure must be combined with robust renewable energy incentives and rural electrification programs in order to transition India to an energy system that is socially, economically, and environmentally sustainable.

While the power sector in India has witnessed a few success stories in the last 4-5 years, the road that lies ahead of us is dotted with innumerable challenges that result from the gaps between what is planned and what the power sector has been able to deliver. This document highlights and quantifies some of these gaps and attempts to analyze the problem. It builds on the risks prevalent in the industry, some prominent hurdles that the power sector has already crossed, and, more importantly, others that various players have yet to overcome. Understanding these core issues and risks of the power sector helps in identifying the opportunities that lie ahead; for example, why private sector participation is an important requirement.
A short peek at our past performance indicates that during the last three five-year plans (8th, 9th and 10th), we barely managed to achieve half of the capacity addition that was planned. As we enter the third year of the 11th five-year plan, we have already seen slippages on the planned capacity addition of approximately 79 GW.

Once we break the problem down and identify the bottlenecks, we may be able to better understand the integration challenges that such large projects pose. While there may be heavy dependencies on equipment suppliers and challenges around logistics and work-front availability, with the right and timely application of project management principles along the lifecycle of the project, one can strive to achieve increased project completion against baselines. Certain best practices around stakeholder management, integrated project and asset development, and interdependency mapping across various entities can help improve overall project planning. Once we understand the practical implementation challenges and various teams and people get aligned to the overall strategy, delivery on our estimated plans becomes more of a reality.

India has the fifth largest generation capacity in the world, with an installed capacity of 152 GW as on 30 September 2009, which is about 4 percent of global power generation. The top four countries, viz., the US, Japan, China and Russia, together consume about 49 percent of the total power generated globally. The average per capita consumption of electricity in India is estimated to be 704 kWh during 2008-09. However, this is fairly low when compared to that of some of the developed and emerging nations, such as the US (~15,000 kWh) and China (~1,800 kWh). The world average stands at 2,300 kWh.

India Inc.
is losing millions of rupees in potential revenue every year because of power cuts, a new survey reveals.

The study, carried out by the Federation of Indian Chambers of Commerce and Industry, revealed that one in three industrial units suffers power cuts for more than 10 hours each week.

The report, released Monday, drew attention to coal shortages, to the failures of power distribution systems in various states, and to the common phenomenon of power theft, where people tap into power lines illegally.

The survey comes after India suffered a massive power outage last summer that affected as many as 680 million people, disrupting transport and businesses.

India has an installed generation capacity of 210 gigawatts that lags electricity demand by 10.6%, according to the Central Electricity Authority, the government agency that monitors the power sector. The country plans to expand electricity generation capacity by 44% to 288 GW in the five-year period between April 2012 and March 2017 to supply uninterrupted power to millions of its households and power industrial growth. But coal shortages and the poor financial health of electricity distribution agencies stand in the way of the expansion goal.

The study says that Indian companies are each losing up to 40,000 rupees ($733) a day because of power shortages, which are taking a toll on production. The survey, which covers 650 industries of various sizes across India, said that as many as 61% of the companies surveyed suffered more than 10% loss in production due to power cuts. Most companies have power generators, which temper the effects of power cuts.

The study revealed differences between Indian states. Companies based in Gujarat, Karnataka and Maharashtra suffered far less from power cuts than companies based elsewhere.

In keeping with its business-friendly reputation, the study found Gujarat was the state where power cuts are least common.
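As a rough back-of-the-envelope check on the capacity figures quoted above, the following sketch computes the demand implied by the reported shortfall. The reading of "lags demand by 10.6%" as a fraction of demand is an assumption for illustration, not the CEA's methodology:

```python
# Back-of-the-envelope check on the quoted capacity figures.
# Assumption: "capacity lags demand by 10.6%" means
# capacity = demand * (1 - 0.106), i.e. 10.6% of demand goes unserved.

installed_gw = 210.0   # installed generation capacity (CEA figure quoted above)
deficit = 0.106        # reported shortfall relative to demand

implied_demand_gw = installed_gw / (1 - deficit)
unserved_gw = implied_demand_gw - installed_gw

print(f"Implied demand: {implied_demand_gw:.0f} GW")  # ~235 GW
print(f"Unserved load:  {unserved_gw:.0f} GW")        # ~25 GW
```

Under this reading, roughly 25 GW of demand went unserved at the time of the survey.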
Gujarat undertook successive reforms, including electricity tariff hikes, that led to an improvement in the financial health of its electricity distribution companies, which in turn improved their ability to buy more power and also fund plans to modernize distribution networks.

The southern states of Andhra Pradesh and Tamil Nadu fared worst, with companies saying they suffer between 21 and 30 hours of power cuts every week, the report said. They were followed by Orissa, Jharkhand and Madhya Pradesh, where the power supply situation is also poor.

Moreover, the sector has been plagued by issues such as the poor financial health of discoms, rising receivables and sporadic fuel supply constraints. Therefore, while the sector achieved success in terms of higher electrification, improved energy efficiency and the meeting of power demand before the onset of Covid-19, some discoms remained under financial stress.

Impact of Covid-19 on the power sector: The outbreak of Covid-19 and the consequent countrywide lockdown is a once-in-a-lifetime event with a wide-ranging impact on the power sector. This impact has been in the form of a sudden reduction in demand, financial stress and disruptions in the power supply chain. However, the impact on power generation, transmission and distribution entities has been different.

Distribution: With industrial and commercial activities coming to a halt in the first quarter of this financial year, the demand for power reduced by 25-30 per cent. The situation worsened further due to deferrals of electricity bills to customers and difficulties in collection. With the gradual opening up of industries, electricity demand is now increasing, with annual demand expected to touch pre-pandemic levels by the first quarter of financial year 2020-21.
Most discoms are now recording higher online billing and collection.

Generation: The contraction in power demand has led to a decline in offtake and, consequently, in the overall PLF of generators. However, the renewable energy sector has fared well compared to the conventional (thermal) sector, and most of the decline in power generation appears to have come from the latter. A related development was an increase of 14.5 per cent on a year-on-year basis in electricity volumes traded on the exchange in the first quarter of 2020-21. Another consequence of Covid-19 and its associated issues is significant delays in the resolution of stressed power projects. It is likely to cause further delays in projects under implementation, resulting in time and cost overruns.

Transmission: With reduced power flow across the grid, transmission companies have been registering higher available transmission capacity levels across their inter-regional and intra-regional links. This, along with the high levels of spare power generation capacity across the country, had resulted in a glut in the secondary power markets during the initial days of the lockdown. However, the situation stabilised as generators withdrew capacities due to subdued demand and offtake from the market.

What has been the industry response to the pandemic?

* Tamil Nadu unable to meet 31% peak demand for power during summer
* Tamil Nadu to add nearly 6,000 MW capacity to solve crisis
* States like Bihar & Punjab could be next to face power deficits

Power cuts are a part of life in most parts of India.
And yet they can surprise, as happened when the country's northern grid faced a severe power cut on July 29, 2012, and 7 states were crippled due to the shortage.

A few weeks later, IndiaSpend drew up a short report on the power dynamics across the country, and hazarded that southern India was probably next in line to see large-scale cuts because of rising demand-supply gaps.

Last month it happened. The southern Indian state of Tamil Nadu, especially its western and southern districts, saw power cuts that lasted between 10 and 20 hours a day as supply failed to keep pace with high demand. The situation may stabilise as some fresh capacity comes in, but it might take another three months.

Tamil Nadu appears to be a case of a state that may have been better prepared. The Tamil Nadu Generation and Distribution Corporation (TANGEDCO), set up in 1957 to run power transmission and distribution in the state, had proposed new power generation capacity of 5,988 MW over the next five years.

This included three thermal power projects scheduled for commissioning by April-June 2013 and eight hydro power projects, of which four are scheduled to be operational by November 2013.

Tamil Nadu presently has a 330 MW gap during peak demand (the highest amount of electricity consumed at any point in time). And summers usually see demand peaks in India. Not surprisingly, Tamil Nadu has seen power deficits rise during peak demand in the last few years, as the following table shows:

Evidently, the power deficit during peak demand zoomed again after 2010, and much faster than the previous peaks in 2007 and 2008. If the data is accurate, it would suggest that fresh capacity has been keeping pace, somewhat. Or demand has been slipping. More likely the former.

That's the quick story of Tamil Nadu. Many other states in India don't come close in preparedness. Bihar and Punjab look like prime candidates.
As do the North-Eastern states.

These states are among those that have delayed electricity sector reforms to improve the financial health of their distribution companies, as well as the raising of generation capacities of state-owned power utilities.

"Not only does this (power supply scenario) limit future expansion as companies remain wary of not being able to compete, the Indian economy which depends highly on firms trading in the international markets is affected as well," the survey said, referring to the impact of the short supply of electricity on companies that export goods and services overseas.

In its study, FICCI called for a rapid increase in generation capacity and for improved electricity supply and monitoring systems. To meet these goals, it recommended an increase in solar and wind energy. More than half of India's generation capacity is based on coal, which is in short supply.

India's alternative electricity industry, with a total generation capacity of about 25 gigawatts, is still at a nascent stage. The government is offering tax breaks and other incentives to encourage industries to use green sources of energy.

The survey also sought clear communication from electricity distribution agencies on power cuts so that companies can schedule their business plans accordingly. "The information though available is not as uniformly available as it should be and that stakeholders in certain states are hence unprepared for power cuts, increasing the negative impact of power outages on their operations," it said.

According to 2013-14 figures, the total installed capacity for electricity generation in India registered a compound annual growth rate of 7%. However, as of 2015, 237 million people in India do not have access to electricity.
The government's National Solar Mission is playing an important role in the work towards renewable energy, and interventions in rural electrification and new ultra mega power projects are moving India towards achieving universal energy access.

Targets for Goal 7
By 2030, ensure universal access to affordable, reliable and modern energy services.
By 2030, increase substantially the share of renewable energy in the global energy mix.
By 2030, double the global rate of improvement in energy efficiency.
By 2030, enhance international co-operation to facilitate access to clean energy research and technology, including renewable energy, energy efficiency and advanced and cleaner fossil-fuel technology, and promote investment in energy infrastructure and clean energy technology.
By 2030, expand infrastructure and upgrade technology for supplying modern and sustainable energy services for all in developing countries, in particular least developed countries, small island developing states and land-locked developing countries, in accordance with their respective programmes of support.

India will soon witness a new government in control. Among the multitude of burning issues, the new government will have to face the challenge of a growing energy crisis. It will require extraordinary effort, innovative vision and viable solutions to tackle the increasing demand for energy, while maintaining an eco-friendly approach. Energy commodities comprise gas, oil, coal, renewable energy and electricity.

Currently, high levels of consumption of energy-related commodities are paralysing operations in the country because of non-performing policy initiatives.
The demand-supply imbalance is evident across all commodities, requiring serious efforts by the new government to augment energy supplies to avoid a severe energy supply crunch.

The Planning Commission indicates that by 2016-17, the country will manage an approximate 6.7 million tonnes of oil and by 2021-22, this will rise to 850 million tonnes. However, this will meet only 70 per cent of the expected demand; the remaining 30 per cent will have to be sourced through imports.

Even though India possesses a rich heterogeneous mix of energy components, deterring policies have created a difficult environment for potential investors.

Demand for coal
Coal will continue its dominant position in India's energy mix for many years to come. Today, 54 per cent of the total electricity generation capacity is coal-based, and more than 70 per cent of the energy generated is from coal-based power plants. As per the 11th Five Year Plan (2007-12), 67 per cent of the planned capacity addition is also coal-based. While domestic production is set to touch 795 MT in 2016-17, the projected demand for coal will be 980 MT during the same period.

Coal reserves in India, presently mined by Coal India Limited and its subsidiaries, are around 293.5 billion tonnes, with a few blocks having been given to private parties for the production of electricity and captive use. From 2012, the country has seen a paradigm shift in coal policy, with the Comptroller and Auditor General stating that there was a national loss of Rs.1,76,000 crore. This resulted in the Ministry of Coal de-allocating coal blocks, and it is obvious that the new government will have to formulate a clean coal policy with respect to exploration, mining and use.

Another important concern is reducing dependence on imported coal.
Last year, for instance, we imported 100 million tonnes from Indonesia.

The total installed capacity for electricity generation was 2,66,387 MW in March 2013, but distributing electricity at the retail level at affordable and reasonable prices is still a nightmare. Secondly, the regulatory framework for the electricity business needs strengthening. Loss in distribution is a worrisome factor, and sadly the efforts being taken to bring down losses are not encouraging. The new government must invest in this field, which may involve rationalising tariffs and incentivising the reduction of losses.

Renewable energy generation has a potential of 89,774 MW, with Gujarat having the highest potential. Unfortunately, the country does not have any existing renewable energy law. Renewable energy comprises 60 per cent of electricity and 40 per cent of other sectors. Climate change is a global issue, warranting the new government to implement a Renewable Energy Law that would make it mandatory for all conventional energy users to use a certain percentage of renewable energy. This process has started in electricity, but needs strict implementation across all segments.

Merely levying a carbon tax on imported coal will not yield sufficient results. Renewable energy comprises solar, wind and biomass. Wind power accounted for the highest share of total installed renewable power at 69.65 per cent, with small hydropower coming second at 13.64 per cent and biomass power at 12.58 per cent.

The new government must aggressively work on energy as the main agenda. We should have an energy commission for formulating policies synchronised across inter- and intra-ministerial departments. States must also be taken into confidence.
Progress of the energy sector will provide mind-boggling opportunities for employment and socio-economic growth, besides raising the standard of living in the country.

(Suresh Prabhu is former Union Minister for Power.)

The mismanagement of power in India
RN Bhaskar – 02 January, 2020

The sins of the past are finally catching up with the power sector. The energy situation has worsened, and could get worse, as policymakers hurtle from one blunder to another.

The first serious effort to address this malaise came during the Vajpayee government, when the new Electricity Act was ushered in and states were urged to rein in power losses.

But soon, power subsidies and power theft were allowed to rear their heads again. This was followed by clever methods to hide the losses under the carpet, especially with UDAY (Ujwal DISCOM Assurance Yojana), which was launched in November 2015 (http://www.asiaconverge.com/2018/09/discom-losses-continue-to-bleed-the-country/). Ironically, even Crisil, India's premier credit rating company, touted UDAY's virtues (https://www.crisil.com/Ratings/Brochureware/News/Discom-losses-to-nearly-halve-by-fiscal-2019-on-reforms.pdf). It had curiously overlooked two key numbers: the average cost of supply (ACS) and the average revenue received (ARR). If the gap between the two increases, the scheme is just not working. If the gap narrows, the scheme could be said to be working. And the truth was that this gap has kept increasing (for the country as a whole, barring a few states). The hole has become larger and larger.

As a recent CARE Ratings report pointed out, "Power generation during April-October 2019 recorded [just around] 1.5% growth at around 843 billion units (BU) . . .
The share of thermal power in the overall power generation has dropped to its lowest level since 1991-92, at 72.8%, during the current financial year."

What is even more interesting is that the industrial powerhouse of India, Maharashtra, which accounts for much of power consumption, saw power supplies down (by 5.6%) during the April-October 2019 period (compared to the same period in 2018). Ditto for Gujarat, which saw power supplies fall by 3.9%. Clearly, the industrial engines of the country had begun to sputter.

Simultaneously, the discom should introduce an initiative to publicise the monthly feeder-wise information about supply and revenue collection, and the associated cost-recovery gap. This could be accompanied by a program to incentivise feeders with higher payment compliance by both increasing their supply duration and ensuring greater quality. This could also perhaps trigger the hitherto absent demand-side pressure on quality and make this a salient political good.

On the free farm power side, the endeavour should be to first ensure all farm services are metered, even if unbilled. This is the first step to any meaningful engagement with the problem.

Another intervention would be to introduce a direct benefit transfer (DBT) cash subsidy for free farm power. Farmers can be offered an equivalent (or higher) number of free units of supply in return for having all farm connections metered and agriculture tariffs fixed. Each farmer would pay his monthly electricity bill, whereupon he would be reimbursed the previous month's bill to the extent of the free units consumable. Further, the farmer can be incentivised to reduce consumption by reimbursing an amount proportional to his unconsumed units (from the free power units allotted each month). The introduction of meters will help measure and audit rural supply.
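The reimbursement mechanics just described can be sketched as a simple monthly settlement. This is a minimal illustration, not an official scheme design; the function name, the tariff, and the exact rebate formula for unconsumed units are assumptions:

```python
def dbt_settlement(units_consumed, free_units, tariff_rs, rebate_rate):
    """Hypothetical monthly settlement under the DBT scheme sketched above.

    The farmer pays the full metered bill, is reimbursed up to the free
    quota, and earns a bonus proportional to unconsumed free units.
    Returns the farmer's net outgo (negative means a net payout to him).
    """
    bill = units_consumed * tariff_rs                       # pay the metered bill first
    reimbursement = min(units_consumed, free_units) * tariff_rs  # refund up to the free quota
    unconsumed = max(free_units - units_consumed, 0)
    bonus = unconsumed * tariff_rs * rebate_rate            # reward for consuming less than the quota
    return bill - reimbursement - bonus

# A farmer with a 500-unit free quota who consumes 400 units at Rs 5/unit,
# with an assumed 50% rebate on unconsumed units, ends up Rs 250 better off:
print(dbt_settlement(400, 500, 5.0, 0.5))  # -250.0
```

The point of the design is that the farmer always transacts against a meter reading, so consumption is measured and auditable even while the subsidy keeps his effective cost at or below zero.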
The incentives will encourage farmers to optimise consumption. They will also ensure that farmers can now use three-phase supply beyond the 7-10 hours. Besides, this will also boost non-farm economic growth.

These two efforts, on the energy audit and billing side as well as the free farm power side, are practical starting points to engage meaningfully with the problem. It has to be noted that they are not just technology interventions or a cash transfer innovation; they use technology as practical and credible decision support for discom officials, who then need to act upon it. These interventions can shine light on the problem, identify and quantify leakages, localise and apportion costs, and thereby align incentives for further action.

I have not talked about the political challenges that need to be addressed. They include annual tariff assessments and increases accordingly, allowing discom officials to enforce against theft of supply and non-payment of bills, and gradually shifting from free and unmetered farm supply to some basic pricing. Equally important, the demand for continuous, good-quality supply should become an electoral good, which would in turn make the aforesaid issues an imperative for political leaders.

With an energy growth of just 1.5%, it is difficult to see how even a 4% GDP growth can be sustained.

Three problems bedevil the power sector, all of which are addressable, if only there is clear focus and political will.

The first is the inability of the government to usher in laws which allow the RBI to deduct the amounts due to power distribution companies (discoms) directly from the funds paid into state accounts. Instead of using its brute majority to usher in divisive agendas like the CAA (Citizenship Amendment Act), the government ought to have brought more sanity into power revenue recoveries.
The trouble is that when it comes to controlling power profligacy, often used as a means of winning elections, the knees of policymakers begin to buckle.

The second is the unwillingness of policymakers to realise that the maximum bang for the buck lies in promoting (off-grid) rooftop solar. There are several reasons for this. As Tripura showed last year (http://www.asiaconverge.com/2019/10/solar-energy%e2%80%88ministries-practice-deceive/), the average cost of providing grid connectivity to households is over Rs.2 lakh per household. The cost of a rooftop solar installation is just Rs.50,000. Then there is the issue of supply reliability. Rooftop solar is more reliable as a source of power than grid power because, for a variety of reasons both technical and collusive, state engineers resort to power cuts and shutdowns. Most rural folk thus prefer off-grid rooftop solar to grid connectivity.

The third is the economic benefit that comes from rooftop solar. It has the potential to create over 80 million jobs within a couple of years (http://www.asiaconverge.com/2017/12/sabotaging-rooftop-solar-and-employment-generation/). It can actually generate employment, kickstarting small and medium enterprises at the grassroots level. And it reduces the cost of imports, as the country learns to depend less on imported fuels.

Following the assessment, EGI has begun a project that could provide a pathway to avoid another disaster like India's recent blackouts. The project seeks to develop a holistic and participatory approach to electricity planning, working with decision-makers to close the "holes" and explore the potential of improved energy efficiency and renewable energy in order to shrink the demand-supply electricity gap.

Indian civil society can also play a role in improving governance.
Citizens can create a "demand" for improvements by asking that information on sector problems and potential solutions be placed in the public domain. They can also hold decision-makers accountable for providing affordable, reliable, and quality electricity for all. Unless both citizens and decision-makers start to chip away at India's core governance problems, blackouts (and a host of other power problems) will likely become increasingly common.

Continuing delays in payments by the state-owned distribution utilities (discoms) in key states such as Maharashtra and Rajasthan pose a challenge for the wind energy sector, although some improvement has been seen lately.

On the positive side, the MNRE scheme for procurement of 1 GW through the auction route would facilitate offtake for wind energy players, according to ICRA.

"While there has been some improvement in the payment pattern by utilities in Rajasthan with the implementation of UDAY (Ujwal Discom Assurance Yojana) as well as by the utility in Maharashtra in the last three-month period, a build-up in the receivable position is seen, which varies from 8 to 12 months as on November 2016 and thus remains quite significant," said Sabyasachi Majumdar, Senior Vice President, ICRA Ratings.

"In addition to payment delays by state utilities, wind energy projects remain vulnerable to the risk of non-signing of power purchase agreements (PPAs) by the utilities, as seen in Maharashtra, and forced backdown by utilities in the states of Rajasthan and Tamil Nadu," Majumdar said.

Further, the implementation of the forecasting and scheduling framework, as approved by the State Electricity Regulatory Commission (SERC) in Karnataka and in other states where draft regulations are in place, poses regulatory challenges for the sector, given the variable and intermittent nature of wind power generation and the limited experience available with the
IPPs in forecasting and scheduling as of now.

Also, the sector continues to face challenges due to the limited compliance with renewable purchase obligation (RPO) norms by the obligated entities, as well as the variance in RPO norms across states.

Notwithstanding these near-term challenges, the long-term demand potential for wind power remains strong, given the large untapped wind power potential, fairly attractive feed-in tariffs and relatively lower execution risks.

ICRA notes that the incremental wind-based energy capacity requirement by FY2022 is estimated at about 46 GW, as against the current installed capacity of 28.1 GW. This assumes annual energy demand growth of 6%, non-solar RPO at 12.5% by FY2022, and wind as a renewable energy (RE) resource contributing a dominant share (75%) in meeting the non-solar RPO requirement on an all-India basis.

In the absence of adequate maintenance, leaning poles, sagging conductors, recurrent transformer failures, burnt insulators, failed switches, unreplaced fixtures and so on became commonplace. The result was frequent interruptions, pervasive voltage fluctuations, high technical losses, and accidents with large-scale human casualties.

So what's the way out? Clearly, only the first challenge, of enhancing generation capacity by attracting private investment into generation, has been addressed. Even there, the failures in the other parts of the sector threaten to undermine the successes. The issues of high distribution losses and free farm power remain unresolved.

Privatisation of distribution has been the oft-repeated solution. But the experience of private participation in power distribution has been mixed. The limited successes have been in areas with a favourable load mix and disciplined consumers, and with none of the underlying problems addressed.
Also, unlike generation, political economy factors are relevant in the consumer-facing distribution sector.

There is no avoiding the reality that any sustainable solution has to involve addressing the underlying financing problem of the discoms. This requires tariff increases that allow for cost recovery, metering and pricing of farm power, and strict billing and collection. There are no short-term or one-time legislative or regulatory solutions for these. This needs political attention, and urgently. The house is almost burnt down.

In the meantime, a good and practical bureaucratic starting point would be to appropriately leverage technology to measure and audit supply, and to localise and highlight the sources of losses. Automatic meter reading (AMR) devices and smart meters now make it possible to easily monitor these in real time, bill, and even control supply to consumers. But given the constraints and the complexity of the challenge, this needs to be done gradually and with great attention.

Fortunately, the very configuration of electricity distribution offers an opportunity to monitor in a gradually cascading manner. Electricity is transmitted from the generators through a cascade of substations. The distribution side starts with the 11 kV feeders emanating from each 33/11 kV substation (typically 3-8 feeders per substation), each feeder in turn servicing several smaller distribution transformers, each of which in turn has several consumers depending on their total load.

There is no dearth of efforts to leverage technology to address the distribution-side problems. In fact, there have been programs for several years. The Government of India has had programs to install smart meters up to the transformer level.

The Indian government has set ambitious goals in the 11th Plan for the power sector, owing to which the power sector is poised for significant expansion.
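The cascading energy audit described earlier (substation feeder → distribution transformers → consumers) amounts to comparing energy metered into each feeder against the energy billed beneath it. A minimal sketch follows; the feeder names, meter readings, and the 15% flagging threshold are hypothetical:

```python
# Hypothetical feeder-level energy audit: energy metered at each 11 kV
# feeder vs. the total billed to consumers under it. A large gap points
# to technical losses, theft, or billing failures on that feeder.

feeder_input_kwh = {"F1": 120_000, "F2": 95_000, "F3": 60_000}
billed_kwh = {"F1": 88_000, "F2": 91_000, "F3": 57_500}

THRESHOLD_PCT = 15.0  # assumed threshold for flagging a feeder

for feeder, supplied in sorted(feeder_input_kwh.items()):
    loss_pct = 100.0 * (supplied - billed_kwh[feeder]) / supplied
    flag = "AUDIT" if loss_pct > THRESHOLD_PCT else "ok"
    print(f"{feeder}: {loss_pct:5.1f}% unbilled  [{flag}]")
```

The same comparison can be repeated one level down, transformer by transformer within a flagged feeder, to localise the loss further.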
In order to provide availability of over 1,000 units of per capita electricity by the year 2012, it has been estimated that a need-based capacity addition of more than 100,000 MW would be required. This has resulted in massive addition plans being proposed in the sub-sectors of generation, transmission and distribution.

While some progress has been made at reducing Transmission and Distribution (T&D) losses, these still remain substantially higher than global benchmarks, at approximately 33 percent. In order to address some of the issues in this segment, reforms have been undertaken through unbundling the State Electricity Boards into separate generation, transmission and distribution units, and the privatization of power distribution has been initiated through either outright privatization or the franchisee route; the results of these initiatives have been somewhat mixed. While there has been a slow and gradual improvement in metering, billing and collection efficiency, the current loss levels still pose a significant challenge for distribution companies going forward.

While additional gas supply from the KG Basin has eased the shortage to a limited extent, supply constraints for domestic coal remain and are expected to continue going forward. Consequently, public and private sector entities have embarked upon imported coal as a means to bridge the deficit. This has led some Indian entities to take up the task of purchasing, developing and operating coal mines in international geographies. While this is expected to secure coal supplies, it has in turn thrown up further challenges. For example, the main international market for coal supply to India, Indonesia, poses significant political and legal risks in the form of a changing regulatory framework towards foreign companies.
Similarly, coal evacuation from mines in South Africa is constrained by their limited railway capacity, and the capacity at ports is controlled by a group of existing users, making it difficult for a new entrant to ensure reliable evacuation. In this case it is essential to manage the risk of supply disruption through options such as diversification of supply, due diligence on suppliers, unambiguous contracting and strict monitoring, among others.\nSource: ©2010 KPMG, an Indian Partnership and a member firm of the KPMG network of independent member firms affiliated with KPMG International Cooperative (“KPMG International”), a Swiss entity, http://www.kpmg.de/docs/PowerSector_2010.pdf", "score": 30.41339309818211, "rank": 26}, {"document_id": "doc-::chunk-1", "d_text": "While work had been completed in 104,496 villages as on 31 March, according to the Planning Commission, a significant number of the villages are not yet energized—which means power has not started flowing through the grid to the villages.\nAbout 19.4 million poor households have been provided free connections.\nApart from such delays, electricity is available in the villages only for around six hours, as the poor finances of state electricity boards act as disincentives to provide power. Also, delays in issuance of electricity bills lead to high outstanding dues.\nWhile Chaturvedi admitted that “distribution utilities are not finding it attractive to provide electricity,” he said that delays in issuance of electricity bills leading to households’ inability to pay was “partly true”.\n“Some of these are petty excuses. We are concerned about RGGVY and have asked the states to separate the feeders (separate electricity feeds for farmers and non-farmers). 
Some states are doing it,” he said.\nCumulative losses of the distribution utilities increased from Rs.1.22 trillion in 2009-10 to Rs.1.9 trillion by March 2011.", "score": 30.313482131950092, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "Even 75 percent of families connected to grid power in rural areas – 700 million people – regularly get less than six hours of uninterrupted power a day, according to a 2016 study by the Delhi-based Centre for Science and Environment.\nIndia's government aims to bring 500-megawatt mini-grid solar systems to about a fifth of the country's 1.3 billion people in the next five years, according to a government draft policy.\nMore than 230 million people in India still have no access to basic electricity, often in rural areas where expanding the national power grid is too expensive.\nExpanding access to power aims to boost household incomes, help students study, provide better access to information via radio or television and generally improve life for rural people, studies have suggested.\nBut the Uttar Pradesh study – which looked at 1,281 households in 81 communities – found that providing about two hours of electricity in the evening via clean energy mini-grids resulted in few significant changes.\nMany households continued to use subsidised kerosene for lighting after the mini-grid power shut off, said the study published in Science Advances magazine.\nElectrification programs that focus on off-grid technologies must think carefully about whether low-cost minimal systems are the right answer, authors of the study said.\nClementine Chambon, co-founder of Oorja Solutions, which will begin setting up clean energy mini-grids systems in the Bahraich district of Uttar Pradesh in June, said she agreed that providing larger amounts of power to supply businesses, rather than just homes, could help solar systems pay for themselves more quickly – and could subsidise household use of power.\n\"Oorja will cross-subsidise residential 
consumers through higher electricity tariffs for businesses. This renders clean energy affordable to all, whilst ensuring the profitability of the mini-grid infrastructure,\" Chambon said.\nIndia eventually aims to expand its grid power system throughout the country, but there will be five to 15 years while that is underway where other solutions – such as mini-grids – are needed, according to Ashvin Dayal, the Rockefeller Foundation's managing director for Asia.\nThose are crucial years both for meeting world goals to extend clean electricity to communities without it and to curb climate-changing emissions in line with the Paris Agreement, which calls for fossil fuels to be phased out by the second half of the century.\nThe cost of producing solar power in India has plunged in recent months, hitting a level that makes it price competitive with fossil fuels, including coal, experts say.", "score": 30.195816146146228, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "What Do India’s Recent Electricity Sector Reforms Mean for Renewable Energy?\n|In July, 620 million people in India were plunged into darkness during the world's biggest power outage to date. (AP // Bikas Das)|\nIn the wake of massive blackouts this past summer that spanned 20 Indian states and left half of the country’s population without power, the national cabinet approved a debt restructuring program for State Electricity Boards (SEBs) in September aimed at improving the functioning and viability of India’s electricity sector. SEB debt across all states is currently over US$35 billion, or about 2 percent of the nation’s GDP.\nSEBs are state-owned utility companies responsible for operating the electricity grid and delivering electricity to customers. SEBs buy electricity from power generators at a price negotiated through a Power Purchase Agreement (PPA), which they in turn sell to electricity consumers. 
Lack of necessary investment to improve grid infrastructure, widespread electricity theft, and low electricity prices that result in SEBs often selling power to customers at a loss are among the systemic problems in many states. Electricity losses during transmission and distribution to customers—due to grid inefficiencies, electricity theft, and illegal grid connections—account for more than 25 percent of power generated in India.\nThe cabinet-approved debt restructuring package aims at addressing some of these problems. Under the program, state governments will take on 50 percent of SEBs’ short-term liabilities over the next two to five years. The remaining 50 percent of short-term loans will be rescheduled by lenders, providing SEBs with favorable interest rates and a three-year moratorium on principal payments. SEBs have until the end of this year to sign on to the program, which will apply to debts accumulated through March 2013.\nIf the program has the desired impact, reduced debt pressure on the state utility companies will allow them to invest in much-needed infrastructure and operation improvements and revitalize the electricity sector by enabling new PPAs that are currently beyond SEBs’ financial capacities.\nIn order to qualify for the restructuring package, SEBs are required to implement several reforms aimed at improving their financial viability and provision of services. The conditions include that SEBs may not take on new short-term loans, must collect outstanding power dues from customers, and must reduce electricity losses by at least 25 percent. 
State governments must also regularly review electricity prices, which will likely result in price increases for consumers in order for SEBs to generate enough revenue to cover operating costs.", "score": 29.687054474182798, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "BHUBANESWAR, India (Thomson Reuters Foundation) - In the next five years, India plans to install 10,000 small-scale solar-power grids across the country to bring basic electrical power to communities without it.\nBut providing access to a minimal supply of clean energy – enough to power two LED lights for a few hours and charge a mobile phone – is probably not enough to significantly improve people’s lives, new research suggests.\nA study in Uttar Pradesh, which looked at more than a thousand homes that had received basic access to clean electricity for the first time, found that spending on expensive kerosene for lighting had fallen, a benefit to families.\nBut access to a couple hours a day of electricity was not enough to boost savings, help launch new businesses, increase time spent working or studying, or otherwise significantly improve people's lives, researchers found.\nWhat appears to be needed instead are larger clean power systems capable of providing enough energy to power businesses throughout the day, said Michael Aklin, the study's lead author and a political science professor at the University of Pittsburgh.\n\"Larger off-grid systems that offer more power could be used for appliances and machinery, with more potential for livelihood creation,\" he told the Thomson Reuters Foundation in an email interview.\nSuch systems could be used to run businesses such as internet cafes, fuel stations, repair shops and banks, as well as schools and health centres, he said.\nIncome-earning businesses of that kind would make it easier to pay back the higher up-front costs of installing clean energy systems - and \"could make a major contribution to rural development in India\", Aklin 
said.\nAdarkanta Jena, 53, an engineer with the government of Odisha state, agreed that at least eight hours of uninterrupted power a day are needed to run popular small businesses such as welding shops producing grills for windows, flour mills and machines for hulling or threshing rice.\nAnd \"in 24 hours, 12 hours is the minimum power supply that will enable children to study extra hours, provide women a lighted space to cook and the family not to eat dinner in the dark. Having electricity from 8 a.m. to 8 p.m., at least, could bring social and economic change,\" he predicted.\nFrustrations with the power supply are hardly limited to clean power users in India.", "score": 29.058594076164358, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "Electricity distribution companies owned by state governments in India are paying for surplus power that they do not use. This has a significant impact on their finances, as a previous article explained.\nHow did surplus power become such a major problem?\nSurplus power could arise due to a fall in demand, or an increase in power supply. Insights from various states show that the burgeoning surplus is more due to the increase in power generation capacity.\nCredit: Scroll", "score": 29.039098903156233, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "On the electricity side, the country may need over 500 billion kilowatt-hours (kWh) of energy in the year 2004-05, with a total installed capacity of at least 150,000 MW. 
If we compare these figures with the present level of production of about 150 million tonnes of coal and lignite, 26 million tonnes of crude oil and 150 billion kWh of electricity, with a capacity of 43,000 MW, we find a frustrating picture.\nIn spite of improving the efficiency of utilization and better demand management, there is always a picture of deficiency. Hence efforts are needed to face the situation in the years to come.\nAnother important aspect of energy strategy to be discussed is the energy requirement in rural areas. In India, the rural population would still constitute about 70 per cent of the total population by the turn of the century.\nProviding for the energy requirements of this large segment has always been neglected and overlooked. The per capita useful energy consumed per day in rural areas is about 285 kcal (kilocalories) as compared to 414 kcal in urban areas. Scarcity of commercial sources of energy like petroleum, electricity, coal and natural gas on one side, and fast deforestation, dwindling supply of firewood and gradual commercialization of traditional fuel sources on the other, has made it very difficult for the rural people to secure an adequate energy supply at a cost which is affordable to them. Hence it is now time to give much emphasis to enhancing non-commercial sources, like biogas plants, social forestry (biomass energy) and improved cooking devices, in rural areas.", "score": 28.414525225771413, "rank": 32}, {"document_id": "doc-::chunk-1", "d_text": "They, including the latest one, have been like band-aid on gangrene, merely kicking the can down the road. It also did not help the discoms that the share of purchases from state government generators (Gencos), whose dues could often be rolled over, declined and that from independent power producers rose.\nCompounding the problem, political considerations meant that tariff increases were mostly off the table. 
In fact, in many states, household tariffs remained unchanged for years, sometimes nearly a decade. With raising tariffs for household and farm consumers a no-go, discoms relied on industrial and commercial consumers to bear the brunt of tariff increases. An inverted tariff structure emerged, whereby industrial tariffs rose to become much higher than household tariffs. With industry already struggling with power cuts and poor quality of supply, the tariff inversion only added to the problem of industrial competitiveness.\nAs a result of all this, the objective of discoms degenerated into mere provision of power during the scheduled supply timings and avoidance of unscheduled interruptions. In the rural areas, it meant limiting supply to basic lighting during the night (which was also generally the time for farm supply). Since farm supply did not fetch any revenues and the supply on most rural feeders was very small (as a share of the total supply), the discom anyway had limited incentive to carry out maintenance works on the rural distribution network and focus on the quality of supply. For the discoms, rural supply was nothing more than a necessary evil.\nIn fact, the need to limit farm usage beyond the period stipulated under the respective state’s free farm power policy (typically 7-10 hours, varying across states) meant that discoms restricted supply to single phase after that period. This effectively meant that economic activities requiring the use of motors, etc. became prohibitively difficult. Such enterprises had to draw three-phase lines directly from the 33/11 kV substation, a very expensive proposition, often comparable to the capex required for the business itself. While states tried to overcome this by segregating agriculture feeders, the high capital investments and practical challenges involved (not to speak of higher technical losses due to the additional network) meant that this has had limited coverage.\nAll this aligned the discoms’ incentives accordingly. 
Struggling with a hand-to-mouth existence, the discoms had limited incentive or resources to invest in maintenance of the distribution network.", "score": 27.71366538448142, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "New Delhi, Jun 27 (UNI) The government today said Rs ten lakh crore will have to be invested to increase electricity generation in the country by 78,000 MW over the next four to five years.\nTerming it a challenge, Power Minister Sushilkumar Shinde said finding the institutions, companies, project managers, workmen and vendors who will deliver all this phenomenal work in the required time frame was also a daunting task ahead.\n''Improving distribution will involve a lot more investment than in the past, coupled with a lot of technology inputs on a massive scale,'' he said while inaugurating the 'India Electricity 2007' conference.\nThe Minister said that there is a need to include central and state Public Sector Undertakings, Indian and foreign investors, equipment suppliers and financiers.\n''It is not enough only to generate more power. 
We have to ensure that the current high levels of Aggregate Technical and Commercial losses come down to 15 per cent in the next few years so that we derive the full benefits of the stupendous capacity addition efforts.'' The entire value chain in the generation and consumption of electricity, starting from coal extraction right down to the more efficient use of energy, needed to be constantly geared up and streamlined to make it more productive and efficient, he said.\nThe government is currently trying to overcome shortages of about 13,717 MW, or 13 per cent of the country's requirement, and increase generating capacity by ten per cent.\nAll major stakeholders from the power sector, including senior officials from utilities, independent power producers, regulators and investment bankers, were participating in the meet.\nThe conference was jointly organised by FICCI and the power ministry.", "score": 27.408364267945817, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Coal is responsible for almost 70% of India's electricity production - that much is agreed upon. But is the problem that there is a shortage of coal to supply the power plants, or does India over-rely on coal?\n\"As a result of the (coal) domestic production shortfall, India has had to rely on more expensive imported coal. However, because Indian states sell electricity at low, heavily regulated rates, electricity generators have been largely unable to pass on the higher cost of imported coal to consumers. 
The ultimate result is a crippling shortage of coal.\"\nRebecca Byerly, correspondent for the Christian Science Monitor, focuses on the environmental, health, and social devastation the coal industry has brought to the Jaintia Hills of northeast India in \"India's big power blackout: Why coal hasn't been a savior\".\n\"It looks like something out of an apocalyptic movie: mountains of tar-black coal, polluted orange rivers, and seemingly bottomless holes plunge more than a 100 feet beneath the earth's surface.\"\nShe points to a disadvantage of coal in meeting peak power surges.\n\"Coal operates at a steady output 24 hours a day - it's baseload,\" says Mr. Justin Guay, the Washington Representative of the Sierra Club International Climate Program. \"But coal can't be ramped up quickly to accommodate quick peak surges in demand.\"\nYet coal reliance need not result in power outages, as NPR illustrates in its comparison of India to another huge, coal-reliant emerging economy.\nIn Power Outage Exposes India-China Contrasts, NPR's Julie McCarthy in New Delhi and Frank Langfitt in Shanghai explain \"the different attitudes toward (energy) infrastructure\" and why Indian-style energy outages are not tolerated in China.\nText and audio available.\nThanks to Sierra Club Hitched", "score": 27.0430498240355, "rank": 35}, {"document_id": "doc-::chunk-1", "d_text": "The first step towards this goal is demanding accurate estimates of electricity losses, as well as public disclosure of efforts being taken to address them.\nA Lack of Accountability: India’s transmission network needs strengthening in order to efficiently deliver the generated electricity from power plants located in different parts of the country to millions of consumers. Each state in India has its own transmission agency, and these are networked into one of five regional grids. Over the last decade, India has embarked on a project to join the five regional grids and establish a national grid. 
The plan sounds good in theory, but it requires a high level of discipline in order to work. Each state government needs to coordinate with the others to ensure no one is drawing too much power or impacting another’s allocated electricity. The blame game that followed the recent blackout showcases the fact that this communication and coordination just isn’t happening yet. Also, there is really no one accountable for managing the grid, pointing to a serious accountability vacuum.\nLack of a Holistic Governance Approach: India does not have a holistic approach to electricity planning or implementation. Generation, transmission, and distribution planning (and investments in each of them) happen independent of each other; renewable energy planning takes place outside of conventional electricity planning; and service to on-grid consumers and off-grid consumers happen separately–all of which highlights the absence of an over-arching electricity governance framework. This situation has caused ad hoc and lopsided planning, with no public disclosure of information about efforts to close the electricity demand-supply gap. Plus, no one agency or institution anywhere in the country is accountable for the failure to provide affordable, reliable, and quality electricity.\nImproving India’s Electricity Governance\nThis list certainly isn’t exhaustive—several other interlinked problems contribute to the malaise affecting India’s power sector, from bad monsoons and water scarcity to inefficient electrical appliances. 
There are solutions to some of these concerns through improved technology (such as isolating and responding to grid problems and enhancing power plants' efficiency) and innovative policies (like holistic planning approaches and incentivizing grid discipline), but addressing the accountability and governance issues is key to improving India's power sector.\nWRI's Electricity Governance Initiative (EGI) is a network of civil society organizations working to promote transparent, inclusive, and accountable decision-making in the electricity sector in several countries, including India. EGI's 2006 assessment of India's power sector identified a range of core electricity governance problems in India.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "Ending India's Massive Power Grid Outages\nOn July 30th and 31st, the world's largest blackout — The Great Indian Outage, stretching from New Delhi to Kolkata — occurred. This blackout, caused by northern power grid failure, left nearly 700 million people — twice the population of the U.S. — without electricity. A grid failure of this magnitude has thrown light on the massive demand for power in the country and its struggle to generate a much-needed power supply.\nIndia aims to expand its power-generation capacity by 44 percent over the next five years. In June, the country's power generation fell short by 5.8 percent against a peak-hour demand for 128 gigawatts, according to government data. India is divided into five regional grids, which are all interconnected, except for the southern grid. All the grids are run by Power Grid, which operates more than 100,000 kilometers of electricity transmission lines.\nSerious concerns have once again been raised about the country's infrastructure and its inability to meet growing energy needs. 
Government officials have concluded, \"The grid failed because of the overloading of power,\" and contend that \"many states\" try to take more power than they are allotted from the grid.\nThe country's lack of energy security is a major constraint on its capacity to generate power. The slow pace of tariff reforms is hindering infrastructure investment at the state level in most parts of the country. The centralized model of power generation, transmission and distribution is growing more and more costly to maintain at current levels to meet increasing energy needs. The blackout and shortage of power are hampering India's economic growth and its capacity for growth.\nArticle continues at ENN affiliate, Triple Pundit", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "S.K. Soonee, Former and Founder CEO, Power System Operation Corporation Limited (POSOCO), spoke at length on the inevitable changes, challenges and outlook for the electricity grid at Power Line's 15th Annual Conference on \"Power Transmission in India,\" held in Gurugram recently. Excerpts from his address…\nChallenges in grid operations\nThe ongoing energy transition is bound to impact transmission, as well as system operations. With the electrification of various segments such as transport and cooking, system operation would change too. Three major changes are taking place in the transmission segment. Firstly, transmission access, planning and pricing are moving from Kirchhoff's Current Law to Kirchhoff's Voltage Law. This essentially implies that point-to-point transmission would be replaced, and the grid would become a common carrier with general network access and point-of-connection access and tariff. 
This is a paradigm shift.\nSecondly, the value of transmission has until now been mainly restricted to transmitting power, and certain important attributes related to the transmission grid, namely reliability, resilience, inertia, access to market and risk mitigation, have remained latent. All these aspects have important roles in transmission and have to be valued. However, with the energy transition, these aspects will become more important than ever before, because we will have non-tameable generation (wind and solar) and even the load profile will change (electric vehicle charging, induction heaters for cooking, etc.).\nThe third change is on the transmission planning front, where integrated resource planning will be required. The transmission planner would have to forecast the import and export capability of every state using various probabilistic approaches. States have their own load generation balances, portfolios and import/export, and this needs to be pushed to the limits. It is important to determine the degree of imports needed by a state to meet its power demand. States with less load need to determine if they will be able to evacuate their entire internal generation. It is also important to take a call on what import-export capability a state would like to plan as a policy. With respect to market-based economic despatch, cheaper power would replace costly power until the marginal cost becomes the same, or MBED discovers transmission congestion. Overall, transmission planning will become extremely important to integrate renewables.\nAt present, there is a challenge with respect to institutional development of power system planning as a faculty.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "A crucial discussion paper by CERC, the electricity regulator, has, verily, made economics stand on its head! 
Incentives and economic signals seem to be downright lacking in the proposed design for a wholesale, pan-India market for electric power.\nGiven that such a market would cut down on costs and boost efficiency across the board in the fault-prone power sector, the lacuna seems glaring indeed.\nThe issue at hand is a market for transmission capacity. The key really is \"open access\", the assurance of carrying capacity to transmit power cross-country. With open access, efficient power producers can compete for custom.\nIt's essential for reform. The Electricity Act, 2003 does provide the necessary enabling environment for open access. But the problem is that for long years we have thoroughly underinvested in transmission.\nLine capacity is woefully inadequate and is likely to remain so for years on end. Hence the compelling need for a system of pricing and tariffs for transmission assets.\nThe idea, of course, is that proper price signals would induce better utilisation of scarce assets, and boost investment as well. Open access (at the right price!) would lead to a \"seamless\" wholesale power market.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-5", "d_text": "Higher operational efficiencies on the generation side, smart grids and network management are being pursued and need to be accelerated. On the policy front, changes to the Electricity Act along with public-private partnership measures for the distribution sector will be critical in improving investor interest and increasing stakeholder confidence. On the financial side, the current low-interest rate regime will make the sector more competitive. The advent and adoption of newer instruments such as InvITs will enable higher monetisation of operational and viable assets. In addition, existing stressed assets on the banking side will need to be resolved. 
In a nutshell, the power sector needs to become more sustainable, financially strong and operationally efficient to meet the immediate challenges and tap into opportunities in the post-Covid world.", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "08 / 2010\nThere is no question that India desperately needs to generate more power.\nThe energy indicators say it all. It has the lowest per capita consumption of electricity in the world. This when access to energy is correlated with development, indeed with economic growth.\nLet us not dismiss the need for energy as a simple issue of intra-national equity when the rich use too much, while the poor do not have enough. This may be true for other natural resources, but energy scarcity is more or less all around. Data shows India’s energy intensity has been falling—we do more with each unit of energy produced. In industry it is down by 2.2 per cent between 2004-05 and 2008-09, and in the agriculture and the service sector by as much as 4.7 per cent annually.\nThe reason is not hard to see. India has one of the highest prices of energy and it does pinch industry and the domestic consumer. So saving is part of the energy game. This is not to say we must not do more to cut energy use and be more efficient. The point is there are limits to efficiency.\nBut why am I stating the obvious? The reason is that even though India knows it needs more power, it does not realise it will not get it through conventional ways. It will have to find a new approach to energy security before the high-sounding targets of the power ministry are derailed and ultimately energy security compromised.\nJust consider what is happening in the country. There are widespread protests against building major power projects, from thermal to hydel, and now nuclear.\nAt the site of the coal power plant in Sompeta in Andhra Pradesh, the police had to open fire on some 10,000 protesters, killing two. 
In the alphonso-growing Konkan region, farmers are up in arms against a 1,200 MW thermal plant, which, they say, will damage their crop. In Chhattisgarh, people are fighting against scores of such projects, which will take away their land and water. The list of such protests is long, even if one does not consider the fact that most of the coal needed to run them is under the forests, and the mines are contested and unavailable.\nHydel projects are no different. Environmentalists are protesting the massive number of projects planned on the Ganga that will virtually see it dry over long stretches.", "score": 26.252539699492118, "rank": 41}, {"document_id": "doc-::chunk-1", "d_text": "There is scope to use both mandatory and voluntary standards, the latter being reinforced by public opinion combined with appropriate tax incentives.\nGiven the importance of energy conservation, there is a need to focus on technological options for improving energy efficiency in industry, power generation and commercial buildings, and on promoting renewable energy technologies in different end-use sectors. There is a need to introduce efficient high-end power turbines and generators.\nSOLUTIONS NEEDED IN INDIA FOR A BETTER SOCIETY\nThere is a need for alternative energy which will not only offset the demand for conventional fossil fuel, but also pave the way to cleaner solutions. A green growth economy is the need of the hour. Ironically, India has the world's fifth-largest coal reserves and still faces an acute power crisis. India's per capita power consumption, around 940 kilowatt-hours (kWh), is among the lowest in the world. In comparison, China has a per capita consumption of 4,000 kWh, with the developed countries averaging around 15,000 kWh.\nBut the good news is that a massive overhaul of the power sector is underway, with the government planning to bring in a series of amendments. 
For instance, there will be changes to the Electricity Act of 2003 across all segments of the power value chain within the current session of Parliament.
The Union Cabinet had in November approved the launch of Deendayal Upadhyaya Gram Jyoti Yojana (DDUGJY) for ensuring 24x7 power supply. The Rs.43,033-crore scheme would separate agriculture and non-agriculture feeders, facilitate judicious rostering of supply to agricultural and non-agricultural consumers in rural areas, and strengthen sub-transmission and distribution infrastructure in rural areas, including metering of distribution transformers/feeders/consumers.
The coal sector will soon open up to private players, bringing more organization and competitiveness into the sector. Private companies are already allowed to mine supplies for their own power plants and other industrial projects, and Coal India hires some private firms to operate mines. But until now private companies have not been permitted to sell coal. However, there is no move to fully privatize Coal India.
Insufficient fuel was a chief concern of the industry. But during April 2014 to October 2014, there was 15.4% growth in coal-based power generation over the corresponding period of the previous year.", "score": 25.663311576010948, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "Calls for more efficient transmission grids and regulation measures
Electricity distribution to the end consumer is the weakest link in the power sector, says a World Bank report that asks for transition from administratively-run to commercially-run utilities.
The World Bank report named “More Power to India: The Challenge of Distribution” was conducted at the request of the Government of India and was released in Hyderabad on Monday.
While making an urgent call for change, the study recognises the many impressive strides that the Indian power sector has made over the years. 
Generation capacity tripled between 1991 and 2012, boosted by the substantial role played by the private sector.\nA state-of-the-art integrated transmission grid now serves the entire country. Private distribution utilities in Kolkata, Mumbai, Surat and Ahmedabad, which have been owned and operated by the private sector since before Independence, point to potential gains from private participation. Grid-connected renewable capacity has risen from 18MW in 1990 to 25,856 MW in March 2013. And more than 28 million Indians have annually gained access to electricity between 2000 and 2010.\nHowever, according to the study, the financial health of the sector is fragile, limiting its ability to invest in delivering better services. In 2013, total accumulated losses in the sector stood at Rs 2.9 lakh crore or three per cent of GDP. Around 70 per cent of the sector’s accumulated losses in 2013 came from Uttar Pradesh, Rajasthan, Tamil Nadu, and Haryana. Uttar Pradesh alone accounted for 27 per cent of the sector’s accumulated losses.\nWho is the biggest loser?\nThese losses are overwhelmingly concentrated among distribution companies (discoms) and bundled utilities—State Electricity Boards (SEBs) and the State Power Departments, says the study. Sector losses have led to heavy borrowing— power sector debt reached Rs 5 lakh crore in 2013. More than 40 per cent of the loans were made to discoms.\nOver the last two decades the sector has needed periodic rescues from the central government—a bailout of Rs 35,000 crores in 2001 and a “restructuring package” of Rs 1.9 lakh crore that was announced in 2012.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "Last year, the Central Electricity Authority reported that India is on the verge of being a power surplus nation. For a country accustomed to power shortages, this may appear to be a major step towards reliable electricity supply. But surplus power has not eliminated shortages. 
In particular, rural India continues to face power cuts.\nWhat explains this paradox?\nSurplus power occurs when the power available with distribution companies at a given time exceeds the demand for electricity. A surplus could arise due to a fall in the demand for power, for instance, during an industrial slowdown. Or it could result from an increase in power supply, for instance, when a new generating station is added. Insights from various states show that the burgeoning surplus is more due to an increase in power supply, as we explain in a subsequent article.\nBut first, we explain why surplus power is a problem. To start with, it has put a significant financial burden on distribution companies and may adversely impact their already precarious financial positions.\nCost of power\nMost power procurement by distribution companies is done through long term, 25-year contracts with power generating companies. These contracts legally bind the distribution companies into paying the generating company a lump sum annual amount for fixed costs, and a per-unit charge to cover variable costs (mostly for fuel).\nDistribution companies have to pay the fixed charges even if they do not draw power from the generating company for a particular time period. If such surplus cannot be sold, it is backed down, which means power generators lie idle at that time, incurring fixed costs, but generating no electricity.\nThe table below shows the financial impact of backing down in five states.\nIn effect, states are paying for surplus power that they do not use. Fixed cost payments to power generators due to backing down are as high as 15% to 35%. 
In Gujarat, the payments are more than three times the agricultural power subsidy, and in other states, they amount to about 40% to 60% of power subsidies by the state government.\nSuch costs are eventually borne by the electricity consumers.\nA problem of contracts\nAs distribution companies with surplus power have entered into long-term, legally-binding contracts for power, it is difficult to surrender or re-allocate such capacity. Therefore, the most common strategy to mitigate the cost of surplus power is to ensure its sale.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-1", "d_text": "This odd surplus – as dubious as it may be – is an inspiring story that started in 2003 when the Indian Parliament passed landmark legislation, replacing the old electricity Act. The new law allowed electricity trade, mandated open access and empowered the electricity regulator to develop an electricity market. Open access materialised on the main transmission grid in 2004 and traders began their work. Power exchanges were set up a few years later in 2007. This happened at a remarkable pace, unprecedented even in most rich countries.\nIndia is so big and diverse in terms of weather that even in an overall shortage scenario there are states or regions with surplus energy, particularly in the East and the North-East. Open access makes the technical (and trickier) part of trading electrical energy relatively simple. The power trade started right in the midst of a nearly 10% energy shortage nationally, baffling the ministry of power.\nOne of the first energy trade transactions was between the eastern city of Kolkata and the northern state of Punjab over weekends when offices are closed in Kolkata and demand drops. Punjab went further in its quest and started buying power from late night to early morning. 
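The two-part tariff mechanics described in the surplus-power discussion above (fixed charges owed on the full contracted capacity even when it is backed down, variable charges only on energy actually drawn) can be sketched numerically. A minimal sketch in Python; every figure below is hypothetical, chosen only to illustrate why backed-down capacity still costs a distribution company money, and none comes from the articles:

```python
# Illustrative sketch: discom procurement cost under a two-part tariff
# when part of the contracted capacity is backed down.
# All numbers are hypothetical.

def procurement_cost(contracted_mw, drawn_mw, hours,
                     fixed_cost_per_mw_hour, variable_cost_per_mwh):
    """Fixed charges accrue on the full contracted capacity whether or not
    power is drawn; variable (fuel) charges accrue only on energy drawn."""
    fixed = contracted_mw * hours * fixed_cost_per_mw_hour
    variable = drawn_mw * hours * variable_cost_per_mwh
    return fixed, variable

# A discom contracts 1,000 MW but draws only 700 MW over a 24-hour day.
fixed, variable = procurement_cost(
    contracted_mw=1000, drawn_mw=700, hours=24,
    fixed_cost_per_mw_hour=1500,   # Rs per MW per hour (hypothetical)
    variable_cost_per_mwh=2500)    # Rs per MWh (hypothetical)

# Share of the total bill that pays fixed charges on idle (backed-down) capacity.
idle_share = (1000 - 700) / 1000 * fixed / (fixed + variable)
print(f"fixed: Rs {fixed:,}, variable: Rs {variable:,}")
print(f"fixed cost paid for backed-down capacity: {idle_share:.1%} of total")
```

With these made-up numbers the idle share lands near 14 per cent, the same order of magnitude as the 15-35 per cent range the article reports.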
No bureaucratic intervention could have caused such an efficient allocation of resources, or could have convinced the business class to invest in generating plants on such a massive scale.
And when the rising prices in the short term electricity market caught the attention of the Indian business class, there was no looking back. In the years since, India's generating capacity has skyrocketed by a factor of ~3. Trade volumes in the open market have ballooned to a figure that is more than the total electricity requirement of Bangladesh, Nepal, and Sri Lanka put together.
(Hydro)Power knows no boundaries
Sitting atop the Himalayas, Nepal has massive hydro power potential. However, Nepal has exploited less than 1% of its hydro potential, even though brownouts and blackouts are a regular feature of life in the country. At present, Nepal imports power from India through a dozen small cross-border links. Any further progress on strengthening these ties is typically addressed with a deep sense of suspicion. Arguments include: “India buys electricity cheaply from Bhutan and then the same electricity is sold to Bangladesh at double the price. But India doesn't allow Nepal access to sell electricity to Bangladesh.” The fact is that Nepal has no electricity to sell.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "Furthermore, the liberalisation of the electricity market in India has led to an increase in electricity trading. Short-term trading activities and the associated transmission of electricity over long distances represent an additional challenge for the grid. Due to the nature of the changes, the grid needs to be partially reinvented and automated. Grid intelligence and communication are required for grid operation to meet the requirements of the transforming energy sector. 
Nevertheless, data measurements from various places and various levels in the grid are necessary to enable the utilities to monitor everything that happens on a real time basis (or to start with, on a daily, hourly or quarterly basis). The utilities can then take actions more accurately, effectively and swiftly, improving the energy services. The electricity grid is shown in the picture.
Present State of Grid Automation
Developed countries have already automated their complete power supply systems and their grids are remotely controlled. On the other hand, even with its edge in IT skills, India is way behind in automation. Leave aside automation of the existing grid network; even new grids (especially those built by states) are being constructed with old and outdated technology, without any automation. The Central Government's initiative of providing funds for automation and improvement under the ARPDC scheme has either gone unutilized or been invested haphazardly in IT, resulting in issues such as:
• Stand-alone systems covering limited geographical areas
• Inadequate interface and integration with other applications
• Absence of a standard architecture
• High cost of maintenance
• Basic operations are still manual without inbuilt controls
These issues have adversely affected the returns from IT investments. An incoherent technology strategy leads to situations where incompatible options are selected and large sums of money are wasted in attempts to integrate them. The bottom line is that business performance has not improved. Power sector expenses and revenue yield are depicted in Fig. 1:
Evidently, fundamental changes are required in the working of the power sector entities. Information Technology (IT) would become the key enabler in the initiatives under the reform process initiated by the Government of India. 
This will enable substantial improvement in the overall health of the utilities.\nIt is absolutely clear that all the grid substations can’t be automated in all power sector utilities in one go because of the enormous magnitude of the effort and investment required. The approach, therefore, should be to give priority to generation and extra-high voltage level grids, especially, those linking inter-state power systems.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Financiers are optimistic about the power sector’s long-term growth story. However, distribution reforms must be taken up on priority to instill investor confidence in the sector. Leading financiers share their perspective on the current state of the sector, the challenges, and post-pandemic opportunities and outlook. Excerpts…\nWhat has been the impact of Covid-19 on the power sector?\nParminder Chopra, Director (Finance), PFC Limited\nThe power sector is one of the critical drivers of the Indian economy. The Covid-19 pandemic has had a severe impact on the economy and the power sector, which is usually resilient, is also reeling under its ill effects. Economic and industrial activity was halted due to the nationwide lockdown imposed since March 2020. This affected the entire power sector value chain in terms of reduced demand and a dip in revenue collection, leading to financial distress and disruptions in the power supply chain. But as the economy is opening up and industrial activity is resuming, power demand is gradually picking up. It started witnessing a gradual increase from May 2020 onwards after the easing of lockdown restrictions. As we can see, within just four months of the lockdown being relaxed, electricity demand touched 174.33 GW, surpassing the demand levels in September last year. Thus, demand is getting restored to its normal levels. We expect a positive power demand outlook going forward. 
Further, the improvement in power demand would effectively translate into better revenue generation, thereby gradually easing the financial crunch being faced by discoms.\nMukul Modi, Executive Vice-President, Project Advisory and Structured Finance Group, SBICAP\nPre-Covid scenario in the power sector: The centrality of the power sector for the industrial and economic development of a country can hardly be overemphasised. Due to a strong focus and policy initiatives by the government, the power sector has added more than 210 GW of generation capacity in the past one decade, involving an estimated investment of over Rs 13 trillion. However, the growth in power demand has failed to keep pace with capacity addition in the past four to five years. As a result, the overall plant load factor (PLF) of thermal power projects has been hovering at 55-62 per cent, indicating underutilisation of assets.", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-1", "d_text": "State-owned NTPC is India’s largest power utility. It runs many plants below full capacity: its average is around 80%, but in some plants, it dips to as low as 60% capacity utilisation.\nIf the electricity sector was functioning efficiently, this would be impossible to explain: after all, when people are willing to buy power, why would any seller refuse? The trouble is that NTPC, and other power generators, cannot find reliable, financially solvent buyers of electricity.\nMost of the power is bought by state governments, through state electricity boards (SEBs). These boards are bankrupt. In 2007, all SEBs put together made losses of Rs 26,000 crore; by March last year, this jumped to a staggering Rs 93,000 crore. Just two SEBs, Uttar Pradesh and Jammu & Kashmir, account for nearly half this amount.\nTo cover power purchase costs, the SEBs borrow money. Today, the total short-term debt of all the SEBs has soared to a mind-boggling 2,00,000 crore. 
Many states would buy as little electricity as possible, to avoid going deeper into the red.
Recently, the tap of cheap loans was turned off, and many states agreed to partially reform their SEBs. They hiked the price of power by between 20% and 40% across the country. This can be the beginning of meaningful reform, not the end.
The government has a plan to restructure the debt of SEBs: the banks will not ask for half the debt for the next three years. For the other half, the SEBs will issue bonds to the banks, which will convert to equity over time.
This will require SEBs to become corporate entities, like the utilities in Delhi. These things will make SEBs temporarily solvent, but will not solve the underlying problem. This is political: members of all parties promise free or nearly-free electricity to people in return for votes.
Unlike most election promises, which are soon forgotten, they actually deliver on this one. Tamil Nadu once had a financially-healthy SEB, but five years of near-free electricity under the DMK completely destroyed its finances.
After coming to power last year, chief minister Jayalalithaa has hiked power charges, and her regime might see the SEB come back to health. But don’t hold your breath.", "score": 25.09671801490805, "rank": 48}, {"document_id": "doc-::chunk-4", "d_text": "It is pertinent to note, however, that with the ongoing intensified competition in the licensed and liberalized electricity sector, many states are beginning to get their act together, shedding the complacency of monopoly to face the competition.
India also faces the problem of inadequate coal and gas production, as well as inadequate exploration of new fossil fuel reserves.
It is evident that constant power supply is essential for the growth of any nation. India has set out a target to be a global superpower with the third-largest GDP by 2030 and also to have a sustained economic growth of 8-10% for at least another decade. 
Many countries have decided to adopt the Indian model as a milestone for developing their electricity sector. At any rate, these countries must first appreciate and put in proper perspective the challenges associated with the Indian electricity model in order to be able to build a formidable electricity sector. There is therefore a need to put in place administrative and regulatory measures that would harmonize the varied interests of investors, developers and consumers in India. Also, a robust and comprehensive package of policies should be prescribed for the development of hydro potentials and other renewable energy resources at a faster pace to ensure a sincere commitment towards sustainable development. Finally, whilst the Indian model appears closest to a ‘role model’, Nigeria should only adopt what is ‘uniquely ours’ from the model and domesticate the same for our own purpose.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-7", "d_text": "However, Indian companies have the opportunity to increase the utilization rate of their existing assets and improve sales per asset by catering to both the foreign and growing domestic market.
The electric power sector in India is witnessing several changes and new trends. The country’s installed capacity for electricity generation has increased from 174.64 GW in March 2009 to 356 GW in March 2019; this was the fifth largest in terms of installed capacity globally, and third in terms of power generation and power consumption. The government targets capacity addition of around 100 GW under the 13th Five-Year Plan (2017-2022). India’s power sector is forecast to attract investments worth $130 billion between 2019-2023.
Trends and Outlook
Lately, we’ve witnessed an unprecedented price drop in the solar power sector in the country. The costs of building large-scale solar installations in India fell by 27 percent in 2018. 
On the other hand, falling capacity utilization of power generation plants is compelling the industry to come up with innovative technical solutions. These include upgrades and modernization to bring more flexibility into operations to enable quick ramp up or ramp down.
Increased economic activity, especially in manufacturing, along with favorable government policy, drives steady growth in electricity demand. The government has introduced and refurbished a series of schemes and policies to strengthen the power sector. The Ujwal DISCOM Assurance Yojana (UDAY) scheme focuses on improving the financial health of the electricity distribution companies. The government is promoting awareness on energy efficiency and power saving. The industry has been deeply involved in the successful implementation of the Unnat Jyoti by Affordable LEDs for All (UJALA) scheme. This has resulted in energy savings of more than 2.66 crore kWh every day, a reduction of over 21,550 tonnes of CO2 per day, and is estimated to have cost savings of $1.39 million per day.
India’s electric power industry needs to explore the benefits of investing in distributed generation, smart grid, and other verticals. The government has set an ambitious target of having 175 GW of clean energy capacity by 2022, including 100 GW solar and 60 GW of wind energy.
Renewable industry presently constitutes around 21 percent of the total installed base and is on an accelerated growth path in both solar power and wind power generating plants. 
Presently, India's clean energy potential is estimated at 1,096 GW.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-5", "d_text": "A good example is ABB’s 800 kV Northeast-Agra UHVDC project, which brings clean hydropower from the Northeast to Agra, covering 1,728 km and providing electricity to 90 million people.
“It is all about setting up green energy corridors across the country so that clean energy from power surplus states can be brought to the power deficit states,” adds Sharma. He cites the example of another ABB project—the 1,830 km long UHVDC Raigarh to Pugalur link—which will integrate thermal and wind energy and provide power to 80 million people in high consumption areas located thousands of kilometres away. When the wind strength is low in the south, the power deficit will be made up by the northern states providing thermal power through a two-way link; when there is excess wind power, the south will give clean energy to the north.
Storage, too, should not pose a major challenge. Excess power can be stored either in batteries or, more simply, as water in “pump storage”. With battery costs still high, pump storage could be a viable alternative for India. In pump storage, the surplus power during the day is used to lift water to a certain height in reservoirs or dams, within or outside the country, storing it as potential energy. At night, the water is allowed to flow back down and its energy is converted into electricity, which is sent to the power deficit areas.
But what excites the ABB president most is India’s accelerated drive towards an electric vehicle (EV) revolution. The company, he points out, offers the entire bandwidth of technologies to enable sustainable transport—from grid integration to transport of renewable energy to the fast charging of cars in record time. 
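The pump-storage idea above stores surplus energy as gravitational potential energy (E = m·g·h), recovered at night minus round-trip losses. A back-of-envelope sketch in Python; the reservoir volume, head and the 75% round-trip efficiency are our illustrative assumptions, not figures from ABB or the article:

```python
# Back-of-envelope: recoverable energy from a pumped-storage reservoir.
# All sizing figures below are hypothetical.

G = 9.81  # gravitational acceleration, m/s^2

def storage_mwh(volume_m3, head_m, round_trip_efficiency=0.75):
    """Recoverable energy from lifting `volume_m3` of water through `head_m`.
    A 70-80% round-trip efficiency is typical for pumped storage (assumed)."""
    mass_kg = volume_m3 * 1000.0                     # 1 m^3 of water ~ 1,000 kg
    energy_j = mass_kg * G * head_m                  # potential energy stored
    return energy_j * round_trip_efficiency / 3.6e9  # joules -> MWh

# A hypothetical 5 million m^3 reservoir with a 300 m head:
print(f"{storage_mwh(5e6, 300):,.0f} MWh recoverable overnight")
```

At this hypothetical scale, a single reservoir holds roughly the overnight output of a mid-sized power plant, which is why the article treats storage as a tractable problem.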
Many states like Karnataka, Maharashtra, and Andhra Pradesh have rolled out their EV plans and this provides a huge opportunity for ABB, he says.
“We took a decision to take electric mobility as a business opportunity very seriously in 2011 and today we are the global market leader in fast charging,” says the man who started the e-mobility charging infrastructure initiative in a garage in the Netherlands with a startup.
That he means business was evident when he showcased a 350 kW charging station that can charge a car in eight minutes—enough to travel 200 km—at the recent Mobility Summit in New Delhi. Connecting the fast charging stations to the cloud can also bring in additional benefits like allowing people to make cashless payments, etc.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-1", "d_text": "It must, however, be noted that there are obvious overlaps between the functions of the CERC and the CEA, and this led to the suggested merger of both agencies. However, the Ministry of Power (MoP) opposed this merger. This remains a concern for stakeholders.
On the other hand, the SERC is vested with the powers to regulate intra-state generation, transmission and distribution in India (including those of the State Load Despatch Center [SLDC]), and to determine bulk and retail tariffs to be charged to electricity consumers. The SERC's advisory functions include promoting competition, efficiency and economy in the activities of the electricity industry in the state, and reorganizing as well as restructuring the electricity industry in the state. The SERC's activities are to be guided by the principles of tariff determination as specified by the CERC.
It is important to state that the consumers of electricity in India are statutorily protected from any adverse policies of the regulators. 
Thus, members of the public who are not satisfied with the operations of the CERC or the SERC can channel their grievances to the Appellate Tribunal within 45 days of the issuance of any such operational directives. The Tribunal has the authority to overrule or amend such directives. Appeals against the Tribunal can lie before the Supreme Court within 65 days of the delivery of such decision(s) by the Tribunal.
2.0 The Power Sector Reforms in India
2.1 Fragmented Privatization
The era of the government-regulated and vertically-integrated power sector has been relaxed in most countries, either completely (by way of 100% privatization of state enterprises and liberalization of the markets for infrastructure industry services) or partially (by way of fragmented privatization). In the case of India, the model adopted seems to tilt towards fragmented privatization, especially considering the fact that it is a socialist economy. The monopolistic nature of the industry was diluted with the introduction of the economic policy in 1991. It will be recalled that the 1991 reform model was predicated on private management and capital generation. However, in 1996, the government policy reform was extended beyond financing strategies to the introduction of competition and efficiency targets in the power sector.
Specifically at the state levels, the unbundling and privatization of the power sector in India started with the State of Orissa in the mid ‘90s.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "Goyal told industry leaders at the meeting organised on the occasion of the launch of ETEnergyworld.com that the government was going flat out to resolve all the pending issues in the sector, just as it had ensured that thermal power plants, which were starved of coal two years ago, were now flooded with fuel. 
“Each and every power plant now has coal,” Goyal asserted.
Industry leaders as well as the minister said the UDAY scheme to reform distribution companies was vital for both conventional and renewable power projects. GMR Energy Chairman GBS Raju suggested that utilities should make timely payments to generation companies.
Lanco Infratech Chairman Madhusudhan Rao said states, except Kerala, have stopped issuing tenders seeking electricity and suggested that the government should take ambitious steps to resolve various issues.
Solar and wind energy entrepreneurs including Suzlon Chairman Tulsi Tanti, ReNew Power Chairman Sumant Sinha and Hero Future Energies Chairman Rahul Munjal also highlighted the importance of efficient distribution and transmission as well as grid stability.
Other industry leaders from the sector included Tata Power Solar CEO Ashish Khanna, First Solar country head Sujoy Ghosh, Fortum India Managing Director Sanjay Aggarwal and the Adani Group’s CEO for renewable energy Jayant Parimal.
Sinha sounded a note of caution for entrepreneurs. He said a lot of renewable energy companies are keen on selling their plants after bagging them through aggressive bids. He said this may affect project implementation and decrease the appetite for new projects in the primary market.
Aggressive bidding by companies has reduced solar power tariffs to Rs 4.34 per unit, lower than the tariffs of many coal-fired plants and at a level that many industry experts say is unviable. Finnish utility Fortum, which made the record-low bid, is keen to invest in several business segments in the country, Sanjay Aggarwal, who heads the Indian operations of the company, told the minister.
The companies expressed concern that state electricity distribution utilities are not purchasing power due to financial constraints and industrial slowdown. SoftBank Energy Chairman Manoj Kohli said emphasis has to be laid on creation of demand. 
He said the country has not been able to capture ‘pent-up’ demand from consumers.\nHe highlighted that India needed a policy standardising all procedures including land acquisition, equipment sourcing and signing power purchase agreements, for setting up power plants across the country. This will help investors to mitigate risks.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-1", "d_text": "Coal has always been the mainstay of the Indian electricity sector and many policymakers and analysts believe that it must remain the primary source of electricity generation for at least the next three to four decades. This view is based on the belief that a centralised electricity…Read more", "score": 24.070225577604724, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "In an excellent example of crowd-sourcing, the website “Power cuts in India”, launched this summer, invited people to tweet or text information on power cuts in their neighbourhoods. Messages poured in by the hundreds. The site also provided links to newspaper reports on power disruptions and load shedding in different corners of the country. When the seemingly isolated pieces of information were put together, the big picture was clear. India faced an unprecedented and grim situation on the ‘power’ front — one that threatened to bring the country to a standstill.\nIt is painfully evident that the problem is neither a temporary one nor confined to one part of the country. The malady is pan-Indian and threatens to torment us for a long time.\nGiven a GDP elasticity factor (amount of electricity required to power 1 percent GDP growth) of 1, we need to ensure an increase of 10 percent in power generation every year. If we need to wipe out existing deficit and bring into the fold about 400 million people who do not have access to power, the target should be much higher. 
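The elasticity arithmetic in the preceding paragraph can be sketched as a quick calculation. The GDP elasticity of 1 and the 10 per cent growth target come from the text; the installed-base figure below is our assumption for illustration only:

```python
# Rough sketch of the capacity-addition arithmetic.
# Installed base is an assumed illustrative figure, not from the article.

installed_gw = 260          # assumed installed generation base, GW
gdp_growth = 0.10           # target GDP growth (from the article)
elasticity = 1.0            # generation growth needed per unit of GDP growth

# Growth alone: a 10% GDP target implies ~10% more generation each year.
annual_addition_gw = installed_gw * gdp_growth * elasticity
baseload_share = 0.70       # article: at least 70% must be base-load plants

print(f"~{annual_addition_gw:.0f} GW/year from growth alone, of which "
      f"~{annual_addition_gw * baseload_share:.0f} GW must be base-load")
```

Growth alone yields roughly 26 GW a year under these assumptions; wiping out the existing deficit and connecting the unserved population pushes the requirement higher still, consistent with the 30,000 MW figure the article arrives at.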
We need an additional 30,000 MW every year.
At least 70 percent of this increase must come from base-load plants that can produce predictable, schedulable electricity round-the-clock and in all seasons. All the rhetoric of adding large doses of clean wind and solar energy can help us feel righteous and green, but these will not serve the immediate purpose, with their characteristics of seasonality, intermittency and unpredictability.
60 percent of our power need is met by coal-based plants, and we continue to place our faith in coal to meet our future requirement. This, we are slowly realising, can be a flawed approach. Large coal plants take many years to come up, if at all. We have not been able to ramp up domestic coal production to meet the growing demand. Relying on imported coal means competing with China for the same sources in Indonesia, Australia and South Africa and paying a high price too. Also, putting all our eggs in the coal basket will lead to serious consequences on the environmental front. The world will hold us accountable for the carbon emissions. So we desperately need a second option to help reduce our dependence on coal.
Capacity addition in the form of nuclear plants is desirable, but looks improbable. If an almost-ready plant is stalled at the last mile, investments in other planned projects may be viewed as fraught with unacceptable risk.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-1", "d_text": "While India has up to 900GW of renewable energy potential, according to the Ministry of New and Renewable Energy, this nevertheless represents an impressive target given that only 37GW are currently installed.
The level of ambition is even stronger for solar energy, of which the government aims to deploy 100GW by the same date—from just 3.3GW in 2014. Such a rapid capacity increase would be unprecedented globally. 
Combined, Germany and China, each of which has sustained large levels of investment for years, have barely been able to deploy two-thirds of that (66.3GW). In addition, notes Sasha Riser-Kositsky, associate at the Eurasia Group, such a rapid deployment might encounter infrastructure constraints. “India’s grid probably cannot handle the 40GW of roof-top solar planned,” he notes.
Even if, as the IEA expects, low-carbon technology accounts for more than half of total new power-generation-capacity additions over the next 25 years, the sheer scale of the growth in coal demand will be such that India’s emissions will nearly triple over the same period.
This will still leave per-capita emissions levels below the world average, suggesting that despite higher emissions, the contribution of India to the climate change problem will remain limited compared with that of other countries, including China and the US.
But it also means that India can do much more to improve the energy efficiency of its coal power fleet, which at the moment is dominated by subcritical coal power plants (85%). Given that new coal plants in China or the OECD are nearly 10% more efficient, technology transfer will be an integral part of keeping emissions in check. Reforming the electricity sector will also be required. High electricity losses, theft and poor financial performance, in particular, are making it difficult for distribution companies to purchase all of the electricity supply, notably from natural gas but also from coal-fired power plants, “whose load factors have decreased this year”, points out Mr Riser-Kositsky.
This substantial amount (roughly equivalent to what the US will have to spend on its power-sector infrastructure over the same period) is urgently needed: despite a doubling of energy consumption since 2000, 240m Indians still lack access to electricity today.

BHEL gets a major chunk of the orders, and the rest is shared by others, but the volume is not yet that meaningful," Agrawala said.

GE Asia's head said that India needs to fix problems in the power sector to attract foreign debt and capital, because local funds are not enough to add the capacity needed by the country.

"The balance sheets of most companies in the sector would not be able to support the substantial capacity addition that is needed in the sector. We are short of equity as well as debt," he said.

India needs to provide a legal, regulatory, and risk-return framework that attracts foreign investment, he said. Despite 100% foreign direct investment being allowed in India, most overseas energy companies have shied away from the country due to the challenges they faced here.

"It's absolutely critical to get the electricity distribution business fixed by addressing the issue of huge subsidies and the existing losses of discoms squarely. It has reached a point that the losses are untenable," he said.

Agrawala said that India needs to make a long-term energy plan that is sustainable and balanced.

"The faster we realise that we as a country are short of primary energy sources and therefore we need all the energy sources, the better it will be."

To help ease the resultant financial stress, the MoP has announced an economic stimulus, providing a liquidity support package of Rs 900 billion for discoms, and has reduced late payment penalties for payments to generation companies and transmission licensees.
Among other measures, the MNRE has provided a commercial operations date extension of up to five months (that is, up to August 24, 2020) to all renewable energy projects under implementation as on March 25, 2020. The Cabinet Committee on Economic Affairs has also approved a one-time relaxation in working capital limits; this would enable PFC and REC to extend loans to discoms.

What is the investor outlook for the sector?

The power sector has significant potential to grow going forward. On a macro level, India’s current per capita energy consumption (1,181 kWh) is about one-third the global average. The country’s population and GDP are expected to grow in the future and energy demand is expected to rise consequently. Further, electrification of villages under the Deendayal Upadhyaya Gram Jyoti Yojana (DDUGJY) and last-mile electricity connectivity under the Pradhan Mantri Sahaj Bijli Har Ghar Yojana (Saubhagya) will be two prominent drivers for electricity demand. Under the DDUGJY, 100 per cent village electrification was achieved in 2018, and around 99 per cent of households have been electrified under Saubhagya. The completion of Saubhagya is expected to create an additional power demand of about 28,000 MW.

To meet the growing demand, significant capacity addition is expected. Under the National Infrastructure Pipeline for the period 2020-25, around Rs 25 trillion worth of capex requirement has been envisaged for the power sector. The government’s thrust to increase the installed capacity of renewables to 175 GW by 2022 is encouraging for us as it would lead to substantial capacity addition in the power sector space. In addition, infrastructural development in the e-mobility space will provide ample business opportunities. The government’s Atmanirbhar Bharat Abhiyan will also open up new investment opportunities in the sector.
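The per capita figure quoted above implies substantial headroom for demand growth. A back-of-envelope sketch, assuming the quoted consumption is 1,181 kWh per person per year; the population figure and the "one-third of global average" multiple are assumptions for this example:

```python
# Back-of-envelope sketch of the demand headroom implied by per-capita
# convergence. The population figure is an assumed round number; the
# 1,181 kWh figure and the one-third ratio are from the interview.
PER_CAPITA_KWH = 1181                 # India, per the interview
GLOBAL_AVG_KWH = 3 * PER_CAPITA_KWH   # "about one-third the global average"
POPULATION = 1.35e9                   # assumed, roughly the 2020 population

def annual_demand_twh(per_capita_kwh, population):
    """Total annual demand in TWh for a given per-capita usage and population."""
    return per_capita_kwh * population / 1e9

today  = annual_demand_twh(PER_CAPITA_KWH, POPULATION)
parity = annual_demand_twh(GLOBAL_AVG_KWH, POPULATION)
print(f"Demand today:            ~{today:,.0f} TWh/yr")
print(f"At global-average usage: ~{parity:,.0f} TWh/yr ({parity / today:.0f}x)")
```

Even before population growth, simply reaching today's global-average per-capita usage would triple total demand under these assumptions.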
Thus, with ample growth potential, the power sector is expected to attract huge investments.

A decade or so ago, there were several issues in the transmission grid such as congestion and splitting, but they have largely been resolved and our visionary planners/regulators have built a great grid. This grid has to be sustained and developed further, and India cannot afford underinvestment in transmission. India must have a strong grid, as decarbonisation is our agenda. Accordingly, planners, regulators and policymakers need to rethink and facilitate developments in the grid in the times to come.

Currently, the weakest part of power system operations is attracting and retaining human resources. There is a need for a change of mindset to develop the intrinsic capabilities of existing resources. The system operations infrastructure is excellent, but we need to make the best use of it and bring in innovative systems and processes.

In terms of technology, the old style of transmission has to be augmented. There is a need to adopt advanced technologies such as reactive compensation, flexible AC transmission systems, and light high voltage direct current (HVDC) systems. Also, there is a need to expand the transmission grid to neighbouring geographies. The One Sun One World One Grid (OSOWOG) initiative will further drive the expansion of the transmission grid. At the state level, too, the grid needs to be expanded and made more robust.

Energy storage needs to be deployed viably in the downstream power sector, that is, at the consumer, building and distribution system level, where building a transmission system is difficult. It is ideal to deploy non-wire alternatives such as battery storage in population-dense areas such as Mumbai and Kolkata, as it is difficult to build extensive transmission and distribution systems in these cities.
Additionally, it is possible to incorporate battery storage systems in industries. At the macro-grid level, it must be noted that battery storage cannot compare with transmission systems, given the gigawatt scale of power transmitted in the grid at 400 kV and 765 kV. Hence, it is important to prioritise investment in pumped (hydro) storage as far as energy storage is concerned, until further technological developments occur in battery storage and its costs come down. Energy storage may be deployed more widely if grid-scale large storage becomes viable. Therefore, there is a need to drive energy storage from two extreme sides – chemical-based energy storage at the bottom level (consumer level) and pumped hydro storage at the top (generation level).

OSOWOG is a great initiative for the world from a sustainability point of view.

The bottlenecks in the country’s transmission and distribution networks have been blamed.

“Lack of investment in transmission and distribution infrastructure has resulted in congestion of the network in India, impeding the evacuation of power and the development of a competitive market,” the World Bank says in its report.

That is one of the reasons why most of the power plants in the country are operating a little above 50% of their capacity, a measure known as Plant Load Factor.

Share of world’s total: from 7% at present to 14% by 2040 (Source: BP Energy Outlook, 2019)

Some experts fear that if electricity cannot be fully evacuated from existing power plants in distant places, then states could start having their own plants.

Two major coal-fired power plants have recently been approved in Bihar and Uttar Pradesh.

“Political competition between parties can create that situation, particularly if this election results in a grand coalition government,” says Mr Krishnaswamy.

“But that competition could also result in installation of solar plants.
But again, will the renewables be able to supply the amount of energy they are now promising?”

Some say that now looks increasingly possible, because renewable energy in India is becoming much cheaper than electricity produced from coal burning.

“The move towards renewable electricity will keep on happening because we see that as being the most cost-effective way to meet people’s needs,” says Mr Mathur. “Our action is not based on environmental aspirations, it is based on hard economic realities.”

Renewables can generate only when there is sun and wind – but India’s peak demand time is before midnight. Battery technologies to store renewable energy cannot yet be a source of stable electricity supply.

“For stable power generation, coal is our necessity and our plants will need more coal,” says Shailendra Shukla, head of the power generation company in Chhattisgarh. “The day an alternative is available, we will stop using coal.”

But even with so much coal burning, the Indian government says it is already on its way to meeting its commitments under the Paris deal.

Low per capita emissions

“India has been one of the top performers in terms of living up to climate commitments,” says Mr Krishnaswamy. “But is that adequate?

There is no question that India desperately needs to generate more power. The energy indicators say it all. It has the lowest per capita consumption of electricity in the world. This, when access to energy is correlated with development, indeed with economic growth.

Let us not dismiss the need for energy as a simple issue of intra-national equity, when the rich use too much while the poor do not have enough. This may be true for other natural resources, but energy scarcity is more or less all around. Data shows India’s energy intensity has been falling – we do more with each unit of energy produced.
In industry it is down by 2.2 per cent between 2004-05 and 2008-09, and in agriculture and the service sector by as much as 4.7 per cent annually.

The reason is not hard to see. India has one of the highest prices of energy, and it does pinch industry and the domestic consumer. So saving is part of the energy game. This is not to say we must not do more to cut energy use and be more efficient. The point is there are limits to efficiency.

But why am I stating the obvious? The reason is that even though India knows it needs more power, it does not realise it will not get it through conventional ways. It will have to find a new approach to energy security before the high-sounding targets of the power ministry are derailed and energy security is ultimately compromised.

Just consider what is happening in the country. There are widespread protests against building major power projects, from thermal to hydel, and now nuclear. At the site of the coal power plant in Sompeta in Andhra Pradesh, the police had to open fire on some 10,000 protesters, killing two. In the alphonso-growing Konkan region, farmers are up in arms against a 1,200 MW thermal plant, which, they say, will damage their crop. In Chhattisgarh, people are fighting against scores of such projects, which will take away their land and water. The list of such protests is long, even if one does not consider the fact that most of the coal needed to run them is under the forests, and the mines are contested and unavailable.

Hydel projects are no different.
Environmentalists are protesting against the massive number of projects planned on the Ganga that will virtually see it run dry over long stretches.

However, with developers being reluctant to absorb the high cost of retrofitting their projects to meet the new standards (around Rs 1 crore, or $1.56 million, per megawatt), the government is likely to push the deadline for compliance to December 2019. Lack of affordable technology is one of the principal reasons for the high cost and the associated reluctance to comply.

The NEP makes broad recommendations on how India should work towards developing and acquiring the technology needed to advance the energy sector. However, the policy does not recommend consistent and strong policy and budgetary support for technology development, as in China. Despite China’s success, and India’s growing reliance on Chinese imports to drive its clean energy revolution, not enough urgency has been expressed in the NEP to make India’s energy future in India.

Grid versus off-grid

The NEP prescribes grid-based supply to all households as India’s primary endeavour, with renewable energy implemented to address the access issue only in cases where grid power is unavailable. The NITI Aayog in this instance has used the term renewable energy interchangeably with decentralised renewable energy. A rationale for this position has not been indicated, even as the government continues to promote decentralised electrification programmes.

The position of promoting grid-based electrification continues to mystify, as a significant percentage of grid-connected households do not meet an adequate level of electricity access. This was validated by the Council on Energy, Environment and Water’s ACCESS study focusing on the state of energy access in six of India’s most energy-deprived states.
The study found that though 96% of the villages were considered electrified, with 69% of all surveyed households having an electricity connection, only 37% had any meaningful level of electricity access. Rather than promoting a particular means of electrification, the NEP could encourage context-specific electrification approaches, by considering economic viability, consumer demand and aspiration, affordability, as well as reliable provision of electricity.

As India’s importance and role in the global energy markets continue to grow, it needs to be strategic in its energy planning. To build on the successes of the recent past, such as record low tariffs, increased investment flows into the energy sector, and the successful introduction of auction-based bidding for wind projects, it cannot afford to lose momentum with policy uncertainty and unclear energy pathways.

With its vision document, Vision 2024, India’s Ministry of Power is charting a way forward for the country’s power sector. And a central part of this vision involves the private sector. The Need for Private Sector Participation in India’s Electricity Distribution Sector explains how shifting focus to increasing private sector participation and investments in India’s energy distribution business could lead to improved financial viability and sustainability.
You’ll learn:

- The seven goals outlined in the Vision 2024 document.
- The issues inhibiting competition in India’s power sector.
- The international precedents India could customize for its needs.

Download the paper to explore the possible roadmap India could take to involving the private sector in national power distribution.

Since the renewable resources are not evenly distributed in the country, the RPO encourages setting up large generation capacities at resource-rich locations, and through a process of certification, the credits can be traded on CERC-approved power exchanges to obligated entities or voluntary buyers. The tariff structure can then be decided to compete in the power sector.

• Considering that a wide section of our society remains un-electrified, could you elaborate on Tata Power’s efforts on decentralized micro-grids to provide access to these communities?

As mentioned earlier, we are in the process of evaluating different business models for distributed power generation and supply to rural areas. Tata Power has joined hands with national institutes like the Indian Institute of Technology and the University Department of Chemical Technology in developing pilot and demonstration projects in renewable-resource power generation.

Distributed generation projects based on biomass as fuel are also being explored. The capex levels associated with gasifier-based projects are also being examined, to make smaller-size DG plants feasible at the applicable feed-in tariff. The company is also testing a 2 kW wind turbine that can be mounted on rooftops and provide power to homes. The technology is at the pilot stage.

• What are your views on Phase 2 of the National Solar Mission?

The National Solar Mission (NSM) is definitely a major step forward.
It has not just brought solar power onto the national scene but has also helped put India on the global radar insofar as solar opportunities are concerned. However, the key objective behind the Solar Mission will be achieved only if the framework and necessary industrial base for research, development, manufacturing and harnessing of solar power gets developed in the country, to support appropriate growth and cost optimization for the benefit of the masses in a country otherwise less favoured in terms of energy resources. We must always remember we have only limited options in terms of commercial sources of energy, and solar is one of our best opportunities. We need to work expeditiously on continuation of the SIPS initiative and similar programmes for complete assimilation of technological know-how and for nurturing it to our advantage.

This interview was conducted by Anindita Chakraborty, a member of the Sustainability Outlook team.

When I served as a Pennsylvania Public Utility Commissioner from 1993 to 1998, my two most important duties were keeping drinking water safe and the lights on. A nightmare scenario was an uncontrolled, cascading electrical grid failure and blackout.

That nightmare scenario unfolded in India on Monday, when a then world-record blackout knocked out power to 300 million Indians. The very next day, a worse blackout hit, knocking out service to more than 600 million people, or about half the population of India. Both were uncontrolled, cascading outages.

Here are some key facts and further thoughts about India's electricity situation.

India's daily peak demand is regularly 12% higher than peak supply, but the shortage is managed through rolling or controlled blackouts that cut electricity supply deliberately to specific locations to balance supply and demand.

The basic problem in India is that daily demand outstrips supply.
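The 12% shortfall quoted above translates into a deeply negative reserve margin by US planning standards. A small sketch; the absolute MW figure and the 15% US-style margin are assumed for illustration, and only the 12% gap is from the text:

```python
# Sketch: the reserve-margin arithmetic behind the supply gap described above.
# The 130,000 MW peak and the 15% US-style margin are assumed illustrative
# numbers; the 12% peak shortfall is from the text.
def reserve_margin(available_mw, peak_demand_mw):
    """(supply - demand) / demand; negative means forced load shedding."""
    return (available_mw - peak_demand_mw) / peak_demand_mw

# India at the time: demand ~12% above supply, i.e. supply = demand / 1.12
demand = 130_000  # MW, assumed illustrative peak
supply = demand / 1.12
print(f"India-style margin: {reserve_margin(supply, demand):+.1%}")

# A US-style planning margin is typically kept well positive (assumed 15%)
print(f"US-style margin:    {reserve_margin(demand * 1.15, demand):+.1%}")
```

A 12% demand overhang is roughly a -11% margin, against planning practice that keeps the margin comfortably positive.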
There is simply not enough generation, which pushes the grid in India just about every day to what would be considered a crisis in the US.

In fact, the USA has a mandatory generation adequacy or reliability standard that requires enough electric generation to be available to make the probability of even a controlled blackout no greater than one day in ten years. Indeed, apart from Texas, America has excess generating capacity – an amount of capacity that exceeds even the protective mandatory reserve requirements – in just about every region.

Despite the massive generation needs in India, India has built only about 50,000 megawatts of new generation in the last 5 years, or about 5% of the total US generating capacity. That is just not enough to begin to meet India's basic electrical needs. Not even close.

Over the same 5 years, China, however, has built approximately 300,000 megawatts of new generation.

As for the US, even though we have an excess generating capacity position, America typically builds 10,000 to 20,000 megawatts of new generation every year. According to EIA data, more than 8,000 megawatts of new generation began operating here through June.

The shock of a cascading blackout to a society – especially a democratic one like India – is enormous. Count on India launching a massive prioritization of resources toward building much more generation and taking other steps to close its gaping electric demand and supply gap.

There's an old saying – you can import coal, fertiliser and gas over long distances. But not electricity. That is why, if one has to look at energy security, power generation becomes critically important. Bijlee (electricity) has become synonymous with modern living. Electricity now touches every life and is critical for our growth and sustainability.
Thus, the new government should look at the following:

First and foremost, tackle the primary fuel crisis as the top priority. The urgent need to step up domestic production of coal and gas needs no repetition. Enhancing production will take time, but a rationalisation of coal linkages, which can be done immediately, can help ease primary fuel availability in the short term. There is no sense in hauling coal from Maharashtra to Haryana and then sourcing coal from Odisha for power plants in Maharashtra. The present allocations and linkages to power stations need to be redone on scientific lines. An integrated approach is needed, keeping in view coal production centres, power station locations, railway routes, imported coal and ports, to put in place an optimum coal allocation and transport policy. Together with this, the use of washed coal has to be emphasised to reduce ash, and in turn freight loads, by nearly 30 to 40 per cent.

The present location of generation capacities and inadequate transmission linkages to consumption centres have created a piquant situation where many power stations are stranded or backed down due to lack of demand. At the same time, large areas, especially the South Indian states, have huge demand-supply gaps.

Even within states, transmission corridor constraints are causing outages in some areas while power stations idle away. Transmission planning needs to be bottom-up, with state electricity utilities playing a much greater role in the transmission corridor planning process. While the North-South grid linkage needs to be made fully operational quickly, even small interventions in terms of completing short-distance interstate lines would help ease the situation in large parts of the country.

In distribution, there is an urgent need to rationalise tariffs. The present, completely perverse tariff structure is not sustainable and constricts the growth of the manufacturing and service sectors.
This is because industrial and commercial consumers are charged way above the cost of supply in order to cross-subsidise agriculture.

Highly subsidised agricultural power is not only killing the manufacturing and service industries but is also causing long-term damage to both the groundwater table and soil quality. Moreover, there is no incentive to economise water usage through modern techniques like drip irrigation.

Despite having made deep inroads in the sphere of Information Technology, and also good progress in the field of road connectivity, our country is way behind the developed countries. The reason is not very complicated to understand. Primarily, it is the poor power sector. We run short of electricity, despite the fact that hydro-electric power generation in our country was given a lot of thrust by the Congress government under Pt. Nehru. Soon after Independence, the construction of the Bhakra dam started in 1948. During its inauguration, Nehru said, ‘these structures are the temples of modern India.’ Nehru was a visionary. He knew that India minus electricity would never be able to even think of competing with the developed nations.

It was in Nehru’s tenure that the construction of the Tarapore Atomic Power Plant was taken up in 1962. This was again because of his vision, and of course the foresight of Homi J. Bhabha, the great nuclear physicist who launched the development of nuclear energy in India.

In the post-Independence era, as the demand for electricity increased in leaps and bounds, India switched over to thermal energy.
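The tariff cross-subsidy described earlier, where industrial and commercial consumers are charged above the cost of supply to fund agriculture, can be sketched with wholly illustrative tariffs and sales shares; none of these numbers are from the articles above:

```python
# Sketch of cross-subsidy arithmetic in a retail tariff. All tariffs and
# sales shares below are assumed illustrative numbers, not real data.
COST_OF_SUPPLY = 6.0  # Rs/kWh, assumed average cost

# consumer category -> (tariff Rs/kWh, share of units sold) -- all assumed
consumers = {
    "industry":    (8.0, 0.35),
    "commercial":  (9.0, 0.10),
    "domestic":    (5.5, 0.30),
    "agriculture": (1.5, 0.25),
}

avg_realisation = sum(tariff * share for tariff, share in consumers.values())
for name, (tariff, share) in consumers.items():
    print(f"{name:11s}: {tariff - COST_OF_SUPPLY:+.1f} Rs/kWh vs cost")
print(f"Average realisation: {avg_realisation:.3f} Rs/kWh "
      f"(cost {COST_OF_SUPPLY:.2f} Rs/kWh)")
```

Even with industry and commercial users paying well above cost, a large enough subsidised segment can pull the average realisation below the cost of supply, leaving the distribution company with a structural loss on every unit sold.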
During the period 2010-11, the distribution of energy generation in the country included:

Liquid fuel: 0.41%
Import from Bhutan: 0.78%

(After India Energy Book 2012.)

Of course, to these figures the energy generated from other sources like solar, wind and geothermal is not added, as at present their input is minuscule. But yes, locally every day we hear of a new solar power plant or a wind energy farm coming up.

All in all, the scenario is that we need much more energy than we get. But the problem is how to generate that energy. There is strong opposition to hydro-electric power, mainly because it requires a river to be dammed, and the people living upstream get damned by the dam. The opposition, though, does not come only from the protests of dam victims. The opponents of the dams believe that the rivers of the north Indian plains are shrinking because of the dams in Uttarakhand. Then there is a very strong anti-nuclear lobby.

Wind power has been the fastest growing segment in the entire power sector, as well as within the renewable energies in the country. Today, we have nearly 15,000 MW of installed capacity, in which, apart from a few odd megawatts, the entire capacity has come up through private sector investments. A few years back, India ranked III and IV on the world map of wind power installations, till China took over rather rapidly. While the government-owned Centre for Wind Energy Technology (C-WET) has continued to claim that India has only 45,000 MW of wind power potential, the reality is that several studies carried out by independent agencies and experts indicate that the potential is of the order of several thousand gigawatts, not megawatts (a scale difference).
Since the establishment of feed-in tariffs, first by the Maharashtra Electricity Regulatory Commission (MERC) in 2004 and later by most of the other State Electricity Regulatory Commissions (SERCs), countrywide wind power development has been taking place in a more or less sustainable manner.

In this backdrop, if we look at what's happening in the mainstream power sector, we find chronic shortages that have not been addressed: power cuts, shutdowns, and lack of electricity in both rural and urban areas, with the greatest impact on industrial production. At the time of writing this article, the country faces a major crisis in coal supply, and it cannot be conclusively said whether this is a short-term crisis or a manifestation of a long-term shortage of coal resources. Because of these shortages, a huge investment has already taken place in back-up systems across the country in the domestic, agricultural, commercial and industrial segments. Establishments resort to costly diesel generation, while households have invested in inverters. The very important and significant point we make here is that the rationale of low-cost power does not hold when there is no power to supply. Enabling investments and the creation of generation capacity should have priority over issues such as competitive bidding. There is no doubt that electricity demand is increasing and the country has limited options to meet this challenge in a sustainable manner. This very important point, we think, is being missed by the policymaker. With this overview, we return to the issue of competitive bidding in wind energy.
We have noted that wind energy is one of the segments of the energy sector that continues to attract investments in the current policy and regulatory regime, with a capacity of more than 1,000 MW getting established every year.

International Solar Alliance (ISA) is a partner organisation to India’s G20 Presidency for 2023 and, as part of its deliberation on energy transition, published a report on July 20 this year which says that around 59% of the unelectrified population can be best suited for electrification through solar-powered mini-grids.

Debajit Palit, a professor of energy at NTPC School of Business, told Mongabay-India that in numerous African countries, extending transmission lines to remote regions incurs significant expenses. Opting for local generation through mini-grids close to the demand proves far more cost-effective than extending transmission lines.

He said that the mini-grid concept could offer a superior solution for India too, where decentralisation is gaining traction, driven by the need for decarbonisation and further fuelled by the adoption of digital technologies. This shift makes mini-grids, especially those interconnected with grids, a viable choice, not only for rural areas but, potentially, even more advantageous for urban set-ups.

India currently leads the list of countries with planned mini-grids – those that developers, governments and other organisations have said they plan to build over the next several years. India plans to install 18,900 mini-grids, followed by Nigeria (2,700), Tanzania (1,500), Senegal (1,200) and Ethiopia (600).

However, to meet SDG 7, the world will have to power 490 million people through the construction of more than 217,000 mini-grids. At the current pace, only 44,800 new mini-grids serving 80 million people will be built by 2030, says the World Bank report.

Private sector mini-grids are facing challenges.
A paper published in Energy Research and Social Science underlines the conflict between customer affordability and the business viability of private mini-grids. Lead author and energy expert Venkata Bandi identifies grid expansion, policy gaps for off-grid, consumer reluctance to pay, and entrepreneurial apathy as the main challenges. “Capital costs are often sunk costs,” he says while responding to Mongabay-India’s queries.

Government mini-grids too are performing poorly, with a majority of them not functioning, as reported in a recent media article based on inputs it received from Smart Power India.

Additionally, India’s electricity sector has some unique regulatory challenges with regard to mini-grids, according to Debajit Palit.

Fourth is the administrative challenge, like long delays in granting different clearances such as ministry of environment and forest clearance, army clearance, land acquisition, techno-economic clearances, etc.

Keeping in view the Indus Waters Treaty with neighbouring Pakistan, small hydropower is the politically most acceptable solution for meeting energy security in the region, because run-of-river schemes are generally employed, preferably due to the suitability of sites in hilly regions like Ladakh. In this scheme, there is little or no poundage in the upstream reservoir, because technically less discharge is required to produce sufficient power thanks to the easy availability of natural falls.

The output power evacuation is subject to the instantaneous natural flow of the stream, unlike large hydro, which requires excessive water storage and can violate the treaty, as a result of which a project can be treated as disputed, like Baglihar on the Chenab river.
Despite the truism that “the power sector is the backbone of a state’s economy”, the state of Jammu & Kashmir has never taken steps towards acting on this basic concept of economic reform.

That is why it is essential to strengthen and prioritise the power sector in our state. We have huge untapped hydropower potential in the region, including large hydro, but when it comes to harnessing it to consolidate the power sector in the region, efforts seem bleak.

It is not being harnessed at the desired pace due to the lack of well-framed policies in the state government, and perhaps the potential is underplayed. But states like Gujarat and Himachal Pradesh are first in solar power and hydropower generation respectively. Unlike our state of Jammu & Kashmir, Gujarat and Himachal Pradesh are doing exceptionally well in these sectors, not only because they have the potential but mainly because policymaking is a thrust area in these states.

Friendly and acceptable policies in the power sector, encompassing society, investors, developers, stakeholders and the environment, become key to harnessing this golden harvest today, especially after the new Land Acquisition Act, 2013, in effect from January 2014. The results of these two states can surely be replicated in the rest of the country, especially in the reluctant, hydro-rich state of Jammu and Kashmir. Since private sector participation has become more pronounced in recent years in the Indian power sector, the design of investment-friendly and public-inclined policies becomes a necessary condition.

The resilient people residing in the structurally disadvantaged areas of Leh and Kargil have been living with almost no access, or little access, to electrical energy for ages.

Prime Minister Narendra Modi's decision to open commercial coal mining to private players is a key step towards bringing order to the country's chaotic power industry and ending the chronic blackouts that impede its economic rise.
In an executive order posted on the Coal Ministry's website, the government said that any firm incorporated in India may be allowed to mine coal for its own consumption or for sale, ending a 42-year-old ban.

Progress of independent power projects has recently accelerated. The 700-megawatt Enron project at Dabhol is under construction, and two power projects have actually started generating power (the 235-megawatt GVK project at Jegurupadu in Andhra Pradesh and the 208-megawatt Spectrum project at Kakinada in Andhra Pradesh).

India is currently facing an energy crisis, with major dependency on coal and crude oil imports to meet the country's sharply growing energy needs. More than 40% of households lack access to electricity. There is a need for alternative energy that will not only offset the demand for conventional fossil fuels but also pave the way to cleaner solutions with lower noxious gas emissions. The alternative solution must be low-maintenance and deliver sustained performance in adverse climatic conditions. There is a strong need to push for wider-scale implementation of public-private partnership models. The private sector has been playing a key role in generating power, and in making the desired technological interventions in the energy space.

Millions of people in India have no electricity.

There is a whole gamut of challenging areas in the power sector that India needs to address on priority in order to meet its growth targets. The world over, economic growth is driven by energy, either in the form of finite resources such as coal, oil and gas, or in renewable forms such as hydropower, wind, solar and biomass, or in its converted form, electricity. This energy generation and consumption powers a nation's industries, vehicles, homes and offices.

It also has a significant impact on the quality of the country's air, water, land and forest resources.
For growth to be sustainable, it must be both resource-efficient and environmentally safe.

In India, the demand for electricity has always exceeded the supply.

Chief among the risks for mini-grid operators and investors in India is the arrival of the central utility grid in a mini-grid-serviced area, often via low-voltage distribution lines, at high cost and incurring significant network losses. Given the extremely low discom tariffs, these grid extensions, even though they often provide unpredictable and insufficient power, are highly disruptive to DESCO project economics, despite clear customer preferences for more reliable services.

Investors see other risks in the mini-grid model, sometimes erroneously. REEEP's investigations revealed significant concerns connected to customer behaviour, particularly theft and non-payment, but these issues have largely been overcome by operators.

Electrical power is a political issue in India, as it is in most countries. Power is subsidised, and likely will be for a great many years to come. There have been encouraging signs in 2016 and 2017 of government interest in DRE mini-grids, both at state level, with the ground-breaking release of the first state mini-grid policy in Uttar Pradesh, and at the central government level, with movement towards a national mini-grid policy. Indeed, just prior to publication of this report the Modi Government announced a plan to electrify every household in India by 2019.

The $2.5b plan, known as Saubhagya, is both ambitious and risky, dependent as it is upon a blend of public and private financing, but relying largely on public or quasi-public institutions to deploy and maintain. It will also likely test the limits of the government in overcoming structural economic and physical barriers, as this report will show.
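The disruption that extremely low discom tariffs cause to DESCO project economics can be made concrete with a toy household-bill comparison. Every figure below is an illustrative assumption, not a number taken from the report:

```python
# Toy monthly household bill: cost-reflective mini-grid tariff vs heavily
# subsidised grid tariff. Every figure here is an illustrative assumption.

monthly_kwh = 30          # assumed household consumption
minigrid_tariff = 25.0    # Rs/kWh, assumed DESCO cost-reflective tariff
discom_tariff = 3.0       # Rs/kWh, assumed subsidised discom tariff

minigrid_bill = monthly_kwh * minigrid_tariff
grid_bill = monthly_kwh * discom_tariff

print(f"Mini-grid bill: Rs {minigrid_bill:.0f}")  # Rs 750
print(f"Grid bill:      Rs {grid_bill:.0f}")      # Rs 90
# A gap of this size pressures customers to switch once the grid arrives,
# even when the grid supply is less reliable.
```

The absolute tariff levels are invented, but the order-of-magnitude gap is the point: it is the gap, not the reliability difference, that dominates the customer's decision.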
But current incentive schemes are sub-optimal, the procedures for securing them are unclear and lack transparency, and it is unknown whether they will be available at the scale required for mini-grids to make a significant impact on the "Power for All" challenge.

Ultimately, the sector will require long-term cooperation between the public and private sectors in order to render DESCO-model mini-grid deployments viable at scale and attract sufficient amounts of domestic and international investment. Such cooperation is sensible and to be expected, given the characteristics of the rural electrification space.

An important consideration for securing funding is that international development cooperation agencies, development financing institutions (DFIs) and multilateral development banks (MDBs) have expressed interest in supporting climate-smart energy access in India. DRE mini-grids offer precisely this kind of power.

The role of renewable energy (RE) needs to shift from just being important to being more relevant. For this to happen, RE generation must integrate with the scheduling requirements of distribution utilities. The infirm nature of RE makes it a grid operator's nightmare. While consumers demand continuous power, RE can make this an impossibility. Thus, most utilities look at renewables as vendor-driven, expensive and infirm; it is power that has to be absorbed for obvious unholy reasons. With weather prediction models becoming more and more accurate, policy intervention must shift the premium from just adding solar and wind capacity to bringing about predictability and scheduling of RE, and simultaneously facilitating its trading over shorter time slots. This would not only bring predictability, leading to better acceptance by distribution companies, but also create a new and niche market for RE.

It is time that the right to energy also becomes a reality.
The Rajiv Gandhi Gramin Vidyutikaran Yojana (RGGVY) has to be recast to meet this need. This programme has the laudable target of providing electricity to the homes of the rural poor. But with electricity comes a bill which, even though very small, needs to be paid within a time limit. A family which has not seen electricity for 65 years since Independence suddenly gets connected to the grid, followed by a bill. But the new user has no economic activity to generate the required cash. Disconnection follows and, over time, the distribution system is stolen or wasted. The RGGVY must provide for payment of a minimum bill for first-time BPL (below the poverty line) electricity users.

Power sector reforms must now be driven by simple, straightforward thinking based on ground realities, with the customer at the centre and full involvement of states, rather than by complicated, centrally driven initiatives.

(The writer is managing director of MSEDCL, Maharashtra's state electricity distribution utility, and has been on almost every committee dealing with power reforms)

Our investments in AMI (advanced metering infrastructure) started way back in 2005 and we have already deployed state-of-the-art SCADA (supervisory control and data acquisition), GIS (geographic information system), OMS (outage management system) and ERP (enterprise resource planning) systems, which are well integrated with all of our current operations.

Reliance Infrastructure understood the need for a smart grid at a time when these concepts were unheard of in the country. This has helped us stay ahead of the curve in the smart grid sector. Starting in 2005, our company has deployed close to a quarter million units of automated meter reading modems (AMR modems) and MDAS systems for its premium, high-value and LT consumers, and for other utilities under the RAPDRP programme for customers like MSEDCL, JVVNL, JDVVNL, AVVNL, etc.
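At its simplest, the MDAS layer described here collects interval reads pushed by AMR modems and rolls them up into billing-grade totals. A toy sketch of that aggregation step (meter IDs and readings are hypothetical, not from any actual deployment):

```python
# Toy meter-data acquisition step: roll AMR interval reads (kWh) up into a
# daily total per meter. Meter IDs and readings are hypothetical.

from collections import defaultdict

interval_reads = [
    ("MTR-001", "2023-06-01", 0.42),
    ("MTR-001", "2023-06-01", 0.35),
    ("MTR-002", "2023-06-01", 1.10),
    ("MTR-001", "2023-06-01", 0.23),
]

daily_kwh = defaultdict(float)
for meter_id, day, kwh in interval_reads:
    daily_kwh[(meter_id, day)] += kwh

for (meter_id, day), total in sorted(daily_kwh.items()):
    print(meter_id, day, round(total, 2))
```

A production MDAS adds validation, estimation and gap-filling on top of this roll-up, but the grouping-and-summing core is the same.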
Use of a real-time SCADA/DMS (distribution management system) interface with GIS has resulted in a 60% reduction in power interruption time and better power quality. We also have an OMS which reflects the outage areas in GIS and ensures faster response time for complaints.

What are the challenges to smart grid adoption in India?

The smart grid can be described as the merging of two networks:
The power network, which consists of the generation, transmission, and distribution grid; and
The modern communications network.
The first step is to understand the smart grid communications network as a truly integrated network, rather than as parts of separate, vertically integrated application silos. This parallels the evolution of enterprise and telecommunications networks over the last 20 or 30 years into a single, integrated voice/video/data network.

Traditionally, wired communication systems offered a reliable method for data transmission. However, with most substations remotely located, operators come across many communication challenges. Wireless technologies such as public cellular networks or standards-based technology for private networks (WiMAX, Wi-Fi) can be faster and less costly to deploy.

Due to such advantages, these technologies will increasingly find more and more takers and help the power sector step into the next orbit of operational excellence.

Tell us about the potential of India to tap green power for consumption.

Green energy is certainly the future of power, and the world over, efforts are being made to make it an integral part of government policy road maps. The government has set a green power target of 10% by 2015. The state regulators have started implementing it proactively.

The Government is focusing on three rail links, one each in Bihar, Chhattisgarh and Odisha.
However, the earliest link is expected to be commissioned only by 2016.

Land acquisition and Environmental clearance
90% of CIL production comes from open-cast mines. Large land parcels are required to be procured, which results in large-scale displacement of the local population. Pollution by mines makes environmental clearance a lengthy process. Starting a coal mine in India requires about 5 to 7 years.

Privatization and Restructuring
The Government has recently proposed offloading another 10% in CIL. It is debatable whether privatization should precede a break-up into smaller, more efficient entities or the other way round.
With a larger stake going to private investors, the Government will need to convince them of safeguarding their rights before restructuring CIL.

Coal-based projects face regular situations of low stocks. Many power plants run at below normal capacity even as more capacity is added.
The Coal Ministry is in the process of formulating a policy for swapping of coal linkages between State and Central power utilities. This will help link power plants to the nearest coal mine. The Government has also allowed automatic transfer of linkages from old inefficient plants to ultra-modern supercritical plants in order to maximise generation.

Coal imports have more than doubled in the last five years, increasing the cost of power generation. While power generators expect that this cost will be passed on to discoms, they have faced stiff opposition. With large projects like those by Tata and Adani making recurring losses, the country is faced with a choice between higher cost of power and no power.
The Government is also evaluating price pooling of domestic and imported coal. This will help reduce fuel cost for plants running on imported coal. However, this is expected to increase power generation cost in the short term.

Investor confidence in coal-based power projects has taken a hit due to these challenges.
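The price pooling being evaluated is essentially a quantity-weighted average of domestic and imported coal prices charged uniformly to all plants. A toy sketch of the mechanism (all quantities and prices below are illustrative assumptions, not official figures):

```python
# Price pooling: all plants are charged a quantity-weighted average of
# domestic and imported coal prices. Quantities and prices are illustrative.

dom_qty_mt, dom_price = 500, 1500   # million tonnes, Rs/tonne (assumed)
imp_qty_mt, imp_price = 150, 5000   # million tonnes, Rs/tonne (assumed)

pooled_price = (dom_qty_mt * dom_price + imp_qty_mt * imp_price) / (
    dom_qty_mt + imp_qty_mt
)

print(f"Pooled price: Rs {pooled_price:,.0f}/tonne")
# The pooled price sits between the two inputs: cheaper for import-dependent
# plants, costlier for plants that previously ran on domestic coal alone.
```

This is why pooling lowers costs for import-based plants while raising generation cost in the short term for everyone else, exactly the trade-off noted above.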
NTPC has received proposals for sale of assets from 34 private project developers, constituting 55,000 MW of capacity. This includes both operational and under-construction projects (Source: LiveMint). With the cancellation of coal blocks, lenders' financial health is under greater scrutiny.

Need for Adjustments
Given the precarious power situation, the Government needs to show results quickly. In the process, many stakeholders, including investors, lenders and consumers, will need to make compromises.

In addition, the National Electricity Plan (NEP) created by the CEA in April 2007 emphasized private sector participation through the provision of a fixed return on investment based on an assessment of opportunities and risk.

Challenges with the India Model
The Indian government, as explained earlier, has been conservative in allowing partial privatization of the power sector, perhaps for the sole reason of preventing private companies from using the electricity sector for private profit maximization. There is therefore a need for professional and competent operation of the state utilities, with functional autonomy and without bureaucratic bottlenecks in the electricity reform. Although the Electricity Act, 2003 provided a time frame for unbundling the State Electricity Board of each state, many states have deferred their restructuring innumerable times. In some states, like Jammu & Kashmir, Puducherry, Goa, Sikkim, Arunachal Pradesh, Manipur, Mizoram and Nagaland, the power sector functions through a government department, and in some other states the distribution of electricity continues to be public sector driven, except in Delhi and Orissa.

The contribution of the private sector has therefore remained quite low, despite the passage of the Electricity Act in 2003 that divested the sector.
For instance, in March 2008, the total generating capacity of private participation constituted only 14%. The private sector contributed 20,011 MW out of a total capacity of 143,061 MW. When compared against the energy generation in March 2008, the private sector contributed 5,424 MU out of the total energy generation of 61,206 MU, which represents merely about 9%.

Furthermore, some key implementation challenges militating against private participation in the electricity sector include ensuring availability of fuel quantities and qualities, lack of initiative to develop the large coal and natural gas resources present in India, land acquisition, environmental clearances at state and central government level, fuel supply, financial closure, power equipment supply, project execution and training of skilled manpower to prevent talent shortages for operating latest-technology plants. Thus, despite the multiple licensing provided under the Electricity Act, 2003, investors perceive a high risk in the distribution sphere due to inefficiency and exposure to regulatory risks, especially taking into cognizance that distribution is under state jurisdiction.

Timeliness in actions has often been missing, resulting in the landmark judgment of the Appellate Tribunal (APTEL) in the OP 1 of 2012 case, where APTEL passed explicit directions to all state regulators for compliance with certain basic functions enjoined in the Electricity Act, 2003. Despite that judgment, the development of these critical institutions has been slower than desired because, in general, there is a lack of both will (or courage) and of institutional capability. The state-level electricity regulators urgently need capability enhancement, greater resources and arguably a more clearly defined operating blueprint.
Such a blueprint ought to have granular definitions on a strategic framework for regulation; more robust instruments of regulation that are tightly aligned to the objectives of the Electricity Act, 2003; overall regulatory work plans; monitoring frameworks for the implementation of regulations; review of performance and information dissemination, etc. Right staffing, capacity building and access to the right tools and capabilities will be very important. The gap between the current and the desired state in these institutions is indeed very high. In effect, unreformed state-level institutions are like the Achilles' heel of the Indian electricity sector.

Will these points of weakness be any different in the foreseeable future? Reforming the unreformed elements will be a long path but, I dare say, I see hope for several reasons. Despite the structural and ownership challenges to competition and efficient functioning, the pressures are being inevitably felt by the laggards. A laudable feature of the Electricity Act, 2003 is that it did not limit any possibility artificially by being judgemental on what is economically feasible and what is not. This, in effect, is resulting in competitive pressures being exerted on the incumbents to become efficient or face financial consequences. Efficiencies in technical and commercial operations have indeed improved over the years, manifested in sharply reduced aggregate technical and commercial losses in most states, though arguably there is room to do more. Administrative measures by the Government of India, such as the rules on late payment surcharge, have reduced payment delinquency to generators. Consumer rights are increasingly on a better footing with improved supply infrastructure, enhanced hours of supply and quicker rectification of faults.
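The aggregate technical and commercial (AT&C) losses mentioned above combine billing efficiency and collection efficiency into a single figure. A minimal sketch of the standard calculation (the utility figures used are illustrative, not from any particular discom):

```python
# AT&C loss as used for Indian discoms:
#   AT&C loss % = (1 - billing efficiency x collection efficiency) x 100
# The input figures below are illustrative, not from any particular utility.

def atc_loss_pct(units_input_mu: float, units_billed_mu: float,
                 revenue_billed: float, revenue_collected: float) -> float:
    billing_eff = units_billed_mu / units_input_mu
    collection_eff = revenue_collected / revenue_billed
    return 100 * (1 - billing_eff * collection_eff)

# A discom billing 80% of input energy and collecting 95% of billed revenue
print(f"{atc_loss_pct(1000, 800, 4000, 3800):.0f}% AT&C loss")  # 24%
```

The formula makes clear why "sharply reduced AT&C losses" can come from either side: cutting theft and metering gaps (billing efficiency) or improving revenue recovery (collection efficiency).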
Recent experiences in the re-privatisation of Odisha distribution licensees have gone on to demonstrate that ownership changes can be successful, if backed by appropriate political commitment and support.

India is uniquely placed in the world as a large growth economy.

The BJP-led Indian government has been keen on augmenting its nuclear power generation capability, but the director of the national apex body for Training and Human Resources Development in the power sector says lack of transmission infrastructure is the biggest impediment on this road. Director General of the National Power Training Institute Dr. Rajendra Kumar Pandey lays out the challenges on the road to nuclear generation threadbare in an interview with Nuclear Asia.

In France, 70 per cent of electricity is generated through nuclear power. In India, currently 60 per cent of our power is supplied through thermal energy. Under such circumstances, can nuclear energy be the next best alternative?

As per the Bhabha Atomic Research Centre (BARC) and the Department of Atomic Energy (DAE), nuclear power capacity is expected to be around 30,000 MW, and it can become the cleanest source of power. But there is this notion among the people that nuclear power is dangerous. This despite the fact that many advanced countries are operating with nuclear power.

The government previously had the goal of 100 GW of solar, 60 GW of wind and 15 GW of biomass, etc. When it was felt that generating 100 GW of solar is very difficult, they included hydro as well. So now if you look at any government declaration, it is 40 GW by 2022, which was 100 earlier. It means that nuclear has been left out in totality, hydro and solar have been combined to 100 GW, with 60 GW of wind and 15 GW of biomass.

Now, let us come to the topic of grid interface of renewable energy.
If we can produce nuclear power through thorium, which is abundantly available with us, then we can generate nuclear power in abundance. Unfortunately, this technology has not yet been utilised to its full capacity.

What are the major challenges that come into play post-production of nuclear power?

The first challenge is that we are still dependent on some other countries for the supply of uranium.
The way I see it, nuclear power will form a large portion of the renewable energy penetrating the Indian grid. As of today, installed capacity in the grid is 310 GW. The biggest challenge is that if you allow penetration of renewable energy in the Indian grid, say 175 GW, there is no guarantee as to how much power will be available. You see that hydropower is only available for 3-4 months, as long as there is a good monsoon and the dams have sufficient water for the production of electricity.

The key drawback of the solar energy market is the availability of lower-cost alternatives from China, which manufactures modules 8-10% cheaper than India's. This has caused a huge dependency on imported solar panels and has hindered domestic manufacturing, which affects market growth.

Solar power capacity additions fell by 36% during 2021 as compared to 2020. This is largely attributed to supply chain disruptions, logistical restrictions, and construction delays during the pandemic lockdowns. However, despite these roadblocks, the aggregate solar capacity of India surpassed the total wind capacity in February 2021. Additionally, between April 2020 and February 2021, solar generation was almost 8% higher than in the previous fiscal, and its share in the total supply was consistently higher. Even though overall electricity demand fell during the lockdown, the renewable energy sector remained attractive.
As auctions continued, record low solar tariffs were witnessed, new market entrants won some bids, commissioning deadlines were extended, must-run status was ensured and investors generally showed a strong interest in the shares of renewable companies. In addition, Indian renewable companies managed to raise billions of dollars of debt from overseas sources till the end of 2021. One dark side of all this progress is that a significant amount of auctioned capacity, mostly solar, has still not been able to find buyers.

Based on type, the market is segmented into photovoltaic systems and concentrated solar power (CSP) systems. CSP systems are further sub-segmented into parabolic trough, solar power tower, Fresnel reflector, and dish Stirling systems. Coal-fired thermal power plants are the dominant source of energy generation in India. Coal-fired thermal plants use 5-7 cubic metres of water per MWh of electricity generated, whereas concentrated solar power systems use only 2-3 cubic metres per MWh for cooling and washing mirror surfaces. To reduce the dependence on coal-fired energy production in India, the ambitious Jawaharlal Nehru National Solar Mission (JNNSM) planned to install a total of 470 MW of CSP projects under its phase I (2010-2013).

Anish De, Global Head of Energy & Natural Resources, KPMG
It has been 25 years since the Electricity Regulatory Commission Act of 1998 (ERC Act) introduced independent regulation of the electricity sector on a countrywide basis.
Calendar year 2023 also marks two decades of the Electricity Act, 2003, enacted as comprehensive legislation to consolidate the laws relating to generation, transmission, distribution, trading and use of electricity. The Electricity Act, 2003, which subsumed all previous laws including the Indian Electricity Act, 1910, the Electricity Supply Act, 1948 and the ERC Act, created a fantastic wireframe for, inter alia, the development of the electricity industry, promoting competition therein, protecting the interests of consumers and ensuring the supply of electricity to all areas, rationalising electricity tariffs, ensuring transparent policies regarding subsidies, promoting efficient and environmentally benign policies, and facilitating the constitution of the Central Electricity Authority, the regulatory commissions, and the appellate tribunal. The Electricity Act, 2003 has often been called the "Rolls-Royce" of legislations, with great merit but for a critical omission relating to the separation of distribution and supply as distinct licensed businesses. The separation of the distribution and supply functions was intensely discussed and debated, but not executed due to the lack of political will and keenness to move ahead with other elements of the landmark law. Although much has been achieved on the basis of the Electricity Act, 2003, this departure from the practices followed by most reforming jurisdictions across the world has ultimately hindered efficiency and competition, prevented much-needed ownership changes, stymied independent regulation or resulted in its capture, and ultimately compromised consumer interests. It remains a critical unfinished agenda in Indian electricity sector reforms.

To appreciate the many successes and also the limitations of electricity sector reforms in the country, it is important to lay out how the sector architecture changed with the reform legislations, and particularly the Electricity Act, 2003.
Apart from independent regulation, at the core of the changes in the architecture is the concept of "open access", which allows non-discriminatory provision for the use of transmission lines or the distribution system, or facilities associated with such lines or system, by any licensee or consumer or a person engaged in generation, in accordance with the regulations specified by the appropriate commission.

A number of policy measures at the national level, which could be applied concurrently, would significantly improve the framework for renewable energy in India.
After all, India's rapid and enduring economic growth is intrinsically linked to the increasing consumption of energy and natural resources.

INTERVIEW YOSHIAKI INAYAMA
OUR MAIN FOCUS IS TO PROVIDE TURNKEY EMPCS SOLUTIONS TO THE INDIAN POWER SECTOR
We create 100% INDIGENOUS Power Solutions
Yoshiaki Inayama, Managing Director, Toshiba JSW Power Systems Pvt. Ltd. (TJPS), speaks to Dipti Srivastava about the challenges of the power sector in India and Toshiba's commitment to solving the problems through innovative technologies.

In the past few years, in spite of various initiatives, the power situation still remains one of the core challenges for India. What are the factors responsible for it?

Electricity is one of the basic constituents of the economic growth of a country and, as you said, there are various initiatives taken in the country in the past few years to improve the power situation. Toshiba's major expansion of its power business in India is also a result of this. One of the biggest concerns is the gap in the demand and supply of power in India. There are several barriers that pose a challenge to the Indian power sector. The first challenge is land acquisition, not only for the power sector but for any industry. The second challenge is obtaining speedy environmental clearance.
The third challenge is availability of coal.
Half of Indian power generation depends on coal as a source, and the production of coal has not increased to match the demand. And the fourth challenge is the financial crisis. Because of the recent economic slowdown, financing for new projects has become tougher, especially for Independent Power Producers (IPPs). There were no orders from IPPs in the last fiscal year.

After the last general elections in India, there is a huge expectation, both from the general public and the industry, that things will improve. At present, the Power, Coal and New & Renewable Energy ministries are functioning under a single leadership. It means more focused and better co-ordination among these ministries, and ultimately an improvement in the power situation is expected.

In your opinion, how does it impact the economic growth of the nation?

The Indian economy is on a trajectory of upward growth. To keep up the momentum of this growth, availability of uninterrupted power supply is a must.

On the other hand, states like Orissa and Himachal have surplus power, and the eastern zone is comparatively safer than other zones.
Power consumption is also driven by industry, and states like Maharashtra and Tamil Nadu have lots of it, at least compared to Himachal and Orissa.

Let's revisit the Load Generation Balance Report 2012-13 published by the Central Electricity Authority, which illustrates the power scenario prevalent in India and in different zones:
Figure 2 (a)
Figure 2 (b)
If we were to summarise:
* All zones, barring the western, saw an increase in demand from last year during the peak season.
* The southern zone has a 26% deficit in meeting demand as against the all-India average of 10.6%; and
* Only the eastern zone has surplus power after meeting demand.

So, from the data available, it seems that the North-East could probably be the next zone to face serious power outages.
The shortfall is nearly 22% as against the national average of 10.6%.

Figure 3 gives an idea of the anticipated supply position of the top 5 power-deficient states during 2012-13. Bihar has surprisingly low consumption, a fourth of even a small state like Punjab. This suggests that were Bihar to expand economically, as is widely predicted and expected, then the dependence on power will skyrocket. Interestingly, there are 13 states that have a shortage of over 20%.

So, it can be seen that two North-Eastern states, Nagaland and Tripura, are projected to have the highest shortages, proportionately, followed by Punjab and Bihar. Tamil Nadu, which is currently reeling under crisis, is also among the top 5 states.

Only Himachal Pradesh (52%), Sikkim (34%) and Orissa (12%) have surplus power. The surprise here is Himachal having more surplus than Sikkim and Orissa, since the state has been making big strides in development.

The Government of India had set the goal of electrification of all households during the XIth Plan (2007-12). Incidentally, India has seen its deficit increase to 10.6% (from 9.8% in 2010-11), and 56% of villages still do not have access to electricity.

Estimates by consulting firm KPMG say India's power consumption could double by 2020.

By: Prof. S. D. Jawale (email@example.com)
Date: 22nd Sept 2021
Rapid growth of India's economy is supported by the development of the Indian energy and power sector. The power sector in India has made incredible progress in the last few decades. The total installed capacity has increased to more than 350 GW in 2021, and the transmission and distribution network has expanded from small residential and industrial areas to form the National Grid. However, the supply of electricity has always been lagging behind the demand.
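This demand-supply gap is conventionally quoted as a deficit percentage, the shortfall expressed as a share of the requirement. A minimal sketch of the calculation (the requirement and availability figures below are illustrative, not from any report):

```python
# Deficit as reported in Load Generation Balance Reports:
#   deficit % = (requirement - availability) / requirement x 100
# The requirement/availability figures below are illustrative only.

def deficit_pct(requirement_mw: float, met_mw: float) -> float:
    """Shortfall as a percentage of the requirement."""
    return 100 * (requirement_mw - met_mw) / requirement_mw

# e.g. a peak requirement of 10,000 MW of which only 8,940 MW is met
print(f"{deficit_pct(10_000, 8_940):.1f}% deficit")  # 10.6% deficit
```

The same formula is applied both to peak demand (MW) and to annual energy (MU), which is why national and state figures such as 10.6% or 22% can be compared directly.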
Electricity has a key role to play in the progress of the nation, and in order to enhance the development of the power system, the Indian government has set up various organisations, e.g. Power Grid Corporation Limited (PGCL), National Thermal Power Corporation (NTPC), National Hydro-Electric Power Corporation (NHPC) and the State Electricity Boards (SEBs). However, supply is still not keeping pace with the demand of the expanding power network.

Highlights of the Indian power sector:
The power sector in India is now going through rapid technological change, and we cannot deny the change that technology has brought to both the power sector and our lives. We see the adoption of the Internet of Things (IoT), Artificial Intelligence (AI) and Big Data advancing the development of the power sector. We witnessed this technological advancement when the Hon'ble Prime Minister of India appealed to citizens on 3rd April 2020 to switch off their lighting systems at 9 pm on 5th April 2020 for 9 minutes. Power System Operation Corporation Limited (POSOCO), with the use of technology, handled the situation very efficiently: the anticipated reduction in all-India demand during this 9-minute period was around 12,000-14,000 MW, which raised fears of grid failure through cascade tripping.

The Electrical Engineering department of Jawaharlal Nehru Engineering College (JNEC) has kept pace with this technological advancement of the power sector. New courses are included in the engineering curriculum that will update students on these technological advancements.
For instance, concepts such as the digitization of power plants, which fundamentally means combining technologies such as IoT (Internet of Things), AI (Artificial Intelligence) and Big Data with advanced hardware to deliver reliability, affordability and sustainability, are a core part of the newly designed curriculum.

At one point in early May 2016, for example, the 1 GW Tehri Hydroelectric Dam, India's tallest dam, had no usable water stored in its reservoir on the Bhagirathi River in the Himalayas, a tributary of the upper Ganges River, even though it is designed to hold 2.6 billion cubic meters. Central Water Commission (CWC) of India data for that month confirmed that the Tehri Dam was using almost none of its 2.6 billion cubic meters of storage capacity. At the time, total usable water available in the 91 reservoirs across India monitored by the CWC was at just under 31 billion cubic meters, 19 percent of their capacity.

These incidents, taken together, underline the severity and importance of the water-energy nexus and the resulting crisis in India. Water and energy systems are closely intertwined; the scarcity of water in the age of climate change has impacted coal-fired power plants particularly hard. While water forms an essential component in all stages of energy production and electricity generation, energy is needed to extract, convert, purify, and deliver water for a variety of human uses, and to treat wastewater for auxiliary uses.

Analyzing the Challenge

The problem is far larger than the individual shutdowns of the Farakka, Raichur, and other power plants: India is in the grip of a growing water crisis. The effects of climate change appear to be evident in South Asia in general and in India in particular. Monsoon rains have been scanty in India for two years in succession, and the melting snow in the Himalayas produced less water compared with previous years.
Some 85 percent of the country's drinking water comes from aquifers, but their levels are falling, according to the WaterAid organization.

The scarcity of water in India makes it difficult for water and energy sector officials to trade off between power generation and the use of water for drinking, agriculture, and other industries. Some analysts argue that in a water-scarcity scenario, power generation will come at the expense of other uses. A new Greenpeace report notes that if the Indian government continues to push for the construction of more coal plants, the water crisis will grow significantly worse. The report estimates that over 170 GW of coal power plants, representing 40 percent of the proposed Indian coal fleet, are in areas facing high or extremely high water stress.

Of the 70% of Earth that is covered with water, less than 2.5% is fresh water, a fraction of which is available for human use in the form of rivers and lakes. This is why for centuries rivers have been the very cornerstone and foundation of civilization, bringing precious drinking water and sustaining the life of millions along their banks.

Today, as communities have expanded away from fresh water sources, structures that store, transport and utilize vast amounts of water, i.e. dams, are believed to have become vital for survival. With the growing urban population in India, approximately 1.2 billion people, especially as more of India begins to urbanize, comes an increase in the demand for electricity. As a result, today the uses of rivers and dams have extended to include hydropower generation.

As National Geographic sums up India's energy situation: "India, the world's second most populous country, is growing far faster than its own significant fossil resources can handle. It is the world's third largest coal producer and is amongst the world's top CO2 emitters.
But because half of its population still has no access to electricity, per capita emissions are the lowest among major economies. The struggle is to increase energy access while holding down emissions growth."

Given:
- a dwindling source of fossil fuels,
- an uncertain market for oil,
- coupled with the projection of exponentially increased energy demand and unsustainable development,

the need for renewable, clean and affordable energy is vital.

- Currently the country relies 75% on coal and oil resources and less than 25% on other renewable energy sources.
- Recently, the massive power grid failure in July 2012, reportedly the largest blackout ever, affecting nearly 700 million people, or twice the population of America, clearly illustrated that India is struggling to effectively meet its demand.
- In this power crunch the potential of hydropower seems like a glimmering light of hope, providing the promise of not only clean energy but also employment and development.
- In fact, according to the World Bank, 'hydropower potential is commonly believed to be one of the most important strategic assets of the state for the development of its economy.'
- With only an estimated 25% of India's hydropower potential exploited, a large percentage remains untapped.

At the consumption end, the greatest achievement has been the extension of electricity services to all parts of the country through a programmatic approach that effectively addressed a seemingly intractable issue. The country has also witnessed the emergence of a very strong and competent national regulator in the form of the Central Electricity Regulatory Commission (CERC).
Despite the many challenges it inevitably faced, the CERC has emerged as a strong institution that has implemented the core constructs of the Electricity Act, 2003 with élan and has evolved a common market design that is universally accepted, including by the states. Such wide and large-scale transformation in a country of continental proportions, despite all the deep-seated challenges, is absolutely unprecedented.

The greatest unaddressed challenges in the Indian electricity sector lie in the states, where institutions are weak and decisions are often influenced by political priorities and considerations. Two genres of state-level institutions stand on weak legs: distribution and supply licensees, and the state electricity regulatory commissions that regulate the state-level licensees. In a legal construct that is predicated on non-discriminatory open access, distribution and supply licensees are deeply conflicted in carrying out both functions while simultaneously permitting open access to third parties. It is time that these conflicts are removed and the two functions are unbundled and accorded two distinct licences. The distribution licensee, in its new avatar, can collect fair and reasonable access charges from all users, including the supply licensee(s) and other open access users, invest wisely in upgrading networks and earn reasonable profits. Arguably, the incumbent supply licensees will also be well positioned to compete with their access to a low-cost historical portfolio of contracts. At a time when building new power plants (especially conventional ones) has become very challenging, the incumbents will have great competitive advantages. The integrated distribution and supply licences arguably hinder their own performance and commercial interests. It is indeed a pity that the incumbents typically do not perceive this to be the case.

Independent regulation at the state level has mirrored the misplaced priorities of the state-level regulated entities.
Since their inception, they have largely focused on keeping tariffs down, ignoring or downplaying the mandates of the Electricity Act, 2003: consumer service standards, ushering in efficiencies in sector operations and regulating the overall sector in a fair and transparent manner.

Economic growth is energy hungry, and much of the incremental energy needs will be delivered in the form of electricity. For India's progress to continue unhindered for the coming 25 years, the role of its electricity sector will be absolutely pivotal. Despite obvious challenges, our political leaders do recognise the critical role that a strong electricity sector will play. The past 25 years of reforms have put the sector on a rather strong footing. The time is opportune to solve the last-mile reform challenges, which will set India up well for the coming 25 years.

Governments promise more power plants, this many thousand megawatts by such and such year. But nobody wants to tell you that while there is a shortfall, it isn't really so much as to completely blight our lives. The problem is, it costs money to produce power, and we, the people, are not willing to pay for it.

My colleague, India Today (Hindi) Editor Anshuman Tiwari, points me to a bunch of data on the surprising realities of India's self-inflicted power crisis. A July 2014 report of the Central Electricity Authority (http://goo.gl/XjFDdN) gives us some interesting facts. Would you believe, for example, that India's overall power shortage for that month was just 3.6 per cent, going up to 3.9 per cent at peak time?
In the north, the shortage is 5.8 per cent, and in the west and the east a mere 0.9 and 1.7 per cent, respectively, even at peak demand.

Then how does this add up to such a grave crisis?

It's because, first, we are not charged the correct price for power: too many of us, particularly farmers, pay almost nothing. And even when we are, we find ways to not pay. As a result, most of our distribution companies owned by state governments are broke, some as bad as KESCO, many worse. They have no money to buy power, which is now freely traded, or to upgrade distribution infrastructure. And if the head of one of them tries seriously to collect, a fate like Maheshwari's lies ahead. Our power crisis becomes much, much worse than it should be because of our subsidy and free-booting culture and our government's inability to collect its own dues. Because if power distribution was such a bad business, how come all the privately owned utilities, CESC, BSES, etc, are making robust profits?

The CEA report also has a section that gives you data under that dreaded head, T&D losses. In old-fashioned English, it stands for transmission and distribution losses. But ask anybody in the power sector, and they use a more fitting description: theft and dacoity losses. The report gives us figures for 2010-11 and 2011-12, and the losses stand at 23.97 per cent and 23.65 per cent, respectively.

The world press took alarmed notice at those rendered powerless by the blackout, noting breathlessly that 600 million Indians – nearly a tenth of the world's people – were without power. As electricity service is restored, the media spotlight is already shifting elsewhere, and India will carry on as a country where several hundred million people (I've seen figures ranging from 300 million to 450 million) live in homes with no electricity.
Ever.

Which brings us to the enormous cleantech opportunity lurking in the shadows of India's shaky grid. It's one akin to what just happened to Indian telecommunications. When I was living in India in 1999, we heard many stories of multiyear waits for new landlines, which are zealously controlled by the monument to stasis that is India's state-owned telephone company. Mobile phones were nonexistent. For the vast majority of Indians, telecom was a rare and complicated affair that occurred only on public-access phones at retail kiosks. (I can report that in those days you would overhear the most extraordinary things being bellowed down creaky old telephone wires as you passed by.)

And today, barely a decade later? There are more than 900 million mobile phone accounts in India, and there's a whole electricity-biz sideline in providing recharging services to the millions of Indians who have mobile phones but no power outlets. India basically skipped twentieth-century telecom.

While India's grid was down, it was distributed power generation – little diesel generators like the one my old guest house had, as well as the huge generators that Indian businesses routinely include in their office and factory designs – that kept the country from grinding completely to a halt. Gigaom reports that Indian companies pay more than $0.45 per kilowatt hour for that emergency power, more than four times the standard rate.

The conventional wisdom – as expressed over at The New York Times' Dot Earth blog, among other places – is that only coal can possibly fill that gap.
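The backup-power rates quoted above translate into a steep premium over grid supply; a quick sketch of what that means for a single outage (the "more than four times" figure implies a standard rate of roughly $0.11/kWh, and the factory's consumption below is a hypothetical number):

```python
DIESEL_RATE = 0.45            # $/kWh for emergency diesel power, per the Gigaom figure
GRID_RATE = DIESEL_RATE / 4   # implied "standard" rate (~$0.11/kWh)

def backup_premium(kwh: float) -> float:
    """Extra cost in dollars of running on diesel instead of the grid."""
    return kwh * (DIESEL_RATE - GRID_RATE)

# Hypothetical: a factory drawing 10,000 kWh during a day-long outage
print(round(backup_premium(10_000), 2))  # → 3375.0
```

That premium is the margin a cheaper on-site alternative, such as rooftop solar, would compete against.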
I'd suggest, however, that when power's going for rates similar to the steepest of the world's feed-in tariffs for renewable energy, we're looking at the world's most underserved solar market.

This is already well-understood in some corners of India.

Roughly 4.85 mn households – more than 50% of the region – do not have access to electricity.

Renewable energy has immense potential in the Northeast, and the region can generate its entire electricity supply from renewable sources, according to the Centre for Science and Environment (CSE). "Currently, renewable energy like solar and wind has very little penetration in our north-eastern states – even though there is a huge potential to meet a majority of the region's energy demand from solar, wind and small hydropower," said Deputy Director General, CSE, Chandra Bhushan, in his opening remarks on the first day of the CSE conference on the theme "100% renewable energy future for the North East", held at the NEDFi complex on Thursday.

Manipur is, in fact, the only state in the region that has a solar policy, the CSE DDG said. The conference is aimed at expanding the role of renewable energy in the electricity mix, as well as in the provision of energy access for the region. According to CSE, of India's 1.3 billion people, almost 240 million do not have access to electricity. The International Energy Agency has further projected that India will have 147 million people without electricity in 2030. At present, renewable energy in the Northeast refers only to small hydro power (less than 25 megawatts) – even though the region sits on potential reserves of almost 60,000 MW.

To add to this, the north-eastern states have one of the lowest per capita electricity consumption rates in the country: there are still roughly 4.85 million households – more than 50 per cent in the region – that do not have access to electricity.
In fact, the per capita electricity consumption in the region is as low as that of some of the least developed nations. The situation is compounded by the fact that the current government schemes to provide electricity are primarily based on grid expansion. This model has so far not been able to provide electricity to the poor in the country, and may yet fail to do so.

It is in this context that CSE's latest report, Mini-grids: Electricity for all, assumes significance. The report was released and discussed at the event. It presents a model centred on generation-based incentives and subsidy for setting up distribution infrastructure to ensure every household receives one unit of electricity every day.

The industry has attracted FDI worth Rs 43,530.99 crore (US$ 7.24 billion) during the period April 2000 to May 2014. India has emerged as one of the fastest growing economies in the world. Its current economic performance reflects a healthy trend based on increased consumption, investment and exports. Over the next five years, this growth is expected to continue.

The Government of India has identified the power sector as a key sector to promote sustained industrial growth, and has taken several initiatives to boost it. One, it plans to buy the equity of Power System Operation Corporation Ltd (Posoco), a wholly-owned subsidiary of the PowerGrid Corporation of India, at a book value of around Rs 35 crore (US$ 5.82 million). Two, Agence Française de Développement (AFD) is extending a Line of Credit (LoC) of €100 million (US$ 134.73 million), for a tenure of 15 years, to M/s Indian Renewable Energy Development Agency Ltd (IREDA) to finance renewable energy and energy efficiency projects in India. Three, there has been an introduction of new online forms for the submission of applications, which would help the Ministry function in a more transparent manner.
Four, the Electricity Supply Companies (ESCOMs) of Karnataka and Andhra Pradesh Power Generation Corporation (APGENCO) have signed a power purchase agreement (PPA) for sharing 230 MW of power generated from the Priyadarshini Jurala hydro power project. Five, the Government of India has joined hands with IIT Bombay to implement cost-effective solar-powered lighting solutions for the rural population, which will help save 36 million litres of kerosene.

Overall, there is a need for improvement in the generation and transmission/distribution of electricity by adopting new, innovative strategies. First, there is a need for renovation and modernisation of generation equipment, i.e., improving the performance of existing old power plants. In particular, there is a need to increase the efficiency of coal-based power plants: currently the average fuel conversion efficiency of the existing thermal power stations is around 30 per cent. In order to increase the transmission capability of power, the National Grid needs further development. Finally, a significant thrust on renewable energy is needed; to boost investment in renewable energy, it is essential to introduce clear, stable and long-term support policies.

Even though energy projects are often constructed in the name of poverty alleviation and rural development, they're largely focused on meeting the demands of the urban rich. (Note that "demands" should be differentiated from the normative term "needs.") Therefore, it shouldn't be surprising that even official estimates show that around 56 percent of rural households in the country didn't have electricity in 2000. These residents live without adequate lighting, and many spend hours each week collecting firewood because they don't have access to modern cooking fuels.
An October 2007 Greenpeace report shows how the rich in India have much higher carbon emissions compared to the poor.

Not only do the poor and marginalized in India lack access to electricity, they also often face the brunt of the negative consequences of generating electricity for the rich. In a densely populated country such as India, a significant fraction of the population is directly dependent on land, water, and forests. Practically all large-scale electricity generation projects in the country – whether coal plants, nuclear plants, or large dams – impact these resources, and most recent large-scale electricity generation projects have met with stiff resistance from local inhabitants. (See "Haripur: Land for Nuclear Plant" and "Campaign Against Coal-Based Thermal Power Plant Project," an online petition signed by hundreds of people who oppose a proposed coal-based thermal power plant in India's Chamalapura Valley.) This alone makes it unlikely that massive expansion of large and centralized energy projects will materialize anytime soon.

Independent energy analysts have shown that it's possible to plan for energy and electricity in a way that caters to India's marginalized poor, and that this makes financial sense. Studies using the development-focused end-use-oriented service-directed (DEFENDUS) paradigm for energy, pioneered by the late Amulya Reddy and his collaborators, have shown that in contrast to conventional energy planning, DEFENDUS could result in greater achievement of development objectives at far lower cost in a shorter time.
And because of the emphasis on improved efficiency – as well as the use of decentralized and renewable sources of electricity generation wherever it made economic sense – it also resulted in enormous environmental gains.

The necessity of such methods of energy planning, which pay attention not just to overall electricity generation targets but also to equity and environmental sustainability, is implicitly highlighted by the National Action Plan on Climate Change.

As a growing economy, India does not just require electricity to light bulbs; it needs electricity to fuel the growth of every industry, be it large-scale or small-scale, manufacturing, healthcare or education. All of this affects the economic growth of the nation, and it doesn't end there: electricity is one of those key components that facilitate our everyday life, and in the current world it has a large impact on human life.

Toshiba has been contributing in many ways to provide power solutions in India. What have been the main focus areas and important breakthroughs in terms of performance in the power sector?

Our main focus is to provide turnkey Engineering, Manufacturing, Procurement, Construction and Services (EMPCS) solutions to the Indian power sector. Toshiba, with its highly experienced manpower and energy-efficient, reliable power generation equipment, has been providing similar turnkey solutions across the globe for more than 60 years.

In the power sector, there are three main processes that are important: generation, transmission and storage, and Toshiba has efficient solutions in all three categories. Our manufacturing facility, Toshiba JSW Power Systems Private Limited in Chennai, focuses on manufacturing efficient power generation equipment.
This plant is well-equipped, and the employees are trained at Toshiba, Japan, to achieve the same quality standards as in Japan.

Toshiba is actively engaged in developing the reliability and quality standards of Indian raw material suppliers and other vendors. This initiative will ensure India's excellence in the manufacturing sector over a period of time.

I am sure that in the coming years, we will provide 100% indigenous power plant solutions from India, not only for the Indian market but also for the rest of the world.

Do you think more private sector participation can help improve the situation? How can the government encourage corporates to provide large-scale solutions?

Installation of power plants requires huge investments. Participation of the private sector will definitely make this investment proposal more competitive and ultimately benefit the consumers. From the government side, sustainable energy policies backed by financial support are required to promote the participation of the private sector. State Electricity Boards are the main customers of private players in the power sector. To boost the morale of private players, economically viable, timely and assured payment from state utilities to these private players is a must.

What kind of innovation and technology can contribute to overcoming the situation? In line with this, what would be the strategy and vision of Toshiba in the coming years?

Other projects in the pipeline include a 120-MW powerhouse project; windfarms with a total capacity of 98 MW; the third unit of the Haldia power project, with a capacity of 30 MW; and the 114-MW Dagcchu hydropower project.
Tata Power has also identified additional projects with a total power-generating capacity of 6,000 MW.

The firm was recently involved in legal skirmishes with Reliance Energy Limited over the supply of 750 MW of power to one of Reliance Energy's group companies. Tata Power had reportedly allocated 800 MW of power to power utility Brihanmumbai Electricity Supply and Transport Undertaking and only 500 MW to Reliance Energy.

Tata Power's decision left Reliance Energy facing a 250-MW shortfall. Reliance Energy filed a petition against this decision with the Appellate Tribunal for Electricity. However, Tata Power has stated that it has not entered into a formal power-purchase agreement with Reliance Energy, although there have been several discussions and attempts made by Tata Power over the years to sign a formal agreement.

Industry experts say India will require investments of about $120 billion to $150 billion to meet the growing commercial and residential energy demand. India, which plans to add 78,500 MW of power-generation capacity by the end of the 11th five-year plan period, 2007-12, has managed to commission only 15,100 MW to date.

Power supply problems are expected to increase in the country because of monsoons and coal-supply shortages. India is expected to face a 12.6% peak power shortfall, equivalent to 14,980 MW, this year. Last fiscal year, the peak power deficit was about 11.9%.
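A shortfall stated both in megawatts and as a percentage of demand lets one back out the implied peak demand; a small sanity-check sketch:

```python
def implied_peak_demand(shortfall_mw: float, shortfall_fraction: float) -> float:
    """Peak demand (MW) implied by a shortfall given in both MW and as a fraction of demand."""
    return shortfall_mw / shortfall_fraction

demand_mw = implied_peak_demand(14_980, 0.126)   # 12.6% shortfall = 14,980 MW
print(round(demand_mw))           # → 118889 (≈ 119 GW of peak demand)
print(round(demand_mw - 14_980))  # → 103909 (MW actually served at peak)
```

Against that roughly 119 GW of peak demand, the planned 78,500 MW of additions put the 11th-Plan target in perspective.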
The energy scenario in India is precarious, and investments by private-sector players will be critical for the country to achieve its power capacity targets.

Highest growth generation company
Balraj Joshi has been the Chairman and Managing Director of NHPC since September 22, 2017, and served as its Director (Technical) from March 23, 2018 to July 5, 2018.

Replacement Market to Drive BTG Demand
While the government is bullish on capacity additions to meet the rising energy demand, the focus is clearly shifting from thermal to renewable energy. On one hand, the government is harping on the excessive capacity addition during the 12th Plan period; on the other, it is taking a giant leap in building renewable energy capacity.

Wind energy is an attractive proposition for IPPs
The global renewable energy sector has witnessed rapid growth in the last decade, driven by three key factors: widespread deployment of renewable energy technologies at increasingly competitive costs, efficiency brought in by hi-tech products driven by new-age technology, and new business models for distributed generation and services.

IPPs are not keen on BoP services
Prayasvin Patel, Chairman & Managing Director of Elecon Engineering Co Ltd, feels that IPPs, due to several issues involved in BoP contracts, are more inclined towards EPC.

Power 20:20 | CLP India
CLP India is one of the largest wind power developers in India, with close to 1,000 MW of committed wind projects. The wind portfolio of the company, which made a beginning with its first project in Khandke in 2006, has grown rapidly since then and is an integral part of CLP India's renewable portfolio.

Amend Act, break CIL monopoly
We have a predominantly heavy dependence on coal and also a large stranded capacity of gas-based power supply units. In the first instance, coal supply from Coal India has to be ramped up substantially.
The Power Minister may have to convince the new Coal

Mission green corridor
With the completion of the green corridor, PGCIL will take up the challenge to erect and use the grid network, especially for renewable resources. Power Grid Corporation of India has planned a capital investment of Rs 100,000 crore for the development of an inter-State transmission system during the 12th Five Year Plan.

Power finance: Layers Within Layers
Just when you thought clearances were the real culprit, it turned out to be just another layer in the complex power industry's blues, now looking like an onion-peel of discovery and tears. The new nemesis, it seems, is lack of finance.

There is much India needs to learn about education – it is not merely a matter of throwing more money at the problem. The Right to Education Act (RTE) passed in 2009 is a farce: it establishes a system of reservations and dilutes educational standards in teachers as well as students. Furthermore, the tax the government has collected as an educational cess sits unused. Recently, Finland's schools have shown promising results that India can try to replicate. Until education is made apolitical in India, results cannot be expected. India's 12 million-per-year workforce will not only be a big one but also an unqualified one.

Energy: India's booming economy is an energy guzzler, and successive governments have yet to achieve anywhere near 100% electrification. At least 25% of Indians have no access to electricity. India desperately needs energy to sustain its industrial growth, adding approximately 1,200 GW by 2050 to its presently installed 185 GW capacity.
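The 1,200 GW target implies a punishing build rate; a back-of-envelope sketch (taking 2012 as the start year is an assumption, since the passage does not date itself):

```python
def required_annual_addition_gw(target_gw: float, start_year: int, end_year: int) -> float:
    """Average new capacity (GW) that must be commissioned each year to hit the target."""
    return target_gw / (end_year - start_year)

# ~1,200 GW of new capacity between a hypothetical 2012 start and 2050
print(round(required_annual_addition_gw(1_200, 2012, 2050), 1))  # → 31.6
```

Roughly 32 GW a year is several times what India has historically commissioned annually, which is why the mix of nuclear, thermal and renewables matters so much.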
The per capita consumption of electricity in India in 2011 was 778 kWh, while in the European Union it is 6,200 kWh.

While the government has finally woken up to the impact of 8-hour load shedding on the economy, it has recently been impeded in its power plans by demonstrators against nuclear power at Kudankulam and Jaitapur. India's energy plans need to be better than good, not only to post an increase for industrial use but to make up for the less-than-full electrification at present (there is a 13% shortfall). Nuclear energy is not the only way forward, but it is the fastest and cleanest. India cannot meet its energy needs through thermal power plants, as India has neither the mining nor the rail-system surplus to deliver the coal to the power plants. Furthermore, the ash content of Indian coal is significantly higher than that of other varieties of coal, creating greater inefficiency and pollution. Renewable energy simply cannot deliver the volume of energy needed to power India's growth. On the other hand, reactors can be built in India as well as purchased from abroad with generous lines of credit.

Power generation is not the only problem India needs to focus on. Approximately 32% of generated electricity is lost in transmission or to theft, compared to a world average of less than 15%. Without sufficient power, the lights could very well go out on India's economy.

The country has about 25,000 MW of idle and underutilised electricity generation capacity as private companies set up power plants in anticipation of high prices, Power Secretary PK Pujari said at the launch of ETEnergyworld.com.

Surplus production was also a problem in the coal sector, Coal Secretary Anil Swarup said. "Earlier, I was pressurised for coal deficit, now for surplus...
I have coal, I have rakes, I don't have demand," Swarup said at a panel discussion on his sector.

In the renewable energy sector, the ambitious expansion of solar energy was worrying distribution companies, which had their own power generation units, but the government would resolve the matter, Renewable Energy Secretary Upendra Tripathy said.

He said the government was in talks with distribution companies for faster adoption of rooftop solar projects. India plans to achieve 100 GW of solar power generation and 60 GW of wind power by 2022.

Currently, installed solar capacity is around 5,700 MW and wind over 25,000 MW. Sector experts and officials say that the rapid expansion in renewable energy poses several challenges of transmission and grid stability, as well as the reluctance of state discoms to support the sector.

"We do understand that some discoms have a lot of opposition to rooftop solar, because the moment customers start producing their own power on their rooftops, they will not be buying from the discom anymore. But we will resolve this issue," he said.

Panelists said capacity utilisation of thermal plants had fallen sharply due to sluggish industrial growth and the inefficiency of state discoms. The power secretary said the demand-supply mismatch in the sector is likely to balance out in the next four to five years as industrial growth picks up.

Pujari said the average capacity utilisation of thermal power plants is 61%. While central public sector units have load utilisation of about 70% and private producers around 65%, power plants owned by states proved to be laggards by operating at 53% of their capacity. He said capacity utilisation had fallen in the small, inefficient units.

Association of Power Producers Director-General Ashok Khurana said there was a need to change coal supply policy to supply coal to power plants with medium-term power purchase contracts with distribution utilities.
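The gap between the 53% load factor of state-owned plants and the roughly 70% achieved by central PSUs translates directly into forgone generation; a sketch with a hypothetical 10 GW block of state-owned capacity (the capacity figure is illustrative, not from the article):

```python
HOURS_PER_YEAR = 8_760

def forgone_generation_gwh(capacity_gw: float, actual_plf: float, target_plf: float) -> float:
    """Annual generation (GWh) forgone by running below a target plant load factor (PLF)."""
    return capacity_gw * HOURS_PER_YEAR * (target_plf - actual_plf)

# Hypothetical: 10 GW of state-owned plants at 53% PLF versus the 70% central-PSU level
print(round(forgone_generation_gwh(10, 0.53, 0.70)))  # → 14892
```

Nearly 15,000 GWh a year from just 10 GW of underperforming capacity illustrates why the 25,000 MW of idle and underutilised plant is so costly.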
The other speakers in the session were Sterlite Grid Vice-Chairman Pratik Agarwal and ICRA Senior Vice-President Sanyasachi Majumdar.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-1", "d_text": "This key feature, when read with the rights of users of the power systems per various sections of the Electricity Act, 2003, transformed the ways of working of the sector, allowed for third-party network use, power trading and eventually organised trading platforms in the form of power exchanges in 2008. It also forced the expansion of the grid and its integrated operations through binding codes, especially the Indian Electricity Grid Code.\nShorn of the details, the Electricity Act, 2003 fundamentally provided for two critical freedoms – the freedom to purchase and the freedom to sell electricity. These freedoms were backed by an entire set of provisions relating to the delicensing of generation, elimination of cross-subsidy surcharges, dedicated transmission lines, definition of a captive generating station, elimination of conflict of interest through ownership restrictions, etc., all underpinned by open access. The limitations imposed on transmission companies, in particular, were noteworthy since they are essentially the purveyors and guarantors of open access. Transmission companies were prohibited from trading in electricity.\nIt has often been argued that the progress in the implementation of the act has fallen well short of the objectives. It is tempting to conclude so, especially since many parts of the sector remain inefficient and unresponsive to consumer needs and also to independent regulation. The finances of the sector remain precarious despite several bailouts, tariff distortions are rife and service standards are blatantly violated or entirely ignored. However, such a conclusion would really undermine the gains made in the past two decades. 
The power system’s installed capacity base is much larger (5x today as compared to 1998) and much more robust now. The national transmission grid is amongst the largest frequency-integrated grids in the world. The system allows for handling a large amount of renewable energy. The present renewable energy capacity in megawatt terms is larger than the entire electricity grid capacity in the year 2003. This is no mean feat since absorbing large amounts of intermittent renewable energy is an exceedingly challenging task. Over time, the competitive power markets have become larger and more sophisticated, with a vast variety of market-traded products. Indian power system operations are extremely well regarded across the world. The last major grid failure in India occurred in 2012. Frequency excursions, which used to be quite common, are a thing of the past, with grid frequencies now managed in a very narrow band.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-14", "d_text": "We are yet to start the implementation of part B. For part B, the tendering process is underway and, once it is complete, we will be able to start implementation. The total project cost under part B for Dakshin Haryana is around Rs 387 crore and tenders are likely to get awarded in the month of June.\nMeanwhile, the Central government last year came out with the integrated power development scheme (IPDS). How many circles have been considered and what sort of opportunity will be there for private players?\nUnder IPDS we have already submitted need assessment documents to the Ministry of Power.\nSince it is at an initial stage, I will not be able to reveal much information about it. But the main emphasis is on the strengthening of distribution infrastructure. Importantly, the need assessment document study is based on the priority of SEBs where they want to implement distribution infrastructure on a priority basis. 
It means that in the submitted documents there are circles or towns where the requirement for distribution infrastructure is greater than in other towns.\nAccording to you, what are the challenges a state government must be facing while implementing such an ambitious programme?\nThe underlying challenge while implementing IT infrastructure under part A is GIS mapping.\nThe GIS mapping includes online availability of the entire network and assets (graphical view), geo-referencing each asset, a unique identity for each asset, identifying the location of each consumer, unique pole IDs, network management and online tracking of changes in the network.\nSince the GIS is the backbone of working out the exact AT&C losses and it also requires integration, the implementation takes a lot of time in terms of completion and building up a reliable and accurate data model.\nThe second challenge is maintaining the DT metering system. Because distribution transformers are all set up in the field areas, you can focus the AMR on high-value consumers as well as on feeders. But maintaining the continuity of data from the DT metering system, which creates micro-level data, is a tough task. The third challenge is the capacity of people to understand the system integration. 
It means we are currently facing a crunch of skilled manpower who understand the entire IT infrastructure for power.\nThe only constraint with this programme is that it has not been completed within the defined timeline; hence, every utility has proactively approached the PFC and MoP seeking an extension of the timeline.", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-0", "d_text": "India has suffered yet more blackouts, this time leaving more than 700 million people without power, and raising serious doubts about the country’s failing infrastructure and the government’s ability to meet the increasing energy demand as it pursues ambitions to become an Asian superpower.\n20 of India’s 28 states were left without electricity, including the capital New Delhi, when three out of the nation’s five grids went down.\nHundreds of trains ground to a halt, leaving passengers stranded along tracks from Kashmir to Nagaland.\nTraffic lights went out, causing gridlock in New Delhi, Kolkata, and several other major cities.\nNurses in Delhi were left to operate life-saving equipment by hand when the back-up generators also failed.\nIn West Bengal, hundreds of miners were trapped underground for hours when lifts stopped working.\nThe northern grid failed first, quickly followed by the eastern and north-eastern grids, leaving an estimated 710 million in the dark.\nSushil Kumar Shinde, the Indian power minister, blamed the failures on the states’ individual electricity operators being too greedy and drawing more than their allotted share from the grids. “Everyone overdraws from the grid. Just this morning I held a meeting with power officials from the states and I gave directions that states that overdraw should be punished. We have given instructions that their power supply could be cut.”\nPublic morale is low at the moment following several consecutive blackouts; any more could lead to protests or even riots.\nBy. 
Joao Peixe of Oilprice.com", "score": 8.086131989696522, "rank": 100}]} {"qid": 30, "question_text": "What are some differences between Indian and Mexican chili peppers in cooking?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "Indian cuisine often attempts to blend a few of the six different tastes, which are also called Rasas – sweet, sour, salty, hot, bitter and astringent – into a single dish. As an Indian restaurant in Edinburgh, we know that a way of accomplishing this is by using chilli peppers to achieve the desired result.\nAnd although chilli peppers are used around the world in a lot of different cuisines, they have become a staple of Indian cooking. A large variety of chillies are used in hundreds of dishes, each representing a different level of flavour and heat, and they can be found in nearly all Indian recipes, from chutneys and curries to pickles and cold drinks.\nKashmiri Chilli Peppers\nOne of the chillies used in Indian food is the Kashmiri, which are easy to distinguish from others because of their long, deep red colour and wrinkled texture. If dried, the chilli looks even darker. Kashmiri is great to add a bright orange colour to dishes, as well as offering a low to mild level of heat, and it also has a distinctive flavour.\nGreen Chilli Peppers\nThese types of chilli peppers are usually macerated or puréed and added to chutneys or stews, although they can also be eaten raw, since they provide a strong, herbal flavour to the dish. This pepper can be added to hot oil to infuse it with heat and flavour, either whole or segmented, depending on the amount of heat you prefer.\nGundu Chilli Peppers\nThese chillies look like dried cherry tomatoes. They’re round and small, and with shiny skin, and they’re commonly used in recipes in the south of India, especially in sambar. They can also have quite a lot of heat.\nReshampatti Chilli Peppers\nThese short and broad chillies have thick skin and are of medium heat. 
Although it’s more popular on the west coast, it’s used for most types of food in India, including to make stuffed pickles. Its colour is usually dark maroon, although that doesn’t seep into the food, so it’s not the best choice if you want a lot of colour in your dish.\nChilli peppers can spice up any recipe, and if you’re curious to know how, why not check us out? Our menu is vast and it will surely leave you wanting more!", "score": 51.705109232020355, "rank": 1}, {"document_id": "doc-::chunk-2", "d_text": "Mexican cooking has incorporated some Old World spices like cinnamon and cloves, but its heart is clearly in its native chiles, used in different combinations and ways in different dishes. These chiles don’t just add flavour, but also colour, like the rich deep red of anchos or the dark chocolaty red of pasillas. They also give texture to dishes because the skins of varieties like anchos contain pectin, the natural gelling agent that helps jams to set, which gives sauces based on these chiles a wonderful smoothness without the gluey texture of thickeners like cornflour.\nTo cook Mexican food these chiles are imperative, yet it is hard to make Indian cooks understand this. For us chiles are to be used for heat only, while other flavours are added with other spices. I am willing to bet that almost none of the restaurants here that serve rajma covered with cheese and served with corn chips and call it ‘Mexican’ food have any idea of these different types of chiles. 
The few places that have tried fine dining with Mexican cuisine — like the Taj President’s El Mexicana in Mumbai — never last, because we find the food too similar to Indian food and the subtleties of the chiles, instead of being appreciated for themselves, are just seen as a further sign of its shortcomings.\nA few importers are now bringing in authentic Mexican chiles, along with readymade salsas and ingredients like canned tomatillos, tart green fruit that look like tomatoes, but add a very distinctive acid bite to Mexican salsas. Recently for a party I made five different salsas, each from a different chile, and they were a huge hit, with everyone exclaiming at the difference in flavours. Watching my friends gobble down Indian khakras loaded with Mexican salsas made me wonder if there might be some hope for Mexican chiles in India after all.", "score": 49.999305892884124, "rank": 2}, {"document_id": "doc-::chunk-2", "d_text": "Deep red in color, with an intense aroma, but not very hot. This red chili originated from Karnataka.\n- Dhani (bird’s eye chili) – This is an intensely hot and prickly-tasted type of chili coming from Manipur.\nAs you see, South Indian cuisine has an everlasting bond with chili. In fact, they are often obsessed with the color, aroma, and spiciness they add to the food. In South India, Guntur is the common and most used variety of red chili. This type gives out a similar flavor to Arbol Chili or cayenne peppers. At the same time, Byagadi gives a similar taste to paprika.\nIn addition to these red chilies, Indians also use green chili, which has a slightly different taste and aroma and is green in color. Green chilies are usually used raw in food or included in the curry pastes and chutneys they make. However, they use red chili dried, whole, crushed, or powdered. 
Just see if you come across any of these popular South Indian recipes made using chili:\n- Red Chili Coconut Chutney\n- South Indian Chili Chicken\n- Authentic South Indian Biryani\n- Chettinad Egg Curry\n- Dry Red Chili and Tomato Chutney\n03 – Turmeric (Haldi/ Manjal)\nTurmeric is a fundamental ingredient in South Indian cuisine. In North India, turmeric is known as “Haldi,” and in the South, it is popularly known as “Manjal”. Mainly there are two types of turmeric used in India.\nThe ordinary turmeric we all know is basically used as a flavor and a color enhancer in cooking. The other type, “Kasturi Manjal”, designated by the scientific term Curcuma aromatica, is widely employed as an Ayurvedic, herbal, or beauty constituent in South India.\nMoreover, in southern India, the dried turmeric rhizome is usually worn as an ornament for protection against evil. They believe it has the ability to bring them good fortune and contains healing properties. It has a pleasant earthy fragrance, and raw turmeric has a trace of bitter and spicy taste.\nSouth Indians often use it in its powdered form or as a paste to include in foods. Most of the time, they add it in with vegetables and lentils to give those dishes a bright golden tint and taste.", "score": 49.975377805180344, "rank": 3}, {"document_id": "doc-::chunk-1", "d_text": "Large fleshy poblanos become anchos when dried, but some become a slightly different dried chile called a mulato. Long chilaca chiles become dramatically dark dried pasilla or negro chiles, while mirasols become large smooth dried guajillo chiles.\nAll these chiles might seem to be a link with India, but in fact their variety shows up the difference. We grow and use many kinds of chiles, but in practice most markets stock only around three kinds. There will be a mild red one used for colouring, a larger, hotter one for pickles and then a dynamite one which is only used to add heat to food. 
Mexican chiles can certainly match ours for heat. For a truly incendiary experience try the chipotles cooked in vinegary adobo sauce and canned under brand names like La Morena, which I’ve found in Indian shops.\nMost hot Indian chillies explode on first contact and leave your lips and tongue burning, but chipotles wait till you swallow and then sear their way up the back of your mouth and through your palate and nose until you are gasping in agony. Yet even as you are doing this, if you can stop gasping for a second, you will notice interesting side notes that you rarely seem to find with Indian chiles. Chipotles have a woody, smoky, even slightly sweet quality that lingers long after the first heat has faded.\nBut the real impact of these flavours is best felt with milder chiles where the heat is not hogging centre stage. Anchos, the wide, heart-shaped dried poblanos, have wonderfully fruity flavours. Cascabels, the round chiles filled with seeds that rattle when you shake them (cascabel means a child’s rattle), have a rich nutty flavour. Guajillos are earthy, with slightly metallic overtones. But perhaps the most amazing for me are the long dark pasillas, which have a sharp citrusy note that softens in cooking to a wonderfully lingering winey taste. Pasillas have one of those flavours that you wonder if you really like, until suddenly you realise that you don’t just like it, you are totally hooked on it!\nWhat you realise when you use them in Mexican recipes is that these chiles take the role of the many spices of Indian cooking.", "score": 44.27470847071599, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "Chili powder is hot and is used all around the world to spice up food and stimulate taste buds; it is also used to give special color and aroma to cuisines. Chili or chili pepper is native to Central America and has been used in Mexican cuisines for centuries. 
The chili plant is a small perennial shrub that grows up to a meter in height; it bears white flowers, and its fruits can be fleshy, mild Mexican bell peppers or the finger-like, green variety commonly grown all over India. The fruit is either plucked when green or allowed to dry; it is either used fresh as green chili or dried and ground to make powder. When dried, green chili turns red; however, chili is found in many attractive colors like yellow, mustard and black.\nCulinary Uses of Chili Powder\nRed chili powder or pepper can be used in any cuisine except sweets to enhance taste and increase heat. Chili in powdered form is used with curd or yoghurt, sprinkled over fried potato dishes, etc.\nChili sauces are an excellent way to enhance the taste and heat of food: take habanero or serrano chilies, add two tablespoons of lime juice and one tablespoon of orange juice, mince together and serve. It works as a smacking taste enhancer.\nAdd chili powder to soups to enhance their taste and nutritional value; one very tasty dish is barley and bean soup. Use olive oil to cook, add garlic and red chili powder, stir to let them combine evenly, and add barley, beans and water to make a delicious and nutritious soup.\nRed chili powder can add extra spice and heat to tomato sauce. Add some chili powder to minced tomatoes and make a sauce for improved taste.\nChili paste will help a lot while cooking and can be prepared quite easily. Take dried red chilies, break them into pieces, and boil for 30 minutes with water to make a paste. This paste stays fresh for 10 days in a refrigerator and can be topped on crackers, fried potatoes, vegetables and gravies, and used as a meat rub.\nChili-infused oils are endlessly useful; they can be used for sautéing food, in salad dressings, and with vegetable marinades and meat. 
Cooking in chili-infused oil is the easiest way to enhance the taste of food.\nChili-infused vinegar is a smacking substitute for plain vinegar; it can be used for pickling, in dressings, as a vegetable topper and with bread.", "score": 42.45083800847165, "rank": 5}, {"document_id": "doc-::chunk-1", "d_text": "Serrano chillies are frequently used to flavor salsas or as a garnish when served pickled.\nThe dried version of the fresh chilaca pepper, known as pasilla (literally, “small raisin”), acquired its name from the dark and wrinkled skin that develops due to dryness. It’s frequently used in sauces to go with meat or fish due to its rich, sweet flavor. The wonderful Oaxacan pasilla, which is mildly smoked, can serve as the foundation for mole.\nThe jalapeno is without a doubt a national and international favorite, making up around 30% of Mexico’s production of chili. This is most likely because of its adaptability: jalapenos can be stuffed, pickled, smoked, fried, or even jellied. They’re usually served whole and mildly grilled as a great street taco side, or pickled and chopped over nachos.\nUnripe green habaneros change color as they ripen. Orange and red ones are common habaneros. However, white, pink, and brown are also available. Due to its extreme heat, this chile is frequently used in most Mexican cuisine. Be extremely cautious when preparing them!\nAt Benny’s Tacos, we have a lot of expertise using these and other Mexican spices and peppers. We can change the heat of your dish to your preferred level. Visit us today if you’re looking for traditional Mexican food! We take pride in providing you with fresh Mexican meals.", "score": 41.9369224061427, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "Black Pepper: Before the introduction of chilies to India, black pepper was relied upon to provide Indian food with its bite. 
Native to India, it's still a staple in Indian cuisine, and you'll find it in most savory dishes.\nCayenne: Use cayenne to add warmth and the color of fire to any savory Indian dish or spice blend. You may see it listed as red chili powder blend in Indian recipes. Use it sparingly at first! Place a shaker of cayenne on the table for those who like to pump up the heat in their meals.\nChilies: These pungent peppers are used to enliven meats, vegetables, pulses (peas, beans, and lentils), and dressings. They're found in curries, especially in South India. You'll want to experiment with different varieties, for different effects.\nCilantro: Used both as a garnish and flavoring in Indian foods, some say cilantro is an acquired taste. If so, Indian cuisine can coax you. You'll find it in salads, sauces, soups, and many other dishes. Its flavor is potent, but it diminishes some with cooking.\nCinnamon: Delicious in coconut milk and with fruits and other beverages and desserts, cinnamon is also used to add flavor and aroma to meat and rice dishes and chutneys. Cinnamon is a key ingredient in the Indian spice blend Garam Masala. When using whole cinnamon sticks in your Indian recipe, be sure to remove them before serving.\nCloves: Indian cooks often use whole cloves for flavor and aroma, with savory foods like meat, and with sweets, too. Many recipes call for whole rather than ground cloves; remember to remove the whole cloves before serving your dish. Indians also use roasted and ground cloves, and they suck on whole cloves to freshen the breath.\nCoriander: Indian cooks use coriander to add sweetness to recipes, especially in Southern India. You'll find its cooling, slightly lemony taste in chicken, egg, and meat dishes, as well as desserts.\nCumin: A staple in North India, cumin is used for its strong flavor; it often plays against the heat of chilies. Use ground cumin or fry the whole seeds in oil. 
They're also often roasted and ground.", "score": 41.81922192418323, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "These chilies are typically used in India and Pakistan. They are similar in appearance to Scotch bonnet peppers. These peppers are very hot. Use judiciously. 55,000 to 65,000 Scoville units.\nThese peppers are the workhorse of the Mexican kitchen. They can be a little spicy but not too hot. They are very rich and smoky. These chillies are readily available in Latin markets and are a good choice for your chili making requirements. The best way to use these peppers is to soak them in boiling water: remove from the heat and let them soak for an hour or so. Remove the stems and seeds and process the skins in a blender or food processor, then run the pulp through a strainer. You will get some beautiful pulp-free red sauce that you may adapt for different dishes. 6,000 Scoville units.\nTien Tsin Peppers:\nThese peppers are also traditionally used in Asian cooking and are very hot. You can add them whole to soup or any stir fry dish where you want a spicy flavor. They may also be used to make Asian chili oil. Just heat 2 tbs of peanut oil. When very hot add about 10 peppers, fry until brown. (3 to 5 minutes) Remove from heat and add 1/2 cup of peanut oil. Let cool, then place in an air tight jar. If you like really spicy oil leave the peppers in the oil. This oil is great drizzled or mixed with soy sauce to make a hot dipping sauce. Use 1/3 cup soy sauce, 1 tbs of chili oil, 1 clove garlic minced and 1 tsp ginger. Mix well and dip. 60,000 Scoville units.\nIn Mexico alone there are over 100 varieties. The hottest peppers are the habanero or Scotch bonnet; handle these with care. And last but not least is the Indian Ghost pepper. If I were you I wouldn't even mess with this, as it's at least 20 times hotter than the habanero. 
Why do you think they call it Ghost?", "score": 40.528483518102696, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "All about the Pepper\nThere are many different types of peppers used for Mexican recipes. See the list of different types of peppers and chiles from around the world. Substituting one type of pepper for another can be done in some recipes. Certain peppers can create a dish that is way too spicy or hot for some people. Always check with your guests or family first about whether they can tolerate hot foods.\nSearching Spicy Food Destinations Around the World\nBy Chen Fen\n(1) Hot Mexico\nMexico is the birthplace of peppers. About half of the peppers around the world are grown in Mexico, in all kinds of varieties, such as red, yellow, blue and green. Mexico is not only the No.1 chili country in the world, but it also owns the second hottest peppers in the world. There are more than 100 kinds of chilies with different names, fresh or dried. There are also numerous processing methods, and they are used for bacon, soup, barbecue, salads, sweets, drinks and everything you can imagine.\nAuthentic Mexican food is mainly made of peppers and tomatoes, with tastes of sweet, spicy and sour. Nearly 90% of the sauces are made of peppers and tomatoes.\nRecommendation: Cholula Hot Sauce\nCholula hot sauce is one of the most addictive sauces in the world and it is a major brand in Mexico. The main ingredients are red peppers and various spices. Most Mexican food is matched with Cholula hot sauce and has become popular around the world.\n(2) Spicy Argentina\nArgentineans mainly grill beef and they enjoy it. Whether in the capital Buenos Aires, or in small towns of the mainland, rotisseries can be seen everywhere. Many homes are equipped with stoves or barbecue grills, and even in rural areas, some parks or forests provide you with barbecues. Vivid black or brown beef samples stand at the door of large-scale barbecue shops in the downtown area. 
Inside large French windows, you can see a few pounds of beef piled up around flowers, and a barbecue chef wearing \"Gaucho\" clothing: black hat, doublet, leg-embroidered breeches, and a red scarf tied around the neck. Some shops put ovens in prominent positions so that customers can better enjoy the food while listening to the barbecue sounds and looking at the red charcoal fire.\nWhen eating roast beef, you can choose different seasonings according to your own taste.", "score": 39.178510659742074, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Jalapeno pepper varieties (Capsicum annuum longum group) originated in Mexico where the peppers were used for religious ceremonies and to spice up food. Most jalapeno peppers rate 3,500 to 8,000 units on the Scoville scale, which measures hotness in peppers. Bell peppers place 0 units on the scale, while habanero peppers rate at 60,000 units. 
Capsaicin, a chemical compound found in peppers, determines the heat of a pepper. Remove the seeds and ribs to cool off the peppers. Different varieties of jalapeno peppers offer a variety of hot culinary experiences.\nMild jalapeno peppers put just a little spice into the food they are cooked with. The mildest jalapeno pepper is the Senorita Jalapeno, which is only 1/10th as hot as a normal jalapeno. This pepper is rated at 400 units on the Scoville scale. Commercially, Senorita is used in burritos, tamales and for cream cheese-filled fried jalapenos. When the Senorita pepper is smoked, it is called a chipotle chili. Another mild jalapeno is Tam Mild Jalapeno 2, which rates at 1,000 units. This variety produces 3-inch-long peppers that are crack resistant. The plant is resistant to tobacco mosaic virus, tobacco etch virus, pepper mottle virus and potato virus Y.\nFor equal amounts of heat and flavor, try a variety of jalapeno pepper considered to be medium on the hotness scale. The NuMex Heritage Big Jim jalapeno pepper variety produces a 9-inch-long pepper with a Scoville rating of 2,000 to 4,000 units. This variety spawned one of the largest chili peppers grown, which measured more than 12 inches long. NuMex Big Jim, which was the original variety, produces large peppers, but the hotness level of the pepper depends on where the pepper is located on the plant. The lower branches grow hotter peppers. NuMex Espanola Improved is an early maturing variety of medium hotness with 5- to 6-inch-long peppers.\nHot jalapeno peppers commonly make your eyes water and nose run as part of the reaction to the capsaicin in the pepper. Early Jalapeno is a variety that matures two weeks before the other jalapeno varieties.", "score": 37.77079288657148, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "I’ve always been fascinated by the parallels between classic Indian cuisine and regional Mexican. 
Onion, garlic, cumin, chile, cilantro, cinnamon, cloves, tomatoes… they all speak Spanish and Hindi. The similarities also involve preparation techniques: the roasting of spices, the layering of ingredients in a sauce. Aloo Channa cooked with onions and tomatoes is typical Northern Indian. The guajillos, corn, and chocolate are purely Mexican. Most of the labor goes into the guajillo sauce, which is cooked in a way very similar to curry pastes. I’d also guess that this would work with chunks of cooked birdie substituting for the potatoes.\n2 tbs olive oil\n1 medium yellow onion\n2 tomatoes, peeled, seeded, cubed\n1 small sweet onion (Maui, Vidalia, Walla-Walla)\n1-1/2 cup Guajillo sauce (see below)\n1-2 tbs sweet Mexican chocolate, crumbled (Abuelita is a common brand)\n1-2/3 c cooked garbanzo beans\n2 potatoes (preferably Yukon Gold or red waxy), boiled\n1 c corn kernels\n2-3 tbs chopped cilantro\nTo a hot saute pan, add the olive oil and onions. Cook, stirring frequently, until onions are medium-light brown. Then add the tomatoes and continue stirring and cooking until they wilt and brown slightly. Stir in the guajillo sauce, bring to a simmer. Then add the chocolate, and stir until it is well-dispersed. Add the vegetables, then simmer, covered, for about 5 minutes. Adjust salt, remove from heat, and stir in cilantro.\nGarnish with strips of roasted green chile and cilantro leaves. Serve with rice and home-made tortillas.\nGuajillo Sauce – Taken from “Rick Bayless’s Mexican Kitchen”, a great cookbook.\n6 garlic cloves, unpeeled\n4 oz dried guajillos\n1 tsp dried oregano\n1/4 tsp black pepper\n1/4 tsp cumin\n3-1/2 c stock\n1-1/2 tbs olive oil\nOn a griddle, roast the garlic cloves on all sides until they are soft and there are black spots on the skin. 
Put aside.", "score": 37.094595258182075, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "COMMON USES OF POBLANO PEPPERS\nIn preparation, they are commonly dried, coated and fried, stuffed, or used in mole sauces. Also, they are often roasted and peeled to remove the waxy texture, and preserved by canning or freezing. They are also dried and sold as Ancho Peppers, which are also extremely popular and form the base for many sauces and other recipes.\nThe poblano pepper is a popular Mexican chili pepper, very dark green in color, ripening to dark red or brown. They are mild, large and are heart-shaped. Learn all about them here.\nScoville Heat Units: 1,000 – 2,000\nThey are mild peppers, quite large and are somewhat heart-shaped. Their skins/walls are somewhat thick, making them perfect for stuffing as they’ll hold up in the oven quite nicely. They are often roasted and peeled when cooking with them, or dried. When dried, they are called ancho chilis.\nPoblanos originated in Puebla, Mexico. They are one of the most popular peppers grown there. The poblano plant is multi-stemmed and can reach up to 25 inches high. The pods grow 3-6 inches long and 2-3 inches wide.\nInfo from https://www.chilipeppermadness.com/chili-pepper-types/sweet-mild-chili-peppers/poblano-chili-peppers/", "score": 36.42285714269231, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "Poblano peppers are a mildly hot pepper. They are picked green and are larger than a jalapeño but smaller than a bell pepper. They are most commonly known as the pepper in chiles rellenos (stuffed peppers with cheese filling and egg batter). They are also used in mole sauces. While considered a milder pepper, they can still be hot. Scoop out the seeds and ribbing if you do not like spicy food.", "score": 35.389320500921215, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "Ancho means ‘wide’ and is one of the largest chiles. They are mild, sweet, with hints of raisin and plum. 
It is one of the most commonly used chiles in Mexico and is a basic ingredient in countless soups, moles, and sauces. The ancho, along with the mulato and pasilla, forms the “holy trinity” of chiles used to make traditional mole sauces. Ancho chile measures 1,000–2,000 Scoville units.\nHow to use ancho chile\nWhen cooking with ancho chile peppers, it is essential to understand that although they have moderately mild heat, they still have a kick. On the Scoville scale, ancho peppers measure between 1,000 and 2,000, compared to a bell pepper which measures 0 and the jalapeño which measures 5,000. You know where you stand with the ancho pepper, and how much to use. The ancho peppers are normally re-hydrated before use. Place them in hot water for up to 30 minutes, and they are ready to use. Make sure that you seed and stem them first; then they are ready to season sauces, stews, soups, salsa, and any food that you want to add some spice to. Learning how to use ancho chile is as easy as adding any other spices to your food when you want a mild to moderate heat content.\nOther uses for ancho peppers\nWhen someone asks what ancho chile peppers are, a good answer would be that they are the seasoning used for chili powder that gives chili that awesome flavor. The peppers are ground while they are still dried to make the chili powder, and can be used in many other recipes besides chili. People in the southwestern United States are not the only ones who enjoy the spicy foods often prepared in Mexican cuisine. The ancho peppers have made their way into the flavoring of foods that are often used in many other cultures.\nTry ours today.", "score": 34.70560078609095, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "or chile negro, is one of the most distinctive of all the peppers. Its rich brown color portends its earthy, chocolatey flavor. They're called chilaca when fresh and pasilla or chile negro when smoked or dried.\nThese are our favorite peppers for cooking.
They have an unmistakable flavor unlike any other pepper. Pasilla is one of the three peppers used in mole (MOH-lay) sauce, the famous Mexican sauce that can take days to prepare. The different regions of Mexico all have their own mole recipes, but the moles from Oaxaca are the most well known. Most include raisins, onions, garlic, tomatoes, cinnamon, cloves, various nuts and seeds, tortillas, pasilla, guajillo and poblano chiles, and chocolate!", "score": 33.19826742799001, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "Mexican chiles play a significant role in Mexican culture and are used in a staggering diversity of dishes all around the nation. You can find Mexican chilies in practically all of the meals, no matter where you go in Mexico.\nIn fact, we would say that the most important component of real Mexican cuisine is Mexican chilies. Do you want to learn about the amazing world of Mexican chilies? Let’s learn about the top types of Mexican chilies.\nWhat exactly are chillies?\nWhether you refer to them as chillies, peppers, or chiles, you are simply talking about the fruit of a plant called a capsicum, which is a member of the wider Solanaceae family, generally known as nightshades. Sure, the chilli you are holding is a relative of aubergines, tomatoes, and potatoes.\nHere are the common Mexican chillies you should learn about:\nThe ancho chili is basically a dried poblano pepper. If a once-large, vibrant green chile is allowed to mature and then dried for a few days, it shrinks and turns a dark reddish-brown color (sometimes almost black). This Mexican pepper is able to acquire a delectably sweet, fruity flavor due to the late picking and drying-out process. They are ideal for chopping and using as the foundation for a tasty mole, which is a sweet-spicy sauce made with fruit, nuts, chocolate, and spices.
You can also use it to make an enchilada salsa.\nThe poblano, a sizable green chili with Puebla State origins, is frequently used to make chile rellenos, which are chiles filled with cheese and meat and occasionally served with a hot tomato-based sauce. Poblano chilies are interesting since you never know exactly what you’re going to get. Although most are rather moderate, occasionally you can encounter some true eye-waterers, so take precautions when using them!\nThe Serrano pepper, which is native to Hidalgo and Puebla, is a meaty green chili. With a length of between 3cm and 10cm, it is frequently mistaken for a jalapeño. Although the amount of spice they contain can vary greatly depending on how they are prepared and how soon they are picked (some can even turn yellow, red, brown, or orange if overripe), they are generally thought to have a pleasant medium heat.", "score": 33.19177915503628, "rank": 17}, {"document_id": "doc-::chunk-2", "d_text": "Mexican food has a reputation for being exceptionally hot, but there are many different flavors and spices used in it that aren’t all hot. Subtle flavors can be discovered in many dishes. Chiles are native to Mexico, where they have been consumed for a very long time. Mexico grows the greatest variety, and they are used for their flavors as well as their heat. Chili pepper is often added to fresh fruit and sweets, and hot sauce is usually included if chile pepper is absent from a savory dish or treat. Mexico is renowned for its street markets, where you can discover a wide range of fantastical items. Every street market has a different food section that showcases regional food.
If you ever take a trip to this nation, you need to eat at a street market; if you don't, you will regret it.", "score": 32.64943587622482, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "Chili peppers, a spicy fruit featured in cuisines around the world, were used in Mexico long before going global, as was the agave-derived distilled drink tequila. This week, the Museum’s Adventures in the Global Kitchen series presents Tequila and Chilies, which will include a conversation with Juan Carlos Aguirre, the executive director of Mano a Mano: Mexican Culture Without Borders. Aguirre, who will be providing samplings of chili-based dishes from across Mexico alongside tequila from Richard Sandoval Restaurants, recently offered a quick history lesson about the ubiquitous chili pepper.\nHow long has the chili pepper been an integral ingredient in Mexican cooking?\nChili peppers have been used in Mexican cuisine for thousands of years. They were one of the first plants in the region to be “domesticated.”\nHow did the chili pepper spread throughout the world?\nChili peppers are originally from the Americas and were incorporated into different cuisines around the world after the continent was discovered by Europeans. The Philippines and Mexico were once part of the Spanish Empire, and the chili pepper was exported from Mexico to the Philippines and then all over Asia. So its use in Indian, Chinese, Malaysian, and other Asian cuisines actually comes from the Americas.\nWhat’s one of the more surprising ways in which chili peppers have been used?\nBefore Mexico was conquered, chocolate was the drink of Aztec royalty, and the chocolate had chili pepper in it. Europeans took the chocolate to Europe and added milk and other ingredients to make it sweet.\nWhat is the mission of your organization, Mano a Mano: Mexican Culture Without Borders?\nThe organization was founded 12 years ago to celebrate and preserve Mexican traditions and pass them on to a new generation.
We do different workshops, crafts, music, visual arts, and culinary arts. The idea is that through the arts, people will learn about Mexico in a more authentic way.", "score": 32.14234402331883, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "Indian people eat spicy foods because it is in our culture. India has always been known as the ‘Land of Spices’, and for a long time, lots of spices have been incorporated into our regional delicacies.\nWas Indian food always spicy?\nIndian food was always spicy, including chilli heat as well as flavoring and aromatic spices. Tropical regions of the world natively provide the right climate for spices to grow. … So hot spicy peppers are natively part of Indian vegetation way before Europeans discovered them and spread them around the world.\nDo only humans eat spicy food?\nA recent study found that these tree shrews are the only mammal aside from humans known to deliberately seek out spicy foods. Researchers in China found a mutation in the species’ ion channel receptor, TRPV1, that makes it less sensitive to capsaicin, the “hot” chemical in chili peppers.\nCan spicy food affect your mood?\nOutside of a long list of ways spicy food may promote good health, Self found that recipes containing plenty of hot spices can actually promote mental wellness. Hormones such as serotonin may reduce depression or stress, and eating spicy food boosts the body’s natural production of these beneficial substances.\nWhat does it mean if I like spicy food?\nWhen you eat foods with capsaicin, like chili peppers, certain receptors in your mouth pop off, and that tricks your brain into thinking that your mouth is on fire. As part of your response to this stress, your body will produce endorphins, to help stem the pain of these transmissions.\nIs Indian food spicier than Korean?\nIndian food is very spice forward, and often uses chilis as well.
They tend to use fresh ingredients blended with many different spices mixed together, and cooked to get rid of the raw spice flavor. Korean food, on the other hand, is usually spicy due to its use of chilis, and less so of other spices.\nWhat is the most spicy Indian food?\nThe Top 5 Spiciest Indian Dishes\n- #1 Pork Vindaloo. Vindaloo originates from Goa, India. …\n- #2 Phaal Curry. Phaal is an Indian curry dish which originated in Indian restaurants in Birmingham, UK, and is not to be confused with the char-grilled, gravy-less, finger food phall from Bangalore. …\n- #3 Laal Maas. …\n- #4 Chicken Chettinad.", "score": 31.858386932549795, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "A spicy meal always makes pepper and spice lovers feel good! Scientists are beginning to suspect that a natural high is produced similar to the one after intense exercise. The euphoria is a result of the brain inducing the secretion of endorphins. Chiles contain more vitamin C than nearly any other fruit or vegetable. Another quality of spices and peppers is that they preserve the shelf-life of many types of foods.\nTo avoid confusing pod peppers with other types of peppers, the Mexican word “chile” will be used, though it also is a type of pod pepper. The American plural is spelled “Chillies”, but in Mexico it is “chiles”. Mexicans do not add the word “pepper” after the word “chile” or after the variety (e.g. jalapeno).\nHot or Not?\nThere are basically two classes of peppers: capsicum and non-capsicum, or pod and berry peppers. Capsicum peppers are pod peppers, and vary in size, shape and degree of hotness or amount of capsaicin, with the hottest having the greatest amount. It is these chiles that the Pepper Lovers Club members are so fascinated with. Capsaicin is derived from the Greek word “to bite”.\nThere are five domesticated varieties: Annuum, Frutescens, Chinense, Pubescens, and Baccatum. There are many undomesticated varieties.
Non-capsicum peppers are berry peppers and are unrelated to the capsicum pepper.\nPut out the Fire!\nCapsaicin is the chemical that ignites the fire. Over 80% of the heat in the chile is contained in its veins, placenta and adjoining seeds. This is an oily substance, therefore, it is not water soluble (ergo: ice-water will not wash away any pain). However, as long as you hold your hands under cool running water, the pain is relieved somewhat, but returns once the water stops. Using water or other clear liquids such as soda or lager beer has been likened to putting out a fire with lighter fluid as it only assists in spreading the capsaicin.", "score": 31.795990431911743, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "Organic Chipotle Chili Pepper 10K H.U.\nLike other chili peppers, chipotle chilis come from the Capsicum annuum species and Solanaceae (nightshade) family. It is believed that the Aztecs in northern Mexico were the first to smoke jalapeños.\nChipotle powder’s distinctive earthy and smoky flavor is a popular addition to BBQ rubs, sauces, and salsas. It can also be used to spice up potato salad, beans, and condiments.\nFlavor profile & uses\nChipotle powder is earthy, and has a mild to medium spiciness with a distinctive smoky flavor. 
Chipotle Pepper Powder can be used in a wide variety of recipes, and is often a key ingredient in Mexican cuisine and Tex-Mex dishes.\n• It can be used in barbecue rubs or sauces to give them that smoky flavor, which can be rubbed or brushed onto steaks, chops, chicken, and other cuts of meat that will go on the grill.\n• Often used as the main pepper for adobo and salsa recipes.\n• It is sometimes added to condiments such as mayonnaise or ketchup to spice them up.\n• It offers a spicy kick to potato salads, and is stunning when added to slow-cooked pinto beans that will cook down to refried beans.\n• Some people even enjoy balancing sweet and spicy flavors in desserts, and a chocolate brownie or cupcakes could easily benefit from the addition of a little chipotle powder.\nHow is the heat determined for Cayenne & Chili Peppers?\nThe Scoville scale is a measurement of the spicy heat of cayenne and chili peppers, as reported in Scoville heat units (S.H.U. or H.U.), due to the capsaicin concentration. The substances that give the peppers their intensity when ingested are capsaicin and the related compounds known as capsaicinoids. The greater the amount of these compounds in the chili, the hotter it is.\nCapsicum, red pepper, hot pepper\nExcessive dose may cause gastrointestinal irritation or heartburn, or exacerbate gastroesophageal reflux. Cayenne preparations irritate the mucous membranes and injured or broken skin.\nCalifornia Proposition 65\nConsuming this product can expose you to lead, which is known to the State of California to cause birth defects or other reproductive harm.
For more information go to www.P65Warnings.ca.gov/food.", "score": 31.4767261590633, "rank": 22}, {"document_id": "doc-::chunk-34", "d_text": "It was a tasty news morsel ten years ago: “Tezpur Chili hottest in the world.” Four Indian scientists had found a chili pepper in northeast India with a rating of 855,000 Scoville units — a forest fire of piquancy compared to the feeble flame of the nearest contender, the Mexican Red Savina habanero (a mere 577,000 Scoville units). The Scoville rating measures the presence of capsaicin, a compound that binds with pain receptors ordinarily triggered by heat and abrasion.\nThe news was delicious, and not just as a pick-me-up for a nation still hungry for global recognition. It seemed, oddly enough, like redress for a still-smarting historical wrong. For it was the colonial invasion of chili peppers — capsicum chilis from South America — that upstaged Indian black pepper (Piper nigrum), once the hottest commodity on the planet and the crown jewel of the spice trade.\nWe Indians had welcomed chilis graciously, of course. In time we became the world’s largest producers — and indeed, consumers — of what is technically a fruit. And now, at last, Indian nature and five hundred years of nurture had won out, in the form of a local cultivar, Capsicum frutescens var. Nagahari, aka Tezpur chili. The samples in question came from Tezpur, in the state of Assam (felicitously, Tezpur can be translated as “Spicyhot Town”), but the variety was better known locally by a variety of terms denoting fear and respect: Naga Jolokia (serpent chili), Bhut Jolokia (ghost chili) and Bih Jolokia (poison chili) in Assamese, Raja Mirch (king pepper) in Hindi, Pasa Kala (chief chili) in Mishmi.
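The Scoville figures quoted across these passages (bell pepper 0; ancho 1,000–2,000; jalapeño about 5,000; Red Savina habanero 577,000; Tezpur chili 855,000) can be ranked with a short sketch. This is purely an illustration of the numbers already in the text; the `rank_by_heat` helper and the dictionary layout are my own, not from any of the quoted sources:

```python
# Scoville Heat Unit (SHU) figures as quoted in the surrounding passages.
# Each value is a (low, high) range; midpoints are used for ordering.
SHU_RATINGS = {
    "bell pepper": (0, 0),
    "ancho": (1_000, 2_000),
    "jalapeno": (5_000, 5_000),
    "serrano": (10_000, 25_000),
    "red savina habanero": (577_000, 577_000),
    "tezpur chili": (855_000, 855_000),
}

def rank_by_heat(ratings):
    """Return pepper names ordered from mildest to hottest by midpoint SHU."""
    return sorted(ratings, key=lambda name: sum(ratings[name]) / 2)

order = rank_by_heat(SHU_RATINGS)
print(order[0])   # mildest of the quoted peppers
print(order[-1])  # hottest of the quoted peppers
```

Under these figures the ordering runs from bell pepper at the bottom to the Tezpur chili at the top, matching the comparisons made in the text.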
At any rate, the research was published — a chili paper, as it were — in the Indian journal Current Science, along with diagrams showing the Tezpur chili’s capsaicin readings going off the charts.\nThese findings were met by consternation and disbelief, especially among the small but vociferous tribe of chiliheads, aficionados driven as much by masochistic machismo as by culinary concerns, most of whom reside in the United States and England.", "score": 30.939336176206556, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "Also indexed as: Ancho Peppers\nPreparation, uses, and tips\nThe seeds and membranes in chile peppers contain most of the capsaicin, the compound that lends them their mouth-searing qualities. To reduce the chile’s heat, remove its seeds and veins. Fresh poblano chiles should be peeled before using. Traditional recipes recommend searing the peppers over a gas flame, or broiling them in the oven until the skins are blackened. Cool in a sealed plastic bag or foil and then remove the skins. These mild chiles, a staple of Mexican cuisine, are most often served stuffed or as a component of mole.\nPoblanos are among the mildest chile peppers, and are also known as pablano peppers; they are sometimes mislabeled as pasilla peppers. Poblano peppers are black-green when immature and turn dark red with age. After drying, poblanos may be dark red (ancho chile) or brown (mulato chile). These thick-skinned peppers range between 3 and 5 inches (7–12.5cm) long and 2 to 3 inches (5–7.5cm) wide. They tend to have a shape that is roughly heart-like, and terminate in a blunt point.\nPoblanos have a heat score that ranges between 1,000 and 1,500 Scoville heat units. How high a chile pepper scores on the heat scale is determined by high-performance liquid chromatography measurement of how many parts per million of capsaicin it contains. (Capsaicin is the compound that gives chile peppers their fiery bite.)
This figure is then converted into the historic Scoville heat units that signify how much dilution is necessary to drown out the chile’s heat. The heat level of a chile is given as a range because it varies with how and where the pepper was cultivated.\nPoblano pepper (raw), 1/2 cup (75g)\nTotal Fat: 0.1g\n*Foods that are an “excellent source” of a particular nutrient provide 20% or more of the Recommended Daily Value, based upon United States Department of Agriculture (USDA) guidelines.", "score": 30.763936988764947, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "An Introduction To Dried Chilies\nWhen you're on a journey to get under the (poblano) skin of Mexican food, you could do much worse than to start by trying to grasp the wonderfully varied use of chiles (or chilies) that exists in Mexican cooking. There are many types, and for each, many local variations. Contrary to popular belief, chilies offer so much more than heat, and by mixing and matching the milder, more fruity variants you can add tons of flavor to dishes without too much fire! Of course, there are also chili types for those who love that fire.\nTo understand everything about chilies would take an uncountable amount of hours. Between joyously browsing food markets, exploring the endless variations and types of chilies available, and learning their regional preparation methods from kind and knowledgeable locals, time will simply fly by. Even if it does sound a whole lot like a food lover's paradise, it's hardly doable for many, so instead, we'll try to recreate a tiny fraction of the experience online – ta-dah!
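The HPLC-to-Scoville conversion described above (a capsaicin concentration in parts per million converted into Scoville heat units) can be sketched numerically. Pure capsaicin is commonly rated at roughly 16 million SHU, which works out to about 16 SHU per ppm; that factor and the `ppm_to_shu` helper below are an approximation for illustration, not a formula from the quoted source:

```python
# Approximate conversion from capsaicinoid concentration (ppm, as measured
# by HPLC) to Scoville heat units. Pure capsaicin is commonly rated at
# about 16,000,000 SHU, i.e. roughly 16 SHU per ppm of capsaicinoids.
SHU_PER_PPM = 16

def ppm_to_shu(ppm: float) -> float:
    """Estimate Scoville heat units from a capsaicinoid concentration in ppm."""
    return ppm * SHU_PER_PPM

# Under this approximation, a poblano at the 1,000-1,500 SHU range quoted
# above corresponds to only a few dozen ppm of capsaicinoids.
print(ppm_to_shu(94))  # 1504 SHU, near the top of the quoted poblano range
```

This also shows why heat is reported as a range: small differences in measured ppm, driven by how and where the pepper was grown, scale directly into the Scoville figure.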
We'll stick with the dried varieties of Mexican chilies now, leaving for another time the world of fresh jalapenos, serranos, and habaneros.\nWe will go over the basic preparation and usage methods that the dried variations share, share a few interesting facts about chilies and the mark they have left on the world, before introducing a few of the more common varieties and their flavor profiles!\nLet's get this fire started!", "score": 30.754576083518536, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "A chipotle is a smoke-dried jalapeno that tends to be brown and shriveled. It is a chili used primarily in Mexican and Mexican-inspired cuisines, such as Mexican-American and Tex-Mex. Most chipotle chiles are produced in the northern Mexican state of Chihuahua and are known as a morita (Spanish for blackberry or black raspberry; literally “little purple one”). These peppers are smoky and sweet with tobacco and chocolate tones. They feature a subtle, deep rounded heat with great aroma and flavor. Use in soups, stews, beans, chilis, or other recipes that like a long simmer. Try crushing and sprinkling on your next pizza or add as a bold twist to corn bread and barbecue recipes. Any tomato dish or pork dish pairs well with this medium heat pepper too.\nAfter handling the chiles, wash hands thoroughly with soap and warm water.\nHEAT: 10,000-50,000 H.U.\n1 oz. package", "score": 30.579446679537984, "rank": 26}, {"document_id": "doc-::chunk-9", "d_text": "According to her, the hot ghee with roasted cumin and chili enhances the flavor. Like paprika, Kashmiri mirch is often used for flavoring and color, rather than heat. The Indian food I’ve been eating since I developed the motor skills to steal it off my mother’s plate has never been spicy because among the hundreds of bottles of spices in my household, red mirchi—the red chili used to add heat—is nowhere to be found.\nBut in my apartment near campus, it’s another story. 
In February, my kitchen was overrun with bird’s eye chili along with the arrival of my roommate, Sana, from Singapore. Unlike mine, Sana’s parents had been putting chili powder in her food since she was barely a year old. It’s not an uncommon practice in South Asian culture; I once witnessed my parents’ friend put dashes of Tabasco in her toddler’s corn flakes to acclimate her to the heat. Red mirchi doesn’t taste spicy to Sana. She dips her French fries into chili padi the way Americans dip them in ketchup. At home, her family keeps a bowl of red chilis on the dining table.\nWithin three hours of her arrival at the apartment, Sana rearranged our entire fridge to accommodate the bird's eye chili she had personally imported. These fire-hydrant-red chilis are hardly longer than my pinky, but they contain an explosive amount of capsaicin, the chemical that elicits a blistering reaction. Sana placed them on the middle shelf of our fridge, alongside the bok choy and black fungus mushrooms. Another batch of even hotter green chilis went in the vegetable drawer.\nThe next morning, Sana woke me up early to make avocado toast. As I groggily made my way to the bathroom to brush my teeth, I neglected to tell her to turn on the stovetop’s exhaust while cooking. It’s frequently set off by water boiling or aloo-jeera frying in olive oil. But I was too late: The shriek woke our slumbering roommates as Sana roasted red chilis with garlic and cherry tomatoes to ornament the toast. 
When red chili is cooked at a high temperature, it smokes, chars, then sets off the fire alarm—in that order, every time.\nThat evening, to make up for the incident, Sana generously offered to cook everyone dinner.", "score": 30.205338979398654, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "English Name : Chili pepper or Green Bell Pepper\nCommon Indian Name (Hindi): Shimla Mirch\n- In Northern India and Pakistan, word capsicum is exclusively used for Capsicum annuum.\n- The species is a source of popular sweet peppers and hot chilies with numerous varieties cultivated all around the world.\n- In the United States and Canada, the common heatless species is referred to as bell peppers, sweet peppers, red/green/etc. peppers, or simply peppers, while the hot species are collectively called chilies or chili peppers or hot peppers or named as a specific variety (e.g., banana pepper).\n- In the United Kingdom and Ireland, the heatless varieties are called bell peppers, sweet peppers or peppers (green/red peppers, etc.) while the hot ones are chilies or chili peppers.\n- In Australia, New Zealand and India, heatless species are called as capsicums, while hot ones are called chilies. The term ‘bell peppers’ is almost never used, although Capsicum annuum and other varieties which have a bell-shape and are fairly hot, are often called bell chilies.", "score": 30.079739787341165, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Mexican Mulato Chilli is a dried Poblano chilli originally from Mexico. It is similar to the flavour of the Ancho chilli. It has a dark purple colour and a sweet and fruity taste, with hints of smoky chocolate, cherries, coffee and licorice. It is about 4 inches in length with a medium-thick skin. It is great for many \"mole\" sauce recipes. It is also excellent to add a dark rich flavour with low to moderate heat to any dish that could use extra warmth. 
For optimum results and flavour, re-hydrate the chillies in boiling water for 20 minutes before use.\nMuch like other chilli varieties, Mulato chillies contain capsaicin which is known to have many health benefits including boosting the immune system, eliminating inflammation and aiding weight loss. It also helps with arthritis, cardiovascular disease, gastric ulcers, vascular headaches, infections and respiratory conditions.\nTraditionally used in Korea for making Kimchi (a fermented cabbage condiment), Korean Chilli Flakes, also called Gochugaru, have a warm, fruity flavour. They are mild in heat, so make a great chilli seasoning option in place of regular hot chilli flakes. Whether used in cooking or sprinkled on top to season dishes, these flakes add a fruity chilli flavour to any savoury dish – both meat and veg.\nTake your loose leaf chai with you. Just add your chai spice into the bag and pull the drawstring.\nRecyclable muslin cloth bags are easy to fill with tea leaves, herbs and spices, great for the environment and handy when no tea pot infuser is available.", "score": 29.445511327350676, "rank": 29}, {"document_id": "doc-::chunk-3", "d_text": "2,500 – 8,000 SHU.\nGuajillo: Guajillo Chillies are a variety of the Capsicum annuum that are produced by drying the mirasol – a popular Mexican chilli.
It has delightful flavours of berries and green tea and can be made into a delicious paste with onion, cumin, garlic, Mexican oregano and shallots. Used to make the salsa for tamales; the dried fruits are seeded, soaked, pulverized to a thin paste, then cooked with salt and several other ingredients to produce a thick, red, flavourful sauce. Used in pastes, butters or rubs to flavour all kinds of meats, especially chicken. Alternatively, they can be added to salsas to create a sweet side dish with a surprisingly hot finish. Do note that the skin is tougher than most dried chillies, so they require an extended soaking time.\nHabanero: Habanero Chillies are amongst the most popular chillies in Latin American cuisine, particularly in Mexico. The chilli is revered as much for its fearsome heat as it is for its fruity flavour and floral aroma. With a spice rating of around 350,000 SHU, this is definitely one chilli to be respected!\nMulato: The Mulato is another dried poblano and part of the Holy Trinity of Mexican chillies, along with Ancho and Pasilla, used in mole poblano. The difference between the Mulato and Ancho is in the harvesting. Whereas Ancho is harvested just as it turns red, the Mulato continues to ripen to a deep rich red, almost brown, colour which gives it a complex character and flavours of smoky coffee, liquorice, cherries and a hint of chocolate. The rich brown colour deepens the colour of sauces, making it the perfect spicing for casseroles and stews. Since the Mulato has thick skin and flesh, it is best prepared by rehydrating it in boiling water for max. 20 minutes then blending it into a purée. Alternatively, after rehydration, stuff with your favourite mixture and shallow fry.\nNew Mexico Red: These are famous for providing a rich red colour to dishes and Mexican sauces.
Mild in heat and flavour, with a touch of sweetness followed by straightforward spicy notes.\nPasado: Pasado chillies are either New Mexico or Anaheim red chillies that have been roasted, then peeled and dried.", "score": 28.620077617539465, "rank": 30}, {"document_id": "doc-::chunk-2", "d_text": "Preparation and serving methods\nTo prepare, wash them in cold water and mop dry using a soft absorbent towel. In general, wear gloves while handling hot variety chili peppers. Remove the stem. Chop or slice as you wish. Discard the seeds and central white placenta, if desired, to avoid excess hotness. Grilling under mild heat also reduces their hotness in addition to imparting smoky flavor.\nJalapeno peppers have successfully made inroads into US households, as their usage has found several interesting variations in cuisines like Tex-Mex cuisine (Texas-Mexican cuisine). Raw green, grilled, smoked, and pickled jalapenos are employed in a variety of Latin American, Spanish, Caribbean, and Asian cuisines.\nHere are some serving tips:\n- They can be used in place of bell peppers in the preparation of ratatouille, a traditional Occitanian recipe.\n- Raw green jalapeno peppers can be added along with tomatoes and white onion to prepare fresh pico de gallo.\n- They can be experimented with other fruits and vegetables to make salsa, guacamole, chutney, and salads.\n- Split halfway lengthwise, remove seeds and stuff with sausage, cheese, etc., to prepare relishing stuffed jalapenos (jalapeno poppers).\n- Mexican and Tex-Mex American cuisines chiefly pivot around smoke-dried jalapenos, known as Chipotle.\n- Add jalapeno rings as pizza toppings.\n- Pickled jalapenos are eaten with sandwiches, tortillas, burgers, etc.\n- Jalapeno peppers are one of the regular ingredients in mouthwatering Tex-Mex cuisine: nachos, chili con queso, fajitas, etc.\n- They also add special flavor and spiciness to vegetable stews, poultry, meat and seafood dishes.\nAs in other hot chili peppers, jalapeno peppers too
contain an active component, capsaicin, which gives them their strong, spicy, pungent character. Even a few bits may cause severe irritation and a fiery sensation in the mouth, tongue, and throat.\n- Capsaicin in jalapeno peppers initially elicits inflammation when it comes in contact with the delicate mucus membranes of the oral cavity, throat and stomach, and may soon elicit a severe burning sensation that is perceived as ‘hot’ through free nerve endings embedded in the mucosa.", "score": 28.244490190168076, "rank": 31}, {"document_id": "doc-::chunk-3", "d_text": "So do a little trail blazin' yourself and discover the flavor that makes Mexene the chili powder of choice for real Texas Chili. In 1906, John Walker of the Walker Chili Company developed Mexene Chili Powder in a small South Texas town. The flavor was designed to duplicate the popular taste of chili\nSpice Appeal Pasilla Chile Ground, 5 lbs\n4.7 out of 5 stars with 78 reviews\nTranslated as “little raisin” in Spanish, this rich-flavored, medium-hot chile has hints of berries and licorice. Particularly good in sauces. When fresh, it is known as the Chilaca, and turns to a dark purplish brown as it ripens. The Pasilla is considered one of Mexico's most highly regarded chiles. Delicious in seafood sauces, soups, and stews. One of the key ingredients in mole sauces. 4 Heat Rating. Great addition to many dishes. all natural; vegetarian; vegan. High quality fresh\nVolcano Dust 3 - Smoked Bhut Jolokia (Ghost), 7 Pot and Scorpion Pepper Powder - Super Hot\nmpn: VD3VD3, ean: 0854343005051,\n4.6 out of 5 stars with 7 reviews\nComes in a spice jar with sifter - 3/4 oz. Smoked and dried pepper powder. Made from only super hot peppers, including the Bhut Jolokia, 7 Pot Jonah and the world's hottest Butch T. Trinidad Scorpion. 100% peppers - no additives. Super Hot Pepper Moruga Scorpion. Smoked Bhut Jolokia mixed with 7 Pot, Moruga and Trinidad Scorpion Pepper. 7 Pot Jonah. Comes in a spice jar with sifter - 3/4 oz. Butch T.
Trinidad Scorpion.\nJansal Valley Chile Lime Powder, 20 Ounce\n4.8 out of 5 stars with 106 reviews\nJansal Valley Chile Lime Powder, 16 Ounce A zesty blend of peppers, natural citrus powders, and seasonings produces a spicy, tart, all-purpose addition to almost any dish. Also works well in Bloody Marys, vegetable burgers, or added to mayonnaise or sour cream as a dip.\nChile Lime Powder, 16 oz.", "score": 27.980719948175878, "rank": 32}, {"document_id": "doc-::chunk-36", "d_text": "Peculiarly, although the European colonization of the world was driven in large part by the craze for Indian pepper, until very recently the taste for chilis was largely reserved for the colonized, not the colonizer. And it should be said that not every use to which the colonized put the chili was excellent. The fruits of the capsicum family have certain innate punitive possibilities, as the Aztecs themselves recognized. A famous illustration in the sixteenth century Codex Mendoza depicts a father disciplining his son by holding him over a fire of smoking chili peppers. Here in India we took it a little further, as evinced in this enthusiastic passage from Science Reporter, an Indian popular science magazine, discussing the dangers of “capsaicin poisoning”:\nChili powder is often used by police in our country — and in fact in several third-world countries — to extract confession from criminals. It may be introduced in the mouth, nose, anus, urethra, or vagina to torture a suspect and extract confession from him. It is said that during emergency in India in 1976, chilies were used as a means of torture by introducing them in rectum! This process was known as Hyderabadi Goli… Finally, in our country, chili powder is often introduced in the vagina as a punishment for infidelity.\nIt’s true, by the way, about the Hyderabadi Goli. Back then our security forces believed the hottest chilis in the country came from Andhra Pradesh, the capital of which is Hyderabad.
During Indira Gandhi’s twenty-one-month period of extraparliamentary rule, the Hyderabadi Goli was notoriously administered using a policeman’s cane slathered with mustard oil and chilis.\nBut Indian law enforcement has come a long way since then. In fact, the boffins at the Defense Research Laboratories had been researching ways to create a less lethal tear gas. (No less piquant, but literally less lethal — less likely to accidentally kill those exposed to it.) “Oleoresin capsicum (OC) is… environment-friendly and much safer,” they wrote. “Use of OC powders is growing and it is predicted to dominate the market in the coming years as the mainstay of riot control agents.”\nLike latter-day Columbuses, they did not quite know what they had done.", "score": 27.483665057058325, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "10,000-25,000 Scoville units\nSerranos are smaller and skinnier than jalapeños, and often picked unripe, when they are still green in color. As they mature, serrano peppers may turn a variety of colors, including red, green, brown, orange, and yellow. They are often consumed raw and appreciated for their crisp, bright, acutely spicy flavor. Try them to add bite to an all natural chili con queso.", "score": 27.451545256658697, "rank": 34}, {"document_id": "doc-::chunk-1", "d_text": "The mild-mannered jalapeno was the byproduct of an experiment Villalon began in 1971. The goal was to breed genetic resistance to shriveling into sweet bell peppers. However, it so happened that the hot Mexican jalapeno put up a stronger fight against the malady than other peppers did, so he picked it as the original breeding partner.\nTo his surprise, the first crossbreed of peppers looked just like jalapenos, but contained dramatically less capsaicin - the clear, odorless, flavorless compound that makes hot peppers hot. 
Oddly enough, subsequent hybrids bore far less resemblance to the jalapeno, he says.\nSo he continued working with the jalapeno look-alike, which he called the TAM Mild, an acronym for Texas A&M. On a scale of 1 to 10, he says, his pepper's heat level ranges from 2 to 5 vs. the Mexican version's range of 6 on up.\nIncreasing with the popularity of Mexican food is the price of Mexican jalapenos, up from $11.50 to $15 a case in five years, he says. Although US farm production costs are higher than Mexico's, the TAM Mild offers the advantage of prolific plants that yield 20 percent more peppers per acre than their Mexican counterparts, Villalon says.\nBut the TAM's chief competitive advantage is its engaging disposition, he adds. And that makes restaurants the next target for its marketers. If one measures appeal by sales figures, Mexican restaurants in the United States doubled their sales to $3 billion between 1977 and 1981.\nThe civilized jalapeno does not signal the last of the breed's wilder side at all. ''For the hot-pepper lover, there's something for him already,'' says Villalon. ''It is for the uninitiated or the one who just likes the flavor with a little bit of spike to it that this would be the ideal situation.''", "score": 27.1201151855931, "rank": 35}, {"document_id": "doc-::chunk-4", "d_text": "Pasado has a toasted flavour combined with apple, celery and citrus overtones. They have a medium heat level. Use in place of fresh New Mexico or Anaheim chillies to flavour soups, stews or bread.\nPasilla or Little Raisin: Pasilla pepper is the dried form of the Chilaca chilli. These peppers are often used in sauces. These chillies are sold whole or powdered in Mexico and the United States. The pasilla chilli, or chile negro, is the dried form of a variety of Capsicum annuum named for its dark, wrinkled skin. In its fresh form, it is called the chilaca. It is a mild to medium-hot, rich-flavoured chilli.
The fresh narrow chilaca often has a twisted shape, which is seldom apparent after drying. It turns from dark green to dark brown when mature. Pasilla is a flavourful spice with berry-raisin-cocoa overtones. The Pasilla pepper has a Scoville Heat Unit rating between 1,000 and 2,500.\nTepin: This small, hand-picked chilli has a searing heat and it tastes of corn and nuts. Crush the chilli over food or use it to flavour bottles of vinegar or oils.\nTepin Pepper Facts\n: Tepin are considered one of the hottest chillies.\n: Tepin are native to northern Mexico and the southwestern US.\n: Most tepin chillies are harvested from wild plants.\n: Tepin are considered to be one of the oldest hot peppers.\n: Also known as bird chilli, flea chilli, and mosquito chilli.\n: Attempts to cultivate have proven to be problematic.\n: Often confused with chilli pequin.\n: Tepin ranks 50,000 to 100,000 Scoville units.\nDried Smoked Spanish Chillies\nDried, mild-smoked chillies are widely used in Spanish cooking.\nChoricero: This sweet pepper is very common in Basque cooking. From Navarre in the north of Spain, it is usually dried then rehydrated for use in stews and Basque dishes. The paste (Carne de Pimiento Choricero) made from the flesh is used to make Chorizo sausage. It is not interchangeable with the Ñora as each has a distinct flavour.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "Chilli, also called chile, is the most popular spice; throughout history, wherever it went it transformed previously bland cuisines. Latin American, Asian, African, Caribbean and certain regional Oriental cuisines made extensive use of this spice. Chillies are native to Mexico. Columbus, who was searching the New World for pepper, came across these fruits, which, he discovered, were even hotter than peppercorns. He carried his prize to Europe and from there to Africa, India and the East, where chilli became an integral part of each cuisine.
The long shelf life of the seeds was a bonus in the days of sea travel.\n- Other Names\n- Ají, chilli, guindilla, or pimienta (Spanish); berberé or mitmita (Amharic); biber (Turkish); bisbas (Arabic); cabé or lombok (Indonesian); chili-pfeffer (German); csilipaprika (Hungarian); diavoletto or peperoncino (Italian); hari mirch or lal mirch (Hindi); la jiao or lup-chew (Chinese); ot (Vietnamese); pilipili hoho (Swahili); pilpel adom or tsili (Hebrew); piment fort (French); piperi kagien or tsili (Greek); piripíri (Portuguese); pisi hui or prik (Thai); Spaanse peper (Dutch); togarashi (Japanese).\n- Aroma and Flavour\n- The characteristic pungency of chillies is caused by the presence of capsaicin. Research has indicated that the components of capsaicin promote different taste sensations when eaten, giving either a short fiery flavour or lingering hot taste. The hotness is said not to come from the seeds but rather the placenta. This is the pithy white part of the fruit to which the seeds are attached, and it contains the most capsaicin so removal of both seeds and placenta should reduce the pungency of chillies, if required.\n- Culinary Use\n- The chilli flavour revolutionized the cooking of tropical countries with bland staple foods, like cassava in South America, West and East Africa; rice in India and South-east Asia; and beans in Mexico and the Southern States of America.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-1", "d_text": "The hatch, anaheim and poblano chiles tend to be more consistently on the medium spicy side, however, they too can vary. This recipe is definitely not for people who aren’t into spicy foods.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-1", "d_text": "The chile is miraculous: it is packed full of vitamin C and it actually increases the tactile sensitivity of the mouth, which\nmay be one reason it’s popular in places that have bland staples. 
Liable to bring on a light sweat, it turns a meal into a\nmultidimensional experience. It’s about the only way of hurting yourself without harming yourself, which creates a flood of\nendorphins. At the end of a serious chile-infused meal, you can feel dreamily, deliciously lightheaded, and lighthearted.\nThe New Mexican chile is famous, with names and variations like Hatch (a name loosely given to chiles of many types grown\naround Hatch, New Mexico), Big Jim, Española Improved and NuMex Conquistador. Chile peppers grow easily in New Mexico,\nripening from green to red (green chiles are simply picked before they turn red) beneath the strong Southwestern sun. They\ncan be used fresh, roasted or dried and used as a seasoning, most commonly as a ground deep-red powder.\nWith the best chile, there’s something comparable to the terroir of fine wine, and the town of Chimayó is to chile as Havana\nis to cigars. After centuries of selective breeding, and irrigation from a particular mountain stream, there’s an\nunmistakable complexity to the chiles here, a citrus tang, a depth and richness you can’t quite find elsewhere. Throw\nanything in a skillet—steak, chicken, vegetables, even tofu—sprinkle a couple of spoons of ground Chimayó red chile powder on\ntop, add salt and enough water to create a grainy, blood-red sauce, and the result will defy all logic, all culinary\nchemistry or experience.\nYet, in spite of its fame, the Chimayó chile—a particular strain of chile that may have been brought here by Spaniards in the\n1500s— almost became extinct a few years ago. Only three farmers could still be bothered to sort out its seeds from the other\nvarieties they grew, and because it’s smaller and harder to process, even they were about to give up.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "Our weekend getaway to Denver a few weeks ago has left me with lingering memories of green chiles, the very, very hot kind. 
Although my Mexican dinner tonight failed to match up to the same level of spiciness, it at least tasted very, very good.\nI think I may have to experiment with “ghost peppers” one day, the hottest peppers on earth. Spiciness of peppers is measured in terms of Scoville units. To give you a broad idea, the Serrano Peppers I used tonight are between 10,000 – 23,000 units. Tabasco is between 30,000 – 50,000. Habanero, which I consider very hot, is between 100,000 – 350,000. Well, these “ghost peppers” exceed 1 million Scoville heat units. It was even recently offered to be used in hand grenades!\nAnyway, back to my dinner. I made a chicken enchiladas with salsa verde sauce and huevos rancheros with salsa ranchera. I was excited to combine both types of salsa into one dish for dressed up colors and added variety, despite the fact that huevos rancheros is a traditional breakfast dish comprised of fried tortillas, fried eggs and salsa ranchera. I thought the dish turned out both refreshing and satisfying, all thanks to my green chiles craving!", "score": 26.812130121247424, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "A powerful flavour\nThe habanero-type chilli pepper is native to the West Indies, Mexico and South America. Its shape is reminiscent of a small puffy sweet pepper. It comes in a variety of colours: green, yellow, orange, red, and purple. Its flavour is strong but subtle. Used sparingly, it is ideal for spicing up and giving an exotic touch to savoury dishes (sauces, salt cod fritters, pizzas, etc.). Its seeds are particularly hot, so it is milder if they are removed.\nThe habanero-type chilli pepper stimulates the appetite and digestive system. It is also a good intestinal antiseptic. It is very rich in vitamin C (100 mg per 100 g of chilli pepper). This vitamin is known for its antioxidant virtues and its beneficial role in the body (strong bones, cartilage and teeth, protection against infections, accelerated healing, etc.). 
It also contains a large number of B-group vitamins, as well as potassium, magnesium, calcium, etc.\nWith its thin, pointed shape, this chilli pepper evokes a bird’s beak. It is native to South America, the West Indies and Mexico. Most often, it is light or dark green; in rarer cases, it can be orange or red. It is particularly hot and fragrant. Whether it is hotter than the habanero-type chilli pepper depends on taste and the person! Like this type, it is used to spice up savoury dishes and sauces.\nThe bird’s beak chilli pepper has the same nutritional benefits as the habanero-type chilli pepper: it is a very good source of vitamin C, B-group vitamins, and potassium, phosphorus, magnesium, and calcium. A useful tip: if your mouth feels like it’s burning, you should drink milk instead of water!\nMild and full of character\nThe vegetarian chilli pepper is 10 cm long. Elongated and pointed, it is initially green and then red when ripe. Very popular in the West Indies, it is very fragrant but particularly mild. It is therefore an ideal substitute for habanero-type or bird’s beak chilli pepper when you have a sensitive palate or when you are cooking dishes that will be eaten by children. It can be used in marinades, stewed dishes, salt cod fritters, etc.", "score": 26.62917616263102, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "Chillies aren't just about adding heat to a dish as each variety offers a different flavour from smoky to nutty or fruity. Many cuisines use specific varieties for different purposes, whether its fresh large fleshy ones to stuff with or dried ones for crumbling into sauces and stews.\nBuy fresh ones that look crisp and glossy and not wrinkled. Generally, the fatter they are the milder the taste. 
The tiny green bird chilli and Thai red chilli are both high on the heat scale (though the little round and wrinkled Scotch bonnet is even hotter), the deep red Kashmiri chilli is a bit milder and it offers deep colour without extreme heat, whereas the yellow-green European banana chilli is both plump and very mild.\nRed and green chillies vary in flavour, but although red ones are ripe green chillies, colour does not determine heat. If you are unsure of the variety and its heat, add cautiously.\nDried chillies tend to have a more concentrated flavour and can get hotter as they reconstitute during cooking, so add them at the beginning of cooking for lots of heat, or at the end for a milder effect. To remove the seeds from dried chillies, remove the stem end then shake them out. Use dried chillies whole, crushed, flaked or powdered. They will keep for about a year in an airtight container.", "score": 26.526099722463112, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "One of the most basic Chiltepin dishes known, this recipe is prepared only in the state of Sinaloa, where the Chiltepins produce fruits all year long. This simple soup is served in mountain villages, and everyone makes his own in a soup bowl.\nWhen I write “flavored,” I mean it, as I have chosen the chiles that impart the most distinct flavors. The raisiny flavor of the pasilla melds with the apricot overtones of the habanero and the earthiness of the New Mexican chile to create a finely-tuned fiery sipping vodka. Of course, use an excellent vodka like Stolichnaya or Absolut. Note: This recipe requires advance preparation.\nIsland legend holds that the name of this sauce is a corruption of “Limes Ashore!”, the phrase called out by British sailors who found limes growing on the Virgin Islands. The limes, originally planted by the Spanish, would save them from scurvy. I guess that the bird peppers would save them from bland food. Add this sauce to seafood chowders or grilled fish. 
Note: This recipe requires advance preparation.\nPopular throughout Southeast Asia, this garlic and chile based paste is used as a condiment that adds fire without greatly altering the taste of the dish. It is especially good in stir-fries. This is a great recipe for using up any small chiles that are left at the end of the season. This paste will keep for up to 3 months in the refrigerator and it can also be frozen.\nThis year we used about a pound of LC Cayenne pods to cook up a sweet and spicy Thai sauce. Unlike "Louisiana Style" hot sauce, this one is thick, almost like ketchup, and is a lot less vinegary. It is great with grilled shrimp, over rice, for Asian cooking, and even as a dip.\nRead Harald Zoschke's entire article on the Burn! Blog here.
In Mexico and Asia and Europe, chile powder is simply ground dried red chiles.\n3) Chile is a sauce made mostly of chile peppers. It is usually chiles chopped or pureed, mixed perhaps with some garlic and salt, but not much else. This definition is common in New Mexico and Texas, where chile sauce is slathered on everything except marshmallows.\n4) And of course, Chile is a sovereign nation in South America.\nChili, with the concluding "i", is used almost entirely in the US, and it also has multiple definitions:\n1) Chili or American chili powder is a powdered spice mix made from dried chile peppers, cumin, garlic, and other spices.\n2) Chili is also a savory meat stew, usually beef, seasoned with American chili powder. It is the national dish of Texas.\n3) Chili is a band: Red Hot Chili Peppers.\n4) And Chili is a restaurant chain: Chili's Grill & Bar.\nAbout Chili Powder\nIn Europe this is usually just ground hot chiles. In the US it is a blend of chiles and other spices.\nSometimes restaurants and recipes spell chili or chile with two Ls. It is wrong.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "Spice Guajillo Chile Powder 4 oz.\nThe Guajillo chile (pronounced wa-hee-oh) is one of the most common and popular chiles in Mexico. This chile is a deep orange red in color with thin flesh and a slightly sweet heat and piney flavor with berry tones. Guajillos are dried and ground whole including the seeds and the stem to produce this powdered form. A little bit of the chile will go a long way in producing authentic Mexican and Southwest flavor. Use in soups, stews, mole sauces and salsas. On a heat scale of 1-10 guajillo is a 3. The Scoville scale places the chile at 2,500 to 5,000.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-2", "d_text": "
If it has no heat I go to the next pepper, if it has some but not a lot of heat I will throw some of its seeds and ribs into the recipe as well (most of the heat in a chile is in the seed pod, seeds, and ribs).\nChipotles are smoked jalapenos. You can buy them dried and soak them to rehydrate them or buy them canned in adobo, a tomato-based sauce. Either adds a delicious smoky flavor and quite a bit of heat to a dish.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "There's New Mexico Chile and then there's New Mexico chile.\n- Passadumkeg Jun 24, 2011 05:45 PM\nI lived in New Mexico for all of the 70's and grew to love New Mex style chile. For years now, living in Maine, I'd order boxes of New Mexico dried chile pods from Hatch and when Hatch became \"in\" and expensive, from Deming. I spent the last year (and am returning) in New Mex. and chile heaven. I'm back in Maine for the summer and just this evening made some Sonoran style, stacked red chile enchiladas. I bought dried chiles , Hot New Mexico\", pods at the grocery store. I made my chile as always, filling the blender w/ chile pod and pouring boiled water over them, blending and then adding to fried,diced pork and garlic. WHAT A DIFFERENCE. The chile was a dark red/brown, instead of bright earth red, and hot it is not. I guess my point is Chile Heads beware and get the chile from the source and not at the supermarket. I guess when I stop learning, I'm in the obits. Comments?\nI've done that. After many years of living with and learning from a roommate from Albuquerque I grew obsessed over Hatch chiles, both green and red.\nHer parents used to send us freshly roasted chiles in the Fall for us to keep in the freezer. Posole was only an afternoon away with that gold on hand.\nIn the last couple of years I've been able to get green Hatch chiles in the late Summer to roast and put away for the year. 
Thank goodness.\nEvery other batch I made with NM chiles from the Mission, or WF or wherever just couldn't compare.\nAnd, dang! That is some expensive mail-order!\nThere's a major difference in the soil, water, and sun. There is nothing like real chiles from the source.\nI've never been to New Mexico, something I'd like to rectify sooner rather than later. Though I hear it gets a tad warm there during the summer when I am usually available to travel.\nPart of me appreciates the idea that you can't just get great food of all kinds and stay put in one place. Going to the source always has been and always will be the only way to experience some things.", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-37", "d_text": "“Thus we have identified the hottest chili variety in India,” they concluded. When in fact, they had conquered the world.\nIt wasn’t until 2007 that the rest of the world finally admitted defeat. That February Guinness World Records certified that the hottest chili in the world was, in fact, the Naga Jolokia pepper, which the Indian scientists had called the Tezpur chili. Predictably, the Indian chili was only accepted in the West after it had been domesticated. An English gardener began selling seeds for what he called the “Dorset Naga,” and in 2006 one Paul Bosland, a scientist at the Chili Pepper Institute at New Mexico State University, took credit for discovering the Naga Jolokia’s potency. (Guinness World Records cites his research.) Nowadays, chiliheads are swilling bottles of American-made Naga hot sauce, including Dave’s Ghost Pepper Sauce, Blair’s Pure Death Sauce (with Jolokia), and Mad Dog 357 Ghost Pepper Sauce.\nAs you can probably tell, this burns me up a bit. I discovered the Naga Jolokia some twenty-five years ago on my first trip to the northeast, when I ordered a dish of egg curry and asked for extra chili. It was delicious, but within minutes my mouth and scalp were on fire. 
I experienced a dramatic bout of gustatory sweating and a temporary loss of hearing. I still remember it — and the jar of nougat candies that saved me. I have since acquired a taste for the chili, though I administer it with caution. I’ve learned the hard way not to touch my eyes and other tender parts after handling even uncut Naga Jolokias.\nBut there’s no point crying over spilled capsaicin. We’ve all moved on. India is still the world’s largest producer, consumer, and exporter of chili peppers, though in exports we’re beginning to feel the heat from — who else? — the Chinese. In 2009 the Indian Defense Research Development Organization announced the development of Naga Jolokia–based chili grenades for counter-insurgency operations. Meanwhile the Chili Pepper Institute’s latest genetic research has concluded that the Naga Jolokia is a “putative naturally occurring interspecific hybrid” of the chinense and frutescens.", "score": 25.56708384206363, "rank": 48}, {"document_id": "doc-::chunk-1", "d_text": "The different varieties are unusually sweet compared to other onions due to the low amount of sulfur in the soil in which Vidalia onions are grown.\nTraditional Chili isn't the same without the garlic. Garlic is such a personal decision: do you like a little or a lot?\nI suggest using around 6 cloves. Just enough to flavor the chili without overpowering the dish.\nThe poblano (Capsicum annuum) is a mild chili pepper originating in the state of Puebla, Mexico. Dried, it is called ancho or chile ancho, from the Spanish word ancho ("wide").\nWhile poblanos tend to have a mild flavor, occasionally and unpredictably they can have significant heat. Different peppers from the same plant have been reported to vary substantially in heat intensity. The ripened red poblano is significantly hotter and more flavorful than the less ripe, green poblano.\nFRESNO CHILE PEPPER\nThe Fresno Chili Pepper (/ˈfrɛznoʊ/ FREZ-noh) is a medium-sized cultivar of Capsicum annuum.
It should not be confused with the Fresno Bell pepper. It is often confused with the jalapeño pepper but has thinner walls, often has milder heat, and takes less time to mature. It is however a New Mexico chile, which is genetically distinct from the jalapeño and it grows point up, rather than point down as with the jalapeño. The fruit starts out bright green changing to orange and red as fully matured. A mature Fresno pepper will be conical in shape, 2 inches long, and about 1 inch in diameter at the stem. The plants do well in warm to hot temperatures and dry climates with long sunny summer days and cool nights. They are very cold-sensitive and disease resistant, reaching a height of 24 to 30 inches.\nThe Scoville rating of the Serrano pepper is 10,000 to 23,000. They are typically eaten raw and have a bright and biting flavor that is notably hotter than the jalapeño pepper. Serrano peppers are also commonly used in making pico de gallo and salsa, as the chili is particularly fleshy compared to others, making it ideal for such dishes.\nIt is the second most used chili pepper in Mexican cuisine and has a fruity flavor profile.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-1", "d_text": "It is a very common ingredient in Mexican cuisine – in salsas, marinades and spice rubs. Made from dried Mirasol pepper, it sports a fruity, slightly tart, and gently smoky flavor, but mild enough to not overwhelm fish or chicken. We find both the heat level and the flavor to be complex, but not in-your-face kind of way. It’s a wonderful ingredient for both flavor and mild heat.\nHeat: 2,500-8,000 SHU (mildly-hot)\nChipotle is such a delicious ingredient that they even named a restaurant chain after it! Cherished in Mexican and Tex-Mex cuisines, this tasty concoction is made by drying and smoking ripened red Jalapeño chiles. The flavor is quite earthy and mildly smoky – adding richness and a splash of mild heat which naturally compliments grilled foods. 
And there is just enough heat to get your attention!\nOrigin: French Guiana\nHeat: 30,000-50,000 SHU (medium-hot)\nWith a long list of health benefits, cayenne is considered by many to be the healthiest spice in the world. The list of uses for cayenne is also long because it is such a versatile spice. The heat is definitely present, but not over powering. The flavor is very mild with floral and hay notes, making it perfect for adding heat but not radically altering the flavor of the food.\nHeat: 30,000-50,000 SHU (medium-hot)\nAji Amarillo translates to “Yellow Pepper”, and this delicious chile is quite popular in the cuisines of Peru and Bolivia. As one of the main ingredients in our Peruvian-ish seasoning, it provides a very tasty fruity flavor with hints of mango, passion fruit and dried berries. Sometimes referred to as “the chile that tastes like sunshine”, we find that to be an accurate description. Similar to the heat level of Cayenne, Aji Amarillo provides a lot of flavor bang with very little heat.\nCrushed Red Pepper\nOrigin: United States\nHeat: 40,000 SHU (medium-hot)\nYou most likely have seen this ingredient provided as a pizza condiment alongside parmesan cheese, but it has many uses. As the name implies, it is made from hot dried red peppers. 
But did you know that it comes from more than one type of chile?", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-6", "d_text": "3.1.Soak the chillies in boiling water for 20 minutes, then purée them with some of the soaking liquid.\n3.2.Sieve the purée to make a smooth sauce.\n3.3.Add to taste when cooking.\n3.4.Purées can be stored in a covered jar in the fridge for up to a week or frozen for up to a year.\nCooking Tips For Dried Chillies\nTo make chilli vinegar, fill a bottle with chillies, top up with vinegar and leave for two weeks before using.\nDry-roasting heightens the flavour of chillies.\n2.1.Heat a heavy frying pan without adding oil.\n2.2.Press the chillies on to the surface of the pan to roast them.\n2.3.Do not allow the chillies to burn, or their flavour will be bitter.\n2.4.When roasted, remove the chillies from the pan and grind them.\nLarger, thick-fleshed and thin-skinned dried chillies (such as anchos or mulatos) can be stuffed with meat, rice or vegetable fillings.\n3.1.Make a small lengthways split in the chilli and remove the seeds.\n3.2.Leave the stem intact.\n3.3.Soak the chilli, then drain and pat it dry on kitchen paper.\n3.4.Stuff carefully and bake until heated through.\nChilli Powder: Milder than cayenne pepper and more coarsely ground, this is prepared from a variety of mild to hot chillies. Check the ingredients list, as some chilli powders (especially those of American type) contain a variety of other flavours, such as garlic, onion, cumin and oregano. They are added for convenience for use in chilli con carne. If the chilli powder is dark in colour, it may contain the rich rust-coloured ancho chilli. For best results make your own chilli powder.
Deseed dried chillies, then dry-fry them and grind them to the required fineness.

Chilli Sauce: Tabasco® sauce is a North American seasoning made from extremely hot tabasco or cone chillies, which are mixed with salt and vinegar and then matured in white oak casks for several years. Many of the islands of the Caribbean have their own style of chilli sauce. Most are, like Tabasco®, made by steeping the chillies in vinegar, and all are very hot.

Pasilla Chile Powder | Ground Pepper by Ole Mission, $5.99
- AUTHENTIC FROM MEXICO - Authentic Mexican Pasilla Chiles Sourced From Farms Throughout Mexico
- GOOD FOR MEXICAN RECIPES - Good For Mole, Salsa, Tacos, Menudo, Pozole, Tacos al Pastor, Tostadas, Chiles en Nogada, Enchiladas And Other Mexican Recipes
- RICH FLAVOR - Pasilla Chiles Have A Rich, Deep Flavor With Mild Heat Between 1,000 And 2,000 Scoville Heat Units

The pasilla chile, or chile negro, is the dried form of the chilaca chili pepper, a long and narrow member of the species Capsicum annuum. The fresh chilaca can measure up to 9.0 in (22 cm) long and often has a twisted shape, which is seldom apparent after drying. It turns from dark green to dark brown when fully mature. Pasillas are used especially in sauces. They are often combined with fruits and are excellent served with duck, seafood, lamb, mushrooms, garlic, fennel, honey, or oregano. They are sold whole or powdered in Mexico. Pasilla de Oaxaca is a variety of smoked pasilla chile from Oaxaca used in mole negro.

The black pepper has more of a "back bite". That means it takes a few seconds for you to feel the pepper on your taste buds. The white pepper is "front bite", which means you taste it right away. A little front bite is good. Cayenne is a way-back bite.
It seems like you can swallow before the bite kicks in, which is also good.

Comino is usually used in the form of powder, which is called cumin or ground cumin. Some folks use the whole seed, but not many. To really get the flavor from the comino you need to process your own on cooking day. Put some comino seeds in a dry skillet and toast them just until you can smell them, then powder them in a spice grinder or some such.

The ratio of comino, or cumin, to the chili powder is about three to one in favor of the chili powder. In other words, three tablespoons of chili powder requires one tablespoon of cumin.

If you use the blended chili powder, the ratio does not necessarily change. The blend contains cumin, but it dies out with age and may not be strong enough to even taste in the blend. You need the cumin fresh.

A tad of oregano is good in chili. Use the Mexican variety if you can get it, and just use a tiny bit as it is very potent. Some use a pinch of basil also. This is where individual taste comes in. Do what is best for you.

If you absorb this information and read the rest of my chili articles in the archives, I guarantee you will know more about chili than nearly any man or woman on the street.

Good chili results not so much from what you put in it as what you don't put in it. Good chili contains no arrowroot, anise, aspirin or arrowheads. Also no chocolate, sour cream or flax seed. Leave out rawhide doggie chews and empty cans. No seafood allowed. We don't want to see any whole japonés peppers floating in a sea of red grease either.

You know what to do; just do it.

When it comes to Mexican food and its flavors, specifically heat, it seems like I'm always hearing salsa this and salsa that.
Now, there's nothing wrong with salsa; it's just that, thanks to far too many stereotypes and a lack of proper knowledge, the culinary art of salsa making has been devalued, reduced to a very mild, oversimplified one-trick pony. In reality, when it comes to heat and spicy flavors in the Mexican kitchen, the realm of possibilities is vast. Your options are so diverse and varied that you can be at no loss when looking to turn up the taste in your dishes. With the right ingredients you can arm yourself with the ability to produce unique, extraordinary, and super flavorful surprises. What sort of ingredients? Well, take for example today's spotlight food: "Chiles en Vinagre" (chilies in vinegar, pronounced chee-lehs ehn bee-nah-greh). Chiles en vinagre are pickled peppers. Some use serranos in their recipe, others use jalapeños. The basic chiles en vinagre recipe actually calls for the pickling of green chilies, carrots, and onions. Some will add other things, such as cauliflower, but I'm more familiar with and used to the basic recipe.

Salsa gets all too often automatically associated with Mexican food, to the point that it crowds out everything else. Though it is one of the first things mentioned when talking of Mexican cuisine, the fact is that to some, chiles en vinagre are far more important, and in some cases truly indispensable. There are people who claim they cannot eat a proper meal without their chiles. To some this condiment is as important as salt and pepper. Week after week it is on my grocery list. In my house there has always been a bowl of chiles en vinagre on the dinner table.

The spicy treat is a great addition to all kinds of food.
You can sprinkle the pickled juice on almost anything, you can munch on a crunchy carrot, or you can bite on a spicy, juicy pepper while you enjoy your meal.

Preparation Tips For Fresh Chillies
1. Cut away and discard the stalk end.
2. Holding the chilli under cold running water to prevent the oils from affecting your eyes and throat, slit it from the stalk end to the tip.
3. Scrape out the placenta and seeds.
4. Afterwards, wash your hands, knife and chopping board thoroughly to clean off the oils.
5. Do not rub your eyes or lips, even after washing your hands.
Those with sensitive skin should wear rubber gloves when preparing chillies.

Serving Suggestions For Fresh Chillies
Make a Thai curry with shrimp or chicken, coconut milk, fish sauce, and thinly sliced hot green chillies.

Dried Mexican Chillies
Ancho: The most commonly used dried chilli in Mexico, Ancho Chillies is the name given to the dried Poblano. They have a mild fruity flavour with undertones of plum, raisin, tobacco and a slight earthy bitterness. Their delicious flavour means that Ancho Chillies are often used as the base for chilli and mole sauces (sauces enriched with bitter chocolate or cocoa); they can also be stuffed or cut into strips. Spice rating: Medium, 5,000-9,000 Scoville Units.

Cascabel or Little Rattle: Cascabel Chillies are also known as Rattle Chillies due to the tendency of loose seeds to rattle inside their bell-like shape. Cascabel chillies provide woody, acidic and slightly smoky flavours with tobacco and nutty undertones. They are also quite mild (rating at around 1,000-2,500 on the Scoville Heat Scale), meaning they can be used generously to take advantage of their delicious flavour.

Chipotle (Dried Jalapeño): A chipotle, or chilpotle (pronounced chipotlay), which comes from the Nahuatl word chilpoctli meaning "smoked chili", is a smoke-dried jalapeño.
The peppers may have been smoked to keep them from rotting, since the jalapeño is prone to deteriorating quickly when stored fresh. It loses little of its heat through the smoking process, and many enjoy both its spiciness and the natural wood-smoke taste that accompanies it. It is used as a flavouring ingredient in many dishes. It is smoky and spicy, and can be found as whole dried peppers or, more usually, as dried powder, paste or sauce.

Another of the marvelous Mexican chiles, the Cascabel Pepper is a member of the Capsicum annuum species and is also known as guajones, cores chile bola and rattle chile, a name that refers to both the shape of the chile and the sound the seeds make when a dried chile is shaken.

The Cascabel Pepper is a plump, round, smooth and small chile that ripens from green to red. When dried, the color darkens to a deep reddish-brown with an almost transparent but thick skin. When mature they are about 1-1/2" in diameter.

The flavor is somewhat reminiscent of strawberries, but the aroma is extremely beefy. It sounds odd, but the two balance quite harmoniously, like a steak slathered with a fruit-based barbeque sauce. We at Spice Jungle love to seed and stem cascabels, lightly toast them, and then toss them and a few roasted tomatillos into a blender and roughly chop them into a chunky salsa. It's fantastic served with anything that comes off the grill.

Unlike many chiles, these are known by the same name whether fresh or dried. Recipes that call for Cascabel chiles typically are referring to the dried chile.
The Cascabel Pepper is sometimes confused with the Catarina chile (whose seeds also rattle when the chile is dried) and with the darker cherry chile pepper (due to their similar sizes and shapes).

The Cascabel Pepper is grown in several states throughout Mexico, including Coahuila, Durango, Guerrero, Jalisco and San Luis Potosi.

The flavor profile of the Cascabel Pepper is woodsy, acidic and slightly smoky with tobacco and nutty undertones. This chile is considered a mild-heat chile (1,000-2,500 on the Scoville Heat Scale).

We like to roast these chiles in a hot skillet before using; then they can either be ground or rehydrated in warm water and made into a paste or a sauce. We also like to pair these with other Mexican chiles for more complex depths of flavor. If you are rehydrating them, we recommend not soaking for more than 20 minutes or they become bitter.

The small Pequin chile pepper, also called Chile Pequin, is used in hot sauces, salsas and soups. The pepper packs a lot of heat, so a little goes a long way. The immature green peppers are sold fresh more often than the mature, dried, red peppers. The green peppers are used in fresh salsas. Pequin chile peppers can be pickled and are the perfect size to spice up cucumbers in a pickling brine. To impart a soup with the flavors of the chile, puncture a fresh pepper several times with a fork or the end of a knife and put the whole chile into the broth. Pequin peppers develop a more complex flavor when dried; the chiles can be dried in a low-temperature oven or dehydrator. Dried Pequin chile peppers are ground into flakes and used as a spice for pastas, chicken or beef dishes, or in dry rubs. Fresh Pequin chile peppers will keep in the refrigerator for up to a week.

Pequin chile peppers are one of the two chile varieties used to make Cholula®, a popular hot sauce from Mexico.
The Cholula brand hot sauce is said to be based on a "generations old" recipe from Mexico, and was named for the ancient city of Cholula in the state of Puebla, south of Mexico City. Cholula has firmly established itself in pop culture, culminating in a partnership with Major League Baseball.

Pequin peppers are native to Mexico, and grow from the southern part of the country all the way north to the Texas and Arizona border in the southern United States. The plant grows wild in the mountains of Mexico and can be a challenge to cultivate due to its delicacy. Once established, the plant will produce for up to three years and can withstand humidity better than other pepper varieties. Many of the cultivated Pequin chile peppers are dried and either sold as a seasoning in specialty stores or used in a variety of products, from pastes to chile flakes and hot sauces. Outside of Central Mexico, fresh Pequin peppers can be found in home gardens and through small farms and farmer's markets.

Recipes that include Pequin Chile Peppers (one is easiest, three is harder):
Houstonia Magazine: Chile Pequin Vinegar

Chile, Chipotle, Brown (Chipotle Meco or Típico), Whole
Brown chipotles are native to Mexico and are also known as chipotle meco or tipico. They are fully ripened jalapenos, meaning they've been allowed to mature and turn red on the vine. They are dried by smoking, and it takes about 10 pounds of fresh peppers to make one pound of dried. They have wrinkled, dark brown skin and a sweet, chocolate-like, smoky flavor. They register about 5 on a heat scale of 1 to 10. Chipotle comes from the Aztec language of ancient Mexico, Nahuatl, and means 'smoked chile.'

When using dried chiles, you can opt to toast them first for added flavor, and rehydrate them by soaking in hot tap water for about 20 minutes. Don't soak any longer or they can become bitter.
Use them to make salsas, sauces and condiments. Great with chicken, pork and also beef. They offer a rich, smoky flavor and a nice bit of heat.

That burning sensation
If you want zip and heat in your favorite entrée, add a chile pepper. Peppers come in a variety of shapes and colors and range in taste from sweet and mild to hot.

Bell peppers are often picked when green and immature. Allowed to ripen to red, yellow, orange, brown, or purple, they are sweeter. Hot peppers are often harvested at maturity, usually when red.

At the market, choose high-quality peppers that are fresh-looking, firm, thick-fleshed, and free of disease and insect damage. Avoid bruised or soft peppers.

By varying the type, quantity, and part of the pepper you use, you can adjust the heat of a dish. The main source of pungency in peppers is capsaicin, which is odorless and tasteless but produces a burning sensation on contact. Capsaicins are found in the inner wall of the fruit (the white "ribs" and white lining) and are concentrated at the stem end of the pepper. The seeds may also contain heat. You can reduce heat by removing the seeds and ribs.

A pepper's heat is measured in Scoville heat units with the use of a systematic dilution test developed by Wilbur Scoville in 1912. The scale ranges from 0 for the mild, sweet bell pepper to 300,000 for the fiery hot habanero pepper. Water stress on pepper plants can increase pungency, whereas cooler temperatures can decrease the heat of a pepper.

If you eat too much of a hot pepper or can't bear the heat, don't drink water. Because capsaicins are oils, they do not mix well with water, which will spread the heat around your mouth. Instead, drink milk or eat pasta, bread, or potatoes. These oil-absorbing foods will help relieve the burning sensation.

Wash peppers before peeling or chopping them.
Avoid direct contact with hot peppers, because the volatile oils they contain can cause skin irritation or burns. Wear rubber gloves while handling them, and wash your hands thoroughly with soap and water before touching your face. The seeds from hot peppers are often removed.

Most peppers are good sources of vitamins A and C. A mature pepper has a higher concentration of vitamins. Both sweet and hot peppers are delicious raw, grilled, or added to cooked recipes.

Chile, chili, chilli, chilies, pepper, paprika, aji, capsicum, chiles pasado, pimento, pimiento. Let's straighten out this mess.

In Central America the Nahuatl (Aztec) Indians grew a number of plants whose fruit they used in foods and medicines. They named it chil and its use spread across the region and the Caribbean.

In 1492 Columbus discovered the Arawak Indians growing them on Hispaniola and wrote that "They deem it very wholesome and eat nothing without it." He brought them back to Spain, where the "e" was added to chil, and within 50 years its cultivation had spread around the world. Today there are scores of varieties, sizes, shapes, colors and flavors, and they all have a spicy heat of varying degrees. Horticulturists named the plant Capsicum, derived from the Greek word kapto, which means "to bite", and scientists named the active ingredient that bites capsaicin.

Here are the definitive definitions describing the differences between chiles and chilis, some info about how hot the common ones are, and some tips on using them.

There are three definitions for "Chile", with the concluding "e":

1) Chile is the colorful fruit (it is technically not a vegetable) of the Capsicum plant, also called a pepper. But this pepper is not at all the same as the black pepper we put next to the salt shaker.
Black pepper is the powder made from grinding peppercorns, the fruit of a plant in the family Piperaceae.

Most chiles are spicy hot, a quality they get from a chemical irritant named capsaicin, which, interestingly, is also used in ointments as a pain reliever for such ailments as shingles because it can numb nerves. Let that sink in for a moment. A few peppers, like the common green and red bell peppers, have no heat and can be quite sweet.

Let's bust another myth: many people believe that the heat of a chile is all in the seeds. While the seeds do have some heat, by far most of the capsaicin is in the ribs that hold the seeds.

Most of the seeds are held in a bunch near the stem in a pod called the placenta. The closer the veins get to the placenta, the hotter they are.

This concoction consists of many chiles, like red bell peppers, cayenne peppers, and other dried red peppers. There is usually a high ratio of seeds to take the heat level of this flavorful ingredient up a notch. The flavor is familiar to most and the heat is moderate, but definitely quite noticeable, especially on the back end.

New Mexico Chiles
Origin: New Mexico, United States
Heat: 0-60,000 SHU (medium-hot)
New Mexico Chiles are one of our favorite chiles to use here at Dizzy Pig. These are "modern" peppers, developed in the late 1800s at what is now New Mexico State University, and grown in the Hatch Valley of New Mexico (often called the chile pepper capital of the world). They are useful in both green and ripened states. The flavor is sharper when green. But when ripened to red, the flavor mellows and the front heat becomes more of a back heat.

There are many varieties, but we use ripened chiles with a mildly warm 30,000 SHU here at Dizzy Pig. The flavor is earthy and reminiscent of tea and dried cherries, and the mild heat comes in gently and slowly on the back end.
We use this chile more for flavor than for heat.

African Bird Pepper
Heat: 150,000 SHU (hot)
As their seeds are often spread by birds, this chile also goes by the name Birds Eye Pepper. It grows throughout Africa, and commercially in Australia as well. It packs quite a punch, so we use it more for heat, but still enjoy the flavor it provides. The flavor of this chile is very mildly fruity, with notes of citrus, herbs and hay. The heat builds quickly in your mouth and could even leave you with a sweaty brow!

Habanero
Heat: 100,000-350,000 SHU (very hot)
Habanero chiles are gaining in popularity. Here at Dizzy Pig, we value their powerful heat, and especially love the intense fruity flavor that leads the way. Habaneros were the hottest chili in the world around the turn of this century, but they have since been dwarfed by other varieties. There are new varieties of Habanero that have little or no heat but still pack the great fruity and floral flavor, like the Habanada Pepper.

Originating from the Yucatán Peninsula in Mexico, this pepper variety is FEROCIOUSLY HOT and is thought to be one of the hottest chile pepper varieties on the planet, measuring a scorching 445,000 Scoville units. This makes it twice as hot as a standard Habanero Chile and over 80 times hotter than a Jalapeno Pepper! The slightly wrinkled chiles are approximately 1 inch wide by 1.5 inches long and are similar in shape to the Habanero. The chiles ripen from lime green to a brilliant red in 110 days and are produced on very productive plants that reach 30 inches tall. As well as the blistering heat, they have a lovely fruity flavour, which makes them an excellent choice for use in salsas, marinades and, of course, hot sauce.
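The "twice as hot" and "over 80 times hotter" claims above are simple ratios of Scoville ratings, which a quick sketch can check. This is a hedged Python illustration: the 445,000 SHU figure comes from the text, while the habanero and jalapeño reference values are mid-range assumptions of my own, not from the source.

```python
# Rough heat comparison using Scoville Heat Units (SHU).
def times_hotter(shu_a: float, shu_b: float) -> float:
    """How many times hotter pepper A is than pepper B, by SHU ratio."""
    return shu_a / shu_b

yucatan_pepper = 445_000  # figure quoted in the text above
habanero = 225_000        # mid-range "standard" habanero (assumption)
jalapeno = 5_250          # mid-range jalapeno, 2,500-8,000 SHU (assumption)

print(round(times_hotter(yucatan_pepper, habanero), 1))  # about 2.0, i.e. "twice as hot"
print(round(times_hotter(yucatan_pepper, jalapeno)))     # about 85, in line with "over 80 times hotter"
```

The exact multipliers depend on which point in each pepper's SHU range you pick, which is why such claims are best read as order-of-magnitude comparisons.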
Note: An immature pod is shown here.

Chile de Arbol #9250 (30 seeds)
A Cayenne type of pepper with pointed pods, 2 to 3 inches long and 3/8 inch wide. They mature to dark red and are thin-fleshed. Mexican common names for this type are pico de pajaro (bird's beak) and cola de rata (rat's tail). Ranging from 15,000 to 30,000 Scoville units, they are usually ground into dry powder for red chile sauces or added to soups and stews. 80 days.

Large Red Thick Cayenne #9560 (30 seeds)
Concentrated set of wrinkled, very pungent fruit, 6 inches long and 1-1/4 inches in diameter. Very pungent, even when small. Useful for sauce and drying. 76 days.

Long Red Slim Cayenne #9425 (30 seeds)
Bountiful harvest of pencil-shaped fruits that are 5 inches long and 1/2 inch thick, but often curled and twisted. The flavor is red hot and best used in very hot dishes. Easily dried. 75 days.

Mesilla #9123 (30 seeds)
Often seen in grocery stores labeled 'finger-hots,' these are bright green at first but later turn red. Slightly curved and wrinkled, these peppers are about 10 inches long and 1.5 inches wide, and are borne in abundance. Use them whenever good, spicy flavor is desired. Large plants are quite disease-resistant and easy to grow. 85 days.

Looking to spice things up? Before making that salsa or tossing a few chilies on your taco, think again. There are hundreds of chili varieties worldwide, each with its own range of heat. Whether mild and sweet or frighteningly hot, each chili contains a chemical called capsaicin.

Capsaicin is an irritant with enough strength that it has been included in pepper spray used by police forces, in pest-control repellents and is even an ingredient in paint stripper.
While individual chilies do not contain high amounts of capsaicin, the strength of it can depend on how hot the chili is.

For example, certain peppers can leave a burning sensation on the skin if the seeds or membranes are handled. While the occasional chili or two added to a recipe or eaten raw won't do any damage, large amounts of capsaicin can cause gastrointestinal symptoms and, in extremely rare cases, prove fatal.

Chili peppers are capsicums and are in the same family as bell peppers and paprika pods. They range in flavor from rich and sweet to fiery and hot. The important thing to remember is to combine the heat of chili peppers with other spices so the finished dish will have a full-bodied flavor.

Here is a description of the most commonly used peppers in the world. Many are readily available; others are only available by mail order. The spiciness is rated in Scoville units: the bigger the number, the hotter the chili. These chilies are usually available dried.

Aleppo: This chili is Turkish in origin and has an ancho-like flavor with a little more heat and tartness. Crush and put in a jar and shake on pizza and spaghetti. Aleppo is great on grilled meats, steaks and chops. Use it in your tuna salad and on your deviled eggs. 10,000 Scoville units.

Ancho: Large, juicy, dark purple Mexican chili pods. These peppers are the most commonly used in Mexico, and are the backbone of dishes such as traditional red chili and tamales. To make a flavorful ancho chili oil, chop 3 peppers into 1-inch chunks and simmer in 3 cups of corn oil for 20 minutes. Let cool, strain and store in an airtight container. Drizzle over tacos, tamales or other dishes where you want a little zip. 3,000 Scoville units.

These peppers have a round shape with rich deep flavor, and are pretty spicy.
In Mexican cooking they are used in everything from chili to tacos and mole. 11,000 Scoville units.

This chili is usually found in powdered form. It is very spicy. Use judiciously. 40,000 Scoville units.

These are small, super hot red Mexican chilies. They are also known as bird's eye chilies and are also common in Thailand. Use with caution. These chilies are often used whole in Asian cooking. 140,000 Scoville units.

Chipotle peppers are smoked jalapeño peppers. They are available either dried or canned in adobo sauce. They are rich, smoky and fairly hot, and are wonderful added to chili or red chili sauces. 15,000 Scoville units.

Cooking with fresh chili peppers adds big flavor (without calories) to all kinds of recipes, even dessert! Whether you're a fan of mild peppers or four-alarm heat, we're breaking down the different types of these tiny spice powerhouses. Then, light a fire in your kitchen with five hot dishes.

The mildest of the bunch, dark green poblanos can be used in place of green bell peppers. They're exceptional for sautéing, roasting, stuffing, or making sauces.
RECIPE: Chicken Pepian

Lesser-known Anaheim peppers have a little bit of heat and a sweet crunch. Roasting or grilling enhances their natural sugars. Follow these tips from the Food Network Kitchens for perfect roasted peppers, then make this salsa.
RECIPE: Roasted Pineapple and Pepper Salsa

Slender and bright green, serrano chilies are a good multipurpose pepper. They register in the middle of the heat scale, and taste great raw in guacamole, salsa or in a spicy sauce for seafood.
RECIPE: Grilled Shrimp in Lettuce Leaves with Serrano-Mint Sauce

With slightly less heat than serranos, jalapenos are well suited for pickling, roasting or baking.
Remove the seeds and inner membranes to tone down the heat.
RECIPE: Baked Jalapeno Poppers

These tiny peppers pack a super hot punch. Many times hotter than most other chilies, proceed with caution when using these babies; they'll knock your socks off. Pair them up with tangy pineapple for a cool (and hot) sorbet.
RECIPE: Pineapple Habanero Sorbet

Dana Angelo White, MS, RD, ATC, is a registered dietitian, certified athletic trainer and owner of Dana White Nutrition, Inc., which specializes in culinary and sports nutrition.

Some salsa verde is made with chopped serrano peppers, which have a bit of a nip, but they're nothing compared to the Habanero. A couple of drops of a good (read: insanely hot) habanero sauce can render a meal volcanic.

If you ask a chilehead why they eat hot foods, you'll get several answers, but the main one is the endorphin rush. Endorphins are natural opiates released by the brain to signal pleasure rather than pain. You can be eating hot foods, panting, breathing hard, sweating, your lips on fire, and smiling because you really enjoy it! Me, I'd have to say it's the taste they impart.

Hot peppers vary considerably in type, the location where they're grown, the temperature of the growing season, when they're harvested and how they're processed. McIlhenny, for example, puts their picked peppers in a barrel with salt and lets them age naturally for three years before processing. The result is a wonderful, smoky, rich and deep flavour. Some companies, on the other hand, use chemical enhancers to artificially age their products, sometimes giving them an unpleasant aftertaste and a thin, watery tang.

Although I consume industrial-size bottles of Tabasco sauce in short order, and I really like the Tabasco flavour, I also really enjoy habanero and jalapeno sauces. Unfortunately, they're difficult to find at some local groceries.
Sad that I have to shop for comestibles of this sort in Toronto or further away. I've found a good source in Ottawa (Canadians take note) called Chilly Chiles, online at www.chillychiles.com.

In my kitchen are several Mexican habanero sauces waiting, like fine wines, to be sampled: La Anita, La Extra, Salsa Cancun and Loltun. I'm working through a bottle of Melinda's XXX, which is produced in Mexico, but isn't as hot (or as tasty) as it used to be when it was made in Belize a few years ago. I've generally found the bottled sauces less appealing than the homemade ones.

Many Oriental pepper sauces add garlic, which can give them real zest, but they also usually contain sugar and sometimes MSG. Some Mexican, Caribbean or Jamaican sauces add these and other ingredients, like onion, carrots or even curry, to give them a rich taste.

Kung Pao Peppers
I was talking to some friends about how our garden is doing. We have seven different kinds of peppers going right now, including four hot varieties, so I'm listing them here for our, and their, reference. I'm sure when they're ripe, they're going to be prolific, so make your requests now.

The number after the pepper type is its measure on the Scoville scale, which measures the hotness of a pepper. The scale goes from 0 for sweet bell peppers to 15-16 million for pure capsaicin, the chemical that supplies the heat in a hot pepper. In 2006, the habanero pepper, with a Scoville range of 350,000–580,000, was overtaken by a new variety: the Bhut Jolokia from India, with a Scoville range of 855,000–1,041,427. That's scary hot.

By the way, "hot" and "spicy" are not the same thing; hot is heat (duh) and spicy has a lot of spices in it.
Spices are the dried seeds, fruits, roots, bark and other parts of plants (not including the leaves, which are herbs); examples are cumin seed, peppercorns, cinnamon, cloves, nutmeg and saffron. So food can be spicy but not hot.

- Anaheim – 500–2,500
- Cayenne – 30,000–50,000
- Jalapeno – 2,500–8,000
- Kung Pao – 7,000–12,000
- Pimento – 100–500
- Sweet bells – 0
- Tabasco – 30,000–50,000

Here's a hot tip (groan): If you eat something that's too hot from chiles, drinking water, beer or wine won't help. Capsaicin is not water-soluble, so those beverages won't reduce the heat at all. Instead, have some dairy ready, maybe as a dip or sauce; milk, sour cream or yogurt will counter the heat.

Chile peppers are an ancient crop that originated in what is now Mexico and South America. After European explorers found the Western Hemisphere, peppers spread around the world and are now found in cuisines everywhere.

Chili peppers are the fruits of Capsicum pepper plants, noted for their hot flavor. They are members of the nightshade family, related to bell peppers and tomatoes, and most belong to a species known scientifically as Capsicum annuum.

There are many varieties of chili peppers, such as cayenne and jalapenos. Chili peppers are primarily used as spices, or as minor ingredients in various dishes, spice blends and sauces.

They are usually eaten cooked, or dried and powdered, in which form they are known as paprika.

Capsaicin is the main bioactive plant compound in chili peppers, responsible for their unique pungent (hot) taste and many of their health benefits.

Famous Mexican moles, chilli con carne and Tex-Mex foods make extensive use of chillies.
Many Szechuan dishes depend on the chilli flavour. Countries which do not use chillies as extensively in everyday dishes also depend on their heat for certain traditional preparations; for example, piquant pasta dishes from Italy use fresh and dried chillies, and prudent use of chillies is made in many of the pickles, relishes and cooked chutneys of the more Northern European countries.

Chilli Pepper Heat Scale in Scoville Heat Units (SHU)
This scale measures the relative content of the capsaicinoids that give the heat.
1 Mild – below 3,000
2 Medium – 3,000 to 15,000
3 Hot – 15,000 to 50,000
4 Very Hot – 50,000 to 300,000
5 Super Hot – above 300,000, up to 2,000,000

Fresh chillies are available almost everywhere, from independent greengrocers to Oriental stores and supermarkets. It is difficult to be specific about their heat: even fruits from the same plant can vary in strength.

Anaheim: These are about 10 cm long, red or green and mild to medium in flavour.
Bird's Eye: These chillies are so hot that they taste explosive to the uninitiated. They can be green, red or orange in colour.
Caribe: These spicy yellow chillies are the perfect choice for people looking to add noticeable heat to salads, soups, and other dishes with a raw pepper that won't melt your mouth.
Cayenne: Sometimes called finger chillies, these are slimmer than Anaheim chillies; they are always red and hot.
Serrano: Slightly chunky, these red or green chillies can be hot or slightly milder.

Since the heat of a pepper can vary from one valley to another, and from one bush to another, and since this system relies on human taste tests, it is highly inaccurate. The American Spice Trade Association (ASTA) uses chromatography to measure the concentration of heat-producing chemicals.
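The five-band SHU scale above reads naturally as a cascade of thresholds. A minimal Python sketch (band names and cut-offs taken from the scale; the function name is illustrative):

```python
def heat_category(shu):
    """Map a Scoville Heat Unit value to the five-band scale above."""
    if shu < 3_000:
        return "Mild"
    if shu <= 15_000:
        return "Medium"
    if shu <= 50_000:
        return "Hot"
    if shu <= 300_000:
        return "Very Hot"
    return "Super Hot"

print(heat_category(40_000))  # a typical cayenne lands in "Hot"
```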
Conversion to SHU is not precise, however.

When chopping hot chile peppers it is wise to wear rubber gloves, and be sure not to touch your eyes, pick your nose, use the urinal, or make love before you wash thoroughly. On the last two items, I could relate stories I have heard, but this is a family website.

Cooking chiles usually diminishes the heat. Chile sauces will change color and flavor with age, but the heat will not diminish.

How hot is it? Here are some benchmarks:

5,300,000 – Police-grade pepper spray
1,000,000 – Bhut Jolokia "Indian Ghost Chile"
210,000 – Orange Habanero chiles
150,000 – Red Habanero chiles
100,000–250,000 – Scotch Bonnet chiles
80,000 – Dave's Insanity Sauce
15,000–30,000 – Habanero hot sauces
15,000–30,000 – McCormick's Crushed Red Pepper
5,000–15,000 – Del Arbol chiles
5,000–15,000 – Tabasco Sauce (original)
5,000–10,000 – Chipotle powder, Sandia chiles
4,000 – Pasilla (dried Chilaca)
1,000–3,000 – Big Jim, Anaheim
1,000–3,000 – New Mexico #20, New Mexico 6-4
1,000–1,500 – Poblanos, Anchos (dried Poblanos)
1,000–1,500 – Typical American chili powder
0 – Green bell pepper, red bell pepper, Carmens for paprika

Putting out the fire
Capsaicin is not water soluble, so if your mouth ignites when eating hot peppers or hot sauce, beer and cold water are not very good at putting out the fire. They only distribute the capsaicin oils.

It sure is getting warm around here! The recent seasonal release of Ghost has us pretty excited here at Dizzy Pig. It's also a great reminder of how important chiles are to the culinary world as well as to our products. Whether you spell it chili, chilie, chilli, or chile, these peppers are not only nutritious, they are packed with flavor.
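To put an unfamiliar SHU number in context, the single-valued rows of the benchmark table above can serve as reference points. A sketch, assuming a simple nearest-value lookup is good enough for rough comparison:

```python
# Single-valued benchmarks from the table above, as (SHU, item) pairs.
BENCHMARKS = [
    (0, "Green or red bell pepper"),
    (4_000, "Pasilla (dried Chilaca)"),
    (80_000, "Dave's Insanity Sauce"),
    (150_000, "Red Habanero chiles"),
    (210_000, "Orange Habanero chiles"),
    (1_000_000, "Bhut Jolokia"),
    (5_300_000, "Police-grade pepper spray"),
]

def nearest_benchmark(shu):
    """Return the benchmark item whose heat is closest to the given SHU."""
    return min(BENCHMARKS, key=lambda pair: abs(pair[0] - shu))[1]

print(nearest_benchmark(190_000))  # closest to the Orange Habanero benchmark
```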
Some are hot and some are mild, but most of them have at least a little heat depending on the amount of capsaicin they contain.

The Scoville Scale
The "heat" in chiles is commonly measured using the Scoville Unit scale, sometimes abbreviated SHU (Scoville Heat Units) or HU (Heat Units). We won't get too scientific with all the details here. But for reference, check out the SHU levels for some of the most commonly used chiles:
- A red bell pepper has 0–100 SHU
- Habanero registers between 100,000 and 350,000 SHU
- The hottest pepper known to man at this time, Pepper X, has 3,200,000 SHU… that's roughly ten to thirty times hotter than a habanero

Dizzy Pig's Top Ten Chiles
There are thousands of chile varieties in the world. However, we will limit this post to the top ten chiles that are used in our products, from mildest to hot. As a testament to chiles' versatility, it is important to note that all of our small batch craft seasonings, with the exception of our Happy Nancy, contain some form of chile.

Heat: 85 SHU (very mild)
Did you know that paprika is a chile? Actually, there isn't a chile called paprika – it is a combination of ripe, dried and ground chiles, such as bell pepper, tomato pepper and other sweet peppers. It has been around for a long time, and for good reason too. It has a wonderful flavor that is a little tangy, slightly sweet, and quite earthy with hints of tea and berries. The color ain't too shabby either. The Spanish paprika we depend on here at Dizzy Pig has a bright red to red-orange color.

Heat: 2,500–5,000 SHU (mild)
You may have tried Guajillo (pronounced gwä'he yo), and not realized it.

The amount of heat in chiles was first carefully measured in the early 1900s by Wilbur Scoville. Chile pepper extract was diluted with sugar until the heat was no longer detectable. Today the measuring is done with more precise methods.
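Scoville's original dilution test gives the unit a concrete reading: the SHU rating is roughly the number of times the extract had to be diluted before the heat disappeared. A toy illustration (the function name and the one-millilitre figure are just for the example):

```python
def dilution_volume_ml(shu, extract_ml=1.0):
    """Volume of sugar water needed to dilute a pepper extract past the
    point where its heat is detectable, reading SHU as the dilution factor."""
    return shu * extract_ml

# One millilitre of 30,000 SHU cayenne extract needs about 30 litres:
print(dilution_volume_ml(30_000) / 1000, "litres")
```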
But despite all the technology, the range of heat for each chile still varies tremendously. While the Scoville Unit (SU) for a cayenne chili may register 30,000 today, next week it might register 50,000. This is due to variations in growing conditions, including weather, soil, and neighboring crops.

Luckily, most recipes do not specify Scoville Units, and most cooks just want to know if the chile is hot or not. The following entries are listed as mild (0–2,000 SU), moderate (2,500–23,000 SU), hot (12,000–50,000 SU), and very hot (50,000–325,000 SU). Higher Scoville Units exist for things like police pepper spray, but you won't be cooking with that stuff, I hope!

They all contain capsaicin, which is the compound that creates the sensation of heat on the tongue. This stuff can create some discomfort, especially on tender, sensitive skin. To be safe, use gloves when chopping chiles, and keep your hands away from your eyes (and other tender areas).

Were you among the 30,000 people in Hatch, New Mexico, at noon on September 1, when the Chile Queen and her Red and Green Chile Princesses were crowned at the kick-off of the world's most famous chile festival? Did you watch the chile-eating contests and inhale the aroma of fresh green chiles being roasted in the field? Did you purchase some ristras, taste the burritos and sopapillas, ogle the best-of-show chile pods?

Thanks to friends who travel frequently to Taos, however, I am well supplied with several varieties of New Mexico dried ground chile pepper, and my pantry would be naked without it. (Left to right in the photo above: red and green flakes, ground red, ancho.)

Big Jim, Sandia, Anaheim and Espanola are the most popular New Mexico chile varieties; all rank as fairly mild on the Scoville scale, at 500–2,500 Scoville units. (A bell pepper is 0 Scoville units, a habanero 300,000 or more.)
In my pantry, I also have hot ground chile from Vietnam, and wickedly hot cayenne from California, plus mild and hot variations of what we call pizza peppah here in Rhode Island.

Chili powder (with an "i") and ground chile pepper (with an "e") are two different products. With an "e", it's pure pepper. With an "i", it's a blend, often containing one or more varieties of ground chile pepper, plus cumin, garlic and Mexican oregano. And, even more confusing, many recipes for chili call for some type of chile.

According to New Mexico State University's Chile Pepper Institute, where in addition to scholarly research and practical advice you can also find an online seed catalog, one teaspoon of dried red chile powder provides an adult's daily requirement of Vitamin A, and one fresh green chile pod has as much Vitamin C as six oranges.

In fact, chiles are so popular that, for more than 20 years, they've even had their own magazine. Now, how many foods can make that claim?

Adapted from Weekend! A Menu Cookbook for Relaxed Entertaining, by Edith Stovel and Pamela Wakefield, this is a great recipe for those who don't eat beef but still want some meat in their chili.

If you don't dare use them, you can substitute jalapeño peppers, which have a different kind of heat, or even a drizzle of Tabasco sauce. But if you prepare it for a Mexican, it must have serrano chile, seeds and all.

Choose peppers with deep, vivid colours, and avoid those that are shriveled or have soft spots. As a general guideline, the bigger the chili, the milder it is.
Smaller peppers tend to be a lot spicier because, proportionally, they contain more seeds and veins than larger peppers. Store peppers in the vegetable drawer of the refrigerator for best results.

Capsaicin will not only set your tongue on fire – it can irritate the flesh as well – which is why it is extremely important to wash your hands well after handling a pepper. Anyone who has accidentally rubbed their eyes or nose after preparing a pepper will understand! Consider using rubber gloves when cutting and seeding a pepper to avoid any accidental contact with skin.

Cooking or freezing will not diminish capsaicin's potency, so the only way to remove some of the heat is by removing the seeds before cooking.

If reaching for a tall glass of water after eating a chili pepper is your idea of relief, think again. Capsaicin is oil based, and therefore does not mix with water. In fact, water may actually distribute the capsaicin to more parts of the mouth. Instead try drinking a glass of milk, or eating some rice or a piece of bread, to relieve the burning sensation.

Healthy Ways to Enjoy:
- Mix chopped peppers into a stir-fry for some added heat
- Add a few slices of hot peppers to an omelet
- Spice up a sandwich by adding some chopped peppers to tuna or chicken salad
- Stir in some peppers to guacamole for added flavour
- Add chopped chili peppers to your favourite cornbread for some added kick
- Add a thin slice of pepper to homemade sushi
- Try experimenting with different peppers in homemade chili for variations of flavour and heat

Did you know?
- Guinness World Records confirmed that a professor at New Mexico State University discovered the world's hottest chili pepper – the Bhut Jolokia – it measures 1,001,304 Scoville Heat Units!
- Archeological evidence suggests chili peppers were domesticated over 6000 years ago in Ecuador
- Ever wonder why a food that causes pain is so popular?
One theory is that the discomfort in the mouth causes the brain to produce endorphins, natural opiates that give pleasure
- There are over 200 varieties of chiles – over 100 of which are indigenous to Mexico

Chilli hot peppers: seeds, chillies and sauces

Most people have heard of the Richter scale, which measures the intensity of earthquakes. But there was a scientist called Wilbur Scoville (1865-1942) who gave his name to the less well-known scale used for measuring the "heat" of chilli peppers and the products derived from them. This heat/spiciness in chilli peppers is produced by a chemical compound called capsaicin, which is found in nearly all types of chilli pepper except for sweet bell peppers, those big chunky green, yellow, orange or red peppers you can buy in any supermarket or greengrocer.

The Scoville scale runs from zero (no heat at all – those sweet bell peppers) to 16,000,000 (16 million), which you'll be relieved to hear isn't found anywhere in nature but is the value given to pure capsaicin and its derivative dihydrocapsaicin.

The hottest chilli pepper known to man is the Bhut Jolokia, which also goes under various other names including Naga Morich, Naga Jolokia, or the ghost chilli. This chilli comes from India/Bangladesh and is about 850,000–1,000,000 on the Scoville scale. That's seriously hot! To give you something to compare this against, the original Tabasco sauce is rated at about 2,500–5,000, jalapeño peppers are also about 2,500–5,000 (or 2,500–8,000 depending on which sources you read), and Scotch bonnet peppers are 100,000–325,000.

Eating a Dorset Naga (don't try this one at home...)

Not surprisingly, you won't find many people willing to cut the Bhut Jolokia up and toss the pieces in a salad, although it is used – sparingly – in some Indian foods such as fish curry.
India's defence scientists have, however, found some non-culinary uses for it – for example, they're planning to use a powdered form of the Bhut Jolokia in hand grenades! Interestingly, Bhut Jolokia-type chillies don't just grow in the tropical climate of the Indian sub-continent.

Dried red chili peppers are generally sold as 'dried red chili peppers' and not always specifically labeled. Depending on the type of chili you buy and its age, the heat can vary greatly. To gauge its level, you'll want to smell the chilies when you open the bag. If it's pungent and tickles your nose, they're probably pretty hot. If you don't get much fragrance off of them, they're probably more on the mild side. If I've learned anything from cooking with chilies, it's that someone's idea of what is hot covers the spectrum. So use the amount of chili peppers that best fits your comfort level.

Another note about the dried chilies – when cut into small pieces, I enjoy the extra bit of heat of biting into them. If you want the flavor but not the additional heat, then slice the chilies lengthwise (so they're in larger pieces), remove the seeds and cook as directed. When done cooking, simply remove them and serve.

Booth says this species is indigenous to both the East and West Indies and has been grown in England since 1731. The pods are erect, roundish, egg-shaped, very pungent. It was probably introduced into India early, as shown by the belief that it is native. It is used like other red peppers by the Mexicans, who call it chipatane.

Capsicum cerasiforme Mill. Cherry Pepper.
Tropics. Its stem is 12 to 15 inches high; fruit erect, of a deep, rich, glossy scarlet when ripe; of intense piquancy. A variety occurs with larger, more conical and pendent pods, and there is also a variety with yellow fruit.

Tropical America.
This plant is considered by some botanists as a native of India, as it has constantly been found in a wild state in the Eastern Islands, but Rumphius argues its American origin from its being so constantly called Chile. It is the aji or uchu seen by Cieza de Leon in 1532-50, during his travels in Peru, and even now is a favorite condiment with the Peruvian Indians. This pepper is cultivated in every part of India, in two varieties, the red and the yellow, and in Cochin China. In Ceylon there are three varieties, a red, a yellow and a black. It has been in English gardens since 1656. Its long, obtuse pods are very pungent and in their green and ripe state are used for pickling and for making Chile vinegar; the ripe berries are used for making cayenne pepper. Burr describes the fruit as quite small, cone-shaped, coral-red when ripe, and intensely acrid, but says it will not succeed in open culture in the north.

Capsicum minimum Roxb. Cayenne Pepper.
Philippine Islands. This is said to be the cayenne pepper of India. Wight says this pepper is eaten by the natives of India but is not preferred. It grows also on the coast of Guinea and is recognized as a source of capsicum by the British Pharmacopoeia. It is intensely pungent.

Tropical regions. This species is said by Booth to be the bonnet pepper of Jamaica. The fruits are very fleshy and have a depressed form like a Scotch bonnet.

Green chillies have higher water content and very few calories, which makes them a healthy choice for those who are trying to shed some pounds. Green chillies are a rich source of beta-carotene, antioxidants and endorphins, while red chillies consumed in excess can cause internal inflammation, which can result in peptic ulcers.

What pepper makes Indian food spicy?
Black Peppercorns – Black peppercorns are either used whole or can be ground down in Indian cooking.
They are distinctly spicy and can be added to almost anything. Black pepper is considered the King of Spices in India, and arguably all over the world.

Do Indians use peppers?
Undercooked spices will leave you with an unpleasant spice taste, which detracts from the flavors of the dish. Indian cuisine uses chili peppers in 3 main forms: whole fresh.

What is the world's hottest pepper?
Top 10 Hottest Peppers In The World [2021 Update]
- Carolina Reaper 2,200,000 SHU.
- Trinidad Moruga Scorpion 2,009,231 SHU.
- 7 Pot Douglah 1,853,936 SHU.
- 7 Pot Primo 1,469,000 SHU.
- Trinidad Scorpion "Butch T" 1,463,700 SHU.
- Naga Viper 1,349,000 SHU.
- Ghost Pepper (Bhut Jolokia) 1,041,427 SHU.
- 7 Pot Barrackpore ~1,000,000 SHU.

Are green chiles the same as poblano peppers?
Poblano peppers are beautifully mild green peppers that impart a deeper, smokier flavour than comparable green bell peppers. Compared to Indian green chilies, Poblanos are extremely mild, about 1,000-2,000 Scoville units per pepper compared to 15,000-30,000 Scoville units for a green chili.

Can Carolina Reaper kill you?
Can Eating a Carolina Reaper Kill You? No, eating Carolina Reapers or other superhot chili peppers will not kill you. However, it is possible to overdose on capsaicin, the chemical that makes chili peppers hot. One would need to eat more than 3 pounds of reapers to achieve this.

What are the top 20 hottest peppers?
En Fuego: Top 20 Spiciest Peppers In The World
- 8 7 Pot Chili.

Chili Peppers - What's Hot?
When it comes to chili peppers, some are hotter than others. Can you please provide some guidance? TOH: When it comes to rating chili peppers' heat, looks don't help: it's the seeds and membranes that count. Scoville Heat Units (SHU), named after researcher Wilbur Scoville and used by heat experts, indicate the amount of capsaicin, a potent compound that gives chilies their sizzle.
Although the method for determining SHUs relies heavily on subjectivity, the scale is a respectable gauge. Use the following information from chiliworld.com to put the heat into perspective:

Sweet bell pepper – 0
Cubanelle pepper – 100-1,000
Texas Pete Hot Sauce, T.W. Garner Food Co. – 747
Anaheim pepper – 500-2,500
Poblano pepper – 1,000-2,000
Jalapeno pepper – 2,500-5,000
Chipotle pepper (a smoked jalapeno) – 5,000-10,000
Serrano pepper – 6,000-23,000
Tabasco brand Habanero Sauce, McIlhenny Co. – 7,000-8,000
Cayenne pepper – 30,000-50,000
Habanero pepper – 100,000-350,000

Up to 80 percent of the fiery factor in peppers – capsaicin – is found in the seeds and membranes, so the only way to reduce the heat of a hot chili is to remove the seeds and veins. But keep in mind that a little burn is not without benefits. Capsaicin is known for its decongestant qualities; it also causes the brain to produce endorphins, which contribute to feelings of well-being, according to The Food Lover's Companion.

Spicy pepper sauce with ginger

Chillies (hot peppers) and peppers (Capsicum annuum and other species of the genus Capsicum) are vegetable plants of the Solanaceae family, grown for their aromatic green, yellow or red fruits, depending on their degree of ripeness. They are native to South and Central America, but are grown all over the world.

Chillies are consumed fresh in their countries of origin, and most often dried and then reduced to powder or paste in importing countries. The main producing countries are India, China, African countries and Japan. The term "piment" appeared in the French language in the 800s; it finds its root in the Latin word "pigmentum", which means color, but also sap (of a plant).

Varieties of chilli peppers: lantern pepper (these are very hot!)

The lantern pepper is the strongest of all.
It is really fearsome, so much so that we avoid touching it with our fingers (it would be very painful if we rubbed our eyes afterwards!). It can be eaten green or red, or mixed. It is commonly found in the West Indies, in Guyana and throughout the Caribbean. It is also cultivated in equatorial Africa, where it is much appreciated: it is an excellent condiment, very strong in spice, but also in flavor. It is said to be very good for your health, but it should not be overdone either! It is also called West Indian pepper.

The latest among spices, chilli pepper is still one of the most consumed in the world today, especially in hot regions. Consumed in moderation, it is credited with various medicinal properties (antiseptic, diuretic, etc.) and even aphrodisiac ones.

We offer a hot pepper puree, plain or with ginger, with a taste very close to natural chilli pepper, to accompany your daily and festive meals. For those who like to eat spicy food, you are served! Refrigerate after opening.

Chili peppers are a great way to add flavour and spice to dishes, without adding extra calories or fat. Depending on the colour and variety of pepper, they can be a source of vitamin A, C and E and potassium. Here's how a few of the most popular peppers compare:

[Table: vitamin A (IU), vitamin C (mg) and vitamin E (mg) per pepper for jalapeño (14 g), serrano and banana peppers; the values were lost in extraction.]

Chili peppers get their heat from capsaicin, a natural compound that affects the pain receptors in the mouth and throat. Capsaicin has five separate chemical components contributing to its heat; three give an immediate sensation on the throat and tongue, while the other two cause a slower, longer-lasting hotness.
The highest concentration of capsaicin is found in the white ribs and seeds of the pepper – these can be removed to reduce the pepper's hotness.

Not only does capsaicin give peppers their heat, it has also been touted to have some health benefits. Preliminary laboratory studies have linked capsaicin to being a potential cure for type 1 diabetes, to lowering the risk of prostate cancer by destroying cancer cells, and most recently, as published in the Journal of Agricultural and Food Chemistry, it may even play a role in halting fat cell growth.

There are many varieties of chili peppers that range in size, shape, colour and most of all in taste – from mild to extremely hot. Capsaicin content can vary widely in peppers, and as a result a scale has been developed to describe the pain to your palate.

The capsaicin content is measured in Scoville units – an indication of how hot a pepper is. For instance, sweet bell peppers have a zero value, while a habanero has a whopping 100,000 to 300,000 units!

Here is a list of some peppers, in order from mildest to hottest...
New Mexico: 500-1,000
Jalapeno: 2,500-5,000
Serrano: 10,000-23,000
Cayenne: 30,000-50,000
Scotch Bonnet: 100,000-250,000
Habanero: 100,000-300,000
Pure capsaicin: 16 million...ouch!

But from personal experience I'll warn you to do this in small amounts: my tongue, fingers, and nose are still getting over the capsaicin burns.

When pairing you want to draw attention to common flavors in the chili and chocolate, or make a contrast between light and heavy ones. For example: you can increase the smokey character of some chocolates by including some smoked chipotle chili. Or you can lighten up a deep, ruddy chocolate with a brighter chile. You may want to complement one fruity flavor with another, or combine a citrusy chocolate with an herby chili.
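Since the Scoville list above tops out at pure capsaicin (16 million), any pepper's heat can be expressed as a share of that ceiling. A small sketch (the function name is illustrative):

```python
PURE_CAPSAICIN_SHU = 16_000_000  # top of the Scoville list above

def fraction_of_pure(shu):
    """Express a Scoville value as a share of pure capsaicin."""
    return shu / PURE_CAPSAICIN_SHU

# Even a habanero at the top of its range is under 2% of pure capsaicin.
print(f"{fraction_of_pure(300_000):.2%}")
```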
Chilies and chocolate bring out each other's richness; considered pairings balance that richness and make it more complex.

This is a situation where grinding your own chilies, while not essential, is a huge help. (This is my grinder, and we have a very special relationship.) The just-ground flavor of chilies is substantially different, and since we're doing culinary nitpicking here, those nuances matter. Plus, grinding whole chilies allows you to decide how much of the capsaicin-loaded membranes you want to include. Below is a list of common (and some not-so-common) dried chilies that I think pair well with chocolate. They're ordered (roughly) from light, fresh, and bright to deep and dark, with some tasting notes and pairing suggestions.

- Aji panca: Very light and berry-like, with a good kick. Tastes almost like blueberries. I found it best with lighter and berry-like chocolates so its fruity nuances don't get covered up, though it's also very good with acidic chocolate (see below).
- Aleppo: Fresh-tasting and oily, acidic and light but complex and zesty. Great with nutty chocolates.
- Guajillo: Commonly sold as New Mexico. Ruddy and bold, perhaps the most "peppery." Another great contrast for roasted chocolate flavors, though with a deeper and fuller chili flavor and feel than aleppo. Its bite also complements floral or light berry flavors. This is my favorite all-purpose chili for chocolate.
- Ancho: The classic moderate chili. Good for more muted chili flavor as it's less intense than guajillo. Use a milder herbal or citrusy chocolate for something reminiscent of mole.
- Cascabel: Coffee-like, almost tannic.

What red chillies are used in Indian cooking?
The Hot Red Indian: These include Bird's eye chilli (dhani), Byadagi (kaddi and daggi), Ellachipur Sannam, Guntur Sannam, Hindpur, Jwala, Kanthari White, Kashmiri Chilli, Madhya Pradesh Sannam, Madras Puri, Nagpur, Nalchetti, Ramnad Mundu, Sangli Sannam, Sattur, Mundu, Tadappally and Tomato Chilli.

What are Indian peppers called?
Indian Green Chili Peppers. Hari Mirch is the Hindi word for green chili, where "Hari = green" and "Mirch = chili". Fresh, slender Indian green chiles are used in curries, stews, pickled or eaten raw as a condiment. The white spongy membrane of the green chili near the seeds, also called the placenta, carries that heat.

Is Carolina Reaper found in India?
The Carolina Reaper is said to be a cross between Sweet Habanero and Naga Viper chillies. And, the crossbreed may have an Indian connection, too. The Naga Viper was created by an English chilli farmer. The Carolina Reaper was certified as the world's hottest chili pepper by the Guinness World Records in the year 2013.

Is Jalapeno spicy for Indians?
Yes, they are. Are they very spicy? No, at least not on average.
I love jalapenos and even use them in place of green bell pepper (capsicum) in many recipes.

Which chilli is best?

Tabasco, aji and cayenne start to reach real heat, at 30,000-50,000 Scoville units. Tabasco peppers are unique: they are only grown in a small area in Louisiana. The McIlhenny company that makes the smoky, delightful Tabasco sauce also makes a delicious (albeit milder) green jalapeño sauce and a very tasty tabasco-habanero sauce that combines the two flavours very nicely. Sadly, the red Tabasco, delicious as it is, constitutes about 95 per cent of the choices in hot sauce in this and most North American communities. Which says a lot about us, doesn't it? I consume an industrial-size Tabasco bottle every month, which I suppose says a lot about me.

But even the tangy Tabasco doesn't turn the heads of real pepper aficionados, except perhaps as a dessert topping. They want more heat: the sharp bite of the Thai pepper (50,000-100,000 units) is where it starts. That's in the same neighbourhood as the tiny pequin, and the chiltepin and red Amazon (both 75,000).

It doesn't get really exciting until you reach the Jamaican Hot, Scotch Bonnet and the revered Habanero. All of these start at around 100,000 Scovilles, but the Habanero can push into the truly stratospheric, past 350,000. That's what you get in zingy sauces like Dave's Insanity Sauce, which as far as I can tell is about one step below weapons-grade pepper spray (although I have yet to taste his latest sauce, which promises even hotter levels of tastebud hell).

The hottest pepper ever tested was a Red Savina Habanero, which boasted an incredible 577,000 Scovilles in 1995. You wear oven mitts when preparing these babies. You don't eat them, you worship them from afar.

Most Mexican food, as well as most Mexican table sauces (like Buffalo or Tajín), isn't really very hot at all.
When I asked for salsa picante in Zihua, I usually got a bottle of American-made Tabasco! Or worse, ketchup... You sometimes have to convince local restaurateurs that you can handle the hot stuff. Then they will bring out bowls of delicious, spicy red and green salsa to your table.

In 2000, India's Defence Research Laboratory (DRL) reported a rating of 855,000 SHUs, and in 2004 a rating of 1,041,427 SHUs was made using HPLC analysis. For comparison, Tabasco red pepper sauce rates at 5,000-10,000, and pure capsaicin (the chemical responsible for the pungency of pepper plants) rates at 16,000,000 SHUs.

In 2005, at the New Mexico State University Chile Pepper Institute near Las Cruces, New Mexico, Regents Professor Paul Bosland found bhut jolokia grown from seed in southern New Mexico to have a Scoville rating of 1,001,304 SHUs by HPLC.

The effect of climate on the heat of these peppers is dramatic. A 2005 study comparing the percentage availability of capsaicin and dihydrocapsaicin in bhut jolokia peppers grown in Tezpur (Assam) showed the heat of the pepper is decreased by over 50% in Gwalior's more arid climate. Elsewhere in India, scientists at Manipur University measured its average Scoville rating by HPLC at only 329,100 SHUs.

Ripe peppers measure 60 to 85 mm (2.4 to 3.3 in) long and 25 to 30 mm (1.0 to 1.2 in) wide, with a red, yellow, orange, or chocolate color. The unselected strain of bhut jolokia from India is an extremely variable plant, with a wide range of fruit sizes and fruit production per plant, and offers huge potential for developing much better strains through selection in the future. Bhut jolokia pods are unique among peppers, with their characteristic shape and very thin skin. However, the red fruit variety has two different fruit types, the rough, dented fruit and the smooth fruit. The images on this page show the smooth fruit form.
The rough fruit plants are taller, with more fragile branches, and the smooth fruit plants yield more fruit and are more compact, with sturdier branches.\nBhut jolokia is used as a food and a spice, as well as a remedy to summer heat.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "Which Chili Peppers Are Mild?\nMost people like a little bit of heat to their chili, but we're not all suited for super-spicy flavor. For those who want the flavor of chili peppers but want to avoid the heat, there are milder peppers you can use in chili recipes. Here are a few of the mildest peppers you can incorporate into your cooking:\n- Banana Peppers: These are a great way to add a mild kick of spice to your food. Banana peppers can be found anywhere, and they're even a popular sub and salad topper.\n- Cubanelles: Cubanelles get their names from the Cuban cuisine they're used in. They add a sweet and mild taste.\n- Poblanos: These peppers are called anchos when they're dried, and they're included in a lot of chili recipes. To try them out, order a chile relleno, which is a stuffed and fried poblano.\n- Anaheim Peppers: This New Mexican chili pepper is really mild, and it's often roasted and used in enchilada sauce and Tex-Mex dishes.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "How hot are fresh chili peppers?\nPure capsaicin, the stuff that makes chili peppers hot, is rated between 15,000,000 and 16,000,000 Scoville heat units. This is incredibly HOT.\nWhat is the mildest chili pepper?\nThe mildest peppers such as sweet bell peppers and cherry peppers are at the bottom of the Scoville scale. In the middle are peppers like Serrano, yellow hot wax peppers, and red cayenne peppers.
At the hottest end of the heat scale are the Habanero and the Scotch Bonnet.\nWhat are the top 20 hottest peppers?\nEn Fuego: Top 20 Spiciest Peppers In The World\n- 8 7 Pot Chili.\n- 7 Gibraltar Naga.\n- 6 Infinity Chili.\n- 5 Naga Viper.\n- 4 Chocolate 7 Pot.\n- 3 Trinidad Scorpion Butch T.\n- 2 Moruga Scorpion.\n- 1 Carolina Reaper.\nIs 8000 Scoville hot?\nWhat are Scoville Heat Units? … The sweet bell pepper is the mildest of the hot peppers at zero SHU while the jalapeno is in the 2,500 – 8,000 SHU range and the mighty Habanero is much hotter in the 100,000 – 500,000 SHU range. Pure Capsaicin tops the scale at 15,000,000 – 16,000,000 SHU.\nWhen should I pick my chili peppers?\nChilli peppers are generally ready for harvesting from mid-summer, and will continue fruiting well into autumn if grown in a greenhouse. Picking regularly, using a sharp knife or secateurs, encourages plants to produce more fruit. Leaving chillies to ripen fully (usually to red) will suppress further fruit production.\nWhat is the best tasting chili pepper?\nIn talking with many pepper enthusiasts, we've found the Habanero to be universally considered to be one of the best tasting peppers. Its flesh holds up to and absorbs smoking well.", "score": 12.364879196879162, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "In Mexican cooking, chilies take on the role of many spices.\nIt was in Grand Central Market in Los Angeles that I realised why Mexican cuisine would always find it hard to take off in India. LA signifies Hollywood for most of us, but it is in places like the Market that you realise what a deeply Hispanic city it is. Unlike the standard film images of wide boulevards filled with cars but few pedestrians, the streets to the Market's side are packed with shoppers from across Latin America, but most of all Mexico, which is, after all, not far to the South in Baja California.\nThe Market is where many of the shoppers go to eat.
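The Scoville figures quoted in the Q&A above (sweet bell pepper at zero SHU, jalapeño at 2,500–8,000 SHU, habanero at 100,000–500,000 SHU, pure capsaicin topping out around 16,000,000 SHU) can be folded into a rough heat-band lookup. A minimal sketch; the band names and cut-offs are mine for illustration, not an official scale:

```python
def heat_band(shu: int) -> str:
    """Map a Scoville Heat Unit (SHU) rating to a rough heat band.

    Thresholds loosely follow the figures quoted in the Q&A above;
    they are illustrative, not an official classification.
    """
    if shu < 100:
        return "sweet"        # e.g. bell pepper (0 SHU)
    if shu < 2_500:
        return "mild"         # e.g. Anaheim (500-2,500 SHU)
    if shu < 30_000:
        return "medium"       # e.g. jalapeno (2,500-8,000 SHU)
    if shu < 100_000:
        return "hot"          # e.g. Thai pepper (50,000-100,000 SHU)
    if shu < 1_000_000:
        return "very hot"     # e.g. habanero (100,000-500,000 SHU)
    return "extreme"          # e.g. bhut jolokia, pure capsaicin

print(heat_band(5_000))    # a typical jalapeno -> "medium"
print(heat_band(250_000))  # a hot habanero -> "very hot"
```

Because the bands span several orders of magnitude, a chain of threshold checks reads more clearly here than any arithmetic formula would.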
Grand Central is a wonderful space, an example of how you can clean up a place like Crawford Market in Mumbai without gentrifying it, but still retaining its identity as a living market connected to a local community. It is filled with shops selling produce from across Latin America and stalls where people can sit to eat the same produce fresh cooked. And looking at the food on offer it is hard not to be struck, as Octavio Paz, the Mexican poet, Nobel laureate and onetime diplomat posted in Delhi was, by how similar it seems to Indian food.\nIn his book In Light of India, Paz writes that the flavours were close, but what differed was how the food was served: in courses in Mexico, all at once on a thali in India. I'm not entirely sure I agree with him about the flavours, but what is close is the structure of food. Griddle cooked flatbreads like tortillas or chapatis accompany highly flavourful stews where legumes make up for a lack of meat. Other similarities include piloncillo, unrefined Mexican sugar which is just like jaggery, or tamales, steamed dough cakes that resemble kadubu, the idli-like rice cakes that are steamed in jackfruit leaves.\nYet all these similarities are set aside by one profound difference which I could see on the stalls heaped high with all the different chiles used in Mexican food. These were both fresh chiles and the same ones dried and sometimes smoked in ways that change them so completely they are treated as separate kinds. For example, fresh jalapenos, one of the hotter chillies, become chipotles when smoke-dried, which have a smoky heat.", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-1", "d_text": "But you can also add in caramelized onions, jalapenos, green chiles, beans, and more!\nHow much seasoning is in a taco packet?\nHow much taco seasoning is in a packet? A typical package contains 1 ounce/ 2 tablespoons of seasoning mix.
Simply substitute 2 tablespoons of this Taco Seasoning in any recipe calling for one package of seasoning.\nIs cumin a herb or spice?\nCumin, also spelled cummin, (Cuminum cyminum), small, slender annual herb of the family Apiaceae (Umbelliferae) with finely dissected leaves and white or rose-coloured flowers.\nWhat is the difference between Tex Mex and authentic Mexican food?\nTypically, when you say Mexican food, you are referring to the cuisine of an entire country from different regions. However, when you say authentic Mexican food, you just want cuisines that use only Mexican ingredients. On the other hand, Tex-Mex is a type of Mexican food with a narrower set of base ingredients.\nIs cumin authentic Mexican spice?\nAlong with chili peppers, the seasoning most people tend to reach for when making "Mexican" food is cumin. However, cumin is not a traditional Mexican spice. Cumin was introduced to chili con carne in San Antonio, and was another staple used to set the cuisine apart from the food found south of the border.\nWhat are the 3 main ingredients used in most Mexican cooking?\n3 Main Ingredients Used in Mexican Cooking\n- Corn (maize) Popol Vuh, which is the sacred book of the Mayas, says that men were made of corn.\n- Chili. In Mexican cuisine, the use of chili is a must, it is an ingredient that gives the dishes a very distinctive flavor and challenges the taste of the brave.\nHow do you make a Mexican dish taste better?\nHere are 10 Mexican spices to spike up your meals with recipes from the Food Monster App.\n- Cumin.\n- Garlic Powder.\n- Mexican Oregano.\n- Onion Powder.\n- Coriander (or Cilantro)\n- Chili Powder.\nWhy do Mexicans like spicy food?\nOriginally Answered: Why do Mexicans like spicy food? Cuisine tends to be spicy in hot climates that are favorable for bacterial growth.
Spice in food inhibits bacteria in human bodies, and so this natural defense against pathogens in hot climates emerged in cuisine.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "Scoville scale: 100,000–350,000 SHU\nThe habanero (Spanish: [aβaˈneɾo]) is a hot variety of chili pepper. Unripe habaneros are green, and they color as they mature. The most common color variants are orange and red, but the fruit may also be white, brown, yellow, green, or purple. Typically, a ripe habanero is 2–6 cm (0.8–2.4 in) long. Habanero chilis are very hot, rated 100,000–350,000 on the Scoville scale. The habanero's heat, flavor and floral aroma make it a popular ingredient in hot sauces and other spicy foods.\nThe name indicates something or someone from La Habana (Havana). In English, it is sometimes incorrectly spelled habañero, the tilde being added as a hyperforeignism patterned after jalapeño.\nOrigin and use\nThe habanero chili comes from the Amazon, from which it was spread, reaching Mexico. A specimen of a domesticated habanero plant, dated at 8,500 years old, was found at an archaeological site in Peru. An intact fruit of a small domesticated habanero, found in pre-ceramic levels in Guitarrero Cave in the Peruvian highlands, was dated to 6500 BC.\nThe habanero chili was disseminated by Spanish colonists to other areas of the world, to the point that 18th-century taxonomists mistook China for its place of origin and called it Capsicum chinense ("the Chinese pepper").\nToday, the largest producer is the Yucatán Peninsula, in Mexico. Habaneros are an integral part of Yucatecan food, accompanying most dishes, either in natural form or purée or salsa.
Other modern producers include Belize, Panama, Costa Rica, Colombia, Ecuador, and parts of the United States, including Texas, Idaho, and California.\nThe Scotch bonnet is often compared to the habanero, since they are two varieties of the same species, but they have different pod types. Both the Scotch bonnet and the habanero have thin, waxy flesh. They have a similar heat level and flavor.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-1", "d_text": "There's lots more information on chile peppers at the Chile Pepper Institute, a program of New Mexico State University.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "If someone had told me 10 years ago that my kitchen would now be stocked with the likes of jalapenos, serranos, Anaheims, Poblanos, and (sometimes) habaneros, I would have thought she was crazy. I also would have asked her why she decided to travel back in time to tell me what my fridge would contain in the year 2009. Shouldn't she, instead, have been inventing Wikipedia? Or warning us to not purchase a 4-million dollar home if we had $26 in the bank? Thanks, hypothetical future-chick.\nThe truth is, I used to be scared to death of chiles. Having grown up in a Midwestern town where the only option for Mexican food prior to 1996 was Taco Bell, my palate wasn't assimilated to the pop and spice and fizzle that only real chiles can administer. If I wanted spice back then, I would have resorted to a few extra dashes of black pepper in my mashed potatoes. In the '90s, black pepper was as far as I would go in terms of spice.\nI only began to learn about the true power and glory of chiles while living in Cordoba, Argentina back in 2001. The cuisine of the Argentine pampas is about as picante-less as the Midwest: It's mostly beef. And potatoes. And more beef.
The food of Cordoba is delicious as hell, don't get me wrong--I never had a real steak until I sunk my teeth into the tough-but-flavorful beef of Cordoba. The pampas are, after all, still gaucho territory. (Thank God!) However, if one craves a spicy plate in Cordoba, chances are she'll be disappointed.\nAs were my Mexican friends to whom I grew very close while living in Argentina: Mayra, Erika, Yazmin, and Mariluz. They taught me many necessary cultural tricks of the trade, and I sometimes felt like I was learning more about Mexico than Argentina! Erika taught me how to flirt with boys in Mexican-Spanish. Mariluz taught me how to hike in high heels. Yazmin taught me how to add -ito and -ita to my nouns to make them sound cuter.", "score": 8.750170851034381, "rank": 94}, {"document_id": "doc-::chunk-2", "d_text": "The habanero (/ˌ(h)ɑːbəˈnɛəroʊ/) is a hot variety of chili pepper. Unripe habaneros are green, and they color as they mature. The most common color variants are orange and red, but the fruit may also be white, brown, yellow, green, or purple. Typically, a ripe habanero is 2–6 cm (0.8–2.4 in) long. Habanero chilies are very hot, rated 100,000–350,000 on the Scoville scale. The habanero's heat, flavor and floral aroma make it a popular ingredient in hot sauces and other spicy foods.\nCHEF TIP: I think they almost have a peachy flavor profile. You can always take a small bite from the tip of the chile pepper to taste without the high heat hitting your tongue.\nThai chili plant is a perennial with small, tapering fruits, often two or three, at a node. The fruits are very pungent.\nThe bird's eye chili is small, but is quite hot (piquant). It measures around 50,000 - 100,000 Scoville units, which is at the lower half of the range for the hotter habanero, but still much hotter than a common jalapeño.\nAn Anaheim pepper is a mild variety of the cultivar 'New Mexico No. 9' commonly grown outside of New Mexico. It is related to the 'New Mexico No.
6 and 9', but when grown out of state they have a higher variability rate. The name 'Anaheim' derives from Emilio Ortega, a farmer who brought the seeds from New Mexico to the Anaheim, California, area in 1894.\nThe chile "heat" of 'Anaheim' varies from 500 to 2,500 on the Scoville scale.\nSweet and not hot at all, zero. Very nice to use to round off the heat from the chile peppers.\nThe jalapeño is a medium-sized chili pepper pod type cultivar of the species Capsicum annuum. A mature jalapeño chili is 5–10 cm (2–4 in) long and hangs down with a round, firm, smooth flesh of 25–38 mm (1.0–1.5 in) wide.", "score": 8.086131989696522, "rank": 95}, {"document_id": "doc-::chunk-35", "d_text": "This strange new world of chili fanatics has fueled a multimillion-dollar hot sauce industry hawking products with names like Dave's Ultimate Insanity Sauce, Blair's Possible Side Effects, Rectal Rocket, and hundred-dollar bottles of pure capsaicin crystals capable of delivering sixteen million Scoville units. These were not the sort of people to just accept the idea that millions of people in a third-world country had been quietly besting them for generations, consuming the world's hottest chilis more or less for free.\nSkeptical chiliologists insinuated that the Indian research was flawed. They deplored the "constant nationalistic tone" of the scientists, who were, as it happens, working for India's Defense Research Laboratories. Worst of all, they intimated that the Tezpur chili wasn't Indian at all, just another variant of Capsicum chinense. (What don't they make in China these days?)\nActually, Capsicum chinense is not Chinese. All chili pepper species derive from South or Central America; the term chili comes from the Nahuatl Chi-li. (Apparently the Aztecs had a glyph for it, too.)
But the nomenclature of chiliology is littered with geographical confusions and non sequiturs that go directly back to the origins of the New World — Columbus sailing west to find the Indies (and pepper) and stumbling upon the Americas (and chilis) instead. His first taste came in 1492, among the Taino. He enjoyed "ají, which is their pepper which is more valuable than black pepper, and all the people eat nothing else, it being very wholesome." A hundred years later, the English herbalist John Gerard was still confused about the name: recommending the cultivation of chili plants in hot horse manure in his Herball or Generall Historie of Plantes, he calls it "Piper Indianum or Indicum or sometimes Piper Calicuthium or Piper Hispanicum… in low French Poivre d'Inde, very well known in the shops at Billingsgate by Ginnie Pepper, where it is usually bought." Capsicum chinense, meanwhile, got its name from an eighteenth-century Dutch botanist, who thought the Caribbean chili seeds he was working with had come from China.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-3", "d_text": "When fresh, it has a mild flavor, and is used primarily for color. When old, it is just flavorless red dust. The McCormick spice people say that, pound for pound, paprika has more Vitamin C than citrus fruit.\nThen there's hot paprika, which has some hot peppers in the blend. There's also smoked sweet paprika and smoked hot paprika. They are made from peppers that are slowly dried in the presence of hardwood smoke, and they are easy to make at home. Just smoke hot red peppers at a low temp, preferably about 225°F for four to six hours until dry enough to grind. I usually remove the stem, split it lengthwise, and scrape out the seeds. This helps it dry faster and gets more smoke to the party.
Show some style and make a blend of sweet red pepper and hot red pepper, and throw out that store bought dust.\nHow hot is that chile?\nPeppers start green and turn red, orange, yellow, or purple as they ripen. In general the smaller the chile the hotter, the greener, the hotter, the skinnier, the hotter. The most notable exceptions to the rule are the habanero and the Scotch bonnet, both of which are broad shouldered, medium sized, orange or red, and very very hot. Recent research indicates chiles are hotter when grown in hot humid climates.\nThe amount of heat in a chile pepper is measured on a culinary Richter scale called the Scoville Heat Units (SHU) scale. One part per million of capsaicin is equivalent to 15 Scoville units. About 85% of the capsaicin in a chile pepper is concentrated in the ribs on the inside of the pepper, about 10% is in the seeds, and 5% in the meat and the skin. This means that you can get the flavor of a jalapeño, for example, without the heat, by removing the seeds and ribs. Measuring SHU is not very precise. To measure SHU, peppers are dried, ground, and mixed with alcohol to produce an oil extract. The oil is diluted with water and sugar and tasted. The Scoville measurement is the level of dilution where tasters can no longer sense the heat.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-0", "d_text": "
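The dilution-based definition above gives a simple linear conversion: at one part per million of capsaicin per 15 Scoville units, pure capsaicin (1,000,000 ppm) works out to 15,000,000 SHU, consistent with the 15–16 million figure quoted earlier. A minimal sketch of that arithmetic:

```python
SHU_PER_PPM = 15  # one part per million of capsaicin ~ 15 Scoville units

def shu_from_capsaicin_ppm(ppm: float) -> float:
    """Convert a capsaicin concentration (ppm) to Scoville Heat Units."""
    return ppm * SHU_PER_PPM

def capsaicin_ppm_from_shu(shu: float) -> float:
    """Invert the conversion: estimate capsaicin concentration from SHU."""
    return shu / SHU_PER_PPM

print(shu_from_capsaicin_ppm(1_000_000))  # pure capsaicin -> 15000000
```

As the text notes, the taste-panel method behind these numbers is imprecise, so treat the conversion as a rule of thumb rather than a lab-grade formula.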
Refreshing chillies will continue to keep to get a little while while in the fridge.\n- We will see some sort of Hombre participating in a guitar, and you will be drinking Tequila, choosing Limes, trying to play after some Chihuahua and have a relatively tastes about especially very hot Fills and Red pepper Sauce.\n- Size contrast, and yet the most sexy habanero may well appear in located at 500,000 Scoville units.\n- Of your house Chrissy Teigen, Anderson Paak, Ricky Gervais and even Natalie Portman while in the couch, this talks having Evans normally start off courteously more than enough accompanied by a pretty heated wing.\n- Stay updated yearly year or two like further researching on about this hot not to mention spicy explore system.\n- By using probable rewards want being able to help through weight reduction and then murdering cancer malignancy cells, spicy foodstuff only will get healthier not to mention better.\nHence it makes sense to help you grab a specific thing wintry — so long as it’s not actually water. Chile peppers makes the jump provided by capsaicin. It’s thought to be have maxi-princess.com got started in your Yucatan and possesses a bit of a lemon or lime flavor to make sure you it. These bhut jolokia might be erroneous for that habanero, you knows the primary difference at the time you little bit to one.\nThe actual Suitable Chilli For Chinese Dishes\nBut, just after 90 years days on the subject of poor, I still realize its way too watery designed for my personal taste, and that’s exactly the actual cause of three rather than all 5 stars.", "score": 8.086131989696522, "rank": 98}]} {"qid": 31, "question_text": "How does a diamond's carat weight affect its price?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "Diamonds are renowned for their beauty and durability, making them a popular choice for jewelry, particularly engagement rings. 
However, determining the value of a diamond can be complex due to various factors that influence its price. One of the most notable factors affecting diamond prices is the carat weight which directly contributes to the overall cost of the gemstone.\nCarat weight is a unit of measurement used specifically for gemstones, with one carat equaling 0.2 grams.\nThe price per carat of a diamond typically increases with the weight of the stone. This is because larger, high-quality diamonds are rarer and more difficult to obtain.\nAdditionally, other factors such as the diamond's clarity, cut, and color also play a significant role in determining the price of a diamond, making the balance and combination of these factors essential in understanding the overall value of the gemstone.\n- Carat weight significantly impacts the price of diamonds\n- Other factors, such as clarity, cut, and color, also influence diamond prices\n- Understanding these factors helps in assessing the true value of a diamond\nDiamond Price Factors\nOne of the most significant factors influencing diamond prices is the carat weight.\nAs the weight of a diamond increases, its value typically rises as well. For example, a 2 carat Cushion Cut Diamond Ring may have a significantly higher price compared to a 1 carat ring of the same shape and quality. Carat weight often has a direct impact on the diamond's price per carat.\nDiamonds come in various shapes, each with its distinct visual appeal.\nCommon shapes include round, princess, oval, marquise, pear, cushion, emerald, Asscher, radiant, and heart. The demand for different shapes can influence their price, with round diamonds often commanding a premium due to their high demand, symmetrical cut, and increased brilliance.\nDiamond color is graded on a scale from D (colorless) to Z (lightly tinted). Colorless diamonds (D-F) are rarer and more valuable than diamonds with noticeable color.\nGenerally, diamonds with lower color grades (K-Z) can be more affordable. 
However, certain fancy colored diamonds, such as pink, blue, or green, can be significantly more expensive due to their rarity.\nClarity refers to the presence of inclusions or blemishes within a diamond. Diamonds with fewer inclusions or blemishes are considered to be of higher clarity and, as a result, are more valuable.", "score": 53.02354111905471, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "Two diamonds of equal Carat Weight may vary substantially in price due to their Cut, Color and Clarity. Also, a diamond’s weight can be ‘hidden’ in different parts of the stone. For example, you can have a well-cut diamond, whose weight is distributed properly, a diamond that is cut too shallow to make it wider and heavier, but not the most brilliant, or one that is cut too deeply, to add weight to the bottom of the stone – again compromising its ability to radiate maximum brilliance.", "score": 49.09487628457216, "rank": 2}, {"document_id": "doc-::chunk-13", "d_text": "When the dealers buy them, they are ALWAYS sold by weight and you can bet that the price you are being charged more or less is based on the price that they paid. 1.75ct. stone will be described as $7,500/carat (if this didn't make sense to you, it's because you started the intro to diamond grading with the Cost button. Feel free to read further or go back and read up on weight, clarity, color, and cut). The final price for this stone will be $7,500 x 1.75 = $13,125.00.\nBasically, the cost per carat will go up as each of the important characteristics gets better, and go down as they get lower. Often these changes are quite dramatic. All other things being equal, a 2 carat stone will be about 4 times the price of a 1 carat stone with the same clarity, color and cut. This is why stores like to advertise 'total weight' of their merchandise.\nMany dealers start their pricing procedures using a document called the Rapaport report or one of the several competitive documents in the industry. 
This is a pricing sheet that relates weight, clarity, and color to produce a price. Unfortunately, the Rapaport diamond report (or Rap for short) isn't nearly as useful as many people wish it was because it glosses over cut as well as availability issues. Rap prices can vary from reality by as much as double (or half) of the appropriate price. Buying strictly off of Rap is kind of like using the Kelley Blue Book value for a car without paying attention to the condition of the vehicle.\nWholesale diamonds to the public - largely a marketing ploy.\nWholesale (noun): The selling of goods to merchants; usually in large quantities for resale to consumers.\nRetail (noun): The selling of goods to consumers; usually in small quantities and not for resale.\nWholesale diamonds are not sold singly; they are sold in parcels.\nFor starters, let's assume that your interest in reading this article is because you are a consumer interested in buying a small number of diamonds and that they will not be for resale. That is to say, the potential transaction involving you is retail. It's always possible that they occasionally also sell to other customers in large quantity for resale.", "score": 46.85233391574037, "rank": 3}, {"document_id": "doc-::chunk-2", "d_text": "Carat: When looking at carat weight, note that the price of diamonds with almost equal weight can vary dramatically. This is due to the uniqueness of the stone. For example, the price of a 0.99 carat diamond can differ from that of a stone weighing 1 ct of the same quality by $800/ct.\nColor: In color, the most dramatic difference in price is just after the grades D, E and F (Exceptional white +, Exceptional white and Rare white +). Price points are found at each new grade level of "colorless", "near colorless", "slightly colored", etc.\nClarity: With clarity, the main cost point jumps occur beyond the grades of "FL" and "IF".
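The dealer arithmetic described above is a straight multiplication: a per-carat quote scaled by the stone's weight. A minimal sketch using the text's 1.75 ct at $7,500/carat example (the 0.2 g figure is the standard carat definition):

```python
CARAT_IN_GRAMS = 0.2  # one carat is defined as 0.2 grams

def total_price(price_per_carat: float, carats: float) -> float:
    """Dealers quote per carat; the asking price scales with weight."""
    return price_per_carat * carats

print(total_price(7_500, 1.75))  # 13125.0, matching the $13,125.00 example
print(1.75 * CARAT_IN_GRAMS)     # the same stone weighs about 0.35 g
```

Note that only the multiplication is fixed; the per-carat quote itself moves with color, clarity, cut and market availability, which is exactly why Rap-sheet prices can drift so far from reality.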
After this first segment, the next dramatic price differences are found between grades, such as "VVS", "VS" and "SI".\nBy choosing a diamond that is just below the main cost points in one or more of the above categories, you make a slight compromise on quality but also save a lot of money. Your diamond may not be perfect, but will be as beautiful as the best precious gems.\nInformation from: www.abazias.com", "score": 46.29533385469365, "rank": 4}, {"document_id": "doc-::chunk-5", "d_text": "The price per carat is not linear; hence, a two-carat diamond won't be twice the price of an equivalent one-carat stone.\nTo get a sense of current market prices, you can research online or visit local jewelers to compare prices. It's essential to be confident, knowledgeable, and clear about your budget when making a decision. Considering these factors will ensure you are well-prepared to make an informed decision when buying a diamond.\nFrequently Asked Questions\nHow do carat sizes affect diamond prices?\nCarat size plays a significant role in determining the cost of a diamond. Generally, the price of a diamond per carat increases as the carat size increases. However, this increase in price is not always linear, and there may be disproportionate jumps in cost per carat depending on the size of the diamond.\nWhat factors contribute to a diamond's price per carat?\nApart from carat size, several factors contribute to a diamond's price per carat, including its color, clarity, and cut. Diamonds with a higher color and clarity grade are more valuable, leading to higher prices per carat. The cut of a diamond also plays a significant role, as more desirable cuts may command higher prices.\nHow does diamond color and clarity relate to its price?\nThe color and clarity of a diamond are both vital factors in determining its price.
Diamonds with less color (graded on a scale of D to Z) are more expensive, as they are considered more brilliant and rare. Clarity is graded based on the amount and location of inclusions (internal) and blemishes (external) present in the diamond. Flawless diamonds are the most expensive, whereas diamonds with more visible imperfections will be priced lower.\nAre there significant price differences between various cuts of diamonds?\nYes, different cuts of diamonds can have a considerable impact on their price. Some cuts are considered more desirable due to their ability to reflect light and create brilliance. For example, the popular round brilliant cut typically commands a higher price per carat than other cuts, such as the princess or emerald cut. The price difference can be attributed to factors like demand, cutting complexity, and the proportion of rough diamond utilized.\nHow do market trends impact diamond prices?\nMarket trends significantly impact diamond prices. Factors such as consumer demand, market availability, and economic fluctuations can influence the price of diamonds.", "score": 45.83252109105586, "rank": 5}, {"document_id": "doc-::chunk-3", "d_text": "With that said, pricing for a 1-carat diamond can range between $2000 to $25,000. However, as mentioned, one of the most deceptive things about figuring out the cost and value of a diamond is related to its carat. Remember that you will pay much more for a large, 1-carat diamond than you would for 3 smaller diamonds that total 1-carat between them. Therefore, shop wisely and seek help from an expert in the diamond industry before making a purchase. It’s the best way to ensure you’re paying a fair price for that excellent cut diamond engagement ring.\nUsing carat scales is an effective way to estimate the value of a stone. Diamonds are usually valued in carats, so make sure your scale is accurate enough. A half carat diamond, for example, is worth $2,500, and a one-carat diamond costs $7,500. 
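Those two quoted figures already show the nonlinearity the FAQ describes: the per-carat rate itself rises with size. A quick sketch (the totals are the article's examples, not market data):

```python
# Example totals quoted above: a half-carat at $2,500 and a one-carat at $7,500.
quoted_totals = {0.5: 2_500, 1.0: 7_500}  # carats -> total price in USD

per_carat = {ct: total / ct for ct, total in quoted_totals.items()}
print(per_carat)  # {0.5: 5000.0, 1.0: 7500.0} -- the rate rises with size

# So one 1.0 ct stone costs more than two 0.5 ct stones of similar quality:
print(quoted_totals[1.0] > 2 * quoted_totals[0.5])  # True
```

This is the same reason a store's advertised "total weight" can be misleading: several small stones adding up to a carat are worth far less than a single one-carat stone.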
By comparing the two, you'll know what you're looking for and how much you can afford to pay.\nAside from rarity and cut, the price of a diamond depends on the carat size. A one-carat diamond with G color might cost $7,900, but a two-carat diamond with the same qualities can fetch $13,000, or even more. The difference in pricing between different cut grades is huge. If you're looking for a smaller diamond, it's probably more than one carat.\nColor of Diamond\nThe color of a diamond can affect the price. While you're shopping for a small diamond, it's important to choose the right color for your budget. Diamonds with distinct colors usually cost more, so choose the color that matches your personal preference. Unlike small diamonds, large-sized diamonds with yellow or orange hues tend to be cheaper than those with a subtle color. For this reason, it's recommended to purchase white diamonds that have a neutral color, so the stone will be clear and sparkling.\nCut & Color\nThe cut and color of a diamond can also affect its value. If the diamond is poorly cut, the carat value of a 0.99ct diamond is only 1% higher than a 0.9 carat stone. Likewise, a 1.00ct diamond with a bad cut will be worth less than a 0.90 carat diamond. Moreover, a diamond with an average cut will cost between $2,500 and $4,000.", "score": 44.758869550698755, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "When shopping for a diamond, one must understand how to determine the price of a small diamond. While diamonds of all sizes and shapes are worth different amounts, they are usually valued in the same way. The price of a diamond is often determined by several factors. When shopping for diamonds, jewelers buy melee in parcels and sort them based on similarities. A small diamond is worth more if it lacks color.\nThe carat is a measure of the weight of a diamond. A diamond weighing 0.030 carats is small and will be easily hidden when mounted.
In addition to carat weight, the cut of a diamond will also determine its value. A diamond with an excellent cut is likely to be worth more than one that is cut poorly. Diamonds of a given carat weight can be further categorized by color and clarity.

How Much Does a 3 Carat Diamond Cost?
Generally, the price of a 3 carat diamond is between $19,000 and $95,000, though some are priced at more than $100,000. The cost of a 3 carat diamond ring depends on a number of factors, including cut quality, clarity, color and shape.

How Much is a 5 Carat Ring Worth?
The prices for 5 carat stones range from $9,350 to $147,400 per carat. From 5 carats and up, diamonds are large enough to be cut into heart shape diamonds.

Which Diamond Cut Holds its Value?
The round brilliant cut tends to hold its value the best because of the light-reflecting qualities of that cut. Round cut diamonds, when cut correctly, will sparkle, reflect the light, and retain those qualities forever. This helps them hold their value over time.

Are Diamonds a Good Investment?
On paper, diamonds make great investment sense. They have high intrinsic value, they're always in demand and they last forever. Plus, they're small, portable and easy to store (unlike that priceless Ming vase you just had to have at auction).

Why is Diamond Resale Value So Low?
The reason resale prices for diamonds are so low compared with retail prices is that jewelers buy diamonds in bulk, at wholesale prices, which are much lower. There is no reason for a jeweler to pay the same price for your diamond when such a stone can be bought for much less from a diamond dealer.

Shape also affects how large a diamond appears: a marquise cut diamond engagement ring, for instance, will have its diamond appearing larger than a round cut diamond ring of the same carat weight. The monetary value of a diamond increases with its carat weight.
The higher the carat weight, the higher the price of the diamond, because large diamonds are extremely rare and equally alluring. Other factors like color, clarity and cut also contribute to the worth of a diamond. So, when purchasing a pair of gleaming diamond earrings or any other diamond jewelry, pay special attention to the cut, color and clarity along with the carat of the diamond. The carat weight of the diamond is listed in the description that accompanies the product information for each item. While choosing any jewelry, give preference to the 4 C's of the diamond and select the best piece that also fits your budget.

In determining carat weight, consider the following factors:
– The size of her finger
– The size of the ring's setting
– Your overall budget

Oftentimes, diamond size or carat weight is an important factor in people's final choice. If this is the case, in order to fit within a very strict budget, one may choose a diamond that has a good cut but move down slightly on clarity and color in order to find a large, "clean" stone that fits the budget.

It is also worth noting that diamond prices jump as they move from one full carat size to the next. For example, a 2.95 carat diamond will see a significant jump in price when compared to a 3.01 carat diamond with similar grading. This is because the stone's total value is determined from a "per carat" value set by its characteristics.

A woman's finger size can also affect how large a diamond appears. For example, a 2 carat round solitaire on a woman with a size 3.5 finger will look much larger than the same engagement ring on a woman with a size 9 ring finger.
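The "per carat" pricing just described can be sketched in a few lines of Python. The per-carat rates used here are hypothetical placeholders, not market data; the point is only the mechanism, where the rate itself steps up at full-carat thresholds.

```python
# Sketch of "per carat" pricing: total price is the per-carat rate
# times the weight, and the rate steps up at full-carat thresholds,
# which is why a 3.01 ct stone can cost far more than a 2.95 ct stone
# of similar grade.

# Placeholder per-carat rates by full-carat bracket (illustrative only).
RATE_PER_CARAT = {1: 6000, 2: 10000, 3: 14000}

def price(carats: float) -> float:
    """Return an illustrative total price for a stone of the given weight."""
    bracket = min(int(carats), max(RATE_PER_CARAT))  # 2.95 ct falls in bracket 2
    bracket = max(bracket, 1)
    return RATE_PER_CARAT[bracket] * carats

# A 0.06 ct difference in weight crosses the 3 ct threshold,
# so the price jump is much larger than the weight difference.
print(price(2.95))  # 2.95 ct charged at the 2 ct bracket rate
print(price(3.01))  # 3.01 ct charged at the 3 ct bracket rate
```

A real price list would also vary the rate by color, clarity and cut, but the threshold effect works the same way.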
In determining the perfect diamond, let us help you find the right balance as you weigh the many different characteristics of the diamonds you have to choose from.

When it comes to diamonds, does size really matter? When choosing a piece of jewelry, it's important to pick a carat size that will retain its value and sparkle over the years as you wear it, which is why many women love the 2 carat diamond. In fact, this size is one of the most popular choices for engagement rings! Why? Here are a few reasons:

It's no surprise that a 2 carat diamond costs more than a smaller weight, but the relationship is not as straightforward as you might think. Many people believe that a 2 carat diamond costs twice as much as a 1 carat; after all, it's twice the size, so it should be twice the price, right? Because a 2 carat diamond is rarer than smaller sizes, it is actually worth more than twice the value of a 1 carat. For example, a 2 carat round brilliant diamond of G color, VS1 clarity, and a "triple excellent" rating for cut is worth up to 4 times more than a 1 carat diamond with the same attributes! Purchasing a 2 carat diamond, either as a loose diamond or in a setting, is a much better investment than smaller carat stones, which are not as rare and, therefore, not as valuable.

Have you ever noticed that a 2 carat diamond looks so much larger than smaller sizes? That's because of the surface area. Once mounted in a jewelry setting, a 1 carat diamond typically shows about 33 mm² compared to a 2 carat diamond, which shows around 52 mm². That means the eye sees over a 50% increase in surface area between these two diamonds, making the larger stone appear much bigger when worn.

In any piece of jewelry, whether it be an engagement ring, pendant necklace, or beautiful pair of earrings, a 2 carat diamond will stand out and sparkle.
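As a quick arithmetic check of the surface-area comparison above (about 33 mm² of visible area for a 1 carat stone versus about 52 mm² for a 2 carat stone):

```python
# Visible (face-up) surface areas quoted in the text, in square millimetres.
area_1ct = 33
area_2ct = 52

# Relative increase in visible area going from a 1 ct to a 2 ct stone.
increase = (area_2ct - area_1ct) / area_1ct
print(f"{increase:.0%}")  # 58%, consistent with the "over 50%" claim
```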
Choosing a 2 carat diamond also gives you more options for the setting, since this diamond size really pops against halo settings, larger side-stone designs like a three or five stone ring, or pavé detailing. Smaller diamonds sometimes get lost when there is too much going on with the jewelry piece, but a 2 carat diamond can stand on its own or as part of any setting design. No need to upgrade.

Flawless and colorless 1.5-carat diamonds will cost many times more than diamonds with visible flaws and a yellow tint.

For a truly stunning ring, a 3.00 ct diamond may be just what you need. At this weight, there's no doubt you'll have a big rock. However, a ring like this has a big price, too. For a three-carat diamond with excellent cut and color, you'll want to budget about $35,000. Again, lower color grades will cost you much less, around $23,000.

Price of a 3 carat diamond ring
The price of 3 carat diamonds depends on a few factors. Of course, the 4 Cs of diamonds (clarity, cut, color and carat) play the most important roles. Generally speaking, diamonds with relevant certificates are more expensive, but there's a reason for that, which we will come back to later in this article.

Diamond prices jump at every half carat (i.e. 0.5 ct, 1 ct, 1.5 ct, 2 ct and so on) because of demand and psychological factors. Through the use of a halo setting, you could also consider a 0.4 ct or larger center stone to achieve a similar result.

Tip #2 - Use a Thinner Shank Ring Design to Accentuate the Center Stone

0290: 14k Gold, 0.40 Carat Diamond Engagement Ring. Buy Now for US$2,000. Est. US$2,000 - US$3,300. Starting Price US$1,400. Rare Victorian Era Fine Diamond Jewelry Sale, Sun, Jul 25, 2021, 1:00 PM PDT. Buyer's Premium 20%. Lot 0290 Details. Description: 14K white gold.
Total carat weight of diamonds: 0.40 carats.

However, prices can differ by more than 50% depending on the color/clarity grade of the diamonds you are looking at. For example, a 1.5 carat G VS2 round cut diamond might cost $13,297 while a 2 carat G VS2 round cut diamond costs $24,479. The price difference is a whopping 84% more for the 2 ct diamond even though there is only a difference of half a carat in weight.

Most people express diamond carat weight in fractions, such as quarter carat, half carat, three-quarter carat, 1 carat, and so forth. They also guesstimate the value of your ring, or how much you spent on it, by what it weighs. The reality, however, is that other factors such as diamond color, diamond clarity and diamond cut quality also have a significant effect upon the price of a diamond, and thus it is common for smaller diamonds to cost more than larger ones.

Within the diamond industry there are several ways of expressing diamond carat weight. A one carat diamond weighs 100 points, just like there are 100 pennies in a dollar, and would be expressed in written form as 1.00 carats. The term "points" does not refer to the number of facet junctures that are created by the facet structure of a diamond.

What Is a Point of Carat Weight?
It is a representation of weight which originated in Egypt, when carob seeds were used to determine the weight of a gem stone. When describing a half-carat diamond weighing 0.52 carats, a store owner might refer to it as a "52 pointer" instead of saying it weighs "zero point five two carats".

In the old days, diamond wholesalers frequently referred to diamonds that weighed about one-quarter carat, or its multiples, as being so many grains. A grain roughly equals twenty-five points, or one-quarter carat, expressed as 0.25 carats.
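The points and grains conventions above reduce to simple conversions. A minimal sketch:

```python
# Carat-weight conventions described in the text:
#   1 carat = 100 points, and 1 grain = 25 points = 0.25 carats.

def points(carats: float) -> int:
    """Express a carat weight in points (a 0.52 ct stone is a '52 pointer')."""
    return round(carats * 100)

def grains(carats: float) -> float:
    """Express a carat weight in grains (quarter-carat units)."""
    return carats / 0.25

print(points(0.52))  # 52
print(grains(1.25))  # 5.0 -- the old-timers' "five grainer"
```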
A 0.74 carat diamond, for example, would be referred to as a three-grainer. A 1.25 carat diamond would be called a five-grainer, and so on. Today such a stone would be described as "a carat twenty-five" or "a carat and a quarter", but a few of the old-timers still talk amongst themselves in terms of grains, just to see if the young pups are up to speed.

Keeping It Legal:
The Federal Trade Commission has set forth guidelines concerning the proper representation of carat weight to the public. The diamond's actual weight must fall within two points (0.02 carats) of the fractional representation. For instance, a diamond represented as weighing one carat should fall within a range of 0.98 to 1.02 carats. A half-carat diamond should weigh between 0.48 and 0.52 carats, and so on.

Besides investors, some buyers have discovered they can get their engagement or wedding rings cheaper by buying the stone from a wholesaler or dealer. The Rapaport Diamond Index and IDEX are the most commonly referenced indices for diamond prices. However, diamonds are far from being at the level of stocks, bonds or commodities like oil when it comes to transparent pricing. It is possible for individual diamonds to sell for much more or less than they are "supposed" to be worth.

Diamonds are rated according to the four Cs:

Carat
This is the weight of the diamond. One full carat is around 0.20 g. A common guideline for determining value is Tavernier's Law: the weight in carats is squared and multiplied by the base price of a one carat stone (Wt² × C = price). In reality, Tavernier's Law will seldom give you the accurate value of a diamond, as the price will also be affected by the other conditions below. Diamonds bought for investment purposes are almost always three carats or more.
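Tavernier's Law as stated above can be sketched directly. The one-carat base price used here is a hypothetical placeholder:

```python
# Tavernier's Law from the text: price = Wt^2 * C, where Wt is the weight
# in carats and C is the base price of a one-carat stone. As the text
# notes, this is only a rough guideline, not an accurate valuation.

def tavernier_price(weight_carats: float, base_price_one_carat: float) -> float:
    """Estimate a stone's price by Tavernier's Law (weight squared times C)."""
    return (weight_carats ** 2) * base_price_one_carat

# With a hypothetical $5,000 one-carat base price:
print(tavernier_price(1.0, 5000))  # 5000.0
print(tavernier_price(3.0, 5000))  # 45000.0 -- a 3 ct stone is 9x, not 3x
```

The squaring is what makes large stones disproportionately expensive: tripling the weight multiplies the estimated price by nine.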
Diamonds smaller than three carats cannot be counted on for resale value.

Colour
The most valuable diamonds are usually clear (colourless) diamonds. Among fancy coloured diamonds, the most valuable are red, green and blue. In consumer markets, pink tends to be popular and expensive (gee, engagement rings, take a guess why?), but pink diamonds are not actually as scarce as the others. Yellow diamonds are difficult to sell in Asia, because the stone does not blend well with the common skin tones of those living in the region. GIA has an official colour scale. Again, for clear diamonds, value increases with lack of colour; for fancy coloured diamonds, scarcity of the colour affects value.

Bonus tip: if you just want a more affordable clear diamond, a common trick is to go for a "G" grade rather than an "F" grade. This can shave several hundred dollars off the price, and the difference will not be visible to the naked eye.

Clarity
This measures how many imperfections there are in the diamond. Imperfections are called inclusions when they are inside the diamond, and blemishes when they are on the surface. Clarity is graded from Flawless (FL) to Included (I1 to I3); the GIA publishes the full chart. Some inclusions are not visible once the diamond has been set in a ring, pendant, etc.

The value of a diamond is based on the following four C's.

Carat - Refers to the weight of a diamond.
The carat weight of a diamond is probably the most essential factor in understanding its value. A carat (1.00 carat = 100 points) is the standard unit used to measure and define a diamond's weight. As the carat weight increases, so do its rarity and its value.

Colour - Refers to the degree to which a diamond is colourless.
The colour of a diamond is graded from D to Z; most buyers are interested in the white range, and as a diamond's size increases its colour becomes more noticeable.
Sought-after colours with the highest value are in the D-F range (colourless); the difference between one colour grade and the next is almost unnoticeable to the eye.

Clarity - Refers to the presence of inclusions in a diamond.
Clarity refers to the optical quality of a diamond. Diamonds are formed within the earth, and it is only natural for them to have certain imperfections within them; these are called inclusions, and they are what make each diamond unique. Diamonds with the fewest imperfections are rare and have a higher value.

Cut - Refers to the angles and proportions of a diamond.
The cut is important to a diamond's overall appearance. It refers to how well a diamond is cut, which determines the diamond's internal light reflection and its brilliance. The shape of a diamond is closely connected to the above, and being symmetrical, a round diamond reflects light the best.

The so-called fancy-cut diamonds include the princess, cushion, heart, pear, marquise, radiant, emerald and oval. The jewellery shops Birmingham boasts will offer a wide variety, possibly including some rarer cuts as well.

Carat is the weight measurement for the mass of the diamond. Be aware that this means some diamonds could be denser and weigh more even though they are technically the same size. One carat is equal to 200 milligrams and, as you would imagine, the more carats, the more the diamond is going to cost. Large diamonds are quite rare and also subject to high demand, which inflates their price a great deal.

When it comes to diamonds, many novice shoppers believe that bigger is better. In reality, the carat is only one of four very important characteristics to consider when picking out the perfect stone.
Together these are called the 4 C's: carat, cut, color and clarity.

The carat weight refers to the total mass of a diamond. One carat is 200 milligrams, and a diamond is generally measured to a hundred-thousandth of a carat, then rounded to the nearest hundredth. A diamond's carat doesn't consider how the weight is distributed; because of this, a poorly cut diamond might appear smaller than it actually is.

With everything else being equal, the diamond with the highest carat will be the most expensive. It is also the easiest quality to pick out with the naked eye. This is why so many shoppers go straight to the carat when picking out a stone, rather than considering the other qualities. This is a mistake: if a large stone has imperfections, they are more obvious than they would be on a smaller stone.

Diamonds are cut in a specific way, intended to create the greatest brilliance, fire, and scintillation. A diamond's brilliance is the amount of light reflected from the diamond, its fire is the dispersion of light into the colors of the spectrum, and its scintillation is the sparkle created when it is moved.

In a diamond that is well cut, light reflects within, from facet to facet, before reflecting back to the eye. In a diamond that is cut too shallow or too deep, the light will leak out of the stone. This affects the stone's brilliance, fire, and scintillation.

Every diamond is different, and it is the diamond cutter's job to make the most of each stone. If a diamond is not an ideal cut, it doesn't mean the cutter did a bad job; rather, the stone may have been cut to highlight other qualities, for example, to maximize the carat weight.

If a diamond has a fair or poor cut grade, it was likely cut for size rather than brilliance. Premium and ideal cut diamonds are the most brilliant, though they are usually smaller.
Diamonds with a good or very good cut grade are in the middle; in these cuts, the cutter tried to balance the size and cut of the stone. Even if they appear glassy and clear, most diamonds have a yellow tinge.

These imperfections may be crystals of a foreign material, another diamond crystal, or structural imperfections (tiny cracks that can appear whitish or cloudy). Diamonds are graded from "flawless" (FL grade) through grades of VVS, VS and SI down to "included" (I or P grade). Inclusions are naturally occurring, and a flawless diamond is rare.

The carat weight measures the mass or size of a diamond. One carat is defined as exactly 200 milligrams. The value of a diamond increases exponentially in relation to carat weight, since gem-quality diamonds of larger sizes are rare.

Rough diamond - a diamond in its state before cutting and polishing.

Each factor contributes to the beauty and prestige of your diamond. I will explain these factors so you will be prepared to make an informed decision about your diamond purchase.

FACTOR 1: CARAT WEIGHT
People often use the word carat when discussing how big a diamond is; however, "carat" actually refers to the weight of a diamond. There is no rule as to what carat weight you should buy, but you'll doubtless have heard that "bigger is better." If you ask me, bigger is great, but you shouldn't forget about the other aspects of a diamond's quality.

A useful tip: if you're looking at certified diamonds, you may find it valuable to compare the diameters of different diamonds. Since every diamond is individually cut, some may appear larger than others of the same weight.

FACTOR 2: SHAPE
Approximately 75% of diamonds sold worldwide are round brilliants. Round diamonds are the most popular, most brilliant, and most expensive.
If you are purchasing a diamond as a surprise, a round brilliant is generally your safest bet.

There is no real hierarchy of shapes being better or worse; it is truly a matter of personal preference. Princess cuts are the second most popular, and a classic alternative to round diamonds. Cushion cuts are trendy and have a beautiful vintage look. If you want something different but not too crazy, try an oval cut, Asscher cut, or radiant cut diamond.

While no shape is better, there are some significant differences between shapes. Take, for example, the radiant cut versus the emerald cut. Though they are a similar shape, the extra facets of the radiant cut give it additional fire and sparkle. If you prefer the emerald cut's understated elegance, consider that it is easier to spot any imperfections, and select a higher clarity grade.

Another tip: diamonds (even round diamonds) may not be perfectly symmetrical. It's nothing to worry about if your diamond's width does not precisely match its height, but if your diamond is much longer than it is wide, it may not be what you're expecting. This is especially the case in shapes like cushion and oval, where a more asymmetrical diamond might look "skinny", with much of the fire and brilliance concentrated at the ends.

FACTOR 3: CUT
"Cut" refers to a diamond's finish and proportions, and is critical in determining its beauty.

When buying a diamond, an industry rule of thumb is two months' salary. At Desert Wholesale Diamond we stress purchasing what is comfortable for your budget and what you can feel proud presenting.
Please take a moment to read about diamond grading, and rest assured a professional Desert Wholesale Diamond broker will be happy to answer all of your questions in one of our showrooms or by phone: 760-568-5722.

A diamond's appearance and value are determined by the four C's of diamond grading: carat, cut, clarity and color.

Carat weight is the actual weight of the diamond as measured by the unit carat, abbreviated ct. One carat is the equivalent of 200 milligrams. Carat weight is most often expressed as fractions: a one-half carat would be expressed as .50 carat, and 1.50 ct equals 1 1/2 carats. Sometimes when a diamond is close to a certain weight, either above or below, it may be referred to as heavy or light. For example, a diamond weighing .96 carat may be termed a "light" carat; conversely, a .78 carat could be called a "heavy" 3/4 carat. Occasionally you may hear the term "pointer" used when describing the weight of the diamond. It is a shorthand way of expressing a diamond's weight. For example, a diamond weighing 1/10th of a carat, or .10 carat, would be termed a "10 pointer." A diamond weighing .53 carat could be called a heavy 1/2 carat, a "point five-three carat" or a "53 pointer."

Cut is not only the shape of the diamond, but how well the diamond is proportioned. The proportions of a diamond help determine its brilliance. Simply put, the better the cut, the more brilliant the diamond. Generic and trademarked names exist for "preferred-cut" diamonds. The greatest research on diamond brilliance relates to the round brilliant-cut diamond.
A good rule of thumb here is the "60-60 rule": if a round diamond has around a 60% total depth and a 60% table, it will have "preferred proportions," allowing for more brilliance.

Clarity is the size, type, position and visibility of the characteristics of a diamond.

As you take a stone of a particular cut, clarity and color and move its carat weight to the next price category, you may see quite a large increase in the price per carat. Remember that size isn't everything. When choosing a diamond, all 4 Cs must be taken into account. The key is to strike a balance among them, while still working within your budget.

The most important thing to know about color when it comes to diamonds is that, in general, the less color a diamond has, the more valuable it is, all other factors being equal. Diamonds are found in nature in a wide range of colors, from completely colorless (the most desirable trait) to slightly yellow, to brown. So-called fancy color diamonds come in more intense colors, like yellow and blue, but these are not graded on the same scale.

The color grading system for diamonds uses the letters of the alphabet from D through Z, with D being the most colorless and therefore the rarest and most valuable, and Z having the most color within the normal range and being the least valuable, all other factors being equal. A diamond's color is determined by looking at it under controlled lighting and comparing it to the Gemological Institute of America's color scale, which is based on a set of diamonds of known color.

Another vital grading characteristic in diamonds is their clarity. This refers to the number, position and size of the inclusions that occur naturally inside diamonds. The fewer and less obvious the inclusions, the more valuable the diamond.
The clarity grading scale established by the world's foremost authority on diamonds, the Gemological Institute of America (GIA), runs as follows (diamonds are graded under 10X magnification):

FL - Flawless. The diamond shows no inclusions or blemishes of any sort under 10X magnification when observed by an experienced grader. (Note: truly flawless or internally flawless diamonds, FL or IF on the GIA's grading scale, are extremely rare.)

IF - Internally Flawless. The diamond has no inclusions when examined by an experienced grader using 10X magnification, but will have some minor blemishes.

VVS1, VVS2 - Very, Very Slightly Included. The diamond contains minute inclusions that are difficult even for experienced graders to see under 10X magnification.

High-quality cuts can command a premium price, while inferior cuts may result in discounted rates.

- Color: Diamonds range in color from colorless to shades of yellow, brown, or gray. Colorless diamonds are considered more valuable, while those with noticeable color may be less expensive.
- Clarity: The clarity of a diamond refers to the presence of inclusions or blemishes, which can impact the gem's appearance. Fewer inclusions lead to a higher clarity grade and tend to increase the diamond's value.

It's important to consider each factor when evaluating diamond prices and making a purchase.
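The GIA scales referenced above can be represented as ordered lists for comparison. A minimal sketch (the full clarity ladder beyond the grades quoted is assumed from the standard GIA scale):

```python
# GIA grading scales, best grade first.
# Clarity runs from Flawless down to Included; color runs from D
# (colorless) down to Z (light yellow or brown).
CLARITY = ["FL", "IF", "VVS1", "VVS2", "VS1", "VS2", "SI1", "SI2", "I1", "I2", "I3"]
COLOR = [chr(c) for c in range(ord("D"), ord("Z") + 1)]  # "D" through "Z"

def better_clarity(a: str, b: str) -> str:
    """Return whichever of two clarity grades ranks higher on the scale."""
    return a if CLARITY.index(a) < CLARITY.index(b) else b

print(better_clarity("VS2", "SI1"))  # VS2
print(COLOR[0], COLOR[-1])           # D Z
```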
By understanding the price per carat and considering aspects such as cut, color, and clarity, you can make the best decision for your needs.

Understanding Diamond Value
The value of diamonds is primarily influenced by market demand, which fluctuates due to various factors such as economic trends and consumer preferences. Understanding the market's impact on diamond prices is essential in evaluating their worth. For instance, lab-grown diamonds have gained popularity as a more environmentally friendly option, potentially influencing their value and demand.

Rapaport and GIA Standard
The Rapaport and GIA (Gemological Institute of America) standards are crucial in determining diamond value. The Rapaport price list is a globally recognized guide that establishes the benchmark for diamond pricing. It is based on the GIA standard, which grades diamonds using the "4 Cs": carat, color, clarity, and cut. By following these guidelines, diamond valuators can establish a consistent pricing structure and ensure that consumers receive fair prices for their diamonds.

Colorless and K Color Diamonds
The color of a diamond plays a significant role in determining its value. Diamonds are graded by the GIA on a scale from D (colorless) to Z (light yellow or brown). Colorless, or D color, diamonds are considered the most valuable due to their rarity and exceptional brilliance. They are highly sought after and command higher prices than diamonds with a noticeable color tint. On the other end of the spectrum, K color diamonds exhibit a faint yellow hue.
While not as valuable as colorless diamonds, they can still be a desirable option for buyers seeking a more affordable choice. The value of K color diamonds depends upon various factors, including overall quality and carat weight.

Inclusions are present inside the stone: air bubbles, cracks, fractures or traces of other minerals. Blemishes appear on the surface of the stone, such as scratches, pits and chips. Almost every stone contains flaws that can be seen either with the naked eye or through a magnifying loupe. No two stones ever share the same flaws: in some stones flaws are clearly visible, while in others they are almost negligible. As the number, size and visibility of the flaws increase, the cost decreases. Absolutely clean and clear stones do exist, but they are very rare in nature.

Carat weight - Stones of small carat weight are found more easily than those of higher carat weight. Larger stones occur seldom in their natural state; therefore, the price per carat for such stones is higher than for smaller stones. One carat equals 0.2 grams. Stones of all sizes and weights are available in the market, and anything can be selected in accordance with the size and depth of one's pocket.

Certificates - Diamonds are the stones most in demand and are highly expensive. There are many dealers all over the world who sell imitations and semi-precious stones at the price of true diamonds. Thus, in order to remove all doubt about originality, certificates are given along with the purchase of the stone as proof of its quality and value.
These certificates, also called 'grading reports', offer a complete evaluation covering all of the stone's characteristics except its monetary price. The best-recognized diamond grading labs in the world are the Gemological Institute of America (GIA) and the American Gem Society (AGS), but there are many laboratories issuing certificates across the world, and every lab maintains different standards. When buyers opt for diamonds certified by a non-recognized lab, it is better to ask for the credentials of the certifying lab. With the help of a certificate, buyers are able to make a better choice among all the available options without any risk regarding quality. When diamonds are purchased loose, that is, without being mounted in jewelry, buyers should be sure to obtain the certificate.

Price - All four C's are taken into consideration together when determining the price of this exquisite gem. Colour, cut, clarity and carat weight carry equal weight in deciding the price; a change in one characteristic can alter the price of the stone completely.

THE 4C'S OF DIAMOND BUYING
The 4 C's classify the value of diamonds. Every diamond's price, rarity and beauty are determined by the combination of cut, color, clarity, and carat weight.

Cut describes the proportions and angles of a diamond. Many people confuse cut with the shape of the diamond. Although nature determines the other three characteristics, it takes a master diamond cutter to reveal a diamond's true beauty. Diamonds are available in various shapes, including round, square, pear, heart, marquise and oval, but cut refers to the angles and proportions of a diamond. A well-cut diamond reflects light from one mirror-like facet to another and projects the light through the top of the stone. The result is a fiery and brilliant display.
Diamonds that are cut too deep or too shallow leak light through the side or bottom, resulting in a lackluster appearance and diminished value.

White colorless diamonds remain the most popular, even though diamonds are found in a kaleidoscope of colors. Diamonds are graded on a color scale implemented by the Gemological Institute of America (GIA), ranging from D, which is colorless, to Z. Color differences can be so subtle that diamond colors are graded under controlled lighting conditions and compared to a master set for accuracy. While truly colorless diamonds, graded D, are treasured for their rarity, diamond color is ultimately a very personal taste. Ask to see an array of color grades next to one another to help you determine your color preference.

Nature ensures that each diamond is as individual as the person who wears it. Naturally occurring inclusions, such as minerals or fractures, are identifying characteristics created while diamonds form in the earth. Jewelers use magnification to view diamonds at 10x their actual size so these tiny inclusions are more easily seen. Inclusions are measured on a scale of perfection, known as clarity, which was established by the GIA. The greater a diamond's clarity, the more rare and valuable it is. An inclusion in the middle or top of a diamond could impact the dispersion of light, making it less brilliant.

Carat is a diamond's measure of weight, not size. One full carat is equal to 100 points; a 3/4 carat diamond is the same as 75 points.

Selling diamonds is not nearly as easy as selling gold, which has an easy-to-define value based on purity. Diamonds are valued according to many characteristics, including the cut, carat, color, clarity and fluorescence.
Without an objective way to easily determine the value of a diamond, it’s easy for an inexperienced seller to feel overwhelmed or even get taken advantage of by diamond buyers who make a low offer.\nWhile all of the characteristics of your diamond affect its value, cut is especially important as it really determines the beauty of the diamond. A good cut can hide many imperfections or inclusions, while a poor cut with unbalanced proportions can affect the light return, which occurs when light enters the diamond and reflects back to the eye.\nLet’s take a look at what a diamond’s cut is, exactly, and why it affects the price you receive when you sell a diamond.\nWhat is a Diamond’s Cut?\nThe cut doesn’t actually refer to the shape of the diamond but to its proportions, polish and overall symmetry. This is one of the most difficult characteristics of the diamond to analyze, although it has three major effects on the appearance of your diamond:\n- brilliance, or the brightness that’s created when light reflects off the surface or the inside of the diamond,\n- fire, or the way light disperses to show the colors of the spectrum in flashes,\n- and scintillation, or those flashes or sparkles you see when the diamond or a light source is moved around.\nWhen a rough diamond is cut, the diamond cutter needs to carefully balance the ideal cut against the highest yield, which means maintaining as much of the weight of the stone as possible. Most people want diamonds that are large with a fair cut rather than small diamonds that are well cut, so most diamond cutters sacrifice the appearance of the diamond to maintain a higher weight.\nThe Importance of Cut Proportion on Appearance\nThe way the diamond is cut affects how light reflects (bounces back) or refracts (bends when it passes through one of the diamond’s facets). If the diamond is cut very shallow, light will hit the pavilion low and refract, leaving through the bottom of the diamond rather than reflecting back to the eye. 
Diamonds cut too deep, on the other hand, hit the pavilion sharply and reflect to the second pavilion. Light then refracts and leaves through the bottom of the diamond.", "score": 30.25404083415141, "rank": 25}, {"document_id": "doc-::chunk-1", "d_text": "Diamonds larger than 2 carats become increasingly rare, so prices rise with demand. To make the stone’s size really stand out, consider using 1 carat shoulder stones to create a contrast that truly shows off the unique quality of your four carat diamond.\n4 carat diamond\nWhat are the wholesale prices of a 4 carat?\nBuy wholesale to get the highest quality diamond at the best possible price. To calculate the current wholesale price for your four carat diamond, check the price list. Find the price per carat for that color and clarity grade. Then multiply it by the exact weight. Finally, you will get the average price for the color and clarity that you selected. The average wholesale prices of a GIA Certified four carat diamond range between $31,680 and $421,080.\nWhat to do next\nAsk for a free quote to receive today’s wholesale price for the type of 4 carat diamond you are looking for. Our diamond expert will search our global network of diamond wholesalers and cutters, with access to the world’s largest inventory, to locate your diamond at the absolute best price. He will then contact you to discuss further details and answer any questions you may still have. With the right information and the right diamond at the right price, you will be ready to create your own unique four carat diamond ring.\nPersonalized engagement rings, wedding rings or diamond rings will certainly improve on your idea.\nOur step-by-step explanation to create every unique diamond ring you ever wanted: Have you ever thought about a tailored 4 carat diamond ring? With Diamond Registry’s all-round jewelry service it is possible to create one. By creating your 4 carat diamond ring with Diamond Registry you will even save money. A tailor-made 4 carat diamond ring gives you the opportunity to select the type of ring and the kind of loose diamond you would love to mount onto the ring. With our highly skilled jewelers and diamond experts, the steps toward a four carat diamond ring will go smoothly and securely. The jewelers can tell you about the current trending ring designs and diamond shapes. For example, ring designs can be a prong setting, bezel setting or tension setting.", "score": 30.174094017707915, "rank": 26}, {"document_id": "doc-::chunk-2", "d_text": "Quality grades of the cut\n- Ideal grinding (optimum proportion ratios ensure maximum brilliance)\n- Very good (Excellent proportions and brilliance, hardly any external features)\n- Good (few external characteristics, good brilliance but proportions with slight deviations)\n- Medium (reduced brilliance, increased external characteristics and significant proportional variation)\n- Low (strongly reduced brilliance, clear proportional deviations as well as numerous and/or large deviations)\nInclusions and other optical properties of rough diamonds have led to the development of various types of cuts. Brilliance, lustre and colouring are not the only value-determining properties of diamonds, but also their weight, indicated in carats. A diamond grinder therefore has not only the task of cutting a diamond to the highest quality class, but also of choosing a type of cut that preserves as much of the total weight of the stone as possible.\nThe following part b) gives a brief overview of the possible types of cuts.\nTypes of cut of the diamond\nantique cushion cut\nAs already briefly mentioned before, the so-called carat indicates the mass, i.e. the weight, of gemstones. 
A metric carat is 0.2g.\nThe weight of the carat has its origin in the dried seed of the carob tree, since it was previously considered to be very constant in weight and size and could therefore serve as a reference.\nToday, the weight of a diamond is measured to an accuracy of one hundredth of a carat. A carat is therefore divided into 100 points, each corresponding to an exact weight. A diamond with 50 points weighs exactly half a carat. Unfortunately, the carat has no legal unit symbol, so there are several abbreviations in circulation. In Germany, the carat number is usually indicated as \"Kt\". In Austria and Switzerland, \"ct\" is used.\nThe colouring of the rare mineral also has an influence on its value. The so-called \"colour designation\" of the diamond begins with the letter D. A diamond with this colour designation is absolutely colourless and represents the best possible grade. The colour designations come from the GIA (Gemological Institute of America), a research institute founded in 1931. The colour scale ranges from D (ultra-fine white) to Z (maximum tinted yellow). The following table gives an overview of the shades still referred to by the GIA as white (1-11).\n|Nat.", "score": 29.832625674009638, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "Beginners Guide to Diamonds\nA diamond by definition is made of crystalline carbon, known to be the hardest mineral. It has many uses as a gemstone, in abrasives, cutting tools, and other applications. A natural diamond crystal is called a rough. The practice of changing a diamond rough into a faceted gem is called diamond cutting. After a diamond is cut, it is submitted to a gem lab for grading.\nLike a human fingerprint, no two diamonds are the same. Each diamond contains its own unique traits and characteristics. 
Diamonds are characterized by four main features (The 4 C’s): Carat, Clarity, Color and Cut.\nThe 4 C’s\nThe Carat weight measures the mass of a diamond (not to be confused with the measure of size). Carats are divided into 100 points: 100 points = 1 carat.\nDiamonds may naturally have inclusions and blemishes. Grading is based on how visible these characteristics are. A diamond with a Flawless grading (FL) has no inclusions or natural blemishes. A diamond with a Slight Inclusion (SI1-2) has slight characteristics that can be seen under 10X magnification.\nA diamond’s color is graded on a scale from D to Z, with D being colorless and Z being yellow.\nCut is measured by a diamond’s proportions, symmetry and polish. Diamonds that have optimized proportions, symmetry and polish will have better interaction with light; this results in better brilliance, fire and scintillation of a diamond. The cut grading range runs from Excellent to Poor.\nOften, how a diamond is cut is the determining factor in the performance of a diamond. There are many methods by which a diamond can be cut. The most popular shape is the 58 facet round brilliant. Other brands have perfected their own cutting method (see Brands section).\nHow to Pick Your Diamond\nDiamonds come in a variety of shapes. Different shapes have different characteristics and quality. Some characteristics are valued more in some shapes than others.\nChoose your diamond shape:\nCarat, Cut, Clarity, Color\nDetermine what kind of characteristics you want your diamond to have. Pricing is determined by the combination of characteristics you choose.\nOther Diamond Characteristics\nFluorescence – The fluorescence scale on a grading report ranges from None to Very Strong.", "score": 29.68678882908182, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Carat weight is the standard unit of measurement for diamonds. It is measured using a highly precise electronic scale. 
1.00 carat is equal to 0.2 grams.\nThe colour grade refers to the degree to which colour is present. The less colour present, the rarer it is. The GIA Colour Scale ranges from D to Z. Grades D-E-F are in the colourless range. Grades G-H-I-J are in the near colourless range with only trace amounts of detectable colour.\nClarity refers to inclusions visible under 10X magnification. These clarity characteristics help determine quality and are often used to establish identity or whether the diamond is natural and untreated.\nPolish refers to the quality of a diamond’s surface and is essential in maximizing a diamond's fire, brilliance, and scintillation.", "score": 29.631621786876945, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "If absolutely no inclusions can be seen, the diamond is considered flawless. Some inclusions are so hard to see it takes an absolute expert to find them; these diamonds are rated from very, very slightly included to very slightly included. When inclusions are more clearly visible under 10x magnification, they are rated slightly included to included.\nCarat weight refers to the weight of the diamond.\nDiamonds come in all sizes. For use as the center stone in an engagement ring, one carat is the most popular size. Total carat weight refers to the sum total of all diamonds in a piece of jewelry. Currently, engagement ring styles tend to average around 2 carats total weight, including the center stone and any diamonds that are included in the setting.\nDiamonds can be cut in a variety of shapes, including round, cushion, princess, marquise and more.\nWhen a diamond grader considers a stone’s cut, they look at the quality of the cutting as well as how effectively light moves within the diamond. 
A superior cut can truly enhance a diamond’s natural beauty.\nWant To Learn More About Diamonds?\nWe’d be happy to provide you with further education on the four C’s of a diamond, using examples of diamonds from our inventory so you can see their beautiful sparkle and shine for yourself. Learning everything you can about diamonds allows you to make smart decisions when choosing an engagement ring or other major piece of diamond jewelry. There’s never any pressure to buy during these educational sessions: our mission is to help you choose diamonds confidently. Browse Engagement Jewelry Here.", "score": 29.185404305063777, "rank": 30}, {"document_id": "doc-::chunk-2", "d_text": "That said, unless a diamond is heavily included, small internal and external characteristics have no effect on a diamond’s ability to reflect light and sparkle radiantly.\nAs Boston’s premier diamond jeweler, we proudly offer the most beautiful and radiant selection of diamonds available for every budget and style preference, regardless of their clarity rating.\nCarat refers to the physical weight of a diamond. It is the unit in which the weight of a diamond is measured and is one of the contributing factors in determining the quality and value of a diamond.\nTo put the measurement in perspective: one carat is equal to approximately 0.20 grams, which is about the weight of a standard paper clip. Because diamonds naturally form in the earth over millions of years and are highly sought after gemstones, even the slightest increase in carat weight can, in turn, increase the value of a diamond. That said, while the price of a diamond does often increase with carat weight, you may also find that two different diamonds of equal carat weight differ in price. 
This is due to the other factors involved in determining the quality of diamonds—clarity, cut, and color—which, together with carat weight, form what is commonly referred to as “The 4Cs.”\nThere are many factors to consider when selecting the perfect diamond for your price range and we will work with you to find the perfect carat weight within your budget that complements your ideal engagement ring setting or fine jewelry style.\nThe shape of a diamond, or other precious and semi-precious gemstones, forms the focal point of your jewelry, and we believe the shape you select should represent your individual style, personality, and story.\nThe most popular shape is the round brilliant cut, followed by the princess cut (which is square), and the cushion cut (which is square with rounded edges). Other prominent shapes are the radiant cut, Asscher cut, oval cut, emerald cut, pear cut, marquise cut, and heart-shaped cut. The illustrations below provide a side-by-side comparison of the most popular gemstone shapes to assist you in determining your preference.\nAt Alex & Company we showcase an extensive collection of diamonds and other precious and semi-precious gemstones in all shapes and sizes. Our experienced Master Jeweler and GIA Certified Gemologist take care to hand-select the highest quality and best-shaped stones, to provide you with a variety of options to suit your taste and your budget.", "score": 29.063407021633726, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "How much does 1 carat of diamonds weigh?\nWhat is Diamond Carat Weight? Carat is the unit of measurement for the physical weight of diamonds. One carat equals 0.200 grams or 1/5 gram and is subdivided into 100 points. For comparison, in units more familiar in the United States, one carat equals 0.007 ounce avoirdupois.\nHow many carats make a pound?\nTo convert a carat measurement to a pound measurement, multiply the weight by the conversion ratio. 
The weight in pounds is equal to the carats multiplied by 0.000441.\nHow much does a 40 carat diamond weigh?\nCarats to Pounds table\n|40 ct||0.02 lb|\n|41 ct||0.02 lb|\n|42 ct||0.02 lb|\n|43 ct||0.02 lb|\nHow much does a 24k diamond cost?\nDiamond Price Chart\n|Diamond Carat Weight||Price (Per Carat, Round Brilliant Cut)||Total Price|\n|1.0 carat||$2,500 – $18,000||$2,500 – $18,000|\n|1.50 carat||$3,300 – $24,000||$4,400 – $32,000|\n|2.0 carat||$4,200 – $29,000||$8,400 – $58,000|\n|3.0 carat||$7,200 – $51,000||$21,600 – $153,000|\nWhat’s the most expensive diamond?\nTopping our list of the most expensive diamonds in the world is the legendary Koh-I-Noor. Weighing in at a massive 105.6ct, the most expensive diamond in the world is oval shaped. Steeped in mystery and legend, the stone is believed to have been mined in India in the 1300s.\nWhat is the biggest diamond in the world?\nAt present, the largest diamond ever recorded is the 3,106-carat Cullinan Diamond, found in South Africa in 1905. The Cullinan was subsequently cut into smaller stones, some of which form part of the British royal family’s crown jewels.\nHow much does a 1000 carat diamond cost?", "score": 28.49184107852193, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "In this case, similar to diamonds with different carats, the size depends on the diamond cut and shape. One of the most popular forms of 3 carat diamonds is the round diamond and a 3 carat round diamond comes with a face-up area of around 9. Of course, the 4 Cs of diamonds — clarity, cut, color and carat — play the most important roles.\nWhen it comes to 3 carat diamonds, we should also mention that you have to use the upper part of the color scale for these diamonds. 
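The carat conversions quoted in these passages (one carat = 0.200 grams = 100 points, pounds = carats × 0.000441, and roughly 0.007 ounce avoirdupois per carat) can be collected into a few helpers. A minimal sketch; the function names are illustrative, not from any cited source:

```python
# Carat unit conversions, using the figures quoted in the text above.
CARAT_TO_GRAMS = 0.200     # 1 ct = 0.200 g
CARAT_TO_POUNDS = 0.000441 # pounds = carats * 0.000441
CARAT_TO_OUNCES = 0.007    # ~0.007 oz avoirdupois per carat

def carats_to_grams(carats):
    return carats * CARAT_TO_GRAMS

def carats_to_points(carats):
    # 100 points = 1 carat
    return carats * 100

def carats_to_pounds(carats):
    return carats * CARAT_TO_POUNDS

print(carats_to_grams(1.0))            # 0.2
print(carats_to_points(0.75))          # 75.0
print(round(carats_to_pounds(40), 2))  # 0.02, matching the table above
```

The 40 ct example reproduces the 0.02 lb row of the carats-to-pounds table quoted above.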
It would be great if you can find an H-color diamond. The clarity should be VS1 or VS2 at least. The good thing is that 3 carat diamonds are available in literally all the shapes you can think of.\nOne more time, we have to highlight that even the shape and cut of the diamond have an influence on the final price. Round — this is probably the most popular diamond shape today and 3 carat diamonds are not an exception. Besides their tremendous popularity, round diamonds provide additional flexibility when it comes to colors, clarity, and cut.\nNamely, even if these factors are at the lower part of the scale, the diamond will still have excellent brilliance and fire. Of course, the higher the cut grade, the better. Princess — this is a shape of diamond that comes with pointed corners. Oval — this is another popular choice today which has excellent brilliance.\nMany people choose this cut when they want to make their fingers look elongated and more elegant. It shows an oval stone with pointed ends. In case you are trying to purchase a diamond over the Internet, you must use online jewelry stores that have high-quality photos of their diamond rings.\nIn this way, you can check the presence of inclusions, which is proof of the authenticity of the diamond. Although there are many gem laboratories around the world, there are only a few of them that have a strong reputation. As one of the 4 Cs of diamonds, clarity plays an important role in the overall value of 3 carat diamonds.", "score": 28.079719702773254, "rank": 33}, {"document_id": "doc-::chunk-1", "d_text": "While no diamond is perfectly pure, the closer it comes, the higher its value.\nDiamond weight is measured in carats. Diamonds and other gemstones are weighed in metric carats: one carat is equal to 0.2 grams, about the same weight as a paperclip. 
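The wholesale method quoted earlier (look up the price per carat for the chosen colour and clarity grade, then multiply it by the exact carat weight) reduces to a single multiplication. A hedged sketch; the $7,200 figure is simply the low end of the 3.0 carat row from the price chart quoted above, used purely as an illustration:

```python
# "Price per carat x exact weight" wholesale estimate, per the method
# described in the 4 carat passage above. Inputs are illustrative.
def wholesale_estimate(price_per_carat, carat_weight):
    return price_per_carat * carat_weight

# e.g. a 3.0 ct round brilliant at $7,200 per carat:
print(wholesale_estimate(7200, 3.0))  # 21600.0
```

The result matches the $21,600 low-end total price that the chart lists for a 3.0 carat stone.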
(Don't confuse carat weight with karat, as in 18K gold, which refers to gold purity.)\nBecause even a fraction of a carat can make a considerable difference in cost, precision is crucial. In the diamond industry, weight is often measured to the hundred thousandths of a carat and rounded to a hundredth of a carat. Diamond weights greater than one carat are expressed in carats and decimals (for instance, a 1.08ct stone would be described as \"one point oh eight carats\").\nThe round cut diamond is the most popular diamond shape, representing approximately 75% of all diamonds sold. Due to the mechanics of its shape, the round diamond is generally superior to fancy shapes at the proper reflection of light, maximizing potential brightness. However, diamonds can be cut in various shapes, from the popular princess shape to oval, pear, emerald and rectangular radiant, often referred to as the Fancy cuts.\nMax Wilson diamond Jeweller is able to show you most of these Fancy cuts in store and we are able to source a great selection of these cuts at competitive prices. Please ask our friendly sales team.", "score": 27.867490137372304, "rank": 34}, {"document_id": "doc-::chunk-1", "d_text": "We often think of a diamond’s cut as shape (round, heart, oval, marquise, pear), but what diamond cut actually means is how well a diamond’s facets interact with light. Precise artistry and workmanship are required to fashion a stone so its proportions, symmetry and polish deliver the magnificent return of light only possible in a diamond.\nDiamond carat weight measures a diamond's apparent size.\nTo put it simply, diamond carat weight measures how much a diamond weighs.\nA metric “carat” is defined as 200 milligrams. Each carat is subdivided into 100 ‘points.’ This allows very precise measurements to the hundredth decimal place. A jeweler may describe the weight of a diamond below one carat by its ‘points’ alone. 
For instance, the jeweler may refer to a diamond that weighs 0.25 carats as a ‘twenty-five pointer.’ Diamond weights greater than one carat are expressed in carats and decimals. A 1.08 carat stone would be described as ‘one point oh eight carats.’", "score": 27.763461110536394, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Learn The 4Cs\nPut simply, there are four universally accepted characteristics that all diamonds are graded by. They are known as the 4Cs:\nCut, Color, Clarity and Carat weight.\nIt is the combination of these four Cs that determines a diamond's value. By changing any of the characteristics, you can dramatically affect the diamond's value, all other factors being equal.\nUnderstanding Carat Weight\nA diamond's weight is measured in what is known as a 'Carat,' which is a small unit of measurement equal to 200 milligrams. Carat is not a measure of a diamond's size, since cutting a diamond to different proportions can affect its weight. (The word 'Karat' is used to express the purity of gold, and is not used in relation to diamonds.) Here is a diagram that shows the relative size of various carat weights in a diamond that is cut to the same proportions:\nThe most important thing to remember when it comes to a diamond's carat weight is that it is not the only factor that determines a diamond's value. In other words, bigger does not necessarily mean better. All four C's (Cut, Color, Clarity and Carat Weight) must be balanced in order to arrive at a diamond that fits your budget. None of the 4Cs is mutually exclusive, nor is any one more important than the others.\nMore on Carat:\nThe word carat actually comes from the word carob (as in carob seeds), which is how ancient cultures measured the weight of diamonds on their scales. 
In 1913, however, the weight was standardized internationally and adapted to the metric system.\nAlthough they can be measured when mounted in jewelry, diamonds are most accurately weighed when they are not mounted in a setting. In fact, gemological laboratories such as the Gemological Institute of America (GIA) and the American Gem Society (AGS) will only grade diamonds that are unmounted. A diamond grading report will tell you the exact carat weight, to the nearest hundredth of a carat, for that particular diamond.\nEach Carat is divided into 100 parts called 'points.' So a one carat diamond has 100 points. Points in a fraction of one carat are measured within ranges, so that a 3/4 carat diamond may weigh between 0.69 and 0.82 carats (69 to 82 points) and still be considered a 3/4 carat.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-1", "d_text": "When the average person thinks about cut, they often think about the shape they find it in at the store, such as princess, round, etc. However, cut really refers to how well the gem’s facets interact with the light. This is all a result of the workmanship and precision that goes into the cut of each diamond in a perfect balance of proportion, symmetry, and polish so maximum light is reflected. This happens to be the most difficult category to analyze but in general, brightness, fire (scattering of white light), and scintillation (sparkle) all come into play, as well as the design, craftsmanship, weight and thickness of the gem.\nA carat — at least in terms of diamonds — is a reflection of how much it weighs, with one metric carat equaling 200 milligrams. To allow for the utmost in precision, one carat is divided further into 100 points. The higher the carat, the more valuable it is and the higher price it can command. 
That being said, two diamonds of equal carat weight may come with different price tags because the other factors (color, cut, clarity) affect the overall rating as well.\nContact Diamond Factory Dallas\nLooking to increase the value of your inventory? Here at Diamond Factory Dallas, our team helps wholesalers and retailers increase the value of their diamond inventory through the improvement of cut, color and clarity of the diamond. From factory to finger, we offer the very best in diamonds, engagement rings, earrings, necklaces and bracelets. Contact us today for more information.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-1", "d_text": "In diamond grading, the carat weight is the standard unit of measure that defines the actual weight of a diamond. A carat is the standard unit of weight for diamonds. In diamond grading, carat weights are also expressed as “points,” with a one carat diamond equaling 100 points.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "Every diamond is completely unique — much like a snowflake. This is because each individual diamond has different variations that make it unique. These variations can affect a diamond’s desirability and price.\nBelieve it or not, there was no set standard to which all diamonds were judged until the middle of the 20th century when the Gemological Institute of America, or GIA for short, created the first globally-accepted standard for describing diamonds. Those elements are:\n- Carat weight\n- Color\n- Clarity\n- Cut\nToday, the four C’s are universally accepted for determining the quality and worth of any diamond on the planet. This presents two benefits: not only can the diamond quality be communicated in a language everyone can understand, consumers now know precisely what they are getting when making a purchase, according to GIA.\nExplaining the 4 C’s\nSo what does each of the 4C’s mean? 
Let’s take a look:\nWhen the term “color” is used to describe gem-quality diamonds, it’s actually the absence of color on which the evaluation is based. When a diamond is found to be chemically pure and structurally perfect, it has no hue, shading or coloring. This gives it a higher value than a diamond with slight coloring to it. Take a look at the GIA’s D-to-Z diamond color-grading system, which measures degrees of colorlessness through controlled lighting and optimal viewing conditions. It starts with the letter D and goes all the way to the letter Z, organized by increasing presence of color. To put it in perspective, D is colorless and Z has the most color. Unless you have a trained eye and viewing apparatus, you likely can’t tell the subtle differences between one grade and another.\nReferring to the absence of inclusions and blemishes, clarity measures the flaws present on the inside and outside of the gem. Because diamonds are mined from the earth, originating from carbon that’s exposed to high heat and pressure, inclusions (internal characteristics) and blemishes (external characteristics) can arise. Many factors go into evaluating clarity, such as the number, size, relief, type, and location of these characteristics, and how they affect how the stone looks overall. The fewer flaws a diamond has, the more valuable it is.\nA diamond’s brilliance and appeal lie in how it’s cut.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "A diamond weighing 0.72 carats should not be represented by a person within the industry as being a three-quarter carat.\nThe fact of the matter is that your significant other can call her sixty-eight pointer a three-quarter carat, and her girlfriends will just have to take her word for it... 
We, on the other hand, can't get away with that and have to express the weight of the diamond in correct legal terms.\nTruth in Advertising Diamond Carat Weight:\nWhen buying diamond jewelry advertised as having "a carat total weight," you should make sure that you're getting what you pay for.\nFor example, a pair of one-carat total weight diamond earrings, or a one-carat total weight cluster style ring, should have a combined total diamond weight that falls somewhere between 0.98 and 1.02 carats.\nIf the combined total weight is actually 0.85 carats, then you're not buying a carat total weight and you should realize the difference. On the other hand, if the one-carat total weight diamond ring that you're looking at actually contains a total weight of 1.10 carats, then the extra eight points are a bonus for you and you should consider yourself lucky.\nWe frequently see advertisements that describe diamonds in fractional terms (one quarter carat, half carat, etc.) and wonder whether the actual weight of the diamonds falls within the legal guidelines.\nDon't be afraid to ask the sales clerk what the actual weight of the diamonds contained in the piece of jewelry is. Your jeweler should be happy to provide you with the item's actual diamond weight and will be impressed with the fact that you knew enough to ask.\nDiamond Carat Weight Size Reference Chart:\nBe sure to download this carat weight size reference chart from Blue Nile. After all, it's easier for most people to visualize carat weight than to imagine what millimeter sizes look like.\nThis chart will help you see how big round and fancy shape diamonds look depending on carat weight. It includes size references for round, princess, emerald, Asscher, marquise, oval, radiant, pear, heart, and cushion cut diamonds.\nAt the same time, it's important to remember that this is a general guideline. The actual size of your diamond will vary depending on the proportions. 
With that in mind, be sure to read our tutorial on calculating diamond proportions.", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-1", "d_text": "A "10-point" diamond weighs 1/10th of a carat and a 50-point stone weighs half a carat.", "score": 26.48427252028466, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "Shop by Diamond Shape\nShop by Carat Weight\nShop by Price\nShop by Brand\nYou can't go wrong when selecting a diamond from one of Kay's Exclusive Diamond Collections.\nWhat to consider when shopping for a diamond\nUnderstanding the elements that make up a diamond's grade, along with the various types of diamond certificates, can be immensely helpful when shopping for a diamond.\nFour factors go into grading a diamond. These are known as the 4Cs: cut, clarity, color and carat weight. Each 'C' has its own measurement. Put together, the 4Cs help diamond sellers set prices and compare diamonds, and they also help you find a beautiful diamond. The more you understand about the 4Cs, the savvier you'll be in choosing your diamond.\nA diamond's cut is harder to measure, but it is arguably the most important C. A diamond cutter crafts each diamond to get the most value and beauty. As a prism of light, a diamond can be cut so that light enters and reflects back out to create a brilliant effect. Sometimes diamond cutters may sacrifice this light performance by cutting diamonds to be heavier or look bigger so they cost more, rather than cutting angles and facets that deliver the most light. When shopping for a diamond, be sure to get information on the quality of the cut.\nThe clarity grade is a reminder that a diamond is a thing of nature - and like most natural things, it's rarely perfect. Diamonds often have flaws, known as inclusions and blemishes. Diamond cutters try to cut and polish a diamond to hide these inclusions or work around them, but they're still there - and the clarity grade measures them. 
The clarity scale ranges from flawless to heavily included.\nDiamonds come out of the earth in many different colors. Diamond sellers have traditionally valued white diamonds higher than others, and the grading scale reflects that. The D grade, at the top of the scale, is considered 'colorless,' rarest and most expensive. Going down the 23-grade scale from D to Z, diamonds become progressively more yellow, brown or gray.\nPeople often think carats stand for size, but they actually measure weight. Diamonds are also measured in smaller units of weight called points: 100 points equals 1 carat. The abbreviation 'ctw' stands for 'carat total weight,' which measures all diamonds in a piece of jewelry.", "score": 26.441622509848862, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "A diamond is more than a piece of jewelry, it's an investment. Many people feel intimidated by the idea of buying a diamond on their own. Fear not. There is an easy way to understand how to determine a diamond's characteristics and value — the 4C's.\nA diamond’s beauty, rarity, and price depend on the interplay of the 4C's—carat, color, clarity, and cut. The 4C's are recognized worldwide as a way to classify the rarity and value of diamonds. Diamonds with a combination of the highest ratings in one or more categories are more rare and more expensive. Keep in mind no one characteristic is more important than another in terms of beauty.
A diamond's beauty is always in the eye of the beholder, but a smart buyer knows how to judge a diamond's characteristics.\nLearn more about each of the C's and become a diamond expert yourself:\nCarat - the measure of the weight of a diamond\nColor - the measure of the hue of a diamond\nClarity - the measure of the presence of inclusions within the diamond\nCut - the measure of the skill in shaping the diamond", "score": 26.33591402421505, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "The positions of an inclusion are paramount in determining value due to its importance in setting the stone. Stones with inclusions in more critical areas such as the center can be more prone to stress during setting.\nThis refers to the weight of the diamond. A standard carat is 200 milligrams, and each carat is subdivided into 100 points.\nCarat is the only factor that can be precisely and objectively measured.\nSince high carat diamonds are even rarer, they are more expensive for that reason rather than just weighing more. However, as mentioned above, carat is not the most important factor, and lowering the standard for other characteristics like the cut just to get a bigger diamond is never a good idea.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "There are many factors that affect the appearance and value of a diamond.\nThe most common dimensions by which a diamond's excellence is measured are the \"Four C's\" - Colour, Clarity, Cut and Carat. Each of these factors has its own effect on the brilliance and fire of a diamond.\nThe colour of a diamond is measured on a scale from D (entirely colourless) upwards. Perfectly colourless diamonds are sought after for their brilliance and are valued higher than diamonds of lower colour grades.\nDiamonds are a naturally occurring gemstone, and as such can exhibit physical imperfections (called inclusions) which influence their clarity.
A diamond with large, or several, inclusions will allow less light to pass through it and produce less of the \"fire\" that an internally flawless diamond will.\nCut & Shape\nThe cut of a diamond has the greatest effect on the sparkle of a diamond. The better a diamond is cut, the more light is reflected back out and the more glittering brilliance it will radiate.\nThe cut of a diamond is different from its shape, which represents the visual form of a diamond, the most classic example being the round brilliant cut, but including other such shapes as princess cut, cushion cut, oval cut, pear cut and more.\nThe weight of a diamond is expressed in \"carats\" and is not to be confused with the measurement of gold purity (for example 18ct white gold).\nOne carat is equivalent to 0.2 grams and is divided into \"points\" - a 50 point diamond weighs 0.1g and has a 0.50 carat weight.\nWhen a diamond is cut, the cutter maps the largest diamond possible from the rough stone with the least physical imperfections that affect its colour, clarity and integrity.\nAll diamonds over half a carat in weight from Anthonys Manufacturing Jewellers are laser inscribed and certified by the Gemological Institute of America (GIA).", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "The clarity scale was developed by the Gemological Institute of America (GIA) to measure these imperfections.\nIt is a frequent misconception that carats refer to the size of a diamond. In truth, a carat is the standard unit of weight by which diamonds are measured. Since a carat is a measure of weight, not size, one diamond of exactly the same carat weight may look larger than another depending on the cut. A shallower cut diamond may actually appear bigger than many diamonds of a higher carat weight.\nDiamonds come in various shapes – round, oval, marquise, pear, emerald, heart, princess, and radiant.
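The conversion stated in the passage above (1 carat = 0.2 g, subdivided into 100 points, so a 50-point stone is 0.1 g) can be written out directly; the function names are illustrative only:

```python
# Metric carat conversions as stated above: 1 ct = 0.2 g = 200 mg,
# and 1 ct = 100 points, so a 50-point stone is 0.50 ct = 0.1 g.
# Function names are hypothetical, for illustration only.

GRAMS_PER_CARAT = 0.2

def carats_to_grams(carats: float) -> float:
    return carats * GRAMS_PER_CARAT

def points_to_grams(points: int) -> float:
    return carats_to_grams(points / 100)

print(carats_to_grams(1.0))  # 0.2
print(points_to_grams(50))   # 0.1
```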
A round brilliant is a good selection if you want the most sparkle and probably the most enduring classic shape; round-brilliant diamonds are the only shape to have an ideal set of proportions defined. Brilliant cut diamonds have facets that are shaped like triangles and kites. Today’s round brilliant diamond includes a total of fifty-eight facets, though you will see different facet counts in antique brilliant cut diamonds. Although round brilliant cut diamonds are the most expensive on the market, they constitute the overwhelming majority of diamonds found in engagement rings, and are common as stud earrings and pendants.\nThe elongated shape of oval diamonds provides a very flattering effect on your hand when used in a ring, and is found in some of the most beautiful diamond engagement rings. Unlike round cut diamonds, oval cut diamonds have an elongated shape, making the diamond appear bigger for its carat weight. Oval cut diamonds are essentially elongated round cut diamonds. Many women with smaller hands or shorter fingers prefer the look of oval cut diamonds and pear shaped diamonds because they visually slenderize and elongate the fingers.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-1", "d_text": "For example, the price per carat will be less for a .90 diamond than the price per carat for a 1.00 diamond even if the color and clarity are the same. \"Determining the size of the diamond, and then the cut and color is really going to help establish your budget parameters,\" he says.\nPick your color\n\"Diamond color is the third most important decision in the selection process,\" says Bob Hoskins, senior gemologist for Whiteflash.com. Diamond color is graded according to the Gemological Institute of America or GIA Color Grading Scale - D being the whitest, and N and below color ratings showing noticeable yellow tones.
\"E and F have no detectable color tones to the naked eye,\" says Hoskins, who graded diamonds for the Gemological Institute of America (GIA) and taught several courses on colored stones. \"And in the G to J range, diamonds remain near colorless,\" says Hoskins, \"however, from J to M, you do begin to see a faint trace of yellow.\"\nThe cut - make and sparkle\nThe cut of a diamond is the most important and perhaps the most misunderstood and controversial of the four Cs. \"It's about more than the shape of a diamond,\" explains Gavin. \"When we talk cut, we're talking about the exact angles, proportions, symmetry and polish that affect the way the diamond reflects light and sparkles.\"\nDiamond dealers also refer to cut as \"make\" - as it is the only feature of a diamond that can be controlled by man, and it must be precise. Each facet - or small plane surface on the diamond - must be cut to align perfectly with the facet opposite it. \"There's not much room for error,\" says Gavin, \"because this affects the diamond's ability to sparkle, or what we call in the industry brilliance.\"\nHow important is clarity and what are inclusions?\nGemologists use a grading scale set forth by the Gemological Institute of America (GIA) to determine a diamond's clarity - how clean the gem appears when viewed through a magnifier. Most diamonds contain some \"inclusions\" - crystalline fractures or irregular crystal growth.\nThe Gemological Institute of America (GIA) Clarity Grading Scale ranges from Internally Flawless (IF) through Included (I3).", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "It takes some practice to understand how many carats something is. This seems to negate the purpose of the measurement when diamonds are not sold in whole-carat quantities, and many individuals wind up with less than a carat. Furthermore, the word \"carat\" is spelt differently when referring to gold.\nGold purity is measured in \"karats\", so changing a single letter alters the meaning.
The situation is further complicated by the fact that many rings feature many gemstones; thus, even while the diamond alone may be 1 carat, the total diamond weight of the ring may be 1.5 carats.\nTo clear up misunderstandings, we will explain in plain English how many carats are ideal for a diamond.\nSingle-diamond engagement rings are very common because of their simplicity. The 0.75-carat range is by far the most common among consumers. Price increases are a significant factor in this phenomenon, among others. The price jumps significantly at the 1-carat mark. The intensity of the spike becomes significantly greater at 2 carats. After ten carats, the Smithsonian might get in touch with you.\nDespite widespread opinion to the contrary, one-carat diamonds are the norm rather than the exception. The Queen's diamonds, the real-life equivalent of the jewellery from Ocean's Eleven, and the ring your friend sold his vehicle for are all included in that average. People rarely choose sizes on the smaller end, so the average is biased upwards; therefore, it is important not to mistake the average size for the most common one.\n10 Tips To Help You Find The Perfect Engagement Ring\nAre you willing to forego the customary down payment of three months' wages often demanded by the diamond trade? How often do you need to hear that rule before you believe it?\nIt's a high-stakes, time-consuming task to figure that out. The financial and emotional commitment associated with an engagement ring is unique. It's the acid test for any relationship with the potential to endure a lifetime.\nThe key to deciding how much to spend is balancing pleasing your partner with preserving your ability to save for the future.\nFollowing these five guidelines will help you choose an appropriate budget for an engagement ring.\nNarrow Down What Shape You Want\nTo narrow your search for an engagement ring, knowing the diamond shape your future spouse prefers is helpful.
Diamonds of varying carat weights and shapes have varying per-carat prices.", "score": 25.65453875696252, "rank": 48}, {"document_id": "doc-::chunk-2", "d_text": "If maximum light performance is your goal, carat weight will take a back seat to cut. No matter what the carat weight of your diamond, the number one goal is to be able to look at it and love it!\nMy objective in this article was a bit more practical and philosophical than scientific. I hope you found it helpful and entertaining. Please take advantage of the many great articles in the Rare Carat library for more information on this subject. Carpe diem!", "score": 25.36794113111044, "rank": 49}, {"document_id": "doc-::chunk-1", "d_text": "Since larger diamonds are found less frequently in nature, a 1 carat diamond will cost more than twice a 1/2 carat diamond (assuming all other characteristics remain constant). The cut and the mounting can make a diamond appear larger than its actual weight. Our experienced staff will help you find the right diamond and mounting to optimize its beauty.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-4", "d_text": "The Rapaport list is a guideline for prices in the diamond trade. It shows diamond prices per carat by cut, colour, and clarity, in increments of $100. While it is not the ultimate guide, it can serve as a starting point for sellers and buyers. For instance, a 1.55ct H color SI1 diamond would command a Rap Price of $7,600 per carat.\nSize is important, and certain measurements carry more value than others. A smaller diamond with a perfect cut can be worth more than a larger one, and a stone slightly off the popular sizes may not be exactly what you pictured, but it can save you hundreds of dollars. There are many factors that influence the price of diamonds. One of them is the cut.
A diamond with an imperfect cut is not worth as much, so it is important to pay attention to the cut as well as the size of the stone.\nIf you’re looking to buy a diamond that is going to have a high price, you need to know that there are a few factors to consider. First, you need to determine whether it is certified by the GIA or AGS. AGS-certified diamonds are often priced higher, but certification alone does not guarantee a higher price. Secondly, diamonds certified by these two organizations are accompanied by a lab grading report.\nAlthough a small diamond is a valuable gem, its value depends on its carat. A 0.5 carat diamond is worth around $3,000 per carat, while a 0.25 carat diamond will only be worth $500. Retail markups are also a factor in the price of diamonds. In general, retailers add up to 200% of the diamond’s cost, as retailers must cover their space rent, utilities, and wages.
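The per-carat pricing and retail markup just described can be sketched as follows. The dollar figures are the examples quoted in the passage above (a 0.5 ct stone at roughly $3,000 per carat, and markups of up to 200% of cost), not market data, and the function names are hypothetical:

```python
# Illustrative only: per-carat pricing and retail markup as described above.
# The $ figures echo the text's examples; they are not market data.

def wholesale_price(carats: float, price_per_carat: float) -> float:
    """Total cost = weight in carats x per-carat rate."""
    return carats * price_per_carat

def retail_price(cost: float, markup_pct: float = 200.0) -> float:
    """Retailers may add up to 200% of the diamond's cost."""
    return cost * (1 + markup_pct / 100)

half_carat = wholesale_price(0.5, 3000)  # $1,500 at $3,000/ct
print(half_carat)                        # 1500.0
print(retail_price(half_carat))          # 4500.0 with the full 200% markup
```

A 200% markup therefore triples the cost, which is why the same stone can carry very different price tags at different retailers.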
The real test comes when the diamond leaves the store and is seen in the infinite panorama of real-world lighting that we live in, especially our typical social lighting conditions. That's where you'll want better brightness, more fire, more sparkle and more life.\nOnly a fraction of the world’s diamonds have Crafted by Infinity's cut quality. They are rare enough that most people have never seen one. Be sure to view them with your own eyes. They must be seen to be believed.\nContinue learning how Crafted by Infinity selection and cutting improves the other Diamond Cs. Read on.
When looking at both substances – diamond and coal – at the molecular level, it is clear that they have completely different structures. For diamonds to be made of coal, they would have to share some chemical properties or at least some physical qualities.\nCoal is a black substance, which is easily breakable and not lustrous, whereas diamond is the hardest substance on the planet, mostly colorless, and lustrous. Diamonds share the same element, carbon, even with the substance graphite, which is present in pencil lead, but they are completely contrasting too.\nThe weight of the diamonds is the only thing that determines its price\nA lot of people assume that the heavier the diamond, the more its value. Carat weight, in all honesty, is not the only determinant of the price of the diamond. Diamonds have a number of criteria that determine their value. The criteria are often known as the 4C’s: carat weight, color, clarity and cut. The carat weight is how much the diamond weighs, and the color of the diamond needs to be as close to colorless as possible for its price to be higher and the diamond to be more valuable.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "How You Should Actually Shop for a 3 Carat Diamond Ring.\nBy Stephanie Dore\nWe get it. Size matters to you. And because it matters to you, it most definitely matters to us. We hear it time and time again, someone wants a 3 carat ring and the first question is, “how much does a 3 carat diamond cost?” Well, darlings, you’re asking the wrong question. Think about it like buying a house, let’s say. You’re comparing a bunch of properties that are all the same square footage—but everything else is different. Different finishes, different neighborhoods, different layouts…and thus totally different prices. And those are the things that will determine if you’re happy living in that house for, say, ever.
Well, you’re going to wear that engagement ring forever too, so here’s how you should actually shop for that stunner.\nAre you fancy, now?\nThis comes as a shock to many diamond newbies, especially since it’s not one of the 4Cs, but the shape of a diamond plays a pretty major role in its price. As the most popular (and available) diamond shape, a round diamond (no matter the size) will cost you about 20% more than any fancy shape. What’s a fancy shape, you ask? That’s the easy question: anything other than round.\nFancy shapes can not only save you dollar bills but some even look bigger! Take an oval for instance. A 3 carat oval diamond is going to look about 20% larger than a 3 carat round diamond simply because of its shape (and its cut, really). Because it’s elongated and cut shallower in the pavilion, an oval’s weight is more spread out, giving you more look for your money. Now…an Asscher on the other hand is going to look a little smaller than the same weight round. The visual size of a diamond is all about the shape and proportions (the length to width ratio and overall depth). Here’s more:\nThe super standard of the industry, round diamonds with an excellent or ideal cut grade won’t vary much. 
If you go lower on cut grade (not our recommendation) then you risk getting a diamond that hides its weight in a thicker girdle or deep pavilion, meaning it'll look smaller from the top view—which is how everyone's going to see it.", "score": 24.345461243037445, "rank": 54}, {"document_id": "doc-::chunk-1", "d_text": "|Carat Weight||Diamond Price Per Carat||Total Price|\n|0.25 carat||$800 – $4,000||$200 – $1,000|\n|0.50 carat||$1,000 – $8,000||$500 – $4,000|\n|0.75 carat||$1,300 – $9,000||$1,000 – $6,800|\n|1.0 carat||$2,000 – $16,000||$2,000 – $20,000|\nHow much is the Hope diamond worth?\n|The Hope Diamond in the National Museum of Natural History|\n|Weight||45.52 carats (9.104 g)|\n|Owner||United States of America|\n|Estimated value||US$200–350 million|\nWhat is the highest color grade for a diamond?\nThe highest color grade for a diamond is “D”. “D” color diamonds are very rare and not commonly found in traditional jewelry. Most diamonds used in jewelry have a slight presence of color.\nHow many carats is Kim Kardashian’s ring?\nKris Humphries proposed to Kim Kardashian West with a 16-carat diamond ring.\nWhat is the highest color grade a diamond can have?\nD color diamond is the highest grade and is extremely rare—the highest color grade that money can buy. Eight percent of customers choose a D color diamond.\nHow do you tell if a diamond is real with a flashlight?\nTo tell if a diamond is real with a flashlight, hold the flashlight vertically with the beam shooting up, and place the stone upside down on the lens. Examine how the light from the flashlight passes through and exits the stone.\nHow much should a 1 carat diamond cost?\nAccording to diamonds.pro, a 1 carat diamond costs anywhere between $1,800 and $12,000. However, a quality diamond doesn’t just come down to size.
When assessing stone value, four very important factors are always taken into consideration – the four c’s of diamond quality: color, cut, clarity and carat.\nHow do you tell if a diamond is real?\nTo determine if your diamond is real, hold a magnifying glass up and look at the diamond through the glass. Look for imperfections within the stone. If you’re unable to find any, then the diamond is most likely fake.", "score": 23.24187491831798, "rank": 55}, {"document_id": "doc-::chunk-1", "d_text": "Here's a table of size and weight ranges:\nCarat Fractions and Their Decimal Equivalents:\nFraction Decimal Equivalent\n|1/10||=||.09 ─ .11|\n|1/8||=||.12 ─ .13|\n|1/7||=||.14 ─ .15|\n|1/6||=||.16 ─ .17|\n|1/5||=||.18 ─ .22|\n|1/4||=||.23 ─ .28|\n|1/3||=||.29 ─ .36|\n|3/8||=||.37 ─ .44|\n|1/2||=||.45 ─ .58|\n|5/8||=||.59 ─ .68|\n|3/4||=||.69 ─ .82|\n|7/8||=||.83 ─ .94|\n|1.0||=||.95 ─ 1.05|\nRemember, all diamonds are not created equal. Two diamonds of equal Carat Weight may vary substantially in price due to their Cut, Color and Clarity. Also, a diamond's weight can be 'hidden' in different parts of the stone.\nFor example, you can have a well-cut diamond whose weight is distributed properly; a diamond that is cut too shallow to make it wider and heavier, but not the most brilliant; or one that is cut too deeply, to add weight to the bottom of the stone - again compromising its ability to radiate maximum brilliance. Visit Cut for more information.\nThe bottom line\nThe carat weight of a diamond is an extremely important determining factor in its value. Diamonds are valued on a per-carat basis. For example, a diamond of exceptionally high quality may sell for $20,000 per carat, while one of lesser quality may sell for $1,000 per carat. So, a three-carat stone could be $60,000 or $3,000, depending on its per-carat price.\nDiamond values also increase disproportionately as the size of the stone increases.
In other words, a two-carat stone will not necessarily cost twice as much per carat as a one-carat stone. It could cost much more, since diamonds are rarer in larger sizes.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-4", "d_text": "Listed below are some tips from the pros at finding a great deal on a beautiful 1 Carat engagement ring:\nCut: Choose only the highest quality Excellent cut diamonds if you want your diamond to sparkle and shine as brightly as possible. A diamond's cut is the most critical aspect of its overall appeal and worth.\nColour: Pick a diamond that falls anywhere in the G-I range, as it will appear just as colourless to the human eye as a D-F diamond.\nClarity: Diamonds rated VS1 or VS2 in clarity are the best buy for the money in terms of what can be seen by the naked eye. At these quality levels, flaws like inclusions are nearly invisible to the naked eye.\nShape: Choose a diamond shape that you find both beautiful and flattering. Make sure the setting you envision works well with the shape and that the stone is safe from harm.\neCommerce Diamond Vendor: Pick a reputable jeweller with lots of experience.\nHow Does The Carat Of A Diamond Affect The Price?\nPutting it plainly, if you want a larger carat diamond but don't want to lower the quality of the diamond, you will have to pay more. Finding the right diamond requires balancing cost and desire. If you let the diamond's size dictate your spending, you could end up deeply in debt or with a less-than-ideal stone.\nTo attain the average of 0.6 carats for $2,200, you will also need to account for the setting. The likelihood is that the diamond you receive will be of low grade. The cut of a diamond is the most critical factor in determining its brilliance.\nHence, getting a smaller diamond with a much better cut is always recommended.
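The per-carat pricing described above is multiplicative, and the per-carat rate itself jumps at larger sizes, which is why a two-carat stone can cost far more than twice a one-carat stone. A minimal sketch, using a hypothetical tier table whose $1,000 and $20,000 endpoints echo the examples quoted earlier:

```python
# Sketch of nonlinear per-carat pricing: total = weight x per-carat rate,
# where the rate itself rises with size. The tier table is hypothetical;
# only the $1,000 and $20,000/ct endpoints come from the text's examples.

TIERS = [            # (minimum carats, assumed price per carat)
    (0.0, 1000),
    (1.0, 5000),
    (2.0, 10000),
    (3.0, 20000),
]

def price_per_carat(carats: float) -> int:
    """Return the rate of the highest tier the weight reaches."""
    rate = TIERS[0][1]
    for threshold, tier_rate in TIERS:
        if carats >= threshold:
            rate = tier_rate
    return rate

def total_price(carats: float) -> float:
    return carats * price_per_carat(carats)

print(total_price(1.0))  # 5000.0
print(total_price(2.0))  # 20000.0 -- four times the 1 ct price, not double
print(total_price(3.0))  # 60000.0 -- matches the $20,000/ct example above
```

Under these assumed tiers, doubling the weight quadruples the price, illustrating the "disproportionate increase" the text describes.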
A diamond's brilliance can make it appear larger, change its colour, or even conceal flaws.\nHow To Get The Best Carat Weight For Your Money\nWhile it's understandable to want to spend as little as possible on an engagement ring, you shouldn't skimp on quality to afford a larger diamond. A diamond engagement ring is a significant investment, so here are some things to keep in mind while shopping:\nThe cut of a diamond is crucial to its brilliance, so it's essential to get the best cut you can afford. Always keep in mind that even the largest diamond won't shine as brightly as it could if the quality of the cutting is poor. You can save money on carat weight by prioritising the finest possible cut.\nTry to choose a diamond that weighs just below the popular thresholds (for example, 0.90 carat rather than 1.00), where prices jump.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-1", "d_text": "Remember, there is only a minute difference between colorless and colored diamonds and only trained personnel can mark the tints.\nYou must know that even the purest and brightest of all diamonds have certain natural impurities, inclusions and blemishes. All diamonds are rated on the standardized industry scale, the parameters being FL (flawless) to I3 (heavily blemished). Using a 10-power magnifying glass, one can judge a diamond's clarity, but only a trained eye can bring out the flaws. Most diamonds have inclusions, which interfere with the light passing through the diamond, thereby diminishing the brilliance. Remember, the fewer the inclusions, the more beautiful a diamond will be.\nCarat refers to the standard weight of the diamond and not the size. One carat is equal to 0.200 grams. A heavier diamond is what most of us crave, but remember, while buying a high carat diamond, you don't have to compromise on the other three C's. Large diamonds are rare.
Remember, though two diamonds have equal weights, there can still be a huge difference in their properties, depending on clarity, cut and color.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-1", "d_text": "For comparison, a “D” costs about one and a half times as much as an “H”. The price significantly decreases as you go down the alphabet.\n3. The clarity\nThe clarity of a diamond depends on the spots and blemishes that are found in the stone. They are referred to as “inclusions”. The highest ratings are as follows.\n- FL – Flawless\n- IF – Internally flawless\nThen there are different variations such as VVS1 – VVS2 (very, very slightly included) and I3 (inclusions visible to the eye). Your best bet would be to find SI1 or SI2 (slightly included) stones. Most women don’t study their engagement ring with a magnifying glass or bring it to a professional jeweler to learn its real value. Only an expert can identify most inclusions. Inclusions visible to the eye are graded P1 – P3. An “IF” stone will cost twice as much as an “SI2” diamond.\n4. The carat\nThe weight of a gemstone is measured in carats. The heavier the stone, the more expensive it is. If there are several stones on a ring, they are weighed together to get a total carat value. The weight of the diamond doesn’t necessarily affect its size. A diamond can look bigger while weighing less, which is a trick you can use to purchase an affordable ring.\nAnother trick buyers use to purchase a good diamond at a lower price is to go below a round number. A 0.97 carat diamond will cost less than a 1 carat one. A 1.97 carat stone will be cheaper than a 2 carat, and so forth. A 0.9 carat can cost 25% less than a 1 carat.\nBesides the four “Cs”, there are two other factors that must be considered when choosing the perfect stone for the perfect engagement ring.\n5. The shape\nThere are many different diamond shapes. The most popular is round.
However, it all depends on the preference of your loved one.\nBefore you go shopping, take a look at the following shapes to narrow down your choice.\nThe most expensive diamond shape is round due to its popularity and the optimal brilliance it offers.", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "Having the perfect engagement ring is a dream of every girl; a sparkling and shining ring that can make your friends go-gaga over it. Those who have not seen it will often ask others how big it is. It is definitely the foremost question in people's minds, whether they are shopping or asking about the rings other people have. However, as you might have noticed, the store salespeople themselves have a hard time answering this question. Yet, they answer it by saying, “It depends on your budget.”\nOn top of that, they would further tell you to look at the technical details more than the size. A diamond ring is a symbol of love; it is the expression of your commitment and, of course, how big your love is. The idea that a nice diamond ring is all about its size is just a misconception. Therefore, when buying a ring, you need to consider the buying criteria and the diamond size chart, which has details about the carats and the other necessary details. If you are planning to buy a ring for her, then you need to read this article.\nDiamond Size Charts\nWhat is a Diamond Carat?\nA carat refers to the unit of weight measurement that is used to weigh diamonds and gems. It is abbreviated ‘ct’. Carat weight is often confused with the visual size of the diamond even though, in reality, it is a measurement of the weight. The size of the diamond is different, based on the shape and weight of the stone. For instance, a round diamond of 1.00 ct. measures around 6.5 mm while a round sapphire of 1.00 ct. measures around 6.0 mm.\nAs you see, this difference is due to the different gemstones.
Moreover, the total carat weight (t.c.w.) is the total weight of all the gemstones and diamonds in a piece of jewelry. For example, diamond solitaire earrings are quoted by the total carat weight of the diamonds in both earrings.\nMisconception about Size and Weight\nWhile the size of the diamond is related to the carat weight, it is important to remember that carat weight is not the size of the diamond. Many people think this way, which is not correct.", "score": 23.030255035772623, "rank": 60}, {"document_id": "doc-::chunk-1", "d_text": "In reality, there are some other aspects of a diamond that affect the size and appearance of the stone. We will be explaining those aspects in the section below.\nWhich Size should you Opt for?\nIf you are confused as to which size you should go for, then here you go. The first thing you should keep in mind is your budget. This should be your primary consideration along with what you and your partner want. To decide upon the size of the diamond, you need to find a stone having the perfect balance of weight and quality.\nHowever, if you want to give her a bigger ring, you can buy a reasonably priced ring of excellent appearance by choosing the right color and clarity combination.
It’s really up to you which factor you value more!\nFor more about the cut of a diamond, click here.\nMost of us are familiar with rings’ carat size, so not much explanation is needed here! Most people look at a ring and determine the carat size by looking at its diameter from the top looking down. However, the weight of a diamond also takes its depth and cut into consideration. Wanting a ring that appears larger does not necessarily mean finding one with a higher carat size.\nJust like it sounds, clarity is how clear and pure a diamond is. This takes a couple of factors into account, such as blemishes and inclusions. A blemish is some sort of defect on the surface of a diamond, and an inclusion is an interior defect. An inclusion could be a speck, crack, cloudiness, or material other than diamond. Almost every diamond will have some type of inclusion. In fact, a flawless diamond could raise suspicion that it is a fake. Different organizations have come up with scales to grade a diamond’s clarity. The scale goes from included to flawless, and each of these grades has further subcategories.\nThe majority of diamonds have colors that range from a yellowish tinge to chocolate brown. A diamond that is a color besides this range is called “fancy.” This includes green, red, orange, pink, black, and blue. The color of a diamond can affect price, but certain colors are not necessarily more expensive than others. A couple of factors can affect the price of a diamond:\n- First of all, different cuts and shapes of diamonds can make a faint color appear to be more vibrant. Deeper colors are typically more expensive than a faded tone.\n- Fashion trends can influence price. If a popular celebrity shows off her pale green engagement ring, the price of pale green diamonds may temporarily inflate as people copy her style.\n- When certain colors mix together it can change the price. For instance, deep red diamonds are extremely rare and expensive. 
If a blue diamond has a red hue mixed in to give it a purplish appearance, the red modifier will increase its value.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-0", "d_text": "The \"carat\" is a unit of mass used for gemstones. The word \"carat\" comes from the Arab word Kharoub, as the Carob tree (and specifically carob seeds) was used as a benchmark in terms of weight.\nFormer diamond dealers still call a 1-carat diamond a \"four-grain diamond\", i.e. it weighs the equivalent of 4 carob seeds.\nIn 1907, the metric carat was established as being equal to 200 mg, or 5 carats = 1 gram.\nDiamonds are cut into an infinite number of different cuts.\nThe cut of the diamond is graded from \"excellent\" to \"poor\", according to the criteria of proportion, symmetry and polish.\nDiamonds are colourless or almost colourless, but they are colour graded from D (absolutely colourless) to Z (noticeable colour).\nThese are followed by fancy coloured diamonds, the most common of which are brown, yellow and green, ending with the rarest, such as blue or red.\nDiamonds are clarity graded from Flawless with no inclusions (FL) to Included (I3) with numerous inclusions.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-1", "d_text": "In reality, there are some other aspects of a diamond that affect the size and appearance of the stone. We will be explaining those aspects in the section below.\nWhich Size should you Opt for?\nIf you are confused as to which size you should go for, then here you go. The first thing you should keep in mind is your budget. This should be your primary consideration, along with what you and your partner want. To decide upon the size of the diamond, you need to find a stone with the perfect balance of weight and quality.\nHowever, if you want to give a bigger ring to her, you can buy a reasonable ring in excellent quality by keeping in view the color and clarity combination. 
The price of the diamond does not depend upon the size alone; it depends upon all the important C’s: carat weight, color, clarity and cut. If you want to leave a good impression on her and make her happy, choose smaller, less expensive diamonds or well-cut center stones.\nDiamond Carat Size Chart\nAre you still Hung Up on Size? Express your Love by Keeping the 4 C’s in mind\nNow that you know the nitty-gritty about diamond carats and sizes, you may still have the nagging question in mind, “How big will the diamond ring be?” If you want to get the scoop on how to buy diamonds that can express how big your love is, then check out the four C’s below.\nThe thing to keep in mind before buying a diamond is that they are graded according to their color, ranging from shades of yellow, brown and gray to colorless. The colors of diamonds are graded with letters, from D to Z. According to experts, the diamonds classified in the D through J range are the best. Also, experts believe that nearly colorless diamonds are the best options and worthy of buying. They are beautiful, sparkly white diamonds that will appreciate in value for as long as you have them.\nAmong the four C’s of diamonds, clarity is one of the factors that have the biggest influence on the diamond’s value. This is because diamonds are graded on the basis of their clarity. If you want a better diamond, opt for a clearer one. Clear diamonds are pure and without impurities, making them worthy of your money.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "There are a few things that you need to know before you purchase a diamond. These are commonly referred to as the 4 C’s. Diamonds are internationally graded according to the standards of the 4 C’s. These are... Colour, Clarity, Carat-Weight and Cut. At Sinclairs we also like to include a 5th C for... 
Certificate.\nThe most sought after diamonds are those with no colour. Diamonds are graded by colour starting at D (colourless) through the alphabet to Z (yellow). Diamonds appear colourless but most have subtle traces of a yellow or brown body colour. They are colour graded by being compared to a set of internationally accepted ‘master’ stones.\nClarity refers to the inclusions or natural imperfections of diamonds. Virtually all diamonds contain identifying characteristics; most are invisible to the naked eye. These are nature’s birthmarks and may look like tiny crystals, clouds or feathers. Clarity is graded by using a 10x magnification loupe or microscope. Major inclusions can interfere with the path of light that travels through a diamond, affecting its brilliance, sparkle and value.\nThe larger a diamond, the rarer it is. The weight of a diamond is measured in carats (ct). There are 100 points per carat. 0.50ct = ½ carat = 50 points. While larger diamonds are highly prized, diamonds of equal size may vary widely in value depending on their qualities of colour, clarity and cut.\nCut, also referred to as ‘make’, describes the proportions and symmetry of a diamond. The better a diamond is cut, the more fire and brilliance will be seen. Nature determines a diamond’s colour, clarity and carat weight, but the hand of the master craftsman releases its fire, sparkle and beauty. When a diamond is well cut, light will reflect off one mirror-like facet to another and disperse through the top of the diamond – resulting in brilliance and fire. Diamonds that are cut too deep or too shallow lose light through the side and bottom of the diamond. 
Poorly cut diamonds will be less valuable and brilliant when compared to well-cut diamonds.\nDiamonds can come in a variety of different shapes; these include: Round, Princess, Heart, Oval, Emerald, Pear, Asscher, Cushion, Trillion, Baguette, Marquise and Radiant cut diamonds.", "score": 21.695954918930884, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "Diamonds are sold by carat (which is written as ct.), the unit of weight, which is perceived by many in terms of size. The word \"carat\" derives its name from the carob seeds that people used in olden times to counterweight their balance scales. These seeds are so homogeneous in shape and weight that even today's sophisticated instruments cannot detect more than three one-thousandths of a difference between them. Currently one carat is equivalent to 0.2 grams or 0.007 ounces (about the weight of a paper clip). One more way of expressing the weight is by means of points. One carat is equivalent to 100 points, hence a 0.25 carat diamond can be referred to as a 25 point diamond. The size of a diamond is relative to its carat weight. Basically, when a rough diamond is cut and polished it loses about 2/3 of its total carat weight.\nA point which also needs to be considered is that the carat weight of a diamond never defines its actual shape. It is possible to have two diamonds of similar carat weight with an entirely different look, due to the disparity in their cut and shape. Also, it is rarer to find bigger rough gems of high quality than smaller rough gems, so a single 2 carat diamond can be pricier than two 1 carat diamonds of equal quality.\nIt should be noted that as the carat size of a diamond increases, its price increases at a growing rate. The bigger the diamond, the rarer it is. Only about one in a million mined rough stones is large enough to produce a finished 1 carat diamond. 
That is the reason why, as the carat weight increases, one naturally has to pay more not only overall, but on a price-per-carat basis as well. So whenever you plan to buy a perfect piece for yourself or as a gift, certain points that you need to keep in mind are the budget, taste and preference, as well as the style and setting of the ornament.\nThe carat of a diamond determines the weight of the diamond irrespective of its size or appearance. Between two diamonds of the same carat weight, one may appear bigger than the other depending on the specific shape of the diamond.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "ABOUT THE DIAMOND\nThe diamond is usually the main part of the engagement ring or any other jewel. The larger it is, the higher the price.\nThe price of the diamond is determined by its size and quality. The diamond has many characteristics that determine its quality, among them four main ones: the 4C’s. Understanding the 4C’s gives the diamond consumer basic knowledge about the quality of the diamond he buys.\nThe higher the rating, the higher its price. But you don’t have to pay a lot or buy the “best” diamond to receive a beautiful and sparkling diamond. Our job is to help you buy a diamond that looks good for the price you want.\nA diamond is very complex and has many other characteristics that affect its quality, price and beauty. You cannot know all of them. So the most important thing is to take a good look at the diamond and try to see that it is clear, beautiful and has the brilliance expected of a diamond.\nThe 4C’s are the four main qualities of a diamond: Carat, Cut, Color and Clarity, among the many other qualities that affect its quality, its price and its beauty.\nOne carat equals 0.2 grams. Each carat is divided into 100 points. Avoiding round numbers can save you a large amount of money. 
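The carat arithmetic in the passage above (one carat = 0.2 grams, divided into 100 points) can be sketched in a few lines of Python; the function names are illustrative, not taken from any of the sources quoted here:

```python
# Carat conversions: 1 carat = 0.2 grams, and 1 carat = 100 points.
def carats_to_grams(carats: float) -> float:
    return carats * 0.2

def carats_to_points(carats: float) -> int:
    return round(carats * 100)

# A "twenty-five pointer" is a quarter-carat stone weighing about 0.05 g.
assert carats_to_points(0.25) == 25
assert abs(carats_to_grams(0.25) - 0.05) < 1e-12
```

The same conversions also show why "avoiding round numbers" saves money: a 0.97 ct stone is only three points lighter than a 1.00 ct stone, yet it sits below the price jump at the one-carat mark.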
The larger the diamond, the higher the price, but the increase is not linear; there are jumps in price at a number of weights, such as 0.30, 0.50, 0.70 and 1.00 carats…\nThe scale of the cut varies from EXCELLENT to POOR. The quality of the cut determines how light that enters the stone will react, how much light will return, and how much of the “fire” and sparkle of the diamond will be seen. Diamonds with good proportions, symmetry and polish use their interaction with light most effectively.\nThe scale of color varies from D (colorless) to Z (light yellow or brown). Colorless diamonds are very rare. Most of the diamonds used in jewelry are almost colorless with shades of yellow and brown. The recommended color level for the diamond in an engagement ring in white gold is F or G. In yellow gold, a diamond with H color is also possible because the contrast between the yellow gold and the diamond is high and the diamond will still look white.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "Color is measured on a scale from colorless to shades of color, with colorless being the rarest and most expensive.\nThe Gemological Institute of America (GIA) uses a color grading scale of D-Z, colorless to shades of color respectively. As mentioned above, diamonds come in different colors, and anything outside this color scale is referred to as \"Fancy Colors\". The rarest and most expensive diamonds are considered Fancy Colors, which include red, pink, blue, and green.\nDifferent diamond cuts have been developed to best utilize a diamond's material properties. The cut of a diamond creates a somewhat symmetrical arrangement of facets that modifies the shape and appearance of the diamond. Several cuts have been used when shaping and polishing a diamond.\nClarity is judged upon the amount of inclusions and blemishes a diamond has. 
An inclusion is a growth crystal inside a diamond, whereas blemishes can be scratches and nicks on the diamond's surface. Clarity in diamonds is graded using the following scale:\n- FL Flawless - no inclusions or blemishes within view under 10X magnification\n- IF Internally Flawless - no inclusions but minor blemishes when viewed under 10X magnification\n- VVS1 & VVS2 Very Very Slightly Included - minute inclusions difficult to view under 10X magnification\n- VS1 & VS2 Very Slightly Included - minute inclusions, commonly crystals, clouds or feathers, when viewed under 10X magnification\n- SI1 & SI2 Slightly Included - contains inclusions such as crystals, clouds, knots, cavities, cleavage and feathers when viewed under 10X magnification\n- I1 - I3 Included - inclusions such as large crystals or large feathers viewed under 10X magnification, which may affect the transparency and brilliance without magnification\nCarat weight (ct.) is the unit of measurement used to weigh a diamond. One carat equals 1/142nd of an ounce, or 1/5th of a gram. Diamonds are weighed to a thousandth of a carat (0.001) and then rounded to the nearest hundredth. Diamond sizes are also referred to as \"Points\". One carat is divided into 100 points; each point is 1/100th of a carat.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "Can you tell how many carats a diamond weighs? Perhaps you just got engaged and are wondering ‘how big is my diamond?’ There is a visible difference between a 1/2 carat, 1-carat, and 2-carat diamond. However, determining the precise weight of a diamond is almost impossible without a scale.\nWe’ll show you an effective beginner method for estimating carats on the fly. All you’ll need is a millimeter gauge and the chart below to get going.\nWhy We Need Diamond Weight Calculations\nUsually, dealers sell estate jewelry ‘as-is,’ which means the stones remain in their original settings. 
A scale is of no use if the jeweler does not remove the gems, so getting an exact carat weight is challenging in this case.\nWith set diamonds, jewelers use a diamond mm-to-carat-weight calculator to approximate the carat weight. If you just need a quick estimate, a diamond carat chart works, too.\nFirst, measure the size in millimeters. This measurement will help you figure out the approximate carat weight. Either input that measurement into a diamond weight calculator or use a diamond weight chart to match the size with a carat weight.\nThe chart below references the measurements of the diamond in millimeters and from there gives you an approximate carat weight. If it’s a fancy-shaped diamond, you’ll need the length and width in millimeters to find an estimated weight.\nThis chart applies to diamonds that are modern, well-proportioned cuts. Also, when handling antique diamonds, it takes more expertise to provide a precise estimate.\nThe girdle thickness, the height of the crown, and the overall depth of the diamond can substantially affect carat weight and this chart’s precision.\nAlso, keep in mind that every gemstone has a different size-to-carat ratio. The chart above will not be accurate for other stones like amethyst or garnet.\nWe hope this quick and reliable beginner method will help you estimate carat weight when shopping for estate jewelry. For more accurate carat weight calculations, appraisers will also use special tools to measure the height of the stone and consider numerous other factors.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "Purchasing a loose diamond is intimidating the first time. Once you know the qualities that diamonds are assessed with, you will feel more comfortable purchasing one.\nThere are four criteria that are used to evaluate a diamond – clarity, color, cut, and carat. Below is a brief explanation of each criterion. 
Our salespeople carefully explain these in greater detail when you come to our store.\nClarity refers to the clearness of the diamond. The number of imperfections (inclusions and blemishes) determines where on the clarity scale the diamond is rated. The scale runs from flawless through I3, where the flaws are visible to the naked eye. The Gemological Institute of America developed the scale, and it is recognized as the standard followed by all jewelers in America. We carry diamonds across the entire scale to give our customers a full selection.\nColor refers to the amount of color in the diamond. While most people think of a diamond as having no color, the color varies greatly. The color ranges from colorless to shades of yellow for the most common diamonds. Specialty diamonds come in shades of brown, blue, green, pink, and red. These are more valuable and rarer than colorless diamonds.\nCut refers to the elements of finish, polish, and symmetry that make up the brilliance of a diamond. Brilliance is measured by the amount of light that is reflected to the eye and is measured through the Dia-Mension system. Well-cut diamonds are more expensive than poorly cut diamonds. Most of the diamonds that you see advertised at great discounts are poorly cut and a terrible investment.\nFinally, carat is the weight of the diamond. The larger the diamond, the more carats it has, and the more valuable it is. Larger diamonds are rarer than smaller ones. Therefore, two one-carat diamonds will not be worth as much as one 2-carat diamond. Diamond settings should be inspected by your jeweler twice per year to ensure that the prongs are tight and the diamond will not fall out.\nWhen you are ready to purchase your diamond, come to our store. Our jewelry consultants take pride in educating each customer on the qualities of each diamond we have. 
We have a diamond to match every budget both large and small.", "score": 20.327251046010716, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "What Is A Carat?\nIn English, we have three homonyms (words that sound the same but have different meanings and spellings) for carat. There is the carrot (vegetable), karat (purity of gold, i.e. 10K, 14K, 18K, 24K) and carat (gemstone weight). One carat = ⅕ of a gram (0.200 grams), which is divided into 100 points. In other words, a 1.00 carat diamond equals 100 points. You may have heard a jeweler say, “your diamond is a 25 pointer” (quarter of a carat), rather than describing it as weighing .05 grams.\nDon’t Confuse Carat Weight With Size.\nThe size of a diamond is measured in millimeters. For example, the average 1.0 carat round diamond measures approximately 6.4mm across the top (crown). But not every 1.0 carat diamond is 6.4mm across the top. The cut determines how the carat weight is distributed. Think of this. I am 6’ tall and weigh 190 lbs. I have a brother who is 5’9” and weighs 190 lbs. We look very different even though we weigh the same.\nNot All Diamonds Are Cut Equally.\nWhen a piece of diamond rough is presented to a diamond cutter, he must evaluate how to get the most light return (brilliance) from the stone while minimizing waste and maximizing profits. Sam thought he hit a home run by purchasing a 1.0 carat diamond for the same price as his buddy Ted who bought a .90 carat diamond. When the two hooked up to compare diamonds Sam was shocked. Ted’s diamond had more “pop” and appeared to be about the same size as Sam’s larger stone. Sam succeeded in breaching the 1.0 carat barrier without breaking the bank. However, Ted experienced the maximum “look” for the same price as his buddy by sacrificing some carat weight.\nYou now understand a diamond's carat weight in relation to size, cut and price. I believe many of you reading this article are preparing to purchase an engagement ring. 
Some of you will select the diamond and have it set into a semi-mount and others will buy a ready-made piece.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "With that said, pricing for a 1-carat diamond can range from $2,000 to $25,000. However, as mentioned, one of the most deceptive things about figuring out the cost and value of a diamond is related to its carat.\nHow Much is a 1-Carat Diamond Worth?\nDiamonds have been a symbol of love and wealth throughout history. However, no single diamond is quite like another. Since diamonds are forever, if you’re thinking of buying one, there’s no question you should make sure you’re getting the most diamond for your dollar. This is where confusion sets in! How much is a 1 carat diamond? If you’re like most jewelry owners, you probably have no idea which factors an appraiser looks at to value a traditional or simulated man-made diamond.\nThere are such extreme diamond pricing variations from jeweler to jeweler. 
It leaves many customers scratching their heads as they attempt to puzzle out just why one diamond costs $3k, while another diamond of the same size and carat weight costs $10k. To the untrained eye, they look just alike, but to diamond cutters, the differences are quite distinguishable! However, the diamond’s cut, clarity, brilliance, and color grade are just a few factors that contribute to the wide price range.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-3", "d_text": "With the use of a Hearts and Arrows viewer you can see the symmetric patterns of a Hearts and Arrows diamond. When viewing a Hearts and Arrows diamond from the top (crown) you will see 8 symmetrical arrows. When flipped over to the pavilion side you will see a pattern of 8 hearts with small \"V\" shapes.\nDiamonds By Eyal has an unmatched inventory of Hearts and Arrows diamonds. Call to make an appointment so you can see these beauties.\nOnce you've determined what cut, color, and clarity grade you're looking for in a diamond, it's easy to determine the carat weight of a diamond that will fit within your budget.\nWhen diamonds are mined, large gems are discovered much less frequently than small ones, which makes large diamonds much more valuable. In fact, diamond prices rise exponentially with carat weight. So, a 2-carat diamond of a given quality is always worth more than two 1-carat diamonds of the same quality.\nTo choose the best carat weight of diamond, consider her style, the size of her finger, the size of your setting, and your budget. If you have a set budget, explore all your options, and at Diamonds by Eyal you'll find that there is a wide range of diamond carat weights and qualities available in your price range.\nIf your recipient is very active or not used to wearing jewelry, she may find herself bumping or nicking her new ring. Consider a smaller size diamond or a setting that protects a larger diamond from getting knocked against doors and counters. 
Also keep in mind that the smaller the finger, the larger the diamond will appear. A 1½-carat diamond solitaire looks much larger on a size 4 finger than a size 8. If you have already chosen a setting, make sure you choose a diamond to fit.\nLook for the diamond size specifications of your ring or ask your Diamonds by Eyal diamond and jewelry expert what size diamond you should look for.\nFinally, if a large carat weight is important to you, yet you're working within a budget, please call Diamonds by Eyal to find the perfect stone for you. But the best way to determine what size is best is by getting an idea of what she is expecting. If you plan carefully, you can get some answers without even raising her suspicions.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-8", "d_text": "Diamond Facts That You Should Know. A diamond's size/weight is measured in carats. A carat is divided into 100 points. Therefore, a diamond of 75 points weighs .75 carat.\nHow to know the exact 1 carat diamond ring price? 
If you're interested in knowing what a 1 carat diamond is worth before buying one, we recommend saving around $5,000 to make the purchase. Generally, the range is much wider, but this sum is the best ratio between worth, quality, and beauty for a 1 carat diamond ring.\nDiamond Pricing: at 7 carats, a diamond weighs 1.4 grams and a Round Brilliant cut has a diameter of an amazing 12.4mm. These are incredible measurements for something as special as a diamond. The prices per carat range from $11,313 to $171,927. A 7 carat diamond is the ideal choice for special occasions such as an anniversary.\nA 1 carat diamond ring is a classic engagement choice.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-3", "d_text": "N-Z – Noticeable Color\n**Note that fancy yellow or other-hued diamonds are graded on a different color scale than white diamonds.\nClarity: Diamond clarity is determined by the internal and external imperfections visible under 10x magnification. The fewer inclusions and blemishes, the better the clarity – and more valuable the diamond.\nFL – Flawless – Shows no inclusions or blemishes.\nIF – Internally Flawless – Contains no inclusions; minor blemishes tolerated.\nVVS1 & VVS2* – Very, Very Slightly Included – Contains minute inclusions that are extremely difficult to locate.\nVS1 & VS2* – Very Slightly Included – Contains minute inclusions, such as clouds, crystals, or feathers, which are difficult to locate.\nSI1 & SI2* – Slightly Included – Noticeable inclusions such as clouds, knots, crystals, cavities, and feathers.\nSI3 – Slightly Included – Contains inclusions that are very easy to see with 10x magnification.\nI1, I2, I3 – Included – Contains very obvious inclusions, which can usually be seen with the naked eye.\n*Note – size, position and number of inclusions determine distinctions between VVS1 & VVS2, VS1 and VS2, SI1 and SI2.\nCarat: Diamond weight is measured in carats; the greater the carat weight, the rarer – and more expensive – 
the diamond. Once you’ve determined what cut, color, and clarity grade you’re looking for in a diamond, it’s easy to determine the carat weight of that quality of diamond that will fit within your budget.\n**Note – Before you buy your stone, ask the retailer to provide you with a diamond report issued by an independent gemological association – such as the GIA or AGS.", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "Diamonds are praised for being beautiful, as well as sophisticated. We often think that a larger diamond means more affluence and power, but that is not necessarily true. The size of your diamond will be important to you. It is not always about the money it takes to purchase that diamond. It can also be about the quality of the cut, clarity, colour, and carats.\nA larger carat diamond does not always mean it is more expensive. In fact the larger diamonds can be more difficult to manipulate into a pleasing cut, which reduces their quality. The amount of carats will not necessarily tell you the price of the diamond. Consider for a moment a 1.8 carat diamond and a 4 carat diamond. Logically you would think the 4 carat diamond would be more expensive and of better excellence. At the moment you are just considering carats, but what if we told you the cut of the 4 carat diamond is deep, and the 1.8 carat has a slimmer cut in a heart shape. This might change your opinion. A deeper cut on a diamond usually means it loses brilliance. It will not reflect the light as much as the shallower cut. This means it loses sparkle.\nThe clarity of the 4 carat diamond is magnificent. There are no inclusions in our example of the 4 carat. The 1.8 based on its cut also is without any damage inside the actual stone. The 1.8 carat is colourless the most popular diamond as is the 4 carat. Based on this information you would still find the 4 carat diamond is more, but you may find the 1.8 carat is only a few pounds difference. 
The size of your diamond will speak to the 4 C’s that rate diamonds. It is all four aspects that determine cost, not just size.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-3", "d_text": "In 1960, the average price for such a diamond stood at some 2,700 U.S. dollars.\nIt means you should multiply the number by 2 to get the exact price you can count on while buying 2-carat engagement rings or other types of jewelry. The wholesale prices range between $5,060 and $48,400 per carat, while the average 2-carat diamond price for GIA-certified stones would range between $10,120 and $96,800. The table below shows the market prices for a 5 carat diamond. Considering the rarity of 5 carat diamonds, the actual price can vary. Also, the price will be higher if the diamond weight is larger. Note that the prices in this chart are for a 5.00-carat diamond. Therefore, a 5.50-carat would cost much more. 10 carat diamonds weigh around 2.0 grams and a round brilliant cut has a magnificent diameter of 14.0mm or above. A diamond this size is best set off in a pendant or a diamond ring. These are the kinds of diamonds that will brighten up the room and bring out the natural beauty of the one who wears it. When it comes to the price of a 1.4 carat diamond ring, there's no definitive answer as the cost of a diamond depends on many factors like the 4Cs and the shape of the diamond. Obviously, if a diamond has better material quality and is graded as a D color or internally flawless clarity, you would expect the price to be much higher. A 1.5-carat diamond is about twice the price of a 1-carat diamond. A 1.5-carat stone can cost anywhere between $5,000 and $40,000. 
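The "multiply by 2" rule quoted above is just the general per-carat arithmetic: total price = price per carat × carat weight. A minimal sketch using the wholesale figures from the passage; the function name is illustrative:

```python
# Total stone price from a per-carat rate: total = rate * weight.
def total_price(price_per_carat: float, carats: float) -> float:
    return price_per_carat * carats

# The quoted 2-carat wholesale range of $5,060-$48,400 per carat
# yields the quoted totals of $10,120-$96,800.
assert total_price(5060, 2) == 10120
assert total_price(48400, 2) == 96800
```

Note that the per-carat rate itself rises with weight, which is why a 2-carat stone costs far more than twice a 1-carat stone of the same quality.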
The price depends on the quality of the stone.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-4", "d_text": "- I1 means Included; characteristics were obvious to the grader when magnified\n- The I2 and I3 grades are reserved for diamonds with extremely obvious inclusions and/or durability issues caused by the inclusion type.\nWhile diamonds are graded under 10X magnification, they are not graded outside the ‘scope. This means that several diamonds of the same grade can appear different to the naked eye. This is further influenced by several factors, including the following:\nLaboratory Standards: Different laboratories have slightly different standards for clarity.\nShape: Brilliant cutting styles have greater faceting complexity and less transparency, so a round brilliant may show inclusions less than an emerald or Asscher of the same grade.\nSize and Number: An inclusion plot that looks “clean” may not correspond to a cleaner presentation, since a single grade-setting crystal may be more naked-eye visible than several smaller crystals which set the same clarity grade collectively.\nPosition and Visibility: A diamond with a dark central inclusion can present with far more naked-eye visibility than one with a transparent inclusion under a girdle facet, yet both diamonds might have the same clarity grade.\nCut-Quality: A diamond cut with the critical angles and precision needed for highest performance boasts superior brightness and scintillation, even when removed from jewelry store lighting, which helps to “mask” inclusions.\nCut quality affects visible clarity\nDiamond weight is expressed in carats. Carats are further divided into 100 “points” so that a 0.90 ct diamond may be called a 90-pointer, a 0.88 carat diamond an 88-pointer and so on. The word ‘carat’ has roots in ancient times, when diamonds were compared against carob beans by traders in the Ottoman Empire.\nCarat weight is NOT size. 
Diamonds with the same carat weight can have smaller or larger vertical spreads, depending on the geometry of their cut. This is no different than two people weighing the same, but one is taller and the other is wider.\nShallow and Deep Diamonds\nCarat weight influences price more than any other factor so the goal of mass-manufacturers of diamonds will always be keeping as much weight from the rough crystal in the diamond as possible. This is even more important than the beauty of the final product since bigger diamonds bring bigger money. The result is millions of diamonds on the market which sparkle under bright lights (all diamonds do) but have average performance in normal lighting.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-4", "d_text": "This is another example of something that might look good in the store, but someday you will compare it with a well cut gem and be disappointed with your purchase.\nCabochons are easier to judge. Begin by checking the polish under magnification. Then hold the stone a short distance from your head and rotate it slowly. Notice how the light passes across the surface. On a well cut gem, it will flow smoothly from one side to the other. If it is poorly shaped the light will not flow smoothly, but snake across the surface. Surface irregularities and poorly polished areas will also show up this way.\nSimply put, larger stones are less common than small ones. Hence, they demand a higher price per carat. For example, a quarter-carat topaz may cost $60 per carat, or $15. A half-carat topaz, (with the same color, clarity and cutting grades,) might cost $100 per carat, or $50. A full carat topaz would cost $200.\nChoosing the right size is a personal matter. For the bold, dynamic individual, a large gem mirrors their personality. On the other hand, small stones are better suited to someone with delicate and feminine tastes. 
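The carat units described above (1 carat = 0.2 grams, 100 "points" per carat) can be captured in a small converter. A minimal sketch; the function names are my own:

```python
GRAMS_PER_CARAT = 0.2    # 1 carat = 200 milligrams
POINTS_PER_CARAT = 100   # a 0.90 ct stone is a "90-pointer"

def carats_to_points(carats: float) -> int:
    """Express a sub-carat weight the way the trade does, in points."""
    return round(carats * POINTS_PER_CARAT)

def carats_to_grams(carats: float) -> float:
    """Convert carat weight to metric grams."""
    return carats * GRAMS_PER_CARAT

print(carats_to_points(0.90))  # 90
print(carats_to_grams(1.0))    # 0.2
```

Note that, as the text stresses, none of this says anything about how large the stone looks face-up; weight and spread are related only through the geometry of the cut.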
Most people will fall in between these two extremes.\nWhen budget is a strong factor, smaller stones have a significant advantage. Not only do they cost less per weight, the amount of gem you see is disproportionate to their size. The reason is that volume goes up faster than the outside dimensions. For example, a half carat, round diamond measures 5 mm in diameter, a ¾ carat diamond 6 mm, and a full carat 6.5 mm. From a casual observation, the half and ¾ carat stones, or the ¾ and full carat stones look to be about the same size, but the price difference can be considerable.\nSmall gems are often clustered to give the illusion of more gemstone. Seven 1.6 mm diamonds, set close together, will take up as much space as a whole carat diamond. If set on white gold, it is hard to distinguish the separate stones, hence these are often called “illusion settings”.\nWhile these seven stones approach the eye appeal of a one carat diamond, they only weigh .14 carats. Considering that the price per carat is also much lower, the cost difference is significant.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-0", "d_text": "Completely colorless diamonds are extremely rare. While most diamonds may appear to be colorless (white), if examined closely, most have subtle yellow shades that can be seen when comparing two diamonds next to one another or under a jeweler’s loupe or microscope. Colors in a diamond are not always bad, as pink, blue, and black diamonds have become increasingly popular in recent years. As with all precious stones, different diamond colors are a result of trace elements present within the diamond. The GIA has created a color grading scale for “white” diamonds that can help to identify the shade of the diamond (representing how much of the trace elements exist).\nDiamonds are graded according to the GIA (Gemological Institute of America) color chart.\nD,E,F – Colorless. Stone looks completely clear. 
These are the highest priced stones. Approximate price for VS1 Clarity, 1 carat round diamond: $15,000\nG,H,I,J – Near Colorless. Some yellow or brown color is visible when the stone is not mounted. When mounted, the stone appears colorless. This range is considered very good value for the money. Approximate price for VS1 Clarity, 1 carat round diamond: $10,000\nK,L,M – Light Yellow. Yellow tint shows. When mounted this still appears tinted. Approximate price for VS1 Clarity, 1 carat round diamond: $5,000\nN-Y – Yellow. Strong yellow color. These stones are not used in much fine jewelry. Approximate price for VS1 Clarity, 1 carat round diamond: Less than $3,500\nZ+ – Fancy. Bright, remarkable color. Usually blue, pink, yellow, etc. Approximate price for VS1 Clarity, 1 carat round diamond: More than $10,000.\nDiamond Clarity is a way to measure the extent of a diamond’s internal flaws. A diamond that does not have many flaws (known as inclusions in the diamond world) is, as one would expect, of higher quality and price. This is because inclusions interfere with the light’s ability to shine through a diamond, making the diamond appear less brilliant. A diamond that sparkles very brightly is likely to have very few inclusions.", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "A diamond's cut is perhaps the most important of the four C’s, so it is critical to understand how this quality affects the properties and values of a diamond.\nIDEAL CUT: Reflects nearly all light that enters the diamond. An exquisite and rare cut.\nVERY GOOD CUT: Reflects nearly as much light as the ideal cut, but for a lower price.\nGOOD CUT: Reflects most of the light that enters. 
Much less expensive than a very good cut.\nFAIR CUT: Still a quality diamond, but a fair cut will not be as brilliant as a good cut.\nPOOR CUT: Diamonds that are generally so deep and narrow or shallow and wide that they lose most of the light out the sides and bottom.\nColor manifests itself in a diamond as a pale yellow. This is why a diamond's color grade is based on its lack of color. The less color a diamond has, the higher its color grade.\nTo grade 'whiteness' or colorlessness, most jewelers refer to GIA's professional color scale that begins with the highest rating of D for colorless, and travels down the alphabet to grade stones with traces of very faint or light yellowish or brownish color. The color scale continues all the way to Z.\nClarity simply refers to the tiny, natural imperfections that occur in all but the finest diamonds. Gemologists refer to these imperfections by a variety of technical names, including blemishes and inclusions, among others. Diamonds with the least and smallest imperfections receive the highest clarity grades. Because these imperfections tend to be microscopic, they do not generally affect a diamond's beauty in any discernible way.\nA carat is the unit of weight by which a diamond is measured. The process that forms a diamond happens only in very rare circumstances, and typically the natural materials required are found only in small amounts. That means that larger diamonds are uncovered less often than smaller ones. Thus, large diamonds are rare and have a greater value per carat. For that reason, the price of a diamond rises exponentially with its size. If a ½ carat diamond is priced at $1,000, a 1 carat diamond of the same quality will not be $2,000. 
Because the larger stone is rarer to find, the price will be exponentially larger.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-0", "d_text": "When buying diamonds, Treasures by the Gram has the expertise to help you evaluate the 4 C’s of each stone:\nCut is the primary factor in a diamond’s brilliance.\nClarity refers to number, size and type of imperfections.\nColor detracts from its brilliance; so less is best.\nCarat measures the diamond’s weight.\nD, E & F are considered colorless.\nG, H, I and J are near colorless.\nK, L and M have a faint yellow tint.\nN, O, P, Q and R have a very light yellow tint.\nS, T, U, V, W, X, Y and Z are light yellow.\nFlawless: The diamond shows no inclusions (clouds, included crystals, knots, cavities and feathers) or blemishes of any sort under 10x magnification when observed by an experienced grader.\nIF: Internally flawless. The diamond has no inclusions using 10x magnification but will have some minor blemishes.\nVVS1, VVS2: Very, very slightly included. The diamond contains minute inclusions that are difficult even for experienced graders to see under 10x magnification.\nVS1, VS2: Very slightly included. The diamond contains minute inclusions such as small crystals, clouds or feathers when observed with effort under 10x magnification.\nSI1, SI2: Slightly included. The diamond contains inclusions that are noticeable under 10x magnification.\nI1, I2, I3: Included. The diamond contains inclusions that are obvious under 10x magnification and may affect transparency and brilliance.\nFlawless diamonds are extremely rare but Treasures by the Gram and The Maine Diamond Exchange can truly offer you an opportunity to buy a beautiful diamond at a great price.\nUnderstanding Carat Weight\nA diamond’s weight is measured in what is known as a ‘carat’, which is a small unit of measurement equal to 200 milligrams. Carat is not a measure of a diamond’s size, since cutting a diamond to different proportions can affect its weight. (The word ‘Karat’ is used to express the purity of gold, and is not used in relation to diamonds.) Here is a diagram that shows the relative size of various carat weights in a diamond that is cut to the same proportions:\nCarat Fractions and Their Decimal Equivalents:\nRemember, all diamonds are not created equal.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-2", "d_text": "An SI3 diamond is often equivalent to a GIA I1 diamond. These diamonds have visible inclusions and are less brilliant than the diamonds above.\nI1 – Included. I1 diamonds generally have one major flaw. The diamond should still shine, but the clarity can be extremely variable. You should exercise a lot of caution when buying one of these diamonds. They can appear to be a great deal – you can buy a large diamond for relatively little money, but once you mount the diamond it may reflect very little light and will not appear to be very “clean” or “shiny.”\nI2, I3 – Included. Included diamonds are the lowest quality diamonds. They may appear to be cloudy from cracks or large inclusions. They should be avoided if at all possible.\nBecause diamonds can be cut to almost any size, diamonds are measured by weight. The standard unit of measurement for diamonds is the carat, which is equal to 0.2 grams. To give an idea of how much a carat is, there are about 2300 carats in a pound. Since carat is still a pretty rough unit of measurement, gemologists have created “points.” There are 100 points in 1 carat. But weight is not the only important factor that determines price. Two diamonds that weigh the same can have very different prices, due to the differences in quality as you learned above.\nWhen diamonds increase in size (especially past 1 carat), the price begins to rise exponentially. This is just because of how rare diamonds are. 
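The color bands listed earlier (D, E & F colorless; G-J near colorless; K-M faint yellow tint; N-R very light yellow tint; S-Z light yellow) lend themselves to a small lookup. A minimal sketch; the function name is my own:

```python
def color_band(grade: str) -> str:
    """Map a GIA letter grade (D-Z) to the color band named in the text."""
    g = grade.strip().upper()
    if len(g) != 1 or not ("D" <= g <= "Z"):
        raise ValueError(f"not a GIA color grade: {grade!r}")
    if g in "DEF":
        return "colorless"
    if g in "GHIJ":
        return "near colorless"
    if g in "KLM":
        return "faint yellow tint"
    if g in "NOPQR":
        return "very light yellow tint"
    return "light yellow"  # S through Z

print(color_band("H"))  # near colorless
```

Grades within one band are usually indistinguishable once the stone is mounted, which is why the near-colorless band is often called the best value.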
It’s easy to make small diamonds out of large ones. It’s far less easy to pack together a bunch of small diamonds to make a large one.\nWhen a diamond is found, it looks more like a piece of crystal or sandblasted glass. To make it look like a diamond, the gem is cut and polished by gemcutters or manufacturers that follow a precise method to cut “facets” or small angled pieces on the outer faces of the diamond. The table is the largest facet of the diamond that you would see when looking straight at the diamond. The crown is just below that, and the girdle is the largest or widest part of the diamond. On a round cut diamond, the pavilion is just below the girdle and leads to the pointy tip of the diamond, called the culet.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-5", "d_text": "This grading system consists of Flawless (FL), Internally Flawless (IF), Very Very Slightly Included (VVS1 or VVS2), Very Slightly Included (VS1 or VS2), Slightly Included (SI1 or SI2), and Included (I1, I2, and I3). Although this system has been adopted by the diamond market, it is not universally used, because it takes a great deal of practice and training to apply it.\nThe cut of a diamond is determined by the diamond’s proportions, such as its shape, width and depth. The cut determines what is called the diamond’s “brilliance”. Even if the diamond itself has perfect color and clarity, a poor cut will give the diamond a dull brilliance, because the cut determines how light travels within the diamond.\nThere are three sorts of cuts that determine the diamond’s brilliance: a shallow cut, a cut that is too deep, and an ideal cut. A shallow cut is a cut that is too low, so that light traveling through it is lost out the bottom of the stone and does not return into view. This cut makes a diamond appear drab and dull. 
A cut that is too deep is a cut that is too high, so that light traveling through it escapes via the sides and darkens the stone. An ideal cut is an excellent cut on a diamond that reflects light back to the top of the stone, giving it excellent brilliance.\nAs mentioned in the last newsletter, a single carat (ct) weighs about 200 milligrams or 0.2 grams. For diamonds that weigh less than a carat, the weight is expressed in points (pt). Points are 1/100 of a carat. Carat weight matters because larger diamonds are rarer than smaller ones, so essentially the larger the diamond, the more expensive it is. There is no standard grading chart that can show the different carat weights, because there are so many variations in diamond shape and cut, which make stones of similar weight look different.\n* Imitation Diamonds\nSince diamonds are among the most valuable and rarest of all the gems, efforts have been made to replicate or even enhance diamonds using much less expensive alternatives.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "A Raw diamond is a precious stone which hasn’t been cut and shaped in any specific form using a cutter and isn’t polished. Raw diamonds are also known as uncut diamonds or rough diamonds. After mining, the raw diamonds of high quality undergo the cutting procedure. Most buyers are looking for polished, cut diamonds when looking for jewelry. What about raw stones? They are basically cheaper than cut diamonds per carat. Cutting is a very costly and difficult process. Here’s how to determine the value of a raw diamond.\nClarity refers to the number of natural flaws in a raw diamond and how much you can notice them. With few common inclusions, raw diamonds are more valuable than diamonds with several common flaws.\nA raw diamond which has a brownish or yellowish tint is stronger but less valuable. 
On the other side, a diamond with less hue is far more valuable. Transparent, colorless diamonds are extremely rare to find, so these gems are very costly.\nThe carat of a raw diamond is a unit to measure weight (i.e. 1 carat = 200mg). Bigger stones are rarer and are more valuable. But it is not guaranteed that raw diamonds with a higher carat weight cost more than smaller diamonds.\nIf a rough diamond has a lot of visible flaws that should be cut out, even a higher carat weight cannot make it costlier than a smaller or cleaner stone. In short, clarity plays a vital role in the pricing of a diamond along with other factors. Should you aspire to decorate your appearance with such priceless motifs, make sure you buy a genuine one. Perhaps consulting an expert is better in this regard, instead of investing your money in a spurious diamond.", "score": 15.758340881307905, "rank": 85}, {"document_id": "doc-::chunk-2", "d_text": "A diamond's setting refers to the way the diamond is attached to the rest of the ring. Some diamond setting types include halo, pave, prong, tension, bezel and tiffany. Carat. You may be familiar with the term carat, when exploring diamond rings or bulk diamonds. One common misconception about carat size is that it relates to the size of the diamond. Use our diamond price calculator to estimate the value of a diamond based on color, clarity, carat and more. Simply select a diamond shape, the carat, and the specific details about the diamond to calculate your diamond value, see a price history chart, and see similar diamonds.\nWhy 1 Carat Diamonds are Over-Priced. The price of diamonds increases exponentially as the carat weight goes up. The difference in price between a .98 carat diamond and a .99 carat diamond is only about 1%. 
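The "magic weight" jump described above can be illustrated with a simple premium calculation. The formula is general; the two prices below are hypothetical, chosen only to show a 20% step of the kind the text mentions:

```python
def premium_pct(cheaper: float, dearer: float) -> float:
    """Percent premium of one price over another."""
    return (dearer - cheaper) / cheaper * 100

# Hypothetical prices for two otherwise identical stones either side of 1.00 ct:
price_099 = 5_000  # .99 carat stone (assumed figure, not from a price list)
price_100 = 6_000  # 1.00 carat stone (assumed figure, not from a price list)
print(round(premium_pct(price_099, price_100)))  # 20
```

This is why shopping just under the round carat weights (.90 or .99 instead of 1.00) is a common money-saving tactic: the visual difference is tiny, but the premium step is not.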
However, the price difference between a 1 carat diamond and a similar .99 carat diamond can be up to 20%.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-3", "d_text": "Per carat diamond price 1960-2016. Published by Statista Research Department, Nov 2, 2016.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-1", "d_text": "SI (Small Inclusions)\nSmall inclusions, easily visible with a magnifying glass at tenfold magnification, but invisible to the naked eye from the side of the crown.\n1st pique PI\nAverage inclusions, hardly visible to the naked eye from the side of the crown; do not diminish brilliance.\n2nd pique P II\nLarge and numerous inclusions, easily visible to the naked eye from the side of the crown, slightly reducing the brilliance of diamonds.\n3rd pique P III\nLarge and numerous inclusions, easily visible to the naked eye from the side of the crown, reducing the brilliance of diamonds.\nColor of the diamond.\nDiamonds are usually thought of as brilliant, colorless stones. Indeed, the colorlessness is the reason for the beautiful and breathtaking play of light that we all associate with diamonds. Within the international color scale of diamonds, the more colorless a diamond is, the higher it is rated. Diamonds that are rated higher (i.e. are colorless) are more expensive. 
Those with more color are usually more inexpensive. However, when a diamond has extremely high color saturation, it may rate the grade of FA, or Fancy. These diamonds may actually cost more than the finest of the D grade (Exceptional white +) of diamonds.\nWeight of diamonds (carat).\nAll precious gems have a system to indicate their weight. While the cut of a diamond has the greatest impact on its form and beauty, the carat has the heaviest impact on its pricing. This is due to the fact that carat refers to the actual weight of diamonds, and it is the weight that is a major indicator for how rare a diamond is. The heavier a diamond is, the rarer it will be and thus the more expensive.\nThe abbreviation for \"carat\" is \"ct\". In the certificates of diamonds the weight is listed either in thousandths of a carat or in hundredths of a carat.\n1 ct = 200 milligrams = 0.20 g, or about 6.5 mm in diameter for a round stone.\nTips for saving money when buying diamonds and diamond jewelry.\nThere is one simple fact which will help you save money when buying your diamond. The scale of evaluation of diamonds has several points where the price can be drastically increased or reduced. These points are responsible for the final price of polished, loose diamonds. The areas in which price changes are noticeable are: carat, color and clarity.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "Our online tutorial is designed to help you understand the relationship between the diamond’s value, its shape and its cut, color, clarity and carat weight, often referred to as the Four Cs. This tutorial does not replace the trained human eye or the magic ingredient of a truly stunning piece of jewelry.\nTake Advantage of Our Diamond Expertise\nEvery center diamond we offer has been certified by the GIA® exclusively, the world’s most recognized diamond grading laboratory. 
We select only the most exquisitely cut diamonds that have been individually checked and perfectly matched to your chosen setting. We ensure that our diamonds are ethically sourced and compliant with the industry’s most rigorous standards of social responsibility.\nA diamond’s shape refers to its form, which is largely dependent on the shape of the rough crystal from which our cutters begin. The Round Brilliant shape is the most popular, comprising approximately 80% of all diamonds. All other shapes are collectively referred to as fancies.\nCut or make is the term commonly used to refer to the quality and professionalism of a diamond’s craftsmanship. Cut is an extremely important characteristic in determining the overall beauty of the diamond. For this reason we only select the highest cut grades available.\nColor is graded on the lack of color present in a diamond. Color found in diamonds presents itself as varying hues of yellow and brown. The less color present in a diamond, the higher the color grade. Color grades range from D-Z with D being colorless or the highest grade. We can recommend the color combinations that work best for each specific setting.\nClarity refers to the number and size of small imperfections naturally found in nearly all diamonds. These imperfections, often referred to as inclusions, determine a diamond’s clarity grade. The more frequent and greater the size of inclusion, the lower the clarity grade. Clarity grades range from FL (Flawless) to I3. Along with color guidance, we recommend the clarity combinations specific to each setting.\nI = Included SI = Slightly Included VS = Very Slightly Included VVS = Very, Very Slightly Included IF = Internally Flawless FL = Flawless\nCarat is a measure of a diamond’s weight. A carat is one fifth of a gram. The larger the carat weight, the more rare and valuable the diamond. A diamond’s physical appearance is not only influenced by its carat weight, but also the quality of its craftsmanship. 
Properly proportioned diamonds will look larger and more beautiful than most poorly cut diamonds.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "Diamond — Carat Weight\nHistory of The Carat\nWeighing commodities as small and precious as gems demands a very small, uniform unit of weight. To meet this need, early gem traders turned to plant seeds that were reasonably uniform in size and weight. Two of the oldest were wheat grains and carob seeds.\nBoth were common in the gem-producing and trading areas of the ancient world. Wheat was a dietary staple, and indidual wheat grains provided a plentiful and relatively uniform weight standard.Our modern pearl grain, troy grain, and avoirdupois and apothecaries’ grains all derived from the wheat grain.\n(Diamond weights are sometimes approximated in grains) The carob, or locust tree, produces edible seed pods that are still important as feed for livestock and as a flavoring. Traders used the inedible seeds as a standard weight from which our modern metric carat evolved.\nCarat weight was standardized in the early twentieth century.\nIf you had purchased a ‘one-carat’ diamond in 1895, it might have weighed anywhere from 0.95 to 1.07 metric carats, depending on where you bought it.\nBut between 1908 and 1930, the standard metric carat was adopted throughout most of Europe and in Japan, Mexico, South Africa, Thailand, the USA, and the USSR.\nConsumers sometimes confuse the terms carat and karat. Although in some countries the two are synonymous, in the US, karat refers to the fineness of gold alloys (pure gold is 24 karat; 14 karat is 14 parts gold and 10 parts other metal or metals) and carat refers to gem weights.\n— Gemological Institute of America", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Diamonds are expensive and still, they are the most sought-after gemstones in the world. 
Have you ever thought what’s the exact reason for such high prices and why people are ready to pay the premium for the best quality diamonds? Let’s take a look and find the answers.\nNot All Diamonds are Expensive\nYes, there is no typo – not all diamonds are expensive. Most diamonds found in nature have a lot of imperfections and defects, which makes them relatively cheap. These low-quality diamonds are usually used in industry as parts of various tools.\nMost diamonds mined are not fit to be used in jewellery. The diamonds of passable quality are rare. The stones you can see in the jewellery stores have already been sorted out of mined diamonds according to several criteria which make them sellable and relatively pricey as a result.\nDiamonds of Good Colour are Rare\nColour is one of the main characteristics of diamond quality. The whiter or more colourless a stone, the more expensive it is.\nThe high price charged for colourless and near-colourless diamonds is a result of their rarity. Not all diamonds are white. Most diamonds mined come with noticeable yellowish tints and only a small percentage of them can be classified as colourless or near-colourless. Moreover, even the stones with faint yellowish hues are rare to find.\nHigh-Clarity Diamonds are Rare\nAnother important feature of a diamond is its clarity.\nAnd if it’s hard to find a diamond that has either a good colour or good clarity, it’s even harder to find a stone that has both.\nBigger Diamonds are Harder to Find\nIf you check the prices for diamonds of different carat weights, you will notice that the bigger the stone, the pricier it is per carat. 
The reason is that bigger diamonds are harder to find than smaller ones.\nThe bigger diamonds that have both high clarity and good colour are even rarer.\nExcellent Cut Diamonds Always Cost the Premium\nTo be prepared for sale, raw diamonds are cut into a certain shape, faceted and polished.\nTo exhibit maximum brilliance, the stone needs to have certain proportions which determine the quality of the cut. Besides, diamond cutters try to maximize carat weight while keeping the stone’s clarity possibly high. This is not an easy task.\nOftentimes, it is not possible to cut a diamond and keep it highly proportional, clear and big at the same time.", "score": 13.897358463981183, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "4 carat diamond prices per carat in U.S. dollars, by color (rows) and clarity (columns: IF, VVS1, VVS2, VS1, VS2, SI1, SI2):\nD 105,270 81,950 74,910 60,390 47,410 28,930 20,130\nE 81,840 74,580 64,460 54,010 44,770 27,720 19,580\nF 73,810 64,350 57,200 49,170 40,370 26,070 19,140\nG 55,660 50,050 45,540 42,790 34,980 22,770 17,600\nH 42,350 39,820 36,410 33,880 28,380 20,350 16,280\nI 30,470 28,820 27,060 24,970 21,780 16,940 14,410\nJ 24,640 23,320 22,110 20,350 18,040 14,850 12,540\nK 20,900 19,580 18,370 17,050 15,180 12,100 10,670\n4 carat diamond price depends on the 4Cs. Carat is one of the 4Cs and the most easily understood: it is the weight of the diamond. However, carat is not the same as karat – a measure of purity for gold – nor is it a unit of size. Carat weight does directly affect the size of a diamond, though not proportionately. Just as a person’s weight can indicate height, it does not determine it precisely. 
Other major price influencers are clarity, cut and color.\nFree Diamond Expert\nWholesale Price To The Public Since 1961\nThe Diamond Concierge Helping You to Estimate the Perfect Diamond at Wholesale Price. Always ask us first!\nHow much does it cost?\nDiamond Pricing: 4 carat diamonds are highly valued. 4 carat diamonds weigh 800 milligrams. A round brilliant cut has an average diameter of 10.2mm. The prices per carat for 4 carat diamonds range from $7,920 to $105,270 PER CARAT.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-1", "d_text": "Clarity is graded on a scale from Flawless to Included (FL, IF, VVS1, VVS2, VS1, VS2, SI1, SI2, I1, I2, I3). Higher clarity grades correspond to a higher value.\nThe cut of a diamond plays a crucial role in determining its appearance and value. A well-cut diamond will have optimal light performance, displaying superior fire, sparkle, and brilliance. Diamonds with excellent cuts are often more expensive than those with lower cut grades.\nFluorescence in a diamond refers to its ability to emit visible light when exposed to ultraviolet (UV) radiation.\nAlthough fluorescence can sometimes cause a diamond to appear hazy or cloudy, it does not always negatively impact the diamond's appearance. In some cases, it can improve the diamond's appearance by making a lower color grade diamond appear whiter.\nSymmetry refers to the alignment and balance of a diamond's facets. Diamonds with excellent symmetry will have well-aligned facets, leading to improved light performance and greater overall beauty. A diamond with poor symmetry may have misaligned facets that can negatively impact its sparkle and value.\nThe polish of a diamond refers to the smoothness of its surface. A well-polished diamond will exhibit minimal surface blemishes, leading to a cleaner appearance and better light performance. 
Poorly polished diamonds can have surface irregularities that detract from their beauty and value.\nDiamond Price Per Carat\nThe price per carat of a diamond can vary significantly based on factors such as weight, cut, color, and clarity. Diamonds are typically priced on a per-carat basis, with a price list that may include discount or premium rates depending on the specific quality of a diamond.\nFor example, a 1 ct radiant cut diamond ring may have a different price per carat compared to a 2 ct princess cut solitaire engagement ring due to the differences in cut, carat weight, and other factors. It's essential to be aware of these price ranges to make an informed decision when purchasing a diamond.\nThe matrix breakdown for diamond prices often includes the following categories:\n- Carat weight: The size of the diamond, measured in carats, significantly impacts the price per carat. Generally, larger diamonds command a higher price per carat.\n- Cut: The cut of a diamond refers to its shape and proportions. A well-cut diamond reflects light beautifully, thus raising its value.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-1", "d_text": "Even if you are not currently in the market for a diamond, here are some practical recommendations syncing carat weight and beauty with your budget.\nTake the Experts With a Grain of Salt When It Comes To Budget!\nSearching the topic of how much to spend on a diamond engagement ring can be confusing! Some “experts” suggest spending about three months' salary. Others give you numbers on what the “average” couple spends in a particular demographic. Their suggestions are not the gospel, just guidelines. Don’t saddle yourself with unnecessary debt for the sake of your ego!\nBalancing Education And Emotion.\nChoosing a marriage partner relates to matters of the heart. It is a very emotional decision. Are they the “one”? Will I still be passionate about them five years from now? 
Deciding what weight of diamond to buy should be a more intellectually driven process. It requires understanding basic diamond characteristics and qualities, and how that fits your budget and taste. Choosing a diamond or any kind of jewelry can be intimidating. Jewelry jargon is a language unto itself. Professionals like myself can cause the consuming public's head to spin when we speak! The good news is that the online resources are extensive when it comes to diamond education. Continue to take advantage of them. Boom Shakala!\nFYI! The carat weight of a diamond should never be equated with the level of commitment to the relationship. When I became engaged to my first wife in 1973 (gosh I’m old!) I was earning $4.00 an hour. I bought her a solitaire diamond engagement ring with a .50 carat round brilliant diamond for $100.00. That marriage lasted forty years until she passed away in 2013.\nIgnore The Pressure.\nDeciding on carat weight requires perspective. The reality is a diamond is just a faceted chunk of carbon! The cost of mining, rarity, cutting, and marketing set the retail price. Remember, a diamond is a symbol of your love and commitment. It should never define it! You don’t have to cave to the jewelry industry’s pressure of bigger is always better!\nWhat Are You Trying To Accomplish?\nDo you get the idea that choosing the best carat weight is based on your budget and objectives? If money is no object, you can maximize carat weight without having to sacrifice on cut.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-1", "d_text": "The Rapaport Price List expresses prices per carat, but in actual cases, cutters put prices on diamonds based on the cost of the rough diamond plus overheads or expenses incurred in getting the stone ready for the customer.
It is important for customers to know about a rough diamond’s basic characteristics – the stones, macles, shapes, cleavages, and flats – when evaluating the diamond they are buying.\nThe rough diamond’s crystalline structure is important in determining its purchase price. It should be remembered that stones of similar size and shape can have different crystalline structures, and therefore will have different yields. A rough diamond will yield from 25 to 50 percent of its weight, depending on many factors.\nGenerally, when a rough diamond goes through processing, it loses half of its original weight. This is the reason why processed diamonds have a price tag 100% more than the original price of the rough diamond.\nUsually, a 1-carat diamond is created out of a rough diamond of 2 carats or more. Advances in technology, however, have made it possible for cutters to increase the percentage of weight retained during the cutting process.\nThere are machines and software systems that have been developed to help the rough cutting process, where you can see the flaws and what polished diamonds you can cut from the rough diamond. It then ties this into the pricing structures so you can see what margin you would make on each diamond after it has been cut and polished. The machine above is called the Sarin Galaxy 1000.\nIf you look above, the rough diamond goes through a stage of the cutting process. The machine evaluates the diamond without any impact or drilling. You can see exactly the flaws and inclusions, and maximize the type of diamonds that can be cut from the rough in the 2nd image. Everything is graphed out to properly set up the cut of the rough.\nThese machines would make pricing rough diamonds much easier; unfortunately, they come at a cost, so this is still not always feasible.
However, for a nominal fee, you can get your diamond rough evaluated; this way, you know what you can expect from the rough piece.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "INTERNATIONAL – One-carat diamonds are extremely popular, so it is no surprise that buyers keep searching for the price of a 1-carat diamond. Without pushing a list of diamond prices, this report suggests one of the best ways to get the price of a diamond.\nWhile diamond price lists might be the most common source for checking the price of diamonds, this practice is outdated and also quite unreliable. Most diamond and jewellery stores publish their own diamond price lists, and the information provided would obviously be biased in favour of their business.\nThe best way to get a price estimate for a one-carat diamond is to make use of a reliable diamond price calculator. We could take the GLITZKOIN Diamond Price Estimator (DPE) as an example and explain the concept. To begin with, this price estimator or calculator is an offshoot of the soon-to-be-launched DiaEx Diamond Exchange. The platform is configured to handle both B2B and B2C transactions. Participants trade on the platform devoid of middlemen and brokers.\nThe Diamond Price Calculator is now available and ready to use. The handy tool captures diamond price trends from international markets across the globe. Complex algorithms crunch the data and provide a market price estimate with an accuracy of plus/minus 15 percent. Diamond buyers and sellers using this tool are not obliged to trade on the DiaEx, and users pay nothing to use it.\nTo get a price estimate of a one-carat diamond, the user would need to input the 4c parameters of the stone. These include the color, clarity, cut and carat (weight) of the diamond being queried. This information is critical since the parameters have a significant impact on the price of the diamond. Most diamond purchases are made in the midgrade range.
For example, color could be in the G-J range, the clarity close to VS2 and the cut “Good”. If you have any specific 4c parameters in mind, select those from the DPE drop-down menus.\nThe next step after getting a reliable estimate of the price for a one-carat diamond is to look for the actual piece. Conventional diamond trade is dominated by middlemen, which results in the price getting inflated with multiple middlemen margins. Unfortunately, this is how the diamond industry has operated for decades.", "score": 11.600539066098397, "rank": 95}, {"document_id": "doc-::chunk-2", "d_text": "It’s important that you understand just what makes a cut diamond so valuable, so you don’t overpay when you go to purchase. Along with your purchase, you should receive a certificate that has a diamond cut grade for your diamond. That grade is determined by the four C’s we will discuss below. The higher the quality of your diamond, the more it will be worth.\nGIA & AGS\nGIA and AGS are not-for-profit grading labs and thus offer the best transparency in price estimation. Other certifications, such as EGL and IGI, use less stringent standards and may hedge for corporate profit. Therefore, it’s important to purchase diamond jewelry from a reputable jeweler and use their services when selling a small diamond. The average price for a small diamond is significantly lower than its market value, even if it’s smaller.\nOther, Less Important Valuation Factors\nThe four C’s aren’t the only metrics used when figuring out how much a 1-carat size diamond is worth. Other metrics that are evaluated include:\nThe thickness of the girdle – the edge formed where the bottom and top of the diamond meet. If the thickness of the girdle isn’t uniform, the symmetry of the diamond is thrown off and could negatively affect how much the center stone is worth.\nThe culet – the flat or faceted bottom point of a diamond.
The two factors that influence a diamond’s worth the most are the size of the culet and the angle. The best culet is small and perfectly positioned in the center of the diamond’s bottom.\nA laser inscription – this metric is shrouded in controversy because some believe laser inscriptions decrease a diamond’s value due to microscopic indentations on the diamond surface.\nHow Much Does a 1-Carat Diamond Cost?\nAs you can see, determining the cost of a 1-carat diamond is not dependent on any single factor. It’s a complex combination of factors and qualities, and if any of those variables rank low on the diamond grading scale, it affects diamond prices and the worth of the stone itself.\nMost of the time, you can count on the fact that the larger a diamond is, the more valuable it is, even with all of the other four C’s being relatively equal. This is mainly due to economics and the scarcity vs. demand we mentioned. The larger a center diamond is, the rarer it is. Rare equals valuable.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "Below you will learn the basic information you need to know if you are buying diamonds. The information here is just the bare-bones basics that any shopper should know before heading out to the boutiques. Bija Bijoux is not your average diamond ring store, but we do offer custom design and have access to any diamond you may wish to purchase. In store, however, our diamonds are tiny, sparkly and stylishly set into funky jewellery designs by Canadian and U.S. jewellery designers. We are of the less flashy set but would like to help you find exactly what you want as well as ensure that you know exactly what you should be getting, even if not from us.\nPrice comparison of diamonds is only possible if you are comparing apples to apples.
The 4C’s provide a way to objectively compare and evaluate diamonds: Carat, Colour, Clarity and Cut.\nDiamonds and other gemstones are weighed in metric carats: one carat is equal to 0.2 grams. The diagram below will give you a visual on carat size.\nJust as a dollar is divided into 100 pennies, a carat is divided into 100 points. For example, a 50-point diamond weighs 0.50 carats. But two diamonds of equal weight can have very different values depending on the other three members of the Four C’s: clarity, colour and cut. The majority of diamonds used in fine jewellery weigh one carat or less.\nBecause even a fraction of a carat can make a considerable difference in cost, precision is crucial. In the diamond industry, weight is often measured to the hundred thousandths of a carat, and rounded to a hundredth of a carat. Diamond weights greater than one carat are expressed in carats and decimals. (For instance, a 1.02 ct. stone would be described as “one point zero two carats”)\nHOW DID THE CARAT SYSTEM START?\nThe carat, the standard unit of weight for diamonds and other gemstones, takes its name from the carob seed. Because these small seeds had a fairly uniform weight, early gem traders used them as counterweights in their balance scales. The modern metric carat, equal to 0.2 grams, was adopted by the United States in 1913 and other countries soon after. Today, a carat weighs exactly the same in every corner of the world.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-9", "d_text": "View 1 carat diamond engagement rings purchased by our customers, and then create your own!
GIA certified diamond ring with round cut diamond 1.73 carat (L Color, VVS1 Clarity) and round cut 1.04 carat (natural faint pinkish brown, VS-1 clarity) with 3.25 carats in a cluster diamond setting in 18k white gold. Size 6.75. This GIA certified ring is currently size 6.75, and some items can be sized up or down, please ask.\n1 Carat Diamond Engagement Rings. Honor your beloved with a stunning 1 carat diamond engagement ring from Zales. Choose from classic solitaires or stunning multi-stone styles set with traditional white or unique colored diamonds. Shop our selection of 1 carat and larger engagement rings today.\nThe most straightforward way to figure out the market value of a diamond is to check the prices of stones with the same carat weight and of the same clarity, cut, and color grades. You can do your research online and calculate an average price, which will serve as an estimate of your diamond's value. For example, if the stone in your ring has. Carat Total Weights. Rings that have more than one diamond will generally list all the carat weight of the stones together, especially in advertisements. It will say something like: 3/4 CTW SI-I, H-I-J. This means all the diamonds added together will equal 3/4 carats (CTW = Carat Total Weight), and the diamond quality could be anywhere from SI to I clarity, and anywhere from H color all the.
", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-0", "d_text": "- September 17, 2018\n- Posted by: Mor Hazen\n- Category: carats blog posts\nHere are the 14 parameters used by diamond dealers the world over, now forming the basis of the Carats.io Diamond Pricing Algorithm taking the world’s most exclusive commodity into the digital age.\nDiamonds have always been a commodity in high demand. However, the variety and complexity of such precious stones have made it incredibly difficult to develop any standardized price evaluation for them. Instead, only those who had accumulated years of experience in the industry – the diamond dealers themselves – could name the price for the stones, after looking them over with their own two eyes.\nThis lack of standardization remained a formidable obstacle, one that kept diamonds the only major commodity without a financial market. Anyone wanting to purchase a diamond found themselves dependent on the subjective whims of individual dealers, while having to re-appraise the gems at each stage of the trading process.\nHowever, Carats.io has found a way to overcome this problem, utilizing the latest machine learning techniques. By doing so, they are laying the foundations for a financial market for diamonds, taking the world’s most exclusive commodity far from its primitive origins and into the digital age.\nThe developers at Carats.io realized that by pairing sophisticated data analysis methods together with a large database of tens of millions of diamond transactions, they could develop an evaluation model that could automatically assess the price of any diamond. Using data from some of the world’s largest diamond exchanges, they trained the Diamond Pricing Algorithm (DPA) – the world’s first standardized diamond pricing mechanism.\nSo what makes diamonds so difficult to price? We’ve listed here the 14 complex parameters included in the DPA.\n1.
Certificate – Grading certification from the Gemological Institute of America (GIA).\n2. Carat – Weight measurement of the stone. (One carat is the equivalent of 0.2 grams.)\n3. Color Grading – White diamonds are given high ratings for colorlessness, while colorful diamonds are rated based on intensity and purity.\n4. Clarity – Imperfection grading. A high clarity or ‘flawless’ diamond won’t have blemishes or inclusions that disrupt the flow of light.\n5. The Cut/Design Grading – The man-made aspect of a polished diamond is graded for its proportions and design finish.\n6.", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-2", "d_text": "Should you cut one large round that will sell for more per carat but wastes more of the rough? Two smaller pears that sell for less but waste less rough? What will give you the best yield? These are complicated calculations, even today when machines like the Sarin allow for precise measurements and three-dimensional visualization.\nThen once the decision is made, the rough may be sliced by lasers, preformed, then finally polished. (Which requires using diamond powder: nothing else will cut a diamond.) Cutters make mistakes: sometimes things don’t go according to plan and an expensive rough diamond cracks or shatters. These experts spend hours (and for large gems even weeks) on each diamond. That adds to the cost.\n5. The Grade Determines the Price\nNow that expensively-financed diamond inventory needs to be graded by the GIA. That adds $120 and a few more weeks of financing to the cost of a one-carat diamond. But more importantly, that grading report sets the price of your diamond. Even if you paid more for the financing, the rough, the sorting, and the cutting, you are competing with everyone else in the world that has a G color, VS2 clarity, excellent cut grade diamond.\nGrades make it easy for diamond professionals to communicate precisely about diamond quality. 
But they also make the business ruthlessly competitive. Today, dealers and retailers both mark up diamonds very little. There is less mark up in the system than ever in history and diamonds move faster around the world.\n6. People Around the World Want Diamonds\nNow that De Beers only accounts for half of the diamond production around the world, the company no longer funds consumer marketing to help drive demand for diamonds. A whole generation of consumers have grown up without seeing any generic diamond advertising. And yet people all around the world still want to buy diamonds.\nWhile once most diamonds were sold in the United States and Europe, today developing markets like China and India are growing rapidly. Demand for diamonds will continue to increase as these markets become more affluent.\nHowever despite sales growth, there is so much competition at every stage of the distribution channel that the diamond industry itself is shrinking and consolidating: one mistake or a loss of financing can mean a company goes out of business. There are fewer survivors: fewer retailers, fewer manufacturers, fewer diamond wholesalers (most diamonds today go directly from the cutter to the retailer), and fewer rough dealers.", "score": 8.086131989696522, "rank": 100}]} {"qid": 32, "question_text": "How big is Leonardo da Vinci's Last Supper painting, and where is it located?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "Located on the wall of the dining room of the former Dominican convent of Santa Maria delle Grazie in Milan, exactly in the refectory of the convent, the Last Supper, a late 15th-century mural painting by the great Italian artist Leonardo da Vinci, is one of the most famous and fascinating, most studied and reproduced and the subject of many legends and controversies. 
Commissioned by Ludovico Sforza, the Duke of Milan and painted by the master artist between 1495 and 1498, the coloured plaster of the enormous fresco measuring 15 by 29 feet (4.6 x 8.8 meters), covers the entire wall of the refectory, although the room was not a refectory at the time that Leonardo painted it.\nInstead of using tempera on wet plaster, the usual method of fresco painting, Leonardo painted it on dry plaster, as he sought a greater detail and luminosity than could be achieved with traditional fresco and it resulted in a more varied palette. In fact, traces of gold and silver foils have been found which testify to the artist's eagerness to make the figures much more realistic. He chose to seal the stone wall with a double layer of dried plaster, composed of gesso (gypsum prepared with glue), pitch, and mastic. After that, he added an undercoat of white lead to enhance the brightness of the oil and tempera that was applied on top. Unfortunately, his experiment did not work, as the painted plaster began to flake off the wall almost immediately. Various authorities have struggled to restore it ever since.\nThe layout of the fresco is largely horizontal. All the figures are set behind the large table, which is seen in the foreground of the image. The painting is also largely symmetrical with the same number of figures on either side of Jesus.\nThe Last Supper is the visual interpretation of the artist of an event chronicled in all four of the Gospels, depicting the evening before Christ was to be betrayed by one of his disciples. He gathered them all together to eat and to tell them that he knew what was going to happen soon. 
The fresco depicts the next few seconds in this story, after Christ dropped the bombshell that one of his disciples would betray him before sunrise, and the different reactions of horror, shock and anger of the twelve apostles.", "score": 52.869664741700625, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Last Supper (Leonardo da Vinci)\nThe Last Supper (in Italian: Il Cenacolo or L’Ultima Cena) is a late 15th century mural painting by Leonardo da Vinci in the refectory of the Convent of Santa Maria delle Grazie.\nCommissioned from Leonardo da Vinci by the Duke of Milan, Ludovico Sforza, The Last Supper was part of an overall renovation of the church and its convent buildings, and was commenced around 1495.\nIn the painting, which represents the scene of The Last Supper of Jesus with his disciples, as told in the Gospel of John, 13:21, Leonardo depicted the consternation that occurred among the Twelve Disciples when Jesus announced that one of them would betray him.\nSanta Maria delle Grazie\nSanta Maria delle Grazie (“Holy Mary of Grace”) is a church and Dominican convent in Milan, northern Italy, included in the UNESCO World Heritage sites list.\nThe church contains the mural of The Last Supper, which is in the refectory of the convent.", "score": 50.01012814484281, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "The Last Supper - a picture that tells thousands of words\nThere is something for everybody in this painting, depending on how you interpret the way the elements are arranged and how they interact. In the end, it is up to you what you make of it.\nThe Last Supper (Ultima Cena) is a late 15th-century mural in the refectory (dining hall) of the Convent of Santa Maria delle Grazie in Milan. The large mural was commissioned in 1495 as part of a scheme of renovations to the church by Leonardo's patron Ludovico Sforza, the wealthy Duke of Milan.
The original story of the Sun (Helios or Son of God) and His betrayal (during the autumn equinox) is depicted in code by Da Vinci in this painting of Christ and his twelve disciples. We visited the painting and had no time to take in all the details. If ever you go to Milan, be well prepared with all the information before you do so.\nHe arranged the 13 men in such a way as to suggest astral ambiguity. Let us explore the background to Leonardo’s beliefs to find out more. The cycle of birth, death and resurrection of the sun (Helios) of God has been going on since the beginning of creation: the cycles or circles of day and night, the twelve months and the four seasons. They travel round the heavens and spin like wheels. In the Book of Revelation, John mentioned wheels, animals and strange beasts to describe the characteristics and activities of the stars of heaven, where God is. \"Our Father who art in heaven.\"\nExploding Myths and Legends about Da Vinci’s Last Supper\nDan Brown is not the only one to set off a volley of speculation about Leonardo da Vinci’s Last Supper. What about the stargazers? Some Christians tend to regard the symbolic significance of heavenly bodies in the sky as coming from the devil’s fortune tellers. To them, the zodiac and information about how the universe works belong to demons and soothsayers because they can foretell the future. The truth is that a cyclical event such as the changing of the seasons within a twelve-month cycle goes on regardless of how people try to foretell its future. We have different characteristics and are connected to groups of stars.
(\"Thy will be done on Earth as it is in Heaven\" – the stars.)", "score": 47.45166309996755, "rank": 3}, {"document_id": "doc-::chunk-1", "d_text": "Along the right side of the church are pairs of ogival windows with round oculi set between the points of each pair.\nSanta Maria Delle Grazie: The Last Supper\nThe Last Supper was painted between 1494 and 1497 within the framework of the major renovation of the Monastery of Santa Maria delle Grazie commenced by order of Ludovico Sforza in 1492. Instead of fresco, the normal Renaissance technique for wall paintings, Leonardo used tempera on a dry wall, which gave him the greatest possible freedom to alter and correct the composition while the work was in progress and enabled him to obtain particular effects of colour. It is, however, partly as a result of this choice that the painting is now in a very poor state of preservation.\nAlthough the Last Supper — a representation of the Eucharist — was a subject traditionally depicted in the refectories of monasteries, especially in Florence, Leonardo took a radically innovative approach with marked accentuation of the dramatic elements of the scene.\nChrist’s announcement to the disciples that one of them will betray him causes reactions of shock and astonishment, typical instances of the ‘motions of the mind’ that Leonardo investigated with such interest in his studies. In a setting of extraordinarily exact perspective, which seems to suggest that the scene is taking place inside the refectory of the Dominican monks in Milan, colour is used to define the effects of the light entering both from the three windows in the background and from the real one in the room.\nPreviously distorted by attempts at restoration carried out over the centuries, Leonardo’s original painting has now emerged as a result of the work begun in 1978 and completed in 1999. 
This involved addressing complex problems as regards not only the painting itself but also the environment of the refectory, in order to protect the work from the dust, fumes and humidity identified as the primary causes of its constant deterioration.\nTours & Tickets for your Visit:", "score": 46.330625993074094, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "Skip the Line: Entrance Ticket to Leonardo Da Vinci's The Last Supper in Milan\nDuration: 40 minutes\nFill your day with art, and see Leonardo da Vinci’s “The Last Supper” in Milan with a skip-the-line ticket. The biblical scene painting has become one of the world's most famous pieces of art. So skip past the long lines of people waiting to see it, and head straight inside for an up-close look at the masterpiece itself. From a knowledgeable guide, learn about Leonardo and the Renaissance, and take in the splendor of its setting, a Dominican convent near the UNESCO World Heritage–listed Church of Santa Maria delle Grazie.\nGet your MilanoCard in three easy steps.", "score": 45.029041860582325, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Tickets for The Last Supper: Skip The Line\nThe ultimate Da Vinci experience - available on Wednesdays and Saturdays\n- See a late 15th-century mural painting by Da Vinci in the refectory of the Convent of Santa Maria delle Grazie – it's one of the world's most famous paintings!\n- The church was heavily damaged in an Allied bomb attack during WWII. Somehow, Da Vinci's fresco survived\n- Learn all about Da Vinci's life and work from our free downloadable brochure\nTickets to see Leonardo da Vinci's The Last Supper in Milan are hard to come by, but we've got them! You'll also get to roam around the rest of the stunning Santa Maria delle Grazie. Other highlights are the altarpiece by Titian and Bramantino's fresco.
But, of course, you're here for The Last Supper.\nThese tickets are for early evening entrance to see Da Vinci's 15th-century fresco of The Last Supper, one of the most studied and revered artistic achievements in the world.\nThe Last Supper is not actually a true fresco: Da Vinci chose to experiment with layering tempera paint. This technique has caused problems with the preservation of the work, but after surviving an Allied raid in 1943, it's clear this work was meant to endure.\nLovingly restored to its original glory, da Vinci's masterpiece is even more impressive when you don't have to queue. You can also explore the rest of the stunning Santa Maria delle Grazie church while you're there.\nDon't miss out on this once-in-a-lifetime experience!\n- Skip-the-line access to The Last Supper\n- Downloadable mobile brochure\n- Show your smartphone ticket at the entrance of Santa Maria delle Grazie\n- Make sure to be there at least 10 minutes before your chosen timeslot\nCancellations are not possible for this ticket.\n- Metro: MM1/MM2 to Cadorna\n- Tram: 16 or 19\n- Bus: 18, 50, 58 or 94\nIf you really want to understand the great artwork itself, I recommend the Leonardo3 museum, where you will find a lot more valuable information.", "score": 44.92990783195665, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "See Da Vinci’s painting “The Last Supper” in Milan’s church of Santa Maria delle Grazie. Get a brief introduction to an Italian masterpiece of the Renaissance, and enjoy at least 15 minutes to appreciate it at its best.\nAbout this activity\n- Free cancellation\n- Cancel up to 24 hours in advance to receive a full refund\n- Covid-19 precautions\n- Special health and safety measures apply. 
Learn more\n- Mobile ticketing\n- Use your phone or print your voucher\n- Duration 30 minutes\n- Check availability to see starting times.\n- Skip the ticket line\n- Instant confirmation\n- Live tour guide\n- Wheelchair accessible\nSelect participants and date\nMeet outside the entrance to the Last Supper Museum: Piazza S. Maria delle Grazie, Milan\nWhat to bring\n- Passport or ID card, copy accepted\n- Passport or ID card for children\n- Luggage or large bags\n- Food and drinks\nKnow before you go\n- Due to new security measures bags of any size, food and drinks are not allowed in the museum. If necessary, your guide will escort you to the lockers so you can leave them during check-in.\n- Please note, for the Last Supper visit children up to the age of 1 do not need a reservation if they are carried by a parent and enter without a stroller.\n- Please provide the full name and date of birth of everyone in your group at the time of booking.", "score": 42.94442702601388, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "Around 1495, Ludovico commissioned Da Vinci to paint “The Last Supper” on the back wall of the dining hall inside the monastery of Milan’s Santa Maria delle Grazie. The masterpiece, which took approximately three years to complete, captures the drama of the moment when Jesus informs the Twelve Apostles gathered for Passover dinner that one of them would soon betray him.\nAfter brief stays in Mantua and Venice, Da Vinci returned to Florence. In 1502 and 1503, he briefly worked as a military engineer for Cesare Borgia, the illegitimate son of Pope Alexander VI and commander of the papal army. He traveled outside of Florence to survey military construction projects and sketch city plans and topographical maps.
He designed plans to divert the Arno River away from rival Pisa in order to deny the wartime enemy access to the sea.\nDa Vinci started working in 1503 on what would become his best-known painting—and arguably the most famous painting in the world—the “Mona Lisa.” The privately commissioned work is characterized by the enigmatic smile of the woman in the half-portrait.\nDa Vinci moved to Rome in 1513. Giuliano de’ Medici, brother of newly installed Pope Leo X and son of his former patron, gave Da Vinci a monthly stipend along with a suite of rooms at his residence inside the Vatican. His new patron, however, also gave Da Vinci little work. Lacking large commissions, he devoted most of his time in Rome to mathematical studies and scientific exploration.\nAfter Da Vinci attended a 1515 meeting between France’s King Francis I and Pope Leo X in Bologna, the new French monarch offered him the title “Premier Painter and Engineer and Architect to the King.” Da Vinci did little painting during his time in France. One of his last commissioned works was a mechanical lion that could walk and open its chest to reveal a bouquet of lilies. He continued work on his scientific studies until his death at the age of 67 on May 2, 1519.", "score": 39.84827215058224, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "The article states, \"Leonardo da Vinci drew Jesus's [sic] Last Supper on a wall in 1498, and it was immediately recognized as a masterpiece.\"\nDa Vinci was commissioned to paint by Ludovico il Moro, from 1496 to 1498, in the refectory of the Dominican convent of Santa Maria delle Grazie.\nThis was extraordinarily hard work; the technique used to apply the paint to the wall is called fresco, not \"a drawing\". 
The technique involves a great deal of knowledge and experience to know how the paint and plaster will dry; how much color medium (in this case egg tempera) could be added to the plaster and still allow the latter to dry correctly; what sort of colors will be produced. Samwell 19:35, 8 September 2007 (EDT)\nRecat, and probably unprotect\nCould someone please recategorize this to Category:Bible? Thanks. Also, there's been no vandalism to this page, and it could probably use an expansion; if you could unprotect it too. Thanks. TheEvilSpartan 19:14, 10 January 2008 (EST)", "score": 39.568127585052316, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Painting is poetry that is seen rather than felt, and poetry is painting that is felt rather than seen.” - Leonardo da Vinci\nFrom 1482 until about 1499, Leonardo made a living from his artistic skills in Milan. It was there that he was able to prove his superb talent as a painter, as he was commissioned to complete two significant paintings. These artworks included The Virgin of the Rocks, which he painted for the Confraternity of the Immaculate Conception. Another painting that he made was The Last Supper, intended for the Santa Maria delle Grazie Monastery. In 1485, Leonardo decided to visit Hungary, where he met Matthias Corvinus, the King of Hungary, for whom Leonardo is believed to have painted the masterpiece "Holy Family".\nBetween the years 1513 and 1516, Leonardo spent a huge amount of his time in the Belvedere, situated in the Vatican in Rome. Two other artists, Michelangelo and Raphael, were quite popular at that time. Leonardo, along with these two artists, was under the guidance of Pope Leo X.\nIn 1515, Milan was recaptured by Francis I. Leonardo was among those present at a meeting between the Pope and Francis I. The said meeting was held in Bologna.
Having learned of Leonardo's exceptional skills, Francis I commissioned him to create a mechanical lion capable of moving forward and opening its chest filled with lilies.\nIn 1516, Leonardo became a part of Francis' service, and he was given a permanent residence at the Clos Luce, the manor house located near the Chateau d'Amboise, the king's royal residence. Leonardo lived out the final three years of his life there. Alongside him was an apprentice and friend by the name of Count Francesco Melzi. Furthermore, Leonardo obtained a pension that amounted to 10,000 scudi.\nOn May 2, 1519, Leonardo died at his residence at the Clos Luce. It was also noted that during his last years, Francis I had become one of his closest friends; the king is even said to have held Leonardo's head as he died. However, some accounts suggest this story may be fictitious.", "score": 37.776899789649875, "rank": 10}, {"document_id": "doc-::chunk-2", "d_text": "When you go, make sure you turn around to appreciate the painting at the opposite end of the room. I was so overwhelmed by the pale beauty of the Leonardo that I did not give due attention to this other painting. This wall of the refectory is covered by the Crucifixion fresco by Giovanni Donato da Montorfano, to which Leonardo added figures of the Sforza family in tempera.\nLeonardo’s painting, as we all know, represents the scene of the Last Supper of Jesus with his apostles, as it is told in the Gospel of John, 13:21. Leonardo has depicted the consternation that occurred among the twelve disciples when Jesus announced that one of them would betray him.\nPhoto Credit: Italian Renaissance.org.\nLeonardo da Vinci, Last Supper, 1498, tempera and oil on plaster (Santa Maria delle Grazie, Milan); (photo: public domain)\nBut what are they eating at this important last dinner together as a group of friends? 
Well, what’s left on the table gives us a bit of a clue – plates of fish, bread, salt and wine. Perhaps, because it was a Passover meal, they had eaten bowls of slow-cooked beans – cholent – and had olives with a minty herb known as hyssop. Doubtless, they ate a salad of bitter herbs with pistachio and date charoset, a chunky fruit and nut paste.\nThe disciples, in groups of three, are physically linked by their gestures, but seem not at all interested in the table and its remains. Judas, deep in shadow, is looking rather withdrawn and taken aback by the sudden revelation of his plan. He is clutching a small bag, perhaps signifying the silver given to him as payment to betray Jesus, or perhaps a reference to his role as treasurer. He is also tipping over the salt cellar. This may be related to the expression to "betray the salt", meaning to betray one's Master. He is the only person to have his elbow on the table and his head is also horizontally the lowest of anyone in the painting. Peter looks angry and is holding a knife pointed away from Christ, perhaps foreshadowing his violent reaction in Gethsemane during Jesus' arrest. The youngest apostle, John, appears to swoon; it’s all very dramatic.", "score": 36.990580918376686, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "On the 15th August, 1943, bombing destroyed the great cloister of the Santa Maria delle Grazie monastery, but miraculously spared the three walls of the refectory, including the one with Leonardo Da Vinci’s famous ‘The Last Supper’. Was it a case of divine intervention or just dumb luck? Maybe we should thank the British and American air bombers for their bad aim? (It wasn’t until the US obliterated the monastery at Monte Cassino, Italy, in February 1944, that US attitudes changed toward the preservation of historical monuments and sites.) Nevertheless, it was the frantic efforts of the people of Milan that helped stabilize and sandbag the painting against any further bomb splinters.
Then, after the Second World War, the monastery was rebuilt with a ‘clean and stabilise’ restoration undertaken by the Italian restorer and painter Mauro Pellicioli between 1951 and 1953.\nIt wasn’t the first time in its history that Da Vinci’s painting of the Last Supper needed some love and care. Its last major restoration took place between 1979 and 1999. (The main source for the restoration was Giampietrino’s exact copy of The Last Supper, which he made in 1520. It includes lost details such as Jesus’s missing feet and the salt-cellar spilled by Judas. Giampietrino is believed to have been one of Leonardo Da Vinci’s pupils who worked closely with him when he was in Milan.) Today, the refectory wall that the painting sits on is sealed in a climate-controlled room. Hopefully, generations of art experts, people and pilgrims can enjoy viewing the original for a few more centuries.\nThis is an image of what remains of the refectory of the Santa Maria delle Grazie monastery after the 15th August bombing. You can see the protective sheets that were assembled to protect the Da Vinci wall painting of the Last Supper.\nThis is another view of the Santa Maria delle Grazie, in Milan, after the Allied bombing on 15th August 1943. The Da Vinci ‘Last Supper’ was arguably within metres of being destroyed.\nDa Vinci wasn’t the first to paint a Christian view of the Last Supper. This is Ugolino di Nerio’s The Last Supper circa 1325-30. Judas is the only one without a halo.", "score": 36.934347867345714, "rank": 12}, {"document_id": "doc-::chunk-6", "d_text": "It was most unfortunate that his masterpiece, “The Last Supper,” was painted on moist walls of a hall afterward used by Napoleon’s soldiers as a stable. And thus the picture became impaired almost past restoration. The hands of the disciples and the pose of heads in this great painting wonderfully express their character and, it is said, even their daily occupations. The drawing for the Saviour’s head (p. 
44) is perhaps the most exquisite face in the world, matchless, perfect, and poignant.\nAfter leaving Milan, on account of political disturbances, Leonardo became a wanderer for nineteen years, until he found a home in France under the patronage of the King, Francis I. The French have ardently appreciated Leonardo, and several of his great works are in the Louvre, including perhaps the best-known picture in the world, the “Mona Lisa,” the “Sainte Anne,” and others. He has profoundly influenced the French schools, and might thus be called the Father of French Painting.\nBerenson says of Leonardo that he was constantly striving for that subtler and subtler intensification of modeling by means of light and shade which he finally attained in his “Mona Lisa.”\nVery close to Leonardo may be placed Bernardino Luini (1475?-1533?), whose beautiful “Madonna and Child” (p. 142) is one of the most highly valued paintings in the National Gallery at Washington. Ruskin, indeed, placed Luini before Leonardo, his master, but such a view is extravagant. However, Luini was Leonardo’s most distinguished pupil. Born at Luino, on Lake Maggiore, the painter perhaps imbibed some of that poetic atmosphere of the Italian lakes, for it is said that he possessed a serene, happy, and contented mind, naturally expressing itself in forms of grace and beauty. His painting has been characterized as appealing to the emotions rather than the intellect and groping after the beauty of perfected Italian art. The “loving self-withdrawn expression” of his works has been noted, “a peculiarly religious grace, devoutness of the heart.” In Luini’s faces there is always a fleeting, almost wistful smile. This is true of our charming example (p. 
142), a rare “Madonna and Child” in that it depicts the Infant taking the first steps.", "score": 36.80262693664543, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "Tuesday, May 4, 1999 Published at 23:28 GMT 00:28 UK\nPeek at the Last Supper\nThe restoration work means more detail can be seen\nOne of the masterpieces of Renaissance art - The Last Supper by Leonardo da Vinci - will show radical changes when it is unveiled after 20 years of restoration work.\nExclusive preview access to the restored masterpiece, which is already dividing the art world, was given to the BBC's Nine O'Clock News programme.\nThe old master used an experimental fresco technique and, as a result, the painting started to flake away almost as soon as he finished it 500 years ago.\nRestorers have taken the bold step of stripping away all the paint from previous restoration attempts.\nBare areas of plaster were then painted in with watercolour.\nPinin Brambilla, head of the restoration team, said the painstaking work had revealed formerly hidden detail.\nShe said: \"Many faces were enlarged so they had a different physical structure. 
Some eyes had been rubbed out and painted over with small brush strokes.\n\"Underneath we found the original eyes - the eyes as they were originally painted.\"\nPhotographs of the painting taken in the 1940s show it had been in dreadful condition, but now lines which were crude and inexpressive are delicate and refined.\nThe food on the table and the creases on the cloth can now be seen clearly.\nThe mural is still far from perfect, and some critics feel too much paint has been removed.\nOxford University's Professor Martin Kemp said: \"The gains are that we now have a much better idea of what Leonardo's picture would have looked like from the surviving details, but we know that recovering what it looked like in total is in a sense an impossible job, so what has been fabricated is a late 20th century picture on the best information we have.\"\nRestorers have also taken possible future pollution of the masterpiece into account.\nVisitors wanting to see it will have to pass through a glass tunnel and several pressurised chambers, walking on anti-bacterial carpets while a stiff breeze blows any dirt or dust from their clothes.\nIt is estimated that nearly half a million people will see the painting this year when it is put on public display again.", "score": 35.41565815071218, "rank": 14}, {"document_id": "doc-::chunk-3", "d_text": "Seven conservators from England, Hungary, Italy, and the United States—including the J. Paul Getty Museum’s Sue Ann Chui—completed training at all phases of the painting’s complex structural treatment.\nTake a moment to imagine what it is like to handle a painting of such an enormous size—8 by 21 feet overall, composed of five panels made from 12 thick poplar planks. How many people does it take to maneuver these large components around? During structural conservation, how can you ensure that an intervention at the back gives the desired effect at the front? 
Asking these questions may help you begin to grasp the complexity of this conservation project.\nThere are many details to consider and problems to be solved, but to give you an idea of just one, consider this quote from Parri:\nThe treatment steps became more complicated from a technical point of view as we encountered a significant gap between the panels and the impossibility of bringing them closer due to the paint layer bridging them. After some brainstorming, we decided to apply wedge-shaped inserts along the previously prepared channels with the point facing down, as wide as the gap, to recreate the foundation on which to later set down and re-adhere the paint layers.\nIn 2013, the stabilization of the wood substrate was complete, and The Last Supper‘s five panels were reconnected for the first time in 47 years. It was a momentous occasion. The team’s solution was based on the support system originally devised by Vasari himself, which has stabilized the painting while also allowing the wooden panels to move naturally with standard temperature and humidity fluctuations.\nWork on the final conservation of the painted surface was completed with the generous support of the Prada Foundation. A conservation team led by OPD conservator Roberto Bellucci was able to recover an unanticipated amount of the original painted surface, revealing the artist’s hand in surprising detail. The most talented conservators in the field skillfully saved a significant painting that was deemed beyond repair. Allora, many congratulations to the OPD on this remarkable achievement; now it’s time to celebrate!\nWith the Arno still flowing nearby, there is always the looming threat of another major flood, despite the water management dams that have been constructed upstream. 
As an extra safety precaution, a high-tech yet simple device was installed.", "score": 33.743101096350145, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "We have received your inquiry and our staff will shortly be in contact with you.\nYou will receive a notification at your email address.\nIf you have any further questions, please feel free to contact us.\nFrom EUR 75\nFrom the exceptional Last Supper to the Codex Atlanticus.\nEnjoy a 3-hour small-group guided tour exploring Leonardo's masterpieces conserved in Milan.\nEnjoy the exceptional Last Supper and the Codex Atlanticus, an extraordinary collection of drawings and manuscripts which represents the genius of Leonardo Da Vinci, one of the great creative minds of the Italian Renaissance. The thousands of surviving pages of his notebooks reveal the most eclectic and brilliant of minds.\nTake this 3-hour guided tour with a qualified tour guide and discover the signs Leonardo left during his period in Milan while working for the ruling Sforza family.\nThe visit starts from the Church of Santa Maria delle Grazie, where in the refectory located nearby, you will admire the magnificent fresco painting of the Last Supper. Then, the tour will continue in Pinacoteca Ambrosiana where the Codex Atlanticus, the largest collection of manuscripts made by Leonardo, is preserved in the Sala Federiciana.\nYour professional tour guide will show you more than just the great legacy of Leonardo that everyone knows. You are guaranteed a professional, monolingual and, of course, enjoyable experience.", "score": 33.1981433262528, "rank": 16}, {"document_id": "doc-::chunk-1", "d_text": "On view is a self-propelled cart that has been compared to an automobile. 3-D videos created by the Florentine Galileo Museum dramatically illustrate Leonardo's theories. And for three months ending, alas, in January, the Leicester Codex had been on loan from Bill Gates. 
That Codex is one of Leonardo's personal sketchbooks dating from 1504-1508, acquired by Gates in 1994 from Armand Hammer.\nIn his youth, Leonardo worked for Ludovico il Moro in Milan, where he painted the now much-restored "The Last Supper" on a monastery wall. Not surprisingly, Milan offers eight Leonardo exhibitions. Through the coming months, the Ambrosiana, to name only one, offers a selection of 46 drawings by Leonardo drawn from the 1,750 in his famous Codex Atlanticus, normally seen by only a few scholars. The nearby National Museum of Science and Technology, among Europe's largest devoted to science and technology, opens a three-month exhibition on July 19 called "Leonardo Da Vinci Parade."\nOn view at the Ambrosiana will be 130 rarely visible models of Da Vinci projects - navigation, artillery, underwater engineering - built in the 1950s on the basis of Leonardo's drawings, along with fresco paintings by 16th-century Lombard artists. The models are unique in the world and include one of a flying machine made by Alberto Mario Soldatini and Vittorio Somenzi in 1953 on the basis of a Codex Atlanticus drawing. On two walls are paintings and frescoes, only rarely on view, on loan from the Pinacoteca di Brera. 
The works by the artists in Leonardo's circle in Milan include some by Bernardino Luini, and others recovered from now-destroyed churches, monasteries, and buildings in Milan.\nIn Rome, through September, the Primoli Foundation has organized an exhibition at the National Academy of the Lincei devoted to "Leonardo in Rome: Influence and Heritage." Also in Rome: at the Palazzo della Cancelleria near Campo de' Fiori is a permanent exhibition devoted to Leonardo, with large-scale models of his projects.\nClose to Rome, at Civitella del Lago, on Lake Corbara in the province of Terni, there was an exhibition entitled "On the Traces of Genius: Maps and Cosmography in the Time of Leonardo."", "score": 32.97222531634959, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "Painted onto the walls of monastic dining rooms in the 14th-16th centuries, Florence's Last Supper frescoes were designed to inspire contemplation on the Christian faith's greatest mysteries. Still well preserved and quickly toured on any visit to the city, they hold much of their original magic. They are yours to uncover.\nWhat's in this guidebook\n- A tour that goes deeper. Following our tradition of being the most valuable resource for culture-focused travelers, we provide a detailed tour of nine of Florence's most important Last Supper frescoes executed over a 250-year period by artists Gaddi, Orcagna, Ghirlandaio, Castagno, Perugino, Franciabigio, Sarto and Allori. The tour walks you through the highlights, aided by high-resolution images and a discussion that ties it all together.\n- The influence of Leonardo's Last Supper. We also profile Leonardo da Vinci's iconic Last Supper fresco in Milan (1496-1498), pointing out how its innovations went on to shape later Florentine representations. 
Since Leonardo’s work occurs roughly at the midpoint of our timeline of reference, we can assess frescoes before and after its completion, clearly discerning its impact.\n- Advice for getting the best cultural experience. To help you plan your visit, this guidebook offers logistical advice and provides links to online resources. Plus, we provide our personal tips for getting the most from your experience while on location.\n- Information the way you like it. As with all of our guides, this book is optimized for intuitive, quick navigation; information is organized into bullet points to make absorption easy; and images are marked up with text that explains important features.", "score": 32.44211070113042, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "The Supper was inspired by Leonardo da Vinci's "The Last Supper" painted 500 years ago on the wall of a Dominican cloister in Milan, Italy.\nThe twelve disciples and Jesus, left to right: Bartholomew, James the Younger, Andrew, Judas, Peter, John, Jesus, Thomas (finger raised), James, brother of John, Philip, Matthew, Thaddeus, Simon. An image that is twice the size (232KB) is available here\nAn example of some of the pictures that await.\nI would like to introduce you to my daughter's new quilt book "House Party, Coordinated Quilts and Pillows". Here's a brief description (pdf). To order, contact the publisher, Martingale & Company; it is also available from Amazon.\nSome videos of Don discussing different aspects of the Supper Quilt (for a full listing, go to YouTube and search for "Supper Quilt"): Talking about his first quilt; Dyeing for Jesus (Joy Press); Be careful where you put the Staple; Flash photography Treatment; The burning bush? Thing; Long Arm Quilting (Linda Taylor)\nWe estimate that over 850,000 people have viewed The Supper Quilt. We would like to visit every state, but we are short 12. 
Please help us out.", "score": 32.43501807879074, "rank": 19}, {"document_id": "doc-::chunk-1", "d_text": "This project is clearly a once-in-a-lifetime opportunity, and our museum is happy to collaborate with Leighton House to organise exhibitions in London and in Ponce that will surely heighten Leighton’s standing and international reputation.”\nGiorgio Vasari, Last Supper, 1546, Florence, Santa Croce (Image: ZEPstudio/Opera di Santa Croce)\nMeanwhile, in Italy, on the same evening, the Italian president Sergio Mattarella graced the unveiling of Giorgio Vasari’s newly restored Last Supper (1546), the five-panel, 8ft by 12ft painting. Stored in the Museo dell’Opera di Santa Croce, the historic work of art had been submerged in polluted water for 12 hours when the Arno River flooded on 4 November 1966. And the unveiling of the restored masterpiece at the refectory of the Basilica di Santa Croce was timed to coincide with the 50th anniversary of the flood, in which thousands of the city’s art treasures had been lost.\nThe restoration work was undertaken by Florence’s renowned Opificio delle Pietre Dure e Laboratori di Restauro (OPD). Marco Ciatti, head of the OPD’s panel paintings department and now the organisation’s soprintendente, had described the extent of damage to the Last Supper in a 2010 interview as “terrible”, calling it “one of the worst cases in conservation history”. 
And at the restoration unveiling, he praised and thanked all those involved in the “restoration and conservation project”, which was supported by the Italian fashion house Prada, Italy’s National Civil Protection unit and the Getty Foundation.\nConservators working on The Last Supper (image copyright Opificio delle Pietre Dure)\nAccording to other reports, restorers were reluctant to even touch The Last Supper for decades owing to the extensive damage: the gesso, a type of glue that held the paint to the wood panels, had sagged during the flood, while the panels cracked and curved after they dried. It had thus been kept in storage, laid horizontally because of its fragility.\nCiatti stated, “The painting was also covered with a thin crust of dried paper. Volunteer conservators who rushed to save works of art after the 1966 flood had attached conservation paper to keep the paint from sagging off the wet wood.
This experimental technique allowed for chromatic brilliance and extraordinary precision, but because the painting is on a thin exterior wall, the effects of humidity were felt more keenly, and the paint failed to adhere properly to the wall.\n*: Its mathematical symbolism, psychological complexity, use of perspective and dramatic focus make it the first real example of High Renaissance aesthetics.\n*: In short, the painting captures twelve individuals in the midst of querying, gesticulating, or showing various shades of horror, anger and disbelief. It’s live, it’s human and it’s in complete contrast to the serene and expansive pose of Jesus himself.\n*: As in all religious paintings on this theme, Jesus himself is the dynamic centre of the composition. Several architectural features converge on his figure, while his head represents the vanishing point for all perspective lines – a device which makes The Last Supper the epitome of Renaissance single-point linear perspective. Meanwhile, his expansive gesture – indicating the holy sacrament of bread and wine – is not meant for his apostles, but for the monks and nuns of the Santa Maria delle Grazie monastery.\n*: In most versions of The Last Supper, Judas is the only disciple not to have a halo, or else is seated separately from the other apostles. Leonardo, however, seats everyone on the same side of the table, so that all are facing the viewer. Even so, Judas remains a marked man.", "score": 31.661363093036247, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "In Leonardo da Vinci’s interpretation, the scene represents the moment before the birth of the Eucharist, with Jesus reaching for the bread and a glass of wine that would be the key symbols of the Christian sacrament.\nIn recent years, much of the interest in the painting has centered on the details hidden within the painting. Many scholars have discussed the hidden meaning of the spilled salt container near Judas's elbow. 
The spilled salt has been said to represent his betrayal, or to symbolize bad luck, loss, religion or Jesus as the salt of the earth. It is also claimed by some biblical scholars that the disciple painted at the right hand of Jesus is John. However, many art and biblical scholars believe the figure is actually a woman, since she is the only person in the picture wearing a necklace or a sizeable charm, and argue that in all probability she is Mary Magdalene.\nIt is quite evident from the painting that many of the figures on the right side of the table seem to be talking to or at Mary, while Jesus appears to be engaged with those on the left, who reflect the concern the disciples must have felt when told one among them would betray Christ. A closer look at ‘John’ will also reveal a resemblance to the Mona Lisa, which the master artist painted five years later.\nUnfortunately, in the ensuing centuries, the Last Supper fell victim to abuse several times. In 1652, a new door was cut through the wall bearing the deteriorating painting, removing a chunk of the artwork showing the feet of Jesus. The painting endured additional irreverence in the late 18th century, when the invading troops of Napoleon Bonaparte used the refectory as a stable and further damaged the wall with projectiles.\nDuring World War II, the roof and one wall of the refectory collapsed due to Allied bombing. Though the painting survived, it was exposed to the elements for several months before the space was rebuilt.\nThe Last Supper was subjected to numerous restoration attempts, including an extensive and controversial 20-year restoration that was completed in 1999. 
However, many critics argued that the restorers had removed so much of the painting that very little was left of Leonardo's original work.", "score": 31.134254135385433, "rank": 22}, {"document_id": "doc-::chunk-31", "d_text": "*: Leonardo arranges the figures into a pyramid of form, with the figures establishing a solid geometric shape in space which has the Virgin's head at the apex.\n*: As a result, the composition is stable and balanced, but the gestures lead the eye back and forth to suggest the relationships among the figures. The selective light, quiet mood, and tender gestures create a remote, dreamlike quality, and make the picture seem a poetic vision rather than an image of reality.\n*: the relationship between poetry and painting\nSanta Maria delle Grazie\n*: Leonardo also simultaneously depicts Christ blessing the bread and saying to the apostles “Take, eat; this is my body” and blessing the wine and saying “Drink from it all of you; for this is my blood of the covenant, which is poured out for the forgiveness of sins” (Matthew 26). These words are the founding moment of the sacrament of the Eucharist (the miraculous transformation of the bread and wine into the body and blood of Christ).\n*: Leonardo’s Last Supper is dense with symbolic references. Attributes identify each apostle. For example, Judas Iscariot is recognized both as he reaches toward a plate beside Christ (Matthew 26) and because he clutches a purse containing his reward for identifying Christ to the authorities the following day. Peter, who sits beside Judas, holds a knife in his right hand, foreshadowing that Peter will sever the ear of a soldier as he attempts to protect Christ from arrest.\n*: The balanced composition is anchored by an equilateral triangle formed by Christ’s body. He sits below an arching pediment that, if completed, traces a circle. 
These ideal geometric forms refer to the Renaissance interest in Neo-Platonism (an element of the humanist revival that reconciles aspects of Greek philosophy with Christian theology). In his allegory, “The Cave,” the ancient Greek philosopher Plato emphasized the imperfection of the earthly realm. Geometry, used by the Greeks to express heavenly perfection, has been used by Leonardo to celebrate Christ as the embodiment of heaven on earth.\n*: Leonardo rendered a verdant landscape beyond the windows. The landscape is often interpreted as paradise, and it has been suggested that this heavenly sanctuary can only be reached through Christ.\n*: The twelve apostles are arranged as four groups of three and there are also three windows. The number three is often a reference to the Holy Trinity in Catholic art.", "score": 31.0503212182643, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "San Giovanni (or San Giusto) della Calza\nFranciabigio Last Supper\nLocated just outside the Porta Romana, this church started out in the 14th century as a convent hospital with a mouthful of a name, San Giovanni della Porta di San Pier Gattolino, run by the Dames of Malta. When they were forced to move out during the 1529 Siege of Florence, they were replaced by Jesuits, who changed the dedication. The calza referred to the white hoods. In 2000, after a thorough restoration, the complex became a hotel and congress centre.\nThe interior of the church has zebra stripes, its original matroneum or women's gallery, and an altarpiece by Jacopo da Empoli. 
Although most of its other paintings have since departed for the Uffizi, there's a pretty cloister with a glass pyramid in the centre, and the refectory with one of the seven Cenacoli or Last Suppers in Florence. This one, painted by Franciabigio in 1514 for the Dames of Malta shortly before they left, is reminiscent of Leonardo's version in Milan, where every figure is in movement, reacting to Christ's announcement that one among them would betray him.
Piazza della Calza 7
Hours By appointment, +39 055 222287", "score": 30.339871888826853, "rank": 24}, {"document_id": "doc-::chunk-10", "d_text": "However, many early Church Fathers have attested to the belief that at the Last Supper, Christ made the promise to be present in the Sacrament of the Eucharist, with attestations dating back to the first century AD. The teaching was also affirmed by many councils throughout the Church's history.
The Last Supper has been a popular subject in Christian art. Such depictions date back to early Christianity and can be seen in the Catacombs of Rome. Byzantine artists frequently focused on the Apostles receiving Communion, rather than the reclining figures having a meal. By the Renaissance, the Last Supper was a favorite topic in Italian art.
There are three major themes in the depictions of the Last Supper: the first is the dramatic and dynamic depiction of Jesus' announcement of his betrayal. The second is the moment of the institution of the tradition of the Eucharist. The depictions here are generally solemn and mystical. The third major theme is the farewell of Jesus to his disciples, in which Judas Iscariot is no longer present, having left the supper. The depictions here are generally melancholy, as Jesus prepares his disciples for his departure.
There are also other, less frequently depicted scenes, such as the washing of the feet of the disciples.
Well-known examples include Leonardo da Vinci's depiction, which is considered the first work of High Renaissance art due to its high level of harmony; Tintoretto's depiction, which is unusual in that it includes secondary characters carrying or taking the dishes from the table; and Salvador Dalí's depiction, which combines the typical Christian themes with modern approaches of Surrealism.", "score": 30.305822902787817, "rank": 25}, {"document_id": "doc-::chunk-2", "d_text": "They might even need to be reminded that the Last Supper was an event which involved Jewish people and occurred in Palestine. Judas sits beside Christ and rests his hand on the table, as referenced in the Gospel: the one who betrays me rests his hand on the table. Through a carefully delineated underdrawing and one-point perspective where the vanishing point meets at Christ’s head, Leonardo da Vinci achieved serenity in this scene. This painting marks the calm before the storm of the Reformation, before Martin Luther nailed the 95 Theses on the Wittenberg church door in 1517 (below).
In newly-Lutheran parts of Germany, Protestant iconoclasts, sometimes in mobs, physically stripped and defaced countless works of church art. By 1522 Martin Luther recognized art as a valuable educative tool and artists once again created art to instruct viewers.
The German Reformation painter Lucas Cranach the Elder painted this Last Supper in 1547 (above), replacing Leonardo’s long bench with a round table. Jesus is not even placed at the center, but appears on the far left, consistent with the Lutheran practice of distributing the bread and the wine from the side of the altar. Cranach depicts Martin Luther at the Last Supper.
Luther symbolized everyman and is taking part in the meal as he receives the cup of wine from a servant.
As the Counter-Reformation warred throughout Catholic Europe, Veronese, a celebrated Venetian painter, was called before the Inquisition to defend his choices for this rendering of the Last Supper in 1573 (above). Venice, long a trade crossroads, attracted people of diverse cultures, so unlike earlier paintings, in addition to Last Supper participants, Veronese decorated the foreground with “foreign” people, a young dwarf holding a parrot, a man with a bloody nose and a dog. When questioned, Veronese explained that he liked to adorn the picture with figures of his own imagination to fill any left-over space. After being asked to remove the dog depicted in the center foreground, Veronese decided instead to rename the image Feast in the House of Levi, which ended the controversy. This Inquisitorial hearing inspired a hilarious Monty Python sketch:
The Pope commissioned works of art as part of the Counter-Reformation, and in Rome Poussin found the Pope and a circle of patrons interested in Stoic philosophy who commissioned canvases like this one (above).", "score": 30.25452781197126, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "In 1482 Leonardo went to Milan at the behest of Lorenzo de' Medici in order to win favour with Ludovico il Moro, and the painting was abandoned.
A marked development in Leonardo's ability to draw drapery occurred in his early works. At the start of the Second Italian War in 1499, the invading French troops used the life-size clay model for the Gran Cavallo for target practice. The conversion to the New Style calendar adds nine days; hence Leonardo was born 23 April according to the modern calendar. While the painting is quite large, about 200 × 120 centimetres, it is not nearly as complex as the painting ordered by the monks of St Donato, having only four figures rather than about fifty and a rocky landscape rather than architectural
details. Leonardo was also later to visit Venice. He found it difficult to incorporate the prevailing system and theories of bodily humours, and eventually he abandoned these physiological explanations of bodily functions.
Everyone acknowledged that this was true of Leonardo da Vinci, an artist of outstanding physical beauty, who displayed infinite grace in everything that he did and who cultivated his genius so brilliantly that all problems he studied he solved with ease.
Whether or not Vasari had seen the Mona Lisa is the subject of debate. It is thought that Leonardo never made a painting from it, the closest similarity being to The Virgin and Child with. He spent the last three years of his life here, accompanied by his friend and apprentice, Count Francesco Melzi, and supported by a pension totalling 10,000 scudi. It is unknown for what occasion the mechanical lion was made, but it is believed to have greeted the king at his entry into Lyon and perhaps was used for the peace talks between the French king and Pope Leo X in Bologna. All these qualities come together in his most famous painted works, the Mona Lisa, the Last Supper, and the Virgin of the Rocks.", "score": 29.924628400669413, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "Hope you all had a very, very nice Christmas Day, that all your wishes were fulfilled and that your Christmas Eve and Lunch were the occasion of a wonderful family gathering !
For Santa it was, as you can see here !
Certainly, but let us keep in mind that a scene like this celebrating a Birth is an introduction to another Apocalypse in its strict sense, another "Revelation" in other words, which we can date to 33 years from now...
Isn't it ?
Santa
Vinci's Last Supper means here that if we want to become better artists, we should confront the best, ever ?
So long Leonardo ! :-D
*On the north wall of the refectory of the Convent of Santa Maria delle Grazie is "The Last Supper", the unrivalled masterpiece painted between 1495 and 1497 by Leonardo da Vinci, whose work was to herald a new era in the history of art.
This work has been inscribed on the UNESCO World Heritage List since 1980.
Just a few years before mine, there :-D
Thanks so much for your comments, faves and rates.
See you soon.
P.S. : Of course, bigger is better...as always !
Production Credits
Trifid - TF Blanche
Trifid - TF_Ariane
Mihrelle - MRL Ariana
Dec 26, 2012 5:17:24 pm by jdstrider Homepage »
Great posing work and I just love all the little details you throw in.
It took me 15 minutes to go through the entire image picking up all the little bits along the way. A true masterpiece in someone's mind, I am sure. LOL
Little Leonardo, the snow bear, seems quite proud of himself, as he should be!
Great stuff, thanks so much for your humor and wonderful art work.
Dec 26, 2012 7:24:45 pm by Trifid Homepage »
Hi my friend!
Hope you had a wonderful Christmas!
This is stunning. I am very much with jdstrider...so much to see. That must have taken a while to compose, chapeau!!!!!! Excellent pose and compo work, wow.
Keep 'm going! :)
All the best to you, my friend!", "score": 29.813185769909424, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Hi freakzz !!
Here comes another fully loaded friday,,
Leonardo Da Vinci .. his masterpiece “The Last Supper” made a rumble in the world. What is the fact behind his artz ? It took 4 years (1495 to 1498) for him to complete and he was trying out his newly invented technique known as tempera (egg yolk and vinegar) plus oil painting ON dry plaster so that he could retouch and use more colors.
Initially some argued he was using the FRESCO technique.
Fresco is a way of painting where the plaster is made wet first and then painted on. When the plaster dries up the paint is fixed and permanent. This technique had a color limitation and cannot be retouched coz once the plaster is dried then its like playing with gluezz 😉
Herez a copy of “The Last Supper”
For more secrets", "score": 29.74994523129362, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "About it he said: “Andrea del Sarto, the flawless painter, is the author of the Last Supper kept in the Great Refectory of the San Salvi convent. [The fresco has] endless majesty with its absolute grace of all the painted figures.”
Here are some details of the glorious painting:
I simply adore this casual slice of everyday Florentine life captured by Andrea del Sarto in the top of the lunette over the last supper. One man appears to just be hanging out on a balcony over the people eating, while the other, possibly a server for the dinner, seems to be walking away.
Not pictured here, but to the right and left of the room, along the walls in glass-topped cases, are many sketches for the fresco by Andrea del Sarto. It is a rare opportunity to see sketches by an artist from this period. And, to see them in conjunction with the final work is an extraordinary opportunity.
Notice in the picture above, Andrea del Sarto's treatment of the Trinity. A 3-faced head shot of sorts.
Who knows!? You might be as lucky as I was and have the place all to yourself in the middle of a Saturday in July.
This is almost unheard of in Firenze!
Just outside the refectory is a fountain where the convent members could wash their hands before entering the refectory to dine.
Utilitarian yet artistic.
Here is some info about the venue: http://www.polomusealetoscana.beniculturali.it/index.php?it/177/firenze-cenacolo-di-andrea-del-sarto", "score": 29.71901579872138, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "Picture Leonardo da Vinci as a 42-year-old artist, struggling with ideas in his head and a world that won’t allow him to bring them to fruition. Ross King planted that picture firmly in this critic’s head, before slowly creating an arresting image of his own — of how one of the world’s most famous paintings was born, thrived, was painted over and eventually restored to glory.
It’s an astonishing story, not least because it demolishes tin-pot theories floating around the painting that made a certain writer of thrillers an enormous amount of money. The good thing Dan Brown did, in retrospect, is draw a lot more attention to The Last Supper, probably prompting people like Ross King to look at it anew. If that is indeed the case, Brown deserves a Thank You note.
Another man trapped in the footnotes of history, but equally deserving of thanks, is the ruler of Milan, Lodovico Sforza, who commissioned Leonardo’s work on the refectory wall of his family mausoleum at the church of Santa Maria delle Grazie. “Paint a wall,” one imagines Sforza saying to da Vinci, a request one of the world’s greatest artists may not have taken in the right spirit. And yet, as King shows, that wall — with its 40-square-metre experimental oil-and-plaster masterpiece — went on to change the way people looked at art. It took what was supposedly familiar, and made it new.
Dan Brown fans, here’s what Ross King has to say: The figure at Jesus Christ’s right is not Mary Magdalene; it’s his disciple John.
For his reasons, for his story of how the painting survived the ravages of time and even a bomb (!), and for other interesting things you had no idea about, I strongly suggest you read this book.
— Leonardo and the Last Supper, Ross King, Bloomsbury, R399. Available at leading bookstores.", "score": 29.680802705024206, "rank": 31}, {"document_id": "doc-::chunk-1", "d_text": "The name “Codex Atlanticus” was given to the collection by the sculptor Pompeo Leoni in the seventeenth century. He put it together and named it after the large-format pages resembling a large album.
The reconstructed manuscript was in storage in the Louvre in the 18th century and then moved to the Pinacoteca Ambrosiana in Milan. The Federiciana Hall is the place where parts of the relic are on display, replaced every 3 months.
Among the masterpieces of painting that are a must-see are:
The controversy surrounding Leonardo da Vinci’s Portrait of a Musician continues unabated. Many luminaries of science believe that the genius managed to capture only the musician’s head, and that someone else drew the hands and musical signs.
Next to Da Vinci’s work is a “Portrait of a Woman” dedicated to one of Italy’s most beautiful princesses, Beatrice D’Este, whose wedding ceremony to the Duke of Milan, Lodovico Sforza, was organized by Leonardo himself.
In the gallery is an elaborate copy of Da Vinci’s The Last Supper, which belongs to the brush of the painter Vespino.
There are also many busts and statues of famous artists.
The gallery is usually not busy with tourists, which allows you to leisurely contemplate the masterpieces of art and have a nice time.
There are streetcars No12, No14, No16, No27, No2 to the destination. Get off at the Orefici Cantu stop.
Piazza Pia XI, 2, 20123, Ambrosiana, Milan; It is open every day from 10.00 to 17.30, except Mondays; Every year the gallery is closed on December 25, January 1, Easter Day and May 1.
IMPORTANT: One hour before closing time, visitors are not allowed into the museum.
Tickets can be easily purchased on site and are €15. A discount is available for students.
Additional information you may need:
- Photography and videotaping of the interior of the Pinacoteca is prohibited, but the ban does not apply to the courtyards and the gallery building itself;
- There is no checkroom in the gallery;
- There is no audio guide in Russian;
Excursions to the Pinacoteca Ambrosiana, which last approximately two to three hours, must be booked in advance. Russian-speaking individual guides will thoroughly explain the history of the art gallery and its masterpieces.", "score": 28.51482966397086, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "leonardo da vinci
• He was born in the town Vinci, Italy
Da Vinci's life
• His education training was mostly under Andrea del Verrocchio.
• He moved to France in 1516.
• He always wanted to learn more from a young age.
• He died May 2, 1519.
Leonardo Da Vinci's Art
• Man in Red Chalk 1512
• Vitruvian Man in 1490
Leonardo Da Vinci's Patrons
-King Francis I of France
-Marcantonio Della Torre
• Is currently located in the Louvre (1797)
• If looking closely there is great attention to detail in the painting, showing the background clearly and her surroundings. Also Leonardo Da Vinci did not sell the painting as he was originally supposed to do. He kept the painting and brought it to France with him.
"Mona Lisa." - Leonardo Da Vinci. N.p., n.d. Web. 18 Nov.
2013.
"The Vitruvian Man: “The Ideal Man” Secures Eternal life." Scripts of Stig Dragholm. N.p., n.d. Web. 18 Nov. 2013.
"Self-portrait of Leonardo Da Vinci – Facts & History of the Painting." Totally History Selfportrait of Leonardo Da Vinci Comments. N.p., n.d. Web. 18 Nov. 2013.
"Leonardo’s “Mona Lisa”." Mona Lisa. N.p., n.d. Web. 14 Nov. 2013.", "score": 28.421373649753647, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "What is the greatest treasure in Kraków?
The Louvre museum is one of the most popular museums worldwide. Almost 10 million tourists visit Paris to see the famous Gioconda by Leonardo da Vinci. Mona Lisa is undeniably a masterpiece, but did you know that in Krakow you may find a painting by Leonardo da Vinci, too?
In 1482, Leonardo da Vinci was sent as an ambassador by Lorenzo de’ Medici to Ludovico Sforza, who ruled Milan from 1494 to 1499.
In Milan Leonardo created one of the most beautiful and amazing portraits – Lady with an Ermine
Why is Lady with an Ermine so special?
Unfortunately only 25 paintings made by Leonardo da Vinci survive today and only 4 of them date to between 1489 and 1491.
The painting is not large, it is only 53,4 x 39,3 cm (21 x 15 in). It was painted with oil paint on walnut panels. The material selected to create this portrait is special. In Italy, at the time, oil paints were relatively new and not commonly used by artists, same with the walnut panels. The painting shows the upper half of a woman turned toward her right at a three-quarter angle, but with her face turned toward her left, like she is looking at somebody out of frame. This composition was unique in Lombardy in the second half of the XV century. Most portraits were painted from the profile view. The lighting illusion focuses viewers on the face of the lady and the white animal which she holds in her arms.
Who is this beautiful lady?
The lady has been identified as Cecilia Gallerani.
Her father worked at the Duke's court, and she would accompany him to the Duke's palace. Her poetry and beauty seduced Ludovico Sforza. Cecilia and the Duke of Milan became lovers. Ludovico was infatuated with her and asked Leonardo to make a portrait of them, but it was not an easy job. Leonardo knew that he must avoid a scandal. Cecilia was a member of a large family that was neither wealthy nor noble, while Ludovico was the duke. Even if this kind of affair was commonplace in noble courts, there was another reason why they could not be painted together. During their romance, Sforza was already engaged to a different woman – Beatrice d’Este.", "score": 27.820261318684516, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Leonardo da Vinci: Painter at the Court of Milan
National Gallery London, 2011
320 pp., 65.00
The show has been sold out, every ticket, for every day of its four-month run, since before it even opened its doors. That is a phenomenon more commonly associated with U2 concert tours or limited-run Broadway musicals, not stately, informative expositions of old pictures. But London's National Gallery has a show on now that lives up to the hype, one which is well worth trying to sneak in to see. Next thing you know, there will be ticket-touts in three-piece suits outside the gallery doors, scalping tickets for ten times the asking price to desperate fans. And I wouldn't blame them, because this is one heck of a show.
"Leonardo: Painter at the Court of Milan" (November 9-February 5) is the largest exhibition of the work of Leonardo da Vinci ever displayed. Before this exhibit, Leonardo had only eighteen universally agreed upon portable paintings attributed to him. A newly discovered Leonardo, once part of King Charles I's art collection but then lost, also appears in this show, bringing the total to nineteen worldwide. Nine of them feature here.
Most art museum shows come in one of three varieties.
There is the survey, in which works are gathered together so that they may be admired and studied in one place. These surveys might cover a style (Mannerism in Ferrara), an artist (El Greco), a period and place (1920s Paris), or a chapter in an artist's career (Picasso's Blue Period). Another category is the star-turn, a first chance to display a famous work in a new location. The best-known instance was the tour of Leonardo's Mona Lisa (understandably not included in this London exhibit, otherwise there would have been stampedes and riots to get in), which was displayed in Italy after it was recovered from the thief Vincenzo Peruggia, and visited Japan and the United States in the 1960s and '70s. Finally there are exhibits that are primarily informative rather than crowd-pleasing, the sort that expect you to read all of the wall copy, and perhaps the catalogue, shows that you slowly imbibe rather than rush through.", "score": 27.110917473654133, "rank": 35}, {"document_id": "doc-::chunk-2", "d_text": "There are also going to be major showings of his work in Paris, London, and Madrid (an exhibition on which I’ve been privileged to work). Fittingly, the Museo Leonardiano in Vinci, Leonardo’s hometown, is planning a show linking his work to the beautiful Tuscan landscape. And the house in which Leonardo died, at Amboise in the Loire Valley, will display a tapestry of The Last Supper loaned by the Vatican.", "score": 27.10719082742799, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "The Accidental Masterpiece: Leonardo and “The Last Supper”\nRoss King, author. Although celebrated today as one of the world’s greatest paintings, Leonardo da Vinci’s The Last Supper had unusual and inauspicious beginnings. 
In this lecture recorded on June 9, 2013 at the National Gallery of Art, author Ross King discusses the circumstances surrounding the creation of The Last Supper, including Leonardo’s unorthodox painting technique and his relationship with his patron, the Duke of Milan. King describes how despite never having worked on such a large painting and never having worked with the difficult medium of fresco, Leonardo created the masterpiece that would define him forever.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "The citizens of Florence have an exceptional reason to celebrate today, as one of the city’s most treasured artworks has been restored and is back on display. Hidden from view for the past 50 years to the day, The Last Supper (1546) by Giorgio Vasari has returned to the Museum of the Opera of Santa Croce in Florence after a decade-long conservation project.\nPainted on five large panels, each constructed of several planks, and measuring over 8 by 21 feet, the painting was damaged in the disastrous Florentine flood of 1966 and considered beyond repair. Or, at least, that was the opinion of experts up until ten years ago. With the support of a grant from the Getty Foundation’s Panel Paintings Initiative, a team of experts at one of the foremost conservation centers in the world, the Opificio delle Pietre Dure (OPD) in Florence, have brought Vasari’s artwork back to life.\nA Miraculous 470-Year History\nI am a firm believer in miracles, but not of the kind shrouded in mystery with billowing smoke and flickering lights. I believe in miracles that happen because of human ingenuity and resilience, such as the rescue and restoration of The Last Supper. 
Vasari started the work in 1546 and painted it over a six-month period for the Murate Convent, located only a few blocks from the Basilica of Santa Croce.
The convent and church are located in one of the lowest parts of Florence, so the painting has been subjected to no less than seven major floods in its 470-year lifespan. The first happened shortly after the artwork was finished, when the Arno River spilled over its banks in 1547. After 1845, the Murate Convent was repurposed as a prison, so The Last Supper was moved to Santa Croce, where it continued to remain vulnerable to the Arno.
The most disastrous Florentine flood of modern times occurred on November 4, 1966. After heavy rainfall in Tuscany in October and early November, a flood wave burst into the city, covering more than 7,000 acres with water and sewage, and depositing 600,000 tons of mud and debris. The water reached heights of over 22 feet in the lowest parts of town, including the area of Santa Croce.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "Today only a small number of Leonardo da Vinci's paintings still exist. While some of his work is not accepted by academics and scholars, other works now exist only as copies.
The view is that there are about fifteen works of art that are associated with Leonardo da Vinci either completely or in collaboration with another artist.
Most of his works are oil on wooden panel. There is, however, a mural, a large drawing on paper as well as two that were in the early stages and not finished.
That said, to most people his most famous works are likely to be The Mona Lisa and The Last Supper. During his lifetime, Leonardo da Vinci created several works of art that are easily recognisable. He was more of a specialist in painting than Michelangelo, who preferred sculpture.
Early Works – Early in his career while in Florence, Leonardo da Vinci completed two well-known works of art.
They were the Annunciation sometime between 1472 and 1475 and Ginevra de' Benci sometime in the mid-1470s. Both are oil on wood pieces.
1490s – During this period he produced one of the world's most well-known works of art, The Last Supper. The picture is a mural commissioned by a convent in Milan. It shows the last meal shared by Jesus with his disciples. Its recognition as a masterpiece has meant that it remains one of the most reproduced artworks today.
1500s – In the Louvre museum in Paris hangs another of Leonardo's best-known works of art, The Mona Lisa. Some believe its fame rests on the enigmatic smile on the face of the woman, thought to be Lisa Gherardini. While he produced several works during this era, it is the Mona Lisa that is probably his best-known piece.
Many consider his last known work of art to be that of St John the Baptist. Painted in oil on a wooden panel, the picture dates from somewhere between 1513 and 1516. The piece is thought to depict St John the Baptist in isolation. It hangs in the Louvre in Paris.
Of the pictures assumed to have been by Leonardo da Vinci, the validity of several has been the subject of debate. One reason for this is down to the fact that he never signed his works.
Apart from scientific tests, confirmation that paintings were by da Vinci has come from the opinion of academics and documentary evidence from the time of the picture.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "The Last Supper
The Last Supper is commemorated by Christians especially on Maundy Thursday.
The Last Supper is the final meal that, in the Gospel accounts, Jesus shared with his Apostles in Jerusalem before his crucifixion.
The Last Supper is commemorated by Christians especially on Maundy Thursday.
Moreover, the Last Supper provides the scriptural basis for the Eucharist, also known as “Holy Communion” or “The Lord’s Supper”.
The creator of the synthesis, Nectarios Nestos, faithfully following the technique of icon painting and artificial aging as it was taught to him on Mount Athos, created the handmade details of this multicolored lithograph.
|Dimensions||20 × 27 cm|", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "The Mona Lisa is a half-length portrait of a woman by the Italian artist Leonardo da Vinci, which has been acclaimed as "the best known, the most visited, the ...
Sep 7, 2012 ... Current home: The Louvre museum, in Paris, where the Mona Lisa is hung ... The work is owned by the French government and hangs in the Louvre .... those of our users and do not necessarily reflect the views of MailOnline.
1452, Leonardo is born in Vinci, a small village in Italy. 1466, Leonardo moves to Florence and enters the shop of Andrea Verrocchio. 1472, Leonardo joins the ...
Aug 8, 2011 ... Leonardo da Vinci's Mona Lisa, also known as La Gioconda, is the most ... Napoleon took it away to hang in his bedroom, but it was returned to ...
Does the 'Mona Lisa' found on the dark side of the moon actually exist? ... 2) Since the original Mona Lisa cannot be adequately insured, (as an insurance policy requires a monetary value to the ... It's hanging in the Louvre museum in
Jan 28, 2012 ... I do not believe that the Louvre displays any copies. ... by Veronese that hangs on the wall facing her -- I'm guessing less than a quarter of ... we generally do a whistle-stop tour of the Big Three -- La Joconde (as Mona Lisa is ...
Dec 8, 2015 ... An earlier portrait of the Mona Lisa has been found under the existing ... Instead of the famous, direct gaze of the painting which hangs in the ...
"I do not think there are these discrete stages which represent different portraits.
Feb 1, 2012 ... The original painting hangs behind glass with enormous security at ... You can imagine that this is what the Mona Lisa looked like back in the 16th century. ... has eyebrows and the Mona Lisa in the real masterpiece does not.
Why did they do it? ... But on the wall where the Mona Lisa used to hang, in between Correggio's Mystical Marriage and Titian's Allegory of Alfonso d'Avalos,
Apr 6, 2005 ... Da Vinci's Mona Lisa, displayed in far-off corner of Louvre for past four years ... the painting hangs alone on a freestanding wall that divides the gallery.", "score": 26.9697449642274, "rank": 41}, {"document_id": "doc-::chunk-1", "d_text": "Why is Da Vinci’s masterpiece so important that we feel obligated to preserve it year after year, or should I say century after century? Certainly, other great artists have painted the ‘last supper’ of Christ before, but arguably not quite like Da Vinci. His remarkable interpretation of Jesus sharing a meal with his disciples (in what is the first Eucharist, where Jesus announces that one of them will betray him) shows everyone interacting with each other with amazing emotion. Furthermore, from an artistic or technical point of view, Da Vinci draws our attention straight to the centre of the painting with Jesus, before we scan with our eyes to see what is going on around him. Many have seen this as the quintessential reason why preserving it is so important. Art and history at its zenith?
Whether Da Vinci painted it purely for financial reward or for a higher spiritual reason, it is frustrating to think why he didn’t put more thought into making sure it would still be around for future generations to admire. Almost immediately after he had finished painting it in 1498, it began to flake off.
Da Vinci’s problem was that he was such an imaginative artist and inventor that he was always trying to invent the next best thing and remain one step ahead of his contemporaries. He tried coating the Santa Maria refectory wall with an experimental waterproof undercoat, which he believed would help him in his process of painting. Unfortunately, as already mentioned, it was a disaster. If we are to believe Leonardo Da Vinci’s biographer Giorgio Vasari, he describes the painting as ‘ruined’ and unrecognizable by 1556.
Da Vinci’s painting as it looked in the 1970s prior to its major restoration between 1979 and 1999.
Da Vinci’s ‘The Last Supper’ after its major restoration works in 1999. Notice the undesirable doorway that was cut into the painting in 1652.
We almost lost Da Vinci’s painting ‘forever’ on several other notable occasions throughout its history. The most notable was in 1652, when somebody thought it appropriate to insert another door into the refectory. Further mishaps saw restoration artists in the eighteenth century unintentionally damage the painting by applying, removing and then re-applying oil paint and varnish.
French troops in 1796 apparently threw stones at the painting, chipping it, and later even scratched out the eyes of the disciples.", "score": 25.65453875696252, "rank": 42}, {"document_id": "doc-::chunk-6", "d_text": "Retrieved November 5, 2019, from http://nla.gov.au/nla.news-article5370367
- Visual Arts Data Service: National Inventory of Continental European Paintings (on-line inventory of all the 22,000 pre-1900 Continental European oil paintings in the UK’s public collections) copy of “Last Supper” in Huddersfield Art Gallery https://vads.ac.uk/large.php?uid=86573 and another copy in Paisley Museum and Art Galleries https://vads.ac.uk/large.php?uid=85151", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "Leonardo da Vinci is one of the most well-known names in the world because of the many contributions he made to modern society.
Leonardo da Vinci is most famous for his paintings titled Mona Lisa and The Last Supper; however, he is also famous for being a polymath. A polymath is a person with a wide-ranging knowledge of different subjects. In da Vinci’s case, these subjects were painting, drawing, sculpting, science, engineering, architecture, and anatomy. He contributed much more than art to the modern world.
Read more below about Leonardo da Vinci’s famous pieces of art, his contribution to science, and his inventions.
Leonardo da Vinci’s Art
The reason Leonardo da Vinci is so famous is that he painted what is arguably the most famous painting in the world. The Mona Lisa, which sits on display at The Louvre Museum, is probably the most recognizable painting in existence.
It has been referenced in an almost countless number of pop culture pieces, including the film Mona Lisa Smile and The Da Vinci Code book and film.
Leonardo Da Vinci even appeared as a minor character in the 1998 film Ever After where the Prince chases down some bandits who stole the Mona Lisa from da Vinci.
The Mona Lisa is also in theory the most valuable painting in the world. When it was assessed for insurance purposes in 1962, it was valued at around $100 million, which is equivalent to $850 million today factoring in inflation.
However, Mona Lisa is only one of the many famous and highly valuable paintings that Leonardo da Vinci created in his lifetime. In fact, in 2017, the sale of his painting Salvator Mundi became the highest price ever paid for a painting when it sold for $450.3 million.
He has some other very famous paintings as well. One of those would be The Last Supper.
From 1495 to 1498, da Vinci worked to paint this massive piece of artwork that was commissioned to be displayed in the dining hall at the monastery Santa Maria delle Grazie in Milan, Italy. When it was complete, it measured 230 inches by 350 inches.
The Last Supper depicts Jesus with his twelve disciples at the dinner table for his last meal before his crucifixion. The scene is meant to represent the story of the Last Supper of Jesus as it is told in the Gospel of John in the Bible.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "What does The Da Vinci Code have to do with Leonardo’s painting “The Last Supper”?
Several outlandish claims are being made in reference to this famous painting by Dan Brown in his novel, The Da Vinci Code – here are some examples:
- Dan Brown uses the painting to promote the idea that Da Vinci painted Mary Magdalene into “the last supper”, at the right hand of Jesus: P.243: The person to the right of Jesus is recognized by Sophie in the book as a woman:
“The individual had flowing red hair, delicate folded hands, and the hint of a bosom. It was, without a doubt … female”. 
“That’s a woman!”, exclaimed Sophie.
Brown is fond of saying that we see only what we want to see. Take care to note that Leonardo portrayed other masculine biblical characters with a feminine appearance – in his work Saint John the Baptist (c. 1513-1516)4, St. John the Baptist – a very ruddy character according to biblical records – is depicted as a feminine character with long flowing hair and delicate hands. Is it any surprise that John the Apostle might be depicted in a similar fashion? And if one inspects “The Last Supper” carefully, there is in fact no hint of a bosom – unless one wants to see that in the painting.
- He further promotes the notion that the “holy grail” is missing from the painting because Leonardo was trying to communicate a secret message – i.e., that “the Holy Grail” was not a physical drinking cup, but rather the womb of Mary Magdalene! But why do we expect to see a large chalice emblazoned with the letters “The Holy Grail”? Only if we fall for legend and popular lore. Look closely at the painting, and you will see that Jesus, as well as His followers, all have drinking cups. Jesus’ cup (“the holy grail”) is next to his left hand, while His right hand is extended over a piece of bread.1
First Things First
First, let's get some basic facts straight.
- Leonardo was not at “the last supper”, which occurred some 1,500 years before he was born. He was painting his interpretation, in accordance with his painting style, of what took place at “the last supper”.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "La Gioconda appears again in Madrid’s Prado Museum. This is a great find because it was painted by one of the favorite disciples of Leonardo da Vinci, while da Vinci worked on the original. With this spectacular restoration, the work of Leonardo is born again. It will be exhibited at The Louvre soon. 
Why da Vinci let a disciple paint this at the same time as him still remains a mystery.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Lavishly shot in many of the actual locations where da Vinci lived, worked, and dreamed, this Italian masterpiece is a complete immersion into da Vinci's world. Based on eyewitness accounts and historical evidence, this definitive film movingly portrays Da Vinci's genius, including the creation of The Last Supper and the Mona Lisa. Includes a collectible 8-page booklet. 4 ½ hrs, 2 DVDs.
Special Extra Features:
• The Rise of Renaissance Italy — In the richest era of artistic and intellectual achievements ever known, an amazing confluence of brilliant minds shaped our world. Marvel at the Renaissance Men and the impact they had on art and thought for eternity.
• Leonardo's Masterpieces — Centuries after his death, da Vinci's paintings still touch, haunt, and intrigue us. The Mona Lisa, the Adoration of the Magi, The Last Supper and other masterworks remind us of his remarkable and timeless style.
• Da Vinci's Inventions — Trying to discover how technology worked, Leonardo sketched out the tank, parachute, scuba, and more.
• The Maestro vs. Michelangelo — Did the aging maestro finally meet his match in Michelangelo, an aggressive young upstart? 
This feature looks at the work and contrasting styles in this bitter rivalry of the brilliant.\n• The Works of Two Great Masters: A Timeline — A walk through the lives of Leonardo and Michelangelo juxtaposes the works of two greats.\nIncludes a collectible 8-page booklet — The Most Brilliant Mind in History.\nRun Time: 4 ½ hours\nFormat: Full Screen\nNumber of discs: 2\nForeign Language Subtitles: No\nColor or B&W: Color\nRegion Code: 1\nAspect Ratio: 4:3\nAudio Format: Dolby Digital", "score": 25.557685058617416, "rank": 47}, {"document_id": "doc-::chunk-4", "d_text": "If The Last Supper is in danger of another flood, a simple press of a button engages two winches, and the entire painting is miraculously hoisted toward the ceiling out of harm’s way!\nLearn more: The Getty Foundation’s Panel Paintings Initiative project for the conservation of Giorgio Vasari’s The Last Supper (1526) was featured in the PBS NewsHour’s Culture at Risk series in October 2015. The episode focused on the work of the OPD, which the NewsHour host Jeff Brown described as “part museum, part workshop, part hospital for threatened treasures.” Watch the episode online.", "score": 25.28546463268611, "rank": 48}, {"document_id": "doc-::chunk-2", "d_text": "The experience in some ways exceeds what eight million visitors per year get from seeing the real work behind glass at the Louvre, Peterson says.\n\"We don't suggest for one moment that it replaces the original,\" he says, \"but people will get a far greater appreciation and understanding of the most popular piece of art in the world.\"\nLife of Leonardo\nLeonardo da Vinci was born in 1452 near Vinci in Tuscany, the illegitimate son of a notary and a peasant woman. He had no surname in the modern sense; da Vinci means \"from Vinci.\" Because he was illegitimate, he lacked access to formal education and was largely self-taught. 
\"He developed an insatiable thirst for knowledge,\" says exhibition spokesman Bruce Peterson.\nLeonardo stood six-foot-six, was left-handed and reputed to be a vegetarian and homosexual. He died in 1519 at age 67.\nAs an artist, he was a perfectionist, \"unable to accept second-best,\" Peterson says. \"He would take forever to finish a commission, and of course this annoyed the patrons, and he stopped getting commissions. He became a military strategist and engineer, so his artwork took a back seat.\"\nHe was never wealthy, Peterson adds. \"Michelangelo earned for one of his sculptures as much as Leonardo earned for his entire life.\"\nAbout 6,000 pages of Leonardo's codices (notebooks) survive, out of some 24,000 pages he is believed to have written and sketched. He used backwards \"mirror writing\" and apparently planted mistakes in his notes to foil people who might try to steal his ideas. Few of his inventions were built during his lifetime.\nDa Vinci -- The Genius\nMTS Centre Exhibition Hall\nFriday to Oct. 23\nMon.-Sat. 10 a.m. to 8 p.m.; Sun. 12 p.m. to 6 p.m.\nTickets: $19.95 at Ticketmaster", "score": 25.02341338381918, "rank": 49}, {"document_id": "doc-::chunk-2", "d_text": "It paints the exact moment when Christ reveals to His apostles who are seated around the table, that one disciple among them will betray Him, before sunrise.\nChrist has also given instructions on how they are to eat and drink in remembrance of Him, as a ritual in the future, after His death. The different reactions of anger and shock are vividly portrayed by Leonardo da Vinci. The most striking feature of this painting is how Leonardo places the vanishing point of the perspective in the Last Supper, where the vanishing point is behind the right temple of Christ.\nThis points exactly to the location of the center of the composition. The Last Supper also shows the love of the symmetry of Leonardo. He uses a horizontal layout here. 
The supper table is put in the foreground of the image of Christ. All the apostles are behind, with the same number of figures on each side of Christ. This shows the symmetrical composition of da Vinci.
Further, the perspective that da Vinci used highlights the positions of the figures and the architectural features in the composition. It is no wonder that this painting figures prominently among Leonardo da Vinci's famous paintings. In this work, a large-scale wall painting, Leonardo was an artist with no prior experience in mural painting.
Experimental pigments were applied on the dry plaster wall. The usual technique with frescoes was that pigments were mixed on wet plaster. Hence, problems cropped up such as paint flaking off the wall. Today, the current masterpiece that remains has little of the original version.
This painting is the first painting by the twenty-year-old Leonardo da Vinci. Its composition is traditional in the portrayal of the figures, with the angel kneeling on the left and Mary seated at the right. A lectern is between them.
The scene is in an architectural setting with a landscape in the background. The composition resembles the medieval iconographic presentations of the Annunciation. The sfumato technique of da Vinci, his dramatic interplay of light and dark, his aerial perspective are not evident.
Hence, these painterly techniques are still in their developmental phase in the artistic palette of Leonardo da Vinci as an innovator. The painting is nevertheless regarded as the achievement of a master. It is considered a showcase of the pictorial talent of Leonardo da Vinci and therefore is a forerunner of Leonardo da Vinci's famous paintings.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-8", "d_text": "You can take a look at the movie we made of the Last Supper in Milan and the excellent copy by Cesare Da Sesto in Ponte Capriasca in Switzerland. 
You can read about this adventure in my article: Second Helpings of Leonardo's Last Supper. Now it's time for me to read Revelations again.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-1", "d_text": "Here are 20 of Leonardo da Vinci’s most famous paintings:
- Mona Lisa by Leonardo da Vinci
- The Last Supper by Leonardo da Vinci
- The Annunciation by Leonardo da Vinci
- Portrait of a Man in Red Chalk Drawing by Leonardo da Vinci
- Madonna and Child with the Infant Saint John the Baptist by Leonardo da Vinci
- The Virgin of the Rocks by Leonardo Da Vinci
- Madonna Litta by Giovanni Antonio Boltraffio and Leonardo da Vinci
- St John the Baptist by Leonardo da Vinci
- Mary Magdalene by Leonardo da Vinci
- Study for the head of Leda by Leonardo da Vinci
- The Virgin and Child with Saint Anne by Leonardo da Vinci
- Madonna of the Carnation by Leonardo da Vinci
- Madonna of the Yarnwinder by Leonardo da Vinci
- Salvator Mundi by Leonardo da Vinci
- Bacchus by Leonardo da Vinci
- Head of a Woman by Leonardo da Vinci
- La Belle Ferronniere by Leonardo da Vinci
- Lady with an Ermine by Leonardo da Vinci
- La Bella Principessa by Leonardo da Vinci
- Portrait of Ginevra de' Benci by Leonardo da Vinci
Leonardo da Vinci Artworks
Mona Lisa is prominent among Leonardo da Vinci's famous paintings. Her enigmatic facial expression, her smile, has given this work the appeal that gave it fame. Leonardo models her features softly.
The painting is done with oil paint on wood. Mona Lisa wears Florentine clothes which are the fashion of her day. At the backdrop is a mountainous landscape where da Vinci uses his sfumato technique as he gives it heavy shading. The sitter is Lisa Gherardini, wife of Francesco del Giocondo.
The sfumato technique of da Vinci is infused into the curves of Mona Lisa's hair and clothing; this is further applied to the valleys and rivers in the backdrop. 
There is a dramatic interplay of light and dark in the composition backdrop. The painting is one of the first works of da Vinci, where an imaginary landscape recedes into the distance. This is also one of the early instances where he used aerial perspective to achieve this.\nThe theme of this painting by Leonardo is Christ's Last Supper with His apostles, which is narrated in the four Gospels of the New Testament.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "An oil painting recently authenticated as the work of Leonardo da Vinci will be on display at the National Gallery in the fall as part of a larger exhibition on the Renaissance artist, the London museum said Monday.\n“Salvator Mundi,” which dates to around 1500, depicts a half-length figure of Christ with one hand raised in blessing and the other holding an orb.\nThe National Gallery said in a statement Monday that the work was shown to its director, curator and other art scholars after undergoing conservation that was completed in 2010.\n“We felt that it would be of great interest to include it in the exhibition as a new discovery,” the museum said, adding that its curator Luke Syson “is cataloging the picture as by Leonardo da Vinci and this is how the picture will be presented in the exhibition.”\nThe painting will be included in an exhibition titled: “Leonardo da Vinci: Painter of the Court of Milan,” from Nov. 9 to Feb. 5, 2012. “This will obviously be the moment to test this important new attribution by direct comparison with works universally accepted as Leonardo’s,” the museum said.\n“Once you walked into the room it had that uncanny presence that Leonardo’s have,” said Martin Kemp, professor emeritus of art history at Oxford. A researcher of paintings, he was among the experts consulted on the painting.\nDetailed examination of the work as well as scientific testing convinced him that he was looking at the real thing. 
For example, some of the brushwork in the best preserved sections made it clear that the master had been holding the brush.\n“None of the students painted like that, none of the followers,” Kemp said.\nKemp said he was glad the painting was going on display at the National Gallery. “It’s a new Leonardo painting, it’s sensational,” he said. “I’m glad London is seeing it publicly first.”\nThe work is currently owned by R.W. Chandler, a consortium represented by Robert Simon, an art historian and private art dealer in Tuxedo Park, N.Y., according to Sara Latham, a spokeswoman for Simon.\n“Salvator Mundi,” which means Savior of the World, was believed to have been lost. It was first recorded in the art collection of King Charles I of England in 1649. In 1763, it was auctioned by the son of the Duke of Buckingham.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-1", "d_text": "Works on display will include \"Portrait of a Young Man,\" \"Lady with an Ermine,\" \"La Belle Ferronière,\" \"Madonna Litta and Saint Jerome,\" and the two versions of the \"Virgin of the Rocks.\" The final part of the exhibition is to feature a near-contemporary, full-scale copy of the \"Last Supper.\"\nBut those who cannot make it to the London exhibition will be able to admire the painting in CNN's new series, \"Leonardo – The Lost Painting.\" The five-episode documentary focuses on the discovery of \"Salvator Mundi,\" how it was purchased by a serendipitous coincident after centuries in obscurity, and how experts from Europe and the Unites States spent months affirming its originality.\nThe TV series starts on Nov. 11 and runs through Nov. 15 on CNN. 
You can view a trailer video HERE.", "score": 24.345461243037445, "rank": 54}, {"document_id": "doc-::chunk-2", "d_text": "When he was working in Milan - where he painted his famous Last Supper on the wall of the refectory of the convent of Santa Maria delle Grazie - Leonardo became friends with the mathematician Luca Pacioli (c.1445 - 1517) whom he had first met in Venice in 1494. Leonardo made a series of perspective drawings of polyhedra to illustrate the printed version of Pacioli's treatise on divine proportion.(4) It is sometimes asserted that Pacioli stimulated Leonardo's interest in mathematics, but Pacioli's own mathematical interests seem to have been largely in arithmetic and algebra, subjects to which Leonardo does not seem to have been attracted (indeed when employing arithmetic he seems rather accident-prone), and it is perfectly possible that the effect of the friendship was rather that Leonardo quickened Pacioli's interest in geometry.
In 1500, Leonardo returned to Florence, where he was engaged as a painter, one project (never completed) being a huge painting, showing the Battle of Anghiari, on a wall of the largest room in the Town Hall (the Palazzo Vecchio). However, in 1502 and 1503 he worked as a military engineer for Cesare Borgia (1475/6 - 1507). From 1506 to 1513 Leonardo was again living in Milan, and in 1513 he moved to Rome. In 1516, at the invitation of François Ier, he went to live in France, where he died in the little château of Le Clos Lucé, a dependency of the large Royal château of Amboise.
1. Codex Atlanticus 391ra/1082r, in: Leonardo da Vinci, Il Codice Atlantico, Tom. III, Vol. XII, Florence: Giunti, 2000, pp. 1937-1938; Jean Paul Richter, The Notebooks of Leonardo da Vinci, Vol. II, New York: Dover, §1340, pp. 395-398. A text version of these notebooks is at http://leonardo-da-vinci.org.
2. G. 
H .", "score": 23.168288532558766, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "Supersizing The Last Supper\nLINDA WERTHEIMER, host:\nGood morning. I'm Linda Wertheimer. Leonardo da Vinci's�mural of Christ and his apostles, \"The Last Supper,\"�is one of art history's most famous depictions of a meal. Nutrition experts say the size of the supper has grown significantly. They studied different paintings of the same event from the last millennium, and there's been a lot more bread to break. Experts found the portions grew by 69 percent.\nIt's MORNING EDITION.\nNPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "Around 1568, Florentine nun Plautilla Nelli—a self-taught painter who ran an all-woman artists’ workshop out of her convent—embarked on her most ambitious project yet: a monumental Last Supper scene featuring life-size depictions of Jesus and the 12 Apostles.\nAs Alexandra Korey writes for the Florentine, Nelli’s roughly 21- by 6-and-a-half foot canvas is remarkable for its challenging composition, adept treatment of anatomy at a time when women were banned from studying the scientific field, and chosen subject. During the Renaissance, the majority of individuals who painted the biblical scene were male artists at the pinnacle of their careers. 
Per the nonprofit Advancing Women Artists organization, which restores and exhibits works by Florence’s female artists, Nelli’s masterpiece placed her among the ranks of such painters as Leonardo da Vinci, Domenico Ghirlandaio and Pietro Perugino, all of whom created versions of the Last Supper “to prove their prowess as art professionals.”\nDespite boasting such a singular display of skill, the panel has long been overlooked. According to Visible: Plautilla Nelli and Her Last Supper Restored, a monograph edited by AWA Director Linda Falcone, Last Supper hung in the refectory (or dining hall) of the artist’s own convent, Santa Caterina, until the house of worship’s dissolution during the Napoleonic suppression of the early 19th century. The Florentine monastery of Santa Maria Novella acquired the painting in 1817, housing it in the refectory before moving it to a new location around 1865. In 1911, scholar Giovanna Pierattini reported, the portable panel was “removed from its stretcher, rolled up and moved to a warehouse, where it remained neglected for almost three decades.”\nPlautilla’s Last Supper remained in storage until 1939, when it underwent significant restoration. Returned to the refectory, the painting sustained slight damage during the momentous flooding of Florence in 1966 but escaped largely unscathed. Upon the refectory’s reclassification as the Santa Maria Novella Museum in 1982, the work was transferred to the friars’ private rooms, where it was kept until scholars intervened in the 1990s.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "How, exactly, does a Leonardo da Vinci mural believed to be three times the width of The Last Supper get lost? This is a mystery that Maurizio Seracini has been trying to solve since 1975.\nAfter graduating with a degree in engineering from the University of California San Diego, Seracini was approached about a project in his hometown of Florence. 
The mission: to search for da Vinci’s unfinished fresco, The Battle of Anghiari. While several artists of da Vinci’s time refer to the work — and a letter from 1549 places it atop a grand staircase in the Palazzo Vecchio — the piece has been lost to modern audiences. It is believed that when Giorgio Vasari renovated the Palazzo Vecchio’s Hall of 500 in 1560, he might have covered da Vinci’s fresco with his own, The Battle of Marciano. While Vasari is known to have preserved the works underneath his own by leaving a gap in the wall, it is nearly impossible to prove without damaging Vasari’s fresco, now more than four centuries old itself.
In this fascinating talk from TEDGlobal, Seracini explains how he and his teams have approached finding da Vinci’s lost mural over the years — by constructing 3D models of the hall before its renovation, and using lasers and radar to chart the gaps in the walls. But beyond that, Seracini shares how the search for the mural opened up a new application of his engineering skills — using tools like multispectral imaging, sonogram and x-ray to study and restore art.
As Seracini shares in this talk, many famous pieces of art have secrets lying just below their visible layers — unseen sketches, details changed over time, proof that artists other than those credited were actually the ones who put paint to canvas.
“Technology has helped to write new pages of art history — or at least update them,” says Seracini, who hopes museumgoers will someday get to see these hidden layers through an augmented reality app. “This is what we’re trying to do — we’re trying to give a future to our past.”
To hear more about Seracini’s quest for The Battle of Anghiari, and about the other art mysteries he’s unraveled along the way, listen to his incredible talk. 
Below, check out some details you’d never know were behind these classic paintings.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "If you’ve ever wanted to own a Leonardo da Vinci masterpiece and you have around $100 million to spare—not counting security, storage, and insurance costs—you may have just one chance. Christie’s announced in New York Tuesday morning that it plans to auction off a newly rediscovered Da Vinci canvas, Salvator Mundi, at its postwar and contemporary art sale on November 15. The work, a portrait of Christ, is one of fewer than 20 known paintings by the quintessential Renaissance man who, between his many other pursuits, hardly had the time to devote to what he is most known for. It’s also the only one in private hands.\nAlan Wintermute, Christie’s senior specialist of Old Master paintings, calls it the “Holy Grail of art rediscoveries.” Created around 1500, Salvator Mundi (“Savior of the World”) depicts Christ holding a crystal sphere that represents the globe, and gives him an even subtler smile than the Mona Lisa, which Leonardo painted around the same time. The painting was most likely commissioned for the court of French King Louis XII, though its first confirmed owner was Britain’s Charles I. It was passed around the British aristocracy for a century and a half before reemerging on the market in 1900, when it was acquired by a merchant and art collector named Sir Francis Cook. But by then, no one recognized it as a Leonardo work. It had been mangled by time and by shoddy restorers, who slathered the lord and savior with a layer of paint that Wintermute likens to “kabuki makeup.”\nCook sold the painting at Sotheby’s in 1958 for a cool £45, and then it disappeared again. After it resurfaced at a regional American auction house in 2005, it took six years to properly restore it and authenticate its authorship beyond a doubt. 
Salvator Mundi was formally reintroduced to the public at a Leonardo exhibition at London’s National Gallery in 2011. It is now being sold by a private European collection.\n“Whoever buys this painting will put his name, his museum, his town on the cultural map,” said Loic Gouzer, Christie’s chairman of postwar and contemporary art.", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-1", "d_text": "The painting was later bought by Henry F. Pulitzer, who claimed that it was Leonardo’s only real portrait of Lisa del Giocondo. He made this painting in such Bruegel the Elder, Flemish Renaissance Painting. He left for Milan that year, left the painting unfinished. It is no surprise to see the Mona Lisa at the top of this list. It portrays the lion introspection over a sleeping woman on a full moon night. Salon jury of 1863 so Manet took the opportunity to showcase this and two other skill, Manet took the ready-made Venus of Urbino of Titian to reproduce the The last supper is Leonardo’s visual interpretation of an event. During this time, Picasso was more sympathetic than anything. Artist used different colors to his painting on a wooden board, it is a common and unique technique to paint on a hard surface. The precise color and the seven years and it remained unfinished at the time of his death in 1906. If you had any doubts about the wild popularity of \"Mona Lisa,\" the crowds at the Louvre will convince... 2. The Fall of the Rebel Angels c.1562 Pieter Bruegel the Elder Northern Renaissance Flemish Landscape with Charon Crossing the Styx c.1519 It is mandatory to procure user consent prior to running these cookies on your website. 
Home / Famous Paintings and Artworks / 16 of the Most Famous Andy Warhol Paintings 16 of the Most Famous Andy Warhol Paintings Andy Warhol, the eminent American artist, occupies the most significant position among the practitioners of the 1960’s Visual Art Movement “Pop Art”, dealing with subject matters very much existent in the viewer’s immediate environment. In this list of the 20 most famous pieces of western art see how many you recognize. The Adoration of the Magi is a painting by Leonardo da Vinci. Your email address will not be published. shows the idealized representation of female beauty. The ceiling of the Sistine Chapel, the large papal chapel made within the Vatican in between 1477-1480 by Pope Sixtus IV. I try to let it come through.” – Jackson Pollock, Hi Nice Blog. Visitors take photos of \"The Last Supper\" (\"Il Cenacolo or L'Ultima Cena\") at the Convent of Santa... 3.", "score": 23.030255035772623, "rank": 60}, {"document_id": "doc-::chunk-2", "d_text": "To add further insult to injury, another so-called expert tried removing the painting to a safer location before realizing he had damaged a major middle section of the painting. But don’t worry, he glued it back together! Finally, as mentioned, we came awfully close to losing the painting in 1943 during the allied bombings.\nOnly around ten completed paintings by Da Vinci survive today. The Last Supper is one of them. Though, there is also another one of his great works and arguably the most famous painting of all time that I would one day like to give some attention to: Mona Lisa. Thank goodness, he decided to use a more traditional method to paint her and so we don’t have to worry whether or not she will flake! But, Da Vinci and the Mona Lisa will be our focus for another day.\nDa Vinci’s greatest work ? The Mona Lisa.\nPhoto Credit: Every effort has been made to trace and acknowledge appropriate credit. The header image is The Last Supper, circa 1520, by Giovanni Pietro Rizzoli (Giampetrino). 
It is an exact, full-scale copy of Da Vinci’s painting. It is currently in the Royal Academy of Arts, London, collection. All other images are in the public domain too with the possible exception of the two images of the ruins of the Santa Maria delle Grazie, in Milan. The image of what remains of the refectory of the Santa Maria delle Grazie monastery is used under the ‘fair use’ rationale to highlight a unique historical moment in time. I believe that words alone cannot describe how close we came to losing Da Vinci’s famous painting. The 2nd image of the Santa Maria delle Grazie, in Milan, appears to be in the public domain with an expired copyright. I believe my inclusion of this image constitutes ‘fair use’ to highlight a unique historical moment in time.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-2", "d_text": "Renaissance women “could obviously paint as part of their cultural education,” Falcone says, “but the only way they could paint large-scale works and get public commissions was through their convent.”
Most paintings produced by Nelli and her workshop of some eight fellow nuns were smaller devotional works made for outside collectors. But some canvases—including Last Supper and others designed for private use within the convent—were monumental, requiring expensive scaffolding and assistants that the nuns paid for with funds from their commissions.
Per the AWA statement, the newly restored work was created in true “workshop style”—in other words, different artists of varying levels of expertise contributed to the religious scene.
As Chernick reports in a separate Atlas Obscura article, Nelli chose to depict Jesus and his 12 apostles dining on fare typically enjoyed by the residents of Santa Caterina. In addition to traditional wine and bread, she included a whole roasted lamb, lettuce heads and fava beans. 
And unlike Last Supper scenes painted by male artists, AWA founder Jane Fortune pointed out in a 2017 essay for the Florentine, Nelli’s tableware is incredibly elaborate; among the items on display are turquoise ceramic bowls, fine china platters and silver-adorned glasses.\nAccording to historian Andrea Muzzi, Last Supper builds on the style established by Leonardo da Vinci’s similarly themed work. This monumental fresco, painted for the refectory of Milan’s Santa Maria delle Grazie between 1495 and 1498, was so influential, Muzzi writes in her essay “A Nun Who Paints,” that the “sacred subject could no longer be represented without his work being taken into account.” An apostle painted fourth from the left in Nelli’s version, for example, gestures with open hands in a manner reminiscent of Leonardo’s composition.\nFor Financial Review, Lobo paints an apt portrait of Nelli’s singular skill: “Picture the nun in her holy garments, mixing her pigments and stepping up onto scaffolding to brush enormous strokes of paint onto a canvas taller than her and wider than a contemporary billboard,” he writes. “The physical undertaking would have been immense, requiring great strength, focus and discipline—to say nothing of the will required to take on this sacred subject attempted before only by the male greats.”\nAn inscription hidden in the upper left corner of the painting suggests Nelli was keenly aware of the landmark nature of her creation.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-5", "d_text": "Leonardo never parted with the painting. Today, the \"Mona Lisa\" hangs in the Louvre Museum in Paris, France, secured behind bulletproof glass and regarded as a priceless national treasure seen by millions of visitors each year. Final Years Leonardo returned to Milan in 1506 to work for the very French rulers who had overtaken the city seven years earlier and forced him to flee. 
Among the students who joined his studio was young Milanese aristocrat Francesco Melzi, who would become da Vinci’s closest companion for the rest of his life. He did little painting during his second stint in Milan, however, and most of his time was instead dedicated to scientific studies. Ironically, Gian Giacomo Trivulzio, who had led the French forces who conquered Ludovico in 1499, followed in his foe’s footsteps and commissioned da Vinci to sculpt a grand equestrian statue, one that could be mounted on his tomb. After years of work and numerous sketches by da Vinci, Trivulzio decided to scale back the size of the statue, which was ultimately never finished. Amid political strife and the temporary expulsion of the French from Milan, da Vinci left the city and moved to Rome in 1513 along with Salai, Melzi and two studio assistants. Giuliano de’ Medici, brother of newly installed Pope Leo X and son of his former patron, gave da Vinci a monthly stipend along with a suite of rooms at his residence inside the Vatican. His new patron, however, also gave da Vinci little work. Lacking large commissions, he devoted most of his time in Rome to mathematical studies and scientific exploration. After being present at a 1515 meeting between France’s King Francis I and Pope Leo X in Bologna, the new French monarch offered da Vinci the title “Premier Painter and Engineer and Architect to the King.” Along with Melzi, the Tuscan native departed for France, never to return. He lived in the Chateau de Cloux (now Clos Luce) near the king’s summer palace along the Loire River in Amboise. As in Rome, da Vinci did little painting during his time in France. 
One of his last commissioned works was a mechanical lion that could walk and open its chest to reveal a bouquet of lilies.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "Last Da Vinci becomes most expensive artwork ever sold\nLeonardo Da Vinci’s Salvator Mundi sold for a record $400 million – plus $50 million of fees – making it the most expensive artwork ever auctioned. We don’t know who bought it: the lucky buyer bid by telephone and chose to keep their identity private.\nSalvator Mundi, Da Vinci’s long-lost painting of Christ, goes on auction in New York on Wednesday night – giving collectors a once-in-a-lifetime chance to buy one of the Renaissance genius’ works, all others of which are already owned by museums.\nThe 500-year-old painting is set to smash all auction records for an Old Master, with bids expected to top $100 million.\nThat almost certainly puts it out of reach for either the Italian state or any of Italy’s art museums, none of which have announced plans to bid for the work.\nSalvator Mundi is currently owned by a Russian billionaire, Dmitry Rybolovlev, who bought it from a Swiss art dealer for $127.5 million in 2013.\nThe painting re-entered the art market in 2005, when some art experts acquired it for relative peanuts at a local auction in the United States. Heavily painted over and gnawed by worms, the work was unrecognizable until restoration revealed traces of Da Vinci’s trademark techniques.\nChrist’s delicately placed hand, the intricate curls of his hair and the haunting quality of his expression have led to comparisons with Da Vinci’s most famous portrait, the Mona Lisa.\nLike that work, now a prized possession of the Louvre in Paris, Salvator Mundi seems destined to remain outside Italy.\nIt has never been exhibited in Da Vinci’s home country, having been commissioned by Louis XII of France and later sold to Charles I of England. 
It remained in the hands of English aristocrats until it made its way to the US in the 20th century.\nSince its rediscovery, the work has been displayed at the National Gallery in London, as well as Christie’s auction houses in Hong Kong and the US.\nArt lovers can nonetheless find numerous Da Vinci originals in Italy, including The Last Supper on the walls of the Santa Maria delle Grazie church in Milan, the Annunciation at the Uffizi Gallery in Florence and the Vitruvian Man in Venice’s Gallerie dell’Accademia – not to mention the farmhouse where the painter was born in the Tuscan town of Vinci.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-2", "d_text": "In 1466, when he was only fourteen years old, he was introduced to the best artist at the time; Andrea di Cione, also known as Verrocchio (Legacy.mos.org). His workshop guaranteed young Leonardo an education in the humanities. His talents were so great that it is rumored that when he helped Verrocchio paint the Baptism of Christ, Verrocchio never painted again as a result of the amusement of the young boy’s talent.\nLeonardo’s paintings have revolutionized the way art is perceived and his love for perfection has increased the inspiration for quality (Leonardodavinci.net). Art has continued to be a great form of communication and expression. For instance, Leonardo paved the way for other great painters like Picasso whose works have continued to marvel. Western civilization takes pride in art since it traverses cultural, racial and religious discrimination (Da-vinci-inventions.com). Art is also a source of history because it depicts the times of the artist and the social, religious or political atmosphere. Leonardo’s paintings have also influenced the western world in the sense that they have become the object of parody for many years as well as replication. 
Leonardo da Vinci's work has been a source of entertainment for many years because painters have parodied his paintings (Italian-Renaissance-art.com). Their replication has brought forth a competitive nature in the art world that strives to push the limits of painting.
Leonardo's work on perspective painting is the product of Brunelleschi's invention, which he perfected (Engineering.com). Such perfected works include the Adoration, which remained unfinished but is laid out over a strict Albertian grid on the pavement of the painting. Another illustration of perfected perspective painting is The Last Supper in the Convent of Santa Maria delle Grazie, Milan, where the vanishing point of the painting is placed on the right eye of Jesus Christ. Painting was thereby shown to be both a medium of expression and a rich means of recording events. An event such as the Last Supper is very significant to Christians, and such a painting helps us better understand an event of this importance.
Creativity was heightened in the time of Leonardo da Vinci because of the drive for perfection and the need to paint more (DCS).
His painting of the “Virgin of the Rocks,” begun in 1483, demonstrated his pioneering use of chiaroscuro—a stark contrast between darkness and light that gave a three-dimensionality to his figures—and sfumato—a technique in which subtle gradations, rather than strict borders, infuse paintings with a softer, smoky aura. Around 1495, Ludovico commissioned da Vinci to paint “The Last Supper” on the back wall of the dining hall inside the monastery of Milan’s Santa Maria delle Grazie. The masterpiece, which took approximately three years to complete, captures the drama of the moment when Jesus informs the Twelve Apostles gathered for Passover dinner that one of them would soon betray him. The range of facial expressions and the body language of the figures around the table bring the masterful composition to life. The decision by da Vinci to paint with tempera and oil on dried plaster instead of painting a fresco on fresh plaster led to the quick deterioration and flaking of “The Last Supper.” Although an improper restoration caused further damage to the mural, it has now been stabilized using modern conservation techniques. In addition to having da Vinci assist him with pageants and designing a dome for Milan’s cathedral, the Duke of Milan tasked the artist with sculpting a 16-foot-tall bronze equestrian statue of his father and founder of the family dynasty, Francesco Sforza. With the help of apprentices and students in his workshop, da Vinci worked on the project on and off for more than a dozen years. Leonardo sculpted a life-size clay model of the statue, but the project was put on hold when war with France required bronze to be used for casting cannons, not sculptures. 
After French forces overran Milan in 1499—and shot the clay model to pieces—da Vinci fled the city along with the duke and the Sforza family.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "Leonardo da VinciLeonardo da Vinci is one of the greatest artists of the Renaissance time period, especially in sculpturing, architecture, engineering, and science and painting. His new styles and techniques in painting have influenced many Italian and other artists around the world. Leonardo's did also scientific studies and experiments in anatomy, optics and hydraulics. Many of these studies have anticipated modern scientific developments.\nLeonardo was born in Vinci, near Florence, where his family settled in the mid-1460. He was given the best education in Florence, a major intellectual and artistic center of Italy. About in 1466 he was apprenticed as an assistant to Andrea del Verrocchio, the leading Florentine painter and sculptor of his day.\nIn 1478 Leonardo became an independent master. His first large painting, The Adoration of the Magi was left unfinished. Although the painting was left unfinished the Monastery of San Donato in Scopeto bought it in 1481.\nThe painting introduced a new approach to composition, in which the main figures are grouped in the foreground, while the background consists of distant views of battle scenes.\nAbout in 1482 Leonardo entered the service of the Duke of Milan, Ludovico Sforza, as principal engineer in military enterprises and as an architect. The most important of his paintings during this period was The Virgin of the Rocks.\nFrom 1495 to 1497 Leonardo worked on his masterpiece, The Last Supper, a wall painting in the refectory of the Monastery of Santa Maria delle Grazie in Milan. He also did other paintings and drawings like theatre designs, architectural drawings, and models for the dome of Milan Cathedral. 
Most of these paintings and drawings have been lost.
In 1500 he returned to Florence, where two years later he entered the service of Cesare Borgia, the Duke of Romagna. As...
In Florence he completed those masterpieces that are commonly acknowledged to have changed the course of Western art: the Mona Lisa, the Virgin and Child with Saint Anne (both now in the Musée du Louvre) and the (lost) Battle of Anghiari. After working for the French conquerors of the Sforza in Milan (1506/1508-1513) and for Giuliano de' Medici in Rome (1513-1516), Leonardo left Italy to serve as chief painter and engineer to the French King Francis I. He died in France on May 2, 1519.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "A long sought-after artwork attributed to Leonardo da Vinci has been discovered among a private collection of 400 paintings locked in a Swiss Bank Vault.\nThe painting closely resembles a 1499 pencil sketch of Isabella d'Este, an Italian noblewoman, drawn by da Vinci in Mantua in Italy's Lombardy region, which is currently hanging in the Louvre Museum in Paris.\nBut as the great artist once said himself: \"Art is never finished, only abandoned,\" and so it is only fitting that he should have returned to the sketch and reproduce it as a rich color portrait, as the Marquise had requested of him on numerous occasions.\nExperts are claiming that the painting is a bona fide da Vinci. 
If proven, its discovery could herald an end to long scholarly speculation that the artist simply lost interest or ran out of time before finishing the project.
"There are no doubts that the portrait is the work of Leonardo," Carlo Pedretti, a professor emeritus of art history and expert in Leonardo studies at the University of California Los Angeles, told Italian newspaper Corriere della Sera.
"I can immediately recognise Da Vinci's handiwork, particularly in the woman's face," he told the newspaper.
Scientific tests seem to back up Pedretti's claims.
Carbon dating conducted in a laboratory at the University of Arizona confirmed with 95 percent confidence that the artwork was painted sometime between 1460 and 1650, placing it firmly within the time frame in which the artist is believed to have first met and sketched the aristocrat, who was one of the most influential women of her time.
Further testing indicated the paint pigments and primer used in the portrait also match the ones used by da Vinci throughout his career.
But Pedretti said that even after three and a half years of study following the painting's first revelation, more time was needed to determine which parts of the work, if any, may have been painted by da Vinci's students.
Not all experts are convinced that the 24-by-18-inch artwork discovered in the possession of an unnamed Italian family is an original work by the master painter.
One notable criticism is that the artwork was painted on canvas, as opposed to the wooden panels favored by da Vinci.
"Canvas was not used by Leonardo or anyone in his production line," Martin Kemp, professor emeritus of the history of art at Oxford University, told The Daily Telegraph.
\"Although with Leonardo, the one thing I have learnt is never to be surprised.\"", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "WE SEND BEAUTIFUL EMAILS\nIt's all about Art News, Trends and What's up at Musart.\nSubscribe and Get 15% Off on your First Order!\nMusart is grateful to showcase the talents and skill of Leonardo da Vinci, for who else has accomplished so much in their lifetime? In addition to his artistic achievements, of which the Mona Lisa and The Last Supper are considerable, immortal accomplishments, he also was a polymath who dabbled extensively in inventing, engineering, astronomy, botany, writing, history, cartography and much more. He is exemplified as the “Universal Genius” or “Renaissance Man,” and his influence resounds within our society to this day. The influence of da Vinci cannot be overstated, and we will be studying his immortal techniques for ages to come.\nLeonardo da Vinci was born on April 15, 1452, in the Tuscan village of Vinci and was considered an illegitimate son, because his father, Ser Pierro da Vinci, and mother, a peasant girl named Caterina, were not married. While most of the details of da Vinci’s early life are shrouded in mystery, we know his father’s family raised him, and that, after several marriages, da Vinci’s father sired many children, leaving Leonardo with twelve half-siblings.\nDa Vinci’s genius manifested at an early age, with him becoming a natural student with many subjects, including mathematics and music. Eventually, he apprenticed under a painter named Andrea del Verrocchio, who taught him many skills. 
The two even collaborated on a painting titled The Baptism of Christ, which apparently birthed the tale of Leonardo painting the young angel in the piece so well that Andrea "put down his brush and never painted again." By the age of twenty, da Vinci qualified as a Master in the Guild of Saint Luke.
Leonardo da Vinci continued as a kind of 'freelance artist' for the next decade, taking on commissions to paint an altarpiece for the Chapel of St. Bernard in the Palazzo Vecchio and The Adoration of the Magi for the monks of San Donato a Scopeto (although neither was completed). Leonardo da Vinci worked in Milan from 1482 until 1499, where he was commissioned to paint the Virgin of the Rocks and The Last Supper.
It depicts the final meal that Jesus and his disciples ate before the crucifixion, and the piece specifically showcases the moment just after Jesus has told his followers, "one of you will betray me." It is this instant that the painting encapsulates, as we can see in the expressions of the disciples. The work showcases da Vinci's total mastery of form, characterization and setting in a blazing fire of talent.
The painting itself is massive: it stretches 180 in x 350 in and covers an entire end wall of the dining hall in Santa Maria delle Grazie. It has unfortunately been damaged and, despite restoration efforts, very little of the original painting remains today.
The Mona Lisa, or La Gioconda, is arguably one of the most famous paintings in the world to this day. The elusive smile on the face of the woman, front and center, has presented the world a mystery that remains unsolved after more than five hundred years. The shadows hint at so much and offer so little; indeed, the shadowy quality the piece is known for came to be called "sfumato," or "Leonardo's smoke."
The Mona Lisa also plays on the religious culture of the time, the woman pictured bearing a strong resemblance to depictions of the Virgin Mary herself, who was seen as the ideal of womanhood of the era.
abundance and affordability of food."
Image: Art Renewal Center
Humble Beginnings
Leonardo da Vinci was born on April 15, 1452, in a farmhouse nestled amid the undulating hills of Tuscany outside the village of Anchiano in present-day Italy. Born out of wedlock to respected Florentine notary Ser Piero and a young peasant woman named Caterina, he was raised by his father and his stepmothers.
A few of these were painting, sculpting, architecture and even engineering.
He was revered as this Genius, kind of like a superhuman!
Leo (can I call him that?) was born out-of-wedlock to the wealthy Messer Piero Fruosino di Antonio da Vinci – blimming hell that's a mouthful – and Caterina, a peasant. There's not much known of Leo's childhood except that he did receive an informal education in maths, Latin and geometry. Now, apparently Leonardo had 17 siblings, bless the soul. His father married four times and his mother married another as well. However, his father did look out for him, making certain he was placed under the tutelage of the artist Andrea del Verrocchio in Florence.
So let's see some of his extraordinary works:
Mona Lisa – Leonardo painted a portrait of Lisa, the wife of Francesco del Giocondo. Now, Mona is derived from the word Madonna – which basically means Madam. Leo painted this portrait on wood, so if you believed it to be canvas you were fooled!
Here are some fun facts:
Leonardo da Vinci used more than thirty layers of paint on the Mona Lisa, some of which were thinner than a human hair.
When painting the Mona Lisa, to keep his subject relaxed and entertained, da Vinci had six musicians play for her and installed a musical fountain he had invented himself. Various beautiful works were read out loud, and a white Persian cat and a greyhound dog were there for her to play with.
Leonardo spent twelve years painting the Mona Lisa's lips (blimey).
It's worth around US$790 million!!
Apparently he wrote from right to left – it's called mirror writing – and no one really knows why.
The Last Supper – a late 15th-century mural painting. The painting represents the scene of the Last Supper of Jesus (pbuh) with his apostles, as it is told in the Gospel of John 13:21.
Leonardo has depicted the consternation that occurred among the Twelve Disciples when Jesus announced that one of them would betray him.", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "The Last Supper, Leonardo da Vinci, Fresco, Santa Maria della Grazie, Milan\nMadonna and Child with Saint Anne, Leonardo da Vinci, Oil on Panel, Florence\nMona Lisa, Leonardo da Vinci, Oil on Panel\nPieta, Michelangelo, Marble, saint peters cathedral, vatican, rome\nDavid, Michelangelo, Marble, palazzo dei priori\nMadonna of the Meadows, Raphael, Oil on Panel\nSistine Chapel, Michelangelo, Fresco, vatican, rome\nThe School of Athens, Raphael, Fresco, stanza della segnatura, vatican, rome\nTransfiguration of Christ, Raphael, Panel, cathedral of narbonne\nForeshortened Christ, Andrea Mantegna, Canvas\nTempestuous Landscape with the Soldier and the Gypsy, Giorgione, Oil on Canvas\nBacchus and Ariadne, Titian, Oil on Canvas\nVenus of Urbino, Titian, Oil on Canvas\nRape of Europa, Titian, Oil on Canvas\nLast Supper, Tintoretto, Oil on Canvas, san giorgio maggiore, venice\nVictory, Michelangelo, Marble, For tomb of Pope Julius II\nDescent from the Cross, Pontormo, Oil on Panel, capponi chapel, santa falicita, florence\nDescent from the Cross, Fiorentino, Oil on Panel, san francesco, volterra\nAssumption of the Virgin, Corregio, Dome Fresco, parma\nMadonna and Child with Angels and St. 
Jerome (Madonna of the Long Neck), Parmigianino, Oil on Panel, church of servites, parma
The Last Judgement, Michelangelo, Fresco, sistine chapel, vatican, rome
Tempietto, Bramante, san pietro in montorio, rome
Assumption of the Virgin, Titian, Oil on Panel
He wrote to his brother during the time he painted the Sistine Chapel ceiling: "I have not even time to eat as much as I should." And yet such abstinence did him no real harm. He died at age 88.
It set me thinking: I wonder what the shopping lists were like for some of the Renaissance painters of the day. What, for example, did Leonardo da Vinci have to buy to set the scene for The Last Supper – Il Cenacolo or L'Ultima Cena? Everyone who goes to Milan should book an allotted time of fifteen minutes, in a group of no more than 30 people, to see the rapidly fading mural housed in the refectory of the Convent of Santa Maria delle Grazie in Milan. The Last Supper measures 460 cm × 880 cm and covers an entire end wall of the dining hall. To my mind, it is one of the world's most moving paintings.
The work is presumed to have been started around 1495–96 and was commissioned as part of a plan of renovations to the church and its convent buildings by Leonardo's patron, Ludovico Sforza, Duke of Milan.
Created between 1483 and 1486, this painting is displayed along with the works of Botticelli, Titian and Tintoretto, Raphael, Caravaggio, and many other Italian masters.\nLocation: 1st Floor Denon Wing, Italian Painting Department", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "Painting: Oil, Acrylic, Spray Paint on Canvas.\nTitle "The last supper contemporary"\nMixed media with spray paint\nPainted on canvas\nFloater wood open back frame (I will frame the work)\nReady to hang\nShips in a Crate\nColor Project Art management\nArtist featured by Saatchi Art in a collection\nFeatured in Saatchi Art's printed catalog, sent to thousands of art collectors", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-1", "d_text": "|Kim and Joan with Ross King|\nMy correspondence with Ross King began back in 2009 while he wrote his phenomenal book Defiant Spirits: The Modernist Revolution of the Group of Seven. His research uncovered mention of a landscape painter named Carl Ahrens who, in 1916, verbally attacked certain members of the Group. Intrigued, King found my website on Ahrens and contacted me, hoping I could shed light on what might have provoked his remarks.\nI replied with a small treatise on the subject and over the years we’ve periodically traded e-mails. He has graciously assisted me with sections of my manuscript that involve the Group of Seven and the WWI Toronto art community. Needless to say, when I heard he was coming to Dallas to give a lecture on Leonardo da Vinci at the Highland Park United Methodist Church, I dropped everything to attend. Joan, a fan of art and all things Italian, did the same. To read an account of the evening, click here.\nI mention this background now because I believe I have an ethical duty to do so. Let me also say, though, that everything written below is my honest opinion and not said out of any sense of obligation to a friend. 
My copy of Leonardo and the Last Supper was not given to me—I purchased it. King did not ask for, nor does he expect, a review. The first he’ll hear of it is when I send him the link.\nThat said, here we go!\nSynopsis of Leonardo and the Last Supper (from the book jacket):\nIn Leonardo and the Last Supper, Ross King chronicles how—amid war and the political and religious turmoil around him, and beset by his own insecurities and frustrations—Leonardo created the masterpiece that would forever define him. King unveils dozens of stories that are embedded in the painting. Examining who served as models for the Apostles, he makes a unique claim: Leonardo modeled two of them on himself. Reviewing Leonardo’s religious beliefs, King paints a much more complex picture than the received wisdom that the artist was a heretic. The food that Leonardo, a famous vegetarian, placed on the table reveals as much as the numerous hand gestures of those at Christ’s banquet. And King makes clear, from a variety of Biblical sources, that the figure to the right of Christ is, indeed, John and not Mary Magdalene, as some have posited.", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "The last supper (Leonardo Da Vinci) large light color\nThe supper was held by Christ and his disciples on the eve of the holy Eucharist.\nDuring the meal, Christ washed the feet of all twelve disciples and told them that someone would betray him. Everyone felt sorrowful and asked Christ whether the betrayer was he. Judas then drew close to Christ and asked, ''Is it I?'' Christ said that the one whose hand dipped into the dish with his was the betrayer, that is, Judas: ''Yes, it is you. Please go and do what you want to do. 
Judas left the table, and Christ was arrested after the supper.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "Despite being created approximately 500 years ago, the work of Leonardo is just as influential to the art that is being created today as it was in the 15th and 16th centuries. We felt that offering this painting within the context of our Post-War and Contemporary Evening Sale is a testament to the enduring relevance of this picture,’ said Loic Gouzer. Leonardo da Vinci, Salvator Mundi. Oil on walnut panel. Panel dimensions: 25 13/16 x 17 15/16 in (65.5 x 45.1 cm) top; 17¾ in (45.6 cm) bottom; Painted image dimensions: 25⅜ x 17½ in (64.5 x 44.7 cm). Estimate on request. This work will be offered as a special lot in the Post-War and Contemporary Art Evening Sale on 15 November at Christie’s in New York.\nIts inclusion in the National Gallery’s landmark exhibition of 2011-12 — the most complete display of Leonardo’s rare surviving paintings ever held — came after more than six years of painstaking research and inquiry to document the painting’s authenticity. This process began shortly after the painting was discovered — heavily veiled with overpaints, long mistaken for a copy — in a small, regional auction in the United States. The painting’s new owners moved forward with admirable care and deliberation in cleaning and restoring the painting, researching and thoroughly documenting it, and cautiously vetting its authenticity with the world’s leading authorities on the works and career of the Milanese master. Dianne Dwyer Modestini, the conservator who restored the work in 2007, recalls her excitement after removing the first layers of overpaint, when she began to recognise that the painting was by the master himself. 
‘My hands were shaking,’ she says.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "The Mona Lisa (also known as La Gioconda or La Joconde) is a sixteenth-century portrait painted in oil on a poplar panel in Florence, Italy by Leonardo di ser Piero da Vinci during the Renaissance.\nIt is currently owned by the Government of France and is on display at the Musée du Louvre museum in Paris under the title Portrait of Lisa Gherardini, wife of Francesco del Giocondo. Arguably, it is the most famous and iconic painting in the world.\nThe painting is a half-length portrait and depicts a seated woman whose facial expression is frequently described as enigmatic. Others believe that the slight smile is an indication that the subject is hiding a secret. The ambiguity of the subject's expression, the monumentality of the composition, and the subtle modeling of forms and atmospheric illusionism were novel qualities that have contributed to the continuing fascination and study of the work.\nDa Vinci began painting the Mona Lisa in 1503. According to Da Vinci's contemporary, Giorgio Vasari, "...after he had lingered over it four years, left it unfinished...." He is thought to have continued to work on it for three years after he moved to France and to have finished it shortly before he died in 1519. Leonardo took the painting from Italy to France in 1516 when King François I invited the painter to work at the Clos Lucé near the king's castle in Amboise.\nMost likely through the heirs of Leonardo's assistant Salai, the king bought the painting for 4,000 écus and kept it at Château Fontainebleau, where it remained until given to Louis XIV. Louis XIV moved the painting to the Palace of Versailles. After the French Revolution, it was moved to the Louvre. Napoleon I had it moved to his bedroom in the Tuileries Palace; later it was returned to the Louvre. 
During the Franco-Prussian War (1870–1871) it was moved from the Louvre to a hiding place elsewhere in France.\nThe Mona Lisa was not well known until the mid-nineteenth century when artists of the emerging Symbolist movement began to appreciate it, and associated it with their ideas about feminine mystique.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-33", "d_text": "First, he is grasping a small bag, no doubt symbolizing the 30 pieces of silver he has been paid to betray Jesus; he has also knocked over the salt pot – another symbol of betrayal. His head is also positioned lower than anyone else's in the picture, and he is the only figure left in shadow.\n*: Leonardo employed new techniques to communicate his ideas to the viewer. Instead of relying exclusively on artistic conventions, he would use ordinary ‘models’ whom he encountered on the street, as well as gestures derived from the sign language used by deaf-mutes, and oratorical gestures employed by public speakers. Interestingly, following Leonardo’s depiction of Thomas quizzically holding up his index finger, Raphael (1483-1520) portrayed Leonardo himself in The School of Athens (1510-11) making an identical gesture.\n*: Laid out on the table, one can clearly make out the lacework of the tablecloth, transparent wine glasses, pewter dishes, pitchers of water, along with the main dish, duck in orange sauce. All these items, portrayed in immaculate detail, anticipate the still life genre perfected by Dutch Realist painters of the 17th century.\n*: Leonardo’s meticulous crafting of The Last Supper, along with his skills as a painter, draughtsman, scientist and inventor, as well as his focus on the dignity of man, has added to his reputation as the personification of intellectual artist and creative thinker, rather than merely a decorative painter paid to paint so many square yards a day. 
This idea of the dignity of the artist, and the importance of disegno rather than colorito, was further developed by Michelangelo and others, culminating in the establishment of the Academy of Art in Florence and the Academy of Art in Rome.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "At this smart restaurant in the heart of the Old Town, Da Vinci reproductions, including an original-size rendition of The Last Supper, adorn the walls. Leather upholstery, an elegant bar, and soft lighting round out the ambience. Celebrity chef Thomas Jaumann, who came aboard in 2013, presents a seasonally changing local menu with a European slant, with classic dishes like sirloin steak, rack of lamb, and loup de mer (European sea bass). The wine list includes more than 200 bottles, with a focus on German, Italian, and French wines.", "score": 15.758340881307905, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "The Last Supper - Duccio, 1308-1311\nAndrea del Castagno's Last Supper\nThe Last Supper - from the Winged altar in St. Peter in Leuven - Dirk Bouts, 1465\n160 in × 320 in\nCenacolo di Ognissanti, Florence\nLast Supper - Albrecht Durer, 1496-1510\nThe Last Supper - Lucas Cranach the Elder, 1547\nCesare Vecellio, The Last Supper, c. 1560\nEl Greco - 1568\nThe Supper at Emmaus - 1601 by: Caravaggio\nThe Last Supper - Nicolas Poussin, 1640-1649\nDiscover anything and everything about Leonardo da Vinci. Including his written works, drawings, paintings, news, and original content.\nThis website started as a way to post research for a book I was working on for the previous 11 years (various titles) but has since been abandoned, perhaps to be revisited in the future, but until then I am working on other project(s) which are upcoming and heavily influenced by my "da Vinci days" albeit focused more towards the future than the past - which I guess you could say is what I learned from Leo. 
\"Think well to the end, consider the end first.\"", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "Buy Used and Save: Buy a Used \"Design Toscano The Last Supper Religious Wall Frie...\" and save 52% off the $117.90 list price. Buy with confidence as the condition of this item and its timely delivery are guaranteed under the \"Amazon A-to-z Guarantee\". See all Used offers.\nOther Sellers on Amazon\n+ Free Shipping\n+ $22.99 shipping\nDesign Toscano The Last Supper Religious Wall Frieze Sculpture, 23 Inch, Polyresin, Antique Stone\n- Enter your model number to make sure this fits.\n- LAST SUPPER WALL FRIEZE honors one of the world's most famous religious images, Leonardo da Vinci's painting of the last meal of Jesus Christ before His crucifixion.\n- CALLED LA ULTIMA CENA in Spanish, this picture or cuadro immortalizes love and sacrifice in a fine work of dimensional bas-relief decor art.\n- HIGH QUALITY WALL SCULPTURE hand-cast using real crushed stone bonded with durable designer resin with the patina of aged stone.\n- DESIGN TOSCANO HISTORICAL EXCLUSIVE plaque is a timeless classic sculpture to fill a privileged sacred place in home or gallery.\n- Our spiritual plaque measures 23\"Wx5.5\"Dx12\"H and weighs 9 lbs.\nCustomers who bought this item also bought\nCustomers who viewed this item also viewed\nSpecial offers and product promotions\nHave a question?\nFind answers in product info, Q&As, reviews\nPlease make sure that you are posting in the form of a question.\nFrom the manufacturer\nExpect the Unexpected\nImaginatively sculpted with creative, one-of-a-kind details, our wall sculptures are cast in quality designer resin or hand-hammered metal and hand-painted by skilled Toscano artisans exclusively for you!\n- Great Selection\n- Exclusive Designs\n- Durable Construction\n- Guaranteed Quality\nThe Last Supper Wall Frieze\nBy Design Toscano\nHonoring one of the most famous images of all time, this nearly-two-feet wide 
replica wall frieze draws upon Leonardo da Vinci’s famed work, and then brings it into a new realm in dimensional bas-relief. This impressive museum replica is cast in quality designer resin and finished with the patina of an aged stone. Our timeless classic will fill a privileged place in home or gallery.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "But a 500-year-old mystery was apparently solved today after a painting attributed to Leonardo da Vinci was discovered in a Swiss bank vault.\nThe painting, which depicts Isabella d’Este, a Renaissance noblewoman, was found in a private collection of 400 works kept in a Swiss bank by an Italian family who asked not to be identified.\nIt appears to be a completed, painted version of a pencil sketch drawn by Leonardo da Vinci in Mantua in the Lombardy region of northern Italy in 1499.\nThe sketch, the apparent inspiration for the newly found work, hangs in the Louvre Museum in Paris.\nFor centuries it had been debated whether Leonardo had actually had the time or inclination to develop the sketch into a painted portrait.\nAfter seeing the drawing he produced, the marquesa wrote to the artist, imploring him to produce a full-blown painting.\nBut shortly afterwards he embarked on one of his largest works, The Battle of Anghiari on the walls of Florence’s town hall, and then, in 1503, started working on the Mona Lisa.\nArt historians had long believed he simply ran out of time — or lost interest — in completing the commission for Isabella d’Este.\nNow it appears that he did in fact manage to finish the project — perhaps when he encountered the aristocrat, one of the most influential female figures of her day, in Rome in 1514.\nScientific tests suggest that the oil portrait is indeed the work of da Vinci, according to Carlo Pedretti, a professor emeritus of art history and an expert in Leonardo studies at the University of California, Los Angeles.\n“There are no doubts that the 
portrait is the work of Leonardo,” Prof Pedretti, a recognised expert in authenticating disputed works by da Vinci, told Corriere della Sera newspaper.\n“I can immediately recognise da Vinci’s handiwork, particularly in the woman’s face.”\nTests have shown that the type of pigment in the portrait was the same as that used by Leonardo and that the primer used to treat the canvas on which it was painted corresponds to that employed by the Renaissance genius.\nCarbon dating, conducted by a mass spectrometry laboratory at the University of Arizona, has shown that there is a 95 per cent probability that the portrait was painted between 1460 and 1650.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "The figure of Mary stands with her feet together (stance), but slightly turned out so that she can easily support Jesus’ body with her hands and arms under his armpits and across his chest; she also has one arm draped over his legs at the knee, while her other hand holds his thigh close to her body as she bends forward over him. The position of Christ’s torso shows him as dead from wounds caused by crucifixion; he is naked except for a loincloth covering his genitals, which are indicated only by subtle indentations on either side near where they would be located on a living man (see “crucifixion” below). The vertical vein popping out on Mary’s forehead indicates stress due to grief or fatigue; it was common practice among sculptors working in marble during this period not only because such veins were considered attractive features but also because they provided additional support for figures carved from hard stone like marble.\nLeonardo Da Vinci’s Last Supper\nThe Last Supper is a mural by Leonardo da Vinci that was painted on the wall of the refectory in Santa Maria delle Grazie, Milan. 
It is one of the most famous paintings in the world and considered to be one of Leonardo’s masterpieces.\nThe painting depicts Jesus Christ at the moment he announces that one of his Apostles will betray him (the Apostle Judas). It shows Christ seated at the centre of a long table with his Apostles arranged on either side of him, as if they were sharing a meal together.\nThe Venus de Milo\nThe Venus de Milo is a 2nd century BCE work of ancient Greek sculpture. It was found on the island of Melos in 1820, and it has been in the Louvre Museum in Paris, France since 1821.\nThe statue depicts Aphrodite (the Greek goddess of love) removing her sandal while holding an apple: she is believed to be adjusting her sandal strap or perhaps even picking up her discarded shoe.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Take the 55,000 square foot Park Avenue Armory. Add Dutch artist and filmmaker Peter Greenaway. Stir gently. Serve in low lighting.\nPresenting “Leonardo’s Last Supper: A Vision by Peter Greenaway,” an amalgamation of 8,000 years of art and 112 years of cinema, or so the artist said on the Upper East Side Wednesday morning. The installation uses 33 screens and over 2,000 lights, offering the audience an audio visual tour and cinematic lightshow highlighting two classic paintings – Leonardo’s “The Last Supper” and Paolo Veronese’s “The Wedding at Cana.” The show runs from tomorrow until January 6.\nTake a slideshow tour of the exhibit.", "score": 13.897358463981183, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "By Mike Shoopman\nThe painting that I chose is entitled The Last Supper by Tintoretto. This magnificent painting shows a great amount of detail and it seems to draw in the viewer more than Leonardo da Vinci’s depiction of this important event in history. 
In Tintoretto’s The Last Supper, Jesus is shown with a glow about him that instantly grabs the viewer’s attention and leaves no question about which person is Jesus. Each of Jesus’ disciples, except for Judas who is kneeling on the opposite side of the table from the others, also has a small glow around their head that sets them apart from the other people in the painting. In addition to Jesus and his disciples, there are several other people in the room preparing the meal for Jesus and his disciples. Not only has Tintoretto portrayed the physical realm, he has also included a spiritual realm. Several angels are shown flying in the room, which along with Jesus’ illumination depicts his divine nature.\nThis amazing work of art was painted from 1592 to 1594 (Matthews, Platt, and Noble 396). Tintoretto was in his mid-70s when he painted it, and he died within the same year that he finished this great painting (Matthews, Platt, and Noble 396). Tintoretto’s The Last Supper is in good condition and is located in San Giorgio Maggiore Church in Venice, Italy (Matthews, Platt, and Noble 396).\nI viewed this painting in The Western Humanities: Fifth Edition on page 396. This magnificent work of art can also be found on the internet at http://www.artbible.info/art/large/353.html.\nTintoretto’s The Last Supper is an oil painting on canvas. This painting is twelve feet tall and eighteen feet and eight inches in width. With the height of the painting being twelve feet, the individual on the right in blue would seem to be about six feet tall, making this painting life-size when viewed in person.\nThe artist used several lines to show the light in this painting, including the lines of light extending from Jesus’ head as well as from the lamp that sheds light on the things going on in the room. 
In addition to the lines of light, the lines in the ceiling and the floor of the room pull all parts of this painting together.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-1", "d_text": "Shortly after the unveiling Tuesday morning, the painting left for a world tour, traveling to Christie’s locations in Hong Kong (October 13-16), San Francisco (October 17-21), and London (October 24-26) before returning to New York, where it will be on view from October 28 to November 4.\nThe November 15 sale in New York will also feature a more recent masterpiece: Andy Warhol’s Sixty Last Suppers, a 1986 silkscreen grid that gives Leonardo's The Last Supper the Marilyn Monroe treatment. The estimate? A far more affordable $50 million.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "Twelve years ago, a once-lost painting by Leonardo da Vinci—possibly his last ever—titled Salvator Mundi was unearthed. Six years later, the painting, which was once a part of the Royal collection of King Charles I and eventually made its way to an American estate sale in 2005, was displayed for the public to view at London's National Gallery in the exhibit, Leonardo da Vinci: Painter at the Court of Milan. Now, the painting will be hung for the public to see once again before it is auctioned off by Christie's.\nThe painting, which features Christ as Savior of the World and dates to around 1500, isn't just rare because it was thought to have been lost between 1958, when it sold for just £45 by Sotheby’s to Sir Charles Robinson for the Cook Collection. (At that time, it was thought that the artist wasn't Leonardo da Vinci but was one of his followers, Bernardino Luini.) It's also rare because it's only one of less than 20 paintings that are known to exist by da Vinci. 
Also, Salvator Mundi happens to be the only one in private hands, as Christie's points out.\nCome November 15, the painting will change hands once again as Christie's auctions it off at Rockefeller Plaza. Estimated at around $100 million, the painting could go for a record-breaking sum. This past May, a painting of a skull by Jean-Michel Basquiat did so when it sold for $110.5 million in a Sotheby's auction, becoming the sixth most expensive work of art to be sold at an auction, as The New York Times reported. Thankfully, for those whose art budgets are more limited, the painting will still be able to be enjoyed by the public ahead of the auction.\nLeonardo da Vinci's Salvator Mundi painting is going on a world tour of Christie's flagships, starting October 13 in Hong Kong, where it will stay on view until the 16th. After, the painting will travel to San Francisco to be displayed October 17 to 20. The London location will show the painting next from October 24 to 26. Finally, the painting will be on display in New York, at the Christie's in Rockefeller Plaza, from October 28 to November 4.\nThe opportunity is indeed a rare one.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-1", "d_text": "The paintings here are on four sides of the dome and represent different continents.\nWe visited the Duomo on a couple of occasions, including our tour day when we were able to go inside for a short while.\nYou can see the changeable weather that we had from the skies in these pictures.\nThis castle was originally a Visconti fortress, then the stronghold of the Sforza family during the renaissance, it now houses 10 museums including an armoury and music instrument museum. As we visited the grounds of the castle during our afternoon tour there was no time for the museums.\nThis arch - Arco della Pace - in Parco Sempione marks the start of the road to Paris. 
It is a straight line from the Arco della Pace to the Arc de Triomphe (well, according to our tour guide, although a bit of Google time doesn't seem to corroborate this).\nIl Cenacolo - The Last Supper\nThe convent of Santa Maria delle Grazie, where the refectory housing The Last Supper is.\nVisits to The Last Supper are in small groups and in 15-minute time slots. As our tickets fell in the second half of our tour group's allocation we were able to sit and soak up a little sunshine whilst we waited.\nAnd here it is. Well, not really, as photography is banned inside the refectory to prevent further damage to the painting. One thing I did find very interesting was the fact that this wasn't the only painting in the room. As well as general decorations to the sides and roof of the room (some of which still remain) there is another fresco here by Giovanni Donato da Montorfano. It is on the wall at the opposite end of the room to The Last Supper and depicts the Crucifixion. This fresco was painted in the usual fashion and so the colours remain bright. Leonardo added some figures to the Crucifixion fresco depicting members of the Sforza family who had commissioned the work within the convent. As these figures were painted in the same way as The Last Supper they have also degraded badly.\nOn Sunday Janine and I went off to look for some props that had been broken the day before. We walked from the hotel through the park - Giardini Pubblici - past the Gallery of Modern Art and caught the Metro to Porta Genova.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-2", "d_text": "Hidden away at the beginning, designated by God but basically kept under wraps until finally the Messiah comes forth. And as you stated, it is the afikomen broken off at the beginning of the meal, hidden, and then brought back at the very end of the meal. 
The final thing you are supposed to eat on the evening of Passover, or Pesach, is the piece of matzah which represents the afikomen. The end of the meal is when you eat that final ceremonial piece of matzah.\nThe laid-out supper is a feast of metaphors which tells the story of Christianity through the renewed lens of the Renaissance. According to Urciuoli, “Bitter herbs and charoset is eaten during festivities although olives with hyssop was also consumed on day-to-day basis”. The charoset symbolizes the optimism of the Passover seder as it balances the bitterness of the maror. Of meat and wine – the metaphor of meals in Da Vinci’s The Last Supper. The “last supper” was a modification of the Old Testament observance of the Passover.\nThere is some research suggesting that left-handed people are more creative and, if correct, that is absolutely the case with Leonardo. The Renaissance man is one of the most famous artists confirmed to have had a dominant left hand. Some recent historians think that Leonardo might even have been ambidextrous. In addition to being a wonderful engineering feat, the canal project had financial and military purposes. Da Vinci envisioned irrigating the Arno valley and selling water to farmers to make money for the government.\nThe pair of sculptures, L’Esclave Mourant and L’Esclave Rebelle, stand in the Galerie Michel-Ange, a spacious gallery with large windows that allow natural light to brighten the space. Displayed in the Louvre’s beautiful Salon Carré, the Coronation of the Virgin is one of the Louvre’s masterpieces of medieval painting. Guido di Pietro, known as Fra Angelico, created this work from 1430 to 1432, and it was originally used as an altarpiece for the convent of San Domenico in Fiesole outside Florence. 
The motif of lace was usually employed in 17th-century Dutch paintings to symbolize traditional female virtues.", "score": 11.600539066098397, "rank": 95}, {"document_id": "doc-::chunk-0", "d_text": "A recently discovered Jesus painting by Leonardo da Vinci is being revealed to the public eye for the first time starting Nov. 9 in a London exhibition and via a television series.\nThe painting, titled "Salvator Mundi" (Savior of the World), depicts Christ with his right hand raised in blessing and his left hand holding a transparent globe. It is painted in oil on a wood panel and measures 26 by 18.5 inches in size.\nIt was discovered at an auction in the United States in 2005 and officially identified as a da Vinci in July of this year. It is the most precious art acquisition of the century, experts say.\nThe now cleaned and restored 500-year-old work went missing for most of the 17th through the 19th centuries, only to be discovered, hundreds of years later, in a private collection in the United States. It is now owned by a consortium of art dealers, including Robert Simon, a New York-based specialist in Old Masters, according to ArtNews.\nOnly 15 paintings by Da Vinci still exist, including "Mona Lisa," "The Last Supper" and "Lady with an Ermine" as probably the most famous ones.\n"Salvator Mundi" might presently be worth $200 million, according to ArtNews.\nLondon's National Gallery exhibition, "Leonardo da Vinci: Painter at the Court of Milan," is the most complete display of da Vinci's rare surviving paintings ever held, museum authorities claim. After opening on Nov. 9, the exhibition will run until Feb. 5, 2012. 
Da Vinci's works from multiple museums across the world were brought to London for the purpose of the exhibition.\nThe exhibition is inspired by the recently restored National Gallery painting, "The Virgin of the Rocks," museum authorities said.\nIt is also the first exhibition to be dedicated to da Vinci's goals and techniques as a painter, the museum authorities said in a statement. It concentrates on the work he produced as court painter to Duke Lodovico Sforza in Milan in the late 1480s and 1490s.\n"As a painter, Leonardo aimed to convince viewers of the reality of what they were seeing while still aspiring to create ideals of beauty – particularly in his exquisite portraits – and, in his religious works, to convey a sense of awe-inspiring mystery," the exhibition description states.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-2", "d_text": "This could either be that they had been taken for cleaning, lent to other museums for short-term exhibitions, or even for restoration. On top of being the most visited museum in the world, it is also the largest, covering an area of 652,300 square feet. Luckily, I have had the opportunity to visit the Louvre Museum on 3 different occasions, taking advantage of being young and getting free admission.\nConsequently, the vanishing points of lines Φ2 and Φ3 will also be along the horizon line. The projections of Φ2 and Φ3 on the base line will be Φ2’ and Φ3’ (Figure 14/Figure 15). Consequently, if we had the position of the projection of viewpoint O’, to find Φ2’ and Φ3’ we would draw parallel lines from O’ towards the two diagonals. The locus of the points on the plane that meet section Φ2’Φ3’ at a 90˚ angle is a semicircle whose diameter is Φ2’Φ3’, as shown in Figure 15.\nThese viewpoint lines blend in with the ceiling and walls. His most famous painting, the Mona Lisa, stands proud amongst the collection of the Louvre, along with many of his other works. 
Leonardo da Vinci was arguably one of the greatest men to have ever lived.\n“Leonardo, a vegetarian, has given Christ and the Apostles a vegetarian — or rather a pescatarian — meal.” “So in many ways the painting is — among other things — a representation of the court of Lodovico Sforza, the work’s patron,” King said. King said that the tapestries adorning the walls are very similar to those in the castle in Milan.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-0", "d_text": "The history of this famous work by Leonardo da Vinci, and why so little of it survives. History of the Last Supper's conservation efforts. Support Little Art Talks: https://www.patreon.com/LittleArtTalks www.LittleArtTalks.com Twitter: @LittleArtTalks http://goo.gl/UuSvyp Tumblr: http://goo.gl/fsNDEO Facebook: http://goo.gl/YScjms Pinterest: http://goo.gl/Cazd5J Instagram @LittleArtTalks http://instagram.com/littlearttalks Google+: http://goo.gl/iwDlJf Images: Wikipedia Commons, Public Domain, Fair Use Royalty-Free Music: Pangea by Kevin MacLeod (incompetech.com) Welcome to Little Art Talks! I'm so glad you found this video. I make free educational videos about art history because there's so many amazing things to see! Let's talk about it! If you liked this video, please like, comment, share & subscribe. :) See you soon!", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-1", "d_text": "Image/object size (inches):\n35.43 (height) 39.37 (width) 0.79 (depth)\nThe above image is an estimate of the size of the image/object compared to an average person. 
Box will only show a maximum of 70 inches height x 70 inches width.\nOriginal painting, oil on canvas, 100 x 90 cm, 1997", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-1", "d_text": "The big money comes into the room nowadays when Pollocks and Twomblys are on the block, and promptly leaves when the Reynolds and Winterhalters arrive.\nDr Tim Hunter, who is an expert in Old Master and 19th Century art, told the BBC the painting is “the most important discovery in the 21st Century”.\n“It completely smashes the record for the last Old Masters painting to sell – Van Gogh’s Sunflowers in 1988. Records get broken from time to time but not in this way.\n“Da Vinci painted less than 20 oil paintings and many are unfinished so it’s incredibly rare and we love that in art.”\nBefore the auction it was owned by Russian billionaire collector Dmitry E Rybolovlev, who is reported to have bought it in a private sale in May 2013 for $127.5m (£98m).\nIs it authentic?\nThe painting has had major cosmetic surgery – its walnut panel base has been described as “worm-tunnelled” and at some point it seems to have been split in half – and efforts to restore it resulted in abrasions.\nBBC arts correspondent Vincent Dowd said that even now attribution to Leonardo is not universally accepted.\nOne critic has described the surface of the painting as “inert, varnished, lurid, scrubbed over and repainted so many times that it looks simultaneously new and old”.\n“Any private collector who gets suckered into buying this picture and places it in their apartment or storage, it serves them right,” Jerry Saltz wrote on Vulture.com.\nSpeculation over buyer\nBut Christie’s has insisted the painting is authentic and billed it as “the greatest artistic rediscovery of the 20th Century”.\nGeorgina Adam, who is an Art Market specialist, told the BBC the price of the piece is “fuelled by the sheer amount of money that billionaires have.”\n“This is the last Leonardo painting you can buy.
This isn’t a store of value; it’s the ultimate trophy – only one person in the world can own this.\n“If you think of the wealth of some billionaires, Bill Gates is worth 87 billion, and I’m not saying it’s him, but nearly half a billion would not be a colossal chunk out of his income, for example.”\nThe auction house has not revealed who purchased the picture, but Hunter speculates it could be a buyer from Asia or even be on the way to the new Louvre in Abu Dhabi.", "score": 8.086131989696522, "rank": 100}]} {"qid": 33, "question_text": "What makes Erbium Doped Fiber Amplifiers special in terms of how they amplify signals?", "rank": [{"document_id": "doc-::chunk-1", "d_text": "Among them, a trace impurity in the form of a trivalent erbium ion is inserted into the optical fiber’s silica core to alter its optical properties and permit signal amplification.\nThe working principle of the EDFA is to use a pump light source, which most often has a wavelength around 980 nm and sometimes around 1450 nm, to excite the erbium ions (Er3+) into the 4I13/2 state (in the case of 980-nm pumping, via 4I11/2), from where they can amplify light in the 1.5-μm wavelength region via stimulated emission back to the ground-state manifold 4I15/2.\nAdvantages & Disadvantages of EDFA\n- EDFA has high pump power utilization (>50%)\n- Directly and simultaneously amplifies a wide wavelength band (>80 nm) in the 1550 nm region, with a relatively flat gain\n- Flatness can be improved by gain-flattening optical filters\n- Gain in excess of 50 dB\n- Low noise figure suitable for long-haul applications\n- The EDFA is not small in size\n- It cannot be integrated with other semiconductor devices\nSemiconductor optical amplifier (SOA)\nThe semiconductor optical amplifier is one type of optical amplifier which uses a semiconductor to provide the gain medium. SOAs have a similar structure to Fabry–Perot laser diodes but with anti-reflection design elements at the end faces.
Unlike other optical amplifiers, SOAs are pumped electrically (i.e. directly via an applied current), and a separate pump laser is not required.\n1. Stimulated emission is used to amplify an optical signal.\n2. The active region is in the semiconductor.\n3. An injection current pumps electrons into the conduction band.\n4. The input signal stimulates the transition of electrons down to the valence band, producing amplification.\nAdvantages & Disadvantages of SOA\n- The semiconductor optical amplifier is of small size and electrically pumped.\n- It can be potentially less expensive than the EDFA and can be integrated with semiconductor lasers, modulators, etc.\n- All four types of nonlinear operations (cross gain modulation, cross phase modulation, wavelength conversion and four wave mixing) can be performed.\n- The SOA can be run with a low-power laser.", "score": 50.72730842712065, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "A significant point is that the erbium gives up its energy in the form of additional photons which are exactly in the same phase and direction as the signal being amplified. Lumped amplifiers, where the pump light can be safely contained to avoid safety implications of high optical powers, may use over 1 W of optical power.\nSecond, Raman amplifiers require a longer gain fiber. Since this creates a loss of power from the cavity which is greater than the gain, it prevents the amplifier from acting as a laser. Systems meeting these specifications have steadily progressed in the last few years from a few Watts of output power, initially to the 10s of Watts and now into the 100s of Watts power level.\nThe erbium-doped amplifier is a high-gain amplifier. Another advantage of operating the DFA in the gain saturation region is that small fluctuations in the input signal power are reduced in the output amplified signal.
A typical DFA is several tens of meters long, long enough to already show this randomness of the birefringence axes.\nHowever, Ytterbium-doped fiber lasers and amplifiers, operating near 1 micrometre wavelength, have many applications in industrial processing of materials, as these devices can be made with extremely high output power (tens of kilowatts). The absorption and emission cross sections of the ions can be modeled as ellipsoids with the major axes aligned at random in all directions in different glass sites.\nThe random distribution of the orientation of the ellipsoids in a glass produces a macroscopically isotropic medium, but a strong pump laser induces an anisotropic distribution by selectively exciting those ions that are more aligned with the optical field vector of the pump.\nThe amplification window is determined by the spectroscopic properties of the dopant ions, the glass structure of the optical fiber, and the wavelength and power of the pump laser.\nIn practice, in optical fibers small amounts of birefringence are always present and, furthermore, the fast and slow axes vary randomly along the fiber length.", "score": 49.6494822523939, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "One of the key advantages of erbium-doped fiber amplifiers (EDFAs) is their immunity to crosstalk, because of their inherently slow (100-μs–1-ms) gain dynamics.1 This lack of crosstalk makes EDFAs suitable for use in multichannel fiber networks. In such applications, however, occasional bursts of traffic may result in gain fluctuations because of transient saturation in the EDFA, causing packet-to-packet data interference. Automatic gain control (AGC) loops can be used to achieve the necessary gain stabilization.
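The low-power AGC scheme summarized in the abstract above keeps the amplifier at a constant level of gain saturation by adding a compensation signal when the data signal drops. A minimal sketch of that idea in Python (the function and parameter names such as `control_power_uw` and `target_uw` are illustrative assumptions, not taken from the paper):

```python
def control_power_uw(signal_uw: float, target_uw: float) -> float:
    """Compensation-signal power (in microwatts) that keeps the total
    signal + control power at a constant target level, so the EDFA sees
    a constant degree of gain saturation. Control power cannot go negative."""
    return max(0.0, target_uw - signal_uw)

# When the data signal drops, the control signal takes up the slack:
print(control_power_uw(0.5, 2.0))  # 1.5 uW of compensation power needed
```

The point of the sketch is only that the feedback acts on a microwatt-level optical control signal rather than on the milliwatt-level pump power.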
A feed-forward AGC loop was shown;1 it acts on the pump source to increase the gain when a transient of higher signal power is detected at the EDFA input. The main disadvantage of AGC through pump power control is the large pump power change, ΔPp, necessary to compensate for the gain fluctuation, ΔG (e.g., ΔPp ≈ 15 mW for ΔG ~ 3 dB).2 In this paper we demonstrate an alternate scheme for AGC feedback loops requiring a comparatively very low control power of 2 μW. In this scheme, the error signal is generated by the fluctuations of output amplified spontaneous emission (ASE). This signal is applied to modulate the intensity of a control or compensation signal input to the EDFA to maintain a constant level of gain saturation in the amplifier.\n© 1991 Optical Society of America", "score": 45.82035175842235, "rank": 3}, {"document_id": "doc-::chunk-7", "d_text": "The bandwidth, gain, saturation power and noise of the erbium-doped fiber amplifier (EDFA) are reviewed in the context of high-speed optical communication systems. Recent experiments which have used EDFA postamplifiers, repeaters and preamplifiers to enhance the performance of fiber-optic systems are discussed.\nGain characteristics of an Er3+-doped fiber for high-power picosecond input pulses are studied with an InGaAsP laser diode pump source at 1.46-1.48 μm. The output energy and peak power of the amplified pulses reach as high as 7.9 pJ and 792 mW for a repetition rate of 100 MHz and a pulse width of 10 ps. The gain saturation is so slow that the gain in high-speed pulse transmission systems is determined by a steady-state saturated gain. With the Er3+-doped fiber amplifier, it is shown that solitons can be amplified and transmitted over a long dispersion-shifted fiber by using the dynamic range of an N=1 soliton. Furthermore, optical solitons at wavelengths of 1.535 μm and 1.552 μm have been amplified and transmitted simultaneously over 30 km with an Er3+-doped fiber repeater for the first time.
The collision experiments between these different-wavelength solitons are described. It is shown that there is a saturation-induced crosstalk between multichannel solitons, and the crosstalk (the gain decrease) is determined by the average input power in high bit-rate transmission systems. Subpicosecond soliton and 20 GHz soliton pulse amplifications with Er3+-doped fiber are also described, which indicate that Er fibers are very advantageous for short-pulse soliton communication. Finally, a gain coefficient as high as 2.4 dB/mW is reported using InGaAsP laser diodes.\nTwo fiber laser sources, a resonant fiber laser (RFL) and a superfluorescent fiber laser (SFL), have been given initial tests as gyro sources using a medium-quality gyro test bed. The RFL reacted strongly to optical feedback from the gyro circuit, resulting in very large unstable errors in the gyro output. These were suppressed substantially by an optical isolator which reduced feedback from the gyro, or by a phase modulator within the laser cavity.", "score": 45.58461275103532, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "WHY AMPLIFIERS?\no To amplify an optical signal with a conventional repeater works well for moderate-speed single-wavelength operation.\no Repeaters are expensive for high-speed multiple-wavelength systems.\no Thus optical amplifiers have been developed.\no Optical amplifiers boost the power level of multiple lightwave signals.\nERBIUM-DOPED FIBER AMPLIFIERS\no The active medium in an optical fiber amplifier consists of a 10 to 30 m length of optical fiber that has been lightly doped with a rare-earth element such as erbium (Er) or thulium (Tm).\no The fiber material can be silica or a tellurite glass.\no Silica doped with erbium is good for long-haul telecommunication.\no The EDFA operates in the spectral band of the 1530 to 1560 nm region, that is, the C band or conventional band.\no Amplification mechanism:- a. The optical amplifier uses optical pumping. b.
Pumping gives energy to electrons to reach the excited state. c. After reaching its excited state, the electron must release some energy and drop to the lower level. d. Here a signal photon can trigger the excited electron into stimulated emission, and the electron releases its remaining energy in the form of a new photon.\nENERGY LEVEL DIAGRAM AND TRANSITION PROCESS OF Er3+ IONS IN SILICA\no EDFAs include the ability:- a. To pump the devices at several different wavelengths. b. Low coupling loss to the compatible-sized fiber transmission medium. c. High transparency to signal format and bit rate. d. Immunity from interference effects (crosstalk and intermodulation distortion) when wavelength channels are injected simultaneously into the amplifier.\nGENERAL APPLICATIONS OF OPTICAL AMPLIFIERS\no In-line optical amplifiers:- a. In a single-mode link, the fiber dispersion may be small so that the repeater can be eliminated. b. Instead of regeneration of the signal, simple amplification can be done. c. It is used to increase the distance between regenerative repeaters.\no Preamplifier:- • The optical amplifier is used as a front-end preamplifier for an optical receiver. • A weak optical signal is amplified before photo-detection so that signal-to-noise ratio degradation due to noise can be suppressed in the receiver. • It provides a larger gain factor and bandwidth.\no Power amplifier:- a. The device which can be placed after the transmitter to boost the transmitted power is called a power amplifier. b. This provides an increase in distance depending on the amplifier gain and fiber loss.", "score": 44.594761788995044, "rank": 5}
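The in-line amplifier application described in the slides above comes down to simple decibel bookkeeping: span loss subtracts from the launch power and amplifier gain adds back to it. A small illustrative sketch (the function name and the 20 dB figures are assumptions for illustration, not values from the source):

```python
def link_output_dbm(launch_dbm: float, span_loss_db: float, amp_gain_db: float) -> float:
    """Power after one fiber span followed by an in-line optical amplifier.
    In dB units, fiber loss subtracts and amplifier gain adds."""
    return launch_dbm - span_loss_db + amp_gain_db

# A 20 dB in-line amplifier exactly compensates a 20 dB span,
# restoring the 0 dBm launch power for the next span:
print(link_output_dbm(0.0, 20.0, 20.0))  # 0.0
```

This is why in-line amplification can replace regenerative repeaters as long as dispersion and noise accumulation stay within budget.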
The optimal pump wavelength, Xrt, was determined to be 1.475 pm. At this pump wavelength the maximum gain coefficients for signals at 1.531 pm and 1.544 pm were measured to be 2.3 d13/mW and 2.6 dB/mW, respectively. At λ°pPt high gains were achieved, ranging from 32 dB at pump power Pp = 20 mW up to 40 dB at Pp = 80 mW. These modest pump powers are within the capabilities of currently available 1.48 pm diode lasers. The width about λ°IP for 3 dB gain variation exceeded 27 nm for Pp = 10 mW and 40 nm for pipn > 20 mW. With this weak dependence on pump wavelength, single longitudinal mode lasers do not have a significant advantage over practical Fabry-Perot multi-mode pump lasers.\nThe fabrication and characteristics of a variety of single-mode fibre Bragg reflection gratings are discussed. By simple modifications to the fabrication process fibre gratings can be made with very different characteristics ranging from very narrowband resonators with a 0.04nm bandwidth to broadband reflectors with a 17nm bandwidth.\nWe demonstrate lasing on the 1050nm and 1345nm 4-level transitions of neodymium in multimode fluoride glass fibers pumped by a commercial semiconductor diode laser. Low thresholds, high output powers, and high efficiencies are all demonstrated.\nTechniques for the generation of second-order nonlinearities in optical fibres are described. Applications to nonlinear frequency-mixing and relevant phasematching techniques are discussed. Electra-optic modulation via the Pockels effect is demonstrated.\nGermano-silicate optical fibres have been fabricated by MCVD with a refractive index difference between the core and the cladding glass of up to 0.05. 
It is demonstrated that by depositing within Vycor tubes during preform fabrication, fibre may be subsequently drawn at lower temperatures than are possible with conventional silica preforms and this reduces the germania-induced excess loss.", "score": 43.83331851584476, "rank": 6}, {"document_id": "doc-::chunk-4", "d_text": "with a pump wave copropagating with the signal wave), in the backward direction, or bidirectionally. The direction of the pump wave does not influence the small-signal gain, but the power efficiency of the saturated amplifier and also the noise characteristics. Bidirectional pumping can be a way not only to apply a high pump power, but also to achieve a low noise figure and a high power efficiency at the same time.\nMost fiber amplifiers (e.g. those based on erbium and ytterbium) operate on quasi-three-level transitions (neodymium-doped amplifiers are a notable exception). This means that in the unpumped state such amplifiers exhibit some losses caused by the active ions; only when a certain excitation level is exceeded, does actual amplification take place. The quasi-three-level nature also has implications for amplifier noise, in particular an increased noise figure, which however can be minimized by certain design optimizations.\nOptical nonlinearities such as the Kerr effect can be significant in fiber amplifiers, particularly for those amplifying ultrashort pulses (→ ultrafast amplifiers). This can lead to strong self-phase modulation, but also to excessive Raman gain and thus to the generation of a strong first-order Stokes wave at a wavelength some tens of nanometers longer than that of the amplified signal. For single-frequency operation, stimulated Brillouin scattering is the most important nonlinearity.\nThe effect of the nonlinearity can be reduced e.g. by increasing the fiber mode area (but at the expense of a lower gain efficiency and possibly worse beam quality) or by decreasing the fiber length. 
The latter measure becomes possible when using a fiber with higher doping concentration, but this can lead to concentration quenching.\nA technique for radically reducing nonlinear effects is chirped-pulse amplification, where pulses are strongly dispersively stretched before entering the amplifier and subsequently compressed again. All-fiber setups using that technique (with a fiber Bragg grating as the compressor) can generate femtosecond pulses only with energies well below 1 μJ, leading to peak powers in the kilowatt region. Substantially higher peak powers of the order of 100 MW are possible when using bulk-optical elements in addition to a fiber amplifier based on large mode area fiber, but some characteristic advantages of fiber-optic systems are lost with that approach.", "score": 42.07632883676842, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "In terms of gain saturation, fiber amplifiers are very different from semiconductor optical amplifiers (SOAs). Due to the small transition cross sections, the saturation energy is fairly high, e.g. some tens of microjoules for a typical erbium-doped telecom amplifier, or hundreds of microjoules for a large mode area ytterbium-doped amplifier. As a result, significant energy (sometimes several millijoules) can be stored in a fiber amplifier, and can later be extracted e.g. by a single short pulse. Only for output pulse energies above the saturation energy, pulse distortions through saturation become significant. For amplifying the output of a mode-locked laser, the gain saturation is normally the same as for a continuous-wave laser with the same average power.\nFiber amplifiers are often operated in the strongly saturated regime. This allows for the highest output power, and the effect of slight variations of pump power on the signal output power is substantially reduced.\nASE and Noise\nThe gain achievable is often limited not by the available pump power, but by amplified spontaneous emission (ASE). 
This can become relevant for gains roughly exceeding 40 dB. High-gain amplifiers also need to be protected from any parasitic reflections, because these could lead to parasitic laser oscillation or even to fiber damage, and are therefore often equipped with optical isolators at the output and possibly also at the input.\nASE also provides the fundamental limitation of the amplifier noise properties. While the excess noise of a loss-less four-level amplifier can approach the theoretical limit, corresponding e.g. to a noise figure of 3 dB in the case of high gain, the excess noise can be stronger for the usual quasi-three-level gain media and in the presence of extra losses. Note that ASE and excess noise are often stronger in backward-pumped amplifiers.\nNoise introduced by the pump source may also be an issue. It can directly affect the gain and thus the signal output power, but not for noise frequencies substantially above the inverse upper-state lifetime. (The laser-active ions represents an energy reservoir which can effectively reduce the effect of fast power fluctuations.) A variable pump power can also lead to temperature-dependent heating which translates into phase noise.\nASE itself may be utilized for superluminescent sources with very low temporal coherence, as required e.g. for optical coherence tomography.", "score": 40.54215225874795, "rank": 8}, {"document_id": "doc-::chunk-2", "d_text": "The efficiency, gain, and signal/noise ratio of three-level devices are quite sensitive to the magnitude and spectrum of the cross sections. These, in turn, depend on the glass composition used as host. We report an investigation which determined the relevant cross sections for a variety of glasses and used them to predict performance trends for fiber amplifiers. 
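The ASE-limited noise figure discussed in the text above (approaching 3 dB for a high-gain amplifier in the ideal four-level case) follows from the standard signal-spontaneous beat-noise expression NF = 2·n_sp·(G−1)/G + 1/G. A quick numerical sketch (the helper name is an assumption for illustration):

```python
import math

def edfa_noise_figure_db(gain_db: float, n_sp: float = 1.0) -> float:
    """Beat-noise-limited noise figure of an optical amplifier with linear
    gain G and spontaneous emission factor n_sp. Quasi-three-level media
    with incomplete inversion have n_sp > 1 and thus a higher NF."""
    g = 10 ** (gain_db / 10)
    nf_linear = 2 * n_sp * (g - 1) / g + 1 / g
    return 10 * math.log10(nf_linear)

# For ideal inversion (n_sp = 1) and 40 dB gain, NF approaches the 3 dB quantum limit:
print(round(edfa_noise_figure_db(40.0), 2))  # 3.01
```

This also makes quantitative the remark that quasi-three-level operation and extra losses raise the noise figure above the 3 dB limit.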
Glasses have been identified which have advantages over silica, particularly for pumping with AlGaAs diode lasers at 800 nm.\nThe gain dynamics of erbium-doped fiber amplifiers are studied both experimentally and theoretically. It is shown that the transients associated with gain saturation and gain recovery during multichannel amplification have long characteristic times, i.e. in the 100 μs–1 ms range. Such slow gain dynamics effectively prevent saturation-induced crosstalk and intermodulation distortion effects in the amplification of high-speed WDM and FDM signals.\nA large-signal model of an optically pumped Erbium-doped travelling-wave fibre amplifier is presented, including the influence of excited-state absorption. The maximum gain and optimal amplifier length are discussed, based on the intensity of pump, signal and amplified spontaneous emission. Special attention is given to the gain degradation depending on the mode distribution of the pump light, and noise figures of the amplifier are presented for co- and counter-propagating pump light. Finally, the sensitivity of the calculated results to changes in the values of measured constants is considered.\nWe report on a parametric study to optimize the performance of Er3+-doped fiber amplifiers. An efficient gain of 24 dB in Er3+-doped silica-based fibers has been achieved in the small-signal regime at 1.553 μm with a 40 mW pump power in the 1.48-1.49 μm range. The saturation output signal power was up to 20 mW for a gain of 17 dB. For the signal wavelength a 3 dB bandwidth of 30 nm has been obtained, always with a pump around 1.48 μm. We achieved these results by optimizing the relevant parameters, which are the pump and signal characteristics (wavelength and power), and the Er3+ concentration and length of the doped fiber.
We report on the main parameters which determine the amplifier efficiency (gain, saturation output power, saturation gain, bandwidth, amplified spontaneous emission,...).", "score": 40.14171173138272, "rank": 9}, {"document_id": "doc-::chunk-9", "d_text": "11, tunable loss filter 90 covering a wider bandwidth than a single long-period grating device can be constructed by concatenating adjustable chirp long-period gratings 91 along a single fiber 11. A desired loss spectrum can be obtained by selectively adjusting the chirp of the gratings.\nFIG. 12 illustrates a dynamically gain-flattened amplifier 110 made by including a tunable loss filter 90 composed of the adjustable chirp long-period gratings in a rare earth doped amplifier (such as an erbium-doped fiber amplifier). The amplifier 110 preferably comprises a plurality of rare-earth fiber amplifier stages (e.g. two stages 111 and 112) with the tunable loss filter 90 preferably disposed at the output of the first stage. This gives the highest power and the lowest noise figure. For applications where noise is less important the filter 90 can be placed in front of the first stage 111. For applications where power is less important, it can be placed at the output of the last stage 112. Long-period gratings for flattening the response of an amplifier are described, for example, in U.S. Pat. No. 5,430,817 issued to A. M. Vengsarkar on Jul. 4, 1995, which is incorporated herein by reference. Such devices 110 can be advantageously used in WDM optical communication systems to ensure equalized amplification under a wide variety of conditions.\nFIG. 13 schematically illustrates an optical WDM communication system comprising a source 150 of modulated WDM optical signal channels λ1, λ2, . . . λn along a trunk fiber 11. The channels pass through one or more gain equalized amplifiers 110, which can be gain flattened amplifiers as shown in FIG. 
12 and through one or more ADD/DROP multiplexer devices 102, which can be ADD/DROP devices as shown in FIG. 10. The signals are received at one or more WDM receivers 154.\nFor extensions of the invention, the following represents applicants' best understanding of the theory underlying the invention. Non-linear finite element modeling was used to compute the steady state thermal distributions in operating devices with geometries like those described above.", "score": 38.399254741818766, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "Combined C- and L-band transmission can be achieved by making use of the wide\ngain spectrum provided by Raman amplification. Besides broadband amplification, distributed Raman\namplifiers (DRA) also offer enhanced noise characteristics compared to Erbium-Doped Fiber Amplifiers\n(EDFA), and enable a better control of fiber nonlinearities through a reduction of power variations\nin the transmission fiber.\nThe transmission of 82x10-Gbps NRZ channels (100 GHz channel spacing) over 1500 km\nof SSMF is considered. The line consists of 10 spans, each comprising 150 km SSMF and a hybrid EDFA/DRA\ndispersion compensating module. This module contains two pump lasers to perform DRA in the C- and\nL-bands and two dispersion compensation fibers (DCF) adjusted for the C- and L-bands. Additional\namplification is realized with two EDFAs. Two optical filters are used to flatten the response of\nthe combined DR and EDF amplification. The setup of the investigated system is displayed in\n. The power\nprofile of the WDM signal in the first span is reported in\nThe eye diagrams of the channels located in the center of the C- and L-bands are reported in\nfor the cases\nwhere the nonlinearities are switched off and on in the DCF. The channel power at the span input is set\nto -6 dBm. We can observe that the L-band channel is more affected by fiber nonlinearities than the\nC-band channel. 
This is due to the particularly large input power of this channel into the DCF\nresulting from the nonflat gain of the DRA. For optimal operation, the hybrid amplification scheme\nshould be optimized for better control of the channels' input power into the DCF.\nKeywords: High-capacity, Dense Wavelength Division Multiplexing (DWDM), C-band, L-band, Distributed Raman Amplification (DRA), Nonlinear effects\nSimilar demonstrations are available in VPItransmissionMaker Optical Systems and on the VPIphotonics Forum.", "score": 36.66891108544873, "rank": 11}, {"document_id": "doc-::chunk-28", "d_text": "For example, various semiconductor optical amplifiers and fiber optical amplifiers can be used. The use of fiber amplifiers, and specifically erbium-doped fiber amplifiers, is well-known in the art and will be used in the examples described below. It should be noted that although erbium-doped fiber amplifiers are particularly well-suited to provide amplification in the present invention, and will be described herein, other suitable rare-earth elements may also be used, such as praseodymium, neodymium, and the like.\nAccording to the principles of the invention, optical fiber amplification may be incorporated using a number of different configurations. For example, fiber optical amplifiers 390 may be placed before input optical couplers 310 in optical router portion 340 or after output optical couplers 320 in optical combiner portion 341. Alternatively, fiber optical amplifiers (not shown) may be distributed within the wavelength-selective optical fibers 325 in a similar manner as that described in our co-pending U.S. application Ser. No. 08/777,890, filed Dec. 31, 1996, which is herein incorporated by reference. In yet another configuration, fiber optical amplifiers (not shown) may be judiciously integrated with the tunable fiber gratings 330 along wavelength-selective optical fibers 325 as described in our co-pending U.S. applications, Ser. Nos.
08/920,390 and 08/920,391, both filed on Aug. 29, 1997, both of which are herein incorporated by reference.\nAlthough not explicitly shown in FIGS. 8 and 9, it is contemplated that selected ones of fiber gratings 210 and 330, respectively, can be controlled to facilitate the appropriate “through” routing and “cross-connect” routing of individual channels within the multi-wavelength optical signals. Accordingly, the various control techniques previously described for FIG. 1 apply equally to the embodiments shown in FIGS. 8 and 9.\nFIG. 10(a) shows one of the input optical couplers 310 from optical router portion 340 of optical cross-connect arrangement 300 (FIG. 9). The configuration in FIG. 10(a) essentially represents a 1×M wavelength-selective optical distributor which is a basic building block for the K×M wavelength-selective cross-connect arrangement.", "score": 35.17401152874398, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "- Erbium Doped Fiber Amplifiers (EDFA)\n- Dense Wave Division Multiplexers (DWDM)\n- Overpowered fiber optic systems\n- Metal ion doped fiber\n- High-power light source durability\n- Wavelength independence\n- Attenuation levels ranging from 1dB to 30dB, with standard and premium tolerances, plus custom configurations.\n- 1310nm, 1550nm, 1250-1625nm and 1350/1550nm dual wave lengths\n- UPC -- return loss 55dB or greater\n- APC -- return loss 65dB or greater\n|A fiber optic attenuator is a passive device used to reduce the amplitude of a light signal without significantly changing the wave form itself. This is often a requirement in Dense Wave Division Multiplexing (DWDM) and Erbium Doped Fiber Amplifier (EDFA) applications where the receiver cannot accept the signal generated from a high-power light source.\nSENKO attenuators feature a proprietary type of metal-ion doped fiber which reduces the light signal as it passes through. 
This method of attenuation allows for higher performance than fiber splices or fiber offsets, which function by misdirecting rather than absorbing the light signal. SENKO attenuators are capable of performing in the 1310, C and L Bands.\nSENKO attenuators are capable of withstanding over 1W of high power light exposure for extended periods of time, making them well-suited to EDFA and other high-power applications.\nLow Polarization Dependent Loss (PDL) and a stable and independent wavelength distribution makes them ideal for DWDM.", "score": 34.50735808942992, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "Within optical microchips, light finds its way through channels, waveguides, made of silicon. Light from a glass fiber, for example, is led through a structure of optical channels with splitters and couplers. Silicon is the workhorse for this, but it is still passive conduction of light, with some losses as well. To be able to amplify the signal, or even to include a light source on the chip, extra steps are necessary. Other types of semiconductors, like Gallium Arsenide, are an option. But materials doped with the rare earth material erbium have good amplification properties as well.", "score": 32.78680991915024, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "VPIcomponentMaker™ Fiber Optics is an R&D tool for modeling, optimization, and design of fiber-based optical devices such as doped fiber, Raman and parametric amplifiers, continuous wave and pulsed optical fiber sources, optical signal processing for telecommunication, high-power and ultrafast applications.\nThe signal and noise models based on full-wave and parameterized representations are ideally suited for efficient analysis. The stationary and dynamic Er-,Yb-,Tm- and co-doped fiber and waveguide models simulate signal amplification, noise generation, and higher-order limiting effects in configurations with core- or cladding pumping at single or multiple wavelengths. 
The undoped fiber model simulates linear and nonlinear optical effects due to chromatic dispersion and PMD, Kerr nonlinearity, Raman, Brillouin and Rayleigh scattering.\nBased on elementary building blocks, simple and sophisticated amplifier and laser topologies including active and passive components, signal monitors, control elements and feedback loops can be defined by the user. The general optimization capabilities of the design environment can be combined with specialized heuristic and deterministic algorithms to find, for instance, optimal wavelengths and powers of multiple Raman pumps to ensure a user-defined gain profile within acceptable tolerances. Virtual instrumentation provides versatile means to characterize the designed equipment with respect to the signal gain, noise level and power efficiency within a wide range of operating conditions, which can be followed by conversion of the amplifier design to a rapid black box model.", "score": 32.76682568425166, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "Erbium Glass (Er,Yb:Glass)\nEr,Yb:Glass/Er, Yb, Cr: Glass product, also known as erbium glass (Er glass), is a kind of laser glass with good overall performance.\nEr3+, Yb3+ co-doped phosphate glass (Er, Yb: phosphate glass) is a well-known and commonly used active medium. It emits laser radiation in the “eye-safe” spectral range of 1.5–1.6 µm.\nThe 1540 nm wavelength lies both in the eye-safe region and in the optical fiber communication window, making it suitable for laser generation and signal amplification.\n1540 nm lasers have been widely used in range finders, radar, target recognition, and other fields.\nPhosphate glass combines the long-lived (~8 ms) 4I13/2 laser level of Er3+ with the short-lived (2–3 ms) 2F5/2 excited state of Yb3+, which is resonant with the 4I11/2 level of Er3+.\nThe fast nonradiative multiphonon relaxation from 4I11/2 to 4I13/2 is due to the interaction between Yb3+ and Er3+ ions excited at 2F5/2 and 4I11/2 levels, respectively.
That dramatically reduces the reverse energy transfer and conversion losses.\nEr3+/Yb3+ co-doped phosphate glass is an LD-pumped 1540 nm eye-safe radiation source. It can emit eye-safe 1540 nm laser radiation that can be used directly in laser rangefinders and telecommunications.\nAn Er,Yb glass laser with 1540 nm output does not require any additional components.\nThe Er3+/Yb3+ co-doped phosphate glass laser is an eye-safe-wavelength laser. It has attracted much attention because of its compactness and low cost.\nEat14: Yb3+, Er3+ co-doped phosphate glass, suitable for high-repetition-rate (1–6 Hz) laser-diode-pumped 1535 nm operation. High Yb3+ doping can be achieved in this Eat14 glass.", "score": 31.875199010790297, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "|Publication No: 1363|\nAdvanced materials for fibre and waveguide amplifiers\nJames S. Wilkinson and Martin Hempstead\nRecent research into optical fibre and waveguide amplifiers has been characterised by steady but unrevolutionary progress. Advances in the 1.5 μm telecommunications window include erbium doped materials with flatter gain spectra, and planar amplifiers in a broad range of materials. At 1.3 μm, substantial improvements in praseodymium doped fluoride fibre amplifiers have been witnessed and chalcogenide glasses have shown promise for enhanced gain efficiency.\nCurrent Opinion in Solid State and Materials Science (1997) Vol.2(2) pp.194-199\nSouthampton ePrint id: 78005\nCopyright University of Southampton 2006", "score": 31.863525597810757, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "In this paper, a novel erbium-doped fiber laser with a linear cavity is demonstrated. The wavelength-selective devices are a fiber Bragg grating (FBG) and a high-birefringence (HiBi) fiber loop mirror.
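The channel spacing produced by such a HiBi fiber loop mirror is set by the fiber birefringence B and the HiBi fiber length L through Δλ ≈ λ²/(B·L). A rough sketch (Python; the B and L values below are hypothetical, chosen only to reproduce a ~0.8 nm spacing near 1550 nm, and are not taken from the paper):

```python
def comb_spacing_nm(wavelength_nm: float, birefringence: float, length_m: float) -> float:
    """Wavelength spacing of a high-birefringence fiber loop (Sagnac) comb filter."""
    lam_m = wavelength_nm * 1e-9
    return lam_m ** 2 / (birefringence * length_m) * 1e9

# Hypothetical HiBi section: B = 5e-4, L = 6 m -> ~0.8 nm spacing at 1550 nm
print(round(comb_spacing_nm(1550, 5e-4, 6.0), 2))  # 0.8
```

Doubling the HiBi fiber length halves the comb spacing, which is how such filters are tailored in practice.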
From 1543.8 to 1555.2 nm, 15 lasing wavelengths with a side-mode suppression ratio (SMSR) > 54 dB and approximately 0.8-nm spacing have been obtained by using the comb-like reflection spectrum of the fiber loop mirror, and the intensity reaches about 2.5 dBm on average. The reflectivity at different wavelengths can be tuned with different settings of the polarization controller, and the relative laser intensity can be controlled over a dynamic range of 13.5 dB.\n© 2005 Chinese Optics Letters", "score": 31.802370271113507, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "A stabilized and tunable single-longitudinal-mode erbium-doped fiber ring laser has been proposed and experimentally demonstrated. The laser is structured by combining the compound cavity with a fiber Fabry–Pérot tunable filter. An injection-locking technique has been used to stabilize the wavelength and output power of the laser. One of the longitudinal modes is stimulated by the injected continuous wave so that this mode is able to win the competition to stabilize the system. A minimum output power of 0.6 dBm and a signal-to-noise ratio of over 43 dB within the tuning range of 1527–1562 nm can be achieved with the proposed technique. A wavelength variation of less than 0.01 nm, a power fluctuation of less than 0.02 dB, and a short-term linewidth of about 1.4 kHz have also been obtained.\n© 2007 IEEE", "score": 31.70728013636922, "rank": 19}, {"document_id": "doc-::chunk-4", "d_text": "The maximum benefit of gain filtering is obtained by designing the transverse ytterbium dopant profile to optimize the overlap of the gain with the fundamental mode while minimizing the gain-overlap of all other modes, performing a global optimization at all levels of saturation.
It has been recently shown that gain filtering in fiber amplifiers can lead to better beam quality than the injected seed beam [7, 17].\nGain filtering possesses two unique features that set it apart from all other mode-control techniques: lossless filtering and geometrical overlap. The first factor is unique amongst all other mode-filtering techniques that, while providing loss to higher-order modes, also provide loss for the fundamental mode. This makes gain filtering the highest efficiency mode-filtering method available. Second, mode filtering relies on the geometric overlap of the modes with the gain profile. Rather than relying on the difference between modal indices that necessarily decreases with increasing core area, gain filtering is indefinitely scalable, since the mode profiles essentially do not change with increasing core area.\nOne anticipated drawback to gain filtering in round fibers is the aforementioned mode deformation, where the mode becomes compressed towards the outside edge of the bend for large core diameters. Although the reduced mode size is detrimental to most LMA fiber applications, for very large cores (~100 µm) the displacement of the mode towards the edge of the waveguide can reduce the effectiveness of gain filtering by altering the overlap of the deformed modes with the centralized gain region .\nThe SHARC fiber offers a unique advantage that can exploit gain filtering without this packaging limitation. The SHARC fiber is coiled in the fast-axis direction, but the gain filtering is applied in the slow-axis direction, as depicted in Fig. 1. Therefore, no slow-axis mode offset will be incurred, and the mode overlap with the gain will remain unchanged regardless of core area or coiling diameter. Consequently, the integration of gain filtering into the SHARC fiber yields the ideal architecture for significant core-area scaling of high-power fiber amplifiers to 30,000 µm2 and beyond.\n3. 
SHARC fiber amplifier analytic calculations\nAs was discussed in detail in , the SHARC fiber geometry lends itself nicely to separation of variables such that the fast- and slow-axis physics can be handled nearly independently of each other. This makes direct analytical modeling possible, from which the primary physics can be obtained. The validity of this assumption and its benefits were confirmed with rigorous three-dimensional beam propagation modeling (BPM) simulations.", "score": 31.60454434181724, "rank": 20}, {"document_id": "doc-::chunk-4", "d_text": "The beam quality factors were calculated to be Mx2 = 1.79 and My2 = 1.64, respectively, and the corresponding laser beam profile is shown in the inset of Fig. 2. The slope efficiencies (5.4% and 6.5%) are lower compared with the earlier reported 2 at.% erbium-doped Y2O3 ceramic (slope efficiency of 15%), which can be attributed to the lower transmittance of the output coupler and the non-optimized pumping wavelength.\nFor the Er:Lu2O3 ceramic laser system with 967 nm LD pumping, the slope efficiency was about 7.6%, and the output power saturated when the absorbed power was 8.7 W. A maximum output power of 611 mW was achieved, corresponding to an optical-to-optical conversion efficiency of 7.0%. The beam quality factors were also measured at an absorbed pump power of 5.1 W. The quality factors were calculated to be Mx2 = 1.31 and My2 = 1.21, respectively, and the laser beam profile is shown in the inset of Fig. 2. In all cases, Er:Lu2O3 ceramic lasers show an excellent Gaussian transverse profile. Beam quality factors were measured at the maximum absorbed pump power of 8.7 W, and calculated to be Mx2 = 2.58 and My2 = 2.44, respectively. Generally speaking, under 967 and 976 nm LD pumping, Er:Lu2O3 exhibits better performance in terms of slope efficiency, maximum output power, and conversion efficiency.
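Slope efficiency, quoted above as 5.4–7.6%, is the gradient of output power versus absorbed pump power above threshold, P_out = η_slope·(P_abs − P_th). A minimal least-squares sketch (Python; the data points are synthetic, generated for a 7.6% slope, not measured values from the experiment):

```python
def slope_efficiency(points):
    """Least-squares slope of (absorbed pump power W, output power W) pairs."""
    n = len(points)
    mean_p = sum(p for p, _ in points) / n
    mean_o = sum(o for _, o in points) / n
    num = sum((p - mean_p) * (o - mean_o) for p, o in points)
    den = sum((p - mean_p) ** 2 for p, _ in points)
    return num / den

# Synthetic, perfectly linear data with a 7.6% slope (illustration only)
data = [(2.0, 0.102), (4.0, 0.254), (6.0, 0.406), (8.0, 0.558)]
print(round(slope_efficiency(data), 3))  # 0.076
```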
However, it could not be ascertained which laser system would be the superior laser source for 3 μm transitions at room temperature, owing to the non-optimized pump wavelength and laser cavity [14, 18].\nIt is worth noting that the CW output power of 611 mW is comparable to that of earlier reported Er:Lu2O3 crystal lasers with similar bulk dimensions and resonator conditions. Meanwhile, the slope efficiency (7.6%) of the Er:Lu2O3 ceramic is much higher than that of the Er:Lu2O3 crystal, which can be attributed to the better mode matching. Nevertheless, the confocal parameter of the pump beam inside the ceramic was only 2.3 mm, due to the marginal beam quality of the pump source.", "score": 31.11967890168323, "rank": 21}, {"document_id": "doc-::chunk-7", "d_text": "This result can be explained by the fact that the signal-to-noise ratio characteristic was degraded at the band edge of the Er-doped fiber amplifier.\nWe have demonstrated an all-polarization-maintaining, high-pulse-energy, wavelength-tunable, passively mode-locked Er-doped ultrashort pulse fiber laser using a polyimide film dispersed with single-wall carbon nanotubes. The extinction ratio was 16 dB, and stable, self-starting operation was achieved without any polarization control. The dependence on output coupling ratio was examined in detail. The output coupling ratio could be increased to 99% with a wavelength filter. The maximum output power was 12.6 mW, and the corresponding pulse energy was 580 pJ. Compared with previous work, the average power was a factor of 2.5 higher, and the pulse energy was a factor of 5 higher.\nWide wavelength-tunable operation from 1532 to 1562 nm was also demonstrated under the control of the wavelength filter. The output power and spectrum width of the output pulses were almost constant during the wavelength variation. The characteristics of RF noise were examined in detail, and the dependence on the output coupling ratio and oscillation wavelength was examined experimentally.
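The figures above are internally consistent: pulse energy is average power divided by repetition rate, so 12.6 mW and 580 pJ imply a cavity repetition rate near 21.7 MHz (the rate itself is inferred here, not stated in the text). A quick check in Python:

```python
def rep_rate_mhz(avg_power_mw: float, pulse_energy_pj: float) -> float:
    """Repetition rate (MHz) implied by average power and per-pulse energy."""
    return (avg_power_mw * 1e-3) / (pulse_energy_pj * 1e-12) / 1e6

print(round(rep_rate_mhz(12.6, 580), 1))  # 21.7
```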
Generally speaking, the magnitude of RF noise level was as low as that of commercially available solid state lasers. A slight noise increment was observed at large output coupling ratio and at the longer wavelength edge of the wavelength tuning band.\nThis work was supported by a Grant-in-Aid for Scientific Research on Priority Areas.\nReferences and links\n1. M. E. Fermann, “Ultrafast fiber oscillators”, in Ultrafast Lasers, M. E. Fermann, ed. (Marcel Dekker, 2003), Chap. 3.\n2. Y.-C. Chen, N. R. Raravikar, L. S. Schadler, P. M. Ajayan, Y.-P. Zhao, T.-M. Lu, G.-C. Wang, and X.-C. Zhang, “Ultrafast optical switching properties of single-wall carbon nanotube polymer composites at 1.55 μm,” Appl. Phys. Lett. 81(6), 975–977 (2002). [CrossRef]\n3.", "score": 30.93226263902303, "rank": 22}, {"document_id": "doc-::chunk-2", "d_text": "Er–Yb codoped phosphate glasses with improved gain characteristics for an efficient 1.55μm broadband optical amplifiers[J]. Journal of Luminescence, 2014, 148:249-255.|\n| A C B , A T M , B S R D , et al. Sensitizing effect of Yb 3+ ions on photoluminescence properties of Er 3+ ions in lead phosphate glasses: Optical fiber amplifiers[J]. Optical Materials, 2018, 86:256-269.|\n| Ryabtsev G I , Bezyazychnaya T V , Parastchuk V V , et al. Spectral and temporal properties of diode-pumped Er, Yb: glass laser[J]. Optics Communications, 2005, 252(4-6):301-306.|\n| Francini R , Giovenale F , Grassano U M , et al. Spectroscopy of Er and Er–Yb-doped phosphate glasses[J]. Optical Materials, 2000, 13(4):417-425.|\n| Sendova M , JA Jiménez, Honaman C . Rare earth-dependent trend of the glass transition activation energy of doped phosphate glasses: Calorimetric analysis[J]. Journal of Non-Crystalline Solids, 2016, 450:18-22.|\n| Chen C , He R , Tan Y , et al. Optical ridge waveguides in Er3+/Yb3+ co-doped phosphate glass produced by ion irradiation combined with femtosecond laser ablation for guided-wave green and red upconversion emissions[J]. 
Optical Materials, 2016.|\n| Balaji S , D Ghosh, Biswas K , et al. Insights into Er 3+ Yb 3+ energy transfer dynamics upon infrared ~1550nm excitation in a low phonon fluoro-tellurite glass system[J]. Journal of Luminescence, 2017, 187:441-448.|\n| Feng S , Fei L , Li S , et al. The fractional thermal factor in LD-pumped Yb3+/Er3+ codoped phosphate glass. IEEE, 2009.|\n| Dan G , Mihailov S J , Walker R B , et al.", "score": 30.108970160891424, "rank": 23}, {"document_id": "doc-::chunk-3", "d_text": "Bragg Gratings Made With a Femtosecond Laser in Heavily Doped Er–Yb Phosphate Glass Fiber[J]. IEEE Photonics Technology Letters, 2007, 19(12):943-945.|\n| Zhu M , Tongzhao G U . Spectroscopic properties of Er3+/Yb3+ co-doped tantalum-niobium phosphate glasses for optical waveguide laser and amplifier[C]// International Conference on Microwave & Millimeter Wave Technology. IEEE Xplore, 2004.|\n| Danger T , Huber G , DeNker B I , et al. Diode-pumped cw laser around 1.54 μm using Yb, Er-doped silico-boro-phosphate glass[C]// Conference on Lasers & Electro-optics. IEEE, 1998.|\n| Jiang S , Hamlin S J , Myers J D , et al. High-average-power 1.54-μm Er3+:Yb3+-doped phosphate glass laser. 1996.|\n| Osellame R , Valle G D , Chiodo N , et al.
Mode-locked and single-longitudinal-mode waveguide lasers fabricated by femtosecond laser pulses in Er:Yb-doped phosphate glass. IEEE, 2007.|\n| Valles J A , Rebolledo M A , J Cortés. Full characterization of packaged Er-Yb-codoped phosphate glass waveguides[J]. IEEE Journal of Quantum Electronics, 2006, 42(2):152-159.|\n| Aseev V A , Ulyashenko A M , Nikonorov N V , et al. Comparison of highly doped Ytterbium-erbium phosphate and silicate glasses for microchip lasers[C]// International Conference on Advanced Optoelectronics & Lasers. IEEE, 2005.|\n| Qiu T , Li L , Temyanko V , et al. Generation of high power 1535nm light from a short cavity cladding pumped Er:Yb phosphate fiber laser[M]. 2004.|\n| Buchenkov V A , Krylov A A , Mak A A .", "score": 29.603349875861575, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "INO draws on its unique expertise in active fiber design and fabrication to develop innovative solutions for its clients. INO boasts a number of newly patented large-mode-area (LMA) fiber designs providing outstanding performance for scaling fiber amplifier output power and energies while preserving high beam quality and good pointing stability.\nFiber modeling (multiple INO patents for multiclad LMA fiber designs)\nLMA fibers with core sizes up to 40 microns and high output beam quality (M2 < 1.5) for good pointing stability and a high SBS threshold\nGain-selective doping for optimized beam quality\nPolarization-maintaining LMA fibers with a high signal polarization extinction ratio, well suited to harmonic generation\nGain blocks assemblies\nOur designs draw on INO’s unique multidisciplinary expertise in fiber modeling and MCVD technologies. They use INO’s unique proprietary multiclad LMA fiber designs to preserve high beam quality in LMA fibers for the benefit of our clients. 
INO holds many patents in multiclad LMA fiber designs.\nMultistage pulsed-fiber lasers & amplifiers\nHigh-power fiber lasers\nSingle-mode fiber amplifiers/lasers\nLMA Yb fibers, Er-Yb fibers\nPM fibers for 2nd and 3rd harmonic generation\nINO is one of the few remaining independent active fiber manufacturers. Moreover, our mission is to develop custom fibers for our clients. INO can help you with highly specialized fiber designs for multistage laser amplifiers as well as a wide range of specially designed undoped fibers.\nOther INO services include tapered fibers, fiber gain blocks, and laser soldering and polishing of fiber end caps. INO also offers support for the design and fabrication of fiber laser prototypes and can supply fiber to clients who produce their own fiber lasers. INO’s wide range of expertise in electronics, lasers, optical design, and material processing can be an important advantage for clients seeking solutions for specialty applications. INO also offers short-run production when needed.\nINO has many years of experience designing and fabricating Yb-doped, Er-Yb-doped, hollow, and PM fibers, as well as high-attenuation fibers and microstructured fibers.
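A beam-quality spec such as M² < 1.5 maps directly onto far-field divergence through θ = M²·λ/(π·w₀), where w₀ is the beam waist radius; an M² = 1.5 beam spreads 1.5× faster than an ideal Gaussian. A sketch (Python; the 1064 nm wavelength and 20 µm waist are illustrative assumptions, not INO fiber parameters):

```python
import math

def divergence_mrad(m2: float, wavelength_nm: float, waist_um: float) -> float:
    """Far-field half-angle divergence (mrad) of a beam with quality factor M^2."""
    theta_rad = m2 * (wavelength_nm * 1e-9) / (math.pi * waist_um * 1e-6)
    return theta_rad * 1e3

print(round(divergence_mrad(1.0, 1064, 20.0), 2))  # ideal Gaussian beam
print(round(divergence_mrad(1.5, 1064, 20.0), 2))  # M^2 = 1.5: 1.5x larger
```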
In the fiber amplifier based on the 40/250 μm fully-doped ytterbium fiber, the beam quality factor constantly degrades with the increasing output power, reaching 2.56 at 2.45 kW. Moreover, the transverse mode instability threshold of the confined-doped fiber amplifier is ∼4.74 kW, which is improved by ∼170% compared with its fully-doped fiber amplifier counterpart.\n© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement\nFiber lasers/amplifiers have developed rapidly and gained widespread attention in the last several decades owing to their compact structure, high conversion efficiency, high-power capability, and good beam quality, etc [1–3]. Hitherto, the output powers from fiber lasers/amplifiers have already exceeded the 10-kW level [4,5], which enabled a wide variety of applications, especially in industrial manufacturing [6,7]. However, further power scaling of fiber lasers/amplifiers is challenged by the emergence of nonlinear effects resulted from the high power density within the core, such as stimulated Raman scattering (SRS), and stimulated Brillouin scattering (SBS) [8–10]. The most straightforward solution is to increase the effective mode area by employing large mode area (LMA) fibers, but the cost is a degraded beam quality owing to the increment of supported modes, and very possibly a significantly reduced transverse mode instability (TMI) threshold [2,11].", "score": 29.233695234406266, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "To increase the output power while maintaining good beam quality in the LMA fiber lasers/amplifiers, it is essential to reduce the number of supported transverse modes, and in fact, lots of efforts have been devoted to the LMA fiber designs.\nThere are several fiber design strategies to reduce the number of supported modes and even realize effective single-mode operation in an LMA fiber. One way is to decrease the numerical aperture (NA) of the LMA fiber [12–15]. 
However, the low-NA LMA fiber is sensitive to bending due to the weak mode confinement, making it more demanding in practical use. Another practical issue is that the core NA of the optical fiber could not be arbitrarily reduced, mainly restricted by the technical and physical limitations , therefore, there is limited potential for the core size scaling by lowering the NA. Another approach is designing specialty optical fiber with microstructures or special refractive index profiles, such as chirally-coupled core fiber , hollow-core/hole-assisted photonic crystal fiber [18–20], all-solid photonic bandgap fiber [21,22], large pitch fiber [23,24], tapered fiber , and single or multiple trenched fiber [26,27], etc. These fiber designs could help realize effective single-mode operation in an LMA fiber, but they either require a complicated fabrication process or are difficult for post handling (cleaving and splicing) . Moreover, some of them are difficult to be fabricated into active fibers, making them less attractive in fiber oscillators/amplifiers . Therefore, an easy-to-fabricate and easy-to-use fiber design with good compatibility is in great demand.\nA very promising solution is the confined-doped fiber design, in which only part of the core is selectively doped. In a conventional active fiber, the core is fully doped, and all the supported modes that propagate in the core could experience the gain. While in the confined-doped fiber, transverse mode discrimination, also known as the ‘gain filtering effect’, is established by separating the waveguiding function and the gain through selective doping, and only the modes that occupy the doped regions of the core could extract the gain. 
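The mode discrimination described above can be made concrete with overlap integrals: the fraction of each mode's power that falls inside the doped central region sets its share of the gain. A toy 1-D sketch (Python; the Gaussian-like mode shapes and the 0.75 doped fraction are simplifying assumptions, not the actual fiber design):

```python
import math

def power_fraction_inside(mode_intensity, doped_half_width, core_half_width=1.0, n=4000):
    """Fraction of a mode's power inside |x| <= doped_half_width (simple Riemann sum)."""
    xs = [-core_half_width + 2 * core_half_width * i / n for i in range(n + 1)]
    total = sum(mode_intensity(x) for x in xs)
    inside = sum(mode_intensity(x) for x in xs if abs(x) <= doped_half_width)
    return inside / total

def lp01(x):  # fundamental-like intensity profile, peaked on axis
    return math.exp(-((x / 0.5) ** 2))

def lp11(x):  # higher-order-like intensity profile, null on axis
    return x ** 2 * math.exp(-((x / 0.5) ** 2))

# The fundamental mode overlaps the centrally doped 75% of the core better
# than the higher-order mode, so it preferentially extracts the gain.
print(round(power_fraction_inside(lp01, 0.75), 3))
print(round(power_fraction_inside(lp11, 0.75), 3))
```

The higher-order mode's intensity is pushed toward the core edge, so its overlap with the central doped region is markedly smaller — the essence of the gain filtering effect.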
It has been proven both theoretically and experimentally that confining the doping area to the central part of the core can facilitate fundamental-mode laser output.", "score": 28.622304904372363, "rank": 28}, {"document_id": "doc-::chunk-6", "d_text": "As a comparison, laser spectra of Er:Lu2O3 ceramic lasers at stable output powers of 25 mW, 160 mW, and 575 mW were recorded respectively, as shown in Fig. 4. The corresponding laser wavelengths were 2714.8 nm, 2722.6 nm, and 2736.2 nm, respectively. With a slit width of 20 μm, bandwidth measurements made with the monochromator indicate that the FWHM of the laser lines are all around 0.3 nm. What deserves to be mentioned most is that all the laser spectra exactly avoided the numerous water vapor absorption lines in this wavelength band. And, as the pump power increased, a red-shift of the lasing wavelength was observed in the 3 μm erbium sesquioxide ceramic lasers.\nRed-shifting behavior has been reported in other erbium-doped lasers [1, 21]. This phenomenon can be understood as follows. Before laser emission, the lower multiplet is empty. Short wavelengths with large emission cross sections tend to oscillate first at the beginning of laser emission (four-level nature). As for the Er:Y2O3 ceramic, with four peak emission cross sections quite close to each other, the transitions with smaller water absorption losses, 2707.8 nm and 2723.0 nm, would be preferred for oscillation at the beginning. Once the lower multiplet is populated during laser emission, the laser is forced to operate at longer wavelengths due to reabsorption losses building up at the short-wavelength side of the fluorescence. As for the Er:Lu2O3 ceramic laser systems, all the laser spectra were surrounded by fewer water absorption lines. Transitions with larger emission cross sections, 2714.8 nm, would be preferred for oscillation at the beginning.
As the pump power increased, the ETU process and the nonradiative transition process [4I13/2→ 4I15/2] cannot deplete the 4I13/2 state efficiently enough, so a large residual population accumulates in the long-lived 4I13/2 lower laser level. The character of the lasing process then changes from four-level to quasi-three-level lasing; the reabsorption process is enhanced, and the laser is forced to oscillate at longer wavelengths.", "score": 28.383175724963454, "rank": 29}, {"document_id": "doc-::chunk-6", "d_text": "The third system incorporates intra-fiber phase modulation. This allows considerable simplification of the laser cavity, resulting in a diode-pumpable source of 80 ps mode-locked pulses with a threshold pump power of 520 µW and a slope efficiency of 48%.\nLaser fibers prepared from Nd- and Er-doped phosphate glass possessing a large stimulated emission cross section have been investigated both in a single fiber and in a fiber bundle. In the single fiber, continuous wave oscillations were successfully obtained at 1.054 µm and 1.366 µm on a highly Nd-doped single-mode fiber of 10 mm in length and also at 1.535 µm in an Er-doped single-mode fiber, sensitized by Nd, Yb. In particular, a low threshold of 1 mW and a high slope efficiency of 50% were achieved in 1.054 µm laser oscillation on a Nd-doped fiber, end-pumped with a laser diode. A fiber bundle of phosphate glass doped with 8 wt% Nd2O3 yielded an average output power of 100 W at 50 pps where the bundle was 4.6 mm in diameter and was side-pumped with flash lamps.\nThe performance and characteristics of various Q-switched fibre lasers are examined. All results presented are for laser diode pumped systems, emphasizing the practicality of the Q-switched fibre laser. Operation at 0.94, 1.06, 1.09 and 1.57 µm wavelengths is covered. The results presented show that Q-switching is currently limited by the performance of the modulator.
Theoretical modelling of the process allows us to determine the required properties for a modulator and in addition to predict the expected performance of Q-switched fibre lasers under different conditions.\nWe describe the construction and operation of an FM mode-locked fiber laser using a large-diameter Nd3+ doped silica fiber with dielectric reflectors deposited directly onto the fiber ends and an integrated ZnO acousto-optic phase modulator clamped to the fiber. Optical pulses of less than 60 ps duration have been generated at 432.77 MHz with 400 mW of power to the transducer, and 300-400 ps pulses have been observed with only 40 µW of modulator power, corresponding to less than 130 µrad of phase modulation.", "score": 28.351439690921815, "rank": 30}
Furthermore, the wavelength for both ceramics lasers were found to red-shift with the pump power increased, and the final and dominant wavelength determined to be 2736.2 nm and 2739.0 nm for the Er:Lu2O3 and Er:Y2O3 ceramic lasers, respectively.\nThis work is supported by the National Natural Science Foundation of China (Grant No. 61177045, 61308047, and 11274144), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 13KJB510008), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).\nReferences and links\n1. E. Arbabzadah, S. Chard, H. Amrania, C. Phillips, and M. Damzen, “Comparison of a diode pumped Er:YSGG and Er:YAG laser in the bounce geometry at the 3 μm transition,” Opt. Express 19(27), 25860–25865 (2011). [CrossRef] [PubMed]\n2.", "score": 28.350355174747772, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Erbium Fluoride ErF3 is applied in...\nErbium Fluoride ErF3 is applied in petroleum and environment protection catalysts, mischmetal, polishing powders and Rare Earth fertilizers. Heeger Materials (HM) provides Erbium Fluoride ErF3 at a competitive price. The purity and particle size can be customized.\nWarning: Last items in stock!\nAvailability date: 03/01/2013\nPlease contact us if you need customized services. We will contact you with the price and availability in 24 hours.\n|Synonyms||ErbiumFluorid/Fluorure De Erbium/Fluoruro Del Erbio|\nErbium Fluoride (ErF3) Powder\nErbium Fluoride, High purity Erbium Fluoride is applied as a dopant in making optical fiber and amplifier. Erbium-doped optical silica-glass fibers are the active element in erbium-doped fiber amplifiers (EDFAs), which are widely used in optical communications. 
The same fibers can be used to create fiber lasers. In order to work efficiently, erbium-doped fiber is usually co-doped with glass modifiers/homogenizers, often aluminum or phosphorus.\n|Er2O3 /TREO (% min.)||99.999||99.99||99.9||99|\n|TREO (% min.)||81||81||81||81|\n|Rare Earth Impurities||ppm max.||ppm max.||% max.||% max.|\n|Non-Rare Earth Impurities||ppm max.||ppm max.||% max.||% max.|", "score": 27.51762220234377, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "Definition: optical amplifiers with doped fibers as gain media\nFiber amplifiers are optical amplifiers based on optical fibers as gain media. In most cases, the gain medium is a glass fiber doped with rare earth ions such as erbium (EDFA = erbium-doped fiber amplifier), neodymium, ytterbium (YDFA), praseodymium, or thulium. This active dopant is pumped (provided with energy) with light from a laser, such as a fiber-coupled diode laser; in almost all cases, the pump light propagates through the fiber core together with the signal to be amplified. A special type of fiber amplifier is the Raman amplifier (see below).\nThe originally dominant application of fiber amplifiers was in optical fiber communications over large distances, where signals need to be periodically amplified. Typically, one uses erbium-doped fiber amplifiers with signals of moderate optical power in the 1.5-μm spectral region. Other important application areas of fiber amplifiers were developed later. In particular, high-power fiber amplifiers are now used in laser material processing. Typically, these are based on ytterbium-doped double-clad fibers for signals in the spectral region of 1.03–1.1 μm.
The output powers can be multiple kilowatts.\nGain and Output Power\nDue to the possible small mode area and long length of an optical fiber, a high gain of tens of decibels can be achieved with a moderate pump power, i.e., the gain efficiency can be very high (particularly for low-power devices). The gain achievable is often limited by ASE (see below). The high surface-to-volume ratio and the robust single-mode guidance also allow for very high output powers with diffraction-limited beam quality, particularly when double-clad fibers are used. However, high-power fiber amplifiers usually have a moderate gain in the final stage, partly due to power efficiency issues; one then uses amplifier chains where the preamplifier provides most of the gain and a final stage the high output power.", "score": 26.9697449642274, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "Optical Amplifier Portfolio\nThe Lumentum Amplifier Portfolio\n- Counter/Co-Propagating Raman\nAmplifiers Our Raman amplifiers leverage internally developed, state-of-the-art 14xx pump lasers, internally developed intelligent algorithms for autonomous gain control, and robust safety features to deliver network-ready solutions. Key points of differentiation include market-leading metrics on power consumption (<45 W to support Raman gains in excess of 14 dB on SMF), high output power (900mW – 1200 mW), gain control accuracy, and density.\n- Preamp and Postamp EDFAs\nLumentum has a suite of customizable preamp and postamp EDFA solutions to meet all amplification needs—whether at a ROADM node or at an in-line amplifier (ILA) node. Fixed-gain, variable-gain, and switchable-gain versions at varying output power levels and noise figure metrics are available that are commensurate with the cost and performance requirements of the application. 
All of our EDFA solutions are built around our internal high-power 980 nm pump lasers and offer our industry-leading automatic gain control with fast transient suppression as a core feature. Over 200,000 Lumentum-designed EDFAs have been deployed in DCI, meshed ROADM, and high-capacity long-haul networks.\n- Raman/EDFA Hybrid Amplifiers\nOur Raman/EDFA hybrid amplifiers combine Raman’s low effective noise figure with EDFA’s high output power to provide a high-OSNR solution suitable for high bit-rate long-haul applications. An integrated approach to the Raman/EDFA design optimizes spectral flatness and control flexibility to extract the best possible OSNR performance across a diverse range of fiber spans.\n- High-Power EDFAs\nLumentum EDFAs support up to 25 dBm output power for point applications that are less sensitive to nonlinear penalties. Our high-power EDFA solutions are based on conventional 980 nm pump technologies, and leverage our internally developed, industry-leading 900 mW pump lasers.\n- Switchable-Gain EDFAs\nWe have the broadest portfolio of switchable-gain EDFAs in the industry, with the largest number of network deployments to date. Switchable-gain EDFAs extend the benefits of variable-gain EDFAs a step further and enables using a single EDFA model across a wide range of fiber-span loss distributions in the network.", "score": 26.9697449642274, "rank": 34}, {"document_id": "doc-::chunk-11", "d_text": "WDM systems use optical amplification rather than signal regeneration where possible. WDM becomes practical upon substitution of the optical amplifier for the usual repeater which depends upon electronic detection and optical regeneration. Use of the Er amplifier permits fiber spans of hundreds of kilometers between repeaters or terminals. 
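A rough link-budget sketch helps show why amplifier gain on the order of tens of dB per span is needed; the 0.2 dB/km fiber loss is a typical assumed value for 1550 nm silica fiber, not a figure taken from the text:

```python
def span_loss_db(length_km: float, alpha_db_per_km: float = 0.2) -> float:
    """Total fiber loss over a repeaterless span (0.2 dB/km assumed)."""
    return length_km * alpha_db_per_km

def surviving_fraction(loss_db: float) -> float:
    """Fraction of launched power remaining after the given loss."""
    return 10 ** (-loss_db / 10)

loss = span_loss_db(120)         # 120 km amplifier spacing
print(loss)                      # -> 24.0 dB
print(surviving_fraction(loss))  # about 0.004 of the launched power remains
```

Each in-line amplifier therefore has to supply roughly the span loss in gain just to hold the signal level constant along the link.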
A system in the planning stage uses optical amplifiers at 120 km spacing over a span length of 360 km.\nThe referenced article goes on to describe use of narrow spectral line widths available from the distributed feedback (DFB) laser for highest capacity long distance systems. The relatively inexpensive, readily available Fabry Perot laser is sufficient for usual initial operation. As reported in that article, systems being installed by Telefonos de Mexico; by MCI; and by AT&T are based on DS fiber.\nA number of studies consider non-linear effects. (See, \"Single-Channel Operation in Very Long Nonlinear Fibers With Optical Amplifiers at Zero Dispersion\" by D. Marcuse, J. Lightwave Technology, vol. 9, No. 3, pp. 356-361, March 1991, and \"Effect of Fiber Nonlinearity on Long-Distance Transmission\" by D. Marcuse, A. R. Chraplyvy and R. W. Tkach, J. Lightwave Technology, vol. 9 No. 1, pp. 121-128, January 1991.) Non-linear effects studied include: Stimulated Brillouin Scattering; Self-Phase and Cross-Phase Modulation; Four-Photon Mixing (4PM); and Stimulated Raman Scattering. It has been known for some time that correction of the linear dispersion problem is not the ultimate solution. At least in principle, still more sophisticated systems operating over greater lengths and at higher capacities would eventually require consideration of non-linear effects as well.\nWDM--Wavelength Division Multiplex, providing for multi-channel operation within a single fiber. Channels are sufficiently close to be simultaneously amplified by a single optical amplifier. 
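As a quick sketch of how many equally spaced channels one amplifier window can hold, converting the wavelength window to frequency via delta_f = c * delta_lambda / lambda^2; the 15 nm window and 100 GHz grid are illustrative assumptions, not values from the text:

```python
C = 299_792_458  # speed of light, m/s

def channels_in_band(bw_nm: float, center_nm: float, spacing_ghz: float) -> int:
    """Number of equally spaced WDM channels fitting in an amplifier window."""
    bw_hz = C * (bw_nm * 1e-9) / (center_nm * 1e-9) ** 2
    return int(bw_hz // (spacing_ghz * 1e9))

# A 15 nm erbium-amplifier window around 1550 nm on a 100 GHz channel grid:
print(channels_in_band(15, 1550, 100))  # -> 18
```

Even the modest usable bandwidth of an erbium amplifier thus accommodates well over a dozen channels at standard DWDM spacing.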
At this time, the prevalent optical amplifier (the erbium-doped silica fiber amplifier) has a usable bandwidth of Δλ≅10-20 nm.\nDispersion--When used alone, the term refers to chromatic dispersion--a linear effect due to wavelength dependent velocity within the carrier spectrum.\nSpan--Reference is made here to a repeaterless fiber length.", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-13", "d_text": "Its use is contemplated in a species of this invention.\nIn its broadest terms, the invention reflects the observation that four-photon mixing is a relevant mechanism which must be considered in the design of contemplated WDM systems. A number of factors lend assurance to the assumption that the inventive teaching will take the form described above. For one thing, changing the carrier wavelength, e.g. to λ=1550.+-.20 nm, for introducing requisite dispersion into DS fiber, while in principle appropriate, is not easily achievable. The erbium amplifier at its present advanced state of development, has an operating peak near 1550 nm. Operation 20 nm off this peak reduces the carrier power level to an inconveniently low magnitude for one or more of the channels. It is conceivable that substitution for erbium or that some other change in design of the amplifier will permit this operation. It is more likely that future systems will continue to be designed taking advantage of the present or some more advanced stage of the conventional erbium amplifier.\nFour-photon mixing depends upon the precise wavelengths of generated carriers. Evenly spaced, four-channel systems unavoidably satisfy this requirement. The likely significance of 4PM is somewhat reduced for a three-channel system, and precise uneven spacing even in a four-channel system may, in principle, avoid it as well. Problems in taking this approach require operating parameters which may be beyond the present state of the art, and, which in any event, would introduce some further expense. 
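The point that evenly spaced channels unavoidably satisfy the four-photon mixing condition, while precisely uneven spacing can avoid it, can be checked numerically; this is my own sketch in relative frequency units, not part of the patent text:

```python
from itertools import product

def fwm_products(freqs):
    """All four-photon mixing products f_i + f_j - f_k, with f_k distinct
    from the two generating frequencies."""
    out = set()
    for fi, fj, fk in product(freqs, repeat=3):
        if fk != fi and fk != fj:
            out.add(fi + fj - fk)
    return out

# Four evenly spaced channels, in units of the channel spacing:
channels = [0, 1, 2, 3]
print(sorted(fwm_products(channels) & set(channels)))  # -> [0, 1, 2, 3]

# A precisely uneven spacing can keep every mixing product off-channel:
print(sorted(fwm_products([0, 1, 3, 7]) & {0, 1, 3, 7}))  # -> []
```

With even spacing every channel coincides with a mixing product, which is why maintaining an uneven grid demands the stabilization precision discussed here.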
Reliable stabilization to maintain such precision, e.g. as due to thermal drift, is problematic.\nThese alternative approaches may not be seriously considered for newly installed systems, but may be of value for upgrading of inground systems--particularly, those with DS fiber in place.\nThis is a continuation of application Ser. No. 08/599,702, filed Feb. 9, 1996, now U.S. Pat. No. 5,719,696 which is, in turn, a division of application Ser. No. 08/069,952, filed May 28, 1993, issued as U.S. Pat. No. 5,587,830 on Dec. 24, 1996.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-1", "d_text": "Especially in high-phonon-energy host materials such as garnet, erbium concentrations of no less than 30 at.% are needed to compensate for the nonradiative transition, in which case the gain is concentrated near the surface within a short pump absorption depth. For such shallow pump absorption, the induced temperature gradient may lead to distortion of the laser mode and strong thermal lensing with pronounced spherical aberrations, and ultimately to fracture of the bulk material in a high-power end-pumped system. Diffusion-bonded absorption-free materials serving as heat sinks at the pump facet [4, 5] were proposed to improve this issue. Nevertheless, thermal stresses at the pump interface between the two materials would limit the average output power of the diffusion-bonded setup.\nRecently, cubic sesquioxides have attracted considerable attention due to their superior thermo-mechanical properties: they can be easily doped with rare-earth ions and exhibit a thermal conductivity that exceeds that of YAG by up to 50%. Furthermore, their relatively low maximum phonon energy of ~600 cm−1 (Y2O3 591 cm−1, Lu2O3 612 cm−1, Sc2O3 672 cm−1) compared to YAG (857 cm−1) is very beneficial for efficient laser operation, as it reduces the probability of non-radiative transitions of the laser ion and thus improves the quantum efficiency.
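For orientation, the quantum-defect (Stokes) limit on efficiency for a ~3 μm erbium laser can be sketched as follows; the 976 nm diode pump wavelength is an illustrative assumption, not a value taken from the text:

```python
def stokes_limit(pump_nm: float, laser_nm: float) -> float:
    """Quantum-defect (Stokes) limit on optical-to-optical efficiency:
    the fraction of each pump photon's energy carried by the laser photon."""
    return pump_nm / laser_nm

# Assumed 976 nm pump, ~2.74 um Er emission:
print(round(stokes_limit(976, 2740), 3))  # -> 0.356
```

The large quantum defect of the 3 μm transition is what makes the reduced non-radiative losses of low-phonon-energy hosts so valuable.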
With substantial progress in the growth of single crystals as well as the fabrication of polycrystalline Er-doped sesquioxide laser materials, efficient Er sesquioxide lasers emitting in the 3 μm range have been reported [12–14]. However, the growth of sesquioxide crystals is very challenging due to the extremely high melting point of more than 2400°C. Compared with single crystals, rare-earth-doped transparent ceramics as new laser gain media have drawn great attention [15, 16], due to their rapid and larger-volume fabrication, larger doping concentrations with controllable distribution in the volume of the material, profile and sample structure, etc. Continuous wave (CW) power of 14 W has recently been obtained from Er-doped Y2O3 ceramic lasers with cryogenic cooling (77 K), which represents the highest output power reported to date for a ceramic Er3+ laser operating in the 3 μm range.", "score": 26.23685085959173, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "An optical amplifier is a device that amplifies an optical signal directly, without the need to first convert it to an electrical signal. The most common example is the Erbium Doped Fiber Amplifier (EDFA), where the core of a silica fiber is doped with erbium ions. The amplification window of an optical amplifier is the range of optical wavelengths for which the amplifier yields a usable gain. Improvement of the stabilization dynamics of EDFAs through the insertion of a semiconductor optical amplifier. High-power amplifier module. Such amplifiers are commonly used to produce high power laser systems. First, Raman gain exists in every fiber, which provides a cost-effective means of upgrading from the terminal ends. 
Parametric amplifiers use parametric amplification. Different sites expose ions to different local electric fields, which shifts the energy levels via the Stark effect.\nDoped fiber amplifiers (DFAs) are optical amplifiers that use a doped optical fiber as a gain medium to amplify an optical signal. Becker, High-gain erbium-doped traveling-wave fiber amplifier,” Optics Letters, vol.\nA relatively high-powered beam of light is mixed with the input signal using a wavelength selective coupler (WSC). This effect is known as gain saturation: as the signal level increases, the amplifier saturates and cannot produce any more output power, and the gain is reduced.\nIn addition to boosting the total signal gain, the use of the resonant cavity structure results in a very narrow gain bandwidth; coupled with the large FSR of the optical cavity, this effectively limits operation of the VCSOA to single-channel amplification.", "score": 25.65453875696252, "rank": 38}, {"document_id": "doc-::chunk-3", "d_text": "Advantages & Disadvantages of FRA\n- Variable wavelength amplification possible\n- Compatible with installed SM fiber\n- Can be used to extend EDFAs\n- Can result in a lower average power over a span, good for lower crosstalk\n- Very broadband operation may be possible\n- High pump power requirements, high pump power lasers have only recently arrived\n- Sophisticated gain control needed\n- Noise is also an issue\nAfter talking about these three types of optical amplifiers, we compare them in the following table.", "score": 25.65453875696252, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "The power per channel must be sufficient to provide 
an adequate signal to noise ratio in the presence of the amplified spontaneous emission (ASE) noise from the amplifiers, necessitating a high amplifier total output power for systems with high fully-loaded capacity. The amplifiers are thus configured to provide an optical output signal at a nominal optical power. The nominal output power level is insensitive to the power at the input of the amplifier. As the amplifier input power varies over a wide range, the output power changes very little around this nominal output power level. Thus, when the optical link is fully loaded, each channel is amplified to a substantially equal optical output power. If the initially deployed system uses only a few channels for information, these channels share all of the amplifier output power. As additional channels are added, the optical output power per-channel decreases.\nWhen some channel powers increase compared to other channels, problems may be caused by an effect known as spectral hole burning (SHB). In an optical communication network using rare-earth-doped fiber amplifiers, such as erbium-doped fiber amplifiers (EDFAs), signal-induced saturation in the doped fiber medium may cause SHB. As a result of SHB, a gain depression or “hole” may be induced in the gain spectrum of a WDM system in the spectral vicinity of a saturated channel. When a WDM system is loaded with low channel counts during initial deployment, for example, the system may show a severely distorted gain shape such that the channels experience a higher power evolution along the system.\nReference should be made to the following detailed description which should be read in conjunction with the following figures, wherein like numerals represent like parts:\nTurning now to\nSystem 100 may be employed to span a body of water 104. When used to span a body of water, e.g. an ocean, repeaters 110 may be seated on the ocean floor 102 and the transmission path may span between beach landings. 
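The power sharing described above, a constant-output amplifier dividing its fixed total power among however many channels are loaded, can be sketched as follows; the +17 dBm total output is an illustrative figure of my choosing, not from the text:

```python
import math

def per_channel_dbm(total_dbm: float, n_channels: int) -> float:
    """Per-channel power when the amplifier's fixed total output is shared
    equally among the loaded channels."""
    return total_dbm - 10 * math.log10(n_channels)

# Sparse initial loading versus a fully loaded system (assumed +17 dBm total):
print(round(per_channel_dbm(17.0, 4), 2))   # -> 10.98 dBm per channel
print(round(per_channel_dbm(17.0, 64), 2))  # -> -1.06 dBm per channel
```

Each doubling of the channel count costs about 3 dB per channel, which is why lightly loaded systems see much higher per-channel powers than the fully loaded design point.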
It will be appreciated that a plurality of repeater and optical media links may be disposed beneath water and/or over land.\nWhen a system, e.g. system 100, is configured as a WDM system and initially deployed with unutilized channels, information signals on utilized channels may cause a distorted gain shape as a function of the loading configuration.", "score": 25.65453875696252, "rank": 40}, {"document_id": "doc-::chunk-11", "d_text": "Notably, the beam quality was well maintained during the power scaling process: the beam quality factor of the seed laser was 1.7 and only slightly increased to 1.72 at 4.42 kW, and then to 1.89 at 5.07 kW owing to the onset of TMI at ∼4.74 kW. Moreover, the SRS was also suppressed to about 40 dB below the signal laser at the output power of 6.2 kW thanks to the large mode area design. Further power scaling and better beam quality could be expected by mitigating the TMI effect. In stark contrast, the beam quality factor of the fiber amplifier based on fully-doped 40/250 μm YDF degraded to 2.56 at the output power of 2.45 kW and the TMI threshold was around 1.76 kW. Therefore, compared with the fully-doped fiber amplifier, not only was the beam quality improved, but the TMI threshold was also increased by ∼170% in the confined-doped fiber amplifier. This work reveals the intrinsic advantages of the confined-doped fiber for high-order mode suppression as well as TMI mitigation and proves the feasibility of employing the confined-doped fiber for realizing a high-power fiber laser with good beam quality. Single-mode operation at even higher power levels could be expected by systematically optimizing the fiber parameters, coiling radii, and pumping schemes, etc.\nInnovative Research Groups of Hunan Province (2019JJ10005); Hunan Provincial Innovation Construct Project (2019RS3018); National Natural Science Foundation of China (62035015).\nThe authors would like to thank Dr. 
Zilun Chen’s group for fabricating the cladding mode stripper and the pump and signal combiner, and Liang Xiao, Jiawei He, Jiaxin Song as well as Cong Zhou for their kind help in the experiment.\nThe authors declare no conflicts of interest.\nData underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.\n1. D. J. Richardson, J. Nilsson, and W. A. Clarkson, “High power fiber lasers: current status and future perspectives,” J. Opt. Soc. Am. B 27(11), B63–B92 (2010). [CrossRef]\n2.", "score": 25.65453875696252, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "In this post, we would like to educate you about laser crystals.\nErbium- and chromium-doped yttrium-scandium-gallium garnet (Er,Cr:YSGG) is an effective laser crystal for lasing at a wavelength of 2800 nm, which lies in an important water absorption band. The Er,Cr:YSGG crystal has a high conversion efficiency, stable chemical properties, and a long fluorescence lifetime. These crystals are widely used in dentistry, environmental studies, communication systems, the military industry and remote sensing technologies.\n– High conversion efficiency;\n– Excellent optical properties;\n– Lasing modes: continuous, free-running or Q-switched;\n– The lowest generation threshold among standard erbium-doped crystals;\n– The highest differential quantum efficiency among standard erbium-doped crystals;\n– Natural disorder leading to an increase in the width of the spectral pumping line and in strength;\n– Flash lamp (chromium absorption bands) or diode (erbium bands) pumping;\n– Long fluorescence lifetime.\nMain properties of Er,Cr:YSGG crystals:\nNeodymium-doped yttrium-vanadate crystals (Nd:YVO4) are some of the most effective laser crystals for coherent sources with diode pumping, especially for sources with low and middle power density. 
Their absorption and emission properties are even better than those of Nd:YAG crystals.\nA laser-diode-pumped Nd:YVO4 crystal was combined with crystals with a high non-linearity factor (LBO, BBO or KTP) to shift the frequency of the lasing output from the near-IR to the green, blue or even UV spectral region. This combination in solid-state lasers is an ideal tool for most common laser applications, including material processing, spectroscopy, wafer quality control systems, laser printing, etc. Diode-pumped Nd:YVO4 solid-state lasers are quickly gaining markets where water-cooled ion lasers and lamp-pumped lasers traditionally dominate, especially when a compact design and emission with a single longitudinal mode are required.", "score": 25.600077049765623, "rank": 42}, {"document_id": "doc-::chunk-8", "d_text": "4(b), which are recorded by an optical spectrum analyzer with a 0.2 nm resolution. The full width at half maximum (FWHM) linewidth of the output spectrum increases from ∼1.14 nm for the seed laser to ∼4.46 nm at the maximum output power. Benefiting from the large mode area, the signal-to-noise ratio, defined as the intensity difference between the signal laser at 1080 nm and the Raman components at ∼1135 nm, is about 40 dB, indicating very good SRS suppression. Moreover, the proportion of the residual 1018 nm pump power in the total output power is calculated through spectral integration to be ∼0.4‰. Therefore, the residual pump power occupies only a negligible proportion of the total output power.\nThe evolution of the beam quality factor (M2) is also measured at different output powers. The seed laser is a single-mode laser based on double-clad fiber with a 10 μm core size and a core NA of ∼0.08; however, when the pump laser is not applied, the beam quality factor of the seed laser becomes 1.43 after passing through the PSC, which possibly results from the imperfect fabrication of the PSC. 
Then, the beam quality factor further degrades to 1.70 after passing through the confined-doped fiber owing to the refractive index profile mismatch between the 40/250 μm signal fiber of the PSC and the 40/250 μm YDF, as the refractive index profile of the confined-doped fiber is not uniform across the core (shown in Fig. 2(b)). The beam quality factor M2 is maintained relatively well during the power scaling process, being 1.72 at the output power of 4.42 kW, as presented in Fig. 5. A further increase of the output power to 4.74 kW leads to the onset of TMI, where the beam quality starts to degrade, reaching 1.89 at the output power of 5.07 kW. As the output power continues to increase, the beam quality factor degrades rapidly to 2.05 at 5.37 kW and 2.3 at 6.20 kW owing to the more severe TMI effect.\nThe temporal traces and the corresponding radio frequency spectra of this fiber amplifier are also recorded at different output powers.", "score": 25.56581931449972, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "Emission spectra of were obtained by illuminating the 0.5%-doped sample with a diode laser. Luminescence was collected with the Optical Spectrum Analyzer (Yokogawa, model AQ6370C). A polarization beam splitter was inserted into the collecting optics to separate - and -polarized signals. The emission cross sections of the transitions, shown in Fig. 2, were calculated via the standard Füchtbauer–Ladenburg method, using the lifetime of the manifold of .\nIt can be seen that in the -polarized emission spectrum of there are three emission maxima where lasing can be expected: at (cross section ), at (), and at (). The relevant energy level scheme of the and the manifolds can be found in and in . In the -polarized spectrum, there is one transition at suitable for laser operation with the peak cross section of .\nLaser experiments were carried out with the antireflection-coated, long, thick 0.7% sample. 
The crystallographic axis of the crystal was normal to the axis of the laser cavity, thus enabling longitudinal pumping in any chosen polarization. The crystal was bonded between copper plates water-cooled to . A simplified experimental laser setup is shown in Fig. 3.\nTwo different sources were used for pumping into major absorption lines of : a CW Er-fiber laser with narrowband output ( FWHM) and a commercial, spectrally narrowed ( FWHM), fast- and slow-axis collimated laser diode bar stack (QPC Lasers). The Er-fiber laser output wavelength was tuned to the absorption band with large cross section and relatively small cross section.\nThe unpolarized pump beam was focused into the crystal by the spherical lens.\nThe diameter of the almost cylindrical pumped region inside the crystal was approximately ( level). The short laser cavity () consisting of a concave output coupler () and a flat dichroic mirror ( at , at ) resulted in the mode diameter of for good mode matching.\nFigure 4 shows the CW output power of the laser pumped by an Er-fiber laser. The best efficiency was achieved with the 5% outcoupling. The maximum obtained output power was , and the maximum slope efficiency . The fraction of the absorbed pump, carefully derived from the measurements of the transmitted pump while the laser was operational, was calculated at 41% to 44%.", "score": 25.000000000015024, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "EYDFA High Power Multi-output Optical Fiber Amplifier\nShort Description: The EY-doped fiber amplifier is a high-power optical amplifier. It offers a highly reliable, flexible and low-cost solution for large-area CATV coverage of metropolises and medium-sized cities with high-density homes passed, especially for FTTH applications.\nThe Multi-Output EYDFA is a high-power EDFA with up to 32 output ports and a built-in 1310/1490/1550 nm WDM. The Multi-Output optical amplifier is a high-power EDFA with 1535~1565 nm bandwidth. 
It is primarily intended for CATV or applications that require 1 to 8 contiguous channels (ITU wavelengths). It offers a low-cost, flexible option for FTTH coverage of CATV systems, and it is deployed on a broad scale in medium and large sized cities.\nThe Multi-Output EYDFA supports single-wavelength cable TV transmission. With its outstanding and reliable performance, it is mainly applied in MMDS, DBS, FTTx and FTTB systems to build large and medium-sized CATV optical fiber transmission networks.\n- High output power, Er/Yb co-doped\n- Warm backup, on-line standby, automatic redundancy\n- Supports ITU channel banding\n- Low optical input power; output optical power adjustable down to -3 dB\n- APC/ATC circuit design, built-in high-power optical isolator, mainly used for fiber access networks of EPON architecture\n- Single or dual input options; built-in optical switch for dual input, with the switching threshold set by front-panel buttons or web SNMP\n- Output adjustable by front-panel buttons or web SNMP over the range +0.5 dBm to -4.0 dBm\n- Up to 32 output ports with built-in 1310/1490/1550 nm WDM; maximum total output 37 dBm\n- Standard RJ45 port for remote control; contact output and web management are optional, and a plug-in SNMP hardware slot is reserved for upgrades\n- Laser key switch to turn the laser on/off\n- RF test function", "score": 24.578604041366436, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "The optical amplifier is an important technology for optical communication networks. Because there is no need to first convert the signal to an electrical signal, optical amplifiers are now used instead of repeaters. As we know, there are several types of optical amplifiers. Among them, the main amplifier technologies are the Doped fiber amplifier (e.g. EDFA), the Semiconductor optical amplifier (SOA) and the Fiber Raman amplifier. 
Today, we are going to study and compare different types of optical amplifiers in this paper.\nBefore comparing the different types of optical amplifiers, let’s take a closer look at the fiber optic amplifier. In general, a repeater includes a receiver and transmitter combined in one package. The receiver converts the incoming optical energy into electrical energy. The electrical output of the receiver drives the electrical input of the transmitter. The optical output of the transmitter represents an amplified version of the optical input signal plus noise. Repeaters do not work for fiber-optic networks, where many transmitters send signals to many receivers at different bit rates and in different formats. However, unlike a repeater, an optical amplifier amplifies the optical signal directly, without optical-to-electrical and electrical-to-optical conversion. In addition, an ideal optical amplifier could support multi-channel operation over as wide a wavelength band as possible, provide flat gain over a large dynamic gain range, have a high saturated output power, low noise, and effective transient suppression. Optical amplifiers provide several benefits:\n- Support any bit rate and signal format\n- Support the entire region of wavelengths\n- Increase the capacity of fiber-optic links by using WDM\n- Provide the capability of all-optical networks, not just point-to-point links\nOK, after this brief introduction to optical amplifiers, we formally begin today’s main topic. As discussed above, there are three main types of amplifier technology today. Each of them has its own working principle, features and applications. 
We will describe them one by one in the following paragraphs.\nDoped fiber amplifier (The typical representative: EDFA)\nThe erbium-doped fiber amplifier (EDFA) is the most widely used fiber-optic amplifier, mainly made of erbium-doped fiber (EDF), a pump light source, optical couplers, optical isolators, optical filters and other components.", "score": 24.345461243037445, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Most optical amplifiers are used in optical communication. Generally, the Brillouin-type amplifier is not used in optical communication. For a particular use, a decision has to be made about which amplifier to use. The EDFA is used as an in-line amplifier on account of its compatibility. On the other hand, the Raman fiber amplifier (RFA) makes a very good power amplifier because of its high saturation power.\nEDFAs and conventional lasers achieve gain by pumping atoms into a high energy state. This allows the atoms to release their energy when a photon of a suitable wavelength passes nearby. RFAs utilize Stimulated Raman Scattering (SRS) to create optical gain. Because SRS robs energy from shorter wavelengths and feeds it to longer wavelengths, high channel count DWDM systems initially avoided this technique.\nAn RFA consists of little more than a high-power pump laser, usually called a Raman laser, and a WDM or directional coupler. The optical amplification occurs in the transmission fiber itself, distributed along the transmission path. With amplification up to 10 dB, RFAs provide a wide gain bandwidth (up to 100 nm), allowing them to operate using any installed optical fiber (single mode optical fiber, TrueWave, etc.). By boosting the optical signal in transit, RFAs reduce the effective span loss and improve noise performance.\nCombined with EDFAs, RFAs create a wide gain-flattened optical bandwidth. The figure below shows the topology of a typical RFA. 
The pump laser and optical circulator comprise the two key elements of the RFA amplifier. In this case, the pump laser has a wavelength of 1535 nm. The optical circulator provides a convenient means of injecting light backwards into the transmission path with minimal optical loss.\nHere are the figures which show the optical spectrum of a forward-pumped RFA amplifier and the received signal after the same length of fiber used in the SRS example. The signal gets injected by the 1535 nm pump laser at the transmit end rather than the receive end. Generally, the amplitude of the pump laser exceeds that of the data signals.\nWith a significant decrease in the amplitude of the pump laser, the amplitude of the six data signals has increased, giving all six signals roughly equal amplitudes. In this case, the SRS effect robbed a great deal of energy from the 1535 nm pump laser signal and redistributed that energy to the six data signals.", "score": 24.345461243037445, "rank": 47}, {"document_id": "doc-::chunk-14", "d_text": "Fibre optic systems need stations every few kilometres to receive a weak light signal, convert it into electronic signal, amplify it, use it to modulate a laser beam again, and re-send it. This process is exposed to risk of noise and errors creeping into the signal; the system needs to get rid of the noise and re-send a fresh signal. It is like a marathon run, where the organisers place tables with refreshing drinks all along the route so that the tired and dehydrated runners can refresh themselves. This means a certain delay, but the refreshment is absolutely essential.\nSubmarine cables must have as few points as possible where the system can break down because, once the cable is laid several kilometres under the sea, it becomes virtually impossible to physically inspect faults and repair them.\nThe development, in the 1980s, of fibre amplifiers, or fibres that act as amplifiers, has greatly facilitated the laying of submarine optic fibre cables. 
This magic is achieved through an innovation called the erbium-doped fibre amplifier. Sections of fibre carefully doped with the right amount of erbium (a rare earth element) act as laser amplifiers.\nWhile fibre amplifiers reduce the requirement of repeater stations, they cannot eliminate the need for them. That is because repeater stations not only amplify the signal, they also clean up the noise (whereas fibre amplifiers amplify the signal, noise and all). In fact, they add a little bit of their own noise. This is like the popular party game called Chinese whispers. If there is no correction in between, the message gets transmitted across a distance, but in a highly distorted fashion.\nCan we get rid of these repeater stations altogether and send a signal which does not need much amplification or error correction over thousands of kilometres? That’s a dream for every submarine cable company, though perhaps not a very distant one.\nThe phenomenon being used in various laboratories around the world to create such a super-long-distance runner is called a ‘soliton’ or a solitary wave. The Scottish engineer John Scott Russell first observed a solitary wave in 1834 while riding along a canal in Scotland. He found that as boats created waves in canals, some waves were able to travel enormously long distances without dissipating themselves. They were named solitary waves, for obvious reasons. Scientists are now working on creating solitons of light that can travel thousands of kilometres inside optical fibres without getting dissipated.", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-14", "d_text": "The pump coupler concept for SHARC fibers is schematically shown in Fig. 12.
This coupler is spliced between two sections of active fiber (only short sections of which are illustrated) and is designed such that it allows pump radiation to enter the active fiber without imposing any appreciable loss on the signal beam, which also passes through the pump coupler. In order to accomplish this, the pump radiation is configured to enter and propagate at an angle relative to the SHARC fiber axis, thus allowing the pump radiation to efficiently enter the active fiber from the edge of the fiber and propagate along the plane of the semi-guiding core. The semi-guiding core of the pump coupler also efficiently carries the signal from one end to the other. Specifically, since the signal-beam fast-axis dimension may only be ~20 µm, the guiding in that direction is absolutely essential for efficient propagation of the beam from one active fiber through the pump coupler into the next active fiber. However, the slow-axis signal-beam dimension may be approximately ~1 mm or more. Since the pump-coupler length may be only ~5 to 10 mm, very little diffraction occurs in the wide direction, so no guiding is required by the pump coupler in that dimension. Figure 12 shows a pump coupler designed for bi-directional pumping, but this concept can also be adapted to applications in which both pump fibers inject pump power in the same direction.\nIn considering this pumping approach, it is useful to appreciate how large the acceptance etendue is for pumping a SHARC fiber. Consider a 20 μm x 1 mm core contained within a 200 μm x 1.2 mm pump cladding, and an outer cladding providing an NA of 0.45 for the outer boundaries of the pump cladding. This geometry presents a full-angle beam-parameter product of 1120 mm-mrad in the wide dimension, which is roughly equivalent to a linear array of > 10 pump fibers having a 200-μm core diameter and 0.22 NA. 
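The etendue figures quoted above can be checked with simple arithmetic. A hedged sketch, assuming the full-angle beam-parameter product is taken as aperture width times 2·arcsin(NA); the helper name is ours.

```python
import math

def full_angle_bpp_mm_mrad(width_mm: float, na: float) -> float:
    """Full-angle beam-parameter product: aperture width times full divergence angle."""
    return width_mm * 2.0 * math.asin(na) * 1e3  # radians -> mrad

sharc = full_angle_bpp_mm_mrad(1.2, 0.45)       # 1.2 mm pump cladding, NA 0.45
pump_fiber = full_angle_bpp_mm_mrad(0.2, 0.22)  # 200 um core, 0.22 NA pump fiber
print(round(sharc))                  # ≈ 1120 mm·mrad, matching the text
print(round(sharc / pump_fiber, 1))  # ≈ 12.6, i.e. "> 10" pump fibers
```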
A SHARC fiber amplifier therefore easily accommodates efficient launch of a large number of state-of-the-art fiber-coupled pump diode packages.\nA new class of optical fiber, the SHARC fiber, was analyzed in a high-power fiber amplifier geometry using the gain-filtering properties of confined gain dopants.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-5", "d_text": "Compared with fully bulk-optical systems, the use of amplifying fibers has the advantage that sufficient single-pass gain is achievable so that the principle of a regenerative amplifier does not need to be used.\nIn some cases, a multi-stage amplifier, i.e., an amplifier chain, needs to be realized. This allows e.g. ASE suppression with filters or modulators between the stages, an optimized power efficiency and noise figure, and possibly a modular approach which increases the flexibility for further amplifier developments.\nMost fiber amplifiers are not made of polarization-maintaining fibers, so they do not preserve the polarization state of the input. On the other hand, the amplification process itself is normally not polarization-dependent; this is an advantage over semiconductor optical amplifiers for use in telecommunications. In some cases, however, polarization hole burning can cause problems.\nFiber Amplifier Modules\nSome companies offer fiber amplifier modules which can be convenient for OEM system integrators. Input and output are then often attached with the usual fiber connectors. A compact module contains not only the actual fiber amplifier(s), but also the control electronics for the pump diodes, and possibly extras such as an input and/or output power monitor, power stabilization, alarms, gain-flattening filters, etc. 
Such amplifier modules are available based on erbium-doped fibers, ytterbium-doped fibers, and others, and for various power levels.\nRaman Fiber Amplifiers\nRaman amplifiers are based not on a laser amplification process, but on Raman scattering in a fiber. They differ in various respects from rare-earth-doped amplifiers, and are discussed in the article on Raman amplifiers.\n[1] C. J. Koester and E. Snitzer, “Amplification in a fiber laser”, Appl. Opt. 3 (10), 1182 (1964)\n[2] R. J. Mears, L. Reekie, I. M. Jauncey, and D. N. Payne, “Low-noise erbium-doped fibre amplifier operating at 1.54 μm”, Electron. Lett. 23, 1026 (1987)\n[3] E. Desurvire, “Design optimization for efficient erbium-doped fiber amplifiers”, J. Lightwave Technol. LT-8, 1730 (1990)\n[4] M.", "score": 23.030255035772623, "rank": 50}, {"document_id": "doc-::chunk-2", "d_text": "This originates from the short (nanosecond or less) upper state lifetime, so that the gain reacts rapidly to changes of pump or signal power, and the changes of gain also cause phase changes which can distort the signals.\nThe performance of the SOA is still not comparable with the EDFA. The SOA has higher noise, lower gain, moderate polarization dependence and high nonlinearity with fast transient time.\nFiber Raman amplifier (FRA)\nThe Fiber Raman Amplifier (FRA) is also a relatively mature optical amplifier. In an FRA, the optical signal is amplified due to stimulated Raman scattering (SRS). In general, FRAs can be divided into a lumped type (LRA) and a distributed type (DRA). The fiber gain medium of the former is generally within 10 km. In addition, it requires higher pump power, generally a few to a dozen watts, and can produce gains of 40 dB or more. It is mainly used to amplify optical signal bands that the EDFA cannot serve. The fiber gain medium of a DRA is usually longer than that of an LRA, generally dozens of kilometers, while the pump source power is down to hundreds of milliwatts.
It is mainly used in DWDM communication systems, assisting EDFAs to improve system performance, suppressing nonlinear effects, reducing the required launch signal power, improving the signal-to-noise ratio, and providing in-line amplification.\nThe principle of the FRA is based on the Stimulated Raman Scattering (SRS) effect. The gain medium is undoped optical fiber. Power is transferred to the optical signal by a nonlinear optical process known as the Raman effect. An incident photon excites an electron to a virtual state, and stimulated emission occurs when the electron de-excites down to a vibrational state of the glass molecule. The Stokes shift corresponding to the eigen-energy of a phonon is approximately 13.2 THz for all optical fibers.", "score": 23.030255035772623, "rank": 51}, {"document_id": "doc-::chunk-6", "d_text": "In contrast to the case of conventional round fibers where the flat-top profile provides better mode discrimination than Gaussian gain profiles, the Gaussian gain profile provides slightly better performance than flat-top gain profiles in SHARC fibers. At a relative gain width of 0.35, the fundamental mode has 1.6x higher gain than any other mode in the fiber. Keep in mind that these are differential gain values that simplistically lead to exponential gain. As such, the modal discrimination will be much larger than 60%, as will be calculated shortly.\nTo analytically calculate the modal discrimination in a realistic SHARC fiber amplifier using gain filtering, it is necessary to integrate the gain along the fiber length; the saturation condition in the amplifier changes as a function of propagation distance because the signal experiences gain as it propagates.
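The z-dependence noted above (local gain saturating as the signal grows along the fiber) can be illustrated with a toy integration. This is a minimal homogeneous two-level sketch with made-up numbers, not the quasi-three-level model of the paper:

```python
# Toy model: integrate dP/dz = [g0/(1 + P/Psat) - loss] * P along the fiber.
# All parameter values below are illustrative, not taken from the source.
def amplify(p_in_w: float, g0_per_m: float, p_sat_w: float,
            loss_per_m: float, length_m: float, steps: int = 10000) -> float:
    dz = length_m / steps
    p = p_in_w
    for _ in range(steps):
        local_gain = g0_per_m / (1.0 + p / p_sat_w)  # saturated local gain
        p += (local_gain - loss_per_m) * p * dz      # forward-Euler step
    return p

p_out = amplify(p_in_w=0.1, g0_per_m=1.0, p_sat_w=5.0,
                loss_per_m=0.01, length_m=10.0)
print(p_out)  # tens of watts, far below the unsaturated 0.1*exp(9.9) ≈ 2 kW
```

Because the local gain depends on P(z), the net gain of each mode can only be obtained by integrating along the length.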
The analysis therefore needs to proceed in several steps. First, the local nominal gain of the amplifier is calculated as a function of propagation distance into the fiber. From this, the gain experienced by each mode is calculated according to Eq. (1). Finally, the integrated gain of each mode along the length of the whole amplifier is combined with the SHARC fiber’s edge loss to calculate the net modal gain in the fiber amplifier.\nOptical power evolution in conventional laser media proceeds according to Eq. (1), where αk is the modal loss. For rare-earth ionic systems and resonant (in-band) pumping conditions, the laser kinetics behave as a quasi-three-level system. For such a type of system, the saturation intensity for the signal wavelength is affected by saturation of the pump transition, and the effective signal saturation intensity becomes dependent on the pump saturation level. Since the pump intensity Ipump(z) necessarily varies along the fiber due to absorption by the ytterbium ions, the effective signal saturation also becomes z-dependent, taking the following form\nIn the quasi-three-level kinetics model, the small signal gain factor gss depends on the doping density and the signal wavelength λs, but is also nominally dependent on the pump saturation level Ipump/Ipsat.", "score": 23.030255035772623, "rank": 52}, {"document_id": "doc-::chunk-18", "d_text": "H. Xiao, J. Leng, H. Zhang, L. Huang, J. Xu, and P. Zhou, “High-power 1018 nm ytterbium-doped fiber laser and its application in tandem pump,” Appl. Opt. 54(27), 8166–8169 (2015). [CrossRef]\n40. P. Zhou, H. Xiao, J. Leng, J. Xu, Z. Chen, H. Zhang, and Z. Liu, “High-power fiber lasers based on tandem pumping,” J. Opt. Soc. Am. B 34(3), A29–A36 (2017). [CrossRef]\n41. J. M. Fini, “Design of large-mode-area amplifier fibers resistant to bend-induced distortion,” J. Opt. Soc. Am. B 24(8), 1669–1676 (2007).
[CrossRef]", "score": 23.030255035772623, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "A low-noise fiber frequency comb is demonstrated to improve the frequency accuracy and linewidth by suppressing the phase noise caused by the nonlinear self-phase modulation as well as the amplified spontaneous emission within the Er-doped fiber amplifier. The linewidth of the carrier-envelop-offset signal measures less than 1.9 mHz and the frequency stability well follows the reference Rb clock. This achievement will facilitate the use of the fiber frequency comb for industrial applications to precision near-infrared spectroscopy, frequency calibration, optical clocks and length metrology.\n©2009 Optical Society of America\nThe frequency comb of femtosecond pulse lasers permits optical frequency measurement to be made traceable to the microwave frequency standard [1,2]. Ti:Sapphire femtosecond lasers provide a frequency comb being excellent in both the frequency stability and linewidth, but their practical use is limited particularly outside the laboratory environment by several reasons such as bulkiness, alignment complexity and external optical pumping. Fiber femtosecond lasers are now available, making rapid progress to compete with Ti:Sapphire counterparts by making the most of their intrinsic advantages of size, robustness and reliability . Stabilizing the frequency comb with reference to a radiofrequency time standard requires detecting two collective parameters; the pulse repetition rate (frep) and the carrier-envelope-offset frequency (fceo). Unlike frep that can be measured simply using a photodetector, fceo has to be extracted through a sequence of elaborate procedures; pulse power amplification, supercontinuum generation, frequency doubling and f-2f interference.\nFor fiber lasers, Nicholson et al. 
first demonstrated an all-fiber supercontinuum in which a negative-dispersion fiber was used to increase the amplification gain for subsequent spectral broadening with a hybrid highly nonlinear fiber . Positive pre-chirping was also tried using a single-mode fiber for the pulse power amplification with an Er-doped fiber to detect fceo with an S/N ratio of 30 dB . Special nonlinear-broadening fibers such as photonic crystal fibers and UV-exposed fibers were tested to obtain broader spectra [6,7]. Washburn et al. extracted fceo using a minimal length of nonlinear fiber to suppress phase noise with a standard deviation of 57 mHz at a gate time of 1 s . The common-path f-2f interferometer was proposed to detect fceo with an S/N ratio higher than 40 dB [9,10].", "score": 22.95153857795113, "rank": 54}, {"document_id": "doc-::chunk-3", "d_text": "The SHARC fiber architecture also scales output power at a constant pump-etendue per output watt, thereby ensuring the possibility of generating higher output power levels without having to invent new pump-diode packages with increasingly higher brightness. As a quantitative example, carrying 3-kW of single-frequency optical power will require core dimensions of 20 μm × 1.5 mm, for a total core area of 30,000 μm2, which is equivalent to a circular core having a diameter of ~200 μm. In this example, stimulated Brillouin scattering (SBS) suppression occurs by virtue of the large core area and low intensity, which lead to an SBS threshold power in excess of 3 kW even for a kHz-range laser bandwidth. Hence, in order to deliver multi-kW-level optical powers, SHARC fibers do not require additional SBS suppression techniques such as multi-GHz signal modulation [11, 12], with its associated system complexity, or acoustic waveguide management [13–15].\nThe inclusion of gain into the SHARC fiber provides another opportunity for mode control, as depicted in Fig. 1. 
Gain filtering, a process that provides modal discrimination via gain instead of loss, has recently been investigated as a mode-control method in high-power fiber amplifiers [7, 16, 17]. Nominally, this is achieved by spatially tailoring the gain dopant profile, such as the step profile depicted in Fig. 1.\nThe top row of Fig. 2 depicts the impact of spatial gain saturation in conventional optical fiber amplifiers. Nominally seeded by the fundamental mode (left), the amplifier gain (red) is saturated as the optical power grows, leaving a spatial hole “burnt” into the gain profile at the center of the fiber where the fundamental mode intensity is the highest (center image). Higher order modes (HOMs), most of which have intensity nulls as the center of the fiber, can extract the gain at the edges of the fiber, resulting in higher net gain for the HOMs and degraded beam quality at the output of the fiber amplifier. By confining the gain to the central portion of the waveguide while maintaining the same refractive index profile (Fig. 2., bottom row), the gain extraction by the fundamental mode saturates nearly all of the gain, leaving no gain at the waveguide edges for the HOMs to exploit.", "score": 22.75296603738707, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "A number of the fiber fabrication methods have evolved in response to the interest in active fiber devices based on rare earth doped core silicate fiber. Descriptions of fabrication methods along with associated host composition and waveguide design issues will be presented.\nWe review theoretical models of fiber amplifiers, resonant fiber lasers and superfluorescent fiber lasers made of a single mode laser fiber, for example a silica fiber doped with a rare earth laser ion. The quantities which are investigated are the optical gain, the threshold and conversion efficiency in fiber sources, and the spectral narrowing in superfluorescent sources. 
These models take into account the nature of the laser transition (i.e. 3- or 4-level), the potential presence of pump ESA, and the guided nature of the pump and signal waves interacting along the fiber. The emphasis is placed on the development of simple, closed-form expressions for these quantities.\nThis tutorial presents a discussion of the device requirements which must be satisfied to obtain optimal performance of fibre lasers and amplifiers. The constructional details of fibre lasers and amplifiers and the range of resonator types that have been explored for fibre lasers are addressed.\nWe present a review of the main non-linear effects for light generation and amplification in optical fibers. Stimulated Raman Scattering and its applications to optical amplification, Four-Photon Mixing, Brillouin Scattering and Harmonic Generation are described. We emphasize the advantages and drawbacks of using optical fibers for the development of non-linear effects.\nIn this paper we describe work aimed at identifying and measuring properties of rare earth doped fibres which may affect their use and application in oscillators and amplifiers. The main dopants of interest are erbium (1.55 μm) and neodymium (0.9, 1.06, 1.3 μm). We shall discuss the measurement, and effects, of parasitic absorption from the upper laser level (known as excited state absorption, or ESA), at both pump and signal wavelengths.
We shall also consider how detailed measurements of fluorescence decay from the excited level can yield information about quenching, upconversion, and energy transfer processes.\nThe initial discovery of fluorozirconate glasses at the University of Rennes, France in 1974 followed an attempt to fabricate large pieces of ZrF4 crystals doped with Nd (NdZrF7) for laser applications.", "score": 21.695954918930884, "rank": 56}, {"document_id": "doc-::chunk-2", "d_text": "This broadening is both homogeneous (all ions exhibit the same broadened spectrum) and inhomogeneous (ions in different glass locations exhibit different spectra).\nSuch reflections disrupt amplifier operation and in the extreme case can cause the amplifier to become a laser.\nSolid-state amplifiers use doped solid-state gain media and different geometries (disk, slab, rod) to amplify optical signals. In semiconductor optical amplifiers (SOAs), electron-hole recombination occurs. These devices are similar in structure to, and share many features with, vertical-cavity surface-emitting lasers (VCSELs). Finally, there are concerns of nonlinear penalty in the amplifier for the WDM signal channels. They are related to fiber lasers.
The amplification bandwidth of Raman amplifiers is defined by the pump wavelengths utilised and so amplification can be provided over wider, and different, regions than may be possible with other amplifier types which rely on dopants and device design to define the amplification ‘window’.\nThe broad gain-bandwidth of fiber amplifiers make them particularly useful in wavelength-division multiplexed communications systems as a single amplifier can be utilized to amplify all signals being carried on a fiber and whose wavelengths fall within the gain window.", "score": 21.695954918930884, "rank": 57}, {"document_id": "doc-::chunk-2", "d_text": "In recent years, high-power fiber lasers/amplifiers employing the confined-doped LMA ytterbium-doped fiber (YDF) have also been reported. In 2016, Mashiko et al. demonstrated a 2-kW fiber oscillator operating at 1080 nm with the beam quality factor M2=1.2 based on the confined-doped fiber , and the output power was further scaled to 3 kW with M2=1.3 in the following year by increasing the pump power . The effective mode area of this confined-doped fiber is ∼400 μm2, however, one of the key parameters, i.e., the relative doping ratio of the core, was not mentioned. In 2018, Liao et al. fabricated a confined-doped YDF using the modified chemical vapor deposition (MCVD), the core and inner cladding diameters of which are 35 and 400 μm, respectively . Around 51% of the central core area was doped with Yb-ions. Finally, the beam quality factor was improved from 2.8 in a fully-doped fiber with a similar core/inner-cladding diameter to 1.5 by using this confined-doped fiber in a fiber oscillator. And a similar beam quality factor was obtained in a ∼450-W fiber amplifier. In the same year, Seah et al. reported a 4.1 kW tandem-pumped 1060 nm fiber amplifier by using the confined-doped fiber . The core/inner-cladding diameter of this confined-doped fiber is 42/250 μm and around ∼75% of its core diameter is doped with Yb-ions. 
The beam quality factor M2 was 1.59 under the maximum output power, which was significantly improved compared with the beam quality factor of M2=2.77 from the fully-doped fiber amplifier. In 2020, Zhang et al. fabricated a confined-doped Yb/Ce co-doped aluminosilicate fiber with 33/400 μm core/inner-cladding diameter, where ∼70% across the core diameter was doped with Yb/Ce-ions .", "score": 21.695954918930884, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "Optical EngineeringDesign and performance analysis of a tunable and self-pulsation diode pumped double-clad D-shaped Yb3+-doped silica fiber laser\n|Format||Member Price||Non-Member Price|\n|GOOD NEWS! Your organization subscribes to the SPIE Digital Library. You may be able to download this paper for free.||Check Access|\nA wide range of applications have emerged for tunable ytterbium fiber lasers as single-frequency sources for spectroscopic applications, pumping source of Pr:ZBLAN amplifier, and Tm:ZBLAN upconversion laser, material processing, and military applications. In this paper, a 975-nm high power fiber coupled high power diode laser module of up to 5 W end pumping a ytterbium-doped multimode D-shaped fiber laser has been investigated. This used a Fabry-Pérot cavity with output coupler reflectivities of 80%, 60%, and Fresnel reflection of 4%. The output laser wavelength was tuned over a wide range of more than 50 nm, from 1041 to 1094 nm for cavity lengths from 1 to 10 m, respectively. An optical-to-optical slope efficiency of 45% was found for a 1-m cavity length, which increased to 60% for a 4-m cavity length. The maximum slope efficiency of 82.1% for a cavity length of 2 m was measured with the Fresnel reflection output coupler, and the measured lowest threshold pump power for this high gain cavity configuration was 130 mW. The threshold lasing pumping powers of 4.3, 4.5, and 4.7 W were dependent on the output coupler reflectivities of 80%, 60%, and Fresnel reflection of 4%, respectively. 
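Above threshold, the abstract's slope-efficiency and threshold figures combine in the usual linear model P_out ≈ η(P_pump − P_th). A hedged sketch; the pairing of the 45% slope with the 130 mW threshold is illustrative only, since the abstract quotes several cavity configurations:

```python
# Linear above-threshold model for laser output power.
# Parameter values are illustrative, loosely taken from the abstract above.
def laser_output_w(p_pump_w: float, slope: float, p_th_w: float) -> float:
    """Output power given pump power, slope efficiency, and threshold pump power."""
    return max(0.0, slope * (p_pump_w - p_th_w))

print(round(laser_output_w(5.0, 0.45, 0.13), 2))  # ≈ 2.19 W at the full 5 W pump
print(laser_output_w(0.1, 0.45, 0.13))            # 0.0 below threshold
```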
Also, self-pulsation phenomena were observed only at higher pump powers of more than 4 W and at longer cavity lengths.", "score": 21.695954918930884, "rank": 59}, {"document_id": "doc-::chunk-7", "d_text": "As expected, the ER increases with the total pump power due to higher CE, until it reaches a maximum value of more than 20 dB for a total pump power of about 28.5 dBm. At such a pump power value, a CE of −7 dB is obtained, which includes the total insertion losses of the PPLN, of about 3.25 dB.\nFor higher pump power values, the power of the converted signal starts to flow back to the original wave and the ER starts to decrease. Contrary to what was expected, the maximum ER of the two OWC processes is not achieved at the same pump power value, but at 28.3 dBm when the input signal is at 1552.48 nm and at 29 dBm for the other case. These results can be explained by a slightly asymmetrical disposition of the frequencies of the interacting waves (ωs2 + ωP2 ≠ ωs1 + ωP1), so that one of the OWCs is not perfectly quasi-phase-matched, requiring higher pump power values to maximize the ER. According to our numerical simulations, even a small wavelength detuning from the ideal conditions of 0.005 nm for each signal, which was the resolution of the available ECLs, is sufficient to change the optimal pump power by about 1.5 dB. Hence, the optimum operation conditions must be set as a trade-off between the ER of each OWC process, as it will also be shown in the following subsection.\nThe optimization of the pump power for WDE is critical since even a small deviation of less than 0.5 dB is enough to deteriorate the ER by more than 10 dB.
Therefore, stable and precise tuning of the pump power and of the operation temperature of the PPLN waveguide are crucial for WDE.\n4.2 Pump power optimization by measuring BER of swapped signals\nAs mentioned above, due to finite ER, any un-depleted power at input wavelengths becomes deleterious crosstalk impairing the swapped signals, especially for the in-band-crosstalk-sensitive high-order QAM signals. For a given PPLN, ER is mainly dependent on the pump power.", "score": 20.819440419418388, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "Active neodymium and erbium doped fibre devices\nIn this thesis a number of rare-earth-doped fibre devices are described including fluorescent and superfluorescent sources as well as several laser configurations. The laser configurations are all-fibre and include a neodymium-doped ring laser and recirculating delay line, a novel tunable neodymium-doped fibre laser and a single-frequency travelling-wave erbium-doped ring laser. The latter device has been the first description of a travelling-wave fibre laser device. Theory describing general fibre amplifier and laser devices is incorporated. A novel lumped element approach to fibre laser theory has been given applicable to 3 and 4-level laser devices which, under certain conditions, allows single pass gain of a fibre device to be described simply by the absorbed pump power. Numerical modelling of the erbium-doped fibre amplifier has been described which allows for analysis of a general device showing pump excited-state absorption. Results from the analysis have shown a difference in gain characteristics between co-propagating and counter-propagating signal/pump schemes when subject to pump excited-state absorption. In addition, the effect of pump direction on the noise figure is characterised in both small and large signal operating regimes. 
Characterisation of neodymium-doped fibres has shown a number of effects which will affect their use in amplifier and oscillator configurations. These include observation of sensitivity of the fluorescence characteristics to pump wavelength, observation of excited state absorption and polarisation of fluorescence. Additionally, the spectral gain-saturation characteristics have been investigated.", "score": 20.327251046010716, "rank": 61}, {"document_id": "doc-::chunk-2", "d_text": "A superluminescent source has to contain little more than a high-gain fiber amplifier.\nIt is possible to model (→ laser modeling) the essential performance aspects of fiber amplifiers in various ways, normally using suitable fiber simulation software. Part of such a model is typically a set of rate equations, with which the population densities for given signal and pump intensities can be calculated. Such a rate equation model may be incorporated in a more comprehensive model which then calculates the optical powers along the fiber.\nApplications of amplifier models are manifold. For example, it is possible to quantify various detrimental effects on the amplifier performance, and use such results for optimizing the fiber parameters or other aspects of the amplifier design.\nAlthough basic properties of a fiber amplifier can be calculated analytically, a full quantitative understanding is normally possible only with numerical simulations. These can take into account various details such as the quasi-three-level behavior, strong gain saturation (with optical powers often being far higher than the saturation powers), amplified spontaneous emission (ASE) due to the high optical gain, and possibly energy transfer process for sophisticated situations e.g. with erbium-ytterbium-doped fibers.\nEven in simple cases, relatively complex behavior can result. Figure 2 shows an example, where a simple ytterbium-doped fiber amplifier is pumped at 940 nm. 
The decay of pump power in the device is first quite fast, then slower, and finally faster again. This results from gain saturation by ASE. Forward ASE is partially reabsorbed before it reaches the right end.\nFigure 3 shows the ASE spectra for the forward and backward direction. One recognizes that the ASE in the 1030-nm region is quite similar in both directions, whereas strong 975-nm ASE occurs only in backward direction. That pronounced asymmetry is related to the fact that the right end of the fiber, being only weakly pumped, provides a seed for backward ASE by spontaneous emission, even though the gain at 975 nm is substantially negative there.\nIf we now also inject a 1-mW input signal at 1030 nm, gain saturation keeps the ASE at a lower level, and most of the power can be extracted with the signal (see Figure 4).\nEven this simple example exhibits various sophisticated details, which can hardly be understood without numerical simulations.", "score": 20.327251046010716, "rank": 62}, {"document_id": "doc-::chunk-10", "d_text": "Besides, in order to further improve the beam quality, one can either flatten the refractive index of the core by compensating the refractive index of the undoped region, thus enabling a more spread mode distribution, and at the same time adopt a large coiling radius to alleviate bend distortion, or incorporate the graded-index fiber design to make the fiber resistant to bend distortion .\nFurthermore, a piece of fully-doped YDF with core/cladding diameter of 40/250 μm is adopted to study the beam quality evolution and the TMI threshold. This fully-doped YDF is coiled to an equivalent bending radius to that of the confined-doped fiber. As the output power increases, the beam quality factor of the proposed fiber amplifier gradually degrades from 1.52 of the seed laser to 2.28 at 1.76 kW, and then to 2.56 at the output power of 2.45 kW, as indicated in Fig. 7. 
The TMI threshold is around 1.76 kW, which is nearly 3 kW lower than our confined-doped fiber amplifier. The beam quality factor of the seed (M2=1.52) after passing through the fully-doped fiber amplifier is slightly better than that of passing through the confined-doped fiber amplifier (M2=1.7), which could result from a better mode matching between the output signal port of PSC and the fully-doped YDF. However, the beam quality factor of this fully-doped fiber amplifier constantly degrades even before the TMI appears. These results further prove the advantages of the confined-doped fiber for high-order mode suppression and TMI mitigation.\nIn summary, the gain property of the fundamental and high-order modes in the confined-doped fiber with 40/250 μm core/inner-cladding diameter was theoretically analyzed by simultaneously considering the transverse mode distribution and transverse hole burning effect. Subsequently, the confined-doped fiber with a 0.75 doping ratio was designed and fabricated after a comprehensive evaluation of the gain filtering effect and the fiber length. Based on the fabricated confined-doped fiber, 6.2 kW output power at 1080 nm was realized in a fiber amplifier through the forward tandem pumping scheme.", "score": 20.327251046010716, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "SAN DIEGO, March 9, 2020 /PRNewswire/ -- OFC 2020 -- Furukawa Electric Co., Ltd. (FEC) announces it has developed FRSi4XX Series pump sources for forward Raman amplifiers that extend transmission distances in ultrahigh-speed optical fiber communications farther than conventional systems.\nProliferation of smartphones has led to a dramatic increase in communication traffic, including the expansion of wireless backbones, cloud computing, video streaming, and the penetration of social networks. 
To deal with this traffic explosion, improvement in optical signal-to-noise ratio (OSNR) is becoming an important factor in soon-to-be-deployed ultrahigh-speed optical fiber communications such as 400 Gbps and beyond. Existing erbium-doped fiber amplifiers (EDFA), which are widely used in current systems, do not have sufficient OSNR performance. Demand is increasing for Raman amplifiers due to their excellent noise characteristics. Forward Raman amplifiers, which make the most of the advantages of Raman amplification, are expected to be a technology necessary for increasing transmission distances.\nIn the past, only the backward Raman amplifier was used due to limitations of the noise characteristics of the pump source. Furukawa Electric's new FRSi4XX Series novel pump sources make it possible to realize forward Raman amplifiers and feature high output as well as excellent low-noise characteristics.\nThe important characteristics of these products are high power output and low noise. Furukawa Electric achieved these characteristics by leveraging the design, manufacturing technology and high-precision packaging of its InP (Indium Phosphide) optical semiconductor chip. The result is a pump source with a high-output chip structure and high-efficiency coupling technology. 
The optical output of 100 mW or more was achieved through an optimized heat dissipation design.\nThe FRSi4XX pump series reduces noise by about 20 dB/Hz compared with conventional pump sources for Raman amplifiers.\nCombining the FRSi4XX Series with the existing FOL1439 Series yields pump sources especially well-suited to forward-pumping Raman amplifiers.\nAs demand for ultrahigh-speed optical fiber communications continues to grow, FEC will further enhance the technology of this series and contribute to the construction of information and communications infrastructure in anticipation of the advancement of 5G.", "score": 20.327251046010716, "rank": 64}, {"document_id": "doc-::chunk-4", "d_text": "The performance of such fibres as Raman fibre amplifiers is discussed with particular reference to their bandwidth for applications in 1.5 - 1.6 μm telecommunications systems.\nOptical absorption strengths and spontaneous transition rates are presented for Er3+ ions in Ge-doped silica glass at concentrations from 0.5x10^19 cm-3 to 4.0x10^19 cm-3. Results for transitions in the 480 - 1700-nm range are extracted from measurements on Er-doped preform cores prepared by the solution-doping technique. A comparison of the spontaneous lifetime of the 4I13/2 level with the strength of the corresponding ground-state absorption reveals a conflict with the Einstein A-B relation. Measurements made on fibers exhibit a significant change in linewidth and fluorescence decay time as compared to the corresponding preforms. From these results the magnitude and spectral dependence of the cross sections for absorption as well as stimulated emission were determined for single-mode fibers.\nThe Stark levels of the 4I15/2 ground state manifold have been determined for Er3+-doped fluoride, fluorophosphate, and silicate bulk glasses from fluorescence-line-narrowing measurements at 4.2 K. Splittings between adjacent Stark levels were observed to be 20-80 cm-1.
The total energy spread of the manifold ranged from 335 to 400 cm-1. The energy of a given Stark level varied up to 60 cm-1 depending on the particular Er3+ sites excited. Using the 4.2-K results, homogeneous broadening is found to be a reasonable approximation for the 300-K luminescence band.\nNeodymium- and erbium-doped silica-based optical fibres are promising materials for fibre lasers and amplifiers. The match-type single-mode fibres with SiO2 cladding and GeO2-SiO2 core were fabricated by the NCVD soot method. Rare-earth ions were doped by a soot impregnation technique. The fibres with various amounts of Nd3+ and Er3+ codoping with Yb3+ were also obtained. The quantity of dopants can be controlled by deposition temperature and doping concentrations in solution. The absorption spectrum and fluorescence characteristics were measured. The concentration of rare-earth ion in fibre can be in excess of 1 wt%. The minimum attenuation can be as low as 5-10 dB/km at the lasing wavelength.", "score": 18.90404751587654, "rank": 65}, {"document_id": "doc-::chunk-8", "d_text": "Paschotta, case study on an erbium-doped fiber amplifier|\n|||R. Paschotta, case study on a pulsed ytterbium-doped fiber amplifier|\n|||R. Paschotta, tutorial on \"Fiber Amplifiers\"|\n|||R. Paschotta, tutorial on \"Modeling of Fiber Amplifiers and Lasers\"|\nSee also: erbium-doped fiber amplifiers, high-power fiber lasers and amplifiers, rare-earth-doped fibers, fibers, double-clad fibers, gain equalization, fiber lasers, amplifiers, ultrafast amplifiers, Raman amplifiers, distributed amplifiers, superluminescent sources, fiber simulation software\nSee also the article on “Fiber-based high-power laser systems”, contributed by external authors.\nIf you like this article, share it with your friends and colleagues, e.g.
via social media:", "score": 18.90404751587654, "rank": 66}, {"document_id": "doc-::chunk-6", "d_text": "The inner cladding is processed into an octagonal shape with a diameter (flat-to-flat) of 250 μm and the core diameter is measured to be ∼40 μm, as indicated in Fig. 2(a). As shown in Fig. 2(b), the refractive index of the doped region is slightly higher than the undoped region, which occupies around 75% of the core, indicating good adherence to our design parameters. The absorption coefficient of this fiber is measured to be ∼0.8 dB/m at 1018 nm in the low power regime.\n3. Experimental setup\nThe fabricated confined-doped fiber is applied in a master oscillator fiber amplifier (MOPA) platform similar to our previous experiments [39,40], which consists of a seed laser and a one-stage fiber amplifier as depicted in Fig. 3. The seed laser is a 100 W-level single-mode fiber laser operating at 1080 nm, which is injected into the fiber amplifier through a (6 + 1)×1 pump and signal combiner (PSC). The core/inner-cladding diameters of the PSC’s input and output signal port are 10/125 μm and 40/250 μm, respectively. A piece of 25-meter-long the as-fabricated confined-doped YDF is used to provide the active gain. The pump sources are three 3 kW level 1018 nm fiber laser modules, which are fusion spliced to the pump ports of the PSC. The redundant pump ports of the PSC are angle cleaved to 8°. A cladding mode striper (CMS) is spliced after the confined-doped YDF to remove the residual cladding power. Finally, the amplified laser is output through a quartz block holder (QBH). The core/inner-cladding diameter of the CMS as well as the QBH’s pigtail fiber is 40/250 μm. The confined-doped fiber is water-cooled on a heat sink.\nThe output laser from the QBH is first collimated by a collimator and then split into two beams by a highly reflective mirror (HRM). The reflected beam is measured by a power meter. 
Since the HRM could only reflect majority of the 1080 nm laser, the 1018 nm pump laser can pass without significant loss.", "score": 18.90404751587654, "rank": 67}, {"document_id": "doc-::chunk-10", "d_text": "2-5 are \"eye\" diagrams which, as plotted on coordinates of power and time, show the contrast between ones and zeros in the bit stream as due to the various forms of dispersion including linear dispersion and 4PM for a four-channel system. The basic operating system characteristics for all of these figures are the same. They differ in the characteristics of the fiber.\n1. Technical Field\nThe field addressed concerns high capacity optical fiber networks operative with wavelength division multiplexing. Systems contemplated: are based on span distances which exceed 100 kilometers; depend upon signal amplification rather than repeaters within spans, and use three or more multiplexed channels each operative at a minimum of 5.0 gbits per second.\n2. Description of the Prior Art\nThe state of the art against which the present invention is considered is summarized in the excellent article, \"Dispersion-shifted Fiber\", Lightwave, pp. 25-29, Nov. 1992. As noted in that article, most advanced optical fiber systems now being installed and in the planning stages depend upon dispersion-shifted fiber (DS fiber). A number of developments have led to a preference for a carrier wavelength at 1.55 μm. The loss minimum for the prevalent single-mode silica-based fiber is at this wavelength and the only practical fiber amplifier at this time--the erbium amplifier operates best at this wavelength. It has been known for some time that the linear dispersion null point--the radiation wavelength at which the chromatic dispersion changes sign and passes through zero--naturally falls at about 1.31 μm for silica-based fiber. 
DS fiber--fiber in which the dispersion null point is shifted to 1.55 μm--depends upon balancing the two major components of chromatic dispersion; material dispersion and waveguide dispersion. Waveguide dispersion is adjusted by tailoring the fiber's index-of-refraction profile.\nUse of DS fiber is expected to contribute to multi-channel operation--to wavelength division multiplex (WDM). Here, multiple closely spaced carrier wavelengths define individual channels, each operating at high capacity--at 5.0 gbit/sec or higher. Installation intended for WDM either initially or for contemplated upgrading uses three or more channel operation, each operating sufficiently close to the zero dispersion point and each at the same capacity. Contemplated systems are generally based on four or eight WDM channels each operating at or upgradable to that capacity.", "score": 18.90404751587654, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "29 pages matching concentration in this book\nResults 1-3 of 29\nWhat people are saying - Write a review\nWe haven't found any reviews in the usual places.\nPerspective and overview\nGlass structure and fabrication techniques\n8 other sections not shown\nachieved applications B.J. Ainslie bandwidth beam cavity cladding co-doped concentration configuration coupler D.C. Hanna D.N. 
Payne decay rate device diode dispersion dopant Electron emission cross-sections energy levels equations Er3+ erbium erbium-doped fibre erbium-doped fibre amplifiers excited-state absorption fibre core fibre lasers fibre length filter fluorescence fluoride fluoride fibre fluoride glasses frequency gain medium glass hosts host glass input inversion laser transition lasing Lett loss manifold mode mode-locking modulation Nd3+ neodymium noise non-linear non-radiative decay operation optical amplifiers optical fibre oscillator output power peak photons polarisation population population inversion power amplifier preform Proc propagation pulse pump laser pump power pump wavelength Q-switched quantum efficiency radiative Raman rare-earth ion-doped rare-earth ions refractive index region result saturation semiconductor laser signal wavelength silica fibre slope efficiency soliton spectra spontaneous emission Stark levels Telecom temperature three-level threshold transmission up-conversion upper laser level ZBLAN", "score": 18.48263070585088, "rank": 69}, {"document_id": "doc-::chunk-7", "d_text": "Top. Quantum Electron. 12 (2), 233 (2006)|\n|||J. Limpert et al., “High repetition rate gigawatt peak power fiber laser systems: challenges, design, and experiment”, IEEE J. Sel. Top. Quantum Electron. 15 (1), 159 (2009)|\n|||G. D. Goodno et al., “Low-phase-noise, single-frequency, single-mode 608 W thulium fiber amplifier”, Opt. Lett. 34 (8), 1204 (2009)|\n|||E. Desurvire, Erbium-Doped Fiber Amplifiers: Principles and Applications, John Wiley & Sons, New York (1994)|\n|||M. J. F. Digonnet, Rare-Earth-Doped Fiber Lasers and Amplifiers, 2nd edn., CRC Press, Boca Raton, FL (2001)|\n|||A. Galvanauskas, “Ultrashort-pulse fiber amplifiers”, in Ultrafast Lasers: Technology and Applications (eds. M. Fermann, A. Galvanauskas, G. Sucha), Marcel Dekker, New York (2002), Chapter 4, pp. 155-218|\n|||R. 
Paschotta, Field Guide to Optical Fiber Technology, SPIE Press, Bellingham, WA (2010)|\n|||P. Urquhart (ed.), Advances in Optical Amplifiers (open-access online edition available), InTech, Rijeka, Croatia (2011)|\n|||ITU Standard G.661 (07/07), “Definitions and test methods for the relevant generic parameters of optical amplifier devices and subsystems” (pre-published), International Telecommunication Union (2007)|\n|||ITU Standard G.662 (07/05), “Generic characteristics of optical amplifier devices and subsystems”, International Telecommunication Union (2005)|\n|||ITU Standard G.663 (04/00), “Application related aspects of optical amplifier devices and subsystems”, International Telecommunication Union (2000)|\n|||R. Paschotta, “Fiber amplifiers – a technology for many applications”. Part 1: introduction, Part 2: various technical issues, Part 3: examples for fiber amplifier designs|\n|||R.", "score": 17.397046218763844, "rank": 70}, {"document_id": "doc-::chunk-1", "d_text": "The invention relates to optical waveguides, such as, for example, optical fibers, and to amplifiers and lasers that include optical waveguides, such as for example, fiber lasers and fiber amplifiers, and to systems including such amplifiers and lasers.\nFibers, such as fiber lasers and fiber amplifiers, can be used to enhance absorption of pump energy. One type of fiber, commonly referred to as a double clad fiber, includes a core, a first cladding around the core and a second cladding around the first cladding. The core can comprise a rare earth material. The first cladding can be capable of receiving pump energy for absorption by the rare earth material. 
The second cladding can tend to prevent the pump energy from escaping the first cladding.\nThe invention typically relates to optical fibers, fiber lasers and fiber amplifiers, and to systems including such fibers and fiber devices.\nIn one aspect, the invention features a fiber (e.g., a multimode fiber) that includes a first region, a core and a cladding. The core surrounds the first region, and the cladding surrounds the core. Typically, the core includes an active material, such as, for example, a selected rare earth material.\nIn a further aspect, the invention features a system that includes two fibers. One of the fibers has a first region, a first core (e.g., a multimode core) surrounding the first region, and a cladding surrounding the core. The other fiber has a core (e.g., a single mode core). The fibers are in optical communication (connected) such that energy can propagate from the core of one fiber to the core of the other fiber. Typically, at least one of the cores includes an active material.\nEmbodiments of the invention can include one or more of the following features.\nThe core can be ring-shaped.\nThe core can be a multimode core.\nThe core can include a rare earth-doped material.\nThe core can include a silica material and ions of a rare earth metal.\nThe first region can include a silica material.\nThe first region can have a lower index of refraction than the core.\nThe first cladding can include a silica material.\nThe first cladding can have a lower index of refraction than the core.\nThe fiber can further include a second cladding surrounding the first cladding.\nThe second cladding can be formed of a polymer material.", "score": 17.397046218763844, "rank": 71}, {"document_id": "doc-::chunk-17", "d_text": "SPIE 10083, 100830Y (2017). [CrossRef]\n33. L. Liao, F. Zhang, X. He, Y. Chen, Y. Wang, H. Li, J. Peng, L. Yang, N. Dai, and J. Li, “Confined-doped fiber for effective mode control fabricated by MCVD process,” Appl. Opt. 
57(12), 3244–3249 (2018). [CrossRef]\n34. C. P. Seah, W. Y. W. Lim, and S. L. Chua, “A 4 kW fiber amplifier with good beam quality employing confined-doped gain fiber,” in Laser Congress 2018 (ASSL), OSA Technical Digest (Optical Society of America, 2018), paper AM2A.2.\n35. F. Zhang, Y. Wang, X. Lin, Y. Cheng, Z. Zhang, Y. Liu, L. Liao, Y. Xing, L. Yang, N. Dai, H. Li, and J. Li, “Gain-tailored Yb/Ce codoped aluminosilicate fiber for laser stability improvement at high output power,” Opt. Express 27(15), 20824–20836 (2019). [CrossRef]\n36. Z. Zhang, F. Zhang, X. Lin, S. Wang, C. Cao, Y. Xing, L. Liao, and J. Li, “Home-made confined-doped fiber with 3-kW all-fiber laser oscillating output,” Acta Phys. Sin. 69(23), 234205 (2020). [CrossRef]\n37. B. Wang, L. Pang, and J. Liu, “Single mode 2.4 kW part-doped ytterbium fiber fabricated by modified chemical vapor deposition technique,” Proc. SPIE 11427, 114271X (2020). [CrossRef]\n38. Z. Jiang and J. R. Marciante, “Impact of transverse spatial-hole burning on beam quality in large-mode-area Yb-doped fibers,” J. Opt. Soc. Am. B 25(2), 247–254 (2008). [CrossRef]\n39.", "score": 17.397046218763844, "rank": 72}, {"document_id": "doc-::chunk-9", "d_text": "The Transmitter\nThis element as well as the receiver and optical amplifier are described in detail in \"Fiber Laser Sources and Amplifiers IV\", SPIE, vol. 1789, pp. 260-266 (1992). The transmitter consists of a laser for each channel. Laser outputs are separately modulated and modulated signals are multiplexed to be fed into the transmission line.\nIII. The Receiver\nThis element, at the end of a span length, may be at the system terminus or may be part of a signal regenerator. It includes a means for demultiplexing the channels. This requires a device which passes the channel wavelength of interest while blocking the others. This may be a simple splitter combined with optical fibers at the output ports tuned to each channel SPIE ref. 
cited in preceding paragraph or may be a device which combines the functions of splitting and filtering in a single unit.\nIV. Optical Amplifier\nThis element, today, is an erbium amplifier. The useful gain region of a single erbium amplifier is Δλ=40-50 nm. When amplifiers are connected in a series, the net gain narrows (since the amplitude within the \"gain region\" is reduced on either side of the peak). The 10-20 nm bandwidth referred to is a reasonable value for a three-amplifier span.\nV. Other Considerations\nFor the most part, other considerations are standard. With few exceptions, WDM systems designed for use with DS Fiber may be directly used for the invention. System design is in accordance with considerations common to the prior art and the invention. Channel spacing is necessarily such as to fit the channels within the peak of the optical amplifier. Span length maxima are set by insertion loss, launch power, and tolerable pulse spreading. Considerations may be tailored naturally in accordance with constraints imposed. For example, use of WDM fiber without compensation sets a limit on the product of bit rate and span length. Span length may be set by convenience, e.g. where compensation is to be provided, or where a concatenated fiber length is to begin.\nPlanned WDM systems use external modulation to lessen dispersion penalty and to improve the spectral stability of the channels.\nFIG. 1 is a schematic diagram of a WDM system which serves for discussion of the various inventive species.\nFIGS.", "score": 17.397046218763844, "rank": 73}, {"document_id": "doc-::chunk-6", "d_text": "L. Dakss and W. J. Miniscalco, “Fundamental limits on Nd3+-doped fiber amplifier performance at 1.3 μm”, IEEE Photon. Technol. Lett. 2, 650 (1990)|\n|||C. R. Giles and E. Desurvire, “Modeling erbium-doped fiber amplifiers”, J. Lightwave Technol. 9 (2), 271 (1991)|\n|||M. Øbro et al., “Highly improved fibre amplifier for operation around 1300 nm”, Electron. Lett.
27, 470 (1991)|\n|||Y. Ohishi et al., “A high gain, high output saturation power Pr3+-doped fluoride fiber amplifier operating at 1.3 μm”, IEEE Photon. Technol. Lett. 3, 715 (1991)|\n|||T. Rasmussen et al., “Optimum design of Nd-doped fiber optical amplifiers”, IEEE Photon. Technol. Lett. 4, 49 (1992)|\n|||T. Whitely, “A review of recent system demonstrations incorporating 1.3-μm praseodymium-doped fluoride fiber amplifiers”, IEEE J. Lightwave Technol. 13 (5), 744 (1995)|\n|||T. Sakamoto et al., “35-dB gain Tm-doped ZBLYAN fiber amplifier operating at 1.65μm”, IEEE Photon. Technol. Lett. 8, 349 (1996)|\n|||R. Paschotta et al., “Ytterbium-doped fiber amplifiers”, IEEE J. Quantum Electron. 33 (7), 1049 (1997)|\n|||G. C. Valley, “Modeling cladding-pumped Er/Yb fiber amplifiers”, Opt. Fiber Technol. 7, 21 (2001) (useful review on amplifier modeling)|\n|||J. Limpert et al., “High-power femtosecond Yb-doped fiber amplifier”, Opt. Express 10 (14), 628 (2002)|\n|||J. Limpert et al., “High-power ultrafast fiber laser systems”, IEEE J. Sel.", "score": 15.758340881307905, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "THE Japanese communications giant Nippon Telegraph and Telephone (NTT)\nhas sent digital information through a record length of optical fibre, without\nhaving to amplify the signal electronically. The feat has impressed developers\nof fibre optic systems around the world, and could mean significant improvements\nin the performance of optical fibres both for long-distance transmissions\nand for sending signals to many separate terminals or subscribers.\nWhen a signal travels through an optical fibre, some of it is lost,\nor attenuated. The signals must, therefore, be amplified after passing through\nroughly 50 kilometres of fibre. In current systems, the optical signals\nmust be converted to an electronic form before they can be amplified by\na repeater. 
The stronger electronic signal is then used to drive an optical\nsource connected to the next length of fibre.\nRepeaters are costly and prone to failures. Correcting them can be expensive,\nfor example in cables travelling under the sea. The repeaters must also\nmatch the transmitters and receivers used in the cable, so the cable cannot\nbe upgraded without also replacing the repeaters.\nDesigners would prefer to amplify the signals directly without having\nto convert them first to an electrical form. British Telecom’s researchers\nat the company’s laboratories in Martlesham Heath, near Ipswich, have been\nworking on one approach in which semiconductor lasers amplify the light.\nAt a recent conference in San Francisco on optical fibres, British Telecom\nreported that it had developed semiconductor amplifiers that the researchers\nbelieve produce the highest output from a semiconductor optical amplifier.\nHowever, the performance of semiconductor laser amplifiers has been\neclipsed by another approach using repeaters made of lengths of optical\nfibres doped with small quantities of erbium, a rare earth element. When\nthe erbium atoms are energised by the light they produce light at a wavelength\nof around 1500 nanometres. Conveniently, this is the wavelength at which\nthe signals in the optical fibres are least attenuated. Erbium amplifiers\ngenerate light which is in phase with the light they amplify, in a similar\nfashion to amplifiers based on semiconductor lasers.\nAt the meeting in San Francisco, the researchers from NTT reported that\nthey had set their record using 25 erbium amplifiers, each 160 metres long,\nseparated by 80 kilometres of standard optical fibre.", "score": 15.758340881307905, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "Erbium as a probe of everything?\nPhysica B , Volume 300 p. 78- 90\nErbium is a lanthanide ion with unique electronic and optical properties. 
In its trivalent state it is composed of an incompletely filled 4f inner shell and two closed outer shells. By employing these properties in specific material systems, Er can be used to probe point defects, oxygen, OH, Er, radiation defects, network structure, excitons, optical density of states, optical modes, and photonic bandstructure.", "score": 15.758340881307905, "rank": 76}, {"document_id": "doc-::chunk-7", "d_text": "However, this dependence approaches a constant value for typical conditions of strong pump saturation (the case relevant to high-power Yb:fiber lasers) and can therefore be ignored, allowing gss to be constant along the length of the fiber.\nTo first order, bi-directional pumping leads to nearly uniform pump distribution along the length of the fiber. Consequently, Eq. (3) can be taken as constant along the fiber by assuming proper values for the pump intensity and its saturation intensity. In this approximation, Eq. (2) can be solved for the fundamental mode assuming an amplifier with a fixed gain, and gss as the constant fitting parameter. Typical fiber amplifier performance (gain = 30, efficiency = 80%) and a 1-kW nominal output power lead to the starting values listed in Table 1 . The geometrical cross-sections are taken from the core and cladding of the fabricated SHARC fiber described in .\nUsing the values listed in Table 1 with the emission and absorption cross-sections calculated for ytterbium-doped silica glass fiber , a small signal gain of 12.9 dB/m is obtained by numerically solving Eq. (2). Note that this value is the modal gain. Given the optimized gain width of 45% of the waveguide width, the material gain required to yield the 30-gain amplifier (assuming fundamental-mode operation) is 18.7 dB. 
Note that this value is typical for a ytterbium-doping level of ~0.2 wt%, a level much lower than is conventionally required for high-power dual clad fiber lasers due to the very high (2.5%) core/cladding area ratio inherent in the SHARC fiber geometry.\nThe final piece in analytically modeling a SHARC fiber amplifier is the inclusion of the edge loss with the relative modal gains calculated via Eq. (1). As described in the introduction and in more detail in , the modal loss due to the non-TIR waveguide edges is given by Eq. (4). Combining Eq. (4) with Eqs. (2) and (3) allows prediction of the net modal gains integrated through the SHARC fiber amplifier as a function of the index-step in the slow-axis waveguide. The results, shown in Fig. 4, clearly demonstrate the high modal discrimination expected from SHARC fiber amplifiers employing gain filtering.", "score": 15.758340881307905, "rank": 77}, {"document_id": "doc-::chunk-10", "d_text": "In single-mode fiber, performance is primarily limited by chromatic dispersion (also called group velocity dispersion), which occurs because the index of the glass varies slightly depending on the wavelength of the light, and light from real optical transmitters necessarily has nonzero spectral width (due to modulation). Polarization mode dispersion, another source of limitation, occurs because although the single-mode fiber can sustain only one transverse mode, it can carry this mode with two different polarizations, and slight imperfections or distortions in a fiber can alter the propagation velocities for the two polarizations. This phenomenon is called fiber birefringence and can be counteracted by polarization-maintaining optical fiber. Dispersion limits the bandwidth of the fiber because the spreading optical pulse limits the rate that pulses can follow one another on the fiber and still be distinguishable at the receiver.\nSome dispersion, notably chromatic dispersion, can be removed by a 'dispersion compensator'.
This works by using a specially prepared length of fiber that has the opposite dispersion to that induced by the transmission fiber, and this sharpens the pulse so that it can be correctly decoded by the electronics.\nFiber attenuation, which necessitates the use of amplification systems, is caused by a combination of material absorption, Rayleigh scattering, Mie scattering, and connection losses. Although material absorption for pure silica is only around 0.03 dB/km (modern fiber has attenuation around 0.3 dB/km), impurities in the original optical fibers caused attenuation of about 1000 dB/km. Other forms of attenuation are caused by physical stresses to the fiber, microscopic fluctuations in density, and imperfect splicing techniques.\nEach effect that contributes to attenuation and dispersion depends on the optical wavelength. There are wavelength bands (or windows) where these effects are weakest, and these are the most favorable for transmission. These windows have been standardized, and the currently defined bands are the following:\n|O band||original||1260 to 1360 nm|\n|E band||extended||1360 to 1460 nm|\n|S band||short wavelengths||1460 to 1530 nm|\n|C band||conventional (\"erbium window\")||1530 to 1565 nm|\n|L band||long wavelengths||1565 to 1625 nm|\n|U band||ultralong wavelengths||1625 to 1675 nm|\nNote that this table shows that current technology has managed to bridge the second and third windows that were originally disjoint.", "score": 15.688261897365278, "rank": 78}, {"document_id": "doc-::chunk-3", "d_text": "2(b) adopts a dispersion-compensating fiber of large negative chirp along with the fiber amplifier of a relatively large 8 µm core diameter producing positive chirp. This combination enables elimination of the zero dispersion point within the fiber amplifier, thereby reducing the nonlinear phase noise and spectral broadening. 
This advantage is successfully verified in the output spectrum of the amplified pulses measured at the exit of the fiber amplifier as shown in Fig. 2(c). No significant spectral broadening is observed for the scheme of Fig. 2(b), but this is not the case for that of Fig. 2(a).\nThe net dispersion has to be reduced to zero at the inlet of the HNLF fiber, which is accomplished by adjusting the overall length of the single-mode fiber installed to connect the fiber amplifier and the HNLF fiber via the wavelength-division multiplexer (WDM). This makes the pulse duration short, with the peak pulse energy concentrated for effective generation of an octave-spanning supercontinuum. The spectral broadening in the fiber amplifier affects the power coupling efficiency of the WDM as its transmission bandwidth is confined within the wavelength range of ±10 nm about 1550 nm. The broadened spectrum of the EDF amplifier of 4 µm diameter used in Fig. 2(a) suffers considerable power loss, requiring more pumping power to compensate for the coupling loss.\nThe spontaneous emission in the Er-doped fiber amplifier produces a small amount of stray light which undergoes Fresnel reflections at splicing points. Subsequent amplification of this continuously reflected light disturbs efficient seed signal amplification and even increases the unwanted background noise. To prevent this, optical fiber isolators are located at both end facets of the fiber amplifier. Fig. 3 shows the amplified spontaneous emission (ASE) of the Er-doped fiber with and without proper isolation of Fresnel reflections. The total amount of ASE at 1560 nm wavelength is well suppressed by the isolators by a factor of 30 dB. The resulting spectrum shows the same distribution as that of the typical emission spectrum of Erbium as illustrated in the inset of Fig. 3(b). The pump power required for supercontinuum generation is consequently much lowered after the suppression of the ASE.
Less pumping also decreases the pump-induced phase noise in the Er-doped fiber amplifier. Besides, the use of isolators helps avoid occasional damages of the pump diodes caused by partial reflection of the amplified pulses.", "score": 13.897358463981183, "rank": 79}, {"document_id": "doc-::chunk-1", "d_text": "They managed to send\n2.5 billion bits of information per second through their length of fibre.\nTheir record, of 2223 km, is far beyond that achieved elsewhere, which at\nbest has been around 1000 kilometres, even using optical amplifiers.\nAt the same conference, other groups said that they had also demonstrated\nimportant achievements with erbium amplifiers. Many engineers would like\nto transmit a number of different wavelengths of light through the same\noptical fibre. Until now they did not believe that erbium amplifiers would\noperate at a sufficiently wide range of wavelengths to make this possible.\nTheir fears were allayed in San Francisco by news from two companies. A\nJapanese telecommunications company, called KDD Research & Development\nLaboratories, and Bell Communications Research (Bellcore) in New Jersey.\nThe team from KDD told the conference that it had sent 2.4 billion bits\nof information per second at four different wavelengths through an optical\nfibre nearly 500 kilometres long containing six erbium amplifiers.\nThe team from Bellcore used an erbium repeater to amplify signals with\n16 different wavelengths. More significantly, the Bellcore team showed the\npromise of erbium amplifiers in fibre optic networks that distribute signals\nto many terminals. Optical signals suffer high losses when they are divided\namong many fibres.\nOptical repeaters avoid this problem by amplifying the optical signal\nat the transmitter or intermediate points. 
In Bellcore’s experiments, the\nsignals from all 16 laser sources were sufficiently strong to be divided\namong more than 4000 terminals.", "score": 13.897358463981183, "rank": 80}, {"document_id": "doc-::chunk-6", "d_text": "K. Y. Lau, “Gain switching of semiconductor injection lasers,” Appl. Phys. Lett. 52(4), 257–259 (1988). [CrossRef]\n2. Y. Arakawa, T. Sogawa, M. Nishioka, M. Tanaka, and H. Sakaki, “Picosecond pulse generation (<1.8 ps) in a quantum well laser by a gain switching method,” Appl. Phys. Lett. 51(17), 1295–1297 (1987). [CrossRef]\n3. S. D. Jackson and T. A. King, “Efficient gain-switched operation of a Tm-doped silica fiber laser,” IEEE J. Quantum Electron. 34(5), 779–789 (1998). [CrossRef]\n5. B. Schmaul, G. Huber, R. Clausen, B. Chai, P. Li Kam Wa, and M. Bass, “Er3+:YLiF4 continuous wave cascade laser operation at 1620 and 2810 nm at room temperature,” Appl. Phys. Lett. 62(6), 541–543 (1993). [CrossRef]\n6. R. M. Percival, D. Szebesta, and S. T. Davey, “Highly efficient CW cascade operation of 1.47 and 1.82 µm transitions in Tm-doped fluoride fiber laser,” Electron. Lett. 28(20), 1866–1868 (1992). [CrossRef]\n7. G. Qin and Y. Ohishi, “Cascaded two-wavelength lasers and their effects on C-Band amplification performance for Er3+-doped fluoride fiber,” IEEE J. Quantum Electron. 43(4), 316–321 (2007). [CrossRef]\n8. M. Pollnau, Ch. Ghisler, G. Bunea, M. Bunea, W. Lüthy, and H. P. Weber, “150 mW unsaturated output power at 3 μm from a single-mode-fiber erbium cascade laser,” Appl. Phys. Lett. 66(26), 3564–3566 (1995). [CrossRef]\n9.", "score": 13.897358463981183, "rank": 81}, {"document_id": "doc-::chunk-5", "d_text": "When a dual-stage configuration was used for the wavelength filter, a bandwidth of 13 nm was realized, and the spectral sidebands were reduced considerably as compared with the single-stage filter configuration. A 504 fs sech2-like pulse was obtained. The temporal shape is shown in Fig. 6(b). 
The repetition rate was 21.6 MHz, the corresponding output power was 12.6 mW, and the pulse energy was 580 pJ. The estimated soliton order N was 2.6. Compared with the previous work using a 1:1 output coupler, the average power and the pulse energy were increased by factors of 2.5 and 5, respectively.\nNext, we examined the RF noise characteristics of the output pulses. A wavelength filter with a bandwidth of 13 nm was used. The single-sideband measurement method was used for the RF noise measurement , employing a fast photodiode, bias-T, and RF spectrum analyzer. The observed RF noise when the output coupling ratio was 98% is shown in Fig. 7(a) . There was no large noise component. The noise level was as low as that of a commercially available ultrashort-pulse solid state laser. Figure 7(b) shows the variation of the averaged magnitude of RF noise as a function of output coupling ratio. The magnitude of RF noise in Fig. 7(a) was obtained by averaging the RF noise spectra between 1 kHz and 10 MHz. Generally speaking, the magnitude of RF noise was low for the whole range of output coupling ratio. As the coupling ratio was increased, the magnitude of the RF noise was decreased slightly and continuously, reaching the minimum when the output coupling ratio was 80%. The magnitude of RF noise was increased as the output coupling ratio was increased above 80%. This means that, for the high output coupling region of 85–95%, the output power and noise magnitude were both increased. The reason for the noise increment is considered to be the effect of amplified spontaneous emission noise in the Er-doped fiber under the extremely small input power condition.\n4. Wavelength-tunable operation\nWavelength tunability is also an important feature for ultrashort pulse fiber lasers. Next, we tuned the properties of the wavelength filter to demonstrate wavelength-tunable operation of the PM fiber laser. 
We used a dual-stage wavelength filter.", "score": 13.897358463981183, "rank": 82}, {"document_id": "doc-::chunk-3", "d_text": "In certain embodiments, this can result from the relatively easy power scaling properties of a fiber.\nFeatures, objects and advantages of the invention are in the summary, description, drawings and claims.\nTypically, core 14 includes a first material (e.g., a silica material, such as a fused silica) and at least one dopant (e.g., at least one rare earth ion, such as, for example, erbium ions, ytterbium ions, neodymium ions, holmium ions, dysprosium ions and/or thulium ions; and/or transition metal ion(s)) where the rare earths are understood to include elements 57-71 of the periodic table. More generally, however, core 14 can be formed of any material (e.g., active material) or combination of materials (e.g., active materials) capable of interacting with a pump signal to enhance pump signal absorption (e.g., produce gain). In certain embodiments, core 14 is formed of fused silica doped with erbium ions. As is well understood by one of ordinary skill in the art, active materials, such as the rare earths, provide energy of a first wavelength responsive to receiving energy (typically referred to as “pump” energy) of a second wavelength that is different than the first wavelength.\nCore 14 can optionally include certain other materials. For example, core 14 can include one or more materials to increase the index of refraction. Such materials include, for example, germanium oxide. Core 14 can include one or more materials to decrease the index of refraction. Such materials include, for example, boron oxide. Core 14 can include one or more materials (e.g., aluminum oxide) that enhance the solubility of the rare earth ion(s) within core 14 (e.g., within silica, such as fused silica). Core 14 can include one or more materials that enhance the homogeneity of the index of refraction within core 14. 
An example of such a material is phosphorus pentoxide.\nGenerally, core 14 is designed to support multimode energy propagation. The thickness R of core 14 can vary depending upon the intended use of fiber 10.", "score": 11.600539066098397, "rank": 83}, {"document_id": "doc-::chunk-7", "d_text": "Another common practice is to bundle many fiber optic strands within long-distance power transmission cable. This exploits power transmission rights of way effectively, ensures a power company can own and control the fiber required to monitor its own devices and lines, is effectively immune to tampering, and simplifies the deployment of smart grid technology.\nThe transmission distance of a fiber-optic communication system has traditionally been limited by fiber attenuation and by fiber distortion. By using opto-electronic repeaters, these problems have been eliminated. These repeaters convert the signal into an electrical signal, and then use a transmitter to send the signal again at a higher intensity than was received, thus counteracting the loss incurred in the previous segment. Because of the high complexity with modern wavelength-division multiplexed signals (including the fact that they had to be installed about once every 20 km), the cost of these repeaters is very high.\nAn alternative approach is to use optical amplifiers which amplify the optical signal directly without having to convert the signal to the electrical domain. One common type of optical amplifier is called an Erbium-doped fiber amplifier, or EDFA. These are made by doping a length of fiber with the rare-earth mineral erbium and pumping it with light from a laser with a shorter wavelength than the communications signal (typically 980 nm). EDFAs provide gain in the ITU C band at 1550 nm, which is near the loss minimum for optical fiber.\nOptical amplifiers have several significant advantages over electrical repeaters. 
First, an optical amplifier can amplify a very wide band at once which can include hundreds of individual channels, eliminating the need to demultiplex DWDM signals at each amplifier. Second, optical amplifiers operate independently of the data rate and modulation format, enabling multiple data rates and modulation formats to co-exist and enabling upgrading of the data rate of a system without having to replace all of the repeaters. Third, optical amplifiers are much simpler than a repeater with the same capabilities and are therefore significantly more reliable. Optical amplifiers have largely replaced repeaters in new installations, although electronic repeaters are still widely used as transponders for wavelength conversion.\nWavelength-division multiplexing (WDM) is the practice of multiplying the available capacity of optical fibers through use of parallel channels, each channel on a dedicated wavelength of light.", "score": 11.600539066098397, "rank": 84}, {"document_id": "doc-::chunk-1", "d_text": "A single switchable-gain EDFA performs equally compared to multiple variable-gain EDFAs, each of which operates over a narrower gain range as shown below. The most compelling benefits of switchable-gain EDFAs for NEMs and network operators are operational simplicity and inventory sparing.\n- L-Band Amplifiers\nLumentum offers L-band amplifiers (EDFAs and Raman) for geography-specific applications and fiber-scarce applications. The design approach to L-band and C+L band amplifiers differs from that of C-band amplifiers. Here again, Lumentum’s rich history of L-band amplifier designs and expertise offers customers the comfort and confidence of a trusted, turnkey solution.\n- Compact EDFA Arrays\nEDFA arrays — multiple parallel instances of the same amplifier in a co-located physical package – are an essential component for compensating loss within colorless, directionless, and contentionless (CDC) ROADM nodes. 
The attributes for EDFA arrays differ from those of in-line amplifiers because EDFA arrays typically operate at a fixed gain and support a subset of the full complement of coherent DWDM channels depending on the add/drop architecture at the node. Lumentum offers arrays of 8 EDFAs (4 on add path, 4 on drop path), each gain-flattened and capable of supporting up to a maximum of 16 wavelengths across the entire C-band. A compact form factor, low power dissipation, and low cost are the primary hallmarks of Lumentum EDFA arrays.\n- Single-Channel Small Form Factor (SFF) Amplifiers\nLumentum offers single-channel SFF EDFA gain blocks and modules for space- and power-constrained transponder linecards. Equipped with an uncooled pump laser, our SFF amplifier lets transponder card designers maximize the use of their board space for high-speed electro-optic components.", "score": 11.600539066098397, "rank": 85}, {"document_id": "doc-::chunk-7", "d_text": "Fiber is silica based, and includes a germania-doped core, together with one or more cladding layers which may be of silica or may be down doped with fluorine. The overall 125 μm structure has a core of a diameter of about 6 μm. The index peak has a Δn of 0.013-0.015 with reference to undoped silica. The usual profile is triangular or trapezoidal, possibly above a 20 μm platform of Δn≅0.002. The WDM fiber specified may be compensated by a spool of compensating fiber. Compensating fiber of co-pending U.S. Pat. application Ser. No. 07/978,002, filed Nov. 18, 1993, is suitable for this purpose. Illustrative structures have a dispersion of 2 ps/nm-km.\nThe principle has been described. It is likely to take the form of a major length of fiber of positive sign of dispersion, followed by compensating fiber of negative dispersion. As with WDM fiber, compensating fiber may be of the form described in the co-pending U.S. 
patent application.\nSelf-Phase Modulation, a non-linear effect resulting in random generation of different wavelengths, is found to be small. From FIGS. 4 and 5, it is concluded that compensation for (linear) dispersion at appropriate distances (in that instance at 120 km spaced amplifier positions) effectively eliminates SPM as a consideration. Under these circumstances, fiber with λ0 = 1310 nm is acceptable (disregarding cost and inconvenience of compensation). The near-term WDM system on which description is based (360 km span length, four-channel 5 Gbit/channel) does accept the 17 ps/nm-km uncorrected material dispersion of λ0 = 1310 nm fiber. Future systems of longer spans or of greater capacity may use fiber of 8 ps/nm-km dispersion.\nConsideration of SPM leads to compensation several times along each span length. Requirements for the near-term WDM system are met by compensation of the 17 ps/nm-km fiber at each amplifier (e.g. at spacings of 120 km). The inventive advance is useful for systems of shorter span length as discussed.", "score": 11.600539066098397, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "Raman Amplifiers for Telecommunications: There has been a revived interest in Raman amplification due to the availability of high pump powers and improvements in small core size fibers. Two general categories of Raman amplifiers exist: distributed (also known as DRAs) and discrete. They improve the noise figure and reduce the nonlinear penalty of the amplifier, allowing for longer amplifier spans, higher bit rates, closer channel spacings, and operation near the zero dispersion wavelength. DRAs are already becoming commonplace in most long-haul networks. Consequently, Raman amplifiers should see a wide range of deployment in the next few years. This edited monograph is written by leading experts in this area and is the first book entirely devoted to Raman amplification. 
Three sections include extensive background on Raman physics, descriptions of sub-systems and modules utilizing Raman technology, and a review of current state-of-the-art systems. Technologies presented include applications for long-haul and ultra-long-haul submarine, terrestrial, soliton, and high-speed systems. This book will be a resource for scientists and optical engineers in optoelectronics, fiber optics, telecommunication, and optical networks.", "score": 11.559357502109556, "rank": 87}, {"document_id": "doc-::chunk-3", "d_text": "For cases with erbium-ytterbium energy transfers, double-clad fibers, pulsed amplification, etc., this is even more the case.\nNeodymium and Ytterbium Fiber Amplifiers\nFiber amplifiers based on ytterbium- or neodymium-doped double-clad fibers can be used to boost the output power of 1-μm laser sources to very high levels of up to several kilowatts (→ high-power fiber lasers and amplifiers). The broad gain bandwidth is also suitable for the amplification of ultrashort pulses (→ ultrafast amplifiers); limitations arise from fiber nonlinearities such as the Kerr effect and Raman effect (see below). Single-frequency signals can also be amplified to high powers; in this case, stimulated Brillouin scattering usually sets the limits.\nNeodymium-based amplifiers can also be used in the 1.3-μm spectral region, but with less favorable performance figures .\nErbium Fiber Amplifiers\nFiber amplifiers based on erbium-doped single-mode fibers (EDFAs) are widely used in long-range optical fiber communication systems for compensating the loss of long fiber spans. See the article on erbium-doped fiber amplifiers for more details.\nThulium Fiber Amplifiers\nThulium-doped fluoride fibers (TDFA = thulium-doped amplifier) pumped around 1047 or 1400 nm can be used for amplification in the telecom S band around 1460–1530 nm, or even around 1.65 μm. 
Combined thulium–erbium amplifiers can thus provide optical amplification in a very wide wavelength range.\nThere are also thulium-doped amplifiers for the first telecom window, operating at ≈ 800–850 nm.\nPraseodymium Fiber Amplifiers\nFiber amplifiers for the second telecom window around 1.3 μm are also available [7, 9], but have a lower performance compared with that of erbium-doped amplifiers. They can be based on praseodymium-doped fluoride fibers (PDFA = praseodymium-doped amplifier), which are pumped around 1020 nm (a relatively inconvenient pump wavelength) or at 1047 nm (with a YLF laser).\nSome Design Issues\nFiber amplifiers can be pumped in forward direction (i.e.", "score": 8.086131989696522, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "One of the samples prepared at the University turned out to be vitreous rather than crystalline and the fluorozirconate glasses had been discovered. Initial interest in these materials was as improved glasses for high energy fusion lasers and in 1978, in a joint publication with the University of Rennes, Weber concluded that the glasses should be considered as candidates for laser materials . A further significance of these new glasses was soon recognised when it was proposed that they might have applications for low loss IR fibres, with intrinsic losses below those in silica, and a considerable amount of work is now underway in an attempt to reach these targets . These two interests were amalgamated in 1987 when the first fluoride fibre laser was demonstrated by Brierley and France , doped with Nd and lasing at 1.05 um. Following this work many new dopants have been investigated and many different lasing lines have been reported. 
This paper reviews these recent developments in fluoride fibre lasers and amplifiers from several laboratories throughout the world and includes some of the latest results on semiconductor diode pumping close to 0.8 um.\nCommon techniques for the fabrication of optical waveguides (MCVD-Modified Chemical Vapor Deposition, OVD-Outside Vapor Deposition, VAD-Vapor Axial Deposition) depend upon the availability of high vapor pressure precursor compounds such as SiC14, GeC14, and POC13. Vapor delivery techniques can not be used to transport compounds with a low vapor pressure. To incorporate such elements into the glass structure, we are investigating aerosol doping for both MCVD and OVD. Low mass flow rate aerosol transport is being used for core doping of rare earth elements in MCVD, and a high mass flow aerosol transport may have application in overcladding in OVD, the fabrication of fiber boules of glasses with a high nonlinear refractive index, and for GRIN (Gradient Index Lenses) lenses.\nRecent advances in the field of Er3+-doped optical fibre amplifiers and lasers operating around λ=1.54μm will be described, with particular reference to 'new' pump wavelengths and practical pump sources for these devices.\nDespite the excellent performance that has been demonstrated for Er3+-doped fiber amplifiers at 1.5 µ m, they have several shortcomings that remain concerns in practical applications.", "score": 8.086131989696522, "rank": 89}, {"document_id": "doc-::chunk-1", "d_text": "The Features of Er Glass:\n- Eye safety\n- High optical quality\n- Long fluorescent life\n- The absorption band is wide\n- The slope of high efficiency\n|The cross section of stimulated emission(10-20cm2)||0.8||0.75|\n|Fluorescence lifetime (milliseconds) *||7.7-8.0||7.7-8.2|\n|Central laser wavelength(nm)||1535||1535|\n|Refractive Index(d 589.3nm)||1.532||1.536|\n|dn / dT(10-6 /℃)(20〜100℃)||-1.72||-3|\n|Transition Temperature (℃)||556||530|\n|Softening Temperature (℃)||605||573|\n|Linear 
Coefficient of Thermal Expansion (10-7/K) (20〜100℃)||87||82|\n|Linear Coefficient of Thermal Expansion (10-7/K) (100〜300℃)||95||96|\n|Thermal Coefficient of Optical Path Length (10-6/K) (20〜100℃)||2.9||1.4|\n|Thermal Conductivity (25℃) (W/mK)||0.7||0.7|\n|Density g / cm3||3.06||2.83|\n|Chemical Durability (weight loss in 100℃ distilled water)(μg/ hr.cm2)||52||82|\n|Orientation Tolerance||< 0.5°|\n|Thickness/Diameter Tolerance||±0.05 mm|\n|Surface Flatness||<λ/8@632 nm|\n|Wavefront Distortion||<λ/4@632 nm|\n|Biggest Size||dia (3-12.7)×(3-150)mm2|\nAbsorption and Emission Spectra\n| B Y Z A , A M L , A J L , et al. Optical properties of Er 3+ /Yb 3+ co-doped phosphate glass system for NIR lasers and fiber amplifiers[J]. Ceramics International, 2018, 44( 18):22467-22472.|\n| Langar A , Bouzidi C , Elhouichet H , et al.", "score": 8.086131989696522, "rank": 90}, {"document_id": "doc-::chunk-11", "d_text": "T. Li, K. Beil, C. Krankel, C. Brandt, and G. Huber, “Laser Performance of Highly Doped Er:Lu2O3 at 2.8 μm,” in Lasers, Sources, and Related Photonic Devices, OSA Technical Digest (CD) (Optical Society of America, 2012), paper AW5A.6.\n19. W. L. Gao, J. Ma, G. Q. Xie, J. Zhang, D. W. Luo, H. Yang, D. Y. Tang, J. Ma, P. Yuan, and L. J. Qian, “Highly efficient 2 μm Tm:YAG ceramic laser,” Opt. Lett. 37(6), 1076–1078 (2012). [CrossRef] [PubMed]\n20. L. S. Rothman, D. Jacquemart, A. Barbe, D. Chris Benner, M. Birk, L. R. Brown, M. R. Carleer, C. Chackerian Jr., K. Chance, L. H. Coudert, V. Dana, V. M. Devi, J.-M. Flaud, R. R. Gamache, A. Goldman, J.-M. Hartmann, K. W. Jucks, A. G. Maki, J.-Y. Mandin, S. T. Massie, J. Orphal, A. Perrin, C. P. Rinsland, M. A. H. Smith, J. Tennyson, R. N. Tolchenov, R. A. Toth, J. Vander Auwera, P. Varanasi, and G. Wagner, “The HITRAN 2004 molecular spectroscopic database,” J. Quantum Spectros. Rad. Trans. 96(2), 139–204 (2005).\n22. M. Pollnan and S. D. Jackson, “Erbium 3 μm fiber lasers,” IEEE J. Sel. Top. Quantum Electron. 
7(1), 30–40 (2001). [CrossRef]\n23.", "score": 8.086131989696522, "rank": 91}]} {"qid": 34, "question_text": "When and where was Stravinsky's ballet suite, which rearranged Pergolesi's work, first performed?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "- Russian composer who wrote this ballet piece after the 1st World War\n- Other compositions The Firebird (1910), Petrushka (1911) and The Rite of Spring (1913) used massive symphony orchestras\n- After war performances had to be less flamboyant\n- Director of the Ballets Russes Serge Diaghilev asked Stravinsky to take 18thC composer Pergolesi's work and rearrange it\n- Vivo (cello sonata) is by Pergolesi, Sinfonia (Trio Sonata) is actually by Gallo and Gavotta (keyboard piece) is by Monza.\n- The suite was completed in 1922.\n- First performed in 1922 in Boston, USA by Pierre Monteux who championed Stravinsky's music.\n1 of 14\nRepresented reaction against overblown emotions and formlessness of late 19thC music.\n- Movements in neo-classicism were short - suited ballet\n- Structures based on 18thC ritornello, sonata form, variation, rondo, and simple binary and ternary forms\n- Harmonies based on originals with added discords\n- Rhythms used influence of jazz (syncopation)\n- Used wider variety of instrumentation and techniques than what would have been used in 18thC\nThis suite is relatively close to 18thC pieces with melodies, structure and basic harmonies kept the same.\n2 of 14\nPerformance Forces and their Handling\n- Original pieces only had a max of 4 players\n- Wrote for chamber orchestra of 32 players - what Haydn might have used in late 18thC\n- Use of solo trombone in Vivo not in 18thC piece\n- Separate string group in concerto grosso style has 5 solo string players and no continuo - usually 2 violins and cello with harpsichord/organ continuo.\n- Double bass part was same as cello in 18thC but in Vivo there is a virtuoso solo part.\n- Adds many articulations like slurs and staccatos less frequently found 
in 18thC music.", "score": 53.530915653074, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Welcome to Hyperion Records, an independent British classical label devoted to presenting high-quality recordings of music of all styles and from all periods from the twelfth century to the twenty-first.\nPulcinella was premiered on 15th May 1920 by the Ballets Russes at the Opéra in Paris, where it was billed simply as ‘music by Pergolesi, arranged and orchestrated by Igor Stravinsky’. Yet the work subsequently came to be identified more directly with Stravinsky as composer rather than arranger, in part a consequence of the concert suites he made of the score, including the version from 1922 (revised 1949). While Stravinsky later asserted that the ‘remarkable thing about Pulcinella is not how much but how little has been added or changed’, the alterations are significant enough to turn the music instantly into something unmistakably of the 20th century. Stravinsky began by working directly onto the transcriptions Diaghilev had given him, subtly annotating the melodies and bass lines of arias by Pergolesi, trio sonata movements by Gallo, and even a tarantella by Wassenaer. Sometimes the result was just a representation of the original in Stravinsky’s own accent. No-one could mistake the trombone and double-bass melody of the ‘Vivo’ for anything other than Stravinsky, even though every note of Pergolesi’s music is still present. There are cunning harmonic touches, anachronistic pedal points and off-beat accents that reveal the thumbprint of the arranger, but it remains a loving, albeit humorous, homage to Pergolesi. The same is true of the opening ‘Sinfonia’ (original music by Gallo). Elsewhere, however, Stravinsky declares his hand more decisively. 
In the ‘Serenata’, for instance, he adds an unchanging drone (an open fifth), which denies the music its forward movement and whose resulting dissonances bestow a languid, melancholic air.", "score": 50.88879985509298, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "A new modern dance adaptation of Stravinsky's neoclassical \"Pulcinella\" will be presented as the final Grand Valley State University Fall Arts Celebration 2009 event.\nThe performance, on Monday, October 19, begins at 8 p.m. in the Louis Armstrong Theatre, Performing Arts Center, Allendale Campus. The performance is open to the public with free admission.\nShawn T Bible, Grand Valley assistant professor of dance, reinvents the classical characteristics in the traditional ballet. He believes that Stravinsky would probably be amused, since his own version of the ballet was a masterful reworking of the music of Pergolesi, an early classical composer of comic operas in Naples during the mid 1700s.\n\"The music was written by Stravinsky with the narrative in mind,\" said Bible. \"The ballet will maintain its structure, characters and plot, while presenting the audience with a refreshing perspective on a classic and stronger gestural movement vocabulary that lends itself to modern dance.\"\nPulcinella is a character originating from commedia dell'arte. Scored for a modern chamber orchestra with soprano, tenor, and baritone soloists, the ballet was first commissioned by Sergei Diaghilev and premiered in Paris in 1920 under the baton of Ernest Ansermet. The dancer Leonid Myasin (Léonide Massine) created both the libretto and choreography, and Pablo Picasso designed the original costumes and sets.\nThe ballet unfolds in one single act and features Pulcinella, his girlfriend Pimpinella, and a cast of interesting characters involved in the pursuits of love. 
It also recognized WWI in a heavy way, though the story is funny with many high-jinks, and is intended as an escapist experience.\n'Pulcinella' was most often performed as orchestral suites and not danced. This performance will be a collaboration between faculty and students, conducted by Henry Duitman, choreographed by Shawn T Bible, and danced by a splendid cast of Grand Valley dance faculty, the student Dance Ensemble, and guest dancers.\nFor more information, contact Shawn T Bible, assistant professor of dance in Grand Valley's Department of Music, at email@example.com or (616) 331-3487.", "score": 48.912592256855504, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Stravinsky’s Pulcinella suite is the highlight in a vibrant concert to showcase the 2018 SSO Fellows.\nThe SSO Fellows are talented young musicians on the cusp of an orchestral career, and by October they’re reaching the culmination of a year of mentoring and professional experience. To showcase their talents, Fellowship Artistic Director Roger Benedict has devised an appealing French-flavoured concert with a neoclassical theme.\nPoulenc’s Suite française was written for a play set in the royal court of 16th-century France and he gives a chic twist to antique dances from the time. Stravinsky’s Pulcinella, commissioned for the Ballets Russes as a tribute to commedia dell’arte, does the same thing: cunningly reworking music from the 18th century and infusing it with modern genius. In between these delightful pieces, Australian mezzo-soprano Caitlin Hulcup will join the Fellows to perform Ravel’s intoxicating songs based on poems by the symbolist Stéphane Mallarmé.", "score": 47.13529135397122, "rank": 4}, {"document_id": "doc-::chunk-1", "d_text": "Colorful and traditional are a rare yet beautiful combination, which led me to choose Igor Stravinsky’s Pulcinella Suite as the musical pairing. This piece is neoclassicism at its finest. 
The work was commissioned by Sergei Diaghilev, who was the founder of the Ballet Russes and one of the primary influences behind Stravinsky’s ballet repertoire. Diaghilev wanted a ballet inspired by commedia dell’arte, and Stravinsky was naturally tasked with creating the musical score…while the costumes and set were designed by none other than Pablo Picasso! The ballet is based around Pulcinella (pictured right), who was a classic character of the commedia dell’arte genre. Stravinsky revised the original music (believed to have been written by 18th-century composer Giovanni Pergolesi) by incorporating contemporary harmonies and rhythms and by scoring it for a sizable chamber orchestra. He says the following of the piece:\n“Pulcinella was my discovery of the past, the epiphany through which the whole of my late work became possible. It was a backward look, of course—the first of many love affairs in that direction—but it was a look in the mirror, too.”\nThe ballet was premiered for a Parisian audience in May 1920. 2 years later, Stravinsky abridged the ballet into a “Suite” for chamber orchestra, which uses 11 of the original 18 movements – the work has since become a standard of the orchestral canon. Like the above dish, Pulcinella is by far one of my favorites – the colors and characters are truly unparalleled, and I hope you enjoy it!", "score": 47.05302471974374, "rank": 5}, {"document_id": "doc-::chunk-3", "d_text": "2 in B flat Major (Allegro moderato) was among the works that formed the basis for Igor Stravinsky's Pulcinella, Tarantella, based on works considered at the time to be by Giovanni Battista Pergolesi.\nApart from Concerti Armonici, three sonatas for recorder and continuo also were discovered in the early 1990s.\n- Count Unico Wilhelm van Wassenaer (1692–1766). A master unmasked, or the Pergolesi-Ricciotti puzzle solved. By Albert Dunning. Tr. by Joan Rimmer. 
Frits Knuf, 1980.", "score": 45.55535202510968, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "Rather like the other classical CDs this week, we suspect that because we like this, purists may well have something to harp on about. Suzuki is an expert on Bach — he is recording the complete choral works of Bach, as well as Bach's concertos, orchestral suites, and solo works for harpsichord and organ — so perhaps he's turned to Stravinsky for some light relief.\nThe opening piece is The Pulcinella Suite, from a ballet and based on an 18th-century play — Pulcinella is who we call Punch. It's a lively and jolly piece, with the Vivo being even comical.\nThe longest work is Apollo, also from a ballet, which centres on Apollo, the Greek god of music, who is visited by the Muses Terpsichore, Polyhymnia and Calliope. It is a calm and serene piece, and apparently was first performed with monochrome costumes for the dancers and no elaborate scenery.\nAfter the serene ballet, the closing piece Concerto in D is livelier, making this an enjoyable collection of music. Its easiness on the ear perhaps makes its lack of edge a downer for purists but if you don't want to work hard for your music, it's enjoyable. Which is what it's supposed to be….\nOut on BIS, 2211, and it's a SACD.", "score": 43.69796359297399, "rank": 7}, {"document_id": "doc-::chunk-3", "d_text": "Diaghilev also worked with dancer and ballet master Léonide Massine.\nThe artistic director for the Ballets Russes was Léon Bakst. Together they developed a more complicated form of ballet with show-elements intended to appeal to the general public, rather than solely the aristocracy. The exotic appeal of the Ballets Russes had an effect on Fauvist painters and the nascent Art Deco style.\nPerhaps Diaghilev's most notable composer-collaborator, however, was Igor Stravinsky. 
Diaghilev heard Stravinsky's early orchestral works Fireworks and Scherzo fantastique, and was impressed enough to ask Stravinsky to arrange some pieces by Chopin for the Ballets Russes. In 1910, he commissioned his first score from Stravinsky, The Firebird. Petrushka (1911) and The Rite of Spring (1913) followed shortly afterwards, and the two also worked together on Les noces (1923) and Pulcinella (1920), the latter with Picasso, who designed the costumes and the set.

After the Russian Revolution of 1917, Diaghilev stayed abroad. The new Soviet regime, once it became obvious that he could not be lured back, condemned him in perpetuity as an especially insidious example of bourgeois decadence. Soviet art historians wrote him out of the picture for more than 60 years.

Diaghilev staged Tchaikovsky's The Sleeping Beauty in London in 1921; it was a production of remarkable magnificence in both settings and costumes but, despite being well received by the public, it was a financial disaster for Diaghilev and Oswald Stoll, the theatre owner who had backed it. The first cast included the legendary ballerina Olga Spessivtseva and Lubov Egorova in the role of Aurora. Diaghilev insisted on calling the ballet The Sleeping Princess. When asked why, he quipped, "Because I have no beauties!" The later years of the Ballets Russes were often considered too "intellectual" and too "stylish", and seldom had the unconditional success of the first few seasons, although younger choreographers like George Balanchine hit their stride with the Ballets Russes.

About this Piece

If Stravinsky was known at all in Paris in 1908, it was as a promising student and disciple of Nikolai Rimsky-Korsakov. In that year, two of the 26-year-old composer's works were conducted by Alexander Siloti in St.
Petersburg: Scherzo fantastique and Fireworks. Hardly earth-shakers by the lights of what was to come, but sufficiently original and imaginative to make an impression on at least one connoisseur in the audience, Sergei Diaghilev, who was about to launch a new company to perform a mixed season of ballet and opera in Paris. The French capital was at the time particularly taken with the kind of exotic orientalia Russian artists were likely to offer.

With dancer-choreographer Mikhail Fokine and scenic artists Leon Bakst and Alexandre Benois on board, Diaghilev lacked only a resident composer to round out his creative staff. That Stravinsky was his man was proven by his successful handling of some test assignments for the opening season of what would be called the Ballets Russes, including the orchestration of Chopin's piano Nocturne in A-flat and Valse brillante in E-flat for the opening and closing numbers of Fokine's Les Sylphides.

During the company's second season, in 1909, Diaghilev gave his potential resident composer something more demanding. When Anatol Liadov was unable to make much progress on a score commissioned from him for a ballet on the Russian legend of the magical Firebird and the Tsarevich Ivan, Diaghilev transferred the project to Stravinsky, who was at the time living in the country home, near St. Petersburg, of the Rimsky-Korsakov family at the invitation of the late composer's son, Andrei.

The lavish production of L'oiseau de feu, as it was called, was first seen at the Paris Opera on June 25, 1910. Choreography was by Fokine, who also danced the role of the Tsarevich.
The Firebird was Tamara Karsavina, and the conductor was Gabriel Pierné, himself a well-known composer at the time.

The colorfully melodic score made an overnight star of the composer, who would scandalize even the most progressive Paris audience three years later with his Rite of Spring.

He participated as designer in productions of the Ballets Russes from its beginning in 1909 until 1921, creating sets and costumes for Scheherazade (1910, music by Rimsky-Korsakov), The Firebird (1910, music by Stravinsky), Le Spectre de la rose (1911, music by Weber), L'Après-midi d'un faune (1912, music by Debussy), and Daphnis et Chloé (1912, music by Ravel), among other productions.

Beginning in 1917, Pablo Picasso (1881–1973) designed sets and costumes in the Cubist style for three Diaghilev ballets, all with choreography by Léonide Massine: Parade (1917, music by Erik Satie); El sombrero de tres picos (The Three-Cornered Hat) (1919, music by Manuel de Falla); and Pulcinella (1920, music by Igor Stravinsky).

Natalia Goncharova was born in 1881 near Tula, Russia. Her art was inspired by Russian folk art, fauvism, and cubism. She began designing for the Ballets Russes in 1921.

Although the Ballets Russes firmly established the 20th-century tradition of fine art theatre design, the company was not unique in its employment of fine artists.
For instance, Savva Mamontov's Private Opera Company had made a policy of employing fine artists, such as Korovin and Golovin, who went on to work for the Ballets Russes.

The composers and conductors

The impresario also engaged conductors who were or became eminent in their field during the 20th century, including Pierre Monteux (1911–16 and 1924), Ernest Ansermet (1915–23), Edward Clark (1919–20) and Roger Désormière (1925–29).

Diaghilev had hired the young Stravinsky, at a time when he was virtually unknown, to compose the music for The Firebird after the composer Anatoly Lyadov proved unreliable. Diaghilev was instrumental in launching Stravinsky's career in Europe and the United States of America.

Stravinsky's early ballet scores were the subject of much discussion.

Stravinsky and Glazunov conduct their own Ballet music

"Stravinsky's performance is filled with passion and considerable dramatic understanding ... The performance we have here of [Glazunov's] own music is masterful" – Fanfare

This release presents music from three Russian ballets premièred within the first two decades of the 20th century, which reflect the enormous stylistic changes going on in music during the period. Igor Stravinsky began as an admirer of Glazunov, but soon set out on his own course with The Firebird in 1910. The following year saw the première of Petrushka, a work which the conservative elder composer characterized as "not music, but […] excellently and skillfully orchestrated."

Stravinsky made some never-published acoustic piano sides for the Brunswick label while on tour in America in 1925, but the present suite from Petrushka was his first issued recording.
Although the ballet had, by that time, been recorded twice complete on eight sides by HMV (acoustically by Goossens, electrically by Coates, the latter on Pristine PASC 304), Columbia opted to record only a suite on six sides. Missing from this version were the “Dance of the Ballerina” and most of the “Waltz: The Ballerina and the Moor” in Scene III, and “The Peasant and the Bear” (except for the opening bars) and “The Jovial Merchant with Two Gypsy Girls” in Scene IV. Like other concert versions of the ballet, it ends with “The Masqueraders”, rather than with Petrushka’s fight, death, and ghostly reappearance.\nAfter this recording made in London, the focus of Stravinsky’s microphonic activities shifted for the next few years to Paris. At the tail end of his 1928 Firebird Suite sessions (Pristine PASC 387), he recorded the last three selections from the suite to Pulcinella, his ballet for orchestra and singers based on works by Pergolesi, which had marked the beginning of Stravinsky’s neoclassical period. Four years later, he returned to the work and recorded two more movements, but did not record any further excerpts during the 78 rpm era.", "score": 37.33083739725237, "rank": 12}, {"document_id": "doc-::chunk-1", "d_text": "From 1891 onwards when French ships sailed into the Russian naval base at Fromstadt to be greeted by a chorus of “La Marseillaise”, there had been a special bond between Russia and France. The Russian impresario, Diaghilev, retreated to Paris when he fell out with Tsar Alexander III. There was already a relationship between Russian and French music with Debussy visiting Russia to teach in 1881 and Ravel taking inspiration from Rimsky-Korsakov.\nStravinsky headed out to Paris and, with Diaghilev’s Ballet Russes, wowed the French with “The Firebird” in 1910. After the success of “Petrushka”, one night in 1910 Igor dreamed of a young girl dancing herself to death (as you do). 
He began to plan The Rite of Spring, plotting out a sequence of historically accurate springtime rituals from folklore and song. He piled these pieces up, short sections one after the other, with much rhythmic angularity. In Alex Ross's brilliant book "The Rest Is Noise", he compares it with Bo Diddley's jungle beat. When Charlie "Bird" Parker hit Paris in 1949, he worked the first few notes of the Rite into his solo on "Salt Peanuts". A few years later, Bird spotted Stravinsky in Birdland in New York and worked the "Firebird" riff into his song "Koko". Unwittingly or otherwise, Stravinsky had hit a connection with black American jazz musicians.

It seems that the legendary chaos around the ballet premiere of the "Rite" has grown in notoriety. It has attained a reputation as a near riot, similar to the one when the Jesus and Mary Chain played the North London Polytechnic. There was a ruckus and people were ejected, but the band played on. Some critics were shocked, others thrilled.

It didn't stop Stravinsky though, and he cracked on with ripping up the rule book.

If anyone fancies a fun dramatisation of the era, a recent BBC4 documentary is worth a look.

So to the gig.

The orchestra were a mixture of experienced and youthful musicians. The violinist who sat opposite us was probably still looking forward to being able to vote.

The first half of the concert was a performance of "Petrushka".

Three emblematic ballets, a triptych 'From Russia with Love' by Igor Stravinsky, come to the Stavros Niarchos Hall of the Greek National Opera in Athens.

Stravinsky became known worldwide through a series of ballets for which he composed the music.
They were the fruits of his creative cooperation with the impresario Sergei Diaghilev, who had created the 'Ballets Russes' in Paris in collaboration with some of the most talented Russian artists who lived and worked far from their homeland.

It was for this ensemble that the triptych of ballets 'From Russia With Love' was written: 'Le Sacre du printemps' [The Rite of Spring, 1913], 'Le Chant du Rossignol' [The Nightingale's Song, 1914 / 1917 / 1920] and 'Les noces' [The Wedding, 1923].

In this Greek production, choreography is provided by three renowned dancers and choreographers: Daphnis Kokkinos of the Tanztheater Wuppertal Pina Bausch, the distinguished German choreographer Marco Goecke of Hannover Ballet, and Konstantinos Rigos, ballet director of the Greek National Opera.

ABOUT THE VENUE

The Stavros Niarchos Hall is the main auditorium of the opera wing found inside the SNFCC complex. Designed by the acclaimed Renzo Piano, it comprises 1,400 seats with outstanding acoustics, designed to stage traditional operas and ballets. It became the new home of the Greek National Opera in 2017.

Find more events in Athens.

1919

Premiere: 25 June 1910; Paris

Instrumentation: 2 flutes (2nd doubling piccolo), 2 oboes (2nd doubling English horn), 2 clarinets, 2 bassoons, 4 horns, 2 trumpets, 3 trombones, tuba, timpani, percussion (bass drum, cymbals, tambourine, triangle, xylophone), harp, celeste, piano, strings

Approximate Duration: 22 minutes

With his ballet The Firebird, the 28-year-old Igor Stravinsky found immediate and lasting fame. ("I was once addressed by a man in an American railway dining car, and quite seriously, as 'Mr. Fireberg,'" a much older Stravinsky related.) Composed between November 1909 and May 1910, the ballet was first performed at the Paris Opéra on 25 June 1910. Gabriel Pierné conducted. The next day, the composer was a celebrity.
How did this "overnight" popularity come about?

In 1906, the Russian impresario Sergei Diaghilev had taken a major exhibition of Russian art to the Petit Palais in Paris. The following year, he presented five concerts of Russian music in the city, and in 1908 mounted a production of Mussorgsky's Boris Godunov, starring Feodor Chaliapin, at the Opéra. This led to an invitation to return the following year with ballet as well as opera, and thus to the launching of his famous Ballets Russes. The company's first night, 19 May 1909, was a sensation.

For the 1910 season, Diaghilev wanted to present a ballet based on the Russian legend of the Firebird. Unable to convince various composers – including Nikolai Tcherepnin, Anatole Liadov, Alexander Glazunov, and Nikolai Sokolov – to provide a score, the impresario finally turned to the wet-behind-the-ears Stravinsky. Diaghilev had first heard Stravinsky's music two years earlier at a concert in St. Petersburg, immediately asking the young composer to help orchestrate music for the 1909 Parisian ballet season. Thus, Stravinsky was in the right place at the right time.

"…knocks you out with the sheer beauty, power and modernity of its dancing." – The Chicago Sun-Times

March 19, 2013, Providence Performing Arts Center

Celebrate the 100th anniversary of one of the most magnificent dance and music masterpieces ever: "The Rite of Spring", performed by the Joffrey Ballet.

Sparking a riot that gave birth to modern music and dance, the original Stravinsky and Nijinsky production premiered in Paris in 1913 and forever changed the way audiences hear and see.

The Joffrey Ballet has been hailed as "America's Ballet Company of Firsts". Forty-six of their superb dancers will perform "The Rite of Spring", paired with thrilling contemporary works by two of the best choreographic minds of the 21st century.
Don't miss the first dance company to perform at the White House at Jacqueline Kennedy's invitation, the first American company to visit Russia, the first to commission a rock 'n' roll ballet, and the only dance company to appear on the cover of Time Magazine.

Even though Balanchine's choreography was produced by Diaghilev in Paris in June 1928, Stravinsky's composition premiered six weeks earlier at the Library of Congress with choreography by Adolph Bolm. The events surrounding this premiere are telling.

In 1925, Elizabeth Sprague Coolidge created a festival to promote modern music and cultural awareness in America. An idealist, she maintained that public exposure to good new music was essential to a powerful society, and her aim was to build an auditorium for the Library of Congress to support these goals.

Mrs. Coolidge donated $60,000 to the Library to build the theater and develop the project. She established an annual chamber music festival and commissioned her favorite international composers to create original works. For the third season, in 1928, Coolidge determined that she wanted to plan a program with dance as a central focus, and she requested that Carl Engel, chief of the Music Division, contact Adolph Bolm to be choreographer.

Two composers, de Falla and Ottorino Respighi, declined the commission, and in 1927 Coolidge contacted Stravinsky to compose a chamber ballet that would be performed at the festival. She set many parameters, requiring that the chamber ballet be 30 minutes in length and employ no more than six dancers and a modest orchestration, so that the work could be accommodated on the Coolidge Auditorium's small stage. For this commission, Stravinsky created "Apollon Musagète" with choreography by Bolm.
Nicholas Remisoff designed the costumes and scenery, and the dancers included Bolm as Apollo, and Ruth Page, Elise Reiman and Berenice Holmes as the Muses. This ballet was announced as "the first time in history a major ballet work had its world premiere in America."

Nonetheless, Bolm's "Apollo" was beset with difficulties, ranging from Stravinsky's cavalier and egocentric treatment of the project to the limitations of the stage size and the small audience capacity. In fact, Stravinsky barely refers to Bolm's version of "Apollo" in any of his memoirs.

Charles M. Joseph, author of "Stravinsky Inside Out," states that, according to some of Stravinsky's unpublished correspondence, he may have doubted Bolm's choreographic capabilities and convictions in the late 1920s.

The Rite of Spring – The 100-Year Shock Wave

Tonight at 7, it's "The Rite of Spring – The 100-Year Shock Wave." It was 100 years ago today that the ballet "The Rite of Spring" had its premiere in Paris. This special features first-hand recollections from the composer Stravinsky and the conductor Pierre Monteux, and comments by major artists of today. Lengthy excerpts are conducted by Valery Gergiev.

STRAVINSKY: The Rite of Spring (Mariinsky Theatre Orchestra / Valery Gergiev)

Stravinsky's The Rite of Spring is still provocative and disturbing a centenary (May 29) after its premiere caused one of the most sensational scandals in artistic history. In this anniversary documentary, the ballet's 100-year shock wave is traced from its initial explosion on May 29th 1913 right up to the present time. First-hand recollections of the famous first night from Dame Marie Rambert, who was one of the dancers, and Igor Stravinsky, who was in the audience, lead through to comments on the work's enduring power from performers of today – dancer Dame Monica Mason, dancer Deborah Bull, choreographer Jean-Christophe Maillot and conductors Valery Gergiev, the late Sir Colin Davis and Bernard Keeffe – as well as dance reconstructionist Millicent Hodson and musicologist and historian Geoffrey Norris.

Stravinsky was a young, virtually unknown composer when Serge Diaghilev recruited him to create works for the Ballets Russes. The Rite of Spring, with revolutionary choreography by Vaslav Nijinsky, was the third such project, after the acclaimed The Firebird in 1910 and Petrushka in 1911. His score contained many features that were novel for its time, including experiments in tonality, rhythm, metre, stress and dissonance. Analysts have noted a significant grounding in Russian folk music, a relationship which Stravinsky tended to deny. The music has influenced many of the 20th century's leading composers and is one of the most recorded works in the classical repertoire today.

STRAVINSKY The Firebird (original ballet)
STRAVINSKY Petrushka (1947 version)
STRAVINSKY The Rite of Spring

Sir Simon Rattle conductor
London Symphony Orchestra

Tickets: for more information and tickets please visit the Philharmonie de Paris website.

Sir Simon Rattle continues the 2017/18 season opening celebrations by taking the Orchestra on tour to the Philharmonie in Paris, where the LSO is a frequent visitor.

Stravinsky sent shockwaves through classical music in the 20th century.
His first three ballets – The Firebird, Petrushka and The Rite of Spring, all composed between 1909 and 1913 – brought a new and frenzied sense of rhythm, so distressing to audiences that it caused uproar; The Rite of Spring even caused a riot.

And it's not hard to see why. Is there any moment in music more demonic than the opening of The Firebird, a terrifying rumble of strings that would make Jaws tremble? There are few pieces more unsettling than The Rite of Spring with its carnal, tribal rhythms, or Petrushka with its impish Punch and Judy puppets. Sir Simon Rattle brings these three creations to life in this dramatic programme.

In terms of his influence and the frequency with which his works are performed, Stravinsky stands preeminent among twentieth-century composers. His oeuvre encompasses nearly every significant trend of his lifetime, and regardless of genre, the Russian composer frequently returned to the musical idiom of his native land. In Petrushka, he drew upon his roots to dazzle Parisian audiences of the Ballets Russes with a score that Debussy praised as "sonorous magic."

Containing all of the music from the original 1911 version, this edition presents Stravinsky's popular ballet in a reduction for solo piano. Suitable for rehearsal or performance, this version is ideal for listeners wishing to examine the inner workings of one of the twentieth century's greatest masterpieces. Translated stage directions provide helpful guidance and narrate the exotic story of three magical puppets. It is an unabridged republication of the edition originally published by G. Schirmer, n.d.

The Joffrey Ballet Resurrects The Rite of Spring

"[The Rite of Spring] is an astonishing ballet, no less so today than in 1913.
Nijinsky's genius as a choreographer bursts forth here in the originality of his vision, the depth of his musicality and the grand sense of inevitability that reigns over the whole. Inspired by Roerich, a specialist in the primitive iconography of pagan Russia, Nijinsky created a movement style that would embody both the logic and the frenzy of Stravinsky's music."

– Newsweek, November 16, 1987

When Sergei Diaghilev and his Ballets Russes premiered The Rite of Spring, or Le Sacre du printemps (Sacre), at Paris's Théâtre des Champs-Élysées on May 29, 1913, a riot broke out. The score by Igor Stravinsky was a panoply of shifting syncopations and dissonant harmonies, while the choreography by the famed danseur Vaslav Nijinsky curled the dancers' bodies inward as they jerkily stamped and jumped across the stage. The archaeologist and painter Nicholas Roerich contributed the set design and the costumes, which were described in a 2002 Ballet Magazine article as "heavy smocks, handpainted with [primitive] symbols of circles and squares." The pre-Modernist audience, accustomed to the demure grace of classical ballet, was further outraged by the graphic nature of the ballet's story – the pagan sacrifice of a virgin by her village to usher in spring. Nijinsky's ballet was performed only seven more times – in Paris and London – before disappearing from the classical repertoire, for reasons including Nijinsky's mental breakdown and the deterioration of his relationship with Diaghilev. Many new iterations of the ballet were choreographed – including versions by Pina Bausch and Martha Graham – but only the score remained intact from the initial performances.
In FY 1987, the Joffrey Ballet received a National Endowment for the Arts (NEA) grant in Dance of $243,400 "to support three self-produced seasons in New York City and Los Angeles, and the reconstruction of Vaslav Nijinsky's Le Sacre du Printemps."

This release is the final installment in a trio of discs that celebrate the music of Stravinsky in Serge Diaghilev's Ballets Russes, performed by the BBC National Orchestra of Wales under Thierry Fischer.

This live orchestral recording brings together two contrasting works, both composed for the famed ballet company: Stravinsky's infamous The Rite of Spring – full of dark, rhythmic menace and dissonance – and Poulenc's Les biches (most aptly translated as 'the does'), which captures the heady atmosphere and risqué thrills of young women in the 1920s' party scene.

Poulenc: Les Biches
Stravinsky: The Rite of Spring

One of the biggest scandals in the history of music stirred Paris as the Ballets Russes created the new ballet by Igor Stravinsky and Vaslav Nijinsky, 'The Rite of Spring', conducted by Pierre Monteux. To celebrate this anniversary, François-Xavier Roth, with special permission from Boosey & Hawkes and with the assistance of the musicologist Louis Cyr, has endeavored to restore the 'Rite' as it was given on the evening of May 29, 1913.
It s certainly hard to imagine the first performance, under Pierre Monteux, being as well played as this, as Les Siècles follow up their revelatory account of the complete Firebird score. The sound of their French-made turn-of-the-century (mostly 1880s to 1920s) instruments throws fresh light on these modern masterpieces: a tuba, two-thirds the size of modern ones, by Adolphe Sax, inventor of the saxophone; and wonderful Buffet Crampon clarinets and bassoons. The Rite s famous bassoon solo is played without the octave key invented to make this very music less difficult to play. One can t listen with 1913 ears, of course, but there s a palpable sense of freshness with more astringent gut strings that convey the abrasiveness of Stravinsky s writing better than their modern metal equivalents. It s not clear from the notes whether the piano soloist in Petrushka is using an Erard or a Pleyel, but the sound is luminous and transparent, and Roth effortlessly evokes the eeriness of the puppet s ghostly apparition at the close.' --Hugh Canning, The Sunday Times 15 June 2014\nORCHESTRAL CHOICE ***** Performance/ ***** Recording 'a chance to hear the Rite close to how Stravinsky originally conceived it for its 1913 premiere in Paris … an exciting visceral performance of The Rite of Spring, culminating in a ferocious final dance.'", "score": 31.564186960437077, "rank": 23}, {"document_id": "doc-::chunk-1", "d_text": "The extreme difficulty of the music demanded an augmented orchestra and the radical choreography required over a hundred rehearsals, which could not be accommodated as scheduled in the 1912 season. Instead, Ravel's Daphnis et Chloe was presented, fulfilling audience expectations with a mythic story, orthodox dancing and one of the most ravishing scores ever written. Anticipations for the delayed premiere of The Rite continued to rise. 
At last, the fateful night of May 29, 1913 arrived.

The first two minutes apparently went well, with the audience enthralled by the haunting introduction. But then the astringent brutality of the first scene broke through as, in Stravinsky's words, "the curtain rose on a group of knock-kneed and long-braided Lolitas jumping up and down." The subject itself was scandalous: instead of the fanciful amorous stuff of fluffy ballet dreams, ugly pagans sacrifice a maiden to propitiate the gods of spring. The choreography, costumes and sets boldly dispensed with grace and beauty to emphasize awkward, primitive starkness. At first there were a few boos and catcalls, but then a storm broke as the outraged audience reacted by yelling and fighting. Diaghilev tried to quell the disturbance by switching the house lights on and off, while Nijinsky tried to sustain the performance as best he could by shouting out numbers and cues to the dancers, who couldn't hear the music, loud as it was, over the din. Stravinsky was furious and stormed out of the theater before police arrived to end the show.

But as happens so often in art, the scandals of the past generate the foundations of the future. Indeed, the very next year Monteux introduced the score in concert, and its appeal began to take hold. By 1929, the staid New York Times proclaimed that The Rite is to the twentieth century what Beethoven's Ninth is to the nineteenth. The arrival of The Rite in the pantheon of pop culture was clinched in 1940, when it was used in Walt Disney's Fantasia.
At first Stravinsky resisted, but he ultimately accepted $5,000 for the film rights after being warned that, if he didn't consent, Disney would simply appropriate the score anyway due to copyright enforcement problems.

Urging Prokofiev to write "music that was national in character", Diaghilev then commissioned the ballet Chout (The Fool; the original Russian-language full title was Сказка про шута, семерых шутов перешутившего (Skazka pro shuta, semerykh shutov pereshutivshavo), meaning "The Tale of the Buffoon who Outwits Seven Other Buffoons"). Under Diaghilev's guidance, Prokofiev chose his subject from a collection of folktales by the ethnographer Alexander Afanasyev; the story, concerning a buffoon and a series of confidence tricks, had been previously suggested to Diaghilev by Igor Stravinsky as a possible subject for a ballet, and Diaghilev and his choreographer Léonide Massine helped Prokofiev to shape it into a ballet scenario.
Prokofiev's inexperience with ballet led him to revise the work extensively in the 1920s, following Diaghilev's detailed critique, prior to its first production. The ballet's premiere in Paris on 17 May 1921 was a huge success and was greeted with great admiration by an audience that included Jean Cocteau, Igor Stravinsky and Maurice Ravel. Stravinsky called the ballet \"the single piece of modern music he could listen to with pleasure,\" while Ravel called it \"a work of genius.\"\nFirst World War and Revolution\nDuring World War I, Prokofiev returned to the Conservatory and studied organ to avoid conscription. He composed The Gambler based on Fyodor Dostoyevsky's novel of the same name, but rehearsals were plagued by problems, and the scheduled 1917 première had to be cancelled because of the February Revolution. In the summer of that year, Prokofiev composed his first symphony, the Classical. It was his own name for the symphony, which was written in the style that, according to Prokofiev, Joseph Haydn would have used if he had been alive at the time. It is more or less Classical in style but incorporates more modern musical elements (see Neoclassicism). The symphony was also an exact contemporary of Prokofiev's Violin Concerto No. 1 in D major, Op. 19, which was scheduled to premiere in November 1917.", "score": 30.61443482850896, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "“I have been very fortunate and professionally blessed to have five wonderful woodwind colleagues at Arizona State University: Elizabeth Buck, flute; Christopher Creviston, saxophone; Joshua Gardner, clarinet; Martin Schuring, oboe; and, Robert Spring, clarinet. Together, we presented numerous woodwind recitals featuring all six of us in various combinations of duos, trios, quartets, and quintets. However, until now, we were never able to share the stage simultaneously. To remedy this, I arranged five movements of Stravinsky’s Suite No. 1 and Suite No. 
2 for Small Orchestra for the instrumentation of the ASU woodwind faculty.\nThe source material is derived from Igor Stravinsky’s Three Easy Pieces (1915) and Five Easy Pieces (1917). These compositions were written as four-hand piano educational pieces for Stravinsky’s children. Stravinsky would play the more difficult accompaniments while his children played the simple melodies.\nStravinsky composed arrangements of several movements using these eight short pieces, but the best-known arrangements are Suite No. 1 (1921) and Suite No. 2 (1925) for Small Orchestra. The five dance movements in Suite for Woodwind Sextet are “true Stravinsky” in their wit, charm, and humor.”
This re-awakening of interest was stimulated by the collaboration between Robert Joffrey (founder of the Joffrey Ballet in Chicago), the choreographer and dance historian Millicent Hodson, and the British art historian Kenneth Archer, and their research into the original ‘Rite’ which led them to stage their ‘reconstituted’ version of \"Rite\" in its ‘original’ version.\nIt is important to note that, given the fact that Nijinsky did not annotate his choreography, the research work of Hodson and Archer was based on other sources: sketches, photographs, memories, the annotated score of Stravinsky himself, as well as another annotated by Marie Rambert, who was the assistant to Nijinsky, on the creation of ‘Sacre’ which showed some movement markings, and a letter of my aunt, Bronislava Nijinska, relating to the final dance of ‘the Chosen One’, etc. If one looks back at the critics’ assessment of this ‘re-staging’ it was not without considerable amazement that the public rediscovered this mythic work.\nVaslav Nijinsky’s successors, at the time, my sister Kyra (1914-1998) and myself, never contested the scientific value of the work of Hodson and Archer – in fact, on the contrary, we were overjoyed that historians and researchers were breathing new life into the work of our Father.\nNow for the past 25 years, The Rite of Spring by Vaslav Nijinsky, or ‘after Nijinsky’ as reconstituted by Millicent Hodson and Kenneth Archer, has been seen throughout the world, and has entered into the repertory of more than 10 top dance companies, including the Mariinsky Theatre of St Petersburg, which will bring this production, under Maestro Gergiev, to the Theatre des Champs Elysees.
This was Rimsky-Korsakov's 1908 version (with additional cuts and re-arrangement of the scenes). The performances were a sensation, though the costs of producing grand opera were crippling.\nParis Debut: 1909\nIn 1909, Diaghilev presented his first Paris \"Saison Russe\" devoted exclusively to ballet (although the company did not use the name \"Ballets Russes\" until the following year). Most of this original company were resident performers at the Imperial Ballet of Saint Petersburg, hired by Diaghilev to perform in Paris during the Imperial Ballet's summer holidays. The first season's repertory featured a variety of works chiefly choreographed by Michel Fokine, including Le Pavillon d'Armide (music by Tcherepnin), the Polovtsian Dances from Prince Igor (music by Borodin), Les Sylphides (music by Chopin), and Cléopâtre (music by Arensky). The season also included Le Festin, a pastiche set by several choreographers (including Fokine) to music by several Russian composers.\nThe Ballets Russes was noted for the high standard of its dancers, most of whom had been classically trained at the great Imperial schools in Moscow and St. Petersburg. Their high technical standards contributed a great deal to the company's success in Paris, where dance technique had declined markedly since the 1830s.\nPrincipal female dancers included: Anna Pavlova, Tamara Karsavina, Olga Spessivtseva, Mathilde Kschessinska, Ida Rubinstein, Bronislava Nijinska, Lydia Lopokova, Diana Gould and Alicia Markova, among others; many earned international renown with the company.\nThe Ballets Russes was even more remarkable for raising the status of the male dancer, largely ignored by choreographers and ballet audiences since the early 19th century.", "score": 29.74623376970703, "rank": 29}, {"document_id": "doc-::chunk-7", "d_text": "The Firebird was a tremendous success. 
Stravinsky relates: “The first-night audience at the Paris Opéra glittered indeed … I sat in Diaghilev’s box, where, at intermissions, a stream of celebrities, artists, dowagers, aged Egerias of the Ballet, writers, balletomanes, appeared. I met Proust, Giraudoux, Paul Morand, St. John Perse, Paul Claudel, Sarah Bernhardt… I was called to the stage to bow at the conclusion, and was recalled several times. I was still onstage when the final curtain had come down, and I saw coming toward me Diaghilev and a dark man with a double forehead whom he introduced as Claude Debussy. The composer spoke kindly about the music, ending his words with an invitation to dine with him.”\nOver the years, Stravinsky fashioned three suites from the ballet: in 1911, 1919, and 1945. The latter two reduce the instrumentation of the original ballet, which Stravinsky had called “wastefully large.” A master of orchestral writing, Stravinsky trimmed the number of players without diminishing the music’s bold audacity. “For me,” he wrote, “the most striking effect in The Firebird was the natural-harmonic string glissando near the beginning, which the bass chord touches off like a Catherine wheel. I was delighted to have discovered this, and I remember my excitement in demonstrating it to [my teacher Rimsky-Korsakov’s] violinist and cellist sons. I remember, too, Richard Strauss’s astonishment when he heard it two years later in Berlin.”\nIn all its various versions, Stravinsky’s score for The Firebird blends rich harmonies, the vigor of Russian folk music, and the orchestral magic he learned from Rimsky-Korsakov – conjuring music of tremendous power and beauty.\nSynopsis of the complete ballet\nThe ballet centers on the journey of Prince Ivan Tsarevich, the hero of many fables in Russian folklore. While hunting in the forest, he strays into the magic garden of King Kastchei, a green-taloned ogre and sorcerer.
Kastchei’s immortality is preserved by keeping his soul in a magic egg hidden in a casket. In the garden, Ivan spies the Firebird, a magnificent creature covered in scarlet plumage.", "score": 29.707918012474547, "rank": 30}, {"document_id": "doc-::chunk-2", "d_text": "67 (1900)\nSymphony Orchestra ∙ Alexander Glazunov\nRecorded 10, 13 & 14 June 1929 in the Portman Rooms, London\nMatrix nos.: WAX 5009/10, 5014/6 & 5022/4\nFirst issued on Columbia LX 16/8 & 29/30\nIt finds Stravinsky in striking form, leading musicians who are clearly excited to play for him\nThe juxtaposition of Igor Stravinsky and Alexander Glazunov on one CD is a little puzzling. Neither composer had the warmest admiration for the other. The producer’s rationale is that all three ballet scores presented here are by Russian composers and were premiered in the first two decades of the last century. Stravinsky, though, referred to Glazunov as “Carl Philip Emmanuel Rimsky-Korsakov,” a jibe which perhaps indicates the relative significance of Glazunov compared with his mentor, but which I believe substantially overlooks the aesthetic differences between those two composers. As for the conservative Glazunov, he said Petrushka was “not music,” although he praised its orchestration. I suspect Pristine Audio planned to release The Seasons, but was puzzled over a coupling, as Glazunov made no further commercial recordings. So we have here the shotgun marriage of Stravinsky to Glazunov. Nevertheless, this CD may be considered a bargain. A copy of Dutton’s reissue of The Seasons currently costs $215 on Amazon Marketplace.\nThe present suite from Petrushka, recorded in London with an unnamed orchestra in 1928, was Stravinsky’s first commercially released recording as a conductor. It finds him in striking form, leading musicians who are clearly excited to play for him. 
It also is a welcome opportunity to hear the composer conduct music from his original 1911 score, as his stereo version of the complete ballet employs his 1947 revision, which Nicolas Slonimsky referred to as a “self-mutilation.” Whether Stravinsky prepared the revision out of conviction or the desire to finally have the ballet copyrighted, we may never know. It seems a gray area to me. The present suite has numerous excisions from the complete version, most notably Petrushka’s death and reappearance as a ghost.", "score": 28.493577270314066, "rank": 31}, {"document_id": "doc-::chunk-2", "d_text": "The result was a farsa mimica in two scenes, entitled El corregidor y la molinera, which was produced on 7th April 1917 at the Teatro Eslava in Madrid under Joaquín Turina. After Dyagilev had suggested a few modifications Falla expanded it into a full-scale ballet, re-scored it for full orchestra and restored Alarcón's own title. In this form it was staged for the first time, as The Three-Cornered Hat, on 22nd July 1919 in the aptly-named Alhambra Theatre in London, by the Ballets Russes under Ernest Ansermet, with choreography by Léonide Massine (who also played the miller) and sets and costumes by Picasso. The ballet was produced at the Opéra in Paris on 23rd January 1920, as Le Tricorne. The formal, old-fashioned Dance of the Corregidor, played here in a transcription for harp by David Watkins, comes from the second scene.\nEl amor brujo ('Love the Magician') was composed between November 1914 and April 1915, at the instigation of the gypsy singer and dancer Pastora Imperio, who asked Falla and Sierra for 'a dance and a song'. The original version, scored for mezzo-soprano and small instrumental ensemble, was given for the first time on 15th April 1915 by Imperio and her company under Moreno Ballesteros, at the Teatro Lara in Madrid, but it was not a success. 
The Press actually criticised the score for its lack of Spanish character; an astonishing assertion when one considers that, although no folk tunes as such are used in it, the music was radically influenced by the Andalusian soleares, seguiriyas, polos and martinetes which Rosario la Mejorana, Pastora's mother, had sung to Falla. Other performances followed in Barcelona, but in 1916 Falla re-scored it for a normal theatre orchestra (but with an important piano part), in which form it received successful concert performances (with and without the songs) in Madrid. This revised version was staged for the first time on 22nd May 1925, when the ballet was produced at the Trianon-Lyrique Theatre in Paris, with the composer conducting Spectacles Bériza.", "score": 28.431624429105373, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "With a poker table backdrop for dancers dressed as playing cards, the whimsical Jeu de Cartes pairs fleet-footed choreography with Stravinsky's boisterous and wildly rhythmic score.\nStravinsky composed Jeu de Cartes (Card Game: A Ballet in Three Deals) for the first Stravinsky Festival mounted by Balanchine at the Metropolitan Opera in 1937. The original version featured dancers representing the four suits in a deck of cards, with the joker as the central character. Years later, Balanchine suggested that Peter Martins choreograph an abstract version of the ballet to the same score. For the 1992 Diamond Project, Martins re-envisioned the ballet and in 2002 commissioned Ian Falconer to create a fanciful new set and costumes.", "score": 28.277050258132004, "rank": 33}, {"document_id": "doc-::chunk-1", "d_text": "Michel Fokine's Les Sylphides will be given its Revival Premiere on Friday, November 1 with Hee Seo, Isabella Boylston and Sarah Lane making their debuts in leading roles.
Cory Stearns and Polina Semionova will debut in the leading roles, alongside Veronika Part and Melanie Hamrick, at the matinee on Saturday, November 2. Set to music by Frédéric Chopin, Les Sylphides, a one act plotless work, was given its Company Premiere at Ballet Theatre's inaugural performance on January 11, 1940 at the Center Theatre in New York City. The ballet received its first performance at the Maryinsky Theatre, St. Petersburg on March 8, 1908. Les Sylphides, which features scenery by Alexandre Benois and lighting by David K.H. Elliott, was last performed by ABT in 2005.", "score": 27.803895842905035, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "In addition to his dancing, Mr. Wheeldon has choreographed works for Boston Ballet, Carolina Ballet, The Colorado Ballet, Hamburg Ballet, New York City Ballet, The Royal Ballet, The Royal Ballet School, San Francisco Ballet and the School of American Ballet, the official school of the New York City Ballet. Mr. Wheeldon was one of six choreographers who presented new works as part of The Diamond Project throughout New York City Ballet's 1997 spring season. His ballet Slavonic Dances, set to music by Dvorak, had its world premiere in June, 1997. In 1999 he choreographed Scenes de ballet (Stravinsky) for New York City Ballet's Stravinsky Festival and School of American Ballet's Workshop Performances. Mr. Wheeldon's other choreographic credits include Firebird (Stravinsky) for Boston Ballet in 1999, Sea Pictures (Elgar) for San Francisco Ballet in 2000, Mercurial Manoeuvres (Shostakovich) for New York City Ballet's spring 2000 Diamond Project as well as the ballet sequence for the Columbia Pictures feature film Center Stage directed by Nicholas Hytner.\nMr. Wheeldon retired from dancing in 2000 to concentrate on his choreographic work. 
Chosen to be New York City Ballet's first Artist in Residence, he was named Resident Choreographer in 2001 and went on to create Polyphonia (Ligeti) that premiered January 2001 and Variations Sérieuses (Mendelssohn) in May 2001. For the Hamburg Ballet he created VIII as part of their Benjamin Britten evening in July 2001. Mr. Wheeldon made his Broadway choreographic debut in the musical The Sweet Smell of Success that premiered March 14, 2002. He completed a choreographic cycle to Ligeti music by creating Continuum for San Francisco Ballet and Morphoses for New York City Ballet. Mr. Wheeldon's new works include Tryst to the music of Scottish composer James MacMillan for the Royal Ballet, Carousel (Rodgers), Liturgy (Pärt) and Carnival of the Animals (Saint-Saëns) with original narration by John Lithgow. In 2004 Mr. Wheeldon will create and mount Swan Lake (Tchaikovsky) for Pennsylvania Ballet and premiere a new work with New York City Ballet to a commissioned score by Scottish composer James MacMillan.", "score": 27.643336550605966, "rank": 35}, {"document_id": "doc-::chunk-2", "d_text": "Petersburg, having travelled widely through Russia for a year discovering many previously unknown masterpieces of Russian portrait art. In the following year he took a major exhibition of Russian art to the Petit Palais in Paris. It was the beginning of a long involvement with France. In 1907 he presented five concerts of Russian music in Paris, and in 1908 mounted a production of Mussorgsky's Boris Godunov, starring Feodor Chaliapin, at the Paris Opéra.\nThis led to an invitation to return the following year with ballet as well as opera, and thus to the launching of his famous Ballets Russes.
The company included the best young Russian dancers, among them Anna Pavlova, Adolph Bolm, Vaslav Nijinsky, Tamara Karsavina and Vera Karalli, and their first night on 19 May 1909 was a sensation.\nDuring these years Diaghilev's stagings included several compositions by the late Nikolai Rimsky-Korsakov, such as the operas The Maid of Pskov, May Night, and The Golden Cockerel. His balletic adaptation of the orchestral suite Sheherazade, staged in 1910, drew the ire of the composer's widow, Nadezhda Rimskaya-Korsakova, who protested in open letters to Diaghilev published in the periodical Rech. Diaghilev commissioned ballet music from composers such as Nikolai Tcherepnin (Narcisse et Echo, 1911), Claude Debussy (Jeux, 1913), Maurice Ravel (Daphnis et Chloé, 1912), Erik Satie (Parade, 1917), Manuel de Falla (El Sombrero de Tres Picos, 1917), Richard Strauss (Josephslegende, 1914), Sergei Prokofiev (Ala and Lolli, 1915, rejected by Diaghilev and turned into the Scythian Suite; Chout, 1915 revised 1920; Le pas d'acier, 1926; and The Prodigal Son, 1929); Ottorino Respighi (La Boutique fantasque, 1919); Francis Poulenc (Les biches, 1923) and others. His choreographer Michel Fokine often adapted the music for ballet.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-3", "d_text": "In the final section, as the score goes back to opening motifs, the dancers resume the same opening image of four male dancers facing the back of the stage.\nStravinsky finished the score in the spring of 1957 and Agon premiered on December 1, 1957, as part of a triple bill featuring Apollo and Orpheus. It was an easy winner with the audience, since it depicted classical ballet in a different and novel way, showing conflict and resolution between various forms of dance, movement and shape.\nSorry no YouTube videos! 
But there are certain DVDs and VHS* tapes (if you are able to view these) featuring glimpses of Agon.\n- Balanchine (1984) [link]\n- The Balanchine Celebration, Part Two* [link]\n- Bringing Balanchine Back [link]\n- Dancing for Mr. B: Six Balanchine Ballerinas [link]\n- Peter Martins: Dancer* [link]\nAgon had its first concert performance in June 1957 in Los Angeles. It is still often performed on its own and much valued as a piece which combines both serial and non-serial elements. At an average length of 25 min, it can be easily uploaded to your favourite mp3 player. It can be downloaded from iTunes [link] or streamed via Spotify [link].\nChoreography: George Balanchine\nMusic: Igor Stravinsky\nOriginal Cast: Todd Bolender, Barbara Milberg, Barbara Walczak, Roy Tobias, Jonathan Watts, Melissa Hayden, Diana Adams and Arthur Mitchell.\nPremiere: December 1, 1957, NYCB. City Center of Music and Drama, New York.\nSources and Further Information\n- Agon in Context by Richard Jones. Ballet.co Magazine, April 2004. [link]\n- Wikipedia Entry for Agon (ballet) [link]\n- NYCB Agon Repertory Notes [link]\n- 50 Years Ago, Modernism Was Given a Name: Agon by Alastair Macaulay. November 2007, NY Times [link]\n- The Bransles of Stravinsky’s Agon: A Transition to Serial Composition by Bonnie S. Jacobi. [link]", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Set design by Léon Bakst\n|Choreographed by||Michel Fokine|\n|Composed by||Frederic Chopin|\n|Date of premiere||2 June 1909|\n|Place of premiere||Théâtre du Châtelet\n|Original ballet company||Diaghilev's Ballets Russes|\n|Designs by||Alexandre Benois|\nLes Sylphides (English: The Sylphs) is a ballet choreographed by Michel Fokine to the music of Frederic Chopin. The music was orchestrated by Stravinsky among others. The scenery and costumes were designed by Alexandre Benois.\nThe ballet was first performed by the Ballets Russes in Paris at the Théâtre du Châtelet on 2 June 1909. 
It starred Nijinsky, Tamara Karsavina, Anna Pavlova, and Maria Baldina. The ballet is non-narrative. It does not tell a story. It is a series of dances meant to evoke the atmosphere and ambiance of a romantic ballet.\nLes Sylphides was developed from a Fokine ballet called Chopiniana. This ballet was performed in St Petersburg on 21 March 1908. It was a series of imagined scenes from Chopin's life that included a Polish wedding and a ballroom polonaise. It was revised. All characters and any suggestion of a plot were dropped to create instead an evocation of the romantic ballet.\nThe revised ballet (still called Chopiniana) was performed in St Petersburg on 6 April 1908. This second version was costumed in the long white ballet tutu made famous by Marie Taglioni in La Sylphide. Chopiniana was renamed Les Sylphides when it was presented in Paris.\nLes Sylphides generally includes the following musical numbers:\n- Polonaise in A major, Op. 40, No. 1 (\"Military\", the Prelude in A major, Op. 28, No. 7 is sometimes substituted)\n- Nocturne in A flat major, Op. 32, No. 2\n- Waltz in G flat major, Op. 70, No. 1\n- Mazurka in D major, Op. 33, No. 2\n- Mazurka in C major, Op. 67, No.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "I watched the New York City Ballet doing George Balanchine’s *Stravinsky Violin Concerto* on March 12, 2021. It was a performance from September of 2018. The New York City Ballet Orchestra was conducted by Clotilde Otranto with Kurt Nikkanen playing the solo violin part. The solo dancers were Sterling Hyltin, Ask la Cour, Sara Mearns, and Taylor Stanley.\nI’ve been a fan of Balanchine for years but have seen relatively little of his work, probably because I’ve seen so little ballet in general. I look forward to the day when I can see works like this in person.\nMearns was the only dancer I knew. 
I had first seen her in *I Married an Angel* at City Center Encores, in the Merce Cunningham centennial celebration at BAM, and most recently in a performance of Molissa Fenley’s *State of Darkness* (Fenley’s version of *The Rite of Spring*). She’s an extraordinary dancer, full of strength and personality and eager to express herself and do something out of the ordinary. Her partner in the pas de deux was Taylor Stanley. I love that Balanchine gives the men something interesting to do, rather than just lifting the woman and jumping around.\nIs it rude to say that the music was the star of the performance? This is from Stravinsky’s “neo-classical” period. I wouldn’t say it’s less modern than what came before, but it feels like it’s honoring and looking back at the past in a way that his earlier work doesn’t do in such an overt way. It’s the harmonies, the structure, the overall flavor. The music has that thing I value so highly and hear so rarely: it has WIT.\nBalanchine was a huge admirer of Stravinsky and created dances to his music throughout his career. This piece was part of a Stravinsky festival in 1972, just a year after Stravinsky’s death. 
As often with Balanchine, there’s no story, it’s pure dance, but somehow it’s so expressive and meaningful.\nHere's a somewhat shaky performance (in terms of the video) from 1978, danced by (mostly) the original 1972 cast:", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "The Master's Settings Of Russian Favorites New York City Ballet New York State Theater\nRecent programs by the New York City Ballet featured works by George Balanchine to two of his favorite Russian composers.\n\"Stravinsky Violin Concerto\" was offered on both Tuesday night and at the June 11 matinee, featuring newcomers in some of the leading roles.\nOn June 11, with Lourdes Lopez as his partner, Nikolaj Hubbe made his debut in the second pas de deux; on Tuesday, his partner was Yvonne Borree, also making a debut in this duet. The two performances went well. But their taut performances made it especially interesting to see Ms. Borree and Mr. Hubbe together.\nHelene Alexopoulos made her debut in the first pas de deux on Tuesday, with Albert Evans as her partner. Each appeared to magnetize the other. The duet was danced more teasingly on June 11 by Wendy Whelan and Jock Soto. Guillermo Figueroa was the violinist both times.\nOn June 11, fine work by Kipling Houston, Ms. Alexopoulos, Melinda Roy, Ben Huys, Katrina Killian and Michael Byars proved effective in the lush choreography of \"Tchaikovsky Suite No. 3.\"\nIn the concluding \"Tema con Variazioni,\" Nichol Hlinka danced with a jewel-like brilliance. Damian Woetzel invested his first solo with dignity and his second with boldness.\nThe June 11 program also included \"Divertimento No. 15\"; on Tuesday the Stravinsky concerto shared the bill with \"Watermill.\" The conductors were Maurice Kaplow and Gordon Boelzner.
JACK ANDERSON", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-3", "d_text": "Instead, the suite ends with “The Masqueraders,” providing a conclusion that has real instrumental punch, rather than pathos. Stravinsky’s performance is filled with passion and considerable dramatic understanding. Even the no-nonsense approach of Pierre Boulez in his two recordings of the complete ballet sounds slack by comparison. “The Shrovetide Fair” offers a real feeling of greasepaint, as all its participants are sharply characterized. In “The Magic Trick,” the solo flute seems to beckon to the audience at the fair. The “Russian Dance” blends folk music with wonderfully edgy orchestral effects. “Petrushka’s Room” presents an astute psychological portrait of the tortured, misshapen doll, with the solo piano making a superb contribution. The dances in the evening at the fair are loaded with color. We also are given the final five movements of Stravinsky’s Pulcinella Suite, with the composer leading Paris’s Straram Orchestra.
The performance mixes exuberance with a perhaps unexpected degree of warmth for the Neoclassical style of the composer.\nApparently the most famous anecdote about Glazunov as a conductor is that he supposedly was drunk for the disastrous premiere of Rachmaninoff’s First Symphony. It’s possible Glazunov viewed the symphony as being dangerously progressive, and couldn’t face his task without being fortified with alcohol. I would note that Glazunov, after leaving Russia, embarked on a conducting tour of the United States, managed by none other than Sol Hurok. The performance we have here of his own music is masterful. José Serebrier has written that the failure of Glazunov’s symphonies to achieve wide popularity is due to conductors not taking enough interpretive license with expression. I disagree. What we hear on the composer’s own recording of The Seasons, with a thoroughly captivated unnamed London orchestra, is an approach to conducting that is at once refined and direct. There is plenty of beauty of tone color and phrasing, but never a gesture that seems extravagant. My favorite conductor of Glazunov today is not Serebrier but Alexander Anissimov, whose approach is entirely more classical. As for the popularity of Glazunov’s music, its patrician charm and lack of histrionics probably make it unsuited to the admiration of the broader public.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "The audience, packed into the newly-opened Théâtre des Champs-Élysées to the point of standing room only, had neither seen nor heard anything like it.\nAs the first few bars of the orchestral work The Rite of Spring – Le Sacre du Printemps – by the young, little-known Russian composer Igor Stravinsky sounded, there was a disturbance in the audience. 
It was, according to some of those present – who included Marcel Proust, Pablo Picasso, Gertrude Stein, Maurice Ravel and Claude Debussy – the sound of derisive laughter.\nBy the time the curtain rose to reveal ballet dancers stomping the stage, the protests had reached a crescendo. The orchestra and dancers, choreographed by the legendary Vaslav Nijinsky, continued, but it was impossible to hear the music above what Stravinsky described as a \"terrific uproar\".\nAs a riot ensued, two factions in the audience attacked each other, then the orchestra, which kept playing under a hail of vegetables and other objects. Forty people were forcibly ejected.\nThe reviews were merciless. \"The work of a madman … sheer cacophony,\" wrote the composer Puccini. \"A laborious and puerile barbarity,\" added Le Figaro's critic, Henri Quittard.\nIt was 29 May 1913. Classical music would never be the same again.\nOn Wednesday evening at the same theatre in Paris, a 21st-century audience – hopefully without vegetables – will fill the Théâtre des Champs-Élysées for a reconstruction of the original performance to mark the 100th anniversary of the notorious premiere. It will be followed by a new version of The Rite by the Berlin-based choreographer Sasha Waltz, among a series of commemorative performances.\nToday, the piece has gone from rioting to rave reviews and is widely considered one of the most influential musical works of the 20th century.\n\"It conceals some ancient force, it is as if it's filled with the power of the Earth,\" Waltz said of Stravinsky's music.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "Los Angeles Ballet to perform works of Stravinsky and Balanchine\nFree to public at Grand Park on Saturday, July 6\nMay 8, 2013 Los Angeles – The Music Center will partner with Los Angeles Ballet to present that company’s productions of Agon and Rubies on Saturday evening July 6 at 7:30 pm, free to the public. 
Both have music by Igor Stravinsky and choreography by George Balanchine. This free event, under the stars in Grand Park and open to everyone, is part of the broad array of free public events The Music Center is presenting at Grand Park. The event is a one-of-a-kind opportunity to spread a blanket, bring a picnic basket, and enjoy ballet in a whole new way.\nLos Angeles Ballet is led by artistic directors Thordal Christensen (former New York City Ballet dancer and former Director of the Royal Danish Ballet) and Colleen Neary (former New York City Ballet soloist under Balanchine's direction). Christensen and Neary will participate in a discussion about Stravinsky and Balanchine after the performance on July 6. For more information please visit musiccenter.org or call (213) 972-0711.\n“Los Angeles Ballet is pleased and honored to present collaborative masterpieces of George Balanchine and Igor Stravinsky for this occasion. George Balanchine was the innovator that changed classical dance forever, and is responsible for what it has become today. Together with his dear friend Igor Stravinsky they created works of pure musical and choreographic genius. These ballets remain timeless, an inspiration to us all,” said Christensen and Neary.\nStravinsky began writing Agon, a ballet for twelve dancers, in December 1953 and completed it in April 1957; the music was first performed on June 17, 1957 in Los Angeles, conducted by Robert Craft, while the first stage performance was given by the New York City Ballet on December 1, 1957 at the City Center of Music and Drama, New York. 
The composition's long gestation period covers an interesting juncture in Stravinsky's composing career, in which he moved from a diatonic musical language to one based on twelve-tone technique; the music of the ballet thus demonstrates a unique symbiosis of musical idioms.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-5", "d_text": "Prokofiev and Stravinsky restored their friendship, though Prokofiev did not particularly like Stravinsky's later works; it has been suggested that his use of text from Stravinsky's A Symphony of Psalms to characterise the invading Teutonic knights in the film score for Eisenstein's Alexander Nevsky (1938) was intended as an attack on Stravinsky's musical idiom. However, Stravinsky himself described Prokofiev as the greatest Russian composer of his day, after himself.\nAround 1927, the virtuoso's situation brightened; he had exciting commissions from Diaghilev and made concert tours in Russia; in addition, he enjoyed a very successful staging of The Love for Three Oranges in Leningrad (as Saint Petersburg was then known). Two older operas (one of them The Gambler) played in Europe and in 1928 Prokofiev produced his Third Symphony, which was broadly based on his unperformed opera The Fiery Angel. The conductor Serge Koussevitzky characterized the Third as \"the greatest symphony since Tchaikovsky's Sixth.\"\nDuring 1928-29 Prokofiev composed what was to be the last ballet for Diaghilev, The Prodigal Son, which was staged on 21 May 1929 in Paris with Serge Lifar in the title role. Diaghilev died only months later.\nIn 1929, Prokofiev wrote the Divertimento, Op. 43 and revised his Sinfonietta, Op. 5/48, a work started in his days at the Conservatory. Prokofiev wrote in his autobiography that he could never understand why the Sinfonietta was so rarely performed, whereas the \"Classical\" Symphony was played everywhere. 
Later that year, however, he slightly injured his hands in a car crash, which prevented him from performing in Moscow, but in turn permitted him to enjoy contemporary Russian music. After his hands healed, he toured the United States successfully, propped up by his recent European success. This, in turn, propelled him on another tour through Europe.\nIn 1930 Prokofiev began his first non-Diaghilev ballet On the Dnieper, Op. 51, a work commissioned by Serge Lifar, who had been appointed maitre de ballet at the Paris Opéra.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "This was the School of American Ballet, founded in 1934, the first product of the Balanchine-Kirstein collaboration. Several ballet companies directed by the two were created and dissolved in the years that followed, while Balanchine found other outlets for his choreography. Eventually, with a performance on October 11, 1948, the New York City Ballet was born. Balanchine served as its ballet master and principal choreographer from 1948 until his death in 1983.\nBalanchine's more than 400 dance works include Serenade (1934), Concerto Barocco (1941), Le Palais de Cristal, later renamed Symphony in C (1948), The Nutcracker (1954), Symphony in Three Movements (1972), Stravinsky Violin Concerto (1972), Vienna Waltzes (1977), Ballo della Regina (1978), and Mozartiana (1981). His final ballet, a new version of Stravinsky's Variations for Orchestra, was created in 1982.\nHe also choreographed for films, operas, revues, and musicals. Among his best-known dances for the stage is Slaughter on Tenth Avenue, originally created for Broadway's On Your Toes (1936). The musical was later made into a movie.\nA major artistic figure of the twentieth century, Balanchine revolutionized the look of classical ballet. 
Taking classicism as his base, he heightened, quickened, expanded, streamlined, and even inverted the fundamentals of the 400-year-old language of academic dance. This had an inestimable influence on the growth of dance in America. Although at first his style seemed particularly suited to the energy and speed of American dancers, especially those he trained, his ballets are now performed by all the major classical ballet companies throughout the world.\nPhoto, top: George Balanchine.\nCourtesy NYCB Archives Tanaquil LeClerq Collection.", "score": 25.444300399133866, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "In this centenary year of the death of Marius Petipa, the Paris Opera Ballet is celebrating the choreographer with performances of \"La Bayadere\", a seminal work that entered the repertoire in 1992 in Rudolf Nureyev's version.\nFirst performed at Saint Petersburg's Bolshoi Theatre in 1877, this ballet was one of Marius Petipa's greatest successes. \"La Bayadere\" has been restaged and revived by Rudolf Nureyev (1992) for the Paris Opera Ballet, the last work he choreographed during his tenure as the company's Artistic Director, which began in 1983.\nUntil 1980, \"La Bayadere\", a ballet that was over a century old and had been continuously performed in Russia, had never been seen in the West. 
A well-kept secret in Russia for so long, \"La Bayadere\" has now claimed its rightful place in the ballet repertory and is certainly one of the most important ballets of the 19th century.\nThis spellbinding Oriental fresco, a richly atmospheric tale of romance, betrayal, intrigue and vengeance, takes us to an imaginary India that, although sometimes savage and cruel, is nonetheless the place where le plus doux reve – the sweetest of dreams – triumphs.\nLibrettist Sergei Khudekov\nComposer Ludwig Minkus\nFirst choreography by Marius Petipa\nRestaged and revived by Rudolf Nureyev\nWe were happy to attend and enjoy the performance of \"La Bayadere\" on Saturday 22 May 2010, at the Opéra National de Paris.", "score": 24.95593164391122, "rank": 48}, {"document_id": "doc-::chunk-51", "d_text": "The modern theory and practice of ballet were largely developed in mid-eighteenth-century Paris, especially by the royal ballet master Jean Georges Noverre (1727-1810)….Russia first imported French and Italian ballet under Peter the Great, but in the nineteenth century moved rapidly from imitation to creative excellence. Tchaikovsky’s music for Swan Lake (1876), Sleeping Beauty (1890), and The Nutcracker (1892) laid the foundations for Russia’s supremacy. In the last years of peace, the Ballets Russes launched by Sergei Diaghilev (1872-1929) enjoyed a series of unsurpassed triumphs. The choreography of Fokine, the dancing of Nizinski and Karsavina, and, above all, the scores of Stravinsky, brought ballet to its zenith with The Firebird (1910), Petrushka (1911), and The Rite of Spring (1913). After the Revolutions of 1917, the Ballets Russes stayed abroad, whilst the Soviet Bolshoi and Kirov Ballets combined stunning technical mastery with rigid artistic conservatism.”\nThe Russian-born composer, pianist and conductor Igor Stravinsky was arguably the most important composer of his time and had an enormous influence on later composers. He was born near St. Petersburg to a well-to-do musical family. 
He began piano lessons at the age of nine and studied music theory in his teens. After hearing some of his early compositions, the Russian impresario Sergei Diaghilev (1872-1929) commissioned Stravinsky to compose for his Ballets Russes (Russian Ballet), which reigned in Paris from 1909 to 1929. Stravinsky then wrote the ballets that made him famous and are still among his most popular works: The Firebird, Petrushka and The Rite of Spring. The Firebird was based on Russian folk tales. He collaborated on them with the Russian choreographer Mikhail Fokin (1880-1942), founder of the modern ballet style, and the brilliant dancer Vaslav Nijinsky (1889-1950), born in Kiev, Ukraine, the son of a Polish dancer.\nIgor Stravinsky moved to Paris in 1911 and to Switzerland in 1914. Six years later, after becoming stranded in the West because of World War I and the 1917 Russian Revolution, he returned to France.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "Just a month before the Nutcracker Suite was premièred in St. Petersburg, Tchaikovsky hurriedly set about orchestrating and completing ideas from his roughly sketched-out Nutcracker ballet, and compiling them into a short symphonic work, ready for its premiere in March 1892. By this point, he had not even named the work, and in surviving sketches it is referred to as the ‘Fir Tree’ and ‘Christmas Tree’ ballet suites.\nDespite the rush to find something suitable for the concert he was preparing, what became known as the ‘Nutcracker Suite’ was an instant success. In the same year, it was also performed in Moscow and Chicago, and in the following year, Tchaikovsky conducted performances in Brussels and Odessa, in the Ukraine.\nThe suite has three broad movements, beginning with the Miniature Overture. This cute sonatina opening also opens the ballet, which was premièred in December 1892, and sets the toy-like, playful tone of the work. 
Interestingly, this overture does not include any lower strings at all – only violins and violas perform from the string section, leaving the orchestra with very little bass.\nThe Dances follow: six short dances from act two, grouped into one larger movement. The famous March opens the movement, before introducing the iconic Dance of the sugar-plum fairy, which used the newly-invented Celeste, almost a cross between a glockenspiel and a piano. Tchaikovsky saw a Celeste for the first time the year before, and wrote to his publisher asking for one – but to keep it a secret… he wanted to be the first to use it. Next, the Russian Dance is followed by the gentle Arabian Dance and the rhythmic Chinese Dance, famous for its difficult, impressive choreography. The Dance of the Mirlitons, by far the longest of the dances, closes the movement, using a gentle flute trio for its main motif.\nTchaikovsky closes the suite with the Waltz of the Flowers, a prime example of romantic music. The gentle arpeggio-based melody is introduced by the woodwind section, before a graceful harp cadenza links the Waltz into the music. This is the longest movement by far, with 353 bars, taking around 12 minutes, before bringing the hurriedly-prepared suite to a dignified close.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "Stravinsky’s Le Sacre du Printemps (The Rite of Spring) premiered 100 years ago in Paris! Russian pianist Svetlana Belsky discusses and performs the entire work, transcribed for one piano by Chicago’s own Vladimir Leyetchkiss.\nCritically acclaimed as “a passionate pianist and scholar”, Svetlana Belsky is an in-demand recitalist and chamber pianist, noted for her remarkable rapport with audiences and stylistic versatility. She has appeared in the Ukraine, Russia, Poland, China, and Hong Kong, and throughout the United States. 
Her performance credits include Carnegie Recital Hall, Kiev Philharmonic Hall, Dame Myra Hess Series, Music in the Loft, countless university concert series, live recitals on Chicago’s WFMT and New York’s WQXR, and guest appearances with the University of Chicago Symphony, Southern Illinois Symphony, Chicago Chamber Orchestra and the Tutti Orchestra.\nProgram: THE RITE OF SPRING by Igor Stravinsky\nPart I: L’Adoration de la Terre (Adoration of the Earth)\nLes Augures printaniers (Augurs of Spring)\nJeu du rapt (Ritual of Abduction)\nRondes printanières (Spring Rounds)\nJeux des cités rivales (Ritual of the Rival Tribes)\nCortège du sage: Le Sage (Procession of the Sage: The Sage)\nDanse de la terre (Dance of the Earth)\nPart II: Le Sacrifice (The Sacrifice)\nCercles mystérieux des adolescentes (Mystic Circles of the Young Girls)\nGlorification de l’élue (Glorification of the Chosen One)\nEvocation des ancêtres (Evocation of the Ancestors)\nAction rituelle des ancêtres (Ritual Action of the Ancestors)\nDanse sacrale (L’Elue) (Sacrificial Dance)", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-1", "d_text": "The reconstruction was the culmination of more than 15 years of work by Millicent Hodson, a choreographer and dance historian, and her husband Kenneth Archer, an art historian. 
Hodson and Archer had painstakingly pieced the ballet together from prompt books, contemporary sketches, paintings, photographs, reviews, the original costume designs, annotated scores, and interviews with eye witnesses, such as Dame Marie Rambert, Nijinsky's assistant.\n\"When the booing and catcalls began, Stravinsky left the hall in a rage and went backstage, where he spent the rest of the performance holding onto Nijinsky's coat-tails while the choreographer stood on a chair shouting numbers (in Russian) to the dancers, to try to keep them in time.\"\n--Wendy Thompson from the London Symphony Orchestra's Bluffer's Guide to The Rite of Spring\nHodson compares the ballet's exhumation to the rediscovery of Pablo Picasso's seminal work Les Demoiselles D'Avignon. In a 2003 London Independent article, she explained, \"The ballet is really the forerunner of modern dance as we know it. It was so different to what had gone before: it was angular, abstract and geometric and so special. The costumes had these ritualistic designs which people had never seen before. It created a whole new agenda for dance.\" Given the innovation of Nijinsky's Sacre, it was fitting that it would be resurrected by the equally innovative Joffrey Ballet. Founded in 1956 by Robert Joffrey and Gerald Arpino, the company's groundbreaking repertory of original works quickly distinguished it from its contemporaries. Over the last 50 years, the Joffrey has performed in more than 400 U.S. cities and 26 countries, including a performance in the former U.S.S.R., the first such tour by an American dance company. The company also has garnered a reputation for granting inaugural commissions to now-legendary choreographers, including Twyla Tharp, Alvin Ailey, and Mark Morris. 
The NEA has regularly supported the company, including the premiere broadcast of Dance In America, which featured works by Joffrey and Arpino.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-1", "d_text": "It was not an easy transition for him; although he made great strides forward, he was never able to fully shake off his roots, just as Picasso could never abandon his Bohemian ways.\n‘The fact that neither Spain or Russia had undergone a Renaissance made their mutual understanding all the more instinctive’ JR\nPicasso and Stravinsky met and firmly bonded; it was a lifetime friendship. It is through this friendship that we learn more about the influences that drove the musical direction of Stravinsky. He created raw minimal pieces and reframed older compositions. Both of these artists drew from their environment in unexpected ways: Picasso could find character in a fork and Stravinsky found music in a scratching sound.\n‘Stravinsky had at a stroke re-established himself as the most chic and brilliant modernist’ JR\nPicasso left the ballet and embarked on his first marriage, to one of the Russian dancers, Olga Khokhlova. Stravinsky remained within the Diaghilev hub and teamed up with Balanchine. Stravinsky and Balanchine shared a vision that the music and the choreography should be equal parts that worked together.\nAlthough Stravinsky had been able to work solidly through WW1, Europe was not a safe place for him in WW2, so he had to take refuge in America. His new country afforded him employment, but not on his terms.\nThe Stravinsky that had fooled about with Picasso had grown reserved in America. 
In Italy, he had sourced from popular culture and allowed himself moments of wild abandonment with the cream of Modern Art.\n‘Very drunk Stravinsky raided the rooms upstairs and tossed pillows, bolsters and mattresses onto the heads of the guests below. The ensuing pillow fight kept the party going until three in the morning’ JR\nDespite the numerous setbacks and forced emigrations, Stravinsky stayed one step ahead of destruction. In 1946, he was commissioned by the Philharmonic Symphony Society of New York to compose Symphony in Three Movements. Balanchine created a ballet that would translate the music.\nBalanchine’s ballet is an eclectic blend of cultural references and popular trends. The ’50s were about to dawn, and the ballet showcases girls with long pony-tails in dancing gangs, exotic Asian influences, soldiers and clocks, all portrayed in leotards with no scenery or props. It is classically rich, within a sheer minimal exterior.\nIt is modern art, moving.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-1", "d_text": "Ashley Wheater, a former Joffrey dancer who left his position as assistant director and ballet master of the San Francisco Ballet in 2007 to become only the third artistic director of the Joffrey, dedicates this series of performances “to the creative force of the New York City Ballet, in gratitude for the support we have received through the years from these choreographers.”\nAnd what a star-studded repertoire this “All Stars” program promises to be, with the capable dancers on the Joffrey roster presenting three Joffrey premieres: Balanchine’s “Stravinsky Violin Concerto,” Robbins’ “The Concert” (or “The Perils of Everybody”) and Wheeldon’s “After the Rain,” along with Balanchine’s “Tarantella,” which was premiered by the Joffrey last spring.\nIn the Stravinsky piece, revised in 1972 from a 1941 endeavor, Balanchine forms two contrasting pas de deux, one soft and lyrical, the other sharply angular, that are 
rooted in the folk dance traditions of Georgia, then a part of the Soviet Union.\nWheeldon’s “After the Rain,” an exquisite piece set to the minimalist classical music of Arvo Pärt, marks the first time Wheeldon has awarded the rights to perform the entire piece to a company other than his own.\nAgain, there are striking contrasts. The first section is marked by three couples in steel-gray costumes, creating bold lines and intricate lifts; the second shifts to a tender relationship as dancers explore the space between them in an emotional push and pull. Set to the music of Frederick Chopin and played onstage by a pianist, Robbins’ “The Concert” is a light-hearted satire of dance in a series of vignettes.\nBalanchine’s “Tarantella” (1964) is an explosive pas de deux set to the “Grand Tarantelle for Piano and Orchestra, Op. 67,” by Louis Moreau Gottschalk.\nTickets are available at the Joffrey box office, 19 E. Randolph St., at (800) 982-2787 or online at ticketmaster.com.", "score": 23.732370371839338, "rank": 54}, {"document_id": "doc-::chunk-4", "d_text": "Fokine established an international reputation with his works choreographed during the first four seasons (1909-1912) of the Ballets Russes, including: the Polovtsian Dances from Prince Igor (1909, music by Borodin), Le Pavillon d'Armide (1909, a revival of his 1907 production for the Imperial Russian Ballet, music by Tcherepnin), Les Sylphides (a 1909 reworking of his earlier Chopiniana), The Firebird (1910, music by Stravinsky), Le Spectre de la Rose (1911, music by Weber), Petrushka (1911, music by Stravinsky), and Daphnis and Chloé (1912, music by Ravel).\nAfter a longstanding tumultuous relationship with Diaghilev, Fokine left the Ballets Russes at the end of the 1912 season.\nVaslav Nijinsky (1889 or 1890–1950) had attended the Imperial Ballet School, St. 
Petersburg since the age of 8. He graduated in 1907 and joined the Imperial Ballet where he immediately began to take starring roles. Diaghilev invited him to join the Ballets Russes for its first (1909) Paris season.\nIn 1912, Diaghilev gave Nijinsky his first opportunity as a choreographer: L'Après-midi d'un faune (The Afternoon of a Faun) to music composed by Claude Debussy in 1894: Prélude à l'après-midi d'un faune. Featuring Nijinsky himself as the Faun, the ballet's frankly erotic nature caused a sensation. The following year (1913), Nijinsky choreographed a new work by Debussy composed expressly for the Ballets Russes: Jeux (Games). Indifferently received by the public, Jeux was eclipsed two weeks later by the premiere of Igor Stravinsky's The Rite of Spring (Le Sacre du printemps), also choreographed by Nijinsky.\nBecause of mental illness, Nijinsky eventually retired from dance; he was diagnosed with schizophrenia.\nLéonide Massine (1896-1979) was born in Moscow, where he studied both acting and dancing at the Imperial School.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "Music: Peter Ilyich Tchaikovsky (Serenade in C for string orchestra, Op. 48, 1880)\nChoreography: George Balanchine © The George Balanchine Trust\nStaging: Francia Russell\nCostume Design: Karinska\nLighting Design: Randall G. Chiarelli\nDuration: 35 minutes\nPremiere: June 10, 1934, School of American Ballet (White Plains, New York); March 1, 1935, American Ballet (New York, New York)\nPacific Northwest Ballet Premiere: September 29, 1978\nChoreographed originally in 1934 for students at the recently-founded School of American Ballet, Serenade is the very first work that George Balanchine created for American dancers. The remarkable story of its composition has often been told. 
Deciding after class one day that \"the best way to make students aware of stage technique was to give them something new to dance,\" Balanchine began to choreograph a new work, to Tchaikovsky's lush Serenade in C for String Orchestra, improvising with whatever students were available – seventeen that first day, varying numbers on succeeding days, eventually a few men – and incorporating chance happenings, such as a dancer's fall or late arrival, into the overall design of the piece. Making a virtue of his students' technical limitations and lack of finesse, he also contrived to build simple movements into consequential stage events, to give unprecedented importance to the ensemble rather than to individuals, and to infuse the whole with a freshness and candor that have struck many viewers as the first expression of a distinctively American style.\nAlthough Balanchine continued to revise Serenade for many years as he adapted it to the developing abilities of his dancers, the ballet has always retained the unique – even idiosyncratic – character that was determined by those early circumstances. Romantic in form and feeling, it also has always tantalized audiences with its hints of a mysterious narrative, though Balanchine consistently refused to define what that narrative might be. Instead, he insisted that because Tchaikovsky's score \"has in its danceable four movements different qualities suggestive of different emotions and human situations,\" a \"plot\" seems to emerge that is \"many things to many listeners to the music, and many things to many people who see the ballet.\"", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "Stravinsky – Scherzo Fantastique\nWritten by Jeff Counts\nTHE COMPOSER – IGOR STRAVINSKY – Stravinsky’s long and productive life is easily separated today into distinct phases or periods. 
The clarity with which contemporary scholarship can identify these evolutionary shifts speaks to a certain genius in the composer for self-reinvention and an almost willful sense of historical timing. In 1908, Stravinsky was nearing the end of his pupil days under Rimsky-Korsakov (an ending hastened by the master’s death) and poised for his first great leap.\nTHE MUSIC – The importance of Scherzo Fantastique for the young Stravinsky was not so much in what it was but rather what it did. In a performance which also included his Fireworks, Stravinsky was able to win the ear of Serge Diaghilev, who was in need of a composer for his budding Ballet Russe project. The first significant result of their artistic interaction would be nothing less than The Firebird, a work which set Stravinsky on the path to lasting international stardom. Scherzo Fantastique itself owed much to the music of the French impressionists and uses as its programmatic source a literary work by the Symbolist (and Debussy muse) Maurice Maeterlinck entitled La vie des abeilles (The Life of the Bee). The future Stravinsky is subtly present in the music with harmonies that are, in his own words to Rimsky-Korsakov, alternately “fierce, like a toothache” and “agreeable, like cocaine.” Though the Scherzo was Stravinsky’s first independent, non-academic orchestral work, his teacher’s touches are still evident in the orchestration and the magical, “fantastic” qualities of the musical story-telling. Rimsky-Korsakov certainly heard portions of the work and spoke of it fondly with his friends, but he never heard it performed in concert. It is difficult to know what the master might have ultimately found in his star pupil’s fledgling effort, but we do know what Diaghilev saw. 
The daring chance he took on such an unknown and unproven 26-year-old changed the course of 20th century orchestral history.\nTHE WORLD – 1908 was the year of the first airplane passenger and the founding of the Boy Scouts in Britain.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-8", "d_text": "Diaghilev became sufficiently interested in the opera to request Prokofiev play the vocal score to him in June 1922, while they were both in Paris for a revival of Chout, so he could consider it for a possible production. Stravinsky, who was present at the audition, refused to listen to more than the first act. When he then accused Prokofiev of \"wasting time composing operas\", Prokofiev retorted that Stravinsky \"was in no position to lay down a general artistic direction, since he is himself not immune to error\". According to Prokofiev, Stravinsky \"became incandescent with rage\" and \"we almost came to blows and were separated only with difficulty\". As a result, \"our relations became strained and for several years Stravinsky's attitude toward me was critical.\"\nIn March 1922, Prokofiev moved with his mother to the town of Ettal in the Bavarian Alps, where for over a year he concentrated on an opera project, The Fiery Angel, based on the novel by Valery Bryusov. His later music had acquired a following in Russia, and he received invitations to return there, but decided to stay in Europe. In 1923, Prokofiev married the Spanish singer Carolina Codina (1897–1989, stage name Lina Llubera) before moving back to Paris.\nIn Paris, several of his works, including the Second Symphony, were performed, but their reception was lukewarm and Prokofiev sensed that he \"was evidently no longer a sensation\". Still, the Symphony appeared to prompt Diaghilev to commission Le pas d'acier (The Steel Step), a \"modernist\" ballet score intended to portray the industrialisation of the Soviet Union. 
It was enthusiastically received by Parisian audiences and critics.\nAround 1924, Prokofiev was introduced to Christian Science. He began to practice its teachings, which he believed to be beneficial to his health and to his fiery temperament and to which he remained faithful for the rest of his life, according to biographer Simon Morrison.\nProkofiev and Stravinsky restored their friendship, though Prokofiev particularly disliked Stravinsky's \"stylization of Bach\" in such recent works as the Octet and the Concerto for Piano and Wind Instruments.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "performed by the Goldner String Quartet, or below by the Alban Berg Quartet\n(cover image by David von Diemar)\nOne of Stravinsky’s few efforts in the string quartet form, the Three Pieces dates from 1914, right after Petrushka and The Rite of Spring. At the time, Stravinsky was living in Switzerland, but apparently not settled there, as it was his home for the winter. He would apparently return to Russia for summers, but as Chris Darwin says, that was “a pattern that was about to be broken by the outbreak of war.”\nThese three little pieces together play for only about seven minutes, and are unconnected, truly pieces and not movements:\nThey didn’t even have names or titles upon release. These were later adapted in a piece for orchestra and the corresponding orchestra adaptations were then given the titles Dance, Eccentric, and Canticle, respectively.\nThe first and shortest, at only a minute, is maybe the most memorable, and an excellent example of how a little bit of critical listening can reveal quite a lot of detail about the conscious choices made by a composer.\nThis first ‘dance’ immediately sounds rustic, like busking street performers, not only in the violin’s more fiddle-like sound, but in the drone from the viola, the raspy downward figure from second violin, and the plucked cello. 
There are other analyses you can find online about this piece, but in a number of ways, the four instruments are treated almost wholly independently. They’re each doing their own thing, and it never really falls into place until the end. The cello forms the backbone of the little dance, outlining the constantly-shifting meter. The viola drones at both ends of the dance; Darwin says that it is “like a bagpipe with toothache.” First violin presents the melody and second interjects here and there, as if to protest. There’s a lot of repetition, but none of these repetitions are in any kind of synchronicity. It all still works, though, doesn’t it?", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-1", "d_text": "The first-night audience was a glittering assemblage, with one dignitary after another coming to Diaghilev's box to greet the impresario and his brilliant protégé. According to Stravinsky's much later memoirs - and he does confess to not recalling whether all these people were there on opening night or at subsequent performances - in attendance were Marcel Proust, Jean Giraudoux, St.-John Perse, and, definitely at a later performance, Sarah Bernhardt. During the initial run Stravinsky also met Debussy and the two instantly became friends.\n\"The orchestral body of The Firebird was wastefully large,\" Stravinsky was later to write in one of his periodic - inevitably negative - reviews of his early career, \"but I was more proud of some of the orchestration than of the music itself.\" (Stravinsky subsequently reorchestrated the music less lavishly in the second and third suites he devised.) 
He confesses, too, that he sold the Firebird manuscript in 1919 to \"a wealthy and generous ex-croupier from Monte Carlo\" who would donate it to the Geneva Conservatory of Music, and that the score \"has been a mainstay in my life as a conductor.\" Stravinsky in fact made his debut as a conductor with the complete score, at a 1915 Red Cross benefit in Paris. \"And, don't forget,\" he further informs us, \"I was once addressed by a man in an American railway dining car as 'Mr. Fireberg'.\"\nThrough all his cranky comments, however, shines the composer's affection for his youthful triumph, and it has remained his most frequently performed score.\n- Herbert Glass, after many years as a columnist for the Los Angeles Times, has for the past decade been the English-language annotator and editor for the Salzburg Festival.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-1", "d_text": "\"It's the best possible ballet music, apart from Tchaikovsky or Stravinsky,\" says Ratmansky.\nThe work's rarity on ballet stages was also an attraction. There are only two major extant versions of Harlequinade: the one Balanchine made for New York City Ballet in 1965, and a one-act led by the Soviet choreographer Pyotr Gusev, based largely on Fyodor Lopukhov's 1933 version. (There is also a frequently performed \"Harlequinade pas de deux,\" but it has no connection to the full ballet.)\nNeither the Balanchine nor the Gusev, however, is a \"reconstruction\" of the original Petipa ballet, which is what this staging aims to be. As with his Sleeping Beauty and his upcoming Bayadère for Staatsballett Berlin (coming this November), Ratmansky has gone back to notations written down in the days when the ballet was being performed at the Imperial Ballet.\nLike many of these notations, the ones created for Harlequinade are not easy to decipher, and are incomplete. Where there are gaps, Ratmansky has stepped in, choreographing in a style inspired by Petipa. 
Rehearsals at ABT have been painstaking but also highly creative, with the dancers occasionally providing ideas for how best to depict their characters, particularly in mime scenes, of which there are many. The costumes and sets, by Robert Perdziola, are inspired by the originals, drawings of which are kept in the St. Petersburg State Museum of Theatre and Music. As with any reconstruction, the archival materials are only a starting point. But in Ratmansky's view, there is a lot to be learned from living inside the choreography of a past master.\nOn August 19, 1929, shockwaves were felt throughout the dance world as news spread that impresario Sergei Diaghilev had died. The founder of the Ballets Russes rewrote the course of ballet history as the company toured Europe and the U.S., championing collaborations with modernist composers, artists and designers such as Igor Stravinsky, Pablo Picasso and Coco Chanel.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "Boléro is a one-movement orchestral piece by Maurice Ravel. Originally composed as a ballet commissioned by Russian ballerina Ida Rubinstein, the piece, which premiered in 1928, is Ravel's most famous musical composition. Before Boléro, Ravel had composed large scale ballets (such as Daphnis et Chloé, composed for the Ballets Russes 1909–1912), suites for the ballet (such as the second orchestral version of Ma Mère l'Oye, 1912), and one-movement dance pieces (such as La Valse, 1906-1920).
Apart from such compositions intended for a staged dance performance, Ravel had demonstrated an interest in composing re-styled dances, from his earliest successes (the 1895 Menuet and the 1899 Pavane) to his more mature works like Le tombeau de Couperin (which takes the format of a dance suite).\nBoléro epitomises Ravel's preoccupation with restyling and reinventing dance movements. It was also one of the last pieces he composed before illness forced him into retirement: the two piano concertos and the Don Quichotte à Dulcinée song cycle were the only compositions that followed Boléro.\nThe work had its genesis in a commission from the dancer Ida Rubinstein, who asked Ravel to make an orchestral transcription of six pieces from Isaac Albéniz' set of piano pieces, Iberia. While working on the transcription, Ravel was informed that the movements had already been orchestrated by Spanish conductor Enrique Arbós, and that copyright law prevented any other arrangement from being made. When Arbós heard of this, he said he would happily waive his rights and allow Ravel to orchestrate the pieces. However Ravel changed his mind and decided initially to orchestrate one of his own previously-written works. He then changed his mind again and decided to write a completely new piece based on the musical form and Spanish dance called bolero. While on vacation at St Jean-de-Luz, Ravel went to the piano and played a melody with one finger to his friend Gustave Samazeuilh, saying \"Don't you think this theme has an insistent quality? I'm going to try and repeat it a number of times without any development, gradually increasing the orchestra as best I can.\"", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-2", "d_text": "Years later, Mr. Salonen considered buying the house, which had fallen on hard times. 
The conductor noted the carpet indentations where the great man's pianos had stood, the hook where a goat had been tethered (Stravinsky liked the milk) and the built-in couch where Thomas had slept off more than a few overindulgences. An aspiring composer himself, Mr. Salonen wisely feared the presence of ghosts.\nKlemperer and others performed Stravinsky's pieces at the Philharmonic. The composer himself appeared as pianist and conductor. Even the gaping Hollywood Bowl embraced the Stravinsky of ''The Firebird.'' The publisher Boosey & Hawkes eventually provided him a comfortable annual retainer, and there were the constant tours and travels for a man less famous than Clark Gable but not too far behind.\nThis month and last, the peripheral events around town have included small dramatizations (''A Word With Igor'' at the Los Angeles Central Library), panel discussions on Stravinsky's influence here (''The Eclectic Stravinsky'' at the Armand Hammer Museum at U.C.L.A.) and reminiscences of recording sessions and concerts from those who were there.\nA recent program of the Los Angeles Philharmonic at the Dorothy Chandler Pavilion offered instructive contrasts: the ballet score ''Agon,'' the little opera ''Mavra,'' in concert form, and ''The Rite of Spring.'' ''Mavra'' is one-act Pushkin, Stravinsky's rustic little stab at Russian folklore.\n''Agon,'' conforming to its title, is a contest between the tonal and the not-so-tonal. It is music of Stravinsky's old age: the temperature has cooled; the detachment from the audience and from practical performance problems is pronounced. ''Agon,'' indeed, is ballet music of extraordinarily complicated stops and starts, and asymmetrical counterpoint. The Los Angeles players had a hard time with it. Run-of-the-mill ballet orchestras must find it defeating.\nWhat a different world is ''The Rite of Spring,'' this wild beast of a piece that, even after 88 years and thousands of performances, still tears at the listener's viscera.
It speaks to the young, and an unusually youthful audience stomped and yelled with enthusiasm.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "By CYRUS PARKER-JEANETTE\nA recent donation of archival materials of 20th century dancer and choreographer Adolph Bolm (1884-1951) represents another important addition to the dance collections in the Music Division of the Library of Congress. These materials are particularly relevant to the Library because Bolm was the first choreographer invited to stage a work at the Library's Coolidge Auditorium during the 1928 Chamber Music Festival in collaboration with Igor Stravinsky, who was commissioned to compose the score for \"Apollon Musagète.\"\nThe Bolm collection includes numerous newspaper clippings, correspondence, short manuscripts and biographical outlines by friends and family, as well as stories and photos of his collaborations with Marc Chagall, George Herriman, Nicholas Remisoff, Igor Stravinsky, Manuel de Falla, John Alden Carpenter and Anna Pavlova.\nFrom 1908 until his death in 1951, Adolph Bolm traveled across the cities of Europe and America with the self-proclaimed wanderlust of a gypsy, establishing companies, training dancers and creating choreography. The Russian-born artist left his homeland and immersed himself in America, developing a broad awareness and understanding of the diverse American culture that even some of his native-born contemporaries lacked.\nBolm was born in St. Petersburg on Sept. 25, 1884, to an intellectual family dedicated to the arts. He auditioned at the age of 16 at the Maryinsky Theatre, home of the Russian Imperial School, and graduated in 1904. Among the choreographers with whom Bolm worked was the great ballet master Mikhail Fokine. 
At this time Fokine was battling the stultified traditions in Russian theater and dance, seeking to extend the artistic boundaries of the Czar and the Russian aristocracy, and his revolutionary approaches began to have an impact within Russian artistic circles.\nIn 1908, motivated by this innovative spirit, Bolm organized a small company of Russian dancers with ballerina Anna Pavlova as his partner, marking her first appearances outside Russia. They toured Northern Europe to broad acclaim. The tour itinerary originally included a stop in Paris, but impresario Serge Diaghilev, who was planning the first European season of the new Ballets Russes with Fokine there, pleaded with Bolm to forgo his appearances in Paris.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "Pacific Northwest Ballet's Lesley Rausch & Jerome Tisserand\nOn Dancing Balanchine’s \"Stravinsky Violin Concerto\"\nPacific Northwest Ballet will perform at New York City Center February 24 – 27\nFor tickets, go to New York City Center's website.\nPictured above: Pacific Northwest Ballet principal dancers Lesley Rausch and Jerome Tisserand in Coppélia choreographed by Alexandra Danilova and George Balanchine © The George Balanchine Trust. Photo © Angela Sterling.\nThe Pacific Northwest Ballet, under the direction of Artistic Director Peter Boal, returns to New York City Center with two repertory programs that reflect the company’s reputation for inspired stagings of works by George Balanchine, paired with some of the finest works in contemporary ballet. The first program offers a trio of Balanchine masterpieces — Square Dance, which premiered at City Center in 1957, Prodigal Son, and Stravinsky Violin Concerto. The second program features David Dawson’s A Million Kisses to my Skin, William Forsythe’s The Vertiginous Thrill of Exactitude, and Crystal Pite’s Emergence. 
The PNB Orchestra, under the direction of Music Director and Principal Conductor Emil de Cou, will accompany both programs.\nThe Dance Enthusiast's Henning Rübsam had the opportunity to speak with two principal dancers on their experience in the company and dancing Balanchine's Stravinsky Violin Concerto.\nLesley Rausch, a principal with PNB since 2011, joined the company in 2001. From Columbus, Ohio, she started dancing at the recommendation of her music teacher. “My dance teacher Shir Lee Wu is from China and actually danced with Martha Graham, so we had a Graham class early every Saturday. Not my favorite thing, but it was interesting and good for me and gave me a different perspective.”\nTDE: When did you first encounter the Balanchine style?\nLR: I was thirteen and fourteen when I took the summer workshops at The School of American Ballet in New York. And here in Seattle we have a lot of Balanchine in the repertoire and I danced at least a dozen of his ballets.\nTDE: Is there anything about Violin Concerto that makes it a different experience?", "score": 21.52386363883654, "rank": 65}, {"document_id": "doc-::chunk-2", "d_text": "Later he met Diaghilev, the manager of the Russian Ballets, who visited him in his studio together with Massine and Larionov, then in Rome for their show. Diaghilev commissioned him to create the stage sets and the plastic-mobile costumes for ''Le Chant du Rossignol'' by Stravinskij (which was not realised, probably because of Picasso's pressure on Diaghilev).\nFrom these days also date several costume-design projects (\"changeable\" and \"luminous\") for a planned ballet entitled ''Mimismagia'' (Magic-mimic dance).\nIn 1917 the painter Amedeo Modigliani visited his studio.
Then, in the spring of the same year, Diaghilev also asked him to design the costumes for Francesco Cangiullo's \"Giardino Zoologico\" (Zoological Garden), a ballet which was to have had music by Ravel (but this project, too, was never realised). Soon after he met Gilbert Clavel, a Swiss poet, who invited him to Capri to work together on his book ''Un Istituto per Suicidi'' (Suicides' Institute),\na decadent novel which Depero illustrated with many drawings, then published in Rome by Tavolato.\nIn Capri they also had the first ideas for the Plastic Theatre, a set of choreographies where they planned to use puppets instead of real dancers.\nOn September 8th Depero inaugurated an exhibition at the \"Sala Morgano\" in Anacapri, where he showed his first attempts at tapestry decorations and wall-hangings. Back in Rome he maintained a written correspondence with Clavel, with whom he then realised the ideas for the Plastic Theatre in the show ''Balli Plastici'' (Plastic Ballets), staged at the \"Teatro dei Piccoli\" in Rome on April 15th 1918 (and afterwards performed eleven times). The music was written by Casella, Tyrwhitt, Malipiero and Bartok (under the name of Chermenow).\nThe show featured puppets instead of live actors:\na mechanical but joyful vision of the world.\nIn February 1919 he held a one-man exhibition at the ''Casa d'Arte Bragaglia'' in Rome, as a guest of the famous photographer and inventor of Futurist photodynamism.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-4", "d_text": "Warburg, the School of American Ballet opened to students on January 2, 1934, less than 3 months after Balanchine arrived in the U.S.
Later that year, Balanchine had his students perform in a recital, where they premiered his new work Serenade to music by Tchaikovsky at the Warburg summer estate.\nRelocation to West Coast\nBalanchine relocated his company to Hollywood during 1938, where he rented a white two-story house with \"Kolya\", Nicholas Kopeikine, his \"rehearsal pianist and lifelong colleague\", on North Fairfax Avenue not far from Hollywood Boulevard. Balanchine created dances for five movies, all of which featured Vera Zorina, whom he met on the set of The Goldwyn Follies and who subsequently became his third wife. He reconvened the company as the American Ballet Caravan and toured with it throughout North and South America, but it folded after several years. From 1944 to 1946, during and after World War II, Balanchine served as resident choreographer for Blum & Massine's new iteration of Ballet Russe de Monte Carlo.\nReturn to New York\nSoon Balanchine formed a new dance company, Ballet Society, again with the generous help of Lincoln Kirstein. He continued to work with contemporary composers, such as Paul Hindemith, from whom he commissioned a score in 1940 for The Four Temperaments. First performed on November 20, 1946, this modernist work was one of his early abstract and spare ballets, angular and very different in movement. After several successful performances, the most notable featuring the ballet Orpheus created in collaboration with Stravinsky and sculptor and designer Isamu Noguchi, the City of New York offered the company residency at the New York City Center.\nIn 1955, Balanchine created his version of The Nutcracker, in which he played the mime role of Drosselmeyer. The company has since performed the ballet every year in New York City during the Christmas season.\nAfter years of illness, Balanchine died on April 30, 1983, aged 79, in Manhattan from Creutzfeldt–Jakob disease, which was diagnosed only after his death. 
He first showed symptoms during 1978 when he began losing his balance while dancing.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "The first ballet with a chess theme was Ballet des Echecs, performed for Louis XIV (1638-1715) of France. A ballet called Checkmate, composed by Sir Arthur Bliss and choreographed by Ninette de Valois in 1937, was performed at the Paris World Exhibition. The first ballet on ice was included in the pantomime Sinbad the Sailor (1953), where skaters played out the Morphy - Duke of Brunswick game. In 1986 the musical Chess, by Tim Rice, was produced. In 2002, a chess ballet opened the Chess Olympiad in Bled, Slovenia.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-6", "d_text": "He subsequently studied music theory, composition, and advanced piano at the Petrograd Conservatory, graduating in 1923. During this time, he worked with the corps de ballet of the Mariinsky Theater. In 1924, Balanchine (and his first wife, ballerina Tamara Geva) fled to Paris while on tour of Germany with the Soviet State Dancers. He was invited by Sergei Diaghilev to join the Ballets Russes as a choreographer.\nDiaghilev invited the collaboration of contemporary fine artists in the design of sets and costumes. These included Alexandre Benois, Léon Bakst, Nicholas Roerich, Georges Braque, Natalia Goncharova, Mikhail Larionov, Pablo Picasso, Coco Chanel, Henri Matisse, André Derain, Joan Miró, Giorgio de Chirico, Salvador Dalí, Ivan Bilibin, Pavel Tchelitchev, Maurice Utrillo, and Georges Rouault.\nTheir designs contributed to the groundbreaking excitement of the company's productions.
The scandal caused by the premiere performance in Paris of Stravinsky's The Rite of Spring has been partly attributed to the provocative aesthetic of the costumes of the Ballets Russes.\nAlexandre Benois (1870-1960) had been the most influential member of The Nevsky Pickwickians and was one of the original founders (with Bakst and Diaghilev) of Mir iskusstva. His particular interest in ballet as an art form strongly influenced Diaghilev and was seminal in the formation of the Ballets Russes. In addition, Benois contributed scenic and costume designs to several of the company's earlier productions: Le Pavillon d'Armide (1909, music by Tcherepnin), portions of Le Festin (1909, music by several composers), and Giselle (1910, music by Adolphe Adam). Benois also participated with Stravinsky and Fokine in the creation of Petrushka (1911), to which he contributed much of the scenario as well as the stage sets and costumes.\nLéon Bakst (1866-1924) was also an original member of both The Nevsky Pickwickians and Mir iskusstva.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-17", "d_text": "|
|Pablo Picasso (backdrops)|
|Year|Work|Composer|Choreographer|Design|
|1925|Les matelots|Georges Auric|Léonide Massine|Pere Pruna|
|1925|Zephyr et Flore|Vernon Duke|Léonide Massine|Georges Braque|
|1926|Jack in the Box|Erik Satie|George Balanchine|André Derain|
|1926|Pastorale|Georges Auric|George Balanchine|Pere Pruna|
|1926|Romeo and Juliet|Constant Lambert||Max Ernst (curtain) and Joan Miró (sets and costumes)|
|1927|La chatte|Henri Sauguet|George Balanchine|Naum Gabo|
|1927|Mercure|Erik Satie|Léonide Massine|Pablo Picasso|
|1927|Pas d'acier|Sergei Prokofiev|Léonide Massine|George Jaculov|
|1928|Apollon musagète (Apollo)|Igor Stravinsky|George Balanchine|Andre Bauschant (sets), Coco Chanel (costumes)|
|1929|Le fils prodigue / Prodigal Son|Sergei Prokofiev|George Balanchine|Georges Rouault|
- Garafola (1998), p. 
vii.\n- \"Diaghilev's Golden Age of the Ballets Russes dazzles London with V&A display\". Culture24. 2011-01-09. Retrieved 2013-05-08.\n- Garafola (1998), p. 150\n- Garafola (1998), p. 150.\n- Garafola (1998), p. 438, n. 7.\n- Garafola (1998), p. 151.\n- Morrison, Simon. \"The 'World of Art' and Music,\" in [Mir Iskusstva]: Russia's Age of Elegance. Palace Editions. Omaha, Minneapolis, and Princeton, 2005. p. 38.\n- Guroff, Greg. \"Introduction\" in [Mir Iskusstva]: Russia's Age of Elegance. Palace Editions. Omaha, Minneapolis, and Princeton, 2005. p.", "score": 20.327251046010716, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "Les Ballets Russes
Peter Iljitsch Tschaikowski
Manuel de Falla
Sergei Sergejewitsch Prokofjew
SWR Sinfonieorchester Baden-Baden und Freiburg
Igor Strawinsky: Le Sacre du printemps
Claude Debussy: Jeux
Paul Dukas: Fanfare pour précéder 'La Péri'
Paul Dukas: 'La Péri'
Maurice Ravel: Daphnis et Chloé
Francis Poulenc: Les Biches
Claude Debussy: Prélude à l’après-midi d’un faune
Florent Schmitt: La Tragédie de Salomé
Igor Strawinsky: Pétrouchka
Peter Iljitsch Tschaikowski: Schwanensee
Peter Iljitsch Tschaikowski: Dornröschen
Manuel de Falla: Der Dreispitz
Sergei Sergejewitsch Prokofjew: Der Narr
Igor Strawinsky: Pulcinella
Igor Strawinsky: Feuerwerk
Richard Strauss: Till Eulenspiegels lustige Streiche
Maurice Ravel: La Valse
Georges Auric: Les Fâcheux
Georges Auric: La Pastorale
Nikolai Rimski-Korsakow: Schéhérazade
Sergei Sergejewitsch Prokofjew: Ala et Lolly
Darius Milhaud: Le Train bleu
Vincenzo Tommasini: Le donne di buon umore | Les Femmes de bonne humeur
Henri Sauguet: La Chatte
Igor Strawinsky: L'Oiseau de feu
Igor Strawinsky: Apollon musagète
Les Ballets Russes was founded by director Sergei Diaghilev in 1909. 
The company never actually performed in Russia – it made its debut in Paris, and the world was to be its home during its 20 years of existence.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "The building houses two smaller stages, the Comédie des Champs-Élysées theatre on the 3rd floor, and the Studio des Champs-Élysées on the 5th floor.\nAlthough Astruc was soon financially overextended, the first season was extraordinary. The theatre opened on April 2, 1913, with a gala concert featuring five of France's most renowned composers conducting their own works: Claude Debussy (Prélude à l'après-midi d'un faune), Paul Dukas (L'apprenti sorcier), Gabriel Fauré (La naissance de Vénus), Vincent d'Indy (Le camp from Wallenstein), and Camille Saint-Saëns (Phaeton and excerpts from his choral work La lyre et la harpe). This was followed the next day with a performance of Hector Berlioz's opera Benvenuto Cellini conducted by Felix Weingartner, which included a \"dance spectacular\" by Anna Pavlova. Later there was a series of concerts devoted to Beethoven conducted by Weingartner and featuring the pianists Alfred Cortot and Louis Diémer, and the soprano Lilli Lehmann. The Royal Concertgebouw Orchestra of Amsterdam conducted by Willem Mengelberg gave two concerts: Beethoven's Ninth Symphony and the Paris premiere of Fauré's opera Pénélope (May 10).\nSergei Diaghilev's Ballets Russes presented the company's fifth season, although their first in the new theatre, opening on May 15 with Igor Stravinsky's The Firebird, Nikolai Rimsky-Korsakov's Scheherazade (as choreographed by Michel Fokine), and the world premiere of Debussy's Jeux (with choreography by Vaslav Nijinsky and designs by Léon Bakst). Some in the audience had been offended by the depiction on stage of a tennis game in Jeux, but this was nothing compared to the reaction to the ritual sacrifice in Stravinsky's Rite of Spring on May 29.\nCarl Van Vechten described the scene:
Carl Van Vechten described the scene:\nA certain part of the audience was thrilled by what it considered to be a blasphemous attempt to destroy music as an art, and, swept away with wrath, began very soon after the rise of the curtain, to make cat-calls and to offer audible suggestions as to how the performance should proceed.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-0", "d_text": "The Accademia Teatro alla Scala Ballet School brings the magic of The Nutcracker to Teatro Strehler, a choreography signed by Frédéric Olivieri from Lev Ivanov (December 14-22).\nMoving to the notes of Pyotr Ilyich Tchaikovsky, the young dancers will thrill young and old alike as they bring to life the characters populating Marius Petipa’s libretto inspired by Alexandre Dumas’s adaptation of E. T. A. Hoffmann’s story The Nutcracker and the Mouse King.\nThe Nutcracker was first conceived in early 1891, when the Director of the Russian Imperial Theatres of Saint Petersburg commissioned Petipa and Tchaikovsky to choreograph a two-act ballet for the Christmas season.\nAfter meeting with the balletmaster Petipa, the composer began to work in “feverish haste”. In a letter dated February of that year, he wrote that he was working on The Nutcracker “with all my might” and the music was ready early the following year, in spite of the composer’s many bouts of depression. He had written to his brother Modest that he was «tormented by an awareness that it is totally impossible to complete well the work I have engaged myself to do». His difficult relationship with Petipa may have had something to do with this. The balletmaster dictated the score measure by measure, badgering the composer with such notes as: «The stage is empty… Clara returns. Eight measures for her tremble of fright, eight for fantastic and dance music. Rest. The clock strikes midnight. After the chimes of the clock a short tremolo. 
After the tremolo, five measures to hear the scratching of the mice […]».\nA suite of pieces from the ballet was performed under the composer's direction on March 19, 1892, at the Russian Musical Society's theatre in St. Petersburg. It received unanimous acclaim, with five out of six pieces encored at its premiere. However, the opening night of the full ballet at the Mariinsky Theatre in December of the same year was not so fortunate.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Lucinda Childs and Robert Wilson have revived their 1981 work Relative Calm in a new extended production that world-premiered last week in Rome, Italy, at the Auditorium Parco Della Musica Ennio Morricone.\nThe show includes a reinterpretation of the two choreographies by Lucinda Childs to the music of John Adams and Jon Gibson and is extended into a triptych with a new creation on the music of Pulcinella by Igor Stravinsky.\n“The whole show, in its three symmetrical parts, will be like a clock that measures time, like the succession of the hours of the day.” — Robert Wilson\nIn the mid-seventies, Robert Wilson asked Lucinda Childs to be part of his opera Einstein on the Beach, on which he was working with Philip Glass.\nA few years later, in 1981, Childs asked Wilson to do the lights and design the space for a dance show she was creating with Jon Gibson at The Kitchen in New York. When they started thinking about a new work together, precisely forty years later and right in the middle of the pandemic isolation days, that 1981 show came back to their minds: Relative Calm.\nThey decided to extend that work and came up with the idea of choreographing Stravinsky's Pulcinella. “I really liked the idea of Stravinsky as a central counterpoint to two musical compositions by contemporary authors such as Jon Gibson and John Adams,” Wilson says.
“So we structured the work in three parts to compose a complete show of dance, music, lights, and images.”\nWilson particularly enjoyed working on Stravinsky. “Stravinsky’s is a completely different world from mine, with a different colour spectrum. Different and, therefore, structurally interesting to me. I faced Pulcinella’s suite the same way I always relate to written compositions: I respect the composer, but then I don’t want to become his slave, so I stage it in my own way.”\nFor Wilson, all his theatre, in a sense, is a masque with music and text; the image on stage is what one sees, while what is heard is something different. “From this point of view, my theater is very classic,” Wilson says, “just like in Greek theater where actors were “masks”, or as in Noh theater, the Bunraku in Japan, the Kathakali in India.”", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "About this work\nThis is powerful, virtuosic piano music drawn by the composer from his great ballet score Romeo and Juliet, Op. 64. It is likely that the composer arranged it as a salvage job for a major project that appeared doomed. Before he returned to Russia permanently, Prokofiev had contracted with the State Academic Theater of Leningrad for a full-length ballet on Romeo but later canceled the production on grounds that choreographed Shakespeare was a \"sacrilege.\"\nWhen Radlov persuaded the Bolshoi Theater of Moscow to accept the ballet, Prokofiev continued. Bowing to the demands of \"Socialist Realism,\" Prokofiev and Radlov even devised a \"happy ending.\" But following the infamous Pravda denunciation of Shostakovich's opera Lady Macbeth of Mtsensk, known to reflect Stalin's personal opinion, the Bolshoi decided to take no more chances and canceled the production.\nProkofiev sought to salvage the work he had done by making suites of the music: two orchestral suites (Opp. 64a and 64b) and this piano suite. 
He performed both as widely as possible, even conducting and recording the former despite the fact that he was an unusually poor conductor. These performances had their intended secondary result: A Yugoslav ballet company liked the music and premiered the ballet in Brno, Czechoslovakia, in 1938. (The Kirov danced it in 1940, and the Bolshoi in 1946.)\nProkofiev was an outstanding piano composer who developed his own unique style of writing for the instrument. Although he wrote prolifically for piano in the early part of his career, during his Soviet period he composed only four original piano sonatas, plus five piano suites drawn from stage works.\nThe ten numbers Prokofiev chose for this piano version all concern the beginning of the love between Romeo and Juliet, with some additional character and ensemble dances. No. 1, \"Folk Dance\" (Morning Dance), and No. 2, \"The Street Awakens\" set the scene in Verona.\nThe music shifts to the depiction of the ball at the Capulets' with the \"Arrival of the Guests.\" Meanwhile, Juliet is being dressed in her finery in \"The Young Juliet.\" Its playful music ends in her catching sight of herself in a mirror and suddenly realizing she has become a young woman.", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-1", "d_text": "This piece was initially called Fandango, but its title was soon changed to \"Boléro\".\nPremiere and early performances\nThe composition was a sensational success when it was premiered at the Paris Opéra on November 22, 1928, with choreography by Bronislava Nijinska and designs by Alexandre Benois. The orchestra of the Opéra was conducted by Walther Straram. Ernest Ansermet had originally been engaged to conduct the orchestra during its entire ballet season; however the orchestra refused to play under him. A scenario by Rubinstein and Nijinska was printed in the program for the premiere:\nInside a tavern in Spain, people dance beneath the brass lamp hung from the ceiling. 
[In response] to the cheers to join in, the female dancer has leapt onto the long table and her steps become more and more animated.\nRavel himself, however, had a different conception of the work: his preferred stage design was of an open-air setting with a factory in the background, reflecting the mechanical nature of the music.\nBoléro became Ravel's most famous composition, much to the surprise of the composer, who had predicted that most orchestras would refuse to play it. It is usually played as a purely orchestral work, only rarely being staged as a ballet. According to a possibly apocryphal story, at the premiere a woman shouted that Ravel was mad. When told about this, Ravel smiled and remarked that she had understood the piece. The piece was first published by the Parisian firm Durand in 1929. Arrangements of the piece were made for piano solo and piano duet (two people playing at one piano), and Ravel himself composed a version for two pianos, published in 1930.\nThe first recording was made by Piero Coppola in Paris for The Gramophone Company on January 8, 1930. The recording session was attended by Ravel. The very next day Ravel made his own recording for Polydor, conducting the Lamoureux Orchestra. That same year further recordings were made by Serge Koussevitzky with the Boston Symphony Orchestra and Willem Mengelberg with the Concertgebouw Orchestra.\nThe Toscanini affair\nConductor Arturo Toscanini gave the U.S. 
premiere of Boléro with the New York Philharmonic on November 14, 1929.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-4", "d_text": "Recommended Recording: Hilary Hahn; Esa-Pekka Salonen, Swedish Radio Symphony Orchestra (Deutsche Grammophon)\nBorn 1 October 1865 in Paris, France; Died 17 May 1935 in Paris\nPremiere: April 22, 1912; Paris\nInstrumentation: 3 flutes (3rd doubling piccolo), 2 oboes, English horn, 2 clarinets, bass clarinet, 3 bassoons, 4 horns, 3 trumpets, 3 trombones, tuba, timpani, percussion (bass drum, cymbals, snare drum, tambourine, triangle, xylophone), 2 harps, celeste, strings\nApproximate Duration: 19 minutes\nNowadays, Paul Dukas is most often remembered for The Sorcerer’s Apprentice, that perennial favorite of young people’s concerts. (And who can forget the images of Mickey Mouse and water-toting broomsticks in Walt Disney’s Fantasia?) Perhaps one of the reasons we don’t hear a lot of Dukas’ music is that he didn’t leave us much. Meticulous to a fault, some might say, the composer relegated most of his compositions to the fireplace. In fact, La Péri almost suffered the same fate: Composed “for a bet,” it was spared only by the intervention of several respected friends, who begged Dukas not to destroy the manuscript.\nDukas studied at the Paris Conservatoire, where Debussy was a friend and fellow student. He cultivated his musical craftsmanship to a high degree and developed an ear for orchestral color that made his work particularly influential in that regard. He was also a music writer, professor, critic, and editor. In the latter role, he made important contributions to the catalog of the Durand publishing house, creating well-considered editions of the keyboard works of Beethoven, Scarlatti, Couperin, and Rameau. 
In addition to those already mentioned, his surviving works include the three-movement Symphony in C, an ambitious piano sonata, and the opera Ariane et Barbe-bleue, among others.\nFor his only ballet – he called it a poème dansé (danced poem) – Dukas turned to an ancient Persian legend. In Persian mythology, a péri is a fairy creature.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-2", "d_text": "The orchestra played unheard, except occasionally when a slight lull occurred. The young man seated behind me in the box stood up during the course of the ballet to enable himself to see more clearly. The intense excitement under which he was labouring betrayed itself presently when he began to beat rhythmically on top of my head with his fists. My emotion was so great that I did not feel the blows for some time.\nMarie Rambert heard someone in the gallery call out: \"Un docteur … un dentiste … deux docteurs….\" The second performance (June 4) was less eventful, and, according to Maurice Ravel, the entire work could actually be heard.\nThe first season ended on June 26, 1913, with a performance of Pénélope, and the new one opened on October 2 with the same work. On October 9 d'Indy conducted Carl Maria von Weber's opera Der Freischütz. On October 15 Debussy conducted the Ibéria section from his orchestral triptych Images pour orchestre, and a week later he conducted his cantata La Damoiselle élue. By November 20 Astruc was out of money and was ejected from the theatre, and the sets and costumes were impounded. The following season consisted of operas presented by Covent Garden and the Boston Opera Company.\nDuring most of World War I, the theatre was closed, but the Congress of Allied Women on War Service was held there in August 1918. Pavlova's ballet company presented a short season of dance performances in 1919.\nThe theatre was purchased by Madame Ganna Walska (Mrs. Harold Fowler McCormick) in 1922. 
From 1923 the smaller Comédie stage upstairs was the home of Louis Jouvet's long-running medical satire, Dr. Knock, and in late 1924 the theatre also premiered the Ballets suédois production of Francis Picabia's \"instantaneist\" ballet Relâche, with music by Erik Satie.\nThe theatre presents about three staged opera productions a year, mostly baroque or chamber works, suited to the modest size of its stage and orchestra pit. In addition, it houses an important concert season.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-1", "d_text": "In spite of the ballet’s length, a small number of musical leitmotifs gives musical unity to the score. The music, some of the composer’s most passionate, is widely regarded as some of Ravel’s best, with extraordinarily lush harmonies typical of the impressionist movement in music. Even during the composer’s lifetime, contemporary commentators described this ballet as his masterpiece for orchestra. He extracted music from the ballet to make two orchestral suites, which can be performed with or without the chorus. The second of the suites, which includes much of the last part of the ballet and concludes with the “Danse generale”, is particularly popular. 
When the complete work is itself performed live, it is more often in concerts than in staged productions.\nPerformed by: Philharmonia Orchestra\nArtwork: Remedios Varo, “Revelation or The Clockmaker”.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-5", "d_text": "- Alexander Scriabin, Prometheus: The Poem of Fire, Moscow, March 2, 1911\n- Maurice Ravel's orchestration of Modest Mussorgsky's Pictures at an Exhibition, Paris, October 19, 1922\n- Sergei Prokofiev, First Violin Concerto with Marcel Darrieux as soloist, Paris, October 18, 1923\n- Prokofiev, Second Symphony, Paris, June 6, 1925\n- Prokofiev, Fourth Symphony, Boston, November 14, 1930\n- George Gershwin, Second Rhapsody, Boston Symphony Orchestra, Symphony Hall, Boston, 29 January 1932\n- Béla Bartók, Concerto for Orchestra, Boston Symphony Orchestra, Symphony Hall, Boston, December 1, 1944\n- Samuel Barber, Knoxville: Summer of 1915, Eleanor Steber as soloist, Boston Symphony Orchestra, 1948\n- Leonard Bernstein, The Age of Anxiety, Leonard Bernstein as soloist, Tanglewood, 1949\n- Aaron Copland, Appalachian Spring (suite), Boston Symphony Orchestra, 1945\n- Arnold Bax, Symphony No. 2, Boston, December 13, 1929\n- Maurice Ravel's orchestration of Mussorgsky's Pictures at an Exhibition, Boston Symphony Orchestra, October 1930\n- Jean Sibelius, Seventh Symphony, BBC Symphony Orchestra, HMV, London, 1933\n- Roy Harris, Third Symphony, Boston Symphony Orchestra, 1939\n- Hector Berlioz, Harold in Italy with William Primrose as soloist, 1946\n- Aaron Copland, Appalachian Spring (suite), Boston Symphony Orchestra, 1946\nNotes and references\n- Koussevitzky's original Russian forename is usually transliterated into English as either \"Sergei\" or \"Sergey\"; however, he himself adopted the French spelling \"Serge\", using it in his signature. (See The Koussevitzky Music Foundation's official web site. Retrieved 2009-11-05.) 
His surname can be transliterated variously as \"Koussevitzky\", \"Koussevitsky\", \"Kussevitzky\", \"Kusevitsky\", or, into Polish, as \"Kusewicki\"; however, he himself chose to use \"Koussevitzky\".", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-3", "d_text": "Prokofiev's inexperience in ballet led him to revise the work extensively in the 1920s, following Diaghilev's detailed critique, prior to its first production. The ballet's premiere in Paris on 17 May 1921 was a huge success and was greeted with great admiration by an audience that included Jean Cocteau, Igor Stravinsky and Maurice Ravel. Stravinsky called the ballet \"the single piece of modern music he could listen to with pleasure,\" while Ravel called it \"a work of genius.\"\nFirst World War and Revolution:\nDuring World War I, Prokofiev returned to the Conservatory. He studied organ in order to avoid conscription. He composed The Gambler based on Fyodor Dostoyevsky's novel of the same name, but rehearsals were plagued by problems and the scheduled 1917 première had to be canceled because of the February Revolution. In the summer of that year, Prokofiev composed his first symphony, the Classical. This was his own name for the symphony, which was written in the style that, according to Prokofiev, Joseph Haydn would have used if he had been alive at the time. It is more or less classical in style but incorporates more modern musical elements (see Neoclassicism). This symphony was also an exact contemporary of Prokofiev's Violin Concerto No. 1 in D major, Op. 19, which was scheduled to premiere in November 1917. The first performances of both works had to wait until 21 April 1918 and 18 October 1923, respectively. He stayed briefly with his mother in Kislovodsk in the Caucasus. Worried about the enemy capturing Saint Petersburg, he returned in 1918. By then he was determined to leave Russia, at least temporarily. 
He saw no room for his experimental music and, in May, he headed for the USA. Before leaving, he developed acquaintances with senior Bolsheviks including Anatoly Lunacharsky, the People's Commissar for Education, who told him: \"You are a revolutionary in music, we are revolutionaries in life. We ought to work together. But if you want to go to America I shall not stand in your way.\"", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "\"I am now gradually orchestrating it\", we read in a letter to Nadezhda von Meck of 8/20–10/22 October 1880, and later: \"The Serenade... I composed from an innate impulse; that is something which arises from having freedom to think, and is not devoid of true worth\".\nBy 14/26 October the Serenade was ready, and Tchaikovsky set to work on its arrangement for piano duet, which according to the date on the manuscript was completed on 23 October/4 November 1880. Despatching the score and piano duet arrangement to Pyotr Jurgenson to be published, Tchaikovsky wrote: \"I happened to write a Serenade for string orchestra in four movements, and am sending it to you the day after tomorrow in the form of a full score and four-hand arrangement ... I love this Serenade terribly, and fervently hope that it might soon see the light of day\".\nAs noted above, Tchaikovsky's arrangement of the Serenade for piano duet (4 hands) was made between 14/26 October and 23 October/4 November 1880.\nThe Serenade was performed for the first time on 21 November/3 December 1880 at a private concert in the Moscow Conservatory by a force of professors and students, as a surprise for Tchaikovsky, who was visiting after a long absence from the Conservatory.\nOn 17/29 June 1881, Tchaikovsky wrote to Eduard Nápravník, asking if the Serenade might be included in one of the future concerts.
In his reply of 27 June/9 July that year, Nápravník agreed to perform the Serenade in one of the forthcoming concerts.\nThe first public performance of the Serenade for String Orchestra took place in Saint Petersburg on 18/30 October 1881, at the third symphony concert of the Russian Musical Society, conducted by Eduard Nápravník. In Moscow it was performed for the first time on 16/28 January 1882 at the seventh concert of the Russian Musical Society, conducted by Max Erdmannsdörfer.\nOther notable early performances include:\n- Tiflis, 2nd Russian Musical Society symphony concert, 27 March/8 April 1884, conducted by Mikhail Ippolitov-Ivanov\n- New York,", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-1", "d_text": "But Miami City Ballet isn’t just devoted to presenting the work of Balanchine. The ballet company has also performed pieces by many other choreographers over the years.\nThe upcoming Ted Shawn Theatre program will feature Martha Graham’s poetic 1948 piece “Diversion of Angels” as well as the world premiere of Margarita Armas’ “Geta.”\nThe program also includes Jerome Robbins’ “Antique Epigraphs,” which was first performed in 1984 by New York City Ballet. Inspired by ancient Greek sculptural forms, this ensemble work is set to the orchestral music of Claude Debussy, which you can hear performed live at Jacob’s Pillow.\nBut perhaps the highlight of Miami City Ballet’s program will be their performance of George Balanchine’s “Serenade.” One of the first ballets created by Balanchine in the United States in 1935, “Serenade” features Peter Ilyich Tchaikovsky’s melodic Serenade for Strings in C.\nNearly a century later, “Serenade” seems timeless with its graceful gestures and symmetrical movements. Often, the dancers seem to float across the stage, drifting like leaves in a breeze being swept up by Tchaikovsky’s melodious music. 
If you love classic ballet, this piece will melt your heart.\nAnd best of all, you can see “Serenade” performed by one of Balanchine’s best interpreters featuring a live orchestra in the best dance venue in the world.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-2", "d_text": "The turbulent premiere was followed by five more relatively peaceful performances before one show in London, which received mixed reviews, but the complete ballet and orchestral work were only performed seven times before the outbreak of the First World War.\nAfter the fighting ended, Diaghilev attempted to revive The Rite of Spring, but found nobody remembered the choreography. By then Nijinsky, the greatest dancer of his generation, was in mental decline.\nSince then The Rite has been adapted for and included in an estimated 150 productions around the world including gangster films, a punk rock interpretation, a nightmarish vision of Aboriginal Australia by Kenneth MacMillan, and Walt Disney's 1940 film Fantasia. A commemorative performance was staged at the Royal Albert Hall in London to mark the 50th anniversary of the premiere.\nTo mark this year's centenary of The Rite of Spring, described by Leonard Bernstein as the most important piece of music of the 20th century, both Sony and Universal have released box sets reprising the best versions in their back catalogues.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-1", "d_text": "This was the question I was left asking at the end of the first piece on the MCB program, Ballo della Regina, which is actually the most recent of the works, dating from 1978, and set to music from the Verdi opera Don Carlos. It is light and fast, filled with dazzling leaps by the male soloist (Renato Penteado) and complex variations by the all-female corps (including featured soloist Nathalia Arja, who does have a killer arabesque). 
But choreographed so assiduously to the music, the work comes off as a series of highly presentational interludes more than a self-sustaining whole, with the completion of a difficult or particularly acrobatic move, punctuated as it is by the score, not just craving, but demanding appreciation. It's an older version of showmanship in ballet that seems out of place in today's world--as evidenced by the confusion of the audience about when and with what measure of enthusiasm to applaud.\nI much preferred the second work on the program. Symphony in Three Movements was choreographed by Balanchine in 1972 as a tribute to Igor Stravinsky, who had died the previous year. Set to a score that the composer had written in 1945 as a commemoration of the end of WW II, the movement features striking diagonal machine-like formations and opposing windmill arm turns by the corps, recalling the wings of bomber planes (and, to be sure, the female riveters who built them). One also detects the influence of Balanchine's NYCB confrère, Jerome Robbins, especially in the Jets and Sharks-influenced jumps of the male dancers. Finally, at the centre of the piece is a beautifully spare and simple duet that the program notes indicate was influenced by traditional Balinese dance. It begins with the male dancer positioned behind his female partner; as she bobs down, he pops up, the two of them syncopating this action with corresponding arm waves, as if they are swimming.\nThe evening concluded with the oldest piece on the program, Serenade, choreographed (in 1934) soon after Balanchine had emigrated to the US, and set on students from his newly formed School of American Ballet.", "score": 15.758340881307905, "rank": 85}, {"document_id": "doc-::chunk-3", "d_text": "Prokofiev's inexperience in ballet led him to revise the work extensively in the 1920s, following Diaghilev's detailed critique, prior to its first production. 
The ballet's premiere in Paris on 17 May 1921 was a huge success and was greeted with great admiration by an audience that included Jean Cocteau, Igor Stravinsky and Maurice Ravel. Stravinsky called the ballet \"the single piece of modern music he could listen to with pleasure,\" while Ravel called it \"a work of genius.\"\nFirst World War and Revolution\nDuring World War I, Prokofiev returned to the Conservatory. He studied organ in order to avoid conscription. He composed The Gambler based on Fyodor Dostoyevsky's novel of the same name, but rehearsals were plagued by problems and the scheduled 1917 première had to be canceled because of the February Revolution. In the summer of that year, Prokofiev composed his first symphony, the Classical. This was his own name for the symphony, which was written in the style that, according to Prokofiev, Joseph Haydn would have used if he had been alive at the time. It is more or less classical in style but incorporates more modern musical elements (see Neoclassicism). This symphony was also an exact contemporary of Prokofiev's Violin Concerto No. 1 in D major, Op. 19, which was scheduled to premiere in November 1917. The first performances of both works had to wait until 21 April 1918 and 18 October 1923, respectively. He stayed briefly with his mother in Kislovodsk in the Caucasus. Worried about the enemy capturing Saint Petersburg, he returned in 1918. By then he was determined to leave Russia, at least temporarily. He saw no room for his experimental music and, in May, he headed for the USA. Before leaving, he developed acquaintances with senior Bolsheviks including Anatoly Lunacharsky, the People's Commissar for Education, who told him: \"You are a revolutionary in music, we are revolutionaries in life. We ought to work together. But if you want to go to America I shall not stand in your way.\"", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-2", "d_text": "2 (1909) and Four Pieces, Op. 
4 (1908) are highly chromatic and dissonant works. He composed his first two piano concertos around this time, the latter of which caused a scandal at its premiere (23 August 1913, Pavlovsk). According to one account, the audience left the hall with exclamations of \"'To hell with this futuristic music! The cats on the roof make better music!'\", but the modernists were in rapture.\nIn 1911, help arrived from renowned Russian musicologist and critic Alexander Ossovsky, who wrote a supportive letter to music publisher Boris P. Jurgenson, thus a contract was offered to the composer. Prokofiev made his first foreign trip in 1913, travelling to Paris and London where he first encountered Sergei Diaghilev's Ballets Russes.\nThe first ballets:\nIn 1914, Prokofiev finished his career at the Conservatory by entering the so-called 'battle of the pianos', a competition open to the five best piano students for which the prize was a Schreder grand piano: Prokofiev won by performing his own Piano Concerto No. 1. Soon afterwards, he journeyed to London where he made contact with the impresario Sergei Diaghilev. Diaghilev commissioned Prokofiev's first ballet, Ala and Lolli, but rejected the work in progress when Prokofiev brought it to him in Italy in 1915. Diaghilev then commissioned Prokofiev to compose the ballet Chout (The Fool, the original Russian-language full title was Сказка про шута, семерых шутов перешутившего (Skazka pro shuta, semerykh shutov pereshutivshavo), meaning \"The Tale of the Buffoon who Outwits Seven Other Buffoons\"). 
Under Diaghilev's guidance, Prokofiev chose his subject from a collection of folktales by the ethnographer Alexander Afanasyev; the story, concerning a buffoon and a series of confidence tricks, had been previously suggested to Diaghilev by Igor Stravinsky as a possible subject for a ballet, and Diaghilev and his choreographer Léonide Massine helped Prokofiev to shape this into a ballet scenario.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "A note from Afa Sadykhly Dworkin, Executive and Artistic Director: This year’s theme represents a figurative dialogue, showcasing a selection of great gems from two eras (baroque and contemporary), while accenting the program with works by composers of color. The architecture of the program pays homage to great 20th century composers who were either deeply interested in baroque or had a tangible connection to it. We invite you to experience Perkinson’s ingenious use of baroque elements and consider Piazzolla’s early devotion to Bach, topped off by Britten’s Simple Symphony, Opus 4, a suite of dances common to the baroque era in their form.\nMontgomery, Strum (Sphinx Virtuosi Composer-in-Residence)\nStrum is the culminating result of several versions of a string quintet I wrote in 2006. It was originally written for the Providence String Quartet and guests of Community MusicWorks Players, and then arranged for string quartet in 2008 with several small revisions. In 2012 the piece underwent its final revisions with a rewrite of both the introduction and the ending for the Catalyst Quartet in a performance celebrating the 15th annual Sphinx Competition. Originally conceived for the formation of a cello quintet, the voicing is often spread wide over the ensemble, giving the music an expansive quality of sound. Within Strum I utilized texture motives, layers of rhythmic or harmonic ostinati that string together to form a bed of sound for melodies to weave in and out. 
The strumming pizzicato serves as a texture motive and the primary driving rhythmic underpinning of the piece. Drawing on American folk idioms and the spirit of dance and movement, the piece has a kind of narrative that begins with fleeting nostalgia and transforms into ecstatic celebration.\nPiazzolla, Tango No. 1 “Coral”\nPiazzolla, Tango No. 2 “Cantengue”\nAstor Piazzolla was an Argentinean composer and a virtuoso bandoneon player, who revolutionized and reinvented the tango, making it ever appealing, relevant and popular among the classical, world, jazz and other genres.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "Ballet de cour in twenty entrées with text by I. de Benserade and P. Quinault, music by Lully, production and choreography by Beauchamps and Pécour, and designs by J. Bérain. Premiered 21 Jan. 1681 at the Château de Saint-Germain-en-Laye, with Beauchamps as Mars and Pécour in various entrées. The ladies and gentlemen of the court provided the rest of the cast in this baroque extravaganza. The work's finale celebrated Love as the ruler of both gods and men. Four months after its premiere, a performance (at the Paris Palais Royal) featured the first appearance of a professional ballerina, Mlle Lafontaine. Harald Lander staged a new version for the Royal Danish Ballet in 1962.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-1", "d_text": "The solo Angel created for Manuel Legris by Renato Zanella was premiered at the Wiener Staatsoper in 1999. He also performed in Manon with the Wiener Staatsopernballett during a guest appearance in Madrid in 2000. Manuel Legris has also made guest appearances with foreign ballet companies at the Wiener Staatsoper: in 1989 with the Tokyo Ballet, in 2000 with the Paris Opéra Ballet during “ImPulsTanz”. 
During guest appearances by the Paris Opéra Ballet at other Viennese theatres, he danced in TANZ ’86 at the Theater an der Wien in 1986 and at the Burgtheater as part of “ImPulsTanz” in 2005.\nIn 2008 he performed in the “ImPulsTanz” ballet gala at the Burgtheater. In 2012 at the Burgtheater he presented the gala Manuel Legris & Guests for this festival in which he also performed. In the Austrian Broadcasting Corporation broadcast of the Vienna Philharmonic’s 2001 New Year’s Concert, he danced a solo choreographed by Zanella. Recent performances include The Bat in Peking (National Ballet of China), Pas de deux from Onegin (Gala Tokyo Ballet, 50-year jubilee) and an appearance at the Shanghai Grand Theatre.\nIn his first season as Director of the Wiener Staatsballett, Manuel Legris presented altogether eight premieres, five at the Wiener Staatsoper, and three at the Volksoper Wien. For the enormously successful premiere of Rudolf Nureyev’s version of Don Quixote, Manuel Legris himself was responsible for the staging. In his second season, together with Elisabeth Platel, he staged Pierre Lacotte’s La Sylphide; in the season 2012/2013 he produced Rudolf Nureyev’s The Nutcracker and in the 2013/2014 season Swan Lake. At the multi-part evening Junge Talente des Wiener Staatsballetts at the Volksoper Wien his choreography Donizetti Pas de deux was presented.\nAs a dancer, he appeared at the Wiener Staatsoper at the opening ceremony of the Vienna Opera Ball 2011.", "score": 13.897358463981183, "rank": 90}, {"document_id": "doc-::chunk-1", "d_text": "Like Stravinsky, Glazunov made his first recording in London; but unlike the younger composer, it was not to lead to a further series of discs. In his memoirs, Columbia producer Joe Batten recalled the Seasons recording sessions under Glazunov’s direction: “Here is a far too seldom played suite of happy music. It was recorded at the old Portman Rooms in Baker Street. 
I shall never forget meeting him, tall, elderly, so ill and frail with gout and rheumatism that I wondered if he could stand the physical strain of three or four hours of intensive recording. But he did and produced a performance of sheer beauty.”\nNone of the frailness described is evident in this energetic and stylish reading, but only some of the beauty survives on the issued discs. Recorded over three sessions, at least two different engineering approaches were used: one which favored a warm, bass-full sound, and the other which was treble-oriented almost to the point of shrillness. Despite its drawbacks, it remains invaluable as the only recording the composer ever made.\nThe sources for the transfers were American Columbia pressings: “Viva-Tonal” copies for the Glazunov; a “Full-Range” label edition for Pulcinella; and a large label, post-“Viva” pressing with patches from a laminated French Columbia copy for Petrushka.\nSTRAVINSKY Petrushka – Ballet Suite (1911)\nSymphony Orchestra ∙ Igor Stravinsky\nRecorded 27 – 28 June 1928 in London\nMatrix nos.: WAX 3867/72\nFirst issued as Columbia L 2173/5\nSTRAVINSKY Pulcinella – Ballet Suite (1920)\nWalther Straram Concert Orchestra ∙ Igor Stravinsky\nRecorded 6 May 1932 & 12 November 1928 in the Studio Albert and Théâtre des Champs-Elysées, Paris\nMatrix nos.: WLX 1605/6 & WLX 626/7\nFirst issued on French Columbia LFX 289 & D 15126\nGLAZUNOV The Seasons – Ballet in 1 Act, Op.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-2", "d_text": "Fortunately he had started making a name for himself as a composer, although he frequently caused scandals with his forward-looking works. The Sarcasms for piano, Op. 17 (1912), for example, make extensive use of polytonality, and Etudes, Op. 2 (1909) and Four Pieces, Op. 4 (1908) are highly chromatic and dissonant works. 
He composed his first two piano concertos around this time, the latter of which caused a scandal at its premiere (23 August 1913, Pavlovsk). According to one account, the audience left the hall with exclamations of \"'To hell with this futuristic music! The cats on the roof make better music!'\", but the modernists were in rapture.\nIn 1911 help arrived from renowned Russian musicologist and critic Alexander Ossovsky, who wrote a supportive letter to music publisher Boris P. Jurgenson, thus a contract was offered to the composer. Prokofiev made his first foreign trip in 1913, travelling to Paris and London where he first encountered Sergei Diaghilev's Ballets Russes.\nThe first ballets\nIn 1914, Prokofiev finished his career at the Conservatory by entering the so-called 'battle of the pianos', a competition open to the five best piano students for which the prize was a Schreder grand piano: Prokofiev won by performing his own Piano Concerto No. 1. Soon afterwards, he journeyed to London where he made contact with the impresario Diaghilev. Diaghilev commissioned Prokofiev's first ballet, Ala and Lolli, but rejected the work in progress when Prokofiev brought it to him in Italy in 1915. Diaghilev then commissioned Prokofiev to compose the ballet Chout. Under Diaghilev's guidance, Prokofiev chose his subject from a collection of folktales by the ethnographer Alexander Afanasyev; the story, concerning a buffoon and a series of confidence tricks, had been previously suggested to Diaghilev by Igor Stravinsky as a possible subject for a ballet, and Diaghilev and his choreographer Léonide Massine helped Prokofiev to shape this into a ballet scenario.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-5", "d_text": "There he performed several of his more adventurous piano works, such as his highly chromatic and dissonant Etudes, Op. 2 (1909). 
His performance of it impressed the organisers of the Evenings sufficiently for them to invite Prokofiev to give the Russian premiere of Arnold Schoenberg's Drei Klavierstücke, Op. 11. Prokofiev's harmonic experimentation continued with Sarcasms for piano, Op. 17 (1912), which makes extensive use of polytonality. He composed his first two piano concertos around then, the latter of which caused a scandal at its premiere (23 August 1913, Pavlovsk). According to one account, the audience left the hall with exclamations of \"'To hell with this futuristic music! The cats on the roof make better music!'\", but the modernists were in rapture.\nIn 1911, help arrived from renowned Russian musicologist and critic Alexander Ossovsky, who wrote a supportive letter to music publisher Boris P. Jurgenson (son of publishing-firm founder Peter Jurgenson [1836–1904]); thus a contract was offered to the composer. Prokofiev made his first foreign trip in 1913, travelling to Paris and London where he first encountered Sergei Diaghilev's Ballets Russes.\nIn 1914, Prokofiev finished his career at the Conservatory by entering the so-called 'battle of the pianos', a competition open to the five best piano students for which the prize was a Schreder grand piano: Prokofiev won by performing his own Piano Concerto No. 1. Soon afterwards, he journeyed to London where he made contact with the impresario Sergei Diaghilev. Diaghilev commissioned Prokofiev's first ballet, Ala and Lolli; but when Prokofiev brought the work in progress to him in Italy in 1915 he rejected it as \"non-Russian\".", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-1", "d_text": "Joseph, professor emeritus of music at Skidmore, is the author of four books on the life and music of Igor Stravinsky, including Stravinsky’s Ballets (2012, Yale Music Masterworks). 
Joseph won the ASCAP Deems Taylor Award for his book\nStravinsky and Balanchine: A Journey of Invention.\nWentink, curator of special collections and college archives at Middlebury, was manuscript archivist at the Dance Collection of the New York Public Library for seven years. He is a writer, editor, dance and film historian who also teaches, including courses on “Diaghilev’s Ballets Russes and the Creation of Modern Culture” and “From George Washington to John Travolta: Social Dance in American Popular Culture.”\nBallets Russes: A transformative company\nThe Ballets Russes, a traveling Russian ballet company, was one of the most innovative ballet companies of the 20th century. Director Sergei Diaghilev transformed the standard of ballet to focus not only on the technique, but also on the expressiveness of movements. He departed from the classical form of a corps de ballet, placing importance on individual dancers and including more lead male dancers. The pieces that Diaghilev directed illustrated one consistent theme; his collaborations with set designers and musicians created a more cohesive and powerful performance.\nThe Skidmore program features elements that will be familiar to Saratoga-area ballet audiences. The first piece, Les Sylphides, choreographed by Michel Fokine with music by Frédéric Chopin, is a one-act, plotless Romantic ballet — a ballet of mood. This enchanting piece includes dancers dressed as white-clad sylphs — mythological airy sprits — dancing under the moonlight with a young male poet. 
Les Sylphides is noted for its romantic Chopin waltzes, including the Waltz in G-flat major, the Waltz in C-sharp minor, and the Grande Valse Brillante in E-flat major.\nThe second piece, L’Apres-midi d’un Faune, choreographed by Vaslav Nijinsky with music by Debussy, is one of the first modern ballets that became known for its controversial eroticism and its rejection of classical formalism.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-5", "d_text": "(Nineteenth-century composers were fascinated by these tales: Robert Schumann, no less, wrote a large-scale oratorio called Das Paradies und de Peri.) In Dukas’ scenario, the young prince Iskender is journeying far and wide in search of the lotus flower that will grant immortality. He encounters a beautiful péri, sleeping in a jewel-bedecked bower with the lotus in her hand. Gazing upon her, Iskender falls in love. Without waking her, the prince snatches the flower, but when she awakens she performs the dance of the péris. At the climax of the dance, Iskender returns the lotus in exchange for a kiss. The fairy then melts into the glowing light of sunset and Iskender realizes he has lost her forever. He feels the darkness surround him, knowing that his end is near.\nIn crafting his many-hued musical setting, Dukas pulled out all the stops, creating a tour de force of Impressionist orchestral color and technique. The score is a rich tapestry of elegant, wispy effects and deliciously opulent instrumental shadings. Two themes engage in a drama of confrontation, like Iskender and La Péri. Much of the piece portrays the impassioned dance of the Péri. The popular fanfare that opens the work, often excerpted, bears no thematic relationship to the ballet itself. 
Added as an afterthought, it might have been a signal to audiences of the day to quiet down before the soft opening of the main body of the poème.\nThe composer dedicated La Péri, his last published work, to Natalie Trouhanova, the Russian ballerina who danced the title role at its Paris premiere in 1912. The stage was decorated with golden mountains, crimson valleys, and trees laden with silver fruit. The evening also included the premieres of three other ballets – by Maurice Ravel, Vincent d’Indy, and Florent Schmitt, each conducted by its composer.\nRecommended Recording: Jesús López-Cobos, Cincinnati Symphony Orchestra (Telarc)\nBorn 17 June 1882 in Lomonosov, Russia; Died 6 April 1971 in New York City, New York\nComposed: 1909-1910; rev.", "score": 11.600539066098397, "rank": 95}, {"document_id": "doc-::chunk-9", "d_text": "2) (1941)\n- Concerto Barocco (1941)\nFor the Ballet del Teatro de Colón\n- Mozart Violin Concerto (1942)\nFor Ballet Theatre\n- Waltz Academy (1944)\n- Theme and Variations (1947)\nFor Ballet Society\n- The Four Temperaments (1946)\n- L'enfant et Les Sortilèges (The Spellbound Child) (1946)\n- Haieff Divertimento (1947)\n- Symphonie Concertante (1947)\n- Orpheus (1948)\nFor the Paris Opera Ballet\n- Pas de Trois Classique (also known as Minkus Pas de Trois) (1948)\nFor New York City Ballet\n- La Sonnambula (1946)\n- Bourrée Fantasque (1949)\n- The Firebird (1949; later revised with Jerome Robbins)\n- Sylvia Pas De Deux (1950)\n- Swan Lake (after Lev Ivanov) (1951)\n- La Valse (1951)\n- Harlequinade Pas De Deux (1952)\n- Metamorphoses (1952)\n- Scotch Symphony (1952)\n- Valse Fantaisie (1953/1967)\n- The Nutcracker (1954)\n- Ivesiana (1954)\n- Western Symphony (1954)\n- Glinka Pas De Trois (1955)\n- Pas De Dix (1955)\n- Divertimento No. 
15 (1956)\n- Allegro Brillante (1956)\n- Agon (1957)\n- Square Dance (1957)\n- Gounod Symphony (1958)\n- Stars and Stripes (a ballet in five \"campaigns\") (1958)\n- Episodes (1959)\n- Tschaikovsky Pas de Deux (1960)\n- Monumentum pro Gesualdo (1960)\n- Donizetti Variations (1960)\n- Liebeslieder Walzer (1960)\n- Raymonda Variations (1961)\n- A Midsummer Night's Dream (1962)\n- Bugaku (1963)\n- Meditation (1963)\n- Movements for Piano and Orchestra (1963)\n- Harlequinade (1965)\n- Brahms-Schoenberg Quartet (1966)\n- Jewels (1967)\n- La Source (1968)\n- Who Cares?", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-8", "d_text": "The Firebird (1910) was seen as an astonishingly accomplished work for such a young artist (Debussy is said to have remarked drily: \"Well, you've got to start somewhere!\"). Many contemporary audiences found Petrushka (1911) to be almost unbearably dissonant and confused. The Rite of Spring nearly caused an audience riot. It stunned people because of its willful rhythms and aggressive dynamics. The audience's negative reaction to it is now regarded as a theatrical scandal as notorious as the failed runs of Richard Wagner's Tannhäuser at Paris in 1861 and Jean-Georges Noverre's and David Garrick's Chinese Ballet at London on the eve of the Seven Years' War. However, Stravinsky's early ballet scores are now widely considered masterpieces of the genre. 
Even his later ballet scores (such as Apollo), while not as startling, were still superior to most ballet music of the previous century.\nSummary of contributions\n- The male dancer returns.\n- Expressiveness: work was not solely about technique or divertissements as found in classical ballet\n- Movement vocabulary freed\n- Individuals were important rather than a corps de ballet of classical form\n- Unified theme: pieces were often one act, and always sought to express a single theme throughout the piece\n- Collaboration: Choreographers and dancers collaborated with set designers and musicians in order to create pieces\n- Reflection of Russian taste, themes (idea of developing Russian art, rather than importing western art and influence)\nFilm of a performance\nDiaghilev always maintained that no camera could ever do justice to the artistry of his dancers, and it was long believed there was no film legacy of the Ballets Russes. However, in 2011 a 30-second newsreel film of a performance in Montreux, Switzerland, in June 1928 came to light. The ballet was Les Sylphides and the lead dancer has been identified as Serge Lifar.\nWhen Sergei Diaghilev died of diabetes in Venice on 19 August 1929, the Ballets Russes was left with substantial debts, its property was claimed by its creditors, and the company of dancers dispersed.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-2", "d_text": "In 2001, he created Les Larmes de Marco Polo for the Lyon International Biennial.\nIn 2002, he created 99 duos at the Chaillot National Theatre, the first part of a trilogy on ‘People’. In 2003, he prepared Trois générations for the Avignon Festival, which was eventually cancelled. The piece, which includes children, former dancers and the Company, was performed at the Rampe d’Echirolles in March 2004.\nIt was performed in May of the same year at the Chaillot National Theatre and was repeated in November 2005. 
The same year, he worked with the director Hans-Peter Cloos to produce a show combining dance, theatre and music, Les sept péchés capitaux by Bertolt Brecht and Kurt Weill. In 2006, he created Des Gens qui dansent, the third part of the trilogy initiated by 99 duos and Trois Générations and, in 2007, he repeated his flagship piece from the 80s, Ulysse, under the title Cher Ulysse.\nIn 2008, Bach danse experience with Mirella Giardelli and “L’Atelier des Musiciens du Louvre”; Armide by Lully with the conductor William Christie and the director Robert Carsen at the Théâtre des Champs-Elysées, Paris; Chroniques chorégraphiques - season 1, a sort of “stage movie” that allowed him to pursue his poetic research into genres and people.\nIn 2009, he created l’Homme à tête de chou, with the original words and music by Serge Gainsbourg in a version recorded for the show by Alain Bashung. In April 2011, he performed a solo with Faut qu’je danse ! as a prelude to the recreation of his trio Daphnis é Chloé in Grenoble.\nIn October 2011, again in Grenoble and with a piece for thirteen dancers, he took on Igor Stravinsky’s Le Sacre du printemps, which he presented in April 2012 at the Chaillot National Theatre, Paris, along with Tumulte and Pour Igor in the first part.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-0", "d_text": "Stravinsky - COE - BBC Music Magazine\n25 November 2009\nBBC Music Magazine\nIt takes an exceptional release these days to get away with a length of as little as 54 minutes. But then the Chamber Orchestra of Europe is a rather special ensemble, retaining 18 of the pioneering young artists who originally set it up in 1981 complemented by the front desk players from all over the European Union. The result is a wonderfully characterful wind section plus a string body of vibrant unanimity and warmth. 
These qualities are enhanced by the church acoustic in which these sessions took place, yet skilful microphone placement has ensured that not a detail is lost.\nThe COE's leader Alexander Janiczek directs these readings from the first violin. In Stravinsky's ballet blanc for string orchestra, Apollon musagète (1928), he steers a middle way between the suave smoothness of the Karajan approach and the edgy haste Stravinsky's own later readings tended to take on. Tempos are moderate, but there is no lack of spring to the rhythms while the opening of the ‘Pas de Deux’ is mesmerising in its translucent poise.\nThe Suite from the Baroque make-over Pulcinella (1920) is an altogether livelier, gutsier affair as delivered here, with a terrific rhythmic snap to the finale. What a pity not to record the entire ballet with this outstanding élan. The disc could have contained it.\nRelated Links\nChamber Orchestra of Europe\nStravinsky Apollon musagète & Pulcinella Suite", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-0", "d_text": "This work is likely not in the public domain in the US (due to first publication with the required notice after 1925, plus renewal or \"restoration\" under the GATT/TRIPS amendments), nor in the EU and those countries where the copyright term is life+70 years. However, it is public domain in Canada (where IMSLP is hosted) and in other countries (China, Hong Kong, New Zealand) where the copyright term is life+50 years.\nPlease obey the copyright laws of your country. IMSLP does not assume any sort of legal responsibility or liability for the consequences of downloading files that are not in the public domain in your country.\n||Pervenches, 8 Miniatures for Piano\n||Periwinkles. Pièces enfantines.\n|Opus/Catalogue Number (Op./Cat. No.)\n|I-Catalogue Number (I-Cat. No.)
- Devant l'église\n- Au travail\n- Par un matin ensoleillé\n- Après le théâtre\n- Danse russe\n- Digui-don (Jeu d'enfant)\n|Year/Date of Composition (Y/D of Comp.)\n|Composer Time Period (Comp. Period)\nPublished 1942 by Belaieff according to Hofmeister's Monatsbericht (1942), p.24.", "score": 8.086131989696522, "rank": 100}]} {"qid": 35, "question_text": "How has aerial surveillance in the Navy changed with the introduction of unmanned vehicles?", "rank": [{"document_id": "doc-::chunk-1", "d_text": "Under the auspices of the Unmanned Combat Air Systems Demonstration between 2007 and 2015, Northrop’s larger X-47B undertook the first carrier operations for a drone, including the first unmanned trap aboard a carrier in May 2013.\nAt top — the Boeing MQ-25. Boeing photo. Above — an X-47B takes off from the USS George H.W. Bush in 2013. U.S. Navy photo\nIn 2013 the Navy announced the follow-on Unmanned Carrier-Launched Airborne Surveillance and Strike program to develop a stealthy surveillance and attack drone. But camps within the Navy quarreled over UCLASS’s mission set.\nShould UCLASS primarily fly surveillance missions in lightly-defended air space, like the Air Force’s Reaper drone does? Or should it be capable of penetrating enemy defenses in order to attack heavily-defended targets? “You can’t afford both, you have to make your bet,” Bob Work, a former Navy undersecretary who would serve as deputy defense secretary, said in 2013.\nThe Navy placed its bet in early 2016. Rather than optimizing UCLASS for surveillance or strike, it chose an entirely separate mission. Rebranded as the Carrier-Based Aerial Refueling System, the former UCLASS, ex-J-UCAS program would develop a tanker.\nCritics were displeased. “The Navy suppressed the best promise for innovation in this generation,” Dr. 
Monte Turner and Air Force lieutenant colonel Douglas Wickert wrote in a 2016 paper for National Defense University.\nNorthrop competed with Boeing, Lockheed Martin and General Atomics for the CBARS contract. As the requirements drifted away from stealth and instead emphasized endurance and fuel capacity, Northrop decided to drop out. Boeing’s new design has almost nothing in common with the company’s J-UCAS-era X-45.\nIn contrast to the X-45’s and X-47’s flying-wing configuration, Boeing’s MQ-25 is a straight-wing, twin-tail design. But its conventional planform belies its evolution across multiple programs that shifted in emphasis from attack to surveillance to tanking. The roughly 60-foot-long drone boasts a top-mounted air-intake that’s flush with the fuselage, a low-observable design feature that Northrop experimented with in the 1980s with its Tacit Blue demonstrator.", "score": 50.092349300950374, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "US Navy hands out $1bn robo-plane contract\nOpts out of 'optionally manned' option\nThe US Navy has made a long-awaited decision and awarded a billion-dollar development contract for the autonomous spyplane which it will use for ocean surveillance in future. The so-called BAMS (Broad Area Maritime Surveillance) craft will be developed by Northrop Grumman using its existing, small-airliner-sized Global Hawk roboplane.\n\"This announcement represents the Navy’s largest investment in unmanned aircraft systems to date,” said Captain Bob Dishman, BAMS programme chief.\n\"This is a significant milestone for the BAMS unmanned aerial system.\"\nWalk faster, human minions. Global Hawk wants fuel\nThe BAMS aircraft are intended to deliver surveillance across wide areas of ocean in a similar manner to the US Navy's current fleet of land-based P-3 Orion patrol planes, or the UK's MR2 Nimrods. 
Such maritime patrol craft are primarily intended to fight enemy submarines, but in late years have often been employed in overland tasks.\nThe American Orions are now very old, and the USN is keen to replace them. There is a manned aircraft, the P-8 Poseidon, under development for this. The Poseidon will be based on 737 airliner airframes from Boeing. However, even the USN doesn't have enough money to fully replace its Orion fleet with Poseidons. Hence it hopes to supplement its future P-8 fleet with BAMS drones that should cost less to buy and operate.\nNorthrop's Global Hawk offering has beaten two other contenders. Boeing had presented a plan which would have used \"optionally manned\" Gulfstream business jets, that could be crewed if desired but would also be capable of flying themselves. There was also a Lockheed-led team proposing to use a version of the well-known Predator-B/Reaper drone.\nNow, however, the Northrop Global Hawk has been selected as the winner. The jet-powered Global Hawk is one of the biggest and most capable robot aircraft around, able to make intercontinental flights and lurk above a warzone more than a thousand miles from its base for 24 hours at a time. Unlike the more famous Predator, it requires no pilot at the ground controls all the time - the Global Hawk is much more like a true robot aircraft. It has been in operation since the 1990s, and has been credited with many milestones and successes in US Air Force service.", "score": 48.213570997105464, "rank": 2}, {"document_id": "doc-::chunk-1", "d_text": "The Navy in May performed the first carrier launch of the system, followed by a series of touch-and-go landings. 
Additional testing will be performed through December, he said.\nThe prototype is about the size of a fighter jet and one of two developed by Northrop Grumman for the Navy's Unmanned Combat Air System Demonstrator, or UCAS-D, program, which has cost about $1.4 billion over eight years.\nThe effort is designed to demonstrate the technology and pave the way for a larger program to build the Navy’s armed, carrier-based drone fleet called Unmanned Carrier Launched Airborne Surveillance and Strike, or UCLASS.\nThe Navy plans to issue a draft request for proposals to develop the technology for the UCLASS program in August, followed by a formal request in the second quarter of 2014, Winter said. The service would pick a single winner by the end of next year, he said.\nNorthrop Grumman is expected to square off against other defense giants for the work, including Lockheed Martin Corp., Boeing Co. and General Atomics Aeronautical Systems Inc. Lockheed Martin is pitching the Sea Ghost, Boeing the Phantom Ray, and General Atomics the Sea Avenger.\n\"Although it looks like it could be an easy maneuver, today's successful arrested landings points back to a rigorous test plan focused on software development and system maturity to prove today that an autonomous unmanned system such as the X-47B can safely, seamlessly and predictably integrate into Navy carrier operations,\" said Carl Johnson, vice president and Navy UCAS program manager for Northrop Grumman Aerospace Systems.\nThe Navy wants to add drones to air wings to extend the range of its carrier groups. 
The X-47B can fly about twice as far as a manned F-35C fighter jet.\nMabus said the technology will allow the service to maintain more of a global presence -- \"being not just in the right place at the right time, but in the right place all the time.\" The service aims to equip the first carrier air wing with operational unmanned systems in 2019, he said.\n|Drones Navy Brendan McGarry|", "score": 45.69047256249538, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Navy UAS kicks off flight testing as Northrop finally offers to trim ops costs for USAF Global Hawk\nAs Northrop Grumman battles to save its high-flying unmanned aircraft franchise from Air Force termination, the aircraft's newer and more robust cousin, the Navy-sponsored Triton maritime surveillance aircraft, has begun flight testing.\nThe Triton's May 22 first flight not only breathes some life into Northrop's turbulent Global Hawk efforts—and possible hope for foreign sales—it marks progress in the Navy's quest to open up new sea-going roles for unmanned aircraft. The MQ-4C Triton, which is slated to enter service in 2015, took to the skies only days after the historic catapult launch of the tailless, stealthy X-47B from a Navy aircraft carrier deck. Once fielded in fiscal 2016, Triton will be the first Navy UAS to replace a mission currently handled by a manned system, says Capt. Jim Hoke, the Navy's Triton program manager.\nAfter launching from Northrop Grumman's Palmdale, Calif., facility, the 80-min. flight took place in restricted airspace near Edwards AFB, Calif. It was the first of up to nine envelope-expansion missions that will pave the way for more extensive systems flight trials at NAS Patuxent River, Md. 
The flight, though five months later than planned at the MQ-4C's unveiling last June, marks progress toward the Navy's maritime patrol modernization goal.\nThe two-pronged initiative, geared toward longer-range overwater surveillance as part of the renewed strategic focus on the Asia-Pacific region, will see the service's aging P-3 fleet replaced by a combination of 117 Boeing P-8As, based on the company's 737, and 68 land-based MQ-4Cs.\nThe most advanced variant of Northrop's Global Hawk, the MQ-4C, has been in development since 2008 under the $1.16 billion Broad Area Maritime Surveillance (BAMS) program. The Navy plans to buy 70 aircraft, including two test vehicles, for $13 billion.\nThe first flight is a timely boost for Northrop Grumman, which is fighting the premature termination of the U.S. Air Force's RQ-4B variant owing to cost and sensor performance issues.", "score": 43.24382712426202, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "COLUMN | US Navy plots a partially unmanned future course [Naval Gazing]\nThe deployment of unmanned surface vehicles (USV) by the world’s navies continues to gain momentum. 
Thus far, though, the naval usage of USVs has been focused on the conduct of prolonged, repetitive coastal and littoral operations, such as long term patrolling and surveillance missions, and mine detection and clearance, using small USV platforms, such as the Israeli Protector, and UK’s Autonomous Minesweeping System.\nA recent announcement by US Navy Surface Warfare Director Rear Admiral Ronald Boxall signalled a likely future radical change to US Navy doctrine, namely the much greater use by the service of USVs in combat roles.\nThe US Navy’s current top operational priority is to respond to the headlong advance of China’s PLA Navy, which is probably adding some 20 to 30 warships to its fleet every year, although it has become difficult to make reliable estimates since 2018, when China clamped down on media reporting of the commissioning of new vessels.\nThe American deep-sea surface fleet is currently dominated by nuclear-powered aircraft carriers, and their main escorts, Arleigh Burke-class destroyers. These destroyers are highly effective units, bristling with powerful weaponry and sensors, but they are expensive, and absorb a great deal of scarce manpower.\nThe current thinking in the Pentagon is that USV technology is now sufficiently mature for it to be feasible to augment the manned fleet with a significant number of high endurance, relatively economical, USVs. A mix of small sensor-equipped craft, and much larger, weaponised, vessels is envisaged. Accordingly, a request for information on how best to achieve this goal has been put out to the defence industry.\nIt is very likely that a future USV programme will feature a development of the Defence Advanced Research Projects Agency (DARPA) Sea Hunter, a 40-metre, 145-tonne trimaran, built by Vigor Shipyards in Oregon, and dubbed an “anti-submarine warfare continuous trail unmanned surface vessel” (ACTUV).\nThe craft is powered by twin diesels, enabling a speed of 27 knots. Range is 10,000 nautical miles. 
Search and detection sonar is fitted, together with a full suite of radar, electro-optical detection, and electronic warfare equipment. Data link enables the ACTUV, which incorporates advanced autonomous navigation and anti-collision features, to be controlled from a remote operations centre.", "score": 42.24445438507833, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "The U.S. Navy will attempt the first launch of an unmanned aircraft from an aircraft carrier on Tuesday, using Northrop Grumman\n's X-47B Unmanned Combat Air System demonstrator (UCAS).\n(X-47B Unmanned Combat Air System (UCAS) loaded onto flight deck of USS George H.W. Bush. U.S. Navy photo by Mass Communication Specialist 2nd Class Tony D.)\nThe X-47B is about the size of a fighter jet and has a range of more than 2,100 nautical miles, with the ability to fly fully autonomous. Tuesday's launch will occur aboard the USS George H.W. Bush (CVN 77) aircraft carrier in Norfolk, Va.\nThe Navy is looking to use the UCAS to demonstrate the integration of unmanned aircraft into carrier-based operations.\n\"Over the coming years, we will heavily leverage the technology maturation, networking advances and precision navigation algorithms developed from the X-47B demonstration program to pursue the introduction of the first operational carrier-based unmanned aircraft,\" Rear Adm. Mat Winter, the Navy's program executive officer for unmanned aviation, wrote in a Monday blog post. 
\"This future system will provide a 24/7, carrier-based intelligence, surveillance and reconnaissance and targeting capability, which will operate together with manned aviation assets allowing the opportunity to shape a more efficient carrier air wing,\"\nThe Navy is planning to demonstrate the aerial refueling capability of the X-47B during the 2014 fiscal year.\nRelated: Unmanned Systems News", "score": 42.15907232862495, "rank": 6}, {"document_id": "doc-::chunk-2", "d_text": "The X-47B (or planned, slightly larger, X-47C) is not the definitive carrier UCAS but the navy hopes it is good enough to show that unmanned aircraft can do the job. Normally, \"X\" class aircraft are just used as technology demonstrators. But the X-47 program has been going on for so long, and has incorporated so much from UAVs already serving in combat, that the X-47B may end up eventually running recon and bombing missions as the MQ-47B.\nThe Department of Defense leadership is backing the navy efforts and spurring the air force to catch up. At the moment, the air force has a hard time building enough MQ-9s, which are used as a ground support aircraft, in addition to reconnaissance and surveillance. 
But, as the navy is demonstrating, you can build UCAS that can carry more weapons, stay in the air longer, and hustle to where they are needed faster.", "score": 40.287364402339094, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "The next phase of introducing an unmanned aircraft system capability to the Royal Australian Navy has progressed.\nContracts have been signed with Austrian company, Schiebel Aircraft GmbH, to deliver Navy Minor Project 1942.\nThe maritime rotary wing unmanned aircraft system is being acquired to support trials and evaluation activities for at least the next three years.\nThe contract comprises two S100 Camcopter air-vehicles with mission control systems, as well as engineering, logistics and operational support.\nThe Navy is rapidly developing an enduring unmanned aircraft system capability that will improve the situational awareness for ships, providing a significant warfighting advantage.\nUnmanned systems will be critical in future warfighting and will allow ships to more readily see beyond the horizon, providing greater intelligence, surveillance and reconnaissance capabilities.\nThe project is another step towards ensuring that the Australian Navy remains at the forefront of maritime aviation technology.\nThe $16 million project will build on the knowledge and experience already gained through operating the fixed wing ScanEagle unmanned system, and enables the Navy to further define Australian requirements for operating tactical unmanned systems in the maritime environment.\nThe project will give Navy an understanding of the workforce requirements, organisational structures, performance specifications, tactics and procedures required to maintain a permanent unmanned capability.\nAs tactical unmanned aerial systems are emerging technologies, particularly vertical take-off and landing systems from ships at sea, Navy has adopted an innovative phased ‘learn by doing’ approach.\nNavy is comprehensively testing a variety of new 
systems to reduce risks ahead of acquiring a permanent unmanned capability in the early 2020s, which will see the capability embarked in the future fleet.\nThe in-country support for the system is being provided by BAE Systems in Nowra and Unmanned Systems Australia in Brisbane.", "score": 40.07190518057129, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "A Northrop Grumman X-47B Unmanned Combat Air System-Demonstrator (UCAS-D) is hoisted aboard USS Harry S Truman (CVN 75), marking a milestone in the Navy’s Unmanned Carrier Launched Airborne Surveillance and Strike (UCLASS) program. Equipment needed to operate the unmanned aircraft was added during the carrier’s recent overhaul. The X-47B “will demonstrate seamless integration into carrier flight deck operations through various tests. During each demonstration, the X-47B will be controlled remotely via a hand-held control display unit (CDU),” according to a Navy release. Truman will undertake three weeks of deck handling trials of the aircraft as well as other tests in preparation for actual flight operations scheduled for December of 2013.", "score": 39.09271201308542, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "[Video] U.S. Navy launches its killer drone off the deck of an aircraft carrier. A new era has begun.\nA new era for naval aviation has just begun.\nOn May 14, the US Navy successfully launched the Northrop Grumman X-47B unmanned combat air vehicle (UCAV) off the deck of an aircraft carrier for the first time. A breakthrough for robotic aviation and military implementation of unmanned systems.\nThe video, just released by the U.S. Navy, shows the X-47B Unmanned Combat Air Systems (UCAS) demonstrator – please note that both UCAV and UCAS acronyms are used for this drone, being taxied and then catapult-launched from the flight deck of USS George H.W. Bush.\nShip-board testing had started on Dec. 
9, 2012.\nOn the flight deck the X-47B (that on Nov. 29 successfully completed its first land-based catapult launch from Naval Air Station Patuxent River) is controlled using an arm-mounted control display unit (CDU).\nThe new gadget is a special remote control for moving the X-47B on flight decks which attaches to the wrist, waist and one hand. Through the device, deck operators have access to a display and can control the aircraft’s throttle, tailhook, steering, brakes and perform several other functions associated with maneuvering an aircraft on deck.", "score": 38.27044771455137, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "The unmanned aerial drone X-47B makes its first landing ever on the deck… (Rob Ostermaier, Daily Press)\nA government watchdog report issued Thursday calls for greater congressional oversight of the Navy's carrier-based unmanned aircraft program, and cites other \"programmatic risks\" related to the budget and schedule.\nThe Navy is pursuing the program in a way that will limit the ability of Congress to hold it accountable for meeting goals on cost, schedule and performance, the U.S. Government Accountability Office said.\nUnmanned, carrier-based aircraft are considered the next phase of naval aviation, a crucial component as the service seeks to stretch limited dollars and still maintain a worldwide presence.\nIn July, the Navy made history off the Virginia coast when it successfully landed an unmanned, computer-controlled drone aboard the Norfolk-based aircraft carrier USS George H.W. Bush. The bat-winged X-47B, built by Northrop Grumman, was a prototype, and the carrier landing was a first.\nThe Navy's next step is the Unmanned Carrier Launched Airborne Surveillance and Strike System, or UCLASS. 
The Navy released a request for proposals to four companies for work leading to that program, which is the focus of the GAO report.\nIn fiscal year 2014, the Navy plans to commit $3.7 billion to develop, build and field anywhere from six to 24 aircraft as an initial complement in UCLASS. However, it does not plan to initiate a key review of the program until 2020, when UCLASS has been fielded.\nDefense Secretary Chuck Hagel should direct the Navy to hold the review in fiscal year 2015, GAO says, because that will trigger key oversight measures.\nThe Navy is sticking by its approach. It sees UCLASS as a technology development program. Instead of starting a formal review early on, it plans to take advantage of Defense Department flexibility to gather data so the program is ultimately successful. The Navy says its approach conforms to requirements set forth in the 2012 National Defense Authorization Act.\nHowever, GAO says the Navy's early work goes \"well beyond technology development . . . and thus warrants oversight commensurate with a major weapon system development program.\"\nApart from the oversight issue, the GAO report raises the following \"programmatic risks\":\nThe $3.7 billion cost estimate exceeds the funding the Navy expects to budget for the program through 2020.", "score": 37.937339352561075, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "The first U.S. Navy MQ-4C Triton unmanned aircraft system (UAS) recently flew 11 hours from the Northrop Grumman facility in Palmdale, CA, to the Naval Air Station Patuxent River in Maryland to start its next phase of testing, moving the program closer toward operational assessment. The MQ-4C Triton UAS provides real-time intelligence, surveillance, and reconnaissance over ocean and coastal regions. 
During the flight, the joint Navy/Northrop team controlled the aircraft from a ground station in Palmdale, which served as the forward operating base, and a Navy System Integration Lab at Patuxent River, which served as the main operating base. The aircraft traveled along the same flight path that was used to transfer the Broad Area Maritime Surveillance Demonstrator from Palmdale to Patuxent River several years ago.\nAt Patuxent River, the aircraft will be outfitted with a sensor suite before going through a series of sensor integration flights. One of Triton's primary sensors, the AN/ZPY-3 multifunction active sensor radar, will provide what Northrop describes as \"an unprecedented 360° field of regard\" for detecting and identifying ships.\nOver the next few weeks, two other Tritons, one of which is a demonstration aircraft owned by Northrop, will also fly to Patuxent River. Both will be used during system development and demonstration tests.\nBased on the Global Hawk UAS, Triton features a reinforced airframe and wing, along with de-icing and lightning protection systems. These features allow the aircraft to descend through cloud layers to gain a closer view of ships and other targets at sea when needed.\nTriton is specifically designed for maritime missions of up to 24 hours. It can fly at altitudes higher than 10 mi, allowing for coverage of 1 million nmi² of ocean in a single mission.\nThe Navy’s program of record calls for 68 aircraft to be fielded.", "score": 36.31902927560396, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "Naval Air Systems Command’s (NAVAIR) Navy & Marine Corps Unmanned Air Systems (PMA-263) originally acquired the Global Hawk Maritime Demonstration (GHMD) program for the development of Navy doctrine and concepts of operations for large persistent unmanned air vehicles. 
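The altitude and coverage figures quoted for Triton can be sanity-checked with simple line-of-sight geometry. The sketch below is illustrative only: it assumes a spherical Earth of 6,371 km radius, ignores atmospheric refraction, and says nothing about actual radar performance.

```python
import math

EARTH_RADIUS_KM = 6371.0
KM_PER_NMI = 1.852
FT_TO_KM = 0.3048 / 1000

def horizon_distance_km(altitude_km):
    """Geometric distance to the horizon for a sensor at the given altitude."""
    return math.sqrt(2 * EARTH_RADIUS_KM * altitude_km)

def footprint_nmi2(altitude_km):
    """Instantaneous sea-surface footprint, modeled as a horizon-bounded circle."""
    radius_nmi = horizon_distance_km(altitude_km) / KM_PER_NMI
    return math.pi * radius_nmi ** 2

alt_km = 60000 * FT_TO_KM  # cruise altitude of roughly 60,000 ft
print(f"horizon: {horizon_distance_km(alt_km):.0f} km")
print(f"instantaneous footprint: {footprint_nmi2(alt_km):,.0f} nmi^2")
```

At 60,000 ft this gives a horizon of roughly 480 km and an instantaneous footprint on the order of 210,000 nmi², so the 1 million nmi² per-mission figure is best read as the area swept along the track over a long-endurance flight.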
Currently, the system is sustained by the Persistent Maritime Unmanned Aircraft Systems (PMA-262) program office and has been renamed the Broad Area Maritime Surveillance—Demonstrator (BAMS-D). To date, the BAMS-D team has utilized the RQ-4A long endurance air vehicle to refine tactics, techniques and procedures for use in a maritime environment.\nThe Navy’s RQ-4A Global Hawk air vehicle can soar nearly 11 miles above the ground or up to 60,000 feet. The high-flying aerial vehicle can fly persistently for more than 30 hours above most weather. Imagery and other data obtained by the aircraft feeds by satellite into the Navy ground segment consisting of a mission control element, a launch and recovery element, and a Navy-designed Tactical Auxiliary Ground Station (TAGS). Flown by Navy and Navy contractor pilots, the asset is controlled from Naval Air Station (NAS) Patuxent River, Md.\nBAMS-D successfully completed its first Navy split-site deployment in support of Trident Warrior ‘08 and RIMPAC exercises. An aircraft, shelter and personnel were located in Pt. Mugu, Calif., while mission command, control and execution remained at Patuxent River, Md.\nBAMS-D supported real-world operations under U.S. Northern Command, providing reconnaissance of wildfires in the rugged coastal mountains of California. Later tasking placed BAMS-D along the U.S. Gulf Coast to assess the damage left by Hurricane Ike. Despite challenging weather conditions remaining after the storm, BAMS-D sensor imagery aided first responders in Louisiana and Texas to most efficiently deploy their resources in the massive relief effort.\nBAMS-D was used to develop methods for integrating the Automatic Identification System (AIS) into Fleet operations. 
Experimentation using BAMS-D also benefitted the Naval Sea Systems Command Ocean Surveillance Initiative and Oceanographer of the Navy office activities assessing usefulness of long-endurance, high-altitude unmanned systems in collecting Fleet-relevant meteorological data.", "score": 35.15709246648288, "rank": 13}, {"document_id": "doc-::chunk-4", "d_text": "(Source: IHS Jane’s)\n14 Apr 19. USN expands MQ-8C Fire Scout capability. The US Navy (USN) is continuing to evolve the MQ-8C variant of the Fire Scout vertical take-off and landing unmanned aerial vehicle (VTOL UAV), service officials told Jane’s. Capability is currently spread across the two variants of the Fire Scout – the Schweizer 333-derived MQ-8B and the Bell 407-based MQ-8C – but eventually modifications will be made to ensure that the C-model can solely support the navy’s mission.\nModifications include the introduction of a Link 16 datalink to enhance the UAV’s ability to network the Lockheed Martin MH-60 naval helicopter. 
This will enable the helicopter crew to receive data being collected by the Fire Scout directly instead of relaying it via the Littoral Combat Ship (LCS).\n“The direct link between the two is the LCS at the moment,” Captain Eric Soderberg, programme manager at the USN’s Multi-Mission Tactical Unmanned Aerial Systems office (PMA-266), told Jane’s.\n“Future variants are going to have a Link 16, so that any Link 16-enabled platform will be able to share the sensor data from the Fire Scout, as well as the Link 16-enabled H-60s,” he added.\nThis will provide more scope for the UAV to further engage with other USN assets that are Link 16-enabled – which constitutes the majority of the service’s platforms – so the entire air wing’s worth of data would then potentially be available for the Fire Scout to receive, or for it to feed back into.\nThe MQ-8C’s 12-hour endurance enables it to carry out more on-station surveillance than the 3.5-hour endurance of the MH-60S, while the forthcoming introduction of the new Leonardo distributed aperture active electronically scanned array (AESA) radar on this variant will also provide a capability not found on the helicopter. (Source: IHS Jane’s)\n15 Apr 19. Terra Drone Acquires Stake in Slovenia’s C-Astral. Terra Drone Corporation has acquired a stake in Slovenia-based C-Astral Aerospace, which specializes in the manufacturing and services of fixed-wing small unmanned aircraft systems (UAS), with a specific focus on high-productivity and high-endurance surveying, security, and remote sensing.", "score": 34.37139525209362, "rank": 14}, {"document_id": "doc-::chunk-7", "d_text": "The F/A-18 Hornet will be around for many years to come, with the newer E and F model Super Hornets serving with front line squadrons and the E/A-18G Growler electronic warfare platform steadily replacing the venerable Prowlers. 
The next-generation Navy fighter, the F-35 Joint Strike Fighter, is known to many because of its well-publicized issues and delays, and its absence from such a historic event can only be pondered. It is still undergoing stringent flight tests at NAS Patuxent River but many were surprised that not even a mock-up was on display.\nWhat was shown as the future of Naval Aviation were three unmanned aerial vehicles (UAVs). The Global Hawk surveillance aircraft has been in the US Navy's inventory since 2006 and in 2008 the Navy awarded a billion-plus contract to Northrop Grumman for more Global Hawks. The Northrop Grumman MQ-8 Fire Scout is an unmanned reconnaissance helicopter. In 2010 an MQ-8B was involved in a drug interdiction in the Eastern Pacific Ocean when it was on a test flight from the USS McInerney. The final UAV on display was a mock-up of the X-47B Unmanned Combat Aerial System (UCAS). With its first flight happening a week before the kick-off event, on February 4th 2011, this really is the forefront of Naval aviation. The mock-up showed features that are not present on any other UAV: an in-flight refueling probe and a tail hook for carrier landings. Originally planned for 2011, carrier test landings will now take place in 2013, during stage two of development.\nAs mentioned earlier, the field of aviation has changed so drastically over the past 100 years that it is hard to imagine where we will be 100 years from now. What we do know is that the US Navy and its service men and women will be spearheading developments and helping to lead it into the next century.\nAppendix 1. Static Displays\n|1. C-130 Hercules||27. EA-6 Prowler (no show)||53. F4U Corsair|\n|2. V-22 Osprey||28. AV-8 Harrier||54. N3N Canary|\n|3. C-40 Clipper||29. F/A-18 Hornet||55.", "score": 34.209179921078494, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "Navy officer with Door County roots is leading unmanned surveillance aircraft mission\nELLISON BAY - A U.S. 
Navy officer with Door County roots has become the next commander of the Navy's new program to deploy unmanned surveillance aircraft.\nCommander John LeVoy, the son of Lee and Hugh LeVoy, of Ellison Bay, became the commanding officer of Unmanned Patrol Squadron 19 in a change of command ceremony June 5 aboard Naval Air Station Jacksonville, Florida.\nUnmanned Patrol Squadron 19 includes 300 sailors and 200 civilian contractors with the mission to fly two types of unmanned aerial vehicles, the MQ-4C Triton and RQ-4A Broad Area Maritime Surveillance-Demonstrator, according to a U.S. Navy news release.\n\"The change of command ceremony was wonderful,\" said Lee LeVoy. The couple was grateful to attend the event, she said.\nJohn LeVoy will lead the \"historic, first-ever flight of the MQ-4C Triton,\" the release said.", "score": 33.316016144571456, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "The U.S. Navy X-47B UCAV (unmanned combat air vehicle) made its first catapult launch on November 29th, 22 months after its first flight. This launch was not from a carrier but from an airfield built to the same size as a carrier deck and equipped with a catapult. This first launch was to confirm that the X-47B could handle the stress of a catapult launch. Another X-47B has been loaded onto the deck of a carrier, to check out the ability of the UCAV to move around the deck. If all goes well, the first carrier launch of an X-47B will take place next year, along with carrier landings. Last year the navy tested its UCAV landing software, using a manned F-18 for the test, landing it on a carrier completely under software control.\nIt was four years ago that the navy rolled out the first X-47B, its first combat UAV. This compact aircraft has a wingspan of 20 meters (62 feet), and the outer 25 percent folds up to save space on the carrier. It carries a two-ton payload and will be able to stay in the air for twelve hours. The U.S. 
is far ahead of other nations in UCAV development, and this is energizing activity in Russia, Europe, and China to develop similar aircraft. It’s generally recognized that robotic combat aircraft are the future, even though many of the aviation commanders (all of them pilots) wish it were otherwise. Whoever gets there first (a UCAV that really works) will force everyone else to catch up or end up the loser in their next war with someone equipped with UCAVs.\nThe U.S. Navy has done the math and realized that they need UCAS on their carriers as soon as possible. The current plan is to get these aircraft into service six years from now. But there is an effort to get the unmanned carrier aircraft into service sooner than that. The math problem that triggered all this is the realization that American carriers had to get within 800 kilometers of their target before launching bomber aircraft. Potential enemies increasingly have aircraft and missiles with range greater than 800 kilometers. The navy already has a solution in development, since the X-47B UCAS has a range of 2,500 kilometers.\nLast year the U.S.", "score": 33.12836086306637, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "The newest Navy Marine Corps UAV can take off and land from a small area on the battlefield, where its customizable payload of sensors will give Marines the best real-time tactical picture they've ever had.\nThe RQ-21A Blackjack has a 16-foot wingspan and can take off and land in a 40- by 40-foot area. Built by Insitu, the Blackjack can carry larger, more diverse intelligence, surveillance, and reconnaissance (ISR) payloads than its predecessors. The aircraft deployed to Afghanistan earlier this month. There, dedicated UAV squadron detachments will fly it from forward operating bases close to the battlefield.\n\"The majority of the sites we operate in do not have runways. That's why the RQ-21 is so well suited to these environments,\" says Col. 
Jim Rector, Navy and Marine Corps Small Tactical Unmanned Aircraft Systems program manager.\nUsing a line-of-sight link, the RQ-21A has a range of about 50 nautical miles from the launch point. However, operators can extend that range by using remote ground control stations (GCS) and, eventually, satellite technology. The UAV can stay aloft for up to 16 hours using a 100-cc two-stroke heavy-fuel reciprocating engine. Regimental commanders receive data from the drone's sensors in real time.\nTo launch and recover the drone, Marines would use the MKIV catapult launcher and MK2 Skyhook retriever. \"Those two components allow us to have true expeditionary capability and to operate in littoral areas from amphibious Navy ships,\" Rector says.\nThe launcher uses a pneumatic system to fling the Blackjack airborne the way an aircraft carrier flings jets. Prior to launch, the squadron crew brings the 135-pound RQ-21A up to full power and sets the launcher at the desired angle of attack. The UAV gets airborne within a 40-foot area and quickly clears obstacles in its launch flight path.\nRecovering the Blackjack is a bit more dramatic. In anticipation of the UAV's return, Marines extend the Skyhook's articulated telescoping mast. The Blackjack flies back to the launch/recovery area via commands through the GCS. Then the Marines snag it.\n\"Basically, what the Blackjack does is fly its wing right into a bungee-style line hanging from the Skyhook,\" Rector says. \"When it hits the line, the wing hooks onto it and locks.", "score": 32.56831402924534, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "U.K. Royal Navy Type 23 frigate. 
The UAV was put through trials last month on a Singaporean Navy frigate and a tank landing ship.\nIn the Navy and Marine Corps’ planning, the ScanEagle is a gap-filler for the future Small Tactical Unmanned Aerial System (STUAS, known as Group 3, formerly Tier II, UAS, for the Marine Corps).\nThe STUAS will be a procurement program for a UAV small enough to not require a runway. The Navy released a performance document in November and has been releasing more information on the program parameters as they have developed.\n“We are taking a somewhat less predictable approach to this program,” said Rear Adm. William Shannon, the Navy’s program executive officer for strike weapons and unmanned aviation. “We are trying to be more agile, trying to get this out to the warfighter faster. Instead of one full RFP [request for proposals], we’ve been releasing a series of bits of information on the program. As we’re absolutely sure about one piece, we’ll release it.”\nShannon said the requirements are being shaped by the input from various future operators of STUAS, including the surface warfare, amphibious warfare, Riverine and special operations communities, as well as the Marine Corps.\nThe goal is to reach initial operating capability in 2011, following a competition and a demonstration phase.\nThe STUAS air vehicle is required to operate from ship or shore with 10-hour endurance, a service ceiling of 15,000 feet and a maximum weight of 150 pounds. The UAV would carry EO and IR sensors and, for maritime operations, an AIS repeater.\nFor the Marine Corps, “the primary mission for Tier II/STUAS (Group 3) will be ISR, target acquisition and communications relay,” said Maj. Thomas Heffern, UAS capabilities officer for the service’s Combat Development Directorate. “However, future upgrades may include ISR payloads other than the standard EO/IR full-motion video sensor: synthetic aperture radar, [signals intelligence], multispectral, etc. 
Future growth might also include a weapons delivery capability.” ■", "score": 32.31568148414651, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "unmanned aerial vehicles\nOne year after President Obama vowed to create greater transparency and guidelines for drone strikes, a new bipartisan report from senior military and intelligence officials warns the “secret war” of lethal drone strikes risks putting the U.S. on a “slippery slope” toward perpetual war.\nNavy looking to build smaller drones that can be launched from ships at sea.\nGeneral Atomics Avenger has a longer range and bigger payload to operate in remote regions.\nA United Nations report says unmanned aerial vehicles are the weapons of choice in modern warfare.", "score": 31.737310610631827, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "Navy to Send More Unmanned Systems to Sea\nPhoto: General Dynamics\nThe Navy is moving ahead with unmanned surface and undersea vehicle development, and pursuing enabling technologies that will make the platforms operationally effective.\nA wide range of USVs and UUVs are in the works, littoral combat ship program executive officer Rear Adm. John Neagley said during a presentation at the Association for Unmanned Vehicle Systems International conference in National Harbor, Maryland.\n“Those capabilities will be delivered over the next couple years and start to get into our procurements in ‘18 and ‘19 and really start hitting the fleet,” he said.\nNeagley’s portfolio includes the unmanned maritime systems program office, PMS 406.\n“LCS was built from the ground up to really leverage and take advantage of unmanned systems,” he said. “It’s a modular ship … [with] a lot of reconfigurable space.” It has a built-in capability for launching and recovering UUVs and USVs, he noted.\nUnmanned vessels can range in size from small man-portable devices to extra-large platforms that are more than 50 meters in length. They allow the U.S. 
military to take warfighters out of harm’s way and perform certain missions more effectively and efficiently, he said.\nSurface vehicles that are in the works include the unmanned influence sweep system minesweeper (UISS); the mine countermeasures USV (MCM USV); and the Sea Hunter medium displacement USV, an anti-submarine warfare continuous trail unmanned vessel.\nOperational evaluation of the UISS is slated for spring 2018, and Milestone C is expected in the fourth quarter of this fiscal year, according to Neagley.\nConstruction and payload integration for the MCM USV is underway with initial operator testing in fiscal year 2019.\nThe Sea Hunter recently transitioned from the Defense Advanced Research Projects Agency to the Office of Naval Research, where development and testing will continue. The system could potentially transition to Navy operations this year, according to DARPA.\nUndersea vehicles that are moving through the development pipeline include: the Knifefish for hunting bottom and buried mines; the Snakehead large displacement UUV for intelligence, surveillance and reconnaissance; and the Orca extra-large UUV for mine warfare.\nThe Knifefish has undergone sea acceptance trials, and Milestone C is slated for the third quarter of this fiscal year, according to Neagley.", "score": 31.636817846382748, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "Presently, human oversight is necessary for performing an offensive attack. UVs in this role can act as a hosting platform for sensors and weapons, thereby increasing the capacity, flexibility and awareness of the manned combatants they support.\nIn peacetime, UVs, which account for lower procurement and sustainment costs, can protect assets from maritime threats. 
This includes deterring adversaries and disorienting enemy forces because there are more units to detect and track.\nDistributed Maritime Operations (DMO)\nAlthough the Navy also has an Unmanned Undersea Vehicles (UUVs) program, we’ll focus on Unmanned Surface Vehicles (USVs) in our discussion. To define their effectiveness and how they are deployed, these vehicles have been classified as large, medium, and small.\nGenerally, the smaller classes of USVs can be deployed from manned submarines or ships to further their operational reach. They are also “a much better option than sending a manned platform into a minefield,” Adm. John M. Richardson told the House Appropriations Defense Subcommittee.\nMedium Unmanned Surface Vehicles (MUSVs)\nMUSVs are required to be 45 ft to 190 ft long and have full-load displacements of about 500 tons. Suggested payloads are reconnaissance and electronic warfare systems. Sea Hunter is the first of its kind and recorded a successful unmanned sailing mission from San Diego to Hawaii and then back.\nAccording to a Defense News report, personnel from an escort vessel boarded the 132-ft-long Sea Hunter at short intervals to check electrical and propulsion systems. No human boarded the vessel during its 2,000-mile return journey, which took nine days. Some $200 million has gone into the Sea Hunter program over a period of four years.\nIn July 2020, after a competitive process, the Navy awarded L3 Technologies Inc a $35 million contract to develop one MUSV prototype. If the budget is provided, there is the option to procure eight more, pushing the entire contract up to just over $281 million.\nLarge Unmanned Surface Vehicles (LUSVs)\nThe requirements for LUSVs are lengths between 200 ft and 300 ft and full-load displacements of 1,000 tons to 2,000 tons.
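A quick arithmetic check on the Sea Hunter transit described above: 2,000 miles in nine days implies a very modest average speed. This is illustrative only; the report does not say whether the figure is statute or nautical miles.

```python
# Figures from the Defense News account of Sea Hunter's return leg.
DISTANCE = 2000.0  # miles (units unstated in the report)
DAYS = 9

avg_speed = DISTANCE / (DAYS * 24)  # distance units per hour
print(f"implied average speed: {avg_speed:.2f} per hour")
```

That works out to roughly 9.3 mph (or 9.3 kt if the figure is nautical miles): a loiter-speed transit consistent with an endurance-optimized unmanned vessel rather than a sprint.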
Senior Navy officials have described the LUSV as an adjunct weapons magazine that will integrate with manned platforms.", "score": 31.19798460733542, "rank": 22}, {"document_id": "doc-::chunk-2", "d_text": "The military sometimes calls them Unmanned Aerial Systems or Unmanned Aerial Vehicles -- or simply UAVs. But to the general public they're commonly known as drones.\n\"It is the future,\" said Scott O'Neil, executive director of research and engineering at Naval Air Warfare Center Weapons Division, which includes dozens of research and testing facilities at China Lake near Ridgecrest and the naval air station at Point Mugu in Ventura County.\n\"It's very powerful and very useful because we can take a human being out of harm's way,\" O'Neil said. \"And of course that's the whole idea.\"\nAnd they're not just developing air-based systems. The Weapons Division is testing submarine and surface-based vehicles designed for water environments -- and remote-operated land-based vehicles, including remotely operated tank-like vehicles.\n'CLEARED FOR WEIRD'\nLast week, Elijah Soto, director of unmanned systems, showed off a Ford F-350 pickup near an airstrip inside the restricted zone. On the outside, it looks like your neighbor's truck. But the front seat and other parts of this experimental vehicle are packed with electronics that give it the capacity to move through hazardous areas without a driver at the wheel.\nIt's remote-controlled, but it also has the ability to remember and drive a set path autonomously, Soto said. 
The proven dangers of improvised explosive devices, or roadside bombs, means troops on the ground could send supplies or transportation, or survey a perimeter in a high-risk area, without needlessly risking the lives of combat troops.\nUnmanned technology is ideal for the so-called triple-D missions, he said, the dull, the dirty and the dangerous.\nSoto wouldn't say exactly how many different unmanned systems are in the Navy's research, development, test and evaluation processes.\n\"There's a lot,\" he said. \"Hundreds.\"\nPilotless drones range from planes with eight-centimeter wingspans -- the size of a small bird -- to aircraft weighing 13 tons with wingspans exceeding 100 feet.\nWith 26,000 square miles of restricted air space to play in, researchers and evaluators can do things within the Weapons Division's test range they can't do anywhere else.\nSoto calls it being \"cleared for weird.\"\nThey have pneumatic catapult launch systems designed to get smaller and mid-size UAVs into the air without the need for a runway or an \"improved surface.\"\nBut how do they land?", "score": 30.630785795457918, "rank": 23}, {"document_id": "doc-::chunk-1", "d_text": "Unmanned maritime systems will change how we operate, but they’re just the start. Our pursuit of new technologies and ideas will ensure we remain one of the most capable and successful navies in the world.”\nFrank Cotton, BAE Systems’ Head of Technology, Combat Systems, said: “Unmanned Warrior gives us the perfect opportunity to demonstrate how the command and control of unmanned vehicles can be integrated seamlessly into the existing shipborne and land-based infrastructures. 
The real challenge around the introduction of autonomy is how a mix of manned and unmanned systems is managed and I’m genuinely excited about demonstrating how the technology that we’ve been developing enables this to happen.”", "score": 29.354242413372, "rank": 24}, {"document_id": "doc-::chunk-1", "d_text": "— The Navy hopes to eventually make unmanned systems just one tool among many for commanders to pick from. However, to get to that point the service will have to get unmanned systems into the hands of warfighters to work out the kinks, leaders said. Read More", "score": 29.317713961720493, "rank": 25}, {"document_id": "doc-::chunk-2", "d_text": "It will navigate autonomously (although designed to be optionally manned), shoot missiles and return for its reload, drawing fire away from the manned combatants and keeping them fielded longer.\nThe Navy made a budget request of $239 million (FY2021) to purchase two LUSV prototypes for testing. They will be based on the Ghost Fleet Overlord Program, an initial iteration of two modified commercial fast supply ships in response to pushback from Congress.\nThe Chicken and Egg\nWith the various hefty budget requests, Congress has concerns about the Navy’s unmanned plans, including the analysis that informs the Navy’s shift to DMO and what it reveals about the relative costs, capabilities and risks. Also Congress is raising issues about whether the operational concepts of unmanned platforms have been sufficiently proven.\nThe director of unmanned vessels under the Navy’s deputy for ships, Dorothy Engelhardt, in a Forbes article responds, “our goal here isn’t to run a science fair,” stressing that answers about operational capability require more testing and hence more prototypes. 
And that successful missions will be the result of knowing which technologies and platforms work best in different environments.\nSpeaking at an American Society of Naval Engineers conference, Pete Small, program manager for Unmanned Maritime Systems, adds that “having one prototype or two prototypes is not going to get us to tens of thousands of hours of operations in the time frame we think we are going to need to employ the capabilities. So we need more prototypes to get there and burn down the technical risk of the payload integration.”\nThe Navy’s Distributed Maritime Operations (DMO) concept will be enabled by the following key technologies.\nEndurance – The Navy stipulates that unmanned vessels should operate continuously at sea for 30 days without maintenance or repairs. Going by lessons learned from Sea Hunter’s expedition, components like switches, sensors, and filters that were not originally designed for autonomy still have a way to go.\nAutonomy & Precision Navigation – A vessel must navigate with minimal human intervention while demonstrating object avoidance. A combination of path planning, obstacle detection (including the speed and course of moving objects), mapping, and guidance is required for successful missions.\nCommand, Control & Navigation – It should be possible to control USVs from nearby vessels or from offsite locations. Intelligent navigational processing will allow pilots to adjust speed and direction so that vessels can hold up to the difficult and unpredictable open-sea environment. The greater the complexity of required missions, the greater the need for autonomy.", "score": 28.61156254437538, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "The US Navy is deploying a new weapon in drone warfare: Submarine-launched drones. 
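The obstacle-detection requirement above (accounting for the speed and course of moving contacts) is commonly framed as a closest-point-of-approach calculation. A minimal 2-D sketch, not taken from any Navy system; the function name and the constant-velocity assumption are illustrative:

```python
import math

def closest_point_of_approach(own_pos, own_vel, contact_pos, contact_vel):
    """Return (time_to_cpa, distance_at_cpa) assuming both craft hold course and speed.

    Positions and velocities are 2-D (x, y) tuples in consistent units,
    e.g. nautical miles and knots, so the time comes out in hours.
    """
    # Work in the ownship frame: relative position and relative velocity.
    rx = contact_pos[0] - own_pos[0]
    ry = contact_pos[1] - own_pos[1]
    vx = contact_vel[0] - own_vel[0]
    vy = contact_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        # No relative motion: the range never changes.
        return 0.0, math.hypot(rx, ry)
    # Minimize |r + v*t| in closed form; clamp so the CPA is never in the past.
    t = max(0.0, -(rx * vx + ry * vy) / v2)
    return t, math.hypot(rx + vx * t, ry + vy * t)

# Ownship heading east at 10 kt; contact 10 nmi ahead and 2 nmi to port, heading west at 10 kt.
t_cpa, d_cpa = closest_point_of_approach((0, 0), (10, 0), (10, 2), (-10, 0))
print(f"CPA in {t_cpa:.2f} h at {d_cpa:.1f} nmi")
```

A planner would compare the CPA distance against a safety radius and replan waypoints when the margin is violated; rules-of-the-road maneuver selection then sits on top of this geometry.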
Dubbed \"Blackwings,\" these small reconnaissance drones can be tube-launched from submarines and unmanned underwater vehicles.\nDeveloped by California-based AeroVironment Inc., the drones are about 20 inches long and weigh about four pounds. They fold up into a three-inch-wide canister that can be tube-launched from the submarine. Once the canister clears the surface, the Blackwing pops out and its wings unfold.\nEach unit has a motor that can run for up to 60 minutes, and is fitted with miniaturized electro-optical and infrared sensors, as well as an anti-spoofing GPS and a secure digital data link.\nThe Blackwing was developed to counter Chinese advancements in anti-ship ballistic missile and similar \"anti-access, area denial (A2AD)\" technologies. The Navy has given no deployment date, but given China's aggressive posturing, it may come just in the nick of time.", "score": 28.44957144690067, "rank": 27}, {"document_id": "doc-::chunk-4", "d_text": "
Lessons learned from ongoing science and technology efforts will inform that project, Neagley said.\n“As we finish up the analytical underpinning for that, we’re trying to make sure that … we look at what capability gaps the UUVs and the USVs can kind of go fill,” Neagley said. “Then we can … rapidly acquire those systems really to complement the larger fleet architecture.”\nDr. Hans C. Mumm", "score": 27.873110260371416, "rank": 28}, {"document_id": "doc-::chunk-1", "d_text": "Detailed design work on the Snakehead is in progress, and initial hull long-lead raw material is on order.\nDesign contracts for the Orca have been awarded, and follow-on production is scheduled for fiscal year 2019.\nCapt. Jon Rucker, Navy program manager for unmanned maritime systems, said Chief of Naval Operations Adm. John Richardson has inquired about the possibility of accelerating the acquisition of “the entire family” of UUVs.\nHowever, the service isn’t just looking for new unmanned platforms. They have limited value if they aren’t equipped with support systems, such as energy sources, autonomy and precision navigation, command, control and communications, payloads and sensors, and platform integration, officials noted.\n“You have to consider all those key enablers to really kind of get the most out of that technology,” Neagley said.\nEnergy is critical for endurance, Rucker noted during a media briefing at the Surface Navy Association symposium in Arlington, Virginia.\nIn the near term, Rucker hopes to have lithium-ion batteries certified for platform integration.
Officials are also in talks with the auto industry about fuel cells, he noted.\nThere are “more energy-dense technologies that aren’t ready today but we’re looking down the road so all the vehicles we design … you can take out the energy section and put in the new energy technology when it’s ready,” he said.\nAutonomy and precision navigation technology are also essential.\nUUVs are expected to deploy for an extended period of time in conditions where command, control and communications are more difficult than they are for surface vessels, said Lee Mastroianni, special projects officer at the Office of Naval Research.\nThey need to have environmental sensing capabilities and be able to adapt accordingly, he said.\n“Whether it be in the Arctic or very shallow water or everything in between, we need to improve that autonomy so we have systems that can think, understand and adapt more to achieve their missions, recognizing that there’s a whole subset of sensors and payloads and stuff that feed into making those decisions,” he said.\nUSVs also have some unique challenges. There are complex rules when it comes to navigation, and the platforms must be able to operate in crowded waterways without human intervention. Combat situations would only add to the complexity of operations, he noted.\n“That gets into the ability to understand a dynamic situation … and trying not to run into the other boats,” he said.", "score": 27.79794304527811, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "The Navy plans to deploy its new MQ-4C Triton long-range surveillance unmanned aircraft to the Middle East in 2016, Rear Adm. Sean Buck, commander of the U.S. 
Navy’s Patrol and Reconnaissance Group, said Thursday in a call with reporters following Wednesday’s first successful Triton flight.\nBy then, the Navy hopes to have up to three of the Northrop Grumman aircraft to patrol the service’s 5th Fleet area of responsibility to replace the single forward-deployed Broad Area Maritime Surveillance Demonstrator (BAMS-D) currently in the region, Buck said.\n“The intent is to introduce an operational orbit of Tritons in the fleet area sometime after,” he said.\nThe so-called initial operational capability will be later than Naval Air Systems Command’s initial 2015 IOC date and will include three of the four aircraft needed to have a consistent orbit over the region. For the Navy to have uninterrupted service, an orbit requires four aircraft. The current lone BAMS-D flies every third day.\nThe planned 68 Tritons, similar to the Air Force’s RQ-4 Global Hawk unmanned surveillance aircraft, are specially designed to patrol maritime regions. Conceptually, the aircraft is designed to work with the P-8A Poseidon manned aircraft.\n“Triton was envisioned to be a complementary teammate to the manned Poseidon aircraft and to take about 30 percent of the traditional historic surveillance mission for the maritime aviation community,” Buck said.\n“It will provide the long dwell persistent stare in the maritime to support our fleet as the fleet is positioned and moves and projects any type of power in a particular area around the world.”\nTritons are anticipated to operate forward from Naval Air Station Sigonella, Italy, Andersen Air Force Base, Guam and an undisclosed location in the Middle East, NAVAIR told USNI News on Wednesday.\nOnce Triton enters 5th Fleet, additional orbits will begin in 7th Fleet from Guam, then in 6th Fleet in Sigonella and finally on the continental U.S.\nThe Navy plans for a single Triton orbit to monitor up to 2,000 nautical miles at a time, allowing U.S.
forces access to real-time radar, video and signals intelligence.", "score": 27.730485861939407, "rank": 30}, {"document_id": "doc-::chunk-1", "d_text": "Michael Stewart, director of the task force, said the Navy’s unmanned task force is taking a new approach, using the military equivalent of a venture capital model to accelerate new ideas, moving forward only after technologies have been demonstrated.\nThis summer, four large unmanned ships are operating alongside conventional ships during war games called RIMPAC.\nThese include the Sea Hunter and Sea Hawk, which are diesel-powered vessels equipped with booms for stability in rough seas. The other two are Ranger and Nomad, which are based on oil rig refurbishment ships. They have large flat surfaces from which a missile was successfully launched last year.\nWhile those larger ships are being tested in the Pacific, the Navy is already seeing promising results with smaller commercially available ships being evaluated by Task Force 59, which is part of the Bahrain-based Fifth Fleet, said Cmdr. Timothy Hawkins, a spokesman for the Fifth Fleet.\nOne vessel that has received attention is the Saildrone, a sail-powered vessel with solar-powered systems. Equipped with radar and cameras, the Saildrones are touted as being able to operate autonomously for months at a time without maintenance or resupply.\nBuilding on the success of multinational exercises last winter, the Fifth Fleet said the US Navy and international partners intend to deploy 100 uncrewed ships by next summer.\nFinally, Admiral Mike Gilday, Chief of Naval Operations, envisions a mix of 150 large uncrewed surface ships and undersea vessels by 2045.
That’s on top of more than 350 conventional combat ships.\nThe Navy’s spending proposal for the new fiscal year includes $433 million for surface ships without a crew and $284 million for underwater ships.\nGilday, the Navy’s top officer, said those ships, along with artificial intelligence, have the potential to make the Navy’s fleet more efficient. But he said the Navy is doing research and development “in an evolutionary, thoughtful and informed manner.”\nThe biggest advantage of robotic ships is that they can be built at a fraction of the cost of conventional warships, said Loren Thompson, a defense analyst at the Lexington Institute, as the Navy struggles to keep up with China and Russia. The United States is already lagging behind China in ship numbers, and the gap is growing every year.\nCongress is in no rush to fund new programs, said Bryan Clark, a defense analyst at the Hudson Institute. “Congress wants the Navy to have a good plan — and then aggressively pursue it,” Clark said.", "score": 27.702747779290046, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "The Navy has ramped up its acquisition of unmanned helicopters for counter-piracy missions and maritime intelligence, surveillance and reconnaissance, Defense News reports.\nThe Navy has ordered eight units of an updated variant of the Northrop Grumman-built MQ-8B Fire Scout, which has been used since 2009 for ISR missions.\nThe new model uses the frame of the Bell 407 helicopter, which is 35 feet long, 11 feet longer than the MQ-8B.\nThe $262.3 million contract will provide the Navy with unmanned helicopters that have more range and payload.\nSenior Chief Petty Officer Stephen Diets, a Navy ground government flight representative, said the MQ-8C will provide the Navy with more maritime ISR capabilities than the MQ-8B model.\nDefense News also reports that Navy officials have requested the doubling of the original MQ-8B Fire Scout flight hours for Afghan special operations forces.\nThe Navy
currently flies 300 hours in Afghanistan and has 30 MQ-8Bs in its arsenal.", "score": 26.9697449642274, "rank": 32}, {"document_id": "doc-::chunk-1", "d_text": "Furthermore, they can even be used to map and survey underwater environments, including the seafloor and underwater infrastructure. They can be equipped with cameras and other sensors that can provide high-resolution images and data that can be used to create detailed maps and models.\nUUVs can contribute greatly during warfare. UUVs can be used for offensive and defensive purposes in underwater warfare. They can be used to deliver weapons, conduct reconnaissance as well as disrupt enemy communications. If put together as a swarm, UUVs communicate and coordinate with each other to perform tasks in a coordinated manner.\nThe Indian Navy has been looking to develop such platforms given the strategic edge they promise. The Navy had stressed Underwater Domain Awareness (UDA) several times in the past. In 2021, then Vice Chief of the Naval Staff Vice Admiral Ashok Kumar said that to exploit the potential of unmanned technologies and platforms, the Navy had approved an ‘Unmanned Road Map’. UUVs, or submersible unmanned vehicles, are divided into two categories—remotely operated underwater vehicles (ROVs) and autonomous underwater vehicles (AUVs).
While an ROV can be operated by a human being remotely, the same is not true for AUVs, which can operate entirely in an autonomous manner.\nThe former Navy vice chief had highlighted four main types of UUVs—the man-portable Autonomous Unmanned Vehicles (AUVs) with swarm functionality and an endurance of 10-20 hours, lightweight AUVs compatible with lightweight torpedo tubes onboard ships with an endurance of nearly two days, heavyweight AUVs compatible with in-service heavyweight tubes with an endurance of up to 3-4 days, and high-endurance AUVs with the capability to submerge for at least 15 days.\nThe commitment to development and induction of unmanned technologies was visible even during the annual press conference ahead of the Navy Day in December 2022, when the Chief of the Naval Staff Admiral R Hari Kumar said that the Indian Navy had already shared its unmanned requirements with the industry, which include aircraft as well as underwater vessels.\nIndia’s push on unmanned underwater technology came the same year that Indonesia reported that a local fisherman had discovered an unidentified ‘missile-like’ object in the waters off Selayar Island in Indonesia’s South Sulawesi province around December 2020 and handed it over to the Indonesian Navy, which in turn identified it as a glider-type autonomous underwater vehicle (AUV).", "score": 26.9697449642274, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "CH-53K, K3, piloted by Mr. Rob Pupalaikis and Maj. Joshua Foxton, flies an aerial refueling test with an external load on Sept. 28, 2020, from NAS Patuxent River, MD. Sikorsky Photo.\nThe Marine Corps’ top aviation officer assured a key House panel this week that the unit costs of the CH-53K King Stallion helicopter are dropping significantly.\nAn MQ-8B Fire Scout unmanned aircraft system from Helicopter Maritime Strike Squadron (HSM) 35 performs ground turns aboard the littoral combat ship USS Fort Worth (LCS-3) in May 2015.
US Navy photo\nThe Navy is pursuing both manned and unmanned platforms for the aircraft that will replace its rotary-wing fleet, according to a service official.\nU.S. Marines with Marine Unmanned Aerial Vehicle Squadron (VMU) 2 launch an RQ-21A Blackjack for Assault Support Tactics 2 at Canon Air Defense Complex (P111), Yuma, Ariz., Oct. 12, 2016. US Marine Corps photo.\nWhile the Marine Corps is still charting its path forward for large drones, the service is moving smaller unmanned aerial vehicles (UAVs) into its ground combat units.", "score": 26.9697449642274, "rank": 34}, {"document_id": "doc-::chunk-2", "d_text": "In fact, there are big advantages to replacing the crew with an at least partly autonomous control system. And that technology is almost inevitable—we think that too many people thinking about future capability don’t give enough credit to the implications of Moore’s Law. The surface combatant of the future will likely have a mix of small and large embarked aircraft—but it’s a fair bet that they’ll all be unmanned.\nAndrew Davies is senior analyst for defence capability and director of research at ASPI. James Mugg is a researcher at ASPI. Disclaimer: James Mugg recently travelled to the United States on a study tour sponsored by Northrop Grumman, the manufacturers of Fire Scout.", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "WASHINGTON: This Saturday the Navy will christen its newest nuclear-powered submarine, the $2.6 billion USS Minnesota, at the Newport News shipyard in Virginia. Countless movies have cemented the popular image of subs as stealthy underwater killers, stalking hapless surface vessels with periscope and torpedo.
But today’s Navy is experimenting with launching robotic mini-subs and even unmanned aerial vehicles (UAVs) from Virginia-class attack subs like the Minnesota.\nIn Navy tests of a mini-UAV called Switchblade, “you can launch it, you can control it, you can get video feed back to the submarine,” said Rear Adm. Barry Bruner, chief of the undersea warfare section (N97) on the Navy staff, at the recent Naval Submarine League symposium in suburban Washington. Future subs could also launch unmanned underwater vehicles (UUVs) to scout ahead stealthily beneath the surface. “It sure beats the heck out of looking out of a periscope at a range of maybe 10,000 to 15,000 yards on a good day,” Bruner said. “Now you’re talking 20 to 40 miles.”\nPair that sensor range with new long-range torpedoes — yet to be developed — or submarine-launched missiles, and you dramatically increase the kill range of the current submarine fleet, Bruner enthused. “It’s phenomenal, it’s asymmetric, and it’s cheap, [and] we’re not that far away.”\nSome informed observers are more skeptical. The sub-launched UAVs and UUVs are still experimental. A “universal launch and recovery system” to get the small robots off the sub and then, critically, back on again is still in development — and current attack submarines like the Minnesota lack the large-diameter launch tubes to accommodate the system anyway, although the next-generation “Block III” Virginia-class subs entering service in 2014 will be able to. The Navy is also designing a “Virginia Payload Module” that would increase future submarines’ launch capacity, but it’s uncertain whether actual development will get funded.\n“They seem to be in a period of experimentation that doesn’t have any obvious or clear end point,” one congressional staffer told Breaking Defense. 
“I can’t tell you exactly where they’re headed.”\nWhat’s more, while sub-launched drones are a new idea to American admirals, “the Israelis experimented with it more than 20 years ago,” said naval historian Norman Polmar.", "score": 26.077001707526396, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "U.S. Navy ship commanders now have a new way to improve intelligence-gathering capabilities—by using Northrop Grumman’s MQ-8C Fire Scout unmanned helicopter.\nThe first operational MQ-8C unmanned helicopter was delivered to the U.S. Navy earlier this week, according to a news release. The MQ-8C is an upgraded version of the company’s MQ-8B Fire Scout, and features a larger airframe and can fly twice as long and carry three times more intelligence, surveillance and reconnaissance payloads.\nThe MQ-8C’s first shipboard test flights will be conducted this winter aboard the USS Jason Dunham (DDG 109). The Navy will then assess the system for operational use. Northrop Grumman is under contract to build 19 MQ-8C Fire Scouts, including two test aircraft, according to the release. The Navy plans to purchase 70 aircraft total.\n“The test program will run through the summer as we expect these aircraft to be ready for operations by year’s end,” said George Vardoulakis, vice president for medium range tactical systems with Northrop Grumman, according to the release.\nFor more information, visit northropgrumman.com.", "score": 25.65453875696252, "rank": 37}, {"document_id": "doc-::chunk-2", "d_text": "In particular, Britain and France employ SAMDIS in their joint mine countermeasures program. The system is software upgradable, which means that experience with the now deployed and operational systems can easily provide data for software upgrades of contemporary as well as future versions.\nAlso, because SAMDIS is platform agnostic and scalable, it can be deployed on a variety of current and future platforms.
Although especially configured for deployment from unmanned underwater vehicles, it can be deployed from unmanned surface vehicles.\nThese could be hosted by the U.S. Navy’s littoral combat ships (LCS) or other naval or commercial ships of opportunity.\nIn October 2016, GPS World reported on the results of the multinational Unmanned Warrior exercise.\n“These systems can help protect our Sailors and Marines from some of the Navy’s dull, dirty and dangerous missions, like mine countermeasures,” according to the U.S. Chief of Naval Research Rear Admiral Mat Winter.\n“Additionally, these systems can increase our capabilities at a more affordable cost than the conventional systems we currently employ.”\nThe “Ghost Fleet”\nIn February 2017, Defense News reported that the Navy was working to quickly develop a “ghost fleet” of numerous surface, air and undersea drones that would synchronize a wide range of combat missions without placing sailors and Marines at risk. Captain Jon Rucker, program manager for unmanned maritime systems in the LCS program, outlined top-level requirements: “We want to have multiple systems teaming and working together, surface, air and undersea.”\nRucker explained that the Pentagon and the Navy are advancing this drone-fleet concept to search and destroy mines, swarm and attack enemies, deliver supplies and conduct reconnaissance and surveillance missions, among other tasks. These capabilities could operate in a combat environment with little or no human intervention after being programmed for the specific role.\nDefense News noted that the Navy’s Office of Naval Research has been working closely with the Defense Department’s Strategic Capabilities Office to fast-track this technology into operational service.\nDr.
William Roper, the DOD capabilities director, explained that much of this effort involves merging new platforms, weapons, and technologies with existing systems in a way that improves capability while circumventing a lengthy and often bureaucratic formal acquisition process.\nFor example, USVs and UUVs configured for MCM search, detect, localize, classify, identify, and neutralize/exploit tasks could take advantage of the “off-the-shelf” SAMDIS system, which already has been demonstrated in Navy tests.", "score": 25.65453875696252, "rank": 38}, {"document_id": "doc-::chunk-1", "d_text": "During 2012, the MPRF began its transition from legacy platforms to a new family of systems, including the P-8A Poseidon multi-mission aircraft, the MQ-4C Triton Unmanned Aerial System, and a Tactical Mobile ground support system.\nWheeler says it is technology, communication and teaming that makes this an exciting time for operators and stakeholders in the Maritime Patrol and Reconnaissance community. Today, the P-8A Poseidon brings speed to the fleet, the power of secure networks, and twice as much acoustic capability.\nA&M: Tell us about the P-8.\nRear Adm. Wheeler: The P-8A was designed and built to replace the P-3C Orion, which I believe has been in the fleet since 1962, and has been doing great work for us for a long time. The Navy really invested in the P-8A to do that traditional role of anti-submarine warfare. So, it’s built to accomplish that mission and it’s doing a tremendous job.\nA&M: How is it different than the P-3? What new capabilities does it bring to the fight?\nRear Adm. Wheeler: Obviously, the most striking difference is its two engines, jet propulsion, as compared to the four-engine propeller-driven P-3 Orion. As I mentioned, the P-3 has been flying since the early 60’s. You can imagine the technology that has changed since then. Really, what the P-8 brings is that new technology.
It takes a very dependable airframe in the Boeing 737 and combines it with some of the great sensors our industry partners have developed over the years, along with great computing power.\nA&M: How is this helping the Navy in its modernization efforts, and meeting the operational goals of the Navy right now?\nRear Adm. Wheeler: What the P-8A really brings is that culmination of technology. If you imagine in the early 60’s black and white television compared to what we have today. Well, that’s the technology jump that we’re seeing, between a P-3C and a P-8A.", "score": 25.65453875696252, "rank": 39}, {"document_id": "doc-::chunk-2", "d_text": "As they explained, “A carrier at Pearl Harbor ordered to respond to a developing crisis in the Taiwan Strait could immediately set sail and launch a flight of UCASs.” The drones could reach Chinese airspace in 10 hours and remain there for another five hours, surviving and fighting “even in the face of advanced Chinese air defense systems” – thanks to their stealth qualities. “The strategic value of that sort of responsiveness and reach would be incalculable,” Work and Ehrhard concluded. Not only did the Navy ultimately agree and boost support for the X-47B, in 2009 Work joined the sailing branch as its new undersecretary. He was able to continue arguing for the new drone’s importance from inside the Pentagon.\nNorthrop steadily added more autonomy to the X-47B until it was nearly entirely robotic. “There is man in the loop,” explained Carl Johnson, a Northrop vice president, in a telephone interview. He was referring to operators on land or aboard a launching carrier who he said can “monitor and override autonomous systems” within the drone. Takeoff, landing and most of the X-47B’s flight are handled by software.\nWork left the Navy in April to head the Center for a New American Security, a policy organization in Washington. He told Reuters he will be eagerly monitoring the X-47B’s takeoff on Tuesday.
In particular, he will be looking to see whether the drone can maneuver alongside the other aircraft that routinely surround a carrier.\nWork says he expects a smooth takeoff – and also continued strong Navy support for the drone program, particularly in light of improving air defenses in China and other rival nations. Since Work co-wrote his X-47B report, both Russia and China have unveiled new high-tech warplanes and missiles. “Everybody is surprised at the pace and broad range of capabilities adversaries all over the world are pursuing,” Work said. “The case for unmanned systems coming off the carrier has accelerated.” Assuming today’s launch is a success, it could be only a few years before UAVs like the X-47B routinely fly from the Navy’s carriers, giving the United States aerial advantage over the Pacific.", "score": 25.65453875696252, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "The Royal Navy just spent $45 million on the ScanEagle UAV, which is part of a market set to be worth $8.35 billion by 2018, according to a recent market intelligence report. 
One thing is clear: the role of UAVs in maritime reconnaissance and surveillance is becoming a key area for development and investment right now.\nCaptain Ian Annett, RN, confirmed that \"ScanEagle represents an important addition to the Royal Navy's intelligence, surveillance and reconnaissance capability.\"\nPhilip Dunne, UK Minister for Defence Equipment, Support and Technology, also backed the development, saying that \"continued investment in intelligence, surveillance and reconnaissance systems is essential to keeping our Armed Forces up-to-date with the latest capabilities and this will be a central part of MOD's investment in new equipment over the next 10 years.\"\nDownload Defence IQ's article: Demand for Maritime UAV's to remain sky high into 2018, to find out more about worldwide investments into maritime UAVs.\nThe 11th annual Maritime Reconnaissance and Surveillance conference coincides with these emerging market opportunities - it will be the only place you can hear from and talk to the key decision makers in this growing market.\nThe event will hear from key industry experts including:\n- Rear Admiral Giorgio Gomma, Director of Naval Aviation and Commander of the Fleet Air Arm, Italian Navy\n- Rear Admiral Matthew J. Carter, Commander, Patrol and Reconnaissance Group, US Navy\n- Rear Admiral Georg Kristinn Làrusson, Director General, Icelandic Coast Guard\n- Rear Admiral Jesus C Millan, Chief of Staff, Philippines Navy\nSpeakers will discuss the challenges they face and future requirements as they seek to improve their maritime domain awareness and maritime security capabilities.\nIn addition, a dedicated Space-Based Maritime Surveillance Focus Day chaired by Derek Hatton, an analyst at the EU Satellite Centre, will be held on September 24th. 
This day will offer an exclusive forum to discuss how satellite eyes in the sky can improve maritime domain awareness.\nFor more information and to download the conference programme, please visit: http://www.maritimerecon.com", "score": 25.50496383934264, "rank": 41}, {"document_id": "doc-::chunk-2", "d_text": "The Royal Navy’s own innovation team, ‘MarWorks’, joined with Dstl and Antillion to help accelerate this technology through the development phase.\nWe put it to use in ‘Information Warrior’ and, liking what we saw, we’ve decided to introduce it in place of 3 Commando Brigade’s current IT straight away.\nBut this isn’t simply about swapping old kit for new and carrying on as normal. The full potential of the technological opportunity before us is far greater.\nFrom autonomous systems operating in squads to artificial intelligence assisted decision making, what we’ve glimpsed over the past 2 years has the potential to entirely change our approach to operations.\nThis requires big decisions, with far reaching consequences.\nAre we, for instance, prepared to remove existing platforms from service in order to create the financial and manpower headroom to introduce new systems which, in time, could deliver truly transformative advances in capability?\nOf course, change on this scale can be disconcerting, but if we hesitate, then we risk falling further behind.\nSo, for example, based on our experience from Unmanned Warrior and Information Warrior, we know that remotely operated and autonomous systems can make a far greater contribution to operations than is currently the case.\nAs a first step, we are ready to shift the process of trial and experimentation from the exercise arena to the operational theatre.\nThat’s why we have deployed 3 unmanned underwater vessels on board the survey ship HMS Enterprise during her current NATO deployment.\nBut I think we can go further still.\nSo today I can announce the Royal Navy’s aim to accelerate the
incremental delivery of our future mine countermeasures and hydrographic capability (MHC) programme.\nOur intention is to deliver an unmanned capability for routine mine countermeasure tasks in UK waters in 2 years’ time.\nSimilarly, from what we’ve seen over the past 2 years, we know it should be perfectly possible for the Type 31e frigate to operate a vertical lift unmanned air system alongside or perhaps even in place of a manned helicopter from the moment the first ship enters service from 2023.\nAnd as a precursor to this, we plan to work with our partners in the aerospace industry to demonstrate such a capability on a Type 23 frigate next year.\nSo, just as I challenge the Royal Navy to take the next step forward, there’s also a challenge for you, our partners in industry, to meet us half way with credible solutions that can fulfil our requirements.", "score": 25.000000000034284, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "WASHINGTON, D.C. – The Navy won’t pursue the development of a lethal carrier-based unmanned aircraft before it fields its unmanned MQ-25A Stingray tanker sometime in the 2020s, the service’s requirements chief said last week.\nThe service is taking a deliberate approach to adding unmanned aviation assets to carrier decks, ensuring it successfully integrates the MQ-25A into the airwing before it studies adding new, armed UAVs into the mix, Deputy Chief of Naval Operations for Warfare Systems (OPNAV N9) Vice Adm. Bill Merz said at an event co-hosted by the U.S. Naval Institute and the Center for Strategic and International Studies.\n“The MQ-25, we think, is just a fantastic program. Integrating an unmanned aircraft into the carrier airwing will be a significant step forward for the Navy, no question about it,” he said.\n“We are just compelled to be somewhat pragmatic in how well they work before we over-commit. We have a limited budget; we also have real lives at stake. 
Unmanned isn’t really unmanned, you just don’t have a body sitting in the platform. There’s a lot of support. You have deck handling, a lot of things you have to come through to bring these things aboard a maritime environment.”\nIn August, Boeing was awarded an $805-million contract to develop four MQ-25As. The company based the design on a prototype it quietly built for the canceled Unmanned Carrier Launched Airborne Surveillance and Strike (UCLASS) competition.\nChief of Naval Operations Adm. John Richardson has made fielding the MQ-25A a priority for the service, but it’s still unclear how quickly the service can get the capability to the fleet, Program Executive Officer for Unmanned Aviation and Strike Weapons Rear Adm. Brian Corey said earlier this month.\n“When we awarded our contract [in August], we believed we could go to 2024, [but] CNO said ASAP,” Corey said. “I’m not going to give you a date. It’s as soon as we can.”\nThe Navy wants to introduce the aircraft quickly to reduce the refueling burden on the service’s F/A-18F Super Hornet fighters that are now responsible for the tanking mission. Based on the success of the first set of missions – tanking and limited intelligence, surveillance and reconnaissance (ISR) missions – the Navy could look to other capabilities.", "score": 24.849064850088993, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "“It’s a big aircraft, it’s robust, it’s built as a tanker, but it’s probably a stepping-stone to other capabilities,” Merz said.\n“[Future aircraft] are conceptual right now until we get this thing into the fleet and see how it survives in a sea environment and how it integrates with the airwing.”\nHowever, at least one analyst sees the Navy’s progress in unmanned vehicles as a missed opportunity for the service.\n“Yes, the MQ-25 is a stepping-stone.
However, the Navy had an opportunity to develop and field a more robust unmanned capability for surveillance and strike and chose not to do so, at least not in the mid-term,” said Mark Gunzinger with the Center for Strategic and Budgetary Assessments. “A carrier-based low observable [Naval Unmanned Combat Air System] for surveillance, strike and possibly other missions was identified as a need at least as far back as the 2006 Quadrennial Defense Review. Over time, this has been scaled back to the MQ-25. I’m not saying the MQ-25 is not needed, but I do believe the Navy has missed an opportunity.”\nBased on the 2006 QDR, the Navy developed a low observable, potentially armed $1.4-billion demonstrator program. Under UCAS-D, the service tested two X-47Bs – Salty Dog 501 and 502 – built by Northrop Grumman to prove unmanned aircraft could safely launch and be recovered from an aircraft carrier. The tail-less aircraft were built with the ability to be refueled mid-flight and had an internal payload capability equivalent to an F-35C Lightning II Joint Strike Fighter.\nIn 2013, Salty Dog 502 successfully landed on USS George H.W. Bush (CVN-77). After aerial refueling tests in 2015, Naval Air Systems Command shut down the testing program for UCAS-D with thousands of hours of flight time left on the airframes – to some congressional protest.\n“Our nation has made a sizable investment in this demonstration program to date, and both air vehicles have consumed only a small fraction of their approved flying hours,” Sen. John McCain (R-Ariz.) wrote in 2015.\n“There will be no unmanned air vehicles operating from carrier decks for several years.", "score": 24.345461243037445, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "Return of the Navy Blimps?\nIn the aftermath of World War 2, blimps and airships found themselves gradually phased out of the US military. That didn’t really begin to change until the 21st century (see April 2005, “USN, DARPA See Blimps & HULAs Rising”).
The heavy-lift WALRUS project may have been canceled without explanation; but aerostat programs like JLENS cruise missile defense and its smaller RAID local surveillance derivative, and airships like the HAA/ISIS program, remain. The US Navy is also experimenting with aerostats for communications relay, surveillance, and radar overwatch functions – and this has become a formal program.\nWhat’s driving this interest? Four things. One is persistence, in an era where constant surveillance + rapid precision strike creates a formidable military asset. A second is cost, especially in an era of rising fuel prices. A recent US NAVSEA release offers figures that starkly illustrate the gap in surveillance cost per hour between an aerostat and planes or UAVs:\n- Land-based 71-meter aerostat: about $610/hour\n- MQ-1 Predator MALE (Medium Altitude, Long Endurance) UAV: about $5,000/hour\n- E-2C Hawkeye AWACS (Airborne Warning And Control System) aircraft: about $18,000/hour\n- RQ-4 Global Hawk HALE (High Altitude, Long Endurance) UAV: about $26,500/hour\nThe third driver is ballooning bandwidth demands due to increased employment of UAVs and other systems with streaming video. This is a long-term trend that will demand very expensive satellites with long design/launch times – or cheaper patchwork solutions that can remain at altitude for long periods.\nThe fourth driver is the proliferation and increased lethality of cruise missiles. On land, the concern is the combination of cruise missiles and weapons of mass destruction. At sea, the concern is the increasing lethality of anti-ship cruise missiles, including supersonic varieties that place a premium on early detection in order to give defensive systems enough time.\nAerostats are not blimps, and need no pilot. 
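The NAVSEA cost-per-hour list above lends itself to a quick back-of-the-envelope comparison. The sketch below expresses each platform's quoted hourly cost as a multiple of the aerostat baseline; the dictionary keys are shorthand labels for this illustration, not official designations.

```python
# Hourly surveillance costs quoted in the NAVSEA release above (USD/hour).
# Keys are shorthand labels, not official designations.
COST_PER_HOUR = {
    "71-m aerostat": 610,
    "MQ-1 Predator (MALE UAV)": 5_000,
    "E-2C Hawkeye (AWACS)": 18_000,
    "RQ-4 Global Hawk (HALE UAV)": 26_500,
}

BASELINE = COST_PER_HOUR["71-m aerostat"]

def cost_multiple(platform: str) -> float:
    """Return a platform's hourly cost as a multiple of the aerostat's."""
    return round(COST_PER_HOUR[platform] / BASELINE, 1)

for name, cost in COST_PER_HOUR.items():
    print(f"{name}: ${cost:,}/hour ({cost_multiple(name)}x the aerostat)")
```

By these figures an hour of Global Hawk coverage costs roughly 43 times an hour of aerostat coverage, which is the gap the article is pointing at.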
These helium-filled, multi-chambered balloons are winched up or down via a combined tether/power line; and can be deployed from trucks, land platforms, or a ship at sea.", "score": 24.345461243037445, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "The second chapter describes ...\nKeller, Joe; Ivey, James; Dalakos, Antonios; Okan, Orhan; Kuchler, Ryan; Cooke, Rabon; Stallings, Brad; Searles, Scot; Gokee, Mersin; Lashomb, Pete; Byers, David; Papoulias, Fotis; Ciezki, John; Ng, Ivan (Monterey, California. Naval Postgraduate School, 2001-12);Currently, no system exists that provides a sea-based distributed aviation platform capability. The emergence of Unmanned Air Vehicles (UAVs) / Unmanned Combat Air Vehicles (UCAVs), the continued U.S. Navy focus on the ...", "score": 24.345461243037445, "rank": 46}, {"document_id": "doc-::chunk-1", "d_text": "The US Navy vigorously opposed Mitchell's proposal. It foresaw that airplanes would play an increasingly important role in naval warfare, and insisted that it be free to develop naval air power, and to operate aircraft according to its needs to the same extent as in the case of its other weapons.\nNaval aircraft require different capabilities to perform various types of missions. Naval aircraft missions can be categorized under eight job types: fleet air defense, strike warfare, antisubmarine warfare (ASW), electronic warfare, early warning, amphibious assault, training, and unmanned aerial vehicles (UAV). Each mission requires different capabilities in the craft that perform them. Most aircraft are able to perform more than one type of mission and may perform support functions as well. To accomplish these missions, the Navy and Marine Corps have over six thousand active and reserve aircraft.\nStrike aircraft attack enemy surface targets such as ships and ground forces. Strike aircraft are classified into two types, medium and light, depending on the weight of the payload they carry. 
There are several major design factors involved in attack aircraft design. Range, payload, and weapon delivery precision determine how far away a target can be attacked and the amount of damage that can be inflicted. Maneuverability and stealth will allow the aircraft to evade surface-to-air missiles and enemy fighters as well as making them less visible to enemy sensors. Marine Corps aircraft emphasize vertical or short takeoff capability to provide air power in the absence of airfields. The A-6 Intruder, once the backbone of the Navy's attack force, was withdrawn from service in the 1990s. The F/A-18 is proving every day to be a reliable, flexible platform and is expected to remain so well into the 21st century. Upgrades to the F/A-18 radar, engines, weapons, and enhanced all-weather attack capability are currently being planned. The AV-8B Harrier is proving to be a safe, effective aircraft. Upgraded engines and night attack capability will provide the Navy and Marine Corps more control over a broader spectrum of operational and threat scenarios.\nThe fleet air defense mission performed by Navy and Marine Corps fighters is to defend the fleet from shore and sea-based air attacks. Fighters attack incoming bombers seeking to destroy aircraft carriers and their accompanying ships.", "score": 24.345461243037445, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "Exploring new on-board operational capabilities: this is the challenge addressed by the Naval Group teams in close collaboration with the French Defence Procurement Agency and the French Navy. Different design, development and deployment activities are centred around the integration of drones on board vessels. 
The challenge is a stimulating one: the projects are advancing quickly and new opportunities are opening up on the international markets.\nThe latest update comes from Audrey Hirschfeld, Work Package Manager, who is working on the program for new operational capacities on amphibious helicopter carriers.\nThe adventure started in 2016. As a result of successful experiments on board the offshore patrol vessel L’Adroit, Naval Group installed a drone system on the amphibious helicopter carrier Dixmude. This was first deployed in stand-alone mode, i.e. without being linked to the combat system but controlled by a console developed by Naval Group and housed in a shelter on the flight deck. A second shelter, in the aviation hangar, was used for vehicle maintenance. “Our goal was to study the impact of its presence on board and evaluate the safety measures before expanding the range of its functions”, explains Audrey Hirschfeld.\nThe trials took place in May 2017, off the coast of Montpellier. “Gathered on the visual defence bridge, at dusk, all eyes fixed on the drone, we all held our breath until it took off”, recounts the engineer, recalling the emotion of the teams in front of the screens of the Operations Control Room, which showed the images from the drone’s camera. “It was our best reward”, she continues. With some great successes in the FREMM frigates and Gowind® corvette programs under her arm, Audrey was one of the first to join the project, attracted by the collective challenge.\nThis first trial campaign has proven convincing: Naval Group has been entrusted with the study and sustainable integration of the drone system into the helicopter carrier’s combat system. Further campaigns confirmed the expectations. 
Embarked on the Corymbe mission in September 2017, in the Gulf of Guinea, followed by the Jeanne d’Arc campaign, the drone system gradually revealed its full potential for surveillance and reconnaissance missions.\n“For spring 2019, we are preparing the deployment on the Dixmude. The drone system will comprise a new Naval Group console, a more extensive communication system and an additional airborne vehicle”, details Audrey Hirschfeld.", "score": 23.733839723220804, "rank": 48}, {"document_id": "doc-::chunk-1", "d_text": "Navy leadership also ordered naval aviation commanders to examine the possibility of reducing orders for the new F-35B and F-35C manned aircraft and use that money to buy the new X-47B and similar robotic combat aircraft. The navy currently plans to buy 680 F-35B and F-35C aircraft for (on average) $100 million each. A UCAV costs less than half that and provides most of the same capabilities, plus much longer range and no risk of losing pilots.\nFor most of the last decade, the navy has been hustling to ready a UCAV for carrier operations and combat use. Within four years the navy expects to have the X-47B demonstrating the ability to regularly operate from a carrier and perform combat (including reconnaissance and surveillance) operations. The new efforts aim to have UCAVs perform ground attack missions as well, something the Predators have been doing for over a decade. The larger Reaper UAV was designed to expand this combat capability and is being built as quickly as possible, to replace F-16s and other bombers in the combat zone.\nThe 20-ton X-47B weighs a little less than the 24-ton F-18A and has two internal bays holding two tons of smart bombs. Once it can operate off a carrier, the X-47B will be used for a lot of bombing, sort of a super-Reaper. The navy has been impressed with the success of the Predator and Reaper. But the Reaper weighs only 4.7 tons. 
The much larger X-47B uses an F100-PW-220 engine, which is currently used in the F-16 and F-15.\nThe air force and navy have always differed about the widespread use of UAVs in combat. When the air force agreed to work with the navy on UCAVs a decade ago, the idea was that the air force ones would largely remain in storage, to provide a rapid \"surge\" capability in wartime. The navy, however, wanted to use theirs to replace manned aircraft on carriers. The reason was simple: carrier ops are dangerous, and carrier-qualified pilots are more difficult and expensive to train and retain in the service. The navy still has these problems and senior admirals are pretty much in agreement that UCAVs are the future of carrier aviation. The sooner these UCAVs prove they can safely and effectively operate from carriers, the better.", "score": 23.030255035772623, "rank": 49}, {"document_id": "doc-::chunk-2", "d_text": "Unmanned systems―surface and subsurface―become an important part of the fleet in this outline because of their ability to do dull and dangerous work within an adversary’s defensive bubble. Unmanned systems may also reduce the number of personnel required, or at least move personnel to less vulnerable and stressful locations. But unmanned systems do have limitations. They cannot perform some missions, such as engagement with allies and partners, humanitarian assistance, and certain kinds of crisis response. As unmanned vessels get larger, they may also lose their advantage over manned systems because of the complexity of operations. The major challenge, however, is that the Navy only has a single experimental unmanned surface vessel operating today. How unmanned systems will operate in the fleet, whether the network can handle the bandwidth, and where unmanned surface vessels will be based are all unanswered questions. 
These unmanned surface and subsurface vessels may not count as “ships.” The Navy has official ship-counting rules, set by an agreement between the Navy and the Office of the Secretary of Defense back in the 1980s with occasional updates, most recently in 2016. Some unmanned vessels might not be counted under these rules because of their small size. In the past, Congress has been reluctant to change the counting rules, seeing this as a way of cutting the Navy while keeping the appearance of size.\nLarge/Small Surface Combatants\nUnder this plan, the number of small combatants (currently littoral combat ships but in the future frigates) increases because of their lower cost and ability to provide distributed capabilities. They provide a secondary benefit of increasing total fleet numbers, therefore allowing the Navy to be present in more places globally. Large combatants (cruisers and destroyers) were not discussed, but other sources put the number at 80-90.\nCombat Logistics Ships\nThe fleet will include more logistics ships, between 70 and 90, to deal with an environment in which they are threatened by adversaries for the first time in 70 years. 
Other sources indicate that the Navy will procure smaller logistics ships because they are harder to locate, and a single loss is less catastrophic.\nAircraft\nAircraft were not the focus of the presentation, but Esper did make an interesting side point, saying that the plan included unmanned ship-based aircraft of all types: fighters, refuelers, early warning, and electronic attack aircraft.", "score": 23.030255035772623, "rank": 50}, {"document_id": "doc-::chunk-3", "d_text": "They have been delivered to unmanned undersea vehicle squadron 1 in Keyport, Washington, to give warfighters more experience operating large UUVs and elicit their feedback.\n“We will then in ’19 open it up to industry if they want to come out and bring their sensors or payloads … so we can then now test sensors and payloads on a vehicle that the fleet operates,” Rucker said.\nThose efforts would inform programs of record. Later on, the technologies could be inserted into other vessels when they are proven and ready, he added.\nHowever, acquisition officials won’t be lining up to buy new equipment if it isn’t cost effective, noted Frank Kelley, deputy assistant secretary of the Navy for unmanned systems.\nThe aim is to “drive affordability into everything we do,” he said.\nThe service wants to buy large numbers of platforms and enabling technologies to conduct dangerous missions and swarming operations, he said.\n“What we could do is have devices that do one or two or even three things really well … and then deploy that not in the hundreds but in the thousands,” he said.\nWith that in mind, high-priced equipment might be cost prohibitive in some cases, Mastroianni said.\n“For a one-way mission [with a] high probability of loss, that isn’t a cost-benefit analysis that works too well in our favor,” he said.\nAs it pursues UUVs, USVs and enabling technologies, the Navy is working with a wide variety of industry partners including small businesses, startups and the commercial sector, Neagley 
noted.\nOfficials in the unmanned systems world are gung-ho about the arrival of James “Hondo” Geurts as the new assistant secretary of the Navy for research, development and acquisition. Geurts previously served as the acquisition chief at Special Operations Command, where he gained a reputation for rapidly procuring new technology.\nHe is bringing the same mindset to his new role, Rucker said. “One of the things he really challenged us on is … how do we go faster.”\nNeagley said his office has special acquisition authorities that provide speed and flexibility in contracting and allow the Navy to reach a broad supplier base.\n“We recognize that a lot of the innovation … exists in small businesses,” he said.", "score": 23.030255035772623, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "Darpa looks to use small ships as drone bases\nThe US military is planning to use fleets of small ships as platforms for unmanned aircraft to land and take off.\nThe US Defense Advanced Research Projects Agency (Darpa) said it needed to increase its airborne \"surveillance and reconnaissance\".\nUnmanned aerial vehicles (UAVs), known as drones, are commonly launched on land - but deploying them at sea is harder because they need to refuel.\nThey currently require large aircraft carriers with long runways.\nThe new project has been dubbed Tern (Tactically Exploited Reconnaissance Node) after a sea-bird known for its endurance.\nDarpa programme manager Daniel Patt, said: \"Enabling small ships to launch and retrieve long-endurance UAVs on demand would greatly expand our situational awareness and our ability to quickly and flexibly engage in hotspots over land or water.\"\nHe added: \"It is like having a falcon return to the arm of any person equipped to receive it, instead of to the same static perch every time.\"\nAbout 98% of the world's land area lies within 900 nautical miles of ocean coastlines, and Darpa increasingly sees conflicts being fought out at sea.", "score": 
23.030255035772623, "rank": 52}, {"document_id": "doc-::chunk-6", "d_text": "“It allows us to take from any source all of the intelligence that’s available. We have ground-based radar systems that are excellent at detection,” Robertson said. “I have camera systems where I can use the radar to point a camera and look at things.”\nOnce the system detects something, SkyDome uses image recognition and AI to classify the object and its intent. “Is it a bird? Is it a drone? Is it a friendly drone or an unfriendly drone?” Robertson said. “It uses the intelligence gained from each of its sensors combined, much as your brain would, and makes a call and says, ‘that’s a threat.’”\nLaunched automatically upon detection or at a human’s command, the DroneHunter climbs to altitude and then uses onboard radar to track the enemy drone. Up in the air, there is very little to interfere with the DroneHunter’s ability to lock on target. “It can see these drones from hundreds of meters away,” said Robertson.\nAfter snaring a drone in its net, the DroneHunter brings it back. Nabbing a drone out of the sky offers a few advantages over attempting to jam it or blowing it up. You avoid bringing laser-riddled drones crashing down on urban crowds. You don’t foul up cellular communications networks. And you get more out of forensic analysis, which can show who launched the drone and from where.\nRobertson said the DIU contract was worth multiple millions of dollars, though he declined to specify further. DIU did not immediately respond to request for comment. Several combatant commands are interested, Robertson said. (Source: Defense One)\n04 Feb 20. DOTE cites need for further Knifefish development. 
Based on recent tests, the US Navy’s (USN’s) Surface Mine Countermeasure (SMCM) Unmanned Undersea Vehicle (UUV), or Knifefish, needs more development work, the Pentagon Director of Operational Test and Evaluation (DOTE) said in its annual report, released on 30 January.\nThe USN recently conducted an operational assessment to evaluate the system’s capability to detect, classify, and identify naval mines that are moored in the ocean volume and that lie on, or are buried in, the ocean bottom, the DOTE noted.", "score": 22.583723354368026, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "Unmanned aerial vehicles aren’t brand-spanking new to naval aviation; UAVs the Navy uses include the rotary-wing Fire Scout and the RQ-2A Pioneer. But the next big thing in unmanned flight at sea will be an aircraft that can take off from and land on a carrier. Several companies are in the process of making that happen.\nThe big names in flight are displaying wares that they hope will become the backbone of the Navy’s collection of UAVs at the Sea Air Space Expo at the Gaylord National Resort Hotel and Convention Center in National Harbor, Md.\nLook at the pictures above. At left is Northrop Grumman’s X-47B, a UAV that made its first flight in February. The one on the right is Boeing’s X-45C. The one in the center, while not labeled, is almost certainly the X-47B — it’s at the Huntington Ingalls Industries display; HII is a Northrop Grumman spin-off, and the airframe has the same shape as the X-47B UCAS. It’s tough to tell in this picture, but it’s shown positioned on the flight deck of the carrier Gerald R. Ford, a ship being built in Newport News, Va.\nThe HII picture merits another look because, well, the whole thing is a mock-up of what may someday be the face of unmanned naval aviation on the flight deck of a non-existent ship. Sitting nearby the now hypothetical UAV is an F-35C Lightning II joint strike fighter. 
In case you’re keeping track, that’s an aircraft that had not joined the fleet sitting on the flight deck of a carrier that’s under construction near another plane that’s currently being tested. Like flying cars and undersea bubble cities, it’s all fantasy, for now at least.\nNavy Secretary Ray Mabus emphasized during his luncheon speech Monday that unmanned craft will play a prominent role in the Navy’s future.\n“Over the next decade, we will move aggressively to develop a family of unmanned systems including underwater systems that will be able to operate for extended periods of time in support of our ships, our expeditionary units and our special warfare teams, and a low-observable, carrier-based intelligence surveillance reconnaissance strike unmanned air system,” he said.", "score": 21.695954918930884, "rank": 54}, {"document_id": "doc-::chunk-4", "d_text": "The MH-53E has increased fuel capacity, a hover/tow coupler, and improved mission capabilities.\nA little-known but very important mission involving naval aviation is Fleet Ballistic Missile Communications. TACAMO (Take Charge and Move Out) aircraft fill the role of relaying very low frequency signals to strategic missile submarines. Initially the force consisted of EC-130Qs. These aging aircraft were replaced by a modified 707/E-3 design designated E-6A, which maintained the same basic equipment currently installed in the EC-130Q.\nIntermediate and advanced training needs are accomplished by T-45 aircraft, simulators, academics and training management systems. The T-45 aircraft is one of naval aviation’s top priorities. The T-45 replaced the T-2C and TA-4J as they reached the end of their service life.\nUnmanned Aerial Vehicles (UAVs) and their associated sensors, launch, recovery, mission planning and control, data relay, sensor data processing, and exploitation subsystems are managed under the UAV Joint Project Office with Navy as executive service. 
The role of UAVs in all of the services is expected to grow due to technology and sensor miniaturization resulting in increased UAV utility and cost effectiveness.", "score": 21.695954918930884, "rank": 55}, {"document_id": "doc-::chunk-1", "d_text": "Since then, there have been a number of high-profile aquatic acquisitions, including: Riptide Autonomous Solutions by BAE Systems; Liquid Robotics by Boeing; and multiple investments in Ocean Aero by Lockheed Martin. The biggest driver of this consolidation is the demand from the military, particularly the Navy, for autonomous search-and-destroy missions.\nIn September 2017 the US Navy established the Unmanned Undersea Vehicle Squadron 1 (UUVRON-1). When explaining this move, Captain Robert Gaucher stated “Standing up UUVRON 1 shows our Navy’s commitment to the future of unmanned systems and undersea combat.” This sentiment was shared by Commander Corey Barker, spokesman of the famed Submarine Force Pacific: “In addition to providing a rapid, potentially lower cost solution to a variety of mission sets, UUVs can mitigate operations that pose increased risk to manned platforms.” Last summer the Navy appointed a dedicated Commander of UUVRON-1, Scott Smith. In a recent interview, Smith outlined his vision for sea drones: “Those missions that are too dangerous to put men on, or those missions that are too mundane and routine, but important ― like monitoring ― we’ll use them for those missions, as well. I don’t think we’ll ever replace the manned platform, but we’ll certainly augment them to a large degree.” It is this augmentation that is generating millions of dollars of defense contracts which are starting to spill over to private industry.\nBoston-based Dive Technologies, founded by a team of former BlueFin engineers, is building an innovative technology to broaden the use of unmanned marine systems. 
In speaking with its CEO, Jerry Sgobbo, this week, he described nascent opportunities for his suite of innovations: “We see demand for offshore survey work in the U.S. increasing significantly as grid scale offshore wind farms are developed over the next decade. In particular, much of this work will take place in New England and mid-Atlantic waters.” Sgobbo is referring to the recent move by Rhode Island in constructing the first-ever offshore wind farm in the United States, capitalizing on the region’s famous gale-force gusts. Based upon the success of the Block Island project, other states are quickly putting forth legislation to follow suit. Just this week, Senator Edward Markey of Massachusetts declared in Congress that “Offshore wind has the potential to change the game on climate change, and those winds of change are blowing off the shores of Massachusetts.", "score": 21.695954918930884, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "The system features a waterproof and floating air vehicle that provides even small vessels a unique organic offboard surveillance capability that can be deployed and retrieved in less than 15 minutes.\nElbit Systems has developed the Skylark™ C, a new highly autonomous Mini Unmanned Aircraft System (Mini-UAS) specifically designed and built for maritime applications. 
Based on the Skylark I Mini UAS – which are fully operational and in use by dozens of customers around the world – the new Skylark C transforms and extends the operational capabilities of its land-based counterpart into an organic maritime Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR) asset.\nAs a maritime vessel organic asset, Skylark C provides the capabilities to inspect maritime activities from a safe distance, observe targets from a bird’s eye view, perform reconnaissance over coastal areas and perform continuous covert surveillance, thus extending the vessel’s ISR capabilities with respect to range, rate and quality of information obtained.\nMission effective, with highly autonomous flight capability, Skylark C incorporates an electrically-propelled air vehicle with a very low visual and acoustic signature, making it an ideal solution for covert operations such as special naval operations, border security, anti-terrorism and anti-piracy operations. The aerial vehicle utilizes Elbit Systems’ industry-leading UAS technology and know-how, featuring an advanced inertial navigation system (INS) and a stabilized electro-optical (EO) payload with a high resolution thermal imager and color daylight camera that enables continuous day/night monitoring in diverse weather conditions.\nDuring the mission, high quality day or night video is available in real time in the MCS. The operator focuses on the mission rather than on flying the vehicle, applying convenient and effective control features such as fly-by-camera. Retrieval of the UAS after the mission is enabled through the use of a unique fully autonomous parachute-based recovery, allowing the lightweight waterproof air vehicle to safely glide, land and float on water surfaces guided by GPS. A net recovery on deck is an optional capability. The entire process – from recovery to next launch – takes no longer than 15 minutes. 
Only two operators are required to operate the system.\n“Until recently, the ability to achieve real-time maritime situational awareness and ISTAR capabilities in a short period of time and with minimal resources remained a significant gap,” commented Elad Aharonson, General Manager of Elbit Systems ISTAR Division.", "score": 21.695954918930884, "rank": 57}, {"document_id": "doc-::chunk-1", "d_text": "Since Marine tactical UASs are extremely constrained by weight, programmatic trade-off between technology and brilliance in the basics has become increasingly lucrative.\nTactical payloads should be light enough to enable air vehicles to loiter for extended periods of time while maintaining sufficient stand-off through stealth. Stand-off distance at sea may need to be well over the horizon from shore to support amphibious operations. 2 Freeing processing power and weight are practical solutions, and allow crews to provide additional Marine aviation functions beyond air reconnaissance. It may also enable Marine UASs to perform offensive air support, electronic warfare, and armed reconnaissance. This is in concert with then-Admiral John C. Harvey Jr.’s July 2012 article in Proceedings, “Keeping Our Amphibious Edge.” 3 Marines with experience in combined-arms integration can save money, weight, and memory space consumed by hardware and software in tactical Marine UAS payloads.\nMission commanders should participate in combined-arms rehearsals with their ground brethren to enhance situational awareness. Some of the premier training venues for Marines to become experts in the fundamentals of manned and unmanned aviation integration include Mojave Viper in Twentynine Palms, California, and the Weapons and Tactics Instructor Course in Yuma, Arizona. 
There, military commanders can “unplug” from computer-generated overlays and Internet Relay Chat while plugging into tactical radio communications and the tactical situation on the ground.\nRather than wait for payloads or command-and-control systems to display latent ground schemes of maneuver, DOD-approved datalinks like Tactical Digital Information Link 16 can provide real-time situational awareness. Real-time control of tactical UAS doesn’t require excessive command-and-control layers—only initiative and existing programs of record. Failure to grasp these fundamentals poses challenges for tomorrow’s increasingly automated battlefield.\nTechnology and ROE\nPayload technology has become a driving force behind crafting rules of engagement (ROE). While one could argue that politics over technology has resulted in more restrictive rules of engagement, experience has proven otherwise.\nInterdictions should be determined by a commander’s prerogative, hostile intent and acts, mission commander experience, and sound military judgment. Technology impedes this cycle over and over by saturating decision makers with distracting information. Discriminating between such minutiae as to what kind of weapon a threat is carrying results in lost opportunities—or worse.", "score": 21.693834973445323, "rank": 58}, {"document_id": "doc-::chunk-5", "d_text": "Navy successfully flew two autonomously controlled EA-18G Growlers at Naval Air Station Patuxent River as unmanned air systems using a third Growler as a mission controller for the other two. 
The flights, conducted during the Navy Warfare Development Command’s annual fleet experiment (FLEX) exercises, proved the effectiveness of technology allowing F/A-18 Super Hornets and EA-18G Growlers to perform combat missions with unmanned systems.\n“This demonstration allows Boeing and the Navy the opportunity to analyze the data collected and decide where to make investments in future technologies,” said Tom Brandt, Boeing Manned-UnManned Teaming demonstration lead. “It could provide synergy with other U.S. Navy unmanned systems in development across the spectrum and in other services.”\nOver the course of four flights, 21 demonstration missions were completed.\n“This technology allows the Navy to extend the reach of sensors while keeping manned aircraft out of harm’s way,” Brandt said. “It’s a force multiplier that enables a single aircrew to control multiple aircraft without greatly increasing workload. It has the potential to increase survivability as well as situational awareness.”\n03 Feb 20. The Pentagon Is Spending Millions on Hunter Drones With Nets. After an F-22 Raptor nearly collided with a cheap drone in 2017, the U.S. Air Force’s Air Combat Command received permission to shoot down unmanned flying objects that get too near its airbases. But shooting down drones over cities is a less-than-ideal solution to a growing problem. The Defense Innovation Unit, or DIU, is contracting with Utah-based Fortem Technologies for its SkyDome anti-drone system, which marries net-armed drones called DroneHunters with a radar system dubbed TrueView. While other anti-drone systems look for the radio signals that connect drones to their operators – and then try to jam or home in on them — SkyDome can discard the assumption that an incoming drone is emitting anything at all.\n“It’s very easy to program a drone to fly completely autonomously. 
It can be done with a commercial, off-the-shelf drone,” said Fortem’s CTO and co-founder Adam Robertson.\nThe SkyDome combines radar, sensors aboard the DroneHunters, and even other sensors. It’s an ensemble approach that mimics, somewhat, the way an animal or human might hunt in the wild, using a variety of data sources to make targeting determinations.", "score": 20.327251046010716, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "The Navy has entered a new age in carrier aviation with the successful landing of the unmanned Northrop Grumman X-47B on the USS George H.W. Bush (CVN-77), the service announced at 1:45 p.m. EST on Wednesday.\nCall sign Salty Dog 502 left Naval Air Station Patuxent River, Md. shortly after 12:00 p.m. EST and flew to the Bush, controlled through a complex series of algorithms and navigational sensors, and landed on the deck of the Nimitz-class aircraft carrier guided not with a joystick and throttle controls but by an operator with a mouse and a keyboard.\nOn its final approach to Bush, a hypersensitive version of the same GPS technology used to direct families on vacation guided the hook of the tailless aircraft safely to the deck of the carrier and into history.\n“The dynamics and complexity of the demonstration is not just flying an airplane. It is operating a system autonomously in and out of the most demanding launch and recovery environment around the world,” Rear Adm.
Mat Winter, program executive officer for unmanned aviation and strike weapons, said in a Tuesday conference call with reporters.\n“This is not trivial.”\nThe landing of the X-47B successfully proved the Unmanned Combat Air System Demonstration (UCAS-D) project and will pave the way to include unmanned aerial vehicles (UAVs) on carriers in the future.\nThe Navy’s focus now will be to move beyond the experimental UCAS-D and into a capability that will move from novelty to an organic component of the carrier air wing.\nThe next step will be development of the Unmanned Carrier Launched Airborne Surveillance and Strike (UCLASS) system, a capability the Navy wants to field by 2020.\nThe UCLASS program has run in parallel with the UCAS-D program. According to documents obtained by USNI News, the Navy is looking for a system (consisting of one or more aircraft) that can conduct two 650-nautical-mile orbits around a carrier for $150 million.\nIn June, the Navy issued a preliminary request for proposal (RfP) to Lockheed Martin, Boeing, General Atomics and Northrop Grumman to begin design work on their bids ahead of a full RfP in 2014.\n“All of the knowledge out of the program is being transferred to UCLASS,” Winter said.\n“UCLASS will benefit from all of what we have done here in X-47B.”", "score": 20.327251046010716, "rank": 60}, {"document_id": "doc-::chunk-4", "d_text": "In 2022, the US Navy’s Naval Sea Systems Command (NAVSEA) unveiled its first-ever Orca Extra Large Unmanned Undersea Vehicle (XLUUV). The ORCA project was awarded to Boeing in a USD 274 million contract in February 2019 for five drones. The Orca is designed to conduct mine countermeasures, anti-submarine warfare, anti-surface warfare and electronic warfare missions.\nIt has a top speed of eight knots (14.8 km/9.2 miles per hour) and a maximum range of 12,038 km.
The design of this drone is based on Boeing’s 51-foot Echo Voyager and has an open architecture to enable future integration of advanced technologies.\nFrance’s Naval Group is another major player in this area of defence systems. During the fifth edition of the Naval Innovation Days, the group unveiled the Extra Large Unmanned Underwater Vehicle (XLUUV) demonstrator. In October 2021, answering a question by Naval News, the Naval Group’s CEO Pierre Eric Pomellet said: “The demonstrator really came out of Naval Group’s research laboratories to put together technologies and demonstrate the interest of a system such as this one to complement a naval force. So, we are unveiling it today, but obviously a lot of government and research players know what we have done, and we are going to discuss with them how we can follow up on this type of technology and demonstration. This project is part of the self-investment that the Naval Group is making to go beyond the simple technology.”\nThe demonstrator is designed primarily as a platform to conduct Intelligence, Surveillance and Reconnaissance (ISR) missions. Future missions may include mine countermeasures (MCM) or anti-submarine warfare (ASW), as the company’s F21 heavyweight torpedo (HWT) can fit in its weapons bay.\nFurther, China is reported to have ‘at least five designs in the water, many more than any other navy.’\nJanes reported last year that Atlas Elektronik UK will supply the Royal Navy with autonomous underwater vehicles: the UK Defence Equipment and Support (DE&S) Mine Hunting Capability (MHC) team has awarded a GBP 32 million (USD 41.6 million) contract to Atlas Elektronik UK (AEUK) for autonomous mine-hunting systems.
The company will supply the Royal Navy with three MHC autonomous underwater vessel (AUV) systems.", "score": 20.327251046010716, "rank": 61}, {"document_id": "doc-::chunk-1", "d_text": "They can stay at-altitude and on station for weeks at a time, which makes them well-suited for providing over-the-horizon airborne radar, surveillance coverage, and/or communication relays for areas like ports, key sea lanes & straits, coastal areas, main transportation highways, national borders, or demilitarized zones.\nTheir multiple helium pockets and low differential pressure between the helium and the atmosphere also make them hard to kill, even if shot full of holes. The helium just escapes slowly, and the aerostat will still remain aloft for hours or even days. During one incident in Iraq, a small RAID aerostat came loose from its tether and the US Air Force barely managed to shoot it down before it drifted over the Iranian border.\nAerostats will not replace naval surveillance aircraft. Their tethered nature creates substantial drag when moving at speed, and keeping them aloft at altitude becomes difficult in those circumstances. High winds and thunderstorms can ground them, in situations where aircraft could still fly. They also have rather large radar reflections, which can compromise task force stealth. Their offsetting advantages, however, may make them a critical naval supplement to be deployed over ports or staging areas; or from ships in or near “hot” zones like beachheads, or on picket in and near dangerous areas like the Persian Gulf, Straits of Malacca, Somali coast, et. al.\nAccordingly, the USA’s NAVSEA(NAVal SEA Systems Command) and NAVAIR (NAVal AIR Systems Command) signed a memorandum of understanding on Oct. 28, 2006 to develop a sea-based 38-meter aerostat prototype with a weather hardened design that can carry up to 500 pounds of surveillance equipment. 
That’s NAVAIR’s area of expertise, and they will use a 32 meter aerostat they’ve been experimenting with as a base platform. The Navy also wants to develop this aerostat to accommodate a modular, interchangeable payload system that can offer radar, optronics, communications, or set combinations for maximum flexibility. That’s NAVSEA’s area of expertise.\nThe ultimate goal of the program is to develop a sea-based 71-meter, weather-hardened aerostat sensor platform, with larger interchangeable payload modules, capable of operating at an altitude of up to 15,000 feet.", "score": 20.327251046010716, "rank": 62}, {"document_id": "doc-::chunk-3", "d_text": "Currently, Will Rogers ANGB, Okla., home of the recently transitioned 137th Special Operations Wing, has received 11 of the 13.\nThe unmanned revolution has definitively reshaped the way the Air Force carries out the ISR mission, and the future of that mission is poised to bring more of the same. In 2019, the service plans to retire the venerable U-2 Dragon Lady high-altitude ISR aircraft. As a replacement, Northrop Grumman is outfitting the RQ-4 Global Hawk with new capabilities to take over the U-2’s mission set. In February 2016, Northrop Grumman successfully demonstrated the RQ-4’s Universal Payload Adaptor. It enables the RQ-4 to fly with an advanced Senior Year Electro-Optical Reconnaissance System-2 sensor. In October that year, the company announced that Global Hawk had flown with the U-2’s Optical Bar Camera, and in February 2017 Northrop Grumman said it had demonstrated the RQ-4’s use of the MS-177 high-altitude multispectral sensor.\nEven within the manned ISR enterprise, the Air Force has spoken of the need to make use of machines to process collected data more quickly and accurately. 
ISR chief James, who in 2013 acknowledged the Afghanistan-based airborne-ISR revolution, also said then that new ISR assets were enabling data collection at such high volumes that “machines and artificial intelligence tools have to help the Air Force get control of all this information.”\nBut the Air Force’s manned ISR fleet remains crucial to its mission, and recapitalizing those systems is a service priority. The RC-135 Rivet Joint surpassed 25 years of continuous service for US Central Command in September 2016 and is the most pressing need, Gen. Herbert J. “Hawk” Carlisle, then head of Air Combat Command, said that year.\nThe Air Force is already working to replace its fleet of 16 E-8C JSTARS—which reached one million flying hours in September 2016—with 17 new aircraft. With Northrop Grumman, Boeing, and Lockheed Martin competing, a contract is expected sometime in 2018.\nMeanwhile, the E-3 AWACS aircraft have been undergoing upgrades that are likely to keep them operational into the 2030s.", "score": 20.007520308202682, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "The Navy seems to have shifted its concept for the Unmanned Carrier Launched Surveillance and Strike (UCLASS) for a third time in as many years, part of one of the most confusing and misunderstood aviation acquisition programs of the last decade.\nAt its 2006 conception, the new generation of carrier-based unmanned aerial vehicles would be built to extend the inland reach of carrier strike groups well beyond the reach of the current crop of manned fighters.\nIn 2011, the Navy and the Pentagon moved to make a lower-cost UAV to be quickly pushed into service to hunt terrorists, anticipating a world where U.S.
forces could be restricted from flying land-based UAVs, as well as to act as an intelligence, surveillance and reconnaissance (ISR) asset when the rest of the carrier air wing was off-duty.\nNow, the Navy seems to have again changed the character of the planned UCLASS into an aircraft that will almost exclusively spend its time over the ocean.\n“It’s very much part of our maritime package, as part of the carrier strike group,” said Vice Adm. Paul Grosklags, principal military deputy to the assistant secretary of the Navy for research, development and acquisition, who spoke to USNI News on Thursday.\nThe new concept is completely different from what the original progenitors of the program — including current deputy defense secretary Bob Work — had intended, but seems to fit in line with the Navy’s current thinking about maritime threats in the Western Pacific.\nThe missions now in mind for UCLASS include permissive-airspace ISR and strike to start with, Grosklags said. As the program evolves, those missions would expand to more challenging contested littoral and coastal ISR and strike, to attacking enemy surface action groups (SAGs).\nThe requirements for the UCLASS have been set, and the Pentagon had expected to hold a Defense Acquisitions Board (DAB) meeting next Monday, Grosklags testified before the House Armed Services Subcommittee on Seapower and Projection Forces on Wednesday.\nHowever, Work has requested a “precursor” meeting ahead of the DAB, which forced the Pentagon to postpone.\nBoth the meeting with Work and the DAB are now expected next week before the final request for proposals (RFP) is released to the four companies competing: Boeing, Lockheed Martin, General Atomics and Northrop Grumman. The RFP is restricted even though most of the document is unclassified.", "score": 18.90404751587654, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "By Sgt. Richard Andrade\nSpecial to the Hood Herald\nFORT IRWIN, Calif.
- Unmanned aerial vehicle pilots maintain an \"eye-in-the-sky\" view providing real-time surveillance high above the battlefield in order to keep soldiers safe from unexpected enemy attacks at the U.S. Army National Training Center.\nBefore putting soldiers in harm's way, unmanned aerial vehicles perform reconnaissance, be it night or day. At night the unmanned aerial vehicle uses night-vision to find the point of origin of any attack, some even carry out attack missions. Unmanned aerial vehicles have many uses including safely scanning a large area and providing accurate information on potential enemies.\nThe ground data terminal antenna receives intelligence between the ground control station and the unmanned aerial vehicle up to 125 kilometers away. The station is where the pilots drive the unmanned aerial vehicles and is located near the tactical operations center, providing real-time video.\n\"My soldiers and I provide a 'bird's eye' view situational awareness for the soldiers on the ground,\" said Staff Sgt. Thomas Tichy, 66th Military Intelligence Company, 3rd Squadron, 3rd Armored Cavalry Regiment.\n\"We provide valuable intelligence to our regimental (operations center),\" he said. \"They decide what to do with our intelligence and put it into action.\"\nPvt. Aaron Grumm, also of the 66th Military Intelligence Company, said his mission is to provide reconnaissance using a wide array of tactical unmanned aerial vehicles. The vehicles cover a larger area than regular ground units would cover in the same amount of time.\n\"If for example, a soldier would be lost, we would help find them,\" Grumm said. 
\"We can search from above covering a larger area faster than a ground team.\"\n\"From high above the ground, they can spot someone about to bury an improvised explosive device, or people who are engaging our ground forces,\" Tichy said.\nTichy said some of the unmanned aerial vehicle operators in his company have combat experience using their real-time vehicle feed to coordinate with Apache and Kiowa helicopter fire teams to engage anti-coalition forces.\nSgt. Joshua Peterson, also of the 66th Military Intelligence Company, said his team has direct communication with the operations center and consequently his team is called upon the second there is mortar, artillery fire or indirect fire directed toward coalition forces.", "score": 18.90404751587654, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "The French Navy has placed an additional order for a fleet of Survey Copter Aliaca light tactical uncrewed aerial systems (UAS) for maritime surveillance, according to Airbus Defense and Space.\nSurvey Copter—an Airbus subsidiary— signed a firm order with the French Defense Procurement Agency (DGA) for 15 onboard systems (30 aircraft) of Aliaca fixed-wing electric UAS, plus associated training and integrated logistics support for the French Navy, the company announced Monday. The company said deliveries of the systems will begin this year and “will be used to equip new ships and ship types, and to enhance their onboard capabilities.”\nThe request builds upon a 2020 order of 11 systems and 22 aircraft that have been called the French Navy’s “remote field glasses” due to their ability to provide airborne surveillance, detection, and identification capabilities for high seas patrol boats and surveillance frigates, according to the company.\n“We are very honored to participate in the French government’s action at sea and to continue supporting the French Navy in its many missions,” said Christophe Canguilhem, CEO of Survey Copter, in a statement. 
“This additional order confirms the relationship of trust we have with the DGA and the French Navy, and the quality, efficiency, and reliability of our drone systems at sea.”\nThe order is the latest for Airbus, which late last year announced it was launching a business line dedicated to military drones. In January, Airbus also announced its intent to acquire Aerovel, the manufacturer of the Flexrotor tactical drone.\nThe post French Navy Orders Fleet of Small Airbus Maritime Drones appeared first on FLYING Magazine.", "score": 18.90404751587654, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "The Vanilla Aircraft is uniquely positioned to fill a void as a Group 3 UAS capable of flying continuously for up to 10 days, providing persistent surveillance and reconnaissance. The surveillance industry has dramatically embraced drones to provide multi-use monitoring, AI and other emerging technologies to complement the mobile capabilities of UAVs. UAS technology is filling a huge gap in border security and shipping lanes, where long dwell time over large distances is critical.\nAn estimated $7 billion is currently being spent each year in the surveillance industry; current estimates see this industry growing to over $13 billion by 2024. With object recognition, facial ID, AI and automated tracking being just a few of the advanced technologies being applied to UAVs, this industry is growing at a rapid rate with no end in sight.\nUAV technology provides highly effective and cost-effective solutions to local law enforcement in protecting its citizens.\nWith new Federal and State mandates in place, protecting our border is a massive undertaking that is being greatly aided by UAV technology.\nSurveillance Unmanned Vehicle Market\nNow that high-end cameras have smaller footprints and drones can carry heavier payloads and be flown with much more precision, many companies have embraced them over more traditional solutions.
Plus, UAVs are often much safer and more cost effective, and the Vanilla unmanned solution’s longer dwell time guards against ISR failure during takeoff and landing.", "score": 18.90404751587654, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "In the face of a growing threat from China, the Navy envisions drone ships monitoring enemy forces across the vast Pacific Ocean, expanding the range of firepower, and keeping sailors out of harm’s way.\nThe Navy is accelerating the development of these robotic ships as an affordable way to keep pace with China’s growing fleet while pledging not to repeat the costly blunders of shipbuilding from recent years.\nThe four largest unmanned ships are being used together this summer during a major naval maneuver in the Pacific Ocean.\nOther smaller waterborne drones are already being deployed by the Fifth Fleet in the waters off the Middle East.\nThe goal in the coming years is to see how the drone ships’ radar and sensors can be combined with artificial intelligence and paired with conventional cruisers, destroyers, submarines and aircraft carriers, to create a networked fleet that is resilient as it spreads over greater distances and, the Navy says, is more difficult for enemies to destroy.\n“It’s about moving technology forward, and confidence in capability,” said Cmdr. Jeremiah Daly, commander of Unmanned Surface Vessel Division One in California.\nJames Holmes, a professor at the Naval War College in Newport, Rhode Island, believes the Navy is betting that technology can help with the three keys to military success — weapon range, scouting, command and control — at lower cost and with fewer risks to personnel.\nAll of these benefits, along with long-term durability in the harsh saltwater environment, he said, must be proven.\n“We’re kind of in Jerry Maguire’s ‘show me the money zone’ with technology.
It’s going to be undoubtedly helpful, but whether or not it’s a game changer,” said Holmes, who doesn’t speak for the Navy.\nBefore moving forward, the Navy must first win over a skeptical Congress after a series of shipbuilding disasters.\nIts littoral combat ships had propulsion problems, which led to early retirements. The Advanced Gun System on its stealth destroyer was abandoned due to expensive ammunition. Its latest aircraft carrier had problems with its elevators and a new aircraft launch system.\nCritics said the Navy was quick to cram too much new technology on those ships, leading to failures and mounting costs.\n“We can’t throw all the resources on [automated ships] with a track record of 20 years of failed ship programs,” said Representative Elaine Luria of Virginia, a retired Navy officer.", "score": 17.397046218763844, "rank": 68}, {"document_id": "doc-::chunk-9", "d_text": "Part of the test will come in the next 18 months as about 150,000 U.S. and allied troops will try to break the offensive capabilities of the Taliban and al Qaeda in Afghanistan and new technologies will be brought into play. The rest will involve the Air Force’s investment in advanced technologies.\n‘We cannot move into a future without a platform that allows [us] to project power long distances and to meet advanced threats in a fashion that gives us an advantage that no other nation has,’ says Lt. Gen. Dave Deptula, deputy chief of staff for intelligence, surveillance and reconnaissance. ‘We can’t walk away from that capability.’\nFor example, surveillance aircraft can see a lot more, farther and better with technologies like long wave infrared if the platform can operate at 50,000 ft. or higher. In comparison, the RC-135S Cobra Ball, RC-135W Rivet Joint and E-8C Joint Stars manned surveillance aircraft are all limited to an altitude of less than 30,000 ft. – sometimes well under.
Additionally, the multi-spectral technology to examine the chemical content of rocket plumes has been miniaturized to fit easily on a much smaller, unmanned aircraft. Other sensors of interest are electronically scanned array radars, low-probability of intercept synthetic aperture radars and signals intelligence.\nFollowing the landing of a damaged Navy EP-3E in China, in early 2001 Defense Secretary Donald Rumsfeld called a classified, all-day session of those with responsibilities for ‘Sensitive Reconnaissance Operations.’ (AW&ST, June 4, 2001, p. 30) They discussed how to avoid future embarrassing and damaging losses of classified equipment, documents or aircrews without losing the ability to monitor the military forces and capabilities of important countries like China. Their leading option was to start a new, stealthy, unmanned reconnaissance program that would field 12-24 aircraft. Air Combat Command, then led by Gen. John Jumper, wanted a very low-observable, high-altitude UAV that could penetrate air defenses, fly 1,000 naut. mi. to a target, loiter for 8 hr. and return to base.\nDuring the invasion of Iraq in 2003, a UAV described as a derivative of DarkStar was being prepared and was said by several officials to have been used operationally in prototype form.", "score": 17.397046218763844, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "Unmanned systems are changing the way paramilitary forces operate today. The success of any UAS is the manner in which it integrates with manned activity and the ability to do so as a capability multiplier and at reasonable cost.\nTellumat’s offering encompasses the complete medium-sized UAS solution for a variety of airborne surveillance and monitoring applications at a fraction of the cost of comparable market offerings.\nOur UAS solutions include unmanned aerial vehicles, portable ground control stations and all the support elements required to operate the system.
The networked architecture of the system allows the geographic distribution of these elements to be flexible and diverse enough to meet most mission objectives.\nThe integrated system offering includes flight mission computers, INS navigation sensor packs, operator control units, antenna positioners and data links.\nThe system has been developed for ease of mobility to allow for road, sea or air transportation. In addition the simplicity of deployment ensures that the system is operational within the shortest period of time after arrival at the deployment site.", "score": 17.397046218763844, "rank": 70}, {"document_id": "doc-::chunk-3", "d_text": "Thanks to high-resolution cameras and relatively modest cost, drones are considered a game changer for things like airborne surveillance and photography.", "score": 17.397046218763844, "rank": 71}, {"document_id": "doc-::chunk-10", "d_text": "Aerial surveillance has long been a major shortfall of peacekeeping missions, and current technology should be exploited to fill the requirement.\n- Kalashnikov, Russia’s world famous maker of combat automatic weapons, has acquired ZALA Aero, a Russian UAV manufacturer. Plans have also been announced to develop and manufacture drones capable of air surveillance for crisis spots.\n- Weaponised drones are beginning to dominate the military UAV market, with a 34% share of UAV purchase deals predicted within ten years, according to market research firm Strategic Defense Intelligence. Medium Altitude/Long Endurance UAVs are currently the fastest growing sector of the overall UAV market. Europe is the fastest growing market but North America remains the largest by value.\n- Australia has started sending its pilots to the United States for training on the MQ-9 Reaper unmanned combat air vehicle. 
Australia will be only the second country, after the United Kingdom, to fly US-manufactured combat drones.\nIntelligence, surveillance and reconnaissance\nUK surveillance laws need overhaul according to parliamentary committee\nIn a landmark report, the UK parliament’s intelligence and security committee (ISC) concludes that the various pieces of legislation governing Britain’s intelligence agencies and their mass surveillance operations require a total overhaul to make them more transparent, comprehensible and appropriate to modern methods and requirements.\nWhile the 18-month enquiry concluded that existing laws were not being broken by the intelligence agencies and that their bulk collection of data did not amount to unnecessary surveillance or a threat to individual privacy, it did say that the legal framework is unnecessarily complicated, to the point that it provided the agencies with a ‘blank cheque to carry out whatever activities they deem necessary’. The over-complexity of the legislation, combined with the lack of transparency over how the legislation and surveillance powers are implemented, has led to a public belief that there is widespread and indiscriminate surveillance.\nThe report goes on to call for all current legislation governing the surveillance capabilities of the security and intelligence agencies to be replaced by a new, single act of parliament. This act should clearly set out surveillance capabilities, detailing the authorisation procedures, privacy constraints, transparency requirements, targeting criteria, sharing arrangements, oversight and other safeguards.\nThe report also reveals some detail of how intelligence and security agencies have the capability to trawl through personal data, developing detailed ‘bulk personal datasets’ with minimal statutory or judicial oversight.
The datasets of major targets could run to many millions of records, and there is currently no legal constraint on their storage, retention, sharing and destruction.", "score": 16.991053810847635, "rank": 72}, {"document_id": "doc-::chunk-0", "d_text": "Information Dissemination - Chris Rawley\nUnmanned surface vehicles are rapidly approaching practicality for naval uses. Although I’ve sung the praises of UAS for some time now, I’ve been a bit skeptical on the utility of their robotic surface cousins. I recently had an opportunity to check out the high and low ends of the USV spectrum in person. The Piranha is an “optionally manned” 16M carbon fiber diesel-powered beast that tops out at 45 knots and is purported to have a remarkable endurance of 40 days. The Piranha can be air dropped, operate up to sea state 6, employ a 2.5 ton payload of weapons, people, or sensors, and as a hybrid, it sips gas and can be operated very quietly on battery-power.\nAt the opposite end of the scale is the boogie board-like Wave Glider. Originally developed for oceanography purposes, the wave and solar-powered platform moves at a leisurely knot and a half and has already made an 82 day transit from the US West Coast to Hawaii.\nNeedless to say, both of these craft have utility for special operations and coastal counter-terrorism missions. They seem tailor made for long duration ISR patrols and the deployment of other unattended sensors where the presence of SOF (or larger manned vessels) is operationally too risky or politically untenable.\nI am less a fan of USVs for force protection missions. Although there is certainly appeal in an unmanned craft taking the place of a patrol boat crew in rough seas and harsh weather, I just don’t think the soda straw situational awareness USV’s provide is a good choice for operations which require split second decisions on assessing hostile intent and applying ROE in the close quarters of a harbor. 
Though in the long term, I foresee autonomous USV swarms attacking enemy ships with missile barrages or “suicide” bombing missions against high value platforms. Of course, as with UAVs, technology spreads rapidly, and it’s only a matter of time until we see USVs deployed by state and non-state enemies for smuggling, reconnaissance, and other nefarious operations.", "score": 15.758340881307905, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "The Air Force has faced significant operational challenges over the last decade to keep pace with the increased demands for intelligence, surveillance, and reconnaissance. The wars in Afghanistan, Syria, and Iraq have brought dramatic changes to the way these missions are conducted, including the retirement of some ISR assets, the rise and fall of others, and the emergence of the unmanned ISR mission as the wave of the future.\nThrough it all, the Air Force continues to return to the phrase “insatiable demand” to describe combatant commanders’ calls for keeping a better watch on the world. While USAF leaders still say they struggle to fulfill these requests, the shape of the Air Force has shifted decidedly toward ISR. For example, ISR assets as a portion of the total aircraft inventory have more than tripled over the past decade. In 2007, ISR aircraft made up 3.2 percent of the USAF total inventory. Today, ISR assets represent 9.9 percent of all Air Force aircraft.\nA large part of this shift has come with the rapid increase in remotely piloted aircraft. As of Sept. 30, 2016, the Air Force had 533 ISR aircraft in its total active inventory. Of that ISR fleet, 357 were RPAs: MQ-1B Predators (129), MQ-9A Reapers (195), and RQ-4B Global Hawks (33). The Air Force has more of each of these three platforms than any of its other ISR aircraft. If you combine all three versions of the E-3 Sentry, the Air Force has 31 of them. 
Next comes the U-2S Dragon Lady at 27.\nThese numbers are striking given that 10 years before, in September 2006, the Air Force’s 11 RQ-4s were its most prevalent unmanned asset, and the service had greater numbers of four different manned aircraft: the U-2 (34), E-3 (32), RC-135 (22), and EC-130 (16).\nToday, 67 percent of the Air Force ISR inventory is made up of unmanned aircraft. A decade ago, there were only 24 unmanned aircraft in the entire ISR inventory, and they constituted less than 18 percent of the ISR active fleet. In the intervening years, ISR underwent a revolution of sorts.", "score": 15.758340881307905, "rank": 74}, {"document_id": "doc-::chunk-3", "d_text": "UAVs are widely used because they are low cost and highly efficient, pose no risk of aircrew casualties, and offer strong survivability, good maneuverability and ease of use. They play an extremely important role in modern warfare and have broader prospects in the civilian field.\nReconnaissance aircraft are used for battlefield reconnaissance and surveillance, positioning and calibration, damage assessment, electronic warfare, etc.; they can also be used for civilian purposes, such as border patrol, nuclear radiation detection, aerial photography, aerial prospecting, disaster monitoring, traffic patrol, public security monitoring, etc. Target drones can be used as targets for artillery and missiles.", "score": 15.758340881307905, "rank": 75}, {"document_id": "doc-::chunk-1", "d_text": "Video shot from UAVs is shaped by the angle of the aircraft to the ground, and software is available to allow viewers to manipulate video to see objects from different vantage points.\nAccording to The Washington Post, there are also systems that allow UAVs to track moving targets under a forest canopy or other cover and determine their exact location. Systems are under development that would allow the identification of specific individuals from facial recognition and gait.\nAnalysts today have a wealth of video data to watch for suspicious behavior.
Future video systems will provide even more information analysts need to view. DARPA describes one use of the video indexing system as allowing an analyst looking for U-turning cars to check archived video of cars making U-turns before a previous attack. The full capabilities of the UAV surveillance system are likely much more than what the contract describes.\nquote: It's much better than they seem to show in movies or in games.", "score": 15.758340881307905, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "UAS are replacing manned platforms on maritime missions\nBy Arie Egozi on 5 February, 2013 in Uncategorised\nThe use of ‘exclusive economic zones’ (EEZ) has become fashionable in recent years. Countries have started to understand the many threats on their maritime natural treasures, and wanted to draw a line that will make it easier to protect them.\nIn order to protect something that belongs to a country, but is located a good distance from the shore, you need surveillance – persistent surveillance.\nMaritime surveillance requirements are demanding specific capabilities and performance, such as mission endurance and flight profiles.\nUntil recently such missions were performed exclusively by aircraft – some dedicated to the maritime surveillance mission, while others used off-the-shelf transport planes modified for the job.\nThese missions typically demand coverage of very wide areas, monitoring extensive maritime traffic, as well as deployment in unexpected conditions in response to emergencies or on search and rescue tasks.\nTherefore, the need for efficient development of a maritime situational picture is critical, enabling the deployment of the few available aerial assets to cover only those areas or targets of significance.\nThe introduction of unmanned air systems (UAS) is changing this paradigm, removing the limitations that have restricted manned missions while introducing new capabilities which significantly enhance operational flexibility and 
efficiency of maritime control.\nThis capability has become especially important in recent years, as countries are required to cover growing maritime areas claimed by EEZ.\nThese can be located up to 200 nautical miles from a country’s coastline or furthest-away island. In the case of India, for example, such an area covers a huge expanse of the Indian Ocean, stretching from Indonesia in the east to Somalia in the west.\nA country cannot cover such a vast space from its coastal radar stations, nor can it commit manned patrol flights to cover the entire area.\nAt Aero-India 2013, Israel’s Elbit Systems is introducing its newest and largest UAS, the Hermes-900, in a configuration adapted for maritime missions. This UAS can carry payloads of up to 350kg.\nIn the maritime configuration, the payload suite includes maritime surveillance radar, an electro-optical multi-sensor payload and electronic surveillance systems. It has the endurance to cover vast ocean areas, redundant line-of-sight and satellite communications links and radio relays – enabling the operator to ‘talk through’ to vessels at sea.", "score": 13.897358463981183, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "New And Evolving Platforms Enhance Sea Patrol\nBy Francis Tusa, Christina Mackenzie, David Eshel\nSource: Aviation Week & Space Technology\nNovember 25, 2013\nCredit: Piaggio Aero Concept\nMaritime surveillance is evolving despite the need for militaries to balance high-tech requirements with budget austerity. Developments underway in the U.K., France and Israel highlight efforts to realize the most return on investment, by adding versatile surveillance capabilities to airborne and sea-based platforms without the expense and potential delays of full-fledged program development.\nConcerns about maritime surveillance were expressed in the U.K. as far back as 2010.
The Strategic Defense and Security Review (SDSR) issued that year had one expected but serious program cancellation: the Nimrod MRA4 (maritime reconnaissance and attack) aircraft. Eliminating the 12 planned aircraft, which were to replace the Nimrod fleet of two-dozen aircraft, left the U.K. with a large gap in maritime surveillance capabilities.\nThe official line was that maritime surveillance and associated duties such as search and rescue would be conducted by a mix of escorts, embarked helicopters (Westland Sea King airborne surveillance and control, Merlin HM2 and Lynx HAS8), Boeing E-3 Sentry AWACS and Lockheed Martin C-130K/J Hercules aircraft—which have had a search-and-rescue mission in the Falkland Islands for some time. Few are convinced that this is a credible mix in the absence of a fixed-wing maritime patrol aircraft capability.", "score": 13.897358463981183, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "MarineLink has a short post about a European effort to use networked Unmanned Air and Surface Vehicles (UAVs and USVs) to do SAR. 
I don’t find their particular scenario persuasive, but there probably are roles for these systems.\nUnmanned systems have some potential advantages over manned assets, although they are unlikely to ever replace them.\n- It may be possible to have UAVs more widely distributed than manned CG Air assets.\n- UAVs operating from SAR stations might also be able to get into the air more quickly than manned aircraft because they do not have to contend with other air traffic that may be operating on the field.\n- At least for some applications they may be cheaper to operate.\nFrankly, I had thought of unmanned systems as primarily Law Enforcement assets, but the Coast Guard is looking at the possibility of locating personnel in the water using small UAVs.\nI have a hard time visualizing a use for Unmanned Surface Vessels (USV), but perhaps there might be a benefit in dropping a USV to a distressed vessel or person(s) in the water either from a fixed-wing aircraft or a UAV.\nUAVs might be used:\n- For communications relay.\n- To deliver medication or medical equipment.\n- Small UAVs might be used to confirm the location of vessels in distress before other units arrive.\n- To deliver pumps, communications equipment, or even inflatable life rafts.\nAny Other Ideas?\nAny other potential uses?", "score": 13.897358463981183, "rank": 79}, {"document_id": "doc-::chunk-0", "d_text": "The U.S. Department of Defense’s Defense Advanced Research Projects Agency (DARPA) has launched its Aerial Dragnet program, which seeks new technologies to provide persistent, wide-area surveillance of unmanned aircraft systems (UAS) operating below 1,000 feet in large cities.\nDARPA says although Aerial Dragnet’s focus is on protecting military troops operating in urban settings overseas, the system could ultimately find civilian application to help protect U.S.
metropolitan areas from UAS-enabled terrorist threats.\nAs off-the-shelf UAS become less expensive, easier to fly and more adaptable for terrorist or military purposes, says DARPA, U.S. forces will increasingly be challenged by the need to quickly detect and identify them – especially in urban areas, where sight lines are limited and many objects may be moving at similar speeds, according to the agency.\n“Commercial websites currently exist that display in real time the tracks of relatively high and fast aircraft – from small general aviation planes to large airliners – all overlaid on geographical maps as they fly around the country and the world,” says Jeff Krolik, DARPA program manager. “We want a similar capability for identifying and tracking slower, low-flying unmanned aerial systems, particularly in urban environments.”\nThe output of the Aerial Dragnet system would be a continually updated common operational picture of the airspace at altitudes below where current aircraft surveillance systems can monitor. The system would be disseminated electronically to authorized users via secure data links, says DARPA.\nBecause of the large market for inexpensive small UAS, the program will focus on combining low-cost sensor hardware with software-defined signal processing hosted on existing UAS platforms. The agency says resulting surveillance systems would thus be cost-effectively scalable for larger coverage areas and rapidly upgradable as new, more capable and economical versions of component technologies become available.\nThe Aerial Dragnet program seeks teams with expertise in sensors, signal processing and networked autonomy. A Broad Agency Announcement solicitation detailing the goals and technical details of the program was posted on FedBizOps and is available here.\nA “Proposers Day” is scheduled for Sept. 
26, 2016, in Arlington, Va.", "score": 13.897358463981183, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "(Published 10 Sep 2016, CLAWS)\n“…[t]he main advantage of using drones is precisely that they are unmanned. With the operators safely tucked in air-conditioned rooms far away, there’s no pilot at risk of being killed or maimed in a crash. No pilot to be taken captive by enemy forces. No pilot to cause a diplomatic crisis if shot down in a “friendly country” while bombing or spying without official permission” \nMedea Benjamin, 2013\nThe aim of this article is to look at some of the developments and the technological spinoffs that are likely to have a profound impact upon uninterrupted 24/7 gathering of real-time strategic intelligence, surveillance, and reconnaissance data.\nX-37 B. The X-37 B or the Orbital Test Vehicle mystery aircraft of the US Air Force has nearly completed one year in orbit and it is not known when it will land. The X-37 B program has been shrouded in mystery since its inception some time in 1999 as a NASA program. The X-37 B has a wingspan of 4.6 m, a length of 8.9 m, a height of 2.9 m, and a launch weight of 4990 kg. It is powered by GaAs solar cells and lithium-ion batteries after it is boosted into space. It can remain in orbit for periods of over one year. As per the US Air Force fact sheet, the mission of the X-37B Orbital Test Vehicle, or OTV, is “an experimental test program to demonstrate technologies for a reliable, reusable, unmanned space test platform for the U.S. Air Force. The primary objectives of the X-37B are twofold: reusable spacecraft technologies for America’s future in space and operating experiments which can be returned to, and examined, on Earth”.
It states further that OTV missions to date have spent a total of 1,367 days in orbit, “successfully checking out the X-37B’s reusable flight, re-entry and landing technologies.” As per the US Air Force fact sheet, some of the technologies being tested include advanced guidance, navigation and control, thermal protection systems, avionics, high-temperature structures and seals, conformal reusable insulation, lightweight electromechanical flight systems, advanced propulsion systems & autonomous orbital flight, reentry and landing.", "score": 12.618760293716473, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "The Defense Department reported that one in three aircraft in its arsenal is now unmanned.\nMiniature UAVs, such as “throwbots,” can literally be tossed into a room, right themselves, and even climb stairs while performing video surveillance or bomb detection. Some of these may be considered expendable for use in a single mission. Others, such as the Predator, are expected to perform for many years. Components used in these devices must meet the highest level of durability possible.\nA large arsenal of new unmanned, remotely controlled devices has been developed for specific tasks in recent years, including bomb detection and removal, surveillance, material transportation, and even rescue of injured soldiers.\nIn addition to the vehicle, a typical military UAV system consists of a ground control unit and satellite communications terminal. The UAV incorporates an extensive array of electronic systems that manage flight control, navigation, video reconnaissance, weapons management, target acquisition, and communications. All of these systems require absolute reliability in potentially harsh environments.\nAt the opposite end of the scale are “microbots” that measure a few inches in diameter and are designed to fly, crawl on the ground, or move on or under the sea to provide close-range and undetectable video and audio.
Recent demonstrations have proven the ability of position-aware microbot swarms to avoid collision and perform group tasks.\nAutonomous vehicles are another class of devices that have begun to enter the market. The US Army recently approved the Lockheed Martin Squad Mission Support System (SMSS), which is an autonomous truck with a range of 125 miles that is capable of transporting 1,000 pounds of equipment over rugged terrain. These driverless vehicles navigate to programmed destinations and have the ability to recognize established roads as well as navigate around obstacles.\nWork also continues on autonomous killer drones that carry lethal weapons, although there are serious ethical and legal issues that must be resolved before they can be deployed.\nRemotely piloted aerial and land vehicles have also caught the attention of civilian authorities. Small UAVs are cost-effective alternatives to deploying a full-size helicopter for surveillance by police. Not only is the platform much less expensive to purchase, but it can cost as little as $30 per hour to operate. Firemen are able to survey the condition of a building or forest fire to determine the most effective attack. Drones have been used to dust crops, create maps, and cover news events, as well as monitor transmission lines, flood conditions, livestock, and wildlife.", "score": 11.600539066098397, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "Unmanned underwater vehicles can gather intelligence, enhance combat prowess of navies\nAlong with the armies and air forces the world over, navies too have started adopting unmanned technologies for operations. With several countries having dedicated resources for the development of these machines in order to dominate surfaces as well as sub-surfaces of oceans, the technology of Unmanned Underwater Vehicles (UUV) has gained pace. 
Artificial Intelligence (AI) is being used to enhance the autonomy of UUVs and enable them to make decisions based on the data they collect.\nAccording to the latest data from The Insight Partners, the UUV market, growing at a 7.8 per cent CAGR, is projected to be worth USD 4.44 billion, up from an estimated USD 2.83 billion in 2022. The countries tied to this study include the US, Canada, Mexico, UK, Germany, Spain, Italy, France, India, China, Japan, South Korea, Australia, UAE, Saudi Arabia, South Africa, Brazil and Argentina.\nThe report suggests that geographically, the UUV market is segmented into North America, Europe, Asia Pacific, the Middle East & Africa, and South America. North America is the most prominent region in the market owing to a large number of manufacturers and suppliers in the region. Moreover, software companies are enhancing the capabilities of these vehicles by introducing advanced software. The report further suggests that countries in the Asia Pacific region such as China, Japan, South Korea and India are constantly spending substantial amounts in the development and procurement of advanced unmanned underwater robots.\nUUVs have gathered steam because they can perform several tasks underwater with precision and speed. In mine countermeasures, UUVs can be used to detect and neutralise underwater mines. They can be equipped with specialised sensors that can detect mines and explosives and can be programmed to neutralise them. For intelligence, surveillance and reconnaissance, UUVs can be used to gather intelligence and conduct surveillance in underwater environments. They can be equipped with cameras, sensors and other instruments that can gather information about underwater infrastructure, enemy submarines and other targets.\nSimilarly, in search and rescue, UUVs can be used to search for and locate missing persons or submerged vehicles.
They can be equipped with sonar and other sensors that can detect objects underwater and can be remotely operated to search for targets in difficult or dangerous environments.", "score": 11.600539066098397, "rank": 83}, {"document_id": "doc-::chunk-2", "d_text": "“The algorithm aspect — that’s kind of what we’re really going after.”\nToday, most autonomous systems operate on a rules-based or deterministic paradigm where machines are programmed to take certain actions in specific situations, Rucker explained.\nBy leveraging advances in artificial intelligence, the Navy hopes to reach the point where autonomous devices can shift to more open knowledge-based and probabilistic decision-making, and perform their own reasoning, he said.\nSnakehead UUV (Navy)\nAt the end of the day, unmanned platforms are simply hosts for other capabilities, officials emphasized. Mastroianni said he views them as trucks that haul gear around.\n“A UUV [by itself] does nothing for me,” he said. “It needs to have a mission, which means it needs to have some sort of payload, some sort of capability. It could be as simple as a camera [or] it could be some massively expensive, super-secret payload that solves world hunger.”\nHe continued: “It’s what goes inside of them that really makes the difference on whether it can support our needs or not. What kind of processing, what kind of sensors does it have on it? Are they lightweight enough in order to work in the environment that we need? 
And can I afford it?”\nTo prevent schedule delays and encourage technological maturity, the service is pursuing an incremental approach to capability development rather than try to “deliver a Cadillac right off the bat,” Rucker said.\nModularity is required to make that a viable strategy, he noted.\n“Whether it’s an unmanned surface vessel or unmanned undersea vessel, we are ensuring that we develop that modularity and have the interfaces, so as [enabling] technology is ready we can insert it into the production line — not break the production line — and ensure we stay on track to deliver that capability,” he said.\nModularity will also allow the Navy to make unmanned platforms multi-mission capable by adding or swapping in new payloads. That is especially true for larger vessels, which have greater size, weight and power parameters than smaller ones, and are therefore able to carry more devices. For example, the Orca XLUUV will initially be a single-mission platform but it is expected to take on additional missions going forward, Rucker explained.\nThe service is looking to give industry opportunities to showcase their technologies. ONR has developed multiple “innovative Naval prototype” UUVs that recently transitioned to Rucker’s office.", "score": 11.600539066098397, "rank": 84}, {"document_id": "doc-::chunk-1", "d_text": "Unmanned Aerial Vehicles, or UAVs ‘have an advantage of providing persistence,’ US Air Force Lt. Gen. Dave Deptula explains. This persistence means a drone is better able to spot targets and guide in attackers, shrinking what Deptula calls the ‘intelligence-surveillance-reconnaissance-strike equation’ to ‘a matter of single-digit minutes.’", "score": 11.600539066098397, "rank": 85}, {"document_id": "doc-::chunk-1", "d_text": "On the other hand, their ability to support sensors and weapons is severely limited, and the crews’ limited ability to deal with adverse weather has always been problematic.
Making them unmanned will at least help with that.\nThanks to Jim for suggesting the topic.\nIt now seems obvious that Unmanned Systems (air and possibly surface and subsurface) will play a part in the Coast Guard’s future, but the service has been, perhaps understandably, hesitant to commit to any particular system.\nBecause of the variety of proprietary systems, integrating the control systems into the organization of the controlling unit, particularly ships and aircraft, and then integrating the resulting information into a common operating picture has been problematic.\nEaglespeak reports that it looks like DOD, through the Office of Naval Research, is moving in the direction of a platform-agnostic software application that will permit common hardware to control different unmanned systems.\nThis might permit Coast Guard units which commonly control small unmanned aircraft (sUAS) to be quickly adapted to:\n- Control a much more capable UAS.\n- Hunt for mines using unmanned surface (USV) or subsurface (UUV) systems.\n- Control optionally manned surface craft to search for smugglers or enhance asset protection.\n- Control UUVs towing acoustic arrays, searching for submarines.\n- Direct a USV equipped with AIS, lights, and signals into position to serve as a temporary aid-to-navigation.", "score": 8.086131989696522, "rank": 86}, {"document_id": "doc-::chunk-2", "d_text": "VGI doesn’t have a problem with the how, it’s the who that will be the greatest challenge when the lessons of VGI are integrated into a UCAV. In a video-game, the VGI is blessed with instant recognition; its enemy is automatically identified when units are revealed, their typology is provided instantly to both human and VGI. A UCAV unable to differentiate between different radar contacts or identify units via its sensors is at a disadvantage to its human comrades or enemies. Humans still dominate the field of integrating immediate quality analysis with ISR within the skull’s OODA loop.
Even during the landing sequence, the UCAV cheated in a way by being fed certain amounts of positional data from the carrier.\nWe’ve passed the tutorial level of unmanned warfare; we’ve created the unmanned platforms capable of navigating the skies and a vast array of programs designed to drive tactical problems against human opponents. Before we pat ourselves on the back, we need to effectively integrate those capabilities into an independent platform.\nMatt Hipple is a surface warfare officer in the U.S. Navy. The opinions and views expressed in this post are his alone and are presented in his personal capacity. They do not necessarily represent the views of U.S. Department of Defense or the U.S. Navy.", "score": 8.086131989696522, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "“Unmanned Systems Integrated Roadmap: FY2013-2038,” U.S. Department of Defense, 2013.\nInventory of DoD UAS (page 5)\n(3PA: Like other Pentagon reports, the data contained in this chart should be treated with some skepticism. In May, the Pentagon provided its Annual Aviation Inventory and Funding Plan to Congress. That report listed the total Department of Defense aircraft fleet as being composed of 14,776 aircraft, but did appear to include some of the tactical surveillance drones listed above. This chart also presumably does not include the Central Intelligence Agency’s drone fleet, which according to Greg Miller consists of some 30-35 weapons systems.)\n2.4.1 Autonomy (page 15-16)\nThe potential for improving capability and reducing cost through the use of technology to decrease or eliminate specific human activities, otherwise known as automation, presents great promise for a variety of DoD improvements. However, it also raises challenging questions when applying automation to specific actions or functions. 
The question, “When will systems be fielded with capabilities that will enable them to operate without the man in the loop?” is often followed by questions that extend quickly beyond mere engineering challenges into legal, policy, or ethical issues. How will systems that autonomously perform tasks without direct human involvement be designed to ensure that they function within their intended parameters? More broadly, autonomous capabilities give rise to questions about what overarching guiding principles should be used to help discern where more oversight and direct human control should be retained.\nThe relevant question is, “Which activities or functions are appropriate for what level of automation?” DoD carefully considers how systems that automatically perform tasks with limited direct human involvement are designed to ensure they function within their intended parameters. Most of the current inventory of DoD unmanned aircraft land themselves with very limited human interaction while still operating under the control of a human and perform this function with greater accuracy, fewer accidents, and less training than a human-intensive process; as a result, both a capability improvement and reduced costs are realized. This specific automatic process still retains human oversight to cancel the action or initiate a go-around, but substantially reduces the direct human input to one of supervision.
Human-systems engineering is being rigorously applied to decompose, identify, and implement effective interfaces to support responsive command and control (C2) for safe and effective operations.", "score": 8.086131989696522, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "Given that the target was the Long Island Sound, this may seem like a small success, but it has been noted by some that the Aerial Torpedo was the first unmanned aerial vehicle (UAV) to be recovered and flown again.\nProblems with the catapult and other systems crashed the Speed Scouts, but a converted N-9 trainer was successfully launched on October 17, 1918, and flew as planned for eight miles. At that point, drone aviation experienced its first uncommanded “fly away,” when the trainer’s flight continued until it disappeared over the horizon. The Navy’s attention turned to an occasional interest in target drones and the Sperry-Hewitt program ended.\nTo read the article from the February 2018 issue of Flight Journal, click here.", "score": 8.086131989696522, "rank": 89}]} {"qid": 36, "question_text": "What evidence suggests that Mars once had more oxygen in its atmosphere than it does today?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "OXFORD, England, June 19 (UPI) -- A British scientist says new findings suggest Mars had an oxygen-rich atmosphere more than a billion years before Earth developed its own.\nAn examination of meteorites on Earth and rocks on Mars suggests oxygen was affecting the Martian surface 4 billion years ago, at least 1,500 million years before oxygen built up in appreciable quantities in Earth's atmosphere, Bernard Wood of Oxford University said.\nThe evidence comes from a comparison of Martian meteorites that have crashed onto Earth and data from rocks examined by NASA's Spirit rover at a very ancient part of Mars containing rocks more than 3,700 million years old, researchers said.\nDifferences in composition can best be explained by an abundance of
oxygen early in Martian history, they said.\n"The implication is that Mars had an oxygen-rich atmosphere at a time, about 4,000 million years ago, well before the rise of atmospheric oxygen on Earth around 2,500 million years ago," Wood told the Irish Times.\n"As oxidation is what gives Mars its distinctive color, it is likely that the 'red planet' was wet, warm and rusty billions of years before Earth's atmosphere became oxygen rich," he said.", "score": 52.294780565059, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "According to a study released in the journal Nature, the Martian atmosphere may have been rich in oxygen 3.7 billion years ago. Such a fact would bode well for the continuing argument that the planet supported life at one time.\nThe study, authored by Bernard Wood of Oxford University, theorizes that the atmosphere of Mars might once have had plenty of oxygen after Wood and his colleagues compared meteorites from the planet to data on surface rocks collected by the Spirit rover. The samples, which share volcanic beginnings, were found to be geochemically different: the surface rocks were more oxygen-rich than the meteorites.\nSpirit launched in 2004 with the Opportunity rover and operated until 2010. During that time it collected rock samples from Gusev Crater, a region of Mars estimated to be over 3.7 billion years old. Wood believes these ancient rocks may have come into contact with Mars’s oxygen atmosphere before experiencing subduction, when the rocks would have been recycled in the planet’s interior. The oxygen-rich rocks then reappeared via volcanic eruption some four billion years ago.\nThe meteorite samples, alternatively, are considered much “younger” than their surface counterparts.
At anywhere between 180 million and 1.4 billion years old, these rocks likely came from deeper inside the Red Planet and thus had less contact with any oxygen that was in the atmosphere.\nAlthough the theory is plausible, not everyone is in agreement that the samples are definitive proof that Mars obtained an atmosphere full of oxygen much earlier than Earth. Francis McCubbin of the University of New Mexico spoke with the BBC and stated that he did not reach Wood’s conclusions concerning the samples. He said that the oxidation of the surface could occur without the presence of oxygen gas.\nHowever, he did agree with Wood’s conclusions that there are “substantial redox gradients with depth” on the planet. The process of redox (also known as reduction-oxidation) could potentially support certain types of life that use the reactions for energy and/or as a food source.\nYet Wood believes he is on to something. “As oxidation is what gives Mars its distinctive color, it is likely that the ‘red planet’ was wet, warm and rusty billions of years before Earth’s atmosphere became oxygen-rich,” he said.\nIn terms of how Mars would have beaten Earth to an oxygen-heavy atmosphere, Wood explained that Mars’s lower gravity allowed for easier loss of hydrogen molecules during photolysis of existing water.
on Earth 2.5 billion years ago was probably mediated by life, Martian oxygen could have been produced through the chemical \"splitting\" of water.\nProf Wood and his colleagues from Oxford University looked at the chemical composition of Martian meteorites found on Earth and data from Nasa's Spirit rover, which examined surface rocks at Gusev Crater on Mars.\nBoth are igneous rocks (of volcanic origin), but they show major geochemical differences. For example, the Gusev Crater rocks are five times richer in nickel than the meteorites.\nThis had posed something of a puzzle, casting doubt on whether the meteorites were typical volcanic products of the Red Planet.\nYoung and old\n\"What we have shown is that both meteorites and surface volcanic rocks are consistent with similar origins in the deep interior of Mars, but that the surface rocks come from a more oxygen-rich environment, probably caused by recycling of oxygen-rich materials into the interior,\" Prof Wood explained.\n\"This result is surprising because while the meteorites are geologically 'young', around 180 million to 1.4 billion years old, the Spirit rover was analysing a very old part of Mars, more than 3.7 billion years old.\"\nWhilst the researchers conceded that large regional variations in the geological composition of Mars could not be ruled out, they argue in their paper that these differences arose via subduction - in which rocks are recycled in the planet's interior.\nDr Wood, James Tuff and Jon Wade from Oxford propose that the Martian surface became \"oxidised\" early in its history, and that these surface rocks were drawn into the shallow interior and recycled back to the surface during volcanic eruptions around four billion years ago.\nThe meteorites, by contrast, are much younger volcanic rocks that emerged from deeper within Mars and so were less influenced by this process.", "score": 46.90823996896907, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "A recent study by researchers 
at Oxford University has shown that the planet Mars may have had an oxygen-rich atmosphere 4 billion years ago.\nUsing meteorites identified as coming from Mars, as well as data from Mars rover Spirit, researchers found that rocks currently found on the surface of Mars have much more nickel than the meteorites. This suggests, researchers say, that the early Martian surface was full of oxygen. The findings have been published in the journal Nature.\n\"What we have shown is that both meteorites and surface volcanic rocks are consistent with similar origins in the deep interior of Mars but that the surface rocks come from a more oxygen-rich environment, probably caused by recycling of oxygen-rich materials into the interior,\" said Bernard Wood, a co-author of the study and a professor in Oxford University's Department of Earth Sciences. \"This result is surprising because while the meteorites are geologically 'young', around 180 million to 1,400 million years old, the Spirit rover was analysing a very old part of Mars, more than 3,700 million years old.\"\nWood and his colleagues believe that the different compositions of the Martian material may be due to subduction, the process by which surface material is recycled into a planet's interior. The hypothesis is that Mars was very oxidized early in its history, and that subduction has brought early surface material to where Spirit found it, while the meteorites are younger Martian material from deeper within the planet.\n\"The implication is that Mars had an oxygen-rich atmosphere at a time, about 4000 million years ago, well before the rise of atmospheric oxygen on earth around 2500 million years ago,\" said Wood. 
\"As oxidation is what gives Mars its distinctive colour it is likely that the 'red planet' was wet, warm, and rusty billions of years before Earth's atmosphere became oxygen rich.\"", "score": 46.02643248829808, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "New chemical science findings from NASA’s Mars rover Curiosity indicate that ancient Mars likely had a higher abundance of molecular oxygen in its atmosphere compared to the present day and was thus more hospitable to life forms, if they ever existed.\nThus the Red Planet was much more Earth-like and potentially habitable billions of years ago compared to the cold, barren place we see today.\nManganese-oxide minerals require abundant water and strongly oxidizing conditions to form.\n“Researchers found high levels of manganese oxides by using a laser-firing instrument on the rover. This hint of more oxygen in Mars’ early atmosphere adds to other Curiosity findings — such as evidence about ancient lakes — revealing how Earth-like our neighboring planet once was,” NASA reported.\nThe newly announced results stem from data obtained by the rover’s mast-mounted ChemCam (Chemistry and Camera) laser-firing instrument. ChemCam operates by firing laser pulses and then observing the spectrum of the resulting flashes of plasma to assess targets’ chemical makeup.\n“The only ways on Earth that we know how to make these manganese materials involve atmospheric oxygen or microbes,” said Nina Lanza, a planetary scientist at Los Alamos National Laboratory in New Mexico, in a statement.\n“Now we’re seeing manganese oxides on Mars, and we’re wondering how the heck these could have formed?”\nThe discovery is being published in a new paper in the American Geophysical Union’s Geophysical Research Letters. 
Lanza is the lead author.\nThe manganese oxides were found by ChemCam in mineral veins investigated at “Windjana” and are part of the geologic timeline being assembled from Curiosity’s research expedition across the floor of the Gale Crater landing site.\nScientists have been able to link the new finding of a higher oxygen level to a time when groundwater was present inside Gale Crater.\n“These high manganese materials can’t form without lots of liquid water and strongly oxidizing conditions,” says Lanza.\n“Here on Earth, we had lots of water but no widespread deposits of manganese oxides until after the oxygen levels in our atmosphere rose.”\nThe high-manganese materials were found in mineral-filled cracks in sandstones in the “Kimberley” region of the crater.\nHigh concentrations of manganese oxide minerals in Earth’s ancient past correspond to a major shift in our atmosphere’s composition from low to high oxygen atmospheric concentrations. Thus it’s reasonable to suggest the same thing happened on ancient Mars.\nAs part of the investigation, Curiosity also conducted a drill campaign at Windjana, her third of the mission.", "score": 45.44213301288152, "rank": 5}, {"document_id": "doc-::chunk-1", "d_text": "How much manganese oxide was detected, and what does it mean?\n“The Curiosity rover observed high-Mn abundances (>25 wt% MnO) in fracture-filling materials that crosscut sandstones in the Kimberley region of Gale crater, Mars,” according to the AGU paper.\n“On Earth, environments that concentrate Mn and deposit Mn minerals require water and highly oxidizing conditions, hence these findings suggest that similar processes occurred on Mars.”\n“Based on the strong association between Mn-oxide deposition and evolving atmospheric dioxygen levels on Earth, the presence of these Mn-phases on Mars suggests that there was more abundant molecular oxygen within the atmosphere and some groundwaters of ancient Mars than in the present day.”\nStay tuned here for Ken’s continuing 
Earth and planetary science and human spaceflight news.", "score": 44.99356044512473, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "During the same relative time period, other clues indicate more oxygen was present in the atmosphere than found currently.\nSpace news (planetary science: Martian rocks containing manganese oxide minerals; indicating a wetter surface with more atmospheric oxygen than presently found on Mars) – Mars (the Red Planet), 154 million miles (249 million kilometers) from Sol, or 141 million miles (228 million kilometers) from Earth, on average –\nNASA’s Curiosity Mars rover has found rocks at a place called Windjana containing manganese oxide minerals, according to reports from planetary scientists studying samples from the region. On Earth rocks of this type formed during the distant past in the presence of abundant water and atmospheric oxygen. This news, added to previous reports of ancient lakes and other groundwater sources during Mars’ past, points to a wetter environment in the study region Gale Crater during this time.\nPlanetary scientists used the laser-firing instrument on the Curiosity Mars rover to detect high levels of manganese-oxide in mineral veins found at Windjana. “The only ways on Earth that we know how to make these manganese materials involve atmospheric oxygen or microbes,” said Nina Lanza, a planetary scientist at Los Alamos National Laboratory in New Mexico. “Now we’re seeing manganese oxides on Mars, and we’re wondering how the heck these could have formed?”\nPlanetary scientists are looking at other processes that could create the manganese-oxide they found in rocks in Mars’ Gale Crater region. 
Possible culprits at this point include microbes, but even optimistic planetary scientists are finding little fanfare accompanying their ideas. Lanza said, “These high manganese materials can’t form without lots of liquid water and strongly oxidizing conditions. Here on Earth, we had lots of water but no widespread deposits of manganese oxides until after the oxygen levels in our atmosphere rose.”\nGeologists have found that high concentrations of manganese oxide minerals are an important marker of a major shift in Earth’s atmospheric composition, from relatively low oxygen levels during the distant past, to the oxygen-rich environment we live in today. Planetary scientists studying the rocks they found in Gale Crater suggest the presence of these materials indicates oxygen levels on Mars rose also, before declining to the present low levels detected. The question is: how was Mars’ oxygen-rich atmosphere formed?\n“One potential way that oxygen could have gotten into the Martian atmosphere is from the breakdown of water when Mars was losing its magnetic field,” said Lanza.", "score": 42.77851842519677, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "“It’s thought that at this time in Mars’ history, water was much more abundant. Yet without a protective magnetic field to shield the surface, ionizing radiation started splitting water molecules into hydrogen and oxygen. Because of Mars’ relatively low gravity, the planet wasn’t able to hold onto the very light hydrogen atoms, but the heavier oxygen atoms remained behind. Much of this oxygen went into rocks, leading to the rusty red dust that covers the surface today. While Mars’ famous red iron oxides require only a mildly oxidizing environment to form, manganese oxides require a strongly oxidizing environment, more so than previously known for Mars.”\nLanza added, “It’s hard to confirm whether this scenario for Martian atmospheric oxygen actually occurred. 
But it’s important to note that this idea represents a departure in our understanding for how planetary atmospheres might become oxygenated. Abundant atmospheric oxygen has been treated as a so-called biosignature or a sign of extant life, but this process does not require life.”\nThe Curiosity rover has been investigating Gale Crater for around four years, and recent evidence supports the possibility that conditions needed to form these deposits were present in other locations. The concentrations of manganese oxide discovered were found in mineral-filled cracks in sandstones in a region of the crater called “Kimberley”. NASA’s Opportunity rover has been exploring the surface of the planet since 2004 and recently reported similar high manganese deposits in a region thousands of miles away, supporting the idea that environments required to form similar deposits could be found well beyond Gale Crater.\nWhat’s next for Curiosity?\nNASA’s Curiosity rover is currently collecting drilled rock powder from the 14th drill site, called the Murray formation, on the lower part of Mount Sharp. Plans call for NASA’s mobile laboratory to head uphill towards new destinations as part of a two-year mission extension starting near the beginning of October.\nThe rover will go forward about a mile and a half (two and a half kilometers) to a ridge capped with material rich in the iron-oxide mineral hematite, first identified by observations made with NASA’s Mars Reconnaissance Orbiter. Just beyond this area, there’s also a region with clay-rich bedrock that planetary scientists want to examine more closely.", "score": 40.77471237107211, "rank": 8}, {"document_id": "doc-::chunk-1", "d_text": "Many scientists, though certainly not all [3, 4], have concluded that results of Viking biological experiments can be interpreted as an indication of the presence of strong oxidants in the Martian soil [5–7], rather than biological activity. 
They suggest that these oxidants are capable of oxidizing organic materials causing release of carbon dioxide as seen in the Viking experiments. Since oxygen was also released after humidifying a sample of Martian soil, these proposed oxidants would also oxidize water. The spontaneous reduction of Ferrate(VI) in water forms molecular oxygen and Fe(III) . The rate of this reaction is strongly pH dependent . Other observations by Viking have complicated data interpretation, including the transient absorption of generated carbon dioxide, perhaps indicating an alkaline soil. Also, different thermal sensitivity of carbon dioxide and oxygen released implied that more than one oxidant might have been involved in the observed processes or that biology might indeed have been responsible for the observed results .\nGoldfield, et al. and Tsapin, et al. suggested that the actual source of oxidative power of Martian soil could be highly redox active oxygen species formed in the atmosphere by ultraviolet (UV) irradiation, and Hunten examined this hypothesis much earlier. Further, these oxidative equivalents may then be stabilized in the form of high oxidation states of some elements that are particularly abundant in Martian soil. The most likely element is iron, which is thought to represent a major fraction of the Martian soil matrix [12, 13]. Iron has a set of oxidation states between +2 and +6, with the higher oxidation states being very strong oxidants. The reduction potential of FeO42- in acidic conditions has been determined to be E0 = +2.20 V and in alkaline conditions E0 = +0.72 V . Thus, these authors [5, 10] proposed that Fe(VI) as ferrate dianion FeO42-, which is more stable under alkaline rather than acidic conditions, might be an important component of the pool of Martian oxidants. 
They demonstrated that Fe(VI) displays the essential features of organic oxidation to carbon dioxide and water decomposition to oxygen that were found with Martian soil during the Viking biology experiments. Goldfield, et al.", "score": 40.600225243150305, "rank": 9}, {"document_id": "doc-::chunk-1", "d_text": "In his explanation of the most intriguing of the biology tests, the Viking labeled release (LR) experiment, Oyama built upon an idea first suggested by John Oro, of the molecular analysis team (responsible for the Viking gas chromatograph-mass spectrometer). Oro had suggested early on that the results from both the GEX and LR experiments were due to the presence of peroxidelike materials in the Martian soil. In Oyama's view, hydrogen peroxide formed photochemically in the atmosphere and reacted with a catalyst on the surface of the soil grains to form oxygen which then diffused into the grains, reacting with the alkaline earths and metals to form superoxides. Atmospheric water vapor converts the superoxides to peroxides, which combined with water in the LR nutrient to form hydrogen peroxide. This in turn oxidized the labeled components of the nutrients to release the observed labeled carbon dioxide. As for the observed release of oxygen, Oyama pointed to what happens when hydrogen peroxide, a commonly used disinfectant, is applied to a wound. Bubbles of oxygen form as a result of the catalytic action of iron (present in blood hemoglobin) on hydrogen peroxide. 
A similar process operates on Mars, but in this case, suggested Oyama, the catalyst is probably a form of iron oxide known as maghemite.", "score": 40.59551887192597, "rank": 10}, {"document_id": "doc-::chunk-1", "d_text": "This is the process wherein the water vapor of Mars would have broken down into its component elements through the interaction of the atmosphere with radiation from the Sun.\n“So the oxygen build-up could be enhanced on Mars relative to Earth,” he said.", "score": 37.39420339112458, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "Argon lost to interactions with solar wind hints at transformed climate\nThe Martian atmosphere definitely had more gas in the past.\nData from NASA’s MAVEN spacecraft indicate that the Red Planet has lost most of the gas that ever existed in its atmosphere. The results, published in the March 31 Science, are the first to quantify how much gas has been lost with time and offer clues to how Mars went from a warm, wet place to a cold, dry one.\nMars is constantly bombarded by charged particles streaming from the sun. Without a protective magnetic field to deflect this solar wind, the planet loses about 100 grams of its now thin atmosphere every second (SN: 12/12/15, p. 31). To determine how much atmosphere has been lost during the planet’s lifetime, MAVEN principal investigator Bruce Jakosky of the University of Colorado Boulder and colleagues measured and", "score": 35.44384066715472, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "NASA – An instrument onboard the Stratospheric Observatory for Infrared Astronomy (SOFIA) detected atomic oxygen in the atmosphere of Mars for the first time since the last observation 40 years ago. 
These atoms were found in the upper layers of the Martian atmosphere known as the mesosphere.\nAtomic oxygen affects how other gases escape Mars and therefore has a significant impact on the planet’s atmosphere. Scientists detected only about half the amount of oxygen expected, which may be due to variations in the Martian atmosphere. Scientists will continue to use SOFIA to study these variations to help better understand the atmosphere of the Red Planet.", "score": 34.93485990190574, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "- NASA revealed Thursday that solar wind has stripped away Mars' atmosphere\n- The findings may give clues about what we can expect for Earth\n(CNN) We now know more about what happened to Mars' climate.\nNASA announced on Thursday several major scientific findings by its MAVEN spacecraft that reveal significant details on the fate of the Martian atmosphere.\nScientists have known that billions of years ago, Mars was a wet, warm planet with a thick atmosphere that protected it. The Martian landscape once had water flowing through its long rivers that spilled out into lakes and oceans.\nThat world is a stark contrast to the dry and bitter cold planet we know it as today. The question researchers have pondered for ages is: What happened to cause such a major transformation?\n\"Quoting Bob Dylan: 'The answer, my friend, is blowing in the wind,' \" said Michael Meyer, lead scientist for the Mars Exploration Program at NASA Headquarters, during the announcement.\nNew measurements from MAVEN, the Mars Atmosphere and Volatile Evolution mission, show solar wind has stripped ions from the Martian atmosphere. 
Solar wind -- charged particles from the Sun -- has removed gases like oxygen and carbon dioxide from the planet, important elements for understanding the potential for life, according to NASA.\nThe findings could mean there was big atmospheric loss early in the planet's history.\nMars' atmospheric fate could theoretically happen on Earth, which is also losing ions, but NASA said during the conference that our planet is fine for now because of its magnetic field.\nMAVEN has also discovered auroras on Mars that are similar to Earth's northern lights. On our planet, auroras form when charged particles from the solar wind enter Earth's magnetic field and travel to the poles, where the particles collide with atoms of gas in the atmosphere.\nBut the auroras on Mars may be caused by what is left of the magnetic field in the planet's crust, which means these northern lights are spread out across a bigger area.\nAnother major finding shows that Mars' notorious dust problem is believed to be interplanetary in origin, meaning the dust comes from the space between planets rather than from Mars itself. Scientists came to this conclusion based on the grains and distribution of dust on Mars' surface, which ruled out Martian moons Phobos and Deimos as the culprits.\nMAVEN has been on a mission to study Mars' upper atmosphere since its arrival in the planet's orbit in September 2014. Tasked with finding how Mars' climate changed in the last 4 billion years, it's possible we're closer to understanding future habitability on the planet.", "score": 34.71954880263424, "rank": 14}, {"document_id": "doc-::chunk-1", "d_text": "The researchers made a detailed analysis of the compounds involved in the clays, comparing them to those found in thousands of lakebeds on Earth. 
The time it takes for various chemicals to mix in with the clays is an important factor as well, as it affects how much of them we’d observe now.\nThe researchers conclude that the Martian atmosphere at the time the clays were formed had a lot more CO2 than the lower limits of previous estimates. But the numbers still fall far short of what's needed to keep the temperature above freezing.\nThis doesn't definitively rule out the greenhouse explanation; certain environmental processes the researchers are not considering could have changed the composition of the clays. Where does that leave things? It's largely the same predicament as before. This study adds more evidence that Mars didn’t have enough CO2 in its atmosphere to have kept water liquid from 3.8 to 3.1 billion years ago. So either warming was driven by some other mechanism, or somehow the water was able to flow despite temperatures that were typically below freezing. Either conclusion would be interesting.\nThe data also provides more information about a critical period in the history of Mars—around the time these clays were formed, Mars was rapidly losing its atmosphere. This site is among the youngest of the now-dry sites of surface water yet discovered, meaning these clays were probably formed during the tail end of this process. Those were essentially the last days of the wet Mars.", "score": 33.90166621487798, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "Mars has lost at least half its atmosphere since the planet’s inception, Curiosity confirms. Mars’ atmosphere is 100 times thinner than Earth’s. Other than shielding life from harmful UV radiation, the atmosphere also controls fluctuations in climate. Because Mars’ atmosphere contains more of the heavier varieties of carbon dioxide than lighter ones, the ratio suggests the planet has sadly lost much of its atmosphere. 
Mars’ thin atmosphere has nearly untraceable amounts of methane, only a few parts of methane per billion parts of Martian atmosphere. Microbes like bacteria emit methane. In fact, 95% of methane on Earth is produced by biological processes. Though Curiosity failed to find traces of methane in Gale Crater, Mars may yet host methane elsewhere.\nCuriosity used its SAM (Sample Analysis at Mars) instruments and TLS (Tunable Laser Spectrometer). In the near future, SAM will analyze its first solid sample to search for organic compounds in rocks.\nIn addition, air samples from Curiosity match ones from trapped air bubbles in meteorites found on Earth. Ergo, those meteorites definitely originated from Mars. About 1 billion years ago, a large asteroid collided with Mars and split into fragments.\nThe latest of Curiosity’s analyses show that the Martian minerals are similar to “weathered basaltic soils of volcanic origin in Hawaii.” Curiosity’s CheMin (Chemistry and Mineralogy) instrument refines and identifies minerals in X-ray diffraction analysis on Mars. X-ray diffraction records how the crystals in minerals’ internal structures react with X-rays. Identifying minerals in rocks and soil is crucial in assessing past environmental conditions. Each mineral has evidence of its unique formation. These minerals have similar chemical compositions but different structures and properties. The samples taken at “Rocknest” were consistent with scientists’ initial ideas of the deposits in Gale Crater. Ancient rocks suggest flowing water, while minerals in younger soil suggest limited interaction with water.\n“NASA Rover’s First Soil Studies Help Fingerprint Martian Minerals: First X-ray View of Martian Soil.” JPL Caltech. JPL, 30 Oct 2012. Web. 
5 Nov 2012.\nCuriosity first discovered “Jake Matijevic,” the pyramid rock on Mars, on September 19, 2012, but on October 11, 2012, NASA released a report on the chemical composition of this unusual rock.", "score": 32.998854415485475, "rank": 16}, {"document_id": "doc-::chunk-17", "d_text": "Xenon fractionation data support this possibility. Thus, an early impact-generated atmosphere that accumulated during accretion was likely rapidly lost to space (Scherf & Lammer, 2021).\nA secondary atmosphere then developed from outgassing of the planet’s interior. As the planet cooled, convection within the core generated a dynamo and magnetic field that would have offered some protection to the growing atmosphere from solar wind stripping, but not EUV. The composition of this secondary atmosphere would have depended on the mantle redox state. As is the case for Mars, rapid core formation would quickly separate iron from volatiles favoring a more oxidized mantle like that of Earth where outgassed volatiles would have been dominated by H2O, CO2, and SO2. However, some of the SNC meteorites indicate a more reduced mantle than that of Earth, in which case H2, CO, CH4, and H2S would have been favored. Complicating matters further is the fact that the early outgassing history is not known and EUV-driven thermal escape (Tian et al., 2009) could have significantly depleted any pre-Noachian atmosphere. Thus, the mass, composition, and fate of a secondary pre-Noachian atmosphere between 4.1 and 4.5 Gy are highly uncertain, and this atmosphere may even have been largely absent (see Scherf & Lammer, 2021).\nBy the beginning of the Noachian (4.1 Gy), however, rapid thermal escape would no longer be operative because of the decline in the EUV flux. However, impact erosion during a possible Late Heavy Bombardment period could be a factor (Melosh & Vickery, 1989). And the loss of the magnetic field would have exposed the atmosphere to the solar wind. 
Nevertheless, the buildup of a thick atmosphere would have been possible if outgassing rates exceeded loss rates. Because the Tharsis volcanic province is believed to have been constructed during the Noachian, enhanced outgassing rates were likely at this time. Phillips et al. (2001) estimate that Tharsis could have outgassed 1.5 bars of CO2. Thus, it is plausible that a thick atmosphere developed during the Noachian.", "score": 32.83218922569045, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "NASA’s Curiosity rover continues to make discoveries that challenge our understanding of the Martian environment. The latest puzzle confusing scientists is the variation of oxygen levels on the planet’s surface, as detected by Curiosity’s portable chemistry lab, Sample Analysis at Mars (SAM).\nWhile SAM was moving around Gale Crater, Curiosity discovered the Martian atmosphere has a composition at the surface of 95% by volume of carbon dioxide (CO2), 2.6% molecular nitrogen (N2), 1.9% argon (Ar), 0.16% molecular oxygen (O2), and 0.06% carbon monoxide (CO). The nitrogen and argon levels keep a predictable seasonal pattern, changing relative to the amount of carbon dioxide. The levels of oxygen, though, didn’t conform to expected patterns, rising by as much as 30% over spring and summer.\nThe varying oxygen levels have scientists asking questions. “The first time we saw that, it was just mind-boggling,” Sushil Atreya, professor of climate and space sciences at the University of Michigan, said in a statement.\nThe scientists tested various hypotheses to explain the oxygen variation. They checked whether the SAM instrument was functioning correctly and looked at whether carbon dioxide molecules could be breaking apart in the atmosphere to create oxygen. Neither approach yielded results.\n“We’re struggling to explain this,” Melissa Trainer, a planetary scientist at NASA’s Goddard Space Flight Center and leader of the research, said in the statement. 
“The fact that the oxygen behavior isn’t perfectly repeatable every season makes us think that it’s not an issue that has to do with atmospheric dynamics. It has to be some chemical source and sink that we can’t yet account for.”\nOne factor for consideration is that the oxygen levels are related to another Martian puzzle: the fluctuating levels of methane on the planet. There are also expected seasonal variations in methane levels, and Curiosity has detected spikes of methane of up to 60% at some times. Scientists still can’t explain this finding either, but they may have found a link between methane and oxygen levels: It seems like the two gases fluctuate together at certain times.\n“We’re beginning to see this tantalizing correlation between methane and oxygen for a good part of the Mars year,” Atreya said. “I think there’s something to it. I just don’t have the answers yet. Nobody does.”", "score": 32.707182374117444, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "astroengine writes: At one time, Mars had a thick, protective atmosphere — possibly even cushier than Earth’s — but the bubble of gases mostly dissipated about 4 billion years ago and has never been replenished, new research shows. The findings come from NASA’s Mars rover Curiosity, which has been moonlighting as an atmospheric probe as it scours the planet’s surface for habitats that could have supported ancient microbial life. “On Earth, our magnetic field protects us, it shields us from the solar wind particles. Without Earth’s magnetic field, we would have no atmosphere and there would be no life on this planet. Everything would be wiped out — especially when you go back 4 billion years. The solar wind was at least 100 times stronger then than it is today. It was a young sun with a very intense radiation,” Chris Webster, manager of the Planetary Sciences Instruments Office at NASA’s Jet Propulsion Laboratory in Pasadena, Calif., told Discovery News. 
Unfortunately for Mars, the last 4 billion years have not been kind.", "score": 32.287095610572685, "rank": 19}, {"document_id": "doc-::chunk-1", "d_text": "In turn, combination of chlorine with metal ions requires virtually no hydrogen ions and therefore vanishingly little water in the moon, otherwise chlorine would have been combined in HCl and not subject to any fractionation when that volatilised on eruption. So that seems settled, then…\nCarbonates on Mars\nAncient valley systems, huge water-carved gorges and sedimentary deposits signify with little room for doubt that early in its history Mars was wet; it must therefore have been warm. A thick CO2-rich atmosphere seems obligatory to give the kind of greenhouse warming that prevented Earth from freezing over when the young Sun was weaker than now. The question is, where did the CO2 go so that the planet became chilled? Gravity on Mars is sufficient to have retained the gas, unlike water vapour that dissociates to hydrogen and oxygen, of which hydrogen easily escapes even a much stronger gravitational field. A consensus is developing that it resides in carbonate minerals. The other likely greenhouse gas is sulfur dioxide, for whose drawdown there is ample evidence in the form of sulfates detected from orbit and by surface rovers. Carbonates have a relatively simple, and unique spectrum of reflected solar radiation, with an absorption feature at a wavelength around 2.3 micrometres. Carbonates have been detected on Mars using orbital hyperspectral imaging, but only in patches. The NASA rovers rely on serendipity for any discovery, yet Spirit did stumble on a large carbonate-rich outcrop identified by its on-board Mössbauer spectrometer (Morris, R.V. and 12 others 2010. 
Identification of carbonate-rich outcrops on Mars by the Spirit rover. Science, v. 329, p. 421-424). It appears to be a Fe-Mg variety in association with olivines, and carbonate makes up to 34% of parts of the outcrop. The texture is granular, yet the area abounds with evidence for hydrothermal activity in the form of sulfates and silica-rich materials, implying that some kind of circulation system deposited the carbonates. The associated olivine is odd, as that mineral is prone to rapid breakdown to serpentines in the presence of water.", "score": 31.914055107265362, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "But how did such a small, cold world ever have lakes and rivers of liquid water? The reigning theory was that the thin atmosphere once contained so much carbon dioxide that the greenhouse effect made the planet warm enough. To test this theory, NASA sent the Curiosity rover to the bed of an ancient lake (above) to sample the sedimentary rocks within it. Rocks that formed in an atmosphere dense with CO2 ought to show evidence of that, in the form of carbonate minerals.\nThe same Martian bedrock in which Curiosity found sediments from an ancient lake where microbes could have thrived is the source of the evidence adding to the quandary about how such a lake could have existed. Curiosity detected no carbonate minerals in the samples of the bedrock it analyzed. The new analysis concludes that the dearth of carbonates in that bedrock means Mars' atmosphere when the lake existed -- about 3.5 billion years ago -- could not have held much carbon dioxide. So it's back to the drawing board in trying to model the ancient atmosphere of Mars.\n\"We've been particularly struck with the absence of carbonate minerals in sedimentary rock the rover has examined,\" said Thomas Bristow of NASA's Ames Research Center, Moffett Field, California. 
\"It would be really hard to get liquid water even if there were a hundred times more carbon dioxide in the atmosphere than what the mineral evidence in the rock tells us.\"", "score": 31.877828152448767, "rank": 21}, {"document_id": "doc-::chunk-18", "d_text": "How thick is uncertain, but estimates based on the observed crater size distribution (Kite et al., 2014), isotopic modeling (Kurokawa et al., 2018), the 40Ar/36Ar ratio in ALH84001 (Cassata et al., 2012), lava flow volumes (Craddock & Greeley, 2009; Phillips et al., 2001), atmospheric escape (Hu et al., 2015), and magma composition (Lammer et al., 2013) suggest that the Noachian atmosphere was likely composed primarily of CO2, having surface pressures between ~0.5 and 2 bars. It is worth noting that above ~2 bars, CO2 begins to condense in a pure CO2 atmosphere (e.g., Forget et al., 2013; Kasting, 1991), which limits its abundance in the atmosphere; above ~3 bars, permanent CO2 ice caps form and surface pressures are buffered by their heat balance.\nThe main challenge for this period of Martian history, however, is finding an explanation for the comparatively high erosion rates and fluvio-lacustrine features. Rainfall and runoff in a warm early climate are an obvious explanation, but the circumstances producing such conditions are in dispute. The main problem is overcoming the faint young Sun, which was ~30% less luminous during the Noachian. An atmosphere producing a strong greenhouse effect can overcome the faint young Sun problem, but it must have enough greenhouse power to raise the surface temperature 77 K above the effective temperature to reach the melting point of water (Haberle, 1998). Considering that Earth’s atmosphere, whose greenhouse is powered by CO2 and water, provides 33 K of greenhouse warming, reaching the 77 K mark is clearly a challenge. 
Complicating matters further, Mars climate models cannot produce such a strong greenhouse effect for atmospheres consisting of CO2 and water alone (e.g., Forget et al., 2013; Wordsworth et al., 2013).\nOne way to overcome this problem is with supplemental greenhouse gases. Gases such as SO2, NH3, and CH4 are good greenhouse gases, but raising and sustaining their concentrations to the required amounts is problematic given plausible estimates of their sources and sinks.", "score": 31.469675629443984, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "The fate of water on Mars has been hotly debated by scientists because the planet is currently dry and cold, in contrast to the widespread fluvial features that carve much of the planet’s surface. Scientists believe that if water did once flow on the surface of Mars, the planet’s bedrock should be full of carbonates and clays. Such minerals could provide further evidence that Mars once hosted habitable environments with liquid water. Researchers have struggled to find physical evidence for carbonate-rich bedrock, which may have formed when paleoatmospheric carbon dioxide was trapped in ancient surface waters.\nA new study by Wray et al. provides evidence for widespread buried deposits of Fe- and Ca-rich carbonates on Mars. The researchers, supported by the SETI Institute NASA Astrobiology Institute (NAI) team, identified carbonates on the planet using data from CRISM and HiRISE on the Mars Reconnaissance Orbiter.\n“Outcrops in the 450-km wide Huygens basin contain both iron- and calcium-rich carbonate-bearing rocks” according to study lead Dr. James Wray of Georgia Tech and NAI team Co-Investigator. The Huygens basin is an ideal site to investigate carbonates because multiple impact craters and troughs expose ancient, subsurface materials where carbonates are detected across a broad region. 
The study highlights evidence of carbonate-bearing rocks in multiple sites across the Red Planet, including Lucaya crater, where the ancient 3.8-billion-year-old carbonates and clays were buried by as much as 5 km of lava and caprock. Study co-author Dr. Janice Bishop of the SETI Institute and also a NAI Co-Investigator says that “identification of these ancient carbonates and clays on Mars represents a window into the past when the climate on Mars was very different from the cold and dry desert of today”. The extent of the global distribution of Martian carbonates is not yet resolved and the early climate on Mars remains under debate. However, this study moves us forward in our understanding of potential Martian habitability.\nAeolian bed forms overlie ancient layered, ridged carbonate-rich outcrop exposed in the central pit of Lucaya crater, northwest Huygens basin, Mars. The image was taken by the High Resolution Imaging Science Experiment (HiRISE) instrument aboard the Mars Reconnaissance Orbiter. Credit: NASA/JPL/University of Arizona.", "score": 31.30871439219418, "rank": 23}, {"document_id": "doc-::chunk-1", "d_text": "Moreover, Baker believes that there may be cold interludes within those brief warmer periods, and that Mars today may be in such a brief cold period.\n"We don't know the answer to that yet -- that's very speculative," he said. "But if it's true, it would have major implications for sending people to Mars, because it may mean that water is more available than otherwise thought."\nThe warm periods are triggered, Baker believes, by a period of massive volcanic activity. The heat from that activity melts ice trapped below the surface, possibly enough to form a temporary ocean in the planet's northern region.
A greenhouse effect created by carbon dioxide released into the atmosphere through the volcanic activity warms the atmosphere and allows water to remain in liquid form at the surface.\nHowever, the warm, wet atmosphere eventually generates precipitation, which washes carbon dioxide out of the atmosphere. This cools the atmosphere again, freezing the water and eventually returning the planet to the cold, dry conditions that existed before the volcanic activity began.\nWhile such conditions may seem harsh, Baker notes that it may still be hospitable to life. "This is the type of environment in which the extremophile progenitors of Earth's biosphere probably evolved," he said. "Indeed, early Mars provided an arguably better habitat for the inception and incubation of early life than did early Earth."", "score": 31.099488345842218, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "Mars was a warm, wet planet that was likely capable of supporting life billions of years ago. Something caused the planet to lose its atmosphere and turn into the harsh, frozen desert it is today.\nThe Curiosity rover, which landed on Mars in 2012, has been exploring different aspects of Gale Crater on Mars to understand more about this transition from warm and wet to dry and very cold.\nThe latest study, gathered from data captured by one of the rover's instruments, suggests that Mars actually transitioned back and forth between wetter and drier times before losing its surface water completely around three billion years ago.\nCuriosity has been steadily climbing the 3-mile-high Mount Sharp, located at the center of Gale Crater, since 2014.\nAn instrument called a ChemCam sits on the rover's mast and includes a high-resolution camera and laser that can vaporize rocks to help the rover analyze their chemical composition. ChemCam has an infrared laser that can heat rock pieces to 18,000 degrees Fahrenheit.
This vaporizes the rock and creates plasma, allowing scientists to essentially look inside the minerals and chemicals comprising the rock and peer back into the planet's geologic history.\nThe camera on ChemCam was used for capturing observations of Mount Sharp's terrain, which reveals slices of the Martian past as the rock varies.\nA Mars history lesson\nMount Sharp is an intriguing feature on Mars because it's one of the best ways the red planet recorded the history of its climate, water and sediment.\n"A primary goal of the Curiosity mission was to study the transition between the habitable environment of the past, to the dry and cold climate that Mars has now. These rock layers recorded that change in great detail," said Roger Wiens, study co-author on the paper and ChemCam team scientist at Los Alamos National Laboratory, in a statement.\nThe study was published last week in the journal Geology.\nOrbiters around Mars have previously recorded information about the minerals within Mount Sharp's slopes. Curiosity's data has provided even more detailed observations from the layers of sedimentary rocks and revealed dry and wet periods across the planet's past.\nCuriosity detects big changes in layers\nAs Curiosity has ascended Mount Sharp, the layers have changed dramatically.\nThe base of Mount Sharp is made of clay deposited by the lake that once filled the crater. Above that are layers of sandstone that still preserve evidence of how they were formed by wind-shaped dunes during drier times.", "score": 31.04729571757008, "rank": 25}, {"document_id": "doc-::chunk-12", "d_text": "Most encouraging were a discovery by the orbiter science team that the permanent polar caps are made of water, and a finding, made during lander entry, that nitrogen, considered essential to life on Earth, is present in the atmosphere of Mars.
Analysis of argon isotope ratios in the atmosphere indicated that Mars, at some time in the past, had a warmer climate and thicker atmosphere than at present. On the negative side, water is present but scarce, and the Mars of the past, according to admittedly tentative attempts by planetologists to reconstruct its history, may have been only marginally more hospitable than the Mars of today. Nitrous oxide – laughing gas – was found in the Martian atmosphere, and Earth’s nitrous oxide is produced by living creatures, but there are other ways to produce nitrous oxide, and its presence is not considered evidence of life on Mars. The modern climate is harsh. During the six months of summer (the Martian year, and therefore its seasons, are twice as long as Earth’s) temperatures seldom exceed 20° Fahrenheit below zero, and at night typically drop to 150° below zero. Atmospheric pressure at the lander sites is only seven or eight millibars, or about that of Earth at an altitude of some 20 miles. If there is life on Mars, it is a tough life.\nEarth organisms can be found living happily in boiling sulfur springs and at the floor of the ocean under enormous pressures. Spores blown into Antarctic valleys – the regions Wolf Vishniac was exploring when he died – hibernate successfully for centuries in temperatures that never rise as high as the freezing point of water. So life is tenacious, and probably could survive on Mars. The question is, did life get started there?\nViking was conceived and presented to the public as an attempt to answer that question, but little was done to convey to the public the audacity of the assignment. Science at its present stage of development does not know how life began on Earth, let alone on Mars: only a few of the steps by which biology may have arisen from primordial seas have been reconstructed. 
Just over a century has passed since Darwin discovered the nature of evolution, less than a generation since Watson and Crick decoded the molecular basis of life, and even so straightforward a matter as satisfactorily defining life remains for the future.", "score": 30.815553481908044, "rank": 26}, {"document_id": "doc-::chunk-1", "d_text": "In order for the lake to have been sustained for millions of years, the planet must have had a vigorous hydrological cycle, probably involving rains or snows, to keep the atmosphere humid enough. This challenges the widely held notion that Mars' early climate was only wet during short periods after volcanic activity or space rock impacts. Furthermore, the findings could even suggest that the planet once featured a surface ocean, which would have prevented the lake from evaporating.\nScientists also thought that, alongside water, Gale could have had the “right ingredients and environment to have been able to support microbial life,” said Michael Meyer, lead scientist of the Mars Exploration Program. But they didn't know whether the conditions lingered long enough for life to form; now, tantalizingly, it seems that water may have persisted long enough for this to be a possibility.", "score": 30.800418123686516, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "Scientists have learned a lot about Mars in recent years, and based on observations from the fancy hardware NASA has sent to the Red Planet we know that it once held far more water than we see today. But knowing there was lots of water (or at least abundant ice) on Mars long ago doesn't necessarily tell us what the climate was like.\nWithout a time machine, we can't know exactly what ancient Mars looked like, but researchers have come up with a pretty solid guess. Using data from NASA's CRISM spectrometer and the Curiosity rover, scientists have a good idea of what kinds of minerals are present in the Martian soil.
Using various areas of Earth as analogs, they can observe the conditions that caused similar mineral deposition patterns on our planet and assume that similar climates were responsible for their formation on Mars as well.\nAt the Goldschmidt Geochemistry Conference in Barcelona this week, Purdue University professor Briony Horgan announced the findings of a new research effort that draws comparisons between the climate of present-day Earth with that of ancient Mars.\n“Our study of weathering in radically different climate conditions such as the Oregon Cascades, Hawaii, Iceland, and other places on Earth, can show us how climate affects pattern of mineral deposition, like we see on Mars,” Horgan said. “This leads us to believe that on Mars 3 to 4 billion years ago, we had a general slow trend from warm to cold, with periods of thawing and freezing.”\nThe study dives deep into the nuances of various mineral deposits such as silica, which the scientists believe hints at melting ice. This suggests that the planet had some ups and downs with regard to temperature, with warm periods characterized by occasional rains and then colder periods where everything was frozen.\nMars today is frigid compared to Earth, and that’s largely owed to the fact that the planet’s atmosphere has been almost entirely stripped away. Billions of years ago, the planet is thought to have had a much more robust atmosphere which would have aided in it retaining heat. A temperate climate with rain and flowing water certainly sounds like a recipe for life as we know it, but we may have to wait and see what the Mars 2020 rover has to say about that once it arrives in early 2021.", "score": 30.454298279869164, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Martian Greenhouse: Volcanic CO2 Doesn’t Cut It\nWith all the evidence for water on the surface of Mars in the distant past, we always return to the same question: how was it possible for water to be stable back then? 
These days any liquid water on the surface would boil due to the low pressure or freeze due to the low temperature (or maybe do both at the same time!).\nTo explain liquid water in the past, there has been a lot of work done to see whether a thicker atmosphere of CO2 could provide high enough pressure and temperature to allow stable water. Today Marc Hirschmann gave an interesting talk about whether volcanoes could give off enough gas to do the job.\nIt turns out that since the sun was dimmer in the past, and since Mars is farther from the sun than the Earth, the early CO2 atmosphere of Mars would have to be several times as thick as Earth's current atmosphere to keep liquid water stable at the surface. Hirschmann estimated how much volcanism there was on Mars based on the heat flow from the planet, and then calculated the amount of CO2 released based on the amount of carbon in the mantle. Hirschmann showed that on Mars, with the amount of oxygen available to react with rocks (the so-called "oxygen fugacity"), the carbon in many lavas would likely be in the form of graphite, or even diamonds deep in the mantle where the pressure is higher.\nWith reasonable estimates of the amount of volcanism and the amount of carbon in the lavas, Hirschmann estimated that even in the "best case" scenario, volcanism can only provide an atmosphere with 0.1 bars of CO2: only a tenth of Earth's atmospheric pressure and not nearly enough to make Mars warm. This led Hirschmann to look at the possibility of CO2 from the formation and stabilization of the crust. Depending on the assumptions you make, this process can give either 10s of bars of CO2, or a negligible amount.\nHirschmann concluded that if crust formation was the source of a thick atmosphere, it would require the most extreme limits of likely conditions. Alternatively, a CO2 atmosphere could have somehow persisted from the very earliest period of Martian evolution, when the planet had a "magma ocean".
Alternatively, there may have been a greenhouse effect caused by something other than CO2, such as SO2 or methane.", "score": 30.447483125290542, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Is hydrogen the secret to water on Mars?\nAn unlikely greenhouse gas could have absorbed enough heat to keep liquid water stable on the surface of Mars about 4 billion years ago, argue scientists in a new article.\nMars has valleys deeper than the Grand Canyon, braided river channels, and a level seashore reaching around the northern third of the planet. These signs point to water – a lot of water – in Mars's early history. But it's too cold for liquid water now, so how could Mars have had rivers and oceans 3.8 billion years ago, back when the sun was colder and dimmer?\nJust add hydrogen, suggests a team of scientists from Penn State and NASA. "A CO2-H2 greenhouse could have done the trick," writes Ramses Ramirez, lead author of a new paper appearing in the Nov. 24 Nature Geoscience.\n"You just need a little nudge," explains Jim Kasting, a Penn State professor and fellow author on the paper. "On early Mars, you can almost make it warm enough with carbon dioxide and water vapor, but not quite. You need something to push you over the edge."\nEver since the valleys were discovered in the 1970s, scientists have been trying to explain how a cold planet with a negligible atmosphere could ever have been warm enough to have running water on the surface. It hasn't been easy. Various greenhouse gases have been suggested – carbon dioxide, sulfur dioxide, water vapor – but none were abundant enough to keep Mars balmy. Also, many gases create clouds, which undercut their greenhouse effectiveness by reflecting sunlight back up into space.\nConsidering all the problems with keeping Mars warm, some scientists have considered a series of "transient" atmospheres.
They argue that big meteor impacts could blast enough material into Mars's sky to create a temporary atmosphere that would heat the planet into a brief warm spell, lasting just long enough to flash-thaw huge chunks of ice into catastrophic floods that would pour across the surface, quickly carving the visible channels and valleys.\nBut the longer we look at Mars, the more valleys and channels we find, note Ramirez and Kasting. Big ones. The amount of water required to carve and shape them just isn't compatible with flash-in-the-pan transient atmospheres, they argue. So they set out to model a stable, warm climate on early Mars.\nHydrogen: an unlikely hero\nThey got a hint from the hydrogen-rich atmosphere of the outer planets.", "score": 30.39798992221726, "rank": 30}, {"document_id": "doc-::chunk-1", "d_text": "Tim Parker of NASA's Jet Propulsion Laboratory first proposed such an idea in 1989. Parker, examining images taken by the Viking Orbiters, found what he believed were remnants of two ancient ocean shorelines, which he called "contacts," one inside the other, in the Martian north.\nExpanding on this notion, in 1991 Vic Baker of the University of Arizona suggested that Mars might not be geologically dead and permanently frozen. Instead, he proposed, Mars might undergo cycles, or pulses -- first heating up, releasing groundwater and forming an ocean in the north, then dissipating the ocean back into the planet's crust and re-freezing.\nMore recently, Jim Head and colleagues at Brown University found evidence that is consistent with a shoreline that might indeed have existed at the inner of Parker's two proposed contacts, contact 2. Head and colleagues examined elevation data gathered by the Mars Orbiter Laser Altimeter (MOLA) on board the Mars Global Surveyor (MGS) and found that the elevations at points along contact 2 were much closer to a straight line than those at contact 1.
They also found that the terrain below this elevation was smoother than the terrain above it. Both of these findings are consistent with the former presence there of an ocean.\nBut the story doesn't end there. Shortly after Head and colleagues published their findings, Mike Malin and Ken Edgett of Malin Space Systems used the Mars Orbital Camera (MOC) aboard MGS to take a series of high-resolution images of contact 2 terrain. Their conclusion: there's nothing there.\nAnd the debate continues. Says Mike Carr of the U.S. Geological Survey, author of the book Water on Mars, "We're getting all this new data from MGS, and I think a lot of it is just not understood yet. It's very hard to understand. The whole business of the oceans, the evidence is so contradictory."\nRight: In this topographic drawing of Mars, blue indicates the area where an ocean once may have existed. Credit: NASA Mars Global Surveyor Project; MOLA Team Rendering by Peter Neivert, Brown University\nMars's small-valley networks, which occur mainly in the southern highlands, pose another perplexing problem.
Scientists who first studied images of these valleys thought they resembled river valleys on Earth.", "score": 29.790153428167557, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "A new study of gas in meteorites suggests Mars was bitterly cold for pretty much all of the past 4 billion years, putting the freeze on hopes that the Red Planet had any extended wet periods during which life could have flourished.\nSeveral rocks that were once near the surface of Mars, and have in the past few million years been kicked up by impacts that sent them to Earth, have been freezing cold for most of the past 4 billion years, the study concludes.\nWhile the findings don't rule out the possibility of life on Mars, they indicate that biology's best shot would have come in the first 500 million years of the Red Planet's 4.5-billion-year existence.\n"Our research doesn't mean that there weren't pockets of isolated water in geothermal springs for long periods of time, but suggests instead that there haven't been large areas of free-standing water for 4 billion years," said David Shuster, a graduate student at the California Institute of Technology.\nShuster and Benjamin Weiss, an assistant professor at the Massachusetts Institute of Technology, present their results in Friday's issue of the journal Science.\nMany scientists have tried to open ancient chapters in the book of Mars geology by modeling the past based on large channels carved into the dusty surface. In some scenarios — very popular a few years ago — the computers said Mars was warmer and wetter during much of its early time.\nBut recent evidence of past water, provided by the Mars rovers, has not revealed the sorts of huge and deep oceans that some might have hoped for.
Instead, water might have existed in shallow lakes that did not necessarily last too long, providing only lukewarm support for the warmer and wetter theory.\nLife as we know it requires liquid water, so much of the money spent to explore Mars is geared toward searching for signs of liquid water, past or present.\nYet scientists have failed to conclude whether the channels on Mars, some deeper and wider than any on Earth, were carved mostly by water or whether other substances — such as carbon dioxide — might have been involved. It is also not clear if the riverbeds were created gradually or, as many scientists have come to believe in recent years, by catastrophic floods of water and mud that came in short, hellish bursts.\n"Our results seem to imply that surface features indicating the presence and flow of liquid water formed over relatively short time periods," Shuster said.", "score": 28.43041001414577, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "The Case of the Missing Mars Water\nPlenty of clues suggest that liquid water once flowed on Mars -- raising hopes that life could have arisen there -- but the evidence remains inconclusive and sometimes contradictory\nJanuary 5, 2001 -- Mars may once have been a very wet place. A host of clues remain from an earlier era, billions of years ago, hinting that the Red Planet was host to great rivers, lakes and perhaps even an ocean. But some of the clues are contradictory -- they don't all fit together in a coherent whole. Little wonder, then, that the fate of water on Mars is such a hotly debated topic.\nThe reason for the intense interest in Martian water is simple: Without water, there can be no life as we know it. If it has been 3.5 billion years since liquid water was present on Mars, the chance of finding life there is remote.
But if water is present on Mars now, however well hidden, life may be holding on in some protected niche.\nRight: Sedimentary rock layers like these in Mars's Holden Crater suggest that the Red Planet was once home to ancient lakes.\nBased on what we have observed so far, Mars today is a frozen desert. It's too cold for liquid water to exist on its surface and too cold to rain. The planet's atmosphere is also too thin to permit any significant amount of snowfall.\nEven if some internal heat source warmed the planet up enough for ice to melt, it wouldn't yield liquid water. The Martian atmosphere is so thin that even if the temperature rose above freezing the ice would change directly to water vapor.\nSigns of Heavy Flooding\nWhat caused these giant floods? Was it a climate change, perhaps brought about by a change in Mars's orbit? Or was the planet's own internal heat responsible? And, whatever mechanism caused the floods in the first place, where has all that water gone? Was it absorbed into the ground where it remains today, frozen? Or did it dissipate into the Martian atmosphere, where it was subsequently lost to space? No-one knows for certain the answers to these questions.\nSome scientists believe that the catastrophic floods that carved the outflow channels occurred nearly simultaneously, releasing such vast quantities of water that they merged into an ocean that covered the northern lowlands.", "score": 27.75297028758331, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "The oxidation of Earth's atmosphere was one of the most important events in our planet's history. It's also one we don't understand very well. New evidence has dated the event to 2.33 billion years ago and indicates it happened over a timespan of less than 10 million years.\nOxygen is so reactive that it will not stay in the atmosphere for a long period of time.
This is why scientist James Lovelock once suggested that oxygen in the atmosphere of another planet would be evidence for life, although the conclusiveness of this is now debated.\nIt's also why there was excitement about the discovery of oxygen on Mars this week, although the Martian concentration is low enough to allow other possible sources.\nLife alone is not enough, however. For almost the first 2 billion years of life on Earth, oxygen levels are thought to have been minimal, although even this was challenged by another paper this week. Only after what is known as the Great Oxidation Event (GOE) did the planet become capable of sustaining multi-cellular organisms.\nFew rocks remain from the time of the GOE, and what we have found has proven sufficiently contradictory that geologists have struggled to make sense of when, and how quickly, oxygen appeared. However, new evidence from South Africa may resolve these questions.\nHow long has Earth had oxygen? Credit: NASA\nIn Science Advances, a team led by Dr. Genming Luo of the Massachusetts Institute of Technology (MIT) reports on the results from three drill cores in the Transvaal, South Africa. All three provide a date of 2.33 billion years ago for the GOE, and indicate oxidation took between 1 million and 10 million years.
Unimaginably long as a million years is to us, by geological standards these are quite rapid timescales.\nLuo and co-authors also concluded that the concentration of sulphate in the oceans, an indication of the presence of oxygen to bond with the sulfur, lagged the GOE by approximately 6 million years and increased more slowly.\nAnother of the big questions about the GOE has been whether it was caused by, or caused, radical climate change.\nOn this point the paper reports that “a series of glaciations took place between 2.45 and 2.2 billion years ago,” and that at least one of these “Snowball Earth” events occurred shortly before the GOE and may have precipitated it.", "score": 27.010610379155256, "rank": 34}, {"document_id": "doc-::chunk-1", "d_text": "Because various processes can change the relative amounts of carbon-13 to carbon-12 isotopes in the atmosphere, "we can use these measurements of the ratio at different points in time as a fingerprint to infer exactly what happened to the Martian atmosphere in the past," says Hu. The first constraint is set by measurements of the ratio in meteorites that contain gases released volcanically from deep inside Mars, providing insight into the starting isotopic ratio of the original Martian atmosphere. The modern ratio comes from measurements by the SAM (Sample Analysis at Mars) instrument on NASA's Curiosity rover.\nOne way carbon dioxide escapes to space from Mars' atmosphere is called sputtering, which involves interactions between the solar wind and the upper atmosphere. NASA's MAVEN (Mars Atmosphere and Volatile Evolution) mission has yielded recent results indicating that about a quarter pound (about 100 grams) of particles every second are stripped from today's Martian atmosphere via this process, likely the main driver of atmospheric loss. Sputtering slightly favors loss of carbon-12, compared to carbon-13, but this effect is small.
The Curiosity measurement shows that today's Martian atmosphere is far more enriched in carbon-13 -- in proportion to carbon-12 -- than it should be as a result of sputtering alone, so a different process must also be at work.\nHu and his co-authors identify a mechanism that could have significantly contributed to the carbon-13 enrichment. The process begins with ultraviolet (UV) light from the sun striking a molecule of carbon dioxide in the upper atmosphere, splitting it into carbon monoxide and oxygen. Then, UV light hits the carbon monoxide and splits it into carbon and oxygen. Some carbon atoms produced this way have enough energy to escape from the atmosphere, and the new study shows that carbon-12 is far more likely to escape than carbon-13.\nModeling the long-term effects of this "ultraviolet photodissociation" mechanism, the researchers found that a small amount of escape by this process leaves a large fingerprint in the carbon isotopic ratio. That, in turn, allowed them to calculate that the atmosphere 3.8 billion years ago might have had a surface pressure slightly lower than that of Earth's atmosphere today.\n"This solves a long-standing paradox," said Bethany Ehlmann of Caltech and JPL, a co-author of both today's publication and the August one about carbonates.", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Mars today (despite the presence of a small amount of liquid water) is a dry, frozen place. But this was not always the case. Ancient Mars was likely warm and wet, much like Earth. So what happened to change it? Thanks to brand new results from NASA's MAVEN mission, announced today, we may finally know.\nThe Solar Winds\n
“So what happened to the carbon dioxide in that atmosphere, what happened to the water on early Mars?”\nOver the last year, MAVEN has been studying Mars' atmosphere carefully. Using that data, researchers revealed (along with the simultaneous publication of the results in Science and Geophysical Research Letters) that, under bombardment from solar winds, Mars' atmospheric gases were being stripped away.\nWhile the solar winds don't directly strike the planet's surface, they have been steadily stripping atmospheric gas from around the planet, coming off in bursts, as Jasper Halekas, MAVEN's lead instrument investigator, explained, "like the shock wave around a jet plane."\nAs the solar winds carried off more and more of Mars' atmospheric gas, like you see above, the planet was left less and less able to sustain the watery surface it once had. "The analogy I use," Jakosky explained, "is when I step out of the shower into the breeze, the water in my hair is just whisked away by the wind."\nToday, that loss of atmospheric gas is continuing at a rate of about 1/4 pound of gas per second. But researchers believe that in Mars' earlier days, as it first began to lose atmospheric gas, that rate was much higher. And even today, that rate can easily jump 10-20 times during a solar storm, like you see in this comparison between the average rates and the solar storm rates:\nIs Mars Earth's Future?\nSo, if Mars was once a wet planet that lost its water along with its atmosphere, what about Earth? Should we worry that our own home might one day look like the red planet's dry, dusty surface?
Fortunately, there’s something significant standing in the way: Earth’s magnetic field.\nLike Mars, Earth is also subject to powerful solar winds—and like Mars, it also loses some of its atmospheric gases.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-2", "d_text": "When it arrived in November 1971, Mars was in the middle of one of its famous global dust storms. But when the storm cleared, Mariner 9’s cameras revealed volcanoes, canyons, flood features, dried-up riverbeds, layered sediments, and many landforms shaped by ground ice. The surface, it turned out, was much more geologically diverse than indicated by the earlier flybys. Mars had been active for much of its history, and water and climate change were implicated.\nWith each successive spacecraft mission, evidence for climate change accumulated. Many of the fluvial features discovered by Mariner 9 and mapped at higher resolution by subsequent missions are best explained by flowing liquid water in a warm climate. Furthermore, they are frequently found on surfaces that cratering statistics suggest date to the Noachian period between 4.1 and 3.7 billion years ago (Gya). Thus, billions of years ago, a warm, wet climate apparently existed on Mars. The hypothesis that emerged was that Mars had a thicker CO2 atmosphere then—one that was capable of providing a strong enough greenhouse effect to warm the surface to the melting point despite the faint young Sun (Pollack et al., 1987). Evidence also emerged for relatively recent climate change. Both polar regions on Mars are geologically young (<100 My) and consist of thick deposits of layered sedimentary material composed predominantly of water ice. These were interpreted as evidence for periodic climate change associated with variations in Mars’ orbit parameters (e.g., Pollack, 1979; Toon et al., 1980). 
Thus, there is evidence for two different types of climate change on Mars: one related to a change in the mass and composition of the atmosphere early in its history, and one related to changing orbital properties relatively late in its history. The connection between these two epochs implies that the atmosphere thinned over time and that the climate system was dominated by changes in radiative forcing. This picture remains the general consensus, although the details are still under debate.\nThe most important controls on a planet's climate system are the mass and composition of its atmosphere, its orbital properties, and the luminosity of the star it orbits.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "New evidence suggests that Mars' dry climate was once wetter and possibly more Earth-like. The European Space Agency (ESA) has released images of two side-by-side craters offering clues to the dusty, barren Red Planet's past. Here's what you should know about the new theory:\nWhat exactly do the photographs depict?\nThe high-resolution stereo images showcase the Danielson and Kalocsa craters, which sit side by side in a desert-like region called Arabia Terra. Snapped on June 19, 2011, from aboard the Mars Express spacecraft, the photos highlight new topographical evidence that has led experts to posit that the larger crater was once filled with water.\nWhat's the evidence?\nIn the much bigger, 38-mile-across Danielson crater — named after George E. Danielson, who pioneered many of the cameras used to photograph Mars today — steep, wind-carved hills called \"yardangs,\" characterized by distinct layers of sediment, protrude from the crater's bottom. (See a photo below.) ESA scientists believe the sediment that forms these yardangs was deposited there by strong north-northeasterly winds and then hardened (or cemented) by water, \"possibly from an ancient deep groundwater reservoir,\" says Alan Boyle at MSNBC.
In contrast, the more elevated, 20-mile-wide Kalocsa crater features a smooth bottom with no such structures jutting out, possibly because it was never deep enough to reach the groundwater.\nWhat happened to the water?\nThe theory is that slight shifts in the planet's axis triggered drastic climate changes over millions of years, leading to intermittent wet and dry periods. Judging from the yardangs' orientation, researchers believe that the same strong north-northeasterly winds that deposited the original sediments battered them with dust and sand during a dry spell after the water disappeared, carving the yardangs into their characteristic shapes. Similar axis changes are thought to have wrought the extreme ice age cycles here on Earth. Take a look at the yardangs in the Danielson crater:", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-29", "d_text": "The sustainability of habitability on terrestrial planets: Insights, questions, and needed measurements from Mars for understanding the evolution of Earth-like worlds. Journal of Geophysical Research: Planets, 1231, 1927–1961.\n- Elkins-Tanton, L. T., Hess, P. C., & Parmentier, E. M. (2005). Possible formation of ancient crust on Mars through magma ocean processes. Journal of Geophysical Research, 110(E12).\n- Fanale, F. P., & Cannon, W. A. (1978). Mars: The role of the regolith in determining atmospheric pressure and the atmosphere’s response to insolation changes. Journal of Geophysical Research, 83, 2321–2325.\n- Fassett, C. I., & Head, J. W. (2008). The timing of Martian valley network activity: Constraints from buffered crater counting. Icarus, 195, 61–89.\n- Feldman, W. C., Boynton, W. V., Tokar, R. L., Prettyman, T. H., Gasnault, O., Squyres, S. W., Elphic, R. C., Lawrence, D. J., Lawson, S. L., Maurice, S., McKinney, G. W., Moore, K. R., & Reedy, R. C. (2002). Global distribution of neutrons from Mars: Results from Mars Odyssey. Science, 297, 75–78.\n- Forget, F., Byrne, S., Head, J.
W., Mischna, M. A., & Schörghofer, N. (2017). Recent climate variations. In R. M. Haberle, R. T. Clancy, F. Forget, M. D. Smith, & R. W. Zurek (Eds.), The atmosphere and climate of Mars (pp. 497–525). Cambridge University Press.\n- Forget, F., Haberle, R. M., Montmessin, F., Levrard, B., & Head, J. W. (2006). Formation of glaciers on Mars by atmospheric precipitation at high obliquity. Science, 311, 368–371.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-0", "d_text": "A new NASA video shows what Mars may have been like long ago when it had a denser atmosphere and liquid water.\nThe concept is based on evidence that Mars was once very different.\n\"There are characteristic dendritic structured channels that, like on Earth, are consistent with surface erosion by water flows,” said Joseph Grebowsky of NASA's Goddard Space Flight Center\nin Greenbelt, Maryland. “The interiors of some impact craters have basins suggesting crater lakes, with many showing connecting channels consistent with water flows into and out of the crater.”\nHe added that small impact craters have been removed with time and larger craters show signs of erosion by water more than 3.7 billion years ago, and sedimentary layering is seen on valley walls. Minerals are present on the surface that can only be produced in the presence of liquid water, Grebowsky said.\nEstimates about the amount of water needed to explain these features have equated to possibly as much as a planet-wide layer one-half a kilometer (1,640 feet) deep or more, according to Grebowsky.\nIt's unknown if the habitable climate lasted long enough for life to emerge on Mars.\n\"The only direct evidence for life early in the history of a planet's evolution is that on Earth,\" said Grebowsky. \"The earliest evidence for terrestrial life is the organic chemical structure of a rock found on the surface in Greenland. The surface was thought to be from an ancient sea floor sediment. 
The age of the rock was estimated to be 3.8 billion years, about 700 million years after the Earth's formation.”\nThe video ends with an illustration of NASA's MAVEN mission in orbit around present-day Mars. MAVEN will investigate how Mars lost its atmosphere. Scheduled to be launched in November, it will arrive at Mars in September 2014.\nThere are several theories of how Mars was stripped of its thick atmosphere.\n\"Hydrodynamic outflow and ejection from massive asteroid impacts during the late heavy bombardment period (ending 4.1 billion to 3.8 billion years ago) were early processes removing part of the atmosphere, but these were not prominent loss processes afterwards,\" said Grebowsky. \"The leading theory is that Mars lost its intrinsic magnetic field that was protecting the atmosphere from direct erosion by the impact of the solar wind.\"\nThe solar wind is a thin stream of electrically charged particles (plasma) blowing continuously from the sun into space at about a million miles per hour.\nHere's the video:", "score": 26.387180477342042, "rank": 40}, {"document_id": "doc-::chunk-1", "d_text": "Researchers suggested that pressure from charged particles in the solar wind blew the lighter hydrogen molecules out of Mars' atmosphere, because the planet has no global magnetic field to protect it. Additionally, the water molecules Mars had in its atmosphere likely broke apart under the sun's ultraviolet light.\nPast infrared observations with the W. M. Keck Observatory, the NASA Infrared Telescope Facility and the European Southern Observatory's Very Large Telescope showed that the Martian polar caps are highly enriched in deuterium, supporting the theory that deuterium remained behind.
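The deuterium argument can be put in rough numbers with a minimal mass balance. The sketch below assumes deuterium is perfectly retained while ordinary hydrogen escapes, and uses the roughly eightfold enrichment of Martian ice relative to Earth's oceans reported by this research; both simplifications are mine, not the researchers' actual model:

```python
# If deuterium never escapes while ordinary hydrogen (as H2O) does,
# the D/H ratio grows in inverse proportion to the water remaining:
#   D_now = D_orig  and  (D/H)_now = enrichment * (D/H)_orig
#   =>  H_now = H_orig / enrichment
enrichment = 8.0                      # Mars ice D/H vs. Earth's oceans (quoted figure)
remaining_fraction = 1.0 / enrichment

print(f"fraction of the original water remaining: {remaining_fraction:.1%}")
```

This crude estimate (~12.5%) lands close to the 13 percent figure quoted later in the same article, though the published analysis accounts for fractionation during escape rather than assuming perfect deuterium retention.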
Mars' frozen water has a ratio of 1 deuterium atom to 400 hydrogen atoms — a deuterium enrichment about eight times greater than that of Earth's oceans, where the ratio is 1 deuterium atom to 3,200 hydrogen atoms, the 2015 research showed.\n\"Now we know that Mars water is much more enriched than terrestrial ocean water in the heavy form of water, the deuterated form,\" Michael Mumma, a senior scientist at NASA's Goddard Space Flight Center in Maryland, said in the 2015 video. \"Immediately that permits us to estimate the amount of water Mars has lost since it was young.\"\nIn the ancient past, Mumma added, Mars had an ocean that covered about 20 percent of the planet's surface area — \"a respectable ocean,\" he said. The body of water was about 5,000 feet (1,500 meters) deep, on average. Today, only 13 percent of that ancient ocean remains, locked in the polar ice caps.\nIn separate observations, the Mars Curiosity rover, located at Gale Crater near the Martian equator, found that conditions were wet in that region for about 1.5 billion years. That period of time, Mumma said, \"is already much longer than the period of time needed for life to develop on Earth.\" The infrared telescope observations suggested that \"Mars must have been wet for a period even longer,\" he added.\nThe Webb telescope, which is designed for infrared observations, will follow up on the normal-water and heavy-water observations performed by the other observatories, NASA officials said in the statement. It will watch the normal-water-to-heavy-water ratio during different seasons, and at different times and locations.", "score": 26.10314988907555, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "For more than two decades, the presence of an ancient Martian ocean has been a source of academic controversy. In recent years, radar and visual maps provided by orbiting space probes have strongly supported the existence of an ocean, billions of years ago.
The European Space Agency’s “Mars Express” orbiter, for instance, detected probable ocean floor sediments in regions ringed by ancient Martian shorelines. Scientists at the University of Colorado at Boulder also mapped river deltas on the Martian surface that apparently fed into vanished oceans. Now, planetary scientist Geronimo Villanueva and a team of researchers have published an article in the journal Science that provides compelling new evidence for a vast, long-lived Martian ocean. Their article has received major media attention, and, as of this writing, is on the front page of CNN.\nEven without that breakthrough, human exploration and colonization of Mars will be influenced by the history of the planet’s atmosphere, hydrosphere, cryosphere, and perhaps biosphere. The presence of water on Mars will be essential for the survival of future colonists. The historical transformation of Mars from habitable world to frozen wasteland provides a window into what might have been on Earth, and that will encourage scientific missions. The Martian past might even provide a romantic stimulus for terraforming projects that aim to restore the planet to the liveable world it once was.\nNote: I explore these ideas in greater detail in a journal article that I am preparing for submission.\nVillanueva, G. L., M. J. Mumma, R. E. Novak, H. U. Käufl, P. Hartogh, T. Encrenaz, A. Tokunaga, A. Khayat, and M. D. Smith. \"Strong water isotopic anomalies in the martian atmosphere: Probing current and ancient reservoirs.\" Science, 2015. DOI: 10.1126/science.aaa3630", "score": 25.667332101624478, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "It may be time to call off the search for Mars' missing carbon.\nMars' carbon-rich atmosphere was once thick enough to raise the planet's surface temperature and allow entire oceans of liquid water to form there — a drastic change from the cold desert it is today.
This metamorphosis of the Red Planet has put scientists on the hunt for a left-over \"carbon reservoir\" in the planet's dirt and soil. So far, the search has come up empty.\nBut new research has concluded that \"a large 'missing' carbon reservoir is unnecessary\" to explain the planet's watery past, and that the Martian air wasn't exceptionally dense billions of years ago.\nLike any planetary system, Mars' atmosphere is intimately linked with conditions on the planet's surface. Learning just how thick the atmosphere was long ago, and how quickly it changed, could help answer questions about Mars' evolution as well as its potential for supporting life.\nThe disappearing Martian atmosphere\nThere's no doubt that ancient Mars was different from the Red Planet seen today. Humanity seems to have missed a heyday in the Red Planet's history, when the surface was spotted with lakes and oceans, temperatures were warmer, and the atmosphere was thicker.\nThen again, the atmosphere would likely have consisted mostly of carbon dioxide, so it's not as though humans could have survived there unaided. But it's very possible that other forms of life found ancient Mars quite homey. (The very salty liquid water recently discovered on the surface of Mars suggests that the planet could still be hospitable to some life-forms.)\nWhere did Mars' thick atmosphere disappear to? Without a protective magnetic field like Earth's, Mars is exposed to the harsh environment of space. Energetic particles of light from the sun can chemically react with particles in the atmosphere and eject them from the planet.
NASA's MAVEN orbiter recently showed that solar winds may have swept away a significant portion of Mars' atmosphere.\nFreed carbon atoms could also have been absorbed into the Martian dirt and soil, creating what are called \"carbonates\" — a sort of residue left over by the now-gone atmosphere.\nBut tests of the Martian soil show very low levels of carbonates — the \"carbon reservoir\" has not been found.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "There’s never been much mystery surrounding the murder of Mars. Once a warm, wet world, Mars lost its magnetic field more than 4 billion years ago when its outer core cooled, shutting off the dynamo that kept the field in place. That exposed the planet to the solar wind, which clawed away at the atmosphere; and that in turn allowed the planet’s water to sputter off into space. To look at Mars today is to see a desert world, stamped with the dry riverbeds, delicate deltas and deep ocean basins hinting at the water that is no more.\nAt least, that’s the long-accepted view. But according to a study published Mar. 16 in Science by a team of researchers at the California Institute of Technology, that scenario might be all wrong. Mars is dry, alright—or at least it appears to be. But the researchers say much of its water—from 30% to a staggering 99% of it—is still there. It simply retreated into the martian rocks and clay rather than escaping into space.\nJust how much water once flowed across the surface of Mars is expressed by a unit of measure known as “global equivalent layer” (GEL)—the depth that the water would be if it were not sequestered in basins and rivers, but instead were spread evenly across the entire planet. The best estimates for Mars’s original GEL are anywhere from 100 to 1,500 meters (330 to 4,900 ft).
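To make GEL concrete, it helps to convert a layer depth into a total water volume. A quick sketch (Mars' total surface area of about 1.44×10^8 km² is an outside value I'm assuming, not from this article):

```python
MARS_SURFACE_AREA_KM2 = 1.44e8  # assumed value for Mars' total surface area

def gel_volume_km3(depth_m: float) -> float:
    """Water volume (km^3) of a uniform global layer of the given depth."""
    return MARS_SURFACE_AREA_KM2 * (depth_m / 1000.0)  # convert depth to km

# The 100-1,500 m GEL range quoted above:
low, high = gel_volume_km3(100), gel_volume_km3(1500)
print(f"original-water estimates span {low:.2e} to {high:.2e} km^3")
```

The wide spread in volume, more than an order of magnitude, is exactly why the GEL range quoted above is described as broad.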
That’s an awfully wide range, but today it’s considerably narrower: the modern-day water on the planet’s surface—almost entirely trapped in its polar ice caps—has a GEL of just 20 to 40 m.\nWhen Mars lost its atmosphere, all that original water had to go somewhere. The evaporation-to-space route was always the easiest explanation—but it’s a flawed one, too. The problem, as the Caltech researchers knew, involves hydrogen. As Martian water molecules rise into and then escape from the atmosphere, they disassociate into free hydrogen and oxygen atoms. An oxygen atom in water is just an oxygen atom, but hydrogen comes in two forms: ordinary hydrogen (with a single proton in its nucleus) and deuterium (with a proton and a neutron). Water molecules made of heavier deuterium instead of ordinary hydrogen are known, straightforwardly enough, as heavy water.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-4", "d_text": "When there was a magnetic field, the atmosphere would have been protected from erosion by solar wind, which would ensure the maintenance of a dense atmosphere, necessary for liquid water to exist on the surface of Mars. The loss of the atmosphere was accompanied by decreasing temperatures. A part of the liquid water inventory sublimed and was transported to the poles, while the rest became trapped in a subsurface ice layer.\nObservations on Earth and numerical modeling have shown that a crater-forming impact can result in the creation of a long lasting hydrothermal system when ice is present in the crust. For example, a 130 km large crater could sustain an active hydrothermal system for up to 2 million years, that is, long enough for microscopic life to emerge.\nSoil and rock samples studied in 2013 by NASA's Curiosity rover's onboard instruments brought about additional information on several habitability factors. 
The rover team identified some of the key chemical ingredients for life in this soil, including sulfur, nitrogen, hydrogen, oxygen, phosphorus and possibly carbon, as well as clay minerals, suggesting a long-ago aqueous environment — perhaps a lake or an ancient streambed — that was neutral and not too salty. On December 9, 2013, NASA reported that, based on evidence from Curiosity studying Aeolis Palus, Gale Crater contained an ancient freshwater lake which could have been a hospitable environment for microbial life. The confirmation that liquid water once flowed on Mars, the existence of nutrients, and the previous discovery of a past magnetic field that protected the planet from cosmic and Solar radiation, together strongly suggest that Mars could have had the environmental factors to support life. However, the assessment of past habitability is not in itself evidence that Martian life has ever actually existed. If it did, it was probably microbial, existing communally in fluids or on sediments, either free-living or as biofilms, respectively.\nPresent\nPresent day life on Mars could occur kilometers below the surface in the hydrosphere, or in subsurface geothermal hot spots, or it could occur on or near the surface. The permafrost layer on Mars is only a couple of centimeters below the surface. Salty brines can be liquid a few centimeters below that but not far down. Most of the proposed surface habitats are within centimeters of the surface. Any life deeper than that is likely to be dormant.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-38", "d_text": "- McEwen, Alfred S.; Ojha, Lujendra; Dundas, Colin M.; Mattson, Sarah S.; Byrne, Shane; Wray, James J.; Cull, Selby C.; Murchie, Scott L.; et al. (2011). \"Seasonal Flows on Warm Martian Slopes\". Science. 333 (6043): 740–3. Bibcode:2011Sci...333..740M. doi:10.1126/science.1204816.
PMID 21817049.\n- \"Mars Rover Spirit Unearths Surprise Evidence of Wetter Past\" (Press release). Jet Propulsion Laboratory. May 21, 2007. Archived from the original on May 24, 2007.\n- \"Mars Rover Investigates Signs of Steamy Martian Past\" (Press release). Jet Propulsion Laboratory. December 10, 2007. Archived from the original on December 13, 2007.\n- Leveille, R. J. (2010). \"Mineralized iron oxidizing bacteria from hydrothermal vents: Targeting biosignatures on Mars\". American Geophysical Union. 12: 07. Bibcode:2010AGUFM.P12A..07L.\n- Walter, M. R.; Des Marais, David J. (1993). \"Preservation of Biological Information in Thermal Spring Deposits: Developing a Strategy for the Search for Fossil Life on Mars\". Icarus. 101 (1): 129–43. Bibcode:1993Icar..101..129W. doi:10.1006/icar.1993.1011. PMID 11536937.\n- Allen, Carlton C.; Albert, Fred G.; Chafetz, Henry S.; Combie, Joan; Graham, Catherine R.; Kieft, Thomas L.; Kivett, Steven J.; McKay, David S.; et al. (2000). \"Microscopic Physical Biomarkers in Carbonate Hot Springs: Implications in the Search for Life on Mars\". Icarus. 147 (1): 49–67. Bibcode:2000Icar..147...49A. doi:10.1006/icar.2000.6435.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-1", "d_text": "“Nitrogen isotopes tell a story about oxygenation of the surface ocean, and this oxygenation spans hundreds of kilometers across a marine basin and lasts for somewhere less than 50 million years.”\nThe team analyzed drill samples taken by Buick in 2012 at another site in the northwestern part of Western Australia called the Jeerinah Formation.\nThe researchers drilled two cores about 300 kilometers apart but through the same sedimentary rocks — one core samples sediments deposited in shallower waters, and the other samples sediments from deeper waters. Analyzing successive layers in the rocks shows, Buick said, a “stepwise” change in nitrogen isotopes “and then back again to zero.
This can only be interpreted as meaning that there is oxygen in the environment. It’s really cool — and it’s sudden.”\nThe nitrogen isotopes reveal the activity of certain marine microorganisms that use oxygen to form nitrate, and other microorganisms that use this nitrate for energy. The data collected from nitrogen isotopes sample the surface of the ocean, while selenium suggests oxygen in the air of ancient Earth. Koehler said the deep ocean was likely anoxic, or without oxygen, at the time.\nThe team found plentiful selenium in the shallow hole only, meaning that it came from the nearby land, not making it to deeper water. Selenium is held in sulfur minerals on land; higher atmospheric oxygen would cause more selenium to be leached from the land through oxidative weathering — “the rusting of rocks,” Buick said — and transported to sea.\n“That selenium then accumulates in ocean sediments,” Koehler said. “So when we measure a spike in selenium abundances in ocean sediments, it could mean there was a temporary increase in atmospheric oxygen.”\nThe finding, Buick and Koehler said, also has relevance for detecting life on exoplanets, or those beyond the solar system.\n“One of the strongest atmospheric biosignatures is thought to be oxygen, but this study confirms that during a planet’s transition to becoming permanently oxygenated, its surface environments may be oxic for intervals of only a few million years and then slip back into anoxia,” Buick said.\n“So, if you fail to detect oxygen in a planet’s atmosphere, that doesn’t mean that the planet is uninhabited or even that it lacks photosynthetic life. 
Merely that it hasn’t built up enough sources of oxygen to overwhelm the ‘sinks’ for any longer than a short interval.", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "The mineralogical evidence is mounting that early Mars had a significant amount of liquid water on its surface, though exactly how long it remained is unclear.\nThe highest-resolution spectrometry ever taken from orbit shows both a greater variety and a greater abundance of clay minerals – which formed in the presence of water – than previously detected in the planet’s oldest terrain. Researchers say any life the planet may have hosted may be preserved in the clays, making them good targets for future rovers.\nSince the 1970s, spacecraft have beamed back images of deep channels and canyons that suggest water once flowed across the Red Planet. But researchers have only recently begun to find the mineralogical signature of that water.\nThat’s because the spectrometers on previous spacecraft have operated at relatively long wavelengths and low spatial resolutions, parameters best-suited to mapping the basaltic materials and dust that dominate the planet’s surface.\nThe first high-resolution data was taken by an instrument called OMEGA on Europe’s Mars Express spacecraft, which went into orbit around the planet in 2003. In 2006, researchers using OMEGA found clays , or phyllosilicates, in about two dozen sites scattered around the planet, all in terrain estimated to date from the planet’s first 500 million years.\nStill, OMEGA could only resolve sections of the surface as wide as a few hundred metres. Sections as small as 18 metres across can now be resolved with an instrument called CRISM (Compact Reconnaissance Imaging Spectrometer for Mars) on NASA’s Mars Reconnaissance Orbiter, which began orbiting the planet in March 2006.\n“We can really see individual outcrops of rocks,” says Ralph Milliken of NASA’s Jet Propulsion Laboratory in Pasadena, California. 
He is a member of a team led by John Mustard of Brown University in Providence, Rhode Island, that studied the CRISM data.\n‘Quite a bit of water’\nCRISM has mapped a bit more than half the planet’s surface at a resolution of about 100 metres, focusing in on some areas at even higher resolutions.\n“We’ve seen a lot more hydrated minerals, and a lot more occurrences of them” than observed with OMEGA, Milliken told New Scientist.", "score": 24.90789191292688, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "Even though Mars is currently too cold and has too little pressure to prevent water from freezing, the planet had liquid water on its surface in the distant past. That could happen if the ancient Martian atmosphere, thicker than its present-day counterpart, had enough greenhouse gasses to keep the planet warm and the water liquid. So over the past decades of our observations and robotic visitations of Mars, researchers have been looking for evidence for Mars' past carbon dioxide levels.\nThe evidence we've gathered indicates there was some CO2 present but not nearly enough to keep water liquid, especially given that the early Sun was less active than it is at present. So far, these estimates contained large uncertainties, so it remained possible that there was enough carbon in the atmosphere to allow the ancient water to flow.\nA team of researchers has created a new estimate of Mars' ancient carbon levels using data collected by the Curiosity rover. They've also concluded that there was nowhere near enough CO2 to warm the planet to the point where water on the surface would remain liquid.\nDuring its trip towards Aeolis Mons (the peak at the center of the Gale crater), Curiosity came upon what looked to be the remains of an ancient lake. This was a region of about 70 meters with mudstones, siltstones, and sandstones left behind by ancient water, somewhere between 3.8 and 3.1 billion years ago. 
The rover analyzed these materials as it passed, but there were no carbonate minerals, which would have made assessing past carbon levels easier. However, certain clays could provide a way to estimate the ancient carbon concentration.\nWhen these clays formed, CO2 would have dissolved into their structure. While we can't measure them, we can infer their presence indirectly, as dissolved carbon limits the solubility of iron-bearing olivine. The amount of this olivine in the clays puts a limit on the amount of CO2 that could be in the water that the clays formed in.\nThis technique is also used to estimate Earth’s early atmosphere, but here it’s actually less reliable because the presence of life contaminates the process. The Martian rocks have been comparatively undisturbed, making them prime targets for science. “In many ways, deriving [CO2] estimates from Gale Crater sedimentary rocks is more straightforward than doing so in their terrestrial equivalents,” the authors write in their paper.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-1", "d_text": "Consequently, alternative ideas for warming early Mars involving impacts, volcanism, clouds, and orbital changes have been suggested. Yet a widely agreed upon solution remains elusive. In more recent geological times, thick polar layered terrains, debris-covered glaciers, and latitude-dependent mantle deposits indicate that ice ages have come and gone similar to those on Earth. Although rainfall is not implicated, cyclic mobilization and redistribution of large surface ice reservoirs, a process similar to what occurred on Earth, is implicated. And for the vast majority of the billions of years of Mars’ existence, a gradually brightening Sun, chaotic orbital variations, and episodic volcanism strongly implicate an ever-changing climate system. Clearly, Mars today is not representative of what it was in the past. 
This article reviews the evidence for climate change on Mars and constructs a picture of how its climate might have evolved over time.\nBefore the spacecraft era, telescopic observers of Mars worked out its size, basic orbital properties, and noted its variable features including clouds and polar ice caps. Most famously, Percival Lowell claimed to see canals crisscrossing the surface that in his mind were constructed by intelligent beings trying to survive on a drying planet (Figure 1; Lowell, 1906). Lowell’s ideas were immensely popular and generated much interest in the red planet, but they were not scientifically sound. He recognized that the atmosphere was thin, for example, but argued for a warm climate, an inconsistency pointed out a year later by Alfred Wallace (1907).\nNevertheless, the possibility of life on Mars remained foremost in public discourse until the Mariner flybys of the 1960s. These missions revealed a much harsher Martian environment than Lowell envisioned. The seasonal polar caps were made of CO2 ice, not water ice; its atmosphere was primarily CO2, not nitrogen or oxygen; and the surface pressure was estimated to be very low (~7 hPa), much lower than previously thought. Most important, craters—not canals—dominated its surface. There was no evidence for life of any kind much less an advanced civilization engaged in a global-scale canal project. 
The landscape instead resembled the Moon, and like the Moon, it suggested that Mars was a dead planet whose story ended soon after it formed.\nThis perception changed with Mariner 9, the first spacecraft to orbit Mars giving a much greater view of its surface.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-16", "d_text": "The presence of near-surface ice at mid to high latitudes is corroborated by the Odyssey orbiter data (Boynton et al., 2002; Feldman et al., 2002; Mitrofanov et al., 2002), the Phoenix lander (Mellon et al., 2009; Smith et al., 2009), and images of fresh impact craters (Byrne et al., 2009). The inferred ice abundances from these data are generally in excess of that expected from the filling of pore space alone, suggesting that the ice is very pure. Although the tropics are generally ice-free, fan-shaped deposits on the northwest flanks of the Tharsis volcanoes appear to be remnants of cold-based tropical glaciers that existed in the past (Head & Marchant, 2003). Global circulation modeling simulations show that such glaciers could form at times of high obliquity (Forget et al., 2006). These simulations clearly demonstrate that obliquity variations can mobilize and redistribute surface ice deposits from the polar regions to the tropics. Thus, there is ample evidence for Amazonian climate change, and a plausible forcing mechanism has been identified.\nTheories of Climate Evolution\nThe origin and evolution of the Martian atmosphere and climate system are not yet well understood, but there are plausible scenarios (e.g., Lammer et al., 2013). The following is intended to provide an overview of such scenarios, recognizing that uncertainties remain and alternatives may emerge. The narrative is chronological and includes geophysical topics relevant to the discussion. 
A timeline is given in Figure 15.\nFrom isotope systematics of NWA 7034, a 4.43-Gy-old breccia meteorite from Mars known as “Black Beauty,” accretion, core formation, and magma ocean crystallization were completed less than 20 My after the formation of the solar system (Bouvier et al., 2018). Thus, a stable primordial crust existed on Mars very early in its history. During accretion, impact devolatilization probably created a temporary steam atmosphere that could have contained many kilometers (GEL) of water (Elkins-Tanton et al., 2005). Under these circumstances, the high extreme ultraviolet (EUV) output of the young Sun would have powered an intense phase of hydrodynamic escape, leading to removal of most of the steam atmosphere.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-35", "d_text": "Journal of Geophysical Research. 109. Bibcode:2004JGRE..10909006F. doi:10.1029/2003JE002160.\n- \"Mars Global Surveyor Measures Water Clouds\". Archived from the original on August 12, 2009. Retrieved 2009-03-07.\n- Baker, V. R.; Strom, R. G.; Gulick, V. C.; Kargel, J. S.; Komatsu, G.; Kale, V. S. (1991). \"Ancient oceans, ice sheets and the hydrological cycle on Mars\". Nature. 352 (6336): 589–594. Bibcode:1991Natur.352..589B. doi:10.1038/352589a0.\n- \"Flashback: Water on Mars Announced 10 Years Ago\". SPACE.com. June 22, 2000. Archived from the original on December 22, 2010.\n- \"The Case of the Missing Mars Water\". Science@NASA. Archived from the original on March 27, 2009. Retrieved 2009-03-07.\n- \"Mars Rover Opportunity Examines Clay Clues in Rock\". NASA. Jet Propulsion Laboratory. May 17, 2013. Archived from the original on June 11, 2013.\n- \"NASA Rover Helps Reveal Possible Secrets of Martian Life\". NASA. November 29, 2005. Archived from the original on November 22, 2013.\n- ISBN 0-312-24551-3\n- \"PSRD: Ancient Floodwaters and Seas on Mars\". Psrd.hawaii.edu. July 16, 2003. 
Archived from the original on January 4, 2011.\n- \"Gamma-Ray Evidence Suggests Ancient Mars Had Oceans\". SpaceRef. November 17, 2008.\n- Carr, Michael H.; Head, James W. (2003). \"Oceans on Mars: An assessment of the observational evidence and possible fate\". Journal of Geophysical Research: Planets. 108: 5042. Bibcode:2003JGRE..108.5042C.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "The evidence \"seems to be building that we are actually all Martians\" and \"that life started on Mars and came to Earth on a rock,\" according to a scientist from The Westheimer Institute for Science and Technology in the US.\nProfessor Steven Benner said, \"It's lucky that we ended up here nevertheless, as certainly Earth has been the better of the two planets for sustaining life. If our hypothetical Martian ancestors had remained on Mars, there might not have been a story to tell.\"\nProf Benner told the Goldschmidt 2013 conference in Italy that the oxidised mineral form of the element molybdenum, \"couldn't have been available on Earth at the time life first began, because three billion years ago the surface of the Earth had very little oxygen, but Mars did\".\n\"It's yet another piece of evidence which makes it more likely life came to Earth on a Martian meteorite, rather than starting on this planet,\" he added.\nLife on Earth may have started on Mars, a major science conference has heard.\nAn element believed to be crucial to the origin of life would only have been available on the surface of the Red Planet, Geochemist Professor Steven Benner claims.\nProfessor Benner argues that the \"seeds\" of life probably arrived on Earth in meteorites blasted off Mars by impacts or volcanic eruptions.\nHe points to the oxidised mineral form of the element molybdenum, thought to be a catalyst that helped organic molecules develop into the first living structures, as evidence of his theory.", "score": 24.345461243037445, "rank": 
53}, {"document_id": "doc-::chunk-1", "d_text": "Secondly, there are no processes underway that could lead to a collapse in the Martian atmosphere.\nNevertheless, if an incident of this kind were to eject water molecules into space, once they have acquired 'escape velocity', they would no longer be influenced by the gravitational pull of Mars. Therefore, theoretically, these molecules could drift further into the Solar System and at some stage might be captured by Earth's gravitational field. The majority of this 'Martian water' would originate from the surface of Mars (ice deposits at the north and south poles), or from layers of ice beneath the planet's surface. Today’s Martian atmosphere contains just 210 parts per million water molecules, which is negligible. So initially, it would be molecules, atoms or ions of carbon dioxide (96 percent), nitrogen (1.9 percent), argon (1.9 percent), oxygen (0.15 percent) and carbon monoxide (0.06 percent) – in descending order of their relative contribution to the Martian atmosphere – that would arrive before water molecules.\nIf this transfer of water molecules from Mars to Earth did indeed occur, it would be possible, in theory at least, to identify them as 'Martian', based on their slightly different isotope composition (the relative proportions of different 'types' of hydrogen and oxygen atoms in the water molecules). However, given that it is likely to be very, very, very few atoms, any kind of identification using current analysis techniques is probably impossible.\nThe situation with tiny gaseous inclusions found in meteorites is slightly different. Some meteorites have been found that must have originated from Mars, as they contain small pockets of gas in which the noble gas argon is found in similar concentrations to those that orbiters have measured in the Martian atmosphere. 
This is considered definite proof of their origin.", "score": 23.70451954253454, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "Mars is blanketed by a thin, mostly carbon dioxide atmosphere -- one that is far too thin to keep water from freezing or quickly evaporating. However, geological evidence has led scientists to conclude that ancient Mars was once a warmer, wetter place than it is today. To produce a more temperate climate, several researchers have suggested that the planet was once shrouded in a much thicker carbon dioxide atmosphere. For decades that left the question, \"Where did all the carbon go?\"\nThe solar wind stripped away much of Mars' ancient atmosphere and is still removing tons of it every day. But scientists have been puzzled by why they haven't found more carbon -- in the form of carbonate -- captured into Martian rocks. They have also sought to explain the ratio of heavier and lighter carbons in the modern Martian atmosphere.\nNow a team of scientists from the California Institute of Technology and NASA's Jet Propulsion Laboratory, both in Pasadena, offer an explanation of the \"missing\" carbon, in a paper published today by the journal Nature Communications.\nThey suggest that 3.8 billion years ago, Mars might have had a moderately dense atmosphere. Such an atmosphere -- with a surface pressure equal to or less than that found on Earth -- could have evolved into the current thin one, not only minus the \"missing\" carbon problem, but also in a way consistent with the observed ratio of carbon-13 to carbon-12, which differ only by how many neutrons are in each nucleus.\n\"Our paper shows that transitioning from a moderately dense atmosphere to the current thin one is entirely possible,\" says Caltech postdoctoral fellow Renyu Hu, the lead author. 
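The 'escape velocity' invoked earlier, for molecules leaving Mars' gravity well, follows from the planet's mass and radius alone. A minimal sketch using standard published constants (the ~5 km/s result is an inference, not a figure quoted in the text):

```python
import math

# Escape velocity: v = sqrt(2*G*M/R).
# Constants are standard published values, not taken from the article.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_MARS = 6.417e23    # mass of Mars, kg
R_MARS = 3.3895e6    # mean radius of Mars, m

def escape_velocity(mass_kg, radius_m):
    """Minimum speed a molecule needs to leave a body's gravity well."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v_esc = escape_velocity(M_MARS, R_MARS)
print(f"Mars escape velocity ~ {v_esc / 1000:.1f} km/s")
```

This works out to roughly 5 km/s, less than half of Earth's ~11.2 km/s, which is one reason light gases are lost from Mars more readily.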
\"It is exciting that what we know about the Martian atmosphere can now be pieced together into a consistent picture of its evolution -- and this does not require a massive undetected carbon reservoir.\"\nWhen considering how the early Martian atmosphere might have transitioned to its current state, there are two possible mechanisms for the removal of the excess carbon dioxide. Either the carbon dioxide was incorporated into minerals in rocks called carbonates or it was lost to space.\nAn August 2015 study used data from several Mars-orbiting spacecraft to inventory carbonates, showing there are nowhere near enough in the upper half mile (one kilometer) of the crust to contain the missing carbon from a thick early atmosphere during a time when networks of ancient river channels were active, about 3.8 billion years ago.\nThe escaped-to-space scenario has also been problematic.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-7", "d_text": "Although subsequent measurements have shown that the present-day residual cap is not massive enough to significantly buffer surface pressures as Leighton and Murray envisioned, this concept factors heavily into the orbitally forced climate changes that have dominated much of the planet’s history.\nEvidence for Climate Change\nIsotopic data indicate that Mars accreted and differentiated into a core, mantle, and crust within a few tens of millions of years (e.g., Lee & Halliday, 1997; Solomon et al., 2005). The nature of the atmosphere during this time is unknown, although an early steam atmosphere and magma ocean are possible (Matsui & Abe, 1987). Major early events include the creation of the global dichotomy (e.g., McGill & Squyres, 1991), the development of a magnetic field (e.g., Acuña et al., 1999), and the formation of large impact basins (e.g., Carr, 1981). The Hellas impact basin is estimated to be 4.1 Gy (Frey, 2003)—a time that may coincide with the end of the magnetic field (Lillis et al., 2008). 
It also marks the approximate beginning of the geological record observable from orbiters, which has been grouped into three periods: the Noachian (4.1 to 3.7 Gy), the Hesperian (3.7 to ~3.0 Gy), and the Amazonian (~3.0 Gy to the present). Because these periods are based on crater statistics linked to cratering models, there is uncertainty in the absolute ages (e.g., Michael, 2013). These periods are also marked by distinctive surface mineralogies (Bibring et al., 2006) likely linked to the prevailing climate system, with the Noachian dominated by phyllosilicates (clays), the Hesperian by sulfates, and the Amazonian by iron oxides. It is therefore convenient to frame the discussion of climate change around these geological epochs.\nThe Noachian is the earliest epoch and provides the most compelling evidence that the climate then was at least episodically warmer and wetter than it is today. Noachian terrains are much more eroded than younger surfaces (Carr, 1992).", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "It's Mars, Jim, but not as we know it\nEverything you knew about Mars is wrong, or nearly everything, say European and US space scientists studying compelling new data from the Mars Express spacecraft.\nTake, for instance, the common perception that Mars' surface is old and dead.\nMars Express is obliterating that view with evidence of a planet dominated not by dust, wind and ancient craters, but by an ongoing war between volcanoes and glaciers.\nIt's also uncovering mineralogical signs that are making it much harder to argue that the Red Planet ever had large lakes or seas, and certainly not within the past 3.5 billion years.\n\"We may have to revise some of the previous views,\" says Mars Express investigator Professor Gerhard Neukum of Freie Universität in Berlin. 
\"It wasn't so warm and not so wet in much of its time.\"\nSigns of water\nAmong the evidence against Martian seas is the now-confirmed absence of any telltale spectral signatures of water-formed carbonates on Mars' surface.\nStranger, however, is the discovery that even in Mars' water-carved canyons there is no sign of water-made minerals, says Mars Express investigator Dr Jean-Pierre Bibring of the Institut d'Astrophysique Spatiale in Orsay, France.\nInstead, such minerals as clays, which require water to form, are found on odd patches in the cratered landscape, where rocks from Mars' earliest history may be exposed, says Bibring.\nMore importantly, it's in those clays, which come from the only potentially Earth-like period of Mars' history, that we'd have the best chances of finding evidence of early Martian life, he says.\nHigh-resolution stereo imaging by Mars Express has also revealed far more recently glaciated landscape and even what appears to be a lava flow that ends in a long, abrupt wall.\nThe wall appears to be where the molten rock ran into now-evaporated glaciers just tens of millions of years ago, says Neukum.\nMars Express has also begun looking under the Martian surface with radar. Already, the ground-penetrating radar has revealed the first in what may be a hidden population of buried ancient craters, possibly soaked with frozen water.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-1", "d_text": "And that has really been the premise of why scientists have been so hot and heavy on studying Mars, to find the next habitable planet when we toast our current one with Global Warming and pollution like we have been for decades. So far from this data Mars is looking like the next eligible bachelor for us to relocate to.\nVillanueva's research team exploited the chemistry of water molecules to trace the history of water on Mars back through time. 
Water molecules are made up of two parts hydrogen and one part oxygen, and the hydrogen in a water molecule can also come in a heavy isotope known as deuterium, whose nucleus contains a neutron in addition to the single proton of ordinary hydrogen.\nThe reasoning is that water containing deuterium, referred to as heavy or semi-heavy water, is harder to strip from the atmosphere of Mars, so regular water is lost into space much more easily.\nVillanueva added more insight on this, saying, \"Over a long time, the lighter form [of water] will escape preferentially relative to the heavy form.\"\nOver billions of years, this preferential water loss has left Mars enriched in semi-heavy water compared to regular water by a factor seven times greater than the ratio in Earth's water. Extrapolating backward from the current ratio of \"normal\" hydrogen to deuterium, and incorporating factors such as collisions between water molecules and the predominant molecule in Mars' atmosphere, carbon dioxide, Villanueva's team was able to calculate how much water Mars has lost. (Cooper, K., Astronomy Magazine)\nScientists note that there is still water on Mars, locked up in its polar caps. 
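The backward extrapolation described here is, in essence, a Rayleigh distillation calculation. A minimal sketch of that logic with illustrative inputs: the sevenfold enrichment comes from the text, but the ~21 m present-day global equivalent layer and the fractionation factor `alpha` are placeholder assumptions for the sketch, not the team's actual values:

```python
def initial_reservoir(current_gel_m, enrichment, alpha):
    """Invert Rayleigh distillation, R/R0 = f**(alpha - 1), where f is the
    fraction of the original water reservoir that remains and alpha < 1
    means the lighter isotope escapes preferentially."""
    f_remaining = enrichment ** (1.0 / (alpha - 1.0))
    return current_gel_m / f_remaining

# Placeholder inputs: 21 m GEL of water today, 7x D/H enrichment,
# alpha = 0.02 (a strong preference for losing ordinary hydrogen).
initial = initial_reservoir(current_gel_m=21.0, enrichment=7.0, alpha=0.02)
print(f"implied initial reservoir: ~{initial:.0f} m GEL")
```

With these placeholder numbers the implied initial reservoir is on the order of 150 m GEL, i.e., most of the original water has been lost, which matches the qualitative conclusion of the study as reported.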
They add that if you took all the water on Mars today, it would form an ocean only around sixty-nine feet deep.\nCompared with the mile-deep ocean thought to have once covered the surface of Mars, this represents a significant loss of water over time, amounting to more than all the water in Earth's Arctic Ocean.\nRobin Wordsworth, a planetary scientist at the Harvard School of Engineering and Applied Sciences who was not involved in the study, commented, \"Their [Villanueva et al.'s] results are entirely consistent with a predominantly cold, icy scenario for early Mars.\"", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-14", "d_text": "The study determined that perchlorate —discovered in 2008 by the Phoenix lander— can destroy organic compounds when heated, and produce chloromethane and dichloromethane as byproducts, the identical chlorine compounds discovered by both Viking landers when they performed the same tests on Mars. Because perchlorate would have broken down any Martian organics, the question of whether or not Viking found organic compounds is still wide open.\nThe Labeled Release evidence was not generally accepted initially, and, to this day, lacks the consensus of the scientific community.\nMeteorites\nNASA maintains a catalog of 34 Mars meteorites. These assets are highly valuable since they are the only physical samples available of Mars. Studies conducted by NASA's Johnson Space Center show that at least three of the meteorites contain potential evidence of past life on Mars, in the form of microscopic structures resembling fossilized bacteria (so-called biomorphs). Although the scientific evidence collected is reliable, its interpretation varies. 
To date, none of the original lines of scientific evidence for the hypothesis that the biomorphs are of exobiological origin (the so-called biogenic hypothesis) have been either discredited or positively ascribed to non-biological explanations.\nOver the past few decades, seven criteria have been established for the recognition of past life within terrestrial geologic samples. Those criteria are:\n- Is the geologic context of the sample compatible with past life?\n- Is the age of the sample and its stratigraphic location compatible with possible life?\n- Does the sample contain evidence of cellular morphology and colonies?\n- Is there any evidence of biominerals showing chemical or mineral disequilibria?\n- Is there any evidence of stable isotope patterns unique to biology?\n- Are there any organic biomarkers present?\n- Are the features indigenous to the sample?\nFor general acceptance of past life in a geologic sample, essentially most or all of these criteria must be met. All seven criteria have not yet been met for any of the Martian samples, but continued investigations are in progress.\nAs of 2010, reexaminations of the biomorphs found in the three Martian meteorites are underway with more advanced analytical instruments than previously available.", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "- Robert M. Haberle, NASA Ames Research Center - Space Science and Astrobiology Division\nThe climate of Mars has evolved over time. Early in its history, between 3.7 and 4.1 billion years ago, the climate was warmer and wetter and the atmosphere thicker than it is today. Erosion rates were higher than today, and liquid water flowed on the planet’s surface, carving valley networks, filling lakes, creating deltas, and weathering rocks. This implies runoff and suggests rainfall and/or snowmelt. Oceans may have existed. 
Over time, the atmosphere thinned, erosion rates declined, water activity ceased, and cooler and drier conditions prevailed. Ice became the dominant form of surface water. Yet the climate continued to evolve, driven now by large variations in Mars’ orbit parameters. Beating in rhythm with these variations, surface ice has been repeatedly mobilized and moved around the planet, glaciers have advanced and retreated, dust storms and polar caps have come and gone, and the atmosphere has collapsed and re-inflated many times. The layered terrains that now characterize both polar regions are telltale signatures of this cyclical behavior and owe their existence to modulations of the seasonal cycles of dust, water, and CO2. Contrary to the early images from the Mariner flybys of the 1960s, Mars is and has been a dynamically active planet whose surface has been partly shaped through its interaction with a changing atmosphere and climate system.\n- Planetary Atmospheres and Oceans\n- Planetary Ionospheres and Magnetospheres\n- Planetary Surfaces\n- Planetary Chemistry and Cosmochemistry\nToday Mars is cold and dry with a thin CO2 atmosphere. It is a dusty, desert planet with no stable liquid water on its surface. Yet spacecraft data suggest a different climate prevailed in the past. Images of fluvial features on its oldest surfaces, for example, suggest that very early in its history water flowed freely. Warmer and wetter conditions under a thicker greenhouse atmosphere are implied. Isotopic data and surface mineralogy support this view. Lakes and even oceans may have existed, powered perhaps by a sustained global hydrological cycle involving evaporation, rainfall, and runoff. This, however, has been theoretically difficult to demonstrate given the faint young Sun and limitations on the sources and sinks of plausible greenhouse gases.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-41", "d_text": "(2014). Warming early Mars with CO2 and H2. 
Nature Geoscience, 7, 59–63.\n- Rivera-Hernández, F., & Palucis, M. C. (2019). Do deltas along the crustal dichotomy boundary of Mars in the Gale Crater region record a northern ocean? Geophysical Research Letters, 46, 8689–8699.\n- Rodriguez, J. A. P., Fairén, A. G., Tanaka, K. L., Zarroca, M., Linares, R., Platz, T., Komatsu, G., Miyamoto, H., Kargel, J. S., Yan, J., Gulick, V., Higuchi, K., Baker, V. R., & Glines, N. (2016). Tsunami waves extensively resurfaced the shorelines of an early Martian ocean. Scientific Reports, 6, 25106.\n- Scherf, M., & Lammer, H. (2021). Did Mars possess a dense atmosphere during the first ~400 million years? Space Science Reviews, 217, 2.\n- Segura, T., Toon, O. B., & Colaprete, A. (2008). Modeling the environmental effects of moderate-sized impacts on Mars. Journal of Geophysical Research, 113.\n- Sholes, S. F., Montgomery, D. R., & Catling, D. C. (2019). Quantitative high-resolution reexamination of a hypothesized ocean shoreline in Cydonia Mensae on Mars. Journal of Geophysical Research: Planets, 1224, 316–336.\n- Smith, I. B., Putzig, N. E., Holt, J. W., & Phillips, R. J. (2016). An ice age recorded in the polar deposits of Mars. Science, 352, 1075–1078.\n- Smith, P. H., & the Phoenix Science Team. (2009). Water at the Phoenix landing site [No. 1329]. Paper presented at the 40th Lunar and Planetary Science Conference.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-1", "d_text": "A number of scientists have suggested that a shallow ocean subsequently covered the lava-coated northern plains. 
However, no evidence in support of this is provided by the new results.\n“Our studies do not find any signs of the lava plains in the north being altered by water,” says Dr Bibring.\nOn a positive note, the new results may suggest sites for future landers because evidence for water during the early history of Mars suggests that conditions may have been favourable for the evolution of primitive life.\n“These results reveal the history of Mars derived from the planet’s mineralogy,” says Olivier Witasse, ESA Project Scientist for Mars Express. “It is another example of the fruitful cooperation between European and American scientists.”", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-35", "d_text": "Fate of outflow channel effluents in the northern lowlands of Mars: The Vastitas Borealis Formation as a sublimation residue from frozen, ponded bodies of water. Journal of Geophysical Research, 4, 1–25.\n- Kuhn, W. R., & Atreya, S. K. (1979). Ammonia photolysis and the greenhouse effect in the primordial atmosphere of the Earth. Icarus, 37, 207–213.\n- Kurokawa, H., Kurosawa, K., & Usui, T. (2018). A lower limit of atmospheric pressure on early Mars inferred from nitrogen and argon isotopic compositions. Icarus, 299, 443–459.\n- Lammer, H., Chassefiere, E., Karatekin, O., Morschhauser, A., Niles, P. B., Mousis, O., Odert, P., Möstl, U. V., Breuer, D., Dehant, V., Grott, M., Gröller, H., Hauber, E., & Pham, L. B. S. (2013). Outgassing history and escape of the Martian atmosphere and water inventory. Space Science Reviews, 174, 113–154.\n- Laskar, J., Correia, A. C. M., Gastineau, M., Joutel, F., Levrard, B., & Robutel, P. (2004). Long term evolution and chaotic diffusion of the insolation quantities of Mars. Icarus, 170, 343–364.\n- Laskar, J., Levrard, B., & Mustard, J. F. (2002). Orbital forcing of the Martian polar layered deposits. Nature, 419, 375–377.\n- Lee, D. C., & Halliday, A. N. (1997). Core formation on Mars and differentiated asteroids. 
Nature, 388, 854–857.\n- Leighton, R. B., & Murray, B. C. (1966). Behavior of carbon dioxide and other volatiles on Mars. Science, 153, 136–144.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "Curiosity draws a blank on Martian methane\nNo gas, no life? After more than a month of searching, scientists using NASA's rover Curiosity to study Mars' atmosphere have found no evidence that the planet has methane, a gas tied to biological processes.\nThe finding adds a new twist into a puzzling story about methane on Mars, which previously was detected by ground-based telescopes and orbiting spacecraft.\n\"Maybe it's understandable because it's a very early measurement and they're just really still learning the idiosyncrasies of the instrumentation,\" says planetary scientist Michael Mumma, with NASA's Goddard Space Flight Center.\nMumma led a team that found methane in Mars' atmosphere in 2003.\nOn Earth, living systems produce more than 90 per cent of the methane in the atmosphere, with the rest tied to geochemical processes. The gas is easily broken down by sunlight, so its presence in the Martian atmosphere would imply a continuous source on the planet's surface.\nBut on Mars, solar radiation is likely not the only methane-killer, nor the most lethal.\nFor example, the planet's atmosphere and soil is believed to contain chemicals that are highly destructive to molecular bonds, including those in methane.\n\"The oxidation process could start in the atmosphere and diffuse into the surface of Mars. There possibly are oxidants in the surface of Mars, including hydrogen peroxide ... 
that could potentially result in the rapid destruction of methane,\" says Curiosity scientist Sushil Atreya with the University of Michigan in Ann Arbor.\nMars' dust storms could play a role as well, blasting methane-busting chemicals into the atmosphere, as well as generating massive electrical fields that could directly destroy the gas, Atreya adds.\nCuriosity landed on Mars in August to determine if the planet has or ever had the ingredients needed to develop and preserve life. The rover includes a suite of science instruments to chemically analyse soil, rock and atmospheric samples.\nRover scientists found methane in the first two atmospheric samples analysed by the rover's Sample Analysis at Mars (SAM) instrument, but believe the gas was a remnant from Earth. Two more experiment runs did not find methane in concentrations of at least five parts per billion.\nMumma says that's not surprising.\n\"A year-and-a-half after we published our results, we noted that the global mean average had decreased to about three parts per billion.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-2", "d_text": "That discovery may help us work out when and how the Red Planet lost most of its atmosphere (see \"Heavy hydrogen bombshell\").\nCuriosity is now getting ready for its trek to Aeolis Mons, taking in other points of interest en route. Grotzinger likens Curiosity to a car with a 10,000-page user manual that was still being written as the science team tested its instruments. With all the gear up and running, it's time to drive, he says. \"Our car is ready to go.\"\nHeavy hydrogen bombshell\nGravity means everything is lighter on Mars, but it seems the Red Planet likes its hydrogen heavy. In its first chemical analysis of the Martian soil, the Curiosity rover has found an unusually high proportion of deuterium, also known as heavy hydrogen. 
The finding may help pin down when and how Mars lost most of its atmosphere.\nMost hydrogen atoms consist of a proton and an electron, but some also boast a neutron, forming deuterium. On Earth, deuterium is much rarer than hydrogen. For example, in our oceans, there is one deuterium atom to every 6420 hydrogen atoms. As deuterium is thought to have been produced in the big bang, it should once have had similar abundances on all the planets in the solar system. But when Curiosity's Sample Analysis at Mars experiment (SAM) vaporised Martian soil, it found a ratio five times higher: one deuterium for every 1284 hydrogens.\nThat could help pin down when Mars's atmosphere, today much thinner than Earth's, started, for some reason, to dissipate. Paul Mahaffy, SAM's principal investigator, suggests that Mars could have lost its light hydrogen when its climate was warmer and wetter. Ultraviolet light from the sun could have broken up water in the atmosphere, creating free hydrogen. The lighter isotope would then have escaped into space more rapidly, leaving proportionately more deuterium behind. That version of events would be strengthened if the rover finds hydrated minerals - signs of water from a bygone era - on the slopes of Aeolis Mons, a mountain that may preserve a layered geological history.\nA model of the early environment will help determine whether Mars was conducive to life, Mahaffy says.", "score": 21.499641139753944, "rank": 65}, {"document_id": "doc-::chunk-1", "d_text": "\"It's very hard proving that something has been lost,\" Yuk Yung, a professor of planetary science at the California Institute of Technology and an author on the new research paper, told Space.com.\nHe said the situation with Mars' atmosphere resembled asking a question about a bucket of water.\n\"Suppose I have a bucket of water, and there's only 1 inch [2.54 centimeters] of water in it,\" Yung said. 
\"Suppose I tell you that some time ago, the bucket was full. Now, most of the water is gone. If I ask you for proof that it was full [that will be difficult]. It's very hard to prove something that's not there.\"\nBut Yung and his colleagues finally have their hands on concrete evidence about the thickness of the ancient, missing Martian atmosphere, and how quickly it escaped.\nMore than 10 years ago, Yung and his then-graduate student David Kass studied meteorites that had fallen to Earth, but which had originated on Mars. The researchers tested the meteorites, and looked for two kinds of carbon that are found in the Martian atmosphere: carbon 12 (called 12C) and carbon 13 (13C).\nCarbon 13 is the heavier of these two siblings (it carries one extra neutron). Yung said the heavier carbon sinks lower down in the atmosphere, and if the atmosphere were poured out, like water from a bucket, more carbon 12 would escape than carbon 13. The ratio of these two types of carbon therefore tells a story about how quickly the Martian atmosphere was \"poured off\": If the atmosphere contained a 50/50 split of carbon 12 and carbon 13 at a point 3 billion years ago, then the carbon 13 would now make up a larger portion of the pie.\nThe meteorites provide the ratio of carbon 13 to carbon 12 that would have been present in ancient Mars. Measuring that same ratio for modern-day Mars actually proved more difficult; Yung and Kass initially tried to use data from NASA's Viking landers, but said the result was too uncertain.\nKass graduated before they could complete the project, but Renyu Hu, a post doctoral researcher now working with Yung at Caltech, picked up where they left off. 
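The "poured off" analogy can be made quantitative with a simple Rayleigh-escape model: if the lighter 12C leaves slightly more readily than 13C, the 13C share of what remains grows as more atmosphere is lost. A toy sketch of this effect (the starting ratio and fractionation factor are invented for illustration, not values from the paper):

```python
def ratio_after_escape(r0, fraction_remaining, alpha):
    """Rayleigh escape: R = R0 * f**(alpha - 1). With alpha < 1 the lighter
    isotope (12C) escapes preferentially, enriching the remainder in 13C."""
    return r0 * fraction_remaining ** (alpha - 1.0)

r0 = 0.011    # illustrative starting 13C/12C ratio (invented)
alpha = 0.9   # assumed escape fractionation factor (invented)
for f in (1.0, 0.5, 0.1, 0.01):
    print(f"fraction remaining {f:5.2f} -> 13C/12C = {ratio_after_escape(r0, f, alpha):.5f}")
```

The thinner the remaining atmosphere, the higher its 13C/12C ratio; comparing the ancient (meteorite) and modern (Curiosity) ratios therefore constrains how much atmosphere was lost and how quickly.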
It was the suite of science instruments on board the Curiosity rover that finally provided the measurement they needed.\n\"This is a very difficult measurement,\" Yung said.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "The larger planet retained its internal heat, and with it a hot core of partially molten metal. As Earth rotates, the molten core churns and acts as a dynamo, generating a strong magnetic field--something Mars lacks. Unshielded, the Martian surface receives a heavy dose of deadly cosmic rays and is buffeted by the solar wind, which over time has stripped away Mars's atmosphere. Had Earth formed nearer to Jupiter, conditions might have turned out less favorably for its inhabitants. \"The evolution and habitability of the planet are direct consequences of its initial conditions — how big it is, and where it is in the Solar System,\" Ebel says.\nIn March 2004, the NASA rovers Spirit and Opportunity found evidence that, like Earth, Mars once possessed large bodies of briny water, perhaps even a sea, on its surface. The discovery of acid-sulphate salts on the Martian surface indicate that the briny sea evaporated at some point. But when that occurred, and where all the water went, is unclear. \"What happened to the water?\" Ebel muses. \"We don't know. The key missing evidence is the timing.\"\nOne possibility is that the water wound up in the polar ice caps, both of which are now known to contain frozen water in addition to frozen carbon dioxide, or \"dry ice\". \"There's a lot of water in there, more than was previously thought,\" Ebel says. Scientists are trying to model the Martian atmosphere, to determine how carbon dioxide and water move around the surface of Mars and, by working backward in time, where it came from and went. But the orbit of Mars is far more eccentric than Earth's, thanks to the gravitational influence of Jupiter. 
As a result, Ebel explains, \"Figuring out which part of the planet was cooler and which warmer, and when in history, is a challenging puzzle.\"\nThere's also a strong possibility that water, frozen or liquid, may still exist beneath the surface of Mars. (The Martian climate is too cold for liquid water to exist aboveground.) The European Space Agency's Mars Express orbiter is equipped with ground-penetrating radar, which can detect signs of frozen water beneath the rocky Martian surface — if it's there to be found.\nThe existence of both volcanic activity and water on Mars raises the tantalizing possibility that the planet might have harbored life at one time in its history.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "Life on Mars\nThe possibility of life on Mars is a subject of significant interest to astrobiology due to the planet's proximity and similarities to Earth. To date no proof has been found of past or present life on Mars. However, there is strong evidence that the ancient surface environment of Mars had seas in the northern hemisphere and abundant liquid water, and may have been habitable for microorganisms. Since 2008, evidence has also been building for the presence of traces of thin films of liquid water in near-surface layers. Most of it is expected to be very salty, including more reactive chlorates, perchlorates and sulfates instead of the chlorides and sulfides of Earth, but some of it may possibly be habitable for microbes and lichens in microhabitats. The atmosphere also may have enough humidity at times for lichens to survive in semi-shade on the surface according to some experiments at DLR. 
The existence of habitable conditions does not necessarily indicate the presence of life.\nScientific searches for evidence of life began in the 19th century, and they continue today via telescopic investigations and landed missions. While early work focused on phenomenology and bordered on fantasy, modern scientific inquiry has emphasized the search for water, chemical biosignatures in the soil and rocks at the planet's surface, and biomarker gases in the atmosphere. On November 22, 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region of Mars. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior.\nMars is of particular interest for the study of the origins of life because of its similarity to the early Earth. This is especially so since Mars has a cold climate and lacks plate tectonics or continental drift, so it has remained almost unchanged since the end of the Hesperian period. At least two thirds of Mars's surface is more than 3.5 billion years old, and Mars may thus hold the best record of the prebiotic conditions leading to abiogenesis, even if life does not or has never existed there.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-39", "d_text": "In R. M. Haberle, R. T. Clancy, F. Forget, M. D. Smith, & R. W. Zurek (Eds.), The atmosphere and climate of Mars (pp. 338–373). Cambridge University Press.\n- Moore, J. M., Howard, A. D., Dietrich, W. E., & Schenk, P. M. (2003). Martian layered fluvial deposits: Implications for Noachian climate scenarios. Geophysical Research Letters, 30.\n- Mustard, J. F., Cooper, C. D., & Rifkin, M. K. (2001). Evidence for recent climate change on Mars from the identification of youthful near-surface ground ice. Nature, 412, 411–414.\n- Newman, C. E., Lewis, S. R., & Read, P. L. (2005). The atmospheric circulation and dust activity at different orbital epochs on Mars. Icarus, 174, 135–160.\n- Olsen, A. 
A., & Rimstidt, J. D. (2007). Using a mineral lifetime diagram to evaluate the persistence of olivine on Mars. American Mineralogist, 92, 598–602.\n- Palucis, M. C., Dietrich, W. E., Williams, R. M. E., Hayes, A. G., Parker, T., Sumner, D. Y., Mangold, N., Lewis, K., & Newsom, H. (2016). Sequence and relative timing of large lakes in Gale crater (Mars) after the formation of Mount Sharp. Journal of Geophysical Research: Planets, 121, 472–496.\n- Phillips, R. J., Davis, B. J., Tanaka, K. L., Byrne, S., Mellon, M. T., Putzig, N. E., Haberle, R. M., Kahre, M. A., Campbell, B. A., Carter, L. M., Smith, I. B., Holt, J. W., Smrekar, S. E., Nunes, D. C., Plaut, J. J., Egan, A. F., Titus, T. N., & Seu, R. (2011).", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-34", "d_text": "Sulfur in the early Martian atmosphere revisited: Experiments with a 3-D global climate model. Icarus, 261, 133–148.\n- Kieffer, H. H., & Zent, A. P. (1992). Quasi-periodic climate change on Mars. In H. H. Kieffer, B. M. Jakosky, C. W. Snyder, & M. S. Matthews (Eds.), Mars (pp. 1180–1218). University of Arizona Press.\n- Kite, E. S., Gao, P., Goldblatt, C., Mischna, M. A., Mayer, D. P., & Yung, Y. L. (2017). Methane bursts as a trigger for intermittent lake-forming climates on post-Noachian Mars. Nature Geoscience, 10, 737–740.\n- Kite, E. S., Steele, L. J., Mischna, M. A., & Richardson, M. I. (2021). Warm early Mars surface enabled by high-altitude water ice clouds. Proceedings of the National Academy of Sciences of the USA, 118(18), e2101959118.\n- Kite, E. S., Williams, J.-P., Lucas, A., & Aharonson, O. (2014). Low palaeopressure of the Martian atmosphere estimated from the size distribution of ancient craters. Nature Geoscience, 5, 335–339.\n- Koutnik, M., Byrne, S., & Murray, B. (2002). South polar layered deposits of Mars: The cratering record. Journal of Geophysical Research, 107(E11), 10-1–10-10.\n- Kreslavsky, M. A., & Head, J. W. (2000). 
Kilometer-scale roughness of Mars’ surface: Results from MOLA data analysis. Journal of Geophysical Research, 105, 26695–26712.\n- Kreslavsky, M. A., & Head, J. W. (2002).", "score": 20.327251046010716, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "The atmosphere of Mars may not have escaped into space billions of years ago, scientists say. Instead, the bulk of Mars' carbon dioxide gas could be locked inside Martian rocks.\nMost of Mars' carbon dioxide vanished about 4 billion years ago, leaving a cold planet covered in a thin veneer of gas. But a new analysis of a Martian meteorite claims that some of the carbon dioxide disappeared into Mars itself, and not out into space as previous studies have suggested.\n\"This is the first direct evidence of how carbon dioxide is removed, trapped and stored on Mars,\" said Tim Tomkinson, lead study author and a geochemist at the University of Glasgow in the United Kingdom. \"We can find out amazing things about Mars from the very small amount of sample that we have.\"\nTomkinson and his colleagues probed the history of the Mars atmosphere by analyzing minerals in a tiny slice of the Lafayette meteorite, a Mars rock blasted toward Earth 11 million years ago. The Lafayette is one of several Martian meteorites called the Nakhlites, thought to have been ejected out of a vast volcanic plateau by a comet impact.\nThe meteorites are 1.3-billion-year-old basalt, a volcanic rock rich in the mineral olivine. Long before their space journey, water altered the rock, leaving behind microscopic fractures filled with clays and carbonates. Radiometric dating indicates these minerals formed some 625 million years ago. The research is detailed in the Oct. 22 edition of the journal Nature Communications.\nTomkinson's team discovered that Lafayette's siderite, an iron-rich carbonate mineral, formed through carbonation. (This is the same process proposed for carbon sequestration on Earth.) 
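The carbonation process named here can be illustrated with textbook stoichiometry. The sketch below uses the magnesium end-member of olivine (forsterite); the Lafayette reaction involves iron-rich phases such as siderite, so these numbers are illustrative only, not values from the study:

```python
# Forsterite carbonation (illustrative Mg end-member of olivine):
#   Mg2SiO4 + 2 CO2 -> 2 MgCO3 + SiO2
# Approximate molar masses in g/mol (standard atomic weights).
M_FORSTERITE = 2 * 24.305 + 28.086 + 4 * 15.999   # Mg2SiO4, ~140.69
M_CO2 = 12.011 + 2 * 15.999                        # ~44.01

# Mass of CO2 locked into carbonate per kilogram of olivine reacted.
co2_per_kg_olivine = 2 * M_CO2 / M_FORSTERITE
print(f"~{co2_per_kg_olivine:.2f} kg CO2 sequestered per kg forsterite")
```

On this idealized chemistry, each kilogram of fully reacted olivine ties up roughly 0.6 kg of CO2, which is why olivine-rich basalt is an attractive carbon sink on either planet.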
When water and carbon dioxide gas combine with olivine minerals in the basalt, the ensuing chemical reaction creates carbonate and silicate minerals, trapping the gas.\nThe results mean liquid water flowed on Mars within the last 700 million years, either from geothermal or hydrothermal heating, Tomkinson said. \"This process could have been an even bigger player when Mars was thought to be a warmer and wetter planet,\" he added. \"This process was still occurring when conditions were unfavorable for carbonation. It could potentially have been a much bigger mechanism when Mars had a thicker atmosphere 4 billion years ago,\" Tomkinson told SPACE.com.\nNASA's Mars spacecraft and rovers have already found widespread carbonate deposits on the planet.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "PARIS - Salty water just below the surface of Mars could hold enough oxygen to support the kind of microbial life that emerged and flourished on Earth billions of years ago, researchers reported Monday (Oct 22).\nIn some locations, the amount of oxygen available could even keep alive a primitive, multicellular animal such as a sponge, they reported in the journal Nature Geosciences.\n\"We discovered that brines\" - water with high concentrations of salt - \"on Mars can contain enough oxygen for microbes to breathe,\" said lead author Vlada Stamenkovic, a theoretical physicist at the Jet Propulsion Laboratory in California.\n\"This fully revolutionises our understanding of the potential for life on Mars, today and in the past,\" he told AFP.\nUp to now, it had been assumed that the trace amounts of oxygen on Mars were insufficient to sustain even microbial life.\n\"We never thought that oxygen could play a role for life on Mars due to its rarity in the atmosphere, about 0.14 per cent,\" Stamenkovic said.\nBy comparison, the life-giving gas makes up 21 per cent of the air we breathe.\nOn Earth, aerobic - that is, oxygen breathing - life 
forms evolved together with photosynthesis, which converts CO2 into O2. The gas played a critical role in the emergence of complex life, notable after the so-called Great Oxygenation Event some 2.35 billion years ago.\nBut our planet also harbours microbes - at the bottom of the ocean, in boiling hot springs - that subsist in environments deprived of oxygen.\n\"That's why - whenever we thought of life on Mars - we studied the potential for anaerobic life,\" Stamenkovic.\nLIFE ON MARS?\nThe new study began with the discovery by Nasa's Curiosity Mars rover of manganese oxides, which are chemical compounds that can only be produced with a lot of oxygen.\nCuriosity, along with Mars orbiters, also established the presence of brine deposits, with notable variations in the elements they contained.\nA high salt content allows for water to remain liquid - a necessary condition for oxygen to be dissolved - at much lower temperatures, making brines a happy place for microbes.\nDepending on the region, season and time of day, temperatures on the Red Planet can vary between minus 195 and 20 deg C.\nThe researchers devised a first model to describe how oxygen dissolves in salty water at temperatures below freezing.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-0", "d_text": "Earth’s oxygen levels rose and fell more than once hundreds of millions of years before the planetwide success of the Great Oxidation Event about 2.4 billion years ago, new research from the University of Washington shows.\nThe evidence comes from a new study that indicates a second and much earlier “whiff” of oxygen in Earth’s distant past — in the atmosphere and on the surface of a large stretch of ocean — showing that the oxygenation of the Earth was a complex process of repeated trying and failing over a vast stretch of time.\nThe finding also may have implications in the search for life beyond Earth. 
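The kind of model mentioned above, dissolved O2 in cold Martian brines, can be caricatured with Henry's law plus a Setschenow salting-out term. Every numeric constant below (the van 't Hoff coefficient, the Setschenow coefficient, the Henry constant) is a rough, terrestrial-literature-style placeholder and not a value from the study:

```python
import math

def o2_solubility(temp_k: float, salinity_molar: float,
                  p_o2_atm: float = 0.006 * 0.0014) -> float:
    """Very rough dissolved-O2 concentration (mol/L) in brine.

    Henry's law with a van 't Hoff temperature correction and a
    Setschenow salting-out term. Default partial pressure assumes
    ~6 mbar Mars surface pressure times the 0.14% O2 fraction quoted
    in the text. All constants are illustrative placeholders.
    """
    k_h_298 = 1.3e-3      # mol/(L*atm), O2 in pure water near 298 K
    vant_hoff_c = 1700.0  # K, temperature-dependence constant (placeholder)
    setschenow = 0.13     # L/mol, salting-out coefficient (placeholder)
    k_h = k_h_298 * math.exp(vant_hoff_c * (1.0 / temp_k - 1.0 / 298.15))
    return k_h * p_o2_atm * 10.0 ** (-setschenow * salinity_molar)

# Colder brine dissolves more O2; saltier brine dissolves less.
cold = o2_solubility(temp_k=253.0, salinity_molar=3.0)
warm = o2_solubility(temp_k=293.0, salinity_molar=3.0)
fresh = o2_solubility(temp_k=253.0, salinity_molar=0.0)
print(cold > warm, fresh > cold)  # True True
```

This captures the trade-off the article describes: salt depresses oxygen solubility but keeps the water liquid at temperatures where solubility is highest.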
Coming years will bring powerful new ground- and space-based telescopes able to analyze the atmospheres of distant planets. This work could help keep astronomers from unduly ruling out “false negatives,” or inhabited planets that may not at first appear to be so due to undetectable oxygen levels.\n“The production and destruction of oxygen in the ocean and atmosphere over time was a war with no evidence of a clear winner, until the Great Oxidation Event,” said Matt Koehler, a UW doctoral student in Earth and space sciences and lead author of a new paper published the week of July 9 in the Proceedings of the National Academy of Sciences.\n“These transient oxygenation events were battles in the war, when the balance tipped more in favor of oxygenation.”\nIn 2007, co-author Roger Buick, UW professor of Earth and space sciences, was part of an international team of scientists that found evidence of an episode — a “whiff” — of oxygen some 50 million to 100 million years before the Great Oxidation Event. This they learned by drilling deep into sedimentary rock of the Mount McRae Shale in Western Australia and analyzing the samples for the trace metals molybdenum and rhenium, accumulation of which is dependent on oxygen in the environment.\nNow, a team led by Koehler has confirmed a second such appearance of oxygen in Earth’s past, this time roughly 150 million years earlier — or about 2.66 billion years ago — and lasting for less than 50 million years. For this work they used two different proxies for oxygen — nitrogen isotopes and the element selenium — substances that, each in its way, also tell of the presence of oxygen.\n“What we have in this paper is another detection, at high resolution, of a transient whiff of oxygen,” said Koehler.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-42", "d_text": "- Solomon, S. C., Aharonson, O., Aurnou, J. M., Banerdt, W. B., Carr, M. H., Dombard, A. J., Frey, H. V., Golombek, M. P., Hauck, S. 
A., II, & Zuber, M. T. (2005). New perspectives on ancient Mars. Science, 307, 1214–1220.\n- Steakley, K., Murphy, J., Kahre, M., Haberle, R., & Kling, A. (2019). Testing the impact heating hypothesis for early Mars with a 3-D global climate model. Icarus, 330, 169–188.\n- Tarnas, J. D., Mustard, J. F., Lollar, B. S., Bramble, M. S., Cannon, K. M., Palumbo, A. M., & Plesa, A.-C. (2018). Radiolytic H2 production on Noachian Mars: Implications for habitability and atmospheric warming. Earth and Planetary Science Letters, 502, 133–145.\n- Tera, F., Papanastassiou, F. A., & Wasserburg, G. J. (1974). Isotopic evidence for a terminal lunar cataclysm. Earth and Planetary Science Letters, 22, 1–22.\n- Tian, F., Kasting, J. F., & Solomon, S. C. (2009). Thermal escape of carbon from the early Martian atmosphere. Geophysical Research Letters, 36.\n- Toon, O. B., Pollack, J. B., Ward, W., & Bilski, K. (1980). The astronomical theory of climatic change on Mars. Icarus, 44, 552–607.\n- Turbet, M., Boulet, C., & Karman, T. (2020). Measurements and semi-empirical calculations of CO2 + CH4 and CO2 + H2 collision-induced absorption across a wide range of wavelengths and temperatures: Application for the prediction of early Mars surface temperature. Icarus, 346, 113762.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-30", "d_text": "- Staff (28 September 2015). \"Video Highlight - NASA News Conference - Evidence of Liquid Water on Today's Mars\". NASA. Archived from the original on October 1, 2015. Retrieved 30 September 2015.\n- Staff (28 September 2015). \"Video Complete - NASA News Conference - Water Flowing on Present-Day Mars\". NASA. Archived from the original on October 15, 2015. Retrieved 30 September 2015.\n- Ojha, L.; Wilhelm, M. B.; Murchie, S. L.; McEwen, A. S.; Wray, J. J.; Hanley, J.; Massé, M.; Chojnacki, M. (2015). \"Spectral evidence for hydrated salts in recurring slope lineae on Mars\". 
Nature Geoscience. 8 (11): 829–832. Bibcode:2015NatGe...8..829O. doi:10.1038/ngeo2546.\n- \"Mars Reconnaissance Orbiter Telecommunications\" (PDF). JPL. September 2006. Archived (PDF) from the original on February 15, 2013.\n- Luhmann, J. G.; Russell, C. T. (1997). \"Mars: Magnetic Field and Magnetosphere\". In Shirley, J. H.; Fairbridge, R. W. Encyclopedia of Planetary Sciences. New York: Chapman and Hall. pp. 454–6.\n- Phillips, Tony (January 31, 2001). \"The Solar Wind at Mars\". NASA. Archived from the original on August 18, 2011.\n- \"What makes Mars so hostile to life?\". BBC News. January 7, 2013. Archived from the original on August 30, 2013.\n- Keating, A.; Goncalves, P. (November 2012). \"The impact of Mars geological evolution in high energy ionizing radiation environment through time\". Planetary and Space Science – Elsevier. 72 (1): 70–77. Bibcode:2012P&SS...72...70K. doi:10.1016/j.pss.2012.04.009.", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-2", "d_text": "\"We are now able to quantify what people previously could only estimate.\"\nComparing these values, Yung said it appears that the pressure in the atmosphere wasn't as high as some models predict. Specifically, it's less than 1 bar (which is about the pressure of Earth's atmosphere at sea level), while there are planetary models that estimate higher values.\nThis finding is significant because it changes the amount of carbonates scientists should expect to find in the Martian soil, and \"does not require a massive undetected carbon reservoir,\" Hu said in a news release from NASA's Jet Propulsion Laboratory.\nYung said the measurement is new, but it is in agreement with the recent findings from MAVEN, showing how the Martian atmosphere may have been swept away by solar winds.\nYung said work still needs to be done to put the new results into detailed models of Mars' evolution, so scientists can discover what the findings reveal about Mars' history. 
Was the warm, wet period in the planet's lifetime a brief window? What would such a finding indicate for the possibility of life forming on the Martian surface? Working the new results into established models may reveal exciting new details.\n\"This solves a long-standing paradox,\" said Bethany Ehlmann, a researcher at Caltech and JPL, and co-author on the new research paper.\n\"The supposed very thick atmosphere [of ancient Mars] seemed to imply that you needed this big surface carbon reservoir,\" Ehlmann said, but the new paper shows that known processes are enough to explain the \"missing\" carbonates, and will allow scientists to find \"an evolutionary scenario for Mars that makes sense.\"", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "NASA's Curiosity rover has found new evidence preserved in rocks on Mars that suggests the planet could have supported ancient life, as well as new evidence in the Martian atmosphere that relates to the search for current life on the Red Planet.\n\"Curiosity has shown that Gale crater was habitable around 3.5 billion years ago, with conditions comparable to those on the early Earth, where life evolved around that time\", she wrote.\nApart from the organic evidence, Curiosity also found abnormal levels of methane in the atmosphere of Mars. It previously found hints of methane and organic compounds, but these findings are the best evidence yet. \"Curiosity has not determined the source of the organic molecules\", explains Eigenbrode. 
The press conference will be headed by senior NASA researchers along with people who worked directly on the samples Curiosity has been diligently gathering on Mars.\nThe studies were conducted using evidence collected by NASA's Curiosity rover, which is now rolling around the Martian surface.\nThe amount of methane peaked at the end of summer in the northern hemisphere at about 2.7 times the level of the lowest seasonal amount.\nOn Thursday NASA made an announcement stating that a recent finding hints that there used to be life on Mars, and maybe even still is. \"That means Mars today is not a 'dead planet,' but somewhere underground there are reactions occurring today that release and absorb an atmospheric gas that is nearly always related to warm water or life on Earth\". The molecules in the mudstone could have once enabled life to form in lakes when Mars still had liquid water on its surface, and the methane could have been produced by life.\nAlthough it is still not clear how these molecules were created, NASA emphasized that these kinds of particles could have been the food source for hypothetical microbial life on Mars. Mars scientists have long feared that any organics would be extremely tough to find.\nNote, however, that this does not mean that life exists or has ever existed on the Red Planet! 
\"But it gave us a lot of anticipation that, if we can find these molecules here, perhaps we're going to come across other layers of rock that have more organics in them\".\nThe study, which has been in place for around six years (the equivalent of almost three martian years), shows that low levels of methane within the crater repeatedly peak in warm, summer months and drop in the winter every year.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-2", "d_text": "“The most difficult aspect is letting go of Earth and the bias that we have and really trying to get into the foundations of the chemistry, physics, and environmental processes on Mars,” said Goddard astrobiologist Jennifer L. Eigenbrode, who took part in the carbon research. Eigenbrode previously led an international team of Curiosity scientists in the discovery of a wide range of organic compounds on the Martian surface.\nEigenbrode said, “We need to broaden our thoughts and look beyond the box, and that is what this study accomplishes.”\nIn their report, the researchers suggest two non-biological reasons for the odd carbon signature. Molecular clouds are one of them.\nOur Solar System went through a molecular cloud hundreds of millions of years ago, according to the molecular cloud theory. Although this is a rare occurrence, it occurs once per 100 million years, so scientists cannot rule it out.\nMolecular clouds are typically made up of molecular hydrogen, but one in Gale Crater may have been particularly rich in the sort of lighter carbon found by Curiosity.\nIn this scenario, the cloud would have caused Mars to cool substantially, resulting in glaciers. Because of the cooling and glacial, the lighter carbon in the molecular clouds would not have mixed with the rest of Mars’ carbon, resulting in deposits of increased C12. 
“Glacial melt during the glacial epoch and ice retreat thereafter should leave interstellar dust particles on the glacial geomorphological surface,” according to the report.\nCuriosity discovered some of the heightened C12 levels at the summits of ridges, such as the top of Vera Rubin Ridge, and other high spots in Gale Crater, which supports the concept. According to the article, the samples were collected from “a diversity of lithologies (mudstone, sand, and sandstone) and are temporally distributed across the mission activities to date.” Nonetheless, the molecular cloud idea is an improbable series of occurrences.\nUltraviolet light is the other non-biological idea. The atmosphere of Mars contains over 95% carbon dioxide, and UV radiation would have reacted with the carbon dioxide gas in the atmosphere to produce new carbon-containing molecules in this scenario.\nThe molecules would have showered down on Mars’ surface, becoming ingrained in the rock. This theory is comparable to how methanogens make C12 indirectly on Earth, except it is completely abiotic.\n“All three interpretations suit the data,” said Christopher House, the study’s primary author.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-27", "d_text": "(2010). Lakes on Mars. Elsevier.\n- Carr, M., & Head, J. (2019). Mars: Formation and fate of a frozen Hesperian ocean. Icarus, 319, 433–443.\n- Carr, M. H. (1979). Formation of Martian flood features by release of water from confined aquifers. Journal of Geophysical Research, 84, 2995–3007.\n- Carr, M. H. (1981). The surface of Mars. Yale University Press.\n- Carr, M. H. (1992). Post-Noachian erosion rates: Implications for Mars climate change. In 23rd Lunar and Planetary Science Conference (Vol. 23, pp. 205–206). Lunar and Planetary Science Institute.\n- Carr, M. H., & Head, J. W. (2010). Geological history of Mars. Earth and Planetary Science Letters, 294, 185–203.\n- Cassata, W. S., Shuster, D. L., Renne, P. R., & Weiss, B. 
P. (2012). Trapped Ar isotopes in meteorite ALH 84001 indicate Mars did not have a thick ancient atmosphere. Icarus, 221, 461–465.\n- Chassefière, E., Langlais, B., Quesnet, Y., & Leblanc, F. (2013). The fate of early Mars’ lost water: The role of serpentinization. Journal of Geophysical Research: Planets, 117, 1123–1134.\n- Clifford, S. M., & Parker, T. J. (2001). Evolution of the Martian hydrosphere: Implications for the fate of a primordial ocean and the current state of the northern plains. Icarus, 154, 40–79.\n- Craddock, R. A., & Greeley, R. (2009). Minimum estimates of the amount and timing of gases released into the Martian atmosphere from volcanic eruptions. Icarus, 204, 512–526.\n- Craddock, R. A., & Howard, A. D. (2002).", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-6", "d_text": "And from Mars it's just a short hop to Ice Age Earth...\nWritten by Andy Lloyd, 22nd March 2015\n1) Malin, M. C., and K. S. Edgett \"Evidence for recent groundwater seepage and surface runoff on Mars\", Science, 288: 5475, 2330–2335, 30 June 2000\n2) James Dickson et al \"Recent climate cycles on Mars: Stratigraphic relationships between multiple generations of gullies and the latitude dependent mantle\", Icarus, 252: 83-94, 15 May 2015\n3) Elizabeth Howell \"Could water have carved channels on Mars half a million years ago?\" 19 March 2015, with thanks to Jim\n4) A. Johnsson, D. Reiss, E. Hauber, H. Hiesinger, M. Zanetti. \"Evidence for very recent melt-water and debris flow activity in gullies in a young mid-latitude crater on Mars\", Icarus, 235: 37, June 2014\n5) A. 
Lloyd \"A Martian Riddle\" 22 September 2014\nDark Star Blog 18\n6) 2) Lisa Grossman \"\n8) James Dickson et al \"Formation of Gullies on Mars by Water at High Obliquity: Quantitative Integration of Global Climate Models and Gully Distribution\" 46th Lunar and Planetary Science Conference, 2015\n9) Bruce Moomaw Cameron Park \"The Obliquity of Mars\" 30 June 2000\n10) Lori Stiles \"The Ancient Oceans of Mars: Gamma-Ray Evidence Suggests Ancient Mars Had Oceans\" 17 November 2008\n11) Ian Sample \"Curiosity rover's discovery of methane ‘spikes’ fuels speculation of life on Mars\" 17 December 2014\n12) Touma, J. & Wisdom, J. \"The Chaotic Obliquity of Mars\", Science, 259 (5099): 1294–1297, 1993\n13) MSSS \"Cydonia: Two Years Later\" 5 April 2000\n14) DLR News \"The Cydonia Region – was the North of Mars Once Covered by an Ocean?\" 12 March 2015 with thanks to Jim.", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "in the magazine Earth and Planetary Science Letters Scientists describe a model developed for the evolution of the Martian atmosphere that combines the high temperatures after the planet formed with the formation of the first oceans and atmosphere.\nIt shows that Water vapor is concentrated in the lower atmosphereThe upper layers remained dry. A similar situation is currently taking place on Earth, where there is a large part of it Moisture condenses as clouds in the troposphereat altitudes of up to 20 km.\nMolecular hydrogen that did not combine with oxygen to form a water molecule, to the upper atmosphere of Mars, where it escaped into space. This assumption allows for Direct link between this model and Curiosity’s measurements.\nThe researchers responsible for the study believe they made Example of a previously overlooked chapter in Martian history. 
A key point is that long ago the atmosphere of Mars was far denser than it is today (by a factor of about 1,000) and was composed mainly of molecules containing two hydrogen atoms.\nThis is a major finding, because hydrogen in this form is known to be a strong greenhouse gas in a dense atmosphere. An atmosphere of this composition would produce a greenhouse effect capable of preserving liquid water for millions of years.\nAlso of interest is the ratio of deuterium (the heavy hydrogen isotope) to hydrogen in various Martian samples, including meteorites and material studied by Curiosity. Martian meteorites are igneous rocks from the mantle of Mars that formed while the planet was still volcanically active.\nWater locked in igneous samples from Mars' mantle shows a deuterium-to-hydrogen ratio similar to that measured in Earth's oceans. This indicates that water on both planets came from the same source in the early solar system.\nCuriosity measured the deuterium-to-hydrogen ratio in 3-billion-year-old clay and found it to be three times that of Earth's oceans. The only process that can account for this level of deuterium enrichment is the loss of the lighter hydrogen isotope to space.\nThe model shows that if the atmosphere of Mars contained diatomic hydrogen at the time of the planet's formation, surface water would naturally be enriched in deuterium by a factor of 2-3 relative to the interior of Mars. This, in turn, matches the observations: deuterium preferentially ends up in water, while molecular hydrogen traps ordinary hydrogen and escapes from the upper atmosphere.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-11", "d_text": "Based on Earth analogs, hydrothermal systems on Mars would be highly attractive for their potential for preserving organic and inorganic biosignatures. 
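The deuterium ratios Curiosity reported above can be made concrete. Assuming the standard terrestrial ocean reference ratio (VSMOW, about 1.56e-4, a well-known constant that is not stated in the text), the threefold enrichment works out as:

```python
VSMOW_D_H = 1.56e-4          # terrestrial ocean D/H (standard reference value)

mantle_d_h = VSMOW_D_H       # Martian mantle samples: Earth-like, per the text
clay_d_h = 3 * VSMOW_D_H     # Curiosity's 3-Gyr-old clay: 3x Earth's oceans

# The model predicts surface water enriched 2-3x over the interior;
# the measured clay sits at the top of that range.
enrichment = clay_d_h / mantle_d_h
print(f"clay/mantle D/H enrichment = {enrichment:.1f}")  # -> 3.0
```

In absolute terms that puts the clay at roughly 4.7e-4 D/H, which is the kind of value preferential loss of light hydrogen to space would leave behind.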
For this reason, hydrothermal deposits are regarded as important targets in the exploration for fossil evidence of ancient Martian life.\nMethane\nPossible trace amounts of methane in the atmosphere of Mars were first discovered in 2003 with Earth-based telescopes and fully verified in 2004 by the ESA Mars Express spacecraft in orbit around Mars. As methane is an unstable gas, its presence indicates that there must be an active source on the planet in order to keep such levels in the atmosphere. It is estimated that Mars must produce 270 ton/year of methane, but asteroid impacts account for only 0.8% of the total methane production. Although geologic sources of methane such as serpentinization are possible, the lack of current volcanism, hydrothermal activity or hotspots is not favorable for geologic methane. It has been suggested that the methane was produced by chemical reactions in meteorites, driven by the intense heat during entry through the atmosphere. Although research published in December 2009 ruled out this possibility, research published in 2012 suggests that a source may be organic compounds on meteorites that are converted to methane by ultraviolet radiation.\nThe existence of life in the form of microorganisms such as methanogens is among the possible, but as yet unproven, sources. Methanogens do not require oxygen or organic nutrients, are non-photosynthetic, use hydrogen as their energy source and carbon dioxide (CO2) as their carbon source, so they could exist in subsurface environments on Mars. 
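The methanogen metabolism described here, hydrogen as the energy source and CO2 as the carbon source, follows the textbook reaction CO2 + 4 H2 -> CH4 + 2 H2O. A small mass-balance sketch (standard chemistry, not taken from the article):

```python
# Hydrogenotrophic methanogenesis: CO2 + 4 H2 -> CH4 + 2 H2O
# Approximate molar masses in g/mol.
M_CO2, M_H2, M_CH4, M_H2O = 44.01, 2.016, 16.043, 18.015

# Mass of H2 consumed and CH4 released per kilogram of CO2 fixed.
h2_per_kg_co2 = 4 * M_H2 / M_CO2
ch4_per_kg_co2 = M_CH4 / M_CO2
print(f"{h2_per_kg_co2:.3f} kg H2 consumed, "
      f"{ch4_per_kg_co2:.3f} kg CH4 released per kg CO2")

# Sanity check: reactant and product masses balance.
assert abs((M_CO2 + 4 * M_H2) - (M_CH4 + 2 * M_H2O)) < 0.01
```

Both reactants are plausibly available in the Martian subsurface (CO2 from the atmosphere and crust, H2 from processes such as serpentinization), which is why this pathway keeps coming up in the methane debate.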
If microscopic Martian life is producing the methane, it probably resides far below the surface, where it is still warm enough for liquid water to exist.\nSince the 2003 discovery of methane in the atmosphere, some scientists have been designing models and in vitro experiments testing growth of methanogenic bacteria on simulated Martian soil, where all four methanogen strains tested produced substantial levels of methane, even in the presence of 1.0wt% perchlorate salt. The results reported indicate that the perchlorates discovered by the Phoenix Lander would not rule out the possible presence of methanogens on Mars.\nResearch at the University of Arkansas presented in June 2015 suggested that some methanogens could survive on Mars's low pressure.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-3", "d_text": "The martian atmosphere bulk composition is not like any other known source of gas. Also, the Viking landers found that Mars atmosphere has unusual isotopic abundances; it is rich in nitrogen with weight 15 (15N) compared to nitrogen with weight 14 (14N), and relatively rich in two isotopes that form during radioactive decay of other elements, 40Ar and 129Xe. Telescopic observation from Earth also showed that the martian atmosphere is very rich in heavy hydrogen (also called deuterium, D) compared to the Earth. Again, this mix of isotopic abundances is not like any other known source of gas.\nThis martian atmosphere gas was first discovered in the Antarctic meteorite EETA79001 (Bogard and Johnson, 1983), and traces of it are present in almost all of the martian (SNC) meteorites.\nB. Is this martian atmosphere in ALH 84001?\nYes. This same martian atmosphere gas is also present in ALH 84001, which has hydrogen rich in deuterium, nitrogen rich in 15N, argon rich in 40Ar, and xenon rich in 129Xe (Miura and Sugiura, 1995; Gilmour et al., 1995, 1996; Grady et al., 1996; Leshin et al., 1996; Miura et al., 1995; Swindle et al., 1995).\nC. 
Is there other evidence that the martian meteorites are from Mars?\nAll circumstantial. The martian meteorites except ALH 84001, the SNCs, all crystallized from lava less than 1300 million years ago. The only objects in the solar system with volcanoes active so recently are Venus, Earth, Mars, and Io. The SNCs aren't from the Earth because their oxygen isotope compositions are utterly distinct from anything on Earth. They're probably not from Venus, because most of the SNC meteorites contain extraterrestrial clays or water-bearing minerals, and they could not have formed on Venus's hot surface. Io remains possible, but pretty unlikely as the escape velocity for a rock near Jupiter is so high.\nAlthough ALH 84001 is not young like the other SNCs, it is linked to them in other ways.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "What some argue is evidence of ancient life in a meteorite from Mars could have a simple chemical explanation, scientists now suggest. These findings could also shed light on some of the tricky chemistry going on in the atmospheres of both Mars and Earth.\nImpacting space rocks on Mars over the years have hurled debris off the planet, some of which has landed on Earth. One such rock — the 3.9 billion-year-old meteorite known as ALH84001 — had globular, micron-sized carbonate particles seemingly arranged in chains that some thought must have been made by ancient Martian life. However, researchers have now discovered a new way to form carbonates on Earth without interference from biological organisms. They suggest this process likely takes place on Mars as well.\nUnusual oxygen type: The carbonates seen in ALH84001 possessed unusually high levels of the isotope oxygen-17.
(An oxygen atom has eight protons in its nucleus, and while most of these also have eight neutrons, oxygen-17 has nine.) Atmospheric chemist Robina Shaheen at the University of California at San Diego discovered anomalously high levels of oxygen-17 in carbonates found on dust grains, aerosols and dirt on Earth as well. This hinted that a chemical process common to both planets might be at work.\nI don't. I was alive in 1996 when they made the initial announcement which said:\nOriginally posted by anon72\nNow, most of us will remember the ground breaking news of 2007 (I think) of LIFE ON MARS-Proof.\nSo, it says \"inorganic formation is possible\"!!\nThe carbonate globules are similar in texture and size to some terrestrial bacterially induced carbonate precipitates. Although inorganic formation is possible, formation of the globules by biogenic processes could explain many of the observed features, including the PAHs. The PAHs, the carbonate globules, and their associated secondary mineral phases and textures could thus be fossil remains of a past martian biota.\nI'm skeptical of some science too, however the far bigger problem we need to be skeptical of, are media reports which distort what the scientists actually say.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-2", "d_text": "For example, Olympus Mons is a massive (extinct) Martian volcano and is actually the largest volcano known to exist—nearly three times as tall as Mt. Everest. Yet, even though its base would cover the combined states of Ohio, Indiana, and Kentucky, a mild gradient makes Olympus Mons seem far less impressive than the rugged slopes of Everest. Several other immense volcanoes exist on Mars, dwarfing their terrestrial counterparts. 
Most astronomers believe that all of these volcanoes are extinct and that Mars currently has essentially no geologic activity.\nOne of Mars’ most spectacular features is a canyon called Valles Marineris that is long enough to reach from one end of the United States to the other and is over 120 miles wide and about four miles deep.3 For comparison, this is ten times longer, nearly seven times wider, and four times deeper than the Grand Canyon. Valles Marineris is thought to be a tectonic fissure—a place where the surface cracked open.4\nScientists have been intrigued to learn that the surface of Mars has dry river beds and deltas. Though there is essentially no liquid water on the planet today, evidence clearly suggests that Mars once had surface water. Such evidence is especially perplexing in light of the planet’s thin atmosphere. Water can only exist as a liquid between certain temperatures and under sufficient atmospheric pressures, and the atmosphere of Mars is far too thin to allow water to be liquid for any length of time at any temperature. Heating an ice cube on Mars would cause it to sublime, not melt. That is, the ice would go directly to vapor, bypassing the liquid state entirely. Frozen carbon dioxide behaves in the same way under Earth’s atmosphere.\nSo, was the atmosphere of Mars different in the past? Or was the water released catastrophically, boiling away almost immediately? Could volcanic eruptions increase the atmospheric pressure locally to the point where liquid water could exist temporarily? These are mysteries that remain unsolved. It is noteworthy that secularists are willing to believe in catastrophic, planet-scale flooding on Mars—a planet that cannot support liquid water. Yet, they simultaneously deny the Genesis Flood on Earth—a planet that is 71 percent covered with water.\nThe two moons of Mars are quite tiny compared to Earth’s moon. Phobos is the larger of the two and only about 10 miles in diameter. 
Since Phobos has so little mass, its gravity is minuscule.", "score": 15.758340881307905, "rank": 85}, {"document_id": "doc-::chunk-10", "d_text": "Despite this, about 3.8 billion years ago, there was a denser atmosphere, higher temperature, and vast amounts of liquid water flowed on the surface, including large oceans.\nIt has been estimated that the primordial oceans on Mars would have covered between 36% and 75% of the planet. On November 22, 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region of Mars. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior. Analysis of Martian sandstones, using data obtained from orbital spectrometry, suggests that the waters that previously existed on the surface of Mars would have had too high a salinity to support most Earth-like life. Tosca et al. found that the Martian water in the locations they studied all had water activity, aw ≤ 0.78 to 0.86—a level fatal to most Terrestrial life. Haloarchaea, however, are able to live in hypersaline solutions, up to the saturation point.\nIn June 2000, possible evidence for current liquid water flowing at the surface of Mars was discovered in the form of flood-like gullies. Additional similar images were published in 2006, taken by the Mars Global Surveyor, that suggested that water occasionally flows on the surface of Mars. The images did not actually show flowing water. Rather, they showed changes in steep crater walls and sediment deposits, providing the strongest evidence yet that water coursed through them as recently as several years ago.\nThere is disagreement in the scientific community as to whether or not the recent gully streaks were formed by liquid water. Some suggest the flows were merely dry sand flows. 
Others suggest it may be liquid brine near the surface, but the exact source of the water and the mechanism behind its motion are not understood.\nSilica\nIn May 2007, the Spirit rover disturbed a patch of ground with its inoperative wheel, uncovering an area extremely rich in silica (90%). The feature is reminiscent of the effect of hot spring water or steam coming into contact with volcanic rocks. Scientists consider this evidence of a past environment that may have been favorable for microbial life, and theorize that the silica may have been produced by the interaction of soil with acid vapors generated by volcanic activity in the presence of water.
Archived from the original on May 2, 2013.[verification needed]\n- \"Hunting for young lava flows\". Geophysical Research Letters. Red Planet. June 1, 2011. Archived from the original on October 4, 2013.\n- Court, Richard W.; Sephton, Mark A. (2009).", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-2", "d_text": "further propose that the presence of a strong oxidant such as ferrate would make the Martian surface self-sterilizing, eliminating the possibility for detection of life, or its remnants, on that planet. In support of their hypothesis, they conducted spectral and polymerase chain reaction (PCR) experiments with Fe(VI), finding that it destroys DNA, nucleotides, proteins, and amino acids.\nIt is quite possible that iron in the +5 and/or +6 oxidation states is present in the soil of Mars. However, recent observations by the Mars Odyssey orbiter indicate that significant amounts of water ice exist at or near the surface of Mars . Since transient temperatures above freezing occur at some locations on Mars , this water likely would be present at least briefly in the liquid form. Under slightly alkaline conditions, as found on Mars, and in the presence of liquid water, Fe(VI) reacts quickly with water and is oxidized to Fe(III). Thus, in regions where liquid water forms even briefly, Fe(VI) is less likely to exist. However, Mars is a relatively large planet, and conditions across its surface vary. We cannot assume that iron exists in the same oxidation state(s) everywhere on Mars. It is possible that some regions of Mars might harbor Fe(V) and Fe(VI), while regions with a frequent occurrence of liquid water might be dominated by Fe(III). 
Also, it is possible that Fe(VI), if removed, might be re-formed in a continuing cycle through interactions with the planet's atmosphere.\nStoker and Bullock examined the oxidation of glycine under simulated Martian conditions and concluded that any organic matter brought to the Mars surface by meteorites likely would be destroyed by UV radiation faster than it might be delivered. They further suggested that no organic compounds found on Mars by the Viking Lander Gas Chromatograph Mass Spectrometer experiment could be explained without invoking the presence of strong oxidants in the surface soils. They felt their estimates of the organic destruction rate caused by irradiation at the Mars surface were an upper limit for a globally averaged biomass production rate that might be comparable to the slow-growing cryptoendolithic microbial communities found in dry Antarctic deserts.\nAn alternative hypothesis is that given the possible presence of ferrate in the soil, the present environmental conditions on Mars are not highly conducive to ferrate-based oxidation reactions, including those potentially affecting dormant and hostile-environment-resistant life forms.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-2", "d_text": "\"A global depth of 137 meters still implies a relatively dry planet, and doesn't allow a deep northern ocean. The water could have mainly been in the form of ice rather than liquid.\" (Wordsworth, R.)\nThe results according to Wordsworth would indicate that Mars is not really a very habitable planet.
If Wordsworth's comments and data are correct, they would shoot down all the hype and excitement about Mars being a supposedly habitable planet that we could one day relocate to.\nAs with anything, though, the study is ongoing: new data and information about Mars are constantly being revealed in roller-coaster fashion, changing current hypotheses about the Red Planet, so it is going to take a lot more time and study to get to the real answers about Mars.\nAccording to Bethany Ehlmann of the California Institute of Technology in Pasadena, the evidence for a northern ocean is scant. (Cooper, K.)\nBethany Ehlmann went on to further comment on this, \"An ocean is an intriguing possibility but mineralogical evidence, such as carbonates or evaporites, which are typical of evidence for Earth's large ocean basins, has not been found in the north, although researchers are still looking.\" (Ehlmann, B.)\nSo, the question of whether water ever existed on Mars is ongoing, but Geronimo Villanueva still remains optimistic about this and went on to say, \"It is difficult to assess the temperature of ancient Mars, and for how long water was in liquid form, from our results but our results do indicate that a substantial amount of water was available in the past.\"\nScientists and NASA continue researching every square inch of Mars for evidence that water really did exist. NASA's MAVEN (Mars Atmosphere and Volatile Evolution) spacecraft is currently in orbit studying and investigating the atmosphere of Mars and will explore new ventures to try to find firm evidence of the existence of water.\nVillanueva went on to comment, \"We still don't know how the molecules are escaping; more sophisticated measurements are going to give us a much better idea of how and when the molecules escaped from Mars.\"\nSo, Mars continues to live up to its reputation of being the red-headed stepchild of the planets in our solar system.
It continues to reveal new and interesting information that blows away scientists and conflicting information that baffles them.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-2", "d_text": "For example, Olympus Mons is a massive (extinct) Martian volcano and is actually the largest volcano known to exist—nearly three times as tall as Mt. Everest. Yet, even though its base would cover the combined states of Ohio, Indiana, and Kentucky, a mild gradient makes Olympus Mons seem far less impressive than the rugged slopes of Everest. Several other immense volcanoes exist on Mars, dwarfing their terrestrial counterparts. Most astronomers believe that all of these volcanoes are extinct and that Mars currently has essentially no geologic activity.\nOne of Mars’ most spectacular features is a canyon called Valles Marineris that is long enough to reach from one end of the United States to the other and is over 120 miles wide and about four miles deep.3 For comparison, this is ten times longer, nearly seven times wider, and four times deeper than the Grand Canyon. Valles Marineris is thought to be a tectonic fissure—a place where the surface cracked open.4\nScientists have been intrigued to learn that the surface of Mars has dry river beds and deltas. Though there is essentially no liquid water on the planet today, evidence clearly suggests that Mars once had surface water. Such evidence is especially perplexing in light of the planet’s thin atmosphere. Water can only exist as a liquid between certain temperatures and under sufficient atmospheric pressures, and the atmosphere of Mars is far too thin to allow water to be liquid for any length of time at any temperature. Heating an ice cube on Mars would cause it to sublime, not melt. That is, the ice would go directly to vapor, bypassing the liquid state entirely. Frozen carbon dioxide behaves in the same way under Earth’s atmosphere.\nSo, was the atmosphere of Mars different in the past?
Or was the water released catastrophically, boiling away almost immediately? Could volcanic eruptions increase the atmospheric pressure locally to the point where liquid water could exist temporarily? These are mysteries that remain unsolved. It is noteworthy that secularists are willing to believe in catastrophic, planet-scale flooding on Mars—a planet that cannot support liquid water. Yet, they simultaneously deny the Genesis Flood on Earth—a planet that is 71 percent covered with water.\nThe two moons of Mars are quite tiny compared to Earth’s moon. Phobos is the larger of the two and only about 10 miles in diameter. Since Phobos has so little mass, its gravity is minuscule.", "score": 13.897358463981183, "rank": 90}, {"document_id": "doc-::chunk-11", "d_text": "Although this is the period in Mars’s history when erosion rates were comparatively high, they are still far below those typical on Earth (Golombek et al., 2006), which is consistent with the preservation of large ancient features on the planet’s surface (Carr & Head, 2010). And although hydrated minerals are found on Noachian surfaces, so are unaltered basalts such as olivine and pyroxene. Because the timescale to completely weather surface olivine is less than several million years (Olsen & Rimstidt, 2007), this implies that on average the Noachian global hydrological cycle on Mars was less intense than that on Earth.\nThe global mean surface pressure on Mars today is ~6.1 hPa, which is very close to the triple point pressure of water. If liquid water flowed during the Noachian, then the surface pressure must have been higher than the triple point to prevent rapid boil off. How much higher is difficult to ascertain, but a plausible estimate is ~100 hPa (McKay, 2004). If greenhouse warming played a role, as many researchers believe, then a substantially thicker CO2 atmosphere is implied. Support for a much thicker early atmosphere comes from isotopic data (Table 2). 
H, N, Ar, and Xe are all isotopically heavy compared to those on Earth, which is best explained by the preferential escape of the lighter isotopes (Haberle et al., 2017). Escape can occur thermally (e.g., Jeans escape) or nonthermally (e.g., sputtering or dissociative recombination). Present measured escape rates by the MAVEN spacecraft could account for the loss of 800 hPa of CO2 over the history of the planet (Jakosky et al., 2018). Thus, it is highly likely that the surface pressure during the Noachian was much higher than it is today. How much higher is not known.\nTable 2. Key Isotope Ratios Indicating Loss of an Early Atmosphere on Mars\nMass fractionating nonthermal escape\nEarly loss of 36Ar (impact erosion?)\nVery early loss of 132Xe (hydrodynamic escape?)", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-2", "d_text": "\"The supposed very thick atmosphere seemed to imply that you needed this big surface carbon reservoir, but the efficiency of the UV photodissociation process means that there actually is no paradox. You can use normal loss processes as we understand them, with detected amounts of carbonate, and find an evolutionary scenario for Mars that makes sense.\"\nSource: NASA press release", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-15", "d_text": "1993, 71: 1394-1400.View ArticleGoogle Scholar\n- Tsapin AI, Goldfeld MG, McDonald GD, Nealson KH, Moskovitz B, Solheid P, Kemner KM, Kelly SD, Orlandini KA: Iron(VI): hypothetical candidate for Martian oxidant. Icarus. 2000, 147: 68-78. 10.1006/icar.2000.6437.View ArticleGoogle Scholar\n- Hunten DM: Possible oxidant sources in the atmosphere and surface of Mars. J Mol Evol. 
1979, 14: 71-78.View ArticlePubMedGoogle Scholar\n- Rieder R, Economou T, Wanke H, Turkevich A, Crisp J, Bruckner J, Dreibus G, McSween HY: The chemical composition of Martian soil and rocks returned by the mobile alpha proton X-ray spectrometer: preliminary results from the X-ray mode. Science. 1997, 278: 1771-1774. 10.1126/science.278.5344.1771.View ArticlePubMedGoogle Scholar\n- Banin A, Clark BC, Wanke H: Surface chemistry and mineralogy. In: Mars. Edited by: Keiffer HH, Jakosky BM, Snyder CW, Matthews MS. 1992, Tucson, The University of Arizona PressGoogle Scholar\n- Wood RH: The heat, free energy, and entropy of the ferrate(VI) ion. J Amer Chem Soc. 1958, 80: 2038-2041.View ArticleGoogle Scholar\n- Boynton WV, Feldman WC, Squyres SW, Prettyman TH, Bruckner J, Evans LG, Reedy RC, Starr R, Arnold JR, Drake DM, Englert PA, Metzger AE, Mitrofanov I, Trombka JI, D'Uston C, Wanke H, Gasnault O, Hamara DK, Janes DM, Marcialis RL, Maurice S, Mikheeva I, Taylor GJ, Tokar R, Shinohara C: Distribution of hydrogen in the near surface of Mars: evidence for subsurface ice deposits. Science. 2002, 297: 81-85.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "Imaginative, but lovely\nIf you've ever wondered what Mars might have looked like in the distant past when it had oceans and a cloud-filled atmosphere, an enterprising geek and digital artist from Nashua, NH, can help. But before we inspect the results of the imaginative Kevin Gill's combination of programming chops and artistic skills, let's take a …\nImaginative, but lovely\nObviously you have not seen the US documentary \"Total Recall\", where historian Arnold Schwarzenegger clearly demonstrates that not only did life on Mars exist, but if you flip the big switch the atmostphere and water will return.\nNot pure speculation. 
There are several theories addressing the origins and development of life on planets like ours, and some do involve (ongoing) exchanges with neighboring planets or being at the receiving end of the same common mechanism. So instead of speculation it's a falsifiable claim backed up by theory and therefore a serious hypothesis and not just speculation.\nPure speculation. True (at present)\nSurface water existed on Mars (at some point). Fact.\nLife existed on Mars (in the past) possible.\nWhat has not changed is Martian gravity (roughly 1/3 g), and the level of sunlight it has gotten.\nI'd suggest that they would have a major impact on what atmospheric pressure can be retained (and hence things like the boiling point of water) and temperature.\nStart with an arbitrary air pressure and see whether it's retained or decays over time (and if so, what to).\nSaying life may have existed on Mars at some point in the past is misleading. It gives far more credence to the idea than is warranted by the data.\nHow many of the millions of life forms that we know of could survive there if transplanted?\nWhich of the mechanisms for the generation of life from inorganic materials that we have observed on Earth have we observed evidence for on Mars?\nHe said \"it's possible\". Who would have thought it's possible for organisms to live at Challenger Deep, 36,000 ft below the ocean's surface, where there's zero light and the water pressure is the equivalent of having about 50 jumbo jets piled on top of you. Or bacteria living in Antarctic lakes that have been covered by ice for thousands of years. Yet they do.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-22", "d_text": "The ClO3− suggests the presence of other highly oxidizing oxychlorines such as ClO2− or ClO, produced both by UV oxidation of Cl and X-ray radiolysis of ClO4−. Thus only highly refractory and/or well-protected (sub-surface) organics are likely to survive.
In addition, recent analysis of the Phoenix WCL showed that the Ca(ClO4)2 in the Phoenix soil has not interacted with liquid water of any form, perhaps for as long as 600 Myr. If it had, the highly soluble Ca(ClO4)2 in contact with liquid water would have formed only CaSO4. This suggests a severely arid environment, with minimal or no liquid water interaction.\nMars Science Laboratory\nThe Mars Science Laboratory mission is a NASA project that launched the Curiosity rover, a nuclear-powered robotic vehicle bearing instruments designed to assess past and present habitability conditions on Mars, on November 26, 2011. The Curiosity rover landed on Aeolis Palus in Gale Crater, near Aeolis Mons (a.k.a. Mount Sharp), on August 6, 2012.\nOn 16 December 2014, NASA reported the Curiosity rover detected a \"tenfold spike\", likely localized, in the amount of methane in the Martian atmosphere. Sample measurements taken \"a dozen times over 20 months\" showed increases in late 2013 and early 2014, averaging \"7 parts of methane per billion in the atmosphere.\" Before and after that, readings averaged around one-tenth that level.\nIn addition, low levels of chlorobenzene (C6H5Cl) were detected in powder drilled from one of the rocks, named \"Cumberland\", analyzed by the Curiosity rover.\nFuture astrobiology missions\n- ExoMars is a European-led multi-spacecraft programme currently under development by the European Space Agency (ESA) and the Russian Federal Space Agency for launch in 2016 and 2020. Its primary scientific mission will be to search for possible biosignatures on Mars, past or present.
A rover with a 2 m (6.6 ft) core drill will be used to sample various depths beneath the surface where liquid water may be found and where microorganisms (or organic biosignatures) might survive cosmic radiation.", "score": 11.600539066098397, "rank": 95}, {"document_id": "doc-::chunk-1", "d_text": "Although material can become oxidised in the presence of free oxygen gas - it is not essential for oxidation reactions to occur.\nBut Dr Francis McCubbin, from the University of New Mexico, who was not involved with the Nature study, told BBC News: \"I did not reach the conclusion that their results imply an early oxygen-rich atmosphere on Mars, only that the upper mantle was more oxidised than the deep interior, which does not actually require any oxygen gas to accomplish.\"\n\"I agree with the overarching conclusions of this work that there are substantial redox gradients with depth on Mars, and this could be potentially very important for Mars' habitability because some organisms can take advantage of redox (reduction-oxidation) reactions and use them as an energy/food source.\nHe added: \"Although not implicitly stated, the early oxidized magmatism would also favour the production of water, another ingredient that is key to habitability.\"\nOn alternative possibilities to atmospheric oxygen, Prof Wood told BBC News: \"One is that Mars was an initially oxidised planet - that's pretty unlikely. There aren't any meteorites or other bodies in the Solar System that show this high state of oxidation.\n\"You don't need a lot of oxygen to cause this - you don't need to be at 20% concentration. It would depend on temperature and how much water was around. 
But you need free oxygen to do it.\n\"And the process didn't take place to any great extent on Earth at that time - which is interesting.\"\nProf Wood explained that, as oxidation was what gave Mars its distinctive colour, it is likely that the planet was \"warm, wet and rusty\" billions of years before Earth's atmosphere became oxygen-rich.\nHe added: \"The principal way we would expect to get oxygen is through photolysis of water - water vapour in Mars' atmosphere interacting with radiation from the Sun breaks down to form hydrogen and oxygen.\n\"Most of that hydrogen and oxygen recombines back to water. But a small fraction of the hydrogen is energetic enough to escape from the planet. A small amount of hydrogen is lost leaving an oxygen excess.\n\"But the gravity on Mars is one third of that on Earth, so hydrogen would be lost more easily. So the oxygen build-up could be enhanced on Mars relative to Earth.\"\nPaul.Rincon-INTERNET@bbc.co.uk and follow me on Twitter", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-14", "d_text": "1999, 29: 625-10.1023/A:1006514327249.View ArticlePubMedGoogle Scholar\n- Margulis L, Mazur P, Barghoorn ES, Halvorson HO, Jukes TH, Kaplan IR: The Viking Mission: implications for life on Mars. J Mol Evol. 1979, 14: 223-232.View ArticlePubMedGoogle Scholar\n- Levin GV: O2- ions and the Mars labeled release response. Science. 2001, 291: 2041-10.1126/science.291.5511.2041a.View ArticlePubMedGoogle Scholar\n- Levin GV, Straat PA: Laboratory simulations of the Viking labeled release experiment: kinetics following second nutrient injection and the nature of the gaseous end product. J Mol Evol. 1979, 14: 185-197.View ArticlePubMedGoogle Scholar\n- Goldfield M, Tsapin I, Nealson K: Surface-possible mechanisms and implications. In Poster Section 12: Mars Oxidants, First Astrobiology Science Conference. 
NASA Ames Research Center, CA, April 3–5, 2000Google Scholar\n- Yen AS, Kim SS, Hecht MH, Frant MS, Murray B: Evidence that the reactivity of the Martian soil is due to superoxide ions. Science. 2000, 289: 1909-1912. 10.1126/science.289.5486.1909.View ArticlePubMedGoogle Scholar\n- Quinn RC, Zent AP: Peroxide-modified titanium dioxide: a chemical analog of putative Martian soil oxidants. Orig Life Evol Biosph. 1999, 29: 59-72. 10.1023/A:1006506022182.View ArticlePubMedGoogle Scholar\n- Goff H, Murmann RK: Studies on the mechanism of isotopic oxygen exchange and reduction of ferrate(VI) ion (FeO42-). J Am Chem Soc. 1971, 93: 6058-6065.View ArticleGoogle Scholar\n- Lee DG, Gai H: Kinetics and mechanism of the oxidation of alcohols by ferrate ion. Can J Chem.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-33", "d_text": "- \"Curiosity Mars rover detects 'useful nitrogen'\". NASA. BBC News. 25 March 2015. Archived from the original on March 27, 2015. Retrieved 2015-03-25.\n- Boxe, C. S.; Hand, K.P.; Nealson, K.H.; Yung, Y.L.; Saiz-Lopez, A. (2012). \"An active nitrogen cycle on Mars sufficient to support a subsurface biosphere\". International Journal of Astrobiology. 11 (2): 109–115. Bibcode:2012IJAsB..11..109B. doi:10.1017/S1473550411000401. Archived from the original on May 18, 2015. Retrieved 2015-05-10.\n- Schuerger, Andrew C.; Ulrich, Richard; Berry, Bonnie J.; Nicholson, Wayne L. (February 2013). \"Growth of Serratia liquefaciens under 7 mbar, 0°C, and CO2-Enriched Anoxic Atmospheres\". Astrobiology. 13 (2): 115–131. Bibcode:2013AsBio..13..115S. doi:10.1089/ast.2011.0811. PMC . PMID 23289858.\n- Heldmann, Jennifer L.; Toon, Owen B.; Pollard, Wayne H.; Mellon, Michael T.; Pitlick, John; McKay, Christopher P.; Andersen, Dale T. (2005). \"Formation of Martian gullies by the action of liquid water flowing under current Martian environmental conditions\". Journal of Geophysical Research. 110 (E5): E05004. Bibcode:2005JGRE..11005004H. 
doi:10.1029/2004JE002261.\n- Kostama, V.-P.; Kreslavsky, M. A.; Head, J. W. (2006). \"Recent high-latitude icy mantle in the northern plains of Mars: Characteristics and ages of emplacement\". Geophysical Research Letters. 33 (11): 11201. Bibcode:2006GeoRL..3311201K. doi:10.1029/2006GL025946.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-33", "d_text": "Journal of Geophysical Research, 110(E12).\n- Hu, R., Kass, D. M., Ehlmann, B., & Yung, Y. L. (2015). Tracing the fate of carbon and the atmospheric evolution of Mars. Nature Communications, 6, 10003.\n- Ingersoll, A. P. (1970). Mars: Occurrence of liquid water. Science, 168, 972–973.\n- Jakosky, B. M., Brain, D., Chaffin, M., Curry, S., Deighan, J., Grebowsky, J., Halekas, J., Leblanc, F., Lillis., R., Luhmann, J. G., Andersson, L., Andre, N., Andrews, D., Baird, D., Baker, D., Bell, J., Benna, M., Bhattacharyya, D., Bougher, S., Bowers, C., … Zurek, R. (2018). Loss of the Martian atmosphere to space: Present-day loss rates determined from MAVEN observations and integrated loss through time. Icarus, 315, 146–157.\n- Kahre, M. A., Hollingsworth, J. L., Haberle, R. M., & Wilson, R. J. (2015). Coupling the Mars dust and water cycles: The importance of radiative-dynamic feedbacks during northern hemisphere summer. Icarus, 260, 477–480.\n- Kahre, M. A., Murphy, J. R., Newman, C. E., Wilson, R. J., Cantor, B. A., Lemmon, M. T., & Wolff, M. J. (2017). The Mars dust cycle. In R. M. Haberle, R. T. Clancy, F. Forget, M. D. Smith, & R. W. Zurek (Eds.), The atmosphere and climate of Mars (pp. 295–337). Cambridge University Press.\n- Kasting, J. F. (1991). CO2 condensation and the climate of early Mars. Icarus, 94, 1–13.\n- Kerber, L., Forget, F., & Wordsworth, R. (2015).", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-42", "d_text": "- Levin, Gilbert V.; Straat, Patricia Ann (2009). \"Methane and life on Mars\". 
In Hoover, Richard B; Levin, Gilbert V; Rozanov, Alexei Y; Retherford, Kurt D. Instruments and Methods for Astrobiology and Planetary Missions XII. 7441. pp. 12–27. Bibcode:2009SPIE.7441E..12L. doi:10.1117/12.829183. ISBN 978-0-8194-7731-6.\n- Oze, Christopher; Jones, Camille; Goldsmith, Jonas I.; Rosenbauer, Robert J. (June 7, 2012). \"Differentiating biotic from abiotic methane genesis in hydrothermally active planetary surfaces\". PNAS. 109 (25): 9750–9754. Bibcode:2012PNAS..109.9750O. doi:10.1073/pnas.1205223109. PMC . PMID 22679287.\n- Staff (June 25, 2012). \"Mars Life Could Leave Traces in Red Planet's Air: Study\". Space.com. Archived from the original on June 30, 2012.\n- Brogi, Matteo; Snellen, Ignas A. G.; de Krok, Remco J.; Albrecht, Simon; Birkby, Jayne; de Mooij, Ernest J. W. (June 28, 2012). \"The signature of orbital motion from the dayside of the planet τ Boötis b\". Nature. 486 (7404): 502–504. arXiv: . Bibcode:2012Natur.486..502B. doi:10.1038/nature11161. PMID 22739313.\n- Mann, Adam (June 27, 2012). \"New View of Exoplanets Will Aid Search for E.T.\" Wired. Archived from the original on August 29, 2012.\n- Webster, Guy; Neal-Jones, Nancy; Brown, Dwayne (December 16, 2014). \"NASA Rover Finds Active and Ancient Organic Chemistry on Mars\". NASA. Archived from the original on December 17, 2014.", "score": 8.086131989696522, "rank": 100}]} {"qid": 37, "question_text": "What are the recommended daily protein requirements for adult men and women?", "rank": [{"document_id": "doc-::chunk-1", "d_text": "That translates into a daily recommended intake (or DRI) of 56 grams of protein for the average sedentary man and 46 grams for the average sedentary woman.\nFor vegetarians, adjustments are made to account for plant-based proteins being digested differently from animal proteins and for the different amino acid make-up in some plant proteins. 
So the recommendation is closer to 0.9 grams of protein per kilogram of weight.\nIn addition to the DRI for protein, the Institute of Medicine also provides a recommended range for protein intake, suggesting protein should make up 10% to 35% of your total calories. If you take in 2,000 calories per day, 10% to 35% of calories translates to a range of 50 to 175 grams of protein. In other words, the DRI represents the bare minimum.\n*Pregnant or lactating women, as well as children and adolescents, have higher protein needs due to their accelerated stage of growth.\nVegetarian sources of protein\nHere is a sample menu of how to cover the protein recommendation in one day:\nBreakfast: 1 cup of oatmeal (7g), 1 cup of homemade almond milk (11g).\nLunch: Grilled tofu sandwich: 2 slices of wheat toast (6g), 100g of firm tofu or 1 cup (20g).\nSnack: Fruit or veggies dipped in 2 tbsp. of peanut butter (8g).\nDinner: 1 cup of cooked brown rice (5g), 1 cup of cooked lentils (18g), 1 cup of cooked broccoli (4g).\nThis adds to 79 grams of protein without counting the protein in fruits, veggies and other foods you might consume in your day which are not good sources of protein but certainly contribute to your overall intake. It's not hard to meet the recommended intake!", "score": 50.85872791813726, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "The average adult needs about 0.8 grams of protein per kilogram of bodyweight a day, which comes out to roughly 56 grams of protein a day for men and 46 for women, according to the Institutes of Medicine.\nBut despite protein showing up on more and more food labels, we’re already getting way more than our 46 or 56 grams. In fact, men ages 20 and over get an average of 98.9 grams of protein a day, and women ages 20 and over get 68 grams, according to the U.S. Department of Agriculture’s latest What We Eat In America report.\nGetting protein is, of course, an important part of a balanced diet. 
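The arithmetic quoted above (the 0.8 g/kg DRI, the 10% to 35% AMDR on a 2,000-calorie day, and the 79-gram sample menu) can be sketched in a few lines. This is illustrative only: the function names are ours, and the 4 kcal-per-gram figure for protein is the standard conversion factor, not something stated in the passage.

```python
# Sketch of the protein arithmetic described above: DRI (0.8 g protein per kg
# of body weight), the 10%-35% AMDR expressed in grams, and the menu total.

def dri_protein_g(weight_kg, g_per_kg=0.8):
    """Bare-minimum daily protein in grams (the DRI)."""
    return round(weight_kg * g_per_kg, 1)

def amdr_protein_g(daily_kcal, kcal_per_g=4):
    """Protein range in grams covering 10%-35% of total calories."""
    return (round(daily_kcal * 0.10 / kcal_per_g, 1),
            round(daily_kcal * 0.35 / kcal_per_g, 1))

# Sample vegetarian menu from the text, grams of protein per item.
menu = {"oatmeal": 7, "almond milk": 11, "toast": 6, "tofu": 20,
        "peanut butter": 8, "brown rice": 5, "lentils": 18, "broccoli": 4}

print(dri_protein_g(70))     # 56.0 g/day for a 70 kg adult
print(amdr_protein_g(2000))  # (50.0, 175.0)
print(sum(menu.values()))    # 79
```

Note how the DRI (56 g for a 70 kg adult) sits at the very bottom of the AMDR range, which is the passage's point that the DRI is a bare minimum.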
For starters, our bodies simply wouldn’t be able to build and repair its cells without the stuff. We know that high-protein breakfasts can help us keep unhealthy snack urges in check. And according to a new analysis, a diet higher in protein, especially from fish, seems to lower stroke risk.\nHowever, more isn’t necessarily better. “[B]ecause Americans consume so much protein, and there is plenty in foods from both plant and animal sources, and there is no evidence of protein deficiency in the U.S. population, protein is a non-issue,” Marion Nestle, Ph.D, MPH, Paulette Goddard Professor of Nutrition, Food Studies, and Public Health at New York University, tells The Huffington Post in an email. “Why make it into one? The only reason for doing so is marketing. Protein used as a marketing tool is about marketing, not health. The advantage for marketing purposes of protein over fat or carbohydrates is that it’s a positive message, not negative. Marketers don’t have to do anything other than mention protein to make people think it’s a health food.” (Case in point: A serving of those new protein-packed Cheerios also contains 16 or 17 grams of sugar, depending on the flavor.)\nIn some cases, more protein can even be problematic. Nestle points out that much of the research is “conflicted and uncertain”, but there are a few things we know so far. 
Here are three signs your diet might be too heavy-handed on the protein.\nYou’re gaining weight.\nIf you’ve bulked up on the protein in your diet without cutting calories in other areas, you may find yourself gaining weight.", "score": 49.67928518523366, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "Protein Requirement Recommendation\nThe requirement indicated by the meta-analysis (a median requirement of 105 mg nitrogen/kg per day or 0.66 g/kg per day of protein) can be accepted as the best estimate of a population average requirement for healthy adults.\nFor adults, the protein requirement per kg body weight is considered to be the same for both sexes, at all ages, and for all body weights within the acceptable range. The value accepted for the safe level of intake is 0.83 g/kg per day, for proteins with a protein digestibility-corrected amino acid score value of 1.0. No safe upper limit has been identified... (p. 242)\n|Range||Body weight||Safe level of protein intake (score 1.0)|\n|From||40 kg||33 g per day|\n|To||80 kg||66 g per day|\nAmino Acid Requirements of Adults\n|Amino acid||mg/kg per day||mg/g protein|\n|Methionine + cysteine||15||22|\n|Phenylalanine + tyrosine||25||30|\nProtein Requirement Definition\nProtein requirement can be defined as: the lowest level of dietary protein intake that will balance the losses of nitrogen from the body, and thus maintain the body protein mass, in persons at energy balance with modest levels of physical activity, plus, in children or in pregnant or lactating women, the needs associated with the deposition of tissues or the secretion of milk at rates consistent with good health.\nTo satisfy the metabolic demand, the dietary protein must contain adequate and digestible amounts of nutritionally indispensable amino acids (histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan and valine), and amino acids that can become indispensable under specific physiological or pathological 
conditions (conditionally indispensable: e.g. cysteine, tyrosine, taurine, glycine, arginine, glutamine and proline), plus sufficient total amino acid nitrogen, which can be supplied from any of the above amino acids, from dispensable amino acids (aspartic acid, asparagine, glutamic acid, alanine and serine) or from other sources of non-essential nitrogen.\nAt present, no method is entirely reliable for determining the dietary requirement for indispensable amino acids.", "score": 49.43436677015999, "rank": 3}, {"document_id": "doc-::chunk-1", "d_text": "Determining how much protein you need varies by your gender, age, body size, activity level, and health goals.\nThe USDA recommends a minimum of 46 grams of protein per day for women and 56 grams of protein per day for men, but these are the recommended bare minimums your body needs to avoid deficiencies.\nThese daily minimums are meant for a person of moderate activity level and average weight who is trying to maintain their status.\n- If you are trying to lose weight, and especially if you are exercising regularly as a part of that weight loss strategy, then you will want to eat more protein than these minimum levels.\nTo determine how much protein you need, start with your current body weight. Divide that by two.\n- If you are very active and trying to build lean muscle while also losing weight, your answer is equal to the grams of protein you should be aiming to eat each day.\n- If you are moderately active but trying to lose weight, multiply that number by 0.75 to determine your daily protein intake in grams.\nSomeone who weighs 200 pounds would aim to eat a maximum of 100 grams of protein per day
That same person should only consume about 75 grams of protein per day if their activity level is more moderate.\nIf you are not the kind of person who counts grams of anything, you may want to consider a different approach.\nFor each meal or more substantial snack you eat, a good rule of thumb for high protein snacks weight loss is to eat no more than 30 percent of your calories from protein, less than 25 percent from healthy fats, and the remainder of your calories should come from carbohydrates.\nHigh Protein Snacks\nCreating a whole-foods diet that is focused on higher levels of protein for weight loss can not only help you lose the extra weight but also benefit your health in many ways.\nA diet that is rich in vegetables, fruits, lean proteins, and healthy fats is the foundation of a healthy body, no matter your fitness or weight loss goals.", "score": 46.90279651537546, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "By Dr Courtney Craig.\nDietary protein requirements for the average person are about 0.8g/per kg body weight/day per the U.S. and European standards. That equates to about 67-114g per day for men, and between 59-102g per day for women. Adequate dietary protein allows for:\nRepair of tissues and cellular proteins: a daily necessity for normal physiology\nSynthesis of important proteins by the liver: including transport proteins, which move hormones, minerals, and other important components through the blood\nImmune cell regeneration: synthesis of new immunoglobulins and antibodies\nBlood: generation of the blood protein hemoglobin which carries oxygen in the blood\nIn chronic illness, protein demands increase due to the body being under constant physiological stress. Stress hormones such as epinephrine and norepinephrine, degrade existing proteins and inhibit creation of new ones. Chronic inflammation and oxidative stress can inhibit the activity of important enzymes involved in protein synthesis. 
Degradation of existing protein and inhibition of new protein creation can detrimentally affect every system of the body, worsening the chronic illness.\nTo read the rest of this story, click on the link below:", "score": 46.56697110298411, "rank": 5}, {"document_id": "doc-::chunk-4", "d_text": "Amino acids are considered the primary building blocks of your body because they are found in muscles, tendons, bone, skin, hormones, tissue, enzymes, red blood cells, and more [17\nProtein is also positively correlated with bone health, as studies have found that adequate protein intake is associated with better bone strength, a slower rate of bone loss, and reduced risk of hip fracture, especially among the elderly [18\nHow much protein do you need?\nHow much protein you need per day varies with age and can increase significantly with physical activity, injury, and/or illness. The Dietary Reference Intake (DRI) for protein for adults is as follows:\nSedentary men and women:\n0.8 g protein/kg of body weight/day [19\nSedentary adults over 65 years old:\n1 to 1.2 g protein/kg of body weight/day [20\nAthletes and highly active people:\n1.2-2.0 g protein/kg of body weight/day, depending on training needs and goals [21\nThese numbers will vary depending on your activity level, age, and other needs, so talk with a dietitian or your healthcare provider to see what is right for you.\nDietary sources and supplementation\nGood protein sources include eggs, dairy, lean meat, poultry, fish, beans, lentils, soybeans, tofu, and supplements (such as protein powder\nor protein-fortified foods).\nHowever, some athletes, weight lifters, older adults, active individuals, vegetarians/vegans, and individuals with a chronic illness may find it challenging to get enough quality protein from foods, so this population may benefit from using a protein powder supplement to help meet their protein goals.\nVitamin B12 is a water-soluble vitamin that is necessary for the development, myelination, and 
function of the central nervous system, as well as red blood cell formation and DNA synthesis [22\nIt’s also crucial for rebuilding bones, as a vitamin B12 deficiency has been associated with increased fracture risks, lower bone mineral density, and reduced bone turnover, especially in those who follow a vegetarian or vegan diet [23\nHow much vitamin B12 do you need?", "score": 46.44866515619613, "rank": 6}, {"document_id": "doc-::chunk-1", "d_text": "The Recommended Dietary Allowance (RDA) for healthy adults is 0.8 g protein per kilogram of body weight per day. For a woman weighing 65 kg (143 pounds), this means a protein consumption of 52 g. Children, especially babies, need significantly more protein per kilogram of body weight. A maximum of 15 percent of the total calorie consumption should come from protein. About two-thirds of the protein should be vegetable-based and one third from animal sources.\nProtein Sources From Food\nIt’s usually not very difficult to get enough protein in your diet since it’s so abundant in many different types of foods. Vegetable protein sources include legumes such as peas, beans, and lentils as well as nuts, cereals, and potatoes. Protein-rich foods from animals are fish, meat, eggs, and dairy products such as cheese and yogurt.\nA vegan diet usually contains much less protein than a mixed diet. This is why children and teenagers as well as nursing mothers and pregnant women who wish to pursue a vegan diet need to make sure they’re getting enough protein since they need extra protein in general. With an adequate supply of protein, a dynamic balance between the development and degradation of protein occurs in adults. You might also need more protein during times of large blood loss or infections.\nDo Protein Supplements Make Sense?\nHave you ever bought protein-enriched food? What about high-protein dairy products, shakes, or protein bars, which can sometimes fill several grocery aisles? 
Do you think such foods make sense outside of competitive sports? Debate with family and friends on FamilyApp.", "score": 42.79394805190628, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "How Much Protein Do You Need at Every Age?\nThe human body contains more than 600 muscles, each of which has a specific function. Some control actions such as swallowing, while others allow our skeleton to move effectively. When you eat protein, it helps to repair muscle and to maintain it, but it won’t make them bigger. This is controlled by the amount of exercise you do and its type, as well as your gender, age and your hormones. The amount of protein you need is dependent on your age, gender and weight.\nGenerally, protein should account for 15 to 25% of your daily energy intake and teenage girls and women aged 19 to 70 require about 46 g, while men and teenage boys need approximately 64 g. Older men and women aged 70 and above need more protein at 81 g and 57 g respectively. For women this amount is only higher if they are pregnant or breastfeeding.", "score": 42.69598275198954, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "When you’re over 50, getting the right proportion of carbs and protein can help you look and feel your best, give you that extra boost of energy to get through the day and help you maintain a healthy body weight. As you age, your metabolism slows down, and you require fewer calories each day for healthy weight maintenance. 
According to the Centers for Disease Control and Prevention National Health Statistics Reports, the average body mass index, or BMI, for both men and woman ages 50 and older falls into the \"overweight\" category.\nRegardless of your calorie intake, your protein needs remain fairly constant because they’re based on your body weight, according to a review published in 2009 edition of “Nutrition and Metabolism.” The Institute of Medicine recommends men over 50 eat at least 56 grams of protein, and women over 50 consume at least 46 grams of protein every day. Although the recommended dietary allowance, or RDA, for protein is 0.8 gram per kilogram of body weight, or about 0.36 gram per pound of body weight, a review published in a 2010 edition of “Aging Health” reports that slightly increasing protein intake above the RDA may benefit bone health in the aging population. A review published in a 2008 edition of “Clinical Nutrition” found that protein intakes of 1.5 grams per kilogram, or about 0.68 gram per pound, of body weight per day is beneficial for the elderly population. Therefore, if you’re over 50 shoot for 0.36 to 0.68 gram of protein per pound of body weight each day.\nYour carbohydrate needs are determined by your calorie requirements—which are based on your age and gender. According to the Dietary Guidelines for Americans 2010, women over 50 need 1,600 calories daily if they are sedentary, 1,800 calories if they’re moderately active and 2,000 to 2,200 calories per day if they are active; while men over 50 need 2,000 to 2,200 calories if they are sedentary, 2,200 to 2,400 calories when they’re moderately active and 2,400 to 2,800 calories a day if they’re active. 
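The over-50 guideline above (0.36 to 0.68 gram of protein per pound of body weight per day) converts to a daily gram range as sketched below. The function name and the 150-pound example weight are ours, purely for illustration.

```python
# Illustrative only: turns the over-50 range quoted above (0.36-0.68 g of
# protein per pound of body weight per day) into daily gram targets.

def over50_protein_range_g(weight_lb):
    return (round(weight_lb * 0.36, 1), round(weight_lb * 0.68, 1))

print(over50_protein_range_g(150))  # (54.0, 102.0)
```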
These calorie needs are estimated to help adults over 50 maintain healthy body weights.", "score": 40.30949597445035, "rank": 9}, {"document_id": "doc-::chunk-11", "d_text": "Eating at least 25-30% protein out of your total calorie intake can boost your metabolism by up to 100 calories a day compared to low-protein diets.\nSee, most official nutrition organizations tell you to keep your protein to a modest amount…\nIn the US, the Centers for Disease Control and Prevention (CDC) recommend only 10-35% of your daily calories come from protein.\nThat works out to about 46 grams a day for the average woman and 56 grams for the average man.\nThe Board of the Institute of Medicine produced a report outlining the recommended dietary amount for different macronutrients–fat, protein, and carbs.\nThe recommended daily amount of protein was only 0.8 grams of protein per kilogram of body weight for adults 18 years old and above.\nThat works out to only 0.36 grams of protein per pound of body weight. Such official guidelines are a decent starting point to figure out your ideal protein intake.\nThere are, however, a few major flaws that raise eyebrows about such guidelines…\nThe biggest issue is that they attempt to come up with universal numbers that should ideally work for everyone…\nBut in reality, different people would require varied amounts of protein.\nAnd besides not taking your unique situation into account, most official guidelines are based on minimum recommended amounts…\nMeaning the amount listed is the absolute least amount of protein you should eat to not lose muscle mass…\nYet, a range of studies found that higher protein intake–over the recommended daily amounts–helps build muscle, improve bone and heart health, besides boosting your energy!\nThe bottom line is…\nYour ideal level of protein intake is somewhere above what nutrition organizations recommend.\nBut then the question begs…\nHow far above those guidelines should you target?\nHow to Figure Out Your Ideal 
Protein Amount\nTo begin with…\nThe much touted magic number–the exact amount of protein everyone needs to eat every day for optimal health, just doesn’t exist!\nYour ideal amount is dependent on a couple of factors, including:-\nIf you’re trying to transform your body by building a significant amount of muscle…\nYou’ll require an elevated amount of protein.\nEating a higher-protein content diet has been shown to help synthesize new muscles and build strength.\nBut you don’t need to go to such extremes as some bodybuilders and supplement companies would want to have you believe!\nA lot of bodybuilders recommend at least 1 gram per pound of body weight for those trying to build muscle, but that’s in the upper range of the ideal intake.", "score": 38.622902619560286, "rank": 10}, {"document_id": "doc-::chunk-1", "d_text": "Longo said many middle-age Americans, along with an increasing number of people around the world, are eating twice and sometimes three times as much protein as they need, with too much of that coming from animals rather than plant-based foods such as nuts, seeds and beans.\nHe said adults in middle age would be better off following the advice of several top health agencies to consume about 0.8 grams of protein per kilogram of body weight each day — roughly 55 grams for a 150-pound person, or the equivalent of an 8-ounce piece of meat or several cups of dry beans.", "score": 37.493658291635526, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "Finding the proper protein and carbohydrate balance can help maximize your energy levels, maintain a healthy body weight and increase your satiety after meals. Carbohydrates are generally the main fuel source for humans. However, consuming too many carbs can lead to unwanted weight gain.\nThe Institute of Medicine has established minimum recommended dietary allowances, or RDAs, for protein and carbohydrates. The RDA for carbohydrates is 130 grams per day for adults. 
Protein RDAs are 46 grams per day for women, 56 grams per day for men. Regardless of the protein and carb balance that’s appropriate for your individual needs, aim to consume at least the RDA for protein and carbs each day.\nMacronutrient Distribution Ranges\nThe Institute of Medicine has also established acceptable macronutrient distribution ranges, or AMDRs, for carbohydrates and protein. AMDRs are percentages of your total daily calorie intake that should come from carbohydrates, protein and fat. Based on these guidelines, you should consume between 45 and 65 percent of your daily calories from carbohydrates and 10 to 35 percent of your calories from protein. For a 2,000-calorie diet, this means consuming 225 to 325 grams of carbs and 50 to 175 grams of protein each day.\nBalancing Protein and Carbs for Weight Loss\nHigh-protein, low-carb diets are often effective for weight loss because they can lead to a reduction in calories. However, consuming a diet too low in carbs is difficult to adhere to long-term. A study published in a 2012 edition of the “British Journal of Nutrition” reports that a reduced-calorie diet with a one-to-two protein/carb ratio was most successful for diet adherence, body-fat reduction, reduced waist circumference, a lower waist-to-hip ratio and preservation of lean body mass compared with diets with one-to-four or one-to-one protein/carb ratios. For example, an effective 1,200-calorie weight-loss diet may contain 140 grams of carbohydrates, 70 grams of protein and 40 grams of fat.\nAthletes have slightly higher carbohydrate and protein needs compared to sedentary individuals. 
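The 1,200-calorie example above can be checked against the standard calorie factors (4 kcal per gram for protein and carbs, 9 kcal per gram for fat); a minimal sketch, with a function name of our choosing:

```python
# Verifies the example weight-loss diet above: 140 g carbs, 70 g protein,
# 40 g fat, using the standard 4/4/9 kcal-per-gram conversion factors.

def total_kcal(carbs_g, protein_g, fat_g):
    return carbs_g * 4 + protein_g * 4 + fat_g * 9

print(total_kcal(140, 70, 40))  # 1200
print(70 / 140)                 # 0.5 -> the one-to-two protein/carb ratio
```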
The Academy of Nutrition and Dietetics reports that endurance athletes require 2.3 to 5.5 grams of carbohydrates for each pound of body weight and 0.5 to 0.9 grams of protein per pound of body weight each day.", "score": 36.52762064981082, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "The Recommended Daily Allowance, or RDA, for protein is 0.8 grams per kilogram of bodyweight. That’s .36 gram per pound or 56 grams per day for the average man and 46 grams per day for the average woman (assuming all they do is sit all day).\nJust drink a glass of milk and eat a piece of meat, a cup of dry beans, and some yogurt and you should meet your RDA for protein.\nPretty simple isn’t it?\nNot so fast!\nThose miniscule amounts of protein are enough to prevent protein deficiency. But those amounts are NOT enough for optimal health.\nLet’s Talk About Protein\nProtein is part of an important food group that you need in order for your body to function properly. As a macronutrient, protein facilitates your body’s proper growth and development, as well as helps to strengthen its immunity against various illnesses and diseases.[i]\nProtein is also responsible for acting as the main building block that repairs your tissues, organs, tendons, muscles, and even your bones, skin, and eyes.[ii]\nIt is an essential component in the synthesis of enzymes, neurotransmitters, and hormones, which are all important in maintaining bodily functions. 
Without sufficient levels of protein in your diet, your body is more prone to experiencing muscle atrophy and organ malfunction.[iii]\nWe could go on all day about the benefits of protein, but you get the idea.\nCan Protein Help You Lose Weight?\nStudies show that high levels of protein consumption can help regulate the total amount of calories you consume overall.\nYou feel more satiated when you eat more protein.[iv] This means you’ll crave less junk food.\nProtein has the ability to extend and prolong the release of carbohydrates through your body, providing you with a more constant and sustained supply of energy.\nProtein (plus exercise) can also boost your metabolic rate, increasing the amount of calories that you burn.[v]\nSo yes, protein helps with weight loss.\nHow Much Protein Is Enough?\nThe Institute of Medicine recommends that a daily intake of 0.8 grams of protein per kilogram of body weight is ideal for the typical adult in order to prevent developing a protein deficiency.\nThis translates to about 46 grams of protein for women and 56 grams of protein for men daily. “Grams of protein” here is a measurement of the amount of actual protein macronutrient that is ingested, and not of the actual protein source.", "score": 35.54762818094701, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "If you eat 2,000 calories each day, this would equal 50 to 175 grams of protein each day. For someone who weighs 200 lbs, this would be between 0.5 and 2.0 grams of protein per kilogram. Notice that the RDA fits within this range, which accounts for the protein requirements of nearly everyone, including people who have very high protein needs.\nWhile the RDA is sufficient for most healthy people, even those who exercise regularly, it may be too low for athletes engaged in strenuous endurance or strength training. The protein requirement for endurance athletes, including runners, cyclists, and triathletes, is 1.2–1.4 g/kg per day. 
Athletes who are training to add muscle mass and strength—think football players in the offseason—need even more protein: 1.2–1.7 g/kg per day. This should meet both energy needs to fuel training sessions and provide adequate protein for muscle repair and growth.\nFor most of us, though, the focus should not be on eating more protein but to get our protein from healthy sources. For starters, several servings of protein-rich lean meat, eggs, and dairy as well as whole grains, legumes, and vegetables should meet protein needs. The emphasis should be on real food rather than processed foods with added protein. After all, no amount of granola bars with added protein will make you healthier!\nNutrition, exercise, and health information can be confusing. But it doesn't have to be that way. What can I help you with? firstname.lastname@example.org | http://twitter.com/drbrianparr", "score": 34.66452399996218, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "It seems to me that there is a lot of confusion regarding the issue of protein intake. It is a huge debate and the research made by the experts has been very exhaustive with the question “how much protein should I eat” being brought up by everyone from the casual gym user to development scientists.\nFirst of all, let me start by making it clear that it is essential for good health. Protein provides the building blocks that help muscles grow. It should be included at all mealtimes, particularly before and after a resistance training session. 
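The per-kilogram figures quoted above (0.8 g/kg for sedentary adults, 1.2-1.4 g/kg for endurance athletes, 1.2-1.7 g/kg for strength training) can be restated as daily gram ranges. The dictionary below is only a restatement of those numbers, not an independent recommendation, and the group labels are ours.

```python
# Restates the g/kg guidelines above as daily protein ranges in grams.

G_PER_KG = {
    "sedentary": (0.8, 0.8),   # the RDA
    "endurance": (1.2, 1.4),   # runners, cyclists, triathletes
    "strength":  (1.2, 1.7),   # adding muscle mass and strength
}

def daily_protein_range_g(weight_kg, group):
    low, high = G_PER_KG[group]
    return (round(weight_kg * low, 1), round(weight_kg * high, 1))

print(daily_protein_range_g(70, "endurance"))  # (84.0, 98.0)
```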
The Recommended Daily Allowance (RDA) is 0.8 grams of protein per kilogram of bodyweight; however, this is only for sedentary adults.\nPROTEIN BY NUMBERS\nIn regards to endurance-based exercise, the recommended protein intake range s from of 1.0 grams per kilogram per day to 1.6 grams kilograms per day depending on the intensity and duration of the endurance exercise – the more you train, the more calories you expend, the more your muscles are used and broken down, the more protein you will need.\nRecommendations for strength/power exercise typically range from 1.6 to 2.0 grams kilograms per day.\nIT’S NOT JUST ‘EAT MORE MEAT’\nHowever, getting protein into your body should not be taken as “eat more meat”, even though beef, chicken and turkey (as well as milk, cheese, and eggs) do have a high protein content. Protein sources can also be found in whole grains, beans and other legumes, nuts, vegetables, and fish of course – click here for a list of protein sources. Protein supplements can be used but only if you feel that you are not getting enough to match your exercise work rate.\nNOT ALL PROTEINS ARE EQUAL\nThe body cannot use the protein you ingest for muscle-building unless all of the necessary amino acids are present. Some foods contain ‘complete protein’ which is where they provide all the amino acids necessary to produce usable protein. Even foods known for their high protein content contain differing amounts of usable protein. For example, if a food says it contains 10g of protein, because of the inherent quality of the protein, your body may only be able to use 7g of it for efficient cell repair.", "score": 34.4699277475563, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "Protein plays a role in many important body functions. Eating enough protein also helps you to feel full longer, which makes protein important for weight loss. 
Most people in the West get enough protein from their daily diet especially if they eat meat, eggs and fish.\nThe daily recommendations are approximately 46 grams for women and 56 grams for men. Some groups of people might need more protein than others, for example pregnant and nursing women and people who are physically active, such as sports athletes and bodybuilders.\nSome of the best sources include meat, poultry, eggs, fish and dairy products. Red meat is high in saturated fat, but lean meats such as chicken and turkey are lower in fat and rich in protein. Seafood is a healthy protein source and contains Omega-3 essential fatty acids that are good for the heart. Cheese, yogurt and other dairy products are alternative sources of protein and they are also high in calcium. Vegetarian and vegan protein sources include beans, lentils, hemp powder, nuts, seeds, quinoa and soy products such as tofu and soy milk.", "score": 34.29161151900013, "rank": 16}, {"document_id": "doc-::chunk-1", "d_text": "PAHs are also found in the exhaust fumes and tobacco smoke.\n- Plant-based proteins are low in fat and high in fibre, vitamins and minerals.\n- Plant proteins contain phytochemicals that contribute towards health and disease prevention. For example, isoflavones found in soya beans have antioxidant properties, thought to be important in the prevention of cancer and menopausal symptoms.\nHow much do I need?\nEnergy and protein\n- 1g carbohydrate: 3.75 calories.\n- 1g protein: 4 calories.\n- 1g fat: 9 calories.\n- 1g alcohol: 7 calories.\nCurrent advice says protein only has to make up 10 to 15 per cent of your daily diet to meet your body's needs. 
That's around 55g for men and 45g for women.\nMost of us eat more than this, and the British Nutrition Foundation puts the average adult intake at 88g for men and 64g for women.\n- Around two thirds of the protein we eat is from animal sources.\n- We get a quarter of our protein from cereal products (wheat, bread, oats).\n- Nuts and pulses make up most of the final twelfth.\nHow much protein do foods contain?\nBelow are some examples of foods, so you can compare protein content.\nYou can also check nutrition labels to find out how much protein something contains.\n- One skinless chicken breast (130g): 41g protein.\n- One small fillet steak (200g): 52g protein.\n- One beef burger or pork sausage: 8g protein.\n- One portion of poached skinless cod fillet (150g): 32g protein.\n- Half a can of tuna: 19g protein.\n- One portion of cheese (50g): 12g protein.\n- One medium egg: 6g protein.\n- 150ml glass of milk: 5g protein.\n- One tablespoon of boiled red lentils (40g): 3g protein.\n- One portion of tofu (125g): 15g protein.\n- One slice medium wholemeal bread: 4g protein.\n- One slice medium white bread: 3g protein.\nTips for healthy living\n- Include oily fish in your diet at least twice a week.\n- Try using soya products, such as veggie mince and tofu. 
They will take up the flavour of the dish if you add them to stews and sauces.", "score": 33.137720967049255, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "This simple, evidence-based protein calculator will instantly show you how much protein to consume per day depending on your fitness goals, body size, and gender.\nHow Does Our Protein Calculator Work?\nOnce you've entered your measurements, the protein calculator will recommend three separate protein intakes: one being the very modest recommended daily allowance (RDA), another for bulking (building muscle), and a third for cutting (fat loss).\nNote that the current RDA for adults is 0.8 grams per kilogram of body weight, or roughly 0.36 grams per pound of body weight. This guideline remains an issue of contention among the scientific community, especially sports nutrition researchers, and is merely the minimum adults are advised to consume for essential biological functions. It's likely that the current RDA for protein is inadequate for active individuals looking to improve body composition.\nAs such, our protein calculator logic for building muscle and fat loss is extrapolated from recent studies on optimal protein intake in elite athletes and resistance trainees [1, 2].\nHow Much Protein Is \"Too Much?\"\nMost people know all about the benefits of a high-protein diet to improve body composition and build muscle, but what are the health risks of eating too much protein? Despite the longstanding paradigm of high-protein diets in fitness and bodybuilding subcultures, some evidence suggests that excess protein intake may lead to deleterious long-term health problems and undesirable side effects.\nHowever, for every study that incriminates high-protein diets for conditions like kidney disease, there are just as many that find no untoward effects of a high-protein intake (at least not in otherwise healthy individuals). 
In fact, several studies cite a higher protein intake as being prudent for weight loss and promoting athletic performance.\nNonetheless, there is a fine line between eating enough protein and overeating protein. As with just about every substance you put in your body, too much of a \"good thing\" can cause health problems. The question then is, \"How much protein is too much?\"\nThere are no hard rules, but if you're eating more than 2 grams of protein per pound of body weight, odds are you're pushing the boundaries.\nFurther Protein Intake Resources\nNow that you have a better idea of how much protein to eat, check out the following articles and guides for all things related to protein intake:\n1.", "score": 32.9595726298972, "rank": 18}, {"document_id": "doc-::chunk-2", "d_text": "How Should I Use Protein\nAccording to the Dietary Reference Intake, 0.8 grams is the suggested quantity for every kilogram of body weight (that is 0.36 grams per pound).\nHowever, it is important to note that this recommendation is just enough to prevent deficiency. 
It is not enough to meet the likely daily needs of even a sedentary person.\nResearch on the optimal amount of protein to eat for good health is ongoing and far from settled.\nHow much protein you need depends on several factors, such as your weight, your goal (weight maintenance, muscle gain, or fat loss), your level of physical activity, and whether you’re pregnant or not.\nAs you can see from those variables, it is impossible to make a strict recommendation for everyone.\nFor this reason, after years of client and customer work, we will typically recommend .5 grams per pound of bodyweight for sedentary individuals and .8 to 1.2 grams per pound of bodyweight for active individuals.\nAs an FYI, our range (.8 to 1.2) for active individuals allows for differing amounts, intensities, and types of activity and exercise.\nWe will always recommend, however, that an individual, active or sedentary, start at the lower end and adjust based on the body’s response.\nWith all that said, our protein-inclusion and timing suggestions are also quite simple for most. We have a few “rules” we like to go by to keep things easy.", "score": 32.959159050949005, "rank": 19}, {"document_id": "doc-::chunk-1", "d_text": "Daily protein needs\nThe amount of protein to consume daily depends on the individual’s weight and the type of physical activity they practice. A healthy average person is advised to obtain 10-15% of total daily calories from protein, i.e. about one gram per kilogram of body weight, while a pregnant woman needs about 10 extra grams and a breastfeeding woman about 20 extra grams of protein per day, to be able to produce milk in sufficient quantities for the infant. 
Athletes need more protein to build muscle, and the amount they need depends on the type, duration, and intensity of training.\nExcessive protein intake\nDespite the great importance of proteins in the body, they should be consumed in moderation. Some people adopt a diet rich in protein and poor in carbohydrates for several reasons, including weight reduction and muscle building.\nMedical studies have shown that eating very large amounts of protein can be harmful to the body, especially if this diet is not accompanied by regular exercise. Excess protein pushes the body to form compounds called ketones, and to get rid of ketones the kidneys are forced to work harder, which can cost the body large amounts of fluids and leave it dehydrated. Symptoms that may follow this kidney strain include:\nLack of calcium in the bones, which makes them fragile.\nFeeling tired and dizzy.\nBad breath.\nHigher blood cholesterol, which raises the chances of developing heart disease.\nIncreased chances of kidney stones.\nGout: eating large amounts of animal protein leads to increased production of uric acid, which accumulates in the joints, causing severe pain.\nIncreased risk of cancer: studies show that eating animal protein in large quantities encourages the liver to produce an insulin-like growth factor that in turn promotes the growth of cancer cells.\nLoss of muscle mass.\nA protein-rich, carbohydrate-poor diet also leads to a deficiency of vitamins and fiber.\nProtein sensitivity\nSome people experience allergic reactions when they eat certain types of food, and one of the most common food allergies is protein sensitivity. 
Protein sensitivity is defined as an abnormal reaction by the body's immune system to the protein present in some foods, for example, milk, eggs, peanuts, tree nuts, and shellfish.", "score": 32.75335125321039, "rank": 20}, {"document_id": "doc-::chunk-2", "d_text": "So a 160-pound person should eat about 55 to 60 grams of protein a day.", "score": 32.62265552400884, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "Weigh yourself in pounds, divide by 2.2 and then multiply by .8 to find your protein RDA. However, it’s a general recommendation. RDA measurement should also take your age, ideal body weight, height and activity into consideration. Go to your doctor or a registered dietitian to help you figure out your absolute protein limit.", "score": 32.14983153151333, "rank": 22}, {"document_id": "doc-::chunk-1", "d_text": "At 40, you lose an average of 8% of your muscle mass every 10 years. By age 70, this rate increases up to 15% per decade.\nFocusing on your protein intake can help slow this process, as well as engaging in regular strength training. Eating enough protein also helps build a healthy metabolism and immune system.\nHigh protein foods include:\n- lean meat\n- beans and lentils\n- nuts and seeds\n- dairy products\nWhile the current Dietary Reference Intake (DRI) for protein is 0.36 grams per pound (0.8 grams per kg) of body weight, it’s not a one-size-fits-all model. 
Most research suggests that adults over 50 require more than that to counteract age-related muscle loss.\nIn fact, you may need close to 0.5–0.9 grams per pound (1.2–2.0 grams per kg) to preserve muscle mass and support an active lifestyle.\nTo help make it easier for you to understand how much protein you may need, the National Agricultural Library offers up a DRI calculator that takes into account your age, gender, height, weight and activity level.\nIf you are a 55-year-old female of average height who has a low activity level, the calculator suggests you consume 54 grams of protein, which is based on .36 grams per pound. However, if you have a higher activity level, or you notice you’re losing muscle tone, you may want to increase your protein intake to anywhere between 75 and 135 grams to make sure you are getting enough.\nWhile receiving your nutrients from whole foods is always the most preferred method, it can be hard for most people to consume enough protein from food alone. If you struggle to get enough or you need a quick protein source, you can try using protein powder, protein bars or a drink supplement.\nWhile you may already know that consuming fiber can help promote healthy bowel movements and digestion, it can also support heart health, slow sugar absorption to stabilize blood sugar levels, and help maintain a healthy weight.\nHigh fiber foods include:\n- whole grains such as oats, brown rice, popcorn, and barley\n- beans and lentils\n- nuts and seeds\nRDA for fiber is 25 and 38 grams per day for women and men, respectively. Unlike protein, many people can get enough fiber from food alone, especially if you’re consuming a lot of fruits and vegetables.", "score": 31.75984104941793, "rank": 23}, {"document_id": "doc-::chunk-1", "d_text": "For a 140 pound person, that would equate to 45-50 grams of protein.\n- Those involved in heavier training or endurance type training, may need higher amounts. 
A protein consumption of 1.4 to 2.0 grams of protein per kilogram of body weight is a general recommendation for those with more intense physical activity.\n- In general, protein will account for 10%-30% of calories consumed in a balanced diet.\nIn other words, make sure you fit in some protein regularly! My personal favorites are eggs, beans, the occasional serving of red meat, dairy, and protein supplements.\nYour Turn – Do you think you get enough protein? What’s your favorite way to get protein in your diet?\nMuscle Milk® is an ideal blend of protein, fats, good carbohydrates and 20 vitamins and minerals to provide sustained energy, spur lean muscle growth and help provide recovery after tough days.\nDisclosure: Compensation was provided by Muscle Milk via Glam Media. The opinions expressed herein are those of the author and are not indicative of the opinions or positions of Muscle Milk.", "score": 31.441912680524442, "rank": 24}, {"document_id": "doc-::chunk-1", "d_text": "A good daily protein ratio to strive for is to consume 1 g 
of protein per kilogram of body weight.\nDetermining your daily protein requirement in pounds is as simple as this formula:\nWt. in lbs. / 2.2 = kilograms\nWt. in kg x 2.2 = pounds\nThe result shows roughly how many grams of protein you require daily. Therefore, a 156 lb man (~71 kg) would require approximately 71 grams of protein daily to maintain optimal muscle fuel. From the chart below we see the amount of available protein from the sources listed. Fish/meat sources are based upon a 100g serving.\nFish / Seafood\nMeat / Poultry\n- Cod = 21g\n- flounder (baked) = 24g\n- haddock (steamed) = 23g\n- Tuna = 23g\n- Octopus = 30g\n- lobster (boiled) = 22g\n- Prawns = 23g\n- veal = 30g\n- beef heart = 25g\n- beefsteak = 27g\n- Lamb = 18g\n- Chicken breast = 29g\n- Pork tenderloin = 23g\n- Turkey = 28g\n- rabbit (stewed) 14g\n- Tofu = 48g\n- Parmesan = 42g\n- Camembert (25g serv.) = 23g\n- cheese spread = 18g\n- Pink Lentils = 25g\n- Walnuts = 24g\n- cashews (10 = 20g) = 18g\n- Almonds = 22g\nAdvantages of Protein Derived from Fish/Seafood\nAn advantage of fish and seafood is their sustainability. While overfishing is a genuine concern, no fish or seafood species has been brought to extinction from overfishing. With proper regulation (and anti-pollution) measures in place, formerly over-fished species have always rebounded.\nFish and seafood protein sources also appeal to people concerned about the use of growth hormones and of genetically-engineered grain products being fed to ruminant grazers. 
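The pounds-to-kilograms formula above translates directly into code; this is a minimal sketch (the function name is mine) that reproduces the 156 lb example:

```python
def protein_grams_per_day(weight_lbs, g_per_kg=1.0):
    """Apply the 1 g of protein per kg of body weight ratio suggested above."""
    weight_kg = weight_lbs / 2.2  # Wt. in lbs. / 2.2 = kilograms
    return weight_kg * g_per_kg

# The 156 lb example: 156 / 2.2 is roughly 70.9 kg,
# so about 71 g of protein daily at 1 g/kg.
grams = protein_grams_per_day(156)
```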
Wild-caught or even farmed fish is a viable and sustainable protein alternative.", "score": 30.387402956759395, "rank": 26}, {"document_id": "doc-::chunk-1", "d_text": "So, for a person of 70 kilos, we would be talking about between 140 grams and 175 grams of protein daily.\nWhen distributing this protein at each meal, we must do so by ensuring 0.4-0.55 grams of protein per kilo of weight per meal.\nAs for fats, we can move between 0.8 and 1.2 grams per kilo of weight, although this range can be lowered to 0.5 grams in specific, short-lived protocols or raised above 1.2 depending on personal preferences.\nIn women, it is preferable to move towards the upper end of the range.\nFinally, the rest of the calories are allocated to carbohydrates. In this way, we will generally be moving between 3 and 5 grams of carbohydrates per kilo of weight.", "score": 30.36782430745062, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "How Much Protein Should a Female Take Post-Workout?\nProtein is an integral macronutrient for women looking to maximize performance and aid recovery from hard training.\nHow much protein you need after a training session is linked to how much protein you consume daily. According to the Canadian Society for Exercise Physiology, women need around 0.8 grams of protein per kilogram of body weight each day. This works out to 0.36 grams per pound. If you weigh 120 pounds, this means eating 43 grams per day. If you’re 150 pounds, then you should be eating 54 grams per day.\nStepping It Up\nThe “Journal of the International Society of Sports Nutrition” recommends a slightly higher protein intake, as this can improve adaptations to intense training. The Journal suggests that an intake closer to 1.4 to 2 grams per kilogram, or 0.64 to 0.91 grams per pound, may be more suitable for both men and women. At this amount, a 120-pound female would require 77 to 109 grams per day, while a 150-pound female would need 96 to 137 grams. 
Splitting your daily protein evenly between all your meals is still a wise idea, even at a higher intake.\nThe type of training you do, as well as the overall diet style you’re following, also plays a role in how much protein you need. In an article for the FitnessRX for Women website, dietitian Susan M. Kleiner writes that women involved in endurance training need less total protein than those engaging in a strength workout. Dieting to lose body fat and eating a reduced number of calories also necessitates a higher protein intake to help preserve lean body mass.\nPutting It All Together\nTo work out your own individual post-workout protein needs, multiply your body weight in pounds by 0.6 if you’re an endurance athlete, or 0.8 if you’re a strength athlete to get your total daily protein intake, then divide this by the number of meals you eat each day. A 150-pound woman, for example, would need 90 grams of protein each day if endurance training or 120 grams if strength training. If you are eating five times per day, this would work out to 18 grams of protein at a post-endurance workout meal, or 24 grams post-strength training.", "score": 30.30894243310179, "rank": 28}, {"document_id": "doc-::chunk-12", "d_text": "IF you’re doing strength training regularly and looking to build muscle…\nShoot for .7-1 gram of protein per pound of body weight a day.\nIf on the other hand you’re trying to just maintain muscle, you don’t need to load such amounts of protein.\nYou’d do well to start with the nutrition organization guidelines…and then scale in response to the other factors below…\nYour Current Weight\nIt may please you to learn that protein-rich diets have also been found to be effective in driving weight loss. 
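The per-meal arithmetic worked through above (a 150 lb woman eating five meals a day, with the 0.6 and 0.8 g/lb multipliers given in the text) can be sketched as follows; the helper function is hypothetical:

```python
def per_meal_protein(weight_lbs, g_per_lb, meals_per_day):
    """Split a bodyweight-based daily protein target evenly across meals."""
    daily_g = weight_lbs * g_per_lb
    return daily_g, daily_g / meals_per_day

# 150 lb endurance athlete (0.6 g/lb): about 90 g/day, 18 g per meal over 5 meals.
# 150 lb strength athlete (0.8 g/lb): about 120 g/day, 24 g per meal over 5 meals.
```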
So, if losing fat is your top priority, you can deliberately increase your protein intake to speed up the process.\nEating more protein keeps you feeling full, naturally suppressing your appetite and making it easier to eat fewer calories while accelerating your metabolism.\nBut if you’re not trying to shed weight, you can consume protein more moderately.\nInarguably, elderly people need a bit more protein than younger people to stay healthy and maintain their muscle mass.\nAs you age, your body gradually loses some of its efficiency when it comes to repairing damaged muscles.\nA 19-year-old college student might be able to hit the gym 3 days in a row without eating much protein and still see amazing results. But a 60-year-old will definitely require more.\nA study found that a baseline intake of between .5 and .6 grams of protein per pound of body weight works well for older people.\nPhysical Activity Level\nBy and large, the more active you are, the more protein you require…\nAnd this holds true even if you’re not doing strength-training.\nEndurance or long-distance athletes like marathoners and triathletes spend a lot of time training and breaking down muscles. If you belong to that camp, or you hit the gym regularly, aim for .8-1 gram of protein per pound of body weight.\nYour occupation also matters…take it into account as well…\nIf you’re working on a construction site and then hitting up the Crossfit box 4 times a week, 1.5-2 grams per pound might be just ideal to repair your muscles and take you closer to your physique goals.\nBut I want to make a disclaimer…\nIn spite of the many numbers and guidelines herein, you absolutely need not worry overly about how much protein you’re consuming every day…\nAnd especially if you’re already following a Paleo diet and eating animal products with nearly every meal. 
This habit alone puts you on the right track towards meeting your protein needs.", "score": 29.773931310610426, "rank": 29}, {"document_id": "doc-::chunk-1", "d_text": "~Most adults benefit from an intake greater than the recommended one (0.8g/kg/day)\nIt also talks about protein distribution and how we tend to eat more protein in our meals as the day goes on. This suggests that we eat very little at breakfast, which is not optimal as we have been fasting overnight, and that we eat large amounts at dinner.\nThe blue highlighted portion shows maximum muscle protein synthesis. We need at least 15g of protein to stimulate protein synthesis, so take a peek at a typical breakfast…\nAlthough they stated that this uneven distribution has not really been shown to negatively affect growth of children or adults, it may have large impacts on older adults. Here, protein intake is critical as they tend to be in a negative nitrogen balance (or state of protein breakdown) and need more than a typical adult to maintain neutral or positive balance.\nFinally, take note that protein is more satiating. So when a meal is proportionally more balanced with protein (rather than mostly carbohydrates, like the typical American breakfast) you will be fuller longer. Alternatively, carbohydrate-rich meals tend not to keep you full as long. This doesn’t mean that carbs are bad; it simply runs the risk of individuals overeating because they find themselves hungry more often.\nSo could protein be an aid in better weight management?\nReally cool podcast diving into the differences between men and women for training and nutrition in addition to the genetic potential that women have for muscle development specifically.\nRelative to their starting point (which is often different), females have been shown to have the same relative muscle growth and protein synthesis potential as men! 
So if you take a man and woman who weigh the same, you can find that they can grow the same proportion of muscle. Differences in the body’s fatty acid composition were then cited as a potential reason why we don’t see very muscular women, along with inefficient or non-optimal training for their physiology, and birth control. This remains controversial, but these are some of the recent findings.\nThis is not to create fears in women that they will ‘blow up’ like a man when they touch a weight, because as you can see from a subjective perspective, the number of women walking around with the same level of muscularity as a male is quite small.", "score": 29.161861614543596, "rank": 30}, {"document_id": "doc-::chunk-1", "d_text": "So basically, 0.8g/kg is how much protein you need to support basic life functions if you live a sedentary life.\nAnd this is the issue. These numbers are calculated by using the nitrogen test above, yet they do not consider the role of exercise. As mentioned, the RDA is set to address the nutritional needs of the large majority of the population to maintain healthy life functions. IT IS NOT an individualistic diet plan. Athletes have very different nutritional needs compared to sedentary populations who never train, which will affect the optimal amount of protein.\nStill, what’s interesting to think about is that this also implies that 0.8g/kg of protein would cause a deficiency in 2.5% of the population. It also means that if the other 97.5% ate less than this, they too would be in a deficit. So why are so many nutritionists recommending the minimal threshold? Seems as if they like living dangerously…\nWe also need to realize that there is a difference between an amount that’s sufficient for healthy life function and an amount that’s optimal for improved performance. It’s like saying a Natural Ice and a nice microbrew are the same because they’re both “sufficient” to get you drunk. 
However, one tastes like piss and leaves you with an awful hangover with a splitting headache and the other is a nice microbrew. To be clear, this is not a criticism of the RDA as its purpose is not to suggest protein intake for weight lifters.\nSo now, let’s talk about how much protein bodybuilders and weightlifters should eat for optimal performance.\nWhen it comes to studies looking at the optimal amount of protein athletes and weightlifters should eat, research is pretty clear that their needs far surpass the RDA numbers.\nThe Academy of Nutrition and Dietetics, Dietitians of Canada, and the American College of Sports Medicine release their stand on nutritional needs for athletes on a regular basis. Concerning protein, they recommend consuming 1.2–2.0 g/kg per day1. Still, they even suggest higher intakes of protein could be necessary during specific times, such as when losing weight or cutting.\nThe International Society For Sports Nutrition (ISSN) promotes a protein intake of 1.4 – 2.0 g/kg and, again, suggests that some circumstances could justify even higher amounts2.", "score": 28.49039107619702, "rank": 31}, {"document_id": "doc-::chunk-1", "d_text": "This particular article didn’t give specific post workout recommendations, but generally it’s recommended to consume 10-20 grams of protein in the recovery window (within 30-60 minutes post workout).\nDaily Protein Recs:\nAnother key point the article (which was based on a recent study) suggested was that the optimal amount of protein at meals for athletes is about 30 grams. Beyond this amount there are no additional health benefits and you run the risk of storing the excess protein as fat. Fall significantly short of this number and your muscles may not be getting as much protein as they need, which means you could lose muscle mass. 
The 30 grams per meal recommendation actually equates to a higher daily protein intake than what typical recommendations have called for, depending on body weight, which this study did not factor in. According to traditional guidelines, the minimum amount of protein necessary to prevent deficiency is 0.8 grams/kg of body weight per day (0.36 grams/lb of body weight). That equals 49 grams for a 135 pound person. However, that’s the minimum to prevent problems and if you are an athlete you definitely need more. The typical recommendation is for endurance athletes to consume 1.2-1.4 grams of protein/kg of body weight per day (0.54-0.64 grams/lb). So a 135 pound runner, for example, would need about 73-86 grams of protein a day, slightly less than 30 grams x 3 meals. Strength athletes need more, 1.4-1.7 grams/kg of body weight per day (0.64-.77 g/lb). Whether you go with the body weight recommendation or the 30 grams times 2 meals, these protein levels are not difficult to obtain if you are a meat eater. The key is to space your protein intake more evenly throughout the day, as it’s likely that your breakfast falls short. An egg, for example, has 6 grams of protein while a 6 oz steak has about 42. Vegetarians will have to work harder to make sure they meet their protein needs. It’s okay to add a protein powder or bars as a supplement if you are not getting enough protein from food alone but aim to meet your needs from food first, supplements second. Some good sources of protein are lean meats, chicken, fish, eggs, soy, dairy, nuts and nut butters, seeds, and beans.", "score": 28.4472268876571, "rank": 32}, {"document_id": "doc-::chunk-3", "d_text": "The protein requirement is calculated by converting 175 pounds to 79.5 kilograms, and then multiplying 79.5 kilograms by 0.8 grams, the recommended daily protein intake based on body weight. The AI for essential fat is 18.6 grams for adult men. 
However, because whole plant foods naturally contain fat, if you eat a WFPB diet and don’t overdo the nuts and seeds, you’ll get more than the bare minimum of essential fats – around 40 grams is more likely.\nPROTEIN: 63.6 grams at 4 calories per gram = 254 calories from protein.\nFAT: 40.0 grams at 9 calories per gram = 360 calories from fat.\nCARBS: The remaining calories are carbohydrate, which are calculated by subtraction:\n2,500 (total) – 254 (protein) – 360 (fat) = 1,886 calories from carbohydrate.\nIn terms of percentage of calories, this comes out to approximately 10% protein, 14.5% fat, and 75.5% carbohydrate.", "score": 28.110220051243623, "rank": 33}, {"document_id": "doc-::chunk-2", "d_text": "Here is the conclusion of their analysis:\n“Although Natural Hygiene and Life Science do not endorse gram-counting, calorie-counting or a preoccupation with minimal daily requirements, it seems that a reasonable estimate of the protein needs of an adult is probably in the 25 to 30 grams daily range — or about 1 gram per five pounds of body weight. If a person eats a varied diet of fruits, vegetables, nuts, seeds and sprouts, he is assured that he will meet this protein requirement, along with all the other nutrient needs.”\nNumerous other studies have indicated that the human body’s true protein requirement for good health is less than 5% of the diet.\nRaw Food Explained goes on to say, “During the last sixty years, several researchers (Rose, Boyd, Berg, et al) all independently proved that between 3.7% and 4.65% of the total food intake was all the protein necessary to maintain good health. These percentages are equivalent to about 24 to 30 grams of protein.”\nThis protein estimate is further validated by the Tarahumara indigenous people of Northwestern Mexico renown for their amazing ability to run 200 miles in two days at elevations of 4-8,000 feet. 
Their diet consists mostly of maize, beans, greens, and squash, while animal proteins are rarely consumed, comprising only 5% of the diet.\nConsuming more protein than this small amount can lead to excess-protein-related health problems (considered more debilitating than protein deficiency) which may include:\n1. Excessive nitrogen buildup in the muscles resulting in chronic fatigue;\n2. Protein poisoning resulting in headaches, general achiness, and allergy-type symptoms such as a burning of the mouth, lips and throat, rashes, etc. Many allergy symptoms may well be the result of protein poisoning.\n3. A disturbance of the delicate natural balance of the human hormonal system due to the abnormal amounts of hormones (for the human) existing in the animal parts eaten;\n4. Additional strain on the liver, kidneys and adrenals while attempting to eliminate the toxins created by the excessive protein, uric acids, and foreign hormones;\n5. Excessive body acidity resulting in joint pains, bone deterioration, and arthritic symptoms;\n6. Digestive complaints due to the inefficient digestion of the meat, which is impossible to fully digest (particularly once antibiotics have been administered at any point in life, but especially if more recently);\n7.", "score": 27.566517800857167, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Getting Protein into Your Diet: How Much Do You Really Need?\nMany people believe the more protein they eat, the easier it will be to build muscle, lose weight or improve overall nutrition. But most Americans get all the protein they need, especially the kind from animal sources. A diet that combines lean protein with whole grains, wheat germ, fruits and vegetables offers balanced nutrition that contributes to a healthy lifestyle.\nThe Centers for Disease Control and Prevention (CDC) recommends that average adults in the United States get 10 percent to 35 percent of their daily calories from protein foods. 
That translates to:\nAthletes need a bit more protein, but not as much as many think. The Academy of Nutrition and Dietetics and the American College of Sports Medicine recommend the following for power and endurance athletes, based on body weight:\nIt’s not difficult to get the protein you need. Here are some common protein servings:\nImportance of protein\nProteins are found in every part of our bodies, including the enzymes that help chemical reactions occur and hemoglobin that carries oxygen in our blood. Protein is constantly being broken down into amino acids and reconfigured to create the proteins our bodies need. Our bodies can make all but nine of these, which are called essential amino acids. The nine our bodies don’t make—but need—must come from eating protein-rich foods.\nAnimal protein vs. plant protein\nAnimal proteins contain all essential amino acids, while plant proteins are “incomplete,” meaning they lack some essential amino acids. Combining certain plant proteins, such as rice and beans or hummus and pita bread, creates “complete” proteins, with all the essential amino acids found in animal protein.\nIt’s important to vary your protein sources. For example, while steak and other red meat are good sources of protein, they contain saturated fat, which in excessive amounts can clog arteries and may lead to chronic diseases. Experts recommend consuming leaner types of protein, such as poultry without skin, non-fat or low-fat dairy products, fish, tofu, edamame and roasted soy nuts. You can also add protein to your meal with Kretschmer Wheat Germ. Try our Warm Cabbage, Apple and Wheat Germ Salad or Protein-Rich Smoothie with Wheat Germ and Chia Seeds.", "score": 27.108163969253408, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "“How much protein should I eat” is such a popular question in the diet world. Protein is such an important macro nutrient. 
Under-eating it could lead to muscle loss, fatigue, and hunger.\nThe problem is that there are so many mixed answers out there it’s hard to sort out fact from fiction sometimes.\nRDA of Protein\nThe recommended daily amount of protein is .8 g per kilogram of body weight. That is about .36 grams per pound.\nBUT, that’s just the absolute bare minimum amount of protein your body needs.\nThe truth is that the ideal amount of protein can vary depending on your age, activity level, size, and overall health.\nHow much protein for muscle gain\nThe Academy of Nutrition and Dietetics, Dietitians of Canada, and the American College of Sports Medicine recommend protein intake, in combination with physical activity, be between 1.2 and 2 grams of protein per kilogram of body weight. This is about .5-.9 grams per pound of body weight.\nSimilarly a review published in 2018 by the Journal of the International Society of Sports Nutrition concluded that the ideal daily protein intake be around 1.6 -2.2 grams of protein per kilogram of body weight (.7-1 gram per pound) split over at least 4 meals.\nThose numbers are fairly similar. Because of this I would aim for the range of .5-1 gram of protein per pound of body weight. This is definitely higher than the minimum RDA.\nThis means an adult male weighing 200 pounds, for example, could have a protein range between 100-200 grams of protein and still see the benefits of a higher protein intake. Contrarily, their RDA would fall at about 72 grams.\nI personally keep my protein range between .7-.8 grams per pound of body weight. This range seems to work really well for me and allows for a balanced & sustainable diet.\nI find that when you stick to the higher end of the recommended range, it is harder to hit those protein goals consistently while still enjoying your diet. 
The best diet is the one that you can stick to.\nAge and protein consumption\nSo I mentioned earlier that age can play a part in how high your protein goal is and that is true. As it turns out, as we age our bodies become less efficient at using the amino acids from protein. This means as we age, our protein goal should increase.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "EatingWell's nutrition editor tells you how much protein you need, how much protein is too much and identifies some health\nrisks of high-protein diets.\nProtein is a must-have nutrient: your body uses it to generate and repair cells. And the building blocks of protein—called\namino acids—are needed to build muscle, make antibodies and keep your immune system going. Compared to fat and carbs, protein\npacks a bigger punch when it comes to filling you up and keeping you satisfied.\nBut don’t worry that you’re not getting enough of this powerhouse nutrient. Protein malnutrition is nearly nonexistent in the\nU.S. In fact, most of us eat more than we need: women get, on average, 69 grams of protein per day. The Institute of\nMedicine (IOM) recommends women get 46 grams daily (that’s equal to about 6 ounces of chicken). Men need 56 grams, yet\nthey’re actually eating almost double.\nThere’s no official daily maximum for protein, but IOM suggests capping it at 35 percent of your calories (that’s 175 grams\nfor a 2,000-calorie diet). Heed that advice for a few reasons: High-protein diets usually promote foods that deliver\nunhealthy saturated fat (meat, cheese). Eating too much protein may also increase your chances of kidney stones, as well as\nyour risk of osteoporosis. (When protein is digested, it releases acid that is neutralized by calcium, which is pulled from\nyour bones.) 
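The per-pound arithmetic in the blog passage above (0.36 g/lb as the RDA baseline, 0.5-1 g/lb for muscle gain, with a 200 lb adult as the worked example) reduces to a few multiplications. A minimal sketch; the function name is illustrative:

```python
def protein_targets_g(weight_lb):
    """Daily protein targets in grams for a body weight in pounds.

    Returns (rda, low, high): the RDA baseline (~0.36 g/lb, i.e. 0.8 g/kg)
    and the 0.5-1 g/lb range the passage cites for muscle gain.
    """
    return weight_lb * 0.36, weight_lb * 0.5, weight_lb * 1.0

# The passage's 200 lb example: about 72 g RDA, 100-200 g for muscle gain.
rda, low, high = protein_targets_g(200)
```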
In one study, women who ate more than 95 grams of protein a day were 20 percent more likely to fracture their\nforearm than those who got less than 68 grams daily.\nBottom line: It’s possible to eat too much protein, so don’t go overboard. Choose healthy\nproteins—lean meat, poultry, low-fat dairy, fish, soybeans, quinoa. Beans, peas, nuts and seeds also supply protein, but\nthey’re “incomplete” (lack at least one essential amino acid), so eat a variety of those.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-1", "d_text": "How much protein do you need?\nIt depends on your goal, but here are general guidelines to use as a starting point. Aim for getting 25-35% of your daily calories from protein. You may need to adjust up or down depending on your individual metabolism, goals, and exercise regimen.\nHow many grams of protein are in foods?\n- 3.5oz lean beef tenderloin = 29g\n- 4oz salmon = 29g\n- 3.5oz chicken breast = 30g\n- 1 cup of lentils = 18g\n- 5oz Greek yogurt = 14g\n- 1 large egg = 6g", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "Do older adults have higher protein requirements than younger adults?\nThere is still limited research about the protein needs of older adults, but the available research has raised concerns that older people, especially those over 70 years, may need a relatively high amount of protein – 25% more – in their diet compared to younger adults.\nThe increased need is thought to be due to significantly less efficient use of protein by the body.\nAs the overall energy needs of older people are less it is important that the food consumed is 'nutrient-dense' and that foods containing protein are eaten at most meals.\nAs protein needs increase in times of illness or following surgery; that is, when the body is repairing itself; older adults may need more for this reason also.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "When combined all 
of the nine EAAs are present, making a complete protein source which your body can synthesize for cellular growth and repair! For those of you avoiding animal produce you’ll be pleased to discover that the following are considered complete vegan protein sources: buckwheat, beans & rice (when combined), chia seeds, Ezekiel bread, hemp, hummus & pitta (when combined), mycoprotein (Quorn), soy, spirulina, quinoa etc!\nHow Much Protein Do I Need?\nNow we have established a basic understanding of protein sources and the function of protein in the human body, the next question is “how much protein do I need?”\nThere’s no conclusive answer to the question “how much protein do I need?” Many factors influence protein requirements and the extent to which your body will utilize the protein it is provided. For example, an individual who has a lean body mass of 250lb and participates in hard physical exercise frequently will have a higher demand for protein than a sedentary elderly lady who weighs 125lb! Factor in variables such as anabolic compounds, which enhance metabolic turnover, and both the demand for protein and the efficiency of protein synthesis increase further.\n1g OF PROTEIN | PER POUND (LB) | PER DAY\nAs a basic rule the following is widely regarded as a general guideline for optimal muscle growth: 1g of protein, per lb of lean body mass, per day. For example, an individual with a lean body mass of 250lb would be aiming for a minimum of 250g of protein per day. Those who train very intensely may benefit by raising this number from 1g per lb to 1.5g per lb. Research suggests that after this level you reach a point of diminishing returns. However, there is an exception to the rule!
Those taking anabolic compounds AND training intensely may benefit from up to 2g of protein per lb given the enhanced rate of protein synthesis and increased metabolic turnover associated with anabolic compounds.\nTo calculate your protein requirement simply take your lean body mass (LBM) and multiply it by 1, 1.5 or 2, depending on which of the above most closely resembles your current situation.\nWhen Is The Best Time To Have Protein?\nYour body requires protein at all times. However, there are certain times when the demand for protein increases, immediately after intense exercise, for example.", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-1", "d_text": "Overweight women generally need 1,000 to 1,600 calories, while overweight men usually require 1,200 to 1,600 calories per day for weight loss, according to the National Heart, Lung and Blood Institute.\nAccording to the Institute of Medicine, all adults should aim to consume 45 to 65 percent of their daily calories from carbohydrates. Carbs provide 4 calories per gram. Therefore, if you’re consuming 1,600 calories a day aim for 180 to 260 grams of carbs, if you eat 2,000 calories a day shoot for 225 to 325 grams and if you consume 2,400 calories aim for 270 to 390 grams of carbs each day. Brown University reports that athletes should aim for about 65 percent of their calories from carbs; however, since low-carb, high-protein diets can be effective for weight loss, if you’re overweight aim for closer to 45 to 50 percent of your calories from carbohydrates.\nHealthy Proteins and Carbs\nChoosing a wide variety of nutritious proteins and carbs each day will help people over 50 meet their daily protein and carb needs. Healthy, high-protein foods include lean meats, skinless poultry, seafood, egg whites, soy products, seitan, legumes, low-fat dairy products, nuts and seeds. 
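The lean-body-mass guideline above (1, 1.5 or 2 g per lb of LBM, per day) is a single multiplication; a sketch with an illustrative function name, echoing the passage's rule of thumb rather than stating medical advice:

```python
def daily_protein_g(lean_mass_lb, multiplier=1.0):
    """Daily protein in grams: lean body mass (lb) times the guideline multiplier.

    multiplier: 1.0 for the basic rule, 1.5 for very intense training,
    2.0 for intense training combined with anabolic compounds
    (the passage's rule of thumb).
    """
    if multiplier not in (1.0, 1.5, 2.0):
        raise ValueError("guideline multipliers are 1, 1.5 or 2 g per lb")
    return lean_mass_lb * multiplier

# The passage's 250 lb LBM example: 250 g/day at the baseline multiplier.
daily_protein_g(250)
```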
Nutritious, or \"good,\" carbohydrates are found in low-fat milk and yogurt, whole grains, legumes, fruits, vegetables, nuts and seeds. Limit or avoid high-fat meats, full-fat dairy foods, added sugars, sugary drinks and sweets.\n- Centers for Disease Control and Prevention: 2008 National Health Statistics Reports\n- Nutrition and Metabolism: Dietary Guidelines Should Reflect New Understandings about Adult Protein Needs\n- Aging Health: Optimizing Bone Health in Older Adults: The Importance of Dietary Protein\n- Clinical Nutrition: Optimal protein intake in the elderly\n- U.S. Department of Agriculture; U.S. Department of Health and Human Services: Dietary Guidelines for Americans 2010\n- Institute of Medicine: Dietary Reference Intakes: Macronutrients\n- National Heart, Lung and Blood Institute: How are Overweight and Obesity Treated?\n- Brown University: Sports Nutrition\n- Mature Older Woman image by Mat Hayward from Fotolia.com", "score": 26.381984814197185, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "Eating more protein may not benefit older men, a new research has found. According to a study conducted by Brigham and Women’s Hospital, older men who consumed more protein than the recommended dietary allowance did not see increases in lean body mass, muscle performance, physical function or other well-being measures. Regardless of whether an adult is young or old, male or female, their recommended dietary allowance (RDA) for protein, set by the Institute of Medicine, is the same: 0.8-g/kg/day.\nMany experts and national organizations recommend dietary protein intakes greater than the recommended allowance to maintain and promote muscle growth in older adults. 
However, few rigorous studies have evaluated whether higher protein intake among older adults provides meaningful benefit.A randomised, clinical trial conducted by Brigham and Women’s Hospital investigator Shalender Bhasin and colleagues has found that higher protein intake did not increase lean body mass, muscle performance, physical function or other well-being measures among older men. “It’s amazing how little evidence there is around how much protein we need in our diet, especially the value of high-protein intake,” said corresponding author Bhasin. “Despite a lack of evidence, experts continue to recommend high-protein intake for older men. We wanted to test this rigorously and determine whether protein intake greater than the recommended dietary allowance is beneficial in increasing muscle mass, strength and wellbeing.”\nThe clinical trial, known as the Optimizing Protein Intake in Older Men (OPTIMen) Trial, was a randomised, placebo-controlled, double-blind, parallel group trial in which men aged 65 or older were randomized to receive a diet containing 0.8-g/kg/day protein and a placebo injection; 1.3-g/kg/day protein and a placebo injection; 0.8-g/kg/day protein and a weekly injection of testosterone; or 1.3-g/kg/day protein and a weekly injection of testosterone.", "score": 25.65453875696252, "rank": 42}, {"document_id": "doc-::chunk-1", "d_text": "Furthermore, what constitutes enough protein varies from person to person depending on their body type. For these reasons, no matter who you are or what your personal goal is, there is never truly an average amount of protein that works for everyone.\nThis article will go into more detail about the benefits of eating a balanced diet high in protein, as well as some tips for how to achieve those benefits for yourself.\nRecognized health benefits of protein\nRecent studies suggest that too little dietary protein can be just as bad for your health as no protein at all. 
That’s because most of us don’t get enough of it — even when we say we do.\nA normal, healthy person needs 10 to 35 grams (about 0.35 to 1.2 ounces) of protein per day. Most people are deficient in this important nutrient, however.\nOne reason is that Americans tend to eat less meat than anyone else in the world. A 2008 Harvard School of Public Health study found that only 9% of 1,200 adults surveyed consumed more than two servings of red or white meat every week. Even if you include fish as a source of protein, only 24% reported eating three or more meals containing one ounce each of meat weekly.\nAnother reason many people feel they don’t need much protein is that some foods, like bread and pasta, are high in fiber. Therefore, they think they don’t have to add any extra protein to their diets.\nToo much protein\nConsuming too much protein can be disastrous for your health. While eating enough protein is important, making sure you’re not consuming more than the recommended amount of protein is just as essential to good nutrition.\nToo much protein can have negative effects on your skin and body. If you are very active or participate in sports, an adequate intake of protein may be unnecessary, but for people who sit around most of the day, high-protein foods like meat, nuts and legumes are needed to ensure healthy bone growth and muscle development.\nHowever, with all these different types of food, it becomes difficult to know how much protein you are actually getting. There isn’t really any way to tell unless you do a nutritional test, which some fast food chains offer.\nBut what if there was a standard way to measure the average daily protein intake? According to Harvard Medical School, the daily requirement is about 0.8 grams per kilogram of body weight (0.8 g/kg).
Cheskin.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-1", "d_text": "Think “if you don’t use it, you lose it.” This is why, as one ages, it is important to participate in a regular resistance exercise training program. Another reason for the decline in muscle mass and strength is “anabolic resistance”. Characteristics of anabolic resistance include a blunted protein synthesis response to resistance exercise and dietary protein, including protein consumed immediately post exercise.\nTherefore, with advancing age the amount of dietary protein necessary to maintain muscle mass increases.\nDr. John Ivy on the connection between aging, protein, and muscle mass.\nDietary Protein Requirements\nDietary protein is a vital nutrient that supplies needed amino acids that are used to make enzymes, hormones, neurotransmitters, and antibodies, and serve as the building blocks for repair and growth of all tissues of the body, including muscle. There are 20 amino acids that the body requires for these purposes. Of these 20 amino acids, 11 can be produced in the body itself and are referred to as non-essential amino acids, while 9 are referred to as essential amino acids because they have to be obtained through dietary means.\nNot all dietary proteins are equal in nutrient value. As mentioned above, protein is made of amino acids and each protein has a unique amino acid profile. Proteins can be classified as complete or incomplete proteins.\n- A complete protein is one that has an adequate proportion of all 9 essential amino acids necessary for the dietary needs of humans. Complete proteins are typically animal-based proteins such as meat, fish, milk, cheese and eggs.
However, there are some plant sources of protein that are considered complete, such as soy, quinoa, buckwheat, hemp, and spirulina.\n- An incomplete protein is one that lacks one or more of the essential amino acids or does not have an adequate proportion of one or more of the essential amino acids. However, just because a protein is incomplete does not mean that it is not beneficial. Meals are not generally made from a single food item, and combining incomplete proteins in the right way can provide the essential amino acids required by the body. Proteins that are combined to provide a complete amino acid profile are known as complementary proteins. Examples are brown rice and black beans or kale and almonds.\nAs mentioned above, the RDA for protein is 0.36 g per lb. of body weight, and represents the quantity of protein that should be consumed daily to meet population needs and to prevent deficiency.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-2", "d_text": "The required intake and ability to digest protein differ widely between people. While this article provides a guideline, it is up to each person to decide what works best for them.\n(1) Antonio J, Kalman D, Stout JR, Greenwood M, Willoughby DS, Haff GG. 'Essentials of Sports Nutrition and Supplements'. International Society of Sports Nutrition. Humana Press 2008\n(2) Dangin, M., Y. Boirie, C. Garcia-Rodenas, P. Gachon, J. Fauquant, P. Callier, O. Ballevre, and B. Beaufrere. 'The digestion rate of protein is an independent regulating factor of postprandial protein retention.' Am. J. Physiol. Endocrinol. Metab. 280: E340-E348, 200\n(3) Bilsborough S & Mann N. 'A Review of Issues of Dietary Protein Intake in Humans.'
International Journal of Sport Nutrition and Exercise Metabolism, 2006, 16:129-152", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Eat Healthy: Men Over 50\nA nutritious diet is an important component of healthy weight management and disease prevention in men over 50. A healthy diet is associated with an increased quality of life and higher rate of survival in older adults, according to a study published in a 2011 edition of the “Journal of the American Dietetic Association.” Eating healthy consists of consuming the appropriate number of calories daily and choosing nutrient-dense foods.\nAs men age, they require fewer and fewer calories for healthy weight maintenance. According to the Dietary Guidelines for Americans 2010, men over age 50 need 2,400 to 2,800 calories if they are active, 2,200 to 2,400 calories if they’re moderately active and 2,000 to 2,200 calories daily if they are sedentary. The Dietary Guidelines for Americans 2010 classify men as active if they walk more than 3 miles per day and moderately active if they exercise the equivalent of walking 1.5 to 3 miles daily.\nDue to the decrease in muscle and bone mass associated with aging, men over age 50 generally require more protein than the recommended dietary allowance, or RDA, according to reviews published in a 2010 edition of “Aging Health” and in a 2008 edition of “Clinical Nutrition.” The RDA is 56 grams of protein daily for men over age 50. The 2008 review published in “Clinical Nutrition” suggests that older men can optimize their health and function by consuming 1.5 grams of protein per kilogram of body weight, or about 0.68 grams of protein per pound of body weight daily, which is equivalent to 112 grams of protein per day for a 165-pound man.\nFiber is important for healthy weight maintenance, managing cholesterol levels and preventing constipation in older adults.
Most Americans fail to meet their daily fiber needs, according to a 2009 review published in “Nutrition Reviews.” Authors of this review recommend consuming 14 grams of fiber for every 1,000 calories you consume, which means men over 50 should aim for 34 grams of fiber daily when consuming a 2,400-calorie diet. Healthy, fiber-rich foods include whole grains, legumes, fruits, vegetables, nuts and seeds.\nVitamins and Minerals\nMany men over 50 can meet their nutritional needs, including vitamin and mineral requirements, by eating a well-balanced diet.", "score": 25.604531422348277, "rank": 47}, {"document_id": "doc-::chunk-2", "d_text": "Additional food should be added to these menus to provide adequate calories and to meet requirements for nutrients besides protein.\n|Meal||Food/portion Size||Protein (grams)|\n|Breakfast||1 cup oatmeal||6|\n|1 cup soymilk or low-fat milk||7-8|\n|Lunch||2 slices whole wheat bread||5|\n|1 cup Vegetarian baked beans||12|\n|Dinner||5 oz firm tofu||11|\n|1 cup cooked broccoli||4|\n|1 cup cooked brown rice||5|\n|Snack||2 Tbsp peanut butter||8|\nProtein Recommendation for 170 lb Male [based on 0.8 gram of protein per kilogram body weight for 77 kilogram (170 pound)]: 61 grams\n|Meal||Food/portion Size||Protein (grams)|\n|Breakfast||1 whole wheat bagel||9|\n|2 Tbsp almond butter||5|\n|Lunch||6 oz. low-fat yogurt or soy yogurt||6|\n|1 baked potato||4|\n|Dinner||½ cup cooked lentils||9|\n|1 cup cooked quinoa||9|\n|Snack||1/4 cup cashews||5|\nProtein Recommendation for 130 lb Female [based on 0.8 gram of protein per kilogram body weight for 59 kilogram (130 pound)]: 47 grams\nAdditional food should be added to these menus to provide adequate calories and to meet requirements for nutrients besides protein.\nThe main problem with protein is excess animal protein.\nThe average American consumes close to 100 grams of protein a day4. This is almost double the amount of protein that is needed. 
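As a cross-check on the sample menus above, summing the per-item protein values for the 130 lb female menu reproduces the stated 47-gram target, which is itself just 0.8 g per kg of body weight. A sketch (item names abbreviated from the table, function name illustrative):

```python
# Protein (g) per item, from the 130 lb female sample menu in the passage.
MENU_PROTEIN_G = {
    "whole wheat bagel": 9,
    "almond butter, 2 Tbsp": 5,
    "low-fat or soy yogurt, 6 oz": 6,
    "baked potato": 4,
    "cooked lentils, 1/2 cup": 9,
    "cooked quinoa, 1 cup": 9,
    "cashews, 1/4 cup": 5,
}

LB_PER_KG = 2.2046  # pounds per kilogram

def rda_protein_g(weight_lb):
    """RDA protein in grams: 0.8 g per kg of body weight."""
    return 0.8 * (weight_lb / LB_PER_KG)

menu_total = sum(MENU_PROTEIN_G.values())  # 47 g, the stated menu total
rda_target = rda_protein_g(130)            # ~47 g for 130 lb (~59 kg)
```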
Contrary to popular belief, excess protein cannot be stored. Any excess protein is either converted to sugar and burned as energy, or converted into fat with its waste products eliminated through the kidneys. When protein is metabolized, some toxic substances such as urea are created during the breakdown process because of the nitrogen content. Sulfur, a by-product of the breakdown of amino acids such as methionine and cysteine, also must be eliminated and is turned into sulfuric acid. These then must be eliminated through the kidneys. Therefore, one of the adverse side effects of a high intake of protein is that a tremendous strain is put on the kidneys to eliminate the waste byproducts.", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "How Much Protein Do You Need?\nProtein is having a moment. From Atkins to Paleo, it seems like everyone is eating a high-protein diet. Even JetBlue is offering cricket (yes, cricket!) protein bars to airline passengers. Whether or not you eat a high-protein diet (or an insect snack mid-flight), chances are you're concerned about your intake of this important nutrient. Are you eating enough? Eating too much?
Here's the skinny:\nAre You Getting Enough Protein?\n\"In a generally healthy diet, about 10 to 20 percent of your total calories should be coming from protein,\" says Alison Massey, MS, RD, Director of Diabetes Education at The Center for Endocrinology at Mercy Medical Center in Baltimore.\nIndividuals with certain medical conditions or specific needs may need more or less protein, but, according to the Centers for Disease Control and Prevention, general recommendations for the average person in grams (g) per day are:\nMen ages 19 years and older: 56g\nWomen 19 years and older: 46g\nPregnant or nursing teenagers and women: 71g\nChildren ages 1-3 years: 13g\nChildren ages 4-7 years: 19g\nChildren ages 9-13 years: 34g\nBoys ages 14-18 years: 52g\nGirls ages 14-18 years: 46g\nWhile athletes may benefit from increased protein intake pre- and post-workout, the majority of us (even regular exercisers) are fine with the above guidelines.\nThe Problem With too Much Protein\nThe good news about protein intake is that most of us get enough-even those of us who follow a vegetarian or vegan diet. Plant-based proteins such as beans, nuts, and tofu help to create a balanced diet. The key word here being balanced, which brings us to the problem with high protein diets..\n\"When you eat a high protein diet and exclude carbohydrates-which is your body's main source of fuel-your body begins to burn its own fat for energy,\" explains Heidi McIndoo, MS, RD, author of The Complete Idiot's Guide to 200-300-400 Calorie Meals. \"While that may sound good, it actually leads to a condition called ketosis. 
Ketosis may help reduce your appetite, but it also increases your fluid loss, making any initial weight loss often just a loss of fluids.\"", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-1", "d_text": "Dehydration: A study found that as protein intake increased, hydration went down, likely because the body has to use more water to flush out that additional nitrogen. Make sure to drink more water!\nBest sources: Animal products are some of the highest-protein foods, like chicken, turkey, fish, beef, shrimp, lamb, scallops, eggs, and sardines. But then there are soybeans, nuts and nut butters, legumes, as well as dairy products like milk, cheese, Greek yogurt, and cottage cheese, which are all excellent sources. All veggies have a little protein, so you’re sure to get protein with almost any nutrient-dense food you choose.\nCurrent consumption: Despite protein showing up on more food labels, many of us are already getting way more than our 46 or 56 grams. In fact, men ages 20 plus get an average of 98.9 grams of protein a day, and women ages 20 plus get 68 grams, according to the U.S. Department of Agriculture's What We Eat In America.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "Why Athletes Need a Healthy Dose of this Nutrient\nProtein is an essential part of human life. You have it in every cell, muscle, and organ in your body. You need it for growth, for strength and for energy.\nThis means that it is also an essential part of our diets—particularly for athletes who want to stay strong and sharp.\n“In general, protein is important for building just about any of the tissues of the body, including the cells that support our immune system and organ health,” says fitness, health and wellness expert Tom Nikkola.
It also helps to repair muscles after strenuous exercise, which allows athletes to get back to training sooner.\nWhen doctors and specialists discuss how much protein you need in your daily diet, the answer varies based on age, gender, weight, activity level and training goals. The Institute of Medicine recommends 46 grams of protein per day for the average woman and 56 grams for the average man.\nHowever, martial artists can benefit from much more. The Academy of Nutrition and Dietetics and the American College of Sports Medicine both recommend athletes eat 1.2 to 2 grams of protein per kilogram of body weight. For a 150-pound person, that boosts the 56-gram-a-day intake to between 82 and 136 grams.\nThe numbers can be a bit much. But the facts are quite simple. Whether your goal is to lose weight, improve strength or increase energy, the right amount of protein in your diet will help you get it done, so consulting with an expert and trying new dietary programs is encouraged.\nTake with Food\nIncluding protein at every meal is a smart strategy, says Nikkola. There are many ways to get your daily dose of protein, regardless of any special diet needs. Here are a few examples of food high in protein that you should add to your daily routine:\n- Beans and Peas: Green beans and green peas don’t count—it’s the black beans, lentils, chickpeas, and their relatives that are good sources of protein and fiber.\n- Nuts: Almonds, walnuts, and other nuts provide healthy fats, vitamins and antioxidants. They’re also high in fat, so they’re filling too. But don’t overdo it! An ounce a day is a good guide.
For peanut butter, choose natural options without added sugar, salt and oil.\n- Quinoa: Not all whole grains are equal in protein.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "One of the many theories regarding the cause of sarcopenia, age-related loss of muscle mass and strength, is that impaired processing of the essential amino acid leucine is a significant cause. If this is the case, then leucine supplementation should help to some degree. Similar suggestions have been made for a few other aspects of aging - that we should assign a modest fraction of the blame to the typically lower protein intake observed in older people, as tissues find themselves lacking sufficient raw materials needed to maintain themselves. The study here suggests that this is not the case, or at least that the contribution of reduced protein intake is small in comparison to the other mechanisms of degenerative aging.\nRegardless of whether an adult is young or old, male or female, their recommended dietary allowance (RDA) for protein is the same: 0.8g/kg/day. Many experts and national organizations recommend dietary protein intakes greater than the recommended allowance to maintain and promote muscle growth in older adults. However, few rigorous studies have evaluated whether higher protein intake among older adults provides meaningful benefit.\n\"It's amazing how little evidence there is around how much protein we need in our diet, especially the value of high-protein intake. Despite a lack of evidence, experts continue to recommend high-protein intake for older men. 
We wanted to test this rigorously and determine whether protein intake greater than the recommended dietary allowance is beneficial in increasing muscle mass, strength, and wellbeing.\"\nThe clinical trial, known as the Optimizing Protein Intake in Older Men (OPTIMen) Trial, was a randomized, placebo-controlled, double-blind, parallel group trial in which men aged 65 or older were randomized to receive a diet containing 0.8-g/kg/day protein and a placebo injection; 1.3-g/kg/day protein and a placebo injection; 0.8-g/kg/day protein and a weekly injection of testosterone; or 1.3-g/kg/day protein and a weekly injection of testosterone. All participants were given prepackaged meals with individualized protein and energy contents and supplements. Seventy-eight participants completed the six-month trial.\nThe team found that protein intake greater than the RDA had no significant effect on lean body mass, fat mass, muscle performance, physical function, fatigue or other well-being measures. \"Our data highlight the need for re-evaluation of the protein recommended daily allowance in older adults, especially those with frailty and chronic disease.\"", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-1", "d_text": "New evidence suggests that the RDA of protein for those age 65 and older should be closer to 1-1.5 grams per kilogram of body weight or .45-.68 grams per pound.\nThe higher end of that range is recommended for seniors with illnesses. The exception of course is for those who have kidney issues.\nIs too much protein bad?\nA lot of people think that eating too much protein is bad for your kidneys. But it turns out it’s just a myth. It can be harmful to kidneys, but mainly only in people who already have kidney issues.\nWhen it comes down to it, it is really really difficult to eat too much protein. 
The reason is simply that protein is so filling, especially when you get it from real food sources (not supplements).\nDon’t forget, eating protein does not mean you need to eat more meat. There are so many plant-based protein options out there.", "score": 24.078486504241074, "rank": 53}, {"document_id": "doc-::chunk-3", "d_text": "In general, studies of Australian vegetarians have found that their protein intakes are significantly lower than those of omnivores. A study of Australian men aged 20–50 years found that those on a lacto-ovo-vegetarian (LOV) diet consumed 80 g of protein per day (16% of energy) and vegans consumed 81 g of protein per day (12% of energy) compared with 108 g (17% of energy) for omnivores.12 Among women aged 18–45 years, those following a vegetarian diet (LOV and vegan) had a mean protein intake of 54 g per day (14% of energy) compared with 67 g per day (18% of energy) for omnivores.13 While the reported protein intakes of vegetarians are significantly lower, it is clear from these studies that most vegetarians and vegans still meet the RDI for protein, and intakes are within the AMDR.\nProtein requirements for healthy adults have not been found to differ according to whether dietary protein is predominantly from animal, vegetable or mixed protein sources, provided soy protein or a variety of other vegetable proteins is consumed.14 However, studies comparing single sources of protein have found significant differences between plant and animal sources, particularly with cereal proteins such as wheat and rice,4,15-17 as their low lysine content may be a limiting factor. Consequently, if protein intake were to be restricted to a single plant source, such as wheat, rice or legumes (other than soy), then the amount of protein required to meet essential amino acid needs may be increased.7\nAs discussed above, while vegetarian diets may provide less protein than a non-vegetarian diet, they are still able to meet protein requirements.
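Several figures in the studies above are given as a percentage of energy rather than in grams. Since protein supplies roughly 4 kcal per gram, the two forms are easy to convert between. A minimal sketch (the 2,000 kcal intake is an assumed example, and the function name is illustrative, not from any nutrition library):

```python
KCAL_PER_G_PROTEIN = 4  # protein supplies roughly 4 kcal per gram

def protein_g_from_energy(total_kcal: float, protein_fraction: float) -> float:
    """Convert a 'percent of energy from protein' figure into grams per day."""
    return total_kcal * protein_fraction / KCAL_PER_G_PROTEIN

# e.g. 16% of energy on an assumed 2,000 kcal/day intake:
print(protein_g_from_energy(2000, 0.16))  # 80.0 g
```

Running the same arithmetic in reverse (grams × 4 ÷ total kcal) recovers the percentage figures quoted in the studies.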
If a vegetarian diet is planned to meet the requirements for essential micronutrients, including iron, zinc, calcium and vitamin B12, it is likely that protein needs will be exceeded. Most plant foods contain some protein, with the best sources being legumes, soy foods (including soy milk, soy yoghurt, tofu and tempeh), Quorn (mycoprotein), nuts and seeds. Grains and vegetables also contain protein, but in smaller amounts. Box 3 shows the protein content of common plant foods and a comparison with animal protein sources.", "score": 23.06665016571597, "rank": 54}, {"document_id": "doc-::chunk-1", "d_text": "However, the “right” levels of protein intake are not consistent for all individuals.\nThe optimum amount of protein for any one person depends on a multitude of factors including age, muscle mass, amount of physical activity, state of health, and fitness goals, to name only a few. What follows is a short list of recommended daily protein intakes for individuals with special needs.\nPregnant and breastfeeding women\nAccording to Nutrition Energy in New York City, women need about ten additional grams of protein daily during pregnancy, while women who are nursing babies need approximately twenty grams more in order to support milk production..[vi]\nThis isn’t a tall order, as ten grams of protein can be found in a single serving of Greek yogurt or half a cup of cottage cheese. Pregnant and breastfeeding women are highly encouraged to get twenty to thirty grams of their daily protein from dairy, as the calcium and vitamin D that can be found in these products are essential to the health of the mother and the proper development of her baby, as well.\nBecause engaging in sports can cause more wear and tear on muscles, they break down more frequently and thus need to be repaired more often.\nThe protein intake of people who have an active lifestyle is heavily influenced by the frequency, length, and intensity of their workout routines. 
For instance, endurance athletes require around 1.2 to 1.4 grams of protein per kilogram (0.5 to 0.65 grams per pound) of body weight, which amounts to as much as fifty percent more protein than a non-athlete or typical adult.[vii]\nMeanwhile, body builders need double the regular amount of protein in their daily diet. However, it is very easy to get protein through a regular diet, so athletes need not worry about taking protein supplements just to meet their daily requirements.\nThat said, consuming a good-quality protein supplement post-workout is a great way to replenish the amino acids needed for muscle rebuilding. Rockwell Nutrition offers many excellent quality Protein Powders if you feel that you need a little extra help.\nThe key to losing weight the right way is to lose body fat while still maintaining lean muscle mass. This is easier to do with the help of sufficient protein intake, as protein is more filling and prevents the onset of hunger pangs or cravings.[viii]\nAs long as the overall amount of calories and portion sizes are reasonable, there should not be a problem for dieters when it comes to maintaining adequate protein intake while shedding off their excess pounds.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-1", "d_text": "Therefore, a 150-pound person should consume a minimum of 54 grams of protein each day.\nProtein in Foods\nAlthough protein supplements are convenient, they are often expensive and not tightly regulated by the Food and Drug Administration. You can get all the protein you need by eating a variety of high-protein foods at each meal. Foods high in protein include poultry, lean beef, fish, seafood, eggs, egg whites, low-fat dairy products, soy products, seitan, legumes, nuts, seeds and peanut butter.
For example, the Academy of Nutrition and Dietetics reports that 3 ounces of chicken breast contain 27 grams of protein, 3 ounces of lean ground beef provide 21 grams, 1 cup of cottage cheese contains about 28 grams and 2 tablespoons of peanut butter provide 8 grams of protein.\nErin Coleman is a registered and licensed dietitian. She also holds a Bachelor of Science in dietetics and has extensive experience working as a health writer and health educator. Her articles are published on various health, nutrition and fitness websites.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-1", "d_text": "But in the end, you'll get a very similar result no matter which way you think about it. Just remember that your recommended grams means grams of protein in your food, not the serving size. So for example, a 4-ounce piece of sirloin steak has 24 grams of protein.\nAccording to the 2015 USDA dietary guidelines committee, most people are getting just about (or just under) the recommended amount of “protein foods,” meaning meat, poultry, and eggs. Here's the rub: \"protein foods\" doesn't include dairy, soy, or grains, so if you're eating those things (which you probably are), it's likely you're right in the middle of the recommendations without really trying.\nResearch published in the American Journal of Clinical Nutrition following a protein summit of over 60 nutrition experts found that the average American currently gets 16 percent of their daily calories from protein, but that we could eat more than that. The suggestion to increase protein intake isn't widely accepted though, and more research needs to be done to determine if the benefits are enough to make sweeping recommendations.\n\"You can always have too much of anything,\" Levinson says. 
\"But [overloading on protein] is more common in athletes and body builders, especially those who use protein powders multiple times a day in addition to the other protein they're getting from their diet,\" Levinson explains.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "Are You Eating Enough Protein To Build Muscle?\nWhen it comes to building some serious muscle, all that work you do in the gym is only half of the battle. The other half takes place in the kitchen. Diet is extremely important when trying to improve your body composition. Without proper nutrients, no matter how much time you spend weight training, you won’t get the results you’re looking for.\nYour muscles are made up of over 25% protein (a very significant amount!) along with up to 75% water and stored glycogen (carbohydrates). While people generally understand that consuming adequate protein is very important to support muscle growth and maintain lean mass, the amount of protein to consume becomes the tricky part.\nI ‘ve seen recommendations that range from as low as 50 grams per day to as much as 3 times your bodyweight. Although it sounds good in theory, the traditional ‘more is better’ approach doesn’t necessarily work here. So how much protein do you need when trying to get huge?\nProtein To Build Muscle | Common Recommendations\nThe American Dietetic Association’s RDA (recommended daily allowance) for protein is 0.36g per pound of bodyweight. This would mean that as a bare minimum, a 180lb male only needs 65 grams of protein per day to meet requirements. One thing to note is that these requirements are based off of sedentary individuals and those that are more active will have a slightly higher RDA.\nThe National Strength and Conditioning Association (N.S.C.A.) recommends that active people consume 0.4g to 0.6g per pound of bodyweight with as much as 0.8g for a competitive athlete. 
What is important to note is that with a higher overall activity level the requirement goes up. I think it is safe to say that if you are trying to build muscle you will be on the higher end of the spectrum.1\nProtein To Build Muscle | How Much Is Really Enough?\nPopular belief is that in order to build muscle you must consume up to 1.0g of protein per pound of bodyweight. For some of you that might seem high and for others it might seem too low. The answer to that is really, it depends.\nResearch shows that the average trainee looking to build muscle can benefit anywhere from .6g to around 1.1g of protein per pound of bodyweight.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "The dilemma of promoting healthy ageing is complex, but research tells us that optimising protein intake at all meals is an important part of the process.\nWhat type of protein?\nResearch shows that age doesn’t affect how we digest and absorb protein foods. As such, protein choices for the elderly should focus on high quality protein foods where ever possible.\nHow much protein?\nWhile age makes no difference to how we digest and absorb protein, the body’s ability to use it to build or repair body tissues like muscle is reduced. 
Research suggests the optimal amount of protein for the over 65’s at any particular eating occasion is around 25-30g, or the equivalent of a 100g piece of cooked lean steak.\nThe best time for protein?\nBecause the optimal amount of protein at any eating occasion is around 25-30g, meeting daily needs, which may be more than 100g protein/day, means including quality protein foods at every main meal and when needs are particularly high at mid-meal snacks as well.\nDid you know?\nInternational recommendations for daily protein in the over 65’s are greater than those currently recommended in Australia and New Zealand.\n- 1.0-1.2g protein per kg body weight in healthy older adults OR 76-91g protein/day*\n- 1.2-1.5g/kg protein per kg body weight in older adults who are unwell or suffering a chronic disease OR 91-114g protein/day*\n*Based on weight of 76kg\n- Taylor & Luscombe-Marsh. Protein: An aged challenge. Food Australia 2015;67 (3): 18-21\n- Bauer J et al. Evidence-based recommendations for optimal dietary protein intake in older people: a position paper from the PROT-AGE Study Group. Am J Med Dir Assoc. 2013;14(8):542-59", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-32", "d_text": "Plenty of boutique gyms and studios will sell protein shakes or smoothies and they can be a convenient way to increase your intake protein, which is needed to support muscle growth. However, Collier adds that protein shakes aren't superior to good protein intake from food, so it's always better to opt for the real deal if you can. \"If you are opting for shakes or powders, I would recommend choosing products that have a reasonable amount of protein in them but have no added nasties or sugars,\" says Tongue. How much protein should the average adult consume? \"This is a question of much debate,\" says Collier. 
"Both the EU and the US recommended daily intakes of protein are just 50 grams per day; this is the absolute minimum and it's a level that's relatively easy to consume." Typically, most people who consume a varied diet manage to consume more than this amount. As Collier explains though: "The problem with a simple figure is that it doesn't relate to anything in respect of protein quality; there are essential minimum intakes of each of the nine essential amino acids set by the World Health Organisation.\n"There are different methods of looking at the quality of protein, and the most validated ones look at the breakdown of amino acids and how readily the proteins are digested and absorbed," he adds. "Proteins with the highest scores are animal-based, mainly because individual plant-proteins tend to have lower amounts of one or more of the key amino acids. However, a combination of more than one plant-based protein source also gives a perfect score." Consuming good amounts of protein can help maintain an optimal body composition, a healthy metabolism, and good athletic performance. "As a rule of thumb, you should aim to get between 20-30% of your total energy intake from protein," says Collier. Can you eat too much protein? "Yes definitely," says Tongue. "While protein helps us to stay feeling full, we know that if we eat too much of it, without cutting back on other calories, it will be converted to fat. "In addition to this, if we're getting this protein from animal sources, in particular red meat, it could raise our risk of certain cancers, high blood pressure and heart disease," she adds. "People who have any underlying kidney conditions should also avoid excess protein, as this will increase pressure on the organ."", "score": 22.976573861243974, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "Recently I wrote about carbohydrates, fats, and protein, the major sources of energy in our diet.
Getting sufficient amounts of these nutrients is essential to promote good health and exercise performance. Given the current trend of low-carbohydrate diets and an emphasis on protein for everything from fitness to weight loss, many people have wondered about how much protein they should eat. This is the topic of my Health & Fitness column in the Aiken Standard this week.\nAs you might expect, protein needs vary from person to person for a variety of reasons. For example, an athlete who is working out to add muscle or training for a triathlon needs more protein than a person who does less strenuous exercise. Despite these individualized protein needs, there are some broad recommendations that apply to most people.\nThere are two ways to estimate the amount of protein a person needs, both of which you may be familiar with. One is to recommend a certain amount of protein, in grams, based on body weight. The RDA, the amount that meets the needs of almost all healthy adults, is 0.8 grams of protein per kilogram of body weight (g/kg) per day. You can calculate your protein requirement by multiplying your body weight by 0.4, so a 200 lb. person would require about 80 g protein per day. (You can also use an online calculator, like this one)\nMeeting this protein requirement isn’t very difficult. A four-ounce serving of meat contains about 30 grams of protein, an egg has 6 grams, and a cup of milk has 8 grams. Plants contain protein, too—whole grain bread and cereal has about 4 grams per serving, and one cup of cooked beans contains about 15 g. Getting enough protein is important, but there is little benefit to eating more protein than you need and excessive intake could cause health problems.\nIn general, most adults get enough protein but children, women who are pregnant, and older adults should make an extra effort to eat protein-rich foods. 
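The shortcut described above (multiply body weight in pounds by 0.4 to approximate the 0.8 g/kg RDA) can be written out as a small sketch. Note that the exact conversion factor, 0.8 ÷ 2.2 ≈ 0.36, is slightly lower, so the 0.4 multiplier errs on the generous side; function names here are illustrative:

```python
def rda_protein_g(weight_lb: float) -> float:
    """Article's shortcut for the protein RDA: body weight in pounds times 0.4."""
    return weight_lb * 0.4

def rda_protein_g_exact(weight_lb: float) -> float:
    """The same RDA computed exactly: convert to kilograms, then apply 0.8 g/kg."""
    return weight_lb / 2.2 * 0.8

print(rda_protein_g(200))               # 80.0 g, the 200 lb example from the text
print(round(rda_protein_g_exact(200)))  # 73 g with the exact conversion
```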
Vegetarians and vegans, especially athletes, need to carefully plan meals to get enough protein and the right balance of amino acids to meet health and performance requirements.\nThe other way to estimate protein needs is based on the number of calories you eat. According to the Institute of Medicine, the acceptable range for protein is between 10–35% of total calories.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-1", "d_text": "You likely need about one-half gram of protein per pound of lean body mass.\nThis amounts to 30 to 70 grams of protein a day, spread out throughout the day. If you’re aggressively exercising or competing, or pregnant (or lactating), your daily protein requirement may be 25 to 50 percent higher.\nTo calculate your lean body mass, subtract your percent body fat from 100. So if you have 20 percent body fat, then you have 80 percent lean body mass. Then multiply that percentage (in this case, 0.8) by your current weight to get your lean body mass in pounds. So, in this example, if you weighed 160 pounds, multiply that amount by 0.8 (representing 80 percent), which leaves you with 128 pounds of lean body mass. Following the “one-half gram of protein per pound” rule, you would need about 64 grams of protein per day.\nThirty to 70 grams of protein is not a large amount of food. This can be as little as two small hamburger patties or a six-ounce chicken breast. I recommend that you write down everything you eat for a few days, along with the weight in grams, and then calculate the amount of daily protein you’ve consumed from these sources. Check out this chart as a simple guide on the grams of protein in foods:\n- Red meat, pork, poultry, and seafood: 6 to 9 grams of protein per ounce on average. An ideal amount for most people would be a 3-ounce serving of meat or seafood per meal (not 9- or 12-ounce steaks!), which will provide about 18 to 27 grams of protein.\n- Eggs: about 6 to 8 grams of protein per egg. So an omelet made from two eggs would give you about 12 to 16 grams of protein. If you add cheese, you need to calculate that protein in as well (check the label of your cheese).\n- Seeds and nuts: on average 4 to 8 grams of protein per quarter cup (packaged with valuable fiber).\n- Cooked beans: about 7 to 8 grams per half cup (packaged with valuable fiber).\nYour Source of Protein Matters\nSome people think meat is the best source of protein, but this isn’t necessarily the case, as there are other protein sources you can turn to.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-0", "d_text": "In the UK the adult daily RNI for protein is 0.75 g/kg, with protein representing at least 10% of the total energy intake. Most affluent people eat more than this, consuming 80-100 g of protein per day. The total amount of nitrogen excreted in the urine represents the balance between protein breakdown and synthesis. In order to maintain nitrogen balance, at least 40-50 g of protein are needed. The amount of protein oxidized can be calculated from the amount of nitrogen excreted in the urine over 24 hours using the following equation:\nGrams of protein required = urinary nitrogen × 6.25 (most proteins contain about 16% of nitrogen).\nIn practice, urinary urea is more easily measured and forms 80-90% of the total urinary nitrogen (N). In healthy individuals urinary nitrogen excretion reflects protein intake. However, urine N excretion does not match intake either in catabolic conditions (negative N balance) or during growth or repletion following an illness (positive N balance).\nProtein contains many amino acids, of which nine are indispensable (essential). These amino acids cannot be synthesized and must be provided in the diet. The dispensable (non-essential) amino acids can be synthesized in the body, but some may still be needed in the diet unless adequate amounts of their precursors are available.
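The nitrogen equation above can be sketched in code. The 6.25 factor follows from protein being about 16% nitrogen (1 ÷ 0.16 = 6.25); the 12 g nitrogen figure and the 85% urea fraction below are illustrative assumptions, not values from the text:

```python
N_TO_PROTEIN = 6.25  # protein is ~16% nitrogen, and 1 / 0.16 = 6.25

def protein_oxidized_g(urinary_n_g: float) -> float:
    """Grams of protein oxidized, from total nitrogen in a 24 h urine collection."""
    return urinary_n_g * N_TO_PROTEIN

def total_n_from_urea_n(urea_n_g: float, urea_fraction: float = 0.85) -> float:
    """Estimate total urinary N from urea N (urea N is ~80-90% of the total)."""
    return urea_n_g / urea_fraction

# assumed example: 12 g of urinary nitrogen collected over 24 hours
print(protein_oxidized_g(12.0))  # 75.0 g of protein oxidized
```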
Animal proteins, such as those in milk, meat and eggs, are of high nutritional value as they contain all indispensable amino acids. Conversely, many proteins from vegetables are deficient in at least one indispensable amino acid.\nIn developing countries, adequate protein intake is achieved mainly from vegetable proteins. By combining foodstuffs with different low concentrations of indispensable amino acids (e.g. maize with legumes), protein intake can be adequate provided enough vegetables are available.\nLoss of protein from the body (negative N balance) occurs not only because of inadequate protein intake, but also owing to inadequate energy intake. When there is loss of energy from the body, more protein is directed towards oxidative pathways and eventually gluconeogenesis for energy. Of all the amino acids, glutamine is quantitatively the most important one in the circulation and in inter-organ exchange. Alanine is also an important amino acid released from muscle; it is deaminated and converted into pyruvic acid before entering the citric acid cycle.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-4", "d_text": "Protein supplementation post exercise should be about 20 grams for young individuals and 30 to 40 grams for middle-aged and older individuals. The amount of protein post exercise for older individuals can be reduced by adding 2 to 3 grams of L-leucine. If weight reduction is a goal, caloric restriction should be accompanied by an increase in protein consumption.
This requires that the percentage of protein in the diet be increased to about 35% of total macronutrients.\nSuggested Daily Protein Consumption\nDistribution of Daily Protein Consumption Based on a 2,500 Caloric Diet with Protein encompassing 30% of the Total Macronutrients:\n- Basic 3 meals of the day\n- Breakfast – 40 grams\n- Lunch – 45 grams\n- Dinner – 55 grams\n- Post Exercise Workout\n- Within 30 minutes post exercise – 25 grams\n- Approximately 30 to 45 minutes before bedtime – 25 grams\nTotal Daily Protein Consumption – 190 grams\n- Beasley JM et al. The role of dietary protein intake in the prevention of sarcopenia of aging. Nutrition in Clinical Practice 28:684–690. 2013.\n- Beelen M1, et al. Protein coingestion stimulates muscle protein synthesis during resistance-type exercise. American Journal of Physiology: Endocrinology and Metabolism 2008,295:E70-77. doi: 10.1152/ajpendo.00774.2007.\n- Cribb PJ and Hayes A, Effects of supplement timing and resistance exercise on skeletal muscle hypertrophy. Medicine and Science in Sports Exercise 38:1918-1925, 2006.\n- Cuthbertson D, et al. Anabolic signaling deficits underlie amino acid resistance of wasting, aging muscle. FASEB Journal 19:422-424, 2005.\n- Esmarck B, et al. Timing of postexercise protein intake is important for muscle hypertrophy with resistance training in elderly humans. Journal of Physiology 535:301-311, 2001.\n- Ferguson-Stegall L, et al. Aerobic exercise training adaptations are increased by postexercise carbohydrate-protein supplementation. Journal of Nutrition and Metabolism 2011,2011:623182. doi: 10.1155/2011/623182.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "The Best Protein Snacks\nProtein is essential to consume in order to maintain energy and keep the body going. It is used to help develop and maintain almost every part of the body. 
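As a quick check on the distribution above: 30% of 2,500 kcal at roughly 4 kcal per gram of protein gives a budget close to the 190 g total in the plan. A sketch (the meal names simply mirror the list above):

```python
KCAL_PER_G = 4  # protein supplies roughly 4 kcal per gram

def protein_budget_g(total_kcal: float, protein_fraction: float) -> float:
    """Grams of protein implied by a calorie total and a protein percentage."""
    return total_kcal * protein_fraction / KCAL_PER_G

meals = {"breakfast": 40, "lunch": 45, "dinner": 55,
         "post-exercise": 25, "pre-bedtime": 25}

print(protein_budget_g(2500, 0.30))  # 187.5 g budget
print(sum(meals.values()))           # 190 g planned across the day
```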
It makes hormones and enzymes, while also helping keep your muscles, bones, blood, and hair and nails healthy.\nWe all need a different amount of protein based on our personal needs. For instance, individuals who exercise frequently should be consuming more protein, and your gender, age and body weight are also factors.[slideshow:91445]\n*Related: 8 Protein Supplements That Actually Work\nIt has been shown that consuming protein before exercise and within 30 minutes of finishing your workout will help with growth and recovery. “The guideline for protein consumption after exercise is 1 gram for every 3-4 grams of carbohydrate,” according to medicine.net.\nAccording to the USDA, the recommended consumption of protein for adults who are at an average activity and weight is 49g per day for women and 56g per day for men.", "score": 21.695954918930884, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "With so many protein bars, shakes, and supplements on the market, it's kind of been hammered into our heads that protein is the wonder nutrient.\nIt is an important building block for our cells, essential to repair old ones and build new ones. Which is why we think about it most commonly as a post-workout muscle-builder. Recent compelling studies have shown that a higher-protein diet may potentially help with weight management—particularly by helping us feel more satiated, and helping burn fat mass and maintain lean muscle. It also may have benefits for your heart. But the research is small and far from conclusive.\nSo how much protein should you eat? And can you ever eat too much? We talked to nutritionists and scoured studies to find out how much protein is healthy to pack into each day.\nThe current USDA Dietary Guidelines recommend protein make up somewhere between 10 and 35 percent of your daily calories (but some nutrition experts think 35 sounds really high). 
A lot of people automatically think of 2,000 calories a day as the standard, but that might not be right for you—you may be eating more or less depending on your weight, fitness level, weight loss goals, and if you're pregnant.\n\"Your [ideal amount of protein] will vary based on caloric needs and whatever else you have going on,\" Kristen F. Gradney, R.D., director of nutrition and metabolic services at Our Lady of the Lake Regional Medical Center and spokesperson for the Academy of Nutrition and Dietetics, tells SELF. \"For example, if you work out and lift weights three or four days a week, you're going to need a little more than somebody who doesn't. It varies.\"\nYou can also use the calculation from the Institute of Medicine, which says the Recommended Daily Allowance (RDA) of protein for adults should be 0.8 g/kg body weight. To calculate it, divide your weight in pounds by 2.2, then multiply by 0.8. \"So for a 130-pound woman, that would be 47 grams of protein,\" explains Jessica Fishman Levinson, R.D., founder of nutrition counseling company Nutritioulicious. For an even more personalized look at your protein needs, use this handy USDA nutrient calculator, which also takes into account your height and activity level.\nLet's be honest: all of the different calculations make it a bit confusing.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "The following is their recommendation for different types of athletes:\n- Endurance athletes: 1.2 to 1.4 grams per kilogram body weight per day\n- Strength Training: 1.2 to 1.7 grams per kilogram body weight per day\nFor example: A 165 pound (75 kilogram) endurance athlete would need 90 to 105 grams of protein per day OR the same weight athlete who was strength training would need 90 to 127 grams of protein per day. 
Both of these instances are well over the RDA of a calculated 60 grams per day (0.8 grams × 75 kilograms = 60 grams).\nQuantity and Timing of Protein Intake\nThis is all great information but what about specific timing of protein intake? I get this question a lot… how much post workout or how much during the day at each meal should I consume? I recommend a minimum of 20 grams of protein post workout. Research has shown that this amount elicits the leucine response and stimulates muscle protein synthesis (3). More specifically, protein intake for athletes includes (3):\n- Four equally spaced meals containing protein,\n- Three meals of 0.25 to 0.3 grams per kilogram of body weight\n- A larger pre-sleep meal with protein intake at 0.6 grams per kilogram of body weight\nFor example, let’s take a look at the 165 pound (75 kilogram) athlete. The protein intake would be 18 to 22 grams at each of three meals with a larger intake of 45 grams of protein before bed. Total protein intake for the day would be 99 to 111 grams. This falls in the range of the 1.2 to 1.7 grams per kilogram per day in the example shown above. One minor exception, as stated above: I would recommend at least 20 grams of protein within 30 minutes after each workout because of the leucine response. For those who need more, the same leucine response has been found when consuming up to 40 grams of protein post workout. It has been shown that in elderly men, 40 grams of protein post workout is essential for muscle synthesis (3).
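The meal-spacing guideline above reduces to simple per-kilogram multiplication. A sketch using the 75 kg athlete from the example (values are left unrounded, so they differ slightly from the article's rounded figures):

```python
def meal_protein_g(weight_kg: float, g_per_kg: float) -> float:
    """Protein target for one eating occasion, in grams."""
    return weight_kg * g_per_kg

weight = 75  # the 165 lb (75 kg) athlete from the example
meal_low  = meal_protein_g(weight, 0.25)  # 18.75 g per main meal (low end)
meal_high = meal_protein_g(weight, 0.30)  # ~22.5 g per main meal (high end)
pre_sleep = meal_protein_g(weight, 0.60)  # ~45 g pre-sleep meal
print(3 * meal_low + pre_sleep, 3 * meal_high + pre_sleep)  # ~101 to ~112 g/day
```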
The recommendation to consume a minimum of 20 grams post workout applies across the board to any athlete, regardless of weight and gender.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-2", "d_text": "Resistance training increases protein turnover in muscle, stimulating protein synthesis to a greater extent than protein degradation; both processes are influenced by the recovery between one training session and the next as well as by the degree of training (more training, less loss).\nFor resistance and endurance performance, the optimal protein requirement in younger people and in those who train less is estimated at 1.3 to 1.5 g protein/kg body weight, while in adult athletes who train more it is slightly lower, about 1-1.2 g/kg body weight.\nIn subjects engaged in hard physical activity, proteins are used not only for plastic (tissue-building) purposes, which are increased, but also for energy, in some cases supplying up to 10-15% of the total energy demand.\nIndeed, intense aerobic performances longer than 60 minutes obtain about 3-5% of the energy consumed from the oxidation of protein substrates; adding the protein required for the repair of damaged tissue protein structures, this results in a daily protein requirement of about 1.2 to 1.4 g/kg body weight.\nIf the effort is intense and longer than 90 minutes (as may occur in road cycling, running, swimming, or cross-country skiing), also in relation to the amount of glycogen available in muscle and liver (see above), the amount of protein used for energy can, in the latter stages of a prolonged endurance exercise, satisfy up to 15% of the athlete’s energy needs.\nProtein requirements also depend on:\n- The physical condition.\n- When relevant, the desired weight.\nAthletes attempting to lose weight or maintain a low weight may need more protein.\nFrom the above, protein requirements don’t exceed 1.5 g/kg body weight, even for an adult athlete engaged in intense and
protracted workouts, while if you consider the amount of protein used for energy purposes, you do not exceed 15% of the daily energy needs.\nSo, it’s clear that diets which supply higher (sometimes much higher) amounts of protein are of no use: they stimulate the loss of calcium from bones and overload the liver and kidneys. Moreover, excess protein doesn’t accumulate but is used for fat synthesis.\nHow to meet the increased protein requirements of athletes\nA diet that provides 12 to 15% of its calories from protein will be quite sufficient to satisfy the needs of almost all athletes, even those engaged in exhausting workouts.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "Muscle protein synthesis is enhanced in the post-exercise period.2\nFood intake slows muscle protein breakdown and initiates muscle protein synthesis; exercise augments this effect. As such, eating food (especially protein foods) and exercising (especially strength training) are important aspects of building more muscle.2\nIf your goal is to develop more muscle mass and get stronger, pay attention to the following:\nThe emphasis of this article is on nutritional considerations for muscle hypertrophy, so I will limit the discussion of resistance training here, and instead focus on the importance of dietary protein, as well as the impact of adequate calories, carbohydrates and creatine supplementation, since those are major factors that support muscle growth.\nFor decades, research has been conducted to determine the ideal quantity of protein needed for muscle protein synthesis. Historically, the majority of this research has been performed in men.
The limited science looking at differences between men and women indicates that men may have a higher protein requirement than women because they oxidize (burn) more amino acids at rest and in exercise.5 Since accurate information pertaining to women is hard to come by, you can choose to follow these guidelines exactly, or modify based on your own personal experiences.\nWith regard to the total amount of protein, the recommendation of 1.7 to 1.8 grams of protein per kilogram of bodyweight per day appears to apply fairly accurately to women.3,4 Some people feel that more protein than this is even more effective, but researchers have shown that the muscle-building effect tops out at 2.0 grams of protein per kilogram per day.7 The benefits of a higher intake of dietary protein extend beyond muscle hypertrophy:\nResearch has suggested that there is a ceiling for how much muscle protein can be synthesized per gram of protein eaten per meal – termed the “muscle full effect”.8 Researchers found that 20 to 30 grams of protein in a meal is all the body can use to stimulate protein synthesis.8 However, as noted by Phillips et al., 2015,4 these dose-response studies have been limited to lower-body resistance exercises, thus it remains unknown whether or not the absolute dose of protein required to maximally stimulate hypertrophy following upper and lower body exercises is greater than 20 to 30 grams (in other words: research isn’t perfect and does not represent every person in the population, so this “limit” per meal may not be factual).", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "How much protein do I need each day to see results? How Much Protein is Too Much? And how many grams of protein can my body assimilate at each meal?\n“The only way to build muscle is by eating enough complete protein every day. Eating calories is not enough. 
If you don’t eat a protein-rich meal within 60 to 90 minutes of training, you’re essentially wasting the time you put into working out your muscles in the gym. Personally, I try to get at least 350-400 grams of protein per day in the off-season, with a body weight of around 235 pounds.” – Jason Arntz, IFBB professional bodybuilder.\n“You should stick to a diet high in protein, moderate in carbohydrates, and low in fat. A good rule of thumb would be to get about 50% of your calories from protein, 40% from carbohydrates, and 10% from fat. This will allow you to gain quality muscle while still being quite lean.” – Chad Nicholls, professional sports nutritionist.\nThis is just a template; everyone’s genetic makeup and metabolism is different. You should tailor these percentages to fit your specific needs. For example, if you gain weight easily, you may need to reduce your carbohydrate intake; if you stay very lean, you may need to increase your carbohydrate intake.\n“The guidelines we generally use are 0.67-1 gram of protein per pound of body weight per day. That amount does not guarantee results; it guarantees that you are meeting your protein requirement. Results are based on your genetics and your training program.” – Kristin Reimers, Ph.D., RD, director of nutrition and health at Conagra Brands.\nMore than the amount of protein, an important consideration is the quality of the protein in your food. The highest quality protein is found in animal sources such as eggs, beef, and milk. The recommendation above assumes that two-thirds come from a high-quality protein. If you get a lot of protein from breads and pastas, you will probably need more than 1 gram per pound each day.\nTo answer the second question, some believe that high protein intake stresses the kidneys, causes the body to lose calcium, and dehydrates it. 
Let’s address each of those concerns.", "score": 20.327251046010716, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "How much protein do I need to eat in my diet and how much protein is too much?\nBy Brierley Wright, M.S., R.D., August 21, 2013 - 3:09pm\nEatingWell's nutrition editor tells you how much protein you need, how much protein is too much and identifies some health risks of high-protein diets.\nProtein is a must-have nutrient: your body uses it to generate and repair cells. And the building blocks of protein—called amino acids—are needed to build muscle, make antibodies and keep your immune system going. Compared to fat and carbs, protein packs a bigger punch when it comes to filling you up and keeping you satisfied.\nBut don’t worry that you’re not getting enough of this powerhouse nutrient. Protein malnutrition is nearly nonexistent in the U.S. In fact, most of us eat more than we need: women get, on average, 69 grams of protein per day. The Institute of Medicine (IOM) recommends women get 46 grams daily (that’s equal to about 6 ounces of chicken). Men need 56 grams, yet they’re actually eating almost double.\nThere’s no official daily maximum for protein, but IOM suggests capping it at 35 percent of your calories (that’s 175 grams for a 2,000-calorie diet). Heed that advice for a few reasons: High-protein diets usually promote foods that deliver unhealthy saturated fat (meat, cheese). Eating too much protein may also increase your chances of kidney stones, as well as your risk of osteoporosis. (When protein is digested, it releases acid that is neutralized by calcium, which is pulled from your bones.) In one study, women who ate more than 95 grams of protein a day were 20 percent more likely to fracture their forearm than those who got less than 68 grams daily.\nBottom line: It’s possible to eat too much protein, so don’t go overboard. Choose healthy proteins—lean meat, poultry, low-fat dairy, fish, soybeans, quinoa. 
Beans, peas, nuts and seeds also supply protein, but they’re “incomplete” (lack at least one essential amino acid), so eat a variety of those.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "Is Taking in 200 Grams of Protein Safe?\nFor most adults, taking in 200 grams of protein every day is not necessary to meet protein requirements. In fact, eating 200 grams of protein each day may actually be unsafe for a large percentage of the adult population. However, athletes who regularly engage in high-intensity workouts or who are trying to build muscle mass may benefit from consuming 200 grams of protein on a daily basis.\nMaximum Safe Amounts\nAccording to the \"International Journal of Sport Nutrition and Exercise Metabolism,\" a maximum safe protein intake is 2.5 grams of protein per kilogram of body weight, or about 1.1 gram of protein per pound of body weight each day. By not exceeding this maximally safe amount, you can avoid protein toxicity and extra stress on your kidneys. For example, a 150-pound person should not consume more than 165 grams of protein per day. Based on these recommendations, 200 grams of protein per day is safe only for people weighing more than 181 pounds.\nMaximum Beneficial Amounts\nAlthough athletes need significantly more protein than people who don’t exercise, a study published in a 2011 edition of the “Journal of the International Society of Sports Nutrition” reports that the maximum amount of protein beneficial for strength-trained athletes is 2.0 grams per kilogram of body weight, or about 0.91 grams of protein per pound of body weight each day. Although consuming up to 1.1 grams of protein per pound of body weight is theoretically safe, it doesn’t provide additional benefits for athletes, according to this study. The body excretes excess protein it cannot use. 
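The per-pound figures in the chunk above come from converting a g/kg ceiling into g/lb. A minimal sketch of that conversion (the constant and function names are assumptions; note the article rounds 2.5 g/kg to about 1.1 g/lb, which is why it reports 165 g for a 150-pound person while the exact conversion gives about 170 g):

```python
LB_PER_KG = 2.2046226218  # pounds per kilogram

def max_safe_protein_g(weight_lb: float, ceiling_g_per_kg: float = 2.5) -> float:
    """Daily protein ceiling in grams for a body weight given in pounds,
    using the 2.5 g/kg maximum cited above as the default."""
    return weight_lb / LB_PER_KG * ceiling_g_per_kg

print(round(max_safe_protein_g(150)))  # exact conversion for 150 lb → 170
```

The small gap between 165 g and 170 g is purely a rounding artifact of quoting the ceiling as 1.1 g/lb instead of 2.5 g/kg.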
Therefore, a 150-pound athlete can use up to 137 grams of protein, a 180-pound athlete can utilize up to 164 grams and a 225-pound, strength-trained athlete can benefit from consuming up to 205 grams of protein each day.\nMinimum Protein Needs\nConsuming too little protein can also cause health problems and a reduction in muscle mass. Men and women, even those who are sedentary, should aim to consume at least the recommended dietary allowance, or RDA, for protein each day. Protein RDAs are 46 grams for women, 56 grams for men and 71 grams per day for pregnant and nursing women. According to the Institute of Medicine, protein RDAs are determined using 0.36 grams of protein per pound of body weight.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-2", "d_text": "Rockwell Nutrition offers an excellent quality line of protein products called UltraLean by BioGenesis. UltraLean functional foods provide nutrients for science-based weight loss and blood sugar stability. They are high-protein, low-carbohydrate, low-fat, multivitamin/mineral, specialty nutrient beverages that can be used long term with a balanced, calorie-controlled diet and exercise program to achieve your desired body composition goals.\nVegetarians and Vegans\nAlthough they do not eat meat, vegetarians are not necessarily at a disadvantage when it comes to their daily protein intake.
As long as they still include a wide variety of healthy food items in their diets, including fish, eggs, and dairy, they should still be able to easily reach their recommended daily intake of protein.[ix]\nAlthough vegans do not consume any animal products, they can get much of their protein requirements from beans, whole grains, and dried peas, but they will need more careful planning and preparation in order to reach their required daily intake of protein and may very well benefit from using a pea protein powder supplement for times when food preparation time is short.\nVegetables, seeds, and nuts can also offer minimal amounts of the macronutrient, although they do not contain sufficient levels to be considered main sources of protein.\nElderly people have a significantly higher protein requirement than younger adults. They can need as much as fifty percent more protein than the daily recommended intake in order to prevent and combat the development of common problems brought on by old age, such as osteoporosis and sarcopenia, which is a condition that causes a decrease in muscle mass. They may need 0.9 to 1.2 grams of protein per kilogram of body weight.[x]\nControlling portion sizes and eating a balanced diet is an important part of getting the right amount of protein in your diet. One common mistake is eating too much protein while severely limiting carbohydrate intake from fruits, vegetables, and whole grains in an effort to lose weight and build muscle. This plan can actually backfire, as insufficient carbohydrate levels may cause your body to burn muscle instead of building it because it needs to produce more energy.\nAside from eating the right amount of protein, eating at the right time can also make a huge difference when it comes to health benefits. 
Most notably, it is best to consume protein sources such as trail mix, yogurt, or chocolate milk right after completing an exercise routine, as this will help repair stressed and damaged muscle tissue quickly.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Current athlete-specific protein recommendations are based almost exclusively on research in males. Using the minimally invasive indicator amino acid oxidation technique, we determined the daily protein intake that maximizes whole-body protein synthesis (PS) and net protein balance (NB) after exercise in strength-trained females. Eight resistance-trained females (23 ± 3.5 yr, 67.0 ± 7.7 kg, 163.3 ± 3.7 cm, 24.4% ± 6.9% body fat; mean ± SD) completed a 2-d controlled diet during the luteal phase before performing an acute bout of whole-body resistance exercise. During recovery, participants consumed eight hourly meals providing a randomized test protein intake (0.2–2.9 g·kg−1·d−1) as crystalline amino acids modeled after egg protein, with constant phenylalanine (30.5 mg·kg−1·d−1) and excess tyrosine (40.0 mg·kg−1·d−1) intakes. Steady-state whole-body phenylalanine rate of appearance (Ra), oxidation (Ox; the reciprocal of PS), and NB (PS − Ra) were determined from oral [13C] phenylalanine ingestion. Total protein oxidation was estimated from the urinary urea–creatinine ratio (U/Cr). A mixed model biphase linear regression revealed a break point (i.e., estimated average requirement) of 1.49 ± 0.44 g·kg−1·d−1 (mean ± 95% confidence interval) in Ox (r2 = 0.64) and 1.53 ± 0.32 g·kg−1·d−1 in NB (r2 = 0.65), indicating a saturation in whole-body anabolism. 
U/Cr increased linearly with protein intake (r2 = 0.56, P < 0.01). Findings from this investigation indicate that the safe protein intake (upper 95% confidence interval) to maximize anabolism and minimize protein oxidation for strength-trained females during the early ~8-h postexercise recovery period is at the upper end of the recommendations of the American College of Sports Medicine for athletes (i.e., 1.2–2.0 g·kg−1·d−1).", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "Protein is essential for growth, energy, and tissue repair. Athletic performance depends on muscle strength, and muscles are made of protein. Although athletes who are involved in strength and endurance training may need slightly more protein, it’s a mistake to think you can simply build up muscles by eating lots of protein. Exercise, not dietary protein, increases muscle mass.\nThe amount of protein adolescents need varies at different stages of development. As a rule, boys and girls between ages 11 and 14 need half a gram per pound of body weight daily. Thus, a young teenager weighing 110 pounds needs about 55 g of protein a day. Between ages 15 and 18, the RDA drops slightly. As with all essential nutrients, common sense is the rule—you don’t have to weigh every gram on a scale. Each gram of protein provides 4 calories—the same as carbohydrates—and protein should make up about 10% to 12% of each day’s calories. As a general rule, there are approximately 22 g of protein in 3 oz of meat, fish, or poultry. An 8-oz glass of milk contains about 8 g of protein. Therefore, an average teenager who is drinking 3 glasses of milk a day does not need enormous amounts of meat to meet his daily protein requirement.\nThe protein in foods of animal origin is termed complete or high-quality protein because it contains all the essential amino acids in about the proportions humans need. 
Vegetable proteins are called incomplete because, except for soybeans, they have low levels of one or more essential amino acids. You don’t have to eat animal products to obtain high-quality protein, however.\nPeople on vegetarian diets take care of their protein needs by pairing plant foods that balance each other’s shortfalls. Pairing foods in this way is called protein complementation. Eating a grain and a legume does the trick; beans and tortillas, a peanut butter sandwich on wheat bread, and black-eyed peas and rice are good examples of protein complementation. You can also compensate for any lack in a plant-based food by adding a small amount of animal-derived protein, such as in pasta with cheese or cereal with milk.", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "\"How much protein do I really need?\" This is a typical question I get during a routine nutrition counseling session. With the ever-growing popularity of high protein fad diets on the market promoting weight loss consumers are often confused as to how much protein their bodies actually need.\nSo, first, let's start with why our bodies need protein. When we eat protein our digestive system breaks it down into amino acids, which are then absorbed into our bloodstream. After this process occurs, our bodies turn on metabolic pathways that lead the creation of new muscle tissue. Muscles burn calories, therefore the more muscle we have, the more calories we will burn at rest, making it easier to lose or maintain our weight. This is why protein has become such an important component in weight loss diets.\nSo, how much is enough? The Recommended Dietary Allowance (RDA) for protein is equivalent to 0.36 grams per pound of body weight each day. So, for a 150-pound person that would be equivalent to 54 gm of protein per day. 
While this number may be appropriate for a sedentary healthy young adult, this number changes depending on activity level, type of activity being done, and age. If you are participating in intense weight lifting, for example, your protein needs might bump up to 0.81 grams of protein per pound of body weight, or 121 grams per day for the 150-pound person, more than double the RDA. Current research is showing that the RDA for protein needs to be increased, especially for middle-aged and older adults.\nThe amount of muscle our bodies generate from the protein we eat decreases as a natural part of the aging process. If left unchecked without an intervention including appropriate protein intake and weight bearing exercise, our muscle mass will start to decrease as we get older. This is partially why as we age our calorie needs decrease. To help avoid our bodies' natural tendency to decrease muscle mass as we age, one of the things that can be done is to eat the right amount and type of protein at the right time.\nThe amount of protein needed in grams per day that can help slow down muscle loss is around half of your body weight. For example, the 150-pound person should be consuming around 75 grams of protein per day instead of the 54 grams the RDA would give that person. However, consuming the amount all at once has proven to be ineffective as well.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-1", "d_text": "Physically active adults who often exercise or athletes will need more protein due to the level of activity. Women who are pregnant or breastfeeding will also need higher levels of protein to adequately supply nutrients to the baby and produce milk.\nWhile a wide range of factors are involved in the right amount of protein for proper nutrition, children generally need slightly higher levels to support muscle growth. 
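The closing point above, that eating the whole day's protein at once is ineffective, pairs with the 20-30 g per-meal "muscle full" figure quoted in an earlier chunk. A small sketch of splitting a daily target across meals (the helper name and the 30 g default cap are assumptions for illustration):

```python
import math

def min_meals(daily_protein_g: float, per_meal_cap_g: float = 30) -> int:
    """Fewest meals needed so that no single meal exceeds the per-meal cap."""
    return math.ceil(daily_protein_g / per_meal_cap_g)

# The 150-pound example above at 0.81 g/lb needs ~121 g/day
print(min_meals(121))  # → 5 meals of ~24 g each
```

Under this sketch, higher daily targets simply translate into more protein-containing meals rather than larger single doses.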
Girls will need less protein due to less muscle mass.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-3", "d_text": "The Orientals, too, have long known about protein complementarity; they combine soybeans (usually in the form of curd, or tofu) with rice throughout China, Japan and the rest of Asia. Indonesians commonly serve tempeh (fermented soybean cakes) with their rice. In the Mediterranean, native peoples feast on specialties combining garbanzo beans and sesame seeds. Closer to home, the American Indians taught the early colonists to eat succotash (a tasty mixture of lima beans and corn), and our modern standards include cereal-and-milk breakfasts, peanut butter or cheese sandwiches for lunch and dinners of pizza (wheat crust and cheese topping) or macaroni and cheese.\nSo you can see that ensuring a healthy daily allowance of protein is really no problem for the vegetarian. Yet the questions remain: How much protein do we really need, and what proportions are necessary to successfully balance the amino acids in complementary foods? Debatable issues, both, but there is a margin of error within which a non-meat eater can feel perfectly safe. The amount of protein a person requires is determined by his or her body size, age, sex and levels of activity and stress. The general rule of thumb — as specified by the National Research Council's Recommended Dietary Allowance (RDA) — is that we should receive 10-15 percent of our total energy needs from protein, or about 0.424 grams per pound of body weight each day. Thus a 150-pound person would need 63.6 grams of protein daily.\nIt's widely suspected that the government's RDA's for some nutrients — most notably protein — are at least slightly exaggerated. Therefore, some nutritionists advise that it's wise not to become too alarmed over the matter of protein intake in a vegetarian diet. 
Instead of anxiously trying to compute your daily grams, Frances Lappé suggests that you learn to \"read\" your own body and notice whether it's carrying on its normal maintenance functions properly. How do your hair and fingernails look? Do minor wounds and sores heal quickly? Do you have enough energy to carry you through a normal day? If so, you're most likely receiving plenty of protein. During times of stress or under special physical conditions, however, the body's protein \"appetite\" increases (as metabolic processes accelerate), so the daily requirement is upped accordingly.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "Why Is Protein Vital in Your Diet Plan?\nIf your child is aged between one and three years, they require 13 grams of protein a day.\nDo not disregard medical advice or postpone consultation with your healthcare professional because of information you have read on this website. Always read the label of any supplements or natural health products you buy and use them only as directed.\nOther diets, such as the Low Carb diet, Fasting diet, and HCG diet, are mostly just hype and can be harmful to your health. When entering adolescence, males require around 20 percent more iron during the stage of rapid muscle growth.\nHow Much Protein Should You Eat Every Day?\nThe body needs water, protein, carbs, fats and a variety of vitamins and minerals to sustain it. Upon ingestion, protein molecules are distributed to body parts such as bone, hair, muscles, eyes and fingernails, which depend on protein to exist. Because the human body is constantly breaking down and rebuilding body tissue, consistent daily intake of protein is crucial.\nMost children enjoy a warm glass of milk, especially before bedtime. 
In fact, your child will get 8 grams of protein from one glass of milk. High blood pressure is still thought of as an ‘adult’ problem.\nProtein as Fuel\nThis is even truer as an individual gets leaner, since more protein is required to retain muscle mass when calories are being cut. One of the main advantages of a protein-rich diet is that it promotes strength and muscle mass. This is because muscle is largely made up of protein.\nWhy Is Protein Essential for Kids’ Growth?\nYou can find lots of tasty protein-rich recipes online and prepare delicious meals to suit your child’s tastes. The energy requirement for teenagers is higher than for children, but not as high as for many adults. The daily calorie requirement for healthy adults is a little higher, at 2,200 to 3,000 calories a day for men and 1,800 to 2,400 calories a day for women. Whether you are a teen or an adult, adjust your calorie intake according to your activity levels and weight goals.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-2", "d_text": "Breast Milk Is Best for Your Child\nThe best fats are unsaturated oils and spreads, for example rapeseed, olive or sunflower. Iron requirements rise for teenage girls with the onset of menstruation (15 mg/day for ages 9 to 13 and 18 mg/day for ages 14 to 18). Adolescent boys also need additional iron for the development of lean body mass (11 mg/day for ages 14-18). Those same years may be some of the busiest inside her body, too.\nThe human body needs water, protein, carbohydrates, fats and a variety of vitamins and minerals to sustain it. 
Upon ingestion, protein molecules are distributed to body parts such as bone, hair, muscles, eyes and fingernails, which depend on protein to exist. Because the human body is constantly breaking down and rebuilding body tissue, consistent daily intake of protein is vital.\nThe Recommended Intake of Grams of Carbohydrates Each Day for Women\nIf that is difficult to do, dietary beverages can also be consumed, but not as the main source. The AMDR for protein is 10 to 35 percent of daily calories, and lean proteins, such as meat, chicken, fish, beans, nuts, and seeds are excellent ways to meet protein requirements.\nWhey protein is thought to help calm the mind and provide relief from stress and anxiety. • Amino acids are like small building blocks in the body. Twenty amino acids work together to create all the proteins people need.\nWhat Is the Percentage of Daily Calories That an Athlete Should Consume?\nWhy is a diet rich in protein important to teens? This is good news, considering all the important roles protein plays in the body. Including a variety of high-quality protein sources in your child’s diet will help ensure that their body has what it needs for energy, growth, and a strong immune system. Since protein plays a double role in bone and muscle development, ensuring an adequate intake can help to reduce the risk of age-related bone and muscle loss. 
Foods and beverages with caffeine aren’t recommended for older kids and teenagers because caffeine can affect how much calcium the body can absorb.\nResearch has yet to define the specific protein needs of sports-active individuals because individual requirements differ.", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-2", "d_text": "While nutritional adequacy can be maintained by including a variety of plant foods which “complement” each other in terms of their amino acid profiles (eg, consuming a mixture of grains and legumes or nuts), it is now known that strict “protein combining” is not necessary, provided energy intake is adequate and a variety of plant foods are eaten each day.6,7 The body maintains a pool of indispensable amino acids which can be used to complement dietary proteins; this is one reason why strict protein combining is no longer considered to be necessary.8,9\nNutrient reference values (NRVs) for Australia and New Zealand include a recommendation for an acceptable macronutrient distribution range (AMDR) for protein of 15%–25% of energy intake.10 The AMDR is an estimate of the range of intake for each macronutrient for individuals (expressed as a percentage of total energy intake) that would allow for an adequate intake of all the other nutrients. The NRV document notes that while, on average, only 10% of energy need be consumed as protein to meet the physiological need for protein, this level is insufficient to allow for estimated average requirements (EARs) for micronutrients when consuming foods commonly eaten in Australia and New Zealand.10 In other words, while consuming lower amounts of protein-rich foods could meet the body’s protein needs, it would not provide sufficient amounts of other nutrients found in these foods including iron, zinc, calcium and vitamin B12. 
Recommended dietary intakes (RDIs) for protein for different sex and age groups are shown in Box 2.\nThe 1995 National Nutrition Survey (NNS) for Australians found the mean daily protein intake for those aged 19 years and over was 91 g or 17% of energy.11 Mean intakes for those aged 19 years and over were 109 g for men and 74 g for women — amounts well above the RDI. Intakes were at least 60% greater than the RDI for most groups, except those aged 65 years and over, whose mean intakes were 84 g for men and 64 g for women; although relatively lower, these amounts were still adequate in terms of RDI. Children and adolescents were eating close to or more than double their RDIs and, while pregnant women were not surveyed separately, the average intake for women would be adequate to meet the RDI during pregnancy or lactation.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-0", "d_text": "If you skim through a body building magazine, talk to someone in the gym, or conduct an online search for protein you will be inundated with information. Some if not most may be inaccurate, but how do you decipher what is true? This article will outline protein sources, including the recommended quantity, timing and quality.\nFirst let’s discuss some background information on protein. Protein provides the building blocks for the bones, muscles, cartilage and skin and is also essential for maintaining cellular integrity and function (1). Protein is made up of twenty amino acids, nine of which are considered essential. Essential amino acids are ones the body cannot make so they must be acquired through food sources, which can be animal food sources such as meat, fish, eggs and milk or vegetarian food sources such as grains, legumes, seeds and beans. 
Different types of protein have been shown to provide a faster recovery response post workout, which I’ll discuss later in the article.\nAccording to the Institute of Medicine Food and Nutrition Board, the recommended daily allowance (RDA) for protein for both men and women is 0.8 grams per kilogram body weight per day (1). The 2010 Dietary Guidelines for Americans* state the daily recommended intake (DRI) for protein to be 10 to 35 percent of total calorie intake. In my opinion the RDA is excessively low when considering athletes, and the DRI is too broad of a range, especially if you are trying to dial in on your macronutrient needs, i.e. your carbohydrates, protein and fat. So how do you know how much you need? Digging deeper and coming to a more accurate response, in a 2009 Position Stand of the American Dietetic Association, Dietitians of Canada, and the American College of Sports Medicine: Nutrition and Athletic Performance, the recommended protein intake for athletes is 1.2 to 1.7 grams per kilogram body weight per day (2). The position stand notes the RDA and DRI do not take into account the specific needs of athletes, which I completely agree with.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "Protein is made up of building blocks called amino acids. Your body makes some amino acids. These are called nonessential amino acids. Other amino acids must come from the foods you eat. These are called essential amino acids. A protein with all the essential amino acids is considered a higher quality or complete protein. Animal sources of protein such as meat, milk and eggs are complete.\nThe source can be important. A protein lacking one or more essential amino acids is considered incomplete. 
Plant sources of protein don’t contain all the essential amino acids your body needs and are considered incomplete.\nWhy do we need protein?\n- Growth (especially children, teens, pregnant women)\n- Tissue repair\n- To make essential hormones and enzymes in the body\n- For energy if other energy sources, like carbohydrates, are not available\nHow much do we need?\nYou only need about 15-20% of your daily calories from protein and most Americans get enough. The Recommended Dietary Allowance (RDA) for most healthy adults is approximately 50-60 grams per day. The RDA for growing children, adolescents, and pregnant women is higher.\nCommon sources in the diet. You can see how easy it is to get the recommended amount.\n|Food source|Grams of protein|\n|---|---|\n|4 ounces grilled chicken breast|34|\n|4 ounces broiled lean steak|27|\n|1/2 cup cottage cheese|14|\n|1 cup cooked kidney beans|13|\n|2 tablespoons peanut butter|8|\n|8 ounces milk (1% fat)|7|\n|4 ounces tofu|7|\n|4 ounces cooked pasta noodles|6|\n|2 slices whole wheat bread|5|\nHow can you be sure you are getting enough complete protein?\nYou can still get all of your essential amino acids from vegetable or plant sources if you eat a variety of plant-based foods.\nDo athletes need more protein?\nExtra protein in the diet isn’t usually necessary. Contrary to popular belief, eating more won’t give you more muscle. The only way to make your muscles bigger is to exercise the muscles. If you eat too much protein, the extra amount that is not needed for growth and repair is just extra calories. Extra calories, regardless of the source, get stored as body fat.\nWords to know\nEssential amino acids. 
Amino acids that our bodies cannot make and that we must get from food sources\nNonessential amino acids.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-6", "d_text": "Based on most research and review studies, 0.82g/lb of protein would be the highest limit recommended (Phillips & Van Loon, 2011), and many studies, as you can see above, conclude lower than this. 0.82g/lb at this point seems to be the higher end of the spectrum, and this recommendation often includes a double 95% confidence level, meaning they took the highest mean intake at which benefits were still observed and then added two standard deviations to that level to make absolutely sure all possible benefits from additional protein intake are utilized.\nAt this point the often-suggested 1g-2g of protein per pound of overall bodyweight is a reach, especially on the 2g per pound side. Several of the above studies also looked at athletes that in some cases trained intensely for 1-1.5 hours per day with heavy resistance training. Also, some of the studies looked at athletes in an energy restricted state, aka a negative energy balance. Even with these variables, it seems that the 1g-2g per pound model is an exaggeration.\nCould You Need More Protein Than The Above Suggested?\nThere has been some recent research to suggest that some scenarios may need more protein than the above studies suggest. These reviews have been challenged, and at this point it is still not fully clear whether the 0.82g threshold is sufficient or whether some scenarios may warrant more protein. The most popular referenced research suggesting more protein would be Eric Helms' look at protein requirements. In this article he shows research and presents an argument in favor of higher protein in very lean athletes that are undergoing a large energy restriction. Basically, lean athletes that are dieting very hard, he would suggest, require more protein.
There is some great information here that should be taken into account when recommending protein intake. Below is a link to the Eric Helms paper for more clarity.\n“A Systematic Review Of Dietary Protein Intake During Caloric Restriction In Lean Athletes: A Case For Higher Protein”\nWhat Do You Recommend For Your Clients?\nIt first depends on the situation. Each person, I think, has a different requirement, for several reasons. The amount of exercise they do daily / weekly, the intensity and type of training they perform, their current body fat %, and even their genetics can play a role. In addition, a person's personal preference for macros can affect how much I would recommend.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "What Are Different Types of Protein?\nProtein – we’re told that we need it to build muscle, provide energy and fill our stomachs. But what role does protein really play in our diets? What are the different sources? We reached out to Gordon Zello, Ph.D., professor of Nutrition and Dietetics at the University of Saskatchewan, to get answers to our many protein questions.\nWhat is protein?\nDr. Zello: “Proteins are composed of amino acids. These amino acids are placed in a precise order by a genetic code specific to each protein. This makes each protein unique and related to its function in the body. All animals and plants contain protein; therefore, one source of amino acids comes from our diet.\n“There are two kinds of amino acids, those that our body can make from other amino acids (dispensable or non-essential) and those that have to come from the food we eat (indispensable or essential). Protein is a macronutrient, along with carbohydrates and fat; thus, besides its many functions it also provides energy to the body.
Furthermore, protein is our source of nitrogen that we also require to make essential nitrogen-containing compounds.”\nProtein has many functions in the body:\n- Immediate energy (calories)\n- Hormones (e.g. insulin)\n- Structural proteins (e.g. muscle, bone, teeth, skin, blood vessels, hair; nails etc.)\n- Immunoproteins (e.g. antibodies)\n- Transport proteins (e.g. albumin, hemoglobin, lipoproteins).\n- Other essential nitrogen-containing compounds made from amino acids are melanin pigments (skin color) thyroid hormones, neurotransmitters (e.g. serotonin, epinephrine), nucleic acids and creatine.\nHow much protein does a person need in a day?\nDr. Zello: “The amount of protein an adult needs in a day is based on the weight of an individual, as the more you weigh the more protein one will require. For an adult, the requirement is 0.8 grams per kilogram of weight per day. Therefore, someone who weighs 70kg (155lbs) will require 56g of protein per day. It is usually not a problem to consume this much protein as most adults eat on average 80 to 120g of protein per day.", "score": 15.758340881307905, "rank": 85}, {"document_id": "doc-::chunk-6", "d_text": "1 Classification of amino acids\n2 Recommended dietary intake (RDI)* of protein per day10\n|* The RDI is the average daily dietary intake level that is sufficient to meet the nutrient requirements of nearly all healthy individuals (97%–98%) of a particular sex and life stage. ◆|\n3 Protein content of a range of plant foods and animal foods*\n4 A sample vegetarian meal plan designed to meet the protein and micronutrient requirements of a 31–50-year-old woman, showing protein content of the foods*\nProvenance: Commissioned by supplement editors; externally peer reviewed.\n- 1. Joint WHO/FAO/UNU Expert Consultation. Protein and amino acid requirements in human nutrition. World Health Organ Tech Rep Ser 2007; 935: 1-265.\n- 2. Soeters PB, van de Poll MC, van Gemert WG, Dejong CH. 
Amino acid adequacy in pathophysiological states. J Nutr 2004; 134 (6 Suppl): 1575S-1582S.\n- 3. Imura K, Okada A. Amino acid metabolism in pediatric patients. Nutrition 1998; 14: 143-148.\n- 4. Young VR, Fajardo L, Murray E, et al. Protein requirements of man: comparative nitrogen balance response within the submaintenance-to-maintenance range of intakes of wheat and beef proteins. J Nutr 1975; 105: 534-542.\n- 5. Sarwar G, McDonough FE. Evaluation of protein digestibility-corrected amino acid score method for assessing protein quality of foods. J Assoc Off Anal Chem 1990; 73: 347-356.\n- 6. Craig WJ, Mangels AR. Position of the American Dietetic Association: vegetarian diets. J Am Diet Assoc 2009; 109: 1266-1282.\n- 7. Young VR, Pellett PL. Plant proteins in relation to human protein and amino acid nutrition. Am J Clin Nutr 1994; 59 (5 Suppl): 1203S-1212S.\n- 8. Fuller MF, Reeds PJ. Nitrogen cycling in the gut.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-1", "d_text": "Chicken, pork, and fish all contain roughly the same amount of protein per ounce as beef (although the amount of resource and energy required to produce each food is quite different — but more on that soon).\nFor vegetarians, and even more so for vegans, protein needs are slightly higher — up to 1 g per kg per day, according to Linda Chio, R.D., of NYU Langone Medical Center. This is because non-animal protein sources are less “complete” than animal sources, meaning they do not contain quite enough of all the essential amino acids. But combining different plant-based protein sources — like grains and legumes, such as the brown rice and beans mentioned above — can help correct for this difference in quality. Research indicates that complementary protein sources don’t even need to be eaten in the same meal, as long as they are consumed in the same day.\nIf you’re a serious athlete, that’s where it gets a little tricky. 
Athletes’ protein needs are widely debated, but the ADA has established guidelines that can at least help get you in the right ballpark. If you regularly do endurance training — like running medium-length races or going on long bike rides — you should aim for 1.2 to 1.7 g per kg. For a 150-pound person, that would be 82 to 116 grams per day. If you’re in heavy training — think preparing for a marathon or an Ironman competition — then you’ll want to get 1.4 to 2 g per kg. That works out to between 95 and 136 grams for our 150-pound athlete. If you’re into lifting weights and other strength training, you’ll need 1.6 to 1.7 g/kg. And if you’re an athlete who’s trying to lose weight, your protein needs will probably be at the higher end of the range for your chosen activity (although the latest research suggests that high-protein diets don’t help with weight loss). But many athletes may go overboard on the protein. A recent study by researchers at Saint Louis University found that a third of male collegiate athletes in the survey drastically overestimated their protein needs (and the other two thirds did not know how much they should be getting). Of course, the makers of protein supplements encourage this overconsumption: Some protein shakes or drinks contain 60 or 70 g of protein in one serving.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-1", "d_text": "Our body needs iron for hemoglobin, which is the protein in red blood cells that conducts oxygen to tissues and organs, and its healthy function dictates the health of the whole body. Zinc plays a significant role in the proper functioning of the immune system, and magnesium helps in bones formation and releasing the energy from muscles.\nProteins are essential for our organism because they are helping in maintaining healthy bones, muscles, cartilage, skin, blood, enzymes and hormones. Proteins are an important source of calories and energy. 
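The ADA activity ranges quoted earlier (1.2-1.7, 1.4-2, and 1.6-1.7 g/kg) are easy to turn into per-person numbers. This sketch uses made-up labels for the tiers and a standard pounds-to-kilograms conversion; the factors themselves come from the text:

```python
# The ADA ballpark ranges quoted earlier, in grams of protein per kg per day.
# The dict keys are made-up labels; the numeric ranges come from the text.
ACTIVITY_RANGES_G_PER_KG = {
    "endurance": (1.2, 1.7),  # medium-length races, long rides
    "heavy": (1.4, 2.0),      # marathon / Ironman preparation
    "strength": (1.6, 1.7),   # lifting and strength training
}

LB_PER_KG = 2.2046

def athlete_protein_grams(weight_lb, activity):
    """(low, high) grams of protein per day for a given activity tier."""
    low, high = ACTIVITY_RANGES_G_PER_KG[activity]
    weight_kg = weight_lb / LB_PER_KG
    return low * weight_kg, high * weight_kg

low, high = athlete_protein_grams(150, "endurance")
print(round(low), round(high))  # 82 116, matching the article's example
```

Running the same function for the "heavy" tier reproduces the 95-136 g figure given for the 150-pound athlete.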
You have probably heard that everyone trying to gain muscle mass eats a lot of meat, and that, combined with proper exercise, this is how they achieve their mass-building results. Meat contains many essential amino acids that our body can’t produce itself; plant proteins, for instance, can’t supply all of the amino acids that meat does.\nDaily protein portion for both genders\nWomen older than 19 years should consume 46g of protein daily and men older than 19 years should consume about 56g of protein daily, whether of plant or animal origin.\nBest sources of protein\nAs we said, the best sources of protein are fresh beef, chicken and pork. There is also a lot of protein in milk products, especially in whey.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "How Much Protein In A Macronutrient\nProteins are an important nutrient in our diets, but how much protein you need depends on your sex, weight, and activity level.\nProteins help give shape to some of the cells that make up your body’s tissues. Some proteins act as hormones or neurotransmitters (chemical messengers), so they influence mood, appetite, and sleep. Also, the liver uses amino acids to produce glucose for energy, so eating enough protein can aid in weight loss.\nHowever, too much protein can be harmful. Overconsuming protein can increase the amount of acid in your blood, which may cause symptoms like stomach pain, nausea, vomiting, diarrhea, or fatigue. These symptoms come from the gut reacting poorly to the extra protein intake.\nThis article will discuss the differences between dietary protein sources, the average daily amounts for each source, and the recommended levels for those over 20 years old.\nSources of protein\nProteins are the main component of meat, seafood, eggs, milk, nuts, and other foods that contain large amounts of protein.
They occur in two basic forms: complete or incomplete.\nComplete proteins are made up of all nine essential amino acids that we need to survive. These include dairy products like milk, cheese, and yogurt as well as meats, fish, and some vegetables.\nIncomplete proteins do not have enough of one or more of these essential amino acids and so they cannot be completely converted into protein. Examples of these are peas, grains, and most fruits.\nWe usually don’t worry too much about which type of protein is in what food, but it does matter because individuals with vegetarian diets may lack certain amino acids, while others may suffer from nutritional deficiencies due to excessive intake of specific ones.\nOverall though, eating a wide variety of high quality protein sources is important for overall health and performance.\nBenefits of protein\nProteins are an important part of your diet! They help promote bone growth, muscle development, skin health, and many other things.\nYou’ve probably heard about how much protein you need to eat per day, but it can be hard to know exactly how much is needed for individual needs.\nThat’s why we have some general guidelines. People usually refer to them as “recommended levels” or “minimum requirements” for each macronutrient.\nThese recommendations are actually averages though- there isn’t one size fits all when it comes to proteins. Different people have different genetics, so they require different amounts to maintain healthy weight.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "It is now accepted by athletes, coaches and athletic trainers that proper diet is one of the cornerstones for achieving better athletic performance. Despite this widely spread assumption, many, even at the highest levels, still believe that an high protein intake is fundamental in the athlete’s diet. 
This opinion is not new and is deeply rooted in the imagination of many people, almost as if, by eating meat, even that of big and strong animals, we were able to gain their strength and vitality too.\nThe function of proteins as an energy supplier for working muscle was hypothesized for the first time by von Liebig in the 19th century, and it is because of his studies that, even today, animal proteins, and therefore meats, are often believed to have great importance in the energy balance of the athlete’s diet, despite nearly two centuries in which biochemistry and sports medicine have made enormous progress.\nIn fact, by the end of the 19th century von Pettenkofer and Voit and, at the beginning of the 20th century, Christensen and Hansen downplayed their importance for energy purposes, even for the muscle engaged in sport performance, instead bringing out the prominent role played by carbohydrates and lipids.\nOf course we shouldn’t think that proteins are not useful for the athlete or sedentary people. The question we need to answer is how much protein a competitive athlete, engaged in intense daily workouts, often in two daily sessions (for 3-6 hours), 7 days a week, for more than 10 months a year, needs per day.
We can immediately say that, compared to the general population, and with the exception of some sports (see below), the recommended amount of protein is greater.\nMetabolic fate of proteins at rest and during exercise\nIn a healthy adult engaged in non-competitive physical activity, the daily protein requirement is about 0.85 g/kg desirable body weight, as shown by WHO.\nProtein turnover in healthy adults, about 3-4 g/kg body weight/day (or 210-280 g for a 70 kg adult), is slower for muscle than for other tissues, decreases with age, and is related to the amount of amino acids in the diet and to protein catabolism.\nAt rest the anabolic processes, especially synthesis, use about 75% of amino acids, while the remaining 25% undergo oxidative processes that lead to the release of CO2 and urea (for the removal of ammonia).\nDuring physical activity, as a result of the decreased availability of sugars, i.e.
According to American Dietetic Association (ADA) recommendations, most active adults only need 0.8 grams of protein per kilogram of body weight per day. (A kilogram is 2.2 pounds.) So a person who weighs 125 pounds needs 45 to 57 g of protein in a day; for someone who weighs 175 pounds, it’s 65 to 80 g. Serious athletes need a bit more, but we’ll get to that in a minute.\nNon-meat sources of protein can add up quickly: For example, you could have a single-serving Greek yogurt for breakfast (13 g protein); a vegetarian omelet with two eggs (14 g) and 2 ounces of soft tofu (2 g) for lunch; a handful of almonds as a snack (6 g); and a cup of brown rice (5 g) with half a cup of beans (7 g) for dinner. And that’s not even counting the small amounts of protein contained in fruits and vegetables — about 2 grams per serving for leafy greens, broccoli, and potatoes, and 1 gram per serving for many other vegetables and fruits.\nOf course, meat packs a bigger protein punch than plant sources. A reasonably sized 3-ounce steak has about 21 grams of protein, while the 12-ounce behemoths served at many restaurants contain more protein than most adults need in an entire day, and a 16-ounce steak provides more than twice the daily protein needs of the average woman.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "If you are what you eat, what does that make a vegan? A string-bean, milquetoast kind of a guy? Of course not—and renowned strength coach Robert dos Remedios, a vegan, is strong evidence to the contrary. Really strong.\nBut most men eat animal products. And we really do become what we eat. Our skin, bones, hair, and nails are composed mostly of protein. Plus, animal products fuel the muscle-growing process called protein synthesis. That's why Rocky chugged eggs before his a.m. runs. Since those days, nutrition scientists have done plenty of research. Read up before you chow down.\nYou Need More\nThink big. 
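Tallying the sample meatless day described above shows it already covers the low end of a 125-pound person's needs. This is just a quick sum, with the gram counts taken from the text:

```python
# Gram counts taken from the text above; a quick tally of the meatless day.
day_g = {
    "single-serving Greek yogurt": 13,
    "two-egg vegetarian omelet": 14,
    "2 oz soft tofu": 2,
    "handful of almonds": 6,
    "1 cup brown rice": 5,
    "1/2 cup beans": 7,
}

total = sum(day_g.values())
print(total)  # 47 g, already within the 45-57 g cited for a 125 lb person
```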
Most adults would benefit from eating more than the recommended daily intake of 56 grams, says Donald Layman, Ph.D., a professor emeritus of nutrition at the University of Illinois. The benefit goes beyond muscles, he says: Protein dulls hunger and can help prevent obesity, diabetes, and heart disease.\nHow much do you need? Step on a scale and be honest with yourself about your workout regimen. According to Mark Tarnopolsky, M.D., Ph.D., who studies exercise and nutrition at McMaster University in Hamilton, Ontario, highly trained athletes thrive on 0.77 gram of daily protein per pound of body weight. That's 139 grams for a 180-pound man.\nMen who work out 5 or more days a week for an hour or longer need 0.55 gram per pound. And men who work out 3 to 5 days a week for 45 minutes to an hour need 0.45 gram per pound. So a 180-pound guy who works out regularly needs about 80 grams of protein a day.*\nNow, if you're trying to lose weight, protein is still crucial. The fewer calories you consume, the more calories should come from protein, says Layman. You need to boost your protein intake to between 0.45 and 0.68 gram per pound to preserve calorie-burning muscle mass.\nAnd no, that extra protein won't wreck your kidneys: \"Taking in more than the recommended dose won't confer more benefit. It won't hurt you, but you'll just burn it off as extra energy,\" Dr. Tarnopolsky says.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "Among the three macro nutrients (Carbohydrates, Proteins and Fats) proteins are often the misunderstood ones; we know carbohydrates provide energy and the Indian subcontinent by and large have diets, based primarily on them. We are also aware the fats are not supposed to be consumed in large amounts, though we occasionally (for some regularly!) indulge in our guilty pleasure of butter chicken and deep fried fritters. But what about protein-do we really need them? 
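The tiered per-pound guidelines above reduce to one multiplication. A sketch, with made-up tier names but factors taken from the text:

```python
# Sketch of the per-pound guidelines quoted above (grams per lb per day).
# The tier names are made up here; the factors come from the text.
FACTORS_G_PER_LB = {
    "highly_trained": 0.77,  # per Dr. Tarnopolsky
    "frequent": 0.55,        # 5+ days/week, an hour or longer
    "regular": 0.45,         # 3-5 days/week, 45-60 minutes
}

def daily_protein_g(weight_lb, tier):
    """Daily protein in grams for a given body weight and training tier."""
    return weight_lb * FACTORS_G_PER_LB[tier]

print(round(daily_protein_g(180, "highly_trained")))  # 139, as in the text
print(round(daily_protein_g(180, "regular")))         # 81, i.e. "about 80"
```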
The layman’s notion that protein is essential only for building muscles often relegates protein to a grey area, from consuming egg yolks to protein shakes.\nProteins are the building blocks of life- this is the line we have all studied in our primary school text books. Unfortunately most of us seem to have forgotten about it or not understood the depth the line holds. Protein is life. From a virus to a complex organism like Man, protein is what makes it up. One cannot have life without protein. Protein makes up your cells, skin, tissues, hair, nails and muscles, of course, amongst other things. Besides that, protein is crucial for building lean skeletal muscle mass as well as a robust immune system. When you consume less protein over an extended period of time you are compromising vital aspects of your health and well being.\nNow the moot question is how much protein needs to be consumed. As per the Recommended Dietary Allowances (RDA), an adult requires at least 0.8gm of protein per kg of bodyweight. So roughly, for a person weighing 80kg, the minimum daily requirement would be 64gms. Please note that this is irrespective of physical activity and is meant to prevent protein deficiency only. Depending on physical activity, protein requirements would increase.\nEvery sport has a peculiar energy demand; to put it simply, depending on the nature of the sport, the macronutrient proportions vary. The ratios of carbohydrates, protein and fat have to be altered to match the energy demand of the sport. In fact, protein requirements fluctuate with seasons of training- off-season protein requirements may differ from in-season.
These permutations and combinations are crucial for optimum sports performance.\nFor people who are looking to lose fat while preserving lean muscle mass, a daily consumption of 1gm per kg of body weight would be a good place to start.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-3", "d_text": "For example, the hormone insulin, secreted by the pancreas, works to lower the blood glucose level after meals. Insulin is made up of fifty-one amino acids.\nEnzymes, which play an essential catalytic role in biological reactions, are composed of large protein molecules. Enzymes facilitate the rate of reactions by acting as catalysts and lowering the activation energy barrier between the reactants and the products of the reactions. All chemical reactions that occur during the digestion of food and the metabolic processes in tissues require enzymes. Therefore, enzymes are vital to the overall function of the body, and thereby indicate the fundamental and significant role of proteins.\nEnergy provision. Protein is not a significant source of energy for the body when there are sufficient amounts of carbohydrate and fats available, nor is protein a storable energy source, as in the case of fats and carbohydrates. However, if insufficient amounts of carbohydrates and fats are ingested, protein is used for the energy needs of the body. The use of protein for energy is not necessarily economical for the body, because tissue maintenance, growth, and repair are compromised to meet energy needs. If taken in excess, protein can be converted into body fat. Protein yields as much usable energy as carbohydrates, which is 4 kcal/gm (kilocalories per gram).
Although not the main source of usable energy, protein provides the essential amino acids that are needed for adenine, the nitrogenous base of ATP, as well as other nitrogenous substances, such as creatine phosphate (nitrogen is an essential element for important compounds in the body).\nThe recommended protein intake for an average adult is generally based on body size: 0.8 grams per kilogram of body weight is the generally recommended daily intake. The recommended daily allowances of protein do not vary in times of strenuous activities or exercise, or with progressing age. However, there is a wide range of protein intake which people can consume according to their period of development. For example, the recommended allowance for an infant up to six months of age, who is undergoing a period of rapid tissue growth, is 2.2 grams per kilogram. For children ages seven through ten, the recommended daily allowance is around 36 total grams, depending on body weight. Pregnant women need to consume an additional 30 grams of protein above the average adult intake for the nourishment of the developing fetus.\nSources of protein.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "Got my answer thanks.\nGot my answer thanks.\nLast edited by Ayla2010; 01-18-2013 at 10:04 PM.\n0.7 - 1 gram per LEAN pound of bodymass (nobody is 0% bodyfat) is great for muscle maitinance for both men and women. I'd recommend that if you're not working out you stick to the lower end of that scale (0.7 grams) and if you do workout you could go a little higher (0.9 - 1 grams). Then, of course, if you're lifting and trying to gain, you might even go a bit higher.\nSee the \"Eat More Fat\" thread above.\nHere is a quote from the OP\nFrom the work of Dr Jan Kwasniewski author of The Optimal Diet.\nFirst you figure up your IDEAL weight in KILOS. 
This is not necessarily the weight you are aiming for but it is your Ultimate HSIS (Hot Stuff In Swimwear) weight.\nHSIS weight in kilos plus/minus 10% = range for protein grams/day\nHSIS weight in kilos divided by 2 = upper end for carb grams/day\nThe rest of your diet is fat. HSIS weight in kilos x anywhere from 2 to 3.5 depending on your weight goals (lower to lose, higher to gain)\nOn the \"recommended\" amount I likely wouldn't have been building any muscle and may have risked losing more during stress days. On the amount I'm eating, I am maintaining over stress days and growing on days when I'm more relaxed. RDAs are a mindfuck.\nPerfection is entirely individual. Any philosophy or pursuit that encourages individuality has merit in that it frees people. Any that encourages shackles only has merit in that it shows you how wrong and desperate the human mind can get in its pursuit of truth.\nI get blunter and more narcissistic by the day.\nI'd apologize, but...\nCW-125, part calorie counting, part transition to primal\nGW- Goals are no longer weight-related\nI believe that everything is individual and has to be taken within your own personal context of course.\nFor the metrically challenged, we are talking about a guy who is 5'8\" and weighs 135, Ayla, I don't think you really need to worry about exact macro ratios for him right now.", "score": 11.600539066098397, "rank": 95}, {"document_id": "doc-::chunk-0", "d_text": "Macro means large and nutrients are needed for your body’s survival. There are three macronutrients: proteins, carbohydrates and fats.\nProtein is from the Greek word ‘proto’, meaning first or of first quality. Protein is an umbrella word for the twenty-two organic amino acids, of which thirteen are nonessential to our diet, meaning our body can synthesize them.
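The Optimal Diet arithmetic quoted in the forum post earlier (protein = HSIS kilos ±10%, carbs ≤ kilos/2, fat = kilos × 2-3.5) can be sketched as follows; the function name and the default fat factor of 2.5 are assumptions for illustration:

```python
# A sketch of the Optimal Diet arithmetic quoted in the post above,
# assuming the "HSIS" target weight is given in kilograms.
def optimal_diet_macros(hsis_kg, fat_factor=2.5):
    """Return (protein_range_g, carb_max_g, fat_g) per day.

    fat_factor runs from 2 (to lose weight) to 3.5 (to gain).
    """
    protein_range = (hsis_kg * 0.9, hsis_kg * 1.1)  # kilos +/- 10%
    carb_max = hsis_kg / 2                          # kilos divided by 2
    fat = hsis_kg * fat_factor                      # the rest of the diet
    return protein_range, carb_max, fat

protein, carb_max, fat = optimal_diet_macros(70)
print(protein, carb_max, fat)  # protein ~63-77 g, carbs <= 35 g, fat 175 g
```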
The other nine are essential amino acids meaning it is essential that we obtain them from our diet.\nProteins build and maintain our body tissues, help produce antibodies, enzymes and hormones such as insulin. Protein is the primary component of muscles, skin, nails, hair and internal organs, especially the heart. Each gram of protein releases four calories or units of heat or energy for the body. Your intake of protein should be approximately 25% of your daily caloric intake.\nThe average woman needs fifty to sixty grams of protein a day and the average man needs sixty to seventy grams of protein a day. These are very general, as lactating women need additional protein, as just one example. For children the Recommended Daily Allowance (RDA) for protein is based on body weight and included age-related adjustments. Multiply your child’s weight in pounds by the number of grams of protein needed per pound of body weight to calculate their daily protein requirements. Remember that everyone is a biochemical individual so your protein requirements might not fit into the ‘average’ category.\nAges 1 to 3 – 0.81 grams (child’s weight in pounds x 0.81 = daily grams of protein)\nAges 4 to 6 – 0.68 grams\nAges 7 to 10 – 0.55 grams\nSources of protein are fish, meat, poultry, tofu and eggs, which are complete proteins, meaning they have all the essential amino acids. You can combine various ingredients so as to have a complete protein: rice and beans, grains and legumes, and nuts or seeds with dairy.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "by Tandis Bishop, RD\nProbably no component of food has been so misunderstood, and so radically misinterpreted, as protein. 
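The age-based child calculation above (weight in pounds times an age factor) is easy to express directly; the factors are the ones listed in the text, and the lookup structure is just an illustration:

```python
# The child calculation described above: daily grams = weight in pounds
# multiplied by an age-based factor. Factors come from the text.
AGE_FACTORS_G_PER_LB = [
    ((1, 3), 0.81),
    ((4, 6), 0.68),
    ((7, 10), 0.55),
]

def child_protein_g(age_years, weight_lb):
    """Daily protein grams for a child aged 1-10."""
    for (low, high), factor in AGE_FACTORS_G_PER_LB:
        if low <= age_years <= high:
            return weight_lb * factor
    raise ValueError("age outside the 1-10 range covered here")

print(child_protein_g(5, 40))  # about 27 g/day for a 40 lb five-year-old
```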
When we talk about vegetarianism, usually the biggest concern people have is \"How can I get enough protein?\" This myth is finally being dispelled as leading health organizations highlight the significance of plant-based proteins as opposed to the health hazards of the excess animal protein common in the average American diet. You will see that it is almost impossible to be protein deficient on a well-balanced, calorie-adequate vegetarian diet.\nPlant-based foods as the key to a healthy lifestyle are emphasized in guidelines by The American Heart Association, American Cancer Society, American Diabetes Association, The American Institute for Cancer Research, and many other health organizations. In fact, the new USDA MyPlate food guide is about 75% plant-based.\nWhat is protein?\nProtein is an essential nutrient involved in virtually all cell functions. In the body, protein is required for structural support, and the maintenance and repair of tissues. It is also the basic component for immunity, most hormones, and all enzymes, among other functions. In food, proteins are made from chains of 20 different amino acids, the building blocks of protein. Our bodies can only produce 11 of these amino acids. The 9 \"essential\" amino acids, which cannot be made by the body, must be obtained from food. A diet with a variety of whole grains, legumes, and vegetables can provide all the essential amino acids to meet our bodies' requirements.\nHow much protein do we need?\nThe Recommended Dietary Allowance (RDA) for protein for adults is 0.8 grams per kilogram of weight.1 To calculate your individual daily protein needs, use the following calculation: Body weight (in pounds) X 0.36 = recommended protein intake (in grams). For example, the daily protein requirement is about 60 grams for a 170 lb male, and about 47 grams of protein for a female that weighs about 130 lbs. In addition, the RDA recommendation includes a large safety factor for most people.
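The "× 0.36" rule above is simply the kilogram-based RDA converted to pounds, which a couple of lines of arithmetic confirm (the standard 2.2046 lb/kg conversion is assumed):

```python
# The "x 0.36" rule above is just the kilogram RDA converted to pounds:
# 0.8 g/kg divided by ~2.2046 lb/kg is roughly 0.36 g/lb.
RDA_G_PER_KG = 0.8
LB_PER_KG = 2.2046

print(round(RDA_G_PER_KG / LB_PER_KG, 2))  # 0.36

def rda_protein_g(weight_lb):
    """RDA protein in grams per day from body weight in pounds."""
    return weight_lb * 0.36

print(round(rda_protein_g(170)))  # 61 g, the "about 60 grams" for 170 lb
print(round(rda_protein_g(130)))  # 47 g for 130 lb
```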
Protein needs are increased for women who are pregnant or nursing which can easily be met with their higher caloric intake requirements. Protein deficiency is extremely unlikely when daily calorie needs are met by a variety of whole grains, vegetables, beans, lentils, tofu, nuts, seeds, and dairy products.\nPlants are rich in protein.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-1", "d_text": "The age of uncertainty might be over, though. A Canadian exercise scientist from McMaster University has recently published the results of his meta-study on protein requirements for resistance training and come up with a definitive answer.\nWhat He Did\nResearcher Robert Morton wasn't satisfied with the results of previous protein studies or even protein meta-studies. There was a lack of agreement because of widely divergent study inclusion criteria. Subjects were different ages, had different training statuses, different protein intakes, sources, and doses.\nSome used only trained participants, older people, supplements containing more than just protein, only one source of protein, shorter resistance training time periods, people who were using protein to diet, or old, frail bastards. Women were included in some studies, but not others.\nMorton, however, wanted to see how big a part protein intake played for people who lifted weights. He found 49 studies involving 1,863 men and women and compiled the results.\nWhat He Found\nThe studies included men and women who'd been weight training for between 6 and 52 weeks. Some used protein supplements and some got their protein from whole food. The protein doses varied from 5 to 44 grams per drink or meal.\nMorton detected a distinct relationship between total protein intake and fat-free mass (muscle). 
Moreover, dietary protein supplementation significantly increased one-rep maxes and cross-sectional muscle-fiber area (muscles got bigger).\nNo real surprises there, but his statistics did show that protein intake beyond 1.62 g/kilogram didn't result in any further resistance-training related increases in fat-free mass.\n\"There have been mixed messages sent to clinicians, dieticians, and ultimately practitioners about the efficacy of protein supplementation,\" said Morton, in a press release. \"This meta-analysis puts that debate to rest... protein intake is critical for muscle health and the recommended dietary allowance of 0.8 grams per kilogram per day is too low.\"\nWhat This Means to You\nIf you're a lifter, you need more than the RDA, a lot more. But based on Morton's meta-analysis, taking more than 1.62 grams per kilogram of bodyweight a day probably won't lead to any additional growth. That 1.62 grams/kilogram, converted to pounds, looks like the following:\n- 110 grams a day for a 150-pound lifter.\n- 147 grams a day for a 200-pound lifter.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-2", "d_text": "“High protein intake promotes the growth of preneoplastic foci in Fischer #344 rats: evidence that early remodeled foci retain the potential for future growth.” J. Nutr. 121 (1991): 1454–1461.\n[vi] Youngman LD, and Campbell TC. “Inhibition of aflatoxin B1-induced gamma-glutamyl transpeptidase positive (GGT+) hepatic preneoplastic foci and tumors by low protein diets: evidence that altered GGT+ foci indicate neoplastic potential.” Carcinogenesis 13 (1992):1607–1613.\n[vii] Levine ME et al. Low Protein Intake Is Associated with a Major Reduction in IGF-1, Cancer, and Overall Mortality in the 65 and Younger but Not Older Population. Cell Metab. 2014 Mar 4;19(3):407–17.\n[viii] Fung TT et al. Low-carbohydrate diets and all-cause and cause-specific mortality: two cohort studies. Ann Intern Med. 
2010 Sep 7;153(5):289–98.\n[ix] High-Protein, Low-Carb Diets Explained, accessed January 15, 2015.\n[x] High-Protein Diets, accessed January 15, 2015.\n*DERIVING THE ABOVE MACRONUTRIENT RECOMMENDATIONS:\nTo calculate these numbers, you have to look at the Recommended Dietary Allowance (RDA) for protein and the Adequate Intake, or “AI”, set for essential fats in the Dietary Reference Intake (DRI) tables for macronutrient composition (reference #10). These DRIs are developed by the Institute of Medicine, based on the available relevant evidence. The RDA is a specific recommended level of intake that should be adequate for 97.5% of a normally distributed population. The AI is what most people in the population eat on average and is used when there is not enough evidence to set an RDA.\nUsing the average requirements as target intakes (a strategy even suggested in the DRI tables), a man weighing 175 pounds who eats a 2,500-calorie diet would require 63.6 g of protein and 18.6 g of essential fats per day.", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-4", "d_text": "|Substance||Amount (males)||Amount (females)||Top Sources in Common Measures|\n|Water[i]||3.7 L/day||2.7 L/day||water, watermelon, iceberg lettuce|\n|Carbohydrates||130 g/day||130 g/day||milk, grains, fruits, vegetables|\n|Protein[ii]||56 g/day||46 g/day||meats, fish, legumes (pulses and lentils), nuts, milk, cheeses, eggs|\n|Fiber||38 g/day||25 g/day||barley, bulgur, rolled oats, legumes, nuts, beans, apples|\n|Fat||20–35% of calories||oils, butter, lard, nuts, seeds, fatty meat cuts, egg yolk, cheeses|\n|Linoleic acid, an omega-6 fatty acid (polyunsaturated)||17 g/day||12 g/day||sunflower seeds, sunflower oil, safflower oil|\n|alpha-Linolenic acid, an omega-3 fatty acid (polyunsaturated)||1.6 g/day||1.1 g/day||linseed (flaxseed) oil, salmon, sardines|\n|Cholesterol||300 milligrams (mg)||chicken giblets, turkey giblets, beef liver, egg yolk|\n|Trans fatty acids||As low as 
possible|\n|Saturated fatty acids||As low as possible while consuming a nutritionally adequate diet||coconut meat, coconut oil, lard, cheeses, butter, chocolate, egg yolk|\n|Added sugar||No more than 25% of calories||foods that taste sweet but are not found in nature, like: sweets, cookies, cakes, jams, energy drinks, soda drinks, many processed foods|\n- Includes water from food, beverages, and drinking water.\n- Based on 0.8 g/kg of body weight.\nCalculating the RDA\nThe equations used to calculate the RDA are as follows:\nRDA = EAR + 2 SD(EAR)\nIf data about variability in requirements are insufficient to calculate an SD, a coefficient of variation (CV) for the EAR of 10 percent is assumed, unless available data indicate a greater variation in requirements. If 10 percent is assumed to be the CV, then twice that amount when added to the EAR is defined as equal to the RDA. The resulting equation for the RDA is then\nRDA = 1.2 x EAR\nThis level of intake statistically represents 97.5 percent of the requirements of the population.\"", "score": 8.086131989696522, "rank": 100}]} {"qid": 38, "question_text": "At what age are dancers most vulnerable to injuries, and why?", "rank": [{"document_id": "doc-::chunk-2", "d_text": "Although young bodies are durable and quick to heal from injuries, the constant repetition of such strenuous movements causes instability in hip sockets and can lead to severe pain in the joints, as well as long-term conditions like deformities, dysplasia, and labral tears, she said.\n\"That drives us up the wall. We really do not support any reason to do splits,\" Solomon told Business Insider.
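The RDA arithmetic described in the DRI excerpt above (RDA = EAR + 2 SD, which with a 10% CV reduces to RDA = 1.2 × EAR) can be checked with a short Python sketch. The adult protein EAR of 0.66 g/kg/day used below is an assumed illustrative value, not stated in the excerpt:

```python
from statistics import NormalDist

def rda_from_ear(ear: float, cv: float = 0.10) -> float:
    """RDA = EAR + 2 * SD, with SD expressed as CV * EAR.
    With the default 10% CV this reduces to RDA = 1.2 * EAR."""
    return ear + 2 * (cv * ear)

# Assumed illustrative input: the adult protein EAR is about 0.66 g/kg/day,
# which the formula turns into the familiar ~0.8 g/kg/day RDA.
print(round(rda_from_ear(0.66), 2))  # 0.79

# An intake set at the mean requirement plus two standard deviations covers
# about 97.7% of a normally distributed population (the text rounds to 97.5%).
print(round(NormalDist().cdf(2.0), 3))  # 0.977
```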
\"Most young people who are not trained in technique prior to trying to do these kinds of things really are very vulnerable to hip problems especially.\"\nThe data appears to agree with Solomon's assessment — the number of dance-related injuries being treated in US emergency rooms annually has been climbing for years, according to one 2013 study by the Nationwide Children's Hospital's Center for Injury Research and Policy.\nBetween 1991 and 2007, the annual number of injuries in young dancers jumped by 37%, which is likely a wild underestimate, according to the study's lead author, Kristin Roberts. The study only had access to data from US emergency rooms and therefore didn't include injuries that were treated at home, by family doctors, or by private physiotherapists.\nThe causes of the increasing injury rate are even murkier, but social media trends, reality-TV shows, and video games that have helped popularize dance probably aren't too far off the mark, Roberts said.\nRegardless of the root cause of the injuries, the popularity of these tricks defy all logic, Solomon said, especially because the solution is so simple: Stop performing movements the body isn't equipped to handle.\n\"I have no idea why this phenomenon has spouted up, except that it's a kind of entertainment without proper instruction,\" she said. \"There are so many movements in the world that you could do. Why do something that might be deleterious?\"", "score": 52.43982115659228, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Athletes in many of the aesthetic sports, including dance, require high level strength AND flexibility to achieve performance goals. Because dancers move through significant ranges of motion in the spine, hips, ankles, and feet, it is important to have adequate strength and flexibility in surrounding muscle groups to reduce risk of injury and maximize performance. 
Beyond optimizing strength and flexibility, it is important for dancers to develop a fine-tuned balance between activity and rest to allow for proper recovery and restoration in the body.\nMost dance injuries, especially in dancers who are still growing, are overuse injuries. These types of injuries occur for one or a mix of the following reasons:\n- Inadequate warm-up before and/or cool-down after activity\n- Inadequate rest/recovery time between bouts of activity (in other words, a high volume of activity and a low volume of rest, or none at all)\n- Poor nutrition\n- High levels of stress inside and outside of sport\nThe most important point to drive home with overuse injuries is the importance of encouraging dancers to listen to their body. Any sign of discomfort is the body's attempt at throwing a yellow or red flag, suggesting the dancer proceed with caution or stop the immediate activity altogether until further medical assessment is performed. Athletes understandably do not like to hear that they must step out of their sport for any length of time, which makes it all the more important to acknowledge warning signs early. When yellow and red flags are honored as soon as they pop up, there may still be time to make modifications and allow for some level of participation before having to withdraw from the activity altogether. From both the athlete and instructor/coach perspective, it is undesirable to have to sit out for one day. But this one day could prevent future sidelining that has the potential to last for multiple days or maybe the entire season.\nThere are many ways dancers can take accountability for nourishing and treating their body with care. Beyond finding a proper balance between activity and rest, dancers can integrate specific strengthening and flexibility exercises into their daily routine. 
Important muscle groups for dancers to strengthen include the deep core (including the transversus abdominis), the gluteals and deep hip rotators, and the intrinsic foot muscles. It is also important for dancers to remain flexible in their hip flexors, hamstrings, glutes, and ankle plantar flexors (gastrocnemius and soleus).", "score": 48.08037070351185, "rank": 2}, {"document_id": "doc-::chunk-1", "d_text": "November 2000 https://www.iadms.org/page/1\n2) Delegete, A. Health Considerations for the Adolescent Dancer. A webinar through the Harkness Center for Dance Injuries. Accessed September 23, 2018.\n3) Steinberg, N., Siev-Ner, I., Peleg, S., Dar, G., Masharawi, Y., Zeev, A., & Hershkovitz, I. (2012). Extrinsic and intrinsic risk factors associated with injuries in young dancers aged 8–16 years. Journal of Sports Sciences, 30(5), 485-495.\n4) Steinberg, N., Siev-Ner, I., Peleg, S., Dar, G., Masharawi, Y., Zeev, A., & Hershkovitz, I. (2013). Injuries in female dancers aged 8 to 16 years. Journal of Athletic Training, 48(1), 118-123.\nThe immune system provides protection from seasonal illnesses such as the common cold as well as other health problems including arthritis, allergies, abnormal cell development, and cancers. Dancers are exposed to physical stress from training, which increases susceptibility to illness. Additionally, working in close proximity with other dancers increases exposure to infection. Nutrition plays an important role in maintaining immune function to protect against infection. Learn how to boost your immunity by including these nutrients in your eating plan.\nProteins form many immune cells and transporters. Try to consume a variety of protein foods including seafood, lean meat, poultry, eggs, beans and peas, soy products, and unsalted nuts and seeds.\nVitamin A helps regulate immune function and protects from infections by maintaining healthy tissues in the skin, mouth, stomach, intestines, and respiratory system. 
Vitamin A is found in foods such as sweet potatoes, carrots, kale, spinach, red bell peppers, apricots, eggs, or foods labeled \"vitamin A fortified,\" such as cereal or dairy foods.\nVitamin E works as an antioxidant to neutralize free radicals. Include vitamin E in your diet with fortified cereals, sunflower seeds, almonds, vegetable oils (such as sunflower or safflower oil), hazelnuts, and peanut butter.\nVitamin C stimulates the formation of antibodies, which are necessary to fight infection.", "score": 47.70617912848623, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Knowledge of a sport's demands, lore, and jargon aids in understanding the athlete's problems.\nModerate risk. Reported injuries include stress fractures of the pars, distal fibula, and base of the second metatarsal; Achilles tendonitis; cuboid subluxation; os trigonum impingement syndrome; and trigger toes. Toes are severely stressed. Delayed puberty and the emphasis on slenderness are problems for girls and can lead to eating disorders. Be aware that the dancer's self-image is one of an artist, not an athlete, despite the high level of athletic demands.\nModerate risk, depending upon the child's age. Most acute injuries are associated with sliding, collisions, and ball or bat strikes. Most deaths occur from ball strikes to the head, neck, or chest. Overuse injuries, such as little league elbow, are preventable but potentially serious problems. Unusual injuries include apophysitis of the acromion, distal humeral epiphyseal separation, persistence of the olecranon physis, and avulsion of the iliac crest apophysis while swinging a bat.\nModerate risk. Compared with other sports, injuries occur more often but are usually mild. Injuries in children under age 12 involve mainly contusions, sprains, lacerations, and rarely a fracture. Seldom are there serious injuries. 
Adolescent injuries are more common and more likely to be serious, such as contusions, sprains, and sometimes fractures. Ankles and knees are affected most. The most serious are ACL injuries. Ankle injuries require rehabilitation to prevent recurrence.\nHigh risk. The most serious accidents are due to collisions with motor vehicles. Prevention is essential through education of children, use of helmets, and avoidance of congested roadways. Potential long-term disability from head injury is significant.\nHigh risk. Most injuries are due to collisions in this most risky sport. Catastrophic injuries can be reduced by using a well-fitting helmet and by avoiding spearing (initial head contact in blocking and tackling). A quarter of American football players are obese. Injury rates increase with maturation. Long-term osteoarthritis of the knee and hip are possible sequelae of major injuries to these joints. Most problems result from acute injury and are due to joint and neurological damage.", "score": 45.52187135659351, "rank": 4}, {"document_id": "doc-::chunk-3", "d_text": "There are a few causes for this phenomenon:\n- Practicing one sport generally leads to one way of moving on the same surface – this stresses the same tissues repetitively and can lead to microtrauma over time\n- Repetitive tissue loading in high volumes creates high demands on joints in a young, changing body (i.e., during puberty) – this can lead to a breakdown of tissues\nA quick tip for preventing overuse injuries: listen to the warning signs and be assessed by an exercise physiologist or strength & conditioning coach (or see an allied health professional if you're already in pain).\nWork with a qualified exercise specialist to incorporate movement variations at an appropriate time in your YTP (yearly training plan); test regularly, track growth and development, use a \"coach's eye\" to assess daily movement quality, and expose artistic athletes to fundamental movement skills (especially outside of the repetitive movements they 
may be familiar with).\nWith longevity at its core, part of a structured supplemental training program should also involve recovery strategies to prime the body and energy systems for the following days' practice/rehearsal, performance, or competition. Exercise physiologists and strength & conditioning coaches will strategically structure an athlete's YTP to include (but not limited to):\n- aerobic conditioning within a targeted heart rate zone for a purposeful cool-down\n- planned de-load weeks between blocks of training that include general preparatory exercises and circuits to prevent staleness and overtraining\n- contralateral circuits to prime movements and energy systems for greater performance outcomes in the following week\n- breathing techniques, mindfulness, and meditation\n- corrective exercises to balance full-body locomotion and mobility-stability deficits\n- re-lengthening through dynamic and/or static stretching and SMR (self-myofascial release)\nIMPORTANT TRAINING CONSIDERATIONS\nWhen seeking supplementary strength & conditioning programs, it's important to work with a certified exercise physiologist and/or a strength & conditioning specialist, preferably with a background in artistic sports performance. The design of any resistance training program must be specific to the artistic sport.\nAthletes are much more likely to be passionate about their training if they understand why they are training and what the expected outcomes may be. Motivation to keep training comes from educating athletes about the importance of strength & conditioning. 
Education should address injury prevention and improved performance to create longevity within the athletic career, as well as an active life after sport.", "score": 44.6510023132351, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Young female athletes appear to face a far greater risk for repetitive motion injuries than young males do, new research suggests.\nThe finding stems from an analysis that looked at overuse injuries among 3,000 male and female high school athletes participating in 20 different sports.\nResearchers from Ohio State University Wexner Medical Center in Columbus report that the highest overuse injury rate was observed among girls who ran track.\nThis was followed by girls who played field hockey and girls who played lacrosse.\nBy contrast, among boys the most overuse injuries occurred among swimmers and divers. Their rate of repetitive motion injuries was pegged at only about a third of what investigators saw among female runners.\nThe study was published recently in the Journal of Pediatrics.\n\"During this point of their lives, this is when girls are developing bones at the greatest rate,\" study author Dr. Thomas Best said in a center news release. He is a professor and chair at OSU's department of sports medicine.\nSo \"it's incredibly important that they're getting the proper amounts of calcium and vitamin D,\" he said.\nBest and his colleagues pointed out that overuse injuries make up about half of all athletic injuries. They are particularly common among children between the ages of 13 and 17.\nOveruse injuries also account for about twice as many visits to sports medicine doctors as incidents of acute trauma, the authors noted.\nOverall, most overuse injuries involved the lower leg, the study team noted. 
This was followed by knee and shoulder injuries.\nTo limit risk, the researchers advised that all high school athletes play more than just a single sport and make a conscious effort to change up their movements. Parents, they added, should encourage their children to get the rest and foods they need to stay healthy.", "score": 44.342744247440855, "rank": 6}, {"document_id": "doc-::chunk-2", "d_text": "Tibial fractures, medial collateral ligament injuries, and thumb and shoulder injuries are common. Collision injuries are the most serious, as head, spine, and extremity injuries may have long-term sequelae. The most common injuries were contusions of the knee in children and sprains of the ulnar ligament of the thumb in adolescents. With increasing age, lower extremity injuries decrease but upper extremity injuries increase.\nModerate risk. Overuse injuries and injuries involving the ankle and knee are common. ACL injuries are 2–3 times greater in girls. Long-term disability risk is low to moderate. The incidence increases with age, and injuries are more common in girls. Seventy percent of the injuries are located in the lower extremities, particularly the knee (26%) and ankle (23%). Back pain occurs in 14% of players. Fractures, which account for 4% of injuries, are more often in the upper extremities. Indoor soccer is the most risky.\nLow risk. Overuse injuries of the shoulder, back, and knee are common, but long-term disability risk is low. Good training and modification of swimming strokes are important in preventing and managing these problems. Shoulder pain is due to impingement or instability. Preparedness for swimming is optimal between ages 5 and 6 years.\nLow risk. Acute injuries of the lower limbs, most often sprains, are the most common injuries. Upper extremity injuries, often due to overuse, are preventable with appropriate training, stroke technique, and equipment. Long-term disability risk is low.\nVery high risk. 
Most injuries occur from falls on hard surfaces to the side of the device. Head and cervical spine injuries are relatively common, and the potential for long-term disability is great. Discourage families from allowing children to play on trampolines.\nLow to moderate risk. With proper supervision and low weights, this sport is relatively safe. Overuse is the most common cause of injury. Long-term sequelae are rare.\nHigh risk. More injuries occur in large adolescents, and more during competition than in practice. The upper limb and knee are the most common sites of injury, and dislocations are more common than fractures. Most injuries are acute sprains.", "score": 41.47180470555945, "rank": 7}, {"document_id": "doc-::chunk-15", "d_text": "Injury incidence ranges from 1.4 to 3.7 injuries per 1000 participation hours. 
The injury rate in competition is two times higher than in practice. However, most injuries occur during practice because the exposure time is greater. Both acute and overuse injuries occur in gymnastics. In both male and female gymnasts, 60% of injuries are acute. The parts of the body that are particularly affected are the shoulder, wrist, elbow, lower back, knee, and ankle. Common injury types are strains, sprains, contusions, and less commonly fractures (acute and stress). Concussions make up 2.3% of injuries in practice and 2.6% of injuries in competition. In one study of elite and subelite female gymnasts from ages 11 to 19 years, 12.3% of injuries were to growth plates.\nThe lower extremity is the most common area of injury in female gymnasts, accounting for 53% of injuries in practice and 69% of injuries in competitions. In competition, 20% of injuries are internal derangement of the knee (e.g., ACL tear, medial collateral ligament [MCL]/lateral collateral ligament sprain, or meniscal tear), and 16.4% of injuries are ankle sprains. These injuries most often occur during floor routines and dismounts. Chronic injuries of the knee include patellofemoral syndrome, patellar tendonitis, and Osgood-Schlatter disease. Foot injuries include calcaneal apophysitis, foot pad contusion, sesamoiditis, plantar fasciitis, and fracture (acute and stress).\nUpper extremity injuries are the second most common area of injury in female gymnasts and the most common area of injury in male gymnasts at 54%. In male gymnasts the most common site is the shoulder. Shoulder injuries include rotator cuff impingement, strain, and tendinosis; labral tears; and glenohumeral dislocation. The most common sites of upper extremity injury in female gymnasts are the wrist and elbow. 
The elbow can be affected by acute injuries (e.g., ulnar collateral ligament sprains, fractures, and dislocations) and chronic injuries (e.g., medial epicondyle apophysitis and osteochondritis dissecans of the capitellum).", "score": 38.99945260103989, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "As dancers return to the studio, it is important to gradually introduce jumps to prevent lower leg injuries. Foot and ankle injuries are often incurred during dynamic movements like jumping, and improper flooring, poor technique, and fatigue can increase the risk of injury.\nHere are some important tips to lessen the likelihood of sustaining injury during jumps:\nRussell J. A. (2013). Preventing dance injuries: current perspectives. Open Access Journal of Sports Medicine, 4, 199–210. https://doi.org/10.2147/OAJSM.S36529", "score": 37.78122882435768, "rank": 10}, {"document_id": "doc-::chunk-2", "d_text": "Many preventive programs are targeted toward athletes aged from 15 to the early twenties. It is possible that the implementation of injury prevention programs would be more beneficial at an earlier age; according to some authors, we should put the most effort into prevention from ages 12-14. From a motor learning aspect, even ages 6-12 might be important for developing \"good habits\" (good warm-up routines and movement patterns) and establishing correct playing technique.\nFor more detailed information you can download the fact sheet \"Preventing injuries in Basketball\" here\n1. Abernethy L, Bleakley C. Strategies to prevent injury in adolescent sport: a systematic review. Br J Sports Med 2007;41:627-638\n2. Fong DTP, Chan YY, Mok KM, Yung PSH, Chan KM. Understanding acute ankle ligamentous sprain injury in sports. Sports Medicine, Arthroscopy, Rehabilitation, Therapy & Technology 2009;1:14\n3. Stasinopoulos D. Comparison of three preventive methods in order to reduce the incidence of ankle inversion sprains among female volleyball players. 
Br J Sports Med 2004;38:182-185\n4. Cumps E, Verhagen E, Meeusen R. Prospective epidemiological study of basketball injuries during one competitive season: Ankle sprains and overuse knee injuries. J Sports Sci Med 2007;6:204-211\n5. Cumps E, Verhagen E, Meeusen R. Efficacy of a sports specific balance training programme on the incidence of ankle sprains in basketball. J Sports Sci Med 2007;6:212-219\n6. Myklebust G, Steffen K. Prevention of ACL injuries: how, when and who? Knee Surg Sports Traumatol Arthrosc 2009;17:857-858\n7. Caine DJ, Harmer PA, Schiff M. Epidemiology of injuries in Olympic sports. 2010. Blackwell Publishing Ltd\n8. Starkey C. Injuries and Illnesses in the National Basketball Association: A 10-Year Perspective. Journal of Athletic Training 2000;35(2):161-167\n9. McKay GD, Goldie PA, Payne WR, Oakes BW.", "score": 37.158187457520725, "rank": 11}, {"document_id": "doc-::chunk-2", "d_text": "The judging is fierce and the competition makes the dancer want to strive even harder to become an amazing artist. Dance is strenuous. In fact, dancers have one of the highest rates of non-fatal on-the-job injury. 
The causes of most dance injuries are pushing for perfection so hard that muscles are strained, shin splints occur, plantar fasciitis develops, and stress fractures are created. When this happens, the best remedy is stretching before and after dancing. If a serious injury occurs, like breaking a bone by landing a move wrong, it could ruin the dancer's career.\nMost dancers retire around the age of 30; if an injury happens early in the dancer's career, they will usually retire to teaching. Recovery can take anywhere from a few weeks to years. A lot of dancers have permanent damage to their body after a vigorous career of muscles being strained and the body being pushed to its limit. Becoming a professional dancer is risky but rewarding. It takes hours of practice in a studio and the acceptance of not becoming a huge star with a lot of money.\nTo embark on the journey to become a professional dancer, the first decision to make is the style of dance. After that decision is made, the next step is to find a dance studio. During the training in the studio, cross training will be needed to become stronger and more flexible. Finances, competitions, and injuries are all things to consider before committing to a vigorous but rewarding schedule. Every dancer has a different opinion of this profession, but the ones who love to dance and have a passion that no one could ever take away from them are the ones who are happy and loving every second of it.", "score": 36.03384628721598, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "There was no difference in the frequency of reported injuries between subjects with vitamin D deficiency or insufficiency (2.1 ± 0.6 injuries) and those with normal vitamin D levels (1.4 ± 0.6 injuries). This pilot study showed that more than half of highly trained young male ballet dancers presented with low levels of vitamin D in winter. 
Further investigations in larger samples of adolescent athletes are needed to determine if this could negatively impact bone growth and place them at higher risk for musculoskeletal injuries.", "score": 34.67371402028528, "rank": 14}, {"document_id": "doc-::chunk-1", "d_text": "Conversely, if an orthopedic condition is not taken into consideration, pointe work can in some cases lead to the end of a dance career.\nIn our clinic, we evaluate dancers for postural dysfunctions in order to prevent injuries and allow a smooth transition from tennis shoes to pointe shoes.", "score": 34.21289990109224, "rank": 15}, {"document_id": "doc-::chunk-1", "d_text": "Limited hip flexors, coupled with weak abdominals, lead to excessive pelvic tilt and increased disk compression in the spinal column.\nThe SuperiorBand® and SuperiorBand® Ultra let you stretch farther when you incorporate them into your warm-up stretching positions. Use them before every practice to increase your stretching range. The SuperiorBand® and SuperiorBand® Ultra work for both static (hold) and dynamic (moving) stretching.\nLigament sprains and meniscal tears of the knee: these injuries usually stem from limited hip rotation. Dancers with \"tight\" hips tend to compensate with their knees and ankles, thus placing abnormal forces on these joints and leading to injury.\nManage fatigue and stress: it is a known fact that almost all athletes and dancers perform best when rested and relaxed. 
Fatigue and stress result in muscle tightness and a lack of focus, thus drastically increasing the chance of acute injuries.\nThe earliest form of ballet was performed in large chambers with viewers seated in galleries, so that the floor patterns of the choreography could be seen from above.", "score": 32.69903565971246, "rank": 16}, {"document_id": "doc-::chunk-3", "d_text": "Even when they don't have the natural knack for it, they learn the vocabulary.” He feels almost everyone can learn dance enough to enjoy it.\nInjury and Sacrifice\nProfessional dance is pure artistic movement and, while often graceful, it requires athleticism and strength of both body and mind. It seems for professionals it is a life of give and take, with dancers giving their all artistically and dance itself taking its toll physically. I ask Jahrel about injury. He talks first about dancers hurting themselves trying to do moves too quickly without first slowly building on learning technique. Like anything else, you have to put in the hours. He explains, “The longevity of a dancer's life usually depends on how injured they've been through their life and how much the pain outweighs the joy.” He has dealt with pain. “Well, I went through two or three years where I was just dealing with constant pain, and in the beginning I was like, pain is just a part of dancing, you just keep doing it. But when you walk down the street after you rehearse and you go home and [holding his right foot] it's throbbing and you wake up and you step on it and it hurts day after day after day... like it eats at your brain and just takes away from everything.” Jahrel also suffered a freak performance-related injury aboard a cruise ship where, after three weeks of hard rehearsals to prepare for four one-hour shows, his solo at the end would have him exiting the stage in the dark. 
What wasn’t properly communicated was that at the end of his solo, an elevated movable stage would drop, creating a 16-foot-deep pit behind him. So, after a bunch of split leaps and landing, he turned and his dark exit was straight down! “I was happy I finished it right and was leaving and suddenly there was no ground underneath me, and I guess somebody screamed pit!! and I was gone, that was it, it was over. It hurt, a lot.”\nI feel it’s important to tell this part of a dancer’s story and life, because if you enjoy and celebrate the arts, in this case dance, it’s good to also appreciate the sacrifices artists go through away from the lights of the stage. I think it brings a greater appreciation for the arts overall. It’s what you don’t see in the price of admission that is often the most valuable.", "score": 32.4266743555655, "rank": 17}, {"document_id": "doc-::chunk-2", "d_text": "More severe common injuries include ACL tears and stress fractures.\nDefinitely! For male dancers most injuries occur in the back, knee, and upper body due to difficult partnering and jumps. For female dancers, foot and ankle injuries are pretty common due to the amount of pointe work they’re required to do.\nThe most difficult injury I’ve ever had to overcome was my Lisfranc Ligament Tear in my right foot. In December of 2015, I was overworking myself with shows of Nutcracker, working out intensely to build more muscle, and rehearsing variations to record to send out for auditions. Because I was putting so much stress on my body, my muscles and ligaments were more prone to being inactive and unstable. So after one Nutcracker show, on my way home, I misjudged stepping off the sidewalk and at that moment the ligament connecting my first and second metatarsals suddenly snapped. It was one of the worst pains I’ve ever felt and would never wish it upon anyone.
After three weeks of treating it as a sprained foot due to a misdiagnosis, I was taken to an orthopedist by my mom, who is an RN. X-rays showed that my ligament had in fact torn completely and my metatarsals had slowly started separating. The doctor told me that if I had waited another week to come in, my career would’ve been done. A few days later I was in a surgery center undergoing a procedure to put hardware in my midfoot to repair my ligament. It took seven months, two surgeries and lots of physical therapy until I could even start doing barre again. I had to relearn how to walk, stabilize, and regain control of all the tiny muscles and ligaments in my foot. When Ashley offered me a contract to dance at Joffrey, I still had two screws in my foot and he told me he had absolute faith that I would come back stronger than I was before. With him, the incredible ballet masters, the support of my Joffrey family, and the help from all the physical therapists who worked with me at the ballet and the Athletico clinic, I was jumping by September and fully dancing by October – just in time for the fall program of Romeo and Juliet in my first season!\nDuring a busy time like Nutcracker when we’re doing so many shows, it’s harder for muscles to activate as quickly when they’re exhausted.", "score": 31.979545079015132, "rank": 18}, {"document_id": "doc-::chunk-1", "d_text": "Plantar fasciitis: This is an inflammation of the plantar fascia, a band of tissue running along the bottom of the foot from heel to toes.
The injury is related to overuse.\nAnkle Impingement: Many variations of this condition exist, but the one that concerns us here is posterior ankle impingement syndrome, also known as os trigonum syndrome, and commonly referred to as “dancer’s heel.” This happens when soft tissue becomes trapped and pinched by bones, which causes painful spurs to form.\nAchilles Tendonitis: This condition can be brought on by dancing on a floor that is not properly “sprung,” or in other words is too hard. Overtraining can also cause Achilles tendonitis, especially hard training in a short time after a period of inactivity.\nCuboid Syndrome: Repetitive movement (such as one does when training) or sprains (another common ballet injury) can cause cuboid syndrome, a painful misalignment of the cuboid bone on the outer edge of the midfoot.\nMetatarsalgia: This condition generally manifests itself as pain in the ball of the foot, and in dancers it is often brought on by years of overwork and by forcing the foot into extreme positions, leading to instability in the joints of the toes.\nWhat Causes Ballet Injuries?\nThe particular causes of particular ballet injuries vary, of course, depending on the movement being performed. There are, however, certain techniques and types of footwear used in ballet that are particularly dangerous.\nPointe Technique: Dancing en pointe puts a great deal of stress on various parts of the foot and toes. Poor technique or poorly fitted shoes can make matters worse and increase the chances of injury.\nAlthough dancers who practice this technique wear shoes designed for it, dancing en pointe still causes friction between the toes and the shoes themselves, which can cause chafing and blistering.
En pointe dancing is the culprit in many cases of bunions, hammertoes, sesamoiditis, bursitis, trigger toe, and stress fractures.\nWhat Long-Term Complications Can Result from Ballet Injuries?\nThe long-term complications associated with ballet injuries are as varied as the injuries themselves, but the outlook for such injuries if they are not treated usually involves chronic pain and possibly antalgic gait (or in layman’s terms, a limp).", "score": 31.763964849347477, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "As a dance teacher and physiotherapist, something my students have often asked me is, “what can I do to avoid getting injured while dancing?” Although dance isn’t often considered a “sport” in the traditional sense, many genres of dance are highly athletic, and just like athletes, dancers are at risk of “sports” injuries. Join me in a five-part series as I discuss some important aspects of preventative care, including 1) warm-ups and cool downs 2) physical conditioning 3) adequate rest and recovery 4) footwear and floors, and 5) self-care and management.\n*As my current area of focus within the dance world is primarily partner dances, many of my examples will be based on this style of dance, but the themes I will be discussing are broadly applicable to other forms of dance.\nDo I really need to do a warm-up?\nHow many of us would head to the gym for a workout and immediately hit the treadmill in a full sprint without a warm-up? Yet, how many dancers have shown up to a social dance where the first song playing is so good that they just have to dance, regardless of how fast and energetic the song may be?\nProperly warming up your body before hitting the dance floor can reduce your risk of such injuries as muscle strains, tendonitis, and overuse injuries.
Every dance practice, performance, and social dance evening should begin with a warm-up to prepare your body for exercise, and end with a cool-down to kick-start your post-dance recovery.\nHow does a warm-up help?\nWarming up with light aerobic exercise not only improves performance, but may also reduce the risk of injury. Warm-ups lead to a number of physiological changes in your body that are necessary to prepare you mentally as well as physically, including:\n- increased rate and depth of breathing, which increases the intake of oxygen and release of carbon dioxide\n- increased heart rate, which allows more oxygen and fuel to be delivered to working muscles\n- increased elasticity of muscles and tendons, which increases range of motion of joints and reduces risk of tearing\n- improved proprioception (joint position sense), which improves not only balance and stability, but also awareness of body position, which can help with the aesthetic aspect of your movements\n- improved signalling of nerves, which improves reaction time and your ability to generate a muscle contraction\nHow do I warm up?", "score": 31.63648167117063, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "Common Ballet Injuries – Prevention Tips\nWhen it comes to ballet there aren’t a whole lot of dancers out there who can honestly say that they have never had an injury. Whether it’s an injury as major or career-ending as an Achilles tendon rupture or an injury as minor as a shin splint, it is imperative that the root of the problem be discovered.\nLuckily today there are preventative measures being taken by ballet companies and schools led by younger artistic staffs to keep dancers fine-tuned.
In the past there were ballet teachers and company artistic directors who were not educated on injury prevention for dancers, thus indirectly encouraging dancers to continue through injuries and shortening the life of their careers.\nCommon injuries for dancing include (but are not limited to):\nLower back pain\nCheck out a complete list on the Harkness Center for Dance Injuries website. It’s very helpful.\nOf course there are ways to prevent some of these injuries through core conditioning, pilates, strength training, and ballet cross training. Most ballet companies have on-site physical therapists who work with dancers’ daily aches and pains and can prescribe a correct physical therapy routine. There is also the Ballet Strength DVD which has a library of exercises that you can do to prevent injury and improve strength.\nAs dancers today, there is no excuse to allow the body to be plagued by injury and pain. With all of the helpful resources available dancers are extending their careers well into their 30’s. Don’t wait to address your injury prone areas…you don’t want to wait until it’s too late!\nPosted on February 19, 2011, in Ballet, Ballet Strength, Injury Prevention, Strength Training for Dancers, Technique Tips.", "score": 31.55320867784443, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "Did you know that among Dr.
Lee’s many accomplishments, including being able to remember AC/DC lyrics till this day and besides being a Harford County Chiropractor extraordinaire; he is a member of the International Association for Dance Medicine and Sciences and will soon become a NYU Medical School recognized dance injury specialist? Like they say, if you can’t do it, might as well treat the ones who can when they get injured. Or something like that.\nThe root of all dance injuries is Energy Intake. This is not exclusive to dancers. Gymnasts and all female athletes would qualify. Female athletes do not consume enough calories. The average adolescent female dancer/athlete needs 2500 calories per day. However, most female athletes will actually only consume 60-80% of that, and dancers/gymnasts another 15% less. Why? Dancers and gymnasts usually have a fear of weight and how it looks and affects their performance. All athletes it’s because of lack of education.\nThe body considers dance and athletic performance to be discretionary calories. Not only do the girls not eat enough, they usually don’t eat the right stuff. The body uses all the consumed calories to fuel physical and pubescent growth first. Fueling for Dance and other athletic activity comes second. Imagine this: you have 3, 1-gallon buckets and only a 2 gallon pitcher. You can’t fill them all. From here a downward spiral ensues with puberty, growth and athleticism all draining from each other, not only robbing the athlete of her ability and potential, but making her more and more disposed to injury.\nWhat should the girls be eating? Not the 5 C’s of the single girl diet (Coffee, Cigarettes, Chocolate, diet Coke and Cheese). There is a popular myth that salads are healthy. While they are a better choice than a big Mac, a ”salad “is not always the healthy choice. Iceberg lettuce with ranch is about as healthy as notebook paper with fatty wax. 
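The intake gap described above is easy to quantify; here is a minimal back-of-the-envelope sketch in Python (the 2500 kcal baseline, the 60-80% intake figure, and the additional 15% shortfall for dancers/gymnasts are taken from the text; the variable names and exact arithmetic are illustrative, not dietary guidance):

```python
# Daily calorie math for an adolescent female dancer, using the
# figures quoted in the text (illustrative only, not dietary advice).
NEEDED = 2500  # kcal/day needed by the average adolescent female dancer/athlete

# Most female athletes consume only 60-80% of what they need...
typical_low = 0.60 * NEEDED   # 1500 kcal
typical_high = 0.80 * NEEDED  # 2000 kcal

# ...and dancers/gymnasts consume roughly another 15% less than that.
dancer_low = typical_low * 0.85
dancer_high = typical_high * 0.85

print(f"typical athlete intake: {typical_low:.0f}-{typical_high:.0f} kcal/day")
print(f"dancer/gymnast intake:  {dancer_low:.0f}-{dancer_high:.0f} kcal/day")
print(f"daily shortfall:        {NEEDED - dancer_high:.0f}-{NEEDED - dancer_low:.0f} kcal")
```

On these numbers a dancer lands roughly 800-1225 kcal short of her daily requirement, which is the three-buckets, two-gallon-pitcher problem the text describes.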
A salad made from leafy greens, spinach, other veggies, and a protein, with little or no dressing is a great choice. Protein tends to be the most deficient in the female athlete and should be the number one priority when choosing foods. Protein runs dual purpose, fuel and structure.", "score": 31.25008573885261, "rank": 22}, {"document_id": "doc-::chunk-8", "d_text": "In case you were unaware, there are a number of sensitive parts to the neck. Considering dancers tend to use this area more than most, it’s perhaps not surprising that injuries in this area are so common. With overuse and stress, the pain can often radiate to the back and into the head in the form of migraines and headaches.\nAs well as dancing, neck pain can also be caused by poor posture, previous injuries, and long periods of sitting. Fortunately, spinal alignments can relieve some of the tension and relieve pain in the neck. As a dancer, this will allow you more freedom and the full range of motion for performances.\nMoving down the spine a little, we have the leading cause of disability for those aged between 18 and 46. Again, this is a problem that can be caused by poor posture. Additionally, dancers may have the incorrect lifting technique when dancing with others and overuse can cause stress on the area. In addition to the initial problem of back pain, it’s often made worse by the fact that most people avoid medical treatment.\nFor some, it gets to the point where their dancing ability deteriorates, and they’re forced to retire. If it still isn’t treated, back pain can really impact one’s day-to-day routines. Eventually, there will be no other option but to operate.\nLuckily, chiropractic care is centered around holistic treatment which means focusing on the whole body, staying away from medication, and trying to prevent surgery at all costs. 
Instead of the short-term solution of drugs, we look for ways to reduce the pain for good.\nLower Back Pain\nTowards the bottom of the spine, this is where problems occur for the majority of Americans. After a long day at work, we can expect some level of aching, but it should never stick around for long periods or affect simple tasks in life.\nWhen it comes to performers, lower back pain is even more dangerous because the body is forced to adjust. To alleviate pain, the body will compensate, and this negatively impacts muscular strength, balance, and coordination (three skills that are essential to dance!).\nWhen left without medical attention, this is where it really gets serious because it can limit mobility. Suddenly, you may find it hard to stand or even sit comfortably. If you’ve been experiencing lower back pain in recent times, we urge you to get in touch because the problem will only get worse (it’s not something that can heal itself!).", "score": 30.868459463319613, "rank": 23}, {"document_id": "doc-::chunk-1", "d_text": "So does Barbara Harris, a trainer and rehabilitation specialist for the Boston Ballet.\nA normally curved spine acts like a spring, absorbing impact from dance movements, Harris said. But dancers think a straight spine is better - ``it makes them look very lifted and very long,'' she said.\nThe ``lifted'' look, with a straight back and neck, and pelvis tucked under the back, makes dancers seem to float as they move, Harris said. However, dancers can get the same look with less risk to their backs simply by keeping the pelvis in a neutral position, neither tilted forward nor back, she said.\nThose who don't prepare their backs properly may wind up at the sports medicine clinic and at rehab. At the clinic, they may get X-rays and bone scans. 
The scans have the advantage of picking up tiny tears before they grow big enough to show on X-rays, by which point the injuries are harder to heal, Micheli said.\nAt rehab, Harris works with a Pilates-based system of flexibility and strengthening - a method that often is familiar to dancers. She also focuses on teaching them how to recognize and foster good back alignment.\nAlmost all dancers can dance after treatment, Micheli said. But older dancers may not heal as easily, he said - like older athletes, they may return to their activity, but must recognize they will have to play with pain.", "score": 30.718415723852, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "My name is Judith, and I am a dance-a-holic. There. It’s out. Hardly a weekend goes by that I’m not travelling to some seemingly random point in the UK to dance until the early hours of the morning. There is nothing, nothing, quite like dancing. Suede-soled shoes gliding effortlessly over the smooth wooden floors, heart soaring, on another plane of being altogether... and then BANG! You injure yourself. It’s happened to most of us at some point. The shoulder niggle that never seemed serious so you never had it checked out. The insidious knee or hip pain. The back pain following an ill-advised aerial move. Or the ever-popular sprained ankle.\nHere are a few tips that could help you to prevent the injury from the outset.\n1. Warm up before you start dancing.\nYou’d warm up before training for a football match, or running a marathon, right? So why don’t you warm up before dancing intensively for 4-8 hours? I’m not talking about doing a load of passive or static stretching in your pretty dance frock, but how about easing yourself in with a couple of gentle dances instead of leaping onto that dance floor subjecting your body to an intense workout whilst still cold? It’s worth considering...\n2.
Wear comfortable footwear to and from the venue.\nLadies (and some gents!) I know how we can be about shoes. I get it. You want to look good at all times, but at what cost? Do you have any idea what those heels are doing to your entire body, not just to your feet but to your knees, hips, lower back and neck? If you alter the height of your heels, there’s an immediate knock-on effect throughout the entire body. For more information, take a look at this page: http://erikdalton.com/media/newsletters-online/high-heels-and-back-pain/. Now I’m not suggesting that we all start dancing in flats – I for one can ONLY dance in heels, so to change my dance shoes to flat shoes would be counter-productive. What I’m suggesting is that maybe you could consider wearing comfortable, well-cushioned flat shoes to travel to and from the venue, giving your body some rest and relief after a heavy night of dancing!\n3.", "score": 30.360634032908926, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "Some people say that a dancer’s feet are the ugliest part of the body. They say this because a dancer’s feet are one of the most important parts of the body to a dancer. The feet are where all the dancing really begins. The feet allow the dancer to move about freely when needed. They are the most abused part of the body in ballet. Great strength is needed in the arch, since a lot of pointing is involved. Before dancers may even go on pointe they have to develop calluses to protect themselves from pointe work.\nNot only do ballet dancers get calluses, but when they wear pointe shoes they tend to have worked their feet so much that the skin that rubs against the shoe is raw. They also endure crooked toes from the pounding and strain involved. This is why dancers do many things to their feet and shoes to make them more comfortable to wear.
No matter what they use to make it better, they can still feel the pain when dancing on pointe. Even though ballet is very beautiful, it can take a toll at an older age: from all the technique, the hip can develop injuries. All in all, is ballet worth it?", "score": 29.747362442771383, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "What dance injuries have you had?\nWe can help prevent them!\nThe flooring should be suitable for all the types of dancing you wish to do; not only should it have enough cushioned support, it should also be suitable for pointe work and tap dancing.\nAre your barres and brackets set at the correct height from the floor for the dancing you’re doing? Having them set at the correct height helps make sure you don’t over- or under-stretch.\nIs the barre you’re holding too big for your hands to hold? And is it circular? A barre that is too big or too small, or one with a flat back, may cause strain on your hands and arms.\nDo your mirrors give the correct image reflection?\nPoor mirror reflection may make you overbend trying to see what you’re doing.\nAlways make sure your reflection is correct to get the correct posture.\nYour studio is an important place, so make sure it has great equipment!", "score": 29.701829086984215, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "Ducher, G., Kukuljan, S., Hill, B., Garnham, A. P., Nowson, C. A., Kimlin, M. G., & Cook, J. (2011). Vitamin D status and musculoskeletal health in adolescent male ballet dancers: a pilot study. Journal of Dance Medicine and Science, 15(3), 99-107. United States: J. Michael Ryan Publishing Inc.\nAdequate vitamin D levels during growth are critical to ensuring optimal bone development. Vitamin D synthesis requires sun exposure; thus, athletes engaged in indoor activities such as ballet dancing may be at relatively high risk of vitamin D insufficiency.
The objective of this study was to investigate the prevalence of low vitamin D levels in young male ballet dancers and its impact on musculoskeletal health. Eighteen male ballet dancers, aged 10 to 19 years and training for at least 6 hours per week, were recruited from the Australian Ballet School, Melbourne, Australia. Serum 25(OH)D and intact PTH were measured in winter (July) from a non-fasting blood sample. Pubertal stage was determined using self-assessed Tanner criteria. Body composition and areal bone mineral density (aBMD) at the whole body and lumbar spine were measured using dual-energy x-ray absorptiometry (DXA). Injury history and physical activity levels were assessed by questionnaire. Blood samples were obtained from 16 participants. Serum 25(OH)D levels ranged from 20.8 to 94.3 nmol/L, with a group mean of 50.5 nmol/L. Two participants (12.5%) showed vitamin D deficiency [serum 25(OH)D level < 25 nmol/L], seven dancers (44%) had vitamin D insufficiency (25 to 50 nmol/L), and the remaining seven dancers (44%) had normal levels (> 50 nmol/L). No relationship was found between vitamin D status, PTH levels, body composition, and aBMD. The most commonly reported injuries were muscle tears and back pain. The average number of injuries reported by each dancer was 1.9 ± 0.4 (range: 0 to 5).", "score": 29.37575928681194, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "
We use very light weights, stability balls, one’s own body weight, and other appropriate techniques. The goal is strengthening muscle not creating bulk. With this program, she wants to help young dancers deliver on what their choreographers want.\nWe focus on improving upper body strength, core and stabilizing musculature —which most young dancers lack, and creating a deep understanding of the muscles that allow for particular movements. This lets a dancer execute more confidently and drastically mitigates injury. Our goal is to provide the best chance for a long, enjoyable, dance career and later a life as an active and able adult with no regrets.", "score": 29.374722225945117, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Constant training, without enough rest, puts stress on a body’s ability to adapt which can lead to fatigue, muscular weakness and frequent injuries—burnout. Burnout is a complex clinical condition with no single cause. Symptoms and signs vary from person to person, but tend to occur mainly in dancers whose daily schedules produce an imbalance between physical activity and time for recovery. Burnout can occur as a result of a few days or weeks of fatigue, by long-term exhaustion and by psychological stress.\nBurnout can affect both male and female dancers of all ages and levels of competence. Those most likely to reach the stage of burnout are usually the ones who set very high standards for themselves. Relative levels of physical fitness also relate to burnout—i.e. for the same workload, fit dancers are less likely to suffer than their unfit counterparts.\nWho is most vulnerable to burnout?\nWe know that adolescents are particularly vulnerable to injuries. Apart from periodic growth spurts that decrease muscle strength and flexibility, a teenager's musculoskeletal system is less resistant to repetitive loads during development. 
According to the 2003 Proceedings from the Annual Meeting of the International Association for Dance Medicine & Science (IADMS), students who work more than 8.5 hours a week at age 14 increase the risk of overuse injuries. The same is true for 15-year-old dancers who work more than 10 hours a week.\nWhen do I need to be careful?\nUnfortunately, the drive to exceed personal limits is ingrained in dancers of all ages, regardless of the toll. Tony Geeves’s research for the Safe Dance Project Report (Ausdance, 1990) found that 52% of dancers in Australia had chronic injury by age 18.\nIn a study conducted by the Harkness Center for Dance Injuries (UK), 79% of 500 injuries happened at the end of the day, occurring after five or more hours of work.1 Safe Dance II research reported that 28% of surveyed dancers incurred injuries within three weeks of returning to training after a holiday.2\nFatigue is the number one indicator of burnout. Dancers could learn from athletes who use periodisation (hard workouts are alternated with easier routines that have rest built into them) to prevent overload. Burned-out dancers need to let themselves do less without labelling themselves as ‘lazy’.
Granville spent 12 weeks in a cast, two weeks on crutches and another six weeks in a boot.\nThankfully, Granville had Pilates and TheraBanding to turn to as a means of maintaining her physique and warding off negative thoughts caused by inactivity. She also expressed great appreciation for Brenner, who unlike other physicians she had seen, understood the complexities of dancers’ injuries. “As a dancer, it can be very frustrating to work with physical therapists because dance knowledge is really hard and the steps we do are strange, [as are] the ways we get injured,” Granville says. “So when you do go in to talk to a doctor, they don’t fully understand what you’re talking about.”\nBrenner, on the other hand, has first-hand experience with dance as well as 22 years of experience treating their unique injuries. “A few years ago I decided to take beginner’s ballet. The next year I took modern so I could experience what my dancers do on a daily basis,” says Brenner. “It’s important as a [physician] to know what our athletes are going through.”\nThe frustrations voiced by Granville also ring true for the CrossFit community. Smith voices his irritations about what he sees as the myths surrounding CrossFit. “You hear, ‘The deadlift is dangerous. You shouldn’t squat below parallel because it’s bad for your knees.’ These myths are perpetuated by the average person going into a gym and performing a movement, such as the deadlift, that they have limited knowledge about or don’t have proper supervision like you do in a CrossFit gym,” he says.", "score": 28.94797556204113, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "In 2012, more than 1.3 million kids went to the ER with sports injuries in the United States. Football injuries were in the lead, followed by basketball, soccer, and baseball, with the most injured body parts being the ankle, head, finger, and face. 
Here in Canada, sports injuries are a leading cause of injuries requiring medical attention amongst adolescents. Sports injuries account for 50% of all injuries for high school students (age 14-18) and 30-40% for junior high students (age 11-14), with the ankle and knee accounting for 35-40% of injuries.\nPhysical activity is a key element of a child’s enjoyment, growth and health. As children increase their involvement in competitive sports and participate in multiple sport activities, more time is spent training, and as a result sports-related injuries have increased.\nA CHILD'S GROWING BODY\nIt is important to understand what happens to a child’s musculoskeletal (muscle and bone) system as a child grows. Naturally a key area of concern that differs from adults is the epiphyseal plates, also known as growth plates. Because the plates are weaker and less elastic than the tendons and ligaments that attach to the bone, children often break bones instead of injuring ligaments. Because bones in a child are continually growing, children heal faster. The statement that you may have heard or found yourself saying, “I don’t heal as fast the older I get,” is true!\nTYPES OF INJURIES\nTypes of injuries are generally acute or repetitive in nature. Overuse injuries occur from the repetitive application of stress to normal tissue. This is common in organized sports when overtraining occurs. Excessive levels of physical activity can increase the risk of injury, which, if not treated properly, can affect normal growth and maturation, including limb length discrepancy, angular deformity, and altered joint mechanics.\nWhen dealing with potential injuries, prevention is key!
It is important to identify risks early and modify the training program to prevent injuries, which can minimize time lost from a sport.\nRisk factors that contribute to injuries include the following:\n- Anatomical misalignments = abnormal stresses\n- Strength imbalances = muscle strain risk\n- Growth spurts = skeleton grows fast but muscles/tendons do not grow as fast; causing tight (slower to grow) musculotendinous units\nStretching can both prevent and protect from these risks.", "score": 28.320131329054174, "rank": 32}, {"document_id": "doc-::chunk-1", "d_text": "Dancers demonstrated the distinct ability to sense the differences of the different floors. This suggests that dancers do know what they’re talking about when it comes to dance floors. Further research is now being conducted investigating dancers’ opinions of the most important aspects of floors and how these factors may affect performance and injury.\nImplications of the research\nInjury occurrence is all too common in dance. Dancers will always push their bodies to the limit to get the most out of their training. It is therefore very important that safe dance environments are created by reducing any unnecessary injury risks.\nThis research has reported that dancers can be required to perform on substandard floors which were shown to affect ankle joint stress during dance movements. Dancers also demonstrated the distinct ability to sense changes in dance floor properties. Dance institutions are now able use this information and work with dancers in creating dance environments with the aims of helping dancers to dance better, stronger and for longer.\nAbout the researcher\nAs an ex-dancer, Luke Hopper is well aware of the demands of dance training and the associated injury risks. Since finishing dance training, Luke has studied sport science at the University of Western Australia and is currently completing his PhD investigating the effects of dance floors on dancers. 
As the recipient of the IADMS student research award in both 2007 and 2009, Luke has been internationally commended for the contribution of his research into dance injury mechanisms, conducted in conjunction with several international ballet companies. Luke is now employed as a lecturer of exercise and clinical biomechanics at the University of Notre Dame, Fremantle, Australia, and is continuing his research applying principles of sport science in the interests of improving dancer health and well-being.\nOur summary White Paper - Specifying Dance Floors can be viewed here.", "score": 28.113603381388202, "rank": 33}, {"document_id": "doc-::chunk-1", "d_text": "Many of the arts in America are young arts and so do not have generations of experienced, aged practitioners who have seen the negative effects of the arts’ training system on the body. As an art gets older, the practitioners become more aware of what the typical injury patterns are, simply because there is enough repetition to see those injury patterns over time.\nArts with a very sharp peak in the twenties usually spend less time on technical study and immediately favor application. The start-up time for such approaches has the advantage of being very short and requiring low rep numbers to be useful, but the injury rate is high and the assumed level of physical fitness is high.\nAs a result of the physical strain, the dropout rate is usually also very high.\nCombative sports typically use this model, where it is tolerated because many competitors are in poverty. They are looking to find a way out of poverty and are willing to sacrifice the body in order to establish wealth. Many, many people that start the training in combative sports will injure out, but it is worth the fame, reputation, and money in the pocket to take the initial risk.\nCompetitors also realize that they have only a limited shelf-life as a fighter before the body becomes unusable in the sport.
For them, it’s the ticking clock.\nAs an older adult, you need to be concerned about being in an art that was not designed to fit your needs, or that was designed to fit a much more resilient body than yours.\nMake sure you want what comes out of training!\nThings A Martial Arts Teacher Should Know for Older Adults\nMany times an average martial arts instructor will not know the peak of the art. While he or she can be an excellent martial artist, he or she will have never actually contemplated the art as a training system for a population group.\nThis often happens when you get a person of technical skill who believes he can simply start coaching because he knows the techniques. Even if your potential instructor isn’t able to tell you anything, you can get a general idea of where an art has its peak placement by asking the following:\n- What age are the students at their physical best?\n- How many senior practitioners or teachers are there that actively practice? (At the age of 30, most are unofficially retired.)\n- Was the art designed for sport or quickie self-defense?\nQuestions you should ask yourself:\n- How good of shape do your joints need to be in?\n- Why are you taking the art?", "score": 27.309521619721764, "rank": 34}, {"document_id": "doc-::chunk-2", "d_text": "By addressing variance and competency in supplemental training, better programs can be designed that challenge athletes and promote injury prevention without adversely affecting sport-specific movements and artistry. Studies have shown that artistic athletes who supplement their artistic training with strength & conditioning programs, and have subsequently improved their fitness levels, scored significantly higher in aesthetic competency assessments and generally performed and competed at higher levels [10].\n4. Psychology & Body Image\nAs a past dancer and current exercise physiologist & strength coach, I’ve experienced and seen the varying aesthetic needs of artistic athletes. 
The constant worry about body image is an awful and overbearing threat to an artistic athlete’s health, both physically and psychologically.\nAs it pertains to strength & conditioning, the myth that supplemental training will make an artistic athlete “bulky” is greatly overblown. There are countless training platforms, each one yielding different neurological and physiological adaptations.\nAn exercise physiologist and/or strength & conditioning coach will specifically program for the desired adaptation (e.g., strength, speed, power, etc.), which may or may not include lean muscle mass goals depending on sport needs. They will work with the athlete, coach, sport organization, and parents to communicate mutual interests and to involve everyone in the decision-making process.\n5. Joint Integrity: Mobility and Stability\nArtistic athletes can be known for their extreme ranges of motion. Having supple muscles gives athletes that specific aesthetic stretch needed for artistic presentation.\nWhen having difficulty reaching the end range of some of these positions, artistic athletes might choose movement strategies that are not ideal for joint health. Strength and conditioning sessions will incorporate proper mobility and stability work to improve end-range strength and joint integrity, all while preventing injuries and keeping the aesthetic nature of artistic sports.\nAs mentioned above, injury risk is typically higher in early-specialization sports, such as most artistic sports. Participating in a single sport for more than eight months per year appears to be an important factor in the increased injury risk observed in highly specialized athletes.", "score": 26.974016531559784, "rank": 35}, {"document_id": "doc-::chunk-1", "d_text": "A combination of strengthening and stretching is essential for promoting strong and pliable muscle tissues that are the most resilient to injury.\nThe entire musculoskeletal system (bones and soft tissues in the body) is a connected unit. 
Many muscle groups and joints work together to perform a desired motion. If one member of the team isn’t working properly (perhaps because of decreased strength/flexibility or improper alignment), another member of the team will be forced to pick up the slack. This can result in compensatory strategies that place excessive load through a joint and/or the surrounding tissues. As a result, risk of injury is high. In the case of dancers, let’s consider turnout. A dancer’s turnout requires a significant amount of motion at the hip. If hip external rotation strength or available range of motion is limited, many dancers begin to compensate in their knees or feet to force a greater turnout. As a result, increased torque is placed through the knee and ankle joint as well as joints in the foot, which leads to a higher risk of injury. So how might dancers be able to improve upon this?\nBecoming educated on specific ways to strengthen and improve flexibility in areas of the body specific to dancers is a great place to begin. It can also be hugely beneficial to be screened by a physical therapist who will be able to identify impaired movement mechanics and biomechanical limitations that could increase injury risk and/or impact performance.\nAt Hands On Physical Therapy, we are passionate about reducing risk of injuries before they occur. We are also excited about educating and empowering all dancers, their peers, and their instructors to practice injury risk reduction strategies routinely. If you’re interested in being screened by our dance medicine expert, call us today at 541-312-2252 to set up an appointment with Jen Wardyga, PT, DPT. Keep an eye out for her in the studio, too!\nHands On Physical Therapy\n147 SW Shevlin Hixon Dr. 
Ste 104\nBend, OR 97702", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "There is nothing more satisfying for a young dancer (male or female) than the moment when the teacher announces that you can get on your Pointe shoes; and many of those kids are hoping for this to happen, sometimes too soon.\nThe answer should not be based on the "right time" (how long the dancer has been practicing or how good the teacher is). The timing depends only on the physical preparation and development of the body.\nWhat conditions should be met to reach the right decision? The most important thing is the physical ability of the person to stand in such an "abnormal" position. The skeleton gets ready for that as we "teach" the joints, bones, muscles, ligaments, tendons and nerves to work correctly to maintain the posture and to perform different elements of ballet, regardless of how long we have been learning to dance. Another component that should be carefully evaluated is the coordination, balance and movement control of the dancer.\nGetting to the decision of working on pointe is not a one-sided decision, and it is not a one-way decision either. Working on Pointe is a process, another stage in the dancer's education, and it can - or should - be a personalized progression. The decision should be made with the dancer, their parents (if they are kids) and a professional in dance medicine.\nWhat should we expect the optimal physical condition for Pointe to be? The physical factors to be evaluated include:\n1. Lower limb strengthening and flexibility, and the right balance between these two\n2. Core control\n3. Fast reaction to sudden positional changes\n4. Plyometric skills\n5. Body composition (it should be noted that body composition is critical for everything in life, especially for professional dancers)\n6. 
General balance and coordination\nAll these factors are generally being developed for dancing between the ages of 3-4 and 12 years old. These ages are a gross average and - as said before - this is a very personal progression, and the young dancer might develop these skills earlier or sometimes later than this age.\nDo orthopedic conditions affect the timing? Absolutely. Previous orthopedic dysfunctions or conditions do not necessarily become an obstacle for life in any situation. They just need to be evaluated and adjusted to the dancer, and a specific set of exercises can sometimes improve the performance and lead to the next step, including working on Pointe.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Specialized Care for Dancers\nTuesday, April 11, 2017\nNICHOLAS T. GATES, MD\nWhen classical, competition and team dancers get hurt, they are more likely to push through pain than other athletes. Such reluctance to seek help is due to several unique factors, yet all of these challenges are addressed by the health care offered by orthopaedic foot and ankle specialist Nicholas T. Gates, M.D.\nBallet dancers are prone to positional and overuse injuries. Os Trigonum Syndrome (The Nutcracker Syndrome) is a common ankle injury caused by flexing the foot downward en pointe, or standing on tiptoes. "Certain dancers will pinch a bone in the back of the ankle repetitively... They are cracking a small piece of bone as a nutcracker would crack a walnut," explains Dr. Gates. This repetitive movement may break the bone.\nDance team athletes and those engaged in competitive cheerleading may also experience positional issues and are at increased risk for ankle sprains and injuries from tumbling and gymnastics.\nNicholas T. Gates, M.D., is a board-certified foot and ankle orthopaedic surgeon and sports medicine specialist. 
He serves as team physician for Highlands High School and treats both athletes and nonathletes.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "Questions and Answers\nHow old does my child need to be to start dance lessons?\nPreference is for children to start at age 5 and above. The age for entrance into classes or a program varies from student to student and from class to class. This is based not only on age, but on the student’s developmental level and skill.\nHow do I know that my child is in the right class level?\nStudents should feel pushed in class but not overwhelmed. Most dancers will be in a class level for 2-3 years before moving to the next level. We are a small school so many classes are designed with that in mind.\nYou are a small school; is my dancer receiving proper training?\nYes, because we are smaller we can focus on your child more than other places with large class sizes. This also allows us to make sure proper technique is being practiced. Our focus is on developing dancers that can perform their dance art cleanly, with ease and great skill.\nWho makes the final decision where the student is placed?\nOur teachers are well skilled to assess dancers' ability and will make decisions about class placement.\nMay students start classes in the middle of the year?\nYes, students may enroll at any time; but preference is for the Fall in order to be ready to participate in the Spring recital.\nThis is my child’s first class, what do I need to purchase?\nWe offer shoes, tights, leotards and many other items for sale at the hall. 
Please see the teacher first for dress code info before making purchases.\nWhat if we don’t have a lot of money to spend on dance attire?\nOur dance attire is very reasonably priced. We also offer shoes and leotards that can be borrowed or exchanged as your dancer grows.\nMy child has special needs or has an injury; can they take class?\nAll are welcome to take class. We will train and work with any student to develop the proper technique for their own body and needs. Many injuries are from improper technique, overstretching and overuse. Many times fixing these will result in dancers that are pain free.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-2", "d_text": "An average person experiences two fractures during his or her lifetime, and the same holds true for joint-related injuries. The severity of this condition depends on a number of factors, ranging from the forces responsible for the injury and its location to the damage done to the nearby tissues and bones.\nHow does age play a role in your chances of getting a fracture?\nYour risk of developing a fracture, and its severity, depend to a certain extent on your age.\nA very common occurrence during childhood is crippling joint-related injuries; the fractures that you tend to have during this time are generally less complex than the broken bone instances that you stand to experience when you enter adulthood.\nWith time, your bones become fragile and you become prone to broken bones sustained from falls, in a way you wouldn't have been when you were young. Furthermore, as you step into your 50th year, you can be struck by the bone condition osteoporosis, a leading cause of bone fractures during this time. 
For women, menopause makes them more susceptible to osteoporosis (as infrequent periods and hormonal changes at this time lead to loss of bone mass) and subsequently to broken bones.\nPreventing crippling joint injuries requires many steps at a younger age, known collectively as prehab, especially for the sporting population and for adults whose day-to-day activities subject the body to physical stress.\nSimple steps to get your joints back to normal in case you do sustain injuries:\n- Having a calcium- and vitamin D-rich diet to strengthen bones\n- Exercising to strengthen bone and muscle health as well as your balance\n- Taking relevant medicines to make your bones strong\n- Going for timely bone mineral density tests to determine the health of your bones\n- Exposing yourself to the sun for about 20 minutes every day\n- Having the requisite calcium intake of 1000 mg and 1200 mg for pre- and postmenopausal women respectively\n- Preventing falls through modifications to your household furniture, appropriate clothing, sometimes the addition of simple orthotic devices, and improvements in your muscle reaction time; these go a long way toward preventing a fracture. If you wish to discuss any specific problem, you can consult an Orthopedist.\nHave you undergone cartilage damage recently and are seeking ideal treatment measures? 
Cartilage damage is a common form of injury that involves your knees.", "score": 26.451258544801544, "rank": 40}, {"document_id": "doc-::chunk-2", "d_text": "I have already outlined above that the type of sport can have an impact on the incidence of injury when compared to younger athletes – but interestingly, it can also impact the type of injuries you are likely to see as an older athlete (Baker, 2015).\nThere is a growing body of evidence clearly showing that those older athletes who compete in more explosive sports (think team sports, sprinting, or jumping) are going to be at a higher risk of an acute injury occurring.\nThis could come in the form of a muscle tear, or an ankle sprain.\nConversely, those older athletes who participate in endurance-type events (such as distance running, cycling, or even orienteering) generally appear to be at an increased risk of incurring an overuse injury.\nThis might come in the form of a tendinopathy, plantar fasciitis, or even chronic joint pain and irritation.\nWhat sport has the highest injury incidence in older athletes?\nWith all this in mind, you might be wondering what sport has the highest injury rate – and it will probably come as no surprise, but it seems to be those sports that involve a lot of explosive-type movements that have the highest injury risk (Ganse, 2014).\nThis means that team sports and sprinting are arguably the most likely to incur an injury – both of which are more likely to cause an acute muscle injury.\nI should note that this doesn’t necessarily mean that you should shy away from these sports as an aging athlete. 
In fact, in my opinion, it simply means that you need to appropriately prepare your body for the rigours of that sport prior to competing.\nBut more on that later!\nWomen vs men and the aging athlete\nThe last thing I wanted to touch on is the considerations around gender, and how it can impact injury risk – and it really comes down to how women and men athletes age differently.\nAs a rule of thumb, women tend to have lower levels of muscle mass than men across the entirety of their lifespan. This means that as they enter their golden years, they are going to demonstrate less strength and power than their male counterparts.\nAs a result, their ability to stabilise their joints is going to be significantly lower, which can increase their risk of experiencing a lower limb overuse injury – which is why foot and knee injuries are so common in older female athletes.\nWhat are the best competitive sports for the aging athlete?\nTaking all this information into consideration, what are the best sports for older athletes?", "score": 25.765710465195323, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "The hip is a common site of pain and injury in dancers. The purpose of this study is to explore the relationships between leg power, leg strength, knee valgus and trunk endurance with anterolateral hip pain. Nine collegiate-aged dancers currently in a university dance program (age = 20±1 yrs, height = 64.8±4.5 in, weight = 131.2±12.8 lbs) reported, on a Visual Analog Scale, their level of pain at rest and while dancing. The dancers’ countermovement jump (CMJ), squat jump (SJ) and anterior, posterior and bilateral trunk endurance were then assessed. The CMJ and SJ assessments, as measures of lower body power and strength, respectively, were conducted on a contact mat (Just Jump, Probotics, Huntsville, AL). During the CMJ, knee valgus was evaluated by filming the participant (120 fps) from the frontal view. 
The CMJ with the highest jump score was used to measure the greatest amount of knee valgus collapse, defined as the angle from the ASIS, middle of the patella and center of the tibiotalar joint. Moderate Pearson’s product-moment correlations were observed between age (r = 0.48), posterior trunk endurance (r = -0.48) and the ratio between right lateral and anterior trunk endurance (r = 0.42) with hip pain while resting. Moderate correlations were found between leg power (r = -0.43), leg strength (r = -0.54), the ratio between right and left lateral trunk endurance (r = -0.46), and the ratio between left lateral and posterior trunk endurance (r = 0.46) with hip pain while dancing. Increasing leg power and strength as well as addressing asymmetries, particularly in the trunk musculature, may decrease the pain associated with dancing; further research is needed to explore these relationships.\nAgre, Elizabeth M.; Rasmussen, Heather; and Miller, Jason\n\"Relationships between Leg Power, Leg Strength, Knee Valgus and Trunk Endurance with Hip Pain in Dancers,\"\nInternational Journal of Exercise Science: Conference Proceedings:\n7, Article 1.\nAvailable at: http://digitalcommons.wku.edu/ijesab/vol2/iss7/1", "score": 25.765710465195323, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "November 07, 1999 12:02pm\nDancers Sports Injuries\nby: IRA DREYFUSS\n(WASHINGTON, DC) -- For many dancers, swanlike grace comes at a price - painful back injuries that could have been avoided if they had worked out right.\nAnd it may take a new type of specialist - someone who combines sports medicine and the arts - to treat them, a study said.\n``Dancers spend a lot of time building up their leg muscles, but much less time building up their trunk,'' said Dr. Lyle J. Micheli. He treats dancers of the Boston Ballet at the Sports Medicine Clinic in Children's Hospital in Boston. 
He also was lead author of an overview article on dancers' back injuries on the medical Web site MedScape.\nLike young gymnasts, dancers risk microtears to the vertebrae in the lower spine, the article said. The injuries result largely from repeating dance positions that bend the spine far backward, Micheli said.\nOne such position is the arabesque, in which the dancer balances on one leg while extending the other high to the rear. It can produce excessive arching of the low back, Micheli said.\nChoreographers are not going to give up the arabesque or similar positions just to make life easier for dancers, however. So dancers will have to adjust, said coauthor Ruth Solomon, a professor of theater arts and dance at the University of California, Santa Cruz.\nAnd adjusting is within their power, Solomon said: "A body is very capable of doing things if it is trained well."\nTraining will require dancers to pay more attention to their pelvic girdles, Solomon said. Stronger trunk muscles can take some of the workload off the muscles that control the rear of the spine, she said. "The back and the pelvis move as a unit, so there is no excessive force right at the base of the spine," she said.\nSolomon focuses on strengthening muscles such as the psoas, which stretch between the front of the lower spine and the upper leg at the front of the hip. To do this, she has dancers do leg lifts, bringing their legs to their chests while lying on their backs with their weight balanced just above their tailbones.\nThe spine is in a curved position during this exercise, and enhancing the curve is something that Solomon wants dancers to focus more on.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "Look carefully at the images below, and you can see that the shape of the hip socket has actually shifted due to the aggressive stretching that she was doing at her dance school. 
This has now calcified and so will remain this way for the rest of her life.\nNotice how the hip socket is not yet fully formed, and there is a gap in the pelvic bones at the level of the hip socket. This is not a fracture, but a normal part of the development of the pelvis. The issue is that the two parts of the bone are no longer aligned due to the stretching exercises she was doing.\nShe was still complaining of hip pain at 13, so more X-rays were ordered. It is easy to see the damage in the hip socket in the following images.\nThe black line indicates major issues with the surface of the hip socket, and there is evidence of excessive movement across both of the growth plates of the femur. The head of the femur is no longer nicely rounded and has been flattened due to excessive loading.\nAt 14, the student was still complaining of sore hips, so another set of images was taken. She was also told by a friend to come and see us at Perfect Form Physio for treatment for her ongoing hip pain.\nIn these images the pelvis has begun to fuse (as it should); however, the head of the femur (thigh bone) is flattened and the shape of the socket is shallow. There is also development of a "pincer type deformity" at the top of the hip socket that was not present in previous X-rays and that will contribute to anterior hip impingement. Bone develops in response to load, so it is probable that this developed in response to repeated compression after kicking her right leg up repeatedly...\nThis dancer will continue to struggle with pain in her hips due to anterior hip impingement and the bony changes in the hip socket. She will most likely require an early total hip replacement due to this. 
With very careful rehab we were able to drop her pain levels, but she will need to be extremely diligent in maintaining these in order to be able to continue to dance.\nUnfortunately, there are still many teachers still advocating this type of aggressive stretching in young students, who seem to have little awareness of the long term consequences it can have. This type of training IS NOT NECESSARY in developing good turnout range in dancers.\nThere are actually six different ranges of turnout in the hip.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "Dance Injury Prevention Training\nAlso known as: injury prevention and training programs for dancers.\nWhat is dance injury prevention training?\nA training program designed to improve strength, range of motion, balance and neuromuscular control to\nenhance performance and reduce the risk of dance related injuries.\nWhat happens during the procedure?\nA Dance Medicine professional works with the dancer to improve muscular strength, range of motion, balance and neuromuscular control through specific exercise prescription.\nIs any special preparation needed?\nNo special preparation is needed for the training. The dancer should wear dance attire.\nWhat are the risk factors?\nThere is still a potential risk of injury occurring even after a patient has undergone dance injury prevention training.\nReviewed by: Lauren Butler, PT, DPT, SCS\nThis page was last updated on: 8/7/2018 10:48:17 AM\nFrom the Newsdesk\nAn Athletic Trainer (ATC) is a licensed healthcare provider responsible for injury prevention, emergency care, clinical diagnosis, therapeutic intervention and rehabilitation of injuries as well as medical conditions.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "I count among them dancers in ballet and contemporary companies, commercial dancers, and those on Broadway and television. 
There are even those who became physical therapists and kinesiologists that specialize in dance. As I consider their careers, I am both humbled and honored to have influenced the direction of their lives.\nMy Mentees Share Similar Characteristics\nThere are certain qualities I look for when deciding if I will mentor a dancer and all of those qualities are critical to the success of our partnership:\n- Aptitude and Physical Characteristics: I notice high levels of interest, excitement, and aptitudes in children as young as 5 years old. I do not approach them at that age, but instead wait to see if they have sufficient enough interest in dance to move forward (with their teacher’s affirmation).\nWhen dancers are 7 to 9 years old, I begin to track their progress a little more closely without becoming too aggressive. Oftentimes, a teacher will notify me of the child’s passion and progress in the classroom. Sometimes the dancers are already participating in our JumpStart Teams, which are a primer to competition and performance teams.\nMost proteges have some level of identifiable talent, but many do not have an “ideal” dancer body. Some are all arms or legs, and some struggle with weight issues. The students I most desire to work with, however, are more than just talent combined with physical attributes.\n- Maturity Level: When dancers are 10 to 12 years old, I assess their maturity and begin a conversation about potential, work ethic, and the future. I am often pleased and surprised to hear dancers at 12 years old already expressing their dreams of a future in dance. It’s at this point that the real adventure begins!\n- The ‘it’ Factor: The best candidates have the “it factor” combined with personal drive. They are truly committed to learning and bring a different type of energy to classes and rehearsals. These dancers prepare before class, and remain consistent and determined during class. 
They persist through frustration until the movement is clear, demonstrating a superior work ethic.\n- A Hunger to Learn: The hunger in these dancers is something I can see and feel – it’s almost tangible. They arrive early and stay late for rehearsals. They take risks that make them vulnerable, even if there’s a chance of feeling foolish. These students are generous with others in class, encouraging and helping them accomplish a particular movement or combination.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-1", "d_text": "The chart below tracks attrition rate at different ages throughout a pitcher's career.\nEven for a successful, established pitcher, the risk of catastrophic injury is meaningfully high throughout his career, almost certainly at least 10 percent in any given season. However, the risk does appear to be to some degree dependent on a pitcher's age. For the very young pitchers in our study--ages 21 and 22--the risk of injury is significantly higher, in excess of 20 percent. Injury rate then drops dramatically as a pitcher matures physically, reaching its lowest point at roughly age 24, while rising gradually throughout the remainder of his career. (Although pitchers aged 37 and up appear in the chart to be as vulnerable to injury as very young ones, that is also the age at which pitchers will begin to retire voluntarily. The uptick in injury risk at the tail end of a pitcher's career is probably not as substantial as what is implied here).\nDiscussion of Physiological Risk Factors\nFrom the scientific data collected by Dr. Mike Marshall and Dr. James Andrews, to the years of wisdom accumulated by Dr. Frank Jobe, there is a general acceptance as to what factors lead to pitcher injuries. The three major factors are the underlying physical system, degree of use, and biomechanical efficiency.\nThe physical system includes the bones, muscles, ligaments, and tendons involved in the pitching process. 
These are centered in the shoulder and elbow of the pitching arm. Most significant are the muscles of the rotator cuff, the glenoid labrum, and the ulnar collateral ligament (UCL). Most pitchers will experience some degree of damage in their pitching arm. A 1999 study on members of the Toronto Blue Jays showed that 23 of 28 pitchers had tendonitis, while 22 had some extent of cartilage damage. Yet most of these pitchers were asymptomatic, and many were pitching effectively in the major leagues.\nAs an athlete matures, his bones calcify and harden, his growth plates close, and his ligaments reach full strength. Since no athlete matures on the same schedule as another, it is important to note that chronological age does not always directly correlate to physical age. However, as Dr. Jobe and others have noted, a pitcher is generally most vulnerable at a young age, before the bones and muscles of his upper body have fully developed.", "score": 24.920195212486632, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "For many dancers, swanlike grace comes at a price - painful back injuries that could have been avoided if they had worked out right.\nAnd it may take a new type of specialist - someone who combines sports medicine and the arts - to treat them, a study said.\n\"Dancers spend a lot of time building up their leg muscles, but much less time building up their trunk,\" said Dr. Lyle J. Micheli.\nHe treats dancers of the Boston Ballet at the Sports Medicine Clinic in Children's Hospital in Boston. He also was lead author of an overview article on dancers' back injuries in the medical Web site MedScape.\nLike young gymnasts, dancers risk microtears to the vertebrae in the lower spine, the article said. The injuries result largely from repeating dance positions that bend the spine far backward, Micheli said.\nOne such position is the arabesque, in which the dancer balances on one leg while extending the other high to the rear. 
It can produce excessive arching of the low back, Micheli said.\nChoreographers are not going to give up the arabesque or similar positions just to make life easier for dancers, however. So dancers will have to adjust, said co-author Ruth Solomon, a professor of theater arts and dance at the University of California, Santa Cruz.\nAnd adjusting is within their power, Solomon said: \"A body is very capable of doing things if it is trained well.\"\nTraining will require dancers to pay more attention to their pelvic girdles, Solomon said. Stronger trunk muscles can take some of the workload off the muscles that control the rear of the spine, she said.\n\"The back and the pelvis move as a unit, so there is no excessive force right at the base of the spine,\" she said.\nSolomon focuses on strengthening muscles such as the psoas, which stretch between the front of the lower spine and the upper leg at the front of the hip. To do this, she has dancers do leg lifts, bringing their legs to their chests while lying on their backs with their weight balanced just above their tailbones.\nAmarillo dance instructor Anne Lankford agrees that strengthening exercises are key to preventing injuries.\n\"Strengthening abdominals and the lower back with lots of warm-ups will go a long way,\" said Lankford, director of the dance academy for the Amarillo Little Theatre.", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "The Age Equation\nWhen it comes to dance, the general rule of thumb is that the younger you begin training, the better. Serious ballet dancers, for example, are often expected to be career-ready by 16. But what if you didn't start dancing at age 2? Is there room in the professional dance world for late starters?\nThe short answer: Yes! 
Read on to hear from six dancers who started dancing later than their peers—and still became pros.\nMisty Copeland with Herman Cornejo in Alexei Ratmansky's Firebird (Gene Schiavone)\nMisty Copeland, soloist\nat American Ballet Theatre\nAge she started dancing: 13\nHow did you get started? I auditioned for the dance team at my junior high school, and the coach told me my potential as a dancer went beyond that local team.\nWhen did a professional career start to feel possible? When I discovered American Ballet Theatre. I memorized every company member's background and studied videos.\nWhat kept you going through the tough times? The encouragement I got from the people around me. And ABT was the light at the end of the tunnel. Watching videos and seeing live performances kept me motivated.\nWere there benefits to starting late? I didn't feel burnt out at the age of, say, 15. Everything was so new that I was always eager for more.\nDo you have advice for other late starters? Be mindful of how you treat your body, especially early on. You're in a different place physically than a 7-year-old beginner. Consider cross-training to help develop your technique more quickly.\nRichard Riaz Yoder in Duke Ellington's Sophisticated Ladies (Scott Suchman)\nRichard Riaz Yoder, Broadway performer\nAge he started dancing: 17\nHow did you get started? I saw a couple of my high school show choir friends doing a time step and got them to teach me. When I showed my mom, she took me to a teacher who owned a studio for adults. At 17, I was actually the youngest person in my first class by 20 years!\nDid you ever doubt yourself? I was weird in that I wasn't self-conscious at all in those early classes. 
Even if I didn't know what the heck I was doing, I was going to do it as best I could.\nWhat obstacles did you encounter?", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "Find out why oral anesthetic and dental floss are must-haves as you pack your dance bag for performance week or a summer intensive. Their uses might surprise you!\nProper alignment of the body is at the base of all good dance technique. Take this trip around the torso as a refresher for you or your dance students on how to assess and correct placement.\nFind out the best way to avoid lower leg and achilles tendon injury, plus learn how to properly stretch the achilles to promote tendon health.\nThe difference between a strain and a sprain; a ligament and a tendon. Take a closer look at injuries and how to cope throughout the recovery process.\nMuscle fatigue is good but not when dancers push themselves (or are pushed by directors) to injury. We’re ignoring a crucial part of the formula for increasing endurance and enhancing performance. What is that element and why is it important for dancers to learn when enough is enough?", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-1", "d_text": "Learning dance terminology was hard. I had a teacher early on in college who asked us what dance steps we knew—and I didn't know any. So I went home and memorized the name of every tap step. I wasn't sure what they were, but I knew the names of every one.\nWere there benefits to starting late? I was able to make sure I got high-quality training from the beginning. I've seen dancers who, early on, had bad habits thanks to poor training.\nJanette Manrara with Robbie Kmetoni in Burn the Floor (David Wyatt)\nJanette Manrara, Burn the Floor\nAge she started training seriously: 19\nHow did you get started? My family is from Cuba, so salsa dancing was always a part of my life. I started studying musical theater at 12. 
Then the dance teacher at my musical theater school opened his own studio, and I started taking dance classes every day.\nWhat obstacles did you have to conquer? The worst was seeing parents or other students look at me with confused faces. They didn't understand why a girl in her 20s was taking ballet with 12-year-olds.\nWhen did you know you wanted to dance professionally? As soon as I set foot outside of “So You Think You Can Dance\"! Being on the show during Season 5 opened so many doors for me.\nPhillip Chbeeb, hip-hop dancer\nAge he started dancing: 16\nHow did you get started? I was a jack-of-all-trades kid—I did everything from basketball and track to theater. After a (now) comical incident when I took a line drive to my face playing baseball, I had to ease off sports for a while. That's when I took my first dance class.\nWhat obstacles did you encounter? I had to learn when to incorporate my own natural tendencies into someone else's choreography—and when not to. I had to figure out how to break movements down into pieces: the bounce, the pivot. That helped me become more aware of my body and its subtleties.\nWere there any benefits to starting late? In a way it's good that dance isn't “my life.\" I'm inspired by things outside of dance, and I think that helps me better express myself.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "With a vast number of pointe shoes and styles, finding the right balance between shoe flexibility, correct fit, and support can be challenging. The shoes must have enough movement to allow the dancer to get fully onto the toe box in order to achieve full plantar flexion. They must also have adequate support to allow the dancer to put full weight through the tips of the toes without collapsing.
Without adequate support, the dancer will place excess load on the muscles, ligaments, and joints of the foot/ankle.\nSince pointe shoes have poor shock absorption, dancers must rely on good core stability and lower body strength to reduce the impact on the foot/ankle. Poor technique, fatigue, and improper fitting pointe shoes can increase the risk for injury. For example, the vamp of the shoe must match the foot shape. If the vamp is too low, the foot spills out of the shoe and loses stability. This can increase the risk for fracture at the midfoot or second metatarsal. If the vamp is too high, the dancer won’t be able to point the foot. Improper length of a pointe shoe can result in adverse consequences for a dancer: excess length of a shoe leads to instability and a short-fitting shoe can cause compression of the toes. The vamp/platform of the shoe loses its stability when the shoes have excess wear and tear, and dancing on dead shoes can increase the risk of stress fractures, ankle sprains, metatarsal/tarsometatarsal sprains, Achilles tendinitis, Flexor Hallucis Longus tendinitis, and injuries to the knees, hips, and spine.\nNext month’s post will feature Josephine from The Pointe Shop. She will provide more tips on finding the right pointe shoe.\n1) “Principles of Dance Medicine, Functional Tests to Assess Pointe Readiness.” A webinar through the Harkness Center for Dance Injuries. Accessed Feb 23, 2017.\n2)Shah S. Determining a Young Dancer’s Readiness for Dancing on Pointe. Curr. Sports Med. Rep., Vol 8, No. 6, pp. 295-299, 2009.\n3) “Matching the shoe to the dancer.” A webinar through the Harkness Center for Dance Injuries. 
Accessed Nov 20, 2019.", "score": 24.234289534804766, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "Editor's note: Art of Movement is CNN's monthly show exploring the latest innovations in art, culture, science and technology.\nLondon, UK (CNN) -- Alina Cojocaru had been one of the Royal Ballet's biggest draws for half a decade by the time her career crashed to a halt.\nThe minute Prima Ballerina -- praised as \"a dancer of seeming fragility, delicacy and radiance\" by The New York Times -- was rehearsing in 2008, when she was flipped by her partner, skewed awkwardly and slammed to earth.\nShe suffered severe whiplash and a prolapsed disc in her spine and was forced to rest for over a year. At 25, doctors told her she would need surgery, and would never dance at the highest level again.\nBehind ballet's graceful pirouettes are grueling feats of training and endurance that push dancer's bodies to their extremes.\nRest is a rare luxury. Some dancers perform 200 to 250 days a year, leaving just over 100 days to train and recover. Rehearsals can require 10 hours a day on the floor.\nIt is hardly surprising that critics often refer to dancers -- especially the hard-worked young apprentices in the corps -- as the \"foot soldiers\" of ballet.\nFour out of five will suffer a severe injury during the course of their dancing career -- and two out of those four will never fully recover.\nInjuries, more often than not, are the result of fatigue and repeated strain on muscles and joints, rather than unpredictable accidents. 
Even in cases like Cojocaru's, fatigue can be the root of the disaster -- as one exhausted dancer's lapse in concentration often translates into another dancer's injury.\nPhysiotherapy for recovering dancers is well entrenched -- and organizations such as the National Institute of Dance Medicine and Science and the International Association for Dance Medicine and Science have done a lot to advance knowledge of health issues among the big ballet companies and schools that feed them. But, beyond that, dancers benefit from few of the scientific breakthroughs that have so improved the safety record of their athletic counterparts on the sports field.\nPatrick Rump is the 33-year-old former karate champion fighting to change all that.\nRump -- a broad-chested, broadly smiling German -- is the subject of \"Dance, Sports Science and Patrick Rump,\" a new short documentary produced by Lady Bernstein and directed by Nigel Wattis.", "score": 23.030255035772623, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "Before age 10, many dancers recognize that they love dance and set their aspirations to grow up to be “a dancer.” Between the ages 12-14, however, kids become aware that they may not grow up to be a professional dancer, but they do love dance and have a passion for it. 
It’s important to move these dancers into the mindset that they can turn their love of dance into a variety of future opportunities, and continue their dance training with a future goal in mind.\nUsually between ages 12-14, dancers are beginning to think about what decisions to make now as an investment in their young adult life, and they have a hard decision to make: quit dance training to focus on academics and school activities, or put most of their focus, energy, and time into dance in order to become well-rounded dancers, which could serve them in a future dance-related career.\nAt 3-D Dance, we want to equip dancers to be the most important kind of triple threat: academic strength, dance technique, and life skills. Dancers must be taught to balance the following skills well to truly achieve success:\n- Learn to appropriately manage your own emotions.\n- Take care of your body.\n- Pay attention to your emotional well-being.\n- Manage your money well.\n- Establish genuine relationships and network with colleagues.\n- Be a hard worker who goes above and beyond expectations.\n- Keep a positive attitude and optimistic perspective.\n- Have confidence in and an understanding of WHO you are and why you are valuable.\nTalent can get you a job, but character and work ethic will keep you there and make you valuable. Getting hired is great, but getting re-hired is even better!\nThere are more great dancers than there are great dancing jobs. However, there is no shortage of jobs for great dancers who are also humble, hard-working, and professional, and who demonstrate integrity and a positive attitude.\nThere are three general categories of dance-related careers: dance jobs, dance-related jobs, and medical/fitness jobs.\nLet’s dive into the yellow ring, Dance Jobs, first.\n- Dancer: someone who is actively performing in some capacity.
These jobs are hardest on your body.", "score": 23.030255035772623, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "The Alonzo King LINES Ballet blog now features a biweekly column by Dr. Lindsay Stephens called Body Care for Dancers. In her posts she shares her vast knowledge on topics such as injury prevention, treatment and recovery. Read on for her second installment in which she offers tips for reducing injury through stretching.\n#1: Breathing! Simple, yet effective. Holding your breath restricts core muscle control, decreasing stability and increasing the risk of injury. If a structure is unstable, supporting structures will take more load and wear down more easily. In order to avoid this and maintain stability, one must have full control and movement of the diaphragm with a braced core. An inactive, weak core is like running on dry sand: hard to get any traction, and the energy expenditure is 10 times that of running on a stable, hard surface. I always tell my patients that if you can breathe through the movement pattern, you can control it. Practice breathing during your warm up so you can translate that to the technique and the choreography.\n#2: Be specific! Stretch the muscle groups you need for the type of activity you are doing. It is also important to be well versed in one's limitations: What are your injury risks? Are you hypermobile or hypomobile? Which joints? Do you have muscle asymmetry? Weakness? Instability? These are questions you should be able to answer. If you find you cannot answer these questions, consider being screened by a musculoskeletal specialist to get the information you need. Knowing the answers allows you to cater your warm up to your body and ultimately improve your performance.\n#3: Watch your form! It often seems that people are trying so hard to impress everyone else while stretching that form becomes secondary. They’re so busy stretching the furthest that they aren’t even stretching the intended muscle groups any more.
It’s not how far you go, but how good your form is. By pushing yourself without perfecting your form first, you are risking injury to your back, knees, ankles, hips, and more. It puts unnecessary strain on the supporting structure and deconditions stabilizing muscle groups. If you are new to dancing, let your instructor know so that you can get the guidance you need – a great teacher will correct form during stretching as well as during class. Protect yourself from day one and injury will be preventable.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-6", "d_text": "How do I transition into the new stage in life?”\nAnd you’re talking about a short career, but actually, when you consider that most of the dancers start early on in their life – three, four, five – it’s actually a long span of time, and it’s their identity. So when they finish dancing, they have lost an identity.\nTake for instance an engineer or an architect: they start when they’re perhaps 16 or 18, depending on what exams they take, and they finish a lot later. [But dancers finish] at 30 or less – I think I retired at 29 – you might say that’s a short career, but I’ve been doing it since age six. So that’s 24 years.\nAnd this is the most formative period of your life when you’re young, and as you mentioned before, your subconscious is recording your experiences and forming your world view and your view of yourself. This point about the identity is quite significant because dance is not just the physical challenge of an athlete – although dancers are athletes, they have to perform at peak performance – but also, as you mentioned, it’s an artistry as well. They have to tell stories and it has to look beautiful all the time, and it’s all about presenting this perfection. And this identity must be very hard to change.\nWell, yes. We talk about the identity and the loss of identity.
So we’re talking about grief and bereavement again, when there is a change. And you know, it’ll be the same for athletes as well, because they get injured, they have to give up. And I think from that point of view, it’s how to deal with it. How to deal with the process of change.\nI’m doing university research on neurodiversity. So neurodiversity includes ADHD – that’s Attention Deficit Hyperactivity Disorder – and Autism. So, we’re looking at the spectrums and how the training from an early age exacerbates some of those traits to develop ‘black and white thinking’ – the ‘right and wrong’. For example, if you’re not pointing your foot right then you’re wrong – and then there’s the concentration that’s needed, the perfectionism within that, [which can] exacerbate all those traits.\nAre the traits leading the dancers, or are the dancers able to control those traits?", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "Dance floor research\nEmerging dance scientist and biomechanics expert Luke Hopper outlines his pioneering research investigating the effects of dance floors on dancer performance and injury.\nDance floors are an integral part of the dance environment, yet little information is available to the dance community concerning how dance floors may affect dancer performance and injury. For the dedicated dancer striving to improve, injury can sadly be an all too common occurrence. By gaining knowledge concerning the relationship between dance floors and dancer performance and injury, the dance environment can be optimized in order to give dancers the best opportunities in their training.\nIt is common to hear dancers describe a floor with words like ‘sprung’, ‘hard’ or ‘stiff’. But what aspects of the floors are the dancers referring to when they make these statements? And do these elements of the floors really affect performance?
These are vital questions for dance research in the interest of dancer health.\nFindings of the research\nDid you know that the manufacturing standards in the UK that apply to dance floors are exactly the same standards that apply to basketball and volleyball courts? But unlike in many sports, there is no governing body that directly regulates the floors used by dancers. It is therefore easy to imagine that many inappropriate dance floors are being used in the UK.\nVarious floors used by professional dancers in the UK have been tested, and it was found that many of the floors did not meet the standards that apply to basketball and volleyball courts. In fact, some of the floors were almost as hard as concrete! Only the floors that were specifically made for dance complied with the standards for hardness. Therefore, requiring dancers to perform on floors that are not dance-specific may present an unnecessary injury risk.\nUsing state-of-the-art 3D motion analysis techniques, the movements of dancers were measured performing landings on different dance floors. The results showed that on harder surfaces (like those measured in the previous study), the stress at the dancers’ ankle joints increased. It was only when the floors complied with the standards that the ankle stress decreased. The greatest ankle stress occurred less than a tenth of a second after the dancers had landed on the floor.
Because these changes occurred within such a short time period, this may mean that, regardless of technical ability, dancers may not be able to reduce this ankle stress.\nProfessional and student dancers were then asked to give their opinions of the ‘feel’ of different floors.", "score": 22.6904783802783, "rank": 57}, {"document_id": "doc-::chunk-10", "d_text": "Our result may be explained by increased sensorial and cognitive disabilities, which are well known to favour falling [17, 18, 33] among participants aged ≥40 years [17]. Ageing healthy people have reduced posture control, associated with cognitive and brain structural involution, in unstable stance conditions and with diminished sensory input [22]. Posture of 44- to 60-year-old participants is more sensitive to task concentration constraints than that of young participants [23]. Older adults display greater body sway than younger adults when performing demanding tasks [34]. Posture control demands concentration under certain circumstances, especially under dual task conditions [35], and interference occurs between activities of maintaining balance and performing mental tasks [36]. Our study reveals that older workers were also subject to a higher injury risk due to collision with/by moving objects or vehicles. This result may be partly explained by a higher prevalence of visual, hearing and cognitive impairments, which steadily increase after 40 years of age [18] and which are known to favour injuries [10, 16, 37]. An injury caused by a moving object often results from the subject not being able to avoid the object because he hears neither the object itself nor the warning message related to the moving object. A study has shown a relationship between warning sound and perceived urgency by the subject [38]. Our findings are important because lengthening of working life produces more workers aged >55 years, and this age-related risk trend suggests that these older workers are subject to high risk.
Physical job performance requirements, as well as tasks and environments involving excessive risk, should be mitigated for workers aged ≥40 years.\nOur study shows that injury risk was highest during the first 2 years of employment and then decreased steadily with increasing length of service (except for workers aged <25 years). This trend was observed for all injury categories. Although some long-service staff would be in safer jobs, and younger workers may take on more difficult tasks than older workers or those with disability when they work in teams, these findings show that gaining job experience represents a long, gradual, personal effort, which will produce maximum benefit after 30 years of employment. Job experience therefore appears to be an endless process of acquiring job knowledge, especially in relation to task performance and occupational hazard assessment. This may be explained by the large number of occupational hazards and their complex injury-causing mechanisms.
They have to do a lot of bending at the waist, and they sometimes get tweaks in their backs.\"\nBallard said that both stretching and massage will help keep vulnerable muscles from being injured.\n\"Those muscles get worked so hard,\" he said. \"It's important to stretch and have those muscles massaged - especially to protect the ones that don't really grow in size and are hard to strengthen.\"\nBarbara Harris, a trainer and rehabilitation specialist for the Boston Ballet, said that a normally curved spine acts like a spring, absorbing impact from dance movements. But dancers think a straight spine is better - \"it makes them look very lifted and very long,\" she said.\nThe \"lifted\" look, with a straight back and neck, and pelvis tucked under the back, makes dancers seem to float as they move, Harris said. However, dancers can get the same look with less risk to their backs simply by keeping the pelvis in a neutral position, neither tilted forward nor back, she said.\nThose who don't prepare their backs properly may wind up at a sports medicine clinic and at rehab. At the clinic, they may get X-rays and bone scans. The scans have the advantage of picking up tiny tears before they get big enough for X-ray spotting, when the injuries are harder to heal, Micheli said.\nAt rehab, Harris works with a Pilates-based system of flexibility and strengthening - a method that often is familiar to dancers. She also focuses on teaching them how to recognize and foster good back alignment.\nAlmost all dancers can dance after treatment, Micheli said. But older dancers may not heal as easily, he said - like older athletes, they may return to their activity, but must recognize they will have to play with pain.", "score": 21.695954918930884, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "Absolutely not! In my opinion, 19 is a great age to start training to become a professional ballroom dancer. As a matter of fact, it’s considered to be a little young. 
Even so, I started at 20 with no previous dance experience. But first, before we get into the reasons that 19 or 20 is the ideal time to start your ballroom dance career, we have to answer this question…\nExactly what is a professional Latin and Ballroom dancer?\nA ballroom dancer who gets paid to dance is a professional. It may be as a performer, but in ballroom dancing, especially in the States, you’ll almost certainly start out as a teacher. For some suggestions on how you can become an instructor, check out this Dance Safari post, “How Can Adults Learn to Ballroom Dance Professionally?”\nTo become an instructor, you must learn three things.\n- To begin with, you have to learn to do your part of the dance. As a result, the man will learn to lead and the lady will learn to follow.\n- The next step is to learn how to teach ballroom dancing. Specifically, the lady will learn to lead while the gentleman learns to follow.\n- Finally, you’ve got to be trained to sell dance lessons. It’s important to be able to explain to your students why they should invest in your services. If you fail to do this, all your dance training will be for nothing because you won’t have anyone to teach.\nWhat’s so good about a younger trainee?\nWith a fresh face and a pleasing personality, a younger trainee will find it easier to get a foot in the door.\nAs recent graduates, most 19-year-olds are considered to be professional students. Learning ballroom dancing is the same as learning any new subject. For this reason, their dance training should be smooth sailing.\nSome of the best reasons to begin working towards the goal of becoming a ballroom dancer at the age of 19 are:\nExcitement and Enthusiasm\nHow exciting it is to be preparing for a career in ballroom dancing! Let’s face it, what 19-year-old wouldn’t want to sleep late and get paid to spend the workday dancing?
In addition, there are competitions, shows, world travel, and a glamorous lifestyle.\nEnthusiasm for dancing comes easily to younger instructors. Furthermore, it’s contagious. A passionate teacher means an eager student.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-1", "d_text": "If we think about performances, both your nervous system and your brain are working overtime just to keep you upright and following the music. As well as moving your muscles, tendons, and ligaments in the right way, the whole body needs to remain balanced. The stronger your balance and coordination, the more likely you are to make the correct movements (and get a standing ovation at the end!).\nWe’ve broken down the physical demands below:\n- Coordination - Dancers often adhere to choreography that’s always changing, and this means different groups of muscles are required. As movement patterns change, the body develops complex movements. When it comes to technique, coordination is everything.\n- Alignment - Even beginners can make the same movements as the professionals, but it’s the effortless nature of moving from one position to the next that makes a dancer so successful. Sometimes known as ‘placement’, alignment describes the efficient movement that’s required of the body.\n- Flexibility - Of course, no dancer will be able to make elaborate movements without first having the flexibility in the tissue. Although different types of dancing will have different levels of required flexibility, all dancers need the muscles, ligaments, and tendons to work together to provide flexibility.\n- Strength - Next, dancing isn’t just about exerting as much energy as possible in a short amount of time. Instead, it sometimes requires slow and controlled movements. Often, these deliberate movements require bodyweight to be shifted to one leg or in an awkward way, and this is where the body requires strength.
Elsewhere, you might push yourself from the floor explosively or lift a dance partner.\n- Endurance - While there are others, the final physical demand we’ll include in this list is aerobic endurance. If your aerobic capacity is low, you’ll feel fatigued much more quickly, and this leads to aching and, the one thing dancers dread, injuries. With the right aerobic endurance, everything improves, including energy, overall health, stress, injury risk, concentration, and stamina.\nAs you can see, the body can go through quite a ride during a dance class, lesson, and performance. Fortunately, a chiropractor can improve many of these factors and ensure that your body is in the right place to handle dance classes effectively. Even if you aren’t injured, we can still help at Carpe Diem Chiropractic, so don’t wait until the worst happens.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-3", "d_text": "The contributing factors are Coordination, Aerobic activity, Socializing, and Happiness, and of course that secret ingredient of feeling young and beautiful no matter what age.\nSo please pass this on to your young & not so young dancers; mine loved hearing it.\nAnd here's to a long life of dancing and being able to remember our way home after.\nKeep well, keep smiling and most of all\nKeep dancing. See you on the floor\nTill next time,\nSamantha – Assistant Secretary", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "Eat well and take supplements to support your joints.\nMost of the dancers I know are out there strutting their stuff on the dance floor 2-3 times a week, usually for 2-4 hours or more. If that’s not intense training, I’m not sure what is. You’ve got to look after the health of your joints from the inside if you want to keep up that sort of activity!
My top tips would be to eat an anti-inflammatory diet (check out my blog on arthritis for more information on this), take vitamin D, fish oils and some form of glucosamine and chondroitin. It’s all on the arthritis blog, and doesn’t only apply to those with arthritis.\n4. Listen to your body!\nWhen you start to feel like your back is about to “go”, get it seen to before you injure yourself. Your body is the most amazing, intelligent, intuitive piece of kit you’ll ever have; learn to listen to it and trust it when it’s telling you there’s a problem. Pain is there for a reason, so don’t just keep popping the ibuprofen without looking into the cause of the pain. I’d recommend seeing a qualified professional such as a sports massage therapist, a chiropractor or an osteopath. I don’t recommend taking all your health advice from the bloke down the pub, or the guy you know on facebook that “had a problem just like this, here try some of my prescription medication. . .” I wish I was joking when I typed that, but it happens more often than I care to mention!\n5. Invest in regular massage treatments.\nPeople usually turn up in my clinic with acute or chronic injuries. We work intensively together for a few weeks (time scales vary depending on the extent of the injury) and then the client either says “So long, thanks for fixing me, see you next time I’m broken,” or they say “Can I book in for a maintenance treatment next month?” Which do you think has a higher risk of repeat injury? I can tell you, it’s the former. So after three months of having no massage treatment, they turn up again with the same injury, and spend twice as much on having it sorted out again from scratch as they would have spent on three maintenance treatments. It’s a sad story, but a common one. 
Sad face :-(.", "score": 21.107226877652625, "rank": 63}, {"document_id": "doc-::chunk-1", "d_text": "“Of course, everyone knows their body best, and as a dancer what kind of things might trigger pre-existing injuries and how to ice and heat, and so on. We can call that common sense, but common sense only goes so far when you’re dealing with a detail-oriented issue that involves multiple muscle complexes in the body.”\nAfter repeatedly hurting her left knee, Hannah’s recovery and management techniques were taught to her by medical professionals rather than her dance teachers. “My rehabilitation knowledge came from my physical therapist and their recommendations from their limited understanding of what ballet does to the body. My ballet instructor understood to tread lightly as I progressed back into full time dance, but most of the modification decisions were based on my own intuition. I had to reconstruct my leg and build back the muscle strength to match the demand of the rigor of ballet.”\nBoth Rolanda and Hannah have found a positive impact in practicing Yoga throughout their dance careers. “I believe yoga is a beneficial practice for dancers,” says Rolanda, who has used Yoga as a method of both injury prevention and recovery. “Every time I lead my dance classes in our warm up and stretch, I always incorporate at least 6 yoga poses.”\n“Yoga can be beneficial for injury prevention because you’re tapping into an awareness about your body and how it’s feeling that normally isn’t a focus when you’re drilling out combos or choreography,” Hannah states. “It also works in supplementation for strength training in areas that could be weak, while simultaneously giving you the opportunity to elongate and release places in your body that we may not even realize are carrying tension.”\nDance professionals, especially choreographers and instructors, may benefit from investing time to investigate the effects of Yoga on dance related injuries. 
While an injury can almost never be anticipated, dancers can be prepared to take care of themselves or those they teach should one arise.\nPolsgrove, M.J., Eggleston, B.M., Lockyer, R.J. (2016). Impact of 10-weeks of yoga practice on flexibility and balance of college athletes. International Journal of Yoga, 9(1), 27–34. doi: 10.4103/0973-6131.171710", "score": 20.327251046010716, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "You're never too old to benefit from taking up dancing\nby Ruth Nichol\nWhen Jackie Scannell hung up her ballet shoes 45 years ago, she assumed her dancing days were behind her.\nScannell, who lives in Central Hawke’s Bay, returned to the barre eight years ago. Since then, she has gradually built up her strength to the point where she is able to go en pointe (dance on her toes), something that even young dancers can find difficult.\nAs her teacher, Esther Juon, points out: “You have to be incredibly fit to do what Jackie has done.”\nSpending four hours a week at ballet classes hasn’t just made Scannell fitter, stronger and more flexible. It has also fixed a long-standing lower back problem, and given her a sense of satisfaction and achievement as well as providing what she describes as a workout for her brain.\n“The amount you use your brain is incredible, because you have to think about so many different things as you dance.”\nShe’s one of a small but growing number of so-called silver swans – mostly women who are returning to (or in some cases taking up) ballet later in life. Although they tend to range in age from their late teens to their early sixties, some are much older; according to a recent BBC report, the oldest ballerina at adult classes run by the Scottish Ballet was 102.\nA survey by Dance Aotearoa New Zealand (Danz) last year found there are two main reasons people return to or take up ballet as adults.
The first is to improve their physical health and the second is to improve their mental health and sense of well-being.\n“It gives me strength and muscle tone,” said one respondent. “It makes my day feel better – I leave feeling good,” said another.\nIn October, the American Association of Orthopaedic Surgeons issued a statement recommending ballet as a way for people of all ages to improve mobility and build strength. Spokesperson Nicholas DiNubile – orthopaedic consultant to the Pennsylvania Ballet – said the art form provides flexibility, strength, core conditioning and agility training.\n“The controlled movements produced in ballet, such as demi-pliés [knee bends with feet planted to the floor] and relevés [toe raises], help to strengthen knees, ankles and feet. Arabesques [leg lifts to the rear] build gluteal and core muscles.", "score": 20.327251046010716, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "Instruction in proper technique is critical. Dancers should pay very close attention to correct posture and alignment: “shoulders over hips, over knees, over ankles” is an important concept to remember.\nMany dancers also find that general core strengthening helps develop good balance and control, thus reducing excessive work by the wrong muscle groups.\nBallet is a form of dance that can be traced back to the Italian courts. It took place in large halls and incorporated other arts.\nYou will get 135 different photos and 44 video demonstrations of unique stretches for all the major muscle groups in your body. In addition, the DVD includes 3 custom sets of stretches (8 minutes each) for the Upper Body; the Lower Body; and the Neck, Back & Core.
And the Handbook will teach you, step-by-step, how to perform each stretch correctly and safely.\nPacing the instruction: This means new, more challenging movements and combinations should only be introduced once the dancer has developed sufficient strength, flexibility and technical foundation to perform the new movement correctly and easily. “Pushing” a dancer can be counterproductive.\nStrength training: Although dancers do not typically use weight lifting, they can benefit greatly from dance-specific strength training using one’s own body weight. Beyond a good overall program, special attention should be given to balancing hamstring and quadriceps strength, as imbalances in that area are at the root of many back and lower-body overuse problems.\nThe areas that need particular attention are the hip flexors, hamstrings and calves, along with working to develop a good hip turnout.\nBallet stretches are one of the most under-used techniques for improving athletic performance, preventing sports injury and properly rehabilitating sprain and strain injuries. Don’t make the mistake of thinking that something as simple as stretching won’t be effective.\nOthers can develop Achilles tendonitis and stress fractures of the foot. One of the most common injuries is a lateral ligament injury of the ankle due to inversion. Some ankle problems stem from muscular and anatomical issues in the hips.
This muscle should be flexible enough to achieve a neutral pelvic position.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "Alice Holland, a physical therapist in Portland, Oregon and director of Stride Strong Physical Therapy agrees, but adds that you shouldn't use this as an excuse to shy away from exercise because a sedentary lifestyle is also bad for your back. “Over time, too much sitting deconditions the abdominal and gluteal muscles, which will cause extra pressure on the spinal column—ultimately leading to pain,” says Holland.\n2. Knee problems\nAchy knees might not take you by surprise if you've been running, skiing, or biking for years, but it can also happen if you're new to exercise. “This is especially true for people who’ve had a long stint of inactivity and then try to lose weight by doing high-intensity workouts,” says Holland. “To prevent knee pain over 40, my recommendation is to focus on conditioning and strengthening workouts that progress slowly.”\nBarbara Bergin, MD, a board-certified orthopedic surgeon in Austin, Texas, adds that the knee is particularly susceptible to injury as we age because a weakened, older meniscus in the knee is more likely to tear. “Squats, deep knee bends, and lunges have the potential to cause a painful condition in the kneecap because they put increased pressure on that area,” says Bergin. “And women are more susceptible to this than men because of the physiology of our knees and hips.”\n3. Rotator cuff injuries\n“The shoulder is the most mobile joint in the entire body—no other joint can match it in the degrees of freedom it has,” says Champion.
“But this mobility reduces the shoulder joint’s stability,” which puts it at greater risk for injury, he adds.\nBergin says rotator cuff strains, tendonitis, bursitis, and tears are very common after 40 because the shoulder is an area that’s particularly susceptible to repetitive strain. “Any exercise program that involves heavy lifting, repetitive lifting, burpees, and push-ups puts older people at risk,” she says. “In fact, orthopedic surgeons often joke that these programs keep us in business.”\n4.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-1", "d_text": "“Once the spine and tailbone are in an exact position, then we start turning out from the hips,” says Kremnev. “Otherwise, you have to go back and fix their alignment a few years down the road.”\nAccording to “Turnout for Dancers: Hip Anatomy and Factors Affecting Turnout,” a resource paper written by Virginia Wilmerding and Donna Krasnow and published by the International Association for Dance Medicine and Science (IADMS), approximately 60 percent of turnout comes from the hip joint and 20 to 30 percent from the ankle. The knee and tibia pick up the rest. In addition, specific muscle groups aid in rotation and help dancers sustain turnout. The lateral rotators—a set of six small muscles located underneath the gluteus maximus near the pelvic floor—help pull the greater trochanter out and back. The sartorius and inner thigh muscles (or adductors) also contribute to external rotation, while the abdominals stabilize the pelvis.\nVariables such as the femoral neck’s shape, length, and angle; placement of the hip socket; muscle tightness; and ligament flexibility can help or hinder a dancer’s rotation. Kremnev is sensitive to individual differences, and approaches each student differently to accommodate their physical abilities early on. For those with limited turnout, he implements a gradual, gentle stretching regimen. 
“We practice extending the muscles and then relaxing so they can reach their maximum,” he says. “As the muscles get longer and looser, we try a little bit more. It’s a step-by-step approach.”\nAfter age 13 or 14, there’s not much a dancer can do to increase their natural range, but strengthening the proper muscles can help. “Many dancers think they don’t have enough turnout,” says Molnar. “But what they really lack is strength at the very end range of motion, so they can’t hold it.” She recommends supplemental strengthening exercises provided on the IADMS website. “If you increase your strength, you improve your ability to reach your maximum.”\nForcing that maximum by torquing the knees and ankles can have damaging results. Wear and tear on the hip joint’s surface can result in arthritic changes and impingements.", "score": 19.44951185328346, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "Most dancers think more movement is inherently better. It’s not uncommon for a dancer to come in and lament their lack of flexibility, then display 135 degrees of hip flexion. And while more mobility CAN be useful for them, if they’re writing checks with their mobility that their stability bank can’t cash… they’re going to run into problems. Even if they avoid injury, they won’t be able to fully realize their potential as artists.\nThis is obviously interrelated to the point about mobility above, but make sure they have ample core and one-leg stability. Powerlifters know “you can’t shoot a cannon out of a canoe,” but dancers don’t always appreciate how much their core stability will limit their ability to be graceful while jumping or maximize their height. Furthermore, it’s difficult to nail multiple turns (or “pirouettes”) if there’s compromised hip stability. It’s also worth noting that since turning is a one-leg activity, a highly functioning lateral subsystem is going to be key here, so don’t neglect true one-leg work.
In other words, split squats are useful as a regression or to build the foundation, but be sure to progress to exercises like box step-ups to balance.\n3. Landing Mechanics\nIn the strength and conditioning world at large, there are often passionate debates as to the appropriate amount of jumping for athletes. Excessive plyometrics can represent a skewed risk-to-benefit ratio for many coaches.\nUnfortunately, since most forms of dancing require ample amounts of jumping, dancers don’t have the luxury of limiting their jumping volume. Therefore it’s super important to make sure their landing mechanics are spot on. Much like in a game, dancers can’t afford to think about their technique while performing or auditioning, so it’s important for their training to hammer in good mechanics.\nCertainly addressing mobility and stability as described above will go a long way towards cleaning things up. Even so, taking some dedicated time to work on the fine points of landing mechanics is a great idea. Many elite dancers will be adept at using their foot musculature to decelerate upon impact, but make sure females in particular cultivate the ability to land without caving into valgus collapse. As always, progression is key here; if they’re unable to stabilize in a less dynamic environment, regress as appropriate until they’re ready to handle the increased demands of jumping.", "score": 18.90404751587654, "rank": 69}, {"document_id": "doc-::chunk-1", "d_text": "This is the legacy that Golden and I are committed to,” she told us. “To make sure no one who dances gets injured, we have been teaching this technique for 35 years with great success, and those we taught this technique to have never been injured.”\nThe duo’s method is reportedly simple, yet it is apparently crucial that this technique is properly taught, and in the U.S. now only two people – Dame Nadya and Sir Golden – are authorized to teach it.\nOn “DWTS,” many of the performers have endured tough injuries.
“The Bachelor” veteran Melissa Rycroft was diagnosed with a disc herniation on her spine following a rehearsal last year, and in her first “DWTS” run in 2009 she was set back with a hairline rib fracture. Others, including Melissa Gilbert, Maria Menounos, Ralph Macchio, Jennifer Grey and Debi Mazar have all endured rough injuries.\nThen there was Gilles Marini, who underwent surgery to treat a separated shoulder. And the series proved too rough for “Jackass” star Steve-O, too. He had to rest after being injured flipping onto his back. Singer Jewel fractured the tibia in both legs, and Steve Wozniak fractured his foot and pulled a hamstring.\nEven elite athletes aren’t immune. Figure skaters Evan Lysacek and Kristi Yamaguchi were both hurt on the show. Meanwhile, pro dancer Maksim Chmerkovskiy is rumored to be suffering ankle and back issues that could prevent him from returning to the show next season.\nSo why do so many things go wrong on the popular reality series?\n“They seem to be doing more dances and more challenging choreography, which then involves a lot more rehearsing. Thus, overuse of muscle and joint syndrome,” explained former professional dancer and co-founder of the Exhale Core Fusion fitness program, Elisabeth Halfpapp.
“If possible, they need to have more time to recover between each dance and rehearsals, or massage and acupuncture daily which will help sore muscles and joints.”\nAnd according to Lani Muelrath, a fitness expert and author of “Fit Quickies,” something has to give to stop so many sprains, strains and snaps going forward.\n“’DWTS’ should bring in a specialist in correct movement form and alignment to teach the basics of core stabilization, as well as how to maintain position anchor points as a safe foundation for dance,” she said.", "score": 18.90404751587654, "rank": 70}, {"document_id": "doc-::chunk-21", "d_text": "Dance tutors are now being encouraged to break the bad habits, pursued since the early days of ballet in the court of Louis XIV, and adopt the lessons of British Olympic sport science, including the study of anatomy and warm-up techniques.\nThe study Fit to Dance? was supported by Dance UK, the national organisation for Britain's 25,000 professional dancers. It surveyed 658 ballet, contemporary, jazz and tap dancers and dance students, watched 250 performances, and conducted fitness and nutrition tests. The researchers concluded that dancers in Britain were less aerobically fit than counterparts in the United States and Russia. The dancers' own definition of fitness tends to mean flexibility rather than stamina and endurance.\nDiets were found to be notoriously unhealthy, with too many still believing the myth that \"food is the enemy\". One dancer told researchers: \"It's chocolate, cigarettes, Kit-Kats and Coke\". They eat more fatty foods than other sportspeople and fewer fruit and vegetables. To replace fluid, they mistakenly drink strong tea, coffee, beer, lager and wine.\nForty per cent of the men and 36 per cent of the women admit to smoking.
The report says: \"Some begin smoking only upon arriving at school, partly to cope with the unfamiliarity and pressure, partly because it is socially acceptable and partly to suppress appetite.\"\nProfessor Christopher Bannerman, one of the report's editorial team, remembers how he was left incapable of tying his shoe-laces for three months because of a back injury with London Contemporary Dance Theatre. He trained with weights in a gym and, when he returned to work, found he had become much fitter: \"I leaped into the air and wondered why everyone else was going down to the ground so soon. I was fit for the first time.\"\nProfessor Bannerman, now head of dance at Middlesex University, said: \"Some dancers are marginally more fit than the average person in the street in terms of aerobic fitness. They say: 'I want to express myself and look beautiful — I don't want to jog'.\"\nHalf of the dancers surveyed had chronic injuries from early in their careers. The cost can be high. One commercial management spent £38,000 on understudies and extra rehearsals to replace injured dancers.
But those new to working out are also at risk, especially if they push too far too fast or don't learn proper form.\n“The fact is that exercise-related injuries are more common after age 40, so it’s important to moderate your physical activity—and build up to your optimum workout intensity—if you’re in this demo,” says Liam Champion, a physical therapist at Physiwiz. It’s also helpful to understand which injuries are most common, so you can make an extra effort to avoid them. Here, doctors, physical therapists, and personal trainers share the most common exercise-related injuries they see in their patients and clients who are over 40.\n1. Low back pain\nGiven the fact that the American Chiropractic Association estimates a whopping 31 million Americans experience low back pain at any given time, it’s no surprise that this common hot spot may act up when you’re over 40. Rachel Straub, an exercise physiologist and co-author of Weight Training Without Injury, says that age increases the likelihood of low back issues.\n“Unfortunately, far too many people hurt their low back during exercise by overextending or arching their low backs, which is common during push-ups, kettle bell swings, and even certain yoga poses,” says Straub. “One solution is to boost core strength, which helps keep your spine in a neutral position as you move through these and all of your exercises.”", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-1", "d_text": "So, I hope that my observations on this problem, from the dancer’s perspective, might be of use for those of you having a similar situation or interested in the subject.\nI’m aware that neck pain can be caused by many different things, such as injuries or diseases.
In the case of dancers, it can often be caused simply by improper placement, coordination or training, which is the dancer’s responsibility to understand.\nI had a neck pain crisis at the age of 22. At the time I was so passionate about dance that I would have trained 24 hours a day, non-stop, if my energy had allowed it (I’m still as fervent, but wiser now…!). Not only was I passionate, but also anxious about improving my skills and being a better performer.\nIt seems like this behavior is a common characteristic among many dancers. We are recognized worldwide for the passion for our occupation, and there’s also a kind of popular archetype that describes us as perfectionists and compulsive (just have a glimpse at the film “Black Swan”, where Natalie Portman plays the role of a dancer, and you’ll find the popular archetype I’m talking about).\nWell, this personality trait helps a lot in dealing with the discipline and severity of our job, but when it comes to our health it can be highly counterproductive.\nRESTING AND CROSS-TRAINING.\nOne of the things I had to understand, and accept, during my personal recovery process, was that resting is a fundamental part of training. Without appropriate dedication to rest, physical skills worsen rather than improve, no matter how much effort we put into training. And when we push beyond our biomechanical limits, we will very probably enter the domain of injuries or dysfunctions.\nIt was very important for me to understand that resting doesn’t mean only stopping classes and sleeping. Actually, a dancer’s rest also means cross-training with practices that emphasize recovering movement qualities (everything that goes towards relaxation or efficient coordination of tension).\nIn my case, the study of Tai Chi Chuan, Hatha Yoga and the Bartenieff fundamental exercises was the key.
Nowadays there are many methods to explore for this, and I guess each dancer has to find the one that fits him or her best.", "score": 17.66385277391369, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "One of the most common positions I see young students trying to achieve in order to get more flexible is oversplits in second position, or box splits. Aggressively stretching young hips into these positions can permanently change their structure, so we need to be very, very careful in how this is achieved.\nWhen I have raised my concerns about over stretching in second in the past, many parents and teachers have resisted this, saying that their kids are fine. I had been struggling to find a way to demonstrate the very real risks of this kind of training, as the results are often not visible for several years after the fact.\nThen one day, one student came for an appointment, whose history and X-rays demonstrate my point exactly. She has graciously let me use them for education purposes and is hopeful that this will help other students avoid the issues that she has had.\nJust to clarify - I have no issues with extreme mobility when achieved safely, and with the appropriate control. In fact, much of the work I do with the high level, elite students is focused exactly on this. However, we focus on achieving this through educated, intelligent, up-to-date, smart ways, to avoid any potential issues, and the students are educated to manage their own bodies. This means that they will be able to continue dancing well into adulthood, and be able to live a normal, pain-free life when they do decide to stop performing.\nAnyone training young students, and the parents of these students, need to be very aware of the possible dangers when trying to improve mobility. The students themselves often find it difficult to see the long term consequences of their actions, and for them, achieving a certain position is often their end goal.
It is our responsibility to learn the safest possible ways to help them to achieve their goals, as well as educating them on the appropriateness of their goals to their chosen career.\nThe following images are from a young dancer, now 14, who had started experiencing hip pain at the age of 10. X-rays were taken and they were told that there was nothing wrong. (Her teacher told her it was normal to have hip pain..!) She continued to have hip issues and had X-rays retaken again at ages 13 and again at 14 before she saw me.", "score": 17.397046218763844, "rank": 74}, {"document_id": "doc-::chunk-1", "d_text": "The hunger for fame\nLongtime dancers say the acrobatics trend in dance has been growing for years in tandem with the rise of social media and reality-TV dance shows. Leslie Scott, a dancer and choreographer who founded the nonprofit Youth Protection Advocates in Dance, has been researching and surveying young dancers about their social media habits for years.\nMost young dancers keep up on the daily posts of Instagram dance \"celebrities,\" who have racked up hundreds of thousands of followers through photos and videos of mind-boggling contortions and tricks, Scott told Business Insider. The result is that dancers now strive for fame, not mastery of dance.\nSome of the most popular dancers like Maddie Ziegler — originally from the Lifetime show \"Dance Moms\" and currently a judge on Fox's \"So You Think You Can Dance: Next Generation\" — have attained a level of stardom that provokes instant reaction among their millions of followers.\nWhen Ziegler posts a photo of an impressive leg lift or a pirouette, she accumulates hundreds of thousands of \"likes\" along with a slew of wistful comments: \"How do you do that?\" or \"They make it look so easy,\" or \"This hurts my heart.\"\nScott said she worries about the youngest generation of dancers, who now view these dance celebrities as role models. 
They may be setting themselves up for careers that are nearly impossible to achieve — both physically and realistically, she said.\n\"Many of them have shifted aspirations — occupational aspirations — from wanting to maybe go to college for dance or do something in dance therapy to wanting to be a dance celebrity,\" Scott said.\n\"They want to be like that, they want to look like that, they want to move like that. And they've also shared at times that they feel depressed and inadequate because they're not able to live up to that level of talent.\"\nThe physical consequences\nThe hunger for fame and eagerness to perform wild stunts hasn't escaped the notice of the medical community. Ruth Solomon, a dance medicine expert and professor at the University of California at Santa Cruz, said she frequently treats injuries sustained from ambitious tricks and overstretching and is astounded by the extremes to which young dancers push their bodies.\nEven a split can be harmful, she said, and many young dancers push their legs far past the standard 180-degree angle.", "score": 17.397046218763844, "rank": 75}, {"document_id": "doc-::chunk-2", "d_text": "- The Nervous System\nOver the years, technology has allowed chiropractors to learn more about the body, particularly the spine, than ever before. The reason dancers have taken to chiropractors (and vice versa) is that these medical professionals can assist with the nervous system, spinal positioning and movement, and overall body performance. Many of the elements required for dance directly correlate to the work of a chiropractor.\nWhen the body calls on your balance and coordination, this activates the nervous system at the same time. While we all have the capabilities, the nervous system is forced into working at a higher level than normal. Sadly, the range of motion in the spinal joints can sometimes be restricted and this has a negative impact on the nervous system.\nWhat does all this mean? 
Your ability to coordinate and balance movements and muscles will depend on your nervous system and effective alignment of the spine. By ensuring your spine is aligned with chiropractic adjustments, you’ll have the full range of motion and your nervous system won’t be restricted as before (in other words, you can dance with freedom!).\nBy allowing the nervous system optimum conditions, you will also have the optimal body and brain communication. In dancing terms, you can enjoy more fluid movements, you’ll have the strength and endurance, and each session will last longer than normal.\nWouldn’t it be great to be able to dance without restriction? This is essentially what we aim for at Carpe Diem Chiropractic, and it’s a journey you can start while docked in Fort Lauderdale.\n- Pain Relief with Chiropractic\nEven if you haven’t been injured or hurt from dancing yourself, you probably know somebody that has because it’s common (to say the least!). Recently, specialists have come to realize that the majority of injuries in dancing are a result of overuse in the body. Common injuries include;\n- Achilles tendonitis\n- Patellofemoral pain syndrome\n- Trigger toe\n- Labral tears in the hip\n- Snapping hip syndrome\n- Lumbar spine damage\n- Hip bursitis\n- Sacroiliac joint dysfunction\n- Hip flexor tendonitis\n- Various stress fractures\nWhile some injuries are obvious and cause instant pain, others lead to soreness and aching. As we continue to dance on, the injury gets worse until it’s career-threatening.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "The fear sets in fast. In the pit of my stomach. Pop. I hear it again in my head. But I wonder if I ever heard it at all—or did I just feel it. Which is worse? I hobble off the stage and change my costume and start back to the wings to prepare for my next entrance. Each step onto that leg sends a searing pain through the outside of my foot and a quiet grunt out of my mouth. 
My eyes well up, but I don’t know if it is from the dread slowly creeping from the bottom of my stomach into my throat or the searing pain telling me STOP. I take a deep, wobbly breath, steeling myself for the step onto stage, but then a voice in the back of my head says you’re being silly. It is dress rehearsal—listen to your body! Dancers don’t like to listen to that voice. Because there is a stronger voice compelling us to dance, to move, to push through and on and up. If it wasn’t for that voice, how would we find the will to push our bodies as we do? To accept bleeding and bruised feet and aching muscles as normal, everyday occurrences. To jump higher, extend more, turn faster. Our art form is driven by that voice—endurance, discipline, progression. Otherwise we would stand still. Perhaps that is why injury is so devastating to a dancer. Because we feel as if we must betray that voice that has been singing in our souls for as long as we can remember. That voice that drives us to move. To create.\nI sit in a heap on the floor, letting the tears drip freely as I remove the shoe from my quickly swelling foot. Eventually I end up on the therapy table, carried by kind arms and encouraging words. I relate the incident to the therapist, my tears drying on my demon-painted face. Maybe it went crack, not pop. Could I have imagined it? The events are a blur, but that dread deep inside is very clear. And the constant throb from the blob at the end of my leg is all too real. Maybe it will be better in the morning. 
Maybe if I ice the heck out of it and elevate it really high and squeeze my eyes shut and think positive thoughts it will all go away and be a bad memory.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-4", "d_text": "This creates a situation where we have\n- The most at risk age group of rapidly developing skeletal systems (8-12 females, 10-14 males)\n- Making significant jumps in the amount of force each skill puts on the body (think force jump of kip cast handstands, vs giants, vs blind fulls, vs Jeagers, etc)\n- Also making significant jumps in the amount of time they spent training in gymnastics (4x/week at 12 hours total vs 6x/week at 20-25 hours)\n- While often times going through large body changes, and not being at their full peak potential strength capacity\nAgain, this is absolutely not meant to spook people into stopping gymnastics. It’s to illustrate an important point. I have come to call this the “Gymnastics – Talent Crossroads”, where gymnasts are commonly jumping in level during their most at-risk years.\nI also openly admit I have not coached a nationally ranked or elite level gymnast. That being said I am fortunate to consult with coaches who do, and have treated quite a few high level gymnasts, and the principle still remains true.\nNot monitoring and managing a young gymnast properly can quite quickly lead to the overuse elbow, knee, ankle, and spine issues that plague gymnastics. 
On the other hand, if we approach this time frame carefully, we may be able to guide a gymnast through their most at-risk years and in turn get multiple years in return down the road.\nWant to Learn More?\nI have a ton more information coming in the next parts of this article series, but if you are interested in learning more about gymnastics-specific strength and injury prevention, be sure to check out my new free online PDF guide for Gymnastics Pre-Hab.\nDownload My New Free\n10 Minute Gymnastics Flexibility Circuits\n- 4 full hip and shoulder circuits in PDF\n- Front splits, straddle splits, handstands and pommel horse/parallel bar flexibility\n- Downloadable checklists to use at practice\n- Exercise videos for every drill included\nAll for Part 1\nAt the risk of not making this too long, I am going to hold it there for this week. In Part 2 and Part 3 I will touch more specifically on shoulder/wrist flexibility, hyperextended elbows, skill technique, and strength training.", "score": 15.758340881307905, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "Julia, a former professional dancer in her early 30s, sits across from me in a blue wool dress, her legs elegantly crossed, her long blonde hair resting casually on her shoulders.\nWhen did you start dancing?\nI started dancing when I was 3 years old and it got me so excited that I kept wanting more and more of it. At the age of 9 I started to train dance at the SBBS (Swiss Ballet School, today the Zhdk).\nAnd your dance direction?\nI enjoyed classical ballet training, as you would imagine, with tight tights, pointe shoes and a tight bun in my hair. But of course I also trained in other styles such as jazz, contemporary dance, character dance and so on. . 
.\nAnd after you graduate?\nWell, I've never danced in a purely classical company.\nI was told during my training that I wasn't good enough for classical ballet.\nNot good enough – physically?\nNot necessarily physically, more technically not clean enough to become a ballerina. But I wasn't told that until my senior year at the Vienna State Opera (grins and takes a sip of coffee).\nOk, tough words. Speaking of which, there are rumors about companies. Is it true that many dancers use stimulants or drugs?\nI've never been in a company like that, I only know it from hearsay. But yes, consumption of certain stimulants seems to be widespread. Especially in classical companies the pressure is enormous, you have to function no matter how and in what condition.\nCrazy when you consider that the body is the main working tool for you dancers. What do you say again, when does your sell-by date start, at 35?\nYes, it starts around 30.\nDo you think that's justified?\nIt's true, after your mid-30s you hardly get any new engagements, there are a few exceptions.\nShould that be changed?\nAbsolutely, but that has to do with the structures in this industry and also with the general abuse of power. The image of the pretty young girl and boy is still popular on the stage. Most classical pieces are also about it. That's why I appreciate contemporary dance, because it breaks with such images and a change is noticeable. A young dancer certainly has the technique, but an experienced, older dancer has the necessary emotion and, in my opinion, you can't learn that, you can only live it.", "score": 15.758340881307905, "rank": 79}, {"document_id": "doc-::chunk-1", "d_text": "In April, she'll be performing at Yerba Buena Center for the Arts with a dance group.\n\"It's starting to feel better,\" Roman said.\n\"You've got a little bit of strength now,\" Kadel responded.\n\"Most doctors don't get it,\" Roman said afterward. 
\"They don't understand what you need.\"\nKadel said dancers are extremely compliant as patients. Physical therapist Leigh Allen added, \"They're very body aware. It doesn't take much for us to get them to make a change.\"\nThe final appointment, 21-year-old Danell Nelson of San Jose, said her left foot hurt so much that her toes turned purple when she danced and the pain was unbearable afterward.\nShe has two jobs, as a waitress and office worker, to support her dance ambitions, and is often on her feet. Recently, she was accepted to the Laban Conservatory in London.\nKadel examined Nelson and asked lots of questions.\n\"I'm worried that you have a stress fracture,\" Kadel said. \"The nice thing about stress fractures is they heal up pretty well.\"\nShe urged Nelson to eat foods rich in calcium and to stop smoking. She gave her a boot worth $200 that had been donated, and told her to wear it while standing.\n\"If it's hurting, then it can't be healing,\" Kadel said.\n\"I don't want to take time off,\" Nelson said later. \"But I'm glad this is here because I don't have insurance.\"\nLittle or no insurance\nKadel said most dancers are uninsured or underinsured. She also treats dance injuries in her UCSF/Mount Zion office.\nOn a recent Wednesday morning, she saw 16-year-old Mackenzie Conway, accompanied by her father, Dave. They had left their Mariposa home at 5:30 a.m. and driven five hours and 200 miles to see Kadel for the third time.\n\"Other doctors looked at Mackenzie's feet,\" Dave Conway said. \"Dr. Kadel looked at Mackenzie as a dancer. And her feet are a tool of her trade.\"\nMackenzie hurt her left ankle during a solo on Memorial Day weekend. After a week of rest, ice, compression and elevation, there was no improvement, so she saw a doctor who diagnosed a sprained ankle. 
A week later, she visited a podiatrist and came away with a support brace, medicated patches and a prediction the ankle would heal in four weeks.", "score": 15.758340881307905, "rank": 80}, {"document_id": "doc-::chunk-4", "d_text": "I was in my 30's by this point and wasn't really interested in having a tense, dramatic in-studio experience.\nI was really proud of how I approached this situation. I didn't make a big fuss out of it. I raised my hand and waved the choreographer over to speak to us. Instead of laying the blame on my fellow dancer for the anger that he was displaying towards me, I calmly mentioned that we seemed to be having a misunderstanding and some miscommunication and needed assistance to get back on track. The dancer snapped back in defense of his actions. Instead of trying to rat this dancer out, I continued to keep my calm and said, \"I'm not trying to cause any issues, I just feel that we need help communicating and I'm hoping that he can assist us with that.\" The choreographer saw what I was trying to do and did a really great job of calming the other dancer down and getting us back to a good place.\nSo, if you find yourself in a similar situation, please follow my lead and seek out assistance to create a dialogue versus assigning blame. Blaming your fellow dancer will only cause immediate and, likely, future issue. Dance is very competitive and dancers almost always want to be seen in the best light possible. If a dancer feels like you are trying to make them look bad for your own personal gain, you will never resolve any situation.\nWhen most people think of an injury, they imagine those scenes in movies when a dancer goes down screaming in pain and hugging an ankle or extremity. While these dramatic injuries do happen, there is also a range of problems that don't always manifest as immediate or cringe-inducing traumas. 
Back in 2014, when I suffered the injury that took me out of dancing with Oakland Ballet, I landed a Horton-based jump that required me to drop out of the air in a contraction over one leg. I didn't feel or hear any pop. And I didn't feel any dramatic event immediately after execution either. In fact, I didn't notice it until after we finished running this difficult Molissa Fenley work. At the completion of the run, I noticed that my back was becoming mildly tight, but it didn't present itself as anything more than a bit of overwork. It took me nearly an hour outside of rehearsal to notice that something had gone wrong and about 3 hours to know that something had gone severely wrong.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-2", "d_text": "Alice Klock and Jason Hortin in Jonathan Fredrickson's Untitled Landscape (Todd Rosenberg)\nAlice Klock, Hubbard Street Dance Chicago\nAge she started dancing: 11\nHow did you get started? I was home-schooled, and my mother wanted me to get out and meet people my age, so she asked, \"What about ballet?\"\nHow did you catch up? I worked outside of class. In academic classes like math, the more you study on your own, the better you'll do in class. It was the same for me with dancing.\nWere there benefits to starting late? I'm actually glad I started when I did because I developed as a person before I became a dancer. This is a life-consuming art.\nDo you have advice for other late starters? Never compare yourself to other people in class. I learned so much from dancers who were three or four years younger than me because I didn't let age get in the way.\nMichael Wood (Jack Hartin/courtesy Abhann Productions)\nMichael Wood, tap dancer\nAge he started dancing: 18\nHow did you get started? When I was auditioning for musical theater college programs, a friend told me about Oklahoma City University's dance program. I figured, why not? And I got in!\nWhat kept you going? My parents. 
I couldn't always feel myself getting better, but whenever they came to see my performances, they'd say, \"You've come further than you think.\"\nDid you have any breakthrough moments? During my junior year of college, a teacher said, \"Michael, I think ballet has finally clicked for you.\" And that was exactly what happened. One day I stopped feeling like I was trying to do ballet, and just started doing ballet.\nDo you have advice for other late starters? There's so much emphasis in this industry on what you can do at what age. But it's all hot air. If you want to do it, just do it.\nDancing kween Jennifer Lopez is preparing us for the second season of \"World of Dance\" by dropping an insane World of Dance promo that has her slaying the dance floor like we've never seen before.", "score": 15.652736444556414, "rank": 82}, {"document_id": "doc-::chunk-1", "d_text": "Furthermore, they can also present in physical changes, typified by a loss of coordination, balance, and neuromuscular control – all of which may have the potential to increase injury risk.\nHowever, much like the physical changes mentioned above, exercise has also been suggested to have a preventative effect on these nasty nervous system changes as well.\nTalk about the fountain of youth!\nInjury rates and age\nYou now understand what happens to your body as you get older. With this information in mind, it would stand to reason that these changes might make older athletes more susceptible to injuries than their younger counterparts (Kallinen, 1995).\nBut is this really the case?\nDo injury rates within the same sport go up with age?\nThere are a couple of ways of looking at this question, both of which have slightly different answers (Lewkowski, 2015).\nFirst and foremost, older athletes competing at the elite level in team sports do appear to be at a slightly higher risk of injury than the younger athletes that they play with. 
For example, soccer, football, and handball athletes over the age of 30 years will be at a slightly higher risk of injury than their teammates who are under 25 years of age.\nHowever, there may be a rather obvious reason for this.\nPrevious Injury Predictor\nYou see, the biggest predictor for future injury is a previous injury.\nAs a result, the more time you spend playing at the elite level, the more likely you are to experience any sort of injury – which will then increase your risk of experiencing an injury in the future.\nIt could simply be that older athletes have had the time to accumulate injuries, whereas younger athletes have not.\nNow, on the other hand, we have older athletes who participate in sports that involve different age classes – predominantly describing those athletes who compete in track and field, in events such as running, sprinting, jumping, and throwing.\nInterestingly, in these events, masters athletes do not appear to be at any greater risk of injury than their younger counterparts.\nSo, if you are looking to compete in a sport into your 60s and beyond, track and field may very well be the way to go!\nRelated Article: Reduce & Prevent Injuries With Dry Needling\nDoes the type of sport make a difference when it comes to aging and injury?", "score": 13.897358463981183, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "Smaller class sizes allow the teacher to ensure each student understands the concepts and instructions while developing proper technique. Our school limits our young children’s program to 12-15 students per class (w/ assistant) and the upper levels to 16-18 per class.\nOur school’s manager and desk staff are on hand after 2pm during the week and from 9:00am-12:00pm on Saturdays. Our hours vary during the summer months. Please call 561-840-7555 or email email@example.com for assistance.\nOur closed circuit color camera system allows parents to observe classrooms from our lobby. 
Parents can enjoy their child’s progress without disturbing the student/teacher classroom dynamic.\nOur young children are taught an educational and fun curriculum introducing them to the basics of dance, music, and theatre through combination classes and private lessons. This program provides a solid foundation for future success in the performing arts.\nStudents are given performance experience in our year-end recital, which is a special time for them to shine and demonstrate the skills they have mastered during the year. They may also audition for our Dance Company, a group that performs locally and attends competitions in the Spring.\nWe strongly encourage ballet, although it is not required unless you are involved in Dance Company. Ballet is the foundation of all forms of dance and proper technique is extremely important in developing a dancer’s muscles and preventing injuries. The technique gained through ballet training is best developed between ages 5-11 and will provide the basics necessary for tap, jazz, hip hop, and acrobatics.\nDance is a very physical activity that requires a lot of jumping which can put stress on bones and joints. Most dance footwear does not provide much support, so the shock of dance movement can place a lot of pressure on the knees and back of a dancer. The best way to prevent injury is by choosing a school with a professional “floating floor.” This is a floor that rests on a system of high density foam to absorb the shock of jumping.\nThe top layer of the dance floor is also an important factor. A vinyl composite “Marley” floor is accepted worldwide as the best surface layer for recreational and professional dance. A Marley floor allows dancers a controlled slide without risk of falling. Our school has floating floors with high density foam blocks under the surface and a Marley top surface. 
These special floors reduce the risk of injury and allow students to dance longer without getting tired.", "score": 13.897358463981183, "rank": 84}, {"document_id": "doc-::chunk-5", "d_text": "The approach to performance, however, changes with age: those under 60 are happier to perform. Kirti, for example, says she loves performing and being part of a team, with the preparation, costume, make-up – and being shouted at by their teacher as if they were 12-year-olds. Smita feels performing is a good way of showing what one has learned, and she appreciates the encouragement. Sue, however, ‘can’t think of anything worse’; some older dancers don’t want to be letting the other dancers down. It is encouraging that some teachers are providing sessions for the over-50s or 60s. Several commented on how yoga, which has been incorporated by teachers in their classes, is enabling them to continue dancing as they grow older.\nAll those who have families at home appreciate their support and encouragement; and mothers, children and grandchildren are justly proud of these dancers. They will all continue as long as they are able. May it be for many years more.", "score": 13.897358463981183, "rank": 85}, {"document_id": "doc-::chunk-1", "d_text": "Today I look, feel, and can dance better than any of my 20-year-old students. You don’t have to turn 45 to let your body go to the dogs. You can do that at any age, in which case you should not be dancing anywhere … not on stage, not locally, not abroad.\nWith all this focus on the physical, has the ICCR essentially reduced the\ndignity of Indian dances to that of runway modeling? What about the unique ability of its experts to transform the sensual to the sacred and the vulgar to the venerable? Does it not require a great deal of introspection, self-discovery, and growth before dances of such depth can be created or performed? 
How can anyone impose an age restriction on a journey so profound?\nDavid Roche, who for many years produced the famous San Francisco Ethnic Dance Festival, and is currently the director of Old Town School of Folk Music in Chicago, raised another important point. “Ramnad Krishnan (Karnatik musician) once told me that musicians are mere children until they reach the age of 60,” he said. “Up until then their raga interpretations are considered puerile.” Wouldn’t a similar growth period be essential for dancers as well, especially in an art form that continuously seeks to transcend the physical?\nAgeism has always been expressed as “intergenerational warfare,” primarily focused on the proportion of job opportunities and the Federal budget allocated to programs benefiting older adults. In defense of the ICCR, let us assume that this rule was made only to make it easier to support young talent or as New York based dancer Uttara Asha Coorlawala says, “an attempt to spread the goodies around rather than allow a few, and now senior stars, to hog all opportunities.” Do the ICCR officers apply the rule across the board to every artist or do they sneak in clauses of “extraordinary circumstances” or “exceptional cases?” Anita Ratnam, one of India’s brightest choreographers, asks, “How does one grow into a legend without government patronage and support?”\nThe ICCR officers need to find ways to support their issues without turning them into national statutes. Instead of discriminating on the basis of age, why not discriminate on the basis of quality? “We regret to inform you that your application was rated No. 15 by our selection panel and our budget can only support 12 touring companies this year.” Simple.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "(Ages 3 – 12)\nFor young dancers who are just starting! 
Creative ballet utilizes an age appropriate syllabus geared toward developing the skills needed to progress to either the conservatory or recreational programs.\n(Ages 7 and up)\nA carefully designed progression of levels for the young dancer through the pre-professional student. The emphasis is focused on developing the dancer as they grow in skills and abilities and offers the opportunity to dance at the highest level as they grow more serious about their studies.\nAdult and Teen Recreational Classes\nFor teens and adults who wish to dance for enjoyment and exercise. Recreational classes are open to all levels of skill and ages.", "score": 13.190487705459049, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "Ballet dancing has been a popular art form for centuries, and for those who have never danced, it has tremendous appeal as either a hobby or a potential vocation.\nChildren—especially girls—are often fascinated by the apparently effortless grace with which dancers move, and by the colorful, intricate costumes they sometimes wear.\nBut ballet has a darker side, and the placid expressions worn by dancers often mask terrible pain caused by ballet injuries.\nThe careers of professional dancers are often short; before they turn 30, most of them retire from dancing, either to teach or to pursue some other, unrelated occupation.\nThe purpose of this article is not to discourage children or their parents from considering ballet lessons, or even to discourage anyone from pursuing ballet as a career; rather, our purpose here is to make the reader aware of the risks that ballet injuries can pose to their future well being.\nTypes of Ballet Injuries That Can Happen\nMany of the various injuries dancers can suffer are not severe, and pose no serious threat to the dancer’s future as a performer, or to his or her ability to participate in other sports or to walk with a normal gait.\nOther types of ballet injuries, however, can be cause for concern. 
As you read about the various types of ballet injuries, feel free to click on any of the related links to read more about them.\nInjuries and conditions that can be caused by ballet include:\n- Bunion or hallux valgus (sometimes at an unusually young age)\n- Ankle sprains (particularly later ankle sprains)\n- Stress fractures brought on by small, repetitive impacts over the course of time\n- Trigger toe\n- Shin splints\nDancer’s Fracture is the most common acute fracture suffered by dancers. It strikes the fifth metatarsal, the bone that lies along the outer edge of the foot. It usually happens when a dancer jumps and lands badly, coming down on an inverted (turned-in) foot. As the name “dancer’s fracture” suggests, this is a common ballet injury.\nSesamoiditis: When a dancer is on demi-pointe, her weight rests on the sesamoid bones, which lie just behind the big toe. Regular application of this kind of stress can cause gradual onset of sesamoiditis, and the dancer will experience pain when bending or straightening the big toe.", "score": 11.600539066098397, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "Homemade chicken noodle soup made in the crock pot for a set-it-and-forget-it easy dinner. You can evan prep all the ingredients ahead of time and store them in the freezer to pull out on a day where you forgot to plan dinner. Just make sure you thaw the ingredients before adding it to the slow cooker to prevent it from staying at an unsafe temperature for too long.\n8 ounces whole-wheat egg noodles or other whole-wheat noodles\n3 pounds bone-in chicken breast, skin removed\n2 cups chopped onion\n1 cup chopped carrot\n1 cup chopped celery\n2 sprigs thyme\n8 cups low-sodium chicken broth\n2 teaspoons kosher salt\n2 cups frozen peas\n¼ cup chopped fresh dill, plus more for garnish\n2 tablespoons lemon juice\nThe adolescent dancer faces unique challenges due to physical and emotional changes that occur during pubertal development. 
Rapid growth periods can lead to reduced strength, impaired balance, and decreased flexibility, which can alter technical ability and increase the risk of injury.\nGrowth spurts in dancers usually occur between the ages 11-15 in girls and 13-17 in boys, and can last up to two years (IADMS 2000). As height increases, weight gain also occurs. A girl’s menstrual cycle begins during these growth phases and is essential for formation of bone. The pressure to stay thin during periods of weight gain in addition to being unaware of/ignoring nutritional needs results in an energy deficit and increases the likelihood of irregular periods (Delegate 2018). Bones grow at a faster rate than muscles and tendons, and limbs grow at a faster rate than the trunk. This affects strength, flexibility, and balance control in dancers. These changes can make movement feel awkward and may affect your ability to perform at the level that you are used to. Don’t be discouraged, these changes are temporary!\nThe injury rate increases by 35% as dancers reach ages 14-16. Body regions most commonly affected are the foot/ankle, lumbar spine, hips, and knees (Steinberg 2012, Delegate 2018).\nCommon injury types in adolescents:\nREDUCING INJURY RISK DURING GROWTH CHANGES:\n1)Education Committee (Kathryn Daniels, Chair). International Association for Dance Medicine & Science.", "score": 11.600539066098397, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Every week, scores of young dancers take to Instagram to show off a physical feat that seems to defy the limits of human hip bones.\nThe #TiltTuesday trend features dancers stretching their legs into extreme contortions. Sometimes the photos merely attract likes or followers. 
Other times they can attract the attention of scouts, leading to audition opportunities or even sponsorship deals.\nInstagram and YouTube are littered with images and videos of stretches such as the \"side tilt,\" in which one leg is extended into a standing side split, often held at bizarre angles that extend past 180 degrees. Another popular pose, known sometimes as a \"scorpion\" involves one leg stretched out behind the dancer, bent and grasped behind the head like a tail poised to sting.\nBut some dance industry veterans say the trend is an unhealthy phenomenon that encourages young dancers to attempt risky movements that could bring about irreversible damage.\nWhile some of the dancers performing these poses are professionally trained ballerinas, many are young amateurs in the earliest years of their dance training.\nThese young dancers pushing their bodies to achieve such complicated stretches or dangerous leaps are putting themselves at risk of ending their careers before they even begin, according to Paul Malek, a choreographer and artistic director at Australia's Transit Dance Company.\n\"Pushing these dancers so far past where they should be at ages eight, 10, 12 — they're actually wearing away what holds their hips together,\" Malek told Business Insider.\nMalek has been an outspoken critic of the stunts and acrobatics he sees dancers perform in competitions and on social media. He said he has seen firsthand the effects that excessive stretching and unnecessary tricks — such as side tilts — can have on dancers' hips and joints.\nBeyond the physical effects on young dancers' bodies, these trends have also warped their understanding of what dance is, Malek said. The tricks popularized on social media have introduced a narrow set of skills for dancers to aspire to — at the expense of the large range of movement contemporary dance encompasses.\n\"Instagram and social media is extremely, extremely to blame here in my opinion,\" he said. 
\"I ask young dancers, 'Who are some of the great dancers in the world?' And they name 16-year-olds who have 300,000 followers on Instagram who can do a leg mount or scorpion really well.\"", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "I’m 21 and studied ballet from the age of 4 until 13. However I gave up and returned at 19 but had to stop due to lack of funds! I have my Bloch pre-pointe shoes and my flat ballet shoes which I do practice on, and I’d never buy pointes without a teacher’s instruction.\nI’m looking to getting into training soon again, but do you think its too late to work for pointe?\n– Ballerina Interrupted\nGood for you for returning to your passion despite setbacks! First, I just want you to know that pointework is certainly not the be-all end-all of dance or even ballet. Ballet can be beautiful, striking and extraordinary without pointe shoes. I’m pointing this out because, not knowing your health nor seeing your feet, I cannot guarantee that you are eligible, but I will give you the parameters so that you can get going in the right direction.\nProvided that a dancer is physician-approved for exercise, the only age-related barriers would really be related to bone strength – too young could mean the bones have not sufficiently ossified, and too old could mean they had reached a point of brittleness. There are other possible roadblocks to your success – genetic predisposition to ingrown toenails, limited flexibility in the tendons and ligaments to make an arch sufficient to get over the box of a pointe shoe and other such issues that are best assessed by a well-qualified teacher and your physician in-person.\nProvided that you have no such limitations, the most important thing is for your pointe preparation is to get a quality teacher, preferably someone who has taught adults long enough to understand limitations that they run into and how to relay information in a way that makes sense to them. 
A good teacher for any age group will enforce a minimum of two ballet technique classes per week leading up to and for at least the first year of pointe. You should expect at least two years of re-training to prepare, possibly more. If you get there in less time, consider it lagniappe.\nIf you’ll go to my website homepage, look on the black menu bar and click Pointe Shoes. Read those articles. Then check out the Adult Beginner Pointe link on my blogroll to read about one adult ballet student’s foray into her first year of pointe.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "For over 25 years, frustrated American educators have watched while those in power have ignored the recommendations and research that has suggested this. Families that can afford to give their child the greatest gift they have to give, a dance education, can be assured of good fitness throughout their lifetimes because of this early investment. http://yogauonline.com/yogatherapy/yoga-for-kids/yoga-for-kids-practice/2016050415-youth-fitness-steady-decline\nAs if any of us here needed further convincing, here is yet one more short article supporting the argument that children and adults need more and not less of what dance has to offer. Note – executive functioning is referring to cognitive flexibility, working memory, processing speed and verbal fluency.\n“Musical training can now be added to three other activities which have been shown to increase children’s executive functioning: physical exercise, mindfulness training and martial arts.”\nSince dance training requires precision in all 3 of the above activities, or similar, I would suggest that it isn’t that bright above average children are attracted to dance but that dance is creating bright, above average children. I have NEVER taught an accomplished dancer who didn’t excel in school and develop psychological maturity ahead of her peers. 
And I’ve been teaching a long time 🙂
Growing up has always been tough. Everyone you love knows that so they do their best to give advice to dancers, catch them when they fall, and bolster self-esteem. But most lessons in life are learned through trial and error and the life of an artist-in-training is certainly no exception.
Last Christmas, I went to visit my first teacher. I remember how angry she made me a million times. I remember not getting roles I wanted or compliments I thought I deserved. She was like a mother to me but she always did what was best for the company which often meant I was denied. But I persevered. I realized rather young that I was no prodigy so I tried to capitalize on my strengths and work on my shortcomings. My own mother was from an era that trusted the teacher and didn’t interfere when I was weeping about some shortcoming I thought my teacher possessed. There were never teacher conferences or talk of changing schools. Each day, I arrived for class and rehearsal, just like the day before. All these years later, I know how wise both my teacher and my mother were.", "score": 9.460542230531878, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "By Daniel Labadie
The life of an athlete is full of challenges, and dancers are no exception to this. From Broadway stages to studios, a dancer’s body is in constant demand for hours a day, sometimes 7 days a week. While the role of a professional dancer can vary from teaching to performing, most dancers seem to have one major thing in common: injuries. Like any other sport, the physical demands of dance can sometimes result in occasional harm to the body. 
Dancers have found the practice of Yoga to be beneficial in supporting their physical bodies, but can Yoga be used as a form of injury management for these athletes?\nThere is evidence to suggest that Yoga may be beneficial for increasing and maintaining flexibility and balance amongst collegiate athletes, which serves as a form of injury prevention. The study analyzed a group of male college athletes who attended a bi-weekly Yoga class for 10 weeks (Polsgrove, Eggleston and Lockyer, 2016). The results showed that each participant experienced a significant increase in both flexibility and balance measurements and the researchers concluded that a regular Yoga practice “may enhance athletic performances that require these characteristics” (Polsgrove, Eggleston and Lockyer, 2016). However, more research is needed on Yoga as a form of injury management for dancers specifically. Additionally, the above study only analyzed male athletes and the results cannot be applied to other populations.\nLike most sports, there is an assumption of risk that comes with repetitive physical exercise. Local valley dance teacher, Rolanda Polanco, knows firsthand about movement related injuries. “I’ve broken toes, torn ligaments in my ankle and sprained both feet along with stress fractures, patellar tendonitis, back injuries and neck pain.” Rolanda has 18 years of experience teaching dance for children and high school students, who also experience their share of performance related injuries. “The most common injuries I have seen over the years amongst my students have been sprained ankles and stress fractures.”\nOn average, most dancers go through conditioning exercises and various forms of flexibility and strength training as methods of both injury prevention and improving performance. However, dance professionals do not always address injury treatment in the dance world. 
“Unfortunately in my experience, I had next to no injury management that came from my dance instructors & continued to repeatedly have the same injury,” says Hannah Swim, an actor and dancer of 11 years who also teaches Yoga.", "score": 8.086131989696522, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "(Note from Mark: The below article was written for a website that offers education for trainers. They approached me to talk about training dancers, and I obliged with this here article. Although it’s written for trainers, I think most folks (especially dancers) can learn a thing or two by seeing how we approach dancers. I also apologize, since I had to make it all professional and shit, it’s distinctly lacking in f bombs. I hope you can fucking forgive me.)
Being located in midtown Manhattan, Mark Fisher Fitness is the premier training hub for many of Broadway’s most accomplished dancers. We take a lot of pride in being able to apply the most progressive training protocols to a population that doesn’t generally get a lot of attention in the fitness world. This lack of attention is probably to be expected. Let’s face it, most trainers are far more interested in MMA than ABT (… google it).
As fitness pros, it’s an honor to work with people who’ve dedicated their lives to reaching personal peaks of artistry through movement. And although dancers display some of the most awe-inspiring athleticism you’ll ever get to see, they do carry special considerations. So if you’re not particularly familiar with the world of jazz hands, this article should give you a handle on how to work with dancers.
Most folks know dancers are flexible. And generally speaking this is true, but you’d be surprised how often dancers are getting their range of motion from less-than-ideal places. Not unlike the general population, dancer ankles and hip flexors are often super tight. 
And even more so than those chained to a desk all day, they often develop excessive mobility in less than ideal places. For instance, tight hips often lead to too much movement at the lumbar spine so dancers can create more range of motion. Anyone familiar with the Joint By Joint theory or the work of Dr. Stuart McGill knows this is a bad scene.
(Image caption: Pretty sure most folks would rip in two if they tried this.)
Be wary of assuming they’ll have good mobility just because they’re dancers. Since the best performers are often the best compensators, many elite dancers are able to create some beautiful aesthetic qualities on a base of a dysfunctional movement foundation. Be sure to look at their mobility with a discerning eye and do some baseline assessments or screening, like the FMS. If necessary, hammer soft tissue work, correctives, and mobility exercises.
So it’s pretty exciting and keeps you on your feet, but because of how long the run is, you need to take it one day at a time and focus on what you’re performing at that time.\nEvery dancer deals with aches and pains, it comes with the job. But I’ve definitely had my fair share of injuries. The type of injury it is can usually depend on what we’re rehearsing at the time, since we’re trying to perfect our steps and partnering before we open a program. But at the same time, we’re only human and mistakes happen. Sometimes we may land from a jump the wrong way or sprain an ankle, especially if your body is already exhausted from being overworked. That’s why it’s so important for us to take care of our bodies even outside of the studio by eating right, icing, cross training, and rolling out muscles because our bodies are our instruments.\nPersonally, I’m the kind of person that hates giving up and not being able to perform, so even if something hurts I’ll still give 100 percent and give it my all. But when it’s impossible for me to do a plié and jump or point my foot, then I know that it’s better for me to rest and recover so I can perform again later.\nSome common injuries that dancers experience in their careers are sprained ankles, meniscus tears, tendinitis, dropped metatarsals and skeletal alignment issues.", "score": 8.086131989696522, "rank": 95}, {"document_id": "doc-::chunk-2", "d_text": "College cheerleaders have the highest injury rate (2.4), followed by elementary school (1.5), high school (0.9), all-star (0.8), middle school (0.5), and recreational (0.5) cheerleaders. 
The overall injury rate in high school cheerleading is lower than in other girls’ high school sports (Table 1).[5,8]
As in other sports, cheerleading injury rates increase with age and competitive level.[5,9] Middle and high school cheerleaders have lower overall rates of injury than do collegiate cheerleaders (0.5 and 0.9 vs 2.4 per 1000 athletic exposures, respectively).[5] This is probably because older, better-skilled cheerleaders perform more complex gymnastics and height-based stunts. Rates of stunt-related injuries are higher for collegiate versus high school and middle school cheerleaders (1.59 vs 0.59 and 0.23 per 1000 athlete exposures, respectively).[10]
The most common mechanisms of injury are basing/spotting (23%), tumbling (14%–26%), and falls from heights (14%–25%).[5,11] Stunting accounts for 42% to 60% of all cheerleading injuries and 96% of concussions and closed-head injuries.[10,12,13] Pyramid stunts are responsible for the majority of head/neck injuries (50%–66%).[14,15]
Types of Injuries
When all age groups are considered together, lower-extremity injuries are most common (30%–37% of all cheerleading injuries), followed by injuries to the upper extremities (21%–26%), head/neck (16%–19%), and trunk (7%–17%).[5,7,9,15] Younger cheerleaders are more likely to experience upper-extremity injuries (41% vs 25% of all injuries for 6- to 11-year-olds vs 12- to 17-year-olds, respectively), and older cheerleaders are more likely to have lower-extremity injuries (38% vs 29% of all injuries for 12- to 17-year-olds vs 6- to 11-year-olds).[9]
Overall, sprains and strains are the most common types of injuries (53% of all cheerleading injuries), followed by abrasions/contusions/hematomas (13%–18%), fractures/dislocations (10%–16%),", "score": 8.086131989696522, "rank": 96}]} {"qid": 39, "question_text": "What happens during viral interference between two viruses?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "Mutual exclusion occurs when a host cell is 
simultaneously infected with two competing viruses, but only one virus goes on to replicate itself. This phenomenon was originally described in bacteriophages by Delbrück in 1945. Mutual exclusion implies that the virus particle which infects first alters the host cell in such a way that a second infection is unlikely. Subsequent studies found that mutual exclusion not only occurs between different viruses but also with nearly identical viruses, presumably as a way to avoid “super-infection” (Dulbecco, R. 1952 Mutual exclusion between related phages. J Bacteriol 63, 209-217). While the virus that wins the race prevents infection by additional viruses, the excluded viruses may interfere with its replication, which is referred to as a “depressor effect” (Delbrück M. 1945 Interference Between Bacterial Viruses: III. The Mutual Exclusion Effect and the Depressor Effect. J Bacteriol. 50(2): 151-170). There are different mechanisms underlying mutual exclusion and depression among bacteriophages, and still, not all of them are understood.
In some instances, exclusion causes the out-competed virus to be rapidly degraded. For example, with bacteriophage T3, degradation occurs after adsorption of the primary infecting phage but before the virus genome is expressed. There is indirect evidence with phage λ that host membrane depolarization is triggered by the successful infecting particle, which initiates exclusion.
Mutual exclusion not only occurs among bacteriophages but also in viruses infecting certain unicellular, eukaryotic chlorella-like green algae (the chlorella viruses). Plaques arising from single cells simultaneously inoculated with two different chlorella viruses usually only contain one of the two viruses. Until recently, the mechanism underlying chlorella virus mutual exclusion was unknown. 
Chlorella viruses often encode DNA restriction endonucleases and it was originally suggested that one function of the DNA restriction endonucleases might be to exclude infection by other viruses. A new study indicates that chlorella viruses prevent multiple infections by depolarizing the host cell membrane (Chlorella Viruses Prevent Multiple Infections by Depolarizing the Host Membrane. J Gen Virol. Apr 22 2009).", "score": 52.89279749536698, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "Dominance of one strain over the other was not absolute for either virus pair, as the subordinate virus was rarely eliminated. Interestingly, competition between two viruses with either pair rarely ended in a draw. Further work is needed to identify factors that influence virus-specific dominance to better understand what characteristics favor emergence of one strain in chicken populations at the expense of other strains.", "score": 49.768791919134976, "rank": 2}, {"document_id": "doc-::chunk-3", "d_text": "With some dedication, you can still manage to find the correct order of the paper sheets and build the furniture piece. Similarly, the cell can still build a viral progeny of functional viruses.
However, let us consider the event of superinfection, i.e. a phenomenon that takes place when two parasites get into the same host at the same time.
When infected by two influenza viruses, the cell has to manage as many as 16 pages of instructions, i.e. 16 genomic fragments, to build a progeny of viruses. 
Because the cell has no capability of defining which fragments came from which of the two viruses, the result will be the production of mixed, \"chimeric\" viral particles containing some components coming from the one virus, and some from the other, which might be unknown, and therefore catastrophic, to our unprepared immune system.
Classification of influenza viruses
In modern virology, influenza viruses are classified according to the sequence of two molecules that they expose on the surface: Hemagglutinin (H) and Neuraminidase (N). For Influenza A viruses, 18 types of H and 11 types of N have been described. For instance, the widely spread H3N2 Influenza A virus exposes Hemagglutinin of the 3rd type on its surface, and Neuraminidase of the 2nd. Another virus widely spread among humans is H1N1.
While the function of Hemagglutinin is to recognise the target cell and start the phases of entrance into it, Neuraminidase has the opposite role, that is to release the newly formed viral particles from the cells, in the context of a mechanism called \"viral budding\".
The fact that so many different types of these two molecules exist is a consequence of their position on the viral particle: exposed on the surface, H and N are more easily identified by our immune system. 
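The diversity arithmetic implied by the passage above can be sketched in a few lines of Python. This is a rough, illustrative calculation only; the assumption that each co-infecting influenza A virus contributes 8 genome segments is mine, inferred from the 16 fragments mentioned for a double infection, and is not stated explicitly in the text.

```python
# Back-of-the-envelope arithmetic for influenza A diversity (illustrative sketch).
# Assumption (mine): each parental virus carries 8 genome segments, so the
# "16 pages of instructions" in a doubly infected cell are 8 per parent.

H_TYPES = 18    # hemagglutinin types described for influenza A (H1-H18)
N_TYPES = 11    # neuraminidase types described for influenza A (N1-N11)
SEGMENTS = 8    # genome segments per parental virus (assumed, see above)

# Every pairing of one H type with one N type is a possible HxNy subtype label.
possible_subtypes = H_TYPES * N_TYPES

# In a co-infected cell, each segment of a progeny virion can come from either
# parent, so the number of distinct segment combinations is 2^8. Note this
# count includes the two purely parental genotypes.
possible_reassortants = 2 ** SEGMENTS

print(f"possible HxNy subtype labels: {possible_subtypes}")
print(f"possible progeny genotypes from one co-infection: {possible_reassortants}")
```

Running this prints 198 possible HxNy labels and 256 possible progeny genotypes, which is why a single superinfection event can confront the immune system with combinations it has never seen.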
The virus, driven by the mechanism of evolution, will need to change some parts of these molecules so that they can pass unobserved to the host's defences.
However, considering that our immune system can get trained in identifying the Hemagglutinin and Neuraminidase circulating in humans and generally keep up with the changes that are accumulated, where do viruses pick up new forms of these molecules during superinfection?", "score": 49.199683570850155, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Submitted to: Avian Pathology
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: January 3, 2012
Publication Date: June 18, 2012
Citation: Dunn, J.R., Silva, R.F., Lee, L.F., Witter, R.L. 2012. Competition between two virulent Marek's disease virus strains in vivo. Avian Pathology. 41(3):267-275. Available: http://dx.doi.org/10.1080/03079457.2012.677804. Interpretive Summary: Marek's disease virus (MDV), an avian herpesvirus that can cause cancer-like disease in chickens, is present within most poultry houses because vaccines don't prevent birds from shedding the virus. As birds become infected with multiple viruses, a competition likely occurs within each bird that allows a more virulent (evolved) virus to spread within the flock. This study demonstrated that when birds were infected with two Marek's disease viruses at the same time, one virus tended to consistently out-compete the other (it was consistently present in higher amounts). Further work is needed to identify what factors lead to the ability of one virus to out-compete another virus.
Technical Abstract: Previous studies have demonstrated the presence of multiple strains of Marek’s disease virus simultaneously circulating within poultry flocks, leading to the assumption that individual birds are repeatedly exposed to a variety of virus strains in their lifetime. 
Virus competition within individual birds may be an important factor that influences the outcome of coinfection under field conditions, including the potential outcome of emergence or evolution of more virulent strains. A series of experiments was designed to evaluate virus competition within chickens following simultaneous challenge with two fully virulent S1 MDV strains, using either similar (rMd5 and rMd5//38CVI) or dissimilar (JM/102W and rMd5//38CVI) virus pairs. Bursa of Fabricius, feather follicle epithelium, spleen, and tumor samples were collected at multiple time points to determine the frequency and distribution of each virus present by using pyrosequencing, immunohistochemistry and virus isolation. In the similar pair, rMd5 appeared to have a competitive advantage over rMd5//38CVI, which in turn had a competitive advantage over the less virulent JM/102W in the dissimilar virus pair.", "score": 45.902066208594036, "rank": 4}, {"document_id": "doc-::chunk-12", "d_text": "In the case of IAV, there was a significant reduction in the percentage of T cells expressing TNF-α, or both TNF-α and IFN-γ, as compared to mock. In addition, IAV reduced the IFN-γ and TNF-α MFIs in the IFN-γ+ and TNF-α+ single-positive populations, respectively (Figure 5C). Thus, overall, the response to secondary SEB stimulation was not significantly affected by prior infection with rHRSV, rHMPV, or rHPIV3, whereas there was a modest but significant inhibitory effect by IAV at day 4.
The ability of HRSV, HMPV and HPIV3 to re-infect symptomatically throughout life without the need for significant antigenic change has led to the widely held speculation that these viruses, especially HRSV, can suppress or subvert the host adaptive immune response, resulting in incomplete and inefficient long-term immunity. A number of studies have addressed virus-specific effects on APC and T lymphocyte responses in vitro, with varied and inconsistent conclusions. 
The first such studies reported that exposure of adult human peripheral blood mononuclear cells (PBMC) to HRSV, IAV, and Sendai virus suppressed proliferation in response to the non-specific mitogen phytohemagglutinin (PHA), an effect that was attributed to the expression of CD54/CD11a/CD18 (ICAM-1/LFA-1) and the interaction between APC and T cells. In 1992, Preston et al. showed that exposure of human cord blood mononuclear cells to HRSV resulted in a reduction in proliferation in response to PHA. The same study showed that exposure of adult PBMC to HRSV reduced the proliferation response to Epstein-Barr virus antigen, although this effect was not seen with all of the tested HRSV strains. This effect was attributed to secreted IFN-α. In another study, HPIV3 was shown to reduce proliferation of adult human PBMC in response to CD3-specific antibodies, an effect that was attributed to increased production of IL-10.
More recent studies have used increasingly more defined conditions. Bartz et al.", "score": 44.69313536286573, "rank": 5}, {"document_id": "doc-::chunk-8", "d_text": "The initial step for the stimulation of the cytokine response in RNA virus infection is cellular activation of the dsRNA receptor systems, Toll-like receptor 3 (TLR3) [51, 52] and retinoic acid inducible gene-I (RIG-I). These two pathways lead to the activation of the IκB kinase (IKK) α/β/γ complex and IKK-like kinases, e.g. IKKϵ and TANK binding kinase 1 (TBK1) [53–57], which mediate the activation and nuclear translocation of NFκB and interferon regulatory factor 3 (IRF3) [58, 59]. Inside the nucleus, the transcription factors IRF3, NFκB and activator protein 1 (AP-1) stimulate type I IFN and proinflammatory cytokine gene expression. Many viruses have evolved strategies to impede the effector mechanisms induced through these pathways, but viral interference with these proximal receptor interactions has not yet been described. 
NS3/4A appears to mediate proteolysis of a cellular protein within an antiviral signaling pathway upstream of IRF-3. The IFN-α (α1), IFN-β and IFN-λ (λ1) genes are extremely sensitive to the inhibitory effect of NS3/4A. There is also an inhibitory effect of NS3/4A on other cytokine/chemokine gene promoters such as IFN-β, CCL5/RANTES, CXCL10/IP-10, CXCL8/IL-8, TNF-α and IFN-α4. Thus, the NS3/4A protein is not only an effective antagonist of the IFN-β promoter but also of other cytokine/chemokine promoters. Inhibition of IRF-3 activation requires only NS3/4A protease activity and is abrogated by a specific, peptido-mimetic protease inhibitor, SCH6.
Disruption of TLR3 and RIG-I pathways by HCV NS3/4A
TLR3 is expressed on endosomal membranes (and the plasma membranes of some cells) and senses dsRNA that is present in endosomal and/or extracellular compartments, as shown in Figure 2. The TLR3 signaling pathway proceeds through the adaptor protein TRIF, also called TICAM-1.", "score": 41.013521931971674, "rank": 6}, {"document_id": "doc-::chunk-2", "d_text": "(Slides: enveloped virus replication, steps 1, 2a, 2b, 3 and 4)
Two aspects:
non-permissive cells → Abortive infection
Defective viruses: are genetically deficient and incapable of producing infectious progeny virions.
Helper viruses: can supplement the genetic deficiency and enable defective viruses to replicate progeny virions when they simultaneously infect a host cell with the defective viruses.
e.g., HDV & HBV
- Defective viruses lack gene(s) necessary for a complete infectious cycle;
- helper viruses provide missing functions;
- 100:1 (defective to infectious particles)
- DIP (defective interfering particle): when defective viruses cannot replicate but can interfere with other, related mature virions entering the cells, we call them defective interfering particles (DIP).
Virus infection which does not produce infectious progeny because the host 
cell cannot provide the enzyme, energy or materials required for the viral replication.
Non-permissive cells: host cells that cannot provide the conditions for viral replication.
Permissive cells: host cells that can provide the conditions for viral replication.
III. Viral interference:
When two viruses simultaneously infect one host cell, one type of virus may inhibit the replication of the other.
Range of interference occurrence
- between different species of viruses;
- between the same species of viruses;
- between inactivated viruses and live viruses.
Main mechanisms of viral interference:
a. One type of virus inhibits or prevents subsequent adsorption and penetration of another virus by blocking or destroying receptors on the host cell.
b. Competition between the two viruses for replication materials, e.g., receptors, polymerase, translation initiation factors, etc.
c. One type of virus may induce the infected cell to produce interferon, which can prevent viral replication.
The mechanism of IFN function
Significance of viral interference:
a. Stops viral replication and leads to patient recovery.
b. Inactivated virus or live attenuated virus can be used as a vaccine to interfere with the infection of the virulent virus.
May decrease the function of a vaccine when a bivalent/trivalent vaccine is used.
Just for your practice, see the answers at the end.
Fill in the blank
1-The surrounding protein coat of a virus is called the _______ and it is composed of protein subunits called _________.", "score": 40.54588499733328, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "In 1957, Alick Isaacs and Jean Lindenmann, both at the National Institute for Medical Research in London, set out to understand why inactivated influenza virus could induce interference in cells and tissues, preventing infection by \"live\" virus.
That the inactivated viruses physically blocked infection seemed unlikely. 
So, the two incubated chorio-allantoic membranes from chicken eggs with heat-inactivated influenza. They then washed the membranes and tried to infect them with normal virus. An interfering agent produced in response to the inactive virus seemed to be protecting both incubated membranes and fresh membranes placed in the fluids from incubated membranes. \"To distinguish it from the heated influenza virus,\" the authors wrote, \"we have called the newly released interfering agent, 'interferon.'\"
Interferon was thought to apply strictly to viruses, but subsequent research showed that interferon could inhibit intracellular bacteria and tumorigenesis. Because interferon is secreted in minute amounts, it took nearly 20 years to isolate and purify the protein. In 1977, a group of researchers purified type I interferon by alternating a crude mixture between normal- and reverse-phase chromatography 80,000 times. Interferon now has several applications beyond virus inhibition; while its initial promise of cancer prevention never panned out, two pegylated interferons are commercially available for treating Hepatitis C, and recombinant IFN-β is used for treating relapsing forms of multiple sclerosis.
The discovery of a family of interferon transcription factors in 1988,", "score": 39.304105153506846, "rank": 8}, {"document_id": "doc-::chunk-1", "d_text": "A virus possessing all mammalian-adapting residues in PB2 (i.e., PB2-147T, -339T, -588T and -627K, as was found in an H5N1 virus isolated from a fatal human case) was more pathogenic than viruses possessing only PB2-627K or PB2-147T/339T/588T.
The viral interferon antagonist NS1 protein
Virus infections stimulate the expression of IFN and the activation of interferon-induced genes (ISGs). Many ISGs encode proteins with antiviral functions, such as PKR, Mx resistance proteins, IFITM proteins, ISG15, OAS, RNase L or Viperin. 
Most viruses have therefore evolved mechanisms to control the upregulation of IFN and interferon-stimulated genes and/or the actions of proteins with antiviral activities. In 1998, Garcia-Sastre reported that the influenza A virus NS1 protein is critical to antagonize innate immune responses, while this protein is dispensable in IFN-deficient systems such as Vero cells. The NS1 protein interferes with the stimulation of innate immune responses through several mechanisms (reviewed in [21,71]): it suppresses the activation of the IFN-β promoter and the upregulation of the IRF-3, NF-κB and AP-1 transcription factors, all of which regulate IFN-β transcription. NS1 also binds to TRIM25 and the cytoplasmic sensor RIG-I, resulting in suppressed RIG-I signaling and IFN-β synthesis. Binding of NS1 to double-stranded RNA also interferes with the activation of antiviral factors such as OAS/RNaseL and PKR. Moreover, NS1 binds to the 30-kDa subunit of CPSF and to PABII proteins, which prevents the efficient cleavage and polyadenylation of cellular pre-mRNAs; this mechanism may limit the amount of IFN-β produced in response to an influenza virus infection. Several studies have demonstrated that the NS viral RNA segment of a highly pathogenic H5N1 virus can increase the virulence of a recipient virus, such as an H1N1 or H7N1 virus [43,72]. Moreover, the NS gene.", "score": 38.794823217052055, "rank": 9}, {"document_id": "doc-::chunk-241", "d_text": "PMID:29670029
Fensterl, Volker; Chattopadhyay, Saurabh; Sen, Ganes C
The interferon system protects mammals against virus infections. There are several types of interferons, which are characterized by their ability to inhibit virus replication and resultant pathogenesis by triggering both innate and cell-mediated immune responses. Virus infection is sensed by a variety of cellular pattern-recognition receptors and triggers the synthesis of interferons, which are secreted by the infected cells. 
In uninfected cells, cell surface receptors recognize the secreted interferons and activate intracellular signaling pathways that induce the expression of interferon-stimulated genes; the proteins encoded by these genes inhibit different stages of virus replication. To avoid extinction, almost all viruses have evolved mechanisms to defend themselves against the interferon system. Consequently, a dynamic equilibrium of survival is established between the virus and its host, an equilibrium that can be shifted to the host's favor by the use of exogenous interferon as a therapeutic antiviral agent.
Since the beginning of this century, humanity has been facing a new emerging, or re-emerging, virus threat almost every year: West Nile, Influenza A, avian flu, dengue, Chikungunya, SARS, MERS, Ebola, and now Zika, the latest newcomer. Zika virus (ZIKV), a flavivirus transmitted by Aedes mosquitoes, was identified in 1947 in a sentinel monkey in Uganda, and later on in humans in Nigeria. The virus was mainly confined to the African continent until it was detected in south-east Asia in the 1980s, then in Micronesia in 2007 and, more recently, in the Americas in 2014, where it has displayed an explosive spread, as advised by the World Health Organization (WHO), which resulted in the infection of hundreds of thousands of people. ZIKV infection was characterized by causing a mild disease presenting with fever, headache, rash, arthralgia, and conjunctivitis, with exceptional reports of an association with Guillain-Barre syndrome (GBS) and microcephaly. 
However, since the end of 2015, an increase in the number of GBS-associated cases and an astonishing number of microcephaly cases in foetuses and new-borns in Brazil have been related to ZIKV infection, raising serious worldwide public health concerns.", "score": 38.126393199709916, "rank": 10}, {"document_id": "doc-::chunk-11", "d_text": "The generation of dsRNA is a pivotal part of the replication process for many ssRNA viruses, and longer stretches of dsRNA are key indicators of viral invasion of a cell. In the present study, the detection of the replicative intermediary cRNA early in the gills signals its presence. Cells detect viral RNA and proteins via pathogen-associated molecular pattern (PAMP) receptors, which in turn stimulate interferons and the antiviral response. In this study, the more rapid systemic response induced by LVI might have provided a sufficient level of protection in a higher number of hosts, preventing this strain from reaching the damaging higher viral loads observed for HVI. In the latter half of the trial, the immune genes of the HVI-infected fish were up-regulated more than those of the LVI-infected fish. Interferon responses to viral infection are usually transient and self-limited to avoid a prolonged anti-viral state which in itself can be detrimental to the host and interfere with haematopoiesis. Indeed, the vast induction of cytokines and chemokines, generating a “cytokine storm” and overwhelming inflammatory responses, has been linked to highly pathogenic influenza virus pathogenesis. The over-activation of IFNβ and tumour necrosis factor–α (TNFα) creates a powerful pro-inflammatory response compared to that of low pathogenic influenza viruses, tipping the balance of the response towards inflammation and contributing to tissue damage. The possibility that the increased mortality caused by HVI, which coincided with high expression of immune markers, was caused by similar immune mechanisms should not be excluded. 
This study only focussed on a very small aspect of the immune response; therefore, a more in-depth analysis of a greater number of immune response genes in immersion-challenged fish would be advantageous. What is clear from the present study is that the immune response was sufficient to limit the infection by LVI while ineffectual at preventing HVI from instigating a progressive infection with an eventual fatal outcome.
RNA viruses have evolved diverse strategies to counteract and evade the host immune system. The influenza virus NS1 protein, for example, is the primary antagonist of the innate immune response and remarkably affects many stages of the interferon response. Two ISAV proteins, including the putative NS protein, have been linked to the antagonism of the IFN system, but it was not possible to relate these functions specifically here.", "score": 37.84081047007848, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "This review describes the contribution of noncytolytic mechanisms to the control of viral infections with a particular emphasis on the role of cytokines in these processes. It has long been known that most cell types in the body respond to an incoming viral infection by rapidly secreting antiviral cytokines such as interferon alpha/beta (IFN-alpha/beta). After binding to specific receptors on the surface of infected cells, IFN-alpha/beta has the potential to trigger the activation of multiple noncytolytic intracellular antiviral pathways that can target many steps in the viral life cycle, thereby limiting the amplification and spread of the virus and attenuating the infection. Clearance of established viral infections, however, requires additional functions of the immune response. The accepted dogma is that complete clearance of intracellular viruses by the immune response depends on the destruction of infected cells by the effector cells of the innate and adaptive immune system [natural killer (NK) cells and cytotoxic T cells (CTLs)].
This notion, however, has been recently challenged by experimental evidence showing that much of the antiviral potential of these cells reflects their ability to produce antiviral cytokines such as IFN-gamma and tumor necrosis factor (TNF)-alpha at the site of the infection. Indeed, these cytokines can purge viruses from infected cells noncytopathically as long as the cell is able to activate antiviral mechanisms and the virus is sensitive to them. Importantly, the same cytokines also control viral infections indirectly, by modulating the induction, amplification, recruitment, and effector functions of the immune response and by upregulating antigen processing and display of viral epitopes at the surface of infected cells. In keeping with these concepts, it is not surprising that a number of viruses encode proteins that have the potential to inhibit the antiviral activity of cytokines.", "score": 37.33610780243785, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "Viral infection of leukocytes or respiratory epithelium activates a variety of pattern-recognition receptors (PRR), such as the TLRs or the NOD receptors, that induce the production of IRF3, NFκB, and other cell activation pathways (26). Virally induced secretion of IFN-β (largely mediated through IRF3) is especially important because this molecule acts in a paracrine fashion on neighboring cells to up-regulate a group of key antiviral proteins (such as OAS1, MX1, and PKR) that prevent subsequent infection and spread of the virus.
Most viruses have developed sophisticated (and different) mechanisms to try to prevent IRF3/IFN pathway activation (27). VSV, for example, encodes a Matrix protein that inhibits host cell gene expression by targeting a nucleoporin and blocking nuclear export (28). Hence, any mRNAs induced by activation of the IRF pathway after viral replication is initiated will not lead to production of IFN or its downstream antiviral effector genes.
The NS1 protein of H1N1 influenza virus has also been implicated in a number of regulatory functions during influenza virus infection, including binding of the poly(A) tails of mRNAs (to inhibit their nuclear export) (29), inhibiting host mRNA polyadenylation, contributing to the virus-induced shutoff of host protein synthesis (31), and inhibiting pre-mRNA splicing (29). In addition, binding of NS1 to dsRNA prevents in vitro activation of PKR (35).
Given this ability of IRF3/IFN to prevent viral infection, activation of this pathway as a potentially “antigen-independent” way of controlling disease has been investigated. As early as the 1960s, type I IFNs were reported to be antivirals (36). Early reports of success using nasal spray preparations of IFN-α by Russian drug companies for prophylaxis and treatment of influenza attracted the interest of Western scientists into this arena (37). Subsequent studies were conducted that found that the reported beneficial effects were minimal in relation to the considerable side effects associated with the treatment (38).", "score": 36.048105088382094, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "demonstrating that type I interferon activity is required for virus-induced granulocyte death.
\"This link to the interferon is really interesting,\" said Jonathan McCullers of St. Jude Children's Research Hospital in Memphis, who was not involved in the study.
\"That, to me, is the important part of the paper.\"
It's not yet clear if this mechanism will underlie other cases of bacterial superinfection after viral disease, Czuprynski told The Scientist, but, because type I interferon activation is a generalized body response to viral infection, \"this might be something that happens with other viral agents as well,\" he said.
\"All agents causing high interferon type I levels are in our opinion likely to cause such effects and facilitate superinfection,\" Navarini told The Scientist in an Email.
According to McCullers, \"it's going to take a little work to generalize it to human viruses.\" Influenza, for example, causes an increase in human granulocyte number and so probably does not encourage bacterial infection through this mechanism, but \"it might be a very reasonable model for HIV,\" McCullers told The Scientist. Researchers will \"have to apply it to each virus in turn and see if it fits.\"
Melissa Lee Phillips
Links within this article:
C. Holding, \"Evolution of innate immunity,\" The Scientist, July 8, 2004.
J.F. Wilson, \"Renewing the Fight Against Bacteria,\" The Scientist, March 4, 2002.
Navarini et al., \"Increased susceptibility to bacterial superinfection as a consequence of innate antiviral responses,\" PNAS, published online October 9, 2006.
L. Malmgaard et al., \"Promotion of alpha/beta interferon induction during in vivo viral infection through alpha/beta interferon receptor/STAT1 system-dependent and -independent pathways,\" Journal of Virology, May 2002.
C.Q. Choi, \"How viruses interfere with interferon,\" The Scientist, October 1, 2006.", "score": 34.53443202539592, "rank": 14}, {"document_id": "doc-::chunk-1", "d_text": "In a majority of cases, the production of interferons is induced in response to microbes such as viruses and bacteria and their products (viral glycoproteins, viral RNA, bacterial endotoxin, bacterial flagella, CpG sites), as well as mitogens and other cytokines, for example interleukin 1, interleukin 2, interleukin-12, tumor necrosis factor and colony-stimulating factor, that are synthesised in response to the appearance of various antigens in the body. Their metabolism and excretion take place mainly in the liver and kidneys. They rarely pass the placenta but they can cross the blood-brain barrier.
The therapeutically used forms are denoted by Greek letters indicating their origin: leukocytes, fibroblasts, and lymphocytes for interferon-alpha, -beta and -gamma, respectively.
Viral induction of interferons
All classes of interferon are very important in fighting RNA virus infections. However, their presence also accounts for some of the host symptoms, such as sore muscles and fever. They are secreted when abnormally large amounts of dsRNA are found in a cell. dsRNA is normally present in very low quantities. The dsRNA acts like a trigger for the production of interferon (via Toll Like Receptor 3 (TLR 3), a pattern recognition receptor of the innate immune system, which leads to activation of the transcription factor IRF3 and late-phase NF kappa B). The gene that codes for this cytokine is switched on in an infected cell, and the interferon is synthesized and secreted to surrounding cells.
As the original cell dies from the cytolytic RNA virus, thousands of viruses will infect nearby cells. However, these cells have received interferon, which essentially warns these other cells of the virus. They then start producing large amounts of a protein known as protein kinase R (or PKR).
If a virus infects a cell that has been “pre-warned” by interferon, the PKR is indirectly activated by the dsRNA, and begins transferring phosphate groups (phosphorylating) to a protein known as eIF-2, a eukaryotic translation initiation factor. After phosphorylation, eIF2 forms an inactive complex with eIF2B, thereby leading to reduced translation initiation and reduced protein synthesis.", "score": 34.1859829111593, "rank": 15}, {"document_id": "doc-::chunk-13", "d_text": "Those that can keep cells to themselves by limiting or preventing coinfection should be selectively favoured, and many have evolved mechanisms to do just this (Simon et al., 1990; Singh et al., 1997; Turner et al., 1999). One of these, vesicular stomatitis virus (VSV), is an RNA virus in which recombination between different strains has not been detected. Might superinfection exclusion be a constraint on recombination in VSV? Another, the segmented bacteriophage φ6, seems to limit excessive superinfection but not to the extreme of one-virus-per-cell that would preclude genetic exchange. Instead, it appears to have evolved an optimal coinfection limit of two to three viruses per cell, presumably to balance the costs of intracellular competition with the benefits of reassortment (Turner et al., 1999). Since the advantage (and cost) of recombination in any particular virus will be mediated by such factors as the selective pressure for novel variation, the importance of interactions between different parts of the genome, as well as the virus mutation rate and population size, we should expect different optima (and therefore different degrees of constraint) in different cases.
If divergent viruses manage to infect the same cell, the next step is simply for one of them to replicate in the presence of the RNA of the other. This is not necessarily inevitable even in coinfected cells. The replication of the φ6 RNAs, for example, takes place within a procapsid and it is thought that the entry of two different RNA molecules of the same genomic segment into this sequestered environment is impossible or at least very rare (Mindich et al., 1992). This could explain the lack of homologous recombination in this phage. Thus the vagaries of RNA replication in certain viruses could impose physical constraints on the production of hybrids.
Template switching by the viral replicase, the mechanism whereby recombinant RNA molecules are actually created, may also be limited by physical constraints. The negative-strand RNA viruses, for example, whose genomes are packaged into ribonucleoprotein structures by association with N protein, may be less permissive than other RNA viruses to copy-choice recombination.", "score": 33.25113367117489, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "When you think about viruses, you might wonder how they infect, how they spread, and how they kill. These questions are of natural interest—you, after all, could play host to a grand variety of lethal viruses. But do remember: it’s not all about you.
A virus’ world contains not just potential hosts, but other viruses. It has competition. This simple fact is often ignored but it has profound implications. In a new study, Lisa Bono from the University of North Carolina has shown that competition between viruses can drive them to spill over into new hosts, imperilling creatures that they never used to infect.
Earlier this year, a 17-year-old French woman arrived at her ophthalmologist with pain and redness in her left eye. She had been using tap water to dilute the cleaning solution for her contact lenses, and even though they were meant to be replaced every month, she would wear them for three.
As a result, the fluid in her contact lens case had become contaminated with three species of bacteria and an amoeba called Acanthamoeba polyphaga, which can cause inflamed eyes.
It was carrying two species of bacteria, and a giant virus that no one had seen before—they called it Lentille virus. Inside that, they found a virophage—a virus that can only reproduce in cells infected by other viruses—which they called Sputnik 2. And in both Lentille virus and Sputnik 2, they found even smaller genetic parasites – tiny chunks of DNA that can hop around the genomes of the virus, and stow away inside the virophage. They called these transpovirons.
If flu viruses have favoured hook-up spots, then pig pens would be high on the list. Pigs' airways contain molecules that both bird flu viruses and mammalian flu viruses can latch onto. This means that a wide range of flu strains can infect pigs, and if two viruses infect the same cell, they can shuffle their genes to create fresh combinations.
This process is called reassortment. In 2009, it created a strain of flu that leapt from pigs to humans, triggering a global pandemic. If we needed proof that pigs are “mixing vessels” for new and dangerous viruses, the pandemic was it.
Now, scientists have found a new strain of flu in Korean pigs that re-emphasises the threat.", "score": 33.154470523347385, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "Antiviral response promotes bacterial infection
Innate immune response to virus leaves mice more susceptible to bacterial infection, study finds
An innate immune response to viral infection can kill white blood cells needed to fight off bacteria, according to a study published online this week in PNAS.
This effect could explain why bacterial \"superinfections\" can take hold in the body more easily when a viral pathogen is already present, the study authors say.
\"Viral-bacterial synergism is something that is a significant clinical issue in both human and veterinary medicine and we don't have a detailed understanding of what's going on,\" said Charles Czuprynski of the University of Wisconsin-Madison School of Veterinary Medicine, who was not involved in the study.
Researchers led by Alexander A. Navarini and Mike Recher of the University Hospital Zurich in Switzerland examined superinfection in mice by first infecting them with an RNA virus called lymphocytic choriomeningitis virus (LCMV) and then with the bacterium Listeria monocytogenes. Three days later, the mice infected with both pathogens showed 1000 times higher bacterial concentration in the liver and spleen than did mice infected with bacteria only.
Since the mice showed susceptibility to bacterial superinfection in the first three days, the authors examined key members of the early innate immune response: white blood cells called granulocytes. In humans, granulocyte number is the most important predictor of susceptibility to bacterial infection, Navarini told The Scientist. He and his colleagues found that granulocytes began undergoing apoptosis about two days after viral infection and that granulocyte numbers in bone marrow of mice infected with LCMV were considerably below normal.
\"There's probably some window of time here in which viral infection leads to increased susceptibility to bacterial infection, and they're providing a mechanistic explanation for why this might be occurring,\" Czuprynski said.
The researchers next looked for antiviral factors that might be responsible for granulocyte death. They found that levels of type I interferon -- a cytokine known to become up-regulated in response to most viruses, including LCMV -- inversely correlated with levels of granulocytes.
To see if there was a causal connection, the researchers infected mice lacking type I interferon receptors with LCMV. These mice did not show granulocyte cell death or sensitivity to L. monocytogenes infection.", "score": 32.92532006430534, "rank": 18}, {"document_id": "doc-::chunk-8", "d_text": "Some inclusion bodies represent \"virus factories\" in which viral nucleic acid or protein is being synthesized; others are merely artifacts of fixation and staining. One example, Negri bodies, is found in the cytoplasm or processes of nerve cells in animals that have died from rabies.
Gene Expression Regulation, Viral: Any of the processes by which cytoplasmic factors influence the differential control of gene action in viruses.
Viral Interference: A phenomenon in which infection by a first virus results in resistance of cells or tissues to infection by a second, unrelated virus.
Parainfluenza Virus 1, Human: A species of RESPIROVIRUS also called hemadsorption virus 2 (HA2), which causes laryngotracheitis in humans, especially children.
Phylogeny: The relationships of groups of organisms as reflected by their genetic makeup.
Viral Core Proteins: Proteins found mainly in icosahedral DNA and RNA viruses. They consist of proteins directly associated with the nucleic acid inside the NUCLEOCAPSID.
Amino Acid Sequence: The order of amino acids as they occur in a polypeptide chain. This is referred to as the primary structure of proteins. It is of fundamental importance in determining PROTEIN CONFORMATION.
Genetic Vectors: DNA molecules capable of autonomous replication within a host cell and into which other DNA sequences can be inserted and thus amplified. Many are derived from PLASMIDS; BACTERIOPHAGES; or VIRUSES. They are used for transporting foreign genes into recipient cells. Genetic vectors possess a functional replicator site and contain GENETIC MARKERS to facilitate their selective recognition.
HeLa Cells: The first continuously cultured human malignant CELL LINE, derived from the cervical carcinoma of Henrietta Lacks. These cells are used for VIRUS CULTIVATION and antitumor drug screening assays.
Immunoglobulin G: The major immunoglobulin isotype class in normal human serum. There are several isotype subclasses of IgG, for example, IgG1, IgG2A, and IgG2B.
Virion: The infective system of a virus, composed of the viral genome, a protein core, and a protein coat called a capsid, which may be naked or enclosed in a lipoprotein envelope called the peplos.
Antiviral Agents: Agents used in the prophylaxis or therapy of VIRUS DISEASES.", "score": 32.602331715813456, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "The effect of an infectious agent on health and disease is likely influenced by its diverse interactions with other infectious agents claiming the same host or host cell. In other words, disease potential is dependent on interactions between numerous different pathogens and the host's defense mechanisms. Understanding these interactions may help to better predict the outcome of disease, improve treatment, and identify novel therapeutic strategies. The Experimental Virology group investigates general concepts, molecular pathways, and implications of virus-virus interactions in the co-infected host, using the complex and competitive relationship between adeno-associated virus (AAV) and its helper viruses as a model.
The knowledge gained from these studies can also be applied to investigate more complex biosocial processes of disease, for example at the level of an infected host, thereby forming a strong link between the Experimental Virology and the Molecular and Clinical Veterinary Virology groups.
The Experimental Virology group also engages in the development and use of viruses for applications in Gene Therapy and Vaccination.
Cover Illustration: Electron micrograph of herpesvirus particles", "score": 32.10149459621578, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "Better way to develop vaccine against flu virus identified
May 13th, 2009 - 12:40 pm ICT by ANI
Washington, May 13 (ANI): By cashing in on the interaction between a virus and antibodies that fight infection, Princeton University scientists may have discovered a better way to make a vaccine against the flu virus.
The researchers have said that by manipulating the multi-stage interactive process, called antibody interference, to their advantage, it could be possible to design more powerful vaccines than exist today.
“We have proposed that antibody interference plays a major role in determining the effectiveness of the antibody response to a viral infection.
And we believe that in order to get a more powerful vaccine, people are going to want one that minimizes this interference,” said Ned Wingreen, a professor of molecular biology.
When Ndifon and colleagues analysed data about viral structure, antibody types and the reactions between them produced by virology laboratories across the country, they noticed a confusing pattern.
They found that antibodies were often better at protecting against a slightly different virus, a close cousin, than against the virus that spurred their creation, a process known as cross-reactivity.
On closer inspection, they found that a phenomenon known as antibody interference was at play: it arises when a virus prompts the creation of multiple types of antibodies.
As a result, during a viral attack, antibodies vie with each other to defend the body, and sometimes crowd each other out while they attempt to attach themselves to the surface of the virus.
But, strangely, antibodies that are less effective at protecting the body against a specific virus are equally adept at attaching themselves to it, blocking the more effective antibodies from doing their job.
Thus, the scientists have suggested that if a way can be found to weaken the binding of the less effective antibodies, this might constitute a new approach to vaccine design.
The researchers claimed that the pattern of enhanced cross-reactivities could easily be attributed to viruses that differ only at the sites on their surfaces where the less effective antibodies bind.
Such variants would make ideal vaccine strains, guiding the immune system to produce two distinct types of antibodies: effective ones that are well matched to and good at binding to the infecting virus, and ineffective ones that are poorly matched to and bad at binding to the infecting virus, and consequently stay out of the way.
The findings have been described in the online edition of the Proceedings of the National Academy of Sciences.",
"score": 32.05835481494247, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "Most viruses are identified based on their capacity and mechanisms used to produce disease; however, healthy individuals harbor viral communities that do not directly cause known pathologies. These viral communities are known as the human virome (Rohwer et al., 2009). The coexistence of viruses and bacteria within the microbiome encourages the study of viral evasion mechanisms that provide immune system tolerance to these pathogens. These mechanisms are undoubtedly also used during the pathophysiology of viral diseases (Abeles and Pride, 2014).
Microbiota Against Viral Infection
Since the discovery that gut bacteria instruct host immunity, i.e., they restrict pathogen proliferation, it would seem logical to think that the intestinal microbiota would also play a predominant role in inhibiting infections of viral etiology. Studies reveal that commensal bacteria are crucial in maintaining immune homeostasis and immune responses at mucosal surfaces (Ichinohe et al., 2011). Mucous membranes are the gateway to many pathogens, including viruses. For example, intestinal microorganisms promote maturation of the secondary lymphoid organs within the gastrointestinal tract, which is the first line of defense of the intestinal mucosa (Karst, 2016). Germ-free mice are unable to mount an efficient immune response against pathogens due to immature intestinal lymphoid structures (Hooper et al., 2012; Kamada and Núñez, 2014).
Given the complexity of the microenvironment in mucosal surfaces, it makes sense that the most studied bacteria-virus interactions are the ones involving the intestinal microbiome. The protective role of commensal bacteria, mainly probiotics, is well established; however, their interactions with viruses require further study.
The Lactobacillus genus can inhibit murine norovirus (MNV) replication in vitro, which could be mediated by the increased expression of IFNβ and IFNγ. In vivo models show that these bacteria are decreased during MNV infection, though with the aid of retinoic acid treatment, it is possible to avoid this effect. It has been hypothesized that the antiviral effects of vitamin A (and consequently, retinoic acid) are mediated by the Lactobacillus genus due to interferon production (Lee and Ko, 2016).", "score": 31.235295582560507, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "Natural Resources Institute, University of Greenwich, Central Avenue, Chatham Maritime, Kent, ME4 4TB, UK
Accepted for publication 6 June 2001.
Mathematical models of plant-virus disease epidemics were developed where cross protection occurs between viruses or virus strains. Such cross protection can occur both naturally and through artificial intervention. Examples of diseases with continuous and discontinuous crop-host availability were considered: citrus tristeza and barley yellow dwarf, respectively. Analyses showed that, in a single host population without artificial intervention, the two categories of host plants, infected with a protecting virus alone and infected with a challenging virus, could not coexist in the long term. For disease systems with continuous host availability, the virus (strain) with the higher basic reproductive number (R0) always excluded the other eventually; whereas, for discontinuous systems, R0 is undefined and the virus (strain) with the larger natural transmission rate was the one that persisted in the model formulation. With a proportion of hosts artificially inoculated with a protecting mild virus, the disease caused by a virulent virus could be depressed or eliminated, depending on the proportion. Artificial inoculation may be constant or adjusted in response to changes in disease incidence.
The importance of maintaining a constant level of managed cross protection even when the disease incidence dropped was illustrated. Investigations of both pathosystem types showed the same qualitative result: that managed cross protection need not be 100% to eliminate the virulent virus (strain). In the process of replacement of one virus (strain) by another over time, the strongest competition occurred when the incidence of both viruses or virus strains was similar. Discontinuous crop-host availability provided a greater opportunity for viruses or virus strains to replace each other than did the more stable continuous cropping system. The process by which one Barley yellow dwarf virus replaced another in New York State was illustrated.
© 2001 The American Phytopathological Society", "score": 30.027297440693967, "rank": 23}, {"document_id": "doc-::chunk-2", "d_text": "In contrast, NS5A and E2 proteins are reported to enhance translation by inhibiting PKR functions [44, 45]. Therefore, it seems that during the course of HCV infection, there is a balance between inhibition and enhancement of host cell translation depending on the degree of activation/inhibition of the PKR pathway. Most of these studies have relied on systems that express HCV proteins individually. Nonetheless, since all HCV proteins are potentially produced in vivo during virus infection of hepatocytes, it is important to use a full-length genome rather than individual HCV proteins to study the molecular mechanisms involved in virus-host cell interactions and in HCV pathogenesis. In our viral delivery system, the overall expression of structural and nonstructural HCV proteins by recombinant VT7-HCV7.9 virus did not reverse the action of PKR, since host cell translation was inhibited through phosphorylation of eIF-2α-S51 by the kinase.
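The competitive-exclusion result reported in the cross-protection modelling abstract above (under continuous host availability, the strain with the higher basic reproductive number R0 eventually excludes the other) can be illustrated with a minimal two-strain SIS-type model with host turnover. This is a generic sketch, not the authors' actual model; all parameter values are hypothetical.

```python
# Minimal two-strain SIS-type model with host turnover, illustrating
# competitive exclusion: with R0_i = beta_i / (gamma + mu), the strain
# with the higher R0 eventually excludes the other.
# Generic sketch; all parameter values are hypothetical.

def simulate(beta1, beta2, gamma=0.1, mu=0.02, dt=0.05, steps=200_000):
    """Forward-Euler integration; returns final infected fractions (I1, I2)."""
    S, I1, I2 = 0.98, 0.01, 0.01  # fractions of the host population
    for _ in range(steps):
        # births replace deaths; recovered hosts return to the susceptible pool
        dS = mu - S * (beta1 * I1 + beta2 * I2) - mu * S + gamma * (I1 + I2)
        dI1 = beta1 * S * I1 - (gamma + mu) * I1
        dI2 = beta2 * S * I2 - (gamma + mu) * I2
        S, I1, I2 = S + dt * dS, I1 + dt * dI1, I2 + dt * dI2
    return I1, I2

# R0_1 ~ 3.3 vs R0_2 = 2.5: strain 1 persists and strain 2 is driven out,
# even though strain 2 is perfectly viable on its own (R0_2 > 1).
I1, I2 = simulate(beta1=0.4, beta2=0.3)
print(I1, I2)
```

The mechanism matches the abstract's claim: the winning strain depresses the susceptible fraction to S = 1/R0_1, which is below the threshold 1/R0_2 that the weaker strain needs in order to grow, so the weaker strain declines to extinction.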
An inability to prevent PKR activation by HCV polyprotein expression was reported by François and co-workers when they analysed the response to IFN of the human cell line UHCV-11 engineered to inducibly express the entire HCV genotype 1a polyprotein. Although we could not exclude the possibility that a certain level of inhibition of PKR by NS5A or E2 occurs at a highly localized level, the resistance to IFN exhibited by some HCV genotypes as a result of viral protein expression cannot be explained solely by inhibition of the negative control of PKR translation. It is possible that during the course of HCV infection, NS5A plays a role in inhibiting PKR locally at the site of HCV protein synthesis. NS5A may, however, participate in the blockade of IFN's antiviral action through another mechanism, such as the reported interaction with the Ras-associated Grb-2 protein. These results confirm the necessity to re-evaluate all types of interactions between any particular HCV protein and its cellular partner(s) in the context of expression of all of the HCV proteins. Consequently, as shown here by confocal microscopy (Figure 2), the HCV proteins are localized within aggregates in the cell cytoplasm which might influence their interaction with PKR, a protein found surrounding the nucleus, in microsomes and in the nucleolus [24, 48].", "score": 29.884755923056627, "rank": 24}, {"document_id": "doc-::chunk-7", "d_text": "These two pathways lead to the activation of the IκB kinase (IKK) α/β/γ complex and IKK-like kinases, e.g. IKKϵ and TANK-binding kinase 1 (TBK1) [53–57], which mediate the activation and nuclear translocation of NFκB and interferon regulatory factor 3 (IRF3) [58, 59]. Inside the nucleus, the transcription factors IRF3, NFκB and activator protein 1 (AP-1) stimulate type I IFN and proinflammatory cytokine gene expression.
Many viruses have evolved the strategy to impede the effector mechanisms induced through these pathways, but viral interference with the significant proximal receptor interactions has not yet been described. NS3/4A appears to mediate proteolysis of a cellular protein within an antiviral signaling pathway upstream of IRF-3. IFN-α (α1), IFN-β and IFN-λ (λ1) genes are extremely sensitive to the inhibitory effect of NS3/4A. There is also an inhibitory effect of NS3/4A on other cytokine/chemokine gene promoters such as IFN-β, CCL5/RANTES, CXCL10/IP-10, CXCL8/IL-8, TNF-α and IFN-α4. Thus, NS3/4A protein is not only an effective antagonist of the IFN-β promoter but also of other cytokine/chemokine promoters. Inhibition of IRF-3 activation requires only NS3/4A protease activity and is abrogated by a specific, peptido-mimetic protease inhibitor, SCH6.
Disruption of TLR3 and RIG-I pathways by HCV NS3/4A
Huh7 hepatoma cells, which are almost unique in their ability to support HCV infection in vitro, are deficient in TLR3 signaling due to a lack of TLR3 expression. The absence of an HCV-permissive cell line with a functional TLR3/TRIF-dependent pathway has made it difficult to determine whether HCV infection is sensed by TLR3. TLR3 is expressed in normal human hepatocytes in situ.", "score": 29.767925639258454, "rank": 25}, {"document_id": "doc-::chunk-10", "d_text": "In contrast, and consistent with our previous report, none of the UV-inactivated viruses had significant anti-proliferative effects.
We previously reported that types I and III (lambda) IFN play a role in the inhibition of CD4+ T cell proliferation in response to HRSV-exposed MDDC. Therefore, we asked whether adding IFN-β (which is the earliest type I IFN released from MDDC following stimulation by the whole panel of viruses), IL-28A (IFN-λ2), or IL-29 (IFN-λ1), individually or combined, during co-culture might affect proliferation in response to SEB.
Figure 4C shows that among six donors, IFN-β (added at 75 IU/ml, a concentration representative of that detectable after exposure of MDDC to virus) resulted in a significant 19% reduction of proliferation in response to SEB (P≤0.05 as compared to mock-treated MDDC). For three of the six donors, mock-treated MDDC were also co-cultured with CD4+ T cells in the presence of IL-28A or IL-29, or IL-28A, IL-29, and IFN-β together. We found that the type III interferons have limited, if any, additional suppressive effects.
As part of the experiments described in the previous section, proliferating CD4+ T cells were analyzed by intracellular cytokine staining to quantify expression of IFN-γ, IL-4, IL-17 and TNF-α. Since there was minimal detection of cells producing IL-17 at any time point (data not shown), we focused on the production of IL-4, IFN-γ and TNF-α. However, the IL-4+ cells that were detected were also IFN-γ+ and TNF-α+ (Figure 5A). The proportions of single IL-4 positive cells were very low and are not shown.
From the experiments shown in Figure 4A and B, we analyzed the time course of cytokine production by proliferating CD4+ T cells, during co-culture in the presence of SEB, with MDDC that had been treated with live or UV-inactivated rHRSV, IAV, or mock treatment.", "score": 29.607881945115075, "rank": 26}, {"document_id": "doc-::chunk-3", "d_text": "It’s not just which proteins are there; it’s also about which proteins are most abundant and events that can trigger differences in protein abundance.
- Protein binding = when a protein attaches itself to another protein (or some other organic chemical)
- Protein interactions = any time two proteins bind to each other and cause something else to happen (which is all the time)
- Interactome = a map of protein interactions
Tl;dr: Human cells have a protein called IFI-16 that can detect viral DNA and tell immune cells to kill the infected cell. 
But viruses also have a protein that can render IFI-16 mute.", "score": 29.5755406904893, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "“But we don’t really have a great way at this point to infer at the population level which of these forms of competition, or which of these interactions, are important.”\nFor example, there are three major strains of influenza that have been circulating in human populations since the late 1970’s: H3N2, H1N1, and B. From one season to the next, one or the other might be more prevalent. Researchers know that, say, if it’s a big H3N2 season, then the H1N1 and B strains tend to be less widespread, or vice versa.\nThese viruses clearly compete with each other, but no one has been able to describe exactly how strong that competition is. On top of that, there’s some evidence other respiratory viruses like rhinoviruses (which cause common colds) also compete with various strains of influenza and complicate the picture.\n“It could be something as radical as changing the food distribution for a given population, or trying to alter air currents or improve green spaces in given areas that could significantly reduce the predicted likely outcome for given areas.” — Jack Gilbert\nTo get a handle on how these bugs influence each other, Cobey and Gilbert are starting with statistical models they know work well for single pathogens and adjusting them to accurately simulate interactions with other bugs. They’re also including climate variables like absolute humidity, and demographic statistics like birth and death rates in a given population to see how they contribute as well. These simulations will tell them which of these factors are most important, which can then be factored into predictions for future outbreaks.\n“It’s not just how they will behave individually, but how they’ll behave in the context of each other,” Gilbert said. 
“Instead of predicting one disease we’re predicting all of the diseases in parallel.”
The tools Cobey and Gilbert are developing could also help researchers understand the potential impact of emerging viruses, like the new coronavirus discovered in the Middle East, or pockets of old diseases like measles and whooping cough that have spread in areas of the U.S. where people refuse to get vaccinated.
But Gilbert, who studies how microbial environments grow and evolve inside our bodies, our homes and the places we work and receive medical care, said that understanding exactly how pathogens interact with each other could lead to more creative solutions to stopping outbreaks.", "score": 28.99709530422639, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Graduate School of Biomedical Sciences, Dept. of Molecular Genetics & Microbiology, Program in Immunology & Virology
RNA Interference; Adenoviridae; MicroRNAs; RNA, Small Interfering; RNA, Viral; Academic Dissertations; Dissertations, UMMS
In the complex relationships of mammalian viruses with their hosts, it is currently unclear what role RNA silencing pathways play during the course of infection. RNA silencing-based immunity is the cornerstone of plant and invertebrate defense against viral pathogens, and examples of host defense mechanisms and numerous viral counterdefense mechanisms exist. Recent studies indicate that RNA silencing might also play an active role in the context of a mammalian virus infection. We show here that a mammalian virus, human adenovirus, interacts with RNA silencing pathways during infection, as the virus produces microRNAs (miRNAs) and regulates the expression of Dicer, a key component of RNA silencing mechanisms.
Our work demonstrates that adenovirus encodes two miRNAs within the loci of the virus-associated RNA I (VA RNA I). We find that one of these miRNAs, miR-VA “g”, enters into a functional, Argonaute-2 (Ago-2)-containing silencing complex during infection. 
Currently, the cellular or viral target genes for these miRNAs remain unidentified. Inhibition of the function of the miRNAs during infection did not affect viral growth in a highly cytopathic cell culture model. However, studies from other viruses implicate viral miRNAs in the establishment of latent or chronic infections.\nAdditionally, we find that adenovirus infection leads to the reduced expression of Dicer. This downregulation does not appear to be dependent on the presence of VA RNA or its associated miRNAs. Rather, Dicer levels appear to inversely correlate with the level of viral replication, indicating that another viral gene product is responsible for this activity. Misregulation of Dicer expression does not appear to influence viral growth in a cell culture model of infection, and also does not lead to gross changes in the pool of cellular miRNAs. Taken together, our results demonstrate that RNA silencing pathways are active participants in the process of infection with human adenovirus.", "score": 28.929117360052484, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Viruses are able to interfere with the host cell processes that our bodies use to replicate cells, and protein synthesis is often one of their targets. For the first time, researchers at the Universities of Cambridge and Oxford have witnessed virus-induced “frameshifting” in action and have been able to identify the crucial role of particular elements.\nThe scientists have revealed the workings of the process known as ‘ribosomal frameshifting’ that forces a mis-reading of the genetic code during protein synthesis. The correct expression of most genes depends upon accurate translation of the ‘frame’ of the genetic code, which has a three nucleotide periodicity. Viruses such as HIV and SARS bring into the cell a special signal that forces the ribosome to back up by one nucleotide, pushing it into another ‘frame’ and allowing synthesis of different viral proteins. 
These are exploited by viruses and help them to survive and multiply, according to background information in the article.
The British researchers successfully imaged frameshifting in action and for the first time observed how a virus-encoded element called an RNA pseudoknot interferes with the translation of the genetic code to allow viruses like HIV and SARS to express their own enzymes of replication.
Dr Ian Brierley, the project leader at the University of Cambridge, said: “The images we obtained give us an insight into how a virus-encoded RNA pseudoknot can induce frameshifting and may be useful in designing new ways to combat virus pathogens that use this process.” Professor Julia Goodfellow, Chief Executive of the Biotechnology and Biological Sciences Research Council, said: “The work to explore fundamental biology today is laying the foundation for potential medical applications over the next twenty years.”
MEDICA.de; Source: Biotechnology and Biological Sciences Research Council (BBSRC)", "score": 28.85441329908522, "rank": 30}, {"document_id": "doc-::chunk-12", "d_text": "A simple model of recombination between different RNA viruses, with possible constraints, is presented in Fig. 1.
In a sense, recombinogenic viruses are all alike in that they successfully pass through each stage outlined in the model. Every non-recombining virus, on the other hand, is different in its own way, since constraints that block recombination could act by breaking any link in the chain and could involve not just viral genetic factors, but host and ecological factors particular to that virus. The first prerequisite for successful recombination (Fig. 1) is that an individual host must be infected by different virus strains. 
[This is not quite true, of course, since (1) recombination sometimes involves host RNA, (2) recombination could occur between viruses that have diverged within a clonally infected individual, and (3) evolutionarily invisible recombination could occur between identical RNA molecules.] Host coinfection might never occur with some viruses simply because their divergent forms do not usually overlap in space and time. Multivalent live-attenuated vaccines can be seen as potential risks in this context since they could effectively release some viruses from this constraint. Host factors could act at this stage too if, for instance, an immune response reduces the window for simultaneous infection by quickly clearing a virus, or prevents superinfection altogether by blocking secondary infections.
Having successfully coinfected a single host, divergent viruses must next coinfect a single cell if recombination is to proceed. This step could be blocked by host factors, either by an immune response that keeps virus numbers low enough to prevent multiple infection of any individual cell, or by host cell genetic factors that block entry of more than one virus particle into a cell (Danis et al., 1993). Viral factors, interestingly, might also enforce significant constraints at this stage of the model. Recent evidence demonstrates that intracellular competition can be costly to viruses that infect the same host cell (Turner & Chao, 1998).", "score": 28.33109973025827, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Slicing and dicing viruses: antiviral RNA interference in mammals
Authors: Pierre V Maillard, Annemarthe G van der Veen, Enzo Poirier, Caetano Reis e Sousa
To protect against the harmful consequences of viral infections, organisms are equipped with sophisticated antiviral mechanisms, including cell-intrinsic means to restrict viral replication and propagation. 
Plant and invertebrate cells utilise mostly RNA interference (RNAi), an RNA-based mechanism, for cell-intrinsic immunity to viruses, while vertebrates rely on the protein-based interferon (IFN)-driven innate immune system for the same purpose. The RNAi machinery is conserved in vertebrate cells, yet whether antiviral RNAi is still active in mammals and functionally relevant to mammalian antiviral defence is intensely debated. Here, we discuss cellular and viral factors that impact on antiviral RNAi and the contexts in which this system might be at play in mammalian resistance to viral infection.
Journal: EMBO Journal
Issue number: 8", "score": 28.233903116408513, "rank": 32}, {"document_id": "doc-::chunk-2", "d_text": "Tetherin, a cellular protein blocking viral release, is also differently counteracted by HIV-1 and HIV-2. In HIV-1, the anti-tetherin activity is conferred by Vpu, whereas in HIV-2, the intracytoplasmic portion of Env mediates this effect [28,29]. Another protein, the recently identified RNA-associated early-stage anti-viral factor (REAF), inhibits both HIV-1 and HIV-2 just after cell entry.
The restriction factor SAMHD1 blocks HIV replication by degrading intracellular dNTPs and HIV-1 RNA [31-37]. SAMHD1 inhibits reverse transcription in non-dividing cells, such as monocytes, macrophages, dendritic cells and non-activated CD4+ lymphocytes [31-36]. In dividing cells such as activated CD4+ T cells, SAMHD1 is phosphorylated and does not restrict HIV-1 [38-40]. HIV-2 and some SIV strains encode the accessory protein Vpx, which degrades SAMHD1 and allows escape from this restriction, whereas HIV-1 lacks a Vpx-like activity. The anti-SAMHD1 activity is conserved in Vpx alleles isolated from viremic or aviremic HIV-2-infected individuals. It has been proposed that HIV-2, through the SAMHD1-degrading action of Vpx, may trigger a more efficient immune response by productively infecting dendritic cells (DCs). 
HIV-1, in sparing SAMHD1, may avoid productive infection of DCs and thus limit the resulting protective type-I IFN response mounted by these cells [35,43-45]. In addition to degrading SAMHD1, Vpx may also inhibit the function of IRF-5. Of note, HIV-1 and HIV-2 differentially interact with other target cells. The kinetics of HIV-1 and HIV-2 replication are different in human primary macrophage cultures. Exposure of plasmacytoid DCs (pDCs) to HIV-1 and HIV-2 differentially matures the cells into IFN-producing cells or antigen-presenting cells.
The exact role of Vpx during HIV-2 replication is not fully understood.", "score": 27.309521619721764, "rank": 33}, {"document_id": "doc-::chunk-3", "d_text": "These viruses therefore have the ability to carry on viral replication and production of new viruses as they normally would in the absence of IFN. Viruses find their way around the IFN response through inhibition of interferon signaling, inhibition of interferon production, and blocking of the functions of IFN-induced proteins.
It is not unusual to find viruses encoding multiple mechanisms that allow them to elude the IFN response at many different levels. In their study of JEV, Lin and coworkers found that IFN-alpha's inability to block JEV suggests that JEV can block IFN-alpha signaling, which in turn would prevent IFN from triggering STAT1, STAT2, ISGF3, and IRF-9 signaling. DEN-2 also significantly reduces interferon's ability to activate JAK-STAT. Some other viral gene products that have been found to have an effect on IFN signaling include EBNA-2, Polyomavirus large T antigen, EBV EBNA1, HPV E7, HCMV, and HHV8. 
Several poxviruses encode a soluble IFN receptor homologue that acts as a decoy to inhibit the biological activity of IFN, namely the binding of IFN to its cognate receptors on the cell surface to initiate a signaling cascade, known as the Janus kinase (JAK)-signal transducer and activator of transcription (STAT) pathway. For example, a group of researchers found that the B18R protein, which acts as a type 1 IFN receptor and is produced by the vaccinia virus, inhibited IFN's ability to initiate the phosphorylation of JAK1, which reduced the antiviral effect of IFN.
Some viruses can encode proteins that bind to dsRNA. In one study, researchers infected human U cells with reovirus and found, using a Western blot, that the reovirus sigma3 protein does bind to dsRNA. Similarly, another study in which researchers infected mouse L cells with vaccinia virus found that the E3L gene encodes the p25 protein, which binds to dsRNA. Because the double-stranded RNA (dsRNA) is sequestered by these proteins, it cannot activate IFN-induced PKR and 2'-5' oligoadenylate synthetase, making IFN ineffective.", "score": 26.9697449642274, "rank": 34}, {"document_id": "doc-::chunk-1", "d_text": "remain a public health concern worldwide due to their emerging and re-emerging nature. Due to the lack of therapeutics against the majority of viruses, there is always a need to develop more effective antiviral agents. Lately, RNA interference (RNAi) has emerged as a potential therapeutic tool for targeting human viruses. RNAi, or gene silencing, is a process by which sequence-specific degradation of mRNA takes place. In this process, long dsRNA precursors are chopped into shorter (19–23 residue) units by a ribonuclease enzyme called Dicer. These short interfering RNAs (siRNAs) possess two-nucleotide 3′ overhangs at their termini. 
After that, a ribonucleoprotein machine called the RNA-induced silencing complex (RISC) incorporates one of the siRNA strands and cleaves the complementary target mRNA using ATP.
Researchers have extensively used the RNAi process to target a number of viral genes and suppress their expression [5, 6]. siRNAs targeting different regions of the HIV genome in infected cells showed promising results in inhibiting viral replication [7, 8]. Also, siRNAs targeting the influenza virus nucleocapsid and RNA transcriptase genes restricted its transcription and replication [9, 10]. Similarly, siRNAs directed against the Hepatitis B virus surface regions prevented virus production. siRNAs employed against SARS-CoV envelope and other genes were able to effectively block their expression. In another study, siRNAs targeting Dengue virus genes were able to impede viral infection. In addition, siRNAs have been shown to curb many other viruses such as Human papillomavirus (HPV), West Nile virus (WNV), etc.
RNAi methodology has many features that make it desirable for use in antiviral agents. It can target diverse types of viral genomes, whether double- or single-stranded DNA or RNA, which makes it a suitable candidate for broad-spectrum antiviral therapy. Also, an siRNA targets only a short stretch of the target mRNA rather than a functional domain of a protein; therefore, even a small viral genome offers many targetable regions. Further, many siRNAs may be expressed simultaneously to increase inhibition in a coordinated manner. They can also be harnessed to degrade mRNAs that generate disease-causing proteins. 
siRNA-based drugs have also entered clinical trials for various human diseases, e.g.", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-2", "d_text": "Importantly, the increased neurovirulence of some virus strains correlates with their increased resistance to IFN-I-mediated antiviral effects in nonneuronal cells (14-16), implying that such strains might be able to replicate even in the presence of IFN-I.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-7", "d_text": "Recently, it was reported that bacterial flagellin promotes viral infection, in an in vitro model using lentiviral pseudoviruses encoding the glycoproteins of influenza, Measles, Ebola, Lassa, and Vesicular stomatitis virus in pulmonary epithelial cell culture, through TLR5 and NF-κB activation (Benedikz et al., 2019). This finding is particularly exciting since it was previously reported that flagellin had a protective effect against RV infection in mice (Zhang et al., 2014). The dual effect of flagellin could be due to the differences in the microenvironment and models used to study the interaction between the viruses and bacteria (Figure 1C). These studies exemplify how much is unknown in the interplay of bacteria and viruses.
Viruses as Part of the Human Microbiome
The intestine contains other types of organisms besides bacteria that can influence mucosal and systemic immune responses, such as viruses (Minot et al., 2012; Kernbauer et al., 2014; Norman et al., 2015). To interpret the role of the microbiota in viral infections, we must also consider the impact that the virome may play in this interaction. A recent study estimated that in healthy humans, 45% of mammalian viruses are part of the virome without causing a clinical outcome (Rascovan et al., 2016; Olival et al., 2017). However, similar to bacteria, resident viruses modulate immune responses (Freer et al., 2018).
The enteric human virome has also been linked to disease. 
For example, enteric eukaryotic viruses can be associated with gastroenteritis, enteritis, or colitis (Norman et al., 2015). Bacteriophages perturb the bacterial community and interact with the host immune system, and an antagonistic relationship between bacteria and bacteriophages during inflammatory bowel disease has been reported (Duerkop and Hooper, 2013; Virgin, 2015). Also, bacteriophages contribute to the spread of antibiotic resistance genes among bacteria; they form a reservoir of these genes within the microbiome (Muniesa et al., 2013; Quirós et al., 2014).", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Viruses 13 (11) - [2021-10-29; online 2021-10-29]
RNA interference (RNAi)-mediated antiviral immunity is believed to be the primary defense against viral infection in mosquitoes. The production of virus-specific small RNA has been demonstrated in mosquitoes and mosquito-derived cell lines for viruses in all of the major arbovirus families. However, many if not all mosquitoes are infected with a group of viruses known as insect-specific viruses (ISVs), and little is known about the mosquito immune response to this group of viruses. Therefore, in this study, we sequenced small RNA from an Aedes albopictus-derived cell line infected with either Lammi virus (LamV) or Hanko virus (HakV). These viruses belong to two distinct phylogenetic groups of insect-specific flaviviruses (ISFVs). The results revealed that both viruses elicited a strong virus-derived small interfering RNA (vsiRNA) response that increased over time and that targeted the whole viral genome, with a few predominant hotspots observed. Furthermore, only the LamV-infected cells produced virus-derived Piwi-like RNAs (vpiRNAs); however, they were mainly derived from the antisense genome and did not show the typical ping-pong signatures. 
HakV, which is more distantly related to the dual-host flaviviruses than LamV, may lack certain unknown sequence elements or structures required for vpiRNA production. Our findings increase the understanding of mosquito innate immunity and ISFVs' effects on their host.", "score": 26.87940636681297, "rank": 38}, {"document_id": "doc-::chunk-10", "d_text": "Recently, recombination has also been detected in other RNA viruses for which multivalent vaccines are in use or in trials (Holmes et al., 1999; Suzuki et al., 1998; Worobey et al., 1999). We think the potential for recombination to produce new pathogenic hybrid strains, and the possible impact of such escape recombination, needs to be carefully considered whenever multivalent live-attenuated vaccines are used to control RNA viruses. Assumptions that recombination either does not happen or is unimportant in RNA viruses have a history of being proved wrong.
In addition to the evidence favouring a role for genetic exchange in eliminating deleterious alleles, many recombinant RNA virus strains provide ample indication that recombination can generate beneficial new variation. In some viruses this new variation is achieved by borrowing genetic material from their hosts. One intriguing example of this is bovine viral diarrhoea virus (BVDV), a pestivirus that recombines with host cellular protein-coding RNA. As a result of virus-host recombinations, cytopathogenic BVDVs can develop from non-cytopathogenic ones and cause a lethal syndrome, mucosal disease, in the hosts (Meyers et al., 1989). 
Influenza A virus has also been observed to recombine with cellular RNA, resulting in increased pathogenicity for the hybrid viruses (Khatchikian et al., 1989). Recombination between virus and host genetic material evidently occurs in plant viruses as well, as illustrated by a luteovirus isolate with 5′-terminal sequence derived from a chloroplast exon (Mayo & Jolly, 1991) and closteroviruses which have acquired host cellular protein-coding genes (Dolja et al., 1994) which are nonessential for replication and virion production (Peremyslov et al., 1998).
A link between recombination and increased pathogenicity has also been revealed in cases that do not involve recombination with host genes.", "score": 26.84140840983565, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "In some cases of RNA virus recombination, the donor sequence neatly replaces a homologous region of the acceptor sequence leaving its structure unchanged. This has been classified as 'homologous recombination' (Lai, 1992) since it involves not just homologous parental RNAs, but also crossovers at homologous sites. However, this is not always the case; hybrid sequences resulting from aberrant homologous recombination (when similar viruses exchange sequence without maintaining strict alignment) and nonhomologous recombination (recombination between unrelated RNA sequences) are also commonly observed (Lai, 1992).
Despite producing distinct kinds of hybrid RNAs, as well as defective interfering (DI) RNAs (Lazzarini et al., 1981), the different types of recombination appear to be variations on a common theme. 
To date, almost all studies on the mechanisms of recombination in RNA viruses have supported a copy-choice model, originally proposed in the case of poliovirus (Cooper et al., 1974) and now well studied in a number of experimental systems (Duggal et al., 1997; Jarvis & Kirkegaard, 1992; Kirkegaard & Baltimore, 1986; Nagy & Bujarski, 1995, 1998; Nagy et al., 1998; Simon & Nagy, 1996; for a recent review see Nagy & Simon, 1997). Under this model, hybrid RNAs are formed when the viral RNA-dependent RNA polymerase complex switches, mid-replication, from one RNA molecule to another. This results in homologous recombination if the replicase continues to copy the new strand precisely where it left the old one, and aberrant or nonhomologous recombination if it does not. This template-switching mechanism is fundamentally different from the enzyme-driven breakage-rejoining mechanism of homologous recombination in DNA, not least because it invokes replication as a necessary component of the process. Finally, Chetverin et al.", "score": 26.77443334445683, "rank": 40}, {"document_id": "doc-::chunk-2", "d_text": "A better and more objective way to determine cytopathic effects is required. Cells infected by enveloped viruses express viral proteins on their plasma membrane, which are used by viruses to mediate fusion with the host cell. When these viral proteins bind to receptors on the surface of neighboring cells, cell–cell fusion takes place, resulting in syncytia formation. Since the virus occupies cellular factors that are otherwise utilized by the cell, its replication can alter the host cell's basic functions and even destroy it.", "score": 26.638759172215998, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "The paramyxovirus Simian Virus 5 (SV5) is a poor inducer of interferon (IFN) secretion in all cell types tested to date, including primary epithelial cells and primary human myeloid dendritic cells. 
IFN secretion and CD80 expression, and there is a corresponding increase in the number of infected cells. Similar results were observed with inhibitors of cellular autophagy pathways, suggesting that the SV5 activation of pDC requires access to the cytoplasm and autophagic sampling of cytoplasmic contents. These results have implications for control of SV5 infections in vivo as well as for development of SV5 as a vaccine vector. Introduction The parainfluenza virus Simian Virus 5 (SV5) is a poor activator of antiviral responses in human cells (Choppin, 1964; Didcock et al., 1999b; He et al., 2002; Wansley et al., 2005). SV5 encodes the V protein as an inhibitor of host cell antiviral responses, which contrasts with many other paramyxoviruses such as Sendai virus (SeV) and measles virus (MeV), which encode both a V protein and a family of C proteins that counteract innate responses (Lamb and Parks, 2007). The poor activation of host cell responses by SV5 infection is thought to be largely due to two main factors: the actions of the viral V protein and control of viral RNA synthesis. A major function of the SV5 V protein is the inhibition of IFN signaling, which occurs through V-mediated targeting of signal transducer and activator of transcription 1 (STAT1) for ubiquitylation and degradation (Didcock et al., 1999a). The SV5 V protein also blocks activation of the IFN-beta promoter during virus infection or following transfection of dsRNA (Andrejeva et al., 2004; He et al., 2002). The paramyxovirus V protein inhibits IFN-beta induction by targeting the IFN-inducible RNA helicase encoded by the melanoma differentiation-associated gene 5 (mda-5; Andrejeva et al., 2004).", "score": 25.765710465195323, "rank": 42}, {"document_id": "doc-::chunk-1", "d_text": "Alpha/beta interferons (IFN-α/β) are important components of cellular antiviral responses. 
The effects of the interferons are usually mediated through interferon-stimulated gene (ISG) products, which include proteins with defined activities, such as 2′,5′-oligoadenylate synthetase, as well as numerous others with unknown functions (reviewed in reference 39). Most studies on the antiviral actions of IFN-α/β have focused on the inhibition of RNA virus replication; the effects on DNA virus replication are much less well understood. Herpes simplex virus type 1 (HSV-1) replication is inhibited by interferon pretreatment of cells through a block at the level of immediate-early (IE) gene transcription and other effects on protein synthesis that are manifested at later times in infected cells (2, 25, 40, 32). In mice, IFN-α/β are a crucial host defense, since attenuated HSV-1 mutants are much more virulent in animals lacking interferon receptors (21, 22).
The signaling pathway for IFN-α/β is well characterized (20) (Fig. 1). Binding of interferons to their cell receptor results in the phosphorylation of Janus kinases (JAKs) JAK1 and Tyk2 and the subsequent phosphorylation of signal transducers and activators of transcription (STATs) 1 and 2. The latter proteins are translocated to the nucleus, where they are recruited by interferon regulatory factor (IRF) 9 (IRF-9; also known as p48) to form a complex, referred to as ISG factor 3 (ISGF3), on interferon-stimulated regulatory elements (ISREs) within ISG promoters. 
The acetylases p300 and CREB-binding protein (CBP) associate with STATs 1 and 2 and are most likely important in mediating transcriptional activation.
Mechanisms of ISG induction. The pathway activated by IFN-α/β is shown on the left side of the diagram, leading to formation of the complex known as ISGF3 (phosphorylated STATs 1 and 2 plus IRF-9).", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-7", "d_text": "A special hormone called interferon is produced by the body when viruses are present, and this stops the viruses from reproducing by killing the infected cell and its close neighbours. Inside cells, there are enzymes that destroy the RNA of viruses. This is called RNA interference. Some blood cells engulf and destroy other virus-infected cells.
Adaptive immunity of animals
Specific immunity to viruses develops over time and white blood cells called lymphocytes play a central role. Lymphocytes retain a "memory" of virus infections and produce many special molecules called antibodies. These antibodies attach to viruses and stop the virus from infecting cells. Antibodies are highly selective and attack only one type of virus. The body makes many different antibodies, especially during the initial infection. However, after the infection subsides, some antibodies remain and continue to be produced, often giving the host life-long immunity to the virus.
Plants have elaborate and effective defence mechanisms against viruses. One of the most effective is the presence of so-called resistance (R) genes. Each R gene confers resistance to a particular virus by triggering localised areas of cell death around the infected cell, which can often be seen with the unaided eye as large spots. This stops the infection from spreading. RNA interference is also an effective defence in plants. 
When they are infected, plants often produce natural disinfectants which kill viruses, such as salicylic acid, nitric oxide and reactive oxygen molecules.
Resistance to bacteriophages
The major way bacteria defend themselves from bacteriophages is by producing enzymes which destroy foreign DNA. These enzymes, called restriction endonucleases, cut up the viral DNA that bacteriophages inject into bacterial cells.
Vaccination is a way of preventing diseases caused by viruses. Vaccines simulate a natural infection and its associated immune response, but do not cause the disease. Their use has resulted in a dramatic decline in illness and death caused by infections such as polio, measles, mumps and rubella. Vaccines are available to prevent over thirteen viral infections of humans and more are used to prevent viral infections of animals. Vaccines may consist of either live or killed viruses. Live vaccines contain weakened forms of the virus, but these vaccines can be dangerous when given to people with weak immunity. In these people, the weakened virus can cause the original disease.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-9", "d_text": "A third risk might be that the del-L virus could reacquire the L gene from the helper cell line. As mentioned previously, there is no recorded case of an NNSRV picking up genetic material from a host mRNA, making it unlikely that it would happen in this case. In any event, the entire L gene transcription unit is removed from the del-L construct (including the gene transcription start and end sequences), while the helper cell contains only the L protein ORF, so that even if recombination were to occur between viral genome and cell mRNA, vital parts of the virus would be missing.
While we have not carried out this study, it is likely that the PPRV-del-L construct could be replicated by another morbillivirus, though not very well. 
A related construct based on two defective measles virus genomes has been propagated, though the resultant virus did not grow well, presumably because the minimum infectious unit needed to contain one of each genome. It was only maintained because both genomes were defective and had to support each other in trans. Co-infection of a cell by both PPRV-del-L and another morbillivirus (e.g. CDV) would lead to replication of the PPRV-del-L genome and transcription of its genes. However, this is essentially the same as the CDV replicating in the presence of a large deletion DI (defective interfering particle). Replication of the CDV would be decreased, and the “DI” still could not replicate independently. PPRV as such would not be recreated. Such a mixed infection could conceivably give rise to a sort of two-segmented pathogenic virus containing both the full genome of the non-contained morbillivirus and that of the PPRV-del-L replicon, but such a mixture would be under the usual selective pressure to lose the "DI" component.
Our results in morbilliviruses are completely in line with work done with another NNSRV, vesicular stomatitis virus (VSV), which has been developed in several laboratories as a gene delivery tool by removing the gene for the viral G protein; the VSV G protein functions both for viral attachment and fusion with the target cell. The resultant cut-down viral genome, with or without the addition of other, heterologous, coding sequences (e.g.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-1", "d_text": "These three players, fungus, bacterium, and virus, hold an animated conversation. Transcription from the viral sequences is influenced by many factors, such as the fungal reproduction and its development, the medium, and the stage of fungal development. To conveniently study these interactions, the authors used antibacterial and antiviral drugs to create strains of fungus that are free of both bacteria and viruses. 
The fungi could then be reinfected at will. One interesting result suggests that bacterial colonization is reduced in the absence of the viruses. If so, the viruses would influence the degree of pathogenicity of the fungus, something that is in the process of being ascertained directly (personal communication by the last author of the paper). But there is more: The presence of the viruses decreases the fungus' asexual reproduction (sporangiospore formation). On the other hand, sexual sporulation (zygospore formation) is nearly abolished in the absence of the bacteria and the viruses. I can't say that I fully understand what's going on here but interesting it is.\nThe data suggest that the metabolism of the fungus is \"rewired\" by the growth and presence of the symbionts, possibly to enhance its metabolic plasticity. The authors ask several good questions such as which symbiosis arose first? Did one symbiont, bacterial or viral, facilitate the establishment of the other? Did either alone or together influence the evolution of the fungus? Surely, we will hear more about this ménage à trois.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-5", "d_text": "Most virus infections eventually result in the death of the host cell. The causes of death include cell lysis (bursting), alterations to the cell's surface membrane and apoptosis (cell \"suicide\"). Often cell death is caused by cessation of its normal activity due to proteins produced by the virus, not all of which are components of the virus particle.\nSome viruses cause no apparent changes to the infected cell. Cells in which the virus is latent and inactive show few signs of infection and often function normally. This causes persistent infections and the virus is often dormant for many months or years. 
This is often the case with herpes viruses.
Viruses such as Epstein-Barr virus often cause cells to proliferate without causing malignancy, but viruses such as papillomaviruses are an established cause of cancer. When a cell's DNA is damaged by a virus and the cell cannot repair it, this often triggers apoptosis. One of the results of apoptosis is destruction of the damaged DNA by the cell itself. Some viruses have mechanisms to limit apoptosis so that the host cell does not die before progeny viruses have been produced. HIV, for example, does this.
Viruses and diseases
- For more examples of diseases caused by viruses see Wikipedia's List of infectious diseases
Human diseases caused by viruses include the cold, the flu, chickenpox and cold sores. Serious diseases such as Ebola, AIDS and influenza are also caused by viruses. Many viruses cause little or no disease and are said to be "benign". The more harmful viruses are described as virulent. Viruses cause different diseases depending on the types of cell that they infect. Some viruses can cause life-long or chronic infections where the viruses continue to reproduce in the body despite the host's defence mechanisms. This is common in hepatitis B virus and hepatitis C virus infections. People chronically infected with a virus are known as carriers. They serve as important reservoirs of the virus. If there is a high proportion of carriers in a given population, a disease is said to be endemic.
There are many ways in which viruses spread from host to host but each species of virus uses only one or two. Many viruses that infect plants are carried by organisms; such organisms are called vectors. 
Some viruses that infect animals and humans are also spread by vectors, usually blood-sucking insects.", "score": 25.262650423796195, "rank": 47}, {"document_id": "doc-::chunk-2", "d_text": "This inhibits viral replication and normal cell ribosome function, which may lead to killing both the virus and susceptible host cells. Various RNA species within the cell are degraded by activated RNase L, the product of another interferon-induced gene, thereby further reducing protein synthesis.
Furthermore, interferon leads to upregulation of MHC I and therefore to increased presentation of viral peptides to cytotoxic CD8 T cells, as well as to a change in the proteasome (exchange of some beta subunits by β1i, β2i and β5i, then known as the immunoproteasome) which leads to increased production of MHC I-compatible peptides.
Interferon can cause increased p53 activity in virus-infected cells. It acts as an inducer and causes increased production of the p53 gene product. This promotes apoptosis, limiting the ability of the virus to spread. Increased levels of transcription are observed even in cells which are not infected, but only infected cells show increased apoptosis. This increased transcription may serve to prepare susceptible cells so they can respond quickly in the case of infection. When p53 is induced by viral presence, it behaves differently than it usually does. Some p53 target genes are expressed under viral load, but others, especially those that respond to DNA damage, are not. One of the genes that is not activated is p21, which can promote cell survival. Leaving this gene inactive would help promote the apoptotic effect. Interferon enhances the apoptotic effects of p53, but it is not strictly required. Normal cells exhibit a stronger apoptotic response than cells without p53.
Additionally, interferon has been shown to have a therapeutic effect against certain cancers. It is probable that one mechanism of this effect is p53 induction. 
This could be useful clinically: interferons could supplement or replace chemotherapy drugs that activate p53 but also cause unwanted side effects, some of which can be serious, severe and permanent.
Virus resistance to interferons
In a study of the blocking of interferon (IFN) by Japanese encephalitis virus (JEV), a group of researchers tested human recombinant IFN-alpha against the viruses JEV, DEN-2 and PL406, and found that some viruses have evolved methods of avoiding the IFN-alpha/beta response.", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "A potentially better way to make flu virus vaccines. A team of Princeton University scientists may have found an easier way to make a vaccine against the flu virus. Though theoretical, the work points to the critical importance of what has been a poorly appreciated aspect of the interaction between a virus and the body's naturally produced protective proteins, called antibodies, that fight infection. By manipulating this multi-stage interactive process, known as antibody interference, to advantage, the scientists believe it could be possible to design better vaccines than exist today. The findings are described in the May 11 online edition of the Proceedings of the National Academy of Sciences. "We've proposed that antibody interference plays a major role in determining the effectiveness of the antibody response to a viral infection," said Ned Wingreen, a professor of molecular biology and a member of the Lewis-Sigler Institute for Integrative Genomics. "And we believe that in order to get a more effective vaccine, people are going to want one that minimizes this interference." Other authors on the paper include Simon Levin, the George M. Moffett Professor of Biology, and Wilfred Ndifon, a graduate student in Levin's laboratory and first author on the paper. 
When a virus like influenza attacks a human, the body mounts a defense, generating antibodies custom-designed to attach themselves to the virus, blocking it from action and effectively neutralizing its harmful effects on the body. Analyzing data about viral structure, antibody types and the reactions between them produced by virology laboratories across the country, Ndifon noticed a perplexing pattern. He found that antibodies were often better at protecting against a somewhat different virus, a close cousin, than against the virus that spurred their creation. This is known as cross-reactivity. A closer look, using techniques that combine biophysics and computing, suggested that a phenomenon known as antibody interference was at play. It arises when a virus prompts the creation of multiple types of antibodies. During a viral attack, what then transpires is that antibodies vie with one another to defend the body and sometimes crowd each other out as they try to attach themselves to the surface of the virus.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-2", "d_text": "Its presence is associated with the production of malformed mimivirids, and causes a 70% reduction in production of infectious Mimivirus. The conclusion of the researchers is unprecedented but almost inevitable - Sputnik is a virus that parasitises another virus.
Of course, it's not so simple and straightforward. Perhaps one could argue that Sputnik is not so much attacking the Mimivirus directly as hijacking the replicative framework produced by the amoebozoan host that the Mimivirus is inducing. But then, one could make similar (and perhaps similarly facetious) arguments about almost any case of hyperparasitism involving more unequivocal living organisms, or many other trophic relationships - if a lion kills a zebra then eats the zebra's stomach and intestines, is it eating the zebra or the grass ingested by the zebra? 
If there is one rule in biology, it is that life does not take kindly to clear-cut definitions.
More has been written on the Sputnik virus at Living the Scientific Life.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "We're currently watching—often in horror—what happens as a virus and its hosts engage in an evolutionary arms race. Measures to limit infectivity and enhance immunity are selecting for viral strains that spread more readily and avoid at least some of the immune response. All of that is easily explained through evolutionary theory and has been modeled mathematically.
But not all evolutionary interactions are so neat and binary. Thursday's edition of Science included a description of a three-way fight between butterflies, the wasps that parasitize them, and the viruses that can infect both species. To call the interactions that have ensued “complicated” is a significant understatement.
Meet the combatants
One of the groups involved is the Lepidoptera, the butterflies and moths. They are seemingly the victims in this story because, like any other species, they can be infected by viruses. Many of these viral infections can be fatal, although some kill the animal quickly, and others take their time. 
Since they often strike during the larval/caterpillar stages, the viruses need other hosts to transfer them to new victims.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-180", "d_text": "In this regard, the measles virus, responsible for epidemic measles, has a unique interface with autophagy as the virus can induce multiple rounds of autophagy in the course of infection. These successive waves of autophagy result from distinct molecular pathways and seem associated with anti- and/or pro-measles virus consequences. In this review, we describe what the autophagy–measles virus interplay has taught us about both the biology of the virus and the mechanistic orchestration of autophagy.
Prangishvili, P.; Basta, P.; Garrett, Roger Antony
In this article we present our current knowledge about double-stranded DNA (dsDNA) viruses infecting hyperthermophilic Crenarchaeota, the organisms which predominate in hot terrestrial springs with temperatures over 80 °C. These viruses exhibit extraordinary diversity of morphotypes, most of which have...
Wang, Pu; González, Marta; Barabási, Albert-László.
Standard operating systems and Bluetooth technology will be a trend for future cell phone features. These will enable cell phone viruses to spread either through SMS or by sending Bluetooth requests when cell phones are physically close enough. The difference in spreading methods gives these two types of viruses different epidemiological characteristics. SMS viruses' spread is mainly based on people's social connections, whereas the spreading of Bluetooth viruses is affected by people's mobility patterns and population distribution. 
Using cell phone data recording calls, SMS and locations of more than 6 million users, we study the spread of SMS and Bluetooth viruses and characterize how the social network and the mobility of mobile phone users affect such spreading processes.
Müller, Viktor; Marée, Athanasius F.M.; Boer, R.J.", "score": 24.234289534804766, "rank": 52}, {"document_id": "doc-::chunk-2", "d_text": "When these viral proteins bind to receptors on the surface of neighboring cells, cell–cell fusion takes place, leading to syncytia formation. Since the virus is occupying cellular factors that are otherwise utilized by the cell, its replication can alter the host cell's primary features or even destroy it.", "score": 23.030255035772623, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "Some biological implications of the quasispecies nature of RNA viruses.
1. Viral genomes are collections of mutant genomes termed mutant spectra or mutant clouds. Mutant clouds may include phenotypic variants adequate to respond to selective constraints (antibody- and cytotoxic T cell-escape mutants, inhibitor- or mutagen-resistant mutants, cell tropism and host-range mutants, etc.). The phenotypic repertoire of a viral quasispecies can contribute to viral persistence, pathogenesis and to the limited efficacy of treatments designed to limit viral replication.
2. Viral quasispecies can include memory genomes as minority components of their mutant spectra. Memory provides an adaptive advantage to viral populations.
3. Mutant spectra are not merely collections of mutant viruses acting independently. Positive interactions (of complementation) or negative interactions (of interference) can be established within mutant spectra. Thus viral quasispecies act as a unit of selection and cannot be accurately described by classical Wright-Fisher formulations of population genetics.
4. The understanding of quasispecies dynamics has helped define protocols for preventive and therapeutic designs (vaccines to control viral quasispecies must be multivalent; antiviral agents must be used in combination) and has impelled new antiviral strategies such as lethal mutagenesis (virus extinction by excess mutations).", "score": 23.030255035772623, "rank": 54}, {"document_id": "doc-::chunk-233", "d_text": "Synthesis of type C virus, albeit at lower levels, was still observed at UV doses beyond those required to prevent cell killing.
Holm, Christian K; Rahbek, Stine H; Gad, Hans Henrik; Bak, Rasmus O; Jakobsen, Martin R; Jiang, Zhaozaho; Hansen, Anne Louise; Jensen, Simon K; Sun, Chenglong; Thomsen, Martin K; Laustsen, Anders; Nielsen, Camilla G; Severinsen, Kasper; Xiong, Yingluo; Burdette, Dara L; Hornung, Veit; Lebbink, Robert Jan; Duch, Mogens; Fitzgerald, Katherine A; Bahrami, Shervin; Mikkelsen, Jakob Giehm; Hartmann, Rune; Paludan, Søren R
Stimulator of interferon genes (STING) is known to be involved in the control of DNA viruses but has an unexplored role in the control of RNA viruses. During infection with DNA viruses, STING is activated downstream of cGAMP synthase (cGAS) to induce type I interferon. Here we identify a STING-dependent, cGAS-independent pathway important for full interferon production and antiviral control of enveloped RNA viruses, including influenza A virus (IAV). Further, IAV interacts with STING through its conserved hemagglutinin fusion peptide (FP). Interestingly, FP antagonizes interferon production induced by membrane fusion or IAV but not by cGAMP or DNA. Similar to the enveloped RNA viruses, membrane fusion stimulates interferon production in a STING-dependent but cGAS-independent manner. 
Abolishment of this pathway led to reduced interferon production and impaired control of enveloped RNA viruses. Thus, enveloped RNA viruses stimulate a cGAS-independent STING pathway, which is targeted by IAV.
Rensen, Elena Ilka; Mochizuki, Tomohiro; Quemin, Emmanuelle; Schouten, S.; Krupovic, Mart; Prangishvili, David
Viruses package their genetic material in diverse ways. Most known strategies include encapsulation of nucleic acids into spherical or filamentous virions with icosahedral or helical symmetry, respectively. Filamentous viruses with dsDNA genomes are currently associated exclusively with Archaea.
Hampar, B.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-1", "d_text": "These cells also exhibited resistance to the antiviral effects of IFN-α/β. When Jak family tyrosine kinases are overexpressed in Huh7 cells, the overexpressed kinase becomes tyrosine phosphorylated, as do cellular STAT proteins. In support of the view that Jak1 is a critical and specific target of VP40, expression of either RAVV or MARV VP40 inhibits the tyrosine phosphorylation of Jak1 and STAT proteins in Jak1-overexpressing human cells (40).
Notably, neither MARV nor RAVV VP40 could efficiently inhibit IFN-α/β signaling in the mouse cells tested (). Although a modest reduction in STAT1 or STAT2 phosphorylation could be seen in VP40-expressing mouse cells, relative to empty vector-transfected control cells, this inhibition was not sufficient to counteract the antiviral effects of IFN (). Strikingly, however, adaptation of RAVV to mice resulted in a VP40 fully capable of inhibiting IFN signaling and blocking the antiviral effects of IFN-β in mouse cells ( and 4). Moreover, maRAVV VP40 was able to inhibit the phosphorylation of either mouse or human Jak1 and STAT1 in human and mouse cells, while the nonadapted VP40s inhibited the phosphorylation of human Jak1 and mouse Jak1 in human cells but not in mouse cells (). 
These observations are consistent with a model where VP40 inhibition of Jak1 requires an additional host factor(s) to carry out its inhibitory function.
Given that the mouse-adapted RAVV exhibits increased virulence in mice, relative to the parental strain from which it was derived, the enhanced IFN antagonist function is consistent with a role for VP40 IFN antagonist function in mouse pathogenesis. Whether the mouse-adaptive changes in VP40 are sufficient for enhanced virulence in the absence of other changes remains to be determined. It is intriguing that the mouse-adapted RAVV also accumulated several amino acid changes in its VP35 protein. In EBOV, VP35 has been demonstrated to inhibit the production of IFN-α/β by antagonizing the RIG-I signaling pathway (6). Whether these VP35 changes influence its function in mouse cells is therefore of interest.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-2", "d_text": "We found that HRV-16 levels in IRAK-M OE versus EV NCI-H292 cells were elevated at 4 h, and maintained at 24 h, but not at 48 h. To test the effects of exogenous antiviral interferon on HRV-16 replication and.", "score": 22.919870037288597, "rank": 57}, {"document_id": "doc-::chunk-153", "d_text": "Thus, we hypothesize that NSs function(s) can be abolished by truncation of specific domains, and co-expression of nonfunctional NSs with intact NSs will result in the attenuation of NSs function by a dominant-negative effect. Unexpectedly, we found that RVFV NSs truncated at aa. 6-30, 31-55, 56-80, 81-105, 106-130, 131-155, 156-180, 181-205, 206-230, 231-248 or 249-265 lack the functions of IFN-β mRNA synthesis inhibition and PKR degradation. Truncated NSs were less stable in infected cells, while nuclear localization was inhibited in NSs lacking any of aa. 81-105, 106-130, 131-155, 156-180, 181-205, 206-230 or 231-248. 
Furthermore, none of the truncated NSs exhibited significant dominant-negative activity against NSs-mediated IFN-β suppression or PKR degradation upon co-expression in cells infected with RVFV. We also found that, unlike intact NSs, none of the truncated NSs interacts with RVFV NSs, even in the presence of the intact C-terminus self-association domain. Our results suggest that the conformational integrity of NSs is important for the stability, cellular localization and biological functions of RVFV NSs, and that co-expression of truncated NSs does not produce a dominant-negative phenotype.
Jennifer A Head
Rift Valley fever virus (RVFV) belongs to the genus Phlebovirus of the family Bunyaviridae and causes high rates of abortion and fetal malformation in infected ruminants, as well as neurological disorders, blindness, or lethal hemorrhagic fever in humans. RVFV is classified as a category A priority pathogen and a select agent in the U.S., and currently there are no therapeutics available for RVF patients. The NSs protein, a major virulence factor of RVFV, inhibits host transcription, including interferon (IFN)-β mRNA synthesis, and promotes degradation of the dsRNA-dependent protein kinase (PKR). NSs self-associates at its C-terminal 17 aa., while NSs at aa. 210-230 binds to Sin3A-associated protein (SAP30) to inhibit the activation of the IFN-β promoter.", "score": 22.6904783802783, "rank": 58}, {"document_id": "doc-::chunk-5", "d_text": "However, several different mechanisms of transcriptional interference may clarify this point (reviewed in and ) (see Fig 1): (i) Steric hindrance: when the provirus integrates in the same transcriptional orientation as the host gene, "read-through" transcription from an upstream promoter displaces key transcription factors from the HIV-1 promoter as previously shown for Sp1 and prevents the assembly of the pre-initiation complex on the viral promoter, thereby hindering HIV-1 transcription. 
The integrated virus is thought to be transcribed along with the other intronic regions of the cellular gene, but is then merely spliced out. This mechanism has been confirmed in J-Lat cells, a CD4+ T-cell line used as a model for HIV-1 postintegration latency. Lenasi and colleagues have shown that transcriptional interference could be reversed by inhibiting the upstream transcription or by cooperatively activating viral transcription initiation and elongation. Of note, certain host transcription factors and/or viral activators, which bind strongly to their cognate sites, could resist the passage of "read-through" RNA polymerase II (RNAPII). As studies in yeast have demonstrated that the elongating polymerase is followed by a rapid refolding of histones in a closed configuration to counteract transcription initiation at cryptic sites in the transcription unit, chromatin structure and epigenetic events could also be implicated in transcriptional interference. Conversely, Han et al. have demonstrated that upstream transcription could enhance HIV-1 gene expression without significant modification of the chromatin status in the region when the provirus is integrated in the same orientation as the host gene. These partially contradictory studies have been questioned based on earlier studies that reported transcriptional interference as important in repressing viral promoters integrated in the same orientation as an upstream host gene promoter [60, 65, 66]. Interestingly, Marcello and colleagues have recently reported that an integrated provirus suffering from transcriptional interference in basal conditions becomes transcriptionally active following Tat expression, and that this provirus can switch off the transcription of the host gene within which it has integrated or can allow the coexistence of expression of both host and viral genes. 
Further analysis of the mechanisms exploited by host genes to regulate a viral promoter inserted in their transcriptional unit or by the virus to counterbalance the host gene control will be needed to completely elucidate these transcriptional interference events.", "score": 21.695954918930884, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "Defective interfering (DI) genomes are characterised by their ability to interfere with the replication of the virus from which they were derived, and of other genetically compatible viruses [4,9,10]. Within a segment many different break points have been observed, so that many different DI RNA sequences may arise from a single segment. Subsequent deletions can also arise. In one preparation of influenza A DI viruses more than 50 different DI RNAs were detected. DI viruses are defective because the deleted genome lacks an essential gene required for replication. In order to replicate, DI viruses require the assistance of the infectious virus from which they were derived, or a genetically compatible related virus, to supply the missing gene products. This is known as a helper virus. DI virus production is normally optimal in the presence of a large amount of helper virus, but as the DI virus replicates it reduces the yield of infectious helper virus. The reduction in helper virus arises because the smaller DI genome is replicated considerably faster than the larger parental genome, so that more DI genomes are synthesised in unit time until the DI genome predominates. 
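The compounding effect of this replication-speed difference can be illustrated with a toy calculation (not from the source; the per-round copy numbers and starting counts below are invented purely for illustration):

```python
# Toy model of co-replication of a full-length helper genome and a
# shorter defective interfering (DI) genome in the same cell. The only
# assumption encoded here is that the shorter DI genome completes more
# copies per replication round than the full-length genome.

def co_replicate(helper, di, helper_copies=2, di_copies=3, rounds=20):
    """Grow both genome populations for `rounds` replication rounds."""
    for _ in range(rounds):
        helper += helper * helper_copies  # each helper genome yields 2 copies
        di += di * di_copies              # each DI genome yields 3 copies
    return helper, di

# Start with the DI genome heavily outnumbered: 1 DI per 100 helpers.
helper, di = co_replicate(helper=100, di=1)
di_fraction = di / (helper + di)
assert di_fraction > 0.5  # the faster-copying DI genome now predominates
```

Even a modest per-round advantage compounds geometrically, so a DI genome starting at about 1% of the population overtakes the helper within a few tens of rounds, which is consistent with the text's point that DI genomes come to predominate and depress infectious helper yield.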
This gives the DI genome two advantages: stochastically, the more numerous DI genomes are better able to compete with the infectious helper virus or the host cell for essential product(s) synthesised in limited amounts, and second, the abundant DI genomes are more likely to be packaged into new virus particles. Because of the dependence of the DI virus on helper virus to provide the essential proteins, DI and helper virus", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-14", "d_text": "And perhaps the most important physical constraint on template switching, particularly with respect to homologous recombination, is simply the extent of sequence dissimilarity between potentially recombining genomes. Finally, genetic variation in the susceptibility of the viral replicase to jumping (Bujarski & Nagy, 1996) no doubt plays a central role in determining how often and by what mechanism particular viruses recombine.
Recombination occurs when these first four steps are fulfilled. Whether incipient recombinants persist, however, depends on the fifth and final step, the selective separation of the wheat from the chaff among hybrids. Although there is strong evidence that genetic exchange can offer advantages in some circumstances, random recombination no doubt destroys more good alleles than it creates. 
PCR studies which have made possible the characterization of the initial products of recombination, those present prior to removal by selection (Banner & Lai, 1991; Desport et al., 1998; Jarvis & Kirkegaard, 1992), have produced important insights. Banner and Lai's study of coronaviruses (1991), for example, showed that the initial recombination events in their MHV system were almost entirely randomly distributed along the sequence investigated. It was only after passage through cell culture, with the opportunity for selection to remove less fit variants, that crossover sites became 'localized' to just a small area of the region examined. With enough passages, the recombinants disappeared altogether. These results indicated that 'recombination hotspots' can actually be the result of natural selection on a pool of random recombination crossover junctions, as opposed to elevated recombination rates in particular regions. Crucially, they also suggested that recombination may be more common than often assumed, but may go undetected because of the action of strong purifying selection which will remove new, deleterious combinations of mutations. In light of these studies it is clear that what is meant by 'recombination frequency', a term usually used without specified units, depends critically on whether we are assessing recombination events before or after selection has acted. 
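The distinction between junctions as generated and junctions that survive selection can be sketched with a small Monte Carlo simulation (illustrative only; the genome length, the window of viable junctions, and the event count are all invented for the sketch):

```python
import random

random.seed(0)  # reproducible draws

GENOME_LEN = 100
VIABLE = range(40, 50)  # hypothetical window where hybrid genomes stay fit

def crossover_sites(n_events, selection):
    """Draw random crossover junctions; optionally purge unfit hybrids."""
    sites = [random.randrange(GENOME_LEN) for _ in range(n_events)]
    if selection:
        # Purifying selection removes every hybrid whose junction
        # falls outside the viable window.
        sites = [s for s in sites if s in VIABLE]
    return sites

before = crossover_sites(10_000, selection=False)  # junctions as generated
after = crossover_sites(10_000, selection=True)    # junctions that survive

# Generated junctions fall everywhere; surviving junctions cluster in
# the viable window, mimicking an apparent "recombination hotspot",
# and the measured recombination frequency drops sharply.
assert min(before) < min(VIABLE) and max(before) >= max(VIABLE)
assert all(s in VIABLE for s in after)
assert len(after) < len(before)
```

The point mirrors the Banner and Lai result above: uniform crossover generation plus post-hoc selection is indistinguishable, in the surviving sample, from locally elevated recombination rates, so "recombination frequency" means different things before and after selection.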
A virus which often produces hybrid RNAs under laboratory conditions may very rarely, or even never, be found to recombine in nature.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-3", "d_text": "NSs induces a shut-off of host transcription, including that of interferon (IFN)-beta mRNA [7,8], and promotes degradation of double-stranded RNA-dependent protein kinase (PKR) at the post-translational level [9,10]. IFN-beta is transcriptionally upregulated by interferon regulatory factor 3 (IRF-3), NF-kB and activator protein-1 (AP-1), and the binding of IFN-beta to the IFN-alpha/beta receptor (IFNAR) stimulates the transcription of IFN-alpha genes and other interferon-stimulated genes (ISGs) [11], which induce host antiviral activities. The suppression of host transcription, including that of the IFN-beta gene, by NSs prevents the upregulation of those ISGs in response to viral replication, although IRF-3, NF-kB and AP-1 can still be activated by RVFV [7]. Thus, NSs is an excellent target to further attenuate MP-12, and to enhance host innate immune responses by abolishing the IFN-beta suppression function. Here, we describe a protocol for generating a recombinant MP-12 encoding mutated NSs, and provide an example of a screening method to identify NSs mutants lacking the function to suppress IFN-beta mRNA synthesis. In addition to its essential role in innate immunity, type-I IFN is important for the maturation of dendritic cells and the induction of an adaptive immune response [12-14].
Thus, NSs mutants inducing type-I IFN are further attenuated, but at the same time are more efficient at stimulating host immune responses than wild-type MP-12, which makes them ideal candidates for vaccination approaches.
Immunology, Issue 57, Rift Valley fever virus, reverse genetics, NSs, MP-12, vaccine development
Dissecting Host-virus Interaction in Lytic Replication of a Model Herpesvirus
Institutions: UT Southwestern Medical Center.
In response to viral infection, a host develops various defensive responses, such as activating innate immune signaling pathways that lead to antiviral cytokine production [1,2]. In order to colonize the host, viruses are obliged to evade host antiviral responses and manipulate signaling pathways. Unraveling the host-virus interaction will shed light on the development of novel therapeutic strategies against viral infection.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-19", "d_text": "Besides these traditional roles, it appears that multiple proteins of the DexD/H-box helicase family are associated with viral components and/or have alternative effects on viral propagation (56–58). For instance, DDX3 shows antiviral functions against vaccinia virus, DENV, and HBV (59–61), but benefits HCV and HIV infection (62, 63). In addition, DDX1 interacts with the nsp3 of Venezuelan equine encephalitis virus and enhances viral multiplication (64). The interaction of DDX1 with human immunodeficiency virus type 1 Rev protein is involved in the regulation of virus replication (65). In the present study, the knockdown and ectopic expression of DDX1 demonstrated that DDX1 had antiviral activity against TGEV replication (data not shown). Interestingly, earlier studies also showed an interaction of DDX1 with coronavirus (IBV and SARS-CoV) nsp14, and in contrast to TGEV, this interaction might enhance the replication of IBV (31).
This difference suggests that DDX1 is not likely to be a general target against CoV infection.
Furthermore, it should be noted that the effect of DDX1 on progeny TGEV production was moderate (data not shown). Unlike other CoVs, such as SARS-CoV and MERS-CoV, which antagonize IFN-I, TGEV infection induces IFN-I production, and a recent paper showed that poly(I:C)-induced IFN-I responses could only inhibit TGEV replication in the early infection stage, but failed in the late infection stage (23). They also demonstrated that the activation of IFN-I responses by TGEV infection cannot inhibit viral replication. Our results are consistent with the conclusion proposed by Zhu and colleagues. In addition, it is surprising that the expression levels of IFN-β parallel the increase of viral RNA during TGEV infection. This may explain why the effect of DDX1 on TGEV replication was moderate despite its significant role in IFN-β induction by TGEV. However, more studies are required to investigate the complex interaction between TGEV and IFNs.
In conclusion, our data demonstrate that TGEV infection induces IFN-β production and that nsp14 is the most significant IFN-β inducer among the TGEV-encoded proteins.", "score": 21.107226877652625, "rank": 63}, {"document_id": "doc-::chunk-7", "d_text": "Large dsDNA viruses show the most variable protein domain repertoires, while most RNA and retrotranscribing viruses, due to their genome sizes, perform all their processes using few domains, which are reused throughout their entire infection cycles (Zheng et al., 2014).
Owing to these proteomic peculiarities, viruses diversify and/or maintain their interaction capabilities via different mechanisms of molecular evolution, including conservation (orthology); HGT (xenology); gene duplication (paralogy); and molecular mimicry (convergence; Alcami, 2003; Koonin et al., 2006; Garamszegi et al., 2013).
In the mode of evolution by orthology, if a domain pair known to interact is found in two closely related systems (organisms), these domains are likely to be real interacting partners in both systems (interologs). Although the mere presence of orthologs in two systems does not necessarily imply a direct interaction among them (Riley et al., 2005), their co-occurrence could be indicative of potentially conserved interactions, making these proteins good targets for further studies (Dyer et al., 2011). Examples of interologs are observed among herpesviruses, DNA viruses that show a large set of genes and interactions shared among almost all members of the family Herpesviridae (Bailer and Haas, 2009). An example of such conservation is observed in the interaction between the cellular receptor Nectin-1 and the envelope glycoprotein D encoded by alphaherpesviruses. Nectins are commonly found at adherens junctions (GO:0005912), and are used by viruses as cell entry mediators (GO:0046718). Figure 3 illustrates such an interaction in two host–virus pairs: “human × HHV-2,” and “pig × SHV-1.”
Figure 3. Interologs: homologous interactions. (A) Protein-protein interaction (PDB 4MYW) between a human Nectin-1 (blue protein) and a Glycoprotein D encoded by a Human Herpesvirus 2 (gD, red protein).
(B) Interaction (PDB 5X5W) between swine Nectin-1 (light cyan protein) and Suid Herpesvirus 1 gD (pink protein).", "score": 20.327251046010716, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "Compared to the relatively well-conserved processes found in cellular organisms, viruses demonstrate huge variations in terms of genomic composition, patterns of evolution, and protein function. While studying protein–protein interactions (PPIs) in virus–host systems, these variations on the pathogen side must be considered. A large proportion of the PPIs are mediated by domain–domain interactions (DDIs), and viruses belonging to different Baltimore groups have specific domain repertoires, providing different strategies and mechanisms of molecular recognition to accomplish their replication cycle (Zheng et al., 2014). In DDIs, molecular recognition is performed via amino acid residues located at interfaces of interaction. Under homeostatic conditions, host proteins interact with each other via (endogenous) interfaces that are also sometimes explored by viruses (exogenous interfaces), leading to competition for such molecular resources between viruses and hosts (Franzosa and Xia, 2011). Protein recognition events can occur as stable or transient interactions, and some proteins can establish interactions with multiple partners, either simultaneously (party hubs), or at different times (date hubs; Han et al., 2004). Such patterns of interaction can be studied in the context of the overall protein interaction network (PIN), in which each node shows particular properties (e.g., connectivity, centrality, etc.; Gursoy et al., 2008).\nIn terms of evolution, virus and host PPIs often evolve under a regime of arms race. In this phenomenon, one of the partners undergoes mutations that can in turn promote the fixation of mutations in its counterpart, causing both proteins to change over time in a way that retains their mutual recognition capabilities (Daugherty and Malik, 2012). 
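The network view described above (proteins as nodes, interactions as edges, nodes characterized by properties such as connectivity and centrality) can be sketched with a toy protein interaction network. All protein names below are invented for illustration, and simple degree centrality stands in for the richer centrality measures used in practice.

```python
from collections import defaultdict

# Toy protein interaction network (PIN): nodes are proteins, edges are PPIs.
# Names are hypothetical; "viral1" targets the best-connected host protein,
# mirroring the tendency of viral proteins to bind highly connected hosts.
edges = [
    ("hostA", "hostB"), ("hostA", "hostC"), ("hostA", "hostD"),
    ("hostB", "hostC"), ("hostD", "hostE"),
    ("viral1", "hostA"),
]

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

n = len(adj)                                               # number of nodes
degree = {p: len(nb) for p, nb in adj.items()}             # connectivity
centrality = {p: d / (n - 1) for p, d in degree.items()}   # degree centrality

hub = max(degree, key=degree.get)
print(hub, degree[hub], round(centrality[hub], 2))  # hostA 4 0.8
```

Real PINs are analysed the same way, only at scale and with additional node properties (betweenness, clustering, party/date hub classification).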
Other common mechanisms of evolution in virus–host systems involve the acquisition of new proteins and interactions via gene duplication, horizontal gene transfer (HGT), and convergence (Alcami, 2003; Koonin et al., 2006; Garamszegi et al., 2013).
Protein interaction data are usually obtained using strategies such as yeast two-hybrid (Y2H) and affinity-purification mass spectrometry (AP-MS), which present specific advantages and disadvantages (Gavin et al., 2006; Gingras et al., 2007). Different kinds of proteomic data are gathered in multiple independent databases, which provide researchers with information on protein classification, domains, interactions, GO terms, etc.", "score": 20.327251046010716, "rank": 65}, {"document_id": "doc-::chunk-6", "d_text": "In some cases, evolutionary evidence suggests that newly formed recombinant strains may have initially exhibited decreased functionality, but subsequently evolved to compensate for these effects. For example, the Western equine encephalitis (WEE) complex viruses are the product of a single recombination event that occurred between Eastern equine encephalitis virus (EEEV) and a Sindbis-like virus, probably within the last 2000 years (Weaver et al., 1997). Sequence analysis of WEEV indicated that, subsequent to the recombination event, its EEEV-like capsid protein evolved to become more like a Sindbis-virus capsid, possibly because it needs to interact with Sindbis-like glycoproteins during virus budding (Hahn et al., 1988). Conversely, in a sort of evolutionary compromise, almost all the amino acid changes in the Sindbis-like glycoproteins have been to residues that match those of EEEV.
It is possible that the new antigenic properties conferred by the Sindbis-like glycoproteins of this predominantly EEEV-like hybrid were sufficiently advantageous to offset what appears to have been a significant mismatch between its recombinant structural proteins. Whatever the circumstances of the survival and subsequent diversification of particular recombinants, it is now evident that many pass through the narrow gates of natural selection and contribute to the diversity seen in RNA viruses. The production of new strains having genomes comprising regions with different histories has important implications for the way we think about virus evolution. For one, it means that there is no single phylogenetic tree that can describe the evolutionary relationships between viruses; the recombinative nature of viruses simply precludes the possibility of a 'true' phylogenetic taxonomy. Far from being an incidental process best ignored when considering virus relationships and history, recombination may have played a crucial role in generating many of the taxonomic groups we recognize (Koonin & Dolja, 1993): today's hopeful monster giving rise to tomorrow's genus or family of viruses. Some studies of recombination have even led to recommendations that higher-than-family taxonomic units should be avoided altogether (Goldbach, 1992).", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "Viral RNA Decay
We believe that the cellular mRNA decay machinery can act as an antiviral defense mechanism. After all, many viral RNAs have evolved to mimic cellular transcripts and should therefore be ideal targets for degradation. A degraded genome cannot be replicated or translated, and therefore presents no threat (Dickson and Wilusz, 2011; Moon, Barnhart and Wilusz, 2012). Several projects in the lab focus on characterizing the interactions between viral RNAs and cellular mRNA decay factors.
One virus we have investigated extensively is the alphavirus Sindbis Virus (SINV). We have discovered that SINV RNAs are actually quite resistant to decay, and this is likely because they usurp cellular RNA-binding proteins that act as stabilizing factors (Garneau et al., 2008; Sokoloski et al., 2010). Our recent results show that many alphaviruses induce relocalization of the cellular HuR protein from the nucleus to the cytoplasm, which presumably interferes with its normal functions (Dickson et al., 2012). We have also shown that the rabies virus glycoprotein mRNA can interact with a poly(C) binding protein to enhance its stability (Palusa et al., 2012).
Interestingly, another family of arboviruses, the flaviviruses, has evolved a completely different mechanism of interfering with mRNA decay. When their RNAs are degraded, they generate a stable intermediate called sfRNA, which inhibits the function of the host 5'-3' exonuclease, XRN1. This results in stabilization of host cell mRNAs and likely has wide-ranging effects on host cell gene expression (Moon et al., 2012). It may also help the viral genomic RNA persist in the host cell cytoplasm.
We hypothesize that other RNA viruses (e.g. Dengue, Rabies, Ebola) will have evolved similar or novel mechanisms to protect their genomes from the mRNA decay machinery. Moreover, by interfering with these interactions we should be able to induce viral RNA decay and prevent infection.
Moon SL, Anderson JR, Kumagai Y, Wilusz CJ, Akira S, Khromykh AA, Wilusz J. A noncoding RNA produced by arthropod-borne flaviviruses inhibits the cellular exoribonuclease XRN1 and alters host mRNA stability. RNA.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "Abstract: Human immunodeficiency virus-1 (HIV-1) dynamics reflect an intricate balance within the virus's host. The virus relies on host replication factors, but must escape or counter its host's antiviral restriction factors.
The interaction between the HIV-1 protein Vif and many cellular restriction factors from the APOBEC3 protein family is a prominent example of this evolutionary arms race. The viral infectivity factor (Vif) protein largely neutralizes APOBEC3 proteins, which can induce in vivo hypermutations in HIV-1 to the extent of lethal mutagenesis, and ensures the production of viable virus particles. HIV-1 also uses the APOBEC3-Vif interaction to modulate its own mutation rate in harsh or variable environments, and it is a model of adaptation in a coevolutionary setting. Both experimental evidence and the substantiation of the underlying dynamics through coevolutionary models are presented as complementary views of a coevolutionary arms race.
Münk, C.; Jensen, B.-E.O.; Zielonka, J.; Häussinger, D.; Kamp, C. Running Loose or Getting Lost: How HIV-1 Counters and Capitalizes on APOBEC3-Induced Mutagenesis through Its Vif Protein. Viruses 2012, 4, 3132-3161.", "score": 19.44951185328346, "rank": 68}, {"document_id": "doc-::chunk-9", "d_text": "Similarly to HGT events, gene duplications are evolutionary processes mainly found among dsDNA viruses, as observed in herpesviruses, adenoviruses and poxviruses (Shackelton and Holmes, 2004).
An example of evolution by gene duplication is observed for the herpesvirus Glycoprotein D previously shown in Figure 1. Some alphaherpesviruses express a second copy of that protein, glycoprotein G (gG), a paralog that does not act as a viral entry mediator, but in fact shows a modified function, binding a broad range of chemokines to prevent their interaction with specific cellular receptors (Bryant et al., 2003). Interestingly, the presence of paralogs in PINs is not exclusive to viruses.
Finally, acquisition of a new interaction partner via convergent evolution is also a recurrent mechanism in virus–host networks. As they evolve at faster mutation rates, viruses can rapidly acquire new binding partners by mimicking and targeting interfaces of host proteins (Elde and Malik, 2009; Standfuss, 2015). A particular example is observed among Dengue viruses, Vaccinia viruses, and HIV-1, which independently acquired similar mechanisms of protein interaction and RNA recognition, which are essential to promote genome replication and mRNA translation (Garcia-Montalvo et al., 2004; Alvarez et al., 2006; Katsafanas and Moss, 2007). In this way, viruses can evolve not only by homology (HGT, duplication, and conservation) but also by analogy, allowing them to share interacting partners and adopt common strategies of infection (Dyer et al., 2008; Bailer and Haas, 2009; Segura-Cabrera et al., 2013). Figure 5 shows an example of convergent evolution. The human Ephrin-B2 is a cell surface transmembrane ligand of Ephrin receptors (Figure 5A; Qin et al., 2010), and the Glycoprotein G encoded by the Paramyxovirus Hendra henipavirus is an envelope component (GO:0019031) that mimics this interaction with Ephrin receptors using an interface similar to the one explored by Ephrin type-A receptor 4 (Figure 5B).
Figure 5.
Interaction convergence.", "score": 18.90404751587654, "rank": 69}, {"document_id": "doc-::chunk-1", "d_text": "By contrast, the alternative RNA helicase retinoic acid-inducible gene I (RIG-I) does not appear to be inhibited by V protein (Childs et al. 2007). V protein is also thought to act as a decoy substrate for kinases that activate IFN regulatory factor 3 (Lu et al., 2008). Consistent with these V protein functions, a recombinant SV5 that encodes a truncated V protein lacking the highly conserved cysteine-rich domain is a potent activator of IFN-beta and proinflammatory cytokine synthesis (He et al., 2002). Another major factor contributing to limited activation of cellular responses is the ability of SV5 to control the amounts and types of viral RNA produced during replication. This is evident from our finding that an SV5 mutant with substitutions in the genomic promoter overexpresses viral RNA and also induces IFN and cytokines through RIG-I pathways, even though this mutant expresses a functional V protein (Manuse and Parks, 2009). Likewise, potent antiviral responses are induced by an SV5 P/V mutant which overexpresses viral RNA (Wansley and Parks, 2002), but importantly, expression of the WT P subunit of the viral polymerase restores normal levels of viral gene expression and reduces host cell responses (Dillon and Parks, 2007). Together, these data support a model whereby SV5 infection does not activate antiviral responses during a robust replication cycle, except under circumstances where V protein is defective or when synthesis of viral RNA components is elevated over a threshold. This model raises the question of whether SV5 would be a poor inducer of antiviral responses in cells that sense virus by mechanisms that are independent of virus replication.
Dendritic cells (DCs) play a critical role in sensing viral infections to activate both innate and adaptive immune responses. Two main DC subsets are found in human peripheral blood (Shortman and Liu, 2002): myeloid dendritic cells (mDCs) and plasmacytoid dendritic cells (pDCs).", "score": 18.90404751587654, "rank": 70}, {"document_id": "doc-::chunk-11", "d_text": "Template jumping during replication in viruses infecting cats has produced, on multiple occasions, the pathogenic strains known as feline infectious peritonitis viruses (FIPVs) by altering asymptomatic feline enteric coronaviruses, differing from them only by deletions of around 100 bp in predictable locations (Vennema et al., 1998). Another coronavirus, feline coronavirus (FCoV) type II, appears to be a homologous (or aberrant homologous) recombinant of FCoV type I and canine coronavirus (Herrewegh et al., 1995). Like FIPVs, FCoV type II viruses may have arisen on different occasions from separate recombination events (Motokawa et al., 1996).
Experimental studies provide further signs of the ability of recombination to generate useful, new variation. In one particularly striking display of this, MS2 phage mutants lacking the sequence for important stem-and-loop secondary structures repeatedly reconstructed them via nonhomologous recombination (Olsthoorn & van Duin, 1996).
Constraints on recombination
Recombination clearly plays a significant role in the evolution of RNA viruses by generating genetic variation, by reducing mutational load, and by producing new viruses. We expect that with current advances in sequencing and sequence analysis many more examples of hybrid viruses, produced by recombination between different strains, will soon be found. However, it is clearly not the case that all RNA viruses are equally prone to recombination. It has still not been detected in several viruses despite strenuous searches (e.g.
Bilsel et al., 1990), although some, not otherwise known to recombine, nevertheless produce DI RNAs. Amongst those known to produce hybrids, the frequency of recombinants detected in natural or experimental studies varies markedly. Given the potential advantages of recombination, why is there apparently so much variation between viruses in its occurrence? While it is too soon to provide a definitive answer to this crucial question, it is possible to dissect the process that gives rise to recombinants and to consider the constraints that could act at each stage to inhibit it.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "For example, the soil-dwelling bacterium Bacillus subtilis has viral genes that help protect it from heavy metals and other harmful substances in the soil.
Once inside a host cell, viruses take over its machinery to reproduce. Viruses override the host cell’s normal functioning with their own set of instructions that shut down production of host proteins and direct the cell to produce viral proteins to make new virus particles.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-0", "d_text": "Host Defense and Viral Immune Evasion: A Proteomics Perspective
In Plain English:
Human cells and viruses are locked in a protein-based arms race for global domination: Will the cell’s defensive proteins successfully recognize viral DNA and alert the immune system? Or will the virus counter with proteins that stop the defensive proteins in their tracks? The answer is that both of these processes are happening all the time.
What it covered:
Full disclosure: I got to the talk about 10 minutes late after being stopped by a security guard (who wasn’t sure how to react to a 22-year-old with a backpack who could speak proteomics-babble but couldn’t produce a student ID). So I missed the first few slides of the talk, but when I arrived, Dr.
Cristea was introducing the HMS research crowd to Gamma-Interferon-Inducible Protein 16 (IFI-16) and its role in the innate immune system.
Basically, IFI-16 is a protein (and if you’re talking about cell biology, a protein refers to any molecule that’s made out of amino acids. Which means that almost everything in the cell is a protein.*) that sends out a cellular distress call when it senses the presence of viral DNA. Dr. Cristea’s lab is working on figuring out how exactly IFI-16 identifies viral DNA and how viruses counter IFI-16’s tactics.
IFI-16’s arch-nemesis is a viral protein called UL83 (alias phosphoprotein pp65). UL83 is expressed by many different viruses, but it’s best known for its role in human cytomegalovirus infections. Even though the virus can replicate without UL83, UL83 helps the virus by silencing cellular distress signals (distress signals in response to pathogens are called interferons)** but scientists are still parsing out exactly how UL83/pp65 silences them.
In theory, IFI-16 (and other similar proteins) is a pretty foolproof system. IFI-16 binds directly to the viral DNA, and that binding event sets off a chemical chain reaction that causes the cell to start producing extra interferons.
The immune cells zero in on the interferon-spewing cells and kill them before the virus can finish replicating itself.
However, Cristea’s lab has found that UL83 stops the interferon-inducing chain reaction by binding to IFI-16 after IFI-16 binds to the viral DNA.", "score": 17.66385277391369, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Protein–Protein Interactions in Virus–Host Systems
- Department of Life Sciences, Centre for Integrative Systems Biology and Bioinformatics, Imperial College London, London, United Kingdom
To study virus–host protein interactions, knowledge about viral and host protein architectures and repertoires, their particular evolutionary mechanisms, and information on relevant sources of biological data is essential. The purpose of this review article is to provide a thorough overview of these aspects. Protein domains are basic units defining protein interactions, and the uniqueness of viral domain repertoires, their mode of evolution, and their roles during viral infection make viruses interesting models of study. Mutations at protein interfaces can reduce or increase their binding affinities by changing protein electrostatics and structural properties. During the course of a viral infection, both pathogen and cellular proteins are constantly competing for binding partners. Endogenous interfaces mediating intraspecific interactions—viral–viral or host–host interactions—are constantly targeted and inhibited by exogenous interfaces mediating viral–host interactions. From a biomedical perspective, blocking such interactions is the main mechanism underlying antiviral therapies. Some proteins are able to bind multiple partners, and their modes of interaction define how fast these “hub proteins” evolve. “Party hubs” have multiple interfaces; they establish simultaneous/stable (domain–domain) interactions, and tend to evolve slowly.
On the other hand, “date hubs” have few interfaces; they establish transient/weak (domain–motif) interactions by means of short linear peptides (15 or fewer residues), and can evolve faster. Viral infections are mediated by several protein–protein interactions (PPIs), which can be represented as networks (protein interaction networks, PINs), with proteins being depicted as nodes, and their interactions as edges. It has been suggested that viral proteins tend to establish interactions with more central and highly connected host proteins. In an evolutionary arms race, viral and host proteins are constantly changing their interface residues, either to evade or to optimize their binding capabilities. Apart from gaining and losing interactions via rewiring mechanisms, virus–host PINs also evolve via gene duplication (paralogy); conservation (orthology); horizontal gene transfer (HGT) (xenology); and molecular mimicry (convergence). The last sections of this review focus on PPI experimental approaches and their limitations, and provide an overview of sources of biomolecular data for studying virus–host protein interactions.", "score": 17.397046218763844, "rank": 74}, {"document_id": "doc-::chunk-154", "d_text": "Thus, we hypothesize that NSs function(s) can be abolished by truncation of specific domains, and that co-expression of nonfunctional NSs with intact NSs will result in the attenuation of NSs function by a dominant-negative effect. Unexpectedly, we found that RVFV NSs truncated at aa. 6-30, 31-55, 56-80, 81-105, 106-130, 131-155, 156-180, 181-205, 206-230, 231-248 or 249-265 lack the functions of IFN-β mRNA synthesis inhibition and degradation of PKR. Truncated NSs were less stable in infected cells, while nuclear localization was inhibited in NSs lacking any of aa. 81-105, 106-130, 131-155, 156-180, 181-205, 206-230 or 231-248.
Furthermore, none of the truncated NSs exhibited significant dominant-negative activity against NSs-mediated IFN-β suppression or PKR degradation upon co-expression in cells infected with RVFV. We also found that none of the truncated NSs, unlike intact NSs, interacts with RVFV NSs, even in the presence of the intact C-terminal self-association domain. Our results suggest that the conformational integrity of NSs is important for the stability, cellular localization and biological functions of RVFV NSs, and that co-expression of truncated NSs does not exhibit a dominant-negative phenotype.
Perkus, Marion E.; Piccini, Antonia; Lipinskas, Bernard R.; Paoletti, Enzo
The coding sequences for the hepatitis B virus surface antigen, the herpes simplex virus glycoprotein D, and the influenza virus hemagglutinin were inserted into a single vaccinia virus genome. Rabbits inoculated intravenously or intradermally with this polyvalent vaccinia virus recombinant produced antibodies reactive to all three authentic foreign antigens. In addition, the feasibility of multiple rounds of vaccination with recombinant vaccinia virus was demonstrated.
Viruses are known to be abundant, ubiquitous, and to play a very important role in the health and evolution of living organisms. However, most biologists have considered them as entities separate from the realm of life, acting merely as mechanical artifacts that can exchange genes between different organisms.", "score": 17.397046218763844, "rank": 75}, {"document_id": "doc-::chunk-1", "d_text": "
The coat protein gene of a persistent virus in white clover affects the development of nodules under varying nitrogen levels, and this could be transferred to other legumes (Nakatsukasa-Akune et al., 2005). Curvularia thermal tolerance virus is a mycovirus that infects a plant fungal endophyte, Curvularia protuberata. When both virus and fungus are present in hot springs panic grass (Dichanthelium lanuginosum) the holobiont is able to grow in soil temperatures up to 65◦ C (Márquez et al., 2007). Many more examples of mutualistic viruses can be found in other hosts (Roossinck, 2011). In addition, viruses are important in population control of their hosts, and marine viruses are probably extremely important to the movement of carbon and trace elements in the microbiome of the oceans (Danovaro et al., 2011).\nPLANT VIRUS ECOLOGY AND EVOLUTION The existence of plant-virus mutualistic relationships should not be surprising\nwhen one considers the numerous examples of mutualistic relationships between plants or animals and other microbes. Despite examples, there has been very little focus on exploring mutualistic relationships among plants and viruses. Viruses are also involved in the complex interactions between plants and insects, and can alter insect feeding behavior, fecundity, and ability to invade new territory (reviewed in Roossinck, 2013). Further complicating our understanding of plant-virus interactions is the role globalization has on the relationships between viruses and their plant hosts. Viruses are not stationary, and their movement geographically and between host species can have drastic effects on the ecology of a given area. 
Climate change can alter the behavior of many virus vectors, promoting the spread of viruses across a larger geographic area (Lebouvier et al., 2011).", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "Special Issue "Recombination in Viruses"
Deadline for manuscript submissions: closed (30 April 2011)
Dr. Matteo Negroni
CNRS-UPR 9002, Institut de Biologie Moléculaire et Cellulaire, 15 rue René Descartes, 67084 Strasbourg, France
Phone: +33 (0)388417006
Fax: +33 (0)388602218
Viruses are in a perpetual arms race with their hosts. Camouflage is a common strategy viruses use to escape the immune system (either innate or adaptive) of their hosts. This generally translates into a propensity to develop replication strategies that are, to varying extents, prone to the insertion of mutations into the viral genome. Accumulation of mutations is nevertheless limited by the virus's need to maintain viability and its own genetic identity. Keeping the subtle equilibrium between these two contrasting forces is vital for viruses; it often influences their pathogenic potential, and can be at the origin of outbreaks of infection of relevance for public health.
Recombination is an important source of genetic variability in viruses, particularly for viruses possessing an RNA genome. The remarkable power of recombination resides in its ability, in a single infectious cycle, to generate new combinations of mutations. This is important in two regards: on the one hand, recombination does not generate new mutations but reshuffles pre-existing ones, whose compatibility with viral survival has already been established. This is expected to increase the probability of a viable recombinant progeny. On the other hand, the fact that several mutations are, in general, simultaneously introduced through the recombination process is expected to favour the opposite outcome: that a high proportion of recombinant products will not be viable. 
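The reshuffling described above, in which a single crossover combines mutations that arose separately in two parental genomes, can be pictured as a toy template-switch model. A minimal sketch (the sequences, function name, and fixed crossover point are invented for illustration, not taken from any cited study):

```python
def recombine(parent_a: str, parent_b: str, crossover: int) -> str:
    """Toy single-crossover model: the polymerase copies parent_a up
    to the crossover point, then switches template to parent_b."""
    assert len(parent_a) == len(parent_b), "toy model assumes equal lengths"
    return parent_a[:crossover] + parent_b[crossover:]

# Two parental genomes, each carrying one pre-existing mutation
# (marked with a lowercase base) that has already survived selection.
parent_a = "AAgAAAAAAA"  # mutation near the 5' end
parent_b = "AAAAAAAtAA"  # mutation near the 3' end

# One crossover combines both mutations in a single infectious cycle,
# without generating any new mutation.
print(recombine(parent_a, parent_b, crossover=5))  # AAgAAAAtAA
```

Whether such a recombinant persists then depends on selection; as the text notes, many multi-mutation combinations will simply not be viable.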
Finally, recombination, in concert with natural selection, can be responsible for combining advantageous mutations, as well as for removing deleterious ones, by far the most abundant type of mutation found in nature.
For many viruses the generation of recombinant variants has been associated with important moments in the processes of adaptation, gain of pathogenic potential or increased spreading. Here we intend to present several of these cases and to provide an overview of the implications of recombination for viral evolution from the theoretical standpoint.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-5", "d_text": "The Evolution of Protein Interfaces and the Virus–Host Arms Race
In virus–host systems, interacting proteins are constantly losing and regaining their binding sites in order to evade or optimize interspecific PPIs. This process of constant change is known as an "evolutionary arms race" (Franzosa and Xia, 2011; Daugherty and Malik, 2012).
Under an arms race regime, proteins can evolve by offensive or defensive strategies. Host proteins evolve offensively when they are constantly changing as part of an effort to retain or restore their recognition capabilities to bind and neutralize viral factors, which in turn are under recurrent adaptation to evade the host's antagonist actions (Daugherty and Malik, 2012). For example, host immune system proteins in constant interaction with pathogen proteins frequently evolve by an offensive strategy. This scenario is usually found in mammals, whose antiviral proteins are under constant adaptation to recognize their antigens, showing a rapid mode of evolution (Lindblad-Toh et al., 2011). Conversely, defensive strategies are observed when host proteins targeted by viral antagonists undergo mutations to prevent pathogen proteins from binding their interfaces. 
As a response, this context can favor the fixation of novel mutations on viral interfaces, probably compensating for host evasion (Daugherty and Malik, 2012).
In this intricate virus–host arms race, endogenous and exogenous interfaces show different patterns of evolution. Host interfaces mediating host–host PPIs tend to be less variable than interfaces directly targeted by viral proteins (Franzosa and Xia, 2011). These proteins contain specific residues where small changes can drastically modify protein function and/or structure, and consequently their intraspecific binding affinity (Daugherty and Malik, 2012). This mode of evolution is especially observed in co-evolving host–host interfaces, where mutations can be potentially deleterious, and strong purifying selection acts to maintain the integrity of their binding sites (Franzosa and Xia, 2011; Daugherty and Malik, 2012). However, there are exceptions to this pattern. Taking into account that some endogenous binding sites overlap with exogenous interfaces (Franzosa and Xia, 2011), shared residues of endogenous interfaces can evolve faster due to competition with a viral antagonist (Elde and Malik, 2009).", "score": 15.758340881307905, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "Antiviral prescription drugs are in many cases nucleoside analogues (fake DNA building blocks), which viruses mistakenly incorporate into their genomes during replication. The life-cycle of the virus is then halted because the newly synthesised DNA is inactive. This is because these analogues lack the hydroxyl groups which, together with phosphorus atoms, link together to form the strong "backbone" of the DNA molecule. This is known as DNA chain termination.
Some viruses can cause lifelong or chronic infections, whereby the viruses continue to replicate in the body despite the host's defence mechanisms. 
This is common in hepatitis B virus and hepatitis C virus infections.
Record the file location of each offending entry before you remove it. You will need to find these files later in order to delete them.
a. Any of various submicroscopic agents that infect living organisms, often causing disease, and that consist of a single or double strand of RNA or DNA surrounded by a protein coat. Unable to replicate without a host cell, viruses are usually not considered living organisms.
wikiHow Contributor: Do not go into any personal accounts or legal documents that require passwords, including social media sites!
HitmanPro is an anti-virus program that describes itself as a second-opinion scanner, intended to be used alongside another anti-virus program that you may already have installed. If malware slips past your anti-virus software, HitmanPro will then step in to detect it.
Attachment is the intermolecular binding between viral capsid proteins and receptors on the outer membrane of the host cell. The specificity of binding determines the host species and cell types that are receptive to viral infection. For example, HIV infects only human T cells, because its surface protein interacts with CD4 and chemokine receptors on the surface of the T cell itself.
Has your computer been infected by a virus? Viruses and other malware can pose a significant security risk to your data and personal information, and can have a drastic impact on your computer's performance.
RNA interference is an important innate defence against viruses. Many viruses have a replication strategy that involves double-stranded RNA (dsRNA). 
When such a virus infects a cell, it releases its RNA molecule or molecules, which immediately bind to a protein complex known as Dicer that cuts the RNA into smaller pieces.", "score": 15.758340881307905, "rank": 79}, {"document_id": "doc-::chunk-10", "d_text": "(A) In physiological conditions the cell surface Ephrin (gray protein) binds its Ephrin type-A receptor 4 (blue protein, PDB 3GXU). (B) However, during infections of the Paramyxovirus Hendra henipavirus, the Ephrin interface is also used by the viral Glycoprotein G, which by convergence evolved its binding capacity (red protein, PDB 2VSK). (C) As shown in the superposition, Ephrin type-A receptor 4 and the viral Glycoprotein G can compete for the same interface on the Ephrin surface.
Protein Interaction Data: Experimental Approaches and Limitations
Among the experimental techniques applied to identify virus–host protein interactions, Y2H and AP-MS are the most extensively used, together contributing to more than 90% of the information available in public databases (Guirimand et al., 2014), with the remaining data being obtained by GST-pull-down, luminescence, protease assay, surface plasmon resonance (SPR), and other techniques.
Y2H is efficient at detecting weak/transient domain–motif interactions (Gavin et al., 2006); however, one drawback is that it does not provide precise information about the domains involved in the interactions (Riley et al., 2005; Lee et al., 2006; Segura-Cabrera et al., 2013). Another disadvantage of Y2H screens is that PPI detection takes place within the nucleus. As some proteins are not naturally found in this cellular compartment, they are usually not identified as interactors, resulting in a bias that increases the proportion of false negatives (Von Mering et al., 2002). 
Additionally, a large number of entries in some databases describe binary interactions that are not physiologically feasible, i.e., even though two proteins are biochemically able to interact, if they do not share the same temporal and spatial compartment in a given biological process, their physical contact would not happen in natural conditions (Russell and Aloy, 2008).
AP-MS works in a different way. In these assays, proteins of interest (baits) are tagged with a recombinant fusion tag, which is then used to purify baits and their respective interacting partners (preys).", "score": 15.758340881307905, "rank": 80}, {"document_id": "doc-::chunk-15", "d_text": "This difference is analogous to the important distinction between the rate of mutation and the rate of substitution.
Negative selection against non-functional hybrids or those with decreased fitness may impose the strongest constraints of all on the appearance of recombinants. In viruses for which the evolutionary costs of recombination outweigh the benefits, though they may be mechanistically capable of genetic exchange, strong selection will guarantee the elimination of recombinants.
Determining the constraints that operate on recombination offers a promising path to a fuller understanding of its importance in the evolution of RNA viruses. However, outlines of the big picture are already clearly visible. It seems certain that genetic exchange plays a key role in several virus groups and that it has shaped a good deal of the diversity – both ancient and recent – that exists in them. Thus, evolutionary knowledge about recombination impacts on many aspects of the study of RNA viruses, from the broadest investigations of virus taxonomy to the finest details of molecular epidemiology and vaccine design. 
A flood of viral gene sequence data and the availability of new and powerful phylogenetic methods is making the detection and characterization of recombination ever easier, and the list of viruses showing recombination continues to grow. The evidence for recombination, not only between closely related viruses but also among distantly related viruses, positive-sense and negative-sense viruses, DI RNAs and viruses, satellite RNAs and viruses, and even with host RNAs, suggests that almost any genetic material can be grist for the polymerase's mill. Of all the tricks up the viral evolutionary sleeve, surely recombination is one of the most
Aaziz, R. & Tepfer, M. (1999). Recombination in RNA viruses and in virus-resistant transgenic plants. Journal of General Virology 80,
Allison, R. F., Janda, M. & Ahlquist, P. (1989). Sequence of cowpea chlorotic mottle virus RNA 2 and RNA 3 and evidence of a recombination event during bromovirus evolution. Virology 172, 321–330.
Ball, L. A. (1997). Nodavirus RNA recombination.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "The production of viral miRNAs and the regulation of cellular Dicer levels during infection implicate RNA silencing mechanisms in both viral fitness and potential host defense strategies.
Stadler, BM. Interaction of a Mammalian Virus with Host RNA Silencing Pathways: A Dissertation. (2007). University of Massachusetts Medical School. GSBS Dissertations and Theses. Paper 322. 
https://escholarship.umassmed.edu/gsbs_diss/322\nRights and Permissions\nCopyright is held by the author, with all rights reserved.", "score": 15.652736444556414, "rank": 82}, {"document_id": "doc-::chunk-3", "d_text": "Unfortunately, such scarcity of information currently limits the conclusions that can be drawn from such markedly incomplete protein data of large DNA viruses.\nInterfaces of Protein Interactions in Virus–Host Systems\nIn order for proteins to interact with each other, their respective binding sites must be in direct physical contact, either in a stable or transient mode (Byrum et al., 2012). Such binding sites are called “interfaces”: three-dimensional structures formed by sets of amino acid residues directly responsible for the recognition of binding partners (Figure 1; Franzosa and Xia, 2011). Deleterious or beneficial mutations occur especially on interfaces, affecting binding affinity due to impairment or improvement of protein electrostatic and structural properties (Daugherty and Malik, 2012).\nFigure 1. Structure of a DDI between a host domain (V-set domain, blue) and a viral domain (Herpes glycop D domain, red; PDB 3U82). By rotating each protein 90° outwards, the residues located at no more than 4.5 Å away from its partner's surface are colored yellow, indicating the interface residues.\nIn the context of virus–host PPIs, protein–binding sites can be classified as endogenous or exogenous interfaces. Endogenous interfaces are responsible for mediating interactions between proteins belonging to viral or host proteomes, i.e., host–host or virus–virus PPIs. On the other hand, exogenous interfaces mediate interactions between proteins belonging to distinct proteomes, as seen in virus–host PPIs (Franzosa and Xia, 2011).\nIn virus–host systems, extensive competition for interfaces is common between endogenous and exogenous partners, and viral proteins frequently interfere with host–host protein interactions (Franzosa et al., 2012). 
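The 4.5 Å criterion used in the Figure 1 caption to identify interface residues can be sketched as a simple distance-cutoff scan. In this toy example the coordinates and residue numbering are invented, and each residue is reduced to a single pseudo-atom; real structural analyses check every heavy atom of every residue:

```python
from math import dist

CUTOFF = 4.5  # Å, matching the interface definition in the text

def interface_residues(chain_a, chain_b, cutoff=CUTOFF):
    """Return ids of chain_a residues lying within `cutoff` Å of any
    residue of chain_b. Chains are lists of (residue_id, (x, y, z))."""
    return sorted({
        res_a
        for res_a, xyz_a in chain_a
        for _res_b, xyz_b in chain_b
        if dist(xyz_a, xyz_b) <= cutoff
    })

# Toy host and viral chains (coordinates in Å, entirely made up).
host  = [(1, (0.0, 0.0, 0.0)), (2, (3.0, 0.0, 0.0)), (3, (20.0, 0.0, 0.0))]
virus = [(10, (0.0, 4.0, 0.0)), (11, (25.0, 0.0, 0.0))]

print(interface_residues(host, virus))  # [1]: only residue 1 is close enough
```

The same scan run with the chains swapped would yield the interface residues on the viral side.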
Such competition is so frequent that most of the proteins that have at least one known host–host interaction are also involved in virus–host interactions (Franzosa and Xia, 2011).
Having a broader understanding of virus–host PPIs and their interfaces is crucial for the development of new antiviral therapies, such as the design of small molecules capable of binding and blocking essential interactions of viral processes (Bailer and Haas, 2009; Gardner et al., 2015).", "score": 13.897358463981183, "rank": 83}, {"document_id": "doc-::chunk-6", "d_text": "The combination of pure red cell aplasia due to IFN-α and haemolytic anaemia due to ribavirin was suggested to have accounted for this possible interaction. There was an increased incidence of adverse skin reactions, mostly eczema, malar erythema, and lichenoid eruptions, in 33 patients who received a combination of IFN-α with ribavirin compared with 35 patients treated with IFN-α alone.
In a clinical trial of a combination of telbivudine 600 mg daily with subcutaneous pegylated IFN-α2a, 180 micrograms once weekly for chronic hepatitis B, there was an increased risk of peripheral neuropathy. 
The manufacturers state that the combination of Pegasys with telbivudine is contraindicated.
In 13 patients with metastatic renal cell carcinoma, the combination of IFN-α2a (27 MIU/week) and thalidomide produced severe neurological toxicity in four patients, an incidence that was considered to be far greater than would be expected with either drug alone.
Both IFN-α [46,47,48,49] and IFN-β reduce theophylline clearance.
Many viruses have developed mechanisms of resistance to interferons or block their actions, including adenoviruses, hepatitis C, influenza and parainfluenza viruses, Japanese encephalitis virus, and paramyxoviruses [51,52].
Experience in coronavirus infections other than COVID-19
In vitro studies
IFN-β1a was a potent inhibitor of SARS-CoV-1 in cell culture when applied before or 1 hour after inoculation, reducing viral replication by 2 log units.
Wild-type mice develop a mild bronchiolitis when infected with SARS-CoV-1 (which causes SARS), but Stat1 –/– mice, which lack the STAT pathway, have increased lung damage, continued viral replication, and damage to other organs.
In a retrospective survey of 51 patients with MERS-CoV infection, 23 of whom were treated with IFN-β, survival was more likely in the patients treated with IFN-β. It is likely that it was given subcutaneously. As treatments were not allocated randomly, the result is not helpful in determining the possible therapeutic benefit of IFN-β.", "score": 13.897358463981183, "rank": 84}, {"document_id": "doc-::chunk-4", "d_text": "For this reason, these viruses are called positive-sense RNA viruses. In other RNA viruses, the RNA is a complementary copy of mRNA and these viruses rely on the cell's or their own enzyme to make mRNA. These are called negative-sense RNA viruses. In viruses made from DNA, the method of mRNA production is similar to that of the cell. 
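The distinction drawn above reduces to one rule: a positive-sense genome already is the mRNA, while a negative-sense genome must first be copied into its complement. A minimal sketch (invented sequences and function name; pairing is simplified to per-base complements, ignoring the antiparallel orientation of the real template):

```python
# RNA base-pairing: A-U and G-C.
_COMPLEMENT = str.maketrans("AUGC", "UACG")

def mrna_from_genome(genome: str, sense: str) -> str:
    """Positive-sense RNA is translated directly; negative-sense RNA
    is first copied into its complement by a polymerase."""
    if sense == "+":
        return genome                         # the genome itself serves as mRNA
    return genome.translate(_COMPLEMENT)      # the complementary copy is the mRNA

print(mrna_from_genome("AUGGCA", "+"))  # AUGGCA
print(mrna_from_genome("UACCGU", "-"))  # AUGGCA
```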
The species of viruses called retroviruses behave completely differently: they have RNA, but inside the host cell a DNA copy of their RNA is made. This DNA is then incorporated into the host's, and copied into mRNA by the cell's normal pathways.\nWhen a virus infects a cell, the virus forces it to make thousands more viruses. It does this by making the cell copy the virus's DNA or RNA, making viral proteins, which all assemble to form new virus particles.\nThere are six basic, overlapping stages in the life cycle of viruses in living cells:\n- Attachment is the binding of the virus to specific molecules on the surface of the cell. This specificity restricts the virus to a very limited type of cell. For example, the human immunodeficiency virus (HIV) infects only human T cells, because its surface protein, gp120, can only react with CD4 and other molecules on the T cell's surface. Plant viruses can only attach to plant cells and cannot infect animals. This mechanism has evolved to favour those viruses that only infect cells in which they are capable of reproducing.\n- Penetration follows attachment; viruses penetrate the host cell by endocytosis or by fusion with the cell.\n- Uncoating happens inside the cell when the viral capsid is removed and destroyed by viral enzymes or host enzymes, thereby exposing the viral nucleic acid.\n- Replication of virus particles is the stage where a cell uses viral messenger RNA in its protein synthesis systems to produce viral proteins. The RNA or DNA synthesis abilities of the cell produce the virus's DNA or RNA.\n- Assembly takes place in the cell when the newly created viral proteins and nucleic acid combine to form hundreds of new virus particles.\n- Release occurs when the new viruses escape or are released from the cell. Most viruses achieve this by making the cells burst, a process called lysis. 
Other viruses such as HIV are released more gently by a process called budding.
Effects on the host cell
The range of structural and biochemical effects that viruses have on the host cell is extensive. These are called cytopathic effects.", "score": 13.897358463981183, "rank": 85}, {"document_id": "doc-::chunk-8", "d_text": "Experiments with RNA viruses, one of only a handful of organisms in which this hypothesis has actually been put to the test, have generally supported its operation and shown decreased fitness for populations in which it occurs (Chao et al., 1992). Experimental evidence (Chao et al., 1997) also shows that sex (in this case reassortment) can reduce the mutational load in a population and so help it escape from accumulated deleterious effects. Although such direct experimental evidence has yet to demonstrate a similar advantage for recombination, in principle it too could serve to efficiently remove disadvantageous alleles from a population by combining mutation-free parts of different genomes. Indeed, suggestions have been made that reassortment in segmented RNA viruses and recombination in monopartite RNA viruses represent alternative evolutionary strategies for genetic exchange in this group (Chao et al., 1992).
While this idea is fascinating, it is interesting to note that reassortment and recombination are not mutually exclusive and that several segmented viruses also experience recombination, sometimes frequently. These include the bacteriophage φ6 (Mindich et al., 1992), rotaviruses (Suzuki et al., 1998), influenza A virus (Khatchikian et al., 1989), hantaviruses (Sibold et al., 1999), flock house virus (Li & Ball, 1993) and many plant viruses (Bujarski & Kaesberg, 1986; Greene & Allison, 1994; Robinson, 1994; Rott et al., 1991). As if to prove this point, Masuta et al. 
(1998) recently reported an interspecific hybrid of two cucumoviruses that arose by both reassortment and recombination.
Nonetheless, a great deal of evidence indicates that some RNA viruses do benefit from the genome-purging effects of recombination. A multitude of experimental studies have shown that weak or even non-replicative mutant strains can recombine to form viable, highly fit viruses.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "I have as much respect for viruses’ ability to manipulate their host as the next guy, and I’m probably more of a fan of viral immune evasion than that next guy. But I still do think that coincidences do happen.
A paper from John Trowsdale and colleagues shows that Kaposi’s Sarcoma Herpesvirus (KSHV) destroys HFE, and they suggest that this is "a molecular mechanism targeted by KSHV to achieve a positive iron balance." Without dissing their observations (which are perfectly convincing) I’m not entirely convinced by their conclusion. Still, it’s an interesting suggestion, and I’m keen to see some kind of followup to it.
The reason I’m not convinced is that this has the look of a spillover effect to me. We already know that KSHV attacks MHC class I molecules via its K3 and K5 molecules, and that it does so by targeting the cell-surface pool to lysosomes. This is a very familiar pattern; most, if not all, herpesviruses block MHC class I molecules. Although it’s been hard to formally prove "why" herpesviruses do this, the general assumption is that this allows the virus to at least partially avoid recognition by T cells, and this lets the virus survive better — perhaps because it builds a larger population very early, or perhaps because it is able to last longer late, or whatever.
At any rate, there’s a fairly simple and logical reason why it would make sense for KSHV to block MHC class I molecules, and as I say they do, in fact, do this. Now, why would they attack HFE? 
HFE is an iron-binding protein that’s involved in the regulation of iron metabolism. Why would KSHV be interested in iron metabolism?
Quite a few pathogens are actually very concerned about iron metabolism, of course. Bacteria generally need iron for their metabolism, and pathogenic bacteria have evolved ways of grabbing iron away from their hosts (while their hosts have evolved ways of holding on tighter and tighter to that iron). But in general viruses, as opposed to bacteria, don’t have specific needs for iron. Trowsdale’s group makes the argument — and offers some experimental evidence — that KSHV does in fact want iron.", "score": 13.190487705459049, "rank": 87}, {"document_id": "doc-::chunk-8", "d_text": "Blocking of the alpha interferon-induced Jak-Stat signaling pathway by Japanese encephalitis virus infection. J. Virol. 78 (17): 9285–94.
- ↑ Sen GC (2001). Viruses and interferons. Annu. Rev. Microbiol. 55: 255–81.
- ↑ Alcamí A, Symons JA, Smith GL (December 2000). The vaccinia virus soluble alpha/beta interferon (IFN) receptor binds to the cell surface and protects cells from the antiviral effects of IFN. J. Virol. 74 (23): 11230–9.
- ↑ Miller JE, Samuel CE (September 1992). Proteolytic cleavage of the reovirus sigma 3 protein results in enhanced double-stranded RNA-binding activity: identification of a repeated basic amino acid motif within the C-terminal binding region. J. Virol. 66 (9): 5347–56.
- ↑ Chang HW, Watson JC, Jacobs BL (June 1992). The E3L gene of vaccinia virus encodes an inhibitor of the interferon-induced, double-stranded RNA-dependent protein kinase. Proc. Natl. Acad. Sci. U.S.A. 89 (11): 4825–9.
- ↑ Minks MA, West DK, Benvin S, Baglioni C (October 1979). Structural requirements of double-stranded RNA for the activation of 2',5'-oligo(A) polymerase and protein kinase of interferon-treated HeLa cells. J. Biol. Chem. 254 (20): 10180–3.
- ↑ http://www.pathobiologics.org/ivphc/ref/iav121604.doc
- ↑ Bhatti Z, Berenson CS (2007). 
Adult systemic cat scratch disease associated with therapy for hepatitis C. BMC Infect Dis 7: 8.\n- ↑ Nagano Y, Kojima Y (October 1954). Pouvoir immunisant du virus vaccinal inactivé par des rayons ultraviolets. C. R. Seances Soc. Biol. Fil. 148 (19-20): 1700–2.\n- ↑ Nagano Y, Kojima Y (1958).", "score": 11.600539066098397, "rank": 88}, {"document_id": "doc-::chunk-8", "d_text": "(C) Superposition of the interologs: both PPIs are found in distinct but homologous systems.\nHGT is a process of genome recombination by means of which some viruses acquire one or more genes from non-parental organisms, a mechanism of evolution especially observed among large DNA viruses, which usually acquire new genes from other viruses, bacteria, or from their hosts (Shackelton and Holmes, 2004). Once a viral genome has incorporated a new gene, the protein product can be optimized and integrated into its virus–host network (Daugherty and Malik, 2012). Large dsDNA viruses, such as, poxviruses and herpesviruses, have been shown to be remarkably prone to acquire and domesticate exogenous genes within several functional categories (Raftery et al., 2000; Hughes and Friedman, 2005). Figure 4B shows an interaction between a human CDK6 and a Cyclin encoded by the Human Herpesvirus 8. This interaction is part of an immune system process (GO:0006955), and takes place in the extracellular region (GO:0005576). The viral cyclin (vCyclin) was probably acquired by HGT, and is capable of modulating cellular growth (GO:0005125) in similar ways to cellular cyclins D (Figure 4; Godden-Kent et al., 1997).\nFigure 4. A viral PPI interaction derived from HGT. (A) In host protein networks, CDK6 (gray protein) originally establishes interaction with human Cyclin-A/CCNT1 (blue protein, PDB 3MI9). (B) Interestingly, a viral cyclin encoded by HHV-8, probably acquired by HGT (red protein, PDB 1G3N), is also able to establish similar interactions. 
(C) As both proteins share the same domain (Cyclin_N; PF00134), the structural superposition between the human cyclin (A) and its viral cognate (B) reveals their folding and binding similarities.
Gene duplication (paralogy) is another common mechanism of protein network evolution, and duplicated genes are common in some viral genomes. After a duplication event, each paralog can undergo independent mutations, giving rise to new biological functions (Barabasi and Oltvai, 2004; Ratmann et al., 2009).", "score": 11.600539066098397, "rank": 89}, {"document_id": "doc-::chunk-100", "d_text": "In addition, we created a virus that expresses GFP and thus allows convenient monitoring of virus replication. These new tools represent a
... consequence. Protein spike similar. HE gene absent. 2787 nucleotides. Largest genome. Jumps species by genetic deletion. < 300 compounds screened. Glycyrrhizin (liquorice/mullatha) seems attractive. Antivirals not effective. Vaccines – animal model only in monkeys. Killed corona or knockout weakened virus as targets.
Verbruggen, Paul; Ruf, Marius; Blakqori, Gjon; Överby, Anna K; Heidemann, Martin; Eick, Dirk; Weber, Friedemann
La Crosse encephalitis virus (LACV) is a mosquito-borne member of the negative-strand RNA virus family Bunyaviridae. We have previously shown that the virulence factor NSs of LACV is an efficient inhibitor of the antiviral type I interferon system. A recombinant virus unable to express NSs (rLACVdelNSs) strongly induced interferon transcription, whereas the corresponding wt virus (rLACV) suppressed it. Here, we show that interferon induction by rLACVdelNSs mainly occurs through the signaling pathway leading from the pattern recognition receptor RIG-I to the transcription factor IRF-3. NSs expressed by rLACV, however, acts downstream of IRF-3 by specifically blocking RNA polymerase II-dependent transcription. 
Further investigations revealed that NSs induces proteasomal degradation of the mammalian RNA polymerase II subunit RPB1. NSs thereby selectively targets RPB1 molecules of elongating RNA polymerase II complexes, the so-called IIo form. This phenotype has similarities to the cellular DNA damage response, and NSs was indeed found to transactivate the DNA damage response gene pak6. Moreover, NSs expressed by rLACV boosted serine 139 phosphorylation of histone H2A.X, one of the earliest cellular reactions to damaged DNA. However, other DNA damage response markers such as up-regulation and serine 15 phosphorylation of p53 or serine 1524 phosphorylation of BRCA1 were not triggered by LACV infection.", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-4", "d_text": "A classic example of virus–host interactions being blocked at the interface level is the action of Maraviroc as an inhibitor of HIV-1 entry into host cells (PDB 4MBS). This drug binds the cellular co-receptor CCR5, preventing it from interacting with GP120 (Figure 2), an essential step of HIV-1 infection (Macarthur and Novak, 2008).
Figure 2. Representation of CCR5 (blue) bound with a Maraviroc molecule (yellow), superposed with the HIV-1 GP120 V3 loop (red), as proposed by Tamamis and Floudas (2014). As depicted, the drug occupies a CCR5 pocket, blocking its interaction with GP120.
Modes of Protein Interaction
Stable PPIs commonly rely on large interfaces, whilst transient ones involve short linear peptides, such as sequence motifs of 15 residues or less (Segura-Cabrera et al., 2013). Proteins that show a wide range of binding partners are called "hubs." Hubs with only a few interfaces are likely to interact transiently with different partners at different times (date hubs; Han et al., 2004), usually via domain–motif interactions (Franzosa and Xia, 2011). Conversely, proteins showing multiple interfaces tend to establish simultaneous interactions with multiple partners. 
Such proteins (party hubs) are likely to arrange themselves in complexes, via stable DDIs (Han et al., 2004).
Due to their mode of interaction and number of interface residues, party hubs tend to evolve slowly, as changes in their residues are likely to impair some interactions with specific partners (Fraser et al., 2002). The opposite scenario is observed among proteins that establish transient interactions, which usually evolve faster (Teichmann, 2002). Interestingly, most viral proteins interfering with cell signaling and regulatory pathways perform transient interactions with host proteins (Perkins et al., 2010), leading to severe changes in cellular metabolism (Segura-Cabrera et al., 2013). Unfortunately, compared to stable (domain–domain) interactions, transient (domain–motif) interactions are under-represented in PPI databases, mainly due to limitations associated with the methods used so far to obtain protein–protein interaction data (Russell et al., 2004).", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-3", "d_text": "How can we reduce the spread of viruses?
Masking, proper handwashing, use of hand sanitizers and social distancing reduce the spread of many viruses. Antiviral medications and vaccines can eliminate or reduce the severity of diseases caused by viruses.
Medicines used to treat bacterial infections do not kill viruses.
How do viral vaccines work?
Previously, viral vaccines contained weakened or dead viruses, with both forms being incapable of causing disease. Now, scientists have an additional tool in their toolkit, producing vaccines using a virus's genome sequence. The viral genome has the information needed to create viral proteins, the active component of the vaccine to which the immune system responds. 
When injected, these DNA or RNA molecules are used by the host to produce specific viral proteins, and the immune system then recognizes the viral proteins as foreign, sparking a response from multiple types of white blood cells.\nOne such class of white blood cells, called B cells, produces a particular type of protein called an antibody. Antibodies bind to molecules on the surface of the virus and neutralize the virus to prevent it from replicating.\nOnce the human body successfully produces antibodies against a virus, its arsenal is ready for defense when the immune system comes in contact with the same virus in the future.\nWhy do some viruses affect certain people more negatively than others?\nThe exact reason why viruses affect people in different ways is under active study. Researchers attribute it to a combination of genetic and environmental factors. People with existing health conditions, such as diabetes, cardiovascular disease or cancer, are more vulnerable to a severe viral infection.\nSome individuals also have specific genomic variants that can influence how a virus interacts with their body. For example, some relatively rare genomic variants make people susceptible to severe viral and other infections. On the flip side, some genomic variants protect specific individuals from viral infections. Researchers continue to study these mechanisms, including the relationship between the level of viral infection and specific genomic variants.\nDoes our body have viral DNA that doesn’t cause disease?\nYes. The human genome contains a considerable amount of DNA that previously existed in viruses. These viral sequences are remnants of past viral infections. Most of these sequences originally came from retroviruses, a type of virus that can insert one copy of its genome into the DNA of a host organism (such as a human). 
As the host cell makes copies of its own genome, it copies the viral DNA as well.", "score": 9.460542230531878, "rank": 92}, {"document_id": "doc-::chunk-4", "d_text": "The type 1 interferons are then released and bind to the IFNAR1 and IFNAR2 receptors expressed on the surface of a neighboring cell. Once interferon has bound to its receptors on the neighboring cell, the signaling proteins STAT1 and STAT2 are activated and move to the cell's nucleus. This triggers the expression of interferon-stimulated genes, which code for proteins with antiviral properties. EBOV's VP24 protein blocks the production of these antiviral proteins by preventing the STAT1 signaling protein in the neighboring cell from entering the nucleus. The VP35 protein directly inhibits the production of interferon-beta. By inhibiting these immune responses, EBOV may quickly spread throughout the body.", "score": 8.086131989696522, "rank": 93}, {"document_id": "doc-::chunk-1", "d_text": "PLoS Pathog 9(2): e1003164. doi:10.1371/journal.ppat.1003164
Editor: Michael S. Diamond, Washington University School of Medicine, United States of America
Received: July 13, 2012; Accepted: December 14, 2012; Published: February 7, 2013
Copyright: © 2013 Runckel et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work and JDR are supported by the Howard Hughes Medical Institute. CR was supported by the Genentech Fellowship at UCSF. RA is supported by grants from the NIAID (R01 AI36178 and AI40085). 
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Recombination in RNA viruses is a source of genetic diversity and rapid evolutionary change and may result in the emergence of new strains by facilitating shifts in cell tropism, antigen profile and pathogenicity. The mechanism of RNA virus recombination can proceed through re-assortment of genome segments, as is the case for the Influenza A virus, or through the generation of chimeric viral genomes during replication for non-segmented viruses. This recombination is frequent in the wild with different recombinant genotypes rising to dominance and declining over a timescale of only a few years. Sequencing of large numbers of viral isolates has revealed instances of intra-species recombination in many human-infecting RNA viruses with major public health implications, including norovirus, astrovirus, flavivirus and at least eight species of picornavirus. Rare inter-species recombinants, such as the enteroviruses HEV90 and HEV109, have also been described.
Viral recombination not only impacts public health by the evolution of new viral strains, but may also undermine live-attenuated vaccines by producing a pathogenic strain derived from the attenuated strains. The oral poliovirus vaccine (OPV) is the most famous example, where three attenuated serotypes of poliovirus are typically administered simultaneously.", "score": 8.086131989696522, "rank": 94}, {"document_id": "doc-::chunk-16", "d_text": "1977, 22: 844-847.
- Refardt D: Within-host competition determines reproductive success of temperate bacteriophages. ISME J. 2011, 5: 1451-1460. 10.1038/ismej.2011.30.
- Priess H, Kamp D, Kahmann R, Brauer B, Delius H: Nucleotide sequence of the immunity region of bacteriophage Mu. 
Mol Gen Genet. 1982, 186: 315-321. 10.1007/BF00729448.
- Berngruber TW, Weissing FJ, Gandon S: Inhibition of superinfection and the evolution of viral latency. J Virol. 2010, 84: 10200-10208. 10.1128/JVI.00865-10.
- Vanvliet F, Couturier M, Desmet L, Faelen M, Toussaint A: Virulent Mutants of Temperate Phage-Mu-1. Mol Gen Genet. 1978, 160: 195-202. 10.1007/BF00267481.
- Benzer S: Fine Structure of a Genetic Region in Bacteriophage. Proc Natl Acad Sci U S A. 1955, 41: 344-354. 10.1073/pnas.41.6.344.
- Susskind MM, Botstein D, Wright A: Superinfection exclusion by P22 prophage in lysogens of Salmonella typhimurium. III. Failure of superinfecting phage DNA to enter sieA+ lysogens. Virology. 1974, 62: 350-366. 10.1016/0042-6822(74)90398-5.
- Susskind MM, Botstein D: Superinfection exclusion by lambda prophage in lysogens of Salmonella typhimurium. Virology. 1980, 100: 212-216.
A lane that experienced severe over-clustering, which exacerbates the decoupling effect, was discarded from analysis.\nSynthetic poliovirus constructs were submitted to GenBank (see materials and methods).\nFitness characterization of construct strains. A. One-step growth curves. Virus strains were applied to HeLa monolayers, washed and time-point samples frozen every two hours (x-axis) in triplicate (error bars). Samples were thawed and the titer measured by plaque assay (y-axis). B. Plaques formed by construct strains were not visually different from the wild type. C. Competition assay. Viruses were co-infected at equal titer, harvested and passaged into fresh cells four times. Viral RNA was extracted, amplified by strain-conserved primers, cloned and transformed into bacteria, and the relative quantity of each strain determined by strain specific colony PCR.\nComparison of biological replicates. HeLa monolayers were co-infected in parallel and proceeded through all steps of library preparation and sequencing separately. A. The recombination frequency at each marker pair is presented as a separate data point. Recombination was not observed at two data points; these are not included in the figure. B. Rank ordered list of marker pairs and corresponding recombination frequency.\nComparison of experimental vs. no-coinfection control datasets. A. Non-zero recombination frequencies are plotted comparing the experimental results with the control results to determine the similarity of artifactual recombination to biological recombination. B. Rank-ordered plot of experimental vs. 
control datasets, with the control x-axis expanded by 26-fold to display equivalent scale between the two datasets.", "score": 8.086131989696522, "rank": 96}]} {"qid": 40, "question_text": "What was the BALCO scandal and when did it come to light?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "Bay Area Laboratory Co-operative
The Bay Area Laboratory Co-operative (BALCO) (1984–2003) was an American company led by founder and owner Victor Conte. In 2003, journalists Lance Williams and Mark Fainaru-Wada investigated the company's role in a sports doping scandal later referred to as the BALCO Affair. BALCO marketed tetrahydrogestrinone ("the Clear"), a then-undetected, performance-enhancing steroid developed by chemist Patrick Arnold. Conte, BALCO vice president James Valente, weight trainer Greg Anderson and coach Remi Korchemny had supplied a number of high-profile sports stars from the United States and Europe with "the Clear" and human growth hormone for several years.
Headquartered in Burlingame, California, BALCO was founded in 1984. Officially, BALCO was a service business for blood and urine analysis and food supplements. In 1988, Victor Conte offered free blood and urine tests to a group of athletes known as the BALCO Olympians. He was then allowed to attend the Summer Olympics in Seoul, South Korea. From 1996, Conte worked with well-known American football star Bill Romanowski, who proved useful in establishing new connections to athletes and coaches such as Korchemny. Conte and Korchemny shortly thereafter founded the ZMA Track Club for marketing purposes, whose well-known members included sprinters Marion Jones and Tim Montgomery. In 2000, Conte managed to contact American baseball star Barry Bonds via Greg Anderson, a coach working in a nearby fitness studio. 
Bonds then delivered contacts to other baseball professionals.
In 2003, the United States Attorney for the Northern District of California began investigating BALCO. U.S. sprint coach Trevor Graham had made an anonymous phone call to the United States Anti-Doping Agency (USADA) in June 2003 accusing a number of athletes of being involved in doping with a steroid that was not detectable at the time. He also named Victor Conte as the source of the steroid. As evidence, Graham delivered a syringe containing traces of tetrahydrogestrinone, nicknamed "the Clear."
Shortly after, Don Catlin, MD, the founder of the UCLA Olympic Analytical Laboratory, developed a testing process for tetrahydrogestrinone (THG).", "score": 43.8778799829098, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "Now able to detect the new substance, he tested 550 existing samples from athletes, of which 20 proved to be positive for THG.
On September 3, 2003, agents of the Internal Revenue Service, Food and Drug Administration, San Mateo Narcotics Task Force, and United States Anti-Doping Agency conducted a house search at the BALCO facilities. Besides lists of BALCO customers, they found containers in a BALCO field warehouse whose labels indicated steroids and growth hormones. In a house search at Anderson's place two days later, steroids, $60,000 in cash, name lists and dosage plans were found.
Among the athletes listed in the record of BALCO customers were:
- MLB players: Barry Bonds, Benito Santiago, Jeremy Giambi, Bobby Estalella, Armando Rios
- Athletes: Hammer thrower John McEwen, shot putters Kevin Toth and C.J. 
Hunter, sprinters Dwain Chambers, Marion Jones, Tim Montgomery, Zhanna Block and Kelli White, middle-distance runner Regina Jacobs.\n- Boxer Shane Mosley.\n- Cycling: Tammy Thomas.\n- NFL players: A number from the Oakland Raiders, including Bill Romanowski, Tyrone Wheatley, Barrett Robbins, Chris Cooper and Dana Stubblefield.\n- Judo: Conte was also connected with supplying \"vitamin supplements\" to the 1988 U.S. Olympic judo team coached by Willy Cahill of San Bruno, California.\n- Christos Tzekos and his athletes were initially connected to BALCO but later cleared.\nPatrick Arnold, BALCO's chemist, alleges that Bonds and Sheffield were given \"the Clear,\" though the athletes deny knowing about it and Arnold does not claim to have witnessed it.\nIn April 2005, Lance Williams and Mark Fainaru-Wada were honored with the journalist prize of the White House Correspondents' Association. In 2006, they published the book Game of Shadows, which consists of a summary of about 200 interviews and 1,000 documents they collected for their research.\nOn July 15, 2005, Conte and Anderson cut plea bargains, pleaded guilty to illegal steroid distribution and money laundering and avoided an embarrassing trial. Conte spent four months in prison. Anderson was incarcerated for 13½ months.", "score": 38.56294373413571, "rank": 2}, {"document_id": "doc-::chunk-3", "d_text": "On May 29, 2008, Trevor Graham was convicted by a federal jury on one count of lying to federal investigators about his relationship to an admitted steroids dealer, and the jury deadlocked on two other charges. Sentencing was set for September 5, 2008. He was sentenced to one year of house arrest on October 21, 2008.\n- Fainaru-Wada, Mark; Williams, Lance (December 25, 2003). \"Barry Bonds: Anatomy of a scandal\". Seattle Post-Intelligencer (Seattle PI). San Francisco Chronicle. Retrieved June 29, 2017.\n- Harrington, Mark (November 1, 2003). 
\"Success a Bitter Pill / College dropout moved BALCO into big leagues before charges\". Newsday. Retrieved June 29, 2017.\n- \"Lawyers for former track coach Christos Tzekos say investigations show no ties to BALCO\". IHT. AP. October 30, 2007. Retrieved January 12, 2008.\n- Schmidt, Michael S. (July 25, 2007). \"Chemist Says Sheffield and Bonds Used Drugs\". The New York Times. ISSN 0362-4331. Retrieved June 29, 2017.\n- \"Conte released from prison, calls book 'full of lies'\". ESPN.com. AP Press. March 30, 2006. Retrieved June 29, 2017.\n- Bonds indicted on perjury, obstruction of justice charges, Lance Williams, Jaxon Van Derbeken, San Francisco Chronicle, November 15, 2007\n- Maik Grossekathöfer: Leck im System., Der Spiegel, 40/2006, S. 140, (German)\n- Reporters in BALCO Case Sentenced to Jail, ESPN, September 22, 2006\n- Reporters Must Testify Over Bonds Leak, USA Today, August 15, 2006\n- Egelko, Bob (February 14, 2007). \"Attorney pleads guilty to leaking BALCO testimony\". The San Francisco Chronicle.", "score": 34.674576404766675, "rank": 3}, {"document_id": "doc-::chunk-4", "d_text": "- Williams: I Never Thought Bonds Indictment Would Occur Archived December 17, 2007, at the Wayback Machine By: Strupp, Joe Editor and Publisher November 17, 2007\n- Elias, Paul (March 21, 2011). \"Barry Bonds perjury trial gets under way\". Associated Press. Retrieved March 21, 2011.[permanent dead link]\n- \"Barry Bonds convicted of obstruction of justice in performance-enhancing-drugs case\". Los Angeles Times. April 13, 2011. Archived from the original on April 28, 2011. Retrieved April 16, 2011.\n- \"Barry Bonds found guilty of obstruction,\". ESPN. April 14, 2011. Archived from the original on April 29, 2011. Retrieved April 16, 2011.\n- Mintz, Howard (April 4, 2008). \"Cyclist convicted of perjury in Balco case\". San Jose Mercury News. Retrieved May 30, 2008.\n- Pogash, Carol; Schmidt, Michael S. (October 11, 2008). 
\"Cyclist Avoids Prison Time, Which May Benefit Bonds\". The New York Times. Retrieved April 26, 2010.\n- Dubow, Josh; Paul Elias; Raf Casert (May 30, 2008). \"Track coach Graham convicted in BALCO probe\". Tampa Bay Online. Archived from the original on February 15, 2009. Retrieved 2008-05-30.\n- Pogash, Carol; Michael Schmidt (October 21, 2008). \"Graham Sentenced to Year's House Arrest in Balco Case\". New York Times.", "score": 31.32042611509128, "rank": 4}, {"document_id": "doc-::chunk-2", "d_text": "He was released on November 15, 2007, the same day Bonds was indicted by a federal grand jury on four counts of perjury and one count of obstruction of justice.\nOn June 6, 2006 the house of Arizona Diamondbacks player Jason Grimsley was searched as part of the ongoing BALCO probe. Grimsley later said that federal investigators wanted him to wear a wire in order to obtain information against Barry Bonds. He told people which players used performance-enhancing drugs. The final result was that the Diamondbacks released Grimsley, and he was given a 50-game suspension by Major League Baseball.\nIn October 2006, investigations against Fainaru-Wada and Williams were started. The reporters were served with subpoenas to appear before a grand jury to identify the individual who leaked Bonds' name to them. They refused to do so and federal prosecutors asked that they be jailed for up to 18 months (the typical term of a grand jury). However, in February 2007, federal prosecutors dropped charges against the reporters after a Colorado attorney, Troy Ellerman, who once represented Conte and another executive of the Bay Area Laboratory Co-operative, admitted to leaking the testimony and pleaded guilty to federal charges of unauthorized disclosure of grand jury testimony.\nIn an interview with Editor & Publisher, Lance Williams revealed that he would never testify in court, even if it did not involve confidential sources. 
\"I have no interest in becoming anybody's witness.\"\nOn November 15, 2007, former San Francisco Giants outfielder Barry Bonds was indicted for perjury and obstruction of justice based on his grand jury testimony in this investigation. The trial began March 21, 2011, and he was convicted on April 13, 2011 on the obstruction of justice charge. The conviction was overturned upon appeal.\nOn April 4, 2008, Tammy Thomas was convicted by a federal jury on three counts of making false statements to a federal grand jury in November 2003, and on one count of obstructing justice. She was acquitted of two perjury charges. Sentencing was set for July 18, 2008. She was sentenced to six months' house arrest and five years' probation on October 10, 2008.", "score": 31.29165612576571, "rank": 5}, {"document_id": "doc-::chunk-1", "d_text": "Investigation by the Bay Area Laboratory Co-operative (BALCO), revealing confessions by some athletes, and the Mitchell investigation into professional baseball doping have brought the whole issue of doping to public attention.\nBesides different types of anabolic steroids, stimulants such as amphetamines, human growth hormones and supplements (such as androstenedione—known as “andro”) provide a wide variety of ways athletes can gain an edge—often escaping detection. Blood doping is another illicit means of improving athletic performance; the injection of extra red blood cells increases the amount of oxygen available to strained muscles. There are four main ways to increase one’s red-cell count—two legal and two illegal. Putting oneself into a high altitude pressure tank overnight or for a few hours, is one of the ways still legal.\nNews articles noted steroid use among youth from the 1980s. Here are just a few headlines from over the past decades:\n“Children’s steroid use rising, study says,” Associated Press, 5May98\n“Steroids and Kids: How Sports Doping Hits Home,” Newsweek cover story, 20Dec04\n“Are steroids as bad as we think they are? 
The Boston Globe, 12December04
“Reading, writing and ‘ROIDS,” The Boston Herald, 24June05
“Congress and Baseball Battle over Steroids,” The New York Times, 10March05
“Steroid use by young women troubling: Specialists believe problem even greater than statistics,” The Boston Globe, 10May05
Without getting into all the dangers of youthful steroid use, one problem is the imbalance of suddenly increased muscle strength while bones are still developing. Michael Meyers (exercise physiology, Univ. of Houston) says, “You get a tremendous increase in muscle mass, but the connective tissue does not catch up. The tendons and ligaments are not strong enough…” causing ligament tears and fractures especially around the knees (http://whyfiles.org/090doping_sport/4.html, accessed 27June14).
The documentary, “Bigger, Stronger, Faster,” (Christopher Bell, 2008) puts a family face on steroids. Two of the Bells’ three sons excel with steroid use. The third, Chris, examines steroid culture in this documentary, considering its dangers without taking a clear stand against steroid use.
He has repeatedly denied the claims and at one point in 2005, he threatened to quit baseball because he was so exhausted from answering questions about the issue.\nDespite Bonds’ adamant denials that he never used performance enhancers, an array of evidence against Bonds has been revealed over the past few years. Victor Conte, founder of the infamous BALCO (Bay Area Laboratory Co-Operative) and a man under indictment by a federal grand jury, admitted to giving performance-enhancing drugs to Greg Anderson, Bonds’ personal trainer. In the book, Game of Shadows, by Mark Fainaru-Wada and Lance Williams, a detailed and exhaustive narrative is presented, documenting Bonds’ involvement with performance-enhancers. Bonds went to court trying to stop the sale of the book, but in his court filings he never denied the veracity of any of the claims in the book.\nIn the book, the authors wrote that a former girlfriend said Bonds began taking steroids after the 1998 home run chase between heavily muscled Mark McGwire and Sammy Sosa. Kimberly Bell, one of Bonds’ alleged mistresses, claimed he was jealous of the attention that McGwire received for surpassing Roger Maris’ season record of 61.\nBefore his alleged steroid abuse, Bonds already possessed Hall of Fame credentials. He entered the 1999 season as a 34-year-old with 411 homers. He had won three National League MVPs and seven Gold Gloves. Who knew his most productive years at the plate were still ahead of him?\nHe was injured for much of the 1999 season, suffering from a left elbow injury, a groin problem, as well as knee inflammation. 
The various ailments limited him to a then career-low 102 games, but he still managed to bang out 34 homers.", "score": 27.728502906279626, "rank": 7}, {"document_id": "doc-::chunk-4", "d_text": "Gut wrenching and completely unbelievable , as the player, was far from convincing with his web of deceit and lies, which were further enhanced by many of the rather idiotic questions asked by Katie Couric , herself . And there were idiots who sought to suggest that Lance Armstrong showed contrition with his admission of guilt on a staged environment sought to elicit emotions ? Are these fools, that naïve ? I guess that they are ! Ah , well , it’s in-bred apathy ! It should be noted that in light of Rodriguez’s admission , his name would once again come up in the subsequent indictment and trial of now incarcerated Orlando medical professional , Dr Anthony Galea . Now in another turn of events the player’s name has once again been linked to another BALCO like “scandal” , that could have some serious reverberations around the league as well as damaging the Yankees’ alleged ” pristine image ” .\nCourtesy of Yahoo Sports\nBy Jeff Passan , Yahoo Sports\nA man in south Florida supplied performance-enhancing drugs to more than half a dozen major league players, including Alex Rodriguez, according to a Miami New Times report that officials at Major League Baseball believe will grow into a doping scandal that could rival the BALCO case that tarnished Barry Bonds.\nAlex Rodriguez is once again at the center of a possible PED scandal. 
The newspaper reported Tuesday morning that Anthony Bosch, a self-styled biochemist seen frequently in Latin American baseball circles, distributed large amounts of human growth hormone, synthetic testosterone and other cocktails of PEDs to players who previously had not been linked, such as Texas Rangers outfielder Nelson Cruz.\nSome of the players could be subject to a 50-game suspension for a violation of the league’s PED policy, a league official told Yahoo! Sports. Three of Bosch’s alleged clients – outfielder Melky Cabrera, pitcher Bartolo Colon and catcher Yasmani Grandal – already have been caught and suspended by the league.\nFollowing a relatively quiet period, PED busts spiked in baseball last season. From Ryan Braun’s positive test for testosterone – which got overturned because of alleged mishandling of evidence – to the suspensions of Cabrera, Colon, Grandal, Freddy Galvis, Marlon Byrd, Guillermo Mota and Carlos Ruiz, baseball is facing a renaissance of use, one it believes centered in south Florida.\nClick on link to read this article in full.", "score": 23.795181109952754, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "Brian McNamee, the chief prosecution witness in the Roger Clemens perjury trial, conceded Thursday that he initially lied about his involvement with steroids.\nStarting Monday, a jury will be selected in the very same court house where Barry Bonds testified all those years ago to determine whether he broke the law with four short answers totaling nine words: “Not that I know of,” “No, no,” “No,” and “Right.”\nThe New York Daily News citing anonymous sources said disgraced cyclist Floyd Landis wore a concealed recorder and video camera during a meeting last spring with designer and cycling team owner Michael Ball.\nA former teammate of Lance Armstrong’s has reportedly told federal investigators that widespread use of performance-enhancing drugs on the U.S. 
Postal Service team was done with Armstrong’s encouragement.", "score": 22.331360189533967, "rank": 9}, {"document_id": "doc-::chunk-1", "d_text": "It was widely known in the 1970s and early-’80s that long-distance runners and cross-country skiers from Scandinavia were using blood-boosting methods (by re-infusing their previously stored blood) to improve their performances. In Italy, its Olympic Committee CONI even sponsored sports doctor Professor Francesco Conconi (inventor of the Conconi test for establishing an athlete’s anaerobic threshold) and his biomedical research center at the University of Ferrara to prepare athletes from several sports, including skiing and cycling, using blood-boosting methods. And it’s widely accepted that Conconi and his assistant Michele Ferrari helped Francesco Moser break Eddy Merckx’s world hour record at Mexico City in January 1984.\nBlood doping was undetectable and even encouraged until members of the 1984 U.S. Olympic cycling team (track and road), under the supervision of the U.S. Cycling Federation coaching staff, blood-boosted in Los Angeles. Some intra-federation memos (this happened before e-mails existed) were leaked to Rolling Stone magazine, which published a salacious article on the affair in its February 1985 issue. The result was several USCF officials being reprimanded. It was regarded as a huge scandal in the United States and resulted in blood doping finally being prohibited, first by the USCF, then the UCI, and eventually by the IOC in 1986.\nIt was ironic that just as blood doping was being banned a team of scientists at biotech company Amgen in California was researching an artificial, or recombinant, form of human erythropoietin for boosting the red-blood-cell count of anemic cancer patients. 
FDA approval for the new drug Epogen (EPO) came in 1989, but it was already on the black market in Europe, and EPO eventually became the most widely used doping product in cycling, cross-country skiing and long-distance running.
There was no way EPO could be detected in blood tests because it was a genetic hormone that helped athletes create their own new red blood cells. Scientists in Europe and Australia began research on methods to identify the use of EPO by athletes, but it was a long, difficult (and expensive!) process.", "score": 16.892883571599878, "rank": 10}, {"document_id": "doc-::chunk-3", "d_text": "And I kept at it, stayed focused on the goal. Everything I did was designed to make me a better player. In 1986, I was named the American League's Rookie of the Year, and it began to look like I was on my way. But it wasn't happening by accident. After regular practice, while all the other players went off to the bars, I'd go to the gym and work out. On days off, I'd take more batting practice and hit the gym. I was going to turn myself into a baseball machine, for my mother, and I would do anything I had to do to get there. I read everything I could get my hands on about vitamins and supplements — even in body-building magazines! — and I scoured other publications for new studies on steroids, growth hormones, and other performance-enhancing drugs.
The industry was still in its infancy back then, but I found that exciting, and I experimented with different products, becoming my own guinea pig. I tried every combination you can imagine. I was testing it on myself, and retesting, and mixing and matching every product on the market, trying things no one had ever imagined, and I was doing it to turn myself into a super athlete. I even kept notes! I had a journal where I would keep track of every detail, how much of this or that, when, how I felt twelve hours later, a day later, and at the end of the week. 
I figured out how to eat to maximize the effectiveness of the steroids, how to train while taking them, and the best time of day to stick myself with the needle, during the season and in the off season. Before long, I was tipping the scales at 240 pounds, most of it muscle, so obviously I was doing something right.\nThe next year, 1987, Mark McGwire joined the Athletics, a tall skinny kid with absolutely no muscle on him, and I guess he was impressed with my physique, because he had a lot of questions about my regimen. The following year, not surprisingly, McGwire underwent a miraculous transformation, and shortly thereafter the fans began to call us the Bash Brothers. I wonder how that happened?\nIn 1988, I became the first player in major league history to hit 40 home runs (42, actually) and steal 40 bases in the same season.", "score": 15.702075732606316, "rank": 11}, {"document_id": "doc-::chunk-3", "d_text": "And I kept at it, stayed focused on the goal. Everything I did was designed to make me a better player. In 1986, I was named the American League's Rookie of the Year, and it began to look like I was on my way. But it wasn't happening by accident. After regular practice, while all the other players went off to the bars, I'd go to the gym and work out. On days off, I'd take more batting practice and hit the gym. I was going to turn myself into a baseball machine, for my mother, and I would do anything I had to do to get there. I read everything I could get my hands on about vitamins and supplements — even in body-building magazines! — and I scoured other publications for new studies on steroids, growth hormones, and other performance-enhancing drugs.\nThe industry was still in its infancy back then, but I found that exciting, and I experimented with different products, becoming my own guinea pig. I tried every combination you can imagine. 
I was testing it on myself, and retesting, and mixing and matching every product on the market, trying things no one had ever imagined, and I was doing it to turn myself into a super athlete. I even kept notes! I had a journal where I would keep track of every detail, how much of this or that, when, how I felt twelve hours later, a day later, and at the end of the week. I figured out how to eat to maximize the effectiveness of the steroids, how to train while taking them, and the best time of day to stick myself with the needle, during the season and in the off season. Before long, I was tipping the scales at 240 pounds, most of it muscle, so obviously I was doing something right.\nThe next year, 1987, Mark McGwire joined the Athletics, a tall skinny kid with absolutely no muscle on him, and I guess he was impressed with my physique, because he had a lot of questions about my regimen. The following year, not surprisingly, McGwire underwent a miraculous transformation, and shortly thereafter the fans began to call us the Bash Brothers. 
I wonder how that happened?\nIn 1988, I became the first player in major league history to hit 40 home runs (42, actually) and steal 40 bases in the same season.", "score": 14.531482217377928, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "First of two parts\nThe worldwide sports anti-doping program, created to fight performance-enhancing drug use in international athletics, imposes severe punishments for accidental or technical infractions, relies at times on disputed scientific evidence and resists outside scrutiny, a Times investigation has found.\nElite athletes have been barred from the Olympics, forced to relinquish medals, titles or prize money and confronted with potentially career-ending suspensions after testing positive for a banned substance at such low concentrations it could have no detectable effect on performance, records show.\nThey have been sanctioned for steroid abuse after taking legal vitamins or nutritional supplements contaminated with trace amounts of the prohibited compounds. In some cases, the tainted supplements had been provided by trusted coaches or trainers.\nThe findings emerge from a Times examination of more than 250 anti-doping cases involving runners, cyclists, skiers, tennis players and competitors in dozens of other sports from around the world.\nAlain Baxter, 28, became the first Briton to win an Olympic medal in Alpine skiing, placing third in the slalom at the 2002 Winter Games in Salt Lake City. Two days later he tested positive for methamphetamine, a banned stimulant. He was forced to forfeit his bronze medal.\nHis offense? He had used a Vicks Vapor Inhaler bought in Utah to treat his chronic nasal congestion. 
Unlike the Vicks inhalers sold at home, the American version contained traces of a chemical structurally related to meth — though lacking its stimulative qualities.\nDespite testimony from a Vicks scientist that the compounds differed, an arbitration panel hearing Baxter's case ruled that because anti-doping authorities regarded the chemicals as related, he was guilty.\n\"It never crossed my mind that it would be different from the British one,\" Baxter told the BBC upon returning home. \"I didn't think I was doing anything wrong.\"\nA 17-year-old Italian swimmer treating a foot infection with an antibiotic cream her mother bought over the counter failed a doping test at a swim meet in 2004. Neither Giorgia Squizzato nor her mother realized that the cream's ingredients included a prohibited steroid — or that applying it between her toes could result in a positive urine test.\nArbitrators in her case acknowledged that \"the cream did not enhance the athlete's capacity\" and hadn't \"favored her performance.\"", "score": 13.305966875060072, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "Sports has an extraordinary ability to capture the attention of the worldwide audience. The sheer joy of watching your favorite team win a game is incredible. From multi-sports events to the World Cup tournament, we witness a number of sporting extravaganzas every year. While some sportspersons leave their fans spellbound with their record-breaking performances, some shock their admirers through their engagement in unethical and immoral activities, such as doping, adultery, betting, ball tampering, and theft.\nFrom Lance Armstrong's doping revelations to the White Sox deliberately losing the 1919 World Series, here are five of the biggest scandals in sports and how they were exposed.\n1.
The ‘Black Sox’ Scandal, 1919\nDeemed one of baseball’s darkest moments, this scandal concerned eight White Sox players who were accused of intentionally losing the 1919 World Series to the Cincinnati Reds. A Chicago grand jury was set up in 1920 to investigate the team's dealings with gamblers. On September 28, 1920, two of the players, Eddie Cicotte and Shoeless Joe Jackson, confessed. The following month, the grand jury indicted eight players on charges of conspiracy and fraud, and all eight were later banned from playing baseball for life. This scandal was dramatized in the 1988 film, Eight Men Out.\n2. Spanish ID Basketball Team, Paralympic Games, 2000\nRegarded as the greatest cheating scandal in Paralympic Games history, this event saw ten members of a Spanish Paralympic basketball team stripped of their gold medals in December 2000 after it was discovered that they were not mentally challenged. Carlos Ribagorda, a member of the winning team and an undercover journalist, reported that his eligibility to play in the ID category had never been checked and accused Spain of selecting 15 athletes with no intellectual disabilities.\nDue to serious difficulties in determining eligibility criteria for intellectually disabled athletes, the International Paralympic Committee announced the suspension of all sporting events involving ID athletes. But later, in 2012, the committee revised the system for checking mental disabilities and allowed ID athletes to participate in the Games.\n3.", "score": 10.065930469207823, "rank": 14}]} {"qid": 41, "question_text": "What are the different types of VAR partnership models available for SaaS products?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "As a SaaS company, you know that using channel sales can generate more business and increase revenue without hiring a sales team.
However, it is important to choose a sales channel that suits your business model, and it is even better if the channel you choose adds value for your potential customer and can help you guide them through the software buying process.\nThere are three common types of channel relationships that SaaS companies can use. Cost, flexibility and market adaptability are three major factors that will determine which channel to choose, and you need to understand which channel relates best to your customers’ requirements. The three sales channels are:\nApplication Developer Model\nIn the Application Developer Model, the vendor supports the customer directly and the customer pays the vendor. However, a third party can help the customer set up or customize the vendor’s software. For example, Zoho CRM sells its service directly to customers and customers pay them directly.\nBrokerage Channel Model\nThe Brokerage Channel Model helps customize software for customers. This may entail bundling or stacking software to customize a solution for customers; this method can also include account management. It’s similar to an insurance broker sourcing and building a policy for you, and it works best for simple SaaS products that focus on high transactional requirements.\nValue Added Reseller (VAR)\nA VAR provides customers with a fully customized software service. The reseller builds, operates, services, and supports the solution, and also invoices the customer. The reseller will then pay a license fee or royalties back to the vendor, or in some cases the customer can pay the license fee directly to the vendor.\nFor example, Roketto is a HubSpot Certified Partner, which means that we can sell HubSpot licenses to our clients.
Clients who buy a HubSpot license through Roketto are usually going to work with us for inbound marketing services, and the HubSpot software helps us in our service delivery, marketing strategy and with measuring and tracking results.\nAnd because Roketto is a VAR-certified partner with HubSpot, we can provide them with a special link that lets them save money on the regular $3000 onboarding fee. The client will pay HubSpot directly, but we will provide the client with training and support. However, we are the reseller who makes the sale to the client on behalf of HubSpot.", "score": 51.04611460841519, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Resellers, integrators and IT services companies -- channel partners of all types -- are evaluating cloud computing, with some firms building their businesses around that software delivery model.\nCompanies such as Google Inc., NetSuite Inc. and Salesforce.com Inc. offer on-demand, vendor-hosted Software as a Service (SaaS). Products range from office productivity wares to enterprise applications. But for service and solution providers, the cloud provides more than an application set. Those players can tap vendor-provided development tools, hosting infrastructure, storage and data center capacity.\nWhile partners fear disintermediation -- many say the model shifts account control from third-party service providers to the vendor/host -- the cloud providers invite channel interaction. Partners can resell Software as a Service solutions or customize a solution for a given customer segment. They can also wrap services around the on-demand offerings: up-front business consulting, implementation assistance, systems integration with existing systems, and post-implementation support.\nDaston Corp.'s path to partnership started when the company tapped NetSuite to provide an array of business applications: customer relationship management (CRM), finance, human resources and payroll.
The McLean, Va.-based professional services firm is a federal contractor that must operate in accordance with the Defense Contract Audit Agency (DCAA) requirements. Daston modified NetSuite to meet the compliance demand.\nNow the company sells its instance of NetSuite -- dubbed DCAA On-Demand -- to other government contractors as a prepackaged solution, said Greg Callen, senior vice president at Daston. The company offers consulting and implementation support services as well.\n\"We've developed a whole business line around a NetSuite practice,\" he said.\nAppirio Inc., meanwhile, has built its entire business around accelerating the adoption of on-demand software. The San Mateo, Calif., company's services lineup includes strategy, implementation and education. Appirio partners in the cloud with Google and Salesforce.com.\nAppirio leverages Salesforce.com's cloud to develop custom applications for such clients as CRC Health Group, a provider of drug and alcohol treatment services. The company has also built a Salesforce.com-based professional services automation solution that it plans to release in the next two to three months, said Narinder Singh, founder of Appirio.\nCloud computing as development environment\nOn the custom development side, channel partners take advantage of the platform cloud providers offer.", "score": 48.81038403873618, "rank": 2}, {"document_id": "doc-::chunk-2", "d_text": "To work with vendors, VARs have to become authorized and meet certain requirements. For example, some vendors like HubSpot require partners to hit yearly revenue targets, or take technical and sales training programs and become certified sellers.\nFor example, when customers are researching software, they may find that the software they need doesn’t exist in one package; a VAR can help customize the software to their specific requirements. VARs are particularly helpful for certain industries. 
In fact, many VARs specialize in certain industries such as the financial or healthcare industries, so they understand the regulations and technical requirements of their customers, and they can provide customized software with greater detail and efficiency because of their knowledge of a specific industry. These VARs also have the background knowledge and data to understand what your market’s pain points are at each stage of the funnel.\nWhile the Value Added Reseller Channel is a popular choice, the traditional VAR model may not be the right choice for all SaaS products. Not to worry: just like putting a Porsche engine in an old VW campervan, you can also customize your VAR partner model with an engine that suits your software. You may be able to choose one of the following options:\n- Advocate Partners offer a low-touch partnership where your software will be recommended to the partner’s customers only when it makes sense.\n- Referral partners will pass leads to your company in exchange for commission on either the lead or the sale.\n- Strategic partners offer a higher level of partnership where your company will work more closely with a partner company towards common goals aligned with the same messaging and brand values.\nAt Roketto, being a VAR partner with HubSpot has proven to be a most valuable asset for our customers. Using HubSpot marketing software makes our process more efficient and gives us immediate access to the information we need to develop an effective inbound marketing strategy for our customers that engages at each level of the buyer journey.", "score": 45.330705665501206, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "
This type of relationship tends to be used by system integrators.\n- Certified System Integrators – a partnership designed to provide valued differentiation and a measure of quality for VoltDB customers who engage professional services and consulting organizations to design and develop VoltDB solutions.\n- VARs – formalized programs that provide a continuing relationship for the partner to resell full use licenses of VoltDB products and services as part of an integrated solution sold to various customers.\n- Strategic Partnerships – a relationship that typically requires customized terms crafted on a case-by-case basis, these partnerships will include OEMs looking to embed VoltDB as part of an application, appliance or other complete integrated solution offering.\nVoltDB Launches Partner Program\nCheck Out Our Best Services for Investors\nPortfolio Manager Jim Cramer and Director of Research Jack Mohr reveal their investment tactics while giving advanced notice before every trade.\n- $2.5+ million portfolio\n- Large-cap and dividend focus\n- Intraday trade alerts from Cramer\nAccess the tool that DOMINATES the Russell 2000 and the S&P 500.\n- Buy, hold, or sell recommendations for over 4,300 stocks\n- Unlimited research reports on your favorite stocks\n- A custom stock screener\nDavid Peltier uncovers low dollar stocks with serious upside potential that are flying under Wall Street's radar.\n- Model portfolio\n- Stocks trading below $10\n- Intraday trade alerts\nDavid Peltier identifies the best of breed dividend stocks that will pay a reliable AND significant income stream.\nEvery recommendation goes through 3 layers of intense scrutinyquantitative, fundamental and technical analysisto maximize profit potential and minimize risk.\nMore than 30 investing pros with skin in the game give you actionable insight and investment ideas.", "score": 44.61791529036235, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "Value Added Resellers\nThe VAR partner program is 
for partners who offer customized solutions - essentially a combination of domain, vertical, or application expertise that includes ProActive School technology. VAR partners focus mainly on driving license revenue through the resale of all or part of the ProActive School product suite. These partners also offer professional services to support the deployment of our technology. This highly organized program provides a clearly defined environment for VARs to work within and includes four achievement levels: Platinum, Gold, Silver, and Bronze. Top-performing VARs are also eligible to participate in our Authorized Education and Authorized Consulting partner programs.\nThrough this program, VARs can:\nIncrease revenue: ProActive School helps our partners create new business opportunities to maximize revenue potential and close more deals at higher margin with the broadest range of new and improved services that showcase high incremental value.\nIncrease customer satisfaction: Through the provision of market leading ProActive technologies we elevate the value of VAR offerings enabling our partners to focus on delivering robust solutions that provide their customers with access to the valuable data locked inside applications.\nDeploy faster: Our ProActive tools and framework enable deployment in weeks and eliminate the maintenance and customization effort associated with custom development.\nSuperior account management and a host of other benefits are available to VARs through this program. Depending on achievement level, partners are eligible to receive services discounts, referrals, demo licenses, and sales collateral. Marketing support includes access to marketing funds, the partner extranet, events, newsletters, logos, jointly branded collateral, and more. 
We offer web, email and phone-based technical support and a full range of product training and certification options.\nPartner requirements vary depending on achievement level but are predominantly determined by annual revenue commitments. Annual program fees are mandatory for all partner levels except Bronze, although this may vary by country. Partners must also commit to sales and pre-sales training activities as well as achieve technical certifications for the products they sell.\nContact email@example.com for more information.", "score": 43.42455539549838, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Vendor Programs Guide\nA descriptive, comprehensive guide to the vast array of vendor programs available to VARs and channel partners.\nPartnerDirect is designed for business owners who are interested in taking advantage of the program's marketing, support, and financing benefits to grow their company.\nThe F5 Advantage Partner Program is intended for well-established, financially stable businesses with extensive sales experience and clients who are likely to be interested in F5's products and services.\nAMD's Fusion Channel Partner program offers Select, Premier and Elite tiers. Partners are categorized by business models and goals.\nUnder the Fujitsu Accelerator Partner program, partners can participate in the mobile program, the enterprise program, the retail program, or the Interstage software program depending on their business needs.\nSonicWall operates a two-tier distribution model that includes distributors and channel partners. The company works with about 7,000 resellers in the U.S.\nLenovo's Partner Network is tiered by Premium and Registered partners. Business Partner levels are determined by sales volume and certifications held.\nThe Samsung Power Partner Program is tiered with Platinum, Gold, Silver and Bronze levels.
The program allows partners to attain levels by product category: displays, mobile and printers.", "score": 43.1907549565857, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "Concerto: Your Pipeline To The Cloud\nCreated for channel partners of all types including VARs, ISVs, MSPs, technology providers and application consultants, the Concerto Partner Network gives you a direct and competitive advantage by helping you deliver innovative and reliable cloud solutions to your customers.\nWe empower our partners with an outstanding cloud platform and competitive recurring revenue streams, plus cloud-enablement tools, best practices and advisory services, including robust operations, sales and marketing support.\nWe specialize in helping our partners make a successful and rapid transformation to cloud-based offerings. Our team assesses each partner’s unique needs, then customizes an approach with expert guidance, tools and proven processes. Partner with Us on Your Terms\nThe Concerto Partner Network offers a choice of partnering models.
Discuss which model best suits your business objectives with a Concerto Partner Development Specialist today!\n- Leverage our brand with a “Powered by Concerto” message\n- Earn referral fees for bringing Concerto new customers\nLearn how Concerto can help you:\nTransform Your Business Model\n- Stay ahead of the curve and capture your share of the cloud\n- Move to a recurring, subscription-based licensing model\n- Expedite your product release lifecycle and deployment process\nIncrease Revenues and Expand Your Market\n- Capture new customer segments and generate recurring revenue streams\n- Expand your cloud offerings confidently with a superior cloud partner\n- Avoid building and managing your own datacenter(s)\nDeliver a Better Application Experience\n- Provide a modernized IT approach for customers\n- Deliver access to data anytime, anywhere and on any device\n- Scale up and down according to your unique needs", "score": 43.095799895415574, "rank": 7}, {"document_id": "doc-::chunk-1", "d_text": "Here are five best practices cloud service providers might want to consider as they develop their own channel partner programs.\n1. Clearly define objectives and incentives.\nRecruiting the right cloud partners is a big factor in building a successful channel program. Even though business models are slightly different from VAR to VAR, cloud service providers should draw a clear image of their ideal partner. That could mean specifying location or size. 
It should also consider technical skills or expertise in a given market or account type.\nZetta.net, for example, is specifically signing up VARs or integrators that already have skills in backup and recovery and are seeking a cloud-delivered option, said Art Ledbetter, the company's director of channels.\n\"It makes it easier for them to walk into the program,\" he said.\nOn the other hand, IaaS provider Infinitely Virtual is taking a much broader approach by creating two tiers for its program, said Adam Stern, CEO of the Encinitas, Calif.-based company.\nIts entry tier pays VARs for referring customers to Infinitely Virtual, which handles all provisioning and technical support. VARs must sell at least $200 in services in a given month to earn their commission, which increases with the volume of business they refer.\nInfinitely Virtual set the $200 minimum to weed out companies trolling for an additional discount to use its services internally, Stern said.\nThe service provider's second partner tier is meant for those that want to become completely involved in selling its offerings. Under that model, Stern explained, partners buy services at a discount and set their own prices, which means they control the margins they earn.\n2. Don't get between your partners and their customers.\nOne mistake some cloud service providers make when structuring their channel program is offering their partners the same technical support they offer individual customers. Be aware that your partners will require a much richer set of these resources, just like your internal support team.\nFirst, they'll need a way to manage multiple accounts. 
That means cloud service providers should invest in developing tools that help partners maintain a view into more than one customer's infrastructure or services simultaneously.\nA major focus of Zetta.net's recent channel program revamp was creating a multi-tenant portal that partners can use to provision accounts, set (or reset) configurations, monitor service levels and make other changes, Ledbetter said.\n\"Previously, we were treating MSPs like they were customers, but that wasn't a very good policy for helping them manage their own customers,\" he said.", "score": 42.31335849711186, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "More members of the traditional IT channel are experimenting with adding cloud services to their portfolios. But few have the technical or financial resources to do this on their own, leading more of them to seek partnerships with large cloud service providers.\nNearly 60% of the value-added resellers (VARs) and other IT solution providers that research firm PartnerPath recently surveyed described themselves as representatives of other companies' cloud offerings, including Infrastructure as a Service (IaaS), Software as a Service (SaaS), cloud-based backup and disaster recovery services and more.\nApproximately 40% described themselves as a \"reseller\" of those applications while 20% identified with the term \"sales agent.\"\n\"Delivering IT as a service has forced partners to reevaluate their business model at a really fundamental level,\" explained PartnerPath analysts in its recent analysis of that data, The Biggest Cloud Winners. \"Cloud leaders today come from different business model roots. 
Some are 'built in the cloud' with a nearly pure annuity services-and-applications model, but many more have had to retool their approach to staffing, training, marketing and revenue recognition from a legacy on-premises model.\"\nCloud service providers will need to rethink their entire incentive structure so that company sales people aren't tempted to swoop in on [partners'] deals to make their quotas.\nThose differences mean that cloud service providers must be flexible in their approach with individual VARs, systems integrators or managed service providers (MSPs), said Jaywant Rao, vice president of global alliances for cloud service provider Savvis, a subsidiary of CenturyLink based in St. Louis.\n\"Every partner is slightly different,\" he said. \"They each have a different flavor of how they go to market. That means you have to focus on which models make sense for your own business and align things from there.\"\nGiven Savvis' ties to the telecommunications model, developing a partner program was almost a given. But other companies evolved into that position.\nZetta.net, a Sunnyvale, Calif.-based provider that offers enterprise-grade backup and disaster recovery services, announced its enhanced cloud partner program last September. 
The company made the decision to build a channel program after experiencing an explosion of interest from MSPs and VARs that lacked the ability or resources to build a similar infrastructure on their own, according to Gary Sevounts, Zetta.net's vice president of marketing.\nRegardless of the original motivation, successful programs share several common elements.", "score": 41.8218978863148, "rank": 9}, {"document_id": "doc-::chunk-5", "d_text": "Also, customers may see the value of a direct sales team that understands the manufacturing market as well as the types of problems specific to small businesses.\nHowever, history has shown that value added resellers (VARs) are critical to penetrating the local markets with their regional touch and knowledge. See The Cha(lle)nging World of Value-added Resellers. Thus, Microsoft has opted for a hybrid subscription hosted model named Service Provider License Agreement (SPLA) program, which is a \"monthly rental of ERP\" in which Microsoft and its partners are increasing deployment options for customers. Namely, partner VARs and independent software vendors (ISVs) can provide Microsoft Dynamics ERP products (which will remain in a single-tenant architecture for the foreseeable future) as a service and as a monthly fee, with the on-premise switch option if necessary. Time will only tell whether this customer choice will be more important to customers than the above mentioned potential benefits of true multi-tenant SaaS deployments.\nDiscrete, \"to order\" manufacturing companies in the high-tech, electronics, capital equipment, and automotive sectors with annual revenues up to $50 million (USD), and smaller subsidiaries of larger companies, should evaluate GSInnovate as a cost-effective option for up to a few dozen users. 
Similar manufacturers with fifty or more users should consider, instead, the traditional on-premise glovia.com ERP system, which should have a lower total cost of ownership (TCO) for these larger organizations that require much deeper and broader functional scope. However, such companies still pondering whether to use SaaS for the short term because of IT resource or financial constraints might find GSInnovate a practical transitional choice before migrating to the on-premise system (since Glovia grants a full credit for what they have spent on the hosting arrangement).\nGlovia.com is well suited for upper mid-market companies with multiple locations and multiple business units in diverse markets. It is also suitable for organizations with versatile, discrete manufacturing styles (mixed-mode) within the automotive, capital equipment, electronics, telecommunications, and similar industries. Companies needing software to address mixed-mode manufacturing (from engineer-to-order [ETO] and complex projects through to high-volume and repetitive), projects and contracts, and service management may want to include Glovia on an initial list of vendors for a particular ERP software selection.", "score": 40.85814126675698, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "For the last decade, an agent model has been the easiest way vendors had to engage solution providers in cloud offerings. The shifts a cloud business model demands in marketing, sales, compensation methods and customer success are difficult for both vendors and solution providers. Thus, a simple referral or an agent model was a quick win for vendors looking to gain awareness of their cloud offerings.
There isn’t enough margin available in a referral deal for the solution providers to be profitable long-term, so we've seen growth in other solution provider roles in cloud models.\nWhat are solution providers selling?\nNot surprisingly, software as a service tops the list of cloud solutions the solution provider respondents* are selling. Software as a service (SaaS) remains the largest segment of the cloud market, with revenue expected to reach $73.6 billion in 2018. Gartner expects SaaS to reach $117.1 billion and 45% of total application software spending by 2021. “In many areas, SaaS has become the preferred delivery model. Now SaaS users are increasingly demanding more purpose-built offerings engineered to deliver specific business outcomes.” (Sig Nag, research director at Gartner.)\nIt is interesting how many solution providers indicated they are selling disaster recovery as a service and security as a service, which followed closely behind the expected unified communications as a service offerings. The thought-provoking data is that more solution providers indicated they are selling disaster recovery and security as a service than the well-hyped infrastructure as a service and platform as a service. Gartner predicts the fastest-growing segment of the market is cloud system infrastructure services (infrastructure as a service, or IaaS), which is forecast to be the second largest segment at $83.5 billion by 2021.\n32% of solution provider respondents indicated they are selling storage as a service, which is nearly equal to the number of solution providers indicating they’re selling infrastructure as a service. It’s a little disheartening that so few solution providers indicated they are selling business process as a service, since it’s a growing services market which Gartner estimates will reach $58.4 billion by 2021. This is indicative of the solution provider audience we survey.
They are primarily traditional partners who are now selling the technologies they’ve known and loved for decades in an as-a-service offering. They haven’t yet fully adopted pre-sales or business process consulting services, so it’s understandable they don’t claim these as part of their cloud-based offerings.", "score": 39.96743367991576, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "As channel partners analyze the ebb and flow of their various revenue streams, they might discover that adding something new to their portfolios could provoke a flood of new business. A hot opportunity for selling more services into one’s existing base is software as a service, to be discussed in one of this morning’s concurrent sessions.\nThis panel, moderated by Steve Hilton, vice president of Yankee Group’s Enterprise Research group, will analyze SaaS opportunities available to agents, see what’s in it for resellers and give tips to channel partners on how to best communicate the benefits of SaaS to the end user. Hilton, who writes the Ask Steve column for PHONE+ magazine, said attendees can expect to learn about the pitfalls and the best practices for channel partners selling both communications products and SaaS-based applications.\nPanelist John Krzykowski, general manager for 19Marketplace, said SaaS is gaining momentum for a number of reasons, including the rise of multitenant applications delivered over the Web, the ease of implementation and the availability of broadband access to access applications from anywhere.\nPanelist Robert Bye, executive vice president of nGenX, agreed that SaaS allows companies to get the benefits of new technologies such as collaboration and remote access without the investment in server hardware and IT resources to maintain these systems. “Agents can learn how to add incremental revenue from existing customers without having to become an expert in IT,” explained Bye. 
“SaaS provides a consultative sales opportunity that increases value to the customer and also decreases churn because these products are so sticky.”\nClark Easterling, vice president of marketing at Perimeter eSecurity, also joins the panel to discuss SaaS applications in the security arena. “Attendees will learn how they can bundle Security SaaS with their services and increase their close ratios, increase customer ARPU and decrease churn,” he said.\n“Whether you use the services to attack your customer base or your new sales, someone will be offering them security SaaS and it might as well be you.”", "score": 38.28490064586888, "rank": 12}, {"document_id": "doc-::chunk-0", "d_text": "Resell our SaaS solutions integrated with your services\nGrow your services and reach new markets as a VAR Partner\nAs a One Network VAR partner you will have the resources and knowledge to build expertise on One Network SaaS solutions. VARs can resell One Network applications to One Network customers as well as new customers. One Network is building ecosystems of consultants and ISVs around our industry cores to encourage specialization and tailoring of services and solutions that encapsulate the nuances of the target industry. Key industries include Automotive, Food Services, High Tech, CPG, Retail, Healthcare, Military, Logistics, and Pharma. 
One Network charges VARs a monthly fee per user and per business transaction for each application sold.\nRequest to Join Our VAR Partner Program\nIf you have any questions, please email email@example.com and we’ll get back to you.", "score": 38.000476895617794, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "Our portfolio of VAR products\nAdding value to your Critical business applications\nEnterprise software is an investment every organization needs to make carefully, ensuring they get the most out of what they’re paying for.\nWhen it comes to your critical business technology infrastructure and applications, working with a partner that can add value to your solution is probably the fastest way to see your ROI. From assessment to implementation and support, your Value-Added Reseller is there to make every invested dollar count.\nVantagePoint is a Value-Added Reseller of SAP software. This allows us to support you through the entire decision process and during and after the implementation. Engage us to help with sizing, negotiation, business case development, and creating the final pricing structure. We also work closely with SAP to determine if any existing maintenance base can be converted to credit towards cloud software purchases.\nWhy work with VantagePoint?\nChoosing VantagePoint means you are working with a certified SAP Software VAR and SAP Gold Partner. It also means you are supporting your investment with our 25+ years of experience working with SAP products.\nVantagePoint's expert consultants have helped our clients achieve rapid and successful implementations of their SAP solutions, accelerating their time-to-value and boosting them into the age of enterprise intelligence.
We are here for you; your success is our success.", "score": 35.87085897338025, "rank": 14}, {"document_id": "doc-::chunk-4", "d_text": "To get started, explore our documentation of these new ways to engage with customers and grow your sales.\nAt least ChannelBiz boldly commented along these lines – pointing to research that proves channel critics wrong – in any case, those “who have longed for the day when resellers are cut out of the equation”.\nIt’s a fact that power is shifting into the customers’ hands and purchasing habits are changing due to “online-first”, while the channel ecosystem is struggling to adapt to the new cloud challenges. This may make it seem like the only way to distribute SaaS products is to sell direct.\nYet recent Forrester research, commissioned by Avangate, contradicts this assumption: 66% of software vendors surveyed say that channel partners are of strategic importance to their SaaS revenues. The channel is here to stay, although it is undergoing and will continue to undergo major disruptions as the software industry shifts from on-premise installation to SaaS delivery and from perpetual licenses to pay-as-you-go subscriptions.\nIn the next Avangate webinar, on April 16th, we are attempting to answer this question: “Are Channels Any Good at Selling SaaS and Cloud Services?”, together with Peter Sheldon, Principal Analyst at Forrester Research. More than this, we’ll look at what it takes to develop and optimize your channels to increase SaaS and cloud sales. As software companies need to work harder than ever before at retaining their customers, it is critical to leverage the power of the channel to increase retention rates and reach further out in the market.
Coverage of the topics below will help you get started using channels for customer acquisition and retention:\nAll webinar participants will receive the supporting whitepaper “As SaaS Goes Mainstream, ISVs Invest in Channel Support Tools” conducted by Forrester Consulting (and commissioned by Avangate), which reflects the importance of channel sales to SaaS.\nSign up now to learn how channels can revolutionize your SaaS revenues. Feel free to share from your experience, or ask a question for the presenters below.\nLike traditional installed software vendors, software-as-a-service (SaaS) providers are often interested in selling their products through a network of affiliates. But can the affiliate model, which traditionally focused on download portals, work for SaaS companies? This article explores that question in depth.\nWhat is SaaS?\nBefore we establish what SaaS affiliates actually are, and what makes them different from “traditional” affiliates, we need to understand SaaS.", "score": 34.7412373833958, "rank": 15}, {"document_id": "doc-::chunk-1", "d_text": "In this, they forget that other vendors are almost certainly doing the same and that most channel partners are not in an exclusive relationship (quite the opposite).\nIn the same way that not enough businesses do meaningful research into their end customers, the same tends to apply to the channel. Taking time to talk with partners, understanding their specific businesses and challenges, will pay massive dividends over the long term.\nThese should all be seen as basic hygiene factors for a successful programme.\n3: All partners are not created equal\nA small VAR is very different from a big VAR. System integrators are different again, as are distributors. A one-size-fits-all approach will tend to be a poor fit for everyone.\nYou may of course already have gold/silver/bronze-style tiers for partners, each with a commitment and programme attached. 
If not, consider creating one.\nIf this is not possible, look to segment your partners the same way you (hopefully) segment your customers. Look at all the elements such as geographic focus, average deal size, vertical sector specialism, level of exclusivity etc. to build up personas for your different partners.\nThen, start to look at the kind of support and activity each may require.\nFor example, a large VAR focused on high-touch enterprise sales is likely to benefit from a more collaborative ABM-style approach or RFI support, whereas an SME-focused operation may require more in the way of telemarketing scripts, demo materials and objection-handling guidance.\n4. Not just datasheets and PowerPoint\nThere is, of course, a wide range of content and activity you can provide and co-create. All the usual suspects will be there from the obligatory datasheets and PowerPoint through to the event-in-a-box and other swag that is often part of these programmes.\nAs we’ve mentioned, every partner is different. But remember, every partner also gets this kind of material from other vendors. So what will make yours stand out (and get used)?", "score": 33.26887501206471, "rank": 16}, {"document_id": "doc-::chunk-1", "d_text": "Contrary to popular belief, SMBs have been slow on the uptake of SaaS (application hosting outpaces SaaS adoption by SMBs by a factor of 3-4x) ...\n... due to the fact that VARs, in ownership of the customer trust asset, haven’t been pushing SaaS. But the financial barriers to channel partners’ SaaS advocacy are being broken down.\nNow that the path for VARs to play in the cloud is being forged, and their play along with software vendors, aggregators, and ISPs being validated, distributors and DMRs, long wedded to on-premise license models, are going to have to figure out their place in the new cloud channel order.\nWhat do you think? Is this one of many experiments?
What is the role for distributors and DMRs in cloud computing?", "score": 32.983872048500935, "rank": 17}, {"document_id": "doc-::chunk-3", "d_text": "Both along a whole life cycle, so partners can be used in the discovery process.\nCustomers or prospects discover your solution.\nPartners are used to evaluating your solutions – you know that old RFP is still alive and kicking.\nAnd then along with the process of closing a deal with co-selling. Microsoft started to make wholesaling in 2010 or so. It's still in its infancy if you look at the whole IT industry over time.\nSo, wholesaling, and then enabling a great purchasing experience, and then a fantastic post-sales experience, where the partner will take on certain things, like the ongoing management, or maybe even the renewal.\nThat's one of the trends: an expanded scope of partners or the channel function.\nRelated to that, there's a broader variety of go-to-market models: selling with partners, selling to partners, and then they integrate your solution, and resell a package.\nRelated to that, a broader set of business models, i.e. simple MDF funds and market development funds. This could be commission-based, bounties, there could be revenue share and ongoing subscription models.\nIt could be an OEM license if you integrate it into a bigger platform, and so on.\nSo, a broader business model differentiation, if you will.\nAnd the last thing where this all sort of comes together in my mind is the relationship between the vendor, and the channel and the ecosystem becomes way more bidirectional.\nBalthasar Wyss: It's not just hey, put that bottom up on your website and then, at the end of the day, we'll give you a few dollars if somebody buys a book from us. 
It's way deeper.\nFor example, partners help a vendor to understand their prospective customer base, to help them get the strategy, how to deploy it, and those kinds of things.\nThese are sort of the few trends that I'm seeing.\nThe last point I want to make is this no surprise that more recently people started talking about partner ecosystems or the Chief Ecosystems Officer at a company.\nWhere we look more from an ecosystem perspective, holistically, as opposed to just sort of a point solution we’re engaging with.\nPaul Bird: You mentioned Microsoft. I worked for a couple of Microsoft Gold Partners, pre-co-sell.\nIt was left to us, as the reseller, to take discovery through to closure.", "score": 32.94301938597432, "rank": 18}, {"document_id": "doc-::chunk-1", "d_text": "This partnership is particularly helpful for tech companies who want to reduce the friction points between their customers and their products. Having a service delivery partner can make implementing their products easier and increase product use and adoption post purchase.\nHigh Velocity Partners\nIf you’re attempting to scale your business and its sales, then you’ll likely need a high velocity partner. These are often referred to as fulfillment partners because their goal is to help you fill orders quickly and efficiently. This type of partnership doesn’t offer any added value to your customers, but it does speed up how quickly you can fulfill large numbers of orders and reduce your internal administrative costs.\nThis option is great for companies who are trying to rapidly scale their business and increase sales numbers. It’s usually used for products that are easy to set up or install, which is why customers will still opt in even without value added services.\nAs an added bonus, your high velocity partner might even be able to push your product into previously untapped or hard-to-reach markets through existing contracts. 
This means you not only can sell more products, but you can also sell to more people.\nChannel partnerships are a great way to maximize the value and earnings from your products by using a network of other businesses to help you sell and help customers. Keep in mind there are many types of partnerships and choosing the best fit for your needs is key to success.", "score": 32.82338800107863, "rank": 19}, {"document_id": "doc-::chunk-1", "d_text": "This lets you and your partners get down to the nitty-gritty and cuts out all the room for extra error that comes with extra work.\nEstablishing, sustaining, and maintaining the complex channel relationships critical to success in SaaS can’t rely on simple phone calls or emails—they require real-time collaboration. At each stage of the game, all of the players need to be talking to each other in real time: brainstorming ways to meet everyone’s needs and maximize value. Tools that can take you beyond the partner portal allow for anytime/anywhere communication that’s logged and tracked so that there’s never any risk of crossed wires.\nTraining and Content Delivery\nFor your partners to adequately be able to position your product, sell the newest features, and know what the product is really capable of down to the minute, they need training. And not just any training—they need information that’s presented in a way that will stick with them and that they can conveniently access at any time.\nActive, Intelligent, and Accelerated\nSelling your SaaS solution through the channel offers an unprecedented opportunity to generate revenue and create value for you, your partners, and your clients. But going in with old tools is like trying to work on a dial-up modem: It might be theoretically possible, but you’re needlessly handicapping yourself from the outset.
Go beyond the outdated partner portal with a cutting-edge sales acceleration solution that maximizes ROI.", "score": 32.442150469827666, "rank": 20}, {"document_id": "doc-::chunk-1", "d_text": "Indeed, service providers may remain numerous, given the nature of discrete domain expertise that often characterizes their business models, unless a selection of professional services aligned to competencies of resellers is also migrated.\nA sourcing or procurement organization that chooses to acquire products and solutions from a range of distinct services providers may make limited progress in consolidation. The business benefit of those specialists, however, may be considered more than worthy of managing additional relationships.\nConsolidation may best be achieved by categorizing and identifying requirement types that logically fit together and the synergy with domain providers. A primary software and cloud services reseller for example, plus a primary networking and security reseller, plus a primary hardware reseller … or the same organization may be used for all three if complete consolidation is sought.\nAs for the term VAR …\nIt’s become an unproductive and unnecessary TLA, whereby it’s used to collectively describe resellers and service and solution providers.
When analyzing the supply chain and portfolio of partners and providers, it’s far more appropriate to distinguish what type of supplier your organization chooses to deal with for given requirements.\nWhich products, solutions and potentially professional services would you choose to acquire from a reseller?\nWhich professional services and potentially products or solutions might you acquire from a services and solution provider?\nFeel free to use the term VAR to describe the video football ref; please don’t use it to describe your reseller or service providers.\nFootnote – this blog was partially inspired by Hank Barnes’ series of posts which commenced with Things I’d Like to See Go Away – The Logo Slide\nRead Complimentary Relevant Research\nManaging Cost Optimization Primer for 2018\nCost optimization continues to be a critical and continuous discipline for many CIOs. For organizations to be successful, they need to...\nView Relevant Webinars\nIT Spending Forecast, 4Q17 Update: What Will Make Headlines in 2018?\nGlobal IT spending growth began to turn around in 2017 with annual increases expected through 2021. However, uncertainty looms as organizations...\nComments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an \"as-is\" basis.", "score": 31.612455136663133, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "Adding Value through Reseller/ISV Partnerships By Jessica Davis | Print\nRe-Imagining Linux Platforms to Meet the Needs of Cloud Service Providers\nLooking for a way to pump up margins? Partnering with an ISV may be the answer.
Solution providers who team up with ISVs find they add more value for end-customers -- value that end customers will pay for.\nIn an age of squeezed margins for IT hardware sales, everybody is looking for higher-margin value they can add to the deal, whether it’s integration, services or even software as a service.\nAnd one road that some VARs are traveling down to boost margins is partnerships with ISVs (independent software vendors). It’s value that resonates with end users who are increasingly looking for complete solutions rather than single components of overall solutions.\n\"Customers we deal with are frustrated at the lack of cooperation between hardware and software vendors,\" says John Cannington, CEO of @hand Corp., a company that provides enterprise applications for handheld devices in primarily industrial and energy industries.\nCannington served as one of the panelists talking about collaboration between VARs and ISVs and how such partnerships can pay off for both parties. The panel discussion was offered at the VSR Business Optimization Summit in Philadelphia this week.\n\"We are educating our sales force on the various types of hardware our software gets deployed on,\" says Cannington, who previously worked at companies such as J.D. Edwards. \"I’ve long avoided being in the hardware business, but I want to actively partner in it.\"\nAnd that partnership is likely to help the VAR as well, he points out, saying that his biggest customer just bought 5,000 handheld hardware devices.\nThat’s a message that has resonated with Jeff Weidler, president of WineWare Software Corp. as well. Weidler’s company provides software for the tasting rooms of vineyards so that they can sell and ship wine to visitors’ homes. But without a hardware component, the solution was incomplete.\n\"Our missing solution was POS,\" he says, and his company partners with pcAmerica to provide that element.
David Gosman, CEO of pcAmerica, also sat on the panel and says that customers are more likely to go with companies who partner because they don’t want to shop around to several places to find hardware and software that works together.\nAs always, when partnering with a software vendor, solution providers are likely to be cognizant of the potential for channel conflict.", "score": 31.559731886207484, "rank": 22}, {"document_id": "doc-::chunk-2", "d_text": "Savvis offers its high-level partners access to the same high-level technical resources that its internal support team uses.\n\"They have full access to their customers,\" Rao said.\n3. Ensure your sales team works with channel partners.\nOne of the most difficult parts of the transition from a direct sales model to one that includes channel partners is putting policies in place to ensure internal sales teams work in collaboration with VARs and systems integrators, rather than treating them as competitors.\nThat means cloud service providers will need to rethink their entire incentive structure so that company sales people aren't tempted to swoop in on deals to make their quotas.\nAt the very least, internal sales employees should be rewarded in a \"channel-neutral\" way. That is, they should get credit for deals they help partners develop in their coverage area. If you want to develop your company's channel business more quickly, you can reward the direct sales team with something extra for working with partners, a model often described as \"channel-positive.\"\nEither way, changes need to start at the top with ensuring senior level management has bought into the overall channel strategy, Ledbetter said. \"You really need to design a program that rewards your direct sales team to work with partners, not against them,\" he said.\n4. Make sure required investments are worth the effort.\nHistorically speaking, IT solution providers are leery if channel programs require massive upfront investments.
That means any cloud service provider's program should offer cost-effective resources for training, customer support, and sales and marketing initiatives. These sorts of resources should not be treated as a profit center.\nMore on cloud partner programs\nCloud channel programs finally reach maturity, survey says\nStill, partners are still wary of channel opportunities in cloud\nXerox's priority for 2013: Build a stronger cloud channel program\nAt the same time, though, your company's closest partners will want to be rewarded for taking extra steps to offer distinct, expert solutions based on your cloud services.\nMore often than not, the program will need multiple tiers along with multiple levels of incentives. Aside from the structures described above, another example is the Premier Consulting Partner program created by Amazon Web Services (AWS).\nAs of February 2013, just 15 VARs and IT consulting companies held the distinction of membership in the program. In exchange for building their services on top of AWS's IaaS environment, Elastic Compute Cloud, these partners get extra attention and additional money-making opportunities.", "score": 31.044567617121583, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "These days, software as a service (SaaS) isn’t a far-out techie concept the way it may have seemed just a few years ago. It’s a broadly accepted, widely adopted paradigm for software deployment and management in businesses of all sizes. In a surprisingly short period of time, it has changed the way people work, and it’s also changed the way people sell. The old days of moving boxes of software has given way to today’s subscription-based, cloud-deployed solutions. As the market has changed, so have the needs of the channel partners that position, sell, and implement SaaS products.\nBack in business computing’s past, alongside install CDs and various iterations of the floppy disk, is the partner portal. 
Those pieces of rudimentary communication software kept vendors just in touch enough with their partners to make sure product was moving. They were necessary, and they did their job. But these days, the job is infinitely more nuanced, and a portal is not nearly enough to get the job done. To best support your partners, you need to think beyond the traditional partner portal. Let’s explore exactly what that consists of and how you can use it to manage the entire partner lifecycle and close more deals.\nThe New Demands of SaaS\nMaintaining healthy relationships has always been critical to channel partnering, but when selling subscription-based solutions, quality, streamlined communication is even more important. Your partners need to establish deeply rooted relationships that keep clients subscribed and seeking out more of your services, and they need more from you to do it successfully. Here’s what your partners are expecting in a modern portal:\nWhen it comes to getting SaaS tools implemented and opening up those recurring revenue streams, things can become complicated. And the bigger you scale a channel program, the more partners—and individuals within those partner companies—it involves. In days gone by, partner portals were fine for keeping track of a few aspects of a channel program, but simple portals required everyone involved to do a lot of manual work outside the portal—typically swapping information via iteration after iteration of spreadsheet.\nWith SaaS solutions, this is a recipe for disaster: There’s simply too much going on in a single partner relationship (let alone a partner program with tens or hundreds of partners) for anyone to keep track of manually. Partner acceleration software takes you beyond the portal by automating “busywork” tasks.", "score": 30.905680891219195, "rank": 24}, {"document_id": "doc-::chunk-7", "d_text": "What's the buyer journey?
In the enterprise space, it's completely different than in the mid-market, or at the lower end.\nI'll get back to that in a second, but that's just one of the segments.\nThe other way you can go is obviously geographically.\nHow do you expand?\nLet's say you start your company in Canada, where you are based. How do you expand?\nDo you build a beachhead with partners and then validate your model? Or do you go into Asia and Europe and other parts of the world as in geo-segmentation? It is very helpful. But how do you tackle that?\nAnd then the third thing that comes to mind is segmenting by partner type.\nWhat kind of partners do you fundamentally need to be successful?\nIs it more, on the technology side you need to integrate with a lot of different systems? For example, Avalara, later expanded into the accounting space. And I happened to lead that team at the end of my tenure there because accountants do tax compliance for businesses.\nSo, what kind of partner types do you need, whether on the technology side or more, on the sales and marketing side is another sort of segmentation that I think you want to go through.\nI think it's important that you understand the environment that you're operating in. By that, I mean, where your customers ultimately are.\nSo how you go about this?\nThe company I work for right now, for example, we are purely U.S. based. We don't have any partners really outside of Europe.\nWe're getting there for some data. Partners that provide this data are useful.\nSo we focused on U.S. market and built that up. 
But that's not true for many other companies, because they are global in nature.\nPaul Bird: Absolutely.\nAnd in my background managing channel, we basically did our channel management locally in Canada and the U.S.\nAnd then we used value-added distributors (VARs) globally, so we could spread our footprint.\nAnd that was a model that really worked for our software platform.\nTo be able to have that kind of regional representation and give people the ability to get localized support and things like that.\nSo I guess that was kind of an early ecosystem 20-plus years ago.\nBalthasar Wyss: Yeah.\nWhat I always recommend people do, is sort of build a channel, or a partner ecosystem map. Really map it out.", "score": 30.714391130166348, "rank": 25}, {"document_id": "doc-::chunk-1", "d_text": "Cannington freely admits that his company has both direct sales and channel sales. Bigger customers like to buy direct from the company and integrate solutions themselves while mid-market customers prefer the kind of one-stop-shopping that solution providers can offer.\nCannington recommends that solution providers ask ISVs how they pay their own sales reps and how they pay their partners and how they go to market.\n\"There will be strategic partners,\" he says, noting that some companies just make better partners for each other in the long term, while other deals may be one-off deals.\nHow can VARs find the right ISVs as potential partners? Mark Brown, vice president of marketing at Lowry Computer Products, a solution provider, says that he relies on distributor BlueStar to act as a matchmaker.\nCannington recommends VARs decide what business they want to be in and then ask customers who they are working with. 
Another strategy is to do a Google search, he says, because the most savvy ISVs will make sure they have good Google listings.", "score": 30.110093014390856, "rank": 26}, {"document_id": "doc-::chunk-2", "d_text": "But the partner puts together the overall solution and is often the face to the customer and the one-throat-to-choke if there are issues. It's unlikely the solution providers will readily hand over customer relationship management though they will need to step up their customer lifecycle policies, processes and people.\nThe referral or agent model has decreased in importance for vendors, falling from fourth to sixth priority in the last four years. And although most of the solution providers still engage with a few vendors as referral partners, they rank that activity near the bottom of what produces gross margin though it’s still above traditional software and hardware. They’d prefer to take the referral fees which require low sales and support investments rather than slog through the traditional resale process which can include arduous amounts of training, certification and support requirements.\n*This blog is an excerpt from the PartnerPath 2019 State of Partnering report: Driving Cloud Adoption using data gathered from 100+ vendors and 200+ partners in our annual State of Partnering study. More excerpts will be published in coming months. Be sure to subscribe to our blog below!", "score": 30.08996914603847, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "Recently, I found myself having similar conversations about how to manage channel partners with several customers from very different industries. The customers were in retail, technology, and hospitality respectively, but they each had similarities in the way they dealt with these partner relationships. 
The differences in their channel partner strategies were not due to their industries but rather were influenced by the stage their business was at, or by very specific market factors.\nDespite their differences, there was one overriding principle in how they approached their channel partnerships: it’s a strategic investment for their business. While their reasoning may differ, this didn’t detract from the importance of these relationships to their sales efforts and bottom line. For example, FMCG producer Dabur Asia explained how channel partners were a critical player linking their retailers and customers. At the other end of the spectrum, enterprise cloud platform producer Nutanix utilizes these arrangements to help them unlock doors in new geographies quickly.\nCommon threads also appeared in terms of their sales model and their enablement strategy, so much so that I identified three broad channel partner strategies. Before I launch into these, it’s helpful to outline what I mean when talking about channel partnerships.\nA channel partner specializes in various aspects of the sales process and undertakes this as a service on behalf of a business. Nutanix uses channel partners to help them scale quickly by undertaking just lead generation in some geographies, while they use their own sales engineers to conduct demos. But in new locations where they have no sales team, the channel partners help them expand with minimal investment by managing their entire sales process right through to closing. They also have premium partners who are able to unlock doors which they cannot open directly.\nBased on these recent customer discussions, the three channel partner strategies I’ve identified are: Exclusive, Targeted and Global.\n1. Exclusive channel partnership strategy\nBest for when you have a point solution.\nA friend of mine has a software startup that sells email encryption software to large enterprises. 
As large companies tend to purchase their product as part of a broader email security solution, their only go-to-market strategy is to use channel partners who have expert knowledge. They bundle several point offerings as part of an overall solution for the larger business problem. Businesses in this position prefer aligning themselves with channel partners who are SMEs in their field, who they can provide with exclusive access to their solution.\nThe key challenges in an exclusive channel partnership are to engage your channel partner reps.", "score": 30.062681944024995, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Yesterday I attended Computacenter’s Analyst Event. It’s a major independent provider of IT infrastructure services in Europe, ranging from reselling hardware and software to managing data centers and providing outsourced desktop management. My main interest was how it manages the potential conflict between properly advising the client and maximizing revenue from selling software. For instance, clients often ask me if it's dangerous to employ their value-added reseller (VAR) to advise them on license management in case the reseller tips off its vendors about a potential source of licence revenue.\nAn excellent customer case study at the event provided another example. A UK water company engaged Computacenter to implement a new desktop strategy involving 90% fully virtualized thin clients. Such a project creates major licensing challenges on both the desktop and server sides, because the software companies haven’t enhanced their models to properly cope with this scenario. 
The VAR’s dilemma is whether to design a solution that will be cheapest for the customer or one that will be most lucrative for itself.\nAs we said in our recent report “Refresher Course: Hiring VARs,” sourcing managers should decide whether they want their VARs to provide design and integration services like these or merely process orders at a minimum margin.\nComputacenter will do either, but they clearly want to do more of the VA part and less (proportionately) of the R. So, according to their executives, they have no hesitation doing what is best for the customer even if it reduces their commission in the short term. But they didn’t think many of their competitors would take the same view.", "score": 29.928463751740424, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Local VARs Do Best with SMBs\nClose connection\nIt's the local and specialized VARs that are closest to SMB customers and thus have a handle on the kinds of specific applications they need to run their businesses, industry experts said. \"The cost of building out a service infrastructure is huge,\" Driscoll said. \"We can leverage the expertise that already exists and partner with what we call benchmark delivery partners on both a regional and national level to fulfill opportunities.\" For their part, ServiceConnection partners get a built-in sales channel to drum up leads. ServiceConnection also promises its partners a level of commitment. \"Less is more,\" Driscoll said. \"We have a fairly monogamous relationship; we're not going to sign up 10 partners in their area so they're exposed.\" Instead, Driscoll said, ServiceConnection is handpicking between 30 and 40 tier-one players in major markets and with specific expertise. SAP has embraced some of those same principles in its PartnerEdge Channel Partner Program announced in May. 
SAP officials said they place a high value on partners with micro-vertical expertise, as well as those with a track record of long-term customer satisfaction and retention. As a result, the PartnerEdge program recognizes and rewards partners for sales transactions (how much SAP software is sold) as well as for capability-building activities such as training and customer satisfaction. \"By placing a value and measuring the capability and skill set of the partners, SAP and prospects can have the confidence that their partner can manage an SAP solution in a way that delivers timely business results to the company,\" said Michael Sotnik, SAP's senior vice president of channel and SMB in North America, who is based in Newtown Square, Pa. It's the micro-vertical expertise that is the partners' most compelling asset, according to Brad Nicolaisen, president of Et Alia LLC (\"et alia\" is Latin for \"and others\"), a Milwaukee-based SAP business partner. His company markets Crew, a professional services application aimed at construction and project-based assembly SMB customers. SAP, as a large organization, has to keep a broad focus, Nicolaisen said. The company may have best practices for professional service providers, for example, but partners such as Et Alia can go one step further and prepackage best practices specific to businesses doing such things as project-based assembly and fabrication. \"All we do are these industries,\" Nicolaisen said. \"We eat where they eat.
In this blog, we’ll go through the pros and cons of using a channel partnership, as well as a few different ways to set up your partnership.\nPros and Cons\nThere are some obvious benefits to using channel sales in your business. For example, you reduce your overhead costs by not needing a storefront. Plus you have lower customer acquisition costs, can broaden your geographical stake, strengthen brand credibility and have time to focus on your end of the business.\nThere are some trade-offs to secure these pros, though. With a channel partnership, you’ll have to give up some control of the sales process when you hand over your product. There’s also more risk for your brand reputation if a partner doesn’t reflect your core values or gets into trouble. Lastly, it’s much more difficult to get direct feedback from your customers, which makes developing new products and fixing issues tough.\nOverall, if you’re not willing to run the sales side of your business, then channel partnerships are likely a good choice. Just be aware of who you’re bringing into your brand and the type of partnership you choose.\nValue Added Resellers (VARs)\nThis is the most popular type of channel partnership. In this set up, a reseller will buy your product, adjust for their profit margins, and add extra value services through their business. It’s an excellent way to not only move your product, but also build on your brand reputation through trusted resellers.\nUsing VARs can significantly boost your sales numbers and help you reach a broader market than you would on your own. It’s a good choice for businesses who are releasing new technology, want to break into a specific geographic region or simply don’t have the capital to expand their operations on their own.\nService Delivery Partners\nThese partners aren’t reselling your products, but they do offer additional services to your customers. 
Most of their services are customized to the potential customer’s needs, such as consulting before purchase, installing the product or technology and managing it throughout the life cycle.", "score": 29.12238400588268, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Companies such as Dropbox, Mozy and Google Apps enjoyed great opportunities from cloud-based services in 2010.\nIn 2011, it seems we will see greater adoption and opportunities in the channel in the cloud space. Most value-added resellers (VARs) understand that they’re in competition with the many cloud computing providers that market and sell directly to customers. Your potential customers will be encouraged by the cloud providers seeking a quick sale to “cut out the middle man.” So where are the business opportunities for VARs?\nThere are three specific tactics VARs should take when trying to gain margin from the cloud: identify appropriate channels, understand the cloud marketplace (both real and perceived) and watch competitors’ movement in the cloud space.\nChoosing appropriate channels\nAlthough this may seem obvious, I watched many VARs miss cloud opportunities last year for not having appropriate distribution channels and business relationships in place to resell cloud products or services.\nIn two specific examples, VARs focused on hardware and software sales instead of taking advantage of product and migration services.\nIn both cases, the customers had just moved their email to Google Apps because mailbox volume and hardware maintenance made email cumbersome and expensive. The customers not only bought the services from Google, they also purchased several months of professional services. These professional services brought in twice as much revenue as the hardware support would have. 
These services included assisting with putting a retention policy in place to delete old email and migrating the remaining email to Google.\nI’m not saying the vendor should have been a Google Apps partner (which is becoming an attractive option for some VARs), but they should have had some cloud or traditional hosting available to meet the customer’s desire not to host their own email.\nBy having this channel in place prior to a customer request for the service, you’re acknowledging the market to your potential customers. Most customers will understand that you’re still adjusting and adapting to the market, but just showing that you have a grasp on the tech trends will assure the customer that you can help them. Of course, you also bring safety and experience with existing platforms, so that should the cloud technology not work, you can quickly provide other options.\nIf you need assistance in defining your market and products with supply channels, candid conversations with existing customers can help quite a bit. Make this a listening session and not a sales pitch. If you start pitching your solutions under the guise of developing your channels, then your future requests to talk business development will land on deaf ears.", "score": 28.950921142916247, "rank": 32}, {"document_id": "doc-::chunk-3", "d_text": "This still goes on today using .NET applications, ActiveX controls and Flash. It isn’t transformational or a paradigm shift in thinking; it just requires thinking through the design and aligning it with the business objectives.\nA survey by Gartner Inc. found that, among users and prospects of SaaS solutions in 333 enterprises in the US and the UK, the apparent acceptance of SaaS as a viable model has not entirely translated into satisfied users of SaaS. Vendors of these on-demand solutions will point out that a large percentage of respondents will maintain their current levels. 
The Hybrid SaaS model provides various choices, from a full installation through to a plug-and-play hardware appliance. These solutions sit within the enterprise and allow more seamless integration and scalability within your environment. This model is suited to larger enterprises, with a large number of technicians and end users, typically with multiple points of integration that require sub-second response times.", "score": 27.795260318866266, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "A one-size fits-all solution for an enterprise e-commerce solution\nby philippe bodart\nToday enterprises have different models to choose from when it comes to implementing e-commerce solutions. The choices vary from a buy model, a build model, a hosted build model or a SaaS model, better known as a software-as-a-service model. For small to medium-sized organizations, and for non-profit organizations, the choices are quite often more limited to either an open source hosted model or a one-size-fits-all SaaS solution.\nI will elaborate on each of these models and develop their pros and cons, and also develop the levels of skills you need within the enterprise to implement these delivery models.\nWhen you are evaluating models and vendors within the different models, it is important to first have a solid look at your requirements, your budget, the level of risk you want to take and also your IT skills altogether.\nIt is, in the first instance, more important to evaluate what model is best suited for your enterprise or organization, and then choose a suitable vendor within this model, and surely not the other way around.\nI will cover 4 delivery models and elaborate on their pros and cons:\nThe first delivery model is the buy model, where the customer is actually buying e-commerce software and the license that goes with it.\nA licensed 
software is usually feature-rich, robust and well tested, and has some pre-built integration tools available. There is a continuous investment by the vendor in new features and enhancements, although they might come at a steep additional cost to the purchase price.\nYour time to market will be relatively slow, but faster than the build model, which will be discussed later. Most likely there will be sufficient system integrators and developers available to help you with all integrations needed with internal systems.\nOn the negative side of the buying model, you have:\nthe difficulty of choosing the right product for your requirements.", "score": 27.72972038949445, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "By Adam Atlas\nAttorney at Law\nThese days, an ISO without VARs/ISVs isn't an ISO. That's an exaggeration, but today's savvy ISOs are negotiating agent relationships with value-added resellers (VARs) or independent software vendors (ISVs). In this article, I'll highlight some of the legal issues around these relationships, which are becoming central to the ISO business.\nSo what is a VAR or ISV anyway? There is no official industry definition of these terms, but in plain language, VARs and ISVs already sell something to merchants who might have merchant accounts with ISOs.\nAs an example, consider a dentist practice management software-as-a-service (SaaS) provider that has a portfolio of a few hundred dental clinics. Through its SaaS, this ISV helps dentists maintain client records, set appointments, communicate with patients and create invoices. This ISV may see value in adding payments to its own offerings or at least in integrating with the payment services of a given processor or ISO.\nOnce that integration is technically possible, the ISV can easily solicit its dentists to offer them payments with the unparalleled convenience of the payment services working hand-in-glove (pun intended) with the SaaS dentist platform. 
From the point of view of the ISO, this ISV adds value to the resale of merchant services, and they are therefore a value-added reseller (VAR).\nIt's best not to get hung up on jargon, because it often changes, and different people will understand jargon to have different meanings. It's perhaps best to understand the roles of the various parties and use acronyms that they like to use for themselves.\nTwo factors drive ISVs and VARs to also want to sell processing: profit and competition.\nThe profit incentive is straightforward, as payment processing gives them a chance to participate in an entirely new revenue stream that is separate from their original business objectives. The ISV/VAR looks at their merchant customers the way others do: as sources of revenue with an associated cost of acquisition.\nAdding payments to an existing portfolio has a relatively low cost of acquisition for both the ISV/VAR and the processor behind it. In fact, in the contemporary landscape it probably costs more—from the supply side and also the merchant's purchasing side—to procure payments in a way that is not integrated with the merchant's core operating platform.\nCompetition as a motivator is more complex.", "score": 27.034195851385114, "rank": 35}, {"document_id": "doc-::chunk-1", "d_text": "SaaS products that must integrate into the Enterprise will require services for integration into those products. Finally, for SaaS products that are not fully configurable, or for very complex operations, some customization may be required in modules that interface to SaaS products.\nSaaS sales models vary widely, but all are more internet focused. Freemium models are common for low cost products which can provide a base level of functionality at no cost, ideally paid for by advertising revenue. The intent is to provide sufficient value to “up-sell” to the paid version, typically costing $10 or less per month. These SaaS products are sold by “no-touch” internet sales and marketing. 
Products that are somewhat higher priced rely on internet-marketing-supported sales with the potential for online chat. Enterprise SaaS products are sold directly or through a channel, due to their complexity and likely services component. These Enterprise SaaS products are heavily supported by internet marketing, which helps educate the customer and reduce the direct sales costs.\nStrategies vary greatly depending on the horizontal or vertical market served. Different markets have differing propensities to move to SaaS and different competitive landscapes. Generally SMB markets have a greater demand for SaaS, with the desire to reduce in-house IT complexity, a reduced need for customization, and reduced integration requirements. Certain vertical markets such as medical and financial have greater security, up-time, and compliance requirements which may increase the time to move to a SaaS solution. Consumers are now very comfortable with “applications in the cloud”, and do not have a “switching cost” from current applications. Consumer markets are the most cost-conscious and can be highly competitive. Cloud Strategies will work with you in whatever market you focus on.\nWho We Work for\nVenture Capitalists and Private Equity Firms\nProviding assessments and due diligence reports to evaluate the company’s SaaS business strategy, or to scope the opportunities and risks of transforming a software business to a higher-value SaaS model.\nProviding independent assessments to explore the opportunity for the company to move to a SaaS model, or assist in the implementation of a SaaS model.\nProviding expert SaaS resources to facilitate the financing, building, and running of successful SaaS businesses. 
We work with company management to develop and build a successful new SaaS company or transform an existing software company to successfully support a SaaS model.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-1", "d_text": "Google App Engine, RightScale, and Microsoft Azure are good examples of PaaS vendors, and besides Google's stable of apps, Salesforce.com is a classic example of SaaS in action.\nFor most of these vendors, the flexibility and choices of what they can work with to deliver an end-to-end solution mean that they can usually tailor a deployment to meet customer requirements. The customer wants a hybrid cloud running CRM software? How about AWS, managed by RightScale, all running Salesforce? Or some other combo that makes sense technically and fiscally.\nBut there are some vendors that believe the best approach to the cloud is to own the entire stack, so they can deliver and manage the whole thing for customers. The appeal is understandable: by not partnering with anyone (except perhaps SaaS partners, because the huge diversity of SaaS offerings prevents any one vendor from having them all), a vendor can avoid changes in licensing and business arrangements, and -- let's face it -- keep all the revenue for themselves.
analyst Kash Rangan said in a report released Friday that upstarts, such as Salesforce.com, WebEx, RightNow, Taleo, Blackboard and NetSuite, will benefit most, compared with traditional software players Microsoft, SAP and Oracle attempting to move into the space.\nPart of the disruption will come from the method in which traditional software companies recognize revenue from SaaS sales, according to AMR Research Inc. senior vice president Jim Shepherd. \"If SAP sells a system today for $1 million, they recognize the million dollars on the day they sell it and it goes into the revenue for that quarter,\" he said. \"If they were to sell that same system as a software as a service, they may get $20,000 per month for the next 10 years.\"\nThe same dollar amount is spent on applications, but it is logged in accounting books differently under the SaaS business model. Assuming there’s a strong demand for these services, it could have a negative short-term impact on license revenue at traditional software vendors, Shepherd said.\nThat’s part of the reason Microsoft, Oracle and SAP didn’t rush into offering these services, which companies like Intuit Inc. have been offering for years ….\nAlthough in general the author is right in stating that “pureplay SaaS vendors” may seem more flexible with regards to their business model when it comes to SaaS, I don’t think the article fully takes Microsoft’s efforts in this field into account.\nFor one, WebEx is mentioned as a company that benefits most. 
As you may know, Microsoft acquired PlaceWare a few years ago and turned this service into Microsoft LiveMeeting, one of the most successful Web conferencing services offered today.\nNext to Services in Office System, Windows Live has also been introduced, and Microsoft officially launched its first service earlier this week: Windows Live Messenger, which in the first 36 hours after the launch had over 2.4 million downloads; hmmm, that’s quite successful, isn’t it?\nCurrently there is a huge list of Windows Live Services in Beta and planned.
Therefore, companies do not have to deal with the configuration or maintenance of the software themselves. You only pay for a subscription and that gives you access to the application.\nIf updates or new functions are developed for the application, these are generally made available to all customers. The majority of SaaS applications are off-the-shelf, plug-and-play products, where the vendor manages everything behind the application, including:\n- Hardware components such as networks, storage and servers in the data center\n- Platforms such as virtualization, operating system, and middleware (application independent programs that mediate between applications)\n- Software requirements such as runtimes (how long a program runs), data, and the application itself\nOther applications can also be integrated with SaaS software through application programming interfaces (APIs). For example, you can link your CRM software and your ERP software to use the synergies between the applications.\nCharacteristics of SaaS services\n- Shared cloud architecture (or multi-tenant cloud architecture): all users use a centrally managed infrastructure.\n- Access from any device connected to the Internet: Data and information can be accessed from anywhere in real time, making synchronization easy.\n- Great usability via familiar web interfaces: Application user interfaces are often structured similarly to websites. Since the user already knows the structure, this accelerates the acceptance of the software in the company.\n- Collaborative features: By centrally storing the data and features that enable collaboration, collaboration can be designed efficiently across teams or sites.\nSaaS offers your business a number of key benefits.\nLow investment risk\nSince there are no costs for hardware and infrastructure when introducing SaaS applications, the investment is lower than for an on-premises solution. 
Studies show that the investment costs are as much as 30% lower in comparison – regardless of the number of users.\nSaaS providers attach great importance to the fact that their software is always available and therefore rely on redundant systems. This means that at least two comparable systems are available in parallel.\nThis ensures that data, systems, networks, etc. remain available even in the event of an error and that errors can be corrected without an outage.", "score": 26.357536772203648, "rank": 40}, {"document_id": "doc-::chunk-3", "d_text": "Usually, the service provider or vendor leaves only slight scope for the user to customize or configure apps, since the apps or software are expected to be used as deployed. However, in the case of specific needs, a customer can ask the vendor to make updates and changes accordingly.\nQlikView does not offer any direct service as SaaS. There are some QlikView-based applications which are made available as SaaS by a few QlikView partners. As per the SaaS model, a single type of application is developed and distributed among the customers. This model does not fit the QlikView user purpose, as customers want to have apps in which they can use their own data for analysis. Due to this, cloud computing services other than SaaS work well in the case of QlikView.\nSome partner service providers of QlikView provide QlikView-based apps as SaaS, which are either domain-specific or industry-specific, such as workforce management, channel management, market research, etc. For example, some such solutions are: IFR Monitoring, CONTEXT, InContact, Kenexa, SynerTrade, etc.\nThus, this concludes our explanation of QlikView on the cloud and SaaS. We hope this information was sufficient to establish a fundamental understanding of both concepts and their relationship. 
You may also like to know about Different types of QlikView Certification and Exam details.", "score": 25.95826242985036, "rank": 41}, {"document_id": "doc-::chunk-1", "d_text": "Even though a VAR will sell your product and probably market it, the SaaS company still plays the biggest role and will reap great benefits from marketing efforts to create demand for their products.\nEngage Buyers in the Software Evaluation Process\nFor these SaaS companies that need to promote a new product or stand out from the competition, inbound marketing can help engage buyers at every level of the buyer journey by delivering information that relates to their research. When the buyer is at the evaluation stage of the journey, you can help customers realize that your software is the answer to their problems and that it is the best choice on the market by publishing content that answers the questions and concerns they may have. For instance, when the buyer is going through the software evaluation process, you could publish a software buyer guide that informs the buyer of the pros and cons of similar software products on the market.\nJust as we take stock of what we want and need when buying a car, and then research each vehicle before heading to the car dealers, buyers will conduct over 60% of their research online before contacting a seller. According to TechTarget 65% of IT buyers will consume at least four pieces of content before contacting the vendor, and the 2014 B2B Buyer Behavior Survey from Demand Gen states that over 61% of buyers will choose the vendor that supplied relevant information at each stage of their journey.\nIt’s all about building trust in your product and company. 
Therefore, it makes sense to invest in inbound marketing content that provides the research material your customers are searching for.\nMarketing software can help you manage your inbound marketing strategy.\nMarketing software can help you plan your content, meet deadlines and coordinate your team to develop content that is specific to buyers at each stage of the funnel.\nThe software will also help track engagement and the progression of buyers through the funnel, which helps you create and repurpose content and send the information to the buyer just when they need it most.\nSelecting a channel partner to work with can be hugely valuable in amplifying your message and increasing your reach. Many SaaS startups recognise the advantage in partnering with larger companies that have a more established brand name, a broader network, and a larger sales and marketing team.\nSaaS and the Value Added Reseller Channel\nSaaS companies that choose the VAR channel can offer customers the added assurance of a one-stop shop of customized software and ongoing support. On top of IT services, the VAR channel may also offer customers professional services such as consulting, design and training.", "score": 25.65453875696252, "rank": 42}, {"document_id": "doc-::chunk-5", "d_text": "Software as a service (SaaS) is a “software delivery method that provides access to software and its functions remotely as a web-based service.” Consequently, SaaS affiliates must promote a service, rather than a physical or digital product or software license.\nUsually delivered as a subscription to a cloud-based web (and/or mobile) application, SaaS represents a shift from traditional software licenses and products that must be downloaded and installed to software that is accessed from anywhere using the web. 
This change in delivery mode can affect the sales model as well.
Does SaaS emphasize the service offered or the software delivery method?
Let’s use the example of PadiAct, a SaaS-based email subscription optimization tool, to answer this question. With the SaaS model, PadiAct users enjoy full access to their email services immediately after registering an account and choosing a pricing plan. No download or installation is required.
What SaaS affiliates are promoting and marketing to the end customer in the case of PadiAct is the attractive service provided by the merchant—that is, the email lead optimization tool for websites—rather than the (less important) fact that it is offered via web-based software. In short, SaaS sales emphasize the unique service, not the delivery method.
Who are the SaaS vendors and SaaS affiliates?
In order to answer this question, we need to look closely at the SaaS merchants that want to grow their business using affiliate marketing.
As previously mentioned, a SaaS vendor is any merchant that offers access to a remotely hosted software application that lets end users access a value-added service.
Because SaaS primarily changes the delivery model, not the need to sell software, many SaaS vendors still want to use affiliates to expand their business.
Thus the question becomes: which of today’s proven affiliate models are most likely to convert from the current model to the SaaS model?
Taking the example of the B2C shareware software world, we can confirm that traditional affiliate selling points centered on download portals, once the favorite channel for any software company interested in starting the mass distribution of their product.
How can the download portal be adapted to SaaS?
At first glance, we might think that the portal model will not work for promoting SaaS products, mainly because the download portals are built around the idea of promoting software products designed for specific purposes—such as security software, video editing software, or PC optimizing software—instead of promoting services.
However, the shift to SaaS-friendly “download” portals is already happening.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "Report Outlines SMB SaaS Strategies for Vendors
- By Herb Torrens
- August 22, 2008
Software as a service (SaaS) for the small to medium-size business (SMB) market has opened potential opportunities for vendors, and a report released last week by Forrester Research offers some advice for gaining entry.
The report, "Forrester's SaaS Maturity Model," describes a six-step approach to help service providers and independent software vendors (ISVs) assess their SaaS business goals and technical means of achieving a successful SaaS business.
SaaS is going global. Forrester says that North America has the highest adoption rates, the Pacific Rim has the most pilot projects and the European business community has shown significant interest in SaaS. The most notable success for SaaS has been in providing customer relationship management apps to SMBs, such as those provided by Salesforce.com.
However, other hosted apps may gain momentum.\nForrester's SaaS Maturity Model describes levels of sophistication of hosted services. It provides tips for making business and technical assessments, with basic questions such as \"Who does what for whom?\" and \"What is the approach for customizing processes, data and user interfaces?\"\nThe maturation range of the Forrester model is from 0 to 5, with 0 being a simple outsourcing of an application by a single enterprise customer. A level 0 operation is not considered to be a true SaaS implementation, according to Forrester's definition. Level 5, at the top, relates to dynamic service applications with a \"build for change\" approach to application development. By contrast, Forrester puts Salesforce.com's initial CRM operation at level 3.\nThe SaaS Maturity Model aims to match the technical foundations of the service provider or ISV with its business goals. The report warns that \"targeting the highest maturity level is not necessarily the best fit for every vendor.\"\nThe Forrester SaaS Maturity Model can be accessed here.\nHerb Torrens is an award-winning freelance writer based in Southern California. He managed the MCSP program for a leading computer telephony integrator for more than five years and has worked with numerous solution providers including HP/Compaq, Nortel, and Microsoft in all forms of media.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-4", "d_text": "\"However, that lower cost might mean that they're weaker on security, so how do you compare those costs? The impact on VARs is that they might end up spending a lot of time and money on the service end.\"\nMeanwhile, other vendors' pricing programs show there's no one magic solution for the entire industry. Stephen Richards, CA's executive vice president of sales and field operations, says the company's FlexSelect plan has received very positive results so far, but acknowledges that it best serves CA's preferred partners. 
Oracle has tinkered with its licensing and provided helpful tools like the software investment guide, which shows customers how to set up their implementations the way a stereo manual shows you how to set up a home-theater system. And at Sun, the company has begun helping its VAR partners realize new revenue streams.
"The reseller community tells us the value-add is in the up-selling and cross-selling of service for things like consulting and migration," says Sun StarOffice product-line manager Iyer Venkatesan. "We help them by marketing these services and pointing out other opportunities for value-adds."
Ultimately, it's crucial that VARs are more attuned than ever to clients' needs. Relavis, which resells IBM software, is committed to demonstrating a technology's usefulness before requiring its customers to buy it. Baum says the move to a complete service-provider model may still be a ways off, but there's no doubting customers' feelings about software prices. "We're seeing a strong downward pressure on pricing because companies are tired of expensive projects with long implementations," he says. "They're telling us, 'We've bought a lot of stuff over the past few years, so what you're selling better work with what we have and with minimal impact on our infrastructure.'" By listening to such input and adapting accordingly, Relavis and other VARs are giving themselves the chance to compete in this ever-changing marketplace.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-2", "d_text": "We’ve listed some of the multi-vendor SaaS marketplaces we’ve joined as well as some other top SaaS marketplaces that are well worth considering to join.
Salesforce is probably the most widely known SaaS or CRM. They have a vast marketplace or app exchange with various searchable product categories.
You also have the ability to offer your solution on their partner programme marketplace, resulting in being listed in their app exchange.
Typeform, a leading provider of interactive forms, also has a marketplace. They make it easy to add your SaaS application through their product partner initiative. All you need is a Typeform integration.
Other popular online SaaS marketplaces include:
- Campaign Monitor", "score": 25.173493072337244, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Software as a Service (SaaS)
Software as a Service (SaaS) is a cloud-based software delivery model in which the cloud typically hosts the application and data centrally and provides services to its users using a web browser. Our SaaS services ensure the following practices are considered during software development.
- Global reach with the capability of serving concurrent users efficiently up to its critical level
- Implementation and use of the latest technologies to deliver a rich user experience
- Ensure intuitiveness, stability, scalability, and robustness in delivered SaaS application development
- Smooth upgrades from older to newer versions of the solution
- Compatibility checks across widely used browsers with an easy-to-use interface
Your Requirement - Our Expertise: Complete SaaS Solution
Our SaaS application development solution covers a variety of industries so far
- E-Commerce Solutions (Generic, for many industries)
- Facility Management System (Facility Management Industry)
- Project Management System (IT Industry)
- Document Management System (Print & Media Industry, Law & Accountings)
- Content Management System (Generic, for many industries)
Additional SaaS services benefits available from Verve
- Software is hosted externally by a vendor
- Hosted on the Cloud
- Accessible via a browser over an internet connection
- SaaS's pay-as-you-go model provides flexibility.
- Scalable architecture, changing your usage plan
is easy: add more database space or more compute power as usage increases.
- SaaS is a cost saver, as the vendor maintains the servers and infrastructure that support the application and covers ongoing costs like maintenance and upgrades.
- Highly secure access to the developed solution for a large customer base
- Experience building scalable Software as a Service (SaaS) applications on popular cloud infrastructures such as Amazon AWS, Windows Azure, Linode and Rackspace.
- Flexibility for clients to increase the size of their target market
- Rapid development and enhancement of products
- Overall low cost of solution development
- Better, instant scalability with faster development of dynamic products
- Minimal management overhead from concept to implementation", "score": 25.000000000000068, "rank": 47}, {"document_id": "doc-::chunk-2", "d_text": "This takes customisation to the next level, offering micro and macro additions to make your storefront feel truly yours.
Truly make your eCommerce platform your own by capitalizing on the software’s overall branding capabilities. Grow your business by catering your storefronts to your clients. Offer them a one-of-a-kind, tailor-made experience (e.g. look, feel, products, pricing, shipping and payment methods).
3. CRM software
The Customer Relationship Management software allows critical purchasing information and trends to be relayed to your sales teams for more optimised approaches.
The VARStreet CRM eliminates duplication by integrating it with the quoting model. This allows sales personnel to access complete data for every customer account.
Know more about the FREE CRM.
4. Sourcing and procurement
Costing is an important part of sourcing. How do you ensure you are not just getting the best deal but are able to validate quality and access distributor information?
VARStreet offers the simplest way to increase your profit margins: sourcing right.
The sourcing and procurement functionality allows you to maintain a real-time watch over distributor prices and inventory availability along with the ability to implement the necessary changes within the same portal.
Hunting for distributors? It stops with VARStreet, which offers a ready list of over 45 verified IT and office supplies distributors within the United States and Canada.
Crisp, clean and rich content is now easily achievable. VARStreet helps you integrate detailed images, technology specifications, related products, descriptions, etc. Anything you feel will enhance your customer experience, VARStreet offers.
Welcome your brand into the digital age of marketing with a bang! VARStreet offers the best in online marketing tools to enhance brand visibility and image. Capitalise on the email marketing, SEO and social media integration features to spread the word.
6. Reporting and analytics
Need to understand how well your business is doing? The reporting and analytics capabilities offered by VARStreet help you keep tabs on real-time figures of your sales cycles and make better-informed business decisions.
And a lot more!
Selling with VARStreet is not limited to its extensive software catalog. The company offers a range of additional benefits that help both back- and front-end users maneuver their activities with ease.
- Customization extends to the workflows as well. Set up your own bundles or automated workflows to ease the complexities of staff and product management within your business.
- Ensure your business activities are compliant with your government contract.", "score": 24.979705835004157, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "Channel? Alliances? It’s where bad sales reps get hired.
This is what the historical perception of those teams has been.
While it’s foolish to generalize every person in that role, the truth is most of them relied on resellers to push their product. This worked great in legacy businesses since they wanted to procure new products through resellers that they had built long-lasting relationships with to cut risk.
But…this buying process has since been disrupted by SaaS and Cloud products. From the person with the hands on keyboard to the CIO, everyone is purchasing solutions collectively as well as individually. Traditional Resellers are no longer needed as much because products can simply be purchased with a credit card on month-to-month contracts.
Companies are now forced to look at other channels for revenue. Studying successful SaaS companies of today and based on my experience, two channels will slowly replace resellers:
- Solution Partners
- Integration Partners
Solution partners act as an extension of the sales/CS/support teams, taking away the burden of recruiting and salaries.
They wrap their consulting services around SaaS products and earn a commission/referral fee for the sale. In exchange, they are responsible for sales, tier 1 support and ongoing customer success. In a way they are resellers but with much more skin in the game.
It’s a win-win for the SaaS company as well as the partner. The SaaS company gets to grow in areas where it lacks expertise and the partner gets to be the product thought leader in the niche, whether it’s a particular geo or market (SMB, Mid-Market, Enterprise etc.)
The company had limited expertise selling to Enterprise clients so they decided to launch a focused partner program targeting System Integrators and Solution Providers. They’re calling it a $1.45T opportunity ;)
Zendesk recognized in their 2017 annual report that their Enterprise clients needed more support than their smaller clients.
As a result, they hired an executive in 2018 to expand their partner strategy with a focus on SI’s.
Integration partners create a deeper value prop for respective products and allow them to become more “sticky.”
SaaS companies often have an open API or a platform product strategy which makes it easy for their products to integrate with others. As SaaS stacks of companies get bigger, this approach allows two products to be gelled together for a unified customer experience.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-2", "d_text": "Although this model is relatively risk free and easy to control budget-wise, it still requires a solid IT skill set, ideally internally or sourced through a vendor that has good knowledge of the open source platform and the internal e-commerce requirements. It will overall cost more time to implement than a straightforward SaaS model, described hereafter.
The fourth and by far the model that has the most appeal and traction lately is the SaaS model, whereby the E-commerce application is sold as a service and whereby the software is provided to you as a one-size-fits-all solution.
First, there is no customization needed, as everything is out-of-the-box and all data input can be done via a standard browser and Internet connection.
Your time to market can be counted in days, not weeks and certainly not months.
Whatever you enter can be live in your e-commerce application almost immediately.
There is literally no upfront cost and there is no scaling issue, as the application will scale with the growth of your e-commerce business.
Updates and upgrades are part of the contract and included in the monthly or yearly service fee, so no extra hidden cost for that.
There is no hardware investment needed, as the service is hosted on the servers of the software vendor.
And most importantly, you do not need an IT staff to run your e-commerce application; you can leave it in the hands of the business people making commercial decisions, not technical decisions.
There are very few negatives for the SaaS model, but a few objections could come from:
customers that have security concerns as they share a platform with other customers
the pace of innovation and new features that does not fit certain customers' requirements
difficulty of integration, although the application can sit alongside other web applications.
The SaaS model is by far the most appealing model for the majority of small and medium-sized businesses and for any non-profit organization that is taking an e-commerce approach. For larger enterprises the SaaS model could sit alongside a larger corporate bought or built e-commerce application. It is fast and easy to set up, and it can serve a niche purpose in the overall e-commerce strategy.
But for any organization where brand recognition, merchandising and marketing is the key driver, the SaaS model should be the prime strategy.
Time-to-market and speed of deployment are other key drivers to choose this model as a prime model.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "Software as a service
SaaS (software as a service) is a cooperation model with business customers, based on delivering software hosted in the cloud, so it is available anywhere, on any device with access to the internet.
If users want to use the functions of a shared business application, they just have to open it in a web browser. More and more companies regard this solution as the most viable, mainly thanks to low periodic fees, which barely register in a company's budget, as well as the lack of a long implementation period. SaaS permits immediate access to the application's functions, thanks to the trial version available during the test period. Another key SaaS advantage is the possibility of using advanced applications without the need to install or maintain hardware or software.
SaaS vs On Premises
The SaaS model is the opposite of the traditional On Premises model, which requires buying software on physical media and installing it on the particular devices used. Such a purchase involves a one-time payment, which may range from a few to several thousand pounds. Not every enterprise can afford such a sum, so more and more companies prefer SaaS, where the initial fee is relatively lower and the ongoing subscription cost is proportional to the level of resource consumption. This means that the payment matches the number of users using the shared application.
Another asset of SaaS is the fact that the number of application users is typically unlimited. The length of the subscription period depends on the conditions of the contract signed between the client and the provider. SaaS may also appear more profitable because the whole software stack resides on the provider's side.
This means that the software house is responsible for the software's infrastructure, updates, security, and the creation and restoration of backup copies of files and databases in case of a breakdown. Therefore, there's no need to worry about losing access to the data or losing it completely. The users may also count on technical support as part of the software service, which is really helpful and time-saving. Because of this, your company doesn't need to hire any additional IT specialists.
Although SaaS is definitely the most common and popular solution at the moment, it has its opponents. They argue that this model is unstable, since the software's operation depends on paying the periodic subscription, whereas the traditional model guarantees unlimited access to the purchased application.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-1", "d_text": "Product Mix and IT VARs
An IT VAR is mid-channel; they buy white-label hardware and software from manufacturers and developers up the chain who do NOT want to be bothered with retailing to the unwashed masses, and add their own branding to the final product.
Consider all those ~$100 tablets that came out last year. They all have the same chip in them. The manufacturers are mid-channel; they add value by completing the packaging job with screen, battery, sound, memory, etc., and put their own label on it. But then they have to SELL this thing somewhere...
They have relationships with chain retail sellers, small but highly trusted influencer organizations (e.g.
an accounting firm that also sells accounting software), IT shops who fix and install networks, and Managed Services providers who go around to different companies and maintain their IT equipment.
Each segment has its own needs and mix.
Now imagine you are the executive in charge of the sales and inventory managers of that VAR.
You need enough items in stock to feed the need (demand) from those different market segments.
You also want to avoid over-investing in inventory, because that ties up working capital.
Also, your sales staff are asking for direction. What should they "push"? Are the developers from the top pressuring you to move their product, or they'll consider giving the distribution rights to another VAR?
All this and more goes into the product mix decision. You have to see the future, watch your Planned vs Actual like a hawk, ensure inventory is on hand Just In Time, and a number of other crazy things.
Supporting this, ideally, is a robust integrated CRM, accounting, and inventory management system.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "There are dozens of possible ways to develop software. A wide variety of existing programming languages, each with its own strengths and weaknesses, frameworks, libraries, and other development tools allow you to create applications of different types: for desktops, for mobile devices, web applications that work via browser, etc. Different software development methodologies and frameworks like Scrum or Kanban will help you adapt the development process for various types of projects, reduce costs, eliminate risks, and use the available resources more efficiently. But besides the development itself, there's another important issue that requires close attention.
We’re talking about the way of licensing and delivering the software.
When there's a need to choose what framework, technology, or programming language should be used during the development process, in most cases, the development company makes the final decision and the customer relies on the developer's experience and skills. But the chosen software delivery model affects both the customer and the developer because different models have different pros and cons and require different development strategies.
In this article, we'll talk about one of the existing software licensing and delivery models named Software as a Service, or SaaS for short, and the key benefits it offers.
What is SaaS?
SaaS is centrally hosted software that is provided to a customer in the form of a service on a subscription basis. Applications of this kind run on the servers of SaaS providers and users can access them via browsers. Instead of buying an application, a user pays a "rent" that allows them to use it for a certain amount of time. Thus, users can decrease the cost of use, which is one of the primary benefits of using SaaS. There's no need for a user to think about the technical issues since the SaaS provider takes care of them.
Let's continue with the main SaaS advantages and benefits for a customer.
Main Advantages and Benefits for the Customer
When we talk about an application without singling out specific features, we usually mean in-house software, software that was created for use within the organization. In this case, the customer is responsible for supporting the hardware, databases, storage, security, and other important aspects of a successfully running application. As you can imagine, this approach entails high costs due to the need for a team of IT specialists and regular expert input.
For small businesses or state-owned enterprises, this may become a headache.
One of the possible solutions is SaaS.", "score": 23.66817398298508, "rank": 53}, {"document_id": "doc-::chunk-2", "d_text": "can also become cumbersome, if not overwhelming, at times. Consider the following benefits when partnering with a Managed Service Provider.
Your MSP Is the Onl...
Comparison of SaaS, PaaS and IaaS
There are usually three cloud service models: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).
Whether it is IaaS, SaaS, or PaaS, each has its own intricacies, but today we're going to help you to differentiate SaaS, PaaS, and IaaS.
SaaS - Software-as-a-Service
SaaS is generally charged based on the number of users, with recurring monthly or yearly fees. Companies have the choice to add or remove users at any time without additional costs. Some of the best-known SaaS solutions are Microsoft Office 365, Salesforce, Google Apps. It is the responsibility of SaaS providers to manage server, network, and security-related threats; in turn, this helps organizations reduce the cost of software ownership by removing the need for technical staff to install, manage, and upgrade software, as well as reducing the cost of licensing software.
PaaS - Platform-as-a-Service
A cloud service, typically providing a platfor...", "score": 23.642463227796483, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "We value partnerships as a way to expand our reach and provide even greater value to our clients.
We are always seeking to develop long-term relationships with like-minded organizations and individuals who share our commitment to delivering top-notch software development solutions.
Our partnership program is designed to create mutually beneficial relationships that enable both parties to achieve their business goals.
Our Engagement Model
Our partnership program is tailored to meet the specific needs of our partners and offers a wide range of benefits, including access to our expert
team of developers, designers, and QA specialists.\nWhether you're looking to expand your services, improve your existing products, or build new ones, we can help you achieve your goals.\nAs a CodeUp partner, you'll benefit from our extensive experience in software development, industry insights, and our commitment to quality and excellence. We work collaboratively with our partners to ensure that our solutions meet their unique requirements and deliver real value to their business.\nWe offer three engagement models to choose from based on your needs:\nThe Fixed Price model is suitable for smaller projects that take up to a month to complete or for projects with very detailed specifications where all functions and requirements are clear.\nThis model ensures that the project is completed within the budget and time frame as it allows us to accurately determine the duration and cost of the project. It's an ideal choice when requirements are well-defined and detailed.\nOur team will estimate the scope and complexity of the project and provide a fixed price for the complete software development along with a project delivery schedule.\nTime & Material\nThe Time & Material model is suitable for projects that have evolving requirements or lack well-defined plans at the outset.\nIt allows for flexibility in terms of team size and workload, which can help optimize time and costs. With this option, we provide skilled resources and bill the development effort at the end of each month based on a pre-negotiated hourly rate.\nThe engagement can range from a few months to several years, with the total project cost determined by the amount of time and resources expended, as well as the actual development effort contributed.\nDedicated Team (TaaS)\nWith the Dedicated Team model, you have the opportunity to hire a development team while leaving the hassle of managing salaries, hardware, software licenses, and other aspects of IT infrastructure management to us. 
You are free to interview, select, and manage each team member, and you retain full control over the team.", "score": 23.191299662621784, "rank": 55}, {"document_id": "doc-::chunk-2", "d_text": "Firstly, because the SaaS model does not bring them the same income structure, secondly, because continuing to work with a distribution network was decreasing their profit margins and was damaging to the competitiveness of their product pricing. Today a landscape is taking shape with SaaS and managed service players who combine the indirect sales model with their own existing business model, and those who seek to redefine their role within the 3.0 IT economy.
Unlike traditional software which is conventionally sold as a perpetual license with an up-front cost (and an optional ongoing support fee), SaaS providers generally price applications using a subscription fee, most commonly a monthly fee or an annual fee. Consequently, the initial setup cost for SaaS is typically lower than the equivalent enterprise software. SaaS vendors typically price their applications based on some usage parameters, such as the number of users using the application. However, because in a SaaS environment customers' data reside with the SaaS vendor, opportunities also exist to charge per transaction, event, or other unit of value, such as the number of processors required.
The relatively low cost for user provisioning (i.e., setting up a new customer) in a multi-tenant environment enables some SaaS vendors to offer applications using the freemium model. In this model, a free service is made available with limited functionality or scope, and fees are charged for enhanced functionality or larger scope.
Some other SaaS applications are completely free to users, with revenue being derived from alternate sources such as advertising.\nA key driver of SaaS growth is SaaS vendors' ability to provide a price that is competitive with on-premises software. This is consistent with the traditional rationale for outsourcing IT systems, which involves applying economies of scale to application operation, i.e., an outside service provider may be able to offer better, cheaper, more reliable applications.\nThe vast majority of SaaS solutions are based on a multi-tenant architecture. With this model, a single version of the application, with a single configuration (hardware, network, operating system), is used for all customers (\"tenants\"). To support scalability, the application is installed on multiple machines (called horizontal scaling). In some cases, a second version of the application is set up to offer a select group of customers with access to pre-release versions of the applications (e.g., a beta version) for testing purposes.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-2", "d_text": "Point: When choosing a vendor for your SaaS application, make sure that high availability is guaranteed to you in the vendor’s service level agreement (SLA).\nScale as needed\nInstead of buying software, you simply subscribe to a SaaS offer. If you need more licenses for a limited period, you can flexibly add them online and vice versa cancel licenses that are no longer needed. So you only pay for the number of licenses you actually use.\nSet up new workstations quickly and easily\nSaaS frees you from having to install programs on local computers. A user only needs an in-app license, access data and an internet-enabled device.\nSaaS makes it easier than ever for your team to work together. The data is stored centrally and is nevertheless available to every user in real time, anywhere and anytime. 
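The multi-tenant model described above (a single application instance with centrally stored data serving all customers) boils down to tagging every record with the tenant that owns it and scoping every query by that tag. A minimal sketch in Python; the class and field names are illustrative assumptions, not any vendor's API:

```python
# Minimal multi-tenant sketch: one shared store, every row tagged with
# its owning tenant, every query scoped to a single tenant.
# All names here are illustrative assumptions.

class MultiTenantStore:
    def __init__(self):
        self._rows = []  # one central store shared by all tenants

    def insert(self, tenant_id, record):
        # Tag each record with the tenant that owns it.
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Tenant isolation: a tenant only ever sees its own rows.
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

store = MultiTenantStore()
store.insert("acme", {"user": "alice"})
store.insert("globex", {"user": "bob"})
acme_rows = store.query("acme")  # only acme's data comes back
```

Horizontal scaling, in these terms, simply means running more copies of the same application in front of the shared store.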
This is a decisive advantage, especially in times of telecommuting and remote work.\nAlways up to date\nSoftware updates usually take a long time. With the SaaS model, the vendor ensures that updates, new features, and bug fixes are released for all users. This is a decisive advantage over on-site solutions, where updates need to be installed by users or IT, so the software of different employees may end up at different levels.\nIs SaaS the same as Cloud?\nSoftware as a service is not the same as the cloud. The SaaS model is more of a subunit of cloud computing. In addition to SaaS, there are other ways to offer IT products “as a service” in the cloud:\n- Infrastructure as a service (IaaS): The provider hosts hardware, software, storage space and components for the IT infrastructure.\n- Platform as a Service (PaaS): Users can develop, operate and manage applications themselves. Because the infrastructure is in the cloud, you don’t need to configure or manage the servers yourself.\n- Managed Software as a Service (MSaaS): MSaaS goes even further than SaaS. Here, SaaS applications are monitored by IT experts, for example with regard to IT security, backup management, etc.\n- Database as a service (DBaaS): Here users can access databases without downloading or hosting them themselves.\n- Security as a Service: Security services are offered as a service via the cloud.\nSoftware-as-a-service providers rely on different pricing models, some of which can also be combined with each other.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "Software as a service\nSoftware as a service (SaaS) is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted. It is sometimes referred to as \"on-demand software\". SaaS is typically accessed by users using a thin client via a web browser.
SaaS has become a common delivery model for many business applications, including office and messaging software, payroll processing software, DBMS software, management software, CAD software, development software, gamification, virtualization, accounting, collaboration, customer relationship management (CRM), management information systems (MIS), enterprise resource planning (ERP), invoicing, human resource management (HRM), content management (CM) and service desk management. SaaS has been incorporated into the strategy of all leading enterprise software companies. One of the biggest selling points for these companies is the potential to reduce IT support costs by outsourcing hardware and software maintenance and support to the SaaS provider.\nAccording to a Gartner Group estimate, SaaS sales in 2010 reached $10 billion, and were projected to increase to $12.1bn in 2011, up 20.7% from 2010. Gartner Group estimates that SaaS revenue will be more than double its 2010 numbers by 2015 and reach a projected $21.3bn. Customer relationship management (CRM) continues to be the largest market for SaaS. 
SaaS revenue within the CRM market was forecast to reach $3.8bn in 2011, up from $3.2bn in 2010.\nThe term \"software as a service\" (SaaS) is considered to be part of the nomenclature of cloud computing, along with infrastructure as a service (IaaS), platform as a service (PaaS), desktop as a service (DaaS), backend as a service (BaaS), and information technology management as a service (ITMaaS).\nCentralized hosting of business applications dates back to the 1960s.
Your cost stays the same regardless of the number of licenses, easy to budget!\n- Just pay first month’s service to start.\nStandard Reseller Pricing\nOur tiered pricing structure means you get the best pricing for your volume bracket. To become a standard reseller and get discounted pricing you just need to contact sales once you have at least 4 active licenses.\nOnce a standard reseller, you can add and remove licenses from your account at any time. Once a month we take a look at how many active monthly licenses you have and bill you for those. Owned licenses are billed when ordered.\n|License Count||Monthly Branded||Owned Branded||Discount|\n* Plan to offer Blesta exclusively? Email for further discounts on our standard reseller plan.\nIf you have an owned license, you can start with your first monthly license (does not require 5 licenses). Sign up in the client area under the “Reseller” section and get started right away!\nVAR Reseller Pricing\nValue Added Resellers get unlimited licenses for a fixed monthly fee. Give us a call at 714-398-8132, or email\nso we can discuss this program and pricing.\nReseller API Docs\nVersion 2.0 of our Reseller API documentation can be viewed here.", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-2", "d_text": "The three main cloud computing models are Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).\nIaaS is a cloud computing model that provides users with access to virtualized computing resources over the internet. These resources include servers, storage, and networking. IaaS is a popular choice for businesses that want to reduce their IT infrastructure costs, as it allows them to rent computing resources instead of purchasing and maintaining their own hardware.\nPaaS is a cloud computing model that provides users with a platform for building and deploying applications.
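The pricing table above has not survived in this copy, but the volume-bracket idea it describes can still be sketched. The bracket boundaries and per-license prices below are placeholders, not Blesta's actual rates:

```python
# Hypothetical volume brackets: (minimum active licenses, price per license).
# The whole license count is billed at the single bracket it falls into,
# so the more licenses a reseller has, the lower the per-license cost.
TIERS = [(50, 6.0), (20, 8.0), (4, 10.0)]  # checked largest bracket first
LIST_PRICE = 12.0                          # below reseller volume

def reseller_monthly_bill(active_licenses: int) -> float:
    for minimum, price_per_license in TIERS:
        if active_licenses >= minimum:
            return active_licenses * price_per_license
    return active_licenses * LIST_PRICE
```

Under these made-up tiers, 25 active licenses bill at 8.0 each, while 60 licenses drop to 6.0 each, mirroring the "once a month we look at how many active licenses you have" billing described above.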
PaaS provides users with a pre-configured environment that includes operating systems, databases, and development tools. This allows developers to focus on building their applications instead of managing the underlying infrastructure.\nSaaS is a cloud computing model that provides users with access to software applications over the internet. SaaS applications are typically accessed through a web browser, and users pay a subscription fee for access to the software. SaaS is a popular choice for businesses that want to reduce their software licensing costs, as it allows them to rent software applications instead of purchasing and installing them on their own hardware.\nServerless computing is a cloud computing model that allows developers to build and run applications without having to manage the underlying infrastructure. In a serverless computing model, the cloud provider manages the infrastructure, and developers only pay for the computing resources that their applications use.\nFaaS is a cloud computing model that allows developers to build and run applications as a series of small, independent functions. In a FaaS model, the cloud provider manages the infrastructure, and developers only pay for the computing resources that their functions use. FaaS is a popular choice for building event-driven applications, such as chatbots and IoT applications.\nIn summary, cloud computing models provide users with access to computing resources, development platforms, and software applications over the internet. The three main cloud computing models are IaaS, PaaS, and SaaS, while serverless computing and FaaS are newer models that allow developers to build and run applications without managing the underlying infrastructure.\nCloud Service Providers\nCloud service providers are companies that offer computing resources such as virtual machines, storage, and applications over the internet. 
These resources are usually provided on a pay-per-use basis, which means that customers only pay for the resources they consume. Some of the most popular cloud service providers include AWS, Microsoft Azure, and Google Cloud.", "score": 22.27027961050575, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "Cloud Strategies Services for SaaS Vendors\nThe challenges of SaaS startups and companies transforming to SaaS are different.\nCloud Strategies can help.\nEach client’s requirements are unique, depending on the objectives of the client, the type of SaaS company profile, the stage of the company’s adoption of a SaaS model, and the resources required. Cloud Strategies can provide an independent assessment of a software company’s SaaS strategy for investors and boards, or work directly with the company to maximize its opportunity for success. Cloud Strategies can work at a high level providing an assessment and recommendations in an advisory role, or become an integral part of the company team to make their SaaS Business Work.\nSaaS Company Profiles\nThe SaaS delivery and business models encompass a very wide variety of types of companies. The SaaS strategy will vary greatly, dependent on five primary characteristics of the SaaS company.\nGreenfield or Established Businesses\nStart-ups have the advantage of a clean slate, but don’t have the resources, image recognition, or channels of established players. Existing software vendors have many advantages, but the transition to SaaS can create competition with their existing products, additional working capital requirements, and issues with their existing channel.\nLow or High Price Offering\nMany basic SaaS products like Dropbox have a base edition that is free and entice their customers to move up to a paid version costing $5 to $10 per user per month. 
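The pay-per-use basis described above can be sketched as simple metered billing; the resource names and unit rates are invented for illustration and do not correspond to any provider's real price list:

```python
# Metered, pay-per-use billing: each resource is charged only for the
# amount actually consumed. Resource names and rates are hypothetical.
RATES = {
    "vm_hours": 0.05,    # per VM-hour
    "gb_storage": 0.02,  # per GB-month of storage
    "gb_egress": 0.09,   # per GB of outbound traffic
}

def metered_bill(usage: dict) -> float:
    """Sum of consumed amount times unit rate over all metered resources."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())
```

A customer who consumes nothing in a period pays nothing, which is the key contrast with an up-front perpetual license.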
SaaS products vary in price from a few dollars per month to hundreds: products like QuickBooks Online run around $30 to $60 per month, while enterprise products such as NetSuite or Salesforce generally cost around $50 per seat per month, with wide variation up or down depending on the features. Some high-value, specialized SaaS products may cost as much as $600 per month. Service costs add significantly to the total cost of implementation in large, complex systems. The strategies and tactics for these SaaS offerings vary widely based on their price.\nDegree of Services Required\nSimple SaaS products have essentially no setup, or may have a minimal end-user configuration such as setting up file locations for Dropbox. SaaS products have “Customer Self-Service” as a key attribute, allowing the company and the user to perform more of the setup operations for moderately complex products such as QuickBooks Online. More sophisticated products like NetSuite have configuration requirements that are beyond the scope of the end users and will generally be handled by the SaaS company, their VARs or other Implementation Partners.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-1", "d_text": "Hybrid cloud combines the best of both worlds: it offers the greater control of a private cloud together with the flexibility and cost-effectiveness that a public cloud generally provides.\n- Organizations can maintain a private infrastructure for sensitive assets.\n- Organizations can take advantage of additional resources in the public cloud when they need them.\n- It offers the ability to scale to the public cloud; you pay for extra computing power only when needed.\nSaaS is a flexible cloud model that gives you access to software and applications like mail, ERP, collaboration and office productivity without purchasing a licensed copy outright.
Instead, you engage with a platform’s tools via subscription, often through a web browser or custom desktop portal.\nA SaaS provider allows you to simply log in. This is typically available for a monthly, usage-based cost. There is no purchasing of big software licenses that limit you to a single version.\nWith SaaS programs, your organization’s IT department doesn’t have to manage patches and updates. These are performed regularly by the provider, so you always have the latest version with full protection and functionality.\nPaaS is the best fit for companies that need development tools in a virtualized environment for more efficiency and productivity.\nA PaaS service lets you scale resources easily as the need arises, using only what you need, when you need it. Through this model, you can develop, deploy and manage your programs and apps without overloading your own server space.\nIaaS is the foundation of cloud migration for your business. Through this model, your cloud provider, such as Amazon Web Services, Microsoft Azure and Google Cloud Platform, delivers and manages your storage, your computing power, your networking and your hosting.\nIn the IaaS model, you maintain responsibility and control of your software, data and applications.\nWith IaaS, you get incredible availability, reliability and security for specific projects or variable workloads.\nEach of the cloud offering models has its merits and fits different needs. The table below summarizes how they differ:\nMany think that Azure offers all three models of cloud computing, i.e. SaaS, PaaS, and IaaS. However, that is not true: Azure only offers PaaS and IaaS.\nA definition from the Microsoft site for Azure:\n\"Azure is a comprehensive set of cloud services that developers and IT professionals use to build, deploy, and manage applications through our global network of datacenters.
Integrated tools, DevOps, and a marketplace support you in efficiently building anything from simple mobile apps to internet-scale solutions\".", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-0", "d_text": "Software As a Service (SaaS) – A New Product Delivery System\n18 Mar. 2011 Software Development\nSoftware as a Service (SaaS), also known as software on demand, is a software service that is accessed over the internet or is deployed to work behind a firewall on the local area network. With Software as a Service, a company licenses an application to clients as a service on demand, using a subscription pay model that is either pay-as-you-go or at no charge if there is an opportunity to earn revenue from streams other than the user, such as through adverts. This application delivery model is part of the cloud computing business model where technology is in the cloud and accessed using the Internet.\nSoftware as a Service is increasingly becoming the prevalent delivery model for services as the underlying technologies that support service-oriented architecture and web services mature. The popularity of new development approaches like Ajax has also tremendously aided its growth. The ease of access to mobile broadband service has also supported user access to cloud computing services from anywhere in the world.\nThe conventional model of software distribution, where software is purchased and installed on personal computers, is sometimes referred to as software as a product.
The software as a service model has become a very common business application model, which includes Accounting, Customer Relationship Management (CRM), Collaboration, Logistics & Supply Chain, Enterprise Resource Planning (ERP), Human Resource Management (HRM), and Content Management System (Enterprise CMS).\nSoftware as a Service has a lot of obvious benefits for the modern-day business, which are as follows:\n- It is accessible from anywhere in the world with an internet service\n- There is no local server installation\n- It reduces time to market\n- It increases reliability of service\n- It encourages pay-per-use or subscription-based payment systems\n- It has easier administration\n- Easy automatic updates and patch management\n- Encourages compatibility: all users will have the same version of the software.\nSoftware as a Service is great for a startup company that aims to test its business before going full scale or that wishes to start operating modestly. Besides, it is highly scalable, which means that it has the capacity to expand as the business grows. Since all aspects of the software as a service are left to the SaaS provider, a startup business owner will be able to focus more on growing the business instead of worrying about IT infrastructure.\nCompanies have discovered that SaaS services are a more cost-efficient way of using business software.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-2", "d_text": "Types of cloud ERP\nPublishers like to say that their ERP is ‘cloud.’ In some deployment models, the publisher moves the ERP’s underlying infrastructure to the cloud without modifying the software package code on site.
Publishers nonetheless praise such ‘lift and shift’ services because they are well suited to clients who are reluctant to adopt ERP products in SaaS mode but still want to take advantage of the scalability and management peace of mind that a cloud infrastructure brings.\nThe multitenant on-demand offering is the purest form of cloud (SaaS).\nThis is followed by the ‘monotenant on-demand’ model, where each customer has their own instance of the ERP software that runs on the vendor’s platform. In this ‘single-tenant SaaS’ model, the customer continues to have the scalable computing power and the flexibility of subscription pricing, but their data and ERP system are kept separate from those of other customers. Some companies choose this option for security and confidentiality reasons or to meet legal compliance requirements in the countries in which they operate.\nThe two types of SaaS ERP (multitenant and single-tenant) are often hosted in a public cloud. The infrastructure and services depend on a provider other than the ERP publisher.\nThe third significant type of cloud ERP is the ‘private cloud,’ and monotenant SaaS often falls into this category. In private cloud ERP, the software and hardware are those of a single customer. In some cases, the ERP client even exercises some control over the data center and does not delegate all responsibilities to the host. A private cloud ERP can even run on-site within the walls of the client company. Rather than cloud, we could therefore speak of ERP in ‘hosted and managed mode.’\nIt is possible to combine private cloud and public cloud for an ERP, depending on the modules and layers. The same is true in a public-cloud-only deployment, where multitenancy may be relevant for some layers and not for others.
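The multitenant versus single-tenant ('monotenant') distinction above ultimately comes down to where each customer's requests and data land. A minimal routing sketch, with placeholder connection strings and tenant names:

```python
# Route an ERP tenant to a database: multitenant customers share one
# pooled database (isolated by tenant id), while single-tenant customers
# get a dedicated instance. All names here are hypothetical.
SHARED_DB = "postgres://pool.example/erp_shared"
DEDICATED_DBS = {
    "acme": "postgres://acme.example/erp",  # paid for single-tenant isolation
}

def database_for(tenant_id: str) -> str:
    """Dedicated instance if the tenant has one, otherwise the shared pool."""
    return DEDICATED_DBS.get(tenant_id, SHARED_DB)
```

The trade-off in the text falls out directly: every tenant on the shared pool amortizes the same hardware, while each dedicated entry buys isolation at the cost of pooling.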
A customer can take advantage of the benefits of pooling at the database, operating system and hardware level while running their ERP application as a single tenant.\nCost-Efficient Cloud Storage\nIf you require a cloud solution that provides you cost-efficient storage, then an OpenStack-based cloud is for you. VEXXHOST’s cloud offering is well integrated with multiple OpenStack projects that help you utilize your cloud resources to the maximum. Our pay-as-you-go platform allows you to get started without any upfront cost, setup fees, or minimum costs.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "Originally published on March 10, 2007\nI have received an amazing amount of feedback on the Enterprise SaaS series; just about everyone who read it has contacted me with mostly positive comments, but usually someone has at least one concern. I am going to try to address all of their concerns at one time.\nHybrid SaaS != Hybrid Licensing Model\nA couple of people have mentioned that trying to support a hybrid licensing (or business) model would be difficult at best. I couldn’t agree more. I never intended to support or even condone multiple licensing models.\nI re-read what I wrote, and while it makes sense to me, you really have to follow it closely; otherwise you could think I am supporting multiple licensing models. In fact, my “Hybrid SaaS” model is a hybrid technology model that is supported by only one licensing model, utility pricing. What I condone is pretty much technology agnostic and relies on the system being able to “call home” to handle the pricing. I think that is the future of the software industry more so than any actual technology.\nThat said, I think the SaaS delivery model is the future of software. I am definitely pro-SaaS, including Pureplay SaaS.
This series was just to let people know that while we as technologists, entrepreneurs, and evangelists have adopted SaaS as the de facto standard for software delivery, Corporate America, as I have seen it, is not onboard with that. Are they changing? Yes. Will it get better? Absolutely. SaaS is just not ubiquitous yet and you must be prepared for that as a new SaaS start-up. In one or two years, these objections may go away completely; you still need to know your customer though.\nA problem with many SaaS vendors is they come in and say “we’re a SaaS vendor” like the customer cares. Do on-premises software companies come in and say “our software is written in Java” or “we distribute our software on CDs”? No. And they shouldn’t because no one cares. The caveat with SaaS is that it is actually more difficult (hence the creation of my articles), because you actually need to make sure they can support that type of delivery mechanism.\nBut what about selling the benefits of the SaaS model? Yes, you need to do this.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "By Clayton Browne\nSoftware as a Service (SaaS), a relationship that provides on-demand access to a cloud based network with shared configurable computing resources, has enjoyed phenomenal growth over the last few years. But how successful have IT organizations been with structuring SaaS contracts to address SLAs and the variety of things that can go wrong? According to Gartner, global SaaS spending is anticipated to grow by 17.9 percent in 2012, totaling over $14.5 billion and SaaS is projected to grow into a $22 billion market by 2015.\nSaaS providers generally price their services on the “utility” model where they charge clients a set fee based on volume of services used while the customer is responsible for security, data protection, compliance with laws, and contractually limited liability. 
SaaS contracts are very different from traditional outsourcing contracts, where the provider typically offers service availability and quality guarantees, and both parties typically share significant liability.\nSaaS providers today offer a broad spectrum of cloud-based services ranging from providing access to apps from remote networks to PaaS (Platform-as-a-Service) and IaaS (Infrastructure-as-a-Service).\nMajor players in the SaaS industry like Amazon (with EC2), IBM, and Microsoft offer IaaS for customers that don’t want to manage proprietary IT infrastructure, but want to maintain control of their software environment. IaaS essentially provides a virtual machine network where businesses can run whatever software they want, with the provider simply maintaining the virtual network.\nPaaS is an IT outsourcing service somewhere between SaaS and IaaS, where providers offer businesses a set of tools to develop apps; those business-specific proprietary apps are then run in the PaaS environment (e.g., Windows Azure).\nThe one-size-fits-all utility pricing model of public cloud SaaS providers fits the needs of many small and medium-sized businesses with pay-as-you-go pricing. However, the inflexibility and the lack of service availability guarantees make utility pricing models difficult for larger businesses to accept, especially when dealing with mission-critical functions. Sharing risk has practically become a mantra in big business today, and SaaS providers are discovering that they must share risk to convince larger businesses to outsource their mission-critical functions.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "There were originally three types of cloud computing.
However, new demands and opportunities have led to new types gaining ground, as follows:\nIn the public model (Public Cloud), the resources used by users are shared.\nIn it, the service provider makes resources available, such as virtual machines (VMs), applications or storage, to the general public over the Internet. Public cloud services can be free or offered on a “pay-per-use” model.\nThe public cloud is characterized by being a virtualized environment, that is, one built from virtual machines running on a set of servers (a cluster). Despite the sharing of resources by users, each tenant in the public cloud has their data isolated from the others, and both their privileges and access to them are individualized by user and password.\nAnother characteristic of the public cloud is the reliance on high bandwidth to transmit data quickly to a large number of users. As for storage, it is usually redundant, with data replication in different locations, which results in high resilience and security.\nBasically, the implementation of systems in the cloud is carried out by the cloud provider in conjunction with the IT team of the contracting company and the software vendor.\nThe cloud provider performs environment provisioning, database installation and configuration, access security, communication tools, among other activities. Software houses are responsible for accessing the environments created by the cloud provider and installing their applications in that environment.\nAll teams (IT staff, software house and cloud provider) must follow the approval process and the start of production to make sure the migration is successful.\nCloud computing has become the next main driver of business innovation, with a focus on enhancing new business models and services across various industries, especially telecommunications, healthcare, and government.
For some service providers, cloud service models can unlock access and create future opportunities for new consumer segments, such as small business and emerging markets. This is primarily because they provide opportunities for the acquisition and management of cloud computing tools and software systems.\nCloud technologies allow businesses to conduct their vital functions in a better environment that offers efficient support for starting or growing a company without significant investments. There is ample scope for optimization of business processes and adaptation to changing market conditions by introducing various cloud service models.\nIn this article, we will explain the difference between cloud service models such as SaaS, PaaS, IaaS and the like.\nThe fundamental concept of the two is adaptability. It refers to the system environment’s ability to use as many resources as required.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-2", "d_text": "The more responsibility you shift to the cloud service provider, the less control you have over security, business requirements, etc., but the faster you can get to market. On the flip side, the more you control, the more work you have to do and the longer it takes to get to market.\nWith SaaS, the consumer has very little control over the application other than who has access. The consumer can alter various configurations but often has no say in SLAs, maintenance windows, underlying architecture, etc.\nThe advantage is that the consumer can quickly be up and running using the SaaS solution and does not have to manage and maintain the application, freeing up precious IT resources to work on other priorities.\nAnother advantage is that the SaaS provider keeps up with changes in technology so that the consumer does not have to.
For example, as more devices and tablets hit the market, the service provider makes the necessary changes to ensure the SaaS solution can support these devices.\nWith PaaS, the consumer does not have to manage hardware, operating systems, database systems, programming stacks, etc. Instead, they focus on building software on top of these robust platforms.\nThe downside is that the developers must work within the constraints of the platform, which may not be optimal for high-performing architectures. Another disadvantage is that the consumer is highly reliant on the SLAs of the PaaS providers. Some of these PaaS providers, like Heroku, run on top of Amazon Web Services (AWS), an IaaS provider. When AWS has issues, the developers are at the mercy of PaaS providers like Heroku to stay highly available.\nWhen the PaaS service goes down, the developers are mostly helpless and must wait until the PaaS provider restores services.\nWith IaaS, consumers have the most control of the three cloud service models. The advantage of IaaS is that the infrastructure is abstracted and made available as a collection of APIs.\nThe IaaS providers provide seemingly infinite cloud resources, available in minutes on demand without the long procurement cycles of the past. Applications can be built to scale on demand as workloads increase and to decrease consumption of compute resources as workloads decrease, thus optimizing the infrastructure spend. No longer do companies need to buy two or three times the capacity to sit idle in the case of a peak workload.\nThe downside is that the consumer is constrained to a subset of virtual cloud servers.
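The Heroku-on-AWS dependency described above illustrates a general rule: a PaaS can be no more available than the IaaS underneath it. Assuming independent failures, availabilities multiply; the 99.95% figures below are illustrative, not real SLA numbers:

```python
# Composite availability of stacked services (e.g. a PaaS running on an
# IaaS). Assuming the layers fail independently, the end-to-end
# availability is the product of the individual availabilities.
def composite_availability(*availabilities_pct: float) -> float:
    result = 1.0
    for availability in availabilities_pct:
        result *= availability / 100.0
    return result * 100.0
```

Two layers at 99.95% each yield roughly 99.90% end to end, so the stacked service is less available than either layer alone.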
Some applications have very specific hardware requirements which may not be available from the cloud service provider.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "SaaS is also known as “on-demand software”, and typical SaaS applications are web applications that are available over the Internet, with functions such as invoicing, customer relationship management (CRM) and service desk management.\nBecause these software applications are accessed via a web browser, the users never need to think about set-up, updates or maintenance. The cloud provider manages the application and the users pay a subscription fee to gain access.\nPlatform as a Service (PaaS) is a service model that allows its users to run and manage applications without having to build and maintain the infrastructure associated with that process. PaaS is suitable for developers and programmers as it enables high-level programming with reduced complexity. The users manage the applications and services, and the cloud provider manages everything else, typically via a pay-as-you-go model.\nApplication Platform as a Service (aPaaS) is sometimes used interchangeably with PaaS, but it can be seen as a subcategory that includes only the services required for application development.\nInfrastructure as a Service (IaaS) is a form of cloud computing in which the cloud provider manages the infrastructure (meaning servers, compute, network and storage resources) for users. The infrastructure is delivered over the Internet and accessed through an API or dashboard. While the cloud provider manages the infrastructure, the users are responsible for purchasing and managing their own operating systems, middleware and applications. IaaS enables users to scale resources up and down with demand via a pay-as-you-go model.
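The scale-up-and-down-with-demand behavior described above can be sketched as a threshold-based autoscaler. The CPU thresholds and server bounds below are invented for illustration:

```python
# Threshold autoscaler sketch: add a server when average load is high,
# remove one when load is low, always staying within fixed bounds.
# Thresholds and bounds are hypothetical.
MIN_SERVERS, MAX_SERVERS = 2, 20

def desired_servers(current: int, avg_cpu_pct: float) -> int:
    if avg_cpu_pct > 75:       # workload increasing: scale out
        current += 1
    elif avg_cpu_pct < 25:     # workload decreasing: scale in, cut spend
        current -= 1
    return max(MIN_SERVERS, min(MAX_SERVERS, current))
```

Run once per monitoring interval, this keeps capacity tracking the workload instead of buying two or three times peak capacity up front.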
This allows users to avoid the high expenditures associated with buying and managing infrastructure, along with the burden of owning unnecessary infrastructure during fluctuating workloads.\nThere are 4 main types of cloud computing: private cloud, public cloud, hybrid cloud, and multi-cloud. To get the best value out of a cloud solution, it is necessary to evaluate the effectiveness of current solutions, identify the business objectives these solutions are failing to meet and evaluate those results to define a cloud solution that best suits your organization and its goals.\nCloud computing dedicated to use by only one business or organization has a secure private network and is not accessible to outsiders. Thus, private clouds are often used by financial institutions, government and any other mid- to large-sized organization with business-critical operations seeking enhanced control over their environment.\nIT services are maintained and hosted by an external provider, delivered digitally and shared across organizations as part of a public cloud. Some key examples include Microsoft Azure, Amazon Web Services (AWS) and Google Cloud.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-5", "d_text": "- Multi-tenant architectures, which drive cost efficiency for SaaS solution providers, limit customization of applications for large clients, inhibiting such applications from being used in scenarios (applicable mostly to large enterprises) for which such customization is necessary.\n- Some business applications require access to or integration with customer's current data. When such data are large in volume or sensitive (e.g., end users' personal information), integrating them with remotely hosted software can be costly or risky, or can conflict with data governance regulations.\n- Constitutional search/seizure warrant laws do not protect all forms of SaaS dynamically stored data.
The end result is that a link is added to the chain of security where access to the data, and, by extension, misuse of these data, are limited only by the assumed honesty of 3rd parties or government agencies able to access the data on their own recognizance.\n- Switching SaaS vendors may involve the slow and difficult task of transferring very large data files over the Internet.\n- Organizations that adopt SaaS may find they are forced into adopting new versions, which might result in unforeseen training costs or an increase in probability that a user might make an error.\n- Relying on an Internet connection means that data are transferred to and from a SaaS firm at Internet speeds, rather than the potentially higher speeds of a firm’s internal network.\nThe standard model also has limitations:\n- Compatibility with hardware, other software, and operating systems.\n- Licensing and compliance problems (unauthorized copies of the software program putting the organization at risk of fines or litigation).\n- Maintenance, support, and patch revision processes.\n- Can the SaaS hosting company guarantee the uptime level agreed in the SLA (Service Level Agreement)?\nAs a result of widespread fragmentation in the SaaS provider space, there is an emerging trend towards the development of SaaS Integration Platforms (SIP). These SIPs allow subscribers to access multiple SaaS applications through a common platform. They also offer new application developers an opportunity to quickly develop and deploy new applications. This trend is being referred to as the \"third wave\" in software adoption - where SaaS moves beyond standalone applications to become a comprehensive platform. Zoho and Sutisoft are two companies that offer comprehensive SIPs today. 
Several other industry players, including Salesforce, Microsoft, and Oracle, are aggressively developing similar integration platforms.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-1", "d_text": "SaaS is very common for non-core competency type applications like customer relationship management (CRM), human resources applications, and financial and accounting applications.\nMany companies are now moving away from the legacy model of shipping software to clients or delivering software internally over the internal network to a SaaS model where the software is available 24 by 7 over the internet.\nIn this model, software is updated in one place and immediately available to end users as opposed to the old ship-and-upgrade method of the past.\nTRP: What should users consider when determining which is the right cloud service model for their business?\nMK: The proper question is what cloud service model is right for the application. Each enterprise should expect to deploy applications and services using all three cloud service models.\nUse a hammer to pound nails and a screwdriver to turn screws. There are many factors that determine which cloud service model to use.\nThe first is a build versus buy decision. Should we write the code ourselves or pay for a SaaS solution that provides the functionality on demand? If the service is not a core competency, SaaS is usually a very good alternative to building as long as the service is affordable, mature, and meets the business requirements.\nThe PaaS vs. IaaS decision typically is determined by the performance and scalability requirements of the application. PaaS solutions have limitations on their ability to achieve very high scale because these platforms must provide auto scaling and failover capabilities for all tenants of the platform.\nWith IaaS, it is up to the consumer to architect for scale and failover.
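The PaaS-versus-IaaS scaling tradeoff just described can be sketched in a few lines: a PaaS-style platform enforces a per-tenant quota on compute, while an IaaS-style request is bounded only by what you architect and pay for. The quota value here is invented for illustration:

```python
# Illustrative sketch of the per-tenant limits discussed above:
# PaaS platforms cap what one tenant can request (to protect auto
# scaling and failover for all tenants); IaaS leaves scale to you.
# The cap below is a made-up number, not any real platform's quota.

PAAS_MAX_INSTANCES_PER_TENANT = 100  # hypothetical platform quota

def provision(model: str, requested_instances: int) -> int:
    """Grant the request, unless a PaaS quota would be exceeded."""
    if model == "paas" and requested_instances > PAAS_MAX_INSTANCES_PER_TENANT:
        raise ValueError("exceeds platform quota; consider IaaS and "
                         "architect scaling and failover yourself")
    return requested_instances

print(provision("paas", 50))   # 50
print(provision("iaas", 500))  # 500
```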
PaaS solutions impose per-client upper bounds on how much compute a consumer can request, making PaaS less desirable for very high-scale, high-performance solutions.\nThe beauty of PaaS is that it abstracts away the infrastructure and application stack so that developers only need to focus on building business functionality. PaaS promises increased speed to market but is the least mature of the three cloud service models. Some companies do not trust PaaS yet and will simply default to IaaS.\nIaaS should be used when high scale and performance requirements are important. It is also a desirable option for companies who want more control over the application stack whether it be for performance, security, or control reasons.\nTRP: What are the pros and cons of each service model?\nMK: With cloud service models, there is a tradeoff between control and agility.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "The cloud has had a transformational impact on businesses of all sizes – from small and midsized businesses (SMBs) to large enterprises – and it’s showing no signs of slowing down.\nAccording to analyst house Gartner, the use of cloud computing is still growing and will become the bulk of new IT spend by 2016, a year that the company predicts will see hybrid cloud overtake private cloud, with nearly half of large enterprises having deployments by the end of 2017.\nDespite its high uptake, the most suitable route into the cloud is not always so clear cut for many organisations moving on from the tried and tested client-server model.\nTo shed light on the advantages and disadvantages of cloud computing’s three main service delivery models – software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) – we spoke to Mike Kavis, VP and Principal Architect for Cloud Technology Partners and author of ‘Architecting the cloud’.\nTechRadar Pro: Can you summarise the different
cloud service delivery models available?\nMK: There are three cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).\nWith each cloud service model, certain responsibilities are shifted to the cloud service provider allowing consumers of cloud services to focus more on their own business requirements and less on the underlying technologies.\nIaaS abstracts the underlying infrastructure and data center capabilities so that consumers no longer have to rack and stack hardware, power and cool data centers, and procure hardware. Computer resources can be provisioned on demand as a utility, much like how we consume water and electricity today.\nPaaS takes us one level higher in the stack and abstracts the operating system, database, application server, and programming language.\nConsumers using PaaS can focus on building software on top of the platform and no longer have to worry about installing, managing, and patching LAMP stacks or Windows operating systems. PaaS also takes care of scaling, failover, and many other technical design considerations so that developers can focus on business applications and less on the underlying IT \"plumbing\".\nSaaS is the ultimate level of abstraction. With SaaS, the entire application or service is delivered over the web through a browser and/or via an API. In this service model, the consumer only needs to focus on administering users to the system.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-2", "d_text": "The premise of PaaS though is not only to offer a maintenance-free application stack but also additional services that you can utilize in your application. Very often, PaaS providers expose middleware and databases as services and abstract the connectivity to those through APIs in order to free developers from the need to locate the actual systems.
Additional services can be authentication and authorization, video encoding, location-based services, etc. Using the PaaS services will allow you to abstract your applications from the underlying stack and, as long as the APIs are kept intact, they will be protected from failures between platform updates.\nSoftware-as-a-Service (SaaS) Model\nSaaS is the model with the highest abstraction and offers the most maintenance-free option. As a SaaS consumer you are just using the software offered by the vendor. As depicted on the picture, the whole stack is maintained by the vendor. This includes updates for the application as well as application data management. The SaaS model is very similar to the off-the-shelf software model where you go and buy the CD, install the software and start using it.\nTraditionally one of the hardest problems application developers had to deal with was the data migration between different versions. SaaS vendors are also responsible for migrating your data and keeping it consistent. Similar to the off-the-shelf software model, you can rely on being able to access and read your data once you upgrade to a new version.\nThe SaaS model is the most resource-efficient model because it utilizes application multi-tenancy. What this means is that the same application instance handles multiple user-organizations. This is good for both the vendor and the customer because better resource utilization brings the maintenance costs down and hence the price for the services down. On the other side though, tenant data is commingled and there is the security risk of one tenant accidentally getting access to another tenant's data.\nAlthough not exhaustive, the explanation of cloud computing service models above should be enough to kick-start your initial discussion about your cloud strategy.\nWhen an enterprise builds a hybrid IaaS cloud connecting its data center to one or more public clouds, security is often a major topic along with the other challenges involved.
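The multi-tenancy risk noted above — one application instance serving many tenant organizations over commingled data — comes down to scoping every data access by tenant. A minimal sketch, with an in-memory list standing in for the shared database:

```python
# Minimal sketch of the multi-tenancy point made earlier: one shared
# application instance serves many tenants, so every query must be
# filtered by tenant id. The in-memory "table" stands in for a real
# shared database; record fields are invented for illustration.

records = [
    {"tenant": "acme", "invoice": 1},
    {"tenant": "acme", "invoice": 2},
    {"tenant": "globex", "invoice": 7},
]

def invoices_for(tenant_id: str) -> list:
    """Scope the query to the caller's tenant. Forgetting this filter
    is exactly the cross-tenant data leak the text warns about."""
    return [r for r in records if r["tenant"] == tenant_id]

print(invoices_for("acme"))  # only acme's two invoices, never globex's
```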
Security is closely intertwined with the networking choices made for the hybrid cloud. Traditional networking approaches for building a hybrid cloud try to kludge together the enterprise infrastructure with the public cloud. Consequently this approach requires risky, deep \"surgery\" including changes to firewalls, subnets...", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "A software as a service (SaaS) company is a software distribution company in which a third-party provider renders hosting services for applications, availing them for customers over the Internet. A SaaS company delivers applications to its customers over the Internet in the form of a service. The user does not have to install or maintain software, but rather has to access it via the internet.\nA SaaS company frees the user from the complications of managing intricate software and maintaining hardware. Sometimes, SaaS applications are also known as web-based, on-demand or hosted software which runs on the SaaS provider’s servers. The service provider manages the security, availability, and performance of the application.\nThrough SaaS, data can be accessed from any remote device which has access to an internet connection and a web browser. This method is advantageous as the company is not required to buy comprehensive hardware to host the software and allows the buyer to externalize IT issues like troubleshooting and maintenance of the software.\nA SaaS company eliminates the need for clients to install and run applications on their computers or data centres. This eliminates the expense of hardware acquisition, provisioning, and maintenance, as well as software licensing, installation and support.
It also provides a flexible system of payments, as customers subscribe to the applications on either a monthly or pay-as-you-go basis.\nThe ascent of the SaaS model parallels the rise of cloud-based computing, which makes it highly scalable, giving customers the option of accessing as many services as they require. For large organizations, updating software was a time-consuming endeavour. Depending upon the service level agreement (SLA), the customer’s data is stored locally, in the cloud or both.\nTypes of software that have migrated to a SaaS model are often focused on enterprise-level services, such as human resources, customer relationship management, and content management. These types of tasks are often collaborative in nature, requiring employees from various departments to share, edit, and publish material while not necessarily in the same office.\nSaaS has numerous uses including tracking leads, scheduling events, managing transactions, automating sign-up, auditing and more. The main drawbacks involved in adopting the SaaS model are data security and speed of delivery. As the data is stored on external servers, companies have to be sure that it is safe and cannot be accessed by unauthorized parties. Slow Internet connections can reduce performance, especially if the cloud servers are being accessed over long distances. Internal networks tend to be faster than Internet connections.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-3", "d_text": "This is common in applications like VoIP (voice over Internet protocol).\nThe fifth model is service-based. The vendor sells management services. Examples are Red Hat and JBoss.\n“We see companies falling into several of these categories. The debate is the question, is it legit to do this? My response is yes. Open source can accommodate a variety of models. This is great for the industry.
I see no need to pick sides,” Roberts told LinuxInsider.", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "In 2019, the public cloud services market was expected to reach around 214.3 billion U.S. dollars in size and by 2022 market revenue was forecast to exceed 331.2 billion U.S. dollars. | Statista\nPublic cloud services are gaining a lot of traction. Startups, SMEs, and enterprises are recognizing the benefits of cloud and thus, are migrating to the most relevant service provider to improve ROI, efficiency, and time-to-market.\nWhen moving to the cloud, it is important to understand the different types of services that can be availed by an organization. Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS) are three popular models of cloud services. The following segment discusses what they are, their benefits, differences, and how to choose the right one according to business requirements.\nSaaS, also known as “on-demand software”, is a software distribution model wherein a service provider hosts an application at a data center to be accessed by customers via the internet. Such a service frees up the customers from maintaining hardware or other resources to use the software. All that’s needed is a web browser or a client program.\nThe source code of the software is the same for all the customers and any change/update in the software is rolled out to all the subscribers of the software. Organizations also have the option to integrate SaaS solutions with their own applications using APIs. For example, a business can create its own software and integrate the functionality of a SaaS solution through APIs.\nHuman capital management (HCM) software, collaboration software, and customer relationship management (CRM) software are amongst applications where SaaS has a high penetration rate.
| Statista\nExamples: Salesforce, Hubspot, MailChimp, Shopify, Slack\n- Since the SaaS model doesn’t require installing and running applications, it saves organizations from the expense of hardware acquisition, maintenance, and software licensing.\n- SaaS services are typically billed on a pay-as-you-go (PAYG) model, which allows users to pay for the services only for the time they are utilized.\n- A SaaS service can be scaled as required. Users can start with fewer features and functionalities of the software and then extend them as demand grows.\n- With a SaaS solution, the updates are installed automatically.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "If you’re a traditional on-premises software company, you’re in the right place. Today, I will talk about how cloud computing can transform your business. In a subsequent Cisco Blog, I’ll discuss the implication this move will have on your sales and distribution strategy.\nModel One Business Model\nTraditional software companies have operated in a Model One Business Model. In this on-premises model, the customer buys a perpetual license for software and then pays annual support and maintenance fees, which turn out to be another kind of subscription revenue stream. While that might seem to be the end of the cost to the customer, it’s not. The customer is going to have to spend money managing the software – and it’s not cheap.\nMany years ago I found myself talking to venture capitalists about the differences between SaaS, outsourcing, ASPs, MSPs, online applications, etc.
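The API-based SaaS integration mentioned earlier (a business integrating a SaaS solution's functionality into its own software through APIs) can be sketched as a thin client wrapper. The endpoint path and payload shape below are invented, since every vendor documents its own API, and the transport is injected so the sketch runs without a network:

```python
# Hedged sketch of integrating in-house software with a SaaS product
# via its API, as described in the text. The endpoint "/api/v1/contacts"
# and the payload fields are hypothetical; the transport is injected
# so the example is runnable without any real SaaS vendor.

import json

def sync_contact(transport, contact: dict) -> dict:
    """Push a CRM contact to a (hypothetical) SaaS endpoint."""
    status, resp = transport("POST", "/api/v1/contacts", json.dumps(contact))
    if status != 201:
        raise RuntimeError(f"SaaS API error: {status}")
    return json.loads(resp)

def fake_transport(method, path, body):
    # Stand-in for an HTTP client; echoes the record back with an id,
    # the way a create endpoint typically would.
    record = json.loads(body)
    record["id"] = 42
    return 201, json.dumps(record)

print(sync_contact(fake_transport, {"name": "Ada"}))  # {'name': 'Ada', 'id': 42}
```

In production the injected transport would be a real HTTP client with the vendor's authentication; keeping it injectable also makes the integration testable offline.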
Also I noticed that my Stanford students had little understanding of the economics of software, so I developed the idea of seven business models to cover everything in the software business, and remove the buzzwords and replace them with economic models.\nIn my previous blog post we discussed the first four models; this post will cover Models Five through Seven.\nWe ended the last blog talking about Model Four being able to provide management of the security, availability, performance and change of the software at nearly 10x less cost.\nThe question we left with was “how”?\nHow is it possible to decrease the cost of management without just paying people a fraction of what they made previously?\nMany years ago I found myself talking to a venture capitalist about the differences between SaaS, outsourcing, ASPs, MSPs, online applications, etc. Also I noticed that my Stanford students had little understanding of the economics of software, so I developed the idea of seven business models to cover everything in the software business, remove the buzzwords and replace them with economic models.\nIn my previous post, I talked about the Seven Ways to Move to the Cloud. In the second issue (there’s a lot here), I’ll break this into two separate posts, discussing models one through four here, and models five through seven in the next issue publishing on Monday, March 2.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-7", "d_text": "Stages of the product-led SaaS pricing roadmap include the following:\nNo usage-based pricing or packaging\nJust like it sounds, with this type of model you don’t use usage-based pricing, and you don’t have any usage elements incorporated into your packaging strategy.
This model is usually best for products with per-user pricing where everyone in the organization is intended to be a user.\nA good example is Lattice, a provider of employee engagement software. This model is typically a starting point and ending point for companies that are unlikely to deploy a usage-based model.\nUsage-based tiering with traditional pricing\nThis is the predominant model among today’s product-led SaaS leaders. With this model, you have traditional subscription pricing with a flat-fee or per-user pricing model. You define clear tiers of your product (typically three to five), with each tier imposing limits on consumption.\nTiers are defined around a clear pricing metric that ties to your value metric, and a small number of other critical usage metrics. This model incentivizes customers to upgrade when they need more features or users, as well as for usage when they exceed the imposed limits of their selected plan.\nUsage-based tiering with traditional pricing, plus conditional usage-based pricing\nThis model uses the same tenets of the previous model but applies additional charges based on usage-based pricing. Most commonly, these additional charges are for overage.\nEXAMPLE: A business intelligence software company may charge a flat-fee or per-user price for a given plan and impose a plan tier limit of published dashboards.\nWith this model, the company would then charge an overage fee per additional published dashboard. Other variants of this model use the same structure but apply different rules for what triggers the conditional usage-based pricing.\nA common example is feature-based pricing, such as that used by Stripe or others in the payments market. There is a base charge and other charges that are enabled based on situational usage of those features. 
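The "usage-based tiering plus conditional overage" mechanics in the business-intelligence example above — a flat plan fee, a per-tier dashboard limit, and a per-dashboard overage charge — reduce to a few lines of arithmetic. All prices and limits below are invented for illustration, not real vendor pricing:

```python
# Sketch of the overage billing described above: each plan has a flat
# fee, a dashboard limit, and a per-dashboard overage rate. All numbers
# are made up for illustration.

PLANS = {
    "starter": {"fee": 50.0, "dashboard_limit": 10, "overage_per_dashboard": 8.0},
    "pro":     {"fee": 200.0, "dashboard_limit": 50, "overage_per_dashboard": 5.0},
}

def monthly_bill(plan_name: str, dashboards_published: int) -> float:
    """Flat plan fee plus overage for dashboards beyond the tier limit."""
    plan = PLANS[plan_name]
    overage = max(0, dashboards_published - plan["dashboard_limit"])
    return plan["fee"] + overage * plan["overage_per_dashboard"]

print(monthly_bill("starter", 8))   # 50.0 (within the limit)
print(monthly_bill("starter", 13))  # 74.0 (3 over the limit at 8.0 each)
```

Note how the numbers themselves create the upgrade incentive the text describes: a "starter" customer consistently paying overage is soon better off on "pro".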
These are different from add-on products in that they can be dynamically enabled or disabled on a transaction-by-transaction basis.\nPure usage-based pricing\nThis refers to a model in which the company prices solely based on usage.\nHow to choose the right product-led SaaS pricing strategy\nThe first step is to ensure that your organization properly defines what being product-led means for your business. Being product-led is a company ethos that extends well beyond pricing and impacts all facets of your business.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "In this day and age, technology has seamlessly integrated itself in the way we trade. For example, cloud computing has exponentially grown as a business producing billions in revenue in the US alone.\nFrom large-scale enterprises to lean startup teams, several companies have adopted cloud computing services to improve their systems. One of the most popular among the cloud-based services is Software as a service (SaaS). The flexibility and scalability that SaaS provides greatly appeal to new and veteran entrepreneurs alike.\nBut, does SaaS really work for all types of businesses? How will you know if it is right for your company?\nThe answer: learning what the model is about and assessing whether or not it suits your operations.\nIn this entry, we’ll dive deeper into what SaaS is all about and why businesses clamor towards it. Before that, let’s first understand what cloud computing is.\nThe term cloud has come a long way from simply being a celestial cotton-like matter. In the world of technology, a cloud is an analogy for the Internet. Back in the day, people used to draw a cloud in their diagrams and presentations to represent a vague picture of the Internet.\nSo, if you’ve heard the phrase “it’s in the cloud,” that means you’ve stored information over the Internet instead of your local hard drive.
Since its conception, the famous cloud gave birth to various kinds of software services.\nToday, cloud computing comes with a nomenclature of services, namely: Software as a service (SaaS), Infrastructure as a service (IaaS), Platform as a service (PaaS), and Serverless computing.\nSoftware as a Service (SaaS) happens to be one of the most popular terms because of its wide usage. But what is SaaS, exactly?\nDefining Software as a Service\nSaaS is a software delivery model that lets you access data from any device with an Internet connection. Instead of downloading software and undergoing a painful installation process, you can simply run your applications via the Internet. The cloud platform can cater to any business app ranging from office software to unified communications.\nWe can define SaaS as a way to skip all the hassles of setting up physical servers of your own. Gone are the days when you need to assemble expensive IT infrastructure to run software and hardware configurations. With the SaaS business model, you leave all the maintenance of servers and databases to your software vendors.
Along with this, PaaS products enable the development team to collaborate and work together, irrespective of their physical location.\nBy 2019, the platform as a service market is estimated to have a worth of 19 billion U.S. dollars. | Statista\nExamples: Windows Azure, Google App Engine, AWS Elastic Beanstalk\n- PaaS models reduce the CapEx that organizations spend on providing hardware and software on-premises. PaaS providers charge a monthly fee on a per-user basis.\n- Platform-as-a-Service models provide agility to the software development cycle by making the much-needed tools and applications available whenever needed. This helps to improve an application’s time to market as the development begins without delays.\nIn this model of cloud computing, a cloud service provider hosts, in the cloud, infrastructure components that are usually hosted on-premises. These components may include but are not limited to servers, storage, and networking hardware. While platforms like AWS and Google Cloud are examples of public clouds, organizations can set up their own infrastructure on a private cloud.\nBy 2019, the infrastructure as a service (IaaS) market is expected to have a worth of 38.9 billion U.S. dollars. | Statista\nExamples: Google Compute Engine, Rackspace, Amazon Web Services\n- IaaS platforms allow organizations to set up an infrastructure simply by running scripts. This not only includes deploying virtual servers but also pre-configured databases, storage systems, load balancers, network infrastructure, etc.\n- Depending upon business requirements, IaaS allows organizations to scale resources up or down. With such flexibility to scale the infrastructure, it is possible for businesses to respond to the opportunities and challenges that come their way.
Small businesses such as startups can start with small infrastructure and then scale as the business evolves.", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "Types of Cloud Computing\nCloud computing is providing developers and IT departments with the ability to focus on what matters most and avoid undifferentiated work like procurement, maintenance, and capacity planning. As cloud computing has grown in popularity, several different models and deployment strategies have emerged to help meet specific needs of different users. Each type of cloud service, and deployment method, provides you with different levels of control, flexibility, and management. Understanding the differences between Infrastructure as a Service, Platform as a Service, and Software as a Service, as well as what deployment strategies you can use, can help you decide what set of services is right for your needs.\nCloud Computing Models\nThere are three main models for cloud computing. Each model represents a different part of the cloud computing stack.\nInfrastructure as a Service (IaaS)\nInfrastructure as a Service, sometimes abbreviated as IaaS, contains the basic building blocks for cloud IT and typically provide access to networking features, computers (virtual or on dedicated hardware), and data storage space. Infrastructure as a Service provides you with the highest level of flexibility and management control over your IT resources and is most similar to existing IT resources that many IT departments and developers are familiar with today.\nPlatform as a Service (PaaS)\nPlatforms as a service remove the need for organizations to manage the underlying infrastructure (usually hardware and operating systems) and allow you to focus on the deployment and management of your applications. 
This helps you be more efficient as you don’t need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application.\nSoftware as a Service (SaaS)\nSoftware as a Service provides you with a completed product that is run and managed by the service provider. In most cases, people referring to Software as a Service are referring to end-user applications. With a SaaS offering you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software. A common example of a SaaS application is web-based email where you can send and receive email without having to manage feature additions to the email product or maintaining the servers and operating systems that the email program is running on.\nCloud Computing Deployment Models\nA cloud-based application is fully deployed in the cloud and all parts of the application run in the cloud.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "Quite often you end up with software of which you use only a fraction, but you of course pay for the entire application\nmost of these software packages are standalone and difficult to integrate with existing systems\nfuture enhancements and updates might not be useful for your particular needs\nit will always be an expensive solution compared to the hosted or SaaS solutions\nthe IT skills required to implement the solution might not be in house, so your costs will increase\nyour switching cost is very high, so your risk will be higher than with other models\nand last, this is not a one-size-fits-all solution, as you will need to host the software in your own or in a hosted data center.\nThe second delivery model is the build model, whereby the customer builds its own e-commerce application in-house and integrates this with the existing
ERM, CRM and other applications.\nIn favor of this model are: you can build exactly what you need, you can take full advantage of your internal systems and your internal IT competencies, you can be unique in the marketplace with a unique solution, and you can tie all your e-commerce channels to market together in one system that fits it all for you.\nThe minuses of this model are:\nit's a risky and expensive model, which uses highly competent and skilled staff at a high opportunity cost and with a risk of losing these assets to companies for which IT is a core competency, not an add-on business.\nTime to market can be long and even too long, as you want to build a completely integrated and unique system\ncost of maintenance and upgrade could become prohibitive, as you need to employ the same skilled people as the ones who built the system\nthe build solution can become technologically obsolete, as the project needs to stick to its technology choices for too long to justify a return on investment\nThe third delivery model is a platform model, which is a kind of hybrid between buying a software license and building your own platform. This is the open source world, whereby there is a platform that can be designed and enhanced by an internal IT team, system integrators or vendors that are familiar with the open source platform. The source code is available, APIs are developed and services are to a certain extent reusable, and you have a community of developers readily available to give you a hand. There is the flexibility of ready made volumes that if needed can be adapted to your needs, and existing applications can be integrated more easily than within the build model.
It makes things much simpler for your customers because they can pick and choose a plan most suitable for their company size and change it if it increases or decreases.\n- Pricing per feature SaaS pricing model: here your users’ subscription costs are determined by how many features they will need. It’s a fair pricing method that allows users to pay for only what they’ll use. If their use case changes or they need additional features, they can always upgrade.\n- Tiered SaaS pricing model: most SaaS businesses use this pricing strategy because it allows you to target different types of customers since there’s a plan for everyone. It involves using multiple packages with various features and prices to attract customers.\n- Pay-as-you-go SaaS pricing model: this pricing strategy is most commonly used by infrastructure software companies and it involves charging users based on how they use the product. The more time customers spend using your product, the higher their subscription fee is.\nStep 4: Decide on your sales strategy\nBefore you launch your product, you have to decide on the relevant various sales strategies and approach you will adopt to acquire customers. In SaaS, there are two main types of sales go-to-market strategies employed to attract and convert potential customers.\nProduct-led growth strategy\nIn a product-led go-to-market strategy, your product is the major focus and the center of all marketing and sales efforts. This strategy involves using your product as the driving force for customer acquisition, activation, and retention.\nHere, all users are allowed to experience the product for themselves under a free trial or freemium version. Users don’t need to go through members of your sales team or a sales funnel before they convert. 
Once they experience your product and receive value, it’s easier for them to become paying customers, and for you to increase customer lifetime value.\nIf you choose this strategy, marketing should be an essential part of your product development strategy.\nA lot of SaaS companies adopt this product-led growth strategy, e.g., Slack and Dropbox. It is done by building a value-filled product experience that encourages users to stick around.\nImplement a product-led growth strategy for your SaaS now!\nSales-led growth strategy\nThe sales-led strategy works best for enterprise companies like Salesforce and Microsoft whose products require more high-level guidance with a longer sales cycle.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "Software as a Service (SaaS) has become one of the most common ways that corporations sell and support their products and services. The term on its own sounds complex, yet there are several aspects that are common among SaaS offerings, including software licensing, remote hosting, and support for all sorts of components, software, and peripherals.\nSoftware as a Service (SaaS) is a delivery model where software is distributed and hosted centrally on a subscription basis. In some cases, it has been called “on-demand” software, and it was previously referred to as “cloud computing” by Microsoft. SaaS uses a virtual private network (VPN) or an ISP connection to connect the customer to the source code and make it available for use by the customer’s own applications.\nSaaS is often described as the ability to use applications from anywhere with a web connection. The software applications are installed on a server so that they can be reached remotely by customers who are located anywhere in the world.
Customers can also use software applications on any device they have, including smartphones and PDAs.\nTo get the most out of your SaaS purchase, you should choose the right SaaS provider for the maximum benefit. Fortunately, this process can be simplified by looking at several different SaaS providers to decide on the one that supplies the features that will help your business flourish.\nBefore you choose a SaaS provider, consider several factors that will impact the long-term cost of your SaaS purchases. For instance, you will want to check out the price per transaction or per usage. You should also consider whether your chosen SaaS provider supports payment plans, subscriptions, or discounts. In addition, you want to make sure the software provider is certified and complies with applicable industry standards and regulations.\nYou also want to review the features of SaaS application providers to determine which features you need in order to properly manage your company. While you may not need everything included in a SaaS package, it can be helpful to know which additional features provide you with the most value in terms of support and maintenance. Finally, you want to check out pricing structures, and how they will benefit your company.\nThere are many benefits to SaaS programs, such as remote support and remote use of applications, which helps make them popular in the business environment. However, not all customers take advantage of the full benefits of SaaS software.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-5", "d_text": "Cloud Application Services (CAS)\nSAP Cloud Application Services (CAS) offerings are optional and come at an additional cost. CAS helps Customers run and manage the applications across SAP private cloud solutions, including hyperscaler, hybrid, and multi-cloud scenarios.
It can cover SAP S/4HANA Cloud, private edition, SAP Business Technology Platform & SAP NetWeaver technology platform. It provides a wide range of services for managing business processes and application logic, data layers, application security, software releases, performance, integrations, and more through a subscription. A customer can tailor the experience by selecting from the fixed-price, outcome-based packages. Refer to this link for more details.\n5. SAP Preferred Success\nThis offering provides continuous & advanced support for SAP Cloud Customers. Additionally, it can deliver an adoption strategy plan tailored to the Customer’s exact business goals to help optimize the use of SAP software. The offering includes change management to help accelerate the transition to SAP S/4HANA Cloud, public & private editions. The expanded editions of SAP Preferred Success are currently available for SAP Ariba solutions, the SAP Commerce Cloud solution, the SAP SuccessFactors portfolio, and the SAP BTP portfolio. Please find more details in the latest news and here. A short video explaining the offering is here.\n6. Partner Learning & Support\na. TDD (Test, Demo & Development) system\nFor Partners to explore the SAP S/4HANA Cloud, public edition, SAP offers a TDD (Test, Demo & Development) system (License Material SKU – 8012331) under Non-Commercial Licensing (NCL). It consists of 4 Products/Systems as below:\n- SAP S/4HANA Cloud – Partner Demo Customizing\n- SAP S/4HANA Cloud – Partner Demo Development\n- SAP Central Business Configuration – Test\n- SAP Cloud ALM – Production\nApart from the above 4, SAP Business Technology Platform (BTP) Global Account (GA) is also included with Work Zone & Landscape Portal entitlements. SAP BTP Cloud Identity Services (incl.
IAS/IPS) are also provided via dedicated tenants (if not already there).\nSAP Build Work Zone, standard edition enables organizations to create business sites that serve as a unified point of access to SAP, custom-built, and third party applications and extensions, both on the cloud and on premise.", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-3", "d_text": "Free trial phases, in which users can test functions for free, are not uncommon with the SaaS model. The duration of the test phase varies from 2 days to a few weeks.\nWith the pricing per user model, a monthly or annual fee is paid for each active user. The user can use the software with its full range of functions, independently of transactions and time. The user rents the software and the associated services (bug fixes, patches, maintenance). Different gradations are possible here.\nPrice levels depending on the functional scope\nWith this price model, the prices depend on the range of functions.\nCustomers gain access to the application for a fixed subscription fee (usually monthly or yearly). There may also be gradations here: For a different price point, you get a different range of functions.\nPricing tiers based on amount of data used or storage space\nWith this pricing model, prices depend on the amount of data used or storage space. If the user reaches the limit of the price level they have chosen, it is usually possible to book an upgrade.\nWith freemium, the service provider provides a basic version for free. This is complemented by additional chargeable and bookable functions and services.\nConclusion: SaaS continues to grow in importance for businesses\nThe growing daily need for large amounts of data, always up-to-date software, backups and high data security requirements will continue to drive companies to increasingly outsource IT issues to specialist service providers. SaaS lets your business focus on what you do best.
You don’t have to worry about infrastructure and hardware, you don’t have to set up a large IT department, and you can cut costs in the process.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-2", "d_text": "You have access to a virtual server, storage, and an API, enabling you to migrate information to your allocated storage. From there, you can access, configure, and modify the VM and storage at your leisure. IaaS gives you the highest level of flexibility and control over your IT workloads.\nPlatform as a Service (PaaS)\nWith PaaS, there is no need for you to manage the server infrastructure, storage, network, and databases. As an on-demand cloud environment, PaaS is designed to support the complete lifecycle of web applications: from development, testing, delivery, and management to the updating of software applications.\nAfter purchasing the resources from the service provider, you can access these tools and resources over the internet. It allows you to be more efficient in general cloud-based software development, simple cloud-based apps, and more sophisticated cloud-enabled enterprise applications. It also lets you manage the applications and services you develop while the cloud vendor manages the rest.\nWith PaaS, you can avoid the expense of buying software licenses, application infrastructure, middleware, container orchestrators, development tools, and other resources.\nClick here for a closer look at some of the top PaaS providers of 2019.\nSoftware as a Service (SaaS)\nUnlike IaaS and PaaS, you receive completed cloud-based software applications that you can access over the internet. Web-based software and products are provided via an on-demand service and usually through a subscription. The provider will host and manage all aspects of these software applications.
The provider also handles the maintenance of the servers, operating systems, and elements such as software upgrades.\nMicrosoft, Salesforce, Oracle Cloud, and SAP are some of the leading SaaS providers.\nClick here to have a look at some of the top business apps available.\nOther Cloud Service Models\n“Back-end” as a service (BaaS)\nBaaS allows you to outsource all the back-end aspects of web or mobile applications. With this, you only need to write and maintain the frontend. BaaS supplies pre-written software such as user authentication, remote updating, database management, and push notifications. You also receive cloud storage and hosting.\nFunction as a service (FaaS)\nFaaS is a cloud service that aids in serverless app development and management. Servers are hosted externally, allowing you to focus on programming and other tasks. FaaS code only runs when a function is invoked, setting it apart from other cloud-based computing models.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-1", "d_text": "Here's an interesting data point: The companies now taking SaaS to the highest levels are consistently among the fastest-growing firms in the world.\nThanks in part to SaaS' ability to reach new bastions of IT that formerly were considered too sacred, SaaS revenues are expected to reach US$106 billion in 2016. This marks an increase of 21 percent over projected 2015 spending levels and a 30 percent compound annual growth rate, compared to 5 percent growth for overall enterprise IT.\nThe conclusion is clear: It is time to stop believing your company's IT challenges are unique and that its needs for scalability and interoperability are exclusive.
Truth be told, there's less that is unique and more that's in common among the majority of companies today.\nThanks to the increasing \"lego-ization\" of IT, the modular SaaS model is replacing the last bastions of custom-developed IT infrastructure, as a better way to allow businesses to grow.\nThis article originally ran in CRM Buyer on November 20, 2015. Impartner delivers the industry's most advanced SaaS-based Partner Relationship Management solution, helping companies worldwide manage their partner relationships and accelerate revenue and profitability through indirect sales channels. Impartner PRM is the industry's only turnkey solution that can deploy a world-class Partner Portal in as few as 30 days, using the company's highly engineered, three-step Velocity™ onboarding process. Watch Impartner's interactive demo here.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "From small mom-and-pop businesses to global corporations, cloud computing is being used to streamline commercial operations while reducing overhead and other IT-related costs. According to some reports, the cloud computing services market will balloon to a whopping $180 billion by the end of the year.\nWhile there are dozens of different types of cloud environments, each with their own characteristics, most cloud services can be broken down into one of three different categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).\nInfrastructure as a Service\nOne of the three primary cloud computing services is Infrastructure as a Service (IaaS), which is characterized by the use of virtualized hardware. The Internet Engineering Task Force (IETF) describes IaaS as delivering virtual hardware services to clients by abstracting away its infrastructure details. An organization, for instance, may custom order a virtual server through IaaS.
The IaaS provider then pulls the necessary hardware resources from various servers, delivering the virtualized components to the organization.\nExamples of IaaS include the following:\nVirtual data centers\nPlatform as a Service\nPlatform as a Service (PaaS) is typically used to run, test and develop applications while using the resources of cloud-based servers. Normally, the provider delivers a computing platform, including an operating system, program execution environment, database, etc. for the client to use. This allows the client to develop and run their applications within a controlled cloud environment without dealing with the technical hurdles and cost associated with purchasing the actual physical hardware.\nExamples of PaaS include the following:\nSoftware as a Service\nAt the very top of the cloud computing services hierarchy is Software as a Service (SaaS), also known as an “on-demand service.” In this service model, providers give users access to apps, programs, software and databases, all of which are stored on cloud-based servers. The provider is responsible for managing and maintaining the infrastructure used to run these applications.\nSaaS has become increasingly popular due to the cost-savings that it offers. If a company needs 50 copies of a particular software, one for each of its 50 workers, it could purchase 50 physical copies. Alternatively, the company could purchase the software “as a service” for its 50-person workforce. Pricing models for SaaS will vary depending on the software and number of users, but organizations can expect to pay less for SaaS as opposed to other methods of delivery.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Pricing Models for SaaS Products\nPricing is probably the most crucial thing to figure out. But as with all important topics, it isn't easy to get it right. To help you figure it out, I researched pricing for two days.
Here are my learnings.\nWe even built a little tool that lets you compare different pricing scenarios. This way you can figure out which pricing is suitable for your business.\nDifferent pricing models\nThe different pricing models I'll cover today aren't mutually exclusive. The most common practice is to combine multiple models together. This way you can build the right bundles for your customers. Something that's done pretty well by Hubspot. They combine users, features & freemiums into packages suitable for every customer. But now let's take a look at what pricing could be suitable for you:\nFlat pricing is pretty straightforward. You pay X amount of money in a certain period. The most common are monthly and yearly. No hidden fees, no strings attached.\nPro: Transparency. The customer knows exactly what to expect. It, therefore, is pretty trustworthy.\nCon: Regardless of whether a user uses your product 10x as often as another user, both pay the same amount of money. Therefore some users will love it while it seems expensive to others.\nUser count is something often used by bigger SaaS companies offering products for entire teams. The logic behind it: The bigger the team, the higher the revenue. Something that is the case more times than it’s not. There are two common approaches: either every new user needs a new license, or you go with the team-based approach, having licences for up to 5 users, 10 users, etc.\nPro: Every new customer can add growth to your MRR if they grow themselves. Having implemented the solution already, they aren’t super likely to switch.\nCon: Other builders might see this as an opportunity to attract customers with better pricing for their size. Also, every license-based SaaS has seen accounts with only one user that is actually 2 or more people. It can be an annoyance to teams with some users who barely have to use the software but still need an account.\nCharging based on features is something for very diverse SaaS products.
As with user count pricing models, it’s often used by large players. Oftentimes you can see this with companies that essentially have multiple SaaS products stacked upon each other.", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "Software as a service is considered to be the IT market’s most prosperous branch. Forrester Research estimated an annual average market value growth of 56.2%. They also forecasted that it will continue to grow at a rate of 18.9% per year by 2020.\nThe rapid expansion of the SaaS market comes as no surprise. This type of business model has many benefits that attract new entrepreneurs to this model of software distribution. On the other hand, companies are more willing to choose SaaS over on-site apps, as this solution is more beneficial and secure.\nSoftware as a service (SaaS) is a software distribution model in which applications are hosted by a third-party provider and made available to customers on the Internet. Alongside infrastructure as a service (IaaS) and platform as a service (PaaS), SaaS is one of the three main categories of cloud computing.\nThere are SaaS applications for basic business technologies such as email, sales management, customer relationship management (CRM), financial management, human resource management (HRM), billing and collaboration. Salesforce, Oracle, SAP, Intuit and Microsoft are leading SaaS providers.\nNow SaaS is clear to you, right? If not, please let us know in the comment section below.\nWhen there is a need to choose which framework, technology, or programming language should be used during the development process, the development company makes the final decision in most cases and the customer relies on the experience and skills of the developer.
But the software delivery model chosen affects both the customer and the developer, as different models have different pros and cons and require different development strategies.\nSaaS eliminates the need for organizations to install and run apps on their own computers or data centers. This removes the cost of acquiring, supplying and maintaining hardware, as well as licensing, installing and supporting software. Other SaaS model benefits include:\nProducts supplied in the SaaS model can be easily adapted to the needs of specific customers. Most of these companies offer fixed subscription plans, but they also offer the option of changing settings and functionalities to more accurately meet customer needs. The price for such custom software is, of course, settled individually. Moreover, many SaaS providers offer access to their APIs to allow you to integrate with existing systems.\nAs no license fee is required, there are no upfront costs related to purchasing a SaaS application. Customers are not required to purchase a complete product.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "Become a Partner\nJoin the Modeliosoft Partners program and grow your business\nAre you using Modelio actively, contributing to extensions of the tool or participating in the Modelio ecosystem? Or perhaps you're distributing your own tool that is coupled with Modelio? You should then be considering a partnership with Modeliosoft.\nModeliosoft is always on the lookout for new partners to help Modelio end-users successfully realize their projects using Modelio solutions.
Among other benefits, a partner can:\n- Provide services related to Modelio\n- Sell modules that they have developed, and be referenced from the Modelio store\n- Resell Modeliosoft's solutions, especially modules\n- Receive support from Modeliosoft in customizing Modelio\nBy joining the Modeliosoft Partners program, you become a privileged contact for Modelio customers and can offer them a complete range of products and services.\nChoose from three partnership levels:\n- Expert: Modeliosoft Experts provide consulting and training on issues related to Modelio use and usage domains. Modeliosoft Experts will be referenced on the Modelio website, and Modelio customers will be invited to contact Modelio experts to get support and consulting.\n- VAR or Reseller: Modeliosoft VARs and Resellers are authorized to include Modelio in their tool or consulting solutions, and to resell Modeliosoft's distributions. Modeliosoft VARs and Resellers will be referenced on the Modeliosoft website.\n- Certified Resellers: Modeliosoft Certified Resellers are authorized to include Modelio in their tool or consulting solutions, and to resell Modeliosoft's solutions. In addition, Modeliosoft Certified Resellers provide support and a hotline to their registered Modelio end-users. Modeliosoft Certified Resellers will be referenced on the Modeliosoft website.\nTo become a Modelio partner, please provide the following information:\nTell us who you are, what you’re interested in, how you would like to work with us and which partnership level you’re interested in. We look forward to hearing from you.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-1", "d_text": "A product company selling the same amount each quarter doesn’t grow, while the SaaS company does.\nIf the SaaP company has the same sales each period, the company’s growth is zero. The SaaS company with the same sales continues to grow by the new ARR offset by churn.
This results in the revenue from the SaaS model ultimately exceeding SaaP.\nRevenue dominated by SaaS recurring revenue stream.\nParadoxically, in the scenario with rapid customer growth, the SaaP company will grow near-term revenue faster than the SaaS company because it is receiving all of its product revenue at the time of the sale. The SaaS company is deferring most of the revenue from new sales; however, the company is growing its recurring revenue base, which will yield long term revenue growth. The value of this recurring revenue will become apparent as growth slows and the value of the recurring revenue exceeds the value of new sales. The rapidly building recurring revenue stream will pay off in higher future revenue for the SaaS company relative to the SaaP company as the rise in new customers slows.\nThe SaaS structural model of building new revenue on top of the existing recurring revenue yields higher long term revenue and growth potential.\nThe ratio of growth between SaaS and SaaP companies adding a large number of customers is lower during this high growth stage because the SaaS company is growing the future recurring revenue stream faster than it is growing its current revenue. As growth tapers, the ratio of revenue of the SaaS and SaaP company shifts dramatically to the SaaS company.\nSaaS Expansion Revenue (discussed below) increases this ratio of SaaS growth to SaaP growth.\n4. SaaS Companies have more Expansion Revenue\n“25% of the revenue growth from SaaS companies with over $25m in annual revenue is from “Expansion Revenue.”\nPacific Crest Survey 2014\nSaaS companies perfected the “Land and Expand” strategy. The nature of SaaS reduces the “Time to Value” with shorter implementation times, and the ease of expanding existing customers adds to revenue growth.\nThere are fewer barriers to selling SaaS since there is less fear of lock-in to expensive, complex, and difficult-to-install software.
Once a SaaS company has a foothold in a company, it is easier to expand, generating more revenue growth than for SaaP.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-7", "d_text": "We might move away from the traditional systems integrators and you might see more business-service providers focusing on supporting the customers. The ISVs are actually building some of the components that are used, but they aren’t necessarily going to be the service providers.\nColleen, I wonder if you could put on your analyst hat again for a minute and try to forecast how that market might shake out.\nSmith: I think the SaaS market, in general, is really still in its nascency, and there are a lot of things that have yet to happen. But, the good news is this isn’t just a fad. We see a fundamental change in terms of the business model.\nWhat I say a lot is that if we think in terms of the software industry over the last 20 years, we’ve come a long way in terms of building partnerships, and in terms of how systems integrators and service providers work with ISVs. What I see being the success of SaaS is that if we continue to enhance that model, it's going to be about hosting providers working more closely with system integrators and ISVs. The only way that the end customer is going to win in this is if we get into a business model where there is that shared risk and shared reward, but the customer pays for only what they need to use.\nIt's going to come down to pricing models. It still has to come down to some building of ecosystems out there, where everybody knows their role and plays that role, but doesn’t necessarily try to do the other person’s role. There are still a lot of things happening.\nI believe it’s going to be vertically focused. I don’t think this is going to be a horizontal play. We’ve seen a lot of success in vertical business expertise. There's going to be content, business applications, data, and services.
If all of those can be offered in a single environment through a single service provider, the customer will end up winning.\nSmith: Absolutely. We’ve always worked with a lot of our partners and told them, \"Figure out where your niche is, and, if you can be the best at your niche, you can be successful.\" We aren’t necessarily talking about creating the next SAP, but if you can be really successful within your specific niche area, then your customers are going to value your service.\nIn the SaaS model, there are two S's.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-1", "d_text": "Let's look at some examples.\nIn the email marketing space, we have multiple companies valued at $1B+: Marketo, HubSpot, and Mailchimp.\nWhen you strip away all the outer layers, they all have essentially the same core product: a tool that lets you send and automate emails to your customers and audience.\nTheir market is the enterprise. As a result they've differentiated their product on the things that enterprise customers care about: customization, security, and scale (that's their Market Product Fit).\nBecause of that, they use Outbound Sales to sell (Product Channel Fit). Because they use Outbound Sales they must have High ACV's to support the channel (Channel Model Fit). There are thousands of customers in the enterprise space, which, times their ACV's, equals a greater than $100M revenue business (Model Market Fit).\nTheir market is the mid-market. As a result they've differentiated their product on “All In One” since that's what mid-market customers care about.\nSince the product still requires a fair amount of setup and education to work, they use Inbound Sales (Content) and Channel Partnerships (Product Channel Fit). Because those channels hit a medium CAC, they have medium ACVs to support the channel (Channel Model Fit).
There are hundreds of thousands of mid-market customers, which, multiplied by their ACV, ends up being a $100M+ revenue business (Model Market Fit).\nTheir market is small businesses. As a result they've differentiated their product on being simple and touchless (Market Product Fit).\nBecause their product is focused on simplicity and being touchless, they can use virality via the free tier as the primary acquisition channel (Product Channel Fit). Because their channel is virality, they use a freemium model with lower price points to have low enough friction (Channel Model Fit). There are millions of small businesses, so even at the low ACV, they have a far greater than $100M business (Model Market Fit).\nWe can play this scenario out in almost any SaaS category. Let's take a look at a few more examples.\nAs you can see, no company (that I know of) has ever dominated all three tiers of the market (however, LinkedIn and Slack might be the closest).\nThe reason for that, again, is the four Fits. All four Fits are like the pieces of a puzzle. Plenty of enterprise companies have tried to move down market by making small changes to their pricing or product.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-0", "d_text": "Companies have time and time again fallen apart trying to manage high administrative costs and the complexities of controlling customer and channel partner relations without a management system or with one developed “in house”. SaaS has, therefore, proven to be a big leap forward for both providers and consumers.\nProviders: Companies are increasingly taking advantage of the benefits of cloud computing, such as the reduction of infrastructural costs, scalability and adaptability, a better quality of service in less time and the security of data being stored in a safe environment.
Following the “as a Service” model guarantees the flexibility and efficiency that companies need, offering a strong competitive advantage.\nConsumers: SaaS has effectively commoditized IT functions which used to be really complex and expensive, making them accessible at a lower cost. Website analytics are a good example of advanced capabilities which have been democratized. To put it simply, SaaS simplifies, reducing cost, time and resources for the consumer.\nThere is, however, a widely circulated myth in the midst of the great SaaS boom: that Software as a Service has almost eradicated the need for channel partners and channel partner programs.\nSaaS vendors are not using channel partners (SIs) or Value Added Resellers (VARs) very much. In fact, they account for only 35-45% of SaaS provider revenue, which is significantly less than the percentage for on-premises software. Only 23% of B2B SaaS vendors have a channel program, while 80% of on-premises software vendors do.\nThe main reasons for the shift away from channel partners are:\n- configurations tend to be made by customers, when in the past, because of their complexity, they would have been made by channel partners\n- development, installation and upgrades used to be performed by VARs but are now done centrally by the service provider\n- SaaS companies are only beginning to take their first steps, so haven’t gotten round to implementing a channel program\n- Channel programs are much more necessary when it comes to international sales and as most SaaS companies are quite young, they’ve rarely expanded into the global market\nThe as-a-service model has essentially changed the role and value-added status of channel partners, meaning they now need to work more strategically than ever before.
To find out exactly how partners can adapt to SaaS, what has been making them reluctant, and the ins and outs of what the revolution entails, download our Guide now!", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "Last week I had the opportunity to present the first half of a teleconference on contract lifecycle management (CLM) hosted by the Institute of Supply Management (ISM) and Zycus. You should be able to view the recording in its entirety within a week or so here. It was a good session covering the basics of CLM as well as some specific best practices so I recommend taking a look. As part of the presentation we polled the listeners on a few key questions and I'd like to share the results as well as some insights.\nFor those of you unable to attend, I will summarize some of the content that I presented on SAP’s overall growth and innovation strategy. SAP has a double-barreled product strategy focused on Growth and Innovation.\nThe Growth strategy rests heavily on the current Business Suite, which includes the core ERP product that is used by approximately 30,000 companies worldwide. SAP claims that it touches 60 percent of the world’s business transactions, which is hard to validate but not all that hard to believe. The main revenue source today is Support, which comprises 50% of the total revenues of the company at more than 5 billion Euros annually, and it grew by 15% in 2009. Other growth engines include:\nNetSuite, a leading SaaS ERP/CRM provider, recently announced that it is revamping its channel partner comp model: 100% on Y1 subscription revenue, and 10% thereafter. VARs have been remiss in taking up the SaaS torch, largely because most SaaS vendors haven’t provided a financial model conducive to VARs’ cash flow requirements. Per the on-premise license model, channel partners make a big portion of their nut on initial product margin, i.e., up front. 
But vendor SaaS economics minimize up-front remuneration and spread revenue out over a long period of time. Though it sacrifices year-one revenue, NetSuite’s 100/10 model more closely mirrors VARs’ accounting practices.\nNetSuite’s model will be the first of many SaaS channel model “experiments” that will ultimately be a shot in the arm for the SMB market in particular.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-1", "d_text": "He distributed the letter through the computer magazines of the day and started to put in place his strategy for selling software licenses. Microsoft followed the model of selling software that would work on personal computers made by other vendors and their licenses would need to be paid for.\nEvent #3 – Larry Ellison meeting with Evan Goldberg and Marc Benioff\nIn the late 1990s Larry Ellison was discussing the future of computing with Evan Goldberg and Marc Benioff. The internet was gaining popularity and all three were contemplating the impact this would have on business software. The discussion focussed on what has become known as cloud computing and how this would change the face of software. Business Insider magazine pinpoints this discussion as the invention of cloud computing and the SaaS business model. Evan Goldberg went on to found Netsuite and Marc Benioff started Salesforce.com, the rest is history.\nDifferences between SaaS and Application Service Provision (ASP)\nMention SaaS and often someone will say “I remember when it was called ASP”. It is true that both models bear similarities but subtle differences apply which mean they are entirely different technologies. To manage SaaS effectively it is essential to understand the differences and how both models can be applied to cloud computing. 
The following chart gives a concise comparison of the key differences:\nASP:\n• Hosts 3rd party software\n• Client server model – often needs a component on a client device\n• Separate instances\n• Separate updates/maintenance for each user organization\n• Increased costs to scale due to separate implementations\n• Similar cost structure to onsite\n• License focused\nSaaS:\n• ISVs host their own software\n• Nothing to install on client devices (web access)\n• All user organizations on the same version\n• Economies of scale as only one version to maintain\n• Reduced operating costs\n• Subscription focused\nSome pundits describe ASP as a failed business model because essentially the same costs and technology fundamentals apply to ASP and onsite deployments, whereas SaaS provides a radically new way of working: the economies of scale achieved from one implementation can be shared across many user organizations. In effect this is as radical as the invention of the first software compiler by Grace Hopper back in 1952.\nSaaS Management vs Traditional SAM\nMarket research (IDC) estimates reveal that the SaaS market is growing at a CAGR of 18% year on year, whereas traditional licensing is experiencing a gradual decline.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-1", "d_text": "As with most SaaS offerings, functionality of their software increases as you upgrade your membership to higher tiers. With Evernote, some of those added features include the ability to clip emails, additional gigabytes of space for larger uploads, secondary passcode security features, and backwards note history allowing you to see previous versions of your edited notes.\nNeedless to say, having all this power accessible through your web browser is something that was hard to conceive of just a decade ago.\nAnother great example of SaaS is Venngage. Unique in their offering, Venngage allows users to create professional looking infographics with ease, right in your web browser.
Like Evernote, they offer a free option. However, their premium tier offers additional functionality with export features, more templates to work with or modify, and a large number of art assets and vector-based icons to build your custom infographics with.\nAn infographic later on in this article was actually created using Venngage, after just a short amount of time spent tinkering around.\nDig a bit deeper and you’ll find you’ve been using SaaS services for quite some time. Internet titans such as YouTube—especially with the advent of YouTube Red—qualify under our general definition. Red makes it possible to watch as many YouTube videos as you please, ad free … legitimately.\nOf course, ad-blocking software (yet another form of SaaS) enables ad-free viewing, but often at the expense of advertisers and content creators that depend on ad revenue streams. Red is YouTube’s solution to the sea of users adopting ad-blockers.\nWe don’t have to stop at online videos. This very article is being created using a SaaS product, provided by a little company called Google. More specifically, Google’s online word processor “Docs” is offered among an entire suite of freemium tools.\nThat’s just a few examples of how these services can manifest. It’s quite clear that SaaS has become a necessity for business and leisure, and it’s here to stay, forming an entire new breed of entrepreneurship.\nBeing an enterprising individual, let’s say you assemble a crack team of developers, get a groundbreaking SaaS developed, and build a large amount of buzz around your product. 
How do you then price your creation?\nEnter the “Good, Better, Best” price model.", "score": 8.086131989696522, "rank": 99}]} {"qid": 42, "question_text": "How does architecture play a role in creating tension in Hitchcock's Rear Window?", "rank": [{"document_id": "doc-::chunk-3", "d_text": "Hitchcock famously had the entire apartment-courtyard set built and his focus on authenticity was such that, unsatisfied with the height of the apartment blocks (the tallest of which stood 40 feet), he had the basement of the sound studio ripped out and the claustrophobic courtyard built down into it. As a result Jeff’s second floor apartment was in reality at ground level. Many of the apartments also benefited from running water and electricity, and could be lived in, if so desired.\nAlfred Hitchcock – Rear Window\nHaving created the world of Rear Window in line with real world architecture, for Hitchcock to then invert the rules of how architecture is used in the real world throws the audience. Windows become doorways: traditional means of accessing the outside world, they now become the means of access to the occupants and their secrets, both voyeuristically and physically, with characters climbing in and falling out of them. Private rooms become public spaces, with their inhabitants chained to their separate miseries: Miss Lonelyhearts trapped in her empty apartment, the nameless songwriter trapped in his bustling one – both looking for the same thing. From observing and preserving the laws of architecture through only showing these snippets of others lives from the confines of Jeff’s own apartment, Hitchcock builds palpable claustrophobia. A feeling intensified by the purposely restrictive courtyard acting as a heat trap for the negative emotions gestated by the film.\nPevsner was right about the inescapability of buildings. 
But when captured and created for cinema, architecture can transcend physical space to become indicative of a film’s intentions, central to its psychological and philosophical motifs, becoming doorways beyond the literal.\nGabriel Gane, March 2016\nYou can read Gabriel’s first essay on cinema for Miniclick here…", "score": 52.62240791250694, "rank": 1}, {"document_id": "doc-::chunk-2", "d_text": "Before “Rear Window,” the great director spent several years trying to see how he could set his films in a constricted space and include plenty of witty dialogue, while still engaging in what he called “pure cinema” (i.e., primarily visual storytelling) in films like the cutting-free “Rope,” the 3-D “Dial M for Murder,” and, most limiting of all, “Life Boat.” In “Rear Window” he at last came up with an ingenious way to have his self-limited cake and eat it too. He shot the film entirely within what is probably the single greatest and most elaborate single set in the history of cinema. Hitchcock, a draftsman and set designer early in his career, devised the set in such a way as to tell both the main story and all the reality-TV/pantomime subplots in an environmentally seamless manner. Still, Hitchcock, always the complete filmmaker, also employed a more realistic type of sound design that made heavy use of environmental sound and “source” music. Eliminating traditional underscoring, Hitchcock chose to make Franz Waxman’s jazz score, as well as such early ‘50s pop hits as “Mona Lisa” and “That’s Amore,” sound as if they’re coming from neighboring radios and phonographs. On the other hand, Hitch was never about realism, and the set is -- obviously and gloriously -- a set, and sound effects are frequently exaggerated for emotional impact. Forever an expressionist and almost never a realist, Hitchcock’s creative choices suggest reality while avoiding any attempt at recreating it.\nMuch has been written and said about Alfred Hitchcock’s purported neglect of actors. 
The truth is, that while he was no “actor’s director” in the normal sense, he was also no George Lucas; he nearly always got good performances when he cast performers he respected. In “Rear Window” he has two of the greatest leads in Hollywood history, one of whom he was kind of in love with (it wasn’t Stewart). Grace Kelly gives probably her best performance in a part written especially for her, as a professional fashion maven with a core of iron. James Stewart has no problem holding onto his dignity while performing the entire film in his pajamas, nor does he falter in holding onto our sympathy while being more than a little mean at times to the wonderful Lisa/Grace Kelly.", "score": 48.99327189760051, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "Alfred Hitchcock’s classic film Rear Window (1954), shot almost entirely from one room, focuses largely on what it means to observe and what it means to participate. As observers, we are watching, and as participants, we are engaging. But what happens when watching becomes participation? L.B. Jefferies, a temporarily disabled photographer, uses his apartment’s rear window to gaze into his neighbors’ lives. Jefferies, deriving pleasure from watching others, is a passive voyeur, but we must ask ourselves: are we any different? Watching a film and peeking through a neighbor’s window are both forms of active viewing. Through a clever use of continuous editing and cinematography, Hitchcock shows the audience just how voyeuristic they, like Jefferies, really are.\nAlexander Hubers, ’15\nSponsor: Michelle Mouton", "score": 46.967271777659775, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Sunday, 16 March 2014\nRear Window (1954)\nOne of Hitch's greatest talents was in creating a world unique to his storytelling and visual style. 
In Rear Window these talents really are to the fore, using the teeming tenements of New York as his playground, embellishing it with great colour (both socially and cinematically) and detail.\nAn essay in big city loneliness and morbid curiosity, Rear Window has much to say about Hitch’s gleeful reliance on voyeurism, both of his audience at the cinema and of the outside world in general, tapping into how we perhaps live(d) via witty symbolism and analogy. I wonder if such a film, if made today, could grab an audience’s attention and capture something of urban life now, given that as a society we seem to be living a more insular and detached life, more reliant on the screens of our laptops and PCs than on the window to the streets outside.\nA film about 'spectacle', Rear Window gives us just that – and how – thanks to the great aforementioned symbolism and the equally subtle analogies that can be made between Stewart and Kelly's characters, or more specifically, their relationship, and those they are peeping on (as cited in Tania Modleski's excellent book 'The Women Who Knew Too Much'). It goes without saying that Stewart and Kelly are at the top of their game, and they're ably provided with some comedic support from Thelma Ritter, adding some of Hitch's typically ghoulish delight to the proceedings.\nAll in all, this remains one of Hitchcock's finest.\n- I hate that he killed the dog though :(", "score": 45.93970572005474, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "On Sunday evening, I watched ‘Rear Window’ for the first time. So captured was I by the movie that I watched it again the next day!
I got so much more out of it during the second viewing and as I’ve been thinking about it over the last 24 hours, I’ve got even more out of it!\nEverything about this movie is outstanding!\n‘Rear Window’ is what I call an ‘up close and personal’ movie – it is almost theatre rather than a cinematic experience and it is that sense of theatre, my ‘closeness’ to the characters, which compounds the intensity. The action is all within a single courtyard, different apartments but all within the boundaries of the courtyard. When there is movement outside the courtyard, eg when the murder suspect, Lars Thorwald, leaves his apartment, we don’t see where he goes and, when he returns, we only see him when he enters the courtyard again.\nFranz Waxman’s atmosphere-creating music score sets then scene.\nJeff – L.B. Jefferies – played so effortlessly by James Stewart, is a newspaper photographer, fearless, intrepid, and his heroism has landed him at home, in his apartment, at the height of a New York oppressive Summer, with a broken leg in a cast. His frustration is palpable and it is exacerbated when his editor phones him to tell him that an exciting assignment is coming his way but it is not to be – Jeff tells him that his cast isn’t coming off for another week. 
A little light comic relief is infused into the movie when we see Jeff’s pleasure at being able to scratch an itch under the cast thanks to a slim back scratcher which he can feed through to the itch (it also brings relief to a hard-to-reach itchy toe).\nAs if the broken leg isn’t enough, Jeff appears to be suffering with a virus, an infection of some sort which calls for the insurance nurse (nurse paid for by his newspaper’s insurance company), Stella, gloriously played by Thelma Ritter, to regularly check his temperature.\nThe rest of the scene, neighbourhood normality, and we see it thanks to the open windows of the apartments during the hot summer:\n‘Miss Lonelyhearts’ (Judith Evelyn), nicknamed by Jeff who, clearly, has too much time on his hands. He is captivated by what he sees as the lives of his neighbours.", "score": 45.41120742398718, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "First, before I get into my main topic, I just have to say that this film reminded me that Grace Kelly was one of the most beautiful actresses of her time. Thank you to Alfred Hitchcock for providing such wonder visions of her in “Rear Window”. Now, back to the topic at hand.\nGoing to the movies and watching a film is, at its very heart, a voyeuristic activity. We sit and watch other people’s lives and stories from afar framed by the screen and form opinions and cast judgements on what we are watching. Alfred Hitchcock throws that voyeurism in our faces with “Rear Window”.\n“Rear Window” is a great example of what a master director with a master editor can accomplish. We watch a mostly point-of-view presentation of how a story is created in one’s mind and how that story can infect others that surround a character. We watch as L.B. 
Jefferies, confined to his wheelchair, becomes involved in the other lives of his neighbors as he voyeuristically watches their lives unfold, framed by the windows of their apartments.\nWe watch “Jeff” as he watches his neighbors and we become consumed by what he is seeing through classic A-B-A editing. A: We see him watching something. B: We cut to see what he is seeing. A: We cut back to get his reaction to what he is seeing. As the Bordwell reading stated, “the Kuleshov effect operates here: Our mind connects the two parts of space”, Jeff’s apartment and the courtyard, without the use of an establishing shot. This sequence of cuts is repeated over and over again to develop the movie-like story of his neighbors where he forms his opinions and casts judgements on what he is seeing and hearing. Given the lack of details, he fills in the gaps by coming to his own conclusions based purely on his visual input from afar. This is the Kuleshov effect on the part of Jeff himself. He connects the dots in his own mind based simply on what he sees and hears while he watches the movie of his neighbors unfold.\nFor me, “Rear Window” was a film of a man watching a movie of his neighbors. Each window in the courtyard presents a screen for us to voyeuristically watch as Jeff also voyeuristically watches his neighbors progress through their everyday lives and, oh yes, commits murder.", "score": 42.2378681542644, "rank": 6}, {"document_id": "doc-::chunk-1", "d_text": "- Bottle Movie: The action rarely leaves the perspective of Jeff's apartment, which means that the action is limited to Jeff's apartment, what he can see in the courtyard of the apartment complex and the windows of other apartments. 
The only time that the movie leaves this limited perspective is when Thorwald pushes Jeff out of his window.\n- While the effect is similar, Rear Window was the opposite of most TV Bottle Episodes, shot to save money: The entire courtyard was constructed on a sound stage; one of the largest in film history at the time. This gave Hitchcock precise control over lighting and camera angles — on the enormous courtyard set he often had to give actors direction via radio while he was shooting from the opposite side.\n- Bridal Carry: The newlyweds first enter their new apartment normally, getting everything settled with the landlord. Then they walk out just so he can carry her in this way.\n- Book Ends: The film begins and ends with Jeff resting in his wheelchair.\n- Bury Your Disabled: It appears that Mrs. Thorwald is an invalid and implied that the stress of caring for her led her husband to adultery and murder.\n- Chekhov's Gun: The flashbulb (from the camera) that Jeff initially plans to use to signal Lisa to leave Thorwald's apartment comes in handy when Thorwald comes to Jeff's apartment. He uses the flash to stall Thorwald just long enough before Doyle and Lisa arrive to see what's happening.\n- Closed Circle: Jeff can't leave his apartment because of his broken leg.\n- Come Back to Bed, Honey: At the beginning of the movie, a newly wed couple moves into an apartment close to Jeff's. They close their blinds and are not seen for a while. After a few days, the man is seen leaning out of the window, and his wife calls him back.\n- Creator Cameo: Hitch is seen tinkering with the clock in the songwriter's apartment.\n- Deadpan Snarker: Stella and Jeff.\n- Death of a Child: The dog and technically, Mrs. Thorwald—she's an adult, but she's disabled, a demographic usually covered by this trope.\n- Does This Remind You of Anything? 
/ Phallic Weapon / Something Else Also Rises / Visual Innuendo: Unable to see into an apartment with a pair of binoculars, Jeff picks up a telescopic lens—in other words, longer—and is visibly satisfied now that he can see better.", "score": 42.00697614678032, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "\"I'm not much on rear window ethics.\"\nRear Window is a classic 1954 thriller, directed by Alfred Hitchcock, starring Jimmy Stewart and Grace Kelly.\nL. B. \"Jeff\" Jeffries (Stewart) is a photojournalist who broke his leg during a dangerous assignment. He is confined to his small Greenwich Village apartment while recuperating and, out of boredom, begins to spy on his various neighbors across the courtyard. He sees one of the neighbors, Lars Thorwald (Raymond Burr), acting suspiciously. He eventually becomes convinced that Thorwald killed his wife Anna (Irene Winston), a bedridden invalid who has gone missing. Jeff's girlfriend, Lisa Carol Fremont (Kelly), doesn't believe him at first, but soon changes her mind. After trying and failing to convince Jeff's police-detective friend Lt. Doyle (Wendell Corey) of the crime, Jeff, Lisa, and Jeff's nurse Stella (Thelma Ritter) come up with a plan to catch the killer themselves.\nRegarded as one of Hitchcock's very best films. It was remade in 1998 as a Made-for-TV Movie starring the late Christopher Reeve, who was actually paralyzed from the neck down. The 2007 film Disturbia with Shia LaBeouf is a modern day retelling, and it's far from the only one.\nTropes used in Rear Window:\n- Adaptation Expansion: The film is based on Cornell Woolrich's short story \"It Had to Be Murder,\" which didn't have the characters of Lisa and Stella.\n- Adult Fear: Mrs.
Thorwald's death, given that the disabled are especially prone to abuse and violence.\n- Which foreshadows Jeff's situation at the end.\n- The possibility of a once presumably happy marriage being strained to the point of adultery and murder by one's illness.\n- Author Appeal: Grace Kelly is one of many blonde leading ladies for Hitchcock.\n- Awful Wedded Life: The two scenes of the Thorwalds before Mrs. Thorwald vanishes make it clear their marriage is this.\n- Binocular Shot: At several points we view things through Jeff's binoculars and/or telephoto lens.\n- Blinding Camera Flash: Used to stall the killer.", "score": 40.86540025043316, "rank": 8}, {"document_id": "doc-::chunk-1", "d_text": "When the salesman, who we later learn is named Thornwall, starts behaving strangely and his bed-ridden wife is suddenly missing, Jefferies begins to suspect foul play, and snares his friends, including his girlfriend Lisa (Grace Kelly), a police detective (Wendell Corey), and his nurse (Thelma Ritter) into his web of suspicions. More and more clues appear, but are they real or the stuff of paranoia brought on by being cooped up in an apartment for a month-and-a-half?\nThe film’s set-up is perfect. We see Jefferies, we see what he sees, and we see his reaction to it. He’s trapped, as we are with him, unable to act and forced to watch events unfold. Here Hitchcock gives us the double-edged sword of voyeurism. Sure, you can look anonymously into countless other worlds, but you remain apart from them, unable to touch or directly effect them.\nThe further the film goes we begin to believe he may be right about Thornwall, but when his facts begin to be disproved one-by-one we wonder what the truth really is. And Hitchcock makes us wait, teasing us with clues and revealing the truth only at the last possible moment in a terrific climax (and perhaps even a better epilogue). 
Oh, how many current Hollywood writers and directors could learn a lesson here!!\nJimmy Stewart is perfectly cast as the man we want to believe with his own flaws and fears that may be coloring what he sees. And Grace Kelly, what can I say? If there’s ever been a more beautiful woman to grace movie screens (pun intended) I dare you to find her. Smart, sassy, playful, intelligent, and able to convey complex emotions in the slightest action or look. If the film has any flaw at all it’s that any man would think twice if such a woman wanted to marry him. On the DVD the film is loving restored into full color and clarity that will astound you. Rear Window is a perfect addition to any DVD library.", "score": 40.56835313223081, "rank": 9}, {"document_id": "doc-::chunk-2", "d_text": "(This is the favorite part of every male the Siren has ever discussed the movie with.) Still, there's nothing particularly daring about pointing out to a movie audience that they want to be entertained, or even titillated, by other people's lives.\nThe real question of Rear Window isn't about the morality of looking, it's about the ethics of intervention. A little less than a decade after the movie's release, a young woman was murdered in Kew Gardens, Queens, stabbed to death within earshot of neighbors who mostly dismissed her screams. While later research led to doubts about whether the neighbors realized Kitty Genovese was fighting for her life, the story passed into legend, the ultimate indictment of people not wanting to get involved, forever to be cited as an example of the unique callousness of New Yorkers.\nRear Window is Kitty Genovese in reverse: rather than \"I didn't want to get involved,\" it's New Yorkers getting very involved indeed. \"I'm not much on rear-window ethics,\" says Lisa, but the movie asks us to become just that. At what point are you looking at things you shouldn't--when you witness one neighbor drunkenly trashing his work, or another's despairing loneliness? 
And when are you obligated to act--when you see that neighbor trying to kill herself? All right, that one's easy. But how about when you suspect a crime--any crime, let alone a murder--but haven't a thing to prove it, and can't get the police interested, either?\nOnly three creatures in the movie pay a real price for observing, and in all cases their undoing comes when they get involved. Jeff breaks his other leg. Thorwald (Raymond Burr), the murderer, sees Jeff across the courtyard and comes after him, only to get caught. Presumably he will pay the ultimate price off-camera. But the one creature whose curiosity ends in death during the running time is the neighbor's dog, who scratches in the flowerbeds where Thorwald has buried some part of his wife. When the dog's body is discovered, his owner flings her anger across the courtyard:\nWhich one of you did it? Which one of you killed my dog? You don't know the meaning of the word 'neighbor.' Neighbors like each other, speak to each other, care if anybody lives or dies. But none of you do.", "score": 39.23917697977806, "rank": 10}, {"document_id": "doc-::chunk-1", "d_text": "For anyone who is an avid filmgoer, it is no great revelation that watching movies is an extension of voyeurism; after all, what are we doing but looking into the lives of others, observing in a socially acceptable way, as opposed to peeping into the windows of neighbors or strangers. We are all, to an extent, curious to know what other people are doing, it’s human nature, however most people can keep these voyeuristic tendencies limited to the socially accepted variety. Alfred Hitchcock was well aware of this trait in humans and he suckers us into compliance right from the beginning with the casting of James Stewart. Who better than Mr. Nice Guy, Mr. 
Straight Lace to lure you into peeping in on your neighbors and making you think there is nothing weird about it.\n“Rear Window” is based on a short story called “It Had to be Murder,” by William Irish aka Cornell Woolrich, originally published in 1942. Hitchcock preserved much of Woolrich’s story though, as expected, some changes were made; for example, the Grace Kelly character, Lisa Fremont, was a new addition. Screenwriter John Michael Hayes, in the first of four films he would write for/with Hitchcock, was hired. Stewart and Kelly were selected early on to be the leads, so even before writing the script Hayes knew who the leads would be and could shape their character traits accordingly.\nGenerally considered one of Hitchcock’s masterpieces, “Rear Window” manages to create nonstop suspense despite the limited mobility of its hero. The film remains a prime example of Hitchcock’s style of unremitting tension, building nerve-racking situation upon situation with little or no let up. There are no bold cinematic moments, just pure suspense built upon the suspicions and actions of the characters involved.\nThere are at least three recurring motifs that run through the film: marriage, sexual tension and voyeurism.\nThroughout the film, Lisa is constantly pushing a reluctant Jeff to get married. Jeff, however, cannot see himself fitting into Lisa’s world of glamour and high fashion, nor can he visualize Lisa, who dresses in her personal life just like the stylish 5th Avenue model she is, fitting into his living-out-of-a-suitcase existence, traveling from one forsaken place to another.
On a certain level, Jeffries dissatisfaction with Lisa’s marriage demands mirror Thorwald’s frustrations with his wife, both men feel cornered and trapped.", "score": 38.25399826277514, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "|Hitch does his cameo in Rear Window.|\nPhotojournalist \"Jeff\" Jefferies (James Stewart) is stuck in an apartment with a broken leg and a cast when he'd rather be out covering action in far-flung places with exotic names. Jeff has a beautiful girlfriend named Lisa (Grace Kelly), but he fears proposing to her because he doesn't think her patrician, elegant manner will go well with the places he has to travel to [although nowhere is it written that the wife must accompany her husband on such assignments]. Bored and needing distraction, Jeff begins observing his neighbors (on a marvelous, detailed set that shows many different kinds of apartments and tenants), such as the voluptuous dancer across the way, a pair of newlyweds who disappear behind the shade after moving in, a frustrated composer of romantic music, and a woman he calls \"Miss Lonelyhearts\" (Judith Evelyn) who talks to imaginary dates while she's having supper and gets drunk in bars. Eventually Jeff focuses on a man named Thorwald (Raymond Burr), whose nagging wife disappears one afternoon and never comes back. Jeff has reasons to believe Thorwald murdered the woman -- and eventually gets both Lisa and his nurse Stella (Thelma Ritter) on his side -- but his smug detective friend Doyle (Wendell Corey) assures him that he checked and the woman really is out of town. But is she? Jeff and the ladies begin an investigation of their own that leads them into some serious danger. Some viewers of this wonderful film don't like being put in Jeff's position all the time, peering through windows, and find the film claustrophobic, but I can't agree. The movie, while imperfect, is very cinematic and well-made. 
It does take a while for the basic mystery plot to begin unfolding, but the two main characters and their dilemma -- two very different people in love but uncertain of how it will work out -- are interesting enough to hold the attention, and Stewart and Kelly give fine performances, along with Ritter, Evelyn and others. [This is another film like The Tingler in which the talented Evelyn gets across a character without really saying a word.] The movie builds in suspense and has a creepy and exciting finale. One thing Rear Window is missing is a great score by, say, Bernard Herrmann, but you can't have everything.", "score": 37.006687501903485, "rank": 12}, {"document_id": "doc-::chunk-7", "d_text": "Jewellery design by Joan Joseff, inspired:\nI can’t get over the beauty of this dress, how superbly it complements the angelic Grace Kelly:\nI can’t get this movie out of my mind – I’m not a student of Hitchcock, I don’t know what he set out to achieve….\nto entertain? I’m sure that that was always one of his goals and, for me, in that respect, he succeeded.\nSuspense? A ton of it! The intensity, events running too fast for the broken-legged Jeff, his loss of control, peaking when Lisa is being attacked by Thorwald and there’s nothing he can do to help except to hope that the police get there before he, Thorwald, kills her – add into the mix Jeff’s guilt – Lisa wouldn’t have been in the situation if Jeff hadn’t dragged her into his voyeuristic obsession. And the suspense, reaching a crescendo when Thorwald can be heard, slowly, heavily, thumping his way up the stairs towards Jeff’s apartment, Jeff clearly terrified…\nAn artistic creation? For sure.\nSocial comment? It’s there!!\n– and notice, after that emotional ‘speech’, one of the partygoers:\n“Come on, let’s go back in, it’s only a dog”,\nclearly, not everyone took on board the poor lady’s admonishment of society!\nOn the subject of voyeurism:\nStella: “In the old days, they’d put your eyes out with a red hot poker…. 
We’ve become a race of Peeping Toms. What people ought to do is get outside their own house and look in for a change…”\nJeff clearly has a voyeuristic obsession – he’s bored, it’s entertainment for him but the irony is that, with so much time on his hands, he’d do well to spend some of it taking stock of his own life (although, admittedly, his obsession does lead to the removal of a murderer from society).\nJeff sees, in the microcosm of life that is reflected in the courtyard, confirmation of his belief that marriage is not an institution into which he should dive:\nJeff’s Editor: It’s about time you got married, before you turn into a lonesome and bitter old man.", "score": 35.513946145016426, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "This is a packet for the Alfred Hitchcock movie Rear Window.\nThe movie can be used as a refresher or review of standard plot and characterization of a story. It has discussion questions, character charts, and essay topics. Worksheets can be modified to add or remove questions.\nRear Window screened at the Venice Film Festival and opened in the United States in August of 1954. Bosley Crowther, the legendary film critic for the New York Times, called it \"tense\" and \"exciting,\" praising Hitchcock and Hayes for their \"precision\" in storytelling.\nBelow is a free excerpt of \"Analytical Essay of Rear Window\" from Anti Essays, your source for free research papers, essays, and term paper examples. Analytical Essay of Rear Window Rear Window is a classic movie, directed by Alfred Hitchcock, about human curiosity, voyeurism and murder.\nThe greatest in this experimental period was, of course, Rear Window.
Here, Hitchcock concocted his most original, most challenging concept yet: to create an entire film from one vantage point, the rear window of a Greenwich Village apartment, and in turn, symbolize the very act of movie-watching.", "score": 34.023510661259465, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "This year for the Grace Kelly Blogathon Day 2 I made the daunting decision to write up on Alfred Hitchcock’s Rear Window (1954) – which is arguably not only The Master’s best film, but also Grace’s best role.\nBut before I get into some of the technicalities, I will say that instead of giving a boring bloated analysis of it- I’d like to focus on some of the standout pieces that I feel make the picture brilliant.\nGrace plays Lisa Carol Fremont in this role- a model and independent woman. Her boyfriend is a photographer LB “Jeff” Jefferies (maybe that’s how they met!) played by Jimmy Stewart. Jeff breaks his leg and is holed up in his apartment with nothing to do but stare out and “spy” on his neighbors. It’s all people watching until one night he suspects his neighbor Lars Thorwald (Raymond Burr) murders his wife. Jeff, Lisa, along with Jeff’s nurse Stella (Thelma Ritter), then investigate the truth.\nOne element that I feel goes overlooked is the scene in which Lisa turns on the lights and introduces herself to the audience. Everyone focuses on her kiss entrance scene- but the scene that follows is just as brilliant.\nLisa goes over and turns on three lights- and with each light says a part of her name. But note the framing- the first light, Lisa- the camera is a close up; the second light, Carol- it’s a medium shot- and finally the third light, Fremont, the camera zooms out to a long shot in which we get to see her gorgeous black and white frock.\nIt’s pieces like this in which I feel Hitchcock’s tiniest details of framing and dialogue go great with each other.
And Grace- she’s the only actress who could make an entrance as simple as this super sophisticated and elegant.\nAnother element in this movie that I feel may be underrated is Hitchcock’s use of sound. Except for the opening credits, all sound in this film is diegetic sound. It’s an interesting choice for Hitch, as usually his soundtrack scores are a key focus of his films. Take a look at the intro to the film (don’t worry no spoilers)\nI’m not a fan of Jazz- but there is something so infectious about this piece of music that sets the scene for the film. You automatically think New York, the 50s, glamour, and the busyness of the city.", "score": 33.33875256543131, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "- Title: Rear Window\n- IMDB: link\n“Well, if possible, both.”\nWith the recent release of Disturbia I thought this would be a good time to introduce a new feature and take a look back at the film which it pays homage to. Alfred Hitchcock’s Rear Window is considered one of the director’s finest films by both critics (it earned a 100% Fresh rating from Rotten Tomatoes and ranked #42 on AFI’s 100 Greatest American Movies of All Time) and fans (at the time of this review it ranks #16 on IMDB’s Top 250 Films of All Time).\nAlfred Hitchcock, ah, there was a man who knew how to tell a tale. The joy in Rear Window is the simplicity. One man looking into the windows of his neighbors discovers a little about them, and a little about himself, and uncovers what he believes is evidence of cold-blooded murder. It’s a film of slow revelations, of constant building tension, of troubled relationships, and of learning the truth about yourself as well as your neighbors. If you enjoy suspense then you could search long and hard trying to find a flick better than this one.\nStuck at home with a broken leg, photojournalist L.B. Jefferies (Jimmy Stewart) begins to examine the world around him, finding numerous worlds in the apartments across the courtyard.
Over the past six weeks these strangers have become his form of entertainment and his only way to experience the outside world.\nThrough his window Jefferies spies on a beautiful dancer (Georgine Darcy), a songwriter (Ross Bagdassarian), a pair of newlyweds (Rand Harper, Havis Davenport), a sculptor (Jesslyn Fax), a couple with a precocious dog (Sara Berner, Frank Cady), a lonely single woman (Judith Evelyn), and a henpecked salesman (Raymond Burr) and his wife (Irene Winston). Each character has their own story and tells us quite a bit about Jefferies and his views of the world and love.\nHitchcock could easily have used the extras merely as window dressing, but even as Jefferies obsesses with the salesman, his eyes always take time to look into each of the separate worlds (in an amazing set built on a studio sound stage).", "score": 33.136597427832164, "rank": 16}, {"document_id": "doc-::chunk-5", "d_text": "It’s easy enough to see why—if Jimmy Stewart calls his buddy Detective Doyle to say, “Hey, I think my neighbor killed his wife,” and then Doyle pops over and arrests Raymond Burr, the movie’s over in 5 minutes and everybody goes home. You need to drag things out a bit. (Rear Window also drags things out by digressing into the strained relationship between Stewart and Grace Kelly, and how their shared experience brings them together and enables him to finally make a commitment—23 Paces copies this idea in its own way, too. The Window substitutes the familiar strife between the boy and his weary parents but doesn’t need much of this because at 73 minutes it has less screen time to occupy).\nI mention this fairly obvious point because there’s one last Rear Window-alike I want to cover and it takes the bold step of defying this narrative logic. The film is Dario Argento’s The Bird with the Crystal Plumage.
Some obvious Hitchcockian bona fides we can note upfront include the fact that Argento was often called “the Italian Hitchcock,” and he cultivated this comparison happily. Also, Bird features an (uncredited) appearance by Reggie Nalder, previously seen in Hitchcock’s The Man Who Knew Too Much, here playing a sinister baddie once again.\nAnd the setup starts off in familiar Rear Window territory—a depressed and alienated creative person in an environment where he is predisposed to be especially attentive. The specific iteration of this in this instance is: frustrated American novelist in Italy fighting off writer’s block. And he winds up witnessing a crime:\nWell right away we can see something’s different. Not only is this stunningly realized and visually aggressive cinema, but unlike the other films we’ve been talking about there’s no doubt whatsoever that a crime has taken place. In fact, far from having to convince the police to take this seriously, our hero (Tony Musante) will be quickly seconded by the cops as an unwilling surrogate investigator. 
He doesn’t pursue this case because he wants to—he does it because he has to.\nAnd so the film actually becomes a runaround, a shaggy dog story as a bunch of stuff happens that doesn’t really move the plot forward, until our hero starts to remember the details more clearly and pieces together a picture of what he really witnessed.\nSo why do I persist in seeing it as a Rear Window descendant?", "score": 33.00522303240062, "rank": 17}, {"document_id": "doc-::chunk-3", "d_text": "So is Jeffries a reprehensible nosey body prying on unknowing neighbors or are the final actions of Jeffries’ voyeurism “almost entirely admirable” as critic Robin Wood writes in his landmark book, “Hitchcock’s Films.” He goes on to explain, “If he hadn’t spied on his neighbours, a murderer would have gone free, a woman would have committed suicide, and the hero would have remained in the spiritual deadlock he had reached at the beginning of the film.” Basically, I believe Wood is saying here, the ends justify the means. Wrong is right if the end results are morally acceptable. I am not sure I agree with that position. Jeffries is bored and he spies on others not for any “admirable” trait but out of a desperate attempt to escape from the tedium of being stuck in a wheelchair. Why not read some books, watch TV?\nThough restricted to a wheelchair for the entire film, James Stewart still manages to give a gripping performance despite his confined position. His only time out of the wheelchair comes when Thorwald invades his apartment and tosses him out the window. We then get a long shot of him hanging on to the window sill before eventually falling to the ground below. Stewart’s character was supposedly based on legendary photojournalist Robert Capa. Grace Kelly is fine though her role is not especially demanding, but Hitchcock’s camera just drools all over her whenever she is on screen, and I can’t say I blame him.
Thelma Ritter is acerbically charming as Stella who berates Jeffries for using binoculars and a long lens as tools for spying in on his neighbors. Future TV defense attorney, Raymond Burr, still in the evil role stage of his career, manages to add a touch of compassion to his pathetic character.\n“Rear Window” opened on August 1st, 1954 at the Rivoli Theater on Broadway in New York City (1). It was a gala benefit premiere for the American-Korean Foundation as noted in the newspaper ads (see above). Coincidentally, one week later another film opened just a few blocks further down on Broadway with the similar theme of voyeurism at the less auspicious Globe Theater, Richard Quine’s “Pushover,” (2) which officially introduced future Hitchcock blonde, Kim Novak (3) to screen audiences.", "score": 32.945996258874835, "rank": 18}, {"document_id": "doc-::chunk-2", "d_text": "The way he maneuvers the camera, going from one end of the room to the other, from one conversation to the other, or not having the camera move at all, observing people from a distance, there are numerous times while watching it feels as if you’re standing right there in the apartment. Like all his films, it’s damn near impossible to take your eyes off the screen for a mere second, and again I reiterate how he crafted a gripping story with such a limited setting with ease. I can count on my hand the number of directors who can pull off such a thing, and still nobody comes close to what Hitchcock achieved when he worked with such a setting.\nRope was adapted from a play which was loosely based on the real life case of Leopold and Loeb, two college students who murdered a 14 year old in 1924 just for the thrill of it. Leopold and Loeb were gay, and it’s been assumed that Brandon and Phillip were lovers in the film as well. It’s been said that Hitchcock intended for the two to be gay, although this could never actually be said or shown on screen in 1948 with the production code and all.
I believe Hitchcock even had to have the script supervised by some studio higher-ups due to the rumors of there being two gay characters. The words “gay” or “homosexuality” were never uttered on the set; instead they were replaced with “it”. There’s been a lot made of the so called homosexual undertones in the film, and there’s differences of opinion when it comes to that aspect. Whatever the nature of Brandon and Phillip’s relationship is, Brandon is obviously the dominant one, as it’s quite clear that Phillip acts as his footstool. I’ve always thought all of it was just like looking for the times when Hitchcock ended a take: these things might go right over your head on account of being too wrapped up in the story.\nRope was the second “limited setting” film Hitchcock did, along with the aforementioned Lifeboat (1944), Dial M For Murder (1954), and of course Rear Window (1954), which also starred James Stewart, the second film he and Hitchcock would collaborate on (Rope was the first time the two worked together). They did two more films together, The Man Who Knew Too Much (1956) and Vertigo (1958). All masterpieces, needless to say.", "score": 32.73897308526776, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "- Rated PG\n- Buy the DVD\nReviewed by Bob Westal\nOf the millions of people who saw last year’s featherweight Shia LaBeouf suspense hit “Disturbia,” only a small number will ever realize that what they were viewing was essentially a remake -- a fact which producer Steven Spielberg may eventually end up hearing about in court. Still, film fans should go straight to the source. Director D.J.
Caruso’s teen trifle was a Big Mac – and Alfred Hitchcock’s “Rear Window” is an adult-style full course steak dinner from the best restaurant you’ve ever been to.\nSimultaneously a devilish entertainment and a big-hearted work of art, my personal all-time favorite film from one of the three or four best directors of all time is as funny as it is suspenseful to the point of being terrifying -- while also managing to be sexy, romantic, and poignant. Both cinematically innovative and perfectly stylish, and with a witty and well-rounded script by John Michael Hayes, this masterpiece of mass entertainment is an object lesson to makers of modern mainstream thrillers who act as if audiences are capable of precisely two emotions per film.\n“Rear Window” stars James Stewart as L.B. “Jeff” Jefferies, an adrenaline junkie of a globetrotting photojournalist laid up in his Manhattan apartment with a broken leg. It’s 1955 and television is still a questionable new gadget that a busy guy like Jefferies wouldn’t bother with (and film studios would rather not advertise). Shia LaBeouf’s beloved video games and Internet haven’t even been thought of, and household air conditioning is about as common as 65-inch plasmas are today. And there is one of New York’s nasty, muggy heat waves to contend with. The heat turns out to be an extremely dangerous saving grace, because almost everyone in Jefferies’ apartment complex is keeping their windows wide open and providing what amounts to primitive reality television for the frustrated Jefferies.
His entertainment choices include: sexy light comedy provided by the lithesome and often under-clad “Miss Torso” (Georgine Darcy); melodramatic pathos from the troubled “Miss Lonelyheart” (Judith Evelyn); and the closest mid-‘50s equivalent to “Real Sex” -- a mostly unseen honeymoon couple.", "score": 32.35161757988191, "rank": 20}, {"document_id": "doc-::chunk-1", "d_text": "Hitchcock expertly builds suspense, not just in terms of the murder sequences, but throughout the entire movie, first as Marion Crane steals her boss' money and tries to make a getaway, and then after she is killed, as Norman Bates (Anthony Perkins) tries to evade capture by the investigating parties, and of course, protect his mother. That the film successfully transfers its sympathies from the \"main character\" to Bates is a testament to Joseph Stefano's script, especially since the sinister undercurrent that runs under each scene only builds more and more intensely to a fever pitch by the film's conclusion.\nHitchcock, of course, seemed to instantly and intuitively understand certain aspects of human psychology, particularly in terms of audience manipulation and thematic reinforcement, and 'Psycho' is seeded with subtle but undeniable details that intensify the emotional and thematic dynamics of the story. For example, in the opening scene between Marion and her lover, Sam Loomis (John Gavin), she is undressed – admittedly provocatively – but in white, indicating her purity, at the very least, of intent. After she steals the money, however, she's next seen in a black brassiere and slip – a cue that she is no longer the wholesome young woman who simply wants to get away and marry the man she loves.
The fact that her murder comes right after she basically decides to return the money is a sort of plot twist in that she is never given a chance to atone for her crime, and is instead punished for it.\nThe stylization of the dialogue may be a byproduct of the decidedly more melodramatic tone of movies at the time in which 'Psycho' was made, but it also works to intensify the suspense in the story. Marion is probably the worst offender, acting standoffish to a police officer and then basically arousing suspicion in any-and-everyone she meets by being impatient and vaguely rude in almost all of her conversations. But when Sam and Marion's sister Lila (Vera Miles) visit the Bates Motel to investigate her disappearance, their motives seem conspicuous and obvious as they attempt clumsily to confront Norman about his knowledge of Marion and the money she stole.\nFinally, Hitchcock's direction in the film is nothing short of brilliant.", "score": 32.273607046525534, "rank": 21}, {"document_id": "doc-::chunk-3", "d_text": "Knowing that the body is in the trunk from the beginning of the film is a simple but powerful way to create moments of high tension; it makes it hard to keep your eyes off of the trunk.\n\"Constructed entirely from uncut ten-minute takes, shot on a beautifully-constructed set, it's certainly a virtuoso piece of technique, but the lack of cutting inevitably slows things down, entailing the camera swooping from one character to another during dialogues.\" (Andrew, G 2006) Because the camera can only shoot for 10 minutes at a time it becomes routine for the screen to have moments of darkness as the lens disappears behind the backs of the actors, which makes the cuts obvious but still keeps the flow of the film.
Therefore the cuts don't make the editing entirely seamless; even other critics have commented: \"...the camera trick of concluding every reel by focusing on some dark jacket or other transition surface becomes predictable...\" (Lenin Imports, 2004) In consideration of Hitchcock's 10 minute film limitation, this seems like the only way this film style could be pulled off without obvious jerky cuts.\nFor this film to work there must have been a lot of planning from the beginning, from the layout of the set as it was being built to the strain on the actors to remember their lines for long periods of time (no time for mistakes with colour film in the 1940's = $.)\n|Hitchcock and Joan Chandler, possibly arguing but Joan Chandler doesn't seem impressed.|\n\"Every movement of the camera and the actors was worked out first in sessions with a big blackboard, like football skull practice. Even the floor was marked and plotted with numbered circles for the 25 to 30 camera moves in each 10 minute reel. Whole walls of the apartment had to slide away to allow the camera to follow the actors through narrow doors, then swing back noiselessly to show a solid room. Even the furniture was \"wild.\" Tables and chairs had to be pulled away by prop men, then set in place again by the time the camera returned to its original position, since the camera was on a special crane, not on tracks, and designed to roll through everything like a juggernaut.\" (Lenin Imports, 2004) Knowing that all this went on behind the scenes it is not so hard to spot when anything went wrong: from time to time the trunk changed position and the furniture in the kitchen moved.", "score": 31.508868357380226, "rank": 22}, {"document_id": "doc-::chunk-1", "d_text": "As Phillip becomes more and more guilt-ridden about what he’s done, the danger of him exposing the crime becomes more real.
The suspense is heightened even more when Rupert begins to have suspicions of his own, and it just keeps on building.\nThe classic Hitchcock device of discussing murder during dinner is put to perfect use here. We learn that Brandon took the Nietzsche-esque philosophical musings of Rupert during his school days to heart, which is why he feels Rupert would approve of what he and Phillip did. According to Rupert, murder is an art that only a few privileged and intellectually superior people, those who are above the average moral standards, should be able to commit. The tension is heightened during this conversation as Brandon proudly boasts that he and Phillip are two of the “privileged” ones. This allows for some of Hitchcock’s trademark black humor to make an appearance, as Stewart’s character opines about how many of the world’s problems murder could solve, such as having trouble getting theatre tickets or getting into a restaurant, and that although he approves of murder, it shouldn’t be a free-for-all; it should be reserved for special occasions, such as “cut a throat week”, or “strangulation day”. There are other traces of dark humor thrown in as well, such as Brandon giving David’s father a stack of books tied together with the same rope that killed his son, and admittedly, the whole idea of serving food on the chest that the body is stored in, while extremely depraved, is somewhat comical. I’m sure Hitchcock got a real kick out of it.\nRope was one of Hitchcock’s most experimental films. Having the events of the film happen in real time, Hitchcock wanted the film to look as if it was shot in one long, continuous take. Of course this wasn’t possible in 1948, as he could only shoot for around 10 minutes per take, so he got around it by using numerous close-ups of various things and holding the shot while he reloaded the camera.
If you look for them, it’s not hard to figure out where the cuts are, but you’ll be so engrossed in the story you won’t think to look for such things; it’s just another example of how innovative and ahead of everybody else Hitchcock was when it came to filming techniques.", "score": 30.859761948837395, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "Have you ever imagined what the courtyard of the apartments from Alfred Hitchcock's film \"Rear Window\" looked like in full? Videographer and special effects master Jeff Desom has done just that. Using After Effects, Desom pieced together the scenes from the famous Greenwich Village courtyard and all of the tenants' happenings into a cohesive three minute video. The results of his work are phenomenal and kind of mind-blowing, providing never-before-seen perspective on a popular film.\nWatch below for Jeff Desom's version of \"Rear Window\":", "score": 30.51896628961943, "rank": 24}, {"document_id": "doc-::chunk-12", "d_text": "This is no doubt the source of Tom Shone’s “lopsided angles” complaint above. Why would Hitchcock constantly show this? There must be a reason, I kept telling myself. Now, after countless viewings, allow me to quote old Barton Keyes in Double Indemnity, “I think papa has it all figured out.”\nI believe the ceiling shots are Hitch’s way of saying we don’t know what lies above, a crucial theme if you consider what happens beyond the roof of the bell tower.\nSuddenly the ceiling shot at the McKittrick Hotel makes sense. It explains any plot confusion when Madeleine “mysteriously” disappears, because there’s no telling what’s going on above.
For all we know, Madeleine could have escaped out a back window.\nAnother odd ceiling shot appears during Scottie’s trial — the perfect time for Hitch to call attention to the idea that something else is going on in the case.\nCircling Camera and Cinematic Bliss\nAbove all this, to me, the greatest moment of the entire film unfolds in the magical neon green light of Judy’s hotel room. In my personal favorite moment in all of cinema, Scottie sees Judy emerge from the bathroom, bathed in neon green light in all her Madeleine glory. In separation (cutting back and forth between separate images), we see a dream-like Judy slowly approach an awe-struck Scottie. Finally, the couple break the separation by embracing in a “two shot” that becomes a 360-degree dolly in a single take. Such a circling camera is another Hitchcock auteur icon, used to signal a turning point in a relationship (i.e. Cary Grant and Ingrid Bergman at the end of Notorious).\nHere in Vertigo, the turning point is Scottie’s realization of Judy’s true identity. While it takes the necklace in the following scene to seal the deal, the look on Scottie’s face here is unmistakable, even to first-time viewers. What first-timers might not see is the transforming background. As the camera circles the characters, Judy’s apartment literally transforms into the horse stable at the Spanish mission where they kissed just before Madeleine’s death.
The moment is accompanied by the exact same music cue that played during the stable kiss.\nI remember seeing this scene — the so-called Scene d’Amour — projected on the big screen for the first time.", "score": 30.208802072847785, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "About this Piece\nTiomkin and Hitchcock worked together a final time in the 1954 thriller, Dial M for Murder, which, like Rear Window, was an experiment in confined space: set mostly in the London apartment of an unhappily married couple (Grace Kelly and Ray Milland), it was also Hitchcock’s only film shot in the then-trendy format of 3-D. Rather than drown the movie in gimmicky effects, the director utilized the format more subtly, emphasizing the depth and dimensions of its location: a seemingly cozy flat which, if Milland’s scheming husband has his way, will prove an ideal site for the murder of Kelly.\nCharacteristically, Tiomkin often infuses this intimate chamber piece with the clamorous force of grand opera (in fact, his main suspense theme contains more than an echo of Mussorgsky’s Boris Godunov). But its penchant for melodrama recedes whenever Tiomkin scores the illicit romance between Margot Wendice (Kelly) and her lover, Mark Halliday (Robert Cummings).\nSteven C. Smith is the author of A Heart at Fire’s Center: The Life and Music of Bernard Herrmann (University of California Press, 1991), and a recipient of the Deems Taylor Award for writing on music.
He is currently a writer/producer on the A&E television series Biography.", "score": 30.136394402337476, "rank": 26}, {"document_id": "doc-::chunk-6", "d_text": "Final scene, she puts down the book and picks up a glamour magazine, everything back in place.\nGreat story but, of course, what makes it a special movie, one of the greatest ever produced, is the Hitchcock treatment – as we know, he didn’t win a ‘Best Director’ Oscar for this movie, a Nomination, yes, but not a win, no Best Director Oscars for any of his movies and judging by his Acceptance ‘Speech’ when picking up his Honorary Oscar – the Irving G. Thalberg Memorial Award – in 1968, he was, understandably, extremely miffed!! ‘Rear Window’ was Nominated in 4 categories, ‘Best Director’, ‘Best Adapted Screenplay’ (John Michael Hayes), ‘Best Cinematography – Colour’ (Robert Burks) and ‘Best Sound Mixing’ (Loren L. Ryder) but not a single Oscar win: not just a travesty but that brings the Academy into disrepute and makes a mockery of the institution:\nThe construction of the (indoor) set, a ‘mock-up’ of a Greenwich Village courtyard, was a monumental task, brilliantly achieved by set designers Hal Pereira and Joseph MacMillan Johnson.\nThe lighting, I’ve never seen anything quite like it before, the sky throughout the movie, the light on the characters – no lighting effects are needed to highlight/emphasize Grace Kelly’s beauty but Robert Burks’ (Cinematography/Director of Photography) talent with the camera and lighting leaves us aghast.\nThe music score, by Franz Waxman, is inextricably part of the movie although Waxman’s input, crucial as it is, is limited, ‘just’ accompanying the opening and closing credits and he composed the tune ‘Lisa’ which, in the movie, is written by the songwriter/pianist (Ross Bagdasarian).\n‘Lisa’ is one of the most beautiful pieces of ‘music from the movies’ that I have ever heard:\nThe rest of the soundtrack is sublime, Bing Crosby’s ‘To See You Is To Love You’, ‘Mona Lisa’,
“That’s Amoré” and music from Leonard Bernstein and Richard Rodgers amongst others.\n….and the clothes…Edith Head, 35 Oscar Nominations, 8 wins (but not one for this movie, not even a Nomination??!!), she surpassed herself! The creations which adorn Grace Kelly are exquisite – Grace is breathtaking!", "score": 29.59341587931145, "rank": 27}, {"document_id": "doc-::chunk-3", "d_text": "These MacGuffins keep the audience spinning in a certain direction while the real action was getting ready to come in from the side. A true MacGuffin will get you where you need to go but never overshadow what is ultimately there (Alfred Hitchcock Film Techniques 2). Although Hitchcock was greatly identified with his suspense techniques, his movies would not be complete without their creative cinematography. He was excellent at knowing what to film, when to cut to a different shot, and how to abridge a scene after it was completed. Because Hitchcock began directing silent films, he liked to work purely in the visual and not rely upon words at all (Alfred Hitchcock 2). Camera angles make a great contribution to the quality of Hitchcock's films. He incorporates his theory of proximity to plan out each scene (Bays 2). Essentially, this means a certain scene would call for a certain camera shot in order to change its emotion. The closer the camera is to the character's eyes, the more emotion the audience could see. If Hitchcock wanted to increase suspense, he would use a high angle shot above the character's head. In a way, the camera also acts as a human eye because it gazes around objects as if it truly contained curiosity.
His idea of personifying the camera remains unvarying because when films did not have sound, visuals were the only form of communicating with the audience. Hitchcock always welcomed innovation in film technology, but in the 1950s he reveled in it (Silet 2). As a result of these three factors, Alfred Hitchcock's movies will forever be considered some of the most revolutionary works of art known to man. It is also not an exaggeration to claim that his films elevated the medium as a form of art in the minds of the public in ways that exceeded the work of more self-consciously artistic directors. And that is not a bad accomplishment for a director who set out merely to entertain (Silet 3).", "score": 29.39973707531947, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "With the release of High-Rise, Ben Wheatley’s adaptation of JG Ballard’s novel, the manner in which architecture is used as a cinematic device is in the spotlight again.\nThrough Blade Runner, Rear Window and Inception (spoilers below for all three), via Harry Potter, Star Wars, Alien and The Shining, Miniclick’s resident film writer Gabriel Gane explores some of the most notable occasions in which architecture has played a starring role.\nArchitecture in Film – A Foundation for Story\nIn the introduction to Nikolaus Pevsner’s An Outline Of European Architecture he says this: “We can avoid intercourse with what people call the Fine Arts, but we cannot escape buildings and the subtle but penetrating effects of their character, noble or mean, restrained or ostentatious, genuine or meretricious.” Pevsner makes the point that architecture transcends other forms of art through its capacity to enclose and impinge on space, having an emotional as well as aesthetic impact.\nToday the pace of life is such that many people frequently fail to stand still and consider their surroundings: architecture, it seems, has become for most nothing out of the ordinary precisely because it defines
our everyday life. We move among it, inhabit it; it’s our beehive, our bird’s nest and so it becomes largely invisible to many of us. And yet, in the cinematic world the opposite seems true. We marvel, we run, we creep with our heroes and villains, through architectural landscapes.\nThink of the wondrous scenes that take place in Hogwarts (Harry Potter 1-8) or the playful ones in the eponymous setting of The Grand Budapest Hotel. Now think of how many times you felt a creeping sense of fear as Danny pedaled his trike through the empty corridors of the Overlook Hotel in The Shining, or Captain Dallas squeezing himself along the claustrophobic air ducts of the Nostromo in Alien. Through these few examples it is clear that the surroundings of our characters, as well as their predicaments, help shape our emotional response.\nWes Anderson – Grand Budapest Hotel\nArchitecture also drives plot. In Christopher Nolan’s Inception architecture literally and figuratively divides the characters; the action is split thematically into different maze-like structures, or levels, that are built within the plot by an architect: the characters navigate between them, moving forward towards a common goal, some getting lost along the way.", "score": 29.302570874567525, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Title: Psycho (1960)\nDirected by: Alfred Hitchcock\nGenre: Thriller, Mystery\nPsycho is a 1960 American thriller/suspense film about a woman who is hiding at a motel after embezzling money from her employer. It is there she meets a man who is under the domination of his mother. The film is based on the novel by Robert Bloch of the same name.\nThe noir thriller starred Anthony Perkins, playing the character of Norman Bates. The film exhibits many generic aspects of the thriller genre, such as chiaroscuro lighting, which is used in films to intensify the atmosphere. 
In this film the lighting is used to indicate the dualities of the film and Norman Bates’ character. Although the story begins by following Marion Crane, Hitchcock has used hidden signifiers that link back to the madness of Norman Bates.\nThe Iconic Shower Scene\nIn the iconic shower scene of Psycho the first signifier of the thriller genre is the non-diegetic music, which creates a melancholic atmosphere amongst the audience and helps them to build sympathy towards Marion Crane as she is shown alone sitting in her motel room writing figures and calculating; she then tears them up, and later puts them in the toilet. This implies that Marion is ‘flushing’ her hopes of survival away (down the drain). She is sat at a desk in an unglamorous room, which is dark and shadowed, which is how Hitchcock portrays her villainous character. This can be inter-textually referenced to ‘Essex Boys’ when we are introduced to Jason through the murky, dirty car windscreen and we see half of his body immersed in the shadows, which implies that Jason has ominous qualities.\nIn this famous shower scene, Marion Crane is in a very vulnerable position; the audience are aware of what is happening whereas she is unaware, and this enables the audience to connect and sympathise with her. When Marion shuts the door, she creates a generic claustrophobic environment which creates suspense – the audience can tell that within the confined space there is no way Marion can escape; she is trapped. This can be inter-textually referenced to Witness, when Samuel is stuck in the small enclosed space in the toilet cubicle. The confined space shrinks further when she removes her clothing (making her vulnerable) and draws the shower curtain.", "score": 29.04715169718211, "rank": 30}, {"document_id": "doc-::chunk-1", "d_text": "This attribution of symbolic power to inanimate objects is another hallmark of Hitchcock: a bread knife (Blackmail), a key (Notorious). 
He also places great focus on the creation of set pieces where he is able to exercise his talent for detail and suspense.\nHitchcock's vision of the world is reflected in the themes that predominate in his films. The specific psychology that is presented in the films, such as the fascination with wrongful accusation and imprisonment, is a significant part of the Hitchcock signature. One of the basic themes is that of mistaken identity: the wrong man accused, who must find the real perpetrator in order to prove his innocence (The Lodger, The Thirty-Nine Steps, North By Northwest, etc.). Hitchcock also found visual expression for his themes in recurrent motifs that express his vision of the world: staircases (Strangers On A Train, Vertigo, Psycho), sinister houses (Psycho), chasms (Vertigo, North by Northwest) and national landmarks (most obviously in North by Northwest, which includes the United Nations Building and Mount Rushmore).\nNotorious includes prime examples of trademark Hitchcock themes: a woman complicitous in her forced transformation to a different person, later brought to its fruition in Vertigo; the figure of the mother, both adoring and deadly, who appears in various forms in Strangers on a Train, Psycho (1960), and Marnie; and the MacGuffin, the narrative device Hitchcock once defined as the thing that motivates the actions of the characters but which is of minor interest to audiences. The MacGuffin in the case of Notorious is the uranium ore hidden in wine bottles in Sebastian's basement.\nDonald Spoto notes that the Hitchcock touch was evident even in his earliest films: \"in the structure and content of the screenplay . . 
.in the development of plot and theme and images; in the selection of cast and setting; in the style of lighting and placement and movement of the camera; in the moods created, sustained, and shifted; in the subtle manipulation of an audience's fears and desires; in the economy and wit of the narrative; in the pacing; and in the rhythms of the film's final cutting\". Hitchcock was, therefore, able to transcend the artistic constraints of the Studio System, in which most films are more recognisable as the work of a particular studio than of an individual director, and make highly personalised films that bear the stamp of his artistic personality.", "score": 28.972993095348976, "rank": 31}, {"document_id": "doc-::chunk-1", "d_text": "One crucial scene in the film is where Leigh tries to meet her stalker, as there is that sense of fear prominent in the film, and Carpenter uses a nice close-up to play into the suspense.\nThe usage of medium and wide shots helps add to that suspense, as there are moments where the camera shows the perspective of the stalker as he watches and listens to Leigh’s conversations. Carpenter also uses moments of misdirection to play up the suspense, where the audience thinks something will happen but then it is nothing or a misunderstanding, which is key to what makes the film work. This is especially true in the climax, as it shows who this person stalking Leigh is and what he has been doing, which is quite chilling but also riveting as it has Leigh finally confronting her stalker. Overall, Carpenter creates a thrilling yet engrossing film about a woman being stalked from the building across from her apartment.\nCinematographer Robert B. Hauser does excellent work with the film’s cinematography, as its usage of shadows and light helps play into some of the suspense in scenes where Leigh’s apartment would go through power surges as well as some of the scenes set at night. 
Editor Richard Korbitz does nice work with the editing as it’s largely straightforward with some rhythmic cuts and fade-outs for commercial break transitions. Art director Philip Barber and set decorator Donald J. Sullivan do fantastic work with the look of Leigh’s penthouse apartment as well as the TV studio she works at. Sound mixer Don Rush does terrific work with the sound as it helps play into the suspense with its usage of sparse noise. The film’s music by Harry Sukman is superb for its orchestral-based score as it has some dramatic bombast as well as some eerie pieces to play into its suspense.\nThe film’s brilliant cast features some notable small roles from Grainger Hines as a co-worker at the TV station, Len Lesser as a mysterious man in the apartment, and Charles Cyphers in a terrific role as police investigator Gary Hunt, who is a friend of Paul and tries to figure things out. Adrienne Barbeau is fantastic as Sophie, a co-worker who helps Leigh in trying to find the stalker as well as figure out all of the scenarios. David Birney is excellent as Paul, Leigh’s new boyfriend, a philosophy professor who tries to figure out all of the angles but also goes to those he knew who could help Leigh.", "score": 28.93847700685483, "rank": 32}, {"document_id": "doc-::chunk-3", "d_text": "Stewart’s humor and humanity ensure that we always stay on his side and know that, debilitated as he is, we’re not exactly seeing him at his best.\nAmong the supporting cast, Thelma Ritter gets a lot of the best lines and delivers a great classic-era supporting performance. Her caustic but compassionate middle-aged New York dame might be a mid-century stereotype, but such women did – and I’m sure still do – walk the streets of New York; Ritter evoked them perfectly. Even Wendell Corey, playing a cop (almost never a very good role in a Hitchcock picture), manages to get a few wry laughs even so. 
Though he has few fully audible lines, Perry Mason/Ironside-to-be Raymond Burr is a perfect combination of fearsome menace and menacing fear as the almost pitiable Lars Thorwald. In an entirely pantomime role, Judith Evelyn is heart-melting as the deeply depressed “Miss Lonelyheart.”\n(Trivia buffs take note: A less crucial supporting player I can’t resist mentioning is Ross Bagdasarian, who plays a sad-sack composer trying to complete a new song. Bagdasarian was a musician and composer in real life who wound up making a mint just a few years later when he realized the novelty hit-making potential of sped-up tape, rechristened himself David Seville, and started working with some upstart chipmunks.)\nFor all the admiration Hitchcock inspires, it’s fair to say that he can be a rather icy director even in many of his greatest films. Not true here. Along with his personal favorite, 1943’s great “Shadow of a Doubt,” “Rear Window” is, to me, a rich and warmhearted film, even as it goes to some (strictly off-screen) gruesome places almost unheard of in mid-‘50s Hollywood. It’s a combination that works better than you could ever think. As late as the ‘80s, when onscreen mayhem was about as routine as it is now, certain scenes caused actual screams from theater audiences.\nThough many critics see “Rear Window” as a deeply cynical work about the innate corruption of the human soul, to me and many others it is a witty and compassionate work that explores and partially forgives human frailty.", "score": 28.137860703028494, "rank": 33}, {"document_id": "doc-::chunk-2", "d_text": "He is almost a caricature of eccentric, no-nonsense law enforcement (and a rare example of a sympathetically portrayed Hitchcock policeman). 
The audience’s sympathies readily shift to him, enabling us to jump the fence and join him in pursuing and punishing the murderer with whom we were identifying, if not wholeheartedly rooting for, a few minutes earlier.\nThe fact that the film was originally shot in 3D leads to what is perhaps its most famous shot (Kelly’s overhead grab into the camera as she reaches for a pair of scissors while Swan holds her down on the desk), but elsewhere it is responsible for some rather awkward compositions (the long scene in which Milland tricks Swan into committing the crime is almost entirely dominated by a series of boxy table lamps). Nevertheless, Hitchcock moves his camera magnificently around the set. The screen view is directed into otherwise innocuous places by swoops, zooms and a series of sharply angled overhead shots that make certain scenes, including parts of the ‘murder’ itself, look as though they were captured on surveillance cameras.\nDial M for Murder is a compact study in the sinister minutiae of domestic life, so it is only appropriate that its climax should manage to milk a high degree of tension from the question of whether a certain character will or will not walk through a door. In all, it is a rare and accomplished example of the work of Hitchcock the Minimalist.\n– Ben Stephens", "score": 28.131324107418916, "rank": 34}, {"document_id": "doc-::chunk-1", "d_text": "Although he's plenty nasty, you get the impression that if he wasn't approached with this murder business, he might happily go on his way with other strictly legal stuff. However, he is presented with the chance for a big payday, and the fact that he has to kill two people to collect is not a big obstacle to him.\nSome other notable Walsh movies - Blade Runner, Bound for Glory, Reds, The Milagro Beanfield War, and Silkwood.\nMonday, November 13, 2006\nSunday, November 12, 2006\nThe Set-Up - The injured photographer L. B. 
Jeffries (James Stewart) has decided that his neighbour Thorwald may have killed his wife. Jeffries' girlfriend Lisa (Grace Kelly) decides to investigate while he's away. She is trying to find the woman's wedding ring.\nShe climbs into his apartment....\nWhile Jeffries watches with his telephoto lens.\nThe ring isn't in her purse....Gosh, she's purty!\nA critical error - Jeffries is distracted by another neighbour who appears to be taking an overdose of sleeping pills.\nHe doesn't notice Thorwald coming home.\nHe calls the police.\nThorwald catches Lisa in his apartment, and starts to assault her.\nHe turns out the lights, but the police arrive, just in time.\nAs the cops question Thorwald, Lisa indicates that she has the ring.\nThorwald sees her pointing it out....\nAnd looks across the way to see who is watching.\nI apologize up front for choosing what is arguably the most obvious Hitchcock \"Moment of Distinction\", but I can't help it - I love this scene. I have probably seen Rear Window 8-10 times, and I get goose bumps every single time I watch this sequence.\nA couple of things. Stewart is just brilliant in this scene, his panic rising as he watches the woman he loves being threatened right in front of his eyes, and he can't help her. The other thing has had volumes written about it - how Hitchcock turns the tables on Jeffries (and us) by letting him be an observer, and then suddenly thrusting him into an active role in the drama across the way. When Thorwald looks over with that icy stare, the message is clear. You're involved now, buddy.\nBlackmail is Hitch's first sound film, but curiously, it doesn't start out that way.", "score": 28.12631495277175, "rank": 35}, {"document_id": "doc-::chunk-1", "d_text": "Americans in general don't like people standing too close, but an average New Yorker needs Yankee Stadium around his body to feel truly comfortable. 
So observe, too, the way New Yorkers react to someone who, whether deliberately or out of ignorance, doesn't get it and insists on full-frontal contact with another passenger--the dirty looks, the impatient sighs, the way the New Yorker twists away from the clueless interloper.\nThe murder drives the plot, but it is far from the only thing that grabs you in that vast array of windows so impossibly arranged across Jeff's courtyard. You also focus on the composer, who is having a hard time with his art despite living in the most desirable apartment in the building (that's a very New York observation, sorry). There's Miss Lonelyhearts, kindhearted but hating every minute of being single. There's Miss Torso, doing her calisthenics and entertaining an all-male party (why all men?). There's the (probably) childless couple, lowering the wife's beloved dog into the garden every morning for a romp. There's the honeymoon couple...well, they're the most boring for sure.\nThe classic interpretation of Rear Window is as a metaphor for both moviemaking and movie-watching, Jeff (Stewart) standing in for both Alfred Hitchcock himself and those people out there in the dark. Rear Window is one of cinema's greatest uses of the subjective camera. Our point of view across the courtyard is always Jeff's, keeping our identification with him. So his dilemmas become ours, which is why the Siren thinks the Jeff-as-film-director-and-audience viewpoint is true, sure--but it isn't nearly probing enough. Where's the quandary in that? In both cases, people are just doing their jobs, the director making the movie and the audience watching it. Both acts are morally neutral. You can throw around the word \"voyeur,\" but it's the reaction to what you see that counts. 
In the context of 1954, with Joe Breen and his minions still on smut patrol, it's amusing to note how Hitchcock introduces a non-stop series of sexual innuendos, from Jeff's (ahem) highly extendable telephoto lens to Lisa (Grace Kelly) holding up a filmy negligee and announcing \"Preview of coming attractions.\"", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-1", "d_text": "The sound is comparable in both instances and actively signifies the irony that the starting point of the movie is also its end. However, it is not only the use of music that has a significant impact on the tone of the movie but also the silence. Hitchcock uses silence incredibly well and only supplies dialogue as and when he must.\nFor example, even the confession scene at the denouement of the film is not as full of dialogue as it might be in other motion pictures. Words are used sparingly and to make a point. Sound effects and cinematographic techniques are not used as sparingly as the dialogue, in an attempt to express the desired tension. Hitchcock undoubtedly used rear projection in Vertigo: “Foreground and background tend to look starkly separate, partly because of the absence of cast shadows from foreground to background, partly because all background planes tend to seem equally diffuse”\n(Bordwell & Thompson, 1996, p. 244). This is one example among numerous instances of the technique in the film, such as the moment when Novak and Stewart kiss against the backdrop of the ocean. The actors were shot and then superimposed on a natural backdrop, thereby forgoing the use of shadow. As a result, there is something innately unnatural about it, which serves the storyline. 
The film stock is also colour, which helps to reduce the use of light and dark, thus increasing this particular effect.\nIn conclusion, there is little question that Hitchcock rejuvenated the horror genre with Vertigo and presented a master class in using cinematic techniques for impact. Numerous approaches used in the movie contribute to the cyclical and somewhat claustrophobic atmosphere. Again, this serves to intensify the tension. The sparing use of dialogue and excellent use of sound effects, when paired with the film stock and clever camera angles, certainly enhance the narrative and ultimately allowed Hitchcock to produce one of the best motion picture examples of horror in history.\nBibliography: Bordwell, David & Thompson, Kristin, 1996. Film Art: An Introduction. 5th Edition. New York: McGraw-Hill. Orr, John, 2006. Hitchcock and Twentieth Century Cinema. London: Wallflower Press.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Mulvey describes Hitchcock in relation to “the investigative side of voyeurism” (14); Jason, in lecture, explained this as the way in which Hitchcock complicates the straightforward and unquestioned voyeurism that is featured in other Golden Era Hollywood films. This struck a chord for me, as watching Vertigo there were several instances where I found myself feeling uncomfortable and guilty, even, at the very apparent voyeurism in the film. This functions on the level of the plot: Scottie, while at first tasked with observing Madeleine, goes on to become obsessed with her; observing her from afar becomes his guilty pleasure, and the spectator who is given Scottie’s point of view takes a part in this. The same effect is produced through the film language in Vertigo. 
Rather than simply watching Madeleine, we watch her for so long and in such an obtrusive manner that we cannot help but become self-aware of the fact that we are watching her. A concrete example is the short clip shown in lecture when Scottie first sees Madeleine in the restaurant. As she is leaving, she passes Scottie sitting by the bar and the spectator is shown a close-up of her side profile, which is held for quite a long time. In my experience, we rarely see a person (in a film or real life) from so close-up and for such a long time unless we are engaged in dialogue with them; the shot therefore has the effect that the spectator is expecting or even yearning for Madeleine to turn and make eye contact with the camera, while at the same time dreading this outcome because we understand that she is not meant to see Scottie. What results is that the voyeuristic pleasure becomes entwined with a sense of guilt.\nStraightforward voyeurism, what Mulvey would describe as fetishistic scopophilia, is grounded in the fact that the person being watched is unaware of the fact. Within films, this functions in such a way that a spectator, by watching the film, is given ‘permission’ in their role as a voyeur. They are able to take part in the pleasure of looking in innocence and without the fear of repercussion. Yet Hitchcock does not allow his spectator to remove themselves from the responsibility of their voyeurism, in a manner that Mulvey attributes to directors like Sternberg. 
This works on several levels.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "Having watched his most popular and successful film, 1960’s “Psycho”, I decided to explore the reason behind its success.\nThe above YouTube video looks at how Hitchcock manipulates the audience into believing certain things, as it was one of the first films where he experimented with creating a psychological subgenre of the thriller/horror genre.\nI found the part where the voiceover describes how secret hints are given to the audience about Norman being very unusual extremely interesting. This is because when the film switches to Norman’s point of view, after Janet Leigh’s brutal murder, the audience are left in the position Hitchcock wants us to be in, to reveal the film’s shock twist. This is evident through the cinematography’s use of abstract camera angles, where Hitchcock makes Norman Bates stand out in the frame, changing the audience’s perception of his character; he appears extraordinary/different in comparison to Marion Crane. Throughout watching, the audience experience a variety of emotions, such as tension when Marion offers advice to Norman about his mother going into a care home, where he acts rather strangely, to a transition of Marion having power over him; then, when she leaves the room, Norman stands up, reinforcing his superiority over her.\nAlso, Alfred Hitchcock manipulates the audience through film and narrative techniques; for instance, the main focus of manipulation is on the fact that the audience sympathise with Norman Bates because he tells her about his controlling mother, and the audience believe this because we first see her silhouette up in the window of the Bates house, which is then reinforced by hearing Norman argue with his mother. It’s not until the very end of the film that the audience learn that Norman dresses up as his mother, having a multiple personality disorder. 
But throughout watching, the audience don’t grow to hate him; instead the audience continue to feel sorry for how he is trapped in his life and can’t escape.\nThis results in the audience feeling anxious when watching Norman clean up and take Marion’s body to the trunk of a car, sinking it in a nearby swamp. But when the car stops sinking, the audience feel as though they are in Norman’s position, emotionally connected to someone who is meant to be the villain of the film; it even seems like we are on his side, helping him. The soundtrack of high-pitched violins intensifies the scene, creating suspense and tension.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-7", "d_text": "Scared when imagining the discovery of her crime in a narrated conversation between her boss and her co-worker (played by the excellent Patricia Hitchcock) and confident when we hear Mr. Cassidy cursing her. A creepy smirk curves her lips. Marion still wants to go through with it.\nThe viewer notices that the bright sky turns darker and darker, and eventually it starts to rain. Marion pulls over to sleep it off at some motel, the Bates motel. The first half of the movie takes place in two days, a continuous moment-to-moment spectrum of events. The pace and movement through time change afterwards and are well defined through editing.\nMarion pulls up in the rain to the Bates motel and sees the moving silhouette of an old woman in the upstairs window of the mansion. Hitchcock often features familiar landmarks in his films. In “Psycho”, he creates one with the Bates mansion. The gothic mansion stands on top of a haunting hill like “it’s hiding from the world”. The Bates mansion is now one of the most famous film sets around the world; the presence of the mansion is so powerful, it’s like a main character. Anyway, Marion honks the horn of her new car. Seconds later, Norman appears on the stairs in front of the haunting mansion up the hill. 
He then runs towards the motel to serve his only customer of the night. What follows are some of the most humorous Hitchcock moments of all time. (*Humorous only on repeated viewings of the film)\nNorman Bates – cinema’s most famous villain. Anthony Perkins pulls it off right from the start. They check in and we are first introduced to Norman. Perkins plays the role in an oddly chilling, loose and naturalistic manner. Marion signs as ‘Marie Samuels’. Again, the alias signature is pathetic as it’s proof of her not doing a good job of hiding her real identity. Marie is too close to her real name; Samuels is her boyfriend’s name. Norman asks her to write her home address as well. She looks at the newspaper that reads ‘Los Angeles Times’ and chooses that city rather than Arizona. “Los Angeles,” she says. Meanwhile Norman chooses something else, a key to the room she’ll be spending the night in. Unlike the three suspicious men prior to that scene, Norman doesn’t suspect a thing. Why?", "score": 26.372058636581002, "rank": 40}, {"document_id": "doc-::chunk-1", "d_text": "Still, all of these diversions pale beside the apparent true crime tale concerning the unpleasant salesman (Raymond Burr) with an angry invalid wife, who, heralded only by the sound of a breaking glass and a stifled scream, suddenly disappears.\nEven as a temporary invalid himself, Jeff should have other distractions – like a girlfriend, 30 times as lovely as Miss Torso and introduced in one of the most famously provocative kissing scenes in movie history, Lisa Carol Fremont (Grace Kelly). Lisa is so sweet-natured, so unspeakably beautiful, and extraordinarily bright and sophisticated that poor, middle-aged Jefferies is certain that things can’t possibly work out. Of course, he’s behaving like a jerk, and so he is constantly reminded by Stella (Thelma Ritter), the lovable, tart-tongued nurse provided by the insurance company to take care of him at home. (At times, 1955 seems like a lost utopia.) 
Still, as the salesman starts making a number of unexplained trips with his metal suitcase, and a neighborhood dog starts digging at something underneath a flower bed, Jeff slowly persuades both the habitually critical Stella and the correctly aggrieved Lisa that something really is going on, though his policeman former war buddy (Wendell Corey) tries to discourage all this dangerous amateur sleuthing. Maybe the salesman’s wife really did just take a trip to the country – though it’s probably Shakespeare’s “undiscovered country,” from which there is no return.\nThe primary theme here is, of course, voyeurism, but that’s so self-evident and so widely commented upon by everyone who has ever discussed “Rear Window” that it hardly seems worth mentioning at this point when, more than ever, voyeurism is a multibillion dollar industry. Besides, John Michael Hayes’s sparkling dialogue says most of what needs to be said far better than I could. “We've become a race of Peeping Toms. What people ought to do is get outside their own house and look in for a change,” the cranky but smart Stella opines, offering advice that should be presented before every episode of “Big Brother.” In any case, the real statement of any Hitchcock film is never in the words. They are about feelings and ideas that cannot be expressed verbally – though guys like me will never stop trying.", "score": 26.357536772203648, "rank": 41}, {"document_id": "doc-::chunk-6", "d_text": "It’s a feast to the eye. The officer walks up to the car; we see Marion sleeping in it. A few knocks on the window later, she wakes up in a hurry. We see the same look in her eyes as when she saw her employer crossing the street. The next shot serves as both an eye-line matching shot and a close-up of the expressionless police officer. By now, like Marion, the viewer suspects she’s been caught. It turns out, he’s just checking if something’s wrong. Marion acts “like something is wrong” and so he asks for her driver’s license. 
As soon as he leaves and Marion drives off, the horrifying orchestra starts again.\nWe get a few rear-view mirror shots as she tries to lose the officer till he takes a turn and leaves her alone. Shortly after, Marion trades her car for another one. Both the viewer and Marion see that the police officer is back. He studies her from across the street like a suspicious stalker. Hitchcock’s fear of cops tightens the tension. More importantly, we are introduced to the third suspicious character, the car salesman, the first being the boss, and the second, the police officer. Marion is doing a terrible job of getting away with crime. After all, she’s no professional, just an everyday woman.\nWe rarely get to see scenes like that in thrillers; scenes that serve little purpose to the story but are there to put us on the edge of our seats, driving the plot forwards. These short scenes are a rarity and a treasure. Hitchcock is simply playing piano with the audience’s nerves. By now the viewer is in the midst of a getaway thriller. Keep in mind that all these tiny scenes are a distraction from the bigger picture. After the high-pressure car salesman scenes, we move forward to more medium shots of the steering wheel, Marion, and the fading city in the background. This time, she bites on her lower lip as we hear the narration, or an imagined conversation between the suspicious police officer and the doubtful salesman. Hitchcock knew that people generally do most of their thinking when they’re alone. Like before we sleep or when we drive alone on an empty highway. These scenes are very psychological in that for the briefest of moments the viewer becomes Marion.\nGradually, her facial expression changes from scared to confident.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-3", "d_text": "But I couldn't imagine any of you bein' so low that you'd kill a little helpless, friendly dog - the only thing in this whole neighborhood who liked anybody. 
Did ya kill him because he liked ya? Just because he liked ya?\nAs if to emphasize their complicity, Hitchcock gives us a shot of each neighbor. But is the dog owner's accusation fair? The Siren looks at Miss Lonelyhearts, tenderly placing the dog's body in his basket for the last time, and thinks not. The stricken faces around the courtyard don't suggest casual indifference. Maybe the characters don't trill \"Good morning!\" (though we do see some greeting going on) and inquire after everyone's health, but let's face it, that can be either heart-warming or annoying as hell. Many people move to big cities to get away from small-town nosiness. And we've been spending our time with Jeff and Lisa, who definitely care whether Mrs. Thorwald lived or died.\nAfter the speech everyone moves away from the window, except Lisa and Jeff (and us, via Hitchcock's camera). But the most guilty person in the movie, the one who killed his wife, strangled a dog and doesn't go to look, is Thorwald. The shot of his apartment, dark save for the glow of his cigarette, is the Siren's favorite in Rear Window.\nCould this movie have been about any city other than New York? Possibly, but it wouldn't have hit the same truths. Because New York is a city where neighbors ostentatiously stay out of each other's business when they're out on the sidewalk, then go home and do everything they can to find out what's happening across the air shaft. Sometimes, as with what the Siren's roommates later dubbed the \"milkman incident,\" your discoveries are accidental. More frequently, you're looking on purpose. Either way, you gather information but usually don't need, or even want, to act on it. When the Siren encountered the milkman on the street a few weeks later, we did an excellent job of pretending It Never Happened.\nisn't entirely true. 
But even if you take it for granted that Hitchcock was a control freak, what an actor does when the cameras turn is something that can never be completely controlled--for that you'd need a marionette.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "For one, the fact that Scottie is in and of himself a voyeur in the film draws attention to the illicit nature of the act. As the film progresses, shots such as the one in the restaurant described above constantly remind a spectator of the fact that they are practicing voyeurism; Hitchcock does not allow the spectator to commit the act subconsciously or innocently. These constant reminders achieved through the film language also have the effect that the spectator becomes complicit in Madeleine’s fate at the hands of Scottie. Jason also showed examples of when Madeleine breaks the fourth wall to look into the camera. Her expression as she does this is despairing and imploring, as if she is asking the spectator to help her, to save her from the fate that Scottie – and the spectator with him – is creating. These shots stand in contrast to the element of straightforward voyeurism whereby the person being watched is unaware; once again, the spectator is reminded that they are committing voyeurism, and being forced to take on responsibility.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-2", "d_text": "The dog is meant to distract the audience from guessing the surprise in the next scene. Hitchcock worked that way; he didn’t only control his cast and crew but his audience as well. With “Psycho”, the entire first act is a diversion.\nI can only imagine the horror of sitting in a movie palace when “Psycho” first premiered. The audience must have felt excited, having booked their tickets in advance and made it on time for the film. Hitchcock didn’t allow late entrances. So there they are sitting, excited about the next Hitchcock masterpiece. 
The lights dim, the black and white “A Paramount Release” logo appears on the big screen, and then total darkness as the logo fades to solid black. Suddenly, the first wave of Bernard Herrmann’s score fills the theater, the most horrifying music in film history. The black screen is split into stripes of grey during the opening credits. The audience doesn’t know it yet but this split bears significance.\nThere’s a dark side to every human being. We’re not 100% good. Occasionally we slip into that dark side. If you’re lucky and smart you can save yourself from letting the darkness overcome you. Here lies the true horror of “Psycho”, the dark side of the psyche. Our main character is Marion. She’s a young everyday working woman. Unfortunately she acts foolishly and tries to steal a lot of money from one of her customers. However, before meeting her fate – getting stabbed while cleaning off her sins in a shower – the guilt she feels deep down in her stomach pulls her out of the dark and back to normality. The film takes a turn there as we’re introduced to a much worse case of – the split. Norman plunged into madness and embraced darkness long before Hitchcock introduces us to him. Hitchcock’s choice to film in black and white was clearly not only to give the film a darker theme or to escape the sharp scissors of the censors; the black and white fits the theme of the picture.\n“I enjoy playing the audience like a piano.” – Alfred Hitchcock\nThe movie starts one afternoon, as the camera moves from the outside of a city through a window into an apartment.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-2", "d_text": "The marriage theme is also evident with the young newlywed couple whose sexual activity behind closed shades seems to be never-ending, and apparently exhausting for the young man. We also see Miss Lonely Heart, who sadly pretends one evening to have a male suitor for dinner. 
When she finally gets the courage to go out and meet a man, he turns out to be a creep who quickly attempts to force himself on her. And of course, there are the Thorwalds, whose marriage disintegrates into murder.\nJeffries’ broken leg sidelines him from any kind of activity, sexual or otherwise. He makes up for his inadequacy with the constant use of binoculars and a long telephoto lens he uses to spy on his neighbors. The camera and lens rest in his lap, ‘rising’ when needed, providing him with a sense of potency lacking while stuck in a wheelchair. Still, this feeling of power can only go so far. Later, when Lisa goes across the courtyard and enters the Thorwalds’ apartment in search of evidence, Jeffries is completely helpless, impotent to warn her of the danger that soon arises: Thorwald coming home. Toward the conclusion of the film Jeffries is helpless again when Thorwald invades his apartment and almost kills him.\nJeffries’ courtyard view can be seen as one large big-screen TV with him channel surfing between each window, a separate story going on in each one: the struggling songwriter, Miss Lonely Heart, Miss Torso, the married couple who sleep out on the fire escape to relieve themselves of their apartment’s oppressive heat, and the Thorwalds. At first Lisa is repelled by Jeffries’ spying on the private moments in his neighbors’ lives but as she becomes more convinced that Thorwald murdered his wife, possibly chopping her up into body parts, a look of sexual tension builds in her eyes and face. She has become visually stimulated, “turned on” by it all to the extent that in an attempt to uncover evidence on Thorwald as a murderer she crosses over from viewer to participant when she goes down to the courtyard, climbs up the fire escape and enters Thorwald’s apartment looking for some confirmation of the murder.", "score": 25.600979874568417, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "Ever go on a centrifugal force ride? 
The ones that hold you in place as they pick up speed and push you to the edge. You try to move your arm, but it’s dead. You get pushed further and further back into your seat while a weight sits on top of your chest and pulls at your face. Then BANG. There’s that sought-after endorphin rush.\nWell, that’s what watching The Woman in the Window (1944) feels like. A nail-biter noir that lays on the tension until you’re pushed back into your seat and can’t turn away. Directed by Fritz Lang and considered a masterpiece of film-noir, The Woman in the Window has been added to Eureka!’s ever-growing Master of Cinema collection.\nWith his family away on vacation, psychology professor Richard Wanley (Edward G. Robinson) has a chance encounter with Alice Reed (Joan Bennett), the model for a painting he has been admiring. Accompanying her home to discuss the artwork, Wanley is attacked by Alice’s wealthy jealous boyfriend. Killing him in self-defence, Wanley and Alice realise that going to the police will ruin them both, and work to cover up the crime. But as the police investigate, an ex-cop (Dan Duryea) begins to pile the pressure onto Wanley.\nThis is probably one of the tensest movies I have watched in a while. Lang adds layer after layer, building it up to breaking point. And what’s interesting is Lang doesn’t do it through tight, claustrophobic, close-up shots. Most of the scenes are shot at Full/American range and at high angles. A step away from noir’s classic low-angled medium/close shots. Low-key lighting and thick oily shadows hide most of the detail of the scene, forcing you to focus on the action. That’s the key. Lang makes the viewer watch the work on the screen. Turning the psychological visuals up to full force, this, with the obvious exceptions of M (1931) and Metropolis (1927), is one of Lang’s most significant works as a director.
Despite being labelled a noir, The Woman in the Window goes back to the earlier expressionist and proto-noir style of Lang’s German career.\nThat said, it’s not without its issues. Wanley and Reed, for the most part, are flat and two-dimensional.", "score": 25.16524282752502, "rank": 47}, {"document_id": "doc-::chunk-1", "d_text": "When Roger Thornhill is driving we see the bonnet of the car, and looking down at the Mercedes Benz sign it appears to look like a crosshair. I think this means that people are being targeted; this also happens in the opening scene when you look down onto the mass of people hurrying about. People’s paths are crossing and it gives the impression of a crosshair from a rifle, again giving the impression that people are being targeted.\nThis again happens when Roger Thornhill spots a plane flying towards him, leaving a trail behind it; the trail is 90 degrees to the road and when the plane had passed you could clearly see the crosshair. I think this is there to show that Roger Thornhill was in a substantial amount of danger. Different types of camera shots are used throughout the film to give an impression to the viewer about the character. Low shots are used frequently, for example when the two Russian henchmen are taking Thornhill away, the camera starts off low, looking up at them.\nThis gives the impression to the viewer that these men are powerful, dominating and aggressive. A low-angle camera shot is also used when Thornhill is hanging for his life from the four heads while one of Van Damm’s agents has his foot on his hand; the camera looks from the foot up to the henchman, again giving the impression that he is powerful, dominating and in control of the situation.
Another type of camera shot is a point-of-view shot, used to look at things from the character’s perspective or ‘point of view’.\nAn example of this is when Roger Thornhill is about to get punched in the woods when meeting up with Eve; this makes the situation seem more dramatic and eventful than just looking at it from a normal camera shot. High-angled camera shots are used in North by Northwest: when Thornhill is running from the UN building the camera shot is looking down the side of the building from the very top, with parallel lines running across the building to show that maybe sinister events are taking place within.\nThe high-angle camera shot makes the character look very small in a large world, as if he’s weak and helpless. A wide shot is used to emphasize vulnerability; for example, when the crop-duster plane is swooping in on Roger Thornhill the wide shot makes him look more vulnerable, which also creates suspense and compels the viewer to keep on viewing.", "score": 25.000000000000068, "rank": 48}, {"document_id": "doc-::chunk-11", "d_text": "Norman takes a peek through the peephole and watches Marion undress. He then walks out of his office, up the stairs to the mansion. Once inside, he takes a step up the stairs and suddenly changes his mind and goes to the kitchen. As the audience, we know that Mrs. Bates is upstairs. It’s a simple scene, the purpose of which is to distract the viewer from outguessing the master.\nMeanwhile Marion calculates how much cash she’ll have to return out of her own pockets. ($700) After tearing the note to pieces she looks around and can’t find a bin and so she flushes it down the toilet. This was the first time the flushing of a toilet was seen on screen. The audience must have felt shocked at the sight. Yet it’s only a warm up to the major shock that follows. Hitchcock once said that the toilet shot is a “vital component to the plot”. My guess is it foreshadows the shower scene. 
After the brutal murder, we get a close-up of Marion’s blood flushing down the bathtub drain.\nIn probably the most famous and well-edited scene in all of cinema, also known as the shower scene, Hitchcock uses editing and sound as cinematic manipulation to create a carefully thought-out horrific murder scene. Perfection is the result. In less than one minute, we witness a combination of 78 shots, set against the sound of a knife slashing against skin. We never actually see the knife enter the woman’s flesh, yet we’re convinced we do through the sight of stabbing (hand motion), sound effects, the musical score (horrible animalistic screeching), and of course the careful editing. Celluloid cuts replace flesh cuts. When Hitchcock told Francois Truffaut that “Psycho” belongs to filmmakers, he wasn’t joking.\nBy exposing the audience to forty-five seconds of nonstop violence without actually showing any, Hitchcock leaves it up to our imagination. (Truffaut) Imagination has no limits, which is why the scene is timeless and just as shocking half a century later. The shock is not only the sudden bombardment of cuts but the fact that he killed off his leading lady. We looked through her eyes, listened to her thoughts and witnessed her actions only to see her naked body slashed to an ugly death. With more than an hour to go, anything is possible.", "score": 24.807723003181515, "rank": 49}, {"document_id": "doc-::chunk-2", "d_text": "But someone is. Look at the shadow on Sam's shirt.\nWhen the old lady is gone, the threat seems over.\nAh, close-up! Here is “a new possible menace.”\nSam and Lila are oblivious to the threat.\nThen, Sam can feel himself being watched …\nBut by Bob at the desk.\nBob leaves and Sam and Lila believe they are alone. They are thus vulnerable to the possible menace outside ...\nThe man steps in. 
And by \"in,\" I mean all the way in – as tight as the close-up can get and still hold his face.\nSam and Lila are still oblivious to the man in the hat ...\nUntil he speaks. Classic horror reaction shot!\nThe man in the hat steps closer.\nUncomfortably close. Out of focus.\nHitchcock cuts to a predatory view over his shoulder. Is he a menace?\nSam is fearful until Arbogast introduces himself as a private investigator.\nWith that, the man in the hat is eliminated as a menace and the tension is released.\nBefore we go, let’s look at one more scene from Psycho involving Arbogast in which Hitchcock uses his close-ups carefully. In the scene below, Arbogast shows up at the Bates Motel.\nInitially, Norman and Arbogast are shown in a rather distant two-shot, with Arbogast lower than Bates, a subservient position.\nInside the hotel office, as Arbogast begins his questioning, they’re now on equal footing.\nNorman is holding his own until Arbogast asks to look at the registry …\nKnowing Marion’s name is in the book, now Norman feels threatened and Hitchcock goes to a close-up.\nArbogast smells blood in the water and gets his own close-up. He’s on to something.\nNow even the picture of Marion gets a close-up.\nAnd so it goes ...\nUntil Norman worms his way out of the conversation and Arbogast backs off.\nHitchcock was a firm believer in storyboarding. Then again, so is George Lucas. Careful planning alone can’t make for a great film. Still, there’s something about purposefulness that I’ll always find rewarding.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "Alfred Hitchcock's \"Psycho\" (1960) is a film about Marion Crane, who disappears after stealing $40,000 and is then murdered by a mysterious figure at the Bates Motel. As an investigation into her murder is opened, twists surprise the viewer until the climactic ending.
Through the use of camera angles, film techniques, dialogue, and sound effects, the film portrays the main characters as trapped, unable to escape their state of mind or the guilt of their actions.\nRight at the start of the film, we get the impression of being trapped from the combination of music and graphics. Through the use of the sliding horizontal and vertical bars, it instantly reminds us of jail cells, of being held captive where we are. Throughout the whole movie, a lot of scenes will contain parallel lines, be it the blinds of the window, or even the railings around the stairs.\nHitchcock planned everything carefully to place these things in the shot. The small melody in the opening credits is used again in the scenes where Marion is in her car fleeing from somebody, as it creates suspense as though she is being chased.\nIn the opening scene, during the dialogue between Marion and the man of her affair, Sam, we understand that the two of them are unable to cater for each other with their current lifestyles. During their conversation, we understand that Marion is tired of having to sneak out during her lunch breaks to be with Sam, and at the same time, Sam is not financially stable enough to support his relationship with Marion. The two of them are stuck or \"trapped\" in Phoenix. They both long to escape to California and start a new life together.\nNear the beginning of...", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-2", "d_text": "The use of black and white film is very effective in this scene, as it reflects the mood of the scene and allows the dark silhouette to appear sinister and evil against the white shower curtain; the darkness also creates a strong connotation in relation to death.\nWhen the mysterious figure draws back the shower curtain, the shots are short and are no longer than a few seconds; Hitchcock has done this purposely to show how brutal the stabbings are. 
Each shot would shock the audience. A 1960s audience would differ dramatically from a modern-day audience; the audience of Psycho in the 1960s would have perhaps not seen a film so vicious and brutal. The non-diegetic sound heightens the tension amongst the audience; the music is loud and abrupt, which shocks the audience. The diegetic sounds of knife stabbings are also apparent, as Hitchcock has made the murder so brutal that it is not just one stab; it is the repetitive stabbing of Marion. This sound also puts the audience on edge because it is not a pleasant sound to the ears; it was Hitchcock’s intention, as it makes the audience aware of how unpleasant the murder is. The camera stays on Marion until it slowly pans to a shot of the bath and then follows the flowing water stained with blood streaming towards the plug hole; this shocks the audience because it indicates just how brutal the murder was. Marion is a classic victim as she lacks survival skills and an ability to defend herself; she is also passive, and therefore she was an easy target for Norman Bates.\nAfter the stabbing, the camera cuts to a shot of Marion’s hand: the bent fingers highlight Marion’s desperation to hold on, but she is slipping away.\nAfter the shot of Marion’s hand Hitchcock uses a camera shot of Marion sliding against the wall with her hand up, trying to grab the shower curtain; the use of this type of shot connotes death and desperation. This shot signifies how Marion is trying to grasp on to life, but not succeeding.\nWhen Marion is dead on the floor, Hitchcock has used a close-up of her face and eye that looks straight into the camera and at the audience. This allows the audience to relate to her, and see that she died in pain. Marion also looks like she has a tear in the corner of her eye, which shows her fear.
The camera slowly pans away to the newspaper, and the audience are then shown that the mystery is being continued.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-9", "d_text": "Whispers?”\nNorman: “Mother, she’s just a stranger. She’s hungry and it’s raining out.”\nMother: “Mother, she’s just a stranger. As if men don’t desire strangers. As if….(shuddering) I refuse to speak of disgusting things, because they disgust me!”\nWhat follows is a dim and haunting wide shot of the house in complete obscurity with creepy tree branches on both sides and dark clouds lingering in the sky. Like a house on a haunted hill, the cinematography is simply breathtaking and needs to be seen to be believed. Only one light shines, the window of the room where the shadow of an old woman roamed earlier.\nMother: “You understand, boy? Go on. Go tell her she’ll not be appeasing her ugly appetite with my food or my son! Or do I have to tell her cause you don’t have the guts? Huh, boy? You have the guts, boy?”\nA radio actress by the name of Virginia Gregg perfected that spine-chilling voice of mother. In fact, it is done so well, there’s no way the audience would suspect she’s just Norman fulfilling his disorder. Not only that, but the fact that mother offers to go tell the visitor herself only personifies her, leaving the viewer with no hints to guess the twisted reality.\nA few seconds later, my all-time favorite two-shot arises. Holding a tray with the milk and sandwich, Norman stands to the left in front of a window. Marion is on the right in front of the door. Both are standing outside in front of the cabin. “I’ve caused you some trouble”, Marion says, implying that she heard their conversation. To which he replies: “No…mother…my mother…what is the phrase? She isn’t quite herself today.” Freeze the frame at that precise moment and observe the richness of the moment. Visually this shot speaks volumes of Hitchcock’s famous wit.
In crisp clarity we see the reflection of Norman’s face on the outside window. Indeed “she isn’t quite herself today”, the answer is there visually. This may either be a coincidence or a stroke of genius. I like to think it’s the latter, for the blinds are half drawn providing the possibility of the reflection. It had to be intentional.\nThey move to the parlor because “eating in an office is just too officious”.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-4", "d_text": "The sequence is a bit like a Hitchcock version of Eisenstein’s Odessa steps, but, as Raymond Durgnat has noticed, it also has things in common with Joris Ivens’s documentary Rain (1929). A tight close-up shows the victim staring openmouthed at the camera, his face shattered by the bullet; next comes one of the most sinister and witty images of the director’s career: from a bird’s-eye vantage, we see the escaping killer scurrying like a rat through a dense crowd of people holding black umbrellas, some of which bob and sway, marking his path. Johnny’s pursuit of the killer through the ensuing chaos, in which an innocent cyclist dies, is edited with superb lucidity and leads to a dangerous car chase that also serves as a meet-cute reunion between Johnny and Carol Fisher. When the killer’s car mysteriously disappears, we find ourselves in quiet, open country. 
As usual in Hitchcock’s thrillers, the locale has a picture-postcard quality: Holland is famous for its windmills, so they are not only shown to us but also worked into the plot by means of a running gag—a lost bowler hat, which leads to the discovery of a MacGuffin (“Clause 27”) and a conspiracy to start a war.\nThe Rembrandt Square set was the largest of seventy-eight constructed for the film, consisting of a full-scale replica of the town hall; twenty-six broad steps leading up to the building; a wide cobblestone plaza; a street big enough to accommodate trams, cyclists, automobiles, and pedestrians; and the most elaborate rain-effects system yet built in Hollywood. The supervising art director was Alexander Golitzen, but most of the design work was probably done by his assistant, Richard Irvine, and by the legendary production designer William Cameron Menzies (also on loan from Selznick), who is given a special credit for visual effects in Foreign Correspondent. Menzies designed sets and camera angles for scores of films, ranging from Douglas Fairbanks’s The Thief of Bagdad (1924) to Selznick’s Gone with the Wind (1939). His influence can be seen here in the Piranesi-like interior of the windmill, with its dark spiral staircase and menacing gears, and especially in the remarkable climactic scenes, when a transoceanic airliner carrying the leading characters is shot down, presumably by the Germans, and crashes into the sea.", "score": 23.642463227796483, "rank": 54}, {"document_id": "doc-::chunk-1", "d_text": "Using this method, the audience can maintain their interest in the movie and suspense can be delivered more efficiently. Vital to any Hitchcockian film is what is known as information. Information is something the characters do not see, yet the audience does. In most cases, the information is usually dangerous and is presented in the opening of a scene.
As the scene continues, the audience is reminded of that information which could jeopardize the ignorant characters. For example, in the 1976 movie Family Plot, the audience sees a shot of a car leaking brake fluid, yet the characters in the car have no idea this is happening. Watching scenes with information builds up tension, and it is one of the most popular techniques Hitchcock has made famous. Surprisingly, one would not think to include anything comical in a thriller movie, yet Hitchcock believed suspense doesn't have any value if it's not balanced by humor (Bays 1). By using contrasting characters and settings, he made his films more amusing to watch. In order to intensify the audience's anxiety, Hitchcock utilized understatement, turning the attention of an action scene to insignificant and petty character features or actions. In Rear Window, the protagonist Jeff tries to stall the villain's attack by blinding him with flashing camera bulbs. The great effort the villain uses to regain his vision is amusing, yet at the same time is suspenseful because of his steady and eerie approach. Hitchcock also frequently inserted a character who mocked a serious matter such as murder. This is usually a sign of foreshadowing, as seen in Rear Window when Stella (the nurse) laughs about the idea of a killing in an adjacent apartment. Irony is also evident in Hitchcock's films because he places characters in terrible situations against bright and joyful settings. He thought the more happy-go-lucky the setting, the greater kick you get from the sudden introduction of drama (Bays 3). An excellent example of irony is in The Trouble With Harry, where a dead body appears amid beautiful fall scenery. The final suspense method is none other than the twist ending. Hitchcock never wanted his films to have a predictable ending because it would destroy the entire point of putting suspense into the audience.
In the key moment of Saboteur, Barry Kane corners Fry, the real saboteur, on the top of the Statue of Liberty.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-16", "d_text": "Norman delicately walks up the stairway. He walks to mother’s room, and the camera slowly pans up closer to the door and eventually the long shot ends with an overhead view of Norman carrying his mother to the fruit cellar. This beautifully photographed shot, meant to hide the face of Norman’s mother, is an example of how Hitchcock uses cinematography to guide our eyes in whichever direction he pleases, supporting the story.\nNext, Sam and Lila decide to search every inch of the motel. To do so, they split up. Sam is to distract and keep Norman occupied while Lila goes up to the mansion to get to the old woman. Two things happening at once build the tension as the two threads eventually merge into the famous Norman in his wig scene.\nInside the office, Sam shoots accusations at Norman. He’s not as smooth as Arbogast, which leads to trouble. They say an animal is most dangerous when cornered. The second time Norman is put in that situation, he breaks loose by striking Sam’s head with a souvenir. Meanwhile Lila, after touring the house, looks through a window and sees Norman running towards her from the Bates Motel. Space is all that was needed to keep us on the edge of our seats.\nMoments later, Lila is hiding in the cellar room. She sees mother facing the wall in her rocking chair. A tap on the back later, the truth surfaces: mother is a corpse. Lila screams and hits a hanging light-bulb. Shadows dance. Enter Norman smiling like a creep with a kitchen knife high up in the air. More importantly, the screeching noise makes another visit; the two previous times the audience listened to that horrible noise they witnessed murder scenes. Subconsciously the audience thinks it’ll happen again, only Sam comes to the rescue.
The wig falls off Norman’s head.\nThe final scene is the famous psychiatry explanation. Like Roger Ebert, I have always been bothered by this scene, for like the opening narration in “Dark City”, the full explanation underestimates the intelligence of the viewer. In his Great Movie essay, Roger provides a perfect cut: “If I were bold enough to reedit Hitchcock’s film, I would include only the doctor’s first explanation of Norman’s dual personality: ‘Norman Bates no longer exists. He only half existed to begin with.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-7", "d_text": "When he finishes, the camera focuses on the water going down the drain. As it becomes clearer, we see the sink is filled with gritty sand. He then goes and lies down in bed, covering his face with a newspaper.\nWith just a minute of film, Hitchcock has told us: 1) this guy is the saboteur, 2) he lives at the movie theater, 3) he really doesn’t want anyone to know he was out that night.\nThis is what Hitchcock was great at. If he had kept the identity of the saboteur a secret, then we’re just waiting around for him to strike again, and watching the police or agents gather clues and try to learn as much as we can, and maybe trying to guess on our own who the saboteur is. By telling us immediately who the saboteur is, Hitchcock now creates a tension around this character and everyone he interacts with. Will he be caught? Did he leave behind a clue? Do the people around him know what he has done? What will he do next? By slowly playing out these answers as the film goes on, Hitchcock flips the movie from a mystery to a thriller.\nAs the film proceeds, we learn that the ticket clerk Mrs. Verloc is the wife of Mr. 
Verloc is being watched by a Scotland Yard detective who is posing as a grocer at the store next door.\nThe film plays as a by-the-numbers thriller, expertly crafted. Verloc goes to receive his payment, only to be told that the power outage wasn’t good enough, that the city found it amusing, rather than terrifying. He is pressed to increase the terror by setting off a bomb. We see that Verloc is hesitant here, not wanting to hurt anyone. Hitchcock is trying to humanize his villain here, and it’s a good attempt, but I don’t know if I ever felt like Verloc was a decent guy caught up in circumstance.\nAs the inspector tries to get information out of Mrs. Verloc and Stevie, Mr. Verloc is directed to a pet shop that also makes “fireworks” (it’s a bomb, for anyone not good with metaphor).", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-1", "d_text": "Also note how in the title sequence there are shades opening almost as if the audience is the voyeur for this picture.\nNext, let’s talk about the underrated and often taken-for-granted set of this movie: it’s a whole neighborhood in a sound stage, and that’s something you rarely see anymore (as it’s too expensive!). This is a set with no green screen or digital apartments; they are really there and they are built. From what I know, they used two sound stages and the apartments were the street level while places like the courtyard were actually the basement.\nAll apartments were made livable, and Hitchcock would give direction through an earpiece that all the actors had. Watch the video for the opening scene of the neighborhood and courtyard; just mind-blowing how that was all created!!\nFinally, let’s discuss Grace Kelly in this movie!! This is her ultimate glamour role, her ultimate Hitchcock role and her best-known role.
I feel only she could be Lisa Carol Fremont, and if someone else like Vera Miles or Kim Novak had played the role, this picture would not have been as believable or memorable. Lisa Fremont is so proactive, more than just the “girlfriend” and sidekick, as she’s the one doing the action scenes that Jeff can’t. I believe Hitch spent the rest of his career trying to find another actress to create a role such as this, but naturally and utterly failed, finding good, but somewhat sub-par actresses for big roles in his pictures. It’s so easy to take for granted how phenomenal Grace is in this role!!!\nIn Sept 2017 (the anniversary of Grace’s death) I had the pleasure of viewing Rear Window on the big screen and I can say that it absolutely changes your experience. Seeing every moment play out on the big screen makes it all more thrilling and dazzling.\nClick here to read a post I did concerning the fashions of Rear Window.\nTo Grace I will say that on this happiest day, her birthday, I hope we can all pay her a great tribute, and I hope she is thrilled and perhaps touched that there are so many young people who still adore her and her movies!\nIt’s here everyone! Today begins the 4th Wonderful Grace Kelly blogathon! And I can’t wait to read all of our wonderful reviews! Remember I’m doing today’s posts and Ginnie at The Wonderful World of Cinema is doing tomorrow’s posts (which is Grace’s 89th Birthday!)", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-3", "d_text": "It isn’t an accident that the camera continues to stalk Wendy and Danny as it does in the hotel, when they first explore the maze. Kubrick has said: “One of the things that horror stories can do is show us the archetypes of the unconscious, we can see the dark side without having to confront it directly.” The structure of the maze allows for such an indirect confrontation of these dark forces. Symbolically the maze transcends physical time and space. 
When Danny is fleeing Jack at the denouement, he leaves the hotel and is plunged straight into another terrifying labyrinth, the hedge maze, where Jack is ultimately trapped and frozen solid in time. Danny eludes Jack through his skill in navigating both the corridors of the hotel, established earlier through his adventures on his tricycle, and the hedge maze itself.\nThe film creates different spaces within the hotel, such as Jack’s writing space, the Torrance living quarters, the hotel corridors, the gold ballroom and the outdoor maze. Mario Falsetto observes that these spaces are all endowed with “specific characteristics” and are all presented to the viewer differently through the camera. When viewing the spaces Jack occupies throughout the film, we can see that the cramped, enclosed space of the Volkswagen at the beginning is mirrored by the claustrophobic living quarters and ultimately by the maze that encloses Jack’s frozen body. During the chase sequence in the maze, the film repeatedly cuts between varying points-of-view within the maze as Jack chases Danny and views of Wendy wandering the labyrinth of hallways inside the hotel, interacting for the first time with the ghostly residents. This crosscutting accentuates an idea that runs throughout the film: the feeling of claustrophobia yet isolation is present both within the hotel and in relation to the snow-bound exterior. In the scene where we see Jack throwing the tennis ball around in senseless boredom, the interior is linked to the exterior through Jack’s sense of ‘mastery’ over his family as he looks into the model of the maze. The camera zooms into the maze and we see the tiny figures of Wendy and Danny moving, transforming the model into the real. 
There is a sense of cerebral intent from Kubrick, who portrays the hotel as almost human, with each section of its overall space having distinct characteristics.", "score": 22.27027961050575, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "Vertigo is a 1958 film directed by Alfred Hitchcock that has stood the test of time in the horror genre. It is considered one of the seminal films in the genre, not simply because it set the tone for the films that followed but also because it demonstrated numerous features and techniques that would revolutionise the way in which movies were made. The camera angles, use of space, cinematography, special effects and sound all contribute to the overall effect achieved.\nAs a result, this essay will analyse each of the above with a view to concluding that Hitchcock reinvigorated the horror genre with Vertigo and provided a master class in employing cinematic techniques for effect. The first technique of note is the way in which camera angles are used to create an atmosphere of fear, giving the impression that the characters are moving in one vicious circle. The camera “… simulates panicky feelings of acrophobia (fear of heights) felt by Scottie Ferguson (James Stewart)” (Pramaggiore & Wallis, 2005, p. 127). For example, at one point in the film, a set of stairs is shot from the top. This not only refers to the cyclical nature of the narrative, since the bell tower staircase is indeed circular, but likewise narrows the shot. The illusion of falling from a great height is fostered in this shot and indeed in others, such as the one in which Stewart appears to be clinging to the ledge. Furthermore, the camera angles also link directly to the illusion of space, which “… serves as a template for key themes: the topography of a city and its adjacent countryside matched by dilemmas of intimate choice, remorse and obsession” 
(Orr, 2005, p. 137). The view from the bell tower always appears bleak, mirroring the overtones of the plot. The sound likewise highlights the cyclical nature of the narrative. In an interview in Sight and Sound, Scorsese pointed out that “… the music is built around spirals and circles, fulfilment and hopelessness. Herrmann really understood what Hitchcock was going for; he wanted to penetrate to the heart of obsession” (2004). Scorsese’s assessment is correct, and this is typified by the scenes in which Madeline and Judy fall to their deaths.", "score": 21.822114217887624, "rank": 60}, {"document_id": "doc-::chunk-1", "d_text": "Hitchcock has used close-ups of Marion’s feet as she enters the bath, and quickly and forcefully draws the curtain closed, to imply that Marion is blocking out the world, escaping the crimes she has committed and entering her own sense of security. The way she traps herself in the shower indicates that she is trapped by her desperation. Hitchcock has perhaps used the shower curtain as a way of portraying the way that Marion is separated from reality.\nWhen she turns on the shower, the diegetic sound of the running water dominates the mise-en-scene; it also creates a sense of realism, allowing the audience to see how naïve and unsuspecting she is. Hitchcock has used various angles and close-ups of Marion’s head so that the audience can see her emotions and relate to Marion. The worm’s-eye-view shot of the shower head is used to make the water seem menacing and consuming – this connotes danger, but also makes the audience think that Marion is small and very vulnerable at this point. 
Marion is having an affair and has stolen $40,000 from her employer's client, therefore fitting the traditional conventions of a femme fatale; her deviance has sealed her fate, and the audience are aware of this.\nNorman Bates murders Marion; his last name indicates that he is someone who preys on and ‘baits’ other people. His character also keeps stuffed birds; this can be related to Marion Crane because her surname is a bird. Hitchcock has used irony to connote that Marion might become part of his collection, which happens.\nThe camera angle cuts to a medium establishing shot of Marion and the shower curtain, which\nreveals to the audience a silhouette. The camera angle does not change, but the silhouette draws closer to the shower curtain. Marion is oblivious and unsuspecting, yet the audience can see that she is not safe, because Alfred Hitchcock has used dramatic irony in this scene of the film. The tension and suspense build as the audience are forced to realise that the figure has aggressive intent, while questioning amongst themselves whose silhouette it is. The speed of the diegetic sound increases as the murderer comes closer to the curtain, allowing the audience to foresee what will happen without the use of dialogue.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-1", "d_text": "The shots were storyboarded to make sure there was enough contrast of sizes within the cuts. ....\n“Here is the shot of the detective, a simple shot going up the stairs.\n“He reaches the top stairs, the next cut is the camera as high as it can go, it was on the ceiling. …\n“You see the figure run out, raised knife ...\n\"It comes down …\n“Bang! – the biggest head you can put on the screen. ...\n\"But the big head has no impact unless the previous shot had been so far away. 
So don’t go putting a close-up where you don’t need it, because later on you will need it.”\nHitchcock’s description of that scene can be found in Conversations With The Great Moviemakers of Hollywood’s Golden Age at the American Film Institute. The book is a collection of interviews, and it didn’t escape Hitchcock’s interviewer that the above scene isn’t the only time that Psycho’s Milton Arbogast gets a close-up. In fact, Arbogast is introduced with one. Asked why, Hitchcock explained: “You bring him in like that because you are bringing in a new possible menace.”\nThat’s all Hitchcock says, and I had to review Psycho to be reminded of just how true that is. Let’s look at the scene in detail.\nFirst, to fully appreciate the scene you have to remember what comes directly before it: Norman Bates, captured in a shadowy close-up, smiling as Marion Crane’s car disappears into the bog.\nFrom there the film cuts to a seemingly safe image: Sam writing a letter to Marion. Except …\nAs the camera pulls back ...\nHitchcock prods the audience with visuals of threatening tools. Like these …\nAnd these …\nAnd these …\nEventually the camera pulls back to reveal a little old lady – a little old lady holding poison.\n“They tell you what its ingredients are and how it’s guaranteed to exterminate every insect in the world. But they do not tell you whether or not it’s painless. And I say, insect or man, death should always be painless.” Classic sinister humor from Hitch.\nFrom here, Lila Crane arrives. Notice that her arrival isn’t greeted with a close-up.\nShe meets with Sam.\nAs they begin to discuss Marion, they feel as if they are being watched. The little old lady is treated with suspicion as she passes by. She’s not a threat.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-4", "d_text": "Instead, we understand her troubles and feel for her. In other words, she has a reason for stealing the money. 
Another example of Hitchcock trying to justify her theft is evident in the next scene. We meet Mr. Cassidy, a man who sprays his money everywhere to “buy happiness”. We don’t regard Marion as a villain because the man she steals from is portrayed as a very rich, disgusting beast who doesn’t know how to hold his tongue. He speaks his mind with no manners whatsoever, flirting with Marion and embarrassing the boss (“where’s that bottle you said was in your desk?”). After the theft, no real harm is done, at least not enough to make Marion a villain. We simply see her dark side. Again, this is expressed visually when we see her staring at the open envelope wearing her black bra. The $40,000 in the envelope serves as the ‘MacGuffin’ of the film. The term ‘MacGuffin’ refers to an object that bears much importance to the characters but to the audience is only a vehicle to drive the plot to the next level. A ‘MacGuffin’ is dropped once it serves its purpose.\nBetween the first justification scene and the second one, there’s the famous shot of Hitchcock’s trademark cameo. He stands on a sidewalk, and the camera leaves him behind as it follows the entrance of our main character. This is simply a visual signature. Hitchcock was well known at the time, not just from the name stamped on the previous “North by Northwest” posters but from introducing the episodes of “Alfred Hitchcock Presents” on television. The same crew that worked on the TV series worked with him to deliver his small-budget project to the big screen. Anyway, his appearance is a visual signature and a reminder that things will turn ugly. It’s Hitchcock.\n“Psycho” revolutionized cinema, both technically and in terms of content. A perfect film for studying various uses of editing, “Psycho” reveals its rhythm in how Hitchcock handles the passage of time very efficiently. When Marion leaves the room, we realize that it’s still that same day. 
She goes to work, collects some money she’s supposed to put into the bank and goes back home. All that happens in one particular afternoon, and the time frame doesn’t change.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "Richard Neutra: Chuey House\nSince the beginning, protection from the elements was the initial motivation for the invention of Architecture. In the past this has been accomplished rather well with walls and sloped roofs. The walls of pre-modern structures were almost always load-bearing. This required them to have a certain thickness. This thickness also allowed the walls to serve as great insulators from temperature fluxes. Small windows allowed a glancing connection to nature. This need for connection as well as protection has been a vital duality in architecture since its beginnings. The use of small windows and light-filled courtyards allowed the benefits of nature to inhabit the dwellings while leaving the discomforting elements out.\nIn the Modern Era it’s evident that the desire for a closer connection to nature took root. The increased use of glass combined with the separation of structure from facade (Le Corbusier’s Domino frame) allowed this development to occur. For mainly spiritual reasons, the modern movement sought to blur the lines between inside and out. They sought to allow buildings to breathe from under the weight of gravity. Architecture must protect, but beyond that the Modern Architects devised a number of tricks to give the illusion of free exchange between inside and outside. In this world of connection the wall disappears and the roof stands out as the main symbol of protection.\n- Horizontal Thrust: The horizontal line is the line that goes along with nature. The vertical line declares its independence from nature. The emphasis of the horizontal can most clearly be seen in the early development of the Prairie Style by Frank Lloyd Wright. 
These early houses seek a connection with nature by blending into the horizontal countryside they inhabit. The house becomes less intrusive, less about being clearly man-made, and more a part of its environment. From inside, the low horizontal lines seen in railings and overhangs complement the distant line on the horizon and include it as part of the aesthetic experience of the house. Naturally this would work best in the actual countryside where the horizon is uninterrupted. Prairie Houses in urban areas tend to lose this effect.\nMies Van Der Rohe: Farnsworth House demonstrating floor to ceiling glass and homogenous materiality.\n-Floor to ceiling glass: The use of floor to ceiling glass is the most obvious and effective way to establish a visual connection between inside and out by foiling any obstructions of view. The perfect example of this is at the Farnsworth House.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-13", "d_text": "Both Siodmak and Lang emphasize composition: elaborate visual patterns made up by the geometry of the image they are displaying. Like Lang, Siodmak is a heavily architectural director. Many of Siodmak's compositions depend heavily on the use of architectural backgrounds for their structure.\nCriss Cross has numerous scenes shot on staircases, and sloping locations that are staircase-like. There is the nightclub, with its descending levels; the sloping exit of the armored car company; the staircase in Lancaster's house; and the hilly Angel's Flight neighborhood of Los Angeles, with its many outdoor staircases and sloping sidewalks. The use of staircases is very Lang-like. However, Lang's stairs tend to go steeply up and down, whereas those of Criss Cross tend to form large sloping surfaces at 45-degree angles. This gives Criss Cross a unique feel. It is as if much of the film takes place on sloping ground. There is an unstable quality. 
It is also as if the characters were constantly under the influence of powerful forces dragging them off in uncertain directions: gravity, due to the slopes they stand on, and the forces of fate and crime, due to the plots they are engaged in.\nSiodmak also uses that favorite Lang device, the mirror shot. Here the scenes in the hospital are especially ingenious and elaborate.\nThere is also a striking nocturnal drive to the beach house towards the end of Criss Cross. It echoes similar nocturnal drives in Lang's The Testament of Dr. Mabuse (1933). Both films also look forward to similar drives at the opening of Robert Aldrich's classic Kiss Me Deadly (1955).\nThe armored car robbery is conducted with the use of tear gas; the crooks use gas masks to protect themselves. Both of these features are familiar from the German films of Fritz Lang. The gas masks are a visually striking figure of style. They introduce a disorienting, surrealist look to the film. Lang liked masks in general. He collected primitive masks, and they show up in both his German films and his American film noirs, such as The Secret Beyond the Door.\nNear the beginning of the robbery, Siodmak cuts to an overhead shot. This is not purely vertical, like the Rodchenko angle.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-5", "d_text": "Janet Leigh’s performance shines in the next scenes. We return to her room. There is no need for dialogue; we know what she’s thinking when her desperate eyes land on the envelope. Like the greatest of silent performers, Leigh expresses more through facial reactions than words. Few actresses can pull this off; she does. After she decides to run away with the money, the editing becomes more and more interesting.\nHitchcock uses a medium shot of the main character, Marion Crane, as she drives away from her hometown. The shot shows her face, part of the steering wheel, and the background, which includes the sky. 
The shot then changes from that particular medium shot to what is regarded as an eye-line matching shot, in which we as the audience see the highway through her eyes. This is the second time Hitchcock uses this shot (the first being her staring at the envelope repeatedly). The minute she steps into her car, the narration starts.\nThe narration serves as the voice in her head. At first, we hear how she suspects Sam will react upon seeing her with the money. Hitchcock just slipped us into her shoes. He doesn’t only establish her as the main character, he confirms it. We see what she sees (eye-line matching shots), we feel what she feels (the urge to steal the open envelope full of cash), and now Hitchcock makes her share her thoughts with us. She bites her finger at a traffic light stop. After that we get the eye-line matching shot. People cross the street in a hurry. Their hurry is nothing compared to that of Marion, especially when her eyes meet those of her employer. We get a close-up shot of her smiling at him. Her boss smiles back, then stops, realizing she’s supposed to be sick at home or on her way to a bank. He looks back at her, only this time more suspiciously. Enter Herrmann’s score; the plot thickens.\nAt first Marion’s expression suggests fear. Then we get a couple of night shots with bright lights striking our eyes. Her facial expression is more relaxed now. The following morning, Hitchcock is generous enough to provide a beautiful deep-focus shot. In the lower left corner of the screen is the trunk of Marion’s car; behind it, a police officer’s car; on the right, the long endless highway; and in the background, empty hills.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-9", "d_text": "We see two separate actions play out in the same single take: (a) the clergy tending to Madeline’s dead body in the top left of the frame, and (b) Scottie staggering out of the church in a daze in the bottom right of the frame. 
Fans of Hitchcock’s entire career know that this is a tell-tale auteur icon of his, using an extreme high angle to denote a major turning point in the film (e.g. after Sebastian finds the wine bottle in Notorious, or after the U.N. stabbing in North By Northwest).\nBlocking and Mise-en-Scene\nAs the bell tower high angle shows, few directors commanded the entire frame like Hitchcock. Vertigo features some of the greatest examples of mise-en-scene in any movie, visually hinting at the plot’s secrets with clues we are too blind to see on first viewing.\nFirst, take a second look at the scene where Elster first tells Scottie about Madeline’s possession. Watch how Hitchcock has Elster move up to a higher platform the minute he begins his ghost story, as if Elster were up on a stage, acting. Scottie sits and listens in the “audience” down below. Naturally, when the story ends, Elster returns down to Scottie’s level, to talk logistics. Those rare viewers with a proactive eye for mise-en-scene might guess the twist right here; the rest of us are at Hitchcock’s mercy.\nSuch brilliant mise-en-scene continues throughout the film. As Scottie makes his way around San Francisco, try counting the numerous occasions where a church or bell tower appears behind him in the frame, foreshadowing his fate in the final scene.\nShadows and Mirrors\nHitchcock also uses shadows and mirrors to symbolize Madeline/Judy’s dual nature. The clearest example of her divided self comes in the shot where Scottie sees Judy silhouetted by the green light of a neon sign out the window. The outline of her body strips away all pretense of “Judy,” rendering her a carbon copy of Madeline.\nAs Hitchcock cuts to a frontal view, Judy’s half-lit face leaves no doubt as to her dual nature. 
Her eyes move from the shadows and into the light, as if desperately wanting to escape the bounds of her evil side.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-3", "d_text": "Note Hitchcock opens the film by panning through a large city (Phoenix, Arizona); the choice is random, as is the date (Friday, December the Eleventh) and the time (Two Forty-Three P.M.). The camera then moves through a random window of one of the many buildings. Hitchcock strikes the first note on his piano. Through these random choices, Hitchcock subliminally tells the audience that this tale can happen to anyone, anywhere, at any time.\nWe get our first glimpse of the main character. Or is she? She’s a blond, which is a Hitchcock trademark, so she must be – at that moment, so it seems. Marion Crane (Janet Leigh) is wearing a white bra and cuddles with her secret lover. Hitchcock picked that white bra at the beginning to signify her innocence. Later on, after she steals the money, we see Marion in a black bra, signifying her darker side. At one point, her boyfriend, Sam Loomis (John Gavin), suddenly releases the arms so passionately holding on to the love of his life. This is the exchange of words that follows:\nSam: “I’m tired of sweating for people who aren’t there. I sweat to pay off my father’s debts, and he’s in his grave. I sweat to pay my ex-wife alimony, and she’s living on the other side of the world somewhere.”\nMarion: “I pay, too. They also pay who meet in hotel rooms.”\nSam: “A couple of years and my debts will be paid off. If she remarries, the alimony stops.”\nMarion: “I haven’t even been married once yet.”\nSam: “Yeah, but when you do, you’ll swing.”\nMarion: “Oh, Sam, let’s get married.”\nSam: “Yeah. And live with me in a storeroom behind a hardware store in Fairvale? We’ll have lots of laughs. I’ll tell you what. 
When I send my ex-wife her alimony, you can lick the stamps.”\nMarion: “I’ll lick the stamps.”\nThrough this dialogue we learn that they can’t get married for financial reasons, but what Hitchcock is doing on a deeper level is somewhat justifying the heroine’s future actions. That way we don’t despise Marion for committing theft.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-2", "d_text": "This essay, then, is only a starting point for a broader project that would address the question of immanence in architecture, and propose a take on imaging that would re-conceptualise the relation between thought and perception on one hand, and the so-called architectural representations on the other. As for Colomina's text, illustrations of the work of Adolf Loos and Le Corbusier have deliberately been omitted. The intention is for the reader to accompany the text with moving images, all of which have been taken from Mirza-Butler's installation-become-film Where a Straight Line meets a Curve.\n An often overlooked aspect of Adolf Loos' domestic interiors, writes Beatriz Colomina in 'The Split Wall: Domestic Voyeurism,' is that the windows are opaque, covered with curtains, or made difficult to access due to the in-built furniture – with the seating arrangement often placed in such a way that the occupant has their back to the window, facing the room (1992: 74-5). Each of these focal points establishes a theatre box oriented back towards the interior, she claims, and while this vantage point might provide a sense of protection by virtue of being back-lit, it also inevitably draws attention to the person seated. 
The subsequent regimes of control established within the interior, represented in photographs, make it easy to imagine oneself in 'precise, static positions' whereby '[w]ith each turn, each return look, the body is arrested' (Colomina 1992: 75).\n Discussing the Moller and Müller houses specifically, she writes: 'The \"voyeur\" in the \"theater box\" has become the object of another's gaze; she is caught in the act of seeing, entrapped in the very moment of control. In framing a view, the theater box also frames the viewer. It is impossible to abandon the space, let alone leave the house, without being seen by those over whom control is being exerted. Object and subject exchange places. Whether there is actually a person behind either gaze is irrelevant' (1992: 82). This results in the inhabitants of the house becoming, in Colomina's theatrical metaphor, simultaneously actors and spectators and hence 'detached' from their own domestic space.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-17", "d_text": "Each of his films has been full of moments of red-herring disquiet, but he has never laid such a bland set of ambushes as in Family Plot. The Master makes unsettling use of an oaken-looking woman in a jewelry shop, whom Blanche cheerfully asks if her sign is Leo; of a brick wall that comes open and then closes hermetically, causing steep claustrophobia; of a remote-control garage-door gadget; of a fragment of bishop’s red robe shut in the bottom of a car door in a garage, making one think of the gaudy socks of the unlosable corpse in The Trouble with Harry (1955); of an overhead shot of a weeping woman hurrying through a maze of paths in a cemetery, pursued by Bruce Dern; of a woman physician, a disgruntled old man in shirtsleeves, and identical-twin mechanics, who are successive false trails in Blanche’s chase; of a genteel chiming doorbell on the front door of the thieves’ house. 
Hitchcock’s ominous mechanical devices and his dark clues leading nowhere build up in us a farcical discomfiture. We are like oversensitive princesses troubled by peas under mattresses.\nBut Family Plot does not rest on the fostering of anxiety. Hitchcock allows himself a camaraderie with the audience which makes this film one of the saltiest and most endearing he has ever directed. It is typical of the picture that he should have the sagacity and technique to bring the terrifying car incident to such an un-troubling close. Only a very practiced poet of suspense could slacken the fear without seeming to cheat and end the sequence without using calamity. With this picture, he shows us that he understands the secret of the arrow that leaves no wound and of the joke that leaves no scar. Sometimes in his career, Hitchcock has seemed to manipulate the audience; in this, his fifty-third film, he is our accomplice, turning his sense of play to our benefit. There is something particularly true pitched in his use of the talent of Barbara Harris. She has never before seemed so fully used. The film finishes on her, as it begins. She goes mistily upstairs in pursuit of the enormous diamond that the villains have stolen. Lumley watches her. She seems to be in a trance. Maybe she has got supernatural powers, after all.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "For example, in Mr and Mrs Thorwald, Jeff sees a man who is stuck with an invalid and nagging wife. She belongs in that rarefied atmosphere of Park Avenue, you know, expensive restaurants, and literary cocktail parties.\nThe disparity between their approaches creates tension in their relationship. Because even though the shot shows him scratching his feet, it actually does not mean he was scratching his feet. He drifts away from the real world and enters his own private imagination. 
Relationship Story Backstory: Their problems stem from their differing lifestyles. However, the audience doesn't know this, and this mystery woman's intentions aren't known.\nMulvey suggests that the importance of her fancy dress is to draw in the man and grab his attention. As the camera draws nearer, his face is completely muted by the shadow of an approaching figure.\nShe arrives dressed in a beautiful, ornate dress, with not a hair out of place, while Jeff is in his pajamas. The video effectively curates shots that are both visually and thematically alike.\nWe see his cast on his leg with his name, L. Then I had two fall showings - twenty blocks apart.\nThe Voyeurism of Rear Window. The second-to-last shot of the montage is of a black screen with a sound of what seems to be moaning. 
Jeff raises a grin in the darkness and the camera shifts to his viewpoint.\nWhen Lisa goes into Thorwald's apartment, we are taken into the movie a little bit because she is usually on our side of the view.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-10", "d_text": "This same idea of dual nature is expressed through the use of mirror doubles, another Hitchcock auteur icon (e.g. Claude Rains in Notorious, or Anthony Perkins in Psycho). The first mirror double in Vertigo comes as Madeline and Gavin Elster leave Ernie’s Restaurant for the first time. The double image hints that they’re both phonies.\nIt happens again as Scottie watches Madeline through a cracked door at the flower shop. Note that Scottie is quite literally witnessing her phony side.\nIt happens again the first time Scottie comes to Judy’s apartment. Scottie thinks he recognizes her. She convinces him otherwise. However, we the audience should know who’s telling the truth, based on Hitchcock’s mirror mise-en-scene.\nThe most poignant example happens just before Scottie figures out the truth. 
This mirror shot comes accompanied by dialogue clues like, “I have my face on” and “Can’t you see?”\nJudy’s “face” line mirrors a quote by the archetypal femme fatale in Double Indemnity (1944), where Barbara Stanwyck looks into a mirror and says, “I hope I got my face on straight.”\nFor all this, some critics describe Vertigo as a sort of “film-noir in Technicolor.” Indeed, our hero goes on a night journey and winds up fooled by a femme fatale. There are multiple shots that scream noir, from the aforementioned shadows and mirrors, to Stewart’s silhouette entering the flower shop, to the Laura-like portrait of Carlotta.\nEscaping the Fire\nIsn’t that some insane mise-en-scene already? Wait, there’s more. Check out the use of the fireplace in Scottie’s apartment in the scene after Madeline jumps into the bay.\nThe scene starts with Scottie stoking the fire and pans around to Madeline. Moments later, an ominous over-the-shoulder shot (above) visually connects the danger of the fire to Madeline.\nDuring their conversation, we never see Scottie in the same frame as the fire; but we often see Madeline with the flames in her background. To these eyes, it’s a clear symbol of the danger and temptation that Madeline represents.\nStill have doubts? Check out the multiple scenes in the hallway outside Judy’s apartment where a “Fire Escape” sign appears prominently.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-3", "d_text": "It stars Bobby Driscoll, a child actor on loan from Disney, halfway between starring in Song of the South and Treasure Island, as the proverbial boy who cried wolf. 
One sweltering night, right after getting lectured by his exasperated parents (Arthur Kennedy and Barbara Hale) for telling tall tales, he tries to escape the summer heat by sleeping on the fire escape–where he just happens to spy his neighbors murdering a man!
As you might expect, his history of fabulism renders his eyewitness testimony suspect–and the poor kid can’t get anybody to take him seriously. That is, anybody but the next-door killers, who learn of his accusations and decide to take them very seriously indeed.
As the frame grabs above should indicate, part of the joy of this deliciously tight thriller is its visual panache–it was directed by Ted Tetzlaff, who had a reasonably impressive list of credits to his name as a director but was best known as a cinematographer–among his credits as director of photography was Hitchcock’s Notorious.
One of Tetzlaff’s contributions was the single-minded commitment to presenting the film from a child’s perspective. This is worth noting, because not all movies about children are told from their viewpoint. In fact, I’d say most movies in which child protagonists are endangered are actually told from an adult point of view, with a child hero. But The Window is like Invaders From Mars (itself the work of a cinematographer-cum-director) in orienting the tale to the boy’s perspective–both narratively and visually.
In this case, any parallels we find with Rear Window can’t be explained as an attempt to mimic the success of a hit movie—The Window came out 5 years before Rear Window and therefore can’t be a knock-off. And we can’t say Rear Window took any inspiration from The Window, because it was a micro-budgeted supporting feature that few people even saw.
That being said, the parallels are genuine links—you see, both movies were adapted from Cornell Woolrich stories.
As it happens, they were adapted from different Cornell Woolrich stories: The Window is derived from The Boy Cried Murder and Rear Window is derived from It Had to be Murder. It’s just that Woolrich apparently plagiarized himself from time to time and ended up writing two separate short stories so incredibly similar in concept and content that the movie versions of those two stories naturally seem like remakes of each other.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-4", "d_text": "The Male Gaze\nThis transfer of sympathies from the male to the female raises interesting questions about Hitchcock’s intentions. You’ll note how Hitch lingers in Judy’s apartment as the twist is revealed, and has us literally inside her head during her and Scottie’s walk through the park. We get Novak’s POV, not Scottie’s, and she sees a happy couple laying in the grass, amidst fluttering birds and optimistic music. In this new light, the twisted tale becomes a masterful meditation on the male ego and the unattainable woman. We feel for Judy as she is repeatedly subjected to Scottie’s attempts to remake her into his “ideal object” and looks up at him with those tragic eyes to say, “Couldn’t you like me? Just me, the way I am?”\nIt is both a delicious ode to the helpless male fantasy and a scathing commentary supported by even the most strident feminist. As with all of Hitchcock’s films, one can debate whether it is a sick example of the “male gaze,” or a useful window into the dangers of the “male gaze.” I prefer the latter. While I agree with Laura Mulvey that Rear Window and Vertigo “cut to the measure of the male desire,” (E) I think the point is for us to admit and question that desire. It’s no coincidence that we begin to see Scottie in a different light, and we don’t like what we see. This is the true lesson of the film.\nSuch a plot could prove a bad actor’s nightmare, or a great actor’s dream role. 
When it comes to Jimmy Stewart, I don’t even have to say which is which. Vertigo was the last of four Hitchcock thrillers for Stewart, following the experimental Rope (1948), the voyeuristic Rear Window (1954) and the just-plain-fun The Man Who Knew Too Much (1956). There’s no question which was the most complex role of his career, showing a new dark side to the aw-shucks idealist from Capra lore.
Stewart had started exploring darker places in the ’50s, with questionable sanity in Harvey (1950) and nightmares plaguing his bounty hunt in The Naked Spur (1953).
As we move outward into the country, we see the concrete arm of the totalitarian government reaching into the countryside through the railway sprawling out from the city centre. Here, in the presence of nature, sits the brutally imposing monorail (filmed in Châteauneuf-sur-Loire, France), cutting through the countryside and rising above the trees, marking its control with such force that it cannot go unnoticed. This suspended monorail, 1370 metres long and built in 1959 by the engineering firm SAFEGE, then seemed to be the future of public transport. This is why Truffaut wanted to include it in his film, as one of its few futuristic elements. Later the monorail project, originally planned to connect the Parisian suburbs with the city centre, was abandoned due to technical and financial difficulties.
Modernist architecture is noted for its elimination of ornament and simplification of form. One outcome of modernist architecture is that it produced large housing estates with many buildings built internally and externally uniform. The central vision of many Modernist estates was to produce easily reproducible, identical living units which would satisfy and reproduce communities ravaged and displaced from their terraced estates, destroyed in WW II.
As she returns from the salon, made over as Madeline, she too has a chance to escape. She ignores the warning and agrees to continue her charade.
Parallelism and Familiar Image
The technique of familiar image is not limited to the fire escape sign. This sort of parallelism pervades the entire film.
Note the matching shots of archways at the Spanish mission before and after Madeline’s death. By bookending it, Hitch suggests the event would have happened regardless of Scottie’s actions, because destiny dictates it (and the filmmaker’s vision demands it). The first archway shot is fate predetermined; the second is fate fulfilled.
Note also the parallelism of Madeline and Judy appearing in various hotel windows. The shot of Madeline in the window of the McKittrick Hotel parallels the shot of Judy in the window at the Empire Hotel (another clue to proactive viewers). If you consider Hitch in terms of the auteur theory, these window images also resemble the window shots of Mrs. Bates in Psycho and any number of window shots of the neighbors in Rear Window.
Hitchcock also uses parallel camera movements behind desk lamps (literal “light bulbs”) as a symbol of telling the truth. First, the camera rounds a desk lamp in Scottie’s apartment as Madeline lies to him. Scottie “thinks” he’s had a light-bulb moment. Later, the camera similarly rounds a desk lamp in Judy’s apartment as she reveals the truth in a handwritten note. This time, the camera rounds the desk lamp in the opposite direction, as if to say this is the actual light-bulb moment.
What Lies Above
Of all of Hitchcock’s familiar images, though, one in particular puzzled me the longest: recurring shots of various ceilings.
We end up with a silhouette of her head in a close-up. Without moving the camera, and with careful lighting, a simple scene becomes a memorable one. The movement is inside the frame as Lila breaks the depth of field of the shot. Previously, Hitchcock created a close-up out of a medium shot; this time the task is more difficult and much more impressive, as he turns a deep-focus shot into a close-up without any cuts.
In a two-shot, the dark figures of Sam and Lila decide to see the deputy sheriff, Al Chambers. A transition leads to the deputy walking down the stairs. The camera slightly pans to the left and fixes on a four-shot (Sam, Lila, Mrs. Chambers, and Mr. Chambers). As Sam updates the sheriff with the story, we switch to a three-shot. Only this time they aren’t standing next to each other. The side of Al Chambers’ face is in the foreground and his wife, on the left, is in the background. When Sam mentions Norman’s mother, the facial expression of Mrs. Chambers transforms into a look of panic and wonder. This shot is used to show the emotional reaction between the sheriff and his wife. After that we switch to a three-shot of Sam in the foreground, Lila in the middle-ground, and Mrs. Chambers in the background. Finally, after constant switching from the two-shot to the three-shot and gradually to one-shots, we end up with a low-angle shot of the sheriff and the spine-chilling line: “Well, if the woman up there is Mrs. Bates, who’s that woman buried out in Green Lawn Cemetery?” Hitchcock is involving the audience, moving us closer, building to more intimacy between the viewer and the characters. I like to call it the 4,3,2,1 scene.
The last line makes one question the existence of mother. Hitchcock is misguiding the audience. I bet a lot of the viewers were predicting a ghost story. The haunted mansion would fit that storyline, or maybe mother and Norman killed someone and made it look like mother died.
The audience is in the dark.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "The scene begins with what appears to be an innocent invitation from Norman to Marion Crane (Janet Leigh), the unsuspecting guest at the Bates Motel, to come into \"the parlor.\" The use of the word parlor--as in \"'come into my parlor' said the spider to the fly\"--establishes the tenor of the scene. The significance of this brief line becomes all the more apparent at the end of the film when Norman's \"mother\", who has by now consumed Norman's mind and soul, looks directly into the camera and says that \"she\" would not \"even hurt a fly.\"\nNorman brings Marion her dinner and invites her into his parlor. Notice Norman's reflection in the window.\nIn the parlor itself, Hitchcock begins his work. The room is small, barely big enough for the two chairs, the lamp table, coffee table, and chest that occupy it. On the lamp table is a Tiffany lamp, the only source of light in the room and thus the key light within the scene. The characters' positions within the room and how they are lit by this single source keys the audience to the characterizations.\nMarion, for instance, sits near and slightly behind the lamp. Her face is well lit, and she, like the lamp, appears to radiate a glowing warmth. Despite the fact that she has embezzled forty thousand dollars from her employer, she is not hidden in shadows of evil or consumed by the darker side of her nature. Leaving Marion in light indicates that redemption and atonement is possible. Indeed, at the conclusion of the scene, Marion has done an about face. While Norman does not know the details of her flight, the audience knows she intends to return the stolen money. Marion is also pooled in fill and high key lighting that creates a softness around her and suggests she is redeemable.\nMarion is surrounded by soft lighting. 
Note the round picture frame on the wall behind her.
Norman is harshly lit in a corner of the room. Note the angular picture frames behind him.
On the other hand, Hitchcock positions Norman far from the light source and slightly to one side. The effect is a harsh line--light and shadow--across Norman's face, re-emphasizing the clash of his dual personality (host/murderer, man/child, mother/son). Norman is also immersed in low key lighting.
In examining the opening sequence to Rear Window, the complexity of the gaze can be observed:
Although at first Jeff (James Stewart) has his back turned to the window, the preceding sequence invites the audience to watch the events unfolding from his window, positioning the viewers to voyeuristically engage with Jeff’s surroundings even when he is not. This immediately establishes the gaze of the audience both as an independent voyeur and an occupant of Jeff’s space, creating a sense of identification with the male protagonist. Once Jeff is on the phone, he gazes out the window, returning to the same voyeuristic vantage point as at the beginning of the scene. In both the gaze of the audience and Jeff’s, the woman across the way, scantily clad in the privacy of her home, dances and stretches as she moves around her room. The provocation of these actions comes only from her cinematic treatment; the fact that she is unknowingly being watched. The gaze is what sexualizes her. This exemplifies the bifold act of looking that Mulvey identifies, where both the male character and the audience participate equally.
This use of light also appears to give the film a painterly effect. In this scene the whole set appears to be two-dimensional as line and shape are emphasised in the ceiling roses and light shades, the walls seem to melt and merge, and the furniture takes on a cartoon-like appearance.
Another element of mise-en-scene used by German expressionist directors was shadows. German and Scandinavian folktales often included the Dämmerung – a world created by twilight in which objects could suddenly spring to life. (Titford 21) The flat, with its low lighting and large patches of darkness, appears to be a strange reality where the objects inside it and the building itself become eerily alive. Polanski uses a lot of shadow in the rooms and hall, as well as the shadows on the ceiling cast by the light shades in the living room and bedroom. This gives a visual effect of shapes emerging from the darkness, of the shadows becoming alive.
All the scenes inside the flat were shot in a specially made studio set. (Butler 75) Polanski said, “What I like is an extremely realistic setting in which there is something that does not fit with the real. This is what gives it an atmosphere.” (Butler 179) He had the living room and the bathroom reconstructed on a much larger scale for the later scenes. (Butler 76) This gives the audience the effect of seeing the room from Carol’s personal perspective. It is an attempt to show a character’s subjective reality on film. Representation of the subjective experience as art was one of the primary aims of Expressionism.
As he did with the lighting, Hitchcock shapes the scene in terms of contrasts. We see Marion sitting comfortably in her chair, leaning slightly forward, enjoying a sandwich Norman has made for her. Hitchcock places the camera near eye level so the audience sees Marion as two people might see each other while sitting and talking. There is nothing unusual in this. In fact, this particular angle provides the audience with a sense of normalcy and comfort in Marion's presence. Hitchcock, however, moves out of the comfort zone to shoot Norman from an unnaturally low perspective. These two camera angles in and of themselves mean nothing. Only when they are juxtaposed can any meaning be taken. The shift to Norman's angle suggests that Norman's world is skewed, off balance, out of kilter. We feel uncomfortable in this position because we are not used to viewing the world from such an angle, and to do so makes it difficult to extract meaning.
In Norman's parlor, we usually see Marion from the front, so her full face is in view.
However, the camera frequently moves to the left side of Norman, obscuring his other side.
Hitchcock's mise-en-scene probably does more to capture the intent of the characters than any one element in the entire scene. It also emphasizes the duality that exists not only within Norman Bates but within all of us. For instance, Marion is surrounded by scenic details that make her a sympathetic character, not without flaw--after all, she did steal forty thousand dollars--but certainly not one to condemn too harshly. She sits at ease in her chair. In front of her is a tray with a small meal prepared by Norman. On the tray is a pitcher of milk--not just a glass but an entire pitcher. The detail has less to do with the quantity of milk (Marion, in fact, drinks none of it) than it does with the pitcher itself.
This pitcher is white and has soft, graceful lines that suggest Marion's essential goodness.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-2", "d_text": "Behind and above her, the curved lines are repeated in a picture frame, and to screen right is the Tiffany lamp with its rounded shade glowing warm and alive. The walls behind her are likewise soft, brightly lit. Marion, especially with the light color of her dress, the curves in her hairstyle and her posture, adds to the sense that she is, or eventually will be, the victim.\nLike the camera angles, the \"picture\" of Marion in the parlor, has little significance until it contrasted with the image of Norman who sits opposite her. Like Marion, details that surround Norman suggest his true nature. Unlike Marion, Norman is immersed in straight lines, many of which are set at angles that create a sense of conflict rather than curved line harmony. While the curves of the milk pitcher help to frame the foreground of Marion, Norman's arms rest on his legs while he nervously interlaces his fingers. To screen left and behind Norman's right shoulder stands a chest with straight heavy lines, a contrast to the curved shade of the Tiffany. On the walls hang small framed pictures, but these pictures have straight frames, and while Marion is bathed in light, Norman wears dark clothing and, because of the lighting, casts long shadows that strike the walls and ceiling sharply like black blades slicing through the air.\nBut perhaps the most unusual and the most curious feature of this parlor, and yet the most graphic clue to the twisted mind of Norman Bates, is the stuffed birds mounted on the walls and standing on the table and chest. \"I like to stuff things,\" Norman says, his meaning all too obvious at the end of the film. 
Moreover, the birds present a rather frightening image in the parlor, as they hover around Norman like dark, sinister angels.
Throughout the film, snippets of the dual nature of humanity present themselves, and throughout the film, lighting, camera angle and mise-en-scene make their contributions to the total concept. Their presence in the movie is consistent and each shift is justifiable, yet nowhere in the film do these three elements come together with greater effect and with greater contrast than in Norman's parlor. The scene becomes like Hitchcock himself, who hides in plain sight within his own films. Our fun is looking for what lies in front of us. Hitchcock's fun is hiding it from us.
Photo credits: MCA Home Video.
Both the telephone and the Dictaphone seem to be her high-tech tools, used for both business and crime detection. They often link her to unresponsive men, however: her boss, the police. The heroine of The Spiral Staircase also has trouble using the phone.
Both the heroine's apartment and Elisha Cook's are full of the heavy Victorian design and bric-a-brac that will later dominate The Spiral Staircase. The city streets, and even the elevated station, are also full of an oppressive sense of past architecture looming over the lives of the characters. However, when we finally do get into a modernist design world, in the killer's apartment at the end, there is no sense of relief.
The hero is a civil engineer who has a vision of building better cities for people, full of light and air. This change of environment would liberate people from their oppressive architectural surroundings. It recalls the Bauhaus of the 1920's, back in Siodmak's native Germany.
The scene where the cabby has his cab up on a hoist for maintenance anticipates the dispatcher's office in Criss Cross.
Most of the slasher pictures of the ’70s, ’80s, ’90s and ’00s overdo it with frequent kills every other scene instead of building up the murder scenes with character development. Therefore, we end up with a bunch of characters we don’t much care for getting chopped to pieces. In “Psycho” it was never about the violence; it was always about the tension leading up to the violence.
Fade in: Sam and Lila sit worried in a smoky room. “Sam, he said an hour or less.” Sam: “Yeah, it’s been three.” As I said before, the pace is much faster in the second half. Hitchcock directs this half like it’s a sequel, requiring different editing methods. Likewise, time passes faster at Norman’s place. A medium shot of Norman standing in front of the clear black swamp. He’s already done cleaning the mess. Sam arrives and looks for “Arbogast.” He calls his name a few times with no luck. The medium shot becomes a close-up, again not through a cut but by Sam walking up to the lens. He curves his hands around his mouth and gives it his all. The call for Arbogast echoes into the next and same shot of Norman in front of the swamp. We move closer to him. As his head turns to the right, facing the camera, the camera pans to the left towards him. A very well-executed shot is the result, as we end with a close-up of a chilling expression on Norman’s shadowy face. He’s looking at his motel.
Shortly, I’ll suggest why.
Smalley and Weber recast Griffith’s parallel editing in several ways. For instance, The Lonely Villa prolongs the phone conversation between husband and wife, building suspense through the husband’s instruction to use his revolver on the thugs. Suspense, by contrast, doesn’t dwell on the telephone conversation but devotes more time (and shots) to the chase along the highway. That’s because Weber and Smalley have complicated the chase by having the husband pursued by the irate motorist and the police, something that doesn’t happen in the Griffith film.
Just as important, Smalley and Weber revise the crosscutting schema through framings that are quite bold for 1913. For example, Griffith’s tramps break into the house in long shot, and they move laterally across the frame.
But Weber and Smalley’s tramp sneaks steadily up the stairs, into a menacing extreme close-up.
Elsewhere, Suspense gives us close views of the wife and of the door as the tramp breaks in. There are oblique angles on the back door of the house, and virtually Hitchcockian point-of-view shots when the wife sees the tramp breaking in and he looks straight up at her.
What struck me most forcibly on watching the film again was the way in which Weber and Smalley’s daring framings serve as equivalents for parallel editing. In effect, they revise the crosscutting schema by putting several actions into a single frame. The most evident, and the most famous, instances are the triangulated split-screen shots. They cram together three lines of action: the wife on the phone, the husband on the phone, and the tramp’s efforts to break into the house (here, finding the key under the mat). (6)
Split-screen effects like this were common enough in early cinema, especially for rendering telephone conversations.
Eileen Bowser points out that the three-frame division was one variant, with a landscape separating the two callers.", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "Figure 1. ‘Rope Movie Poster'\nWhat has to be one of the masters of suspense filled movies, Alfred Hitchcock’s “Rope” (1948) is certainly a theatrical performance that will draw the audience in, push them away, and draw them in once again.\nFigure 2. 'Viewing the killing'\nBased on a single set of a lavish apartment, “Rope” (1948) is a rather audacious film about a subtly-hinted-about homosexual couple who commit the act of murder within this apartment, and then host a dinner party with the victims loved ones whilst the corpse is hidden in a nearby chest. The terms ‘theatrical’ and ‘audacious’ come to mind when watching this film, as it is portrayed to the audience with an exaggerated manner, acting-wise, much like a play production being executed on a live stage. By live, it is also being referred to the fact that there are no jump cuts within the film (or so it may seem), just one prolonged take. The cameras movements are hardly smooth, they are somewhat unsteady and have a tendency to waver. As Pamela Hutchinson observes, “This clunkiness can be part of the film's claustrophobic strength though: the coffin-chest is rarely out of shot, and the camera follows the actors around every square inch of the confined set. They're trapped, and so is the audience.” (Hutchinson, P. 2012)\nFigure 3. 'Brandon and Philip being questioned by their former tutor'\nThe audience is certainly made to feel a bit on edge, but that is what Hitchcock wants to achieve in his work. Hitchcock wants to torture the audience and have the suspense ‘kill them’. The effect of creating this piece using (or creating the illusion of) real time is daring, therefore it leaves the audience constantly waiting until the point where they shout towards the screen, “They’re going to find out!”. 
As Vincent Canby states, “In ''Rope,'' Hitchcock is less concerned with the characters and their moral dilemmas than with how they look, sound and move, and with the overall spectacle of how a perfect crime goes wrong.” (Canby, V. 1984) Much like the main character Brandon, who was one half of the pair who committed the murder, Hitchcock thought himself to be rather clever.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-2", "d_text": "He's a man without a sense of space or home (recall the aforementioned unfurnished Malibu pad). Even the armored truck heist that is Heat's inciting incident takes place underneath the interchange of the 10 and 110 freeways—a space ignored by the thousands of commuters that travel right over it every single day.\nOn the other hand, the first we see of Hanna is in the most private of all domestic spaces. He's in bed with his wife, but the home he inhabits is a \"dead-tech, post-modernistic, bullshit house\" of steel railings and glass bricks. It's a post-modern architectural disaster somewhere in the Hills, seemingly inspired by the midcentury modern architects who would give 20th Century Los Angeles their character but without any of Richard Neutra or Pierre Koenig's sense of utopic order. His office, which Hanna would probably admit is more of a home to him, is also an unwelcoming place. Hanna would much rather be on the street where the action is instead of suffocating within a concrete tomb embedded within LAPD headquarters.\nEven during Heat's much-celebrated bank heist and gun battle, Mann uses downtown's architecture for this effect. It's an odd comparison, but my mind often makes the connection to Jacques Tati's 1968 masterpiece PlayTime. 
An overhead shot of the bank lobby (Far East Bank at Two California Plaza) right before the mayhem breaks out, with the bank's towering windows and compartmentalized organization, recalls Monsieur Hulot's view as he's startled by the oppressive geometry of an office in a Parisian skyscraper.\nTo heighten this effect in PlayTime, Tati constructed enormous, expensive sets and models to stand in as the offices and lobbies of Paris' towers, using matte gray as a color scheme for all the trimmings. (The film was a commercial failure and left Tati in debt.) Despite the order and efficiency imposed by the hyper-modern buildings of Tati's film, the spaces are cold and frigid. In PlayTime technology and the skyscrapers of a modern Paris have drawn people apart while they have incongruously made the city center more dense.\nBut despite this dim view of modernity, Tati's PlayTime offers a glimmer of optimism; one where we as individuals can break free from the constraints of modern space to celebrate a shared humanity we have with each other.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-11", "d_text": "But we see the effect it’s having on Mrs. Verloc, as she eyes the knife she’s using to cut the roast.\nMr. Verloc sees her eyeing it and knows what it means. He approaches her and she grabs the knife to keep it from him, stabbing him in the struggle. Now we have a new kind of tension. Mrs. Verloc will have to be arrested for murder.\nOur undercover detective arrives first, and reveals the feelings he has for Mrs. Verloc before discovering the body. He knows what it means, and Mrs. Verloc urges him to take her to the police station, but on the way, he suggests they flee, telling her that there are 12 hours before anyone will find the body. Hitchcock sets another timer. This one immediately expires with two scenes. 
One in which the pet shop owner’s wife demands he go to Verloc’s theater and get the bird cage to avoid having the bomb connected to them, and another where the police decide to go to the theater and arrest Verloc that evening.\nHere Hitchcock lets the audience get a little bit ahead of the characters again and creates a new kind of tension. If the pet shop owner gets to the theater first and the police find him there, he will easily be blamed for the murder and Mrs. Verloc will be free.\nHere’s a part of the film that’s really confusing. The police decide that Verloc might have another bomb. They start telling everyone that there’s a bomb. The pet shop owner discovers the dead body, and rather than being caught with it, he decides to blow up the entire building with another bomb he had on him. Why did he have another bomb on him? I have no idea. But it ties up the plot nicely, with the police’s suspicions confirmed.\nThe film ends with the chief inspector trying to piece it all together, and deciding that Mrs. Verloc must have had nothing to do with it. Here we get another common Hitchcock moment, where a character is either freed or condemned through coincidence or misunderstanding. Hitchcock clearly exists in a chaotic world.\nThis is a really well-done film. It’s pretty straightforward, but has enough twists and interesting characters to keep things entertaining. Hitchcock does a great job of ratcheting up the tension. The layering of various tensions over and over is especially well done.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "Putting their philosophy of superiority to use, and proving to themselves that they can get away with committing the “perfect murder”, college mates Brandon and Phillip (John Dall and Farley Granger) strangle their friend David Kentley to death in their apartment and store his body in a chest. 
The two are hosting a party later on with guests such as David’s father, his aunt, his girlfriend Janet, friend Kenneth, and Brandon and Phillip’s former school teacher Rupert Cadell. Adding insult to injury, the two use the chest David’s body is stuffed in as a makeshift buffet table for the party guests. The invitation of Cadell both angers and frightens Phillip, as he fears Rupert is the one person likely to suspect something, while Brandon assures him that Rupert would be the only one to appreciate what they’ve done. Sure enough, as the party goes on and David still doesn’t show up, the guests get increasingly worried, and with Phillip’s behavior becoming more and more erratic, Rupert begins to have suspicions of his own.\nI think at this point in time calling a Hitchcock movie intense is like saying it snows in the winter. Still, even by Hitchcock standards Rope is a nerve-racking experience. It should come as no surprise to Hitchcock fanatics that he was able to create such tension in a limited setting, as he did it a few years earlier with Lifeboat, but it’s still astounding he was able to achieve the mood that he did when you take into consideration that aside from the opening credits sequence, the film never leaves Brandon and Phillip’s apartment. There are no action scenes, no chase sequences or anything like that, nothing blows up. It’s obviously not a whodunit, as the culprits are clear as day, as is their motive, so I’m not spoiling anything here. The suspense comes from the constant reminder that just a few feet away from any one person in front of the camera there is a dead body lying in a chest, and from the dialogue and interactions between Brandon, Phillip and the party guests, especially Rupert. You’ll wince every time a character even glances towards that chest, and there’s one very brief moment involving the housekeeper that’s sure to make your heart skip a beat. 
Whenever the fact that David is late to the party is brought up, it’s quite gripping to see how Brandon, and especially Phillip, react.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-4", "d_text": "Featuring James Stewart in the leading role, Rope was based on the real Leopold and Loeb case of the 1920s; it is also among the earliest openly gay-themed films to emerge from the Hays Office–controlled Hollywood studio era.\nWith Strangers on a Train (1951), based on the novel by Patricia Highsmith, Hitchcock combined many of the best elements from his preceding British and American films. Two men casually meet and speculate on removing people who are causing them difficulty. One of the men, though, takes this banter entirely seriously. With Farley Granger reprising some elements of his role from Rope, Strangers continued the director's interest in the narrative possibilities of homosexual blackmail and murder.\nMCA head Lew Wasserman, whose client list also included James Stewart and Janet Leigh among other actors who would appear in Hitchcock's films, had a significant impact on the packaging and marketing of Hitchcock's films from the 1950s on. With Wasserman's help, Hitchcock not only received tremendous creative freedom from the studios, but with Paramount's profit-sharing contract, Hitchcock received substantial financial rewards as well.\nThree very popular films, all starring Grace Kelly, followed. Dial M for Murder (1954) was adapted from the popular stage play by Frederick Knott. This was originally another experimental film, with Hitchcock using the technique of 3D cinematography, although the film was apparently never released in this format at first; it did receive screenings in the early 1980s in 3D form. Rear Window starred James Stewart again, as well as Thelma Ritter and Raymond Burr. 
Here, the wheelchair-bound Stewart observes the movements of his neighbours across the courtyard and becomes convinced one of them has murdered his wife. Like Lifeboat and Rope, the movie was photographed almost entirely within the confines of a small space: Stewart's tiny studio apartment overlooking the massive courtyard set. To Catch a Thief, set in the French Riviera, starred Kelly and Cary Grant.\n1958's Vertigo again starred Stewart, this time with Kim Novak and Barbara Bel Geddes. The film was a commercial failure, but has come to be viewed by many as one of Hitchcock's masterpieces.\nHitchcock followed Vertigo with three very different films, which were all massive commercial successes.", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-2", "d_text": "It has infrastructure, in shops, bars, street restaurants, public and private transport, hell it even has parking meters. This is where Blade Runner’s strengths in design stem from, the attention to the real, and the real obstacles thrown in the way of our hero.\nRidley Scott – Blade Runner\nWhen Deckard finds himself cornered by Roy in J.F.’s flat near the end of the film, Scott presents him with very real physical barriers. In fact, so real are these challenges that most of the tension arises from watching him overcome them, rather than being chased down by the murderous replicant. We watch him push through rotting floors and scale the outside of skyscrapers – scenes that aren’t all that uncommon in the ever popular action franchises. 
Consider similar sequences in The Fifth Element, in itself not a bad movie, but one which holds little tension compared to Blade Runner because the logic of the world created wasn’t built from the ground up, introducing its hostile highs before its mundane lows, and so fails to be truly relatable.\nRidley Scott – Blade Runner\nLuc Besson – The Fifth Element\nA created cinematic world doesn’t have to be fantastical however; Hitchcock’s Rear Window employs the same tropes to great effect. Where Blade Runner’s success is with relatable public space, Rear Window’s is with private, regarding the relationships between locked rooms and their equally locked residents. This compartmentalisation of characters and plotlines within separate rooms, either adjoining or across a courtyard, builds an innate tension, in turn breeding paranoia and perversion, leading Hitchcock to subvert his own created worlds; building in methods of spying and having characters enter and leave rooms through non-traditional modes.\nAlfred Hitchcock – Rear Window\nHitchcock doesn’t limit this to Rear Window; in Psycho, Norman spies on Marion as she undresses via a peephole in his parlour wall, a symptom of the psychosis yet to be revealed. 
This voyeurism is only heightened in Rear Window, a story centred around the very act of watching, with the windows of Jeff’s neighbours representative of movie screens, each with their own cast of characters and plot.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-1", "d_text": "submarine: The Crimson Pirate)\nRobert Siodmak: Visual Style\nArchitecture and Design:\n- Victorian decor (heroine's apartment, Elisha Cook's apartment: Phantom Lady,\nhome: The Suspect,\nmansion: The Spiral Staircase)\n- Staircases (station: People on Sunday,\nprison, asylum, outdoor stairs to elevated: Phantom Lady,\nsteps leading to throne: Cobra Woman,\nhome: The Suspect,\nhotel, three staircases at mansion: The Spiral Staircase,\nSwede's, entrance to room with shallow steps, mansion: The Killers,\nnightclub with descending levels, sloping exit of armored car company, staircase at Lancaster's, Angel's Flight: Criss Cross,\nshallow steps of hero's office building: The File on Thelma Jordon)\n- Windows used for spaces in frame (window overlooking throne room: Cobra Woman,\ndiner window into kitchen: The Killers, police car window in last shot: Cry of the City,\nwindow into van of truck: Criss Cross, window behind painting: The Crimson Pirate)\n- Constructions (railing in Berlin: People on Sunday, boxing ring poles and ropes: The Killers,\nhospital equipment at end: Criss Cross)\n- Counters (lunch stand: People on Sunday, Anselmo's bar: Phantom Lady, diner: The Killers)\n- Glass walled rooms (gas station: The Killers, hospital ward: Cry of the City)\n- Sliding doors (diner window into kitchen: The Killers, slum apartment, elevator: Cry of the City,\nsliding painting: The Crimson Pirate)\n- Light flickering outside windows (lightning: The Spiral Staircase,\ncar lights from city streets: Cry of the City)\n- Mirrors (mirror behind bar, Franchot Tone's dressing room: Phantom Lady,\nlarge circular mirror in dressing area: Cobra Woman,\nmansion staircase: The Spiral 
Staircase,\nover diner window at start, tilted mirror in restaurant finale: The Killers,\nhospital: Criss Cross)\n- Concentric circles (car wheels at garage: People on Sunday,\nsnake container on round platform, throne seat, gong: Cobra Woman,\nwindow and arch on stair landing: The Suspect,\nshot down spiral staircase: The Spiral Staircase,\nattorney's window: Cry of the City,", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-2", "d_text": "First, the use of a thin window frame profile helps to minimize the physical barrier between the inside and the outside. But the next element that starts to make the mind wonder is how the roof is held up, as we know the thin window frames will hardly have the strength to hold up the concrete roof.\nSo by playing with different materials and structural elements, we are able to generate illusions that add a bit of spice to create a unique corner for the home.\nIn this example here, the effect is again translated through the corner glass sliding door and the feature wall at the end. By having the opening at the corner, it helps to highlight the floating ceiling, but the effect is amplified further through the wall at the end, which stops short of the ceiling, thereby teasing the user further with this “floating” effect.\nYou can see then that through these architectural design techniques, we are able to create uniqueness to the space that will leave an impression… and such is the power of Architecture.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-48", "d_text": "During this time, Shore was consumed with scoring duties for Peter Jackson’s THE LORD OF THE RINGS trilogy, so PANIC ROOM was an assignment taken on precisely because of its low musical demands.\nAs it turns out, Shore’s work in PANIC ROOM is generally regarded as some of his best and most brooding. 
The score is complemented by a superb sound mix by David Fincher’s regular sound designer, Ren Klyce.\nWhen done right, genre is a potent conduit for complex ideas and allegory with real-world implications. PANIC ROOM is essentially about two women fending off three male home invaders, but it is also about much more: the surveillance state, income inequality, the switching of the parent-child dynamic… the list goes on.\nA visionary director like David Fincher is able to take a seemingly generic home invasion thriller and turn it into an exploration of themes and ideas. For instance, PANIC ROOM affords Fincher the opportunity to indulge in his love for architecture, letting him essentially design and build an entire house from scratch.\nThe type of architecture that the house employs is also telling, adopting the handsome wood and crown molding of traditional brownstone houses found on the East Coast.\nArchitecture also serves an important narrative purpose, with the story incorporating building guts like air vents and telephone lines as dramatic hinging points that obstruct our heroes’ progress and build suspense.\nAgain, David Fincher employs low angle compositions to reveal the set ceiling in a bid to communicate the location’s “real-ness” as well as instill a sense of claustrophobia.\nFincher’s fascination with tech is woven directly into the storyline, which allows him to explore the dramatic potential of a concrete room with a laser-activated door and surveillance cameras/monitors.\nThe twist, however, is that despite all this cutting-edge technology (circa 2002, provided), both the protagonists and the antagonists have to resort to lo-fi means to advance their cause. 
Another aesthetic conceit that David Fincher had been playing with during this period is the idea of micro-sized objects sized up to a macro scale.\nIn FIGHT CLUB, this could be seen with the shot of the camera pulling back out of a trashcan, its contents seemingly as large as planets.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-13", "d_text": "The scene is a montage of a sequence of shots showing Arbogast in different hotels, which suggests the passage of time. Finally, Arbogast reaches the Bates motel.\nArbogast investigates right away. He makes the purpose of his visit clear and shows Norman a picture of Marion. Naturally, Norman is scared and tries to end their conversation as soon as humanly possible. “Well, no one’s stopped here for a couple of weeks.” Arbogast insists he take a look at the picture before “committing” himself. This is acting at its best. At first, Norman is relaxed offering his candy. Gradually as the pressure build up, Perkins’s performance intensifies. Arbogast catches a lie when Norman mentions a couple visiting “last week” and asks to take a look at the register. Perkins chews faster and harder on the candy (the candy was his idea). Norman takes another look at the picture and admits she was here but he didn’t recognize the picture at first because her hair was all wet. The showering of questions heightens the pressure and Perkins drives his performance into iconic status. We get it all complete with facial tics and stuttering words.\nBeing the great private detective that he is, Arbogast gets a more complete story by cornering Norman with questions. Moments later he spots the shadowy old woman in the upstairs mansion window. More of Norman’s lies are fished out and Arbogast takes another direction. He pressures Norman with the “let’s assume” method. To which, Norman mistakenly slips the words “Let’s put it this way. She may have fooled me but she didn’t fool my mother.” Now, Arbogast wants to meet the mother. 
To Norman that’s crossing the line, and so he asks him to leave.\nA phone call later, the private-eye returns to the motel to satisfy his suspicions. The sequence leading up to his murder mirrors that of Marion since both enter Norman’s parlor prior to their deaths. We also get the stuffed birds shots, only for some reason Hitchcock reverses them with the crow shot first and the owl afterwards. Nevertheless, the viewer is put in the same uncomfortable mood.\nArbogast goes up to the mansion, and step by step climbs the stairway.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-1", "d_text": "This culminates dramatically as Cobb watches helplessly as Mal jumps from a window to her death. It can also unify; for example, a bond of trust is made between Luke and Leia as they swing to safety across a man-made chasm in Star Wars – A New Hope. In fact, plot can be driven so much by architecture that it can serve as a door to entire genres and subgenres: the courtroom drama, the office comedy, the haunted house.\nComing back to Pevsner’s statement on the emotional captivity of buildings, Ridley Scott’s Blade Runner certainly encapsulates this idea. Set in a future LA, where society relates status to architecture, with the God-like powerful dwelling in pyramids (see also Metropolis’s New Tower of Babel), all the way down to the disease and dereliction of the streets below, resulting in a society built vertically, both metaphorically and literally. The opulence of Tyrell’s pyramid set against the poverty of those the pyramid stands upon mirrors many of the film’s central themes: religion and man, design versus nature, and inequality, which are in turn personified in the naturally ‘defective’ J.F. 
Sebastian and the unnaturally faultless Roy Batty.\nRidley Scott – Blade Runner\nFrom Deckard’s claustrophobic, cave-like apartment, to Hannibal Chew’s frozen grotto of a laboratory, Scott’s vision of the future does a good job at making us feel trapped, but it’s the street scenes that hold the most subterranean echoes. Drawing heavily from designs by visual futurist Syd Mead and completely masking large sections of Warner Bros’ back-lot, the street sets of Blade Runner’s LA still hold up as one of the best, and to many most accurate, depictions of future urban living put to celluloid. Pipework, steam-gushing vents and neon smother everything; watching these scenes, one can’t help but think of Bowellism and of Rogers and Piano’s Pompidou Centre in Paris. Pre-existing architecture is bastardised too; heavy, cartoonish columns mask the clean lines of the Bradbury building. All this accumulates to oppress the characters and ultimately the audience, as if we’re squeezing through the veins of a city that no longer welcomes humans. The city, however, still functions with measurable logic.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "by Nevin Hooper\nHello, once again, fans of classic movies. It is now October – the month of horror movies, thrillers, and other suspense films leading up to Halloween. So, since this month is full of movies of those genres, why not look over one of the best directors of all time, the Master of Suspense himself, Alfred Hitchcock? For this series I will write reviews of four of his movies in chronological order. I will start with Rear Window, followed by Vertigo (1958), then North by Northwest (1959), and finally The Birds (1963). So let’s start talking about one of my personal top five favorite films, Rear Window.\nIf you are not familiar with my grading system, it is simple. 
It goes from a 0/10 (movies that are so horrible that they should have never been made), to an 11/10 (movies that are so amazing, so fantastic, that you just can’t give the average 10/10 grade).\nThis Hitchcock masterpiece focuses on a man named L.B. “Jeff” Jeffries (James Stewart, in my personal favorite performance of his) who is a famous newspaper photographer. After breaking his leg in an accident, he is confined to a cast and wheelchair in his apartment. Since he has nothing else to do, he enjoys watching his neighbors outside of his window in his surrounding big apartment complex. One day he is certain that one of his neighbors has murdered someone, and with the help of his friend Lisa (Grace Kelly) and nurse Stella (Thelma Ritter), he goes against everybody’s judgment to figure out if the murder is just a hoax or a real threat.\nThis movie is by far the most nerve-wracking film I have ever seen in my entire life. I have never been so on the edge of my seat while watching this movie. The last thirty minutes in this film are probably Hitchcock’s most suspenseful scenes to date. The beauty of this film is you stay in one location throughout the entire runtime of the movie. The entire film never leaves the apartment, so you never see what is beyond it other than a few cars passing by in a little gap in between buildings. Since Jeffries is stuck with this limited view, he is entirely helpless and the entire movie feels confined.\nThis is such a well done confined thriller. You are stuck in this one area.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-0", "d_text": "Whenever I would hear about Alfred Hitchcock, I would always listen to people say what a genius he was, that his style of filmmaking was different from anyone else’s. I really had no clue what they were talking about. I used to watch his half-hour TV shows on Nick at Nite when I was a kid, and I don’t even remember them very well. 
When we watched Vertigo in class I got my first real taste of this director’s work. It was a really good movie. It definitely held my interest. But class was just starting, and other than it just being a good story, I didn’t appreciate what made him such an innovative director.\nPsycho was the first movie I watched on my own, looking for certain areas of the film that make it stand out from other movies I have seen before. My sister and I rented it, bought some candy and popped it in the DVD player.\nI already knew the basic story. A guy has a motel, and a woman gets killed in a shower. If I was not in this film class, that is all I would have seen. Since I have been learning what to look for in a movie I saw so much more, and can better understand why so many people say Hitchcock was so ahead of his time.\nTension is what I felt from the movie, from the second it started. The music makes me feel this way. It gives me the feeling of fingernails scratching on a chalkboard. Also, in the opening credits all you see is stark, straight horizontal lines going in and out. The opening scene is one of romance, and yet has very drastic colors. Nothing is soft or rounded. The background is...", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-2", "d_text": "Here it is:\nIt starts, as many shots do, with a very quick dissolve:\nThis reveals Mabuse leaning over the countess - they hold this pose for about 5 seconds:\nThen the countess makes a break for it, and Mabuse grabs her:\nThey struggle - and come to a halt, and hold this position (a very tense, violent pose, actually, Mabuse basically pinning her there) for a few seconds:\n...before she makes another break...\n... 
which leads to the cut - to a blank door, and the countess bursting into the frame:\nIt's a powerful effect - the alternation of long and short shots, of slow, deliberate movements and gestures and quick, violent movements; integrated with the variations in shot scales - long shots and closer shots alternating, shots of big spaces and tight spaces; even the varying transitions - short dissolves, longer dissolves between shots, alternating with abrupt cuts; and the variations on how the cuts come - cuts to empty spaces that people jump into, say... Everything aimed at generating tension, and doing it...", "score": 8.086131989696522, "rank": 99}]} {"qid": 43, "question_text": "When did Germany become a unified country, and what cultural challenges did this unification bring?", "rank": [{"document_id": "doc-::chunk-13", "d_text": "The king, like the other rulers of Germany’s kingdoms, opposed German unity because he saw it as a threat to his power.\nDespite the opposition of conservative forces, German unification came just over two decades later, in 1871, when Germany was unified and transformed into an empire under Emperor Wilhelm I, king of Prussia. Unification was not brought about by revolutionary or liberal forces, but by a conservative Prussian aristocrat, Otto von Bismarck. Sensing the power of nationalism, Bismarck sought to use it for his own aims, the preservation of a feudal social order and the triumph of his country, Prussia, in the long contest with Austria for preeminence in Germany. By a series of masterful diplomatic maneuvers and three brief and dazzlingly successful military campaigns, Bismarck achieved a united Germany without Austria. 
He brought together the so-called “small Germany,” consisting of Prussia and the remaining German states, some of which had been subdued by Prussian armies before they became part of a Germany ruled by a Prussian emperor.\nAlthough united Germany had a parliament, the Reichstag, elected through universal male suffrage, supreme power rested with the emperor and his ministers, who were not responsible to the Reichstag. Although the Reichstag could contest the government’s decisions, in the end the emperor could largely govern as he saw fit. Supporting the emperor were the nobility, large rural landowners, business and financial elites, the civil service, the Protestant clergy, and the military. The military, which had made unification possible, enjoyed tremendous prestige. Led by an aristocratic officer corps sworn to feudal values and opposed to parliamentary democracy and the rights of a free citizenry, the military embodied the spirit of the German Empire.\nOpposition to this authoritarian regime with its feudal structures was found mainly in the Roman Catholic Center Party, the Socialist Party, and in a variety of liberal and regional political groups opposed to Prussia’s hegemony over Germany. In the long term, Bismarck and his successors were not able to subjugate this opposition. By 1912 the Socialists had come to have the largest number of representatives in the Reichstag. They and the Center Party made governing increasingly difficult for the empire’s conservative leadership.\nDespite the presence of these opposition groups, however, a truly representative parliamentary democracy did not exist. As a result, Germans had little opportunity to learn the art of practical politics.", "score": 53.52159075219527, "rank": 1}, {"document_id": "doc-::chunk-3", "d_text": "During the Middle Ages, Germany consisted of a series of small kingdoms and principalities, often rivals, and often even at war with one another. 
The language which they all shared was German, but the people differed on matters of religion, so much so that these differences occasionally erupted into wars between the Catholics and the Protestants. In the mid-nineteenth century, Bismarck (the Minister President of Prussia, the largest German state) made it his objective to unify the various German states. This he achieved by judicious policies, arranging marriages between various royal families and obtaining treaties which were mutually beneficial to the parties concerned. By the end of the nineteenth century, Germany was united under one monarch, Kaiser Wilhelm I; it possessed colonies in Africa and was ruled by an Emperor (the German term Kaiser is derived from the Latin word Caesar).\nWorld War I, in which Germany fought against France and England from 1914 to 1918, was largely a result of the structural weakness of many European states and the growing military and economic strength of Germany. After four years of bitter fighting, Germany was defeated, the Kaiser fled to Holland, and a peace treaty, the Treaty of Versailles, was drawn up. This stripped Germany of its foreign colonies, imposed heavy economic penalties on the country in the form of fines and disarmament, and changed many of the borders of the countries of Europe. This policy gave rise to severe economic problems in Germany. Hunger and poverty were widespread, and galloping inflation caused prices to rise at a dizzying rate. 
The middle class, which had been the chief support of the German Republic established after World War I, became embittered, and many Germans longed for the old autocratic kind of government that had formerly dominated the country.

It was during the years after World War I that Adolf Hitler, a house painter who had experienced the bitterness of defeat as a soldier in the German Army, developed his ideas of the Master Aryan Race, the need to rid Germany of “inferior” peoples, such as Jews and Gypsies, and the need to expand Germany's borders and build a Germany that was militarily strong. He gathered around him a group of people who supported his ideas and used the tactics of bullying and terrorism to obtain publicity and intimidate his opponents.

And the revolutionaries’ colors–black, red, and gold–became firmly ensconced as the colors of German democratic and liberal aspirations.

Unification and Imperial Germany

German nationalism developed into an important unifying and sometimes liberalizing force during this time, though it became increasingly marked by an exclusionary, racially based definition of nationhood that included anti-Semitic tendencies. However, the eventual unification of Germany was essentially the result of Prussian expansionism rather than the victory of nationalist sentiment. Prussia’s economic growth outstripped Austria’s during the latter half of the 19th century, and Prussia-controlled Germany became one of Europe’s industrial powerhouses. Under Chancellor Otto von Bismarck, Prussia defeated Austria (1866) and France (1870) in wars that paved the way for the formation of the German Empire under Emperor Wilhelm I in 1871.
Germany became a federal state, with foreign and military policy determined at the national level, but most other policies remained the purview of the states.

Internally, Bismarck waged a struggle against Catholicism, which he viewed as an agent of Austria (ironically, this anti-Catholic campaign eventually failed and actually ended up consolidating a lasting political role for Germany’s Catholics), and tried to both co-opt and repress the emerging socialist movement by passing the age’s most progressive social insurance and worker protection legislation while clamping down on Socialist activities. Externally, Bismarck moved to consolidate the stability of the new Empire, launching a string of diplomatic initiatives to form a complex web of alliances with other European powers, to ensure that Germany did not become surrounded by hostile powers and to avoid Germany’s involvement in further wars.

However, Emperor Wilhelm II disagreed vehemently with Bismarck, sacking him in 1890. Wilhelm II had ambitious aspirations for Germany, including the acquisition of overseas colonies. His dynamic expansion of military power and confrontational foreign policies contributed to tensions on the continent. The fragile European balance of power, which Bismarck had helped to create, broke down in 1914. World War I and its aftermath, including the Treaty of Versailles, ended the German Empire.

The Weimar Republic and Fascism’s Rise and Defeat

The postwar Weimar Republic (1919–33) was established as a broadly democratic state, but the government was severely handicapped and eventually doomed by economic problems and the rise of the political extremes.

I. Historical and Legal Background

A. German History

The FRG, as it “exists today[,] is the product of a long, contentious, and disparate history.” Martin A.
Rogoff, The European Union, Germany, and the Länder: New Patterns of Political Relations in Europe, 5 Colum. J. Eur. L. 415, 417 (1999). For many centuries, German territory, then part of the Heiliges Römisches Reich (Holy Roman Empire or “First Reich”), consisted of “several hundred discrete political units.” Id.

The modern German nation-state was formed in 1871, when most of these units were “united into one [centralized] state under the leadership of [the Kingdom of] Prussia,” id., a monarchy, and officially called the Deutsches Reich (“German Reich” or “Second Reich”). In 1919, after World War I ended, the King of Prussia abdicated his throne and the Deutsches Reich was declared the Weimar Republic. The German Reich was made up of several states or Länder, the largest of which was the “Free State of Prussia,” Michael Stolleis, A History of Public Law in Germany, 1914–1945, 108-09 (Thomas Dunlap trans., Oxford Univ. Press 2004), that encompassed territory later comprising both West Germany and East Germany.

In 1933, as a result of growing discontentment with the Weimar government, the Nazi Party rose to power and Adolf Hitler was appointed Chancellor of Germany. Over the next few years, the Drittes Reich (“Third Reich”) formally abolished the Länder parliaments, Rogoff, 5 Colum. J. Eur. L. at 418, and became a centralized totalitarian state. In 1939, under Hitler's dictatorship, the Third Reich invaded Poland, thereby beginning World War II in Europe, and in 1941 it attacked the Soviet Union and declared war on the United States. See generally William L. Shirer, The Rise and Fall of the Third Reich: A History of Nazi Germany (1960).

Solution: Daily Problem Practice [World History: Week 16] – 29 January

Q. What were the factors responsible for the rapid industrialisation of Germany after 1870? How did the industrialisation process of Germany differ from that of Britain?
[20 Marks]

The Industrial Revolution began about a century later in Germany than it did in England. Before 1870 Germany was not properly united, because of the power struggle, mainly between Prussia and Austria, that was occurring at the time. This disunity did not provide for a stable or flourishing economy.

Factors responsible for the rapid industrialisation of Germany after 1870:

- Unification of Germany:
  - In 1871 a united Germany was finally formed under Chancellor Bismarck, which brought the divided states together.
  - A new united country meant that goods and natural resources could be distributed across all of Germany faster than before. Business thrived because of the unification.
  - A unified country was coordinated in its actions, and therefore less vulnerable to political, social and military attacks, which lowered the costs and risks associated with owning a business.
- Government’s Role, Protection and Welfare:
  - The government supported not only heavy industry but also crafts and trades.
  - In 1879 industrial protection was introduced by applying tariffs to foreign imports. This encouraged trade, employment, and business.
  - The government accumulated a lot of money from tariffs imposed on foreign items, which allowed it to put money back into the economy and to introduce social welfare measures such as Health Insurance, Accident Insurance and Old Age Pension.
  - Social welfare (introduced for the first time by Bismarck) made people think twice about their grievances with the government, and deterred people from swinging toward the communist side of the political spectrum.
  - These measures also deterred the migration of skilled Germans to other countries like the USA.
- Contribution of Bismarck:
  - Firstly, he unified the country; secondly, he brought the economy into line; thirdly, he made sure it stayed that way and encouraged it; and fourthly, he prevented anything from hurting the economy badly.
  - Bismarck won the support of both industry and skilled workers with his high-tariff policies, which protected profits and wages from American competition, although they alienated the liberal intellectuals who wanted free trade.
  - Even before unification, his Blood and Iron policy included iron (which fuelled industrialisation).

Sandwiched between the stronger powers (empires) of France, Russia and the Austro-Hungarian conglomerate, Germany became a sort of jetty that dispersed the waves of desire of its neighbors by providing a complicated group of territories whose rulers were often at war or in allegiance with each other. All of this changed, and was bound to change the face of Europe dramatically, with the rise to power of Otto von Bismarck.

When Germany united, through Bismarck’s masterful abilities, some great timing and luck, it immediately proved too strong for Denmark, the Austro-Hungarian Empire, France, and eventually Russia. Only Great Britain could compete with and contain the new industrial and military behemoth. Germany’s ability to produce, organize and analyze was beyond comparison with other major countries then, and is still so today. This has caused a plethora of problems for Germany, and for her neighbors. It caused them then as it causes them now. Germany is, and has always been, the question of Europe.
It is still so today.

After a few embellishments, or half-truths (lies), Bismarck was able to unite the peoples who shared a common language, German, under a banner that would one day, at different points, terrorize, amaze and astound the rest of the globe.

Germany will continue to do so, though one hopes that the darkest days of the Third Reich are behind them. Many in Europe are not so sure, though it’s hard to determine if it’s politics being played or authentic fear. The quicker the rest of Europe (and the world) can resign themselves to (or rejoice in) the fact that a reunified Germany is a world player, the better off it will be.

Southern Germany, Austria and Switzerland would be a formidable economic union.

But how unified is Germany? Are there still many obstacles to overcome before the Germans can truly take their place in the Pantheon of Nations, as their robust economy demands?

From east to west in Germany, the differences are stark and clear. Certainly those Germans residing near the border with Poland can’t be very similar to those Germans abutted against France. And they are not. But they are both more German than the Bavarians, those strange, independent and wonderfully quirky Bavarians, who queerly and surely seem to share few of the qualities of their more northern kindred.

Last time, we looked at the historical context for artwork in late nineteenth-century Germany. In 1871, Germany officially became a unified country. This time, we’ll look at the cultural ramifications of the unification and how it impacted art.

Although German-speaking princes had been allied for centuries, the individual provinces needed to strengthen their commitment in order to counter military and economic competition from other countries such as Austria and France.
But just because the people in the new country spoke German and shared much in the way of cultural identity didn’t mean that they felt like a big happy family. And the disruptive forces of the industrial revolution did nothing to help the sense of confusion and frustration.

The people of the German Empire needed to ask themselves: what does it mean to be German? The imagery on Mettlach steins of the time offers some interesting answers to that question.

Some steins use inspiration from the distant past for their subject matter. This shows a pride in the traditions of German culture—with roots back to the Middle Ages—as well as nostalgia for a simpler time. This past is often called “altdeutsch” or “old German”. When it appears in art, it is called historicism.

For instance, at the beginning of this post is a beautiful stein that shows a medieval knight mounted on a majestic white horse enjoying a tankard of beer. The romantic notion of the past is echoed in the lid in the shape of a castle turret.

This large-scale pitcher shows successful German hunters in extravagant Renaissance dress—just look at that ermine trim! The man on this side is toasted by none other than the ancient goddess of the hunt, Diana, who sits on top of a beer barrel.

A noble couple in elegant dress from the 16th century is featured on this pitcher. The traditions of the past are celebrated by the man who is practicing falconry, using the bird of prey to hunt.

Historicism is not the only way that steins reveal the German exploration of identity. National symbolism can be seen in this Mettlach stein: the Imperial Eagle of Germany is front and center.

There is further pride in German modernity: this stein celebrates two very important technological developments that Germany embraced, the telegraph and the railroad.
The eagle holds telegraph poles with his claws—you can see the glass insulators at the top of the poles with the lines strung between them.

Particularly notable are the Austro-Prussian War of 1866, after which Italy gained Venetia, and the Franco-Prussian War of 1870–1871, which finally unified Italy with the gain of the Papal States and Rome. This final war also brought about the downfall of Napoleon’s nephew, Louis Napoleon or Napoleon III, as well as the unification of Germany. Unlike Italy, the German struggle for unity did not attract the same intense identifications and passionate romanticised zeal as the Risorgimento, perhaps because Germany was taken more seriously as an economic, modernized nation with stronger military and diplomatic power. As Maura O’Connor points out, the different fates of Germany and Italy in the nineteenth century illustrate the unevenness of economic and political change.

Unification of Italy, and its freedom from foreign occupiers, was achieved in 1861 through the military intervention of Napoleon’s nephew, Napoleon III, together with King Victor Emanuel of Sardinia (Piedmont), who became King of a united Italy through a series of plebiscites (Venice and Rome were not incorporated into the Kingdom until 1866 and 1870 respectively). But unification was both a cultural movement as well as a political phenomenon, and the example of Napoleon’s coronation demonstrates the fraught relationship between culture and politics. In addition, Napoleon’s adoption of the kingship of France underlines another key problematic at the heart of the concept of “Italy”: a belief in Italy as a national entity did not necessarily directly correspond with a movement to free the land from foreign rule.
Furthermore, those advocating for political independence did not necessarily want unity; as Martin Clark points out, for many Italian writers and intellectuals, Italy’s diversity meant that “a single Italian state seemed […] not only impossible but also undesirable” (4). Those more inclined towards any kind of national unity favoured federalism (4). But, although unification, liberty and independence—the attested trio of Risorgimento goals—were largely achieved in 1861, this still did not equate to creating Italy as a nation. As Clark observes, “If Italian identity is multiple now, it was even more multiple then” (8).

He knew that German unification was possible only through Prussia. To achieve this end he had two aims: firstly, to drive out Austria from the “German States’ Association”; secondly, rather than letting Prussia lose its identity within Germany, to convert Germany into Prussia. This meant extending Prussian culture and traditions, administrative machinery and military power all over Germany. Bismarck achieved the unification of Germany by his “Blood and Iron” philosophy. The “Blood and Iron” philosophy means war tactics.

NAPOLEON IN GERMANY

Before the Napoleonic era, Germany had little national identity; under the weak rule of the Holy Roman Empire of the German Nation, it consisted of a loose grouping of states, huge numbers of small independent territories, bishoprics, church lands, and local principalities united only by a common language and vague cultural ties.
During his occupation of Germany, Napoleon mandated middle-sized states to absorb smaller territories, thereby abolishing many old regimes and initiating the dissolution of the Holy Roman Empire of the German Nation in 1806.

The French Revolution, with its social and political reformations, laid the groundwork for reforms in the consolidated German states. After Napoleon defeated Prussia, Germany’s last great defiant territory, in the battles of Jena and Auerstedt, French occupation in central Europe became increasingly repressive and exploitative, sparking German nationalism and a revolt which soon spread all over the German-speaking territories. Finally, in the 1813 Battle of Nations near Leipzig, allied forces from Prussia, Austria, Russia, Spain, Portugal, Sweden, Britain and several smaller German states dealt Napoleon a devastating defeat that drove him and all his troops out of Germany, leading to his abdication a few months later.

In the end, the French domination that modernized and consolidated Germany had created the unification and sparked the nationalism necessary to enable German forces to help end the French emperor’s domination of Europe.

In the same year, the West German Union was created, and, in a blatant display of hypocrisy, threatened to annex Prussia if it did not comply with an ultimatum (essentially making Prussia a vassal state). The southern German states, and the now disunited again Thuringian Duchies, supported this idea, and quickly Prussia buckled, giving in to the West Germans. In 1871, the voluntary consent of all the German states, seeing the problems a disunited Germany could cause, was given for a new German Union to be created, with the Kaiser of the Holy Roman Empire at its head.

At the same time, Thuringia united for a second time, in the Second Thuringian Union.
This was stronger, and had its own head of state and parliament, known respectively as the ‘Thüringischer König’ and the ‘Thüringischer Tag’. This caused a small dispute as to how many electors Thuringia should have, but it was eventually decided that there should be only one, the König.

For a long time, nothing happened. In the Great Wars, Thuringia was the most average land in the Union. Then, at the end of GWII, it suddenly became independent. To some degree, this came as a great shock to the inhabitants, and some of them spontaneously started rebelling and campaigning for more democracy. Previously, a hideously self-prolonging parliament had been in place, with the Thüringischen Länder voting for the Tag, and the Tag appointing the parliaments of the Länder. The campaign erupted into violence in several areas, especially in the south and in the cities. A group of rebels seized both Erfurt and Weimar, even with the support of the local army regiments. They were quickly declared independent. A large part of Meiningen and all of Coburg were seized. The majority of the rebels were Läßinisch, and declared the area independent, establishing the Republic of Läßinischland. The remainder quickly disintegrated into a number of petty dictatorships and a few semi-democratic Free Cities. The König fled to nearby Bayern.

A number of the petty dictatorships gave themselves titles, with many states often holding the same name. These states had a tendency to go to war, and eventually a map similar to the old borders was established.

Afterward, the region developed as a buffer for great powers of the day, and a unified German state did not immediately take shape. Instead, inhabitants formed strong area ties as they rebuilt their war-ravaged societies.

The war also gave rise to a yearning for stability, paving the way for authoritarian rule.
But even as Bismarck's Prussia came to dominate Germany in 1871, regional interests had to be taken into account.

This century's two world wars smashed the old authoritarian order, but postwar West Germany promoted the sense of regionalism in its federal system to prevent the return of a strong central power.

The wide reach of modern communications has diminished regional loyalty, but such allegiances have not faded entirely. “People identify with their individual state rather than Germany as a whole,” says Prof. Heinz Laufer, dean of political science at the University of Munich.

In another move to limit centralization, the framers of Germany's Basic Law incorporated aspects of a widely held desire for social equilibrium into West Germany's new federal system. As a result, German states, at least in the west, enjoy a common standard of living not prevalent in the United States, where the differences between rich and poor states are stronger.

States may soon expand their powers. A bill passed by the lower house of Parliament, the Bundestag, would permit states to implement local legislation in a greater number of spheres, on condition there is no conflict with existing federal rules.

The bill is almost certain to win Bundesrat approval, meaning that states may soon introduce contrasting local laws in areas such as crime fighting and economic regulation. The federal Constitutional Court would resolve any dispute between federal and state legislation.

“Germany has always been better off when it has been decentralized,” says Johann Boehm, Bavaria's minister for federal and European affairs. “It makes sense to try to go about problem solving in manageable units.”

Yet changes could ultimately upset the equilibrium among states, serving as a potential source of social tension.
They could also divert the judiciary's attention from other important matters, experts say.

Karlheinz Niclauss, dean of Bonn University's political science department, says financial considerations are behind the states' push. The 11 western states currently need to subsidize the five states of the former East Germany. That is causing some western state leaders to agitate for more influence in the way revenue is raised and dispersed, Mr. Niclauss says.

Actually, since 1989, before the Wall was physically demolished, German people had been able to move freely without any restrictions regardless of the Wall, but people could hardly forget the thousands of Germans who died or were imprisoned while attempting to cross the border.

Thus, the fall of the Berlin Wall was as symbolic as its construction, since the latter symbolised the separation of Germany and the German people, while the former was basically caused by the desire of the German people to live in a united country. However, the consequences of the fall of the Berlin Wall and the following unification of Germany turned out to be not as optimistic as millions of people probably hoped.

Consequences of the fall of the Berlin Wall

Obviously the unification of a country is a very complicated process that affects all spheres of life. The case of Germany is particularly problematic because two absolutely different countries were united. Moreover, it is even possible to say that countries from two opposing worlds were united in one, solid Germany.
In fact, it was extremely difficult to unite Eastern Germany, where the local totalitarian regime controlled all spheres of life and where a planned economy was the only form of economic development, with the highly developed capitalist economy of Western Germany, based on the principles of a free market economy and high entrepreneurial activity.

In this respect, it is very important to point out that the fall of the Berlin Wall provoked a number of problems. In fact it changed patterns in the city, and the fall created chaos. A host of problems and crimes connected with the new, uncontrolled borders appeared almost immediately. Among them were increases in highway accidents, weapons and currency smuggling, and robberies. Early in 1990, a rash of bomb threats hit East Germany (Ansteigen, Die Polizei 1990, p.287). However, the socio-economic and political consequences of the fall of the Berlin Wall are probably the most significant.

A. Economic consequences

Speaking about the economic consequences of reunification, it is necessary to underline that it is a very complicated process, since the united Germany had to re-establish economic links broken by the Cold War and the division of Germany. In fact the difference between the Western and Eastern parts of Germany was so significant that it was possible to speak about the economic retardation of Eastern Germany compared to the economically advanced Western Germany. As a result, the fall of the Berlin Wall led to a situation in which “East Berlin being so poor from socialism blended with the richer republic West” (Thomas 2003, p.291).

To outside observers, the course of German reunification appeared to unfold quickly, possessed of an inevitable momentum spurred on by Soviet reforms and economic collapse.
As this iMinds history points out, however, the reintegration of East and West German society was a gradual process, wrought with cultural readjustments and crises in manufacturing and unemployment. Storied television actor Hamish Hughes strikes an even clip as he delivers this overview of reunification, a joyous occasion nevertheless fraught with growing pains. With the stoicism of an experienced documentarian, Hughes provides a fluid presentation of significant developments in the unification process, from the opening of the Austro-Hungarian border to the merging of the Deutschmark.

Learn about the process of German Reunification with iMinds' insightful audio knowledge series. Reunification in Germany was a slow process. The country was divided between the communist East and the capitalist West from the end of World War Two in 1945 until the fall of the Berlin Wall in 1989. To understand how momentous a step reunification in Germany was, it is necessary to go back to the start of the division in 1945.

When World War Two ended, the four Allied powers of the US, Britain, France and the Soviet Union divided Germany between themselves. This was to share the load of helping a war-ravaged country rebuild itself. The US, Britain and France were given land in the west of Germany and the Soviet Union was given land in the east of Germany. Due to the influence of the occupying forces, over time West Germany developed into a capitalist state and East Germany developed into a communist state.

Perfect to listen to while commuting, exercising, shopping or cleaning the house. iMinds brings knowledge to your MP3 with 8-minute information segments to whet your mental appetite and broaden your mind.

iMinds offers 12 main categories; become a Generalist by increasing your knowledge of Business, Politics, People, History, Pop Culture, Mystery, Crime, Culture, Religion, Concepts, Science and Sport.
Clean and concise, crisp and engaging, discover what you never knew you were missing.

Make your MP3 smarter with iMinds MindTracks; intersperse them with music and enjoy learning a little about a lot, knowledge of your own choice and in your own time.

Without foreign military assistance for the first time, the GDR leadership decided against the use of force to quell the burgeoning demonstrations. Honecker was ousted in mid-October, and more realistic leaders sought to save the regime by making concessions. In November travel abroad became possible, and East Germans swarmed into West Germany, many intending to remain there. Reforms could no longer satisfy East Germans, however, who wanted the freedoms and living standard of West Germany.

West German chancellor Helmut Kohl (1982– ) seized the political initiative in late November with his Ten-Point Plan for unification. Yet even he thought several years and an intervening stage, such as a confederational structure, would be necessary before unification of the two Germanys could occur. By early 1990, however, the need to stop the massive flow of East Germans westward made speedy unification imperative. In addition, revolutionary change in other Eastern-bloc countries made solutions that a short time earlier had appeared out of the question suddenly seem feasible. The Treaty on Monetary, Economic, and Social Union between the two German states was signed in May and went into effect in July. The two Germanys signed the Unification Treaty in August. The Treaty on the Final Settlement with Respect to Germany, the so-called Two-Plus-Four Treaty, was signed in September by the two Germanys and the four victors of World War II: Britain, France, the Soviet Union, and the United States.
The treaty restored full sovereignty to Germany and ended the Cold War era.

When unification occurred on October 3, 1990, it was a happy, yet subdued occasion. The many problems of joining such diverse societies were already apparent. The vaunted East German economy was coming to be seen as a Potemkin village, with many of its most prestigious firms uncompetitive in a market economy. East German environmental problems were also proving much more serious than anyone had foreseen; remedies would cost astronomical sums. West Germans had also discovered that their long-lost eastern cousins differed from them in many ways and that relations between them were often rife with misunderstandings. A complete melding of the two societies would take years, perhaps even a generation or two. The legal unification arranged by the treaties of 1990 was only the beginning of a long process toward a truly united Germany.

In its long history, Germany has rarely been united.

When the balloting took place in March 1990, the SED, now renamed the Party of Democratic Socialism (PDS), suffered a crushing defeat. The eastern counterpart of Kohl’s CDU, which had pledged a speedy reunification of Germany, emerged as the largest political party in East Germany’s first democratically elected People’s Chamber. A new East German government headed by Lothar de Maizière, a long-time member of the eastern Christian Democratic Union, and backed initially by a broad coalition, including the eastern counterparts of the Social Democrats and Free Democrats, began negotiations for a treaty of unification. A surging tide of refugees from East to West Germany that threatened to cripple East Germany added urgency to those negotiations.
In July that tide was somewhat stemmed by a monetary union of the two Germanys that gave East Germans the hard currency of the Federal Republic.

The final barrier to reunification fell in July 1990 when Kohl prevailed upon Gorbachev to drop his objections to a unified Germany within the NATO alliance in return for sizable (West) German financial aid to the Soviet Union. A unification treaty was ratified by the Bundestag and the People’s Chamber in September and went into effect on October 3, 1990. The German Democratic Republic joined the Federal Republic as five additional Länder, and the two parts of divided Berlin became one Land. (The five new Länder were Brandenburg, Mecklenburg–West Pomerania, Saxony, Saxony-Anhalt, and Thuringia.)

In December 1990 the first all-German free election since the Nazi period conferred an expanded majority on Kohl’s coalition. After 45 years of division, Germany was once again united, and the following year Kohl helped negotiate the Treaty on European Union, which established the European Union (EU) and paved the way for the introduction of the euro, the EU’s single currency, by the end of the decade.

The achievement of national unification was soon shadowed by a series of difficulties, some due to structural problems in the European economy, others to the costs and consequences of unification itself. Like most of the rest of Europe, Germany in the 1990s confronted increased global competition, the increasing costs of its elaborate social welfare system, and stubborn unemployment, especially in its traditional industrial sector.

The excluded Soviet zone then formed the German Democratic Republic, which included East Berlin.
The prospering West drew people from the East, but a stop was put to this in 1961 when the Berlin Wall was built.

During the following four decades, the two German countries followed completely different paths, politically and economically. After a short but powerful period of peaceful demonstrations in East Germany, on November 9, 1989 the GDR border police unexpectedly opened the Berlin Wall. Almost a year later the German Democratic Republic joined the Federal Republic of Germany.

Many differences between the two German states have disappeared, but in many places traces of East Germany can still be found, which is one of many reasons to travel through Germany.

Society and Culture
Germany has a vibrant, rich and interesting cultural life. Names like Einstein, Beethoven and Bach are readily identified with the traditional culture of the country. Germans are fond of watching and participating in all sorts of sports, games, concerts and contests, and they also have a great appreciation for the arts. Germany’s rich cultural heritage is also visible in its architecture – a diverse mix of the modern and historical.

Just under two thirds of Germans consider themselves Christians, and around a third say they have no religion or are members of a non-Christian faith community such as Muslim or Jewish.

For some forty-five years, between 1945 and late 1990, the German people lived in a divided land.
During most of that period, Germany was made up of two separate states: the Federal Republic of Germany (known as West Germany) and the German Democratic Republic (or East Germany). The two Germanys were divided not only by a border, but by opposing political systems as well. West Germany had a democratic form of government. East Germany was a Communist state.

The division of Germany was a result of its defeat in World War II. In 1945, at the end of the war, Germany was divided into four zones of occupation by the victorious Allies — the United States, Britain, France, and the Soviet Union. In 1949, West Germany was formed from the U.S., British, and French zones; and East Germany from the Soviet zone.

In October 1990 the two Germanys were reunited.

The great majority of Germans live in urban areas (cities and large towns). Aside from Berlin, the capital and most populous city, the largest cities are in western Germany. It is the most densely populated part of the country and one of the most densely populated areas in Europe.

Many urban areas had to be rebuilt after the destruction of World War II. There is still a housing shortage, especially in the west because of the millions of immigrants who fled there from the eastern region.

Reducing Regional Differences
Life in most of western Germany's cities, towns, and villages has become much the same, although the pace of life is faster in the cities. In the small villages, people still take pride in local customs and they may wear traditional dress on holidays and other special occasions. Regional dialects can still be heard. But, with a few exceptions, the great historical differences that once existed among the regions have all but disappeared.

There are several reasons for this development. In the postwar period, millions of refugees from the east settled throughout Germany, including Protestants in traditionally Catholic areas and Catholics in Protestant areas.
In addition, the boundaries of the former West German states were drawn up in most cases by the Western allies — the United States, Britain, and France — who were not especially interested in keeping the country's historic regions intact.

Television has also played a major role in reducing regional differences. People throughout the country watched the same television programs or listened to national radio networks. There were fewer programs with a regional emphasis.

In 1989 the fall of communism marked the end of one of the most dramatic vestiges of World War II. Plans to reunite the democratic government of West Germany and the collapsed communist government of East Germany began immediately and were realized on October 3, 1990. Since that time, programs have been introduced to successfully integrate the radically different approaches to environmental and land use planning taken by both countries. While other eastern bloc countries have split (Czechoslovakia and Yugoslavia), the German Democratic Republic (GDR) is the only country to have immediately merged with a western nation. Has this jump-start helped or hindered this former eastern bloc country and, more specifically, how have land development and the ecosystem been affected by reunification?

Estimates show that in the early 1990s more money was given to East Germany for reconstruction than the seven eastern bloc countries combined (Albania, Bulgaria, Czechoslovakia, Hungary, Poland, Romania, and Yugoslavia). Yet, while East Germany has been pumped with money (particularly support from West Germany), economic and environmental recovery has taken much longer than expected. The slow process of reform has most likely been caused by the difficulties of introducing a free market into the region, pressures for economic growth, the cost of environmental cleanup and the psychological change involved in restructuring a country.
While the gap between the east and west is not getting smaller, a similar pattern is repeating itself within the former GDR. The new Länder (East German states) differ greatly in the rate at which they are progressing. While some areas are improving slowly, others are heading toward mounting instability. Poor environmental conditions, few jobs and poor housing, coupled with a slow reform process, have caused migratory patterns which flow solely from east to west.

Reunification increased West Germany’s land area by 43 percent and population by more than 16 million. This increase in land and population has put a tremendous stress on West Germany, for it inherited a country that had virtually ignored environmental and infrastructure concerns for approximately 40 years. Spatial development in the east has an impact on future economic trends and influences regional labor markets, patterns of migration and transportation, and environmental sustainability. As stated in a 1996 report by the European Commission:

German Planning, 1923-1945
While this report proposes to concentrate on post-WWII German land use planning, it is also important to examine the historical foundation of 20th Century German planning.

Ten Years After — The Unification Effect: A U-M Conference Looks at the "Berlin Republic" Ten Years after the Fall of the Berlin Wall

A decade after the fall of the Berlin Wall, the year 1989 now stands firmly as a watershed date in German history.
With the seemingly unstoppable force of people fleeing the east, first by way of Hungary and subsequently by way of the West German Embassy in Prague, and then with the massive presence of people staging protests within the GDR reclaiming their voices in the political process as \"we\" the people, a historic chain of events was initiated which allowed no turning back. Nor did this process end with the official unification of the two Germanies on October 3, 1990. Between 1989 and 1999, Germany witnessed a series of major tectonic shifts in all aspects of its political, social and cultural life. The resulting pressures and tensions continue to define a new \"Berlin Republic\" that is still groping for its national identity both internally and with a view to broader European, Eastern European and global contexts.\nThe intervening time span of 10 years also affords us the opportunity to gauge the central issues that have emerged during this series of shifts. In particular, we would identify three sets of concerns that have defined the decade: East-West relations, the renegotiations of citizenship and the role of the German past. Thus, as the decade between 1989 and 1999 obviously stands under the sign of the end of the Cold War, the Berlin Republic has been faced with the need to negotiate a new relationship between the former East and West Germany - entities which persist even after the states that had comprised them have merged. Secondly, on the basis of both internal and external pressures, and particularly as part of the European Community, the united Germany repeatedly has had to confront issues of citizenship and immigration throughout the past decade. 
Thirdly, since 1989 Germans have been obliged to confront their history once more, if only by the merging of two versions of the Nazi past which had been kept separate for 40 years.

When the historic constellation allowing unification appeared, swift and decisive action on the part of Chancellor Kohl and the unwavering, strong support given by the United States government for the early completion of the unification process were key elements in surmounting the last hurdles during the final phase of the Two-Plus-Four Talks.

The unification treaty, consisting of more than 1,000 pages, was approved by a large majority in the Bundestag and the Volkskammer on September 20, 1990. After this last procedural step, nothing stood in the way of formal unification. At midnight on October 3, the German Democratic Republic joined the Federal Republic of Germany. Unification celebrations were held all over Germany, especially in Berlin, where leading political figures from West and East joined the joyful crowds who filled the streets between the Reichstag building and Alexanderplatz to watch a fireworks display. Germans celebrated unity without a hint of nationalistic pathos, but with dignity and in an atmosphere reminiscent of a country fair. Yet the world realized that an historic epoch had come to a peaceful end.

The German economy – the fifth-largest in the world in purchasing power parity (PPP) terms and Europe’s largest – is a leading exporter of machinery, vehicles, chemicals, and household equipment and benefits from a highly skilled labor force. Like its Western European neighbors, Germany faces significant demographic challenges to sustained long-term growth. Low fertility rates and declining net immigration are increasing pressure on the country’s social welfare system and have compelled the government to undertake structural reforms.
The modernization and integration of the eastern German economy – where unemployment can exceed 20% in some municipalities – continues to be a costly long-term process, with total transfers from west to east amounting to roughly $3 trillion so far.

GDP contracted by nearly 5% in 2009, which was the steepest drop-off in output since World War II. The turnaround has been swift: Germany’s export-dependent economy is expected to grow by 3.5% in 2010 and a further 2% in 2011, with exports to emerging markets playing an increasingly important role. The German labor market also showed a strong performance in 2010, with the unemployment rate dropping to 7.5%, its lowest level in 17 years.

In accordance with Article 23 of the F.R.G.’s Basic Law, the five Laender (which had been reestablished in the G.D.R.) acceded to the F.R.G. on October 3, 1990. The F.R.G. proclaimed October 3 as its new national day. On December 2, 1990, all-German elections were held for the first time since 1933.

The Final Settlement Treaty ended Berlin’s special status as a separate area under Four Power control. Under the terms of the treaty between the F.R.G.
and the G.D.R., Berlin became the capital of a unified Germany. The Bundestag voted in June 1991 to make Berlin the seat of government. The Government of Germany asked the Allies to maintain a military presence in Berlin until the complete withdrawal of the Western Group of Forces (ex-Soviet) from the territory of the former G.D.R. The Russian withdrawal was completed August 31, 1994. On September 8, 1994, ceremonies marked the final departure of Western Allied troops from Berlin.

In 1999, the formal seat of the federal government moved from Bonn to Berlin. Berlin also is one of the Federal Republic’s 16 Laender.

GOVERNMENT AND POLITICAL CONDITIONS
The government is parliamentary, and a democratic constitution emphasizes the protection of individual liberty and division of powers in a federal structure. The chancellor (prime minister) heads the executive branch of the federal government. The duties of the president (chief of state) are largely ceremonial; the chancellor exercises executive power. The Bundestag (lower, principal chamber of the parliament) elects the chancellor. The president normally is elected every 5 years on May 23 by the Federal Assembly, a body convoked only for this purpose, comprising the entire Bundestag and an equal number of state delegates. President Christian Wulff (Christian Democratic Union – CDU) was elected on June 30, 2010.

The Bundestag, which serves a 4-year term, consists of at least twice the number of electoral districts in the country (299). When parties’ directly elected seats exceed their proportional representation, they may receive additional seats.
The number of seats in the Bundestag was reduced to 598 for the 2002 elections.

Clearly, these issues are not "new" - there is continuity and discontinuity involved in each case; in analyzing the impact of these shifts, we should take care not to subsume all change under the same periodizing impulse, reducing the contemporary German landscape into a neatly divided image of "before" and "after" the wall. In their different ways, both German states had been forced to confront the role of recent German history long before unification; the events since 1989 have merely reinforced the confrontation between Eastern and Western narratives about the past to a degree which had not been reached since the "hot" days of the Cold War. Similarly, both the GDR and West Germany had arguably already witnessed a shift from literature and film to architecture and monuments as the privileged media for dealing with the past; this too is a tendency which has only been reinforced in the last decade, even as the events of 1989 have profoundly affected the terms under which the corresponding debates are carried out. Finally, the discussion of immigration policies and citizenship status dates back far beyond the 1989 watershed to the arrival of Gastarbeiter [guest workers] in the West and "contract workers" in the East. While the events of 1989 briefly interrupted these discussions, they have since come back to the foreground with a vengeance, leading to the adoption of hard-line asylum politics and revised citizenship laws by the major political parties.
Nor are the issues we have identified clearly separable: concerns of citizenship always touch on the question of how Germany deals with its past; likewise, the discussion of that past can hardly be isolated from the fallout of the Cold War which has profoundly influenced the narratives through which we make sense of German history.

In retrospect, it seems obvious that none of these issues has been resolved to date; rather, they are the objects of ongoing discussions, debates and - sometimes unarticulated - tensions that have defined the "Berlin Republic" since 1989. Taking stock a decade after the fall of the wall, a group of scholars came together last December at the U-M to explore these tensions and look at the discursive and cultural re-configurations that have emerged in response to 1989. Clearly, such an ambitious undertaking requires an interdisciplinary approach, and the conference was able to draw contributions from a wide variety of disciplinary standpoints ranging from anthropology and history to art history, literature, media studies and political science.

Germany has one of the world’s highest levels of education, technological development, and economic productivity. Since the end of World War II, the number of youths entering universities has more than tripled, and the trade and technical schools of the Federal Republic of Germany (F.R.G.) are among the world’s best. Germany is a broadly middle class society. A generous social welfare system provides for universal medical care, unemployment compensation, and other social needs. Millions of Germans travel abroad each year.

With unification on October 3, 1990, Germany began the major task of bringing the standard of living of Germans in the former German Democratic Republic (G.D.R.) up to that of western Germany.
This has been a lengthy and difficult process due to the relative inefficiency of industrial enterprises in the former G.D.R., difficulties in resolving property ownership in eastern Germany, and the inadequate infrastructure and environmental damage that resulted from years of mismanagement under communist rule.

Economic uncertainty in eastern Germany is often cited as one factor contributing to extremist violence, primarily from the political right. Confusion about the causes of the current hardships and a need to place blame has found expression in harassment and violence by some Germans directed toward foreigners, particularly non-Europeans. The vast majority of Germans condemn such violence.

Two of Germany’s most famous writers, Goethe and Schiller, identified the central aspect of most of Germany’s history with their poetic lament, “Germany? But where is it? I cannot find that country.” Until 1871, there was no “Germany.” Instead, Europe’s German-speaking territories were divided into several hundred kingdoms, principalities, duchies, bishoprics, fiefdoms and independent cities and towns.

Finding the answer to “the German question” – what form of statehood for the German-speaking lands would arise, and which form could provide central Europe with peace and stability – has defined most of German history. This history of many independent polities has found continuity in the F.R.G.’s federal structure.
It is also the basis for the decentralized nature of German political, economic, and cultural life that lasts to this day.

The Holy Roman Empire
Between 962 and the beginning of the 19th Century, the German territories were loosely organized into the Holy Roman Empire of the German Nation.

On the subject of “sister states,” I begin with a word about Germany, and then turn to the United States.

On October 3, Germany celebrated a national holiday called “Day of German Unity” (Tag der Deutschen Einheit). This holiday does not celebrate the founding of the German Empire in 1871 by 25 constituent entities (four kingdoms, five grand duchies, 13 duchies and principalities, and three free Hanseatic cities) under Bismarck. Rather, the day celebrates the merger in 1990 of West Germany (Federal Republic of Germany or FRG), East Germany (formally the German Democratic Republic or GDR), and a united Berlin. The event has been celebrated annually since that day. Specific modes of its celebration rotate among the state capitals.

I used the word “merger” in a non-technical sense. Three options were possible after the fall of the Berlin Wall on November 9, 1989:
• continued separate existence, an option viewed with favor in those quarters of Europe fearful of a return of an economically and militarily powerful united Germany;
• unification of the two states into a new third state with a new constitution; or
• the admission into the 11-state FRG of states from the territory of the GDR by means of the new states’ accession to the FRG’s 1949 Constitution (“Basic Law”) as provided by its Article 23.

This last option was the path chosen. The GDR had been a unitary form of government, but it was divided into five states for the purposes of accession.
Upon accession, Article 23’s provisions on accession by new states were withdrawn — both because their purpose had been fulfilled and to indicate to neighbors that Germany had no designs on other territory.

The German constitution of 1949 had established a federal form of government as a remedy against the Nazi abuse of a centralized form of government. I have not confirmed, but we may assume, that Americans, with their experience of a federal form, encouraged this structure, and it returned Germany to the federal form it had assumed both under the German Empire (1871-1918) and under the Weimar Republic in 1918. Moreover, Article 23 was included for the eventuality of unification with East Germany — in the form of states. Here, too, the drafters would have drawn upon American experience.

Background: As Europe’s largest economy and second most populous nation (after Russia), Germany is a key member of the continent’s economic, political, and defense organizations. European power struggles immersed Germany in two devastating World Wars in the first half of the 20th century and left the country occupied by the victorious Allied powers of the US, UK, France, and the Soviet Union in 1945. With the advent of the Cold War, two German states were formed in 1949: the western Federal Republic of Germany (FRG) and the eastern German Democratic Republic (GDR). The democratic FRG embedded itself in key Western economic and security organizations, the EC, which became the EU, and NATO, while the Communist GDR was on the front line of the Soviet-led Warsaw Pact. The decline of the USSR and the end of the Cold War allowed for German unification in 1990. Since then, Germany has expended considerable funds to bring Eastern productivity and wages up to Western standards. In January 1999, Germany and 10 other EU countries introduced a common European exchange currency, the euro.
In January 2011, Germany assumed a nonpermanent seat on the UN Security Council for the 2011-12 term.

Government type: federal republic
Currency: euro (EUR)

Geography of Germany
Location: Central Europe, bordering the Baltic Sea and the North Sea, between the Netherlands and Poland, south of Denmark
Geographic coordinates: 51 00 N, 9 00 E
total: 357,021 sq. km
land: 349,223 sq. km
water: 7,798 sq. km

From post-war to post-wall generations : changing attitudes towards the…
Amazon.com Product Description (ISBN 0813311527, Hardcover)
In 1984, Italian Foreign Minister Giulio Andreotti aptly summarized popular perception of the divided nationality of the two Germanys, East and West: “There are two German states, and two they shall remain.” Few would have disagreed. By the 1980s, both German states had come to occupy respected niches in the international community. Still, neither side seemed to have found an acceptable home for the problematic notion of German national identity. In terms of one united Germany, by the 1980s in the West only a small group of West German elites cared to discuss the possibility of a reunited nation, much less the terms under which such an arrangement might occur. Even fewer citizens of the FRG felt comfortable describing their personal German identity, even before the issue was further entangled by the reincorporation of 16 million East Germans in 1990. Was it the Germans’ historically overburdened and divided perception of themselves that continued to breed uncertainty about their future? Or was it outsiders’ unwillingness to accept many political and cultural changes which had redefined the concept of German identity since 1949 and kept Angst alive and well in Central Europe?
Joyce Marie Mushaben seeks here to find some clarity from this highly amorphous “German Question” as it shifted from postwar to post-wall West Germany. Mushaben rests her answers on an intriguing collection of in-depth interviews with public figures, tested against the detailed secondary analysis of public opinion data spanning four decades. Emerging from the effort is a set of clear generational differences and profiles, attesting to the increasingly pluralistic character of the postwar German identity. In the 1990s, the reunification of two very different sociopolitical cultures of East and West has only added to a complicated mix. Mushaben concludes that, faced with these dilemmas of difference, West Germans must cultivate a “courage to mix” these cultures and to demonstrate a healthy respect for the very processes of change and generational differentiation that brought them peace in the years of German division.

This volume features sixteen thought-provoking essays by renowned international experts on German society, culture, and politics that, together, provide a comprehensive study of Germany's postunification process of "normalization." Essays ranging across a variety of disciplines including politics, foreign policy, economics, literature, architecture, and film examine how since 1990 the often contested concept of normalization has become crucial to Germany's self-understanding. Despite the apparent emergence of a "new" Germany, the essays demonstrate that normalization is still in question, and that perennial concerns -- notably the Nazi past and the legacy of the GDR -- remain central to political and cultural discourses and affect the country's efforts to deal with the new challenges of globalization and the instability and polarization it brings.
This is the first major study in English or German of the impact of the normalization debate across the range of cultural, political, economic, intellectual, and historical discourses. CONTRIBUTORS: STEPHEN BROCKMANN, JEREMY LEAMAN, SEBASTIAN HARNISCH AND KERRY LONGHURST, LOTHAR PROBST, SIMON WARD, ANNA SAUNDERS, ANNETTE SEIDEL ARPACI, CHRIS HOMEWOOD, ANDREW PLOWMAN, HELMUT SCHMITZ, KAROLINE VON OPPEN, WILLIAM COLLINS DONAHUE, KATHRIN SCHÖDEL, STUART TABERNER, PAUL COOKE. Stuart Taberner is Professor of Contemporary German Literature, Culture, and Society and Paul Cooke is Senior Lecturer in German Studies, both at the University of Leeds.

German Culture, Politics, and Literature into the Twenty-First Century
Subjects: Language & Literature, History

BOOK THE SECOND
The German National Problem
The Great Sorrow
The German crazy-quilt, of many hues and colors, and how this blanket was patched and mended through the years.

From the 18th Century, and indeed before that time, to say nothing of years to come as late as 1871, there was in fact no Germany.
The term was a mere geographical "designation." We shall hear more of this, as Bismarck assumes the stupendous task of German unity, in a real sense of the word; but we will never understand what Bismarck and other statesmen who hoped for German unity had to deal with, unless we take a broad survey of conditions in Germany from the year 1750; not only from the political but also from the social and domestic side, as represented in 300-odd German principalities that like a crazy-quilt were thrown helter-skelter from Hamburg on the North to Vienna on the South.

Many of the holdings were gained through musty papers from rulers of the ancient Holy Roman Empire, a nation Voltaire declared "neither holy, nor empire, nor Roman."

There were free cities, great landlords, and there were great robber-barons—thieves of high or low degree.

At Cologne, Treves and Mayence, archbishops held the lower valley of the Moselle, also some of the finest parts of the Rhein valley.

Next came dukes, landgraves, margraves, cities of the Empire, and then, still smaller, duchies in duodecimo, down through some 800 minor landlords who as the owners of some borough or village walked this earth genuine game cocks on their own dunghills.
Political conditions were distressing; old feuds, old hates prevailed.

There were restrictions on commerce, statute labor, barbarous penal laws, religious persecution and Jew-baiting.

* * * * *

In short, to make 300-odd jealous princelings join hands in national brotherhood is the complex problem that goes down through the years, generation after generation, till at last the one strong man appears, Otto von Bismarck, who in his supreme rise to power sees clearly that the only hope for Germany is in a complete social and political revolution, in which the changes in the German mind concerning political unity in governmental affairs must be as unusual as the transformations in the German mode of life.

Stretching as far back as Ancient Egypt and Greece and moving through present-day locales as diverse as Western Europe, Central Asia, and the Arctic, each of the richly illustrated essays collected here draws on a range of disciplinary insights to explore some of the most fundamental, universal questions that confront us.

Between Europe and Germany
Katzenstein, P. (ed)
German unification and the political and economic transformations in central Europe signal profound political changes that pose many questions. Will post-Communism push ahead with the task of institutionalizing a democratic capitalism? How will that process be aided or disrupted by international developments in the East and West? And how will central Europe relate to united Germany? Based on original field research, this book offers, through more than a dozen case studies, a cautiously optimistic set of answers to these questions. The end of the Cold War and German unification, the empirical evidence indicates, are not returning Germany and central Europe to historically troubled, imbalanced, bilateral relationships.
Rather, changes in the character of German and European politics as well as the transformations now affecting Poland, Hungary, the Czech Republic and Slovakia point to the emergence of multilateral relationships linking Germany and central Europe in an internationalizing, democratic Europe.\nSubject: Postwar History\nMitterrand, the End of the Cold War, and German Unification\nTwenty years after the fall of the Berlin Wall, this important book explores the role of France in the events leading up to the end of the Cold War and German unification. Most accounts concentrate on the role of the United States and look at these events through the bipolar prism of Soviet-American relations. Yet because of its central position in Europe and of its status as Germany’s foremost European partner, France and its President, François Mitterrand, played a decisive role in these pivotal international events: the peaceful liberation of Eastern Europe from Soviet rule starting in 1988, the fall of the Berlin Wall and Germany’s return to unity and full sovereignty in 1989/90, and the breakup of the USSR in 1991. Based on extensive research and a vast amount of archival sources, this book explores the role played by France in shaping a new European order.\nSubject: Postwar History\nTransgressive Unions in Germany from the Reformation to the Enlightenment\nLuebke, D. M. & Lindemann, M.
(eds)\nThe significant changes in early modern German marriage practices included many unions that violated some taboo.", "score": 28.420641882593877, "rank": 31}, {"document_id": "doc-::chunk-1", "d_text": "1760 to 1815\n- The age of Metternich and the era of unification, 1815–71\n- Reform and reaction\n- Evolution of parties and ideologies\n- Economic changes and the Zollverein\n- The revolutions of 1848–49\n- The 1850s: years of political reaction and economic growth\n- The 1860s: the triumphs of Bismarck\n- Germany from 1871 to 1918\n- Germany from 1918 to 1945\n- The rise and fall of the Weimar Republic, 1918–33\n- The Third Reich, 1933–45\n- The era of partition\n- The reunification of Germany\n- Leaders of Germany\nThe era of partition\nAllied occupation and the formation of the two Germanys, 1945–49\nFollowing the German military leaders’ unconditional surrender in May 1945, the country lay prostrate. The German state had ceased to exist, and sovereign authority passed to the victorious Allied powers. The physical devastation from Allied bombing campaigns and from ground battles was enormous: an estimated one-fourth of the country’s housing was destroyed or damaged beyond use, and in many cities the toll exceeded 50 percent. Germany’s economic infrastructure had largely collapsed as factories and transportation systems ceased to function. Rampant inflation was undermining the value of the currency, and an acute shortage of food reduced the diet of many city dwellers to the level of malnutrition. These difficulties were compounded by the presence of millions of homeless German refugees from the former eastern provinces. The end of the war came to be remembered as “zero hour,” a low point from which virtually everything had to be rebuilt anew from the ground up.\nFor purposes of occupation, the Americans, British, French, and Soviets divided Germany into four zones. 
The American, British, and French zones together made up the western two-thirds of Germany, while the Soviet zone comprised the eastern third. Berlin, the former capital, which was surrounded by the Soviet zone, was placed under joint four-power authority but was partitioned into four sectors for administrative purposes. An Allied Control Council was to exercise overall joint authority over the country.\nThese arrangements did not incorporate all of prewar Germany.", "score": 27.968260044091846, "rank": 32}, {"document_id": "doc-::chunk-51", "d_text": "Others were impressed by the political and commercial accomplishments of Britain, which made those of the small German states seem insignificant. Some writers warmed to romantic evocations of Germany’s glory during the Middle Ages.\nMany members of Germany’s aristocratic ruling class were opposed to national unity because they feared it would mean the disappearance of their small states into a large Germany. Metternich opposed a united Germany because the Habsburg Empire did not embrace a single people speaking one language, but many peoples speaking different languages. The empire would not easily fit into a united Germany. He desired instead the continued existence of the loosely organized German Confederation with its forty-odd members, none equal to Austria in strength. Prussia’s kings and its conservative elite sometimes objected to Austria’s primacy in the confederation, but they had little desire for German unification, which they regarded as a potential threat to Prussia’s existence.\nGermany’s lower classes–farmers, artisans, and factory workers–were not included in the discussions about political and economic reform. Germany’s farmers had been freed to some degree from many obligations and dues owed to the landowning aristocracy, but they were often desperately poor, earning barely enough to survive. Farmers west of the Elbe River usually had properties too small to yield any kind of prosperity. 
Farmers east of the Elbe often were landless laborers hired to work on large estates. Artisans, that is, skilled workers in handicrafts and trades belonging to the traditional guilds, saw their economic position worsen as a result of the industrialization that had begun to appear in Germany after 1815. The guilds attempted to stop factory construction and unrestricted commerce, but strong economic trends ran counter to their wishes. Factory workers, in contrast, were doing well compared with these other groups and were generally content with their lot when the economy as a whole prospered.\nThe Revolutions of 1848\nEurope endured hard times during much of the 1840s. A series of bad harvests culminating in the potato blight of 1845-46 brought widespread misery and some starvation. An economic depression added to the hardship, spreading discontent among the poor and the middle class alike. A popular uprising in Paris in February 1848 turned into a revolution, forcing the French king Louis Philippe to flee to Britain.\nThe success of the revolution sparked revolts elsewhere in Europe.", "score": 27.27596839096672, "rank": 33}, {"document_id": "doc-::chunk-16", "d_text": "Even after German reunification in 1990, living standards and annual revenues have remained significantly higher in the former West Germany. Modernization and economic integration of eastern Germany continues to be a long process; projections suggest it will take until 2019, with annual transfers from west to east of about 80 billion dollars. Unemployment has decreased since 2005, reaching the lowest rate in 15 years, 7.5% in June 2008. This percentage varies from 6.2% in the former West Germany to 12.7% in the former East Germany.
The former government of Chancellor Gerhard Schröder initiated a series of reforms of the labor market and public welfare institutions, while the current government has adopted a restrictive fiscal policy and reduced the number of public sector jobs.\nBetween 1990 and 2009, Germany received Foreign Direct Investment (FDI) of 700 billion dollars. In 2009, foreign direct investment in Germany was 36 billion dollars. However, German outward FDI to other countries amounted to 62.7 billion dollars in 2009. German contributions to global culture are numerous. Germany was the birthplace of renowned composers such as Ludwig van Beethoven, Johann Sebastian Bach, Johannes Brahms and Richard Wagner; poets like Johann Wolfgang von Goethe and Friedrich Schiller; philosophers like Immanuel Kant, Georg Hegel, Karl Marx and Friedrich Nietzsche; and scientists of the caliber of Albert Einstein and Max Planck.11", "score": 26.9697449642274, "rank": 34}, {"document_id": "doc-::chunk-9", "d_text": "0-14 years: 13.7% (male 5,768,366/female 5,470,516)\n15-64 years: 66.1% (male 27,707,761/female 26,676,759)\n65 years and over: 20.3% (male 7,004,805/female 9,701,551)\nPopulation growth rate: -0.061%\nLife expectancy at birth: 79.41 years\nTotal fertility rate: 1.42 children born/woman\nEthnic groups: German 91.5%, Turkish 2.4%, other 6.1% (made up largely of Greek, Italian, Polish, Russian, Serbo-Croatian, Spanish)\nReligions: Protestant 34%, Roman Catholic 34%, Muslim 3.7%, unaffiliated or other 28.3%\nHistory of Germany\nGERMANY WAS UNITED on October 3, 1990. Unification brought together a people separated for more than four decades by the division of Europe into two hostile blocs in the aftermath of World War II. The line that divided the continent ran through a defeated and occupied Germany.
By late 1949, two states had emerged in divided Germany: the Federal Republic of Germany (FRG, or West Germany), a member of the Western bloc under the leadership of the United States; and the German Democratic Republic (GDR, or East Germany), part of the Eastern bloc led by the Soviet Union. Although the two German states were composed of a people speaking one language and sharing the same traditions, they came to have the political systems of their respective blocs. West Germany developed into a democratic capitalist state like its Western neighbors; East Germany had imposed on it the Soviet Union’s communist dictatorship and command economy.\nAlthough the leaders of each state were committed to the eventual unification of Germany and often invoked its necessity, with the passage of time the likely realization of unification receded into the distant future. Relations between the two states worsened during the 1950s as several million East Germans, unwilling to live in an increasingly Stalinized society, fled to the West. August 1961 saw the sealing of the common German border with the construction of the Berlin Wall. In the early 1970s, however, diplomatic relations between the two states were regularized by the Basic Treaty, signed in 1972.", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Germany: A brief history\nA brief introduction to the history of Germany.\nAs Europe's largest economy, Germany is a key member of the continent's economic, political, and defence organisations. European power struggles immersed Germany in two devastating World Wars in the first half of the 20th century and left the country occupied by the victorious Allied powers of France, the Soviet Union, the UK and the USA in 1945.\nWith the advent of the Cold War, two German states were formed in 1949, namely the western Federal Republic of Germany (FRG) and the eastern German Democratic Republic (GDR).
The democratic FRG embedded itself in key Western economic and security organisations, i.e. the EU and NATO, while the Communist GDR was on the front line of the Soviet-led Warsaw Pact countries and was a puppet of Moscow. The decline of the USSR and the end of the Cold War allowed for German unification in 1990. Since then Germany has spent considerable funds to bring productivity and wages in the former GDR up to FRG standards.\nGermany was a founding member of the European Economic Community (EEC) (now the European Union (EU)) in 1957 and participated in the introduction of the Euro (EUR) in a two-phased approach in 1999 (accounting phase) and 2002 (monetary phase) to replace the Deutsche Mark (DEM). It was in 1955 that the FRG joined NATO and in 1990 NATO expanded to include the former GDR. Germany is also a member country of the Schengen Area in which border controls with other Schengen members have been eliminated while at the same time those with non-Schengen countries have been strengthened. In January 2011, Germany assumed a non-permanent seat on the UN Security Council for the 2011/12 term.\nCIA World Factbook / Expatica", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "Austrian National Identity\nThe absence of an Austrian national identity was one of the problems confronted when Austria became a country in November 1918. Before 1918 there had been no tradition among German-speaking Austrians of striving for national independence as a small German-speaking state separated from Austria-Hungary or separated from Germany.
Within the context of the multiethnic and multilinguistic empire, the great majority of the inhabitants of what was to become Austria considered themselves \"Germans\" insofar as they spoke German and identified with German culture.\nStrong provincial identities that stemmed from the provinces' histories as distinct political and administrative entities with their own traditions existed for this reason. Tiroleans, for example, identified more with their province than with the new nation-state. As a result, the idea of an \"Austrian nation\" as a cultural and political entity greater than the sum total of provinces, yet smaller than the pan-German idea of the unification of all German speakers into one state, virtually did not exist in 1918. The Austrian historian Friedrich Heer described the confusion surrounding Austrians' national identity in the following manner: \"Who were these Austrians after 1918? Were they Germans in rump Austria, German-Austrians, Austrian-Germans, Germans in a 'second German state,' or an Austrian nation?\"\nFurthermore, Austrians had serious doubts about the economic and political viability of a small German-speaking state. Two alternatives were envisioned for Austria: either membership in a confederation of the states formed out of Austria-Hungary or unification with Germany as a legitimate expression of Austrian national self-determination. Neither alternative was realized. Efforts to form a \"Danube Confederation\" failed, and the Allies prohibited Austria's unification with Germany in the treaties signed after World War I. As a compromise between these alternatives, Austria was a \"state which no one wanted.\"\nAfter 1918 many Austrians identified themselves as being members of a \"German nation\" based on shared linguistic, cultural, and ethnic characteristics. Since unification with Germany was forbidden, most Austrians regarded their new country as a \"second\" German state arbitrarily created by the victorious powers.
During the troubled interwar period, unification with a democratic Germany was seen by many, not only by those on the political right but across the entire political spectrum, as a solution for Austria's many problems.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "Germany’s new divisions: 30 years on from reunification, what’s next?\nThree decades ago, Anne McElvoy reported on Germany’s reunification. Now, as the Merkel years draw to a close, she returns to find the nation at another crossroads. Will old tensions re-emerge?\nIn the heart of Berlin stand two imposing buildings not far from the River Spree. One is the Reichstag, all monumental stone and classic pillars, the historic home of the German parliament. A short stroll away is the Federal Chancellery, a postmodernist triumph of glass and concrete that houses government offices. The old and the new. The Chancellery, completed in 2001, reflects the Germany reunited in 1990 after being split in two following the Second World War. 
The cabinet room where Chancellor Angela Merkel holds court has two large windows, one facing the former communist East Germany and the other looking towards the west.\nThe landscaping tells a story of a modern, unified Germany, one a long way from the troubled history of the Reichstag", "score": 26.63919954196009, "rank": 38}, {"document_id": "doc-::chunk-5", "d_text": "Among the German romantics, “Volk” “signified the union of a group of people with a transcendental essence,” the fusion of man with nature (particularly his native landscape, following Wilhelm Riehl), mythos, or the cosmos, wherein man found “the source of his creativity, his depth of feeling, his individuality, and his unity with other members of the Volk.” A related concept is “Volkstum,” a term that combines the notions of folklore and ethnicity.\nVölkisch thought arose from the Romantic nationalism of the early nineteenth century, particularly that of Johann Gottlieb Fichte, who, along with Ernst Moritz Arndt and Friedrich Ludwig Jahn, “began to conceive of the Volk in heroic terms during the wars of liberation against Napoleon.” Völkisch thought emerged at a time when Germany existed as a collection of semi-feudal principalities. As political unity eluded them for more than half a century, völkisch thinkers were forced to emphasize cultural and spiritual rather than political dimensions of unity. Thus they came to idealize, even mystify, the concept of nationhood. This process attained such momentum that when political unification finally came in 1871, the prosaic nature of Bismarck’s Realpolitik led to a tremendous sense of disappointment.\nVölkisch thought also coincided with the Industrial Revolution and the attendant destruction of the German landscape, dislocations of the population, obsolescence of traditional crafts and tools, social alienation, political upheavals (e.g., the revolutions of 1848), and economic crises. 
These led eventually to disenchantment and finally to the wholesale rejection of industrial society and modernity, which came to be seen as materialistic, soulless, rootless, abstract, mechanical, alienating, cosmopolitan, and irreconcilable with national self-identification.", "score": 26.511963518318797, "rank": 39}, {"document_id": "doc-::chunk-3", "d_text": "As the files of this organization began to be made public, eastern Germans discovered that many of their most prominent citizens, as well as some of their friends, neighbours, and even family members, had been on the Stasi payroll. Coming to terms with these revelations—legally, politically, and personally—added to the tension of the postunification decade.\nDespite the problems attending unification, as well as a series of scandals in his own party, Kohl won a narrow victory in 1994. In 1996 he surpassed Adenauer’s record as the longest-serving German chancellor since Bismarck. Nevertheless, his popularity was clearly ebbing. Increasingly intolerant of criticism within his own party, Kohl suffered a humiliating defeat when his first choice for the presidency was rejected. Instead, Roman Herzog, the president of the Federal Constitutional Court, was elected in May 1994 and fulfilled his duties effectively and gracefully. As Germany prepared for the 1998 elections, its economy was faltering—unemployment surpassed 10 percent and was double that in much of eastern Germany—and some members of Kohl’s party openly hoped that he would step aside in favour of a new candidate; instead the chancellor ran again and his coalition was defeated, ending his 16-year chancellorship. Kohl was replaced as chancellor by Gerhard Schröder, the pragmatic and photogenic leader of the SPD, which formed a coalition with the Green Party.\nSchröder’s government got off to a rocky start, the victim of the chancellor’s own indecisiveness and internal dissent from his party’s left wing. 
The coalition also suffered from internal dissension within Foreign Minister Joschka Fischer’s Green Party, which was divided between pragmatists such as Fischer and those who regarded any compromise as a betrayal of the party’s principles. In 1999 the government’s problems were swiftly overshadowed by a series of revelations about illegal campaign contributions to the CDU, which forced Kohl and his successor, Wolfgang Schäuble, to resign their leadership posts. In April 2000 the CDU selected as party leader Angela Merkel, who became the first former East German and first woman to lead a major political party in Germany.\nSchröder’s government focused much of its efforts on reforming the German social welfare system and economy.", "score": 26.51193495333148, "rank": 40}, {"document_id": "doc-::chunk-1", "d_text": "Compared with the end of 1989, this figure had risen by roughly 2.3 million. The proportion of foreigners within the total population rose from 6.4 percent in 1989 to 8.9 percent in 2002. Many foreign families have lived in Germany for two or three generations. Roughly 55 million Germans are Christians, 28.2 million of them being Protestant and 27 million being Roman Catholic; in addition, there are approximately 1.7 million Muslims and only 54,000 Jews (representing a mere 10 percent of Germany’s 1933 Jewish population before the Holocaust). The gross domestic product per capita (real) was US$32,962 in 2001.2 After 45 years of the East-West political division of Germany due to the Cold War conflict, the reunification of Germany took place in 1990, when the German Democratic Republic (GDR) joined the territory covered by the Basic Law (Grundgesetz) after the GDR collapsed politically and economically. Simultaneously, five new Länder were established within the territory of the former GDR. 
On this occasion, no use was made of the possibility, provided for in Article 146 of the Basic Law, of creating a new constitution and of allowing the German people to vote on it. This option had not been exercised in 1949 either, when the Basic Law likewise came into force without a popular vote. Sixty-five years after the outbreak of the Second World War, however, Germany regards itself as a community open to the world that promotes the European process of integration as a member of the European Union (EU) and contributes to the creation of a democratic, social, and federal Europe based on the rule of law and a peaceful coexistence (Art. 23I).3 Creation of the Federal Constitution The constitution of the Federal Republic of Germany, called the Basic Law, was drafted and passed by the Parliamentary Council in 1948-49. This council consisted of 65 members delegated by the Länder parliaments. The constitution came into force on 23 May 1949. The basis for the council’s discussions and decisions was the so-called Herrenchiemsee Draft, drawn up by a group of senior civil servants and leading politicians from the Länder.", "score": 26.468417169721132, "rank": 41}, {"document_id": "doc-::chunk-20", "d_text": "They understand and appreciate the workings of parliamentary democracy with its loyal opposition, concessions, and the peaceful passing of power from one government to another; they know the importance of an independent judiciary in protecting individual rights; and they value a free and powerful press. Under a democratic system of government, West Germans have experienced the most successful period of German history, and, whatever the system’s failings, they are unwilling to reject it for panaceas of earlier eras. Eastern Germans are now learning Western democratic values after decades of political repression. 
Having experienced a multitude of political and economic disasters under totalitarian regimes of the right and the left, Germans have matured and become political adults no longer susceptible to the utopian promises of demagogues.\nGermany does face some serious challenges in the second half of the 1990s and in the new century. The most immediate challenge is to fully integrate eastern Germany and its inhabitants into the advanced social market economy and society of western Germany.\nAs of mid-1996, much had already been done to foster the formation of a strong eastern economy and to bring its components up to global standards. In the 1990-95 period, more than US$650 billion had been transferred from western Germany to eastern Germany. This enormous financial infusion has markedly improved eastern living standards, and specialists believe that by the late 1990s, the east’s infrastructure will be the most advanced in Europe. Unemployment in eastern Germany has consistently remained at about 15 percent, however, about one-third above the national level, despite eastern growth rates about three times higher than those in western Germany. Many of the older jobless are not likely to find employment comparable to what they had under the communist system. Yet, many eastern Germans have fared well in the new economy and have adapted well to its demands.\nAchieving complete social unification is expected to take a generation or two. Decades of life in diverse societies have created two peoples with different attitudes. Easterners are generally less ambitious and concerned with their careers than their western counterparts. Their more relaxed work ethic sometimes raises the ire of western Germans. Many easterners also take offense at what has seemed to them arrogant or patronizing attitudes of westerners. 
The “implosion” of East Germany in 1990 prevented a slower, more nuanced introduction of Western institutions and habits of thought to the east that would have resulted in fewer bruised feelings.", "score": 25.65453875696252, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "How long until German Unity Day?\nTag der Deutschen Einheit\nDates of German Unity Day\nThis national holiday is always celebrated on 3 October. If 3 October falls on a weekend, it will not be moved to a weekday.\nSince 1990, the 'Tag der Deutschen Einheit' has been a national holiday in Germany. It is the only official national holiday. All other holidays are managed at the level of the individual federal states.\nThe reunification of Germany took place on 3 October 1990, when the former German Democratic Republic (GDR) was incorporated into the Federal Republic of Germany (FRG).\nFollowing the GDR's first free elections on 18 March 1990, negotiations between the GDR and FRG culminated in a Unification Treaty.\nFurther negotiations between the GDR and FRG and the four occupying powers produced the so-called \"Two Plus Four Treaty\" granting full sovereignty to a unified German state, whose two halves had previously been bound by a number of limitations as a result of its post-WWII status as an occupied nation.\nThe German Unification Day has been celebrated in the capital of whichever federal state holds the chair in the Bundesrat (there are 16 federal states in Germany).\nGerman Unification Day wasn't added as an additional holiday.
In the west, it replaced the original Day of German Unity, which marked the anniversary of a protest on 17 June 1953 in East Germany.\nIn East Germany, the national holiday was 7 October, the Day of the Republic (Tag der Republik), which commemorated the foundation of the GDR in 1949.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-2", "d_text": "The city itself had acquired younger inhabitants; there were always groups of young happy people, going about their business, and new foreign stores had come to town to add to the already high-quality German stores.\nUNZA lecturer of history, Friday Mulenga, said the unification of\n“It is not good to divide people who are the same like the case of\nAs the Germans toast their unification, they will be doing so knowing that some countries in Europe did not want to see the two Germanys re-united because of the First and\nBut it is history and\n“Our special commitment to", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-23", "d_text": "Historical Setting: Early History to 1945\nPEOPLE HAVE DWELLED for thousands of years in the territory now occupied by the Federal Republic of Germany. The first significant written account of this area's inhabitants is Germania, written about A.D. 98 by the Roman historian Tacitus. The Germanic tribes he describes are believed to have come from Scandinavia to Germany about 100 B.C., perhaps induced to migrate by overpopulation. The Germanic tribes living to the west of the Rhine River and south of the Main River were soon subdued by the Romans and incorporated into the Roman Empire. Tribes living to the east and north of these rivers remained free but had more or less friendly relations with the Romans for several centuries.
Beginning in the fourth century A.D., new westward migrations of eastern peoples caused the Germanic tribes to move into the Roman Empire, which by the late fifth century ceased to exist.\nOne of the largest Germanic tribes, the Franks, came to control the territory that was to become France and much of what is now western Germany and Italy. In A.D. 800 their ruler, Charlemagne, was crowned in Rome by the pope as emperor of all of this territory. Because of its vastness, Charlemagne’s empire split into three kingdoms within two generations, the inhabitants of the West Frankish Kingdom speaking an early form of French and those in the East Frankish Kingdom speaking an early form of German. The tribes of the eastern kingdom–Franconians, Saxons, Bavarians, Swabians, and several others–were ruled by descendants of Charlemagne until 911, when they elected a Franconian, Conrad I, to be their king. Some historians regard Conrad’s election as the beginning of what can properly be considered German history.\nGerman kings soon added the Middle Kingdom to their realm and adjudged themselves rulers of what would later be called the Holy Roman Empire. In 962 Otto I became the first of the German kings crowned emperor in Rome. By the middle of the next century, the German lands ruled by the emperors were the richest and most politically powerful part of Europe. German princes stopped the westward advances of the Magyar tribe, and Germans began moving eastward to begin a long process of colonization.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "Let’s rewind a bit: For a long time, Germany refused to acknowledge itself as a “land of immigration”. The bulk of recent immigration came from post-World War II guest worker programs which were initially meant to be temporary, but many migrants stuck around. 
Their children and children’s children – the second and third generations – were born in Germany but continued to face challenges with regard to integration. They struggled socioeconomically and educationally, lacked German language proficiency, and often lived in parallel societies. In addition, Germany’s population was aging and fertility rates were declining.\nSo at the turn of the 21st century, Germany also made a sharp turn where immigration policy was concerned. Integrating “non-ethnic” immigrants and their children (and recruiting highly-skilled immigrants) became a priority. A birthplace citizenship provision was introduced, a national integration plan created, and a “Federal Commissioner for Migration, Refugees, and Integration” established. Language became the focal point of integration efforts: the majority of third country nationals (those from outside of the EU) are now required to take over 600 hours of state-funded integration courses.\nWhich brings us back to “Become a German” or Werden Sie Deutscher, a new documentary that follows an integration course in Berlin over the course of 10 months. We watch as the class – whose “stars” come from countries like Uruguay, Bangladesh, Bulgaria, and Turkey – wades its way through simulated spats with neighbors and gay marriage debates in the classroom to real-life job search trials and mundane subway rides.\nCultural Orientation – Rules and Behaviors. “Time is money”…“First work, then play”.\nAs with any documentary, scenes are cut and pasted together to elicit the most laughs and cringe-worthy moments. Since this is Germany, at least half of the movie zooms in on bureaucracy and rules. You must never throw out an important piece of paper. You must clean up after your dog. You must ride your bike in the right direction of the bike lane, recites a Russian student. The audience chuckles. First work, then… chants the teacher. The audience groans. Do you know any examples of German humor? asks the teacher.
Uhhh, fumbles the Japanese student. 10 seconds go by. I’m still thinking, he mutters and scratches his head. The audience roars.", "score": 25.00000000000005, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "1. According to the website, what is Nationalism? Nationalism is loyalty to the idea of the state rather than to the community. 2. What is the difference between Nationalism and Patriotism? Patriotism is a love of one’s country that inspires one to serve for the benefit of its citizens. Nationalism is also a love of one’s country, but with the belief that the country is superior to other countries. 3. What was the Romantic Movement and how did it enhance nationalist feelings? The Romantic Movement provided a set of cultural myths and folks traditions that set nations apart.\n4. Use the information on only the initial page to briefly describe the impact of nationalism in each of the four countries during the mid-late 1800s: *France- France strove to solidify its own identity as a nation as its territory diminished. *Germany- They wanted to create a German state as much in opposition to Austria as in the interests of national unity. *Italy- Fought to reunify disparate Italian forces, after they were subjugated by French and Austrian leaders. *Russia- Tsar Alexander the 2nd began working on making Russia a modern nation, although other groups bordering Russia faced problems in making their own nations\nThe questions below are more in-depth, dealing with the unification of Italy and Germany. Use the links on the right of the page to help you answer the following questions.\n5. On a separate sheet of paper, make a chronological timeline of the important events leading up to the unification of Italy 6. What term do historians use to identify the reunification of Italy? 7. 
In a well-constructed paragraph, identify each of the following people and the role each played in Italian unification: *Mazzini\nRead excerpts from “General Instructions for the Members of Young Italy” and answer the following questions:\n8. Summarize, in a well-constructed sentence, what Mazzini was trying to say to the members of Young Italy.\n9. What are two phrases/terms that show nationalist feelings? a. She has sufficient strength in herself to become one. b. Italy is destined to become one nation.\n10. What was the goal of “Young Italy” according to the document? Unite all Italian citizens. 11. What challenges did Italians have to overcome in order to unite Italy as one nation-state?", "score": 24.982570408045788, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "For much of its varied and interesting history, Germany has rarely been one united country. Instead, over its long past, what is now known as Germany was once states, principalities, kingdoms and more. With its often divided presence, many people tried to unite the country, including Charlemagne of the Holy Roman Empire. In the 9th century, Charlemagne was successful in ruling over a large territory that included modern-day Germany, yet it was not strong enough to last, and within a single generation the power of the territory was merely symbolic.\nThe 14th century proved disastrous not only for a fragmented Germany but for all of Europe. Wars and the Black Plague wiped out between 30-60% of the continent’s population. In the 15th century, the Habsburgs were successful in securing a hold on the title of Holy Roman Emperor, which would last into the 19th century. Unfortunately, the empire’s various territories were constantly at odds with each other, thus preventing the unity that had occurred in France and England.\nThe Reformation, an incredibly important period in European history, was born in Germany when Martin Luther posted his 95 Theses to the town square. 
Luther hoped to highlight corruption in the Catholic Church, and his desire for change sparked the Reformation that spread quickly across the continent. The next two centuries witnessed the Scientific Revolution, and many great thinkers emerged from Germany, like Nicholas Copernicus and Johannes Kepler.\nThe 19th century was a time of change for Germany as the desire for unification grew. Tired of several states, each with its own ruler and even its own currency, much of the population sought nationhood. The March Revolution erupted in 1848 but was ultimately unsuccessful, as King Friedrich Wilhelm IV rejected a united nation because it would threaten his power. However, unification was inevitable and occurred more than two decades later, in 1871, after the Franco-Prussian War. This unification was ushered in by Otto von Bismarck, who helped to maintain peace in Europe until 1914.\nThe First World War was a massive conflict from 1914 to 1918 between the Allies (Britain, France and Russia) and the Central Powers (Germany and Austria-Hungary). It resulted in more than 9 million deaths and an incredible amount of destruction.", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "What is the origin of a nationwide holiday? The Day of German Unity is celebrated every year on October 3. For Germany, this day is the most important holiday. On this day the unification of the two German states, adopted in 1990 with the reunification treaty, is commemorated. On August 23, 1990, the Volkskammer of the German Democratic Republic declared the accession of the state to the Federal Republic of Germany. This decision came into force on 29 September 1990, and a week later, on October 3rd, 1990, East Germany officially joined the Federal Republic. Under this agreement, October 3 was declared the Day of German Unity and the national day. 
With its accession, the GDR officially ceased to exist and Germany was reunited. Since that year, people from East and West have once again lived in a common country without a wall or border.\nFor 45 years the two countries were separated, and even Berlin was divided, with East Berlin on the eastern side and West Berlin as a West German city. The Unification Treaty also decided that Berlin would become the new capital of the Federal Republic of Germany. The new Day of German Unity replaced the 17th of June in its role for the territory of the old Federal Republic and, for the area of the former GDR, the Day of the Republic, which had been celebrated on October 7. After German reunification, the day of the fall of the Wall, 9 November, was at first discussed as the national day. But this idea was abandoned because the Kristallnacht of 1938 also falls on that date, and October 3 was agreed upon instead. On this day, German unity was completed.\nIts establishment as a national holiday was also set down in Article 2 of the Unification Treaty. Thus, the Day of German Unity is the only legal public holiday in Germany that is set by federal law. All other public holidays in Germany are, however, a matter for the individual states. Celebrations are held every year on the Day of German Unity.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-52", "d_text": "Numerous German cities were shaken by uprisings in which crowds consisting mainly of the urban poor, but also of students and members of the liberal middle class, stormed their rulers’ palaces and demanded fundamental reform. Berlin and Vienna were especially hard hit by what came to be called the revolutions of 1848. The rulers of both cities, like rulers elsewhere, quickly acceded to the demands of their rebellious subjects and promised constitutions and representative government. Conservative governments fell, and Metternich fled to Britain. 
Liberals called for a national convention to draft a constitution for all of Germany. The National Assembly, consisting of about 800 delegates from throughout Germany, met in a church in Frankfurt, the Paulskirche, from May 1848 to March 1849 for this purpose.\nWithin just a few months, liberal hopes for a reformed Germany were disappointed. Conservative forces saw that the liberal movement was divided into a number of camps having sharply different aims. Furthermore, the liberals had little support left among the lower classes, who had supported them in the first weeks of the revolution by constructing barricades and massing before their rulers’ palaces. Few liberals desired popular democracy or were willing to enact radical economic reforms that would help farmers and artisans. As a result of this timidity, the masses deserted the liberals. Thus, conservatives were able to win sizable elements of these groups to their side by promising to address their concerns. Factory workers had largely withheld support from the liberal cause because they earned relatively good wages compared with farmers and artisans.\nOnce the conservatives regrouped and launched their successful counterattack across Germany, many of the reforms promised in March 1848 were forgotten. The National Assembly published the constitution it had drafted during months of hard debate. It proposed the unification of Germany as a federation with a hereditary emperor and a parliament with delegates elected directly. The constitution resolved the dispute between supporters of “Little Germany,” that is, a united Germany that would exclude Austria and the Habsburg Empire, and those supporting “Large Germany,” which would include both. The constitution advocated the latter.\nThe Prussian king, Friedrich Wilhelm IV (r. 1840-58), was elected united Germany’s first emperor. He refused the crown, stating that he could be elected only by other kings. 
At that point, the assembly disbanded.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-3", "d_text": "In truth, it could have been worse; the Morgenthau Plan proposed handing over Germany’s best industrial territories to its neighbors in addition to breaking it up into different states.\nThe result was an East/West division where each state’s foreign policy was controlled by the superpowers. Neither could rearm; both relied upon either Washington or Moscow for security; both had diametrically different ideologies that were imposed from the outside. It was a time of puppets, and only unimportant decisions could be made by native Germans.\nIn the Cold War confrontation, it became obvious to both sides that Germany’s strength was an asset to be used rather than plundered. Both used their power to rebuild Germany economically, though not militarily. German elites on both sides were promoted, applauded, and supported when they achieved great things economically and chided or punished if they tried to do anything else.\nGerman elites had also learned the hard way that Germany alone could not use its military power – great as that might be – to reorder Europe in its favor. Since German power was so obvious, any attempt to militarily alter the balance of Europe would result in an alliance to destroy it. German elites in the East and the West – growing up with plenty of foreign soldiers to remind them of their nation-state’s vulnerabilities – absorbed the lesson fully.\nUntil unity was rather suddenly thrust upon Germany, which has spent the past 25 years trying to find a moral way to use its great power.\nThe collapse of the Eastern Bloc caught East Germany with its pants down; the Berlin Wall almost fell by accident. Germany spent twenty-five years coming to grips with the suddenness of 1989; it is now finally emerging from the shadow of the Cold War.\nGermans are more aware than other nations of slippery slopes. 
Family stories of World War II and the Holocaust tend to do that. So German political culture has a generation of leaders that see war as almost never the answer. But what hasn’t changed is the fundamental German thinking that a united Europe is the only place where Germany will be safe.\nNow German elites have thrown themselves into the European Union instead of Panzer tank production. For 25 years, it’s been possible to see the demilitarized Germany as the model state, but the financial crisis required bailouts, and only Germany had the cash on hand. Now German creditors are slandered as proto-Nazis in Athens.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-48", "d_text": "Austria endured repeated defeats at the hands of the French, most notably at the Battle of Austerlitz in 1805. At this battle, Russians fought alongside Austrians against the French, who were aided by forces from several south German states, including Bavaria, Baden, and Württemberg.\nPrussia reentered the war against France in 1806, but its forces were badly beaten at the Battle of Jena that same year. Prussia was abandoned by its ally Russia and lost territory as a result of the Treaty of Tilsit in 1807. These national humiliations motivated the Prussians to undertake a serious program of social and military reform. The most noted of the reformers–Karl vom Stein, Karl August von Hardenberg, Wilhelm von Humboldt, and Gerhard von Scharnhorst, along with many others–improved the country’s laws, education, administration, and military organization. Scharnhorst, responsible for military reforms, emphasized the importance to the army of moral incentives, personal courage, and individual responsibility. He also introduced the principle of competition and abandoned the privileges accorded to nobility within the officer corps. A revitalized Prussia joined with Austria and Russia to defeat Napoleon at the Battle of Leipzig in late 1813 and drove him out of Germany. 
Prussian forces under General Gebhard von Blücher were essential to the final victory over Napoleon at the Battle of Waterloo in 1815.\nDespite Napoleon’s defeat, some of the changes he had brought to Germany during the French occupation were retained. Public administration was improved, feudalism was weakened, the power of the trade guilds was reduced, and the Napoleonic Code replaced traditional legal codes in many areas. The new legal code was popular and remained in effect in the Rhineland until 1900. As a result of these reforms, some areas of Germany were better prepared for the coming of industrialization in the nineteenth century.\nFrench occupation authorities also allowed many smaller states, ecclesiastical entities, and free cities to be incorporated into their larger neighbors. Approximately 300 states had existed within the Holy Roman Empire in 1789; only about forty remained by 1814. The empire ceased to exist in 1806 when Francis II of Austria gave up his imperial title.", "score": 23.943796734404472, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "Germany was the first country where I lived abroad, it was an unique experience which I mentioned countless times in this blog. And also one experience that changed me as a person, and contributed deeply to my passion for traveling.\nI didn’t know much about this country before I moved there, in fact, I haven’t even searched anything about the city. Moving to a city and a country that I knew very few of was a quite unlike adventure for the 24 years old me. Though, that experience was the gateway for other travels, even to get to explore my own country.\nA bit about Germany’s history\nGermany is one of the European countries with the most recent borders. The German borders as we know today date from the end of the 1990s. 
Being a Central European country, it is normal that its borders changed considerably during the last few centuries, and during that time several countries appeared and disappeared almost from one year to the next.\nThe history of the territory that is Germany nowadays is extremely complex, but the borders started to settle with the creation of a German Confederation of 39 German-speaking states. This Confederation failed, which led to the North German Confederation, which later became the German Empire.\nFrom this point on, most of what we know about Germany is due to the two Great Wars, since those 39 states became one power in a very important geographic zone of Europe.\nWith the end of the Second World War, Germany was split into several parts: zones occupied by the United Kingdom, the United States of America, France and the Soviet Union. The same happened to the capital, where you can still visit the famous Berlin Wall.\nSome parts of the German empire were returned to Poland and to the Soviet Union. In 1949 the German area occupied by the Allies formed the Federal Republic of Germany; there were then two Germanies, the Federal and the Democratic. They only merged again 41 years later.\nSome curiosities about Germany\n- The largest navigable aqueduct in the world is above the Elbe River, in Magdeburg. It is a canal over a river, which considerably improved the country’s navigability.\n- A third of the country is forests and wooded areas. 
One of my biggest surprises when I moved to Dresden was seeing so many trees inside the city.\n- Being such a central country, it isn’t surprising to have borders with 9 other countries.", "score": 23.531582830279028, "rank": 53}, {"document_id": "doc-::chunk-2", "d_text": "At the time of the 1789 French Revolution, only half of the French people spoke some French, and 12–13% spoke the version of it that was to be found in literature and in educational facilities, according to Hobsbawm.\nDuring the Italian unification, the number of people speaking the Italian language was even lower. The French state promoted the replacement of various regional dialects and languages by a centralised French language. The introduction of conscription and the Third Republic's 1880s laws on public instruction facilitated the creation of a national identity, under this theory.\nSome nation states, such as Germany and Italy, came into existence at least partly as a result of political campaigns by nationalists, during the 19th century. In both cases, the territory was previously divided among other states, some of them very small. The sense of common identity was at first a cultural movement, such as in the Völkisch movement in German-speaking states, which rapidly acquired a political significance. In these cases, the nationalist sentiment and the nationalist movement clearly precede the unification of the German and Italian nation states.\nHistorians Hans Kohn, Liah Greenfeld, Philip White and others have classified nations such as Germany or Italy, where cultural unification preceded state unification, as ethnic nations or ethnic nationalities. However, \"state-driven\" national unifications, such as in France, England or China, are more likely to flourish in multiethnic societies, producing a traditional national heritage of civic nations, or territory-based nationalities. 
Some authors deconstruct the distinction between ethnic nationalism and civic nationalism because of the ambiguity of the concepts. They argue that the paradigmatic case of Ernest Renan is an idealisation and that it should be interpreted within the German tradition, not in opposition to it. For example, they argue that the arguments used by Renan in the lecture What is a nation? are not consistent with his thinking. This alleged civic conception of the nation would be determined only by the loss of Alsace and Lorraine in the Franco-Prussian War.\nThe idea of a nation state was and is associated with the rise of the modern system of states, often called the "Westphalian system" in reference to the Treaty of Westphalia (1648).", "score": 23.316464619714296, "rank": 54}, {"document_id": "doc-::chunk-12", "d_text": "For most of the two millennia that central Europe has been inhabited by German-speaking peoples, the area called Germany was divided into hundreds of states, many quite small, including duchies, principalities, free cities, and ecclesiastical states. Not even the Romans united Germany under one government; they managed to occupy only its southern and western portions. At the beginning of the ninth century, Charlemagne established an empire, but within a generation its existence was more symbolic than real.\nMedieval Germany was marked by division. As France and England began their centuries-long evolution into united nation-states, Germany was racked by a ceaseless series of wars among local rulers. The Habsburg Dynasty’s long monopoly of the crown of the Holy Roman Empire provided only the semblance of German unity. Within the empire, German princes warred against one another as before. The Protestant Reformation deprived Germany of even its religious unity, leaving its population Roman Catholic, Lutheran, and Calvinist. 
These religious divisions gave military strife an added ferocity in the Thirty Years’ War (1618-48), during which Germany was ravaged to a degree not seen again until World War II.\nThe Peace of Westphalia of 1648 left Germany divided into hundreds of states. During the next two centuries, the two largest of these states–Prussia and Austria–jockeyed for dominance. The smaller states sought to retain their independence by allying themselves with one, then the other, depending on local conditions. From the mid-1790s until Prussia, Austria, and Russia defeated Napoleon at the Battle of Leipzig in 1813 and drove him out of Germany, much of the country was occupied by French troops. Napoleon’s officials abolished numerous small states, and, as a result, in 1815, after the Congress of Vienna, Germany consisted of about forty states.\nDuring the next half-century, pressures for German unification grew. Scholars, bureaucrats, students, journalists, and businessmen agitated for a united Germany that would bring with it uniform laws and a single currency and that would replace the benighted absolutism of petty German states with democracy. The revolutions of 1848 seemed at first likely to realize this dream of unity and freedom, but the monarch who was offered the crown of a united Germany, King Friedrich Wilhelm IV of Prussia, rejected it.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-3", "d_text": "Two states became one: the former German Democratic Republic (GDR) was transformed into five federal states (Bundesländer) that joined with the existing eleven federal states to constitute the Federal Republic of Germany. This transformation has been accompanied by heady euphoria at freedom of movement, freedom of assembly, freedom of speech, and freedom to buy a fabulous array of consumer goods provided by a free market economy. 
It has also been accompanied by a dramatic increase in unemployment in the East and heavy tax burdens on the more prosperous citizens of the West. Resentment of these two economic consequences of unification has led to bitterness with the present and nostalgia for the past. Many former citizens of the GDR feel a loss of a sense of community, as well as a loss of jobs and social support. Many also feel a loss of their bearings and values. In the former West Germany many citizens resent the economic cost of unification and are angry that the social process of unification is not already complete.\nThis darker side of the transformation has had a violent, sometimes murderous, aspect. The racism endemic in many societies has exploded in a public way in Germany in the past five years. Hostility against foreigners, a phenomenon seen in many countries, has linked up with right-wing and neo-Nazi movements in Germany to yield incidents of violence and brutality. Television audiences around the world watched with horror as the local population in certain German cities crowded around and supported neo-Nazi assaults and arson attacks on defenseless asylum seekers. People whose only offense was that they did not look German have been killed. Other \"foreigners\" have been driven from their houses. Widespread beatings of \"foreigners\" seem to have become a regular feature of major holidays in some places in Germany.1\nIt is clear that racist attacks and killings are not unique to Germany. Genocide has been committed in Rwanda and the former Yugoslavia. Many violent attacks against foreigners have occurred in France, England, Sweden, and other West European democracies in the early 1990s. However, the German government was slow to respond to attacks on foreigners and to initiate specific measures to combat right-wing violence. 
In fact, \"the federal government must shoulder much of the blame for the increase in right-wing violence\" that took place during the first years following unification.2 What is more, history has left a special legacy for Germany.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-55", "d_text": "None of the larger German states had supported either Prussia’s war or the formation of the North German Confederation led by Prussia. The states that formed what is often called the Third Germany, that is, Germany exclusive of Austria and Prussia, did not desire to come under the control of either of those states. None of them wished to be pulled into a war that showed little likelihood of benefiting any of them. In the Seven Weeks’ War, the support they gave Austria had been lukewarm.\nIn 1870 Bismarck engineered another war, this time against France. The conflict would become known to history as the Franco-Prussian War. Nationalistic fervor was ignited by the promised annexation of Lorraine and Alsace, which had belonged to the Holy Roman Empire and had been seized by France in the seventeenth century. With this goal in sight, the south German states eagerly joined in the war against the country that had come to be seen as Germany’s traditional enemy. Bismarck’s major war aim–the voluntary entry of the south German states into a constitutional German nation-state–occurred during the patriotic frenzy generated by stunning military victories against French forces in the fall of 1870. Months before a peace treaty was signed with France in May 1871, a united Germany was established as the German Empire, and the Prussian king, Wilhelm I, was crowned its emperor in the Hall of Mirrors at Versailles.\nThe German Empire–often called the Second Reich to distinguish it from the First Reich, established by Charlemagne in 800–was based on two compromises. 
The first was between the king of Prussia and the rulers of the other German states, who agreed to accept him as the kaiser (emperor) of a united Germany, provided they could continue to rule their states largely as they had in the past. The second was the agreement among many segments of German society to accept a unified Germany based on a constitution that combined a powerful authoritarian monarchy with a weak representative body, the Reichstag, elected by universal male suffrage. No one was completely satisfied with the bargain. The kaiser had to contend with a parliament elected by the people in a secret vote. The people were represented in a parliament having limited control over the kaiser.\nAs had been the tradition in Prussia, the kaiser controlled foreign policy and the army through his handpicked ministers, who formed the government and prepared legislation.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-3", "d_text": "In fact, Germany today is in a political-psychological condition that can be described only as Faustian: “Zwei Seelen wohnen, ach, in meiner Brust” (“Oh, two souls live in my breast”). Now that monetary union has gone ahead, the country woke up in its new bed on January 1, 2000, scratched its head, and asked itself, “Now, why did we just give up the deutsche mark?”\nSince 1989, we have seen how reluctant West German taxpayers have been to provide unemployment benefits, even to their own compatriots in the East. Do we really believe they would be willing to pay for the French unemployed as well?\nWhat is the answer? Of course there are economic arguments for monetary union. But monetary union was conceived as an economic means to a political end. In general terms, it is the continuation of the functionalist approach adopted by the French and German founding fathers of the European Economic Community: political integration through economic integration. 
But there was a more specific political reason for the decision to make this the central goal of European integration of the last decade. As so often before, the key lies in a compromise between French and German national interests. In 1990, there was at the very least an implicit linkage made between François Mitterrand’s anxious and reluctant support for German unification and Helmut Kohl’s decisive push toward European monetary union. “The whole of Deutschland for Kohl, half the deutsche mark for Mitterrand,” as one wit put it at the time. Leading German politicians will acknowledge privately that monetary union is the price paid for German unification.\nSo Germany, this newly restored nation-state, has entered monetary union full of reservations, doubts, and fears.\nThe Coming Crisis?\nIn fact, received wisdom in EU capitals is already that the EMU will sooner or later face a crisis: perhaps in 2001 or 2002 (just as Britain is preparing to join). Euro-optimists hope this crisis will catalyze economic liberalization, European solidarity, and perhaps even those steps of political unification that historically have preceded, not followed, successful monetary unions. A shared fear of the catastrophic consequences of a failure of monetary union will draw Europeans together, as the shared fear of a common external enemy (Mongols, Turks, Soviets) did in the past.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "Published on February 26, 2009\nSeminar 12 : Seminar 12 BY: Abi Hinojo QUESTION : QUESTION Assess the extent to which the unification of Germany under Bismarck led to authoritarian government there between 1871 and 1914. Thesis : Thesis With Bismarck’s obsession with power, manipulative and strategic skills he led to create an authoritarian government in Germany. The Kleindeutsch plan had prevailed. 
The Austro-Prussian War was a victory for the Prussians, due to Bismarck’s diplomatic preparations and superior military organization and technology. The North German Confederation was established by Bismarck with King William I as president. The federal constitution permitted each state to maintain its own local government. The parliament had two houses that shared power “equally”. For Bismarck, this government structure gave him the ability to outwit the middle class by appealing to the working class. In the Franco-Prussian War, which he provoked, Bismarck brought the four southern German states into the North German Confederation. Bismarck saw the Catholic Center Party and the S.P.D. as threats to imperial power, and thus he wanted to destroy them. To achieve this, he alternated between reforms meant to win more popular support and campaigns such as the Kulturkampf. When the German Empire was proclaimed it was the most powerful nation in Europe, with William I as Emperor of Germany and Bismarck as Imperial Chancellor; the parliament had little power, as Germany became a conservative autocracy with the nobility allied with the monarch. German Unification: Grossdeutsch Plan failed – unifying Germany with Prussia and Austria. Kleindeutsch Plan succeeded – unified Germany without Austria. Otto von Bismarck: unifies Germany almost all by himself. Junker, chancellor, mastermind behind the government, realpolitik. Gained the favor of the king through the Gap theory, thus gaining more power. Since the constitution, he ignored the liberals in the legislature and did his own thing. Govt. 
collects taxes w/ out consent of the parliament.", "score": 22.042280876971514, "rank": 59}, {"document_id": "doc-::chunk-24", "d_text": "During the next few centuries, however, the great expense of the wars to maintain the empire against its enemies, chiefly other German princes and the wealthy and powerful papacy and its allies, depleted Germany’s wealth and slowed its development. Unlike France or England, where a central royal power was slowly established over regional princes, Germany remained divided into a multitude of smaller entities often warring with one another or in combinations against the emperors. None of the local princes, or any of the emperors, were strong enough to control Germany for a sustained period.\nGermany’s so-called particularism, that is, the existence within it of many states of various sizes and kinds, such as principalities, electorates, ecclesiastical territories, and free cities, became characteristic by the early Middle Ages and persisted until 1871, when the country was finally united. This disunity was exacerbated by the Protestant Reformation of the sixteenth century, which ended Germany’s religious unity by converting many Germans to Lutheranism and Calvinism. For several centuries, adherents to these two varieties of Protestantism viewed each other with as much hostility and suspicion as they did Roman Catholics. For their part, Catholics frequently resorted to force to defend themselves against Protestants or to convert them. As a result, Germans were divided not only by territory but also by religion.\nThe terrible destruction of the Thirty Years’ War of 1618-48, a war partially religious in nature, reduced German particularism, as did the reforms enacted during the age of enlightened absolutism (1648-1789) and later the growth of nationalism and industrialism in the nineteenth century. 
In 1815 the Congress of Vienna stipulated that the several hundred states existing in Germany before the French Revolution be replaced with thirty-eight states, some of them quite small. In subsequent decades, the two largest of these states, Austria and Prussia, vied for primacy in a Germany that was gradually unifying under a variety of social and economic pressures. The politician responsible for German unification was Otto von Bismarck, whose brilliant diplomacy and ruthless practice of statecraft secured Prussian hegemony in a united Germany in 1871. The new state, proclaimed the German Empire, did not include Austria and its extensive empire of many non-German territories and peoples.\nImperial Germany prospered. Its economy grew rapidly, and by the turn of the century it rivaled Britain’s in size.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "Welded together from unequal parts, modern Germany has overcome historic challenges and matured into Europe's economic powerhouse. Scenes from a marriage that's a little short on romance—and long on success\nThere won't be many birthday parties for Berlin this year. The city's main thoroughfare, Unter den Linden, is still jammed with tourists, and the Prussian-era museums, destroyed by Allied bombs during World War II, have been restored to their former glory. The cafés and bars of Mitte, in what used to be East Berlin, overflow with polyglot congregations of artists and hipsters. But Berliners are staying stoic about Oct. 3, 2010—the 20th anniversary of German unification, an event that ended four decades of German division and closed the books on the Cold War. Last November the commemoration of the fall of the Berlin Wall brought 35 heads of state to the German capital. Only a handful are expected this time. Even German Chancellor Angela Merkel plans to be out of town. 
\"We're all still exhausted from last November,\" says Constanze Stelzenmüller, a senior fellow at the German Marshall Fund in Berlin. \"I don't find myself getting excited about Oct. 3 the way I did last year.\"\nMost Germans identify Nov. 9, 1989—the night the Wall came down—as the date on which the nation reclaimed its destiny. The images of East Berliners running through the Wall and into the arms of strangers on the other side, of ordinary people hammering concrete and drinking champagne at the Brandenburg Gate, remain indelible. Few of the East Germans who streamed into West Berlin on that November night in 1989 would have predicted that the two Germanys could become a single, free, and democratic state within a year—or that the Iron Curtain would crumble. That the unimaginable did occur in the heart of Europe two decades ago made the world a better, safer place. And so even in the absence of conspicuous revelry, the events that culminated in the peaceful unification of Germany are still worth celebrating. When you ask Germans today to describe, 20 years later, what unification meant to them, one word invariably recurs: To them, it was a \"miracle.\" Perhaps a bigger miracle is that the next two decades turned out as well as they did.\nToday, Germany is the most important country in Europe.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-10", "d_text": "It resulted in the redrawing of the political map of Europe, where the number of independent states – almost all of them national states – first diminished (between 1815 and 1870 following the unification of Germany and of Italy), and later increased: between 1870 and 1990, the number of independent states increased from 20 to 41. Most of these new states appeared in central and in eastern Europe as a result of the dismemberment of former empires – Habsburg, German, Tsarist and Ottoman – after a lost war or as an effect of the peaceful collapse of the Soviet Union. 
Moreover, from the 1870s onwards, the growing importance of nations and national ideologies contributed to the creation of a climate of permanent tension between states confronted with the separatist movements inside their borders and with neighbours’ claims to parts of their territory.\nIn a climate of arms race and colonial rivalry, of coalitions intended to maintain the European balance of power actually working to destroy it, a spark was enough to provoke an enormous explosion. It happened in 1914 with effects that lasted until 1990: four years of trenches; the Bolshevik revolution in Russia; the peace treaty of Versailles, contested by Germany and by the new Soviet power; the fascist takeover in Italy; the Great Crash of 1929; the accession of Hitler and his Nazis to total power in Germany; the Second World War with the extermination of the European Jewry; the “Iron Curtain” and the Stalinist totalitarianism forcefully imposed on almost all countries of central and eastern Europe; the Cold War; and the long decay of the Soviet system until its final collapse.\nAll horrors of the twentieth century did not succeed, however, in erasing the results of the second cultural unification of Europe. On the contrary, this was extended and deepened by the development of rail networks, motorways, new means of communication, by the spread of industry, the growth of cities, by the advances of literacy… in short, by the greater uniformity of living conditions and the material environment. Moreover, among the lasting results of the second cultural unification was the idea of Europe as a cultural reality, shared since the eighteenth century by a significant part of the elites of a majority of European nations. 
These elites became more and more convinced that this cultural reality had to be completed by an economic and even a political one.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-59", "d_text": "The federal chancellery published a new commercial code and established a uniform currency. The indemnity that France had to pay Germany after losing the 1870-71 war provided capital for railroad construction and building projects. A speculative boom resulted, characterized by large-scale formation of joint-stock companies and unscrupulous investment practices. This period of intense financial speculation and construction, called by Germans the Gründerzeit (founders’ time), ended with the stock market crash of 1873.\nDespite the crash and several subsequent periods of economic depression, Germany’s economy grew rapidly. By 1900 it rivaled the more-established British economy as the world’s largest. German coal production, about one-third of Britain’s in 1880, increased sixfold by 1913, almost equaling British yields that year. German steel production increased more than tenfold in the same period, surpassing British production by far.\nIndustrialization began later in Germany than in Britain, and the German economy was not a significant part of the world economy until late in the nineteenth century. Germany’s industrialization started with the building of railroads in the 1840s and 1850s and the subsequent development of coal mining and iron and steel production, activities that made up what is called the First Industrial Revolution. In Germany, the Second Industrial Revolution, that is, the growth of chemical and electrical industries, followed the enormous expansion of coal and steel production so closely that the country can be said to have experienced the two revolutions almost simultaneously. Germany took an early lead in the chemical and electrical industries. 
Its chemists became renowned for their discoveries, and by 1914 the country was producing half the world’s electrical equipment. As a result of these developments, Germany became the continent’s industrial giant.
Germany’s population also expanded rapidly, growing from 41.0 million in 1871 to 49.7 million in 1891 and 65.3 million in 1911. The expanding and industrializing economy changed the way this rapidly expanding population earned its livelihood. In 1871 about 49 percent of the workforce was engaged in agriculture; by 1907 only 35 percent was. In the same period, industry’s share of the rapidly growing workforce rose from 31 percent to 40 percent. Urban birth rates were often the country’s highest, but there was much migration from rural areas to urban areas, where most industry was located.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-1", "d_text": "Austro-Prussian War (1866) : Austro-Prussian War (1866) 7 weeks, German Civil War Caused by conflicts over authority of Schleswig and Holstein Bismarck sought a localized war Prussian triumph unified most of Germany w/out Austria Franco-Prussian War (1870-71) : Franco-Prussian War (1870-71) Bismarck uses strategic skills to provoke war w/ France to complete unifying Germany Used war w/ France to unify the 4 southern German states into the German Confederation Treaty of Frankfurt: (May, 1871) Alsace and Lorraine ceded to Germany Government STRUCTURE : Government STRUCTURE Parliament: Reichstag bicameral legislature that shared power “equally” upper house: Bundesrat, a conservative body with representatives from each German state. lower house: Bundestag represented the nation, elected by universal male suffrage.
North German confederation : North German confederation North German Confederation established by Bismarck with King William as President (1867) federal constitution permitted each state to have its own local government By granting universal male suffrage it attracted liberals to Bismarck’s side. (Manipulative) German Empire : German Empire proclaimed on Jan. 18, 1871 conservative autocracy w/ nobility allied w/ monarch Germany=most powerful nation in Europe William I → Emperor; Kaiser Wilhelm of Germany had ultimate power Bismarck → Imperial Chancellor (mastermind behind govt.) federal union of Prussia & 24 smaller German states Reichstag had little power German Political System : German Political System Conservatives represented Junkers Center Party (Catholic Party) opposed Bismarck’s policy of centralization & endorsed the political concept of Particularism, which advocated regional priorities Democratic Socialist party : Democratic Socialist party German middle class excluded; middle class gave tacit support to imperial authority and noble influence Bismarck viewed the Catholic Party and S.P.D.", "score": 20.507727284661815, "rank": 64}, {"document_id": "doc-::chunk-4", "d_text": "Following Napoleon’s defeat, the 1814-1815 Congress of Vienna replaced the Holy Roman Empire with the German Confederation, made up of 38 independent states. A loose confederation, this construct had no common citizenship, legal system, or administrative or executive organs. It did, however, provide for a Federal Diet that met in Frankfurt–a Congress of deputies of the constituent states who would meet to discuss issues affecting the Confederation as a whole.
The Path to Unification: The Customs Union and the 1848 Revolutions
Prussia led a group of 18 German states that formed the German Customs Union in 1834, and the Prussian Thaler eventually became the common currency used in this region.
The Customs Union greatly enhanced economic efficiency, and paved the way for Germany to become a single economic unit during the 19th Century’s period of rapid industrialization. Austria chose to remain outside the German Customs Union, preferring instead to form its own customs union with the Hapsburg territories–a further step down the path of a unified Germany that did not include Austria.
France’s 1848 February Revolution that overthrew King Louis Philippe sparked a series of popular uprisings throughout the German states. Panicking local leaders provided several political, social, and economic concessions to the demonstrators, including agreeing to a national assembly that would discuss the constitutional form of a united Germany, individual rights, and economic order. The assembly rapidly devolved into competing factions; meanwhile, the conservative leaders of the German states reconstituted their power. When the assembly finally determined that there should be a united, federal Germany (excluding Austria) with universal male suffrage, organized as a constitutional monarchy under an Emperor–and offered that emperor title to the King of Prussia–there was no longer any interest or political reason (least of all in absolutist, powerful Prussia) for the leaders to listen. The Prussian monarch rejected the assembly’s offer, and the assembly was forcefully disbanded without achieving any of the stated goals of the 1848 revolutionaries.
Nevertheless, the 1848 Revolutions did leave a lasting legacy. The factions of the ill-fated national assembly went on to develop into political parties. Certain economic and social reforms, such as the final abolition of feudal property structures, remained. The idea of German unity was firmly established.", "score": 20.327251046010716, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "Even in the Unification Treaty it was determined that Berlin be the capital.
On June 20, 1991 the Deutsche Bundestag passed a resolution to also move the seat of government and Parliament from Bonn – since 1949 the capital of the Federal Republic – to Berlin. Since the move in 1999, Germany once again has in Berlin a pulsating political center that bears comparison with the major cities of the big European neighboring states. In addition to the newly designed Reichstag building, symbols of this are the Chancellery and the open Brandenburg Gate, which represents the overcoming of the country’s division. For a while there had been fears that the government’s move to Berlin could become an expression of a new German megalomania, with which the country’s economic and political weight would upset the status quo in Europe again. These fears proved to be wrong. Rather, German Unity was to be the initial spark that led to the overcoming of the division of Europe into east and west.\nAs such, Germany actually played a pioneering role in the political and economic integration of the continent. In addition it gave up one of the most important instruments and symbols in the unification process, the Deutschmark, to create a European Monetary Union, the Eurozone, which would not exist without Germany. Nor, despite their being heavily involved in the unification process, have the various federal governments since 1990 ever lost sight of European integration, but have played an active role in its development, which culminated in the Lisbon process.\nUltimately, in the course of the 1990s Germany’s role in world politics also changed. The participation of German troops in international peace-keeping and stabilization missions makes this increased responsibility visible to the outside world. In domestic political discussion, however, the foreign missions are in some cases the subject of controversial discussion. 
Given the NATO allies’ expectation that the Federal Republic of Germany take on a share of the common obligations commensurate with its size and political weight, it is clear in retrospect that as a divided country Germany had enjoyed a political status that ceased to exist when the bipolar world order came to an end. Since there is no longer a risk of confrontation between Bundeswehr troops in the west and those of the Nationale Volksarmee in the GDR, there has been continually growing international expectation for Germany to assume corresponding responsibility.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-47", "d_text": "Although there were no political challenges to enlightened absolutism, as there had been in France, all phenomena, including religion, were subject to critical, reasoned examination to determine their rationality. In this more tolerant environment, differing religious views could still create social friction, but ways were found for the empire’s three main religions–Roman Catholicism, Lutheranism, and Calvinism–to coexist in most states. The expulsion of about 20,000 Protestants from the ecclesiastical state of Salzburg during 1731-32 was viewed by the educated public at the time as a harking back to less enlightened days.
Several new universities were founded, some soon considered among Europe’s best. An increasingly literate public made possible a jump in the number of journals and newspapers. At the end of the seventeenth century, most books printed in Germany were in Latin. By the end of the next century, all but 5 percent were in German. The eighteenth century also saw a refinement of the German language and a flowering of German literature with the appearance of such figures as Gotthold Lessing, Johann Wolfgang von Goethe, and Friedrich Schiller.
German music also reached great heights with the Bach family, George Frederick Handel, Joseph Haydn, and Wolfgang Amadeus Mozart.\nThe French Revolution and Germany\nThe French Revolution, which erupted in 1789 with the storming of the Bastille in Paris, at first gained the enthusiastic approval of some German intellectuals, who welcomed the proclamation of a constitution and a bill of rights. Within a few years, most of this support had dissipated, replaced by fear of a newly aggressive French nationalism and horror at the execution of the revolution’s opponents. In 1792 French troops invaded Germany and were at first pushed back by imperial forces. But at the Battle of Valmy in late 1792, the French army, a revolutionary citizens’ army fighting on its own soil, defeated the professional imperial army. By 1794 France had secured control of the Rhineland, which it was to occupy for twenty years.\nDuring the Rhineland occupation, France followed its traditional policy of keeping Austria and Prussia apart and manipulating the smaller German states. In observance of the Treaty of Basel of 1795, Prussian and German forces north of the Main River ceased efforts against the French.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "Germany is a cosmopolitan country shaped by a pluralism of lifestyles. Demographic change is set to play a major role in the coming years.\nWith some 81.2 million inhabitants, Germany is the most populous nation in the European Union. The modern, cosmopolitan country has developed into an important immigration country. A good 16.4 million people in Germany have a migratory background. Germany is now among those nations with the most liberal immigration rules. 
According to a 2014 study by the Organisation for Economic Co-operation and Development (OECD), it is the most popular immigration country after the USA.\nMost people in Germany have a high standard of living, on an international comparison, and the corresponding freedom to shape their own lives. The United Nations’ Human Development Index (HDI) 2014 ranks Germany sixth of 187 countries. In the Nation Brands Index 2014, an international survey on the image of 50 countries, Germany tops the scale – also owing to its high values in the areas of quality of life and social justice. Germany considers itself a welfare state, whose primary task is to protect all its citizens.\nNew ways of life are changing the society\nGerman society is shaped by a pluralism of lifestyles and ethno-cultural diversity. New ways of life and everyday realities are changing daily life in society. Immigrants enrich the country with new perspectives and experiences. There is great social openness and acceptance as regards alternative ways of life and different sexual orientations. Advances are being made in terms of gender equality and traditional gender role assignments are no longer rigid. People with disabilities are taking an ever greater role in social life.\nDemographic and socioeconomic change\nIn future, demographic change is set to shape Germany more than virtually any other development. The birth rate has been constantly low since the late 1990s at 1.4 children per woman, and life expectancy is rising. By 2050 the population in Germany is estimated to shrink by around seven million people. At the same time, the growing number of elderly people is presenting social welfare systems with new challenges.\nSocioeconomic change in Germany in recent years has led to the emergence of new social risks and stronger social diversification according to economic living conditions. 
Although in 2014 unemployment was at the same low level as in 1991 (on average 2.7 million), almost one in six in Germany is at risk of poverty, particularly young people and single parents. Moreover, social differences continue to exist between east and west.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-50", "d_text": "Some of these reforms had already been under discussion during the eighteenth-century Enlightenment, and awareness of their desirability had spread during the Napoleonic era. In addition, the economic reforms introduced into the Rhineland by France had taken hold. The business class that formed after 1815 pressed for abolition of restrictive trade practices favored by traditional handicraft guilds. Businessmen also sought a common currency and system of measurements for Germany, as well as a reduction of the numerous tolls that made road and river travel expensive and slow.\nDuring the 1820s, significant progress was made in reducing customs duties among German states. At Prussian instigation, the Zollverein (Customs Union) began to form, and by the mid-1830s it included all the most important German states except Austria. Prussia saw to it that its chief rival within Germany was excluded from the union. Vienna, for its part, did not realize at this early point the political and economic significance of intra-German trade.\nMany of Germany’s liberal intelligentsia–lower government officials, men of letters, professors, and lawyers–who pushed for representative government and greater political freedom were also interested in some form of German unity. They argued that liberal political reforms could only be enacted in a larger political entity. Germany’s small, traditional states offered little scope for political reform.\nAmong those groups desiring reform, there was, ironically, little unity. Many businessmen were interested only in reforms that would facilitate commerce, and they gave little thought to politics. 
Political liberals were split into a number of camps. Some wished for a greater degree of political representation, but, given a widespread fear of what the masses might do if they had access to power, these liberals were content to have aristocrats as leaders. Others desired a democratic constitution, but with a hereditary king as ruler. A minority of liberals were ardent democrats who desired to establish a republic with parliamentary democracy and universal suffrage.\nThe ideal of a united Germany had been awakened within liberal groups by the writings of scholars and literary figures such as Johann Gottfried Herder (1744-1803) and by the achievements of French nationalism after the revolution. France’s easy victories over Germany’s small states made the union of a people with a common language and historical memory desirable for practical reasons alone.", "score": 18.90404751587654, "rank": 69}, {"document_id": "doc-::chunk-2", "d_text": "However, it also faced the staggering added expenses of unifying the east and west. These expenses were all the more unsettling because they were apparently unexpected. Kohl and his advisers had done little to prepare German taxpayers for the costs of unification, in part because they feared the potential political consequences but also because they were themselves surprised by the magnitude of the task. The core of the problem was the state of the eastern German economy, which was far worse than anyone had realized or admitted. Only a handful of eastern firms could compete on the world market; most were woefully inefficient and also environmentally destructive. As a consequence, the former East German economy collapsed, hundreds of thousands of easterners faced unemployment, and the east became heavily dependent on federal subsidies. At the same time, the infrastructure—roads, rail lines, telephones, and the like—required massive capital investment in order to provide the basis for future economic growth. 
In short, the promise of immediate prosperity and economic equality, on which the swift and relatively painless process of unification had rested, turned out to be impossible to fulfill. Unemployment, social dislocation, and disappointment continued to haunt the new Länder more than a decade after the fall of the Berlin Wall.\nThe lingering economic gap between the east and west was just one of several difficulties attending unification. Not surprisingly, many easterners resented what they took to be western arrogance and insensitivity. The terms Wessi (“westerner”) and Ossi (“easterner”) came to imply different approaches to the world: the former competitive and aggressive, the product of what Germans call the West’s “elbow society”; the latter passive and indolent, the product of the stifling security of the communist regime. The PDS became the political voice of eastern discontents, with strong if localized support in some of the new Länder. Moreover, the neofascist German People’s Union (Deutsche Volksunion), led by millionaire publisher Gerhard Frey, garnered significant support among eastern Germany’s mass of unemployed workers. In addition to the resentment and disillusionment over unification that many easterners and some westerners felt, there was also the problem of coming to terms with the legacies left by 40 years of dictatorship. 
East Germany had developed a large and effective security apparatus (the Stasi), which employed a wide network of professional and amateur informants.", "score": 18.90404751587654, "rank": 70}, {"document_id": "doc-::chunk-1", "d_text": "Germany was formed out of many different kingdoms, so we have various little cultural expressions (food, jokes, dialects…) yet we all speak the same language.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-0", "d_text": "Germany's Last Love Parade
Its beginnings were in the still divided Berlin of 1989, a small demonstration for peace and understanding through music, and it became one of the world's most popular musical festivals. The Duisburg Love Parade in 2010 ended with 21 deaths and over 500 injured, and was the last.
Germany's National Anthem
Austrian composer Joseph Haydn created a melody as a hymn to God for an Austrian Emperor. With different lyrics, written by a German poet, it became Germany's national anthem, and for a time this beautiful music was associated with a dark period in the country's past.
Germany's Prehistoric Solar Observatory
For the prehistoric agricultural civilization that 7,000 years ago created Germany's Goseck Circle, Solstice was an important seasonal midpoint celebration. The circle predates Britain's Stonehenge by at least two millennia and is probably the world's oldest solar observatory.
Germany's Radioactive Wild Boar and Mushrooms
There continue to be radioactive wild boars in Baden-Württemberg and Bavaria, together with an overall population boom in wild boar families.
Thanks to the Chernobyl meltdown of 1986 there is another side to some of the residents of Germany's beautiful forests, including radioactive mushrooms.
Germany's Romantic Road
It's a spectacular journey, through stunning scenery filled with contrasts and historic medieval towns, into a world from the past. Germany’s Romantische Strasse, the Romantic Road, is a theme route showcasing southern German culture from food and wine to art and architecture. And of course history.
Germany's Ruhr Valley - From Coal to Culture
Germany's rejuvenated Ruhr Valley has a 750 year industrial history, now transformed into a region of plants, animals and culture; its 53 towns and cities were a \"European Capital of Culture\". A tribute honoring the way the region emphasizes European cultures, and the links connecting them.
Germany's Tree Lined Avenues Route
It began over 250 years ago as trees planted to help travelers find their way: Germany's Avenues Route, the tree-lined Deutsche Alleenstrasse.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-1", "d_text": "When Germany was defeated in 1918, the German Empire ceased to exist and the Treaty of Versailles (1919) placed the blame for the war on Germany. The country was forced to pay reparations, limit its military and give up territory.
Following the war, Germany entered the Weimar Republic era, which sought to bring about democracy. However, by mid-1933, Adolf Hitler and the National Socialist German Workers’ Party (Nazis) destroyed the idea of democracy, instead establishing a dictatorship. Under Hitler the Nuremberg Laws of 1935 stripped Germany’s Jewish and Roma populations of their citizenship and took away all of their rights. 
Despite this terror, Hitler instigated public works programs that helped bring Germany out of the economic disaster of the First World War, thus making him popular with a great deal of the population.\nThe Second World War (1939-1945) was a global war that was the deadliest conflict in human history. The Holocaust was responsible for the death of approximately six million Jews, as well as more than a million Roma people, communists, homosexuals and more. When the war ended in 1945, Hitler committed suicide and Germany was divided into four occupied zones headed by the Soviet Union, the United States, Britain and France. Of these zones, two states emerged, East and West Germany. While the East fell behind economically, the West grew to become one of the world’s richest nations.\nThese two states continued for years until 1990 when Germany was once again united and the infamous Berlin Wall was brought down. Today, Germany hosts the largest economy in Europe and has come a long way from the terrors of the 20th century. Its location in the heart of the continent as well as its history, cuisine, culture and landscapes has made it an increasingly popular tourist destination.\nGermany Travel Information\nAt Goway we believe that a well-informed traveller is a safer traveller. With this in mind, we have compiled an easy to navigate travel information section dedicated to Germany.\nLearn about the history and culture of Germany, the must-try food and drink, and what to pack in your suitcase. Read about Germany's nature and wildlife, weather and geography, along with 'Country Quickfacts' compiled by our travel experts. 
Our globetrotting tips, as well as our visa and health information, will help ensure you're properly prepared for a safe and enjoyable trip.", "score": 17.397046218763844, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "The unification of Italy was started by nationalism and was ultimately brought about and won through nationalism.
Italy had been broken into many states by the Congress of Vienna in 1815. From 1815 through 1848 the Italian people grew restless; they no longer wanted to live under foreign rulers. Out of the discontent of the Italian people and the years of restlessness, three intelligent and idealistic leaders appeared before them: Giuseppe Mazzini, Camillo di Cavour, and Giuseppe Garibaldi.
The German Unification. This unification would achieve the same result as Italy's, reaching national unity in the middle of the 1800's. Two larger states, the Austro-Hungarian Empire and Prussia, dominated the rest. Nationalism unified Prussia, while rival nationalisms tore at Austro-Hungary. Prussia was also the most powerful at that time. Berlin rioters had badly frightened the Prussian king. In 1861 Wilhelm I tried to double the size of the Prussian army, but the parliament rejected the plan by refusing him the money to do it.
Wilhelm chose a conservative Junker named Otto von Bismarck, who soon proved a master of realpolitik. Acting without the parliament's consent, Bismarck declared he would rule without a legally approved budget. Germany soon expanded, and Bismarck took Austria out of the picture. The Franco-Prussian War began in 1870 between France and Prussia; food became so scarce that people began eating sawdust, leather, and rats.
The war ended and King Wilhelm I of Prussia was crowned Kaiser.", "score": 17.397046218763844, "rank": 74}, {"document_id": "doc-::chunk-22", "d_text": "Germany has not yet successfully integrated the foreigners already on its soil: archaic immigration laws make it difficult to became a German citizen, and xenophobic attitudes of many Germans often make foreign residents, even those born and raised in the country and speaking perfect German, feel unwanted. In time, demographic realities may cause Germans to view more favorably the permanent presence of a substantial non-German population and lead them to adopt more liberal notions of citizenship.\nUnification and the ending of the Cold War have meant that Germany must adjust itself to a new international environment. The disastrous failures of German foreign policy in the first half of the twentieth century have caused Germans to approach this challenge warily. Until the demise of the Soviet Union, Germans could enjoy the certainties of the Cold War, both they and their neighbors secure in the knowledge that the superpowers would contain any possible German aggression.\nThroughout the postwar era, West Germany was a model citizen of the community of nations, content to be the most devoted participant in the movement toward Europe’s economic and social unification. West German politicians shared the fears of their foreign neighbors of a resurgent, aggressive Germany and sought to ensure their country’s containment by embedding it in international organizations. In the mid-1950s, for example, West Germany rearmed, but as a member of the North Atlantic Treaty Organization.\nSince the end of the Cold War, however, united Germany has occupied an exposed position in Central Europe, with settled, secure neighbors in the west and unpredictable and insecure neighbors to the east. 
Because of this exposure, German policy makers wish to extend the European Union and NATO eastward, at a minimum bringing Poland, the Czech Republic, and Hungary into both organizations. In the German view, these countries could serve as a buffer between Germany and uncertain developments in Russia and other members of the former Soviet Union. At the same time as this so-called widening of West European institutions is being undertaken, Germany is working for their deepening by pressing for increased European unity. As of mid-1996, Helmut Kohl remained the continent’s most important advocate of realizing a common European currency through the European Monetary Union by the turn of the century. However unrealistic this timetable may prove to be, in the postwar era Germany has steadfastly worked to realize German writer Thomas Mann’s ideal of a Europeanized Germany and rejected his nightmare of a Germanized Europe.", "score": 17.397046218763844, "rank": 75}, {"document_id": "doc-::chunk-8", "d_text": "An ethnic Danish minority lives in the north, and a small Slavic minority known as the Sorbs lives in eastern Germany. Due to restrictive German citizenship laws, most “foreigners” do not hold German citizenship even when born and raised in Germany. However, since the German Government undertook citizenship and immigration law reforms in 2002, more foreign residents have had the ability to naturalize.\nGermany has one of the world’s highest levels of education, technological development, and economic productivity. Since the end of World War II, the number of youths entering universities has more than tripled, and the trade and technical schools of the Federal Republic of Germany (F.R.G.) are among the world’s best. Germany is a broadly middle class society. A generous social welfare system provides for universal medical care, unemployment compensation, and other social needs. 
Millions of Germans travel abroad each year.\nWith unification on October 3, 1990, Germany began the major task of bringing the standard of living of Germans in the former German Democratic Republic (G.D.R.) up to that of western Germany. This has been a lengthy and difficult process due to the relative inefficiency of industrial enterprises in the former G.D.R., difficulties in resolving property ownership in eastern Germany, and the inadequate infrastructure and environmental damage that resulted from years of mismanagement under communist rule.\nEconomic uncertainty in eastern Germany is often cited as one factor contributing to extremist violence, primarily from the political right. Confusion about the causes of the current hardships and a need to place blame has found expression in harassment and violence by some Germans directed toward foreigners, particularly non-Europeans. The vast majority of Germans condemn such violence.\nPopulation: 82,282,988 (July 2010 est.)", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-1", "d_text": "While Jews started emigrating to Germany from the Soviet Union in the 1970s, Jewish cultural life in Berlin didn't begin blossoming until after unification 10 years ago.\nA highly visible presence\nOne important factor was that the majestic New Synagogue, located in formerly Communist East Berlin, once again became a highly visible, central locus of Jewish culture in the reunified capital.\nPerformances, readings, and exhibits by Jewish artists are well attended, and Jewish-run cafes and restaurants in the vicinity are always bustling.\nBerlin's newly built Jewish Museum will open next year, and the long-debated Holocaust Memorial has finally been approved for construction. 
Among Germans there is much greater acceptance of and curiosity about things Jewish, says Michael May, managing director of the Jewish Community of Berlin.\nProblems settling in\nWithout the influx of newcomers from the former USSR, who now make up the majority of the city's Jewish community, it is unlikely that such a vibrant cultural life could be possible. Yet the promises of a new beginning in Germany have also been accompanied by the troubles typical of first-generation immigrants.\nEven within the broader German- Jewish community, the language barrier and immigrants' ignorance of tradition have been problematic.\nThe motivation to come to Germany was a \"combination of the existential need to improve one's situation and anti-Semitism [in the former Soviet Union],\" says Mr. May.\nGiven recent political and economic upheaval, Germany appeared to be a bastion of stability. Furthermore, the German government provided a loophole to its strict immigration laws by allowing Jews to enter the country in the same category as war refugees. Still, integration has been difficult.\n\"The truth is that people who had a high social standing in Russia despite their Jewishness, on the whole haven't continued their careers and now have relatively menial jobs,\" says May.\nLanguage is the main hindrance to getting a comparable job. Middle-aged immigrants eagerly attend German classes offered by the Jewish community, but it is improbable that they will ever find work in their previous professions.\nOn the other hand, a common language provides a strong bond among immigrants. \"Most of all, Jews feel like foreigners here,\" says Ilya Levin of the Jewish Community's welfare office. 
\"But they also feel positive about Judaism: the community, the holidays, and all the things Jews didn't have in the Soviet Union.\"", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-2", "d_text": "Young foreigners who have resided eight years in Germany may become citizens if they have attended German schools for six years and apply for citizenship between the ages of sixteen and twenty-three. Usually, however, German citizenship depends not on where one is born (ius solis ) but on the nationality of the father or, since 1974, on the mother (ius sanguinis ). Thus, to many, German citizenship depends on being born German and cannot rightfully be acquired through a legal process. This notion makes it practically impossible for naturalized citizens or their children to be considered German. Some reformers advocate eliminating the concept of German blood in the 1913 law regulating citizenship, but the issue is an emotional one, and such a change has little popular support.\nEthnic Germans have immigrated to Germany since the end of World War II. At first, these immigrants were Germans who had resided in areas that had formerly been German territory. Later, the offspring of German settlers who in previous centuries had settled in areas of Eastern Europe and Russia came to be regarded as ethnic Germans and as such had the right to German citizenship according to Article 116 of the Basic Law. Because they became citizens immediately upon arrival in Germany, ethnic Germans received much financial and social assistance to ease their integration into society. Housing, vocational training, and many other types of assistance, even language training--because many did not know the language of their forebears--were liberally provided.\nWith the gradual opening of the Soviet empire in the 1980s, the numbers of ethnic Germans coming to West Germany swelled. In the mid-1980s, about 40,000 came each year. In 1987 the number doubled and in 1988 doubled again. 
In 1990 nearly 400,000 ethnic Germans came to the Federal Republic. In the 1991-93 period, about 400,000 ethnic Germans settled in Germany. Since January 1993, immigration of ethnic Germans has been limited to 220,000 per year.\nBecause this influx could no longer be managed, especially because of the vast expense of unification, restrictions on the right of ethnic Germans to return to Germany became effective in January 1991. Under the new restrictions, once in Germany ethnic Germans are assigned to certain areas. If they leave these areas, they lose many of their benefits and are treated as if they were foreigners.", "score": 17.15231423181756, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "The history and society of Germany\nEarly history until the middle ages\nEarly German tribes are thought to have originated from a mixture of peoples from the Baltic Sea Coast. They put down roots in the northern part of the European continent in 500 BC and by 100 BC they had spread to the central and southern parts of present-day Germany. Roman attempts at invasion resulted in numerous battles between the two parties and prompted the Romans to build the Limes, a 300 km long fortification of towers and walls, in the first century AD.\nThe Roman Empire eventually fell and the Germanic tribes continued their spread into present day Germany. The conquest of Roman Gaul by Frankish tribes in the late fifth century became a milestone of European history; and the triumphant Franks went on to become the founders of a civilized German state.\nMiddle Ages to German Union\nMedieval Germany was dominated mainly by struggles within the German Empire as well as with the Catholic Church, and so the geographic spread of the country changed continuously over the centuries. Germany’s central Europe location meant that the country was active in international trade as well as manufacturing and therefore prospered during the fourteenth and fifteenth century. 
But in 1618, the Thirty Years' War began, and by its end in 1648 large parts of Germany were devastated. Politically, Germany was even less united than before and a long period of economic decline began.\nWorld War I and World War II\nAt the end of World War I, Germany faced a devastating defeat. A democratic parliament emerged from the ashes, which was at first dominated by many popular parties. Over time, mainly through the Great Depression, the more radical parties became stronger, and finally in 1933 the national socialists gained power when Adolf Hitler was appointed as chancellor. While his party assumed brutal and absolute authority over Germany, their expansionist foreign policy led to the Second World War and the unrivalled horror of the Holocaust. The state only came to an end in 1945 with Germany’s unconditional surrender.\nEast and West Germany\nThe remains of Nazi Germany were divided into four zones, each controlled by the four occupying allied powers – the United States, Britain, France and the Soviets. The capital, Berlin, was divided similarly despite the city lying deep within the Soviet zone.\nDespite original intentions to jointly govern Germany, tensions resulted in the French, British and American zones forming the Federal Republic of Germany (including West Berlin) in 1949.
As Germans grapple with the implications of their growing power, it's instructive to look back on the remarkable, challenging, and sometimes painful journey that led them here. The rebirth of a united Germany was \"the most moving experience in our lifetimes,\" in the words of former West German President Richard von Weizsäcker, but it was also the start of a period during which Germans struggled to adjust—not just to each other but also to their role in Europe and their place in the world order. \"It's like a marriage,\" says Carl von Hohenthal, a former political correspondent for the German daily Die Welt. \"When you start out, you are very much in love, and even though you might have fears and you aren't sure about everything, you decide to do it. But eventually reality sets in.\"\n\"Screaming for Unification\"\nThe fall of the Berlin Wall did not make unification inevitable. Although nominally self-governed, the two German states had for 40 years existed under the controlling authority of the four victorious World War II powers: the U.S., Great Britain, France, and the Soviet Union. As of 1989, hundreds of thousands of foreign troops, including close to half a million members of the Red Army, were still stationed on German soil. In the weeks leading up to Nov. 9, discontent with the Communist regime in East Germany produced huge street demonstrations and a seemingly unstoppable exodus of Germans fleeing to the West. Soviet leader Mikhail Gorbachev had told officials in the GDR that if East German citizens attempted to breach the Berlin Wall, Soviet troops would not intervene to stop them. 
But Gorbachev was adamant that Germany remain divided into separate states, a position shared by British Prime Minister Margaret Thatcher and French President François Mitterrand.", "score": 15.758340881307905, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "Germany\n• The official name is the Federal Republic of Germany.\n• It consists of 16 states.\n• The capital of Germany is Berlin.\n• Joachim Gauck is the President of Germany.\n• The euro is the currency of Germany.\n• The 'loose association of territories that preceded the creation of the modern German state.' (Tim Kirk)\n• After the Treaty of Westphalia (1648), which ended the Thirty Years War, there were still 234 territories and 51 'Imperial Cities'.\n• The Holy Roman Empire was formally dissolved on 6 August 1806, after the defeat of Austria by Napoleon.\n• Annual growth rate: (2011 est.) 2.7%\n• Unemployment rate: (2011 est.) 5.5%\n• Inflation rate: (2011 est.) 2.5%\n• Gross domestic product grew by 2.7%\n• GDP (2011 est.) $3.2 trillion\n• Germany was the world's leading exporter of goods from 2003 to 2008.\n• The official language of Germany is Standard German, with over 95% of the country speaking Standard German or German dialects as their first language. Minority first languages include:\n• Sorbian 0.09%\n• Romani 0.08%\n• Danish 0.06%\n• North Frisian 0.01%\nGermany's television market is the largest in Europe, with some 34 million TV households. Around 90% of German households have cable or satellite TV, and viewers can choose from a variety of free-to-view public and commercial channels. 
Germany is home to some of the world's largest media conglomerates.\n• 64.1 percent of the German population belongs to Christian denominations.\n• 31.4 percent are Roman Catholic, and\n• 32.7 percent are affiliated with Protestantism.\n• Germany has been the home of many famous inventors and engineers, such as Konrad Zuse, who built the first programmable computer, and Albert Einstein, who developed the theory of general relativity, effecting a revolution in physics.\nA popular German saying has the meaning: \"Breakfast like an emperor, lunch like a king, and dinner like a beggar.\" Breakfast is usually a selection of breads and rolls with jam and honey or cold meats and cheese, sometimes accompanied by a boiled egg.\n• Sport forms an integral part of German life.
Hell received her doctorate from the University of Wisconsin in 1989, and is the author of Post-Fascist Fantasies: Psychoanalysts, History, and the Literature of East Germany (Duke UP, 1997). Through a workshop held at the International Institute on November 12, 1999, the Center for European Studies (CES) sought to inform academics, students and interested citizens about the unification of European currency and its wider sociopolitical implications. Readers can find the schedule of the conference and transcripts of the proceedings on CES's web site: www.umich.edu/~iinet/ces and www.umich.edu/~iinet/ces/euroconference\nCf. Ulrike Peters' summary of the conference, \"MauerMüll: An Afterword to the Conference The Unification Effect: The Berlin Republic Ten Years After\" at http://www.lsa.umich.edu/german/gs-n-effect.html", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "Fast facts about Germany before you stop reading!\nIt is not still run by Hitler. It also does not like war anymore. In fact, you could say Germany today downright loathes war. Today Berlin is a top spot for Israelis looking to escape their country’s nasty rising costs of living. So it’s bad at both anti-Semitism and war.\nThat all needs to be said because, even today, the popular image of Germany in the American mind is still inextricably linked with World War II. You can’t even get through a Family Guy episode about Germany without at least one Nazi reference. If and when you can get an American through that, you end up in a stereotype of sausages, beer, and ultra-hardcore pornography.\nBoth are equally inaccurate, but both are equally revealing. 
The first reveals a common mistrust of Germany as a great power; the second shows the edges of the non-confrontational culture German elites have been trying to build since World War II.\nBut first, the cliff notes!\n- Germany has all the makings of a great power.\n- It has two fatal strategic flaws: being in the center of Europe, and being on the Great Northern European Plain, which have historically made it paranoid.\n- Twice Germany sought to fix those flaws during the World Wars, and twice it was defeated through the intervention of the United States.\n- The long division between East and West Germany froze German geopolitics and reshaped German political culture. From being aggressive, German elites learned to be cooperative.\n- That brotherly Europe was possible for a while, but isn't anymore.\n- Now a united Germany's old geopolitical problems are emerging, and Germany's elites are struggling to figure out their place in the new world.\nNow, the beginning of the long stuff, and why Germany is such a fine (but still difficult) place to rule.\nGermany's resource base and climate make it a great place to rule. Its relatively mild climate allows a good growing season; its storied forests provide plenty of timber; its hills and valleys provide coal, uranium, natural gas, and iron. It has all the resources necessary to industrialize on the cheap and build a huge manufacturing base that outclasses most of its neighbors.\nOn three sides, Germany has defensible borders. 
To the north is the Baltic Sea, a tough proposition to cross, especially in winter.", "score": 15.184392438538298, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "GermanyArticle Free Pass\n- Modern economic history: from partition to reunification\n- Agriculture, forestry, and fishing\n- Resources and power\n- Labour and taxation\n- Transportation and telecommunications\n- Government and society\n- Constitutional framework\n- Regional and local government\n- Political process\n- Health and welfare\n- Cultural life\n- Cultural milieu\n- Daily life and social customs\n- The arts\n- Cultural institutions\n- Sports and recreation\n- Media and publishing\n- Ancient history\n- Merovingians and Carolingians\n- Germany from 911 to 1250\n- The 10th and 11th centuries\n- Conrad I\n- The accession of the Saxons\n- The eastern policy of the Saxons\n- Dukes, counts, and advocates\n- The promotion of the German church\n- The Ottonian conquest of Italy and the imperial crown\n- The Salians, the papacy, and the princes, 1024–1125\n- Germany and the Hohenstaufen, 1125–1250\n- The 10th and 11th centuries\n- Germany from 1250 to 1493\n- 1250 to 1378\n- The extinction of the Hohenstaufen dynasty\n- The Great Interregnum\n- The rise of the Habsburgs and Luxembourgs\n- The growth of territorialism under the princes\n- Constitutional conflicts in the 14th century\n- The continued ascendancy of the princes\n- 1378 to 1493\n- Internal strife among cities and princes\n- The Hussite controversy\n- The Habsburgs and the imperial office\n- Developments in the individual states to about 1500\n- German society, economy, and culture in the 14th and 15th centuries\n- 1250 to 1378\n- Germany from 1493 to c. 
1760\n- Reform and Reformation, 1493–1555\n- The confessional age, 1555–1648\n- Territorial states in the age of absolutism\n- Germany from c.", "score": 13.897358463981183, "rank": 84}, {"document_id": "doc-::chunk-53", "d_text": "A few subsequent rebellions by democratic liberals drew some popular support in 1849, but they were easily crushed and their leaders executed or imprisoned. Some of these ardent democrats fled to the United States. Among them was Carl Schurz, who later fought at the Battle of Gettysburg as a Union officer, served one term as a United States senator from Missouri, and was appointed secretary of the interior by United States president Rutherford B. Hayes.\nThe German Confederation was reestablished, and conservatives held the reins of power even more tightly than before. The failure of the 1848 revolutions also meant that Germany was not united as many had hoped. However, some of the liberals’ more practical proposals came to fruition later in the 1850s and 1860s when it was realized that they were essential to economic efficiency. Many commercial restrictions were abolished. The guilds, with their desire to turn back the clock and restore preindustrial conditions, were defeated, and impediments to the free use of capital were reduced. The “hungry forties” gave way to the prosperity of the 1850s as the German economy modernized and laid the foundations for spectacular growth later in the century.\nBismarck and Unification\nLiberal hopes for German unification were not met during the politically turbulent 1848-49 period. A Prussian plan for a smaller union was dropped in late 1850 after Austria threatened Prussia with war. Despite this setback, desire for some kind of German unity, either with or without Austria, grew during the 1850s and 1860s. It was no longer a notion cherished by a few, but had proponents in all social classes. 
An indication of this wider range of support was the change of mind about German nationalism experienced by an obscure Prussian diplomat, Otto von Bismarck. He had been an adamant opponent of German nationalism in the late 1840s. During the 1850s, however, Bismarck had concluded that Prussia would have to harness German nationalism for its own purposes if it were to thrive. He believed too that Prussia’s well-being depended on wresting primacy in Germany from its traditional enemy, Austria.\nIn 1862 King Wilhelm I of Prussia (r. 1858-88) chose Bismarck to serve as his minister president.", "score": 13.897358463981183, "rank": 85}, {"document_id": "doc-::chunk-58", "d_text": "By the late 1870s, Bismarck had to concede victory to the party, which had become stronger through its resistance to the government’s persecution. The party remained important during the Weimar Republic and was the forerunner of the Federal Republic’s moderate conservative parties, the Christian Democratic Union (Christlich Demokratische Union–CDU) and the Christian Social Union (Christlich-Soziale Union–CSU).\nThe Marxist SPD was founded in Gotha in 1875, a fusion of Ferdinand Lassalle’s General German Workers’ Association (formed in 1863), which advocated state socialism, and the Social Democratic Labor Party (formed in 1869), headed by August Bebel and Wilhelm Liebknecht, which aspired to establish a classless communist society. The SPD advocated a mixture of revolution and quiet work within the parliamentary system. The clearest statement of this impossible combination was the Erfurt Program of 1891. 
The former method frightened nearly all Germans to the party’s right, while the latter would build the SPD into the largest party in the Reichstag after the elections of 1912.\nOnce Bismarck gave up his campaign against Germany’s Roman Catholics, whom he had seen for a time as a Vatican-controlled threat to the stability of the empire, he attacked the SPD with a series of antisocialist laws beginning in 1878. A positive aspect of Bismarck’s campaign to contain the SPD was a number of laws passed in the 1880s establishing national health insurance and old-age pensions. Bismarck’s hope was that if workers were protected by the government, they would come to support it and see no need for revolution. Bismarck’s antisocialist campaign, which continued until his dismissal in 1890 by Wilhelm II, severely restricted the activities of the SPD. Ironically, the laws may have inadvertently benefited the SPD by forcing it to work within legal channels. As a result of its sustained activity within the political system, the SPD became a cautious, pragmatic party, which, despite its fiery Marxist rhetoric, won increasing numbers of seats in the Reichstag and achieved some improvements in working and living conditions for Germany’s working class.\nThe Economy and Population Growth\nGermany experienced an economic boom immediately after unification. For the first time, the country was a single economic entity, and old impediments to internal trade were lifted.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-41", "d_text": "Also, Frederick had original taste in military affairs; his army comprised 150 soldiers, with 28 guards on horseback. 
The prince prided himself on being a wrestler, and one day when a yokel threw the prince, the prince set up a great cry, \"I slipped on a cherry stone!\"—and this regardless of the fact that it was not the time of the year for cherries.\nThere was another local ruler, Ludwig Guenther, who was fond of painting horses, and on his death 246-odd horse pictures adorned the walls of his palace.\n* * * * *\n\"Show a German a door and tell him to go through, and he will try to break a hole in the wall.\"\n\"Here, every one lives apart in his own narrow corner, with his own opinions; his wife and children round him; ever suspicious of the Government, as of his neighbor; judging everything from his personal point of view, and never from general grounds.\"\n\"The sentiment of individualism and the necessity for contradiction are developed to an inconceivable degree in the German.\"\nThe problem of directing this intense individualism is the problem of German unity.\n* * * * *\nWith rough manners, blunders, extravagances, absurdities, the hereditary princes continued to sponge on the peasants, generation after generation, till wretchedness spread far over the German lands. They had their châteaux, their dancing girls, their dogs, horses, cats, mistresses and their royal armies.\nThe misery of centuries of oppression existed; petty monarchs exercised powers of life and death.\nThe South German mocked the North German's pronunciation. One set vowed that the \"g\" in \"goose\" is hard, the other proclaimed that the \"g\" is soft. One side went about mumbling with hard \"g's,\" \"A well-baked goose is a gracious gift of God,\" whereupon the other side replied that all the \"g's\" are \"j's,\" that the \"gute ganz\" is really \"jute janz,\" and \"Gottes\" \"Jottes.\" And duels were fought over it.\nNor was this all. 
An intense local pride expressed itself in grotesque dialects, unsoftened by intercourse with the outer world; also, there were outlandish fashions in dress and other domestic affairs.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "Travel Guides: Germany\nGermany is a central-western European country bordered by the North Sea, Denmark and the Baltic Sea in the north, Poland and the Czech Republic in the east, Austria and Switzerland in the south, and Luxembourg, Belgium and the Netherlands in the west.\nThe landscape is rich, with all types of terrain: mountains and highlands (the Alps, the Harz, the Rhenish massif, the Swabian Jura, the Bohemian Forest, the Black Forest) and lowlands (the North German Plain, the Bavarian Plateau, the Danube Valley). Germany is a cultural country with many castles and monasteries and important centers in Dresden, Leipzig and Weimar. Germany has one of the most developed tourism industries, focusing on cultural tourism. The most popular destinations are Berlin, Munich, the Baltic coast, the North Sea coast, Hamburg, and Bremen. Tourism in Germany has evolved gradually since the Second World War. Accommodation ranges from rural guesthouses to luxury hotels. Rural tourism is developed in the south, in Baden-Württemberg, where vineyards and fruit orchards are planted.\nUrban tourism, meanwhile, thrives in centers such as Bremen. Tourists admire architectural monuments such as the Reichstag building or the New Castle. Visitors can also travel the country by sea or river; the main cities in this regard are Rostock (on the Baltic coast) and Frankfurt (on the river Main). The most famous festival is Oktoberfest, a beer festival which gathers thousands of beer lovers every year. Germany and those who visit it now enjoy the fruits of unification, during which major investments were made in infrastructure and services in order to erase the lines and scars of the Second World War and the Cold War division.
Modern Germany has matured and, although it is still suffering from the economic consequences of unification, it is clear that this nation has found a way to express its national identity with assurance.\nCities of the former East, such as Dresden, glow again like jewels from the past. Germany is the product of a long history of division, which led to a remarkable diversity that you can see across the states that make up the Federal Republic. According to stereotypes, the Germans are apathetic, eat only sauerkraut and sausages, drink liters of beer every day, are very disciplined and strict with their children, and have no sense of humor.
Imagine the Governor decided to split your state in half. What would be some of the pros and cons?", "score": 11.600539066098397, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "This Is an Essay Comparing the Italian Unification and the German Unification of 19th Cenutry\nIn the 19th century both Italy and Germany were split into many separate ruling states. The German and Italian unification began with the rising tides of nationalism and liberalism. From nationalism a desire for unification was born. Italian Unification was more complex than German unification.\nItaly had not been a single political unit since the fall of the Western Roman Empire in the 5th century. Italian Unification is referred to in Italian as the Risorgimento. …\nE-pasta adrese, uz kuru nosūtīt darba saiti:\nSaite uz darbu:", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-35", "d_text": "II found the huge sum of 40,000,000 thalers in his fighting uncle's treasure chest, yet within a few years all these splendid advantages were frittered away in idle dalliance and the weak king found himself twenty millions in debt.\nBy the time he died, 1797, Prussia was riding to a fall; and disregarding plain measures for her own safety, she had reached the sad place where the sturdy old Prussian spirit of prudence and independence had become so compromised that Prussia almost deemed it unessential to preserve her own political life!\nThus, within three generations, Prussia repeated the old story of human life, wherein the weak descendant eats up the strong sire's goods. Frederick the Great died Aug. 17th, 1786. Within three years, France struck at the German lands; and within 20 years the old Constitution of the Empire was scoffed at by encircling enemies along the frontiers, led by France, while at home political disputants destroyed National spirit by exciting revolution after revolution. \"Everywhere,\" says Zimmermann, (Germany, p. 
1618), \"one felt the morning breeze of the new dispensation.\" The cry of the people had to be answered, and the common man wanted to know not only \"Why!\" but \"When!\"\nFor the ensuing 85 years clamor, disruption and disunion continued, often accompanied by bloodshed; till through Bismarck's great work over which he toiled for 40-odd years, came the final answer of the Imperial democracy, 1871.\n* * * * *\nIt is to be the labor of years with confusion worse confounded, as we go along. The Feudal system, with which Germany has been for centuries petrified, must be thrown off; the peasant laborers freed in some sort, whether social or political, the absurd restrictions of countless customs houses walling-in each petty principality, must be destroyed. Before a new Germany may emerge, if Germany is to emerge at all, a National faith must be stimulated, fighting blood stirred, wars waged. Then, and then only, may this idea of German Unity, long the puzzling mental preoccupation of the fathers, become a geographical actuality and a political fact.\nThe German peasants' sense of respect for vested authority, even when held by hated kings, made the common people of the various German states almost ox-like in their patience under harsh political conditions.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-2", "d_text": "German culture is shaped by major intellectual and popular currents in Europe. The culture in Germany is quite diverse due to the varied history of its people; Hamburg, for example, offers Labskaus stew.\nOver 80 percent of Germany's land is used for agriculture and forestry.
History of Germany essays: over 180,000 history of Germany essays, term papers, research papers, and book reports are advertised as available for unlimited access. One title is Germany, Hitler, and World War II: Essays in Modern German and World History (edition unstated). Another article deals with the diverse images of Jews in Germany: which roles they take upon themselves and which symbolic roles German society sees in the Jewish community. The author tries to unfold the origins and the meaning of these roles, including Jews as victims and as sensors toward anti-Semitism and right-wing extremism.\nThere is also a 25/25 essay assignment on modern German history, detailing the terror and repression in Germany. The history of Hamburg, Germany has many interesting facets: in the eleventh century it was burned by the king of Poland, and during the twelfth and thirteenth centuries it entered into the Hanseatic League, becoming an economic power due to its proximity to the ocean. Hamburg's Portuguese cemetery counts as one of the most prominent cultural monuments in Hamburg and northern Germany because of the exceptional decoration on the gravestones; the second volume of the scientific book series is The Sephardim in Hamburg: the history of a. For the German History Prizes postgraduate essay prize, the German History Society (GHS), in association with the Royal Historical Society (RHS), will award a prize of £500 to the winner of their annual essay competition. A free essay on Bismarck and the unification of Germany is available at eCheat.com, a free essay community.
History of Germany during World War I: \"Women Garment Workers in Berlin and Hamburg before the First World War,\" in The German Family: Essays on the Social History of the Family in Nineteenth- and Twentieth-Century Germany, edited by Richard J. Evans and W. R. Lee.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "William Urban: A short history of Germany\nEvery Christmas I exchange books with a German friend. He is very good at picking interesting reads. This year’s was “Wie wir wurden, was wir sind” (How we became what we are), with the subtitle and the foreword promising that it would be a short history. Short is not something that German authors excel at, but Heinrich August Winkler did his best. He also had a pleasing style, especially in the early chapters, which could be read by anyone who knows German. By the end the words became longer and the sentences more complex, reflecting the need to explain current issues in the language that politicians and journalists use today. (Germany is where modern scholarly style and jargon originated.)\nHe begins with the Treaty of Westphalia (1648) that ended the Thirty Years War, a conflict between an ambitious emperor and princes fearful of tyranny that is often described as a war between Catholics and Protestants. When the French, Spanish, and Swedes came in, that made it more complicated. A third of the population may have died, and churches went up in flames. If you look into German genealogy, you won’t likely get farther back than this.\nThereafter Germans had an instinctive reaction against civil war, foreign invasion, and radical change. It was no surprise, then, that Enlightenment reforms were led by princes — most importantly Frederick the Great of Prussia — and not intellectuals. The French Revolution was not welcomed, and the most important reforms were made in Prussia in order to drive Napoleon out.\nI found the chapter on the failed Revolution of 1848 most fascinating.
That peaceful effort to create a democratic national state failed miserably because the two goals were incompatible. The politics were too complex to bring all German-speakers into one state, and when radicals demanded too much, too soon, moderates turned to conservatives. America benefitted by the flood of educated men and women looking for a freer homeland.\nThe text moved quickly to 1933, when Hitler became Chancellor. This disaster was not foreordained, though the failure of the moderate parties and fear of a Stalin-like terror under the communists made many people look for a “strong man.” The survival of the Weimar Republic was unlikely because its birth was tied to the defeat in the First World War and it had failed to deal effectively with the Great Depression. Political miscalculations did the rest.", "score": 9.098748232655902, "rank": 93}, {"document_id": "doc-::chunk-1", "d_text": "The Germans returned the favor: late '60s and early '70s German rock (called \"Kraut Rock\" by the British music press), led by bands like Neu!, Amon Düül II, and early Kraftwerk, had a powerful impact on Post Punk, New Wave, Electronic, and Industrial music. West Berlin in particular was famous for both its rollicking club scene and its Hansa-By-The-Wall (yes, that Wall) recording studio, which was a magnet for musicians German and non-German alike. David Bowie spent most of his most productive and creative period in Berlin (termed, fittingly, his \"Berlin period\"), inspired by the German scene. Iggy Pop was similarly inspired, recording part of his debut album and all of Lust for Life (you know, the famous one) at Hansa-By-The-Wall.\nOn another cultural note, the West Germans also managed to create a brilliant national soccer team, winning The World Cup in 1954, 1974, and 1990 (just before reunification). 
The win in 1954, against Hungary, was a massive boost to West German pride (which until then had been rather shaky), and was seen as a moral victory for the West over the Soviet bloc.\nOlder sources will sometimes refer to this place as simply \"Germany\", possibly due to the feeling that this was the real Germany – the other one was just Commie Land with Germans. Bonn itself felt that for a while, refusing to recognise any country bar the USSR that had any relations with the GDR until Willy Brandt's Neue Ostpolitik of the 1970s. The two Germanies recognised each other (but not completely: for example, no embassies, but permanent representatives [Ständige Vertretung] – this would become important in 1990) and joined the United Nations together.\nIn the early 1980s, it was a site for the US Cruise and Pershing II deployments, something that caused considerable anxiety in a country that would have had nukes from both sides land on it in a nuclear war. In 1986, Chancellor Helmut Kohl announced the unilateral removal of those missiles – a year later the entire lot were got rid of under the INF Treaty.\nThe German Basic Law was aimed at the reunification of Germany.", "score": 8.086131989696522, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "Germany was founded in 843 A.D. when Charlemagne's empire was divided by the Treaty of Verdun and given to his grandsons. Louis the German ruled the German portion, giving it its name. The other two grandsons ruled France and a portion of land between Germany and France called Lotharingia.\nGermany, the new country, had trouble establishing its borders. On the outside, Hungarians and Norsemen invaded, and on the inside, five different tribes divided the country into five duchies. These duchies were Franconia, Lorraine, Bavaria, Saxony and Swabia.
Furthermore, France and Germany both vied for control of Lotharingia.\nBy 911, Louis the German's line had died out, and the dukes of the country's five duchies elected a new ruler from among their number. One of these rulers, Otto the Great, expanded Germany's borders in 955 by conquering the Hungarians and taking their land, the East March, which is present-day Austria. He also became the first Holy Roman Emperor and was given northern Italy.\nGermany's borders expanded in the 12th century to the Baltic Sea, the Wends and western Pomerania. Meanwhile, the native Prussians were converted to Christianity. In the 1500s, Charles V ruled not only Germany but also the Habsburg lands, Spain, Italy and the Burgundian lands.\nHowever, war resulted in a major decline of imperial power and authority and gave rise to Prussia. This small country eventually expanded and took over many German lands and territory, leaving Germany a shell of its former self. Napoleon conquered the German states in the early 1800s and broke them up. Eventually the German states were reunited into the German Empire. The country was briefly broken up into East and West Germany, but they were reunited in 1990.", "score": 8.086131989696522, "rank": 95}, {"document_id": "doc-::chunk-1", "d_text": "After the ruling family became extinct upon the death of Prince Bernhard VI in 1468, Anhalt-Bernburg was inherited by Prince George I of Anhalt-Dessau.\nWith Anhalt-Dessau it was inherited by Prince Joachim Ernest of Anhalt-Zerbst in 1561, who unified all Anhalt lands under his rule in 1570. Re-united Anhalt was again divided in 1603 among Prince Joachim Ernest's sons into the lines of Anhalt-Dessau, Anhalt-Köthen, Anhalt-Plötzkau, Anhalt-Bernburg and Anhalt-Zerbst. His second son Prince Christian I took his residence at Bernburg. Christian's younger son Frederick established the separate Principality of Anhalt-Harzgerode in 1635, which existed until 1709.
Prince Victor Amadeus of Anhalt-Bernburg inherited Anhalt-Plötzkau in 1665. Upon his death in 1718 his lands were further divided and the Principality of Anhalt-Zeitz-Hoym was created for his second son Lebrecht, which was reunited with Anhalt-Bernburg in 1812. In 1803 Prince Alexius Frederick Christian of Anhalt-Bernburg was elevated to the rank of a duke by Emperor Francis II of Habsburg. His son Duke Alexander Karl, however, died without issue in 1863, whereafter Anhalt-Bernburg was inherited by Leopold IV, Duke of Anhalt-Dessau, re-uniting all Anhalt lands under his rule. Germany, officially the Federal Republic of Germany, is a federal parliamentary republic in western-central Europe. It includes 16 constituent states and covers an area of 357,021 square kilometres (137,847 sq mi) with a largely temperate seasonal climate. Its capital and largest city is Berlin. With 81 million inhabitants, Germany is the most populous member state in the European Union.\nAfter the United States, it is the second most popular migration destination in the world. Various Germanic tribes have occupied northern Germany since classical antiquity. A region named Germania was documented before 100 CE. During the Migration Period the Germanic tribes expanded southward. Beginning in the 10th century, German territories formed a central part of the Holy Roman Empire.
The Empire was never able to develop into a centralized state.\nBeginning in 1517 with Martin Luther’s posting of his 95 Theses on the door of the Wittenberg Castle church, the German-speaking territories bore the brunt of the pan-European struggles unleashed by the Reformation. The leaders of the German kingdoms and principalities chose sides, leading to a split of the Empire into Protestant and Catholic regions, with the Protestant strongholds mostly in the North and East, the Catholic in the South and West. The split along confessional lines also laid the groundwork for the later development of the most powerful German states–Prussia and Austria–as the Prussian Hohenzollern line adopted Protestantism and the Hapsburgs remained Catholic.\nThe tension culminated in the 30 Years War (1618-1648), a combination of wars within the Empire and between outside European states that were fought on German land. These wars, which ended in a rough stalemate, devastated the German people and economy, definitively strengthened the rule of the various German rulers at the cost of the (Habsburg) Emperor (though Habsburg Austria remained the dominant single German entity within the Empire), and established the continued presence of both Catholics and Protestants in German territories.\nThe Rise of Prussia\nThe 18th and 19th Centuries were marked by the rise of Prussia as the second powerful, dominant state in the German-speaking territories alongside Austria, and Austrian-Prussian rivalry became the dominant political factor in German affairs. Successive Prussian kings succeeded in modernizing, centralizing, and expanding the Prussian state, creating a modern bureaucracy and the Continent’s strongest military. Despite Prussia’s emphasis on militarism and authority, Prussia also became a center of the German Enlightenment and was known for its religious tolerance, with its western regions being predominantly Catholic and Jews being granted complete legal equality by 1812. 
After humiliating losses to Napoleon’s armies, Prussia embarked on a series of administrative, military, economic, and education reforms that eventually succeeded in turning Prussia into the Continent’s strongest state.", "score": 8.086131989696522, "rank": 97}]} {"qid": 44, "question_text": "When did Lyons become an important artistic and commercial center in France, and what triggered this development?", "rank": [{"document_id": "doc-::chunk-6", "d_text": "Between 1500 and 1530 the Master of the Entry of Francis I was rivaled in Lyons by only one other artist, Guillaume II Leroy, a painter, illuminator, and bookseller who collaborated with our artist and was perhaps trained in the same workshop. The rise in the production of manuscripts in Lyons at the beginning of the 16th century is paralleled most notably with the appearance of a printing press there in 1473. Commerce in Lyons also flourished with the arrival of the French court during the Italian Wars in 1494, which made Lyons the second center of the Kingdom of France and a veritable hub for commercial and artistic exchange.\nMore than thirty illuminated manuscripts are now recognized as the work of the Master of the Entry of Francis I. In addition to fifteen Books of Hours, including a luxurious example for Philibert de Vitry (Geneva, BPU, MS Lat. 367), the artist is credited with ten office books, which include a Missal for the church of the Order of Saint John of Rhodes illuminated for Charles Aleman de la Rochechenard, called the Rhodes Missal (London, Museum of the Order of Saint John); a Missal for the Abbey of Saint-Claude in Jura (Paris, Assemblée national, MS 10); the Pontifical of Bishop Louis Guillard d'Épichelière, almoner to Francis I (Paris, BnF, MS Lat. 955), and a Benedictional-Evangeliary for the use of Saint-Nizier of Lyons (Lyon, BM, MS 5136). 
The Master of the Entry of Francis I also decorated numerous instructive, literary, and historical works, including a Kalender of Shepherds (Cambridge, Fitzwilliam Museum, MS 167), a Trésor de sapience (Chantilly, Musée Condé, MS 147), and the Pas d'armes de Sandricourt (Paris, BnF, Arsenal, MS 3958).\nThe color palette and the execution of the classically-inspired architectural frames that surround the textual incipits and accompany the miniatures reveal the artist's training in the workshop of the Master of Alarmes de Mars, active in Lyons from the 1480s until the early 1510s.", "score": 52.85526117501025, "rank": 1}, {"document_id": "doc-::chunk-2", "d_text": "Lyon became very prosperous, reaching its peak during the Renaissance. In the 15th century, King Charles VIII allowed Lyon the right to hold trade fairs four times per year. Caravans of goods from the north and east traveled to these fairs, which lasted for several weeks. Bankers from Florence and wealthy merchants settled in Lyon. They built luxury mansions and stores.\nBy the end of the 16th century, King François allowed Lyon the right to produce silk to compete with Venice. Silk production became a massive industry and a significant employer all through the 19th century.\nWith the introduction of Christianity in 177 AD, the earliest Christian community (and the earliest persecutions) was in Lyon. In 1032, it was incorporated into the Holy Roman Empire and finally incorporated into the Kingdom of France in 1312.\nYou've had a snapshot of where Lyon is, its geography and some historical perspective. Remember Lyon's geography, 2 Rivers, 2 Hills and a Peninsula. Its geography is its history.\nOn the west bank of the Saone is the historic city of Lyon or Vieux Lyon. The Vieux, all 1000 plus acres of it, is the largest Renaissance town in France and the second largest in Europe after Venice. It was the first area to be declared a cultural site in France in 1954.
In 1998, Vieux Lyon, together with the hills of Fourviere and Croix-Rousse, and the Presqu'ile became UNESCO World Heritage sites.\nThere are three sections to the Vieux: Saint-Jean, Saint-Paul, and Saint-Georges. You will notice that most names on the Vieux will be saints or martyrs.\nThe Saint-Jean section dates back to the Middle Ages. The prominent part of this neighborhood is the Cathedrale de Saint-Jean (St. John the Baptist) and the square around it. The cathedral is the seat of the Archbishop of Lyon, also the \"Primat des Gaules.\" The Primat or the First of Gaul designation refers to Lyon being the first diocese in France.\nDedicated to St. John the Baptist, it took about 300 years (1180-1476) to construct this Romanesque and Gothic style church. The church is a significant feature in the famous Festival of Lights held every year the week of December 8. More on this later.", "score": 50.138645158937784, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "Located at the foot of the Fourvière hill, old Lyon is one of the largest existing Medieval and Renaissance quarters. Its architecture is linked to the Florentines who accompanied Catherine de Médicis for her marriage to the son of François Premier. It was during this era that balconies, interior courtyards, colored facades and very southern type arcades were created. Old Lyon also represents the soul of the city with neighborhoods like Saint-Georges, where the silk weavers originated, Saint-Jean, with its alleyways and its pedestrian streets, or Saint-Paul, immortalized by the Bertrand Tavernier film, “The Clockmaker of St. Paul” starring Philippe Noiret. Old Lyon was also the first place in France to undergo a programmed urban renewal operation, the objective of which was to upgrade and restore the old centers of big cities in order to protect the heritage represented by these quarters of the city.
The unique aspect of Old Lyon was recently recognized by UNESCO, which classified it as a World Heritage Site.\nIn the heart of this quarter steeped in history, La Cour des Loges offers 61 rooms and exceptional facilities and services. This five-star hotel combines four 14th century buildings originally used to house Italian merchants attracted by the famous Lyon Fair. Adapting a grandiose décor to offer contemporary harmony and comfort was no mean feat. The apartments, suites and duplexes symbolize the spirit of one of Europe’s most unique sites. The Petite Mezzanine for example has a very special aura with its raised bathtub facing the transom window and the mezzanine bed under a French style ceiling. The Chambre Classique is a blend of contemporary and Renaissance art with velvet fabrics, wrought iron and sculpted wood. The Chambre Supérieure possesses centuries old woodwork, heavy curtains, large mirrors, and both contemporary and Renaissance furniture, all facing the interior courtyards.\nAfter strolling around the pedestrian streets and getting lost in the numerous alleyways, guests can take advantage of the peace and quiet offered by their hotel. The bar, with its two cozy retro-designed salons, offers a highly refined menu. The Café Epicerie, a name which recalls the spice trade of the 16th century, offers a variety of delicious, original and light dishes. Les Loges is the place to go for excellent cuisine in an exceptional Italian Renaissance décor.", "score": 48.040754176390465, "rank": 3}, {"document_id": "doc-::chunk-8", "d_text": "Trésors enluminés des Musées de France: Pays de la Loire et Centre, Angers, 2013.\nDelaunay, I. “Livres d'heures de commande et d’étal: Quelques exemples choisis dans la librairie parisienne 1480–1500,” L'artiste et le commanditaire aux derniers siècles du Moyen Age (XIIIe-XVIe siècle), ed. F. Joubert, Paris, Cultures et civilisations médiévales, 24, 2001, pp. 249-270.\nElsig, F.
Painting in France in the 15th century, Milan, 2004.\nElsig, F. Peindre en France à la Renaissance, I: Les courants stylistiques au temps de Louis XII et de François Ier, Milan, 2011.\nMaxence, H. “Production et commande de manuscrits enluminés à Lyon à la fin du Moyen Âge et à la Renaissance,” in Arts et humanisme: Lyon Renaissance, dir. Ludmila Virassamynaïken, Paris, 2015, pp. 274-279.\nHindman, S. and A. Bergeron-Foote. France 1500: the Pictorial arts at the Dawn of the Renaissance, Paris, 2010.\nHofmann, M. Jean Poyer: Das Gesamtwerk, Turnhout, 2004.\nLalou, E., C. Rabel, and L. Holz, “Dedens mon livre de pensee:” de Grégoire de Tours à Charles d’Orleans: une histoire de livre médiéval en région Centre, Paris, 1997.\nOlivier, E., Hermal, G., and de Roton, R. Manuel de l'amateur de reliures armoriées françaises, 30 vols, Paris, 1924-1935.\nPlummer, J. The Last Flowering: French Painting in Manuscripts, 1420-1530 from American Collections, New York, 1982.", "score": 45.34815809128959, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "Lyon is a city that is located in the Rhone-Alpes region of France. It is located about four hundred and seventy kilometers from Paris and about three hundred and twenty kilometers from Marseille. The city covers an area of over forty-seven square kilometers and has a population of over four hundred and seventy thousand people. While not being the political capital of France, Lyon is considered to be both the business and gastronomical capital of the country. Lyon is also a major industrial center in France and produces various chemical, software and pharmaceutical products.\nThe city was originally established as a Roman colony by Munatius Plancus in the first century BC and was situated on Fourviere Hill. Its name at the time was Lugdunum, which meant \"Hill of the crows\".
The city's position on the natural corridor between northern and southern France made it a natural center of trade and communication. As such it was the beginning point of the vast Roman network of roads throughout Gaul, and it was quickly named the capital of Gaul. During this time two notable Roman emperors were born here, the emperors Caracalla and Claudius. During the nineteenth century the city would become an industrial center.\nThe twentieth century saw the urban development of the city expand. During World War II, this city would be the center and the driving force of the French Resistance against the German invaders. After the end of the war, Lyon developed a large transportation system and added various tourist facilities, such as museums and hotels. During the 1980s, the infrastructure of the city continued to expand and improve. The city especially paid attention to its historical monuments and started working on renovating and preserving them.\nToday, Lyon is the second wealthiest city in France, after Paris, and has a Gross Domestic Product in excess of fifty-two billion euros. The city has also been the headquarters of many national and international companies. Some of these include Euronews, BioMerieux, Compagnie Nationale du Rhone and GL Events. A growing part of Lyon's economy is its tourism industry. The city enjoys over one billion euros in tourism revenue every year, and hotel bookings from tourists account for three million hotel stays over the course of a year. This city is also known for its hostels, and Lyon is ranked in first place in that category.\nOne of the most prominent tourist attractions in the city is the Place Bellecour.", "score": 44.7216129427968, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "The Tourism Office of Lyon reckons it will take you two days to fully appreciate all the facets of this historic and modern city.
It is a crossroads between Paris and Marseille, but also between Geneva and the Alps and the Massif Central.\nLugdunum, as it was called by the Romans, or Lugodunon by the Gauls, means the Fort of Light. The city has been there a long time and has transformed itself many a time. It was the capital of Gaul for centuries.\nSilk weavers are (mostly...) gone and have been replaced by heavy and high tech industries alike.\nThe TGV and the new airport and industrial complex of Isle d'Abeau on the eastern outskirts of the city have revitalised Lyon, which is the second biggest agglomeration of France with 1.5 million people generating over 60 billion euros. It is famous for its culture - Opera by Jean Nouvel - and cultural heritage - many historical monuments and churches - as well as its gastronomy, with one of its most famous chefs being Paul Bocuse. Lyon is also very close to two wine regions, the Cotes-du-Rhone we have discussed to its south and the world-famous Beaujolais to the north. Burgundy is not that far away either. The Bresse region to the east is famous for producing the best chickens - see Les Saveurs du Palais film for details. The Charolais to the north is known for its beef, and obviously there are all the fruits and vegetables grown in the Rhone Valley and further afield in Provence.\nPaul Bocuse - L'Auberge du Pont de Collonges - 3 Michelin stars since 1965!!!\nI have never had the pleasure of dining chez Paul Bocuse, but this blog being squarely focused on food and wine, I cannot not mention this world renowned institution. You will have to reserve months in advance and be prepared for spending at least 300 euros per person for the degustation menu and a decent bottle of wine!\nOur own Tetsuya in Sydney charges only $210 per person, so you should rush there and save on the flights...lol!\nAnd a recent review of it on TripAdvisor:\n\"I had been looking at the menu for months before going and had high expectations.
From the moment we arrived until we pulled out of the parking lot, every expectation was exceeded! Bocuse provided the dining experience of a lifetime."

Lyon is a city in east-central France. It is the third largest French city, the first being Paris and the second Marseille. It is a major centre of business, situated between Paris and Marseille, and has a reputation as the French capital of gastronomy and a significant role in the history of cinema.

Together with its suburbs and satellite towns, Lyon forms the second largest metropolitan area in France after Paris, with 1,783,400 inhabitants at the 2007 estimate, and approximately the 20th to 25th largest metropolitan area of Western Europe.

Lyon is the préfecture (capital) of the Rhône département, and also the capital of the Rhône-Alpes région.

The city gave its name to the Lyonnais province, of which it was the capital. Today the region around Lyon is still known as Lyonnais (French: le Lyonnais), or sometimes even as the Lyonnaise Region (French: Région Lyonnaise). Lyonnaise Region is an unofficial, popular name, not to be confused with the administrative région of Rhône-Alpes, which is much larger than the Lyonnaise Region.

Lyon is known as the silk capital of the world, famous for its silk and textiles, and is a center for fashion.

Lyon is also the international headquarters of Interpol and EuroNews.

Year of foundation: 1872
Approx. number of students: approx. 3,300
MCI partner since: 2018

Lyon is the third-largest city and second-largest urban area of France. Lyon had a population of 513,275 in 2015. It is the capital of the region of Auvergne-Rhône-Alpes. The city is known for its cuisine and gastronomy and its historical and architectural landmarks, and part of it is registered as a UNESCO World Heritage Site.
Economically, Lyon is a major centre for banking, as well as for the chemical, pharmaceutical, and biotech industries.

Emlyon business school is a leading French business school. It was founded in Lyon, France in 1872 by the local business community, and is affiliated to the Lyon Chamber of Commerce and Industry. It has triple accreditation: EQUIS by the EFMD, AMBA, and the AACSB. Emlyon currently has 5 campuses: Lyon, Saint-Étienne, Shanghai, Casablanca and Paris. www.em-lyon.com

Lyon is flanked by two hills - the Fourvière and the Croix-Rousse - and lies between two rivers - the Rhone and the Saone. It is the second largest metropolitan city in France after Paris. With a history in textile manufacturing, the city is also known for its gastronomy. About 500 hectares of Lyon have been designated a World Heritage Site by UNESCO. It was known as the City of Lights in the olden days, and Lyon still lives up to its moniker by lighting up all its monuments and buildings every evening after sunset.

Just climb up the Fourvière hill and you will be rewarded with a breathtaking sight of the city. The Notre Dame Basilica is another fantastic site on the Fourvière hill. Another cathedral which is a must-visit is St-Jean. Built around the 12th century, its architecture represents the metamorphosis of Old Europe from Romanesque to Gothic. There is also the Musée des Beaux-Arts, which has a large collection of ancient and medieval art from across the world, including Egyptian pieces and works by Monet, Picasso and others. Pay your respects to the ancient silk manufacturing industry of Lyon by visiting the Musée des Tissus. Apart from fabrics dating back around 2,000 years, it also has on display the partridge-motif brocade of Marie Antoinette's bedchamber at Versailles.
And if you are hungry, do not hesitate to visit any of the "bouchons" for a sample of some local, tasty French cuisine. The best time to visit the city is between November and January.
- Musee des Beaux-Arts
- Vieux Lyon (neighbourhoods of Saint Jean, Saint Georges and Saint Paul)
- Notre Dame de Fourviere Basilica & Fourviere neighbourhood
- Musée des Tissus (Fabrics Museum)
- Musee Gadagne (Lyon History and world marionettes)
- Cathedral St. Jean (St John Cathedral)
- Musée de la Résistance (Museum of the Resistance)
- Musee de l'Imprimerie de Lyon (Printing Museum)
- Musée de la civilization Gallo-Romaine (Museum of the Gallo-Roman Civilization)
- Amphitheatre Gallo-Romain (Gallo-Roman Amphitheatre)
- Quartier Croix Rousse (neighbourhood)
- And More...

Discover Lyon and the district of Ainay

Hotel Tourism Lyon 2

Lyon, a UNESCO heritage site, has developed its tourist appeal by scheduling multiple and varied artistic and cultural events.

Lyon is an example of evolution, with its Renaissance quarters and classical Haussmann-style districts, and of innovation, with the development of the Confluence.

Its neighbourhoods, each with a present or past story to tell (Ainay, Brotteaux, Guillotière, Croix Rousse...), its squares (Place Bellecour, a former military parade ground; the Bartholdi fountain; Place de la République and its fountains; Place du Maréchal Lyautey with its boules players), its old streets and alleyways (the Saint-Georges, Saint-Jean and Saint-Paul quarters, and the slopes of the Croix-Rousse with their steep climb) and its riverbanks make Lyon a city where one can take the time to explore.
You can also enjoy its many markets (Quai Saint-Antoine, Place Carnot, the Rhone quay, the Croix-Rousse market) and experience a city on a human scale.

Many museums let you suit your mood, whether classical, contemporary or themed (Gadagne, fabrics and decorative arts, Museum of the Resistance).

As for restaurants, Lyon being the capital of gastronomy, they take you from gourmet culinary excellence to bistronomic and neighbourhood dining.

We offer below a non-exhaustive but representative list of the cultural richness of LYON. We can advise you, help you and guide you to the right people to make your stay a moment of pleasure and relaxation.

Excursions in connection with the tourist office of Lyon
Visit the Beaujolais - every Wednesday and Saturday
Visit the alleyways
Visit the iconic neighbourhoods
Festival of Lights
Biennial of Contemporary Art
Quai du Polar
Fairs and festivals
Festival Lumière (Institut Lumière)
And nearby, Jazz à Vienne
Museum of Fine Arts
Contemporary Art Museums
Museum of Confluences
Museum of Decorative Arts and Fabrics
Gadagne Museum (Musée de Lyon)
Museum of the Resistance
Gallo-Roman Museum
Miniatures and Cinema Museum

With around 208,000 visitors to SIRHA 2017, the professional hospitality and catering show, and 7,400 attendees at ECTRIMS in 2012, an international congress on multiple sclerosis, the agglomeration is able to host the largest international events. Since an area of 500 ha in the heart of Greater Lyon was classified as a World Heritage Site by Unesco in 1996, Lyon has been working on its image and perfecting its assets. This change is reflected today by growth in tourist numbers, in particular at weekends and in the summer season.
With 22 % of visitors coming from abroad and an average stay of 3.7 days, Lyon has become a city-break destination for lovers of gastronomy and heritage, as mentioned in the Lyon tourist map.

The Lyon attractions map shows the main monuments, museums and parks of Lyon. This tourist places map of Lyon will allow you to easily plan your visits to the tourist attractions of Lyon in Auvergne-Rhône-Alpes - France. The Lyon attractions map is downloadable in PDF, printable and free.

The impressive cultural heritage of Lyon is evidenced in the Musée des Beaux-Arts, considered the next best fine arts museum in France after the Louvre. At the Place des Terreaux near the Hôtel de Ville (Town Hall), the museum occupies the 17th-century Palais Saint-Pierre, a former Benedictine convent. Lyon's atmospheric Quartier Saint-Jean, as shown in the Lyon attractions map, is the place to discover the old-world ambience of Vieux Lyon. This medieval quarter north of the cathedral is filled with narrow cobblestone lanes and quiet little courtyards. Lyon stands on the site of the ancient Roman city called Lugdunum, founded in 43 BC, which was the capital of Gaul. The superb Museum of Archaeology displays Gallo-Roman-era objects including vases, gravestones, mosaics, statues, coins, and ceramics. The antiquities displayed are from onsite digs (from the city of Lugdunum) as well as the nearby Roman archaeological sites of Saint-Romain-en-Gal and Vienne.

While visiting Lyon, one should definitely indulge in the famous regional cuisine.
The hearty local gastronomy features satisfying dishes such as steak, lamb stew, roast chicken with morels, and poached eggs in red wine sauce.

Lyon, France's second largest city, located in the historical Rhône-Alpes region, is a city of culture, heritage, gastronomy and much more besides.

Dating back to ancient Roman times, Lyon has earned a place on the UNESCO World Heritage list. The impressive cultural heritage of Lyon is evidenced in the Musée des Beaux-Arts, considered the next-best fine arts museum in France after the Louvre.

Lyon is also an excellent conference and seminar destination, with excellent conference facilities and a broad cross-section of hotels to match. Evening entertaining is always going to be a highlight too, with some of France's best restaurants located in the city.

With Haute Pursuit's head office based in Chamonix Mont Blanc in the French Alps, opening an office in Lyon was the logical next step, enabling us to service your team building, conference, seminar and off-site requirements to the high standard all of our customers require.

As we open our doors in Lyon we have three main propositions for you:

Gamification - using modern technology to increase collaboration and interaction between staff and clients alike. Products include city rallyes, gastronomic team building games, food tastings and quizzes, all in exciting Lyonnaise locations.

Wellness retreats - yoga and mindfulness, nutritional advice and cooking classes.

Artistic discovery - drawing, painting and creative challenges. Individual and collaborative team experiences.

Our new Lyon office will be managed by Nathalie Eeses, a long-term Haute Pursuit staff member and Lyon local, who really knows the city inside out and is passionate about putting Lyon on the map in the corporate events world!
Initially we will be targeting businesses based in Lyon with our team building propositions, but we will be expanding our offer to multinational companies looking to do an offsite seminar or conference in a relatively unknown destination – the wonderfully cultural city of Lyon.

Lyon, France's second-most important city after Paris, is surprisingly undiscovered. Although it doesn't often make it onto tourist itineraries, many cultural treasures await those who take the time to explore the city, which boasts France's oldest ancient ruins, medieval quarters, and fine Renaissance houses.

If you are interested in finding out how we can help you organise your next corporate event in the city of Lyon, get in touch with us today with your requirements and let's see what we can organise for you.

Many of the great chefs of Lyon, like Paul Bocuse, started in the kitchen of one of the storied bouchons. Paul Bocuse became the first celebrity chef in the 1970s and 1980s. French chefs of his generation started preparing lighter dishes, with fewer sauces and newer techniques. French "nouvelle cuisine" emerged, supported by restaurant guides such as Michelin and Gault et Millau.

Lyon and Paul Bocuse are almost synonymous. He opened many brasseries and bistros that still exist in Lyon. There is also a cooking school bearing his name in Lyon, where you can attend or take specialized classes, and you can lunch or dine on meals prepared and served by culinary students.

Paul Bocuse passed away in 2018.

As you go around France, you will find that the popular fresh open-air markets are gravitating towards enclosed food halls. But in Lyon, a large, genuinely "live" market is still held six days a week. At the Quai Saint-Antoine, on the banks of the Saone, there is a market from Tuesday to Sunday, 8 AM to 2 PM. There are fewer vendors on weekdays but a full complement from Friday to Sunday.
On Sundays, there is also an arts and crafts market on the opposite bank.

You can get fresh meats, seafood, fruits, and vegetables. There will be cheeses, wines, bread, olives, nuts, tarts, cooked meals, all types of sausages, roast chicken and roast pork, and flowers.

Our favorite Sunday lunch was sausages, roast pork, or a rotisserie chicken from the market with some artisan baguettes and olives! Very popular around the Quai are outdoor cafes offering oysters and seafood.

For the distinctive gourmet taste (and there are many), there is Les Halles de Paul Bocuse. Les Halles is a specialty food hall that features local vendors with a reputation for quality foods and wines. Some of these vendors have been in business for decades or a century. Offerings are very regional: local cheeses, seafood, meats, pastries. There are also restaurants there for a meal at lunchtime or dinner. Try the sweetest oysters you will ever taste, or have a fig stuffed with foie gras.

5 Reasons Why You Should Choose Lyon Over Paris For Your MBA

While Paris has long been considered the business hub of France, Lyon offers MBA students an attractive alternative to big city life in the French capital.

When talking about France's growing business economy, it's easy to focus on Paris. With a population of 2.1 million, and an economy worth over $700 billion, Paris is often seen as the focal point of mainland Europe's market.

But the big city life isn't for everyone. Less than 300 miles southeast of the French capital lies the historic city of Lyon.

A burgeoning merchant town during the 15th-century silk trade, and the birthplace of cinema, Lyon is nowadays the second biggest French city. It is becoming known for its tech scene and its pharmaceuticals and biotech industries, as well as being an exciting hotspot for startups.

For MBA students, Lyon is an increasingly attractive offer.
We spoke to MBA students from EMLYON Business School, Lyon's leading business school, to come up with five reasons why you should choose Lyon:

1. A relaxed alternative to city life

With a quarter of Paris' population, Lyon offers a considerably smaller and more relaxed pace of life than the French capital.

Having moved from Rio de Janeiro, Pedro Souza (pictured) was looking to escape the city. The city's relative tranquility and lack of congestion was a big factor when he enrolled at EMLYON Business School in Lyon.

"On a personal level, I wasn't willing to move to a super crowded city like Paris. I wanted a space where I could focus all of my energy on the MBA rather than the distractions of the big city," Pedro remembers.

"Lyon has a small town feeling but with a lot to offer."

2. Join the latest innovations in pharmaceuticals and biotech

While Lyon's economy is significantly smaller than Paris', it attracts large, dynamic businesses which are at the forefront of their respective industries.

Pharmaceuticals and biotech companies have long had a base in Lyon, with giants like Sanofi Pasteur holding their headquarters there.

EMLYON MBA graduate Emmanuel Hatt (pictured) was brought to Lyon for that very reason, working for the specialty chemicals company Solvay. With Lyon as his base, the industry gave him room to explore further.

"In terms of internationality, the chemicals and pharmaceuticals industries are at the top," Emmanuel notes.

3.

Discover the capital of gastronomy

The most atmospheric and intriguing district of the city is undoubtedly Vieux Lyon (Old Lyon). You can spend hours wandering through its authentic backstreets and "traboules" (narrow covered thoroughfares). Shops, boutiques, bars and restaurants are everywhere. If you love good food, you'll find the finest (regional) dishes and the best wines in Lyon.
Or allow your eyes to roam around the Halles de Lyon-Paul Bocuse, an enormous roofed market filled with fresh ingredients and delicacies of every kind. The city's reputation as the capital of gastronomy has stood for centuries, and with everything from traditional bistros to Michelin-starred restaurants to choose from, it is well deserved. Lyon is the ideal destination for a city trip and the perfect starting point for a longer holiday in France. Expect to be surprised: book a flight to Lyon today!

A city between 2 rivers

Lyon is bordered by both the Rhône and Saône rivers. The banks of these rivers have been transformed into fantastic locations with plenty to see and do. In the Parc des Berges du Rhône, you can walk, cycle and enjoy peace and quiet right in the city centre. Looking for a nice souvenir? Les Puces du Canal is the largest antiques market in the region. You're bound to find something appealing among its 400 stalls.

Flights to Lyon for a unique city trip

Looking for a cheap flight to Lyon? With KLM, your holiday begins the moment you step onboard. Your journey will fly by with your own entertainment set on intercontinental flights: free films, games and music. Enjoy our selection of snacks or a tasty dinner and drinks.
Add in our many handy extra services (some available for a supplement) and you'll be in France before you know it.

Lyon's unmissable tourist attractions include:
- Vieux Lyon, at the foot of the Fourvière hill
- Confluence museum
- Basilica of Notre-Dame de Fourvière
- The Croix-Rousse district
- The Confluence district
- Place des Terreaux and Place des Jacobins
- Cathédrale Saint Jean-Baptiste de Lyon
- The traboules, the indoor passageways which are typical of the city
- Maison des Canuts museum
- Musée des Tissus textile museum
- Petit Musée Fantastique de Guignol puppet museum
- Institut Lumière
- Musée des Beaux-Arts
- Musée d'Art Contemporain
- Lyon Aquarium
- Gallo-Roman Museum of Lyon
- Lyon Opéra

Discovering France's gastronomic capital

Lyon is known as the French gastronomic capital for good reason, as the city is home to a large number of Michelin-starred chefs, such as the legendary Paul Bocuse, as well as around a hundred restaurants and "bouchon" bistros. You are sure to find something to take your fancy, whatever your budget, so why not treat yourself and discover some Lyon specialities?

During your stay in Lyon, make sure you discover one of the many "bouchon" bistros, which are typical of this region. In a traditional and friendly atmosphere, you will be able to savour generous regional cuisine that combines simplicity and unequalled flavours.

Among the specialities you can discover are cured meats, such as rosette, saveloy sausage and pistachio saucisson, and other treats, such as the "cervelle de canut" cheese dip, the "tablier du sapeur" beef dish, "coussin de Lyon" chocolate pillows, Lyonnais salads, pork "grattons", pike quenelles, praline tarts and more.

Lyon cuisine is known in particular for the high quality of the products it showcases.
You will not be disappointed by the region's wines either, with Lyonnais, Rhône, Beaujolais and Bourgogne vintages making ideal accompaniments to all your meals.

Last month Dan and I joined my parents, Dan's parents, my aunt, my sister, my brother-in-law, and my cute little 17-month-old niece for a week-long family holiday in Lyon, France. Located at the confluence of the Rhône and Saône rivers and surrounded by some of the best wine country around, it's also considered the gastronomic capital of France, and thanks to Eurostar it can now be reached directly from London.

Boasting neat museums such as the Lumiere Museum (which tells the history of early filmmaking) and the Puppets of the World Museum, old Roman ruins, and fantastic cuisine, the hilly riverside city makes a perfect long-weekend city break destination. In Vieux Lyon, the oldest part of Lyon, pedestrian-friendly cobblestoned streets are lined with charming pastel-hued boutique shops, boulangeries, patisseries, and small bistros called bouchons serving up amazing dishes such as salade lyonnaise, green lentils with sausage, local whitefish, and delicious fruity Beaujolais wine. A quick funicular ride will take you up to Fourvière Hill for panoramic city views and to see the stunning glimmering white Notre-Dame Basilica as well as the remains of an old Gallo-Roman amphitheater (more on that in the next post!)

Hidden, covered serpentine passageways called traboules serve as shortcuts for locals (and visitors!) in the know to connect Vieux Lyon and nearby La Croix-Rousse hill. Look for the door plaques as you wander along to find the few traboules open to the public. Hint: the longest one links up #54 Rue St.
Jean to #27 Rue du Boeuf.

Our week in Lyon showed us that the city is a feast not only for the stomach but also for the eyes!

The digital sector in Greater Lyon

Lyon is at the cutting edge of digital.

The digital and image industries in Greater Lyon mean:
- Lyon French Tech (an eco-system with the French Tech label)
- 7,000 businesses, including 300 with high growth potential
- 42,000 jobs
- 600 digital events per year

Towards very high-speed internet

Lyon Métropole has a strong digital policy and is developing a major innovation network in the field of telecommunications: in particular, it has defined a strategy for the very high-speed digital development of its territory by 2018.

Find out how Lyon's international network can open up opportunities for you in the areas of development and R&D.

France's second city is a wonderful destination to visit. This post outlines the best areas to stay in Lyon, and potential hotels and accommodation to make your trip to Lyon fabulous. We have spent quite a bit of time in France exploring and visiting friends and family. We visited Lyon in mid-summer and were surprised by what a great city it is to visit. We were drawn to France's second city by its culinary reputation – and we weren't disappointed. Bordered by two rivers, Lyon is a charming city with amazing architecture both modern and ancient, a UNESCO World Heritage protected old town, fantastic museums and great food. Lyon was founded by the Romans, and in the 17th and 18th centuries the Croix-Rousse silk weavers provided silk to royal courts throughout Europe.

If Lyon is not on your list of places to visit in France, a holiday in Lyon should be added!
Lyon is a fabulous mix of the traditional and the contemporary: one moment you can be wandering around the traboules, the next taking in the contemporary architecture from the Rhône and Saône rivers. And of course, there are so many restaurants in Lyon to try. Lyon is located in the centre of France and gets very hot, humid summers and cold winters. We can vouch for the hot summers, with days of very sunny weather in the mid-30s Celsius.

There are so many things to do in Lyon. Regardless of how much time you have, a great place to start is the Lyon Tourism Office in Place Bellecour: pick up a Lyon travel guide and a Lyon City Card, and decide what to see in Lyon.

Choosing well among the many places to stay in Lyon will set you up for a fabulous visit. There are many Lyon hotels to choose from, which can be a bit overwhelming, and your accommodation options are varied. The cost of hotels in Lyon, like other large French cities, is high. Finding a nice apartment in Lyon, France is another good option. Depending on where you stay in Lyon, you may be able to find budget-friendly options too – look at our Lyon accommodation recommendations of La Croix-Rousse and La Part-Dieu.

Lyon

Ly·on or Ly·ons (lē-ōN′, lyôN)
A city of east-central France at the confluence of the Rhone and Saône Rivers north of Marseille. Founded in 43 BC as a Roman colony, it was the principal city of Gaul and an important religious center after the introduction of Christianity. Its silk industry dates to the 1400s.

(Placename) a city in SE central France, capital of Rhône department, at the confluence of the Rivers Rhône and Saône: the third largest city in France; a major industrial centre and river port. Pop: 480,778 (2006).
English name: Lyons. Ancient name: Lugdunum.

Ly•ons (liˈɔ̃, ˈlaɪ ənz)
a city in E France at the confluence of the Rhone and Saône rivers. Pop. 418,476. French: Lyon.

Noun: Lyon - a city in east-central France on the Rhone River; a principal producer of silk and rayon
Lyonnais - a former province of east central France; now administered by Rhone-Alpes

Things to do in Lyon France include everything from dining at traditional bistros and shopping at alluring fashion boutiques to visiting cultural attractions and wine regions. There is a variety of activities in Lyon that will certainly appeal to each and every member of your group. You can spend time exploring the city's many monuments and examples of Renaissance architecture, or pass the time meandering through the chic and trendy boutiques and cafes, which seem to be around virtually every corner.

One of the best things to do in Lyon is to explore the city's entire shopping scene. There is a broad selection of destinations, including indoor and outdoor markets, shopping malls and department stores, and independently owned shops and boutiques. There is also a good number of internationally recognized, high-end designer boutiques in areas of town like the Presqu'ile. Check out the Rue de la Republique and Rue Victor Hugo for some amazing designer shopping.

Of all the activities that are popular in Lyon, perhaps none is more so than eating. Gastronomy is a vastly important component of the cultural dynamic of this second largest city in France. Its long and proud tradition in this area has earned the city the titles of "the city of food" and the French "capital of gastronomy". This is quite a distinction in a country filled with so many enviable regions for food and drink.
The local produce, grapes for wine, wild game, and local cheeses and other delicacies will make their way into every part of your trip. Among the best and most relaxing things to do in Lyon, France is simply to sit back at one of the traditional bistros (called bouchons) and enjoy the traditional hearty dishes, washed down with a glass of Beaujolais or Cotes du Rhone.

Roman Ruins and Cathedrals

You would be missing the mark on one of the best things to do in Lyon if you did not set aside time simply to walk around and take note of some of the most stunning works of architecture to be found anywhere in Western Europe. The gorgeous Notre Dame Basilica, the Lyon Opera House, and the Roman ruins on Fourviere Hill are all must-sees for anyone traveling to this area.

Lyon has been a settlement since Roman times.

It stands out even in France as having high-quality cuisine, and it packs a punch in terms of art and history. Trace the Romans at the amphitheatre that overlooks the city, check out world-famous art at the Musée des Beaux Arts, indulge in the historic UNESCO-recognised old quarter, and look for the arcades that camouflaged the French Resistance as they fought against the Nazis.

The size of the city means you'll need to use the metro to make the most of it. And of course, with a flexible Eurail pass, if you change your mind about where you want to go, Lyons has plenty of other routes with which to tempt you.

Find out what the Eurail pass is and how you can travel between top French cities and other European destinations in comfort.

Lyon, in east-central France where the Rhone and Saone meet, is the ideal destination for a city break.
Sprinkled with a never-ending supply of restaurants, and with over 22 Michelin stars under its belt, this is the perfect place for a foodie. But if food doesn't excite you as much as sport, then you can take pleasure in watching a world-class sporting event at the Stade de Gerland football stadium; the sports on offer include basketball, hockey, rugby and football. Hiring a car in Lyon is simple: we now have seven city locations, one of which can be found at Lyon Saint Exupery Airport.

COMMERCIAL REAL ESTATE IN 2013: LYON STILL ATTRACTIVE

According to a study presented by FNAIM/Cécim at the end of January, companies in the Lyon region bought or rented 252,000 sq. metres of office space last year, compared with 177,420 in Barcelona, 221,730 in Amsterdam, 227,290 in Milan and 150,420 in Manchester. Lyon continues to stand out due to its competitive rates, with an average price of €270 before tax per sq. metre, compared with €710 in Paris, €450 in Milan, €335 in Amsterdam and €400 in Manchester. On the other hand, the city has a smaller supply on offer in comparison with other cities: just 300,570 sq. metres available, compared with 884,210 sq. metres in Barcelona or more than 1.5 million sq. metres in Milan.

Amongst the biggest operations of 2013 in Lyon were the Alstom deal – the company acquired 29,100 sq. metres for its new headquarters in the Carré de Soie area – SNCF's purchase of 22,000 sq. metres of the Incity Tower, more than half of its surface area, and the rental of 18,500 sq. metres for the new Sanofi headquarters in Gerland.

International relations, Lyon - Setif (Algeria)
- Economic development
- Green spaces
- Sustainable development
- Urban planning

Following the initial approaches made in 2006, Lyon and Setif (Algeria) have developed a technical and economic partnership, which entered its operational phase in 2010.

Lyon - Setif (Algeria) cooperation, key dates

In 2010, the El Atkik mosque was illuminated, thereby constituting a pilot project and an educational support in the plan to reinforce the skills of technicians in Setif in the specific field of public lighting.

In 2011, a delegation exchange between Setif and Lyon was organised for businesses specialising in building, materials and real estate development. Since then, the economic stakeholders in the two agglomerations have been in regular contact to promote business flows and share good practices in the area of development.

In 2012, Lyon's cooperation comprised providing its know-how to the programme for the redevelopment of Setif's amusement park.
These actions allowed local stakeholders to validate their development choices and to design public spaces by bringing in other disciplines: sociology, economics, urban planning.
In 2012, Setif organised, with the help of the metropolis of Lyon, cultural days centred on exhibitions, concerts and a conference.
Another cultural measure, the "Noir et Blanc" programme led by the Gertude II association, under way since 2003 in partnership with Algerian and French artists, makes available the homes of artists in Setif, Algiers, Jijel and Lyon.

Pascal L'Huillier
Métropole de Lyon - Economic development, employment and knowledge delegation
Head of decentralised cooperation projects

You've heard of Lyon, France, and maybe changed trains there. That was my only experience with Lyon, until recently. Many people told me how nice a city it is, how good the food, how much there is to do. I had to visit it for myself.
Lyon is a sprawling city, the third largest in France. There are half a million people, and more in the surrounding suburbs. It's also known as the second gastronomic capital of France, after Paris. Your stomach and palate can verify that claim!
Historically, Lyon is known for the silk trade. Several museums show you the history of the trade and also how things are made. I visited one (Soierie Saint Georges) whose only customers are the 40,000 châteaux all over France which need renovations of silk furnishings on a regular basis. Another I saw does screen prints of silk fabric. Most are now in the Croix Rousse district in the north part of the peninsula.
Much of what you want to see is going to be in Vieux Lyon, the old part of town, and on the peninsula just to the east, a narrow slice of land that lies between the rivers Saône and Rhône. There you'll find shopping, many restaurants, and the big city buzz.
Keep going east and you'll find a larger urban centre where most people and companies are, as well as two train stations, bus routes, and the tram. Lyon does have arrondissements, or districts, like Paris, though unlike Paris, they are not in any order. You can get tickets for 24 or 48 hours that cover tram, bus, and metro.
Here are some highlights of Lyon you won't want to miss:
- Vieux Lyon, a large Renaissance district with buildings from the 15th and 16th centuries
- Basilique Notre Dame de Fourvière, a beautifully constructed and decorated church on the hilltop. You can climb if you have the strength, or take the funicular (the first in the country, and one of two in Lyon) at St. Jean in Vieux Lyon for a metro ticket.
- Musée des Confluences: a confluence is the joining of two rivers. At this southern point on the peninsula is the science centre and anthropology museum.
- On the peninsula, visit the Place des Terreaux.

Lyon has centuries-old traditions that are still in place today, but many of the facts and artefacts of its history are held in safekeeping at the local museums. Other museums in Lyon celebrate everything from the visual arts to science and literature. Of all the things to do in Lyon that are both educational and fun, visiting the Museum of Fine Arts, the Textile Museum, and the Musée d'Art Contemporain should be near the top of your list if you enjoy good museums.
Another of the best things to do in Lyon is to attend a professional performance at the National Opera House of Lyon. Over the years, a great many amazing performances and performers have taken the stage here. If there is a show that you are particularly interested in seeing, be sure to make your reservations as far in advance as possible.

Parks and Outdoor Recreation

There is plenty of green space in the capital of the Rhône-Alpes region.
The Parc de la Tête d'Or is the largest city park in France. It contains a gorgeous boating lake and plenty of opportunities for sports and activities, as well as a zoo featuring the likes of giraffes, elephants, and tigers. You can even enjoy miniature golf and a miniature locomotive that is a smash with the children. There are always plenty of activities in Lyon to enjoy in the beautiful surroundings of nature.
Within one to three hours of driving, a number of fantastic ski resorts offer some of the best skiing in France. These resorts can serve as great day trips or complements to any Lyon vacation.
The best things to do in Lyon are highly subjective choices, but if you plan on eating the local fare, checking out the shopping and cultural attractions, and enjoying all that the beautiful city has to offer outdoors, you will be sure to have a memorable time.

Surviving from the 17th and 18th centuries are: the Bartholdi Fountain; the town hall; the baroque Chapelle Saint-Pierre; the Hôtel-Dieu de Lyon (17th and 18th centuries), the historic hospital with a baroque chapel; the Temple du Change (17th and 18th centuries), Lyon's former stock exchange and a Protestant temple since the 18th century; the Place Bellecour, one of the largest city squares in Europe; and the Chapelle de la Trinité (1622), the first baroque chapel built in Lyon and part of the former École de la Trinité, now the Collège-Lycée Ampère.

Lille was elected European Capital of Culture in 2004, along with the Italian city of Genoa. Lille has varied architectural styles, with much influence from Flemish architecture through the use of brown and red brick. Characteristic are the two- to three-storey terraced houses with narrow back gardens.
This is unusual in France.
Lille's street scene is a transition from the architecture of France to that of the neighbouring countries of Belgium, the Netherlands and England, where brick houses were built in large numbers. Its architectural heritage includes the Gothic style of the Middle Ages (the Saint-Maurice and Sainte-Catherine churches); the Renaissance (houses in the Rue Basse); Flemish Mannerism (the Vieille Bourse, or Old Stock Exchange, and the House of Gilles de la Boë); the Classical style (the Saint-Étienne and Saint-André churches, the Citadelle); Gothic Revival (the Cathédrale Notre-Dame-de-la-Treille); Art Nouveau (the Maison Coilliot); regional Art Deco (the Hôtel de Ville, or town hall); and Euralille's contemporary modern structures.
Construction of a major urban project, Euralille, began in 1991. The centre was opened in 1994, and the renewed district is now full of parks and modern buildings with offices, shops and apartments.

The most unique culinary specialty is something known as "quenelles," a type of dumpling (made with ground fish) in a rich cream sauce. Built on the slopes of the Croix-Rousse hillside, as you can see on the Lyon attractions map, this historic neighborhood was an important centre of weaving in the early 19th century. Because of the steep gradient of the streets, there are many charming curves and staircases. Another tourist attraction in this area is the Maison des Canuts (House of Silk Workers) at 10/12 Rue d'Ivry. This small museum is dedicated to the art of creating silk.
The Lyon Presqu'île district, as mentioned on the Lyon attractions map, is a piece of land, rather like an island, within the river. This neighborhood is distinguished by its beautiful architecture and monumental town squares. The Place des Terreaux is worth visiting just to see the fountain by F.A. Bartholdi.
Housed in an 18th-century Lyonnais mansion are two superb museums: the Fabric Museum and the Museum of Decorative Arts. The Musée des Tissus (Fabric Museum) is a unique museum that allows visitors to discover the fascinating history of the Lyon silk trade, dating back to the Renaissance period. In a majestic location on the Colline de Fourvière (the hill that overlooks Vieux Lyon), the Basilique Notre-Dame rises to a height of 130 metres above the Saône River. The basilica is accessible by funiculars running up the hill.

The Lyon zoo map shows recommended tours of the Lyon Zoological Park. This zoo map of Lyon will allow you to easily find out where each animal is and where to picnic in the Zoological Park of Lyon in Auvergne-Rhône-Alpes, France. The Lyon zoo map is downloadable in PDF, printable and free.
Parc de la Tête d'Or (literally, Golden Head Park), in central Lyon, is the largest urban park in France at 117 hectares, as you can see on the Lyon zoo map. Located in the 6th arrondissement, it features a large lake on which boating takes place during the summer months. Due to the relatively small number of other parks in Lyon, it receives a huge number of visitors over summer, and is a frequent destination for joggers and cyclists.

Lyon's Cuisine is Alive and Well

Along with Paris and the Côte d'Azur, Lyon has always enjoyed gastronomy of world renown.
Some of its glorious restaurants did not survive, such as the illustrious La Mère Brazier, but under the auspices of Paul Bocuse, who will turn 81 on February 11, 2007, a new generation of chefs has marched into the kitchens, adding their own creativity. Lyon's cuisine is in great part built on the products of its surrounding area: prime Charolais beef, lambs and sheep from the Loire and Auvergne, Bresse poultry, butter and milk from the nearby mountains. The tradition is as strong as ever, with a new creative attitude.
The "Toques Blanches Lyonnaises" (an association of eager chefs) has recruited new members, totaling a hundred chefs, some of them already well known, but many belonging to the new generation.
Paget of Le Fleurie appears to be the leader of this trend. Running his restaurant with his wife Jacinthe and a lean staff, he offers a wonderful menu every day with a bargain prix-fixe price of just 13.50 euros and a magnificent Beaujolais wine list.
Viola, who won the distinctive prize of "Meilleur Ouvrier de France" in 2004, has settled in a landmark Lyonnaise "bouchon" (the local name for a typical bistro) called Daniel et Denise. He kept well-known dishes on his menu, such as the famous "Tablier de sapeur": tripe dipped in egg and breadcrumbs, grilled and served with a mayonnaise sauce composed of blended hard-boiled egg yolks, capers and mixed herbs, with chopped hard-boiled egg white added.
Viannay, another winner of the "Meilleur Ouvrier de France" award, is also a typical representative of this new generation, whose philosophy and goal is to please diners at moderate prices.
Viannay performs a sharp, creative style of cuisine for a considerate bill of fare at his establishment, named after himself. Another development of note in Lyon is the renovation of the wholesale food market, located in the city centre. As well, Mayor Gérard Collomb plans to launch a federation of the gourmet cities of Europe.

And the great provincial cities like Lille and Lyon, Bordeaux, Toulouse, Marseille and Nice vie with the capital and each other, like the city-states of old, for prestige in the arts, ascendancy in sport and innovation in urban transport.
For a thousand years and more, France has been at the cutting edge of European development, and the legacy of this wealth, energy and experience is everywhere evident in the astonishing variety of things to see: from the Gothic cathedrals of the north to the Romanesque churches of the centre and west, the châteaux of the Loire, the Roman monuments of the south, the ruined castles of the English and the Cathars, and the Dordogne's prehistoric cave paintings. If not all the legacy is so tangible - the literature, music and ideas of the 1789 Revolution, for example - much has been recuperated and illustrated in museums and galleries across the nation, from colonial history to fishing techniques, aeroplane design to textiles, migrant shepherds to manicure, battlefields and coalmines.
Many of the museums are models of clarity and modern design. Among those that the French do best are museums devoted to local arts, crafts and customs, like the Musée National des Arts et Traditions Populaires in Paris and the Musée Dauphinois in Grenoble.
But inevitably first place must go to the fabulous collections of fine art, many of which are in Paris, for the simple reason that the city has nurtured so many of the finest creative artists of the last hundred years, both French, Monet and Matisse for example, and foreign, such as Picasso and Van Gogh.
If you are quite untroubled by a need to improve your mind in the contemplation of old stones and works of art, France is equally well endowed to satisfy the grosser appetites. The French have made a high art of daily life: eating, drinking, dressing, moving and simply being.

Lyon, Where to Stay

La Croix-Rousse is located between the Saône and Rhône rivers, on the hill opposite Vieux Lyon and Fourvière. Traditionally known as the silk-weaving area, there are still remnants of this industry throughout La Croix-Rousse. A short walk into the Lyon city centre, La Croix-Rousse is well served by the Metro, including the funicular. While this area is primarily residential, there is plenty of appeal in its bistros, markets, winding streets and small shops. It is a great place to stay to experience suburban French life, explore local markets and squares, and eat mighty fine food at a local bouchon.

Vieux Lyon and Fourvière

Located in the 5th arrondissement, this is really the historic heart of Lyon, the Lyon Old Town. Fourvière was chosen by the Romans for their capital city of Gaul. The Basilica of Notre-Dame de Fourvière is perched on the hill, overlooking Vieux Lyon, the UNESCO-protected Renaissance area. Spend hours wandering around Vieux Lyon taking in the 15th- and 16th-century palaces of wealthy Italian bankers, discover the traboules which connect streets via secret passageways, or spend time reflecting in the Cathedral St Jean.
There are plenty of restaurants, museums and bars in Vieux Lyon and Fourvière, so it is a great place to stay if you love history and want to be in the thick of it. Be warned, though: this area is on the side of a hill, so although you may get tired legs, you will have great views.

Bellecour and Hôtel de Ville

Bellecour and Hôtel de Ville, in the 2nd arrondissement, are the heart of Lyon and are part of the UNESCO World Heritage Site between the Saône and Rhône rivers. Place Bellecour is a large public square in the middle of the city; it is the largest pedestrian square in Europe. The Lyon Tourism Information Centre is located in Place Bellecour, so it is definitely worth a visit. The Bellecour and Hôtel de Ville area is home to Lyon's top museums, including the Museum of Fine Arts and the Museum of Textiles. The National Opera is located here, along with plenty of high-end shopping and restaurants.

Les Subsistances are a heritage site with a rich history. While the site's first traces of occupation date back to the 2nd century CE (a glass workshop and a Gallic pink granite quarry), it is the last four centuries that have really left their mark. Since the 17th century, Les Subsistances have had three major uses: first as a convent (buildings in pink ochre), then as a military garrison, and since 2000, as a cultural centre. Today it combines the International Laboratory for Artistic Creation, dedicated to theatre, dance, and contemporary circus, and the École Nationale des Beaux-Arts de Lyon.

Yesterday… in the 17th C… a Convent

Set between the Croix-Rousse and Fourvière hills, the Serin quarter enjoys a strategic position. Easy to defend, it marks the northern entry into Lyon. To block access to the city at night, the guards strung a chain upstream across the Saône, and another at St-Georges to block entry from the south.
The river gave rise to all sorts of fantastic tales, such as the one told for centuries about the Machecroute, a type of enormous aquatic dragon said to be responsible for the numerous floods the city suffered: a single flick of its tail caused the waters to rise. A stela at the street-side entrance to the restaurant shows the level of the 1840 flood.
In 1640, the Sisters of the Visitation acquired the land and had a small cloister built where the restaurant now stands, and a church along the grillwork that today separates the site from the riverbanks. Together these formed the convent of Ste-Marie des Chaînes. Renting out the vast surrounding lands and selling produce from the vines and orchards around the cloister provided the sisters a comfortable life. The convent prospered and counted up to 70 persons, most of whom were girls from wealthy families. Though the convent began to have financial difficulties in 1700, wealthy young girls continued to come in large numbers, so the mother superior, Sister Sépharique d'Honoraty, decided to have a larger convent built.
Legend has it that before construction began, she announced: "to spare expenses, we will forgo an architect. I will draw up the plans myself, and may God confound us if we don't succeed!"

La Croix-Rousse is the old silk-weavers' district and spreads up the steep slopes of the hill above the northern end of the Presqu'île. Although increasingly gentrified, it is still predominantly a working-class area, but barely a couple of dozen people operate the modern high-speed computerized looms that are kept in business by the restoration and maintenance of France's palaces and châteaux.
Along with Vieux Lyon, it was in this district that the traboules flourished. Officially the traboules are public thoroughfares during daylight hours, but you may find some closed for security reasons.
The long climb up the part-pedestrianized Montée de la Grande Côte, however, still gives an idea of what the quartier was like in the sixteenth century, when the traboules were first built. One of the original traboules, Passage Thiaffait on rue Réné-Leynaud, has been refurbished to provide premises for young couturiers.

It is tempting to suggest that a drawing such as the ornamental frame after Macchietti might have served a similar rôle, perhaps even to display another drawing or print – Vasari, for example, made similar use of architectural borders, creating them specifically for use with his own collection of drawings by other artists, such as for this Christ in Glory by Ghirlandaio in the Albertina.
By the time that Corneille was working in Lyon and making a reputation for himself, the architectural style which had developed in central and northern Italy was spreading through Europe, partly via the School of Fontainebleau. François I of France had imported the Italian artists Rosso Fiorentino (from 1530 to 1540) and Francesco Primaticcio (from 1532 until c.1559) to work on the interior of the château de Fontainebleau, a project which was extremely influential in propagating the Mannerist style in France (it was François's daughter-in-law, Catherine de' Medici, who patronized Corneille de Lyon). Mannerist picture frames were one of the related developments; framemakers, like other furniture makers, used the same architectural forms, but could afford to take more liberties in the execution than builders could. Thus the context was ideal for generating a style of framing for these small portraits consistent with the patterns which have been associated with them for so long. However, it has not been clear until this point whether the aedicular frames associated with the Corneille portraits were made in France or – as it turns out – were obtained from Italy.
The jewel-like quality of the portraits must have influenced the choice of frame, underlining the preciousness of the picture; but a further question must then be: when was this choice made? When were these particular frames chosen?

The art market

Corneille de Lyon (c.1500/10–d.1575), François, dauphin de France, o/panel, 15.4 x 13.5 cm., Haboldt & Co., TEFAF 2018
Corneille de Lyon (c.1500/10–d.1575), Portrait d'homme, probablement un officier royal, c.1545-50, o/panel, 7.1 x 6 ins (18.1 x 15.2 cm.)

Lyon is the third-largest city of France. It is located in the country's east-central part at the confluence of the rivers Rhône and Saône, about 470 km (292 mi) south of Paris and 320 km (199 mi) north of Marseille.
The city is known for its cuisine and gastronomy and its historical and architectural landmarks; part of it is registered as a UNESCO World Heritage site. The noted food critic Curnonsky referred to the city as "the gastronomic capital of the world", a claim repeated by later writers such as Bill Buford. Renowned 3-star Michelin chefs such as Marie Bourgeois and Eugénie Brazier developed Lyonnaise cuisine into a national phenomenon favoured by the French elite, a tradition which Paul Bocuse later turned into a worldwide success.
The bouchon is a traditional Lyonnais restaurant that serves local fare such as sausages, duck pâté or roast pork, along with local wines. Two of France's best-known wine-growing regions are located near the city: the Beaujolais region to the north and the Côtes du Rhône region to the south.
Lyon played a significant role in the history of cinema: it is where Auguste and Louis Lumière invented the cinematograph in 1895.
The Institut Lumière, built as Auguste Lumière's house, and a fascinating piece of architecture in its own right, holds many of their first inventions and other early cinematic and photographic artefacts.
Economically, Lyon was historically an important area for the production and weaving of silk; now the city is a major centre for banking, as well as for the chemical, pharmaceutical, and biotech industries.
- The Roman ruins on the hillside near the Fourvière Basilica, with the Ancient Theatre of Fourvière, the Odeon of Lyon and the accompanying Gallo-Roman Museum;
- The Amphitheatre of the Three Gauls, Roman ruins of an amphitheatre.
- Cathedral of St.

Lyon: France's Best Kept Secret (I)

Now a UNESCO World Heritage City (ARA) - Lyon, France has long been known to sophisticated travelers as the culinary capital of France. It is now emerging as a "must-do" destination for a broader range of American travelers because of the many other attractions it offers to visitors. A trip there is easier than ever, with a direct six-hour flight Delta Airlines now offers between New York's JFK and Lyon's Saint-Exupéry airport, named in 1999 in tribute to the author and aviator Antoine de Saint-Exupéry, a native of Lyon who wrote 'The Little Prince.'
Recently named a UNESCO (United Nations Educational, Scientific and Cultural Organization) World Heritage City because of its illustrious 2,000-year history, France's second-largest city is brimming with activities and attractions.
With its world-renowned cuisine, architecture (boasting the second-largest Renaissance quarter in Europe), museums, and picturesque nightly lighting of monuments, Lyon is a place renowned for its quality of life. The past is alive in such landmarks as the Gallo-Roman amphitheater, used for concerts today, and the Croix-Rousse hill, the centre of the silk industry in the 1800s. Its Renaissance quarter is by no means a mere testament to the past, as it still thrives with daily activity, from hotels to restaurants to shops.
A visit to Lyon makes the pages of a European history book come alive to travelers. Today, life goes on in a city and metropolitan area spanning some 124,000 acres; 1,250 of those acres constitute the historic sector that is now included on the UNESCO World Heritage List.

Have you tried crunchy duck skin, called "friton", or pork skin, called "graton"?
Something I appreciated during our stay in Lyon is the practice of the "plat du jour" or the "formulaire." We are familiar with daily specials in the U.S., but in Lyon, it is usual practice.
The formulaire consists of an entrée (appetizer), a main dish, and a dessert. Sometimes you can choose two or mix and match with items on the main menu. All drinks are extra. The average price for a formulaire is about 14 euros, lunch or dinner!
The French love to eat out and socialize. The reasonable price allows everyone, including ourselves, to enjoy eating out without breaking the bank.
It is not unusual to have a 2-hour lunch break (longer at dinner). No one will prompt you to pay your bill. You'll sometimes be hard-pressed to find your server.
On the slopes of the Croix-Rousse, the second of two hills over Lyon, are vintage shops, small restaurants, and bars, occupying the residences of what used to be a town of silk weavers.
The Croix-Rousse is locally called la colline qui travaille, or "the hill that works," reflecting its silk-weaving history.
Its history goes back to 1536, when King François gave Lyon the right to produce silk to compete with Venice, then the only silk producer. Silk production quickly became a significant industry in Lyon, up until the 19th century. Half the population of Lyon depended on silk production and trade; at one point, 40,000 silk weavers worked in various silk ateliers.
Silk workers started in Vieux Lyon and eventually moved to the Croix-Rousse neighborhood in the 19th century. Many of the houses in Croix-Rousse have high ceilings, which used to accommodate the large silk looms.
In 1801, Joseph Marie Jacquard invented the jacquard loom. This power loom used wooden punch cards to program the operation of the machine. The jacquard loom reduced the number of weavers per loom from two to one.
Silk workers, called "canuts," lost many jobs as a result of the new loom. The canuts staged the first mass uprising in France in 1834, protesting the loss of jobs and poor working conditions.

18 rue de Bonald, 69007 Lyon
Tel: 04 37 37 44 54

Resistance and Deportation History Centre

On Avenue Berthelot in Lyon stands this grand, slightly gloomy building... A former military medical school, occupied by the Gestapo between 1943 and 1944, the place has a painful history. It is now the Resistance and Deportation History Centre. The permanent exhibition is moving, returning to a time that profoundly marked Lyon's history.
14 avenue Berthelot, 69007 Lyon
Tel: 04 78 72 23 11

Galerie Roger Tator

At the Galerie Roger Tator, nothing is too enclosed... They love being open to the various arts - design, modern art, architecture, sound - as well as being open to the world, open to the city. Certain guest artists have even gone so far as to demolish the walls and facade...
Completely open, you've been told!
36 rue d'Anvers, 69007 Lyon
Tel: 04 78 58 83 12

Institut Lumière

We sometimes forget that cinema originated in Lyon! The Institut Lumière is part museum, part cinema - of course! - with very good retrospectives and even a major cinema festival, the Festival Lumière, in the autumn. We recommend the Épouvantables vendredis, the opportunity to see (again) the best horror films in the history of cinema, every Friday the 13th, or almost...
25 rue du Premier film, 69008 Lyon
Tel: 04 78 78 18 95

Musée de l'imprimerie

The Musée de l'Imprimerie is nestled on the Presqu'île, not far from the Rue de la Gerbe. The museum is worth the detour just for the beautiful Renaissance building that it occupies. This journey into the world of print and handwriting is very instructive!
13 rue Poulaillerie, 69002 Lyon
Tel: 04 78 37 65 98

Les Subsistances

Concerts, theatre, circus, readings... A wonderful, always eclectic programme at Les Subsistances. And in the restaurant, Quai des Arts, you can enjoy fabulous Sunday brunches - an all-you-can-eat buffet that you'll never want to leave because it's so good!

Lyon sits about 300 miles southeast of Paris and 200 miles northeast of Marseille. Because the city is relatively far inland, temperatures here are a bit cooler than in the rest of France, but it's still pretty temperate. Even in the dead of winter, it averages between 32 and 43 degrees.
In fact, winter can be a lovely time to visit Lyon, especially if your visit coincides with the Fête des Lumières, or Festival of Lights, when nearly every household in the city places candles along the outsides of its windows in honor of the Virgin Mary. The tradition usually lasts for four days, around December 8th, and has been held since 1643, when Lyon was hit by the plague and its residents begged the Virgin Mary for mercy.
Today, it is celebrated with an elaborate light display at the Basilica of Fourvière and culminates in a light show at the Place des Terreaux.
Lyon is certainly lovely in the spring, summer and fall, too. Temperatures start to warm up in May, averaging between 50 and 68 degrees. By July, average temperatures can rise as high as 80 degrees. When autumn creeps in, it cools down a bit, with averages between 47 and 62 degrees in October. And there's another advantage of visiting Lyon during this warm stretch: it's not as overrun as other French cities. During July and August, when millions of locals and tourists clog the hotels and resorts along the coast, Lyon is a lot quieter.

Lyon Bleu International is a private school which has offered French language courses since 1999, located in the city centre of Lyon, the second-largest city in France.
In the heart of the city, our school offers a harmonious and convivial environment for French language courses, a stone's throw from the historical centre of Lyon on one side and the business district on the other, at the crossroads of numerous public transport options, in a stylish and lively neighbourhood.

Learn French in Lyon

Lyon has a very rich cultural heritage and is famous for its gastronomy and vineyards, and more recently for its football team and art. At the crossroads of the main European cities, close to the Mediterranean sea and Alpine ski resorts, Lyon is perfectly located for people wanting to visit France and Europe. Lyon also offers a very dynamic cultural life and a variety of entertainment opportunities.
At Lyon Bleu International we offer and provide:
1. A wide range of French language courses suited to your objectives, your available time and budget
2. Small groups with a maximum of 12 participants from very different countries, for better progression and better value for money
3.
Well-trained and motivated French teachers
4. An efficient and competent administrative staff to answer all your questions, to help you with the formalities of registration for a French course or for your visa, or quite simply to advise and assist you
5. A friendly and studious atmosphere in which to learn French
6. Carefully selected French host families and residences for accommodation
7. Daily cultural activities in order to practice your French
8. Preparation for the French international exams: DELF/DALF
9. 14 fully equipped classrooms, a students' room equipped with PCs, Wi-Fi Internet connection, a non-smoking environment (with an exterior area reserved for smokers), and a book and video/DVD library service
10. A guarantee of quality:
- Lyon Bleu International is a French professional training organisation certified by the French government.
- Lyon Bleu is accredited and certified by Label Qualité FLE and a member of Campus France

BP 22, 50170 Mont-Saint-Michel, France, Phone: +33 (0) 2 14 13 20 15

13. Musée des Beaux-Arts de Lyon

The foundations of this museum date back to the 6th century, when the walls that now encompass priceless art surrounded an ancient abbey. It wasn't until September 20, 1814, however, that the site was inaugurated as a true museum of fine art by the Count of Artois. Over the last 200 years, the museum has developed into one of the largest in Europe, housing over 8,000 antiquities, 3,000 decorative objects, 40,000 coins and medals, 2,500 paintings, and 1,300 sculptures. The vast collection is exhibited in over 70 rooms to offer visitors an outstanding and varied sampling of art from antiquity to the contemporary that reignites the enthusiasm for artistic discovery.
The vast collection is exhibited in over 70 rooms to offer visitors an outstanding and varied sampling of art from antiquity to the contemporary era that reignites the enthusiasm for artistic discovery.\n20 Place des Terreaux, 69001 Lyon, France, Phone: +33 (0) 4 72 10 30 30\n© Courtesy of K'Steel - Fotolia.com\nHoused in an old railway station in the center of Paris, the Musée d'Orsay was originally constructed for the Universal Exhibition of 1900. A work of art in and of itself, the glass-ceilinged building houses a treasure trove of art dating from 1848-1914. From paintings to sculpture and photographs to graphic arts, many of the most influential artists of the period grace the walls and floors of this old station. Throughout its three floors, visitors will find pieces from the Louvre, Musée de Jeu de Paume, and the National Museum of Modern Art arranged in a moving tour of the impressionist and modernist periods.\n1, Rue de la Légion d'Honneur, 75007 Paris, France, Phone: +33 (0) 1 40 49 48 14\n15.Museum of Gallo-Roman Civilization\n© Courtesy of Jörg Hackemann - Fotolia.com\nBefore France, there was Gaul, the domain of the Romans, and Lyon was its crown jewel.
In the first half of the 17th century, two production centres predominated. In Paris, Jean-Baptiste Delfosse, the best-known producer, supplied the royal family, as we can see from the Journal of the Garde-Meuble de la Couronne. Unfortunately, none of these Paris decors has survived. In Avignon, the most famous workshop belonged to the Boissier family, and passed from father to son for over a century. In 1712, the family's most celebrated exponent, Raymond Boissier, published a sales catalogue; a copy of this is now in the Avignon library.\nToday, over a thousand antique gilt leathers are listed in France in public and private collections. Although there are far fewer historiated versions, their decoration is the most striking, as witnessed by the hangings now in the Château d’Écouen and one illustrating the retinue of \"David Victorious\", once on show in the Château de Ferrières and then in the Hôtel Lambert in Paris. Although plentiful, the number of surviving decorative gilt leathers with repetitive patterns is nowhere near the huge quantity that once existed. They are found mainly in numerous chapels and churches in the Alps and the Pyrenees.\nGilt leathers and their sumptuous designs disappeared everywhere at the same time, at the end of the 18th century. They soon fell into oblivion, despite intermittent attempts to revive them (particularly in France) in the late 19th century and up to the present day, but using very different techniques and materials.", "score": 25.703854003797073, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "Lyon is not a top-of-mind city in France. It has the elegance of Paris, but without the attitude. This city of Lyonnais gourmands racks up Michelin-starred restaurants and impresses with several UNESCO World Heritage sites.\nWe visited Lyon three years ago on a fluke. We had been traveling to Geneva on vacation, and I wanted to visit France before leaving the continent. 
The city closest to Geneva was Lyon, only a 90-minute ride on the TGV (Train à Grande Vitesse). We visited Lyon for a short but amazing three days. At the end of the trip, we promised ourselves that we would come back for an extended visit one day in the future.\nThat day came sooner than expected. We lived in Lyon this past year, and I did not want to leave. People who visit Lyon agree that it is like discovering a hidden gem. It has the feel of Paris without the crowds, the traffic, and definitely without the attitude.\nWhat We Will Cover:\nLyon is smack in the center of the country (east-central, actually). It is 90 minutes southeast of Paris on the TGV. From the Charles de Gaulle Airport (CDG), you can take a TGV going to Marseille (and points to the South of France), and Lyon will be the first stop.\nThere are 6 train stations in Lyon, but there are two that you need to know. The central station is Gare de Lyon Part-Dieu. If you are going or coming from a major city in France or internationally, Part-Dieu will be your station. Gare de Lyon Perrache was the original central train station until the 1980s. Perrache could be an alternative station depending on your schedule. Part-Dieu is located on the west bank of the Rhone River while Perrache is on the Presqu'ile. It is very convenient for those who live north of Lyon and walkable to anywhere on the Presqu'ile. Perrache is normally a continuing stop in Lyon after Part-Dieu (8 minutes to Perrache).\nLyon also has an international airport called Lyon Airport Saint-Exupéry (LYS), named after Antoine de Saint-Exupéry of Little Prince fame, who was born there. The airport is on the far east side of the city.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "You can find on this page the Lyon tourist map to print and to download in PDF. 
The Lyon tourist attractions map presents the monuments, museums, parks and points of interest of Lyon in Auvergne-Rhône-Alpes - France.\nThe Lyon sightseeing map shows all tourist places and points of interest of Lyon. This tourist attractions map of Lyon will allow you to easily plan your visits of landmarks of Lyon in Auvergne-Rhône-Alpes - France. The Lyon tourist map is downloadable in PDF, printable and free.\nFrance's second-most important city after Paris is surprisingly undiscovered. Although Lyon does not often make it onto tourist itineraries, many cultural treasures await those who take the time to explore the city. With a history dating back to ancient Roman times, Lyon has earned a place on the UNESCO World Heritage list. The city boasts France's oldest ancient ruins, medieval quarters, and fine Renaissance houses, as shown in the Lyon tourist map. The happiest of all visitors are the ones who journey here to sample the famous cuisine. The celebrated Michelin-starred Auberge du Pont de Collonges, 10 kilometers from Lyon, was helmed by legendary French chef Paul Bocuse for decades and is still a top destination for gourmands. Authentic Lyonnais gastronomy can also be enjoyed all over Lyon at bouchons, small cozy bistros that serve traditional local specialties.\nMany of the top attractions in Lyon are illuminated at night, earning Lyon the nickname of Capital of Lights. Home to some of the finest chefs in the world, the city is also known as the capital of gastronomy, offering travelers yet another tasty reason to visit lovely Lyon. If you like the medieval Marais district in Paris, you will love Vieux Lyon. The pink-and-ochre Old Town is Europe's largest Renaissance quarter, as you can see in the Lyon tourist map. The cobbled streets, Italianate courtyards and maze of hidden passageways are fascinating to explore. These alleyways and passageways are known as traboules. 
Their original purpose was to provide shelter from the weather for the silk-weavers as they moved their delicate pieces of work from one part of the manufacturing process to another.\nNationally the 2nd conference and trade-show city, and 35th in Europe, Lyon hosts some sixty congresses and around a hundred shows every year.
It was the first \"grand 16-foot\" built in Lyon after the French Revolution.\nThe essential tourist and cultural pass to make the most of your stay in Lyon.\nCruises, guided tours, excursions, etc.", "score": 25.65453875696252, "rank": 47}, {"document_id": "doc-::chunk-1", "d_text": "The best and most economical way to get there is to take a tram connection from Part-Dieu called Rhone Express. The Rhone Express leaves every 30 minutes and it is a 45-minute trip to the airport at a cost of 19 euros each way. A taxi, depending on the time of day, would cost between 80 and 120 euros. Uber is available and would be slightly cheaper.\nThe Presqu'ile or the peninsula is the city center and a great place to start your tour. The peninsula is a narrow strip of land between the Rhone and Saone Rivers, which run through Lyon.\nWhen on the Presqu'ile, you will gravitate towards Place Bellecour. It is the main square in Lyon (ground zero in GPS) and all festivals, events, activities and protests happen here. You will know Place Bellecour by the statue of Louis XIV in the center. Place Bellecour is also the starting point of the shopping district which is pedestrian-only up to the Les Terreaux district. This is a wonderful place to walk, shop, have some coffee or a meal. The peninsula is connected by bridges or walk paths over the two rivers. There is a metro-tram system. There is a boat taxi you can take on the west bank of the River Saone that will go to the Confluence district.\nFor more info, check out Getting Around Lyon.\nBefore getting to Lyon or before you start your tour, it is a good idea to get a quick understanding of its history.\nKnow that Lyon was established in 43 BC, shortly after Gaul was conquered by the Romans. The Romans started incursions into Gaul in 200 BC, annexing the South of France in 125 BC. Julius Caesar completed the conquest of Gaul in 52 BC.\nLyon started as a military settlement. 
In those days, land was granted to military veterans in exchange for loyalty to the republic. Known as \"Lugdunum\" then, Lyon quickly became the capital of Gaul for the Roman Republic.\nThe location was advantageous to the Romans. It was close enough to Italy and easily accessible without crossing the Alps. It was central enough to Germany, which was their next target of conquest. With a system of roads, two large rivers for transport and two hills for security, Lyon became the primary trading partner with the Romans and later, Italy, for centuries even after the fall of the Roman Empire.", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "In Lyon, the Confluence is the meeting point downriver between the Saône and the Rhone Rivers. The Lyonnais seem to dwell on the Confluence. Actually, it was a very industrial area and over the past 15 years or so, the city's Chamber of Commerce converted this area into a shopping center and a museum area.\nI walked all the way to the end of the point that marks the Confluence (see photo above) and ceremoniously dipped my hand in the water to become \"a part of Lyon,\" a city I have come to love very much over the past month.\nHere is an aerial photo of the Confluence.\nAnd, a close-up of the point.\nThe city erected a sculpture near the Confluence with its motto, \"Only Lyon\" and its signature lion mascot.\nBy the way, the name of Lyon has nothing to do with lions. It is a shortened form of Lugdunum, one of the most important cities of the Roman Empire outside of Rome itself. Nevertheless, the area has been inhabited since prehistoric times. For background on its ancient history, see my blog post on the Gallo-Roman Museum (coming soon).\nThis very odd-shaped building is Le Musee des Confluences. 
It is built at the tip of land where the Saône (on the left, the slower river) and the Rhône (on the right) meet: thus, the confluence.\nThe museum houses very creative exhibits of natural history, evolution, the solar system, anthropology--and even an Antarctica exhibit, which I liked the best.\nHere are some of the other exhibits.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-3", "d_text": "8 Place de Fourvière, City of Lyon, Rhône-Alpes, France, Phone: +33 4 78 25 13 01\n8.Grand Parc de Miribel Jonage\n© Courtesy of PackShot - Fotolia.com\nThe 14th most visited site in France, Parc Miribel Jonage is not only a beautiful getaway from urban life, it is also responsible for providing Lyon with drinking water. Visitors will discover that the Rhone River connects the various ecological components of the park with each other as well as provides fishing and aquatic activities for visitors. With vast swaths of nature interspersed with horse riding centers, day camps, bicycle trails, beaches, and restaurants, this park is the perfect destination to get away from it all. Yet to do so, visitors only have to travel 15 minutes northeast of Lyon's urban core.\nChemin de la Bletta, Vaulx en Velin (Lyon), Rhône-Alpes, France, Phone: +33 (0) 4 78 80 30 67\n9.Institut & Musee Lumiere\n© Courtesy of MangAllyPop@ER - Fotolia.com\nIt was in the heart of Lyon's Monplaisir neighborhood at the Villa Lumière that film began. In 1894, the first film tests on the now-famous Cinematograph took place in the Lumière brothers' villa. Like a firestorm, their invention gave rise to film screenings of the early 20th century and prompted the brothers to further perfect the ability to record film. 
At their villa, their original animated images are housed side by side with some of the brothers' other inventions like the Photorama (for 360-degree panoramic pictures), the stereoscopic projector (for 3D films), and their surprisingly articulated \"pincer hand,\" designed by Louis Lumière to aid amputees from the First World War.\n25 Rue du Premier Film, 69008 Lyon, France, Phone: +33 (0) 4 78 78 18 95\n© Courtesy of cdrcom - Fotolia.com\nThe Army Museum is located at the heart of the Hôtel National des Invalides, where treasures from antiquity to World War II are housed. Created in 1905, this military museum has the third biggest collection of old weapons and armor in the world.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-2", "d_text": "In this context, art was integrated into the courtly or aristocratic way of life, as part of a culture of spectacle, which functioned to distinguish the nobles who frequented the court from other social classes and to legitimate the ruler’s power in the eyes of the world (see for example, Elias, 1983; Adamson, 1999; Blanning, 2002). The consolidation of power in the hands of a fairly small number of European monarchs meant that their need for ideological justification was all the greater and so too were the resources they had at their disposal for the purpose. Exemplary in this respect is the French king Louis XIV (ruled 1643–1715), who harnessed the arts to the service of his own autocratic rule in the most conspicuous manner imaginable. From 1661 onwards, he employed the architects Louis Le Vau (1612/13–1670) and Jules Hardouin-Mansart (1648–1708), the painter Charles Le Brun (1619–90) and the landscape gardener André Le Nôtre (1613–1700), among many others, to create the vast and lavish palace of Versailles, not far from Paris. 
Every aspect of its design glorified the king, not least by celebrating the military exploits that made France the dominant power in Europe during his reign.\nBürger’s Functions of Art: Bourgeois Art\nBy 1800, however, the predominant category was what Bürger calls ‘bourgeois art’. His use of this term reflects his reliance on a broadly Marxist conceptual framework, which views artistic developments as being driven ultimately by social and economic change (Bürger, 1984, p. 47; Hemingway and Vaughan, 1998). Such art is bourgeois in so far as it owed its existence to the growing importance of trade and industry in Europe since the late medieval period, which gave rise to an increasingly large and influential wealthy middle class. Exemplary in this respect is seventeenth-century Dutch painting, the distinctive features and sheer profusion of which were both made possible by a large population of relatively affluent city-dwellers. In other countries, the commercialization of society and the urban development that went with it tended to take place more slowly.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-1", "d_text": "In the 12th century Montpellier was the second most important city in France, in both economic and cultural terms. Founded in the 13th century, the university also contributed greatly to development: Montpellier still boasts an internationally acclaimed medical faculty and the oldest botanical garden in France.\nThe 17th and 18th centuries were the height of Montpellier’s wealth, when rich merchants built the most luxurious ‘hôtels particuliers’. The gables of these town houses are impressive in themselves, but behind the façades you can find even more beautiful courtyards with spiral staircases and ornate railings. Most can only be visited during a guided tour organised by the Office du Tourisme. But one of the loveliest examples is open to the public: Hôtel de Varennes. This building houses 2 museums. 
The 18th-century façade of the Musée du Vieux Montpellier hides a much older interior with Gothic arches. You will find more beautiful town houses around Place de la Canourgue, a 17th-century square with a marble fountain and a view of the cathedral. A few streets down, a home owner uncovered one of the oldest Jewish ritual baths in Europe. Looking for the source of a leak, the owner stumbled across a 13th-century ‘mikveh’, a stone bath used in Jewish cleansing rituals.", "score": 24.296145996203016, "rank": 52}, {"document_id": "doc-::chunk-1", "d_text": "Lyon is one of the most popular clubs in France with a vast fan base on par with Paris Saint-Germain and Olympique de Marseille. The club achieved moderate success during the 1960s and 1970s led by the likes of Bernard Lacombe and Jean Djorkaeff, while the golden era of the French side came at the start of the new millennium when Lyon began to achieve greater success both in France and on the international level. It was not until 2002 that Lyon won their first ever Ligue 1 title which sparked an ongoing national record-breaking streak of seven successive titles. During that time Lyon were regular participants of the UEFA Champions League. The French club reached quarter-finals on two occasions, while they even played in the semi-finals in the 2009-10 season.", "score": 23.976320471845, "rank": 53}, {"document_id": "doc-::chunk-5", "d_text": "Before you leave the Basilica, you should not miss going around to the esplanade. There you will see the most fantastic view of Lyon. On a clear day, you can see as far as Mont Blanc.\nHistorically, European cuisine had been unremarkable until the arrival of the French influence in the mid-17th century. French cooking in itself had its beginnings in the Middle Ages. The reign of Catherine de Medici, who was married to King Henry II of France in the 16th century, brought about a culinary revolution. 
She brought with her cooks from Tuscany and introduced different foods to France. Sugar, chocolate, and butter all became part of the cuisine.\nFrench cuisine gained popularity in the 17th century. At this time, France dominated Europe, militarily, economically, culturally, and politically. Louis XIV and Le Chateau de Versailles symbolized French glory. French chefs, who worked in French courts and palaces, presented impressive and extravagant dishes. The art of French cuisine spread rapidly throughout Europe.\nThe nobility and aristocracy hired French chefs in England, Spain, Austria, and Russia. In the 18th century, French cookbooks emerged and were translated into European languages.\nBy the 19th century, the \"bouchon\" came into existence at about the time of the collapse of the silk industry in Lyon. Women lost their jobs working for rich families and started working for themselves. The typical woman, at that time, came from the countryside and had young children. They moved to the city to find jobs in the bouchons. They were called Les Mères or the mothers.\nSome of these women were so good that they established their own restaurants. La Mère Brazier was the first woman to be given three Michelin stars for each of her two restaurants. Unheard of at the time!\nYou will see many \"mom and pop\" restaurants all around Lyon. Many are family operations going back decades. Today, there are also many run by young chefs who want to make a name for themselves. You will not find many chain restaurants or franchise operations. The French have a particular disdain for those food operations.\nBy the way, \"bouchon\" means cork or plug (i.e., a way to earn a little money).", "score": 23.030255035772623, "rank": 54}, {"document_id": "doc-::chunk-1", "d_text": "The view is fantastic - the church grounds are approximately 928 feet above sea level, while the majority of Lyon is at 550 feet above sea level. 
Gives you some idea of the height of the ridge eh?\nImages one & two were taken in front of the basilica, as we had to queue up there waiting to be allowed inside. Image three is the basilica as it can be seen from the city below (image three is the property of Loïc Ventre via Flickr).\nBecause the basilica sits high up on a ridge, it is one of the most visible landmarks in the city, and one of the symbols of the city of Lyon. It gives Lyon its status as a “Marian city”. About two million tourists are welcomed each year in the basilica. The basilica complex includes not only the building, the Saint-Thomas chapel and the statue, but also the panoramic esplanade, the Rosary garden and the Archbishopric of Lyon.\nNOTE: Click here to see a Google Images set of pictures.\nThe Lyon Fresco (aka \"La Fresque des Lyonnais\")\nAt first glance you see what appears to be a very busy building, with people walking by or standing on balconies… but take a closer look and you’ll see that this is actually one giant fresco! Painted with the trompe-l’oeil – or “trick of the eye” – technique, this giant fresco celebrates over thirty local figures who have made their mark over 2,000 years of Lyonnais history, from the Roman emperor Claudius to the renowned chef Paul Bocuse.\nClick here to view a set of Google Images of this building located at 2 Rue de la Martinière very near the Saône River, 1/2 block off the Quai Saint-Vincent.\nClick here to go to the \"This is Lyon\" website and a more complete description of the Lyon Fresco.\nOpéra Nouvel (Nouvel Opera House)\nThe Opéra Nouvel (Nouvel Opera House) in Lyon, France is the home of the Opéra National de Lyon. The original opera house was redesigned by the distinguished French architect Jean Nouvel between 1985 and 1993 in association with the agency of scenography dUCKS scéno and the acoustician Peutz. 
Serge Dorny was appointed general director in 2003.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "Cities in FRANCE\nPopular destinations FRANCE\nFrance is the world's leading holiday destination in terms of visitor numbers, followed by the United States and Spain.\nTourist activities are very unevenly distributed across French territory. Half of the jobs are concentrated in three regions: Ile-de-France, Rhône-Alpes and Provence-Alpes-Côte d'Azur. The most visited attractions are Disneyland Paris (12 million visitors), the Eiffel Tower (6 million), the Louvre (6 million) and the Center Pompidou (5 million).\nphoto: Andrei Dan Suciu, Creative Commons Attribution 3.0 Unported no changes made\nGovernment and business are trying to redistribute the tourist flow over regions in a more balanced way and spread it over the whole year. For example, \"green\" tourism (including camping on a farm) and new forms of tourism such as city tourism and thematic travel will be more developed.\nThroughout France you can admire the remains of the country's rich history, which dates back to prehistoric times. Around 15,000 BC, the earliest inhabitants of the country lived in the south of France. They lived in caves and painted wall paintings at Lascaux that became world famous. In Brittany, around 1500 BC, monuments consisting of gigantic stones called dolmens or menhirs were erected.\nphoto: Arnradigue, CC Attribution-Share Alike 3.0 Unported, no changes made\nIn 51 BC, under the leadership of Julius Caesar, the Romans conquered what is now France, then called Gaul, and for centuries it was a Roman province. Especially in the southeast of France there are still amphitheaters in cities such as Arles, Orange and Nîmes.\nRomanesque architecture emerged in the 12th century. Monasteries and churches were built on the model of Roman basilicas. From this developed Gothic architecture, which would spread all over Europe from France. 
There are about 60 Gothic cathedrals in France, richly decorated with sculptures and stained glass windows. The most famous cathedrals are in Amiens, Reims, Paris (Notre Dame), Chartres and Beauvais.\nAround 1500, French art came under the influence of the Italian Renaissance. Classical antiquity became a source of inspiration. Many castles were built along the Loire during this period.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-2", "d_text": "The king’s room was done in a neo-renaissance style.\nIn 1944 it housed the command post of the French Forces of the Rhone in WWII. As an elevated place of the Resistance, nearly fifty paratroopers settled there in cooperation with the French, English and American Jedburgh teams. This is where the liberation of Lyon was organized.\nIt is of course classified as a Historic Monument. And you can rent it out for your wedding or another event.\nCourzieu – Église Saint Didier\nCourzieu is crossed by the ancient Gallo-Roman way of Aquitaine, which was opened in 20 BC by Agrippa and led from Lugdunum (Lyon) to Bordeaux. It is on this road, according to tradition, that the body of Saint Bonnet, the Bishop of Clermont, was transported in the 8th century from Lyon where he had died.\nThe church of Saint Didier was built in 1896 on the site of the old castle of Courzieu. The bell tower was built in 1902-1903. It has a carillon of eight bells of which the oldest dates from 1726. It is signed Cochois and Cordelet and is itself declared a Historic Monument. The windows were made by Paulin Campagna.\nOne of the biggest tourist attractions in the area is the Parc du Courzieu, which is a wildlife park dedicated to European mammals. It was created with the basic idea of making a unique tourist attraction close to nature, with the educational intent of informing in order to better protect. The park opened in 1980. 
It has been expanded ever since, most recently in 2018 with the creation of the path of marmots.\nChâteau de Lafay in Larajasse\nBuilt in the fourteenth century by the Arod family. Characteristic of fortified castles of the feudal period, the facade is flanked by 2 round towers. It has been reworked several times over the centuries. In the 19th century, a facade and a tower were added, connecting the old building to the chapel.\nThis castle was for a long time the seat of an important jurisdiction for the whole of the commune of Larajasse.", "score": 22.87988481440692, "rank": 58}, {"document_id": "doc-::chunk-5", "d_text": "For example, in 1567, when Maixent Poitevin was mayor, King Henry III came for a visit, and, although some townspeople grumbled about the licentious behaviour of his entourage, Henry smoothed things over with a warm speech acknowledging their allegiance and thanking them for it.2\nIn this era, the mayor of Poitiers was preceded by sergeants wherever he went, consulted deliberative bodies, carried out their decisions, \"heard civil and criminal suits in first instance\", tried to ensure that the food supply would be adequate, and visited markets.2\nIn the 16th century, Poitiers impressed visitors because of its large size and important features, including \"royal courts, university, prolific printing shops, wealthy religious institutions, cathedral, numerous parishes, markets, impressive domestic architecture, extensive fortifications, and castle.\"3\nThe town saw less activity during the Renaissance. Few changes were made in the urban landscape, except for laying the way for the rue de la Tranchée. Bridges were built where the inhabitants had used fords (gués). A few hôtels particuliers were built at that time, such as the hôtels Jean Baucé, Fumé and Berthelot. 
Poets Joachim du Bellay and Pierre Ronsard met at the University of Poitiers, before leaving for Paris.\nDuring the 17th century, many people emigrated from Poitiers and the Poitou to the French settlements in the new world and thus many Acadians or Cajuns living in North America today can trace ancestry back to this region.\nDuring the 18th century, the town's activity mainly depended on its administrative functions as a regional centre: Poitiers served as the seat for the regional administration of royal justice, the évêché, the monasteries and the intendance of the Généralité du Poitou.\nThe Vicomte de Blossac, intendant of Poitou from 1750 to 1784, had a French garden landscaped in Poitiers. He also had Aliénor d'Aquitaine's ancient wall razed and modern boulevards were built in its place.\nDuring the 19th century, many army bases were built in Poitiers because of its central and strategic location. Poitiers became a garrison town, despite its distance from France's borders.", "score": 22.87988481440692, "rank": 58}, {"document_id": "doc-::chunk-1", "d_text": "Lyon is one of the most popular clubs in France with a vast fan base on par with Paris Saint-Germain and Olympique de Marseille.\nThe club achieved moderate success during the 1960s and 1970s led by the likes of Bernard Lacombe and Jean Djorkaeff, while the golden era of the French side came at the start of the new millennium when Lyon began to achieve greater success both in France and on the international level.\nIt was not until 2002 that Lyon won their first ever Ligue 1 title which sparked an ongoing national record-breaking streak of seven successive titles. During that time Lyon were regular participants of the UEFA Champions League. 
The French club reached the quarter-finals on two occasions, while they even played in the semi-finals in the 2009-10 season.", "score": 21.73561361410812, "rank": 59}, {"document_id": "doc-::chunk-2", "d_text": "Housed in the \"Palais Saint Pierre\", a former 17th-century convent, it displays a major collection of paintings by artists (including Tintoretto; Paolo Veronese; Nicolas Poussin; Rubens; Rembrandt; Zurbaran; Canaletto; Delacroix; Monet; Gauguin; Van Gogh; Cézanne; Matisse; Picasso; Francis Bacon...); collections of sculptures, drawings and prints, decorative arts, Roman and Greek antiquities; the second largest collection of Egyptian antiquities in France after that of the Louvre; and a medal cabinet of 50,000 medals and coins.\n- The Gallo-Roman Museum, displaying many valuable objects and artworks found on the site of Roman Lyon (Lugdunum), such as the Circus Games Mosaic, the Coligny calendar and the Taurobolic Altar;\n- Musée des Confluences, a new museum of sciences and anthropology which opened its doors on 20 December 2014.\n- Musée des Tissus et des Arts décoratifs, a decorative arts and textile museum. It holds one of the world's largest textile collections with 2,500,000 works;\n- Musée des Automates, a museum of automated puppets in Vieux Lyon, open since 1991.\nParks and gardens\n- Parc de la Tête d'or (literally, Golden Head Park), in central Lyon, is the largest urban park in France at 117 hectares. Located in the 6th arrondissement, it features a large lake on which boating takes place during the summer months.\n- Jardin botanique de Lyon (8 hectares), included in the Parc de la Tête d'Or, is a municipal botanical garden and is open weekdays without charge. 
The garden was established in 1857 as a successor to earlier botanical gardens dating to 1796, and now describes itself as France's largest municipal botanical garden.\n- Parc des hauteurs, in Fourvière;\nMore information on Lyon and France", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-1", "d_text": "The new building crumbled rapidly and was rebuilt at great cost. In the end, a quarter of the project had been completed when the convent was declared national property in 1789. In 1791, after having been chased out by the Revolution and the arrival of the guillotine in Lyon, the nuns abandoned the site.\nYesterday… in the 19th C… Military Subsistances\nIn 1807, the army took over the site and used it for storage and as a settlement for the many soldiers from surrounding forts. From 1840 to 1991, the site served above all for making flour and bread, as well as for packaging coffee, tobacco and wine. In 1840, the army built the great square called Manutention Ste-Marie des Chaînes.\nAs of 1870, the 1,300 m2 of central courtyard were protected by a metal skylight inspired by the Eiffel School. The first wheat mill was built in 1853 on the current site of the furnace. A second mill was built in 1870 (now the administrative building), and a final mill in 1890 (behind the reception). A bakery with six large coal ovens was built in one of the alleys of the Manutention, thus completing the chain of production. Thereafter, the site produced bread and packaged food rations semi-industrially, supplying the neighbouring fortifications in peacetime and the front during the great wars.\nThe site was re-baptized “Les Subsistances militaires” in 1941, and the army occupied it until 1991.
The state donated the buildings to the city of Lyon in 1995.\nToday… A hub for artistic creation\n– 1998 : the Ville de Lyon dedicates the site to artistic and cultural activities.\n– 1998 : Paul Grémeret first conceives the project; he dies in 2000, still at work on it.\n– September 2000 : Klaus Hersche succeeds Paul Grémeret.\n– January 2001 : inauguration of the site renovated by architect Denis Eyraud.\n– November 2003 : Guy Walter and Cathy Bouvard take over the direction of Les Subsistances.\n– September 2005 : 2nd phase of renovation initiated by Gérard Collomb, Senator Mayor of Lyon. Works directed by architects Michel Lassagne and William Vassal.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-7", "d_text": "After the Germans withdrew, Parisian workers rebelled against the French government and established the Commune of Paris, which was bloodily suppressed.\nUnder the Third Republic\nWith the establishment of the Third French Republic and relative stability, Paris became the great industrial and transportation center it is today. Two epochal events in modern cultural history that took place in Paris were the first exhibition of impressionist painting (1874) and the premiere of Stravinsky's Sacre du Printemps (1913). In World War I the Germans failed to reach Paris. After 1919 the outermost city fortifications were replaced by housing developments, including the Cite Universitaire, which houses thousands of students. During the 1920s, Paris was home to many disillusioned artists and writers from the United States and elsewhere. German troops occupied Paris during World War II from June 14, 1940, to Aug. 25, 1944. The city was not seriously damaged by the war.\nParis was the headquarters of NATO from 1950 to 1967; it is the headquarters of UNESCO. A program of cleaning the city's major buildings and monuments was completed in the 1960s.
The city was the scene in May, 1968, of serious disorders, beginning with a student strike, that nearly toppled the Fifth Republic. In 1971, Les Halles, Paris's famous central market, called by Zola the \"belly\" of Paris, was dismantled. Construction began immediately on Chatelet Les-Halles, Paris's new metro hub, which was completed in 1977. The Forum des Halles, a partially underground, multistory commercial and shopping center, opened in 1979. Other developments include the Georges Pompidou National Center for Art and Culture, built in 1977, which includes the National Museum of Modern Art. The Louvre underwent extensive renovation, and EuroDisney, a multibillion dollar theme and amusement park, opened in the Parisian suburbs in 1992. A number of major projects in the city were initiated by President Francois Mitterrand (1981-95); they include the new Bibliotheque Nationale, the glass pyramid at the Louvre, Grande Arche de la Defense, Arab Institute, Bastille Opera, and Cite de la Musique.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "It is a natural harbor formed into a curve of about 58 km of coastline, mostly facing west with limestone cliffs and deep fjords called calanques flanking the harbor area.\nMarseille is, in population, the third largest city of France, with about 1,700,000 people in its metropolitan area (Paris and Lyon are 1st and 2nd).
In surface area, it is the largest metropolis in France.\nKnown as a port city and as a place of immigration and mixed populations, Marseille was, for centuries and centuries, the largest Mediterranean port and the most important center of trade, beating out the Genoese and the Venetians in its size and economic importance as a port.\nMarseille has the honor of being the OLDEST CITY in France; its founding dates back to the Greeks who went there from the Greek city of Phocée in Asia Minor (now Turkey) to create a new commercial trading post on the western side of the Mediterranean.\nThe city of Massilia was built in about 600 BC, well before the Romans settled the region. It is documented by Roman travelers and Greek merchants that it was, that long ago, the most important trading settlement of the Mediterranean sea.\nEven back in the pre-Christian era, Massilia was known as a city of mixed peoples, with all the ethnic groups and religions of the Mediterranean basin living and trading there.\nNote: If you want to see what is left of the ancient Greek city of Massalia, go to the shopping mall called the Centre-Bourse. Downstairs is the entrance to the archeology park that has the vestiges of Greek Marseille, uncovered in the 1970’s and now preserved in a lovely small park area.\nThe Romans, who had protected Marseille from invaders in exchange for a very lucrative commercial arrangement, decided, by the time of Julius Caesar and the Empire, to annex the city because it refused to take sides in the wars between the pretenders to the Empire’s throne. So, in about 50 BC, Marseille becomes part of Roman Gaul and is subjected to its laws.
The only concession made, one that lasts for centuries and centuries, is that Marseille can create its own tax structure and rule itself more or less independently (for a price!!)", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-6", "d_text": "The Seventeenth and Eighteenth Centuries\nDuring the late 17th and the 18th cent. Paris acquired further glory as the scene of many of France's greatest cultural achievements: the plays of Moliere, Racine, and Corneille; the music of Lully, Rameau, and Gluck; the paintings of Watteau, Fragonard, and Boucher; and the salons where many of the philosophes of the Enlightenment gathered. At the same time, growing industries had resulted in the creation of new classes — the bourgeoisie and proletariat — concentrated in such suburbs (faubourgs) as Saint-Antoine and Saint-Denis; in the opening events of the French Revolution, city mobs stormed the Bastille (July, 1789) and hauled the royal family from Versailles to Paris (Oct., 1789). Throughout the turbulent period of the Revolution the city played a central role.\nNapoleon to the Commune\nNapoleon (emperor, 1804-15) began a large construction program (including the building of the Arc de Triomphe, the Vendome Column, and the arcaded Rue de Rivoli) and enriched the city's museums with artworks removed from conquered cities. In the course of his downfall Paris was occupied twice by enemy armies (1814, 1815). In the first half of the 19th cent. Paris grew rapidly. In 1801 it had 547,000 people; in 1817 - 714,000; in 1841 - 935,000; and in 1861 - 1,696,000. The revolutions of July, 1830, and Feb., 1848, both essentially Parisian events, had repercussions throughout Europe.\nCulturally, the city was at various times the home or host of most of the great European figures of the age. Balzac, Hugo, Chopin, Berlioz, Liszt, Wagner, Delacroix, Ingres, and Daumier were a few of the outstanding personalities. 
The grand outline of modern Paris was the work of Baron Georges Haussmann, who was appointed prefect by Napoleon III. The great avenues, boulevards, and parks are his work. During the Franco-Prussian War (1870-71), Paris was besieged for four months by the Germans and then surrendered.", "score": 21.43673747588885, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "John, a medieval church with architectural elements of the 13th, 14th and 15th centuries, also the principal religious structure in the city and the seat of the Archbishop of Lyon;\n- Basilica of St-Martin-d'Ainay, one of the rare surviving Romanesque basilica-style churches in Lyon;\n- Église Saint-Bonaventure, 14th- and 15th-century Gothic church;\n- Vieux Lyon (English: Old Lyon) area, Mediaeval and Renaissance quarter of the town, with shops, dining and cobbled streets;\n- The many Renaissance hôtels particuliers of the Old Lyon quarter, such as the Hôtel de Bullioud, were also built by Philibert Delorme.\n- City Hall on the Place des Terreaux, built by architects Jules Hardouin-Mansart and Robert de Cotte;\n- Musée des beaux-arts de Lyon, fine arts museum housed in a former convent of the 17th century, including the Baroque chapelle Saint-Pierre;\n- Hôtel-Dieu de Lyon (17th and 18th century), historical hospital with a baroque chapel;\n- Place Bellecour, one of the largest town squares in Europe;\n- Chapelle de la Trinité (1622), the first Baroque chapel built in Lyon, and part of the former École de la Trinité, now Collège-lycée Ampère;\n- Église Saint-Polycarpe (1665–1670), Classical church;\n- Saint-Bruno des Chartreux (17th and 18th century), church, masterpiece of Baroque architecture;\n- Opéra Nouvel (1831), renovated in 1993 by Jean Nouvel;\n- Théâtre des Célestins (1877), designed by Gaspard André;\n- Sainte Marie de La Tourette monastery (1960) designed by Le Corbusier;\n- Musée des beaux-arts de Lyon (Fine Arts Museum), main museum of the city and one of the largest art galleries in 
France.", "score": 20.327251046010716, "rank": 65}, {"document_id": "doc-::chunk-2", "d_text": "Many are afraid that this will cause the neighborhood to lose its authenticity.\nSo get to Le Marais before it's too late. Le Marais is a unique amalgamation of a gay area and a Jewish quarter. In this neighborhood you will find nice, hip shops and delicious (often kosher) restaurants, bakeries and delicatessen shops. You can also eat the tastiest falafel in all of France at L'as du Fallafel in the Rue des Rosiers (the Jewish street of Paris).\nphoto: Chadi saad, CC Attribution-Share Alike 4.0 International no changes made\nLyon is a city with countless historic buildings from antiquity up to modern times. Roman ruins are visible on the hill near the Fourvière Basilica, with the ancient theatre of Fourvière and the Amphitheatre of the Three Gauls.\nThe most famous historic monuments from the Middle Ages and the Renaissance include: the Cathedral of St. Jean, a medieval church with architectural elements from the 13th, 14th and 15th centuries, also the most important religious building in the city and the seat of the Archbishop of Lyon; the Basilica of St-Martin-d'Ainay, one of the rare surviving Romanesque basilica-style churches; and the whole of Vieux Lyon, which has many buildings with medieval and Renaissance features.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "Paris’s population was around 200,000 when the Black Death arrived in A.D. 1348, killing as many as 800 people a day.\nParis lost its position as seat of the French realm during occupation by English-allied forces during the Hundred Years’ War, but regained its title when Charles VII of France reclaimed the city from English rule in A.D. 1436.\nParisians rose in rebellion and the royal family fled the city (1648). King Louis XIV then moved the royal court permanently to Versailles in 1682.
A century later, Paris was the center stage for the French Revolution, with the Storming of the Bastille on 14 July 1789 and the overthrow of the monarchy in September 1792.\nThe greatest development in Paris’s history began with the Industrial Revolution’s creation of a network of railways that brought an unprecedented flow of migrants to the capital from the 1840s.\nThe city’s largest transformation began in 1852, when Paris’s narrow, winding medieval streets were leveled to create the network of wide avenues that still makes up much of modern Paris; the reason for this transformation was twofold, as not only did the creation of wide boulevards beautify and sanitize the capital, it also facilitated the effectiveness of troops and artillery against any further uprisings and barricades that Paris was so famous for.\nFrance’s late 19th-century Universal Expositions made Paris an increasingly important center of technology, trade and tourism.\nThe most famous was the 1889 Universal Exposition, to which Paris owes its “temporary” display of architectural engineering prowess, the Eiffel Tower, a structure that remained the world’s tallest building until 1930 and became the symbol of the city.\nDuring World War I, Paris was at the forefront of the war effort; afterwards the city became a gathering place of artists from around the world, from exiled Russian composer Stravinsky and Spanish painters Picasso and Dalí to American writer Hemingway.\nOn 14 June 1940, five weeks after the start of the Battle of France, Paris fell to German occupation forces, who remained there until the city was liberated in August 1944.\nLuckily, the German General von Choltitz did not carry out Adolf Hitler’s order to destroy all Parisian monuments before the German retreat; Hitler had visited the city in 1940.\nIn the post-war era, Paris experienced its largest development since the end of the Belle Époque in 1914.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-2", "d_text": "That is
how Marseille enters into French history – first as a fief of the Consuls of Rome, then of the Emperor, then of the Counts of Provence and Marseille, descendants of the Franks who came and conquered after the fall of Rome.\nAll this time, it is still the most important and powerful port on the Mediterranean sea!\nSkipping centuries, we come to the Middle Ages: Marseille is the center of trade from East and West. It became a major stop for the Crusaders going to or coming back from Jerusalem.\nThen, in the 1300’s, Marseille is the site of one of the worst catastrophes in the history of Europe; it is from ships arriving in its port, in 1347, that the Bubonic Plague enters western Europe!\nComing from Asia Minor, via Venice, the ships carried the plague, and from there it spread all over western Europe, so that within five years the Plague had reached as far as England and northern Germany. It is estimated that up to 50% of the population of Western Europe died during that first wave of the epidemic.\nMarseille was never a slave trade port, unlike other ports in France such as Bordeaux.\nIn the late Middle Ages, the Renaissance period, King Francois I decided to protect this very important port from his enemies and had two forts built, one on the edge of the ancient port that leads to the interior, and the other on the island of If, part of the archipelago of Frioul, a group of small outcroppings about a mile or two from the entrance to the port of Marseille. Both of those forts are still standing: one was finished in 1526 and the other, near the church of Notre Dame de la Garde, was finished in 1536.\nThe Chateau Fort of If is not only still standing – it was the location for the story of the Man in the Iron Mask by Alexandre Dumas, and the island and the fortress are visitable!
This is where the Count of Monte Cristo and the Man with the Iron Mask are set.\nAnother fortified structure was built under Louis XIV by Vauban, the genius engineer who created almost all the fortifications that still exist in France today.\nBecause of the colonies, Marseille continued to be a very important port – trading in sugar, coffee, rum and whatever products were produced and shipped back to France.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-1", "d_text": "Later, the Louvre arose in Paris, which long served as the palace of the French kings.\nThe 17th century was the heyday of French art. The style of that time is called classicism. In Versailles, famous architects, painters and sculptors worked on the palace, which attracts hundreds of thousands of visitors every year. King Louis XIV had a hospital built in Paris for wounded soldiers, the Hôtel des Invalides, where Napoleon Bonaparte is buried. It is considered the masterpiece of the classicist period.\nphoto: Daniel Vorndran / DXR, CC Attribution-Share Alike 3.0 Unported no changes made\nIn the second half of the 18th century, the Panthéon was built in Paris. At first it served as a church; later it became a resting place for famous French figures such as Victor Hugo, Voltaire, Jean-Jacques Rousseau and Emile Zola.\nWhen the World's Fair was held in Paris in 1889, the engineer Eiffel designed, especially for the occasion, a 300-meter-high iron tower, which later became known as the Eiffel Tower. This building still dominates the image of Paris and attracts millions of visitors every year.\nIn the 20th century, French architecture developed into one of the most prominent in the world. For example, the buildings of Le Corbusier can be found in many countries. An example of his style is the pilgrimage church Notre-Dame-du-Haut in Ronchamp from 1955.
Many buildings typical of this century have been built in Paris, such as the Centre Pompidou, which houses all kinds of cultural institutions. Below is a brief description of some interesting French cities.\nphoto: Juliette Jourdan, CC Attribution-Share Alike 4.0 International no changes made\nParis is overflowing with sights. There are of course well-known blockbusters such as the Eiffel Tower, the Champs Élysées, Notre Dame and the Louvre. But there are also a number of lesser-known, yet special places to visit. Le Marais is highly recommended. Le Marais has recently become one of the hippest districts in Paris. One could argue that it is actually a shame that it is now referred to in all books as the hippest district in Paris, as recently (especially in the summer months) Le Marais has been overrun by tourists.", "score": 19.944208417965356, "rank": 69}, {"document_id": "doc-::chunk-1", "d_text": "This square is located in the Ainay district of Lyon and measures three hundred and twelve meters by two hundred meters, which gives it an area of over sixty-two thousand square meters. It is completely free of any trees or shrubbery and is considered to be one of the clearest squares in all of Europe, as well as being the largest pedestrian square. Situated in the middle of the Place Bellecour is an equestrian statue of King Louis XIV. It was designed by Francois-Frederic Lemot, and the statue features figures of the Saône and the Rhône at its feet. The original statue that stood on its site was erected in 1713, but was torn down to make cannonballs during the French Revolution. The current statue was erected in 1825. Also located on this square are the office of tourism and an art gallery.\nAnother tourist attraction in the city is the Musée des beaux-arts de Lyon. This is the city's museum dedicated to fine arts. It is situated near Place des Terreaux and is located in a Benedictine convent that was erected in the seventeenth century.
This was one of the historical landmarks in the city that went through an extensive restoration during the 1980s and 1990s. Housed in this museum are various antiquities from Egypt, and a large collection of modern art. It also contains a collection of art that ranges from the fourteenth to the eighteenth century. Artists represented by the paintings in the museum include Nicolas Poussin, Philippe de Champaigne, François Boucher, Jean-Baptiste Greuze, El Greco, Antonio de Pereda, Francis Bacon and Nicolas de Stael.\nA historically significant site in the city of Lyon is the Hotel-Dieu de Lyon. This building was first constructed in the twelfth century and was used as a gathering place for domestic and foreign members of the clergy. However, when Dr. Maitre Martin Conras entered the city, he converted it into a hospital in the fifteenth century. During the seventeenth century the hospital went through a series of expansion projects. This continued until the eighteenth century, when Soufflot replaced it with the building that we see today. Today, it still serves as an important hospital in Lyon, but it also houses an exhibit chronicling the history of medicine from the Middle Ages to modern times.", "score": 18.90404751587654, "rank": 70}, {"document_id": "doc-::chunk-8", "d_text": "King Louis Philippe suppressed the riot with tens of thousands of soldiers, although it continued until 1836.\nToday, China produces almost all of the world's silk. High-end silk design and fabrication are still in Italy and France. You will find many silk shops in Lyon, some fabricators going back decades. Hermes, the high-end French designer, has a facility in Lyon.\nI hope you enjoyed reading about Lyon. It brought back so many fond memories of our stay there. I am just so sorry it took me a long time to write it due to personal issues like moving and some IT crashes!\nIf you have been to Lyon, please share with me your impressions.
Did you love it as much as I did?\nI leave you with a photo of the Croix-Rousse hill.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-5", "d_text": "The Capetians firmly established Paris as the French capital. The city grew as the power of the French kings increased. In the 11th cent. the city spread to the right bank. During the next two centuries — the reign of Philip Augustus (1180-1223) is especially notable for the growth of Paris — streets were paved and the city walls enlarged; the first Louvre (a fortress) and several churches, including Notre-Dame, were constructed or begun; and the schools on the left bank were organized into the University of Paris. One of them, the Sorbonne, became a fountainhead of theological learning with Albertus Magnus and St. Thomas Aquinas among its scholars. The university community constituted an autonomous borough; another was formed on the right bank by merchants ruled by their own provost. In 1358, under the leadership of the merchant provost Etienne Marcel, Paris first assumed the role of an independent commune and rebelled against the dauphin (later Charles V). During the period of the Hundred Years War the city suffered civil strife, occupation by the English (1419-36), famine, and the Black Death.\nDuring the Renaissance\nThe Renaissance reached Paris in the 16th cent. during the reign of Francis I (1515-47). At this time the Louvre was transformed from a fortress to a Renaissance palace. In the Wars of Religion (1562-98), Parisian Catholics, who were in the great majority, took part in the massacre of St. Bartholomew's Day (1572), forced Henry III to leave the city on the Day of Barricades (1588), and accepted Henry IV only after his conversion (1593) to Catholicism. Cardinal Richelieu, Louis XIII's minister, established the French Academy and built the Palais Royal and the Luxembourg Palace. During the Fronde, Paris once again defied the royal authority. 
Louis XIV, distrustful of the Parisians, transferred (1682) his court to Versailles. Parisian industries profited from the lavishness of Versailles; the specialization in luxury goods dates from that time. J. H. Mansart under Louis XIV and Francois Mansart, J. G. Soufflot, and J. A. Gabriel under Louis XV created some of the most majestic prospects of modern Paris.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-9", "d_text": "Beginning in the 1260s the large metal reliquary shrines take the form of diminutive Rayonnant churches, complete with transepts, rose windows, and gabled facades (see Metalwork).\nAbout 1300 the decorative arts begin to assume a more independent role. In the Rhineland, German expressionism gave rise to works of a marked emotional character, ranging from the statuettes of the school of Bodensee, such as that of the youthful seated Saint John tenderly laying his head on the shoulder of Christ, to the harrowing evocation of the suffering Christ in the plague crosses of the Middle Rhine. Later in the century the German sculptors were responsible for a new type of the mourning Virgin Mary, seated and holding on her lap the dead body of Christ, the so-called Pietà. In the second quarter of the century, Parisian manuscript illumination was given a new direction by Jean Pucelle. In his Belleville Breviary (1325?, Bibliothèque Nationale, Paris), the lettering, the illustrations, and the leafy borders all contribute to the totally integrated effect of the decorated page, thereby establishing an enduring precedent for later illuminators. Of still greater significance for future developments is the new sense of space imparted to the interior scenes in his illustrations through the use of linear perspective. Pucelle had learned this technique from the contemporary painters of the Italian proto-Renaissance (see Illuminated Manuscripts).\nV. 
Late Gothic Period\nParis had been the leading artistic center of northern Europe since the 1230s. After the ravages of the Plague and the outbreak of the Hundred Years' War in the 1350s, however, Paris became only one among many artistic centers.\nAs a result of this diffusion of artistic currents, a new pictorial synthesis emerged, known as the International Gothic style, in which, as foreshadowed by Pucelle, Gothic elements were combined with the illusionistic art of the Italian painters. Beginning in Paris in the 1370s and continuing until about 1400 at the court of Jean de France, Duc de Berry, the manuscript illuminators of the International Gothic style progressively developed the spatial dimensions of their illustrations, until the picture became a veritable window opening on an actual world. This process led eventually to the realistic painting of Jan van Eyck and the northern Renaissance and away from the conceptual point of view of the Middle Ages.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-33", "d_text": "It was largely because it was already the political capital, with firms thus attracted to it, that Paris became an actively industrial city in the 19th century. Unlike other older French industrial areas, such as Lorraine and Nord-Pas-de-Calais, it was not near mineral resources. But it did have some natural assets of its own, notably the Seine River, which is still used for barge traffic moving principally between the capital and the downstream ports of Rouen and Le Havre. Traditional industries were devoted mainly to handicrafts and luxury goods, but, when the growth of railways and canals in the 19th century made the northern coalfields more accessible, heavier industries began to develop. These soon spread beyond the city into the new industrial suburbs. 
To the northwest, along the Seine’s loop from Suresnes to Gennevilliers, armaments factories, heavy engineering works, and chemical plants were created, and automobile and aircraft factories eventually were established in the Seine valley toward Rouen.\nMore recently, manufacturing has developed principally in the capital’s outer ring, particularly in strategic sites such as the area around the Roissy–Charles de Gaulle airport (northeast of Paris) or newer suburban towns. The nature of industry also has changed. Many traditional activities, such as metallurgy, food processing, and printing, progressively disappeared, while electronics, telecommunications, and other high-technology industries gained emphasis. These have become located preferentially in a broad arc to the southwest of Paris, stretching from Versailles southeast to Évry.\nFor much of the period between 1950 and 1980, the policy of successive French governments was to limit the industrial growth of the Paris region in favour of the provinces. The policy also was used to effect a better distribution of industry within the region, with the aim of favouring the development of new towns. The idea of restraining industry in Paris itself had lost currency by the end of the 20th century, however, as the central and inner areas of the capital already had been largely deindustrialized.", "score": 18.37085875486836, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "Lyon, France's second largest city and capital of the region we live in (Auvergne-Rhône-Alpes), is located at the junction of two rivers, the Saône and the Rhône. It is just over an hour away from Grenoble, and during a visit with friends we saw the highlights of the city; then, with the students the following weekend, we saw more.\nUp on the hill, Basilique de Notre Dame de Fourvière (Basilica of Notre-Dame de Fourvière).
The basilica was built in the 19th century and the interior is in the Byzantine style.\nClose-ups of the floor, ceiling, pillar\nOur visit actually began with a walk through the market, followed by a visit to another well-known edifice, the Cathédrale Saint-Jean, built in 1180.\nA more sober cathedral, but very bright inside. This is also where Henry IV and Marie de' Medici were married.\nYou are looking at a Roman theatre in Lugdunum, the most important center of Roman Gaul, founded in 43 BC. This Roman city in Lyon covered more than 300 hectares and for four centuries was the hub of city life. At the time, the theatre itself could hold almost 11,000 people. Today, it is used for outdoor events and can hold 4,500 spectators. Acoustically, it was built so that a person seated in the top row could hear a person on the stage below. I tested this with Otto and indeed I could understand what he was saying.\nIn the Saint-Jean district of Lyon, where we visited the Saint-Jean Cathedral, you will find \"traboules\". These are a series of passageways originally used by silk manufacturers to transport their products. They date back as far as the 4th century and are now private property for apartment dwellers. Fortunately these doors are open to visitors in the afternoon.\nSome of the interior courtyards are quite beautiful.\nAt some point we did have lunch, at a \"bouchon\", a traditional restaurant that has its origins in inns/restaurants for silk workers passing through Lyon in the 17th and 18th centuries. The food is very rich and hearty! A typical first course could be a soup, or here, a salad with a local cheese, St.
Guild, chambre syndicale, and inspector of the book trade.|\n|Institutions:||capital of Burgundy, with royal Intendance, military government, parlement high court of justice, Cour des aides and Chambre des comptes financial courts, tax généralité, trade court, and bishopric.|\nuniversity (of Law only), college of medicine, college, drawing school, learned academy, theater, concerts, newspaper, and reading cabinet.\nhigh literacy (for entire département): male 54%, female 26%.\n|Communications:||destination and overnight stop on diligence post road from Paris to Geneva. On alternate road from Paris to Lyon|\n|Economy:||commercial city. Wine, grain, wool, iron, silk and woolens, calicoes, mustard.|", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-41", "d_text": "In 1171 Louis VII gave the marchandise a charter confirming its “ancient right” to a monopoly of river trade.\nDuring the reign of Philip II (1179–1223), Paris was extensively improved. Streets were paved, the city wall was enlarged, and a number of new towns were enfranchised. In 1190, when Philip II went on a crusade for a year, he entrusted the city’s administration not to the provost but to the guild. In 1220 the crown ceded one of its own precious rights to the townsmen—the right to collect duty on incoming goods. The merchants were also made responsible for maintaining fair weights and measures. The King’s formal recognition of the University of Paris in 1200 was also a recognition of the natural division of Paris into three parts. On the Right Bank were the mercantile quarters, on the island was the cité, and the Left Bank contained the university and academic quarters. Numerous colleges were also founded, including the Sorbonne (about 1257).\nIn the 14th century the development of Paris was hindered not only by the Black Death (1348–49) but also by the Hundred Years’ War (1337–1453) and by internal disturbances resulting from it. 
The provost of the merchants in 1356 was Étienne Marcel, who wanted a Paris as rich and free as the independent cities of the Low Countries. He gave the House of Pillars to the municipal government, and he slew the Dauphin’s counselors in the palace throne room and took over the city. Marcel showed great executive skill and equally great political stupidity and allied himself with the revolting peasants (the Jacquerie), with the invading English, and with Charles the Bad, the ambitious king of Navarre. While going to open the city gates to the Navarrese in 1358, Marcel was slain by the citizens.\nIn 1382 a tax riot grew into a revolt called the “Maillotin uprising.” The rioters, armed with mauls (maillets), were ruthlessly put down, and the municipal function was suspended for the next 79 years. It was not until 1533, when Francis I ordered the teetering House of Pillars replaced by a new building, that a monarch manifested an encouraging interest in municipal government.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-3", "d_text": "The prosperity of these wine-growing and wine-trading houses took a great leap at the time of the English domination. After a critical period during the 15th–17th century, prosperity returned in the 18th century and has continued ever since, despite problems of weather and grape parasites, the most critical of which was the phylloxera infestation of 1869. The modern extent of the vineyards is about half its former maximum area. The government of France and the local growers regard control of quality and quantity of these wines as essential to the preservation of a major export market. Bordeaux has never been a major centre of industry in France; however, from the 1960s, industrial activities have expanded. 
In addition to the more traditional industries such as food processing, light engineering, and the manufacture of textiles, clothing, and chemicals, the production of aerospace equipment, car components, and electronics has also become important. However, employment in the city is dominated by the service sector, reflective of Bordeaux’s role as a commercial, business, and administrative centre. The city also has a number of universities and graduate schools and is a regional centre for culture and the arts.\nThe port area has been important since the 18th century, but commercial activity is now concentrated in five specialized outports. With the closure of the oil refineries located along the Gironde, port traffic has declined, although refined petroleum products are still imported. Bordeaux is well integrated into the national motorway network, is linked to Paris by high-speed trains (TGV), and possesses a large regional airport. Pop. (1999) 215,363; (2005 est.) 230,600.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-1", "d_text": "Often it is at this stage that the race is decided, but probably not so in 2013. http://www.letour.fr/le-tour/2013/us/stage-20.html\nAnnecy is located halfway between Genève and Chambéry, so it was strongly influenced by these two political centers for 1000 years. Starting as the capital of the county of Geneva, after the end of the counts of Geneva, it became integrated into the House of Savoy's possessions in 1401. In 1444, it was set up by the Princes of Savoy as the capital of a region covering the possessions of the Genevois, Faucigny and Beaufortain. With the advance of Calvinism in 1535, it became a center for the Counter-Reformation and the Bishop's See of Geneva was transferred here. During the French Revolution, the Savoy region was conquered by France and Annecy became attached to the département of Mont Blanc, whose capital was Chambéry. 
After the Bourbon Restoration in 1815, it was returned to the Kingdom of Sardinia (heir of the Duchy of Savoy). When Savoy was sold to France in 1860, it became the capital of the new département of Haute-Savoie. The Cathedral of Saint-Pierre, built in the 16th century, was the cathedral of François de Sales and is home to a number of artworks and baroque pieces from the 19th century.\nAix-les-Bains: http://en.wikipedia.org/wiki/Aix-les-Bains -- Lac du Bourget Aix was a spa town during the Roman Empire, even before it was renamed Aquae Gratianae to commemorate the Emperor Gratian, who was assassinated not far away (in Lyon, 383 AD). Numerous Roman remains survive. Its thermal sulfur springs have a temperature range from 109° to 113°F (43-45 °C) and are still much frequented. The town stretches along the eastern end of the beautiful Lac du Bourget, the largest natural lake in France.\nAix-les-Bains' architecture (e.g. the Art-Deco Thermes Nationaux) derives from its belle-époque, when high society from across Europe dropped in (by train) to relax and take the waters.", "score": 16.666517760972233, "rank": 79}, {"document_id": "doc-::chunk-4", "d_text": "Locally, the hill is called the \"hill that prays\" because of the Basilica of Notre Dame that sits on top of the hill. (The metallic tower that looks like the Eiffel Tower is a 19th century tower that is the highest point in Lyon. It is now a TV tower.)\nA Basilica is no ordinary church. In the world of Catholics, a Basilica is an important church with certain privileges given to it by the Pope. The Basilica in Lyon is an international pilgrimage site for worship of the Virgin Mary. It is not unusual to see busloads of Catholic visitors to the Basilica.\nThe Catholic Church is still a strong institution in Lyon. It is surprising given the decline of religion overall, but not surprising given Lyon's background as the earliest Christian community in France. 
Lyon is a city of saints and martyrs, many of whom go back to the Middle Ages or the early days of Christianity.\nBefore becoming part of the Kingdom of France, it was part of the Holy Roman Empire in 1032. During the Middle Ages, Lyon was the most visited city in France by the popes after Avignon (seat of the papacy for some 70 years, from 1305-76).\nThe bishop of Lyon is a powerful man in the French Catholic Church. He has the exclusive title, Primat des Gaules, or First among the Gauls, as the head of the first diocese of France. The title has been in existence since 1079.\nDuring the week of December 8, the Fete des Lumieres celebrates the Feast of the Immaculate Conception. It is of particular importance in Lyon because it marks two miracles that saved the city from disastrous events in the past: the bubonic plague in 1643 and the Franco-Prussian War in 1870. Since 1643, a candle-lighting event has been held each year to celebrate the miracle. After the city's salvation from the Prussian War, the Basilica was constructed (1872-84) with private funds.\nToday's candlelight event is the Fete des Lumieres. Despite its religious origins, the Fete has become an international event. For three nights during the week of December 8, there is a lighting extravaganza in 40 venues throughout the city. Historical buildings and landscapes are illuminated in a fantastic show of lights.", "score": 15.758340881307905, "rank": 80}, {"document_id": "doc-::chunk-1", "d_text": "Corneille de Lyon and his milieu\nCorneille de Lyon was probably born in The Hague around 1500-1510. He travelled to France in his twenties or thirties, settling in the city of Lyon in 1533, which resulted in his cognomen. Lyon, at the time, had a quartier teeming with Flemish painters such as Guillaume le Roy, Lievin van der Meer and Mathieu d’Anvers. By at least 1544 he had been made painter to the Dauphin, since he made a formal request as such to be exempted from paying tax on his wine. 
In 1547 he became a naturalized French citizen by royal decree, and he died in Lyon in 1575. His entire oeuvre consists of portraits, most of them on a small scale and with a single colour background. Stylistic comparison makes it plausible that he was influenced by or had even trained with Joos van Cleve in the Flemish town of Bruges.\nCorneille de Lyon (c.1500/10–d.1575), Catherine de’ Medici, c.1536, o/oak panel, 6.5 x 6 ins (16.5 x 15.2 cm.), Polesden Lacey, National Trust\nThe queen of France, Catherine de’ Medici (wife of Henri II: r.1547-59), owned a substantial collection of art, tapestries, furniture and pottery (including 300 portraits), and commissioned a number of works by Corneille, as well as sitting for him herself. After his death his reputation lingered for some time, diminishing over the following century, save with the rare collector who had a taste for portraits. It was not until the middle of the 19th century that his work was once more appreciated for its fine quality. The slackening in the demand for Corneille’s work may have had a profound impact on the frames corresponding with the portraits. In general, it can be stated that the chances of a painting having a surviving original frame are inversely proportional to the artist’s popularity. With this in mind, it might be possible that Corneille’s portraits survived the centuries without too many re-framings.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-0", "d_text": "Paris, city and capital of France, located in the north-central part of the country. People were living on the site of the present-day city, located along the Seine River some 233 miles (375 km) upstream from the river’s mouth on the English Channel (La Manche), by about 7600 bce. 
The modern city has spread from the island (the Île de la Cité) and far beyond both banks of the Seine.\nParis occupies a central position in the rich agricultural region known as the Paris Basin, and it constitutes one of eight départements of the Île-de-France administrative region. It is by far the country’s most important centre of commerce and culture. Area city, 41 square miles (105 square km); metropolitan area, 890 square miles (2,300 square km). Pop. (1999) city, 2,125,246; (2005 est.) city, 2,153,600; urban agglomeration, 9,854,000.\nFor centuries Paris has been one of the world’s most important and attractive cities. It is appreciated for the opportunities it offers for business and commerce, for study, for culture, and for entertainment; its gastronomy, haute couture, painting, literature, and intellectual community especially enjoy an enviable reputation. Its sobriquet “the City of Light” (“la Ville Lumière”), earned during the Enlightenment, remains appropriate, for Paris has retained its importance as a centre for education and intellectual pursuits.\nParis’s site at a crossroads of both water and land routes significant not only to France but also to Europe has had a continuing influence on its growth. Under Roman administration, in the 1st century bc, the original site on the Île de la Cité was designated the capital of the Parisii tribe and territory. The Frankish king Clovis I had taken Paris from the Gauls by 494 ce and later made his capital there. Under Hugh Capet (ruled 987–996) and the Capetian dynasty the preeminence of Paris was firmly established, and Paris became the political and cultural hub as modern France took shape. 
France has long been a highly centralized country, and Paris has come to be identified with a powerful central state, drawing to itself much of the talent and vitality of the provinces.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-11", "d_text": "Only Laos was seen initially as an economically unviable colony, although timber was harvested at a small scale from there.\nAt the turn of the 20th century, the growing automobile industry in France resulted in the growth of the rubber industry in French Indochina, and plantations were built throughout the colony, especially in Annam and Cochinchina. France soon became a leading producer of rubber through its Indochina colony and Indochinese rubber became prized in the industrialised world. The success of rubber plantations in French Indochina resulted in an increase in investment in the colony by various firms such as Michelin. With the growing number of investments in the colony's mines and rubber, tea and coffee plantations, French Indochina began to industrialise as factories opened in the colony. These new factories produced textiles, cigarettes, beer and cement which were then exported throughout the French Empire.\nWhen French Indochina was viewed as an economically important colony for France, the French government set a goal to improve the transport and communications networks in the colony. Saigon became a principal port in Southeast Asia and rivalled the British port of Singapore as the region's busiest commercial centre. By 1937 Saigon was the sixth busiest port in the entire French Empire.\nIn 1936, the Trans-Indochinois railway linking Hanoi and Saigon opened. Further improvements in the colony's transport infrastructures led to easier travel between France and Indochina. By 1939, it took no more than a month by ship to travel from Marseille to Saigon and around five days by aeroplane from Paris to Saigon. 
Underwater telegraph cables were installed in 1921.\nFrench settlers further added their influence on the colony by constructing buildings in the form of Beaux-Arts and added French-influenced landmarks such as the Hanoi Opera House (modeled on the Palais Garnier), the Hanoi St.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-42", "d_text": "The dynastic and political vendetta between the Burgundian and the Armagnac faction (1407–35) had continual repercussions in Paris, where the butchers and skinners, led by Simon Caboche, momentarily seized power (1413). The resumption of the Hundred Years’ War by the English in 1415 made matters worse. After a revolt of the Parisians (1418), the Burgundians occupied Paris; the Anglo-Burgundian Alliance (1419) was followed by the installation of John Plantagenet, duke of Bedford, as regent of France for the English king Henry VI (1422). Whereas Charles VI had lived in his father’s Hôtel Saint-Paul, Bedford lived in the Hôtel des Tournelles, on the southeastern edge of the Marais, which was to be the Paris residence of later kings until 1559. During the reign of Charles VI, construction began on the Notre-Dame Bridge (1413).\nIn 1429 Joan of Arc failed to capture Paris. Only in 1436 did it fall to the legitimists, who welcomed Charles VII in person in 1437. Successive disturbances had reduced the population, but the Anglo-French truce of 1444 allowed Charles to begin restoring prosperity.\nIn 1469, during Louis XI’s reign (1461–83), the Sorbonne installed the first printing press in Paris. Otherwise this was a period of intellectual stagnation. Churches were rehabilitated and new houses were built, however; from 1480 splendid private mansions began to appear, such as the Hôtel de Sens and the Hôtel de Cluny.\nThe influence of the Italian Renaissance on town architecture appeared in the new building for the accounting office and in the reconstruction of the Notre-Dame Bridge (1500–10) in Louis XII’s reign. 
Under Francis I (1515–47) this influence grew stronger, finding notable expression in the new Hôtel de Ville. Furthermore, whereas from Charles VII’s time the kings of France had preferred to reside in Touraine, Francis returned the chief seat of royalty to Paris. With this in mind he had extensive alterations made to the Louvre from 1528 onward. The new splendour of the monarchy, which was well on its way toward absolute rule, was reflected in the way Paris developed as the capital of an increasingly centralized state.", "score": 14.73757419926546, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "Young Paris auction house expands to Lyon\nAfter opening branches in Deauville and Toulouse, Artcurial adds another region to its roster\nBy Claudia Barbieri. Web only\nPublished online: 27 September 2010\nPARIS. After creating branches in Deauville and Toulouse, the Paris auction house Artcurial is about to open another, this time in Lyon, France's second largest city.\nOn 29 September Artcurial Lyon will be inaugurated with an exhibition of 30 of the best lots to be sold in the Paris rooms in the upcoming fall/winter season. Among the selected lots are several paintings by Maurice Utrillo, from the collection of the celebrated collector and gallerist Paul Pétridès.\nOne of the most famous of the Montmartre painters—and son of another, the model and artist Suzanne Valadon—Utrillo, alcoholic and mentally unstable, lived in 1924 with his mother and stepfather, the artist André Utter, in the Chateau Saint-Bernard, on the river Saône near Lyon, while recovering from a bout in a psychiatric hospital. 
Some of the paintings included in the show date from this period.\nThe auction house will be headed by the auctioneer Michel Rambert, already well established in Lyon, and is located at 2, rue St-Firmin in the center of the city, close to the Sans-Souci metro stop and Lyon Part-Dieu train station.\nThe auction schedule will start with a sale of \"bandes dessinées\" comic strip art on October 16, followed by Modern paintings on 20 November, furniture and art objects on 28 November, and gold/jewellery on 30 November.\nFounded in 2001, Artcurial has grown steadily over the past decade. Its sales in the first half of this year totalled €52.3 million.", "score": 13.897358463981183, "rank": 85}, {"document_id": "doc-::chunk-3", "d_text": "The Saint-Paul area is more Renaissance in character, going back to the 15th and 16th centuries. There is a Church of St. Paul and a small train station in this area. The original church started in 549 but was built, decommissioned, then rebuilt over the centuries. It became a parish church in the 19th century.\nIn this neighborhood, Italian bankers and merchants settled and built lavish mansions and stores. Italian influences are evident in this neighborhood with rose-colored architecture, and arches and windows found in Italy.\nThe area of Saint-George started early in the 16th century as a place where silk weavers resided. With the growth of the industry, they moved to the Croix-Rousse district in the 19th century. The Church of Saint-George, located on the bank of the Saone, was rebuilt in the 19th century. It remains part of the iconic view of Lyon and the Fourviere. 
We lived steps from this walk-bridge, and it was a beautiful sight each time we saw it or heard the church bells ring.\nWhat is unique in the Vieux are the \"traboules.\" The word \"traboule\" comes from the Latin word \"trans-ambulare,\" which means passing through.\nRoman towns in the Middle Ages built long parallel roads (no crossroads) between the hill and the river. Passages through houses were used as short-cuts to get to the river. These passages became a usual way to get around town.\nSome were long \"traboules\" that connected all the streets. Some were elaborate with beautiful courtyards and wells, and others were simple.\nThere are over 400 traboules between the Vieux and Croix-Rousse neighborhoods, but fewer than 50 are open for public viewing. Longue Traboule is the longest and is always open to the public (from Rue Saint-Jean to Rue du Boeuf).\nAlthough the Fourviere is not technically part of the Vieux, the hill that rises above the banks of the Saone is part of the iconic image of Lyon.\nThe name \"Fourviere\" comes from the Latin forum vetus, or old forum. Its Roman past can be seen in a well preserved Roman theater near the top of the hill, where the site of the original Lyon was.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "Visit Arles, the Roman city\nArles is a historic town located in the Provence region of southern France. Its history dates back to ancient times, with evidence of human settlement in the area as far back as the Bronze Age. The city's strategic location along the Rhône River made it an important trading and cultural center throughout its history. One of the most significant periods in Arles' history was during the Roman era. It became a Roman colony in 46 BC and quickly developed into an important regional capital. The city was known as \"Arelate\" during this time and served as a major hub for trade, as well as an administrative center for the surrounding region. 
Arles was renowned for its impressive Roman architecture, including the Arena of Arles (Amphitheatre), which is still standing today and is one of the best-preserved Roman amphitheatres in the world. The Roman theatre, cryptoporticus, and other structures also highlight Arles' importance during this period. In the early Middle Ages, Arles continued to thrive as a cultural and trading center. It became a prominent religious center as well, with the construction of the Church of St. Trophime, a stunning example of Romanesque architecture. The city's role as an important center declined somewhat in the later Middle Ages, but it remained a significant regional town.\nArles in the 19th century\nDuring the 19th century, Arles gained attention from artists, most notably Vincent van Gogh, who lived and painted in the area. His famous work \"Starry Night Over the Rhône\" was inspired by the beauty of the city and its surroundings. The city's artistic heritage has continued to flourish, and Arles is known for hosting the Rencontres d'Arles, an annual photography festival. Arles also played a role in the development of photography itself. In 1839, Louis Daguerre unveiled his daguerreotype process in the city, marking a pivotal moment in the history of photography. Today, Arles is a UNESCO World Heritage site, celebrated for its rich history, Roman architecture, and vibrant cultural scene. The city's legacy as an important center of art, trade, and culture continues to attract visitors from around the world.\nJulius Caesar in Arelate\nPhoto of the presumed bust of Julius Caesar at the MDAA in Arles.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "LIMOGES, capital of the Haute-Vienne department, central France. A Jewish source, Sefer Yeshu'at Elohim (in A.M. 
Habermann, Gezerot Ashkenaz ve-Zarefat (1945), 11–15) contains an account of a semi-legendary anti-Jewish persecution in Limoges in 992 resulting from the activities of an apostate from Blois. The Christian writer Adhémar of Chabannes relates that in 1010 Bishop Alduin of Limoges gave the Jewish community the choice of expulsion or conversion. It is possible that both sources refer to the local manifestation of the general anti-Jewish persecutions which occurred around 1009 and which were followed by baptisms and expulsions. At any rate, whether or not the Jews were expelled from Limoges, the expulsion order was no longer in force from the middle of the 11th century; a certain Petrus Judaeus is mentioned in a local document between 1152 and 1173 and Gentianus Judaeus in 1081. Around the middle of the 11th century R. Joseph b. Samuel *Bonfils (Tov Elem) headed the Jewish community of Limoges and Anjou. The beginnings of the modern Jewish community in Limoges date from 1775. During World War II, Limoges became the largest center of refuge for Alsatian Jews; about 1,500 families and many institutions were transferred to the town. The present community, which was formed in 1949, grew to more than 650 by 1970 and possessed a synagogue and community center.\nGross, Gal Jud (1897), 308–9; J. de Font-Reaulx (ed.), Cartulaire du Chapître de St.-Etienne de Limoges (1919), passim; La Vie Juive, 51 (1959), 15; B. Blumenkranz, Juifs et Chrétiens… (1960), index; Z. Szajkowski, Analytical Franco – Jewish Gazetteer (1966), 286; Roth, Dark Ages, index.\nSources: Encyclopaedia Judaica. © 2008 The Gale Group. 
All Rights Reserved.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-2", "d_text": "– March 2007 : the Ecole Nationale supérieure des beaux-arts of Lyon moves into Manutention Square (under the Skylight)", "score": 12.364879196879162, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Lyon, third largest city in France, the world's capital of silk, capital of the Alps, the world's culinary capital, and a historic city that is one of the most important tourist destinations. This city was founded about 2000 years ago.\nBasilica Church (La basilique Notre-Dame de Fourvière):\nBasilica Church was built between 1872 and 1884 in a dominant position overlooking the city.\nThe design of the basilica, by Pierre Bossan, draws from both Romanesque and Byzantine architecture, two non-Gothic models that were unusual choices at the time. It has four main towers, and a belltower topped with a gilded statue of the Virgin Mary.\nPark of the Golden Head (Parc de la Tête d'or):\nPark of the Golden Head is a large urban park in Lyon with an area of approximately 117 hectares, featuring a lake on which boating takes place during the summer months. In the central part of the park, there is a small zoo, with elephants, giraffes, reptiles, deer, primates, and other animals. There are also sports facilities, such as a velodrome, boules court, mini-golf, horse riding, and a miniature train.\nTraboules are a type of passageway primarily associated with the city of Lyon; they were originally used by silk manufacturers and other merchants to transport their products. 
The first examples of traboules are thought to have been built in Lyon in the 4th century.\nThe Presqu'ile district is the heart of Lyon and, due to its beautiful architecture, is distinct from other parts of the city.\nExtending from the foot of the Croix Rousse hill to the confluence of the Rhône and the Saône rivers, it has a preponderance of cafés, restaurants, luxury shops, department stores, banks, government buildings, and cultural institutions.", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-2", "d_text": "The fairs influenced a cityscape designed to accommodate the many merchants (at the time Provins accounted for over 3,000 artisans, grouped on streets or in districts): the wide streets for the convoys and the placement of the stalls, and the 3-floor merchant houses with sumptuous vaulted rooms for warehouses, are examples from this city organized and dedicated to the fairs.\nFrom the late 13th century onward, the commercial importance of Provins would gradually fade. Trade routes moved southward, and new trade fairs thrived in Flanders and the Rhine Valley, competing with the fairs of Champagne. In the early 14th century, when the region became a part of the kingdom of France, the Champagne fairs were gradually deserted. The abolition of merchant privileges, religious wars and epidemics put an end not only to the Provins fair but also to those of Troyes, Lagny and Bar-sur-Aube. Of these four cities, Provins is the only one that has so beautifully preserved the architecture and urbanism that characterize these great medieval fair cities. From that point on, farming became the main economic activity of the city.\nThe fairs of Champagne\nThe Counts of Champagne decided to establish a system of biannual fairs that lasted several weeks. These fairs attracted merchants from all over Europe, North Africa and the Orient to exchange all sorts of goods: wool, linen, wine, silk, spices, furs, dyes, silverware ... 
Even the church imported ivory and precious woods from Africa and precious stones from the East to decorate religious objects. Fairs were places for wholesale trade among professionals, unlike the weekly or daily markets for individuals and consumers. They also became major banking centers. Provins created its own currency, the denarius Provins, which was one of the few currencies that was widely accepted throughout medieval Europe. Many European bankers established trading posts in Provins.\nFrom the 12th century onward the city had not only its own currency but also its own yardstick, weight and grain measurements. At that time the trips were long and perilous. And so the Counts of Champagne offered to escort merchant convoys at their own expense across their territory, escorts called the \"fair conduits.\" The privileges granted to merchants quickly established a reputation for the fair and in part ensured its success. On site, the counts provided security with fair guards.\nThe fair was also an opportunity for parties with live music and juggling.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "Nice (Nicaea) was probably founded around 350 BC by the Greeks of Marseille, and was given the name of Nikaia in honour of a victory over the neighbouring Ligurians (Nike was the Greek goddess of victory). The city soon became one of the busiest trading ports on the Ligurian coast.\nDuring the Middle Ages, Nice participated in the wars and history of Italy. As an ally of Pisa it was the enemy of Genoa, and both the King of France and the Holy Roman Emperor endeavoured to subjugate it; but in spite of this it maintained its municipal liberties. During the 13th and 14th centuries the city fell more than once into the hands of the Counts of Provence, but it regained its independence even though related to Genoa.\nIn the second half of the 20th century, Nice enjoyed an economic boom primarily driven by tourism and construction. 
Two men dominated this period: Jean Médecin, mayor from 1928 to 1943 and from 1947 to 1965, and his son Jacques, mayor from 1966 to 1990. Under their leadership, there was extensive urban renewal, including many new constructions. These included the convention centre, theatres, new thoroughfares and expressways.\nThe city of Nice is located in the extreme south-east of France, 30 kilometers from the Franco-Italian border. With 343,895 inhabitants (2014) it is the fifth largest city in France (after Paris, Marseille, Lyon and Toulouse) and the 2nd largest city in the South-PACA region. Its agglomeration has 943,695 inhabitants (2012). It is located in the heart of the seventh largest urban area in France, with 1,004,914 inhabitants. The city is the center of the Metropole Nice Côte d'Azur, which brings together forty-nine municipalities for 536,327 inhabitants (2013). Demographically, and in comparison with the communal averages at the national level, the city is marked by an overrepresentation of young adults (15-29 years) and older people (75 years and over) and an under-representation of the youngest (under 15 years) and adults (30-44 years).", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-8", "d_text": "From 28 December 986, the Bishop of Gap had sovereignty over the city due to concerns about future Muslim invasions, and held that power until Revolutionary reforms in 1801 despite Gap being annexed by the French crown in 1512.\nGap and its area became part of the Comté de Provence, established at the end of the 10th century, and then of the County of Forcalquier, which separated in the 12th century. The Bishops of Gap were also the temporal lords of the city. But their control was long disputed by officers of the Counts of Forcalquier, notably under the episcopate of Arnoux, who later became the bishop of the city. 
On the death of the last Count of Forcalquier in 1209, the Embrun and Gap areas were passed to the Dauphiné while those of Forcalquier and Sisteron returned to the County of Provence. It is for this reason that the current coat of arms of the Region Provence-Alpes-Côte d'Azur is the coat of arms of the Dauphiné. In 1349 the Dauphin of Viennois Humbert II passed on his Principality to the grandson of Philippe VI of France, the future King of France Charles V. From 1349 to 1457 Dauphiné remained a Principality separated from France, whose prince was the eldest son of the King of France. In 1457, Charles VII put an end to this status and joined the province to the Kingdom of France.\nIn the 14th century, the city took advantage of the installation of the Popes in Avignon, which brought a more frequent passage of travellers and allowed a craft of wool and skins to develop, making the city thrive. Avignon linkages were strengthened by the presence of many clerics of the entourage of the Pope, within the chapter of the canons of Gap.\nRenaissance and early modern era\nThe 16th and 17th centuries were particularly dark times for the city. The Wars of Religion were lethal in the region. Gap was a Catholic stronghold, while the Champsaur switched to the \"allegedly reformed religion\". 
After various skirmishes, François de Bonne, leader of the Protestants, decided to attack Gap, which was nevertheless protected by 20 towers.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-1", "d_text": "(Source: Office de Tourisme)\nFriday evening jazz is held in the main square during the summer months of July and August, along with other activities throughout the year.\n- 89m2 living space/3 floors; situated on a stone corner lot\n- two large bedrooms (19 & 20m2)/two bathrooms (1 en suite with separate WC)\n- non-smoking interior environment\n- treated wooden beams throughout 2012 (10 year guarantee)\n- living room with built-in library and stone fireplace\n- fully equipped kitchen\n- master bedroom has built-in closet & ensuite bathroom\n- cave/wine cellar underground\n- AC/heat reversible\n- new roof with waterproofing Dec. 2012 (10 yr. guarantee)\n- new high performance water heater/boiler (2015)\n- patio and rock garden area\n- double-glazed windows throughout\n- panoramic sea view\n- 30 meters from navette stop (free shuttle bus circulates every 15 minutes all day long from the center of town up to the village) or 5-minute walk down to town center\n- free street parking\nPRICE: 356,000 Euros\nSee FB page for village & videos HERE\nFor more information: E-mail email@example.com\nIn 1643, a merchant named Claude Trudon came to Paris and soon after became owner of a boutique on rue Saint-Honoré, where he made and sold candles for use in homes and churches. The year Louis XIV was crowned King of France, Claude Trudon opened his first family business, manufacturing wax and candles. In 1687, he became apothecary to Queen Marie-Thérèse at Versailles.\nThe wax was collected from bee hives, cleaned and whitened through a series of water baths, cut into long strips, and sun-dried in open air.
Due to the extreme purity of the wax, sunlight contributed to whitening it further, creating a magnificent glow, especially through the delicate edge.\nIn 1737, Jerome Trudon purchased La Manufacture Royale de Cire, supplying the royal court, as well as prominent churches in Paris and the region. During the French Revolution, the royal emblem was no longer used, to prevent the company from being destroyed; however, its reputation helped it survive through the centuries, despite the arrival of electricity.
Provins is famous for the product of its textile industry: a dark blue woolen cloth called the "ners de Provins." The town was fortified to ensure the safety of residents, merchants and their goods, but also to affirm the power of the counts. The spectacular ramparts, part of which is still visible today, date back to the 13th century and stretched for nearly 5 kilometers around the city. The city reached its peak during the reign of Thibaud IV of Champagne (1201-1253), when there were no fewer than 80,000 inhabitants. Its structured economic system allowed the city to become the third largest in France after Paris and Rouen, and one of the commercial capitals of Europe.\nUnderground quarries were mined for clay soil, called "fuller's earth", which was used to degrease the wool. The cloth had to be fulled to absorb it fully, hence the name given to this clay.\nThe fairs also accompanied the development of a multitude of activities that together inspired and encouraged a particular kind of urban fabric.", "score": 8.086131989696522, "rank": 95}, {"document_id": "doc-::chunk-6", "d_text": "Town Centre Pics\nFrench Cities -- German Cities -- Art\nLyon -- Voiron -- Grenoble -- le Saint-Suaire -- Romans-sur-Isère -- Valence -- Roman Vienna lies just south of Lyon -- Digne-les-Bains & Embrun -- Pays des Écrins (situé dans la vallée de la Durance en aval de Briançon) -- Geneva -- Berne\nCeltic/Frank History --\nGermaniæ Historicæ --\nAnglo Saxons et.al.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-1", "d_text": "The name is Germanic in origin and is reconstructible as *liudik-, from the Germanic word *liudiz “people”, which is found, for example, in Dutch lui(den), lieden, German Leute, Old English lēod (English lede) and Icelandic lýður (“people”).
It is found in Lithuanian as liaudis (“people”), in Ukrainian as liudy (“people”), in Russian as liudi (“people”), in Latin as Leodicum or Leodium, in Middle Dutch as ludic or ludeke.\nIn French, Liège is associated with the epithet la cité ardente (“the fervent city”). This term, which emerged around 1905, originally referred to the city’s history of rebellions against Burgundian rule, but was appropriated to refer to its economic dynamism during the Industrial Revolution.\nEarly Middle Ages\nAlthough settlements already existed in Roman times, the first references to Liège are from 558, when it was known as Vicus Leudicus. Around 705, Saint Lambert of Maastricht is credited with completing the Christianization of the region, indicating that up to the early 8th century the religious practices of antiquity had survived in some form. Christian conversion may still not have been quite universal, since Lambert was murdered in Liège and thereafter regarded as a martyr for his faith. To enshrine St. Lambert’s relics, his successor, Hubertus (later to become St. Hubert), built a basilica near the bishop’s residence which became the true nucleus of the city. A few centuries later, the city became the capital of a prince-bishopric, which lasted from 985 till 1794. The first prince-bishop, Notger, transformed the city into a major intellectual and ecclesiastical centre, which maintained its cultural importance during the Middle Ages. Pope Clement VI recruited several musicians from Liège to perform in the Papal court at Avignon, thereby sanctioning the practice of polyphony in the religious realm. The city was renowned for its many churches, the oldest of which, St Martin’s, dates from 682.
Although nominally part of the Holy Roman Empire, in practice it possessed a large degree of independence.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-6", "d_text": "About 450, Vienne's bishops became archbishops, several of whom played an important cultural role, e.g. Mamertus, who established Rogation pilgrimages, and the poet Avitus (498-518). Vienne's archbishops and those of Lyon disputed the title of "Primate of All the Gauls" based on the dates of founding of the cities compared to the dates of founding of the bishoprics. Vienne's archbishopric was suppressed in 1790 during the revolution and officially terminated 11 years later by the Concordat of 1801.\nVienne was a target during the Migration Period: it was taken by the Kingdom of the Burgundians in 438, but re-taken by the Romans and held until 461. In 534 the Merovingian-led Franks captured Vienne. It was then sacked by the Lombards in 558, and later by the Moors in 737. When the Treaty of Verdun divided Frankish Burgundia into three parts in 843, Vienne became part of Middle Francia.\nIn the Kingdom of Provence\nKing Charles II the Bald assigned the district in 869 to Comte Boso of Provence, who in 879 proclaimed himself king of Provence and on his death in 887 was buried in the cathedral church of St. Maurice. Vienne then continued as capital of the Kingdom of Provence, from 882 of the Kingdom of West Francia and from 933 of the Kingdom of Arles, until 1032, when it reverted to the Holy Roman Empire, but the real rulers were the archbishops of Vienne. Their rights were repeatedly recognized, but they had serious local rivals in the counts of Albon, and later Dauphins of the neighboring countship of the Viennois. In 1349, the reigning Dauphin sold his rights to the Dauphiné to France, but the archbishop stood firm and Vienne was not included in this sale.
The archbishops finally surrendered their territorial powers to France in 1449. Gui de Bourgogne, who was archbishop 1090–1119, was pope from 1119 to 1124 as Callixtus II.", "score": 8.086131989696522, "rank": 98}]} {"qid": 45, "question_text": "Why do women have different susceptibility to allergies and autoimmune diseases compared to men?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "A recent perspective published in Science Signaling outlines the gender-specific differences in the stress response and the actions of glucocorticoids, and the female superiority in terms of dealing with stress and infections.\nThe sex-based immunological differences contribute to variations in the incidence of autoimmune diseases and malignancies, and in susceptibility to infectious diseases. For example, women have a lower risk of infections but are more susceptible to autoimmune/inflammatory diseases such as rheumatoid arthritis, systemic lupus erythematosus and autoimmune thyroid diseases.\nIn the Science Signaling perspective, George Chrousos discusses these gender differences from an evolutionary perspective, including gene network evolution and steroid molecular actions, as well as sexual dimorphism even in the absence of estrogens and androgens.\nThe author provides a concise and contemporary view of the multilevel interactions between the stress, reproductive and immune systems and how they may determine gender-specific stress and immune responses.\nIn terms of clinical implications, Chrousos goes further, discussing how the stress response and immune and inflammatory reactions are more potent in women than in men.\nThis may explain the former’s higher prevalence of stress-related behavioral syndromes, such as anxiety, depression, psychosomatic and eating disorders, and autoimmune inflammatory or allergic disorders, such as rheumatoid arthritis, systemic lupus erythematosus and multiple sclerosis, or asthma, respectively.\nA 2017 study published in the Journal of Neuroscience
Research examined gender differences in neural correlates of online stress-induced anxiety response in men and women with comparable levels of State-Trait Anxiety Inventory (STAI) anxiety and perceived stress. The study reports gender-specific neural correlates of anxiety during stress provocation, mainly in the medial prefrontal and parietal cortices, with opposite patterns of associations in men and women.\nSpecifically, a gender interaction from whole-brain regression analyses was observed in the dorsomedial prefrontal cortex, left inferior parietal lobe, left temporal gyrus, occipital gyrus, and cerebellum, with positive associations between activity in these regions and stress-induced anxiety in women, but negative associations in men, indicating that men and women differentially utilize neural resources when experiencing stress-induced anxiety.\nA 2022 review indicates that there are multiple phenotypic differences between the immune systems of men and women. In general, men are more vulnerable to infectious diseases, and women are more prone to autoimmune diseases.", "score": 50.41854771897754, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Study Explores Cause for Higher Autoimmune Disease Prevalence Among Women\nA molecular switch in the skin cells may cause increased susceptibility to autoimmunity in women.\nWomen are 4 times more likely to develop autoimmune disease than men. In systemic lupus erythematosus (SLE), the prevalence of the disease is 9 times higher among women; however, the cause of this disparity is largely unknown, a recent study indicates.\nThe study, published in JCI Insight, examined the potential mechanisms promoting increased likelihood of autoimmunity in women as opposed to men.
The study pointed to a skin-targeted overexpression of the female-biased transcription cofactor vestigial like family member 3 (VGLL3) as a potential contributor to autoimmunity. Previous research has indicated that women have more VGLL3 in their skin cells than men.\nAccording to the study, the researchers found that in mice, having too much VGLL3 in skin cells promoted an autoimmune response that extended beyond the skin. In the mouse model, overexpression of VGLL3 drove an autoimmunity-prone transcriptional signature similar to that observed in female skin, causing inflammation and activation of type I IFN signaling that mimics cutaneous lupus, according to the researchers.\nAdditionally, the researchers noted that extra VGLL3 in the skin cells appeared to change expression levels of a number of genes important to the immune system, including many of the same genes altered in autoimmune diseases. In mice with excess VGLL3, their skin became scaly and raw and they produced antibodies against their own tissues.\n“VGLL3 appears to regulate immune response genes that have been implicated as important to autoimmune diseases that are more common in women, but that don’t appear to be regulated by sex hormones,” lead study author Johann Gudjonsson, MD, PhD, professor of dermatology at the University of Michigan Medical School, said in a press release. “Now, we have shown that overexpression of VGLL3 in the skin of transgenic mice is by itself sufficient to drive a phenotype that has striking similarities to systemic lupus erythematosus, including skin rash, and kidney injury.”\nThe researchers concluded that the data support the assertion that overexpression of VGLL3 in female skin primes women for autoimmunity. However, they do not know what triggers may set off overexpression of VGLL3 activity.", "score": 49.36619169152586, "rank": 2}, {"document_id": "doc-::chunk-1", "d_text": "These differences are typically attributed to the stronger female immune response. 
The weaker male response to pathogens may allow the invader to cause more damage, whereas the strong and persistent female response reduces pathogen-induced damage but may eventually trigger autoimmune processes and cause chronic damage.\nSexual dimorphism in the immune system and its mechanisms have been extensively reviewed in the context of autoimmunity, immune responses, cancer, the response to vaccination, and transplant rejection. The proportions and phenotypes of some of the immune cells are different between the sexes at baseline and may contribute to the higher inflammatory response and better survival of females with infectious diseases. Female peritoneal and pleural cavities have higher numbers of T cells, B cells, and macrophages. Female peritoneal macrophages show higher expression levels of genes of the complement system, of the interferon (IFN) signaling pathways, and of toll-like receptors (TLRs).\nWhereas high estrogen levels increase the risk and worsen the symptoms of inflammatory bowel disease (IBD), rheumatoid arthritis, and systemic lupus erythematosus (SLE), estrogen has a protective effect in multiple sclerosis. Contraceptive pills, which contain female sex hormones and are in direct contact with the digestive system, increase the risk for IBD in genetically susceptible women.
With the hormonal drop at menopause, there is a peak in RA, which can be controlled by hormonal replacement.", "score": 47.0241461994149, "rank": 3}, {"document_id": "doc-::chunk-2", "d_text": "The study looked for data that could distinguish a number of explanations for sex differences in these diseases: whether disease risk was correlated with distinct genetic variants in women and men, or whether the two sexes might instead have different sensitivities to the same genetic risk factors; whether sex differences in disease risk could be explained by different levels of the sex hormones testosterone or estrogen or as side effects of the development of other secondary sex characteristics (as is the case with breast and prostate cancers); or whether sex differences were linked to differences in the sex chromosomes—the fact that women have two X chromosomes while men have one X and one Y.\nThe authors found that sex had a significant influence on some of these diseases, including those thought to have similar prevalence in males and females. Many diseases appeared to be impacted by genes regulated by androgens or estrogens - the \"male\" and \"female\" sex hormones, respectively—and as with the new autism findings, the same common genetic differences that differently influence physical traits in men and women also appeared to contribute to risk for five of these nine diseases.\n\"We don't know yet why this occurs, but it does imply that the same biological pathways that influence physical sex differences also impact a number of common diseases and disorders,\" Weiss said. \"Many people are excited about the idea of precision medicine, or how medical care can be optimized for an individual. Well, sex is something that we already know about every individual. 
A better understanding of how sex impacts genetic risk for disease could be a great start to improving our understanding, diagnosis, and treatment or prevention of common diseases."\nParticularly striking initial findings of the Genetics paper—which the researchers caution require further study and replication by other labs—included the identification of an interaction with sex for the genetic risk factors associated with multiple sclerosis, as well as a significantly higher heritability of hypertension in women compared to men. Since hypertension occurs with similar frequency in men and women, the authors speculate this finding might imply that environmental factors play a correspondingly bigger role in male hypertension.\nMore information: Ileena Mitra et al. Pleiotropic Mechanisms Indicated for Sex Differences in Autism, PLOS Genetics (2016). DOI: 10.1371/journal.pgen.1006425\nMichela Traglia et al. Genetic Mechanisms Leading to Sex Differences Across Common Diseases and Anthropometric Traits, Genetics (2016).", "score": 44.56781909812896, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "Lactobacillus bacteria -- one of the many kinds of microbes that dwell inside…\nWhy are women more prone to autoimmune diseases like lupus, multiple sclerosis and rheumatoid arthritis? A new study in mice points to a possible contributor: different types of bacteria that populate our guts.\nIt goes like this: Different mixes of bacteria reside in the innards of male and female mice. Those bacteria, in turn, affect the chemistry of the animals’ bodies -- and, it appears, their risk of autoimmunity.\nThe study, just published in Science, was done by Janet Markle of the Hospital for Sick Children Research Institute, Toronto, and colleagues.
It’s a little complicated, with players that include sex hormones, fatty chemicals, immune cells and a whole host of microscopic life forms.\nThe subject of the study was a type of mouse known as a NOD mouse, which stands for “non-obese diabetic.” These mice spontaneously develop Type 1 diabetes when the insulin-making cells of the pancreas are destroyed by the mouse’s own immune system, and female NOD mice develop diabetes twice as often as male NOD mice do. The scientists chose these mice to study because they display just the kind of sex bias in an autoimmune disease that one sees so often in people (though it’s a little different too: In people, Type 1 diabetes doesn’t display a sex difference).\nThe authors did a series of experiments that involved rearing mice in sterile or non-sterile environments, swapping gut contents back and forth between mice of different genders, and more besides. And they found:\n--Without any gut bacteria, there’s no male-female difference in getting Type 1 diabetes. That implies the gut bacteria have something to do with the protection that male mice have.\n--Male and female mice grow up to have different populations of bacteria living within them.\n--These gut bacteria cause altered blood levels of the sex hormone testosterone as well as the amounts and types of certain fatty chemicals in male and female mouse blood. 
In other words, they can truly affect the biochemistry of the body in ways that depend on the animal’s sex.\n--The protection in males against developing diabetes seems to depend on testosterone levels going up in response to their microscopic gut tenants.\n--Here’s a key experiment: When bacteria from male mice are transferred over to female mice, the females become protected against the development of diabetes.", "score": 44.39690910082244, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Gene linked to lupus might explain gender difference in disease risk\nDALLAS March 30, 2009\nIn an international human genetic study, researchers at UT Southwestern Medical Center have identified a gene linked to the autoimmune disease lupus, and its location on the X chromosome might help explain why females are 10 times more susceptible to the disease than males.\nIdentifying this gene, IRAK1, as a disease gene may also have therapeutic implications, said Dr. Chandra Mohan, professor of internal medicine and senior author of the study. \"Our work also shows that blocking IRAK1 action shuts down lupus in an animal model. Though many genes may be involved in lupus, we only have very limited information on them,\" he said.\nThe study appears online this week in the Proceedings of the National Academy of Sciences.\nLocating IRAK1 on the X chromosome also represents a breakthrough in explaining why lupus seems to be sex-linked, Dr. Mohan said. For decades, researchers have focused on hormonal differences between males and females as a cause of the gender difference, he pointed out.\n\"This first demonstration of an X chromosome gene as a disease susceptibility factor in human lupus raises the possibility that the gender difference in rates may in part be attributed to sex chromosome genes,\" Dr. 
Mohan said.\nSystemic lupus erythematosus, or lupus for short, causes a wide range of symptoms such as rashes, fever or fatigue that make it difficult to diagnose.\nThe multicenter study involved 759 people who developed lupus as children, 5,337 patients who developed it as adults, and 5,317 healthy controls. Each group comprised four ethnicities: European-Americans, African-Americans, Asian-Americans and Hispanic-Americans.\nIn previous genetic studies, the researchers had found an association but not a definite link between lupus and IRAK1.\nFor the current study, the researchers studied five variations of the IRAK1 gene in the subjects, and found that three of the five variants were common in people with either childhood-onset or adult-onset lupus.\nTo further test the link, the researchers then took mice of a strain that normally is prone to developing lupus and engineered them to lack the IRAK1 gene. In the absence of IRAK1, the animals lacked symptoms associated with lupus, including kidney malfunction, production of autoimmune antibodies and activation of white blood cells.", "score": 43.68666692322023, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "A pair of studies by researchers at UC San Francisco suggest that genetic variants that have distinct effects on physical traits such as height, weight, body mass, and body shape in men versus women are also linked to men's and women's risk for a range of diseases—including autism, multiple sclerosis, type 1 diabetes, and others. The results suggest that at least some of the fundamental biological drivers of disease may be significantly different in men and women, an idea that could have significant impacts on disease research and treatment, the authors say.\nThe idea of sex differences in disease is an old one. Some disorders (such as multiple sclerosis) are more common in women, while others (such as autism) are more common in men. 
Other diseases, such as cardiovascular disease, can simply look very different in men and women, and the two sexes are also known to respond differently to certain drugs, making sex differences a crucial factor for doctors to take into account in diagnosis and treatment.\nDespite the prevalence of sex differences in many diseases, however, scientists still do not have a comprehensive understanding of the biology that drives these differences. Many studies in humans and model organisms have sought to address this question, but their results have been contradictory, according to Lauren A. Weiss, PhD, associate professor of psychiatry at UCSF, and senior author on the two new studies.\n\"While some studies have looked at small regions of the genome or tried to support one specific hypothesis with respect to sex differences, few studies have looked at the question from a comprehensive genome-wide perspective,\" Weiss said.\nIn the two new studies, Weiss and her team analyzed genome-wide association study (GWAS) data to search for patterns that might support or rule out competing hypotheses about the origins of sex differences in a number of diseases, and compared these patterns with those associated with physical traits that obviously differ between the sexes, such as height, body mass index (BMI), and waist-to-hip ratios.\nResearch suggests fundamental biological sex difference in autism\nIn the first study, published online November 15, 2016 in PLoS Genetics, researchers in the Weiss lab, which is affiliated with the Institute for Human Genetics at UCSF, investigated why autism occurs nearly five times more often in boys than in girls, a mystery that has puzzled researchers for many years.", "score": 42.16080341467132, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "Genes show gender differences\nThousands of genes behave differently in the same organs of males and females, researchers report, a finding that may help explain why men and women have different 
responses to drugs and diseases.\nTheir study of brain, liver, fat and muscle tissue from mice shows that gene expression, the level of activity of a gene, varies greatly according to sex.\nThe same is almost certainly true of humans, the team at the University of California Los Angeles report.\n"This research holds important implications for understanding disorders such as diabetes, heart disease and obesity, and identifies targets for the development of gender-specific therapies," says Jake Lusis, a professor of human genetics who worked on the study.\nWriting in the August issue of the journal Genome Research, the researchers say that even in the same organ, scores of genes vary in expression levels between the sexes.\nThe smallest differences are in brain tissue.\n"We saw striking and measurable differences in more than half of the genes' expression patterns between males and females," says Dr Thomas Drake, a professor of pathology.\n"We didn't expect that. No one has previously demonstrated this genetic gender gap at such high levels."\nXia Yang, a postdoctoral fellow in cardiology who led the study, says the implications are important.\n"Males and females share the same genetic code, but our findings imply that gender regulates how quickly the body can convert DNA to proteins," Yang says. "This suggests that gender influences how disease develops."\nIn liver tissue, the findings imply male and female livers function the same, but at different rates.\n"Our findings in the liver may explain why men and women respond differently to the same drug," Lusis says.\n"Studies show that aspirin is more effective at preventing heart attack in men than women. One gender may metabolise the drug faster, leaving too little of the medication in the system to produce an effect."\nYang adds: "Many of the genes we identified relate to processes that influence common diseases.
This is crucial, because once we understand the gender gap in these disease mechanisms, we can create new strategies for designing and testing new sex-specific drugs.", "score": 38.565575735588766, "rank": 8}, {"document_id": "doc-::chunk-1", "d_text": "- It might be Mom’s fault. Having a family history of the condition raises risk for both genders and, interestingly, the connection is even stronger for women. A woman whose mother has or had arthritis is likely to develop the problem at the same age and in the same joints.\nRheumatoid Arthritis: A More Aggressive Immune System Raises Women’s Risk\nRheumatoid arthritis (RA) is different from osteoarthritis in that the inflammation is an autoimmune reaction and unrelated to wear and tear on the joints. Three times as many women as men get RA. Also, women tend to be younger when they get RA and, as with osteoarthritis, their pain is worse.\nExperts believe there are two main reasons for the gender differences in RA. First, women get autoimmune diseases in far greater numbers than men – it’s thought that the female immune system is stronger and more reactive. Second, it appears that hormones affect RA risk and flares. Many women with RA who get pregnant experience fewer or no symptoms at all, only to find that they reappear after the baby is born. And breastfeeding lowers the risk of developing RA; a woman who has breastfed for two years has reduced the risk she will ever get the condition by half.", "score": 38.27390179598462, "rank": 9}, {"document_id": "doc-::chunk-1", "d_text": "For example, according to the US National Institutes of Health National Library of Medicine, women contract an autoimmune disease more than twice as frequently as men – 6.4% of women in the US compared to 2.7% of men. 
In addition, some autoimmune diseases seem to be more common in certain ethnic groups: lupus, for example, affects more African-American and Hispanic people than Caucasians.\nSome autoimmune diseases, such as multiple sclerosis and lupus, often occur within families. Environmental and lifestyle factors are also thought to play an important role in triggering autoimmunity. These include increased exposure to chemicals and solvents, and an unhealthy diet of high fat, high sugar and highly processed foods. Another theory is that, because of an excessive focus on cleanliness and the increased use of antiseptics, children today aren’t exposed to as many germs as they were in the past, causing the immune system to overreact to harmless substances. However, none of these theories have been proven.\nAccording to the US Department of Health & Human Services, about 24 million people in the US and about 65 million people in Europe are affected by autoimmune diseases, which are often chronic, debilitating and life threatening, mostly requiring lifelong treatment.1 It has been estimated that autoimmune diseases are among the leading causes of death amongst women in the US for all age groups up to 65 years. Worldwide, the incidence and prevalence of autoimmune diseases is steadily increasing.\n80 known autoimmune diseases with no available cure\nToday, more than 80 diseases caused by the immune system attacking the body’s own organs, tissues and cells have been identified. Some of the more common of these diseases include:\n- Type 1 diabetes;\n- Rheumatoid arthritis;\n- Systemic lupus erythematosus; and\n- Inflammatory bowel disease.\nCurrent medications for these diseases, including nonsteroidal anti-inflammatory drugs and immunosuppressants, are typically directed at alleviating symptoms and managing pain, rather than curing the underlying condition. 
The long-term use of many of these medicines can cause significant side effects.\nA new treatment strategy to reset the immune system\nAs autoreactive T cells seem to play a crucial role in the development of most life-threatening autoimmune diseases, a relatively new treatment strategy aims to reset the body’s immune system through self- or donor-derived blood stem cell transplants. In the absence of the original antigenic triggers, immune homeostasis is restored and self-tolerance can return.", "score": 37.89855115416155, "rank": 10}, {"document_id": "doc-::chunk-6", "d_text": "Due to the presence of this variation in both groups, we could not statistically associate this observed structural variation directly with the SLE phenotype. However, given the complex genetic nature of this autoimmune disease and given that TLR7 has potential functional relevance to SLE (6), we cannot rule out the possibility that copy number variations in TLR7 may influence the genetic background susceptibility for SLE. Since additional genetic variants are associated with lupus-like disease in mice, along with a TLR7 gene copy number variation (1), such a variant in the presence of other genetic factors may have a contributive, additive effect on the human SLE phenotype in a subset of patients. Another recent study has emphasized sex differences in TLR7 function that might influence the SLE phenotype. Peripheral blood lymphocytes (PBLs) from healthy women release more IFNα after TLR-7 stimulation as compared with PBLs from healthy men, an effect that was not seen after stimulation with TLR-9, another inducer of IFNα (14). The increase in IFNα among PBLs from women was not due to a defect in X chromosome inactivation (14). Our results validate the findings of that study, since a significant difference in the relative TLR7 gene copy number in genomic DNA normalized against GAPDH was observed between men and women (P = 0.0138).
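The relative copy number normalized against a reference gene such as GAPDH, as described above, is typically computed with the comparative-Ct (2^−ΔΔCt) method. A minimal sketch follows; all Ct values and the function name are hypothetical illustrations, not data or code from the study:

```python
# Relative gene copy number via the comparative-Ct (2^-ddCt) method.
# All Ct values below are hypothetical, not measurements from the study.

def relative_copy_number(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Copy number of a target gene (e.g. TLR7) relative to a calibrator
    sample, normalized against a reference gene (e.g. GAPDH)."""
    delta_ct_sample = ct_target - ct_reference               # normalize the sample
    delta_ct_calibrator = ct_target_cal - ct_reference_cal   # normalize the calibrator
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2 ** -delta_delta_ct

# Target amplifies one cycle earlier (relative to the calibrator),
# which corresponds to roughly a twofold relative copy number.
ratio = relative_copy_number(ct_target=24.0, ct_reference=20.0,
                             ct_target_cal=25.0, ct_reference_cal=20.0)
print(ratio)  # 2.0
```

In practice each Ct would be the mean of replicate qPCR wells, and the method assumes near-100% amplification efficiency for both genes.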
While TLR7 may influence the genetic background of SLE pathogenesis and contribute to the difference in disease prevalence between the sexes, such a contribution in humans cannot be directly attributed to an increase in gene copy number in a standardized quantity of genomic DNA as was seen in the Yaa mouse.\nACKNOWLEDGMENTS\nWe would like to thank Jan Dumanski for critical review of the manuscript, Amy Peterson for technical assistance, and S. Louis Bridges for use of the ABI 7900HT sequencer.\nAUTHOR CONTRIBUTIONS\nDr. Kelley had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.\nStudy design. Kelley, Johnson, Kimberly, Edberg.\nAcquisition of data. Kelley, Alarcón.\nAnalysis and interpretation of data. Kelley, Johnson, Alarcón, Kimberly, Edberg.\nManuscript preparation. Kelley, Edberg.", "score": 37.18388493919959, "rank": 11}, {"document_id": "doc-::chunk-3", "d_text": "(6)\nEstrogen dominance has long been discussed in relationship to chronic illness and autoimmune conditions. Do you experience any of these symptoms of estrogen dominance?\n- Weight gain\n- Hair loss\n- Cold hands/feet\n- Thyroid dysfunction\n- Fibrocystic and/or tender breasts\n- Impaired metabolism\n- Cognitive impairment\nConsidering these familiar symptoms, it’s no wonder that estrogen dominance is linked to systemic dysfunction including autoimmune conditions, heightened allergic responses, and rapid aging. (7)\nThere are other hormonal factors to consider regarding the “over reactive” stress response in women. This stress response triggers the gene expression responsible for disrupting the endocrine system. That’s where significant problems begin.\nThe heightened stress response in women is generally considered to be a survival instinct.
Women – as the reproductive gender – may need a higher rate of stress hormones to assure procreation.\nHowever, that heightened ability (at chronic levels) tips the scales into disruption, dysfunction, and the tendency toward disease. The resulting endocrine disruption increases the risk factors for thyroid dysfunction, glycemic dysregulation, diabetes, and more. (8)\n3) Immunity factors\nNow we get to the topic of the immune system and autoimmunity. Stress hormones affect the immune system by depressing or delaying response. Many people who deal with chronic health challenges report an impaired immune system and increased frequency of colds, flu, etc.\nThis is different from the over active immune response that leads to autoimmunity. Yet, they’re both part of the same equation. The immune system – and the risk for autoimmune predisposition – are definitely impacted by gender-related hormones.\nAccording to this National Institute of Health report, “It is well established that gender plays a profound role in the incidence of autoimmunity with diseases such as lupus occurring much more frequently in females than in males. This is related to higher numbers of circulating antibodies as well as other factors.” (9)\nThe good news is that once we’re aware of why risk factors may be increased for females, we’re now empowered with the information we need to take action and reduce our exposure to these risks.", "score": 36.07380299853577, "rank": 12}, {"document_id": "doc-::chunk-19", "d_text": "Pairwise LD, haplotype analysis, and SNP conditioning analysis suggest that these two SNPs in TAGAP are independent susceptibility alleles. Additional fine mapping of this gene and functional genomic studies of these SNPs should provide additional insight into the role of these genes in RA.\nSystemic lupus erythematosus (SLE) is a sexually dimorphic autoimmune disease which is more common in women, but affected men often experience a more severe disease. 
The genetic basis of sexual dimorphism in SLE is not clearly defined. A study was undertaken to examine sex-specific genetic effects among SLE susceptibility loci.\nA total of 18 autosomal genetic susceptibility loci for SLE were genotyped in a large set of patients with SLE and controls of European descent, consisting of 5932 female and 1495 male samples. Sex-specific genetic association analyses were performed. The sex–gene interaction was further validated using parametric and nonparametric methods. Aggregate differences in sex-specific genetic risk were examined by calculating a cumulative genetic risk score for SLE in each individual and comparing the average genetic risk between male and female patients.\nA significantly higher cumulative genetic risk for SLE was observed in men than in women (P = 4.52×10⁻⁸). A significant sex–gene interaction was seen primarily in the human leucocyte antigen (HLA) region but also in IRF5, whereby men with SLE possess a significantly higher frequency of risk alleles than women. The genetic effect observed in KIAA1542 is specific to women with SLE and does not seem to have a role in men.\nThe data indicate that men require a higher cumulative genetic load than women to develop SLE. These observations suggest that sex bias in autoimmunity could be influenced by autosomal genetic susceptibility loci.\nHigh serum interferon α (IFNα) activity is a heritable risk factor for systemic lupus erythematosus (SLE). Auto-antibodies found in SLE form immune complexes which can stimulate IFNα production by activating endosomal Toll-like receptors and interferon regulatory factors (IRFs), including IRF5.
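A cumulative genetic risk score of the kind used in the sexual-dimorphism study above is commonly computed as a sum of risk-allele counts weighted by each locus's log odds ratio. The sketch below uses hypothetical SNP names and effect sizes, not the study's actual 18 loci:

```python
import math

# Weighted genetic risk score: sum over loci of (risk-allele count x log OR).
# SNP names and odds ratios are hypothetical, not the loci from the study.
weights = {
    "SNP_A": math.log(1.5),  # per-allele odds ratio 1.5
    "SNP_B": math.log(1.2),
    "SNP_C": math.log(1.8),
}

def genetic_risk_score(genotype):
    """genotype maps SNP name -> risk-allele count (0, 1, or 2)."""
    return sum(weights[snp] * count for snp, count in genotype.items())

patients = {
    "patient_1": {"SNP_A": 2, "SNP_B": 1, "SNP_C": 0},
    "patient_2": {"SNP_A": 1, "SNP_B": 0, "SNP_C": 2},
}
for pid, genotype in patients.items():
    print(pid, round(genetic_risk_score(genotype), 3))
```

Comparing the mean of these per-individual scores between male and female patients then reduces to a simple two-sample comparison.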
Genetic variation in IRF5 is associated with SLE susceptibility; however, it is unclear how IRF5 functional genetic elements contribute to human disease.", "score": 34.72907028972078, "rank": 13}, {"document_id": "doc-::chunk-1", "d_text": "The Centers for Disease Control and Prevention (CDC) has noted that asthma prevalence is higher among females (8.9 percent compared to 6.5 percent in males) and that women are more likely to die from asthma. The National Institutes of Health statistics show that autoimmune diseases strike women three times more than men.\nA report by the Task Force on Gender, Multiple Sclerosis, and Autoimmunity shows that among people with multiple sclerosis and rheumatoid arthritis, the female to male ratio is between 2:1 and 3:1. With the disease lupus, nine times as many women are affected as men.\nClough is a philosopher of science and epistemology, with a particular focus on feminist theory and gender differences. The focus of her work is to study scientific research and look for the implicit or hidden assumptions that guide that research.\nShe believes the link between hygiene, gender and disease is not just a fluke. \"We are just now beginning to learn about the complex relationship between bacteria and health,\" she says. \"More than 90 percent of the cells in our body are microbial rather than human. It would seem that we have co-evolved with bacteria. We need to explore this relationship more, and not just in terms of eating probiotic yogurt.\"\nThat's why Clough does not recommend that parents feed their daughters spoonfuls of dirt. Just one gram of ordinary uncontaminated soil contains 10 billion microbial cells, so the effects of ingesting dirt are unknown.
\"We obviously do not yet know enough to differentiate between helpful and harmful bacteria,\" she says.\nHowever, Clough said she can easily join in the chorus of voices of health experts who say that more outdoor time for kids is good even if that means the kids get a little dirty. \"Getting everyone, both boys and girls, from an early age to be outdoors as much as possible is something I can get behind,\" she says.", "score": 34.345834609967376, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "Female hormones may be linked to asthma\nFluctuations in female sex hormones could play a role in the development of allergies and asthma, a major review of evidence suggests.\nAnalysis of studies involving more than 500,000 women highlights a link between asthma symptoms and key life changes such as puberty and menopause.\nFurther investigation could help explain why asthma is more common in boys than girls in childhood, but more common in teenage girls and women following puberty.\nExperts say, however, that the relationship is inconclusive and call for more research.\nAsthma affects more than five million people in the UK. 
It is a disease of the airways that can seriously restrict breathing and is often associated with allergies.\nMany women report that their asthma symptoms change with their menstrual cycle, which may be down to variations in levels of hormones, including oestrogen and progesterone, but the link is unclear.\nAsthma and allergy symptoms are often affected by life events such as puberty and menopause, but the reasons behind this are unclear.\nResearchers at the University of Edinburgh reviewed more than 50 studies of women with asthma from puberty to 75 years of age.\nThey found that starting periods before turning 11 years old, as well as irregular periods, was associated with a higher rate of asthma.\nOnset of menopause – when periods stop and oestrogen and progesterone levels fluctuate – was also associated with a higher chance of having asthma compared with pre-menopause.\nScientists say the link between asthma and hormonal drugs including HRT and contraceptives is unclear and women should continue to take medications as prescribed by their GP.\nThe researchers plan to study the biological processes through which sex hormones might play a role in asthma and allergy.\nIn carrying out this systematic review, we noted that there were many differences between studies investigating hormonal treatments in terms of the type and dose of hormone, and the way patients took the treatment. This made it difficult to draw firm conclusions from the results. 
We are now undertaking a project to clarify the role of contraceptives and HRT in asthma and allergy symptoms.\nThe study, published in Journal of Allergy and Clinical Immunology, was funded by the Chief Scientist Office, part of the Scottish Government Health Directorates.", "score": 33.863535157422476, "rank": 15}, {"document_id": "doc-::chunk-1", "d_text": "Note that Wedekind is not the first to offer this\nhypothesis: it was Carole Ober who first presented such a scenario in\n1993 during the annual meeting of a Genetics group--sorry I forgot their\n>There are major aspects of immune function and dysfunction which\n>appear to be markedly different in males and females. This probably\n>has more to do with the close coupling of the physiological axes\n>that are involved with reproduction (e.g. LHRH-LH) with the HPA axis\n>and the autonomic nervous system than with specific patterns of\n>antigens arising from X and Y chromosomes. The immune system in\n>females must also be designed not to reject the immunologically\n>different sperm and fetus, by a mechanism which is still not\nRecently it came to my attention that gonadotropin-releasing hormone\n(GnRH or as Alan states: LHRH) receptors have been found in the adrenal\nglands of rats. If they turn up in other mammals, and in humans, this\nwould suggest to me that GnRH may be having direct effects on the\nadrenals, which might help to explain some of the immunological\n>There is considerable speculation that homosexuality arises from a\n>developmental variation in the fetal brain that is the result of\n>maternal (and or fetal) hormonal and immunological dysregulation.\n>But it is still just speculation.\nNonetheless, this speculation has grounding in the early embryonic\nmigration of GnRH neurosecretory neurons--a process that may be effected\nby many factors--some of which Teresa is most familiar with. 
My focus was\non how many GnRH neurosecretory neurons innervate the hypothalamus (and,\nin general) the limbic system. Not enough empirical data so far, or I\nhaven't found it. Still, it seems likely that the number of these neurons\ncontributes to differences in hypothalamic GnRH pulsatility, which seems\nvery important not only to sex differences, but to differences in sexual\norientation. Of course, this gets into some complex and controversial\nissues that are better discussed after reading either a book, or a\njournal article/review. The Scent of Eros was written for a general\n(though educated) audience. I have submitted a more technical paper.", "score": 33.48678858091155, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "This web page was produced as an assignment for an undergraduate course at Davidson College.\nSex Differences in the Gut Microbiome\nFrom the August, 18 2012 cover of \"The Economist\"\nThe following is a review of the article \"Sex Differences in the Gut Microbiome Drive Hormone-Dependent Regulation of Autoimmunity\" by Janet Markle et al. A full citation and link to the article can be found at the bottom of this webpage. All quotations, facts and data presented in this review are drawn from this article unless otherwise cited.\nA Brief Summary...\nAutoimmune disorders, such as type 1 diabetes, have been shown to be more prevalent in women. Markle et al. look at this discrepancy and begin to untangle the relationship between such sex biases, microbiomes, and hormone levels. At the center of the paper lies the observation that nonobese diabetic (NOD) mice kept in GF (germ free) conditions do not exhibit the autoimmune disorder sex bias towards females. Conversely, mice kept in SPF (specific pathogen free) conditions do exhibit the sex bias towards female development of T1D. 
Thus, it seems that microbes are somehow involved in the different susceptibilities of the sexes to autoimmune diseases.\nTo further analyze this relationship, Markle et al. gavage female mice with either female or male gut microbiomes. The analysis of these mice shows that female mice gavaged with male microbiomes exhibit T1D less frequently than untreated females and females gavaged with female microbiomes. Females gavaged with male microbiomes are also shown to have lower instances of T1D precursor phenotypes, such as insulitis (inflamed islet cells) and insulin-specific autoantibodies (Aabs). These results suggest that differences in microbiomes are at play in the sex-dependent susceptibilities to T1D and other autoimmune disorders.\nAdditionally, Markle et al.’s data indicate that the transplanted male microbiomes affect the recipient female’s testosterone levels. Females gavaged with male microbiomes exhibit higher testosterone levels than control females and those that receive female microbiomes. When females gavaged with male microbiomes are treated with a testosterone inhibitor, their increased protection from T1D and its precursor phenotypes decreases.\nThis paper does a lot. This five-page paper represents the conglomeration of a dozen different experiments.", "score": 32.77261691420561, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "Apparently this is all a bit of a mystery. Those in the know think it’s triggered by a combination of genetic, environmental and, possibly, hormonal factors. This means that some people are born with specific genes that make them more vulnerable to a dodgy immune system. So everything could be ticking along just fine for years and then a common virus triggers the immune system and stops it working properly.\nThe female hormone oestrogen can also cause issues, which is why Sjögren’s Syndrome’s symptoms can often rear their ugly head around the start of the menopause, when oestrogen levels begin to fall.
As if women didn’t have enough to deal with at this point.", "score": 32.22360104135514, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "-- Robert Preidt\nWEDNESDAY, Dec. 4, 2013 (HealthDay News) -- Women with pollen allergies may be at increased risk for blood cancers such as leukemia and lymphoma, a new study suggests.\nResearchers did not uncover the same link in men. This suggests there is something unique in women that causes chronic allergy-related stimulation of the immune system to increase vulnerability to the development of blood cancers, the study authors said.\nThe study included 66,000 people, aged 50 to 76, who were followed for an average of eight years. During the follow-up period, 681 people developed a blood cancer. These people were more likely to be male, to have two or more first-degree relatives with a history of leukemia or lymphoma, to be less active and to rate their health status as poor.\nAmong women, however, a history of allergies to plants, grass and trees was significantly associated with a higher risk of blood cancers. The reason for this is unknown but may have something to do with the effects of hormones, according to the authors of the study in the December issue of the American Journal of Hematology.\n\"To the best of our knowledge, ours is the first study to suggest important gender differences in the association between allergies and [blood cancers],\" wrote study first author Dr. Mazyar Shadman, a senior fellow in the clinical research division at Fred Hutchinson Cancer Research Center in Seattle.\nShadman noted that there is great scientific interest in the immune system's potential role in cancer development.\n\"If your immune system is over-reactive, then you have problems; if it's under-reactive, you're going to have problems.
Increasing evidence indicates that dysregulation of the immune system, such as you find in allergic and autoimmune disorders, can affect survival of cells in developing tumors,\" Shadman said in a news release from the center.\nWhile the study found an association between pollen allergies and blood cancers among women, it did not prove cause and effect.\nThe U.S. National Cancer Institute has more about blood cancers.
An important unifying theme in autoimmune diseases is a high prevalence in women . Conservative estimates indicate that 6.7 million or 78.8% of the persons with autoimmune diseases are women .|\nSoon after autoimmune diseases were first recognized more than a century ago, researchers began to associate them with viral and bacterial infections. Autoimmune diseases tend to cluster in families and in individuals (a person with one autoimmune disease is more likely to get another), which indicates that common mechanisms are involved in disease susceptibility. Studies of the prevalence of autoimmune disease in monozygotic twins show that genetic as well as environmental factors (such as infection) are necessary for the disease to develop. Genetic factors are important in the development of autoimmune disease, since such diseases develop in certain strains of mice (e.g., systemic lupus erythematosus or lupus in MRL mice) without any apparent infectious environmental trigger. However, a body of circumstantial evidence links diabetes, multiple sclerosis, myocarditis, and many other autoimmune diseases with preceding infections . More often, many different microorganisms have been associated with a single autoimmune disease, which indicates that more than one infectious agent can induce the same disease through similar mechanisms . Since infections generally occur well before the onset of symptoms of autoimmune disease, clinically linking a specific causative agent to a particular autoimmune disease is difficult . This difficulty raises the question of whether autoimmune diseases really can be attributed to infections.", "score": 31.53102052459454, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "Current treatments for lupus carry a serious risk of infection and malignancy that can contribute to morbidity and mortality. 
Further research can help identify targets for new and potentially safer therapies, according to the study.\n“As VGLL3 appears to be not only constitutively active in women but also turned on in men with SLE, targeting VGLL3 may prove beneficial in patients of both sexes,” the researchers wrote.\nBilli AC, Gharaee-Kermani M, Fullmer J, et al. The female-biased factor VGLL3 drives cutaneous and systemic autoimmunity. JCI Insight. 2019. doi: 10.1172/jci.insight.127291
This is important because many autoimmune diseases are caused by self-reactive antibodies produced by such B cells.\nThe common thread is development of antibodies to one’s own tissues.\nWhy would the body become “allergic” to itself?\nImmunity begins at birth. Soon after giving birth, female mammals produce colostrum, which is a milk-like substance that jump-starts a newborn’s immune system. Human breast milk contains large quantities of secretory IgA, lysozyme-secreting macrophages, and lymphocytes (T cells and B cells). The lymphocytes release compounds that strengthen the immune response of the infant. There is evidence that the protection given by breast milk lasts for years.10\nOur lymphocytes react to all our tissues. Why, then, does the process sometimes become pathologic? Why don’t we react to our own tissues all the time?\nOur immune system is designed to protect us against that which is foreign, while recognizing that which is native. If, for some reason, our own tissues become contaminated with something foreign which attaches itself to the proteins of the tissues and changes the configuration of the proteins, the body could very well become confused.", "score": 30.885921537437806, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "Why Women Get More Fibromyalgia, IBS, and Anxiety Than Men\nBy Dr. David M. Brady and Danielle Moyer, MS\nThe occurrence of fibromyalgia is 10 to 20 times higher in women than in men. Those women are also more likely than men to have severe tender points of pain, “pain all over”, sleep disturbances, and fatigue.\nThe incidence of Irritable Bowel Syndrome (IBS) is higher in women when compared to men.
Internationally, the prevalence of IBS is 67% higher in women.\nAnxiety disorders are more than twice as likely to occur in women when compared to men.\nThe question is: why?\nThough the reasons are still being studied, research suggests that an interaction between biology, psychology, and sociocultural factors leads to higher occurrences of these diseases in women. Of particular interest is the role of women’s hormonal and nervous systems, which respond somewhat differently to stress and trauma than those of men.\nResearch by the American Psychological Association looked at 290 studies between 1980 and 2005 to determine if men or women were more at risk for potentially traumatic events and PTSD (Post-traumatic Stress Disorder). The results concluded that, although men experience more traumatic events on average than women, women are more likely to meet diagnostic criteria for PTSD. PTSD is an anxiety disorder caused by a traumatic event and symptoms include “re-experiencing the trauma, avoidance and numbing and hyperarousal.” PTSD occurs in 10-12% of women and 5-6% of men, making the rate in women almost double. Women’s PTSD also tends to last longer (on average four years), whereas men’s lasts one year on average.\nThe types of trauma men and women experience are different as well. Men are more likely to experience trauma from natural disasters, human-caused disasters, accidents, and combat, whereas women are more likely to experience trauma from domestic violence, sexual abuse, and sexual assault. Sexual trauma has been shown to be particularly toxic to mental health, and can typically begin at a young age when the brain is still developing. This can impact a woman's fear and stress response well into adulthood.\nTrauma, stressors, and/or PTSD alter pain processing and incoming stimuli in an individual.
They can frequently cause an excessive stress response, significant and chronic pain, and central sensitization disorders.", "score": 30.503999751119117, "rank": 24}, {"document_id": "doc-::chunk-0", "d_text": "Women with airborne allergies to plants, grass, and trees may have a moderate increased risk of developing blood cancers, according to a new study.\nWomen with airborne allergies to plants, different types of grass, and trees may have a moderate increased risk of developing blood cancers, according to a new cohort study. The same effect was not seen in men with allergies. The results are published in the American Journal of Hematology.\n“While no causality can be inferred, these results suggest a possible gender-specific role of chronic stimulation of the immune system for the development of hematologic cancers,” said the authors in their discussion.\nWhether responses of the immune system to allergens or autoimmune responses correlate with the development of malignant blood cells is not clear. The link between allergies and hematologic malignancies has been previously analyzed in epidemiologic studies, but results have been inconsistent. Case-controlled studies have demonstrated an inverse relationship, while prospective studies have shown an increased risk of hematologic malignancies in those with allergies.\nIn the current study, Mazyar Shadman, MD, MPH, a senior fellow in the clinical research division at Fred Hutchinson Cancer Research Center, and colleagues analyzed a prospective cohort of 66,212 participants between the ages of 50 and 76 in the Vitamins and Lifestyle (VITAL) study. All participants were from the western part of the state of Washington, were enrolled between 2000 and 2002, and were followed through 2009.\nA total of 681 subjects developed a hematologic malignancy during the 8-year follow-up period.
The participants diagnosed with these cancers were more likely to be men, to have two or more close relatives with a history of leukemia or lymphoma, and to have reported their health status as low.\nAfter taking into account factors associated with blood cancer risk, a history of airborne allergy was associated with an increased risk of blood cancer (hazard ratio [HR] = 1.19; P = .039). The link was only for those individuals who had allergies to plants, trees, and grass, and was strongest for mature B-cell lymphomas (HR = 1.5; P = .005). This association was seen in women (HR = 1.47; P = .004), but not men (HR = 1.03; P = .782).\nThe authors speculate that the increased risk for blood cancer development among women may reflect their inherent lower baseline risk of developing these cancers.", "score": 30.314073756232496, "rank": 25}, {"document_id": "doc-::chunk-1", "d_text": "Increasing prevalence in allergic diseases has been observed in many countries, especially in Western but also many developing countries. Sex specific differences in prevalence of allergic rhinitis and asthma over the life span were recognized, showing a higher prevalence of allergic rhinitis and asthma as single entities in boys than in girls during childhood followed by an equal distribution in adolescence [2, 3]. In adulthood more women than men are affected by asthma [4, 5]. In a prospective cohort study, the prevalence of coexisting eczema, allergic rhinitis, and asthma in the same child was more common than expected by chance alone and was not only attributable to IgE sensitization, suggesting that these diseases share causal mechanisms. In a systematic review of studies across the globe we showed a sex-switch in prevalence of allergic rhinitis in population-based studies. Since research on multimorbidity, i.e.
the coexistence of 2 or more allergic diseases in the same individual, is sparse, the aim of this systematic review with meta-analyses was to examine sex-specific differences in the prevalence of coexisting allergic rhinitis and asthma, from childhood through adolescence into adulthood.\nData sources, search strategy, and selection criteria\nWe conducted a systematic literature search using the online databases MEDLINE and EMBASE. MeSH terms were used in conjunction with keywords searched in the title and abstract. We restricted our search to studies published between January 2000 and April 2014. There was no restriction on the language of publication. The protocol for our systematic review was developed with guidance from the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) statement. It can be accessed at PROSPERO (http://www.crd.york.ac.uk/PROSPERO/, registration number CRD42016036105). To manage the identified publications, we used EndNote X7® (Thomson Reuters) bibliographic database.\nInclusion and exclusion criteria\nThe selection of studies was performed according to pre-set inclusion and exclusion criteria. Since the present study is a post hoc analysis of a larger review considering the difference in prevalence for allergic rhinitis only, we chose broad inclusion criteria to reach most of the available information and to increase generalisability.", "score": 30.080900793558747, "rank": 26}, {"document_id": "doc-::chunk-7", "d_text": "In a French observational study of patients with asthma, more than 50% of participants had concomitant allergic rhinitis. Several narrative reviews showed this change in sex predominance favoring females during the transition from childhood to adulthood for diverse allergy-related diseases [4, 5, 25, 26].
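The hazard ratios quoted above (e.g. HR = 1.19; P = .039) are reported without confidence intervals. When only a point estimate and a two-sided P value are given, an approximate 95% CI can be back-calculated on the log scale using a standard normal approximation. A minimal sketch, not part of either study's analysis (the function name is illustrative):

```python
import math
from statistics import NormalDist

def ci_from_hr_p(hr, p, level=0.95):
    """Approximate confidence interval for a hazard ratio, back-calculated
    from the point estimate and its two-sided P value (normal approximation
    on the log scale)."""
    z_p = NormalDist().inv_cdf(1 - p / 2)   # z statistic implied by the P value
    se = abs(math.log(hr)) / z_p            # standard error of log(HR)
    z_ci = NormalDist().inv_cdf(0.5 + level / 2)
    lower = math.exp(math.log(hr) - z_ci * se)
    upper = math.exp(math.log(hr) + z_ci * se)
    return lower, upper

lower, upper = ci_from_hr_p(1.19, 0.039)
print(f"HR 1.19 (95% CI {lower:.2f} to {upper:.2f})")
```

For HR = 1.19 and P = .039 this yields a 95% CI of roughly 1.01 to 1.40, i.e. an interval that just excludes 1, consistent with the reported P value.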
Therefore, and since asthma and rhinitis coexist more often than expected, we hypothesized that concomitant allergic rhinitis and asthma may also undergo a similar sex-shift in prevalence during puberty.\nOur results support this hypothesis to some extent. However, the limited number of studies found in adults did not allow us to establish a clear tendency towards a male or female predominance, but rather a balance between the sexes. Our pooled estimates relied only upon data from studies conducted in Asia (N = 7), South America (N = 2) and Africa (N = 1). In Pinart et al., a sex switch for allergic rhinitis prevalence around puberty was not found in studies conducted in Asia. Five of six studies in the youngest age group (0–10 years) were from Asia, whereas no Asian studies were found for adolescents (11–17 years), suggesting a considerable bias.\nConcerning possible mechanisms underlying a higher prevalence of allergic diseases in women during and after adolescence, higher levels of sex hormones such as estrogen and progesterone were suggested to be of central importance. Sex hormones play a role in the homeostasis of immunity. Estrogen and progesterone enhance type 2 and suppress type 1 responses in females, whereas testosterone suppresses type 2 responses in males. Experiments in rodents showed an effect of estrogens on mast cell activation and the development of allergic sensitization, while progesterone can suppress histamine release but potentiate IgE induction. Similarly, for asthma, sex differences have been reported for different phenotypes and symptom profiles in epidemiological, clinical and experimental studies; however, the aetiology remains largely unclear [30,31,32,33].\nRisk of bias\nWe tried to identify all population-based studies reporting prevalence of coexisting allergic rhinitis and asthma.
Given that such observational studies require large samples, it seems unlikely that a study of this dimension would have been published and not identified by our search. Furthermore, in population-based prevalence studies publication bias seems to be less of a concern than e.g. in interventional studies.", "score": 29.547119935553052, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "But, a biologic explanation is also possible: hormonal effects on an imbalanced immune system may be the reason for the differential results. Further studies are needed to understand whether there is a biologic cause for the observed difference and its mechanism.\nShadman and colleagues state that common allergy treatments such as antihistamines and leukotriene agonists have not been shown to be linked to oncogenesis and are an unlikely explanation for the observations.\nThe study was able to analyze a large cohort of participants; however, the allergies were self-reported and a detailed analysis of each person's history of allergies was not available; only current allergies were reported in the study.", "score": 29.54266024043505, "rank": 28}, {"document_id": "doc-::chunk-6", "d_text": "In the meta-analysis for the prevalence of asthma only, considerable heterogeneity was found (I² = 75% for ages 0–10; I² = 81% for ages 11–17), resulting in an overall I² of 85%. Little or no heterogeneity was seen in studies reporting results for allergic rhinitis only in children and adolescents (I² = 27% for ages 0–10; I² = 0% for ages 11–17) compared to studies including adults (I² = 73% for ages 18–79). All studies were of moderate quality (4 points) except for Desalu et al., which was rated as high quality (5 points), see Additional file 1: Table E4.\nWe found a clear ‘sex-switch’ in the prevalence of coexisting allergic rhinitis and asthma from a male predominance in childhood to a female predominance in adolescence.
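The I² statistic quoted above expresses the share of between-study variability attributable to heterogeneity rather than chance; it is derived from Cochran's Q as I² = max(0, (Q − df)/Q) × 100%. A minimal sketch using inverse-variance weights; the study effects and standard errors below are made-up illustrative numbers, not data from the review:

```python
def i_squared(effects, ses):
    """Cochran's Q and the I^2 heterogeneity statistic for study effect
    estimates with their standard errors (inverse-variance weighting)."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Four hypothetical study estimates (e.g. log prevalence ratios):
q, i2 = i_squared([0.10, 0.35, 0.60, 0.15], [0.08, 0.10, 0.09, 0.12])
print(f"Q = {q:.1f}, I^2 = {i2:.0f}%")
```

An I² near 0% means the observed differences are compatible with sampling error alone, while values of 75% or more, as reported for the asthma-only analyses, indicate considerable heterogeneity across studies.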
Similar trends of these sex-specific prevalence patterns were observed in participants with asthma only and those with allergic rhinitis only. Two studies in adults showed similar prevalence rates in both sexes.\nComparison with other studies\nIn a global systematic review with meta-analysis we showed sex-related differences in rhinitis prevalence, with a prevalence shift from a male predominance at around puberty to a female predominance thereafter. Similarly, a retrospective analysis of the ECRHS data from 16 European countries showed a transition for asthma from a male predominance in childhood (0–10 years), followed by an equal gender distribution in adolescence (10–15 years), leading to a female predominance in adults (> 15 years). Sex-specific rhinitis and comorbid asthma prevalence data for older men and women are very scarce. Interestingly, according to a large observational all-female cohort, the Nurses’ Health Study in USA, the age-adjusted risk of asthma seems to be increased in postmenopausal women who ever or currently used hormone replacement therapy (i.e. conjugated estrogens with or without progesterone) compared to those who never used such hormones. However, allergic rhinitis with and without comorbid asthma has not been examined. In a cohort study of 509 children with allergic rhinitis from Turkey (mean age 7.2 ± 3.5 years, age range 1.5–18 years) Dogru showed that asthma was prevalent in the majority (53.2%) of these children.", "score": 28.79068847375856, "rank": 29}, {"document_id": "doc-::chunk-4", "d_text": "\"Beyond the dose, whether exposure to endotoxin (or infectious agents) is protective or harmful is likely to depend on a complex mixture of the timing of exposure during the life cycle, environmental cofactors, and genetics\" 6 (p.
930).\nThe evidence that stimulation by infectious agents creates a protective effect against allergies suggests complex etiological mechanisms that are far from being fully elucidated.\nThe biological theories proposed to explain the findings of studies relating the increase in allergic diseases to the decline in infectious diseases are connected to advances in immunology that led to descriptions of CD4 T-helper (Th) cells, the immune system's mediators. Recent studies seek to reveal how environmental changes can adversely affect these cells 17.\nThe immunological explanation of the hygiene hypothesis is based on the discovery and classification of T-helper (Th) lymphocytes, according to the cytokines (protein hormones that play an important role in activating the immune responses dependent on T cells) secreted by these cells. The differential activation of these cells is important in the development of immune-mediated diseases. Th1 and Th2 cells are antagonistic. Allergic diseases arise as a result of a systemic imbalance characterized by a predominance of Th2 cells, whereas infectious diseases and many autoimmune diseases are driven by a T-helper cell response with a strong predominance of Th1. The explanation for this inverse relationship between allergic and infectious diseases could be that in the absence of microbial stimuli (which elicit Th1 predominance), the immune system would alter itself in favor of Th2 responses. The entire research effort was directed towards discerning the environmental and nutritional factors that would alter the immune system's function in favor of Th1 or Th2 responses 17.\nAutoimmune diseases (multiple sclerosis, insulin-dependent diabetes, Crohn disease) are attributed to a Th1 deviation. Contrary to the theory of antagonism between Th1 and Th2 cells, recent studies attribute these diseases to the same environmental and nutritional variations related to atopic diseases 17.
The prevalence of autoimmune diseases is also increasing, and this trend has occurred over the same period as the tendency towards allergic diseases 1; autoimmune diseases also tend to appear more in small families with better socioeconomic conditions 17.\nDrawing support from these observations, Simpson et al. 17 propose that environmental changes in developed countries increased the susceptibility to immune-mediated diseases in general, favoring both Th1 and Th2 responses.", "score": 28.195857199051293, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "Scientists have wondered why people in Western countries get more asthma, allergies and autoimmune diseases than people from countries where hygiene is a problem. So William Parker, a researcher from Duke University in North Carolina, collected wild rats to compare their immune systems with those of animals raised in clean, laboratory conditions. He explains, "There's a really good chance that parasites and other infections change the immune system in a way that you don't have a propensity or a tendency to get allergies or autoimmune disease."\nThe wild rats - not surprisingly - were riddled with diseases and parasites. But Parker found - surprisingly - that they didn't react to common allergens the way the lab rats did. "So our wild rats' immune system is having a lot of things to worry about and it doesn't sweat the little things." He gives an example of a common allergen. "A little pollen grain that's coming by, it's going to just ignore that, whereas the person who's living in a very clean environment, or the lab rat, might be very concerned about a pollen grain and in fact might become allergic against it."\nThis finding is in line with a theory that says people from countries where there is widespread use of antiseptics end up developing more allergies than people from places with less sanitary living conditions.
Parker says his findings also suggest people's immune systems need to be challenged more by dirt and disease during childhood. But he says that's difficult to test in humans. "You know a lot of people say if you let your kids get dirty they won't get allergies and autoimmune diseases. But doctors don't recommend that because we live in such crowded conditions that we get other diseases."\nParker is collecting more wild rats for further study.", "score": 28.190411592354483, "rank": 31}, {"document_id": "doc-::chunk-10", "d_text": "Sex and atopy influences on the natural history of rhinitis. Curr Opin Allergy Clin Immunol. 2012;12:7–12.\nPinart M, Keller T, Reich A, Fröhlich M, Cabieses B, Hohmann C, Postma D, Bousquet J, Antó J, Keil T. Sex-related allergic rhinitis prevalence switch from childhood to adulthood: a systematic review and meta-analysis. Int Arch Allergy Immunol. 2017;172:224–35.\nBecklake MR, Kauffmann F. Gender differences in airway behaviour over the human life span. Thorax. 1999;54:1119–38.\nPostma DS. Gender differences in asthma development and progression. Gend Med. 2007;4 Suppl B:S133–46.\nPinart M, Benet M, Annesi-Maesano I, von Berg A, Berdel D, Carlsen KCL, Carlsen KH, Bindslev-Jensen C, Eller E, Fantini MP, et al. Comorbidity of eczema, rhinitis, and asthma in IgE-sensitised and non-IgE-sensitised children in MeDALL: A population-based cohort study. Lancet Respir Med. 2014;2:131–40.\nMoher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg. 2010;8:336–41.\nBrito Rde C, da Silva GA, Motta ME, Brito MC. The association of rhinoconjunctivitis and asthma symptoms in adolescents. Rev Port Pneumol. 2009;15:613–28.\nDesalu OO, Salami AK, Iseh KR, Oluboyo PO. Prevalence of self reported allergic rhinitis and its relationship with asthma among adult Nigerians. J Investig Allergol Clin Immunol. 2009;19:474–80.\nHong S, Son DK, Lim WR, Kim SH, Kim H, Yum HY, Kwon H.
The prevalence of atopic dermatitis, asthma, and allergic rhinitis and the comorbidity of allergic diseases in children.", "score": 28.168579818747236, "rank": 32}, {"document_id": "doc-::chunk-8", "d_text": "It is hardly a profound statement to observe that women are different than men, and very frequently the differences makes it important for health care practitioners to be aware of health problems that occur more commonly in the female population. Awareness of the above will often lead to better and more appropriate care for the female patient.\nThere are many diseases and disorders which are quite commonly seen in female patients that are much less commonly seen in men, such as the autoimmune diseases, however, women live significantly longer than do men. Unfortunately for the female population, they do experience more illness and have more \"sick time\" than do men on average.\nIn no way is this presentation to be considered \"exhaustive\" on the subject of female health Issues. Rather, the following is a discussion of several diseases and disorders which are commonly diagnosed in the optometric practice and which occur more frequently in the female patient than in the male patient. This is particularly important as optometrists are recognized as independent health care practitioners and are quite often the doctor of first contact when symptoms lead to a doctor's office visit by the patient.", "score": 27.564133414223125, "rank": 33}, {"document_id": "doc-::chunk-1", "d_text": "The dominant theory is that the immune system creates these antibodies against some kind of invader that “looks like” some type of body tissue, meaning that antibodies created to fight off the intruder will bind to that specific type of tissue as well. This is called “molecular mimicry.”\nMolecular mimicry makes sense, has been demonstrated to occur, and can explain a great deal about autoimmunity, including perhaps why an infection is often the apparent trigger for autoimmune disease. 
But molecular mimicry by itself cannot explain the totality of autoimmune phenomena. First off, in an age of decreasing incidence of acute infectious disease, it cannot explain why autoimmune conditions used to be rare and are getting more and more common. The immune system has been creating antibodies to fight off pathogen invasion in the body since humans first walked the earth. Why have our bodies suddenly become so bad at distinguishing self from invading pathogen?\nDoes Vaccine-Stimulated Immune Activation Play a Role?\nThis increase could perhaps be explained by the recent dramatic increase in immune system activation stimulated by vaccination, but on the surface that doesn't explain the patterns we see in timing of onset of autoimmune disease, or why, once someone has developed one autoimmune condition, they are at very high risk of developing another, and another, and often yet another. Why does someone with rheumatoid arthritis, for instance, in which antibodies attack the joints, often go on to develop scleroderma, where antibodies attack skin tissue? The joint-destroying antibodies don't suddenly gain the ability to bind to skin cells. The immune system has begun producing different antibodies, which attack different tissues.\nThis phenomenon is usually glossed over with vague terms like “overactive immune system” or “Th1/Th2 imbalance.” While the immune system is certainly “overactive” in such cases, saying so is like saying a hypothyroid condition is caused by an underactive thyroid; it merely restates the obvious without providing insight. And many folks with autoimmunity conditions, like me, display symptoms of both Th1 and Th2 dominance. So, despite the scientific community's fascination with the concept of Th1 and Th2 balance, I find it essentially useless.
Also unexplained by “overactivity” of the immune system is why women develop autoimmune conditions at about three times the rate that men do.\nAutoimmunity Close to Home\nNone of this is merely academic to me.", "score": 27.39483882988056, "rank": 34}, {"document_id": "doc-::chunk-0", "d_text": "Little girls growing up in western society are expected to be neat and tidy, and one researcher who studies science and gender differences thinks that emphasis may contribute to higher rates of certain diseases in adult women. The link between increased hygiene and sanitation and higher rates of asthma, allergies and autoimmune disorders is known as the "hygiene hypothesis" and the link is well-documented. Yet the role of gender is rarely explored as part of this phenomenon.\nOregon State University philosopher Sharyn Clough thinks researchers need to dig deeper. In her new study, published in the journal Social Science & Medicine, she points out that women have higher rates of allergies and asthma, and many autoimmune disorders. However, there is no agreed-upon explanation for these patterns. Clough offers a new explanation.\nClough documents a variety of sociological and anthropological research showing that our society socializes young girls differently from young boys. In particular, she notes, girls are generally kept from getting dirty compared to boys.\n"Girls tend to be dressed more in clothing that is not supposed to get dirty, girls tend to play indoors more than boys, and girls' playtime is more often supervised by parents," said Clough, adding that this is likely to result in girls staying cleaner. "There is a significant difference in the types and amounts of germs that girls and boys are exposed to, and this might explain some of the health differences we find between women and men."\nHowever, that doesn't mean that parents should let their daughters go out into the back yard and eat dirt, Clough points out.
"What I am proposing is new ways of looking at old studies," she said. "The hygiene hypothesis is well-supported, but what I am hoping is that the epidemiologists and clinicians go back and examine their data through the lens of gender."\nThe "hygiene hypothesis" links the recent rise in incidence of asthma, allergies, and autoimmune disorders such as Crohn's disease and rheumatoid arthritis, with particular geographical and environmental locations, in particular urban, industrialized nations. Many scholarly studies have noted that as countries become more industrial and urban, rates of these diseases rise. For instance, the rate of Crohn's disease is on the rise in India as sanitation improves and industrialization increases.", "score": 26.9697449642274, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Women's immune system genes operate differently from men's\nA new technology for studying the human body's vast system for toggling genes on and off reveals that genes associated with the immune system toggle more frequently, and those same genes operate differently in women and men.", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-1", "d_text": "The reasons for this trend are not fully understood, but they are believed to be caused by dietary habits, such as increased consumption of processed food, and the higher amounts of preservatives, flavor enhancers, and other additives in our food. Some genetic and demographic factors are believed to play a role as well.\nSome studies have shown that the percentage of women who think they have food allergy is much higher than that of women whose food allergy has actually been diagnosed. Since food intolerance and allergy are two different bodily reactions, it is essential to properly diagnose them to prevent serious health complications.
Be sure to check out the available allergy tests that can clear up any uncertainties and find the cause of your allergies at last.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-4", "d_text": "Here are three resources that provide hope and healing for the vital topics discussed in this article:\nFor more information on epigenetics and the influence of environmental factors on your health check out “Epigenetics, Fibromyalgia, and You!”\nFor positive tips on how to deal with estrogen dominance, check out Christiane Northrup’s “7 Ways to Decrease Estrogen Dominance Including Spiritual and Holistic Options.”\nFor tips on building a stronger immune system, read “Is the Fibromyalgia Immune System Compromised?” – and – “Linking Fibromyalgia, the Flu Season, and the Top 3 Immune System Destroyers”\n- Autoimmune Disease in Women\n- Is fibromyalgia an autoimmune disorder of endogenous vasoactive neuropeptides?\n- Women’s immune genes are regulated differently to men’s, study finds\n- Stress Brings Out the Difference in Male, Female Brains\n- His stress is not like her stress\n- His stress is not like her stress\n- What Are the Symptoms of Estrogen Dominance?\n- Stress and hormones\n- Genetic and hormonal factors in female-biased autoimmunity\nSue Ingebretson is the Natural Healing Editor for ProHealth.com as well as a frequent contributor to ProHealth’s Fibromyalgia site. She’s an Amazon best-selling author, speaker, and workshop leader. Additionally, Sue is an Integrative Nutrition & Health Coach, a Certified Nutritional Therapist, a Master NLP Practitioner, and the director of program development for the Fibromyalgia and Chronic Pain Center at California State University, Fullerton. You can find out more and contact Sue at www.RebuildingWellness.com.\nWould you like to find out more about the effects of STRESS on your body? Download Sue’s free Is Stress Making You Sick? 
guide and discover your own Stress Profile by taking the surveys provided in this detailed 23-page report.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "The topic of food allergies and intolerance has gained worldwide attention in recent years, because their prevalence has been on the rise. As medicine advances, so does our understanding of the immune system and the effects certain food substances have on the body. Many menopausal women use the terms food intolerance and allergy interchangeably, but the truth is they differ a great deal. Let's take a look at food intolerance and allergies to better understand them and see how they are linked.\nFood intolerance or sensitivity happens when the digestive system does not break down certain foods properly. There can be various underlying causes of food intolerance, such as an absence of an enzyme to digest some foods. Examples include lactose intolerance and conditions such as irritable bowel syndrome (IBS), or gluten intolerance in celiac disease.\nFoods causing intolerance:\n- Dairy products\n- Egg whites\n- Citrus fruit\n- Red wine\nSymptoms of food intolerance:\n- Breathing problems\n- Stomach upset\nIdentifying the food that your body does not tolerate well might be tricky. Women are usually advised to keep a diary of the food they eat throughout the day, so if they start feeling sick, they can immediately relate it to something they just ate. Once identified, that food is best omitted, though typically eating small amounts of it can be tolerated without problems.\nFood allergy happens when there is an overreaction of the immune system to a protein found in certain foods or beverages.
The body treats it as a toxic and harmful substance and initiates a complex cascade of reactions to get rid of it.\n- Fish and shellfish\n- Tree nuts\nSymptoms of food allergies:\n- Dizziness and loss of consciousness\n- Difficulty breathing and talking\n- Swelling of the mouth and face\n- Sore eyes and throat\n- Itchy skin\nHaving a food allergy can be life-threatening, because it may lead to anaphylaxis, which is a severe whole-body allergic reaction to an allergen. Often patients with diagnosed food allergy have to carry an epinephrine shot at all times and self-administer it in case of a severe allergic reaction to prevent anaphylactic shock, which can be fatal if untreated.\nThe occurrence of food intolerance and food allergy has been increasing around the world in recent years.", "score": 26.609527212328054, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "Besides Graves' disease, MS, and diabetes, the incidence of Crohn's disease, atopic dermatitis, rhinitis, and asthma has also risen in recent years, particularly in children.\nThe incidence of allergic and autoimmune diseases isn't evenly distributed among geographic regions or ethnic groups. However, a decrease in incidence is seen from north to south in the Northern Hemisphere and from south to north in the Southern Hemisphere. This could be accounted for by the prevalence of adaptive immune system HLA genes seen in different populations, for instance the low incidence of immune system genes that provide susceptibility to diabetes in Japan and the high incidence of these genes in Sardinia.\nHowever, the genetic influences are considered small compared to the environmental contributions including access to medical care, antibiotics and vaccinations.
This is supported by the low incidence of systemic lupus erythematosus among western Africans compared to black Americans of the same ancestry.\nDaycare Decreases Risk\nChildren attending daycare centers, who presumably have more infectious exposure, also have a lower incidence of autoimmune asthma than children in small families who do not attend daycare facilities. Children who use antibiotics during the first year of life are also reported to have a higher incidence of autoimmune asthma in later life. This is related to the change in immune system chemicals called cytokines caused by the use of antibiotics.\nChildren raised in rural areas who have more exposure to farm animals and cow's milk also have a lower incidence of autoimmune diseases. Children born by caesarean section or who have isolated living conditions have a higher incidence of type 1 diabetes, whereas children exposed to lactobacillus vaginal flora at birth have a lower incidence of autoimmune conditions, particularly atopic dermatitis.\nThe immune system is designed to protect and defend us from infectious agents. When its functions are altered, the immune response is erratic. Over time, an erratic response cripples immune function. In its effort to protect us, the immune system cells react skittishly, targeting our body's proteins. Iatrogenic diseases are those caused by doctors or medical treatment. A perfect iatrogenic disease model can be seen in autoimmune diseases.\nAnecdotal evidence also shows that exposure to infectious diseases is associated with decreased symptoms when immune-related diseases do occur.
For instance, children who have had measles have milder cases of nephrotic syndrome and atopic dermatitis when they develop these diseases.", "score": 26.1106510098406, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "From Nature magazine\nSabra Klein came to the annual meeting of the Society for the Study of Reproduction this week armed with a message that might seem obvious to scientists who obsess over sex: men and women are different. But it is a fact often overlooked by health researchers, says Klein, an immunologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland.\nHer research on influenza viruses in mice, presented at the meeting in Montreal, Canada, helps explain why women are more susceptible to death and disease from infectious pathogens — and the reason is intimately linked with reproduction. “She’s one of the people that really gets the bigger picture as far as why do we see these patterns,” says Marlene Zuk, an evolutionary biologist at the University of Minnesota, Twin Cities, in St. Paul.\nWomen generally suffer more severe flu symptoms than men, for example, despite the fact that they tend to have fewer viruses during an infection. To Klein, this suggests that women quickly mount a substantial immune-system attack to clear infections — and suffer the consequences of the inflammatory responses that flood their systems. “This is where females run into trouble,” Klein says.\nShe and her collaborators have found this disparity in mice infected with flu viruses. But when the researchers castrated the males and removed the ovaries from the females, the difference disappeared as the males became more sensitive to infection.\nBut testes are not simply protective. Klein found that giving the neutered females the female sex hormones oestrogen and progesterone actually protected them from disease.\nFor females, infections appear to throw these cycling sex hormones out of whack.
They elongate the oestrus cycle in non-neutered female mice — stretching the part of the cycle associated with the lowest amounts of oestrogen from 4-5 days to 8-9 days.\nResearchers have long known that immunological cells have receptors for sex hormones, and that autoimmune disease strikes women more frequently than men. Nevertheless, Klein says that her work should have implications for current public-health practices.\nWomen, who are often less likely than men to get vaccinated against flu, should be encouraged to do so, she says. And researchers may want to examine whether hormone-replacement therapies and contraceptive drugs have unintended — possibly positive — effects on some types of infectious disease.\nBut most importantly, Klein says, medical studies should take sex differences into account.", "score": 25.65453875696252, "rank": 41}, {"document_id": "doc-::chunk-7", "d_text": "The estimated community prevalence, which takes into account the observation that many people have more than one autoimmune disease, was 4.5% overall, with 2.7% for males and 6.4% for females.\nIn both autoimmune and inflammatory diseases, the condition arises through aberrant reactions of the human adaptive or innate immune systems. In autoimmunity, the patient's immune system is activated against the body's own proteins. In chronic inflammatory diseases, neutrophils and other leukocytes are constitutively recruited by cytokines and chemokines, resulting in tissue damage.\nMitigation of inflammation by activation of anti-inflammatory genes and the suppression of inflammatory genes in immune cells is a promising therapeutic approach. 
There is a body of evidence that once the production of autoantibodies has been initiated, autoantibodies have the capacity to maintain their own production.\nStem cell transplantation is being studied and has shown promising results in certain cases.\nAltered glycan theory\nAccording to this theory, the effector function of the immune response is mediated by the glycans (polysaccharides) displayed by the cells and humoral components of the immune system. Individuals with autoimmunity have alterations in their glycosylation profile such that a proinflammatory immune response is favored. It is further hypothesized that individual autoimmune diseases will have unique glycan signatures.\nAccording to the hygiene hypothesis, high levels of cleanliness expose children to fewer antigens than in the past, causing their immune systems to become overactive and more likely to misidentify their own tissues as foreign, resulting in autoimmune or allergic conditions such as asthma.\n- "Autoimmune diseases fact sheet". Office on Women's Health. U.S. Department of Health and Human Services. 16 July 2012. Archived from the original on 5 October 2016. Retrieved 5 October 2016.\n- Katz U, Shoenfeld Y, Zandman-Goddard G (2011). "Update on intravenous immunoglobulins (IVIg) mechanisms of action and off-label use in autoimmune diseases". Current Pharmaceutical Design. 17 (29): 3166–75. doi:10.2174/138161211798157540. PMID 21864262.\n- Borgelt, Laura Marie (2010). Women's Health Across the Lifespan: A Pharmacotherapeutic Approach. ASHP. p.
Some say you can tell what a woman will have, simply based on the foods she craves or the way she looks.\nBut a new study, conducted by researchers at The Ohio State University Wexner Medical Center, has found some science behind the speculations. Researchers followed 80 women through pregnancy, exposing their immune cells to bacteria in the lab, and noticed some significant differences.\n“What the findings showed is that women carrying girls exhibited greater inflammatory responses when faced with some sort of immune challenge compared to women carrying boys,” said Amanda Mitchell, lead author of the study and a postdoctoral researcher in the Institute for Behavioral Medicine Research at The Ohio State University Wexner Medical Center. “This could mean that inflammation may play a role in why some women who are carrying girls have more severe reactions to illnesses, making symptoms of conditions like asthma worse for them during pregnancy.”\nScientists found that immune cell samples of women carrying girls produced more proteins called proinflammatory cytokines than those carrying boys, which is part of the inflammatory response. “Too many of these cytokines or too much inflammation can really be unhelpful for our bodies’ functioning,” Mitchell said. 
“It can create or contribute to symptoms like fatigue or achiness.”\nSo, there’s now some evidence behind the notion that women carrying girls may be more likely to have a harder time with illnesses during pregnancy than if they were carrying a boy.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-3", "d_text": "To examine that issue, the team called for prospective controlled interventional studies to determine whether vitamin D7 supplements can ameliorate symptoms and improve outcomes in connective tissue disease-related interstitial lung disease.\nSource reference: Hagaman J, et al “Vitamin D deficiency and reduced lung function in connective tissue-associated interstitial lung diseases” Chest 2011; DOI: 10.1378/chest.10-0968.\nNew research may help explain why multiple sclerosis rates have risen sharply in the U.S. and some other countries among women, while rates appear stable in men.The study could also broaden understanding of how environmental influences alter genes to cause a wide range of diseases. The causes of multiple sclerosis (MS) are not well understood, but experts have long suspected that environmental factors trigger the disease in people who are genetically susceptible. In the newly published study, researchers found that women with MS were more likely than men with MS to have a specific genetic mutation that has been linked to the disease.\nWomen were also more likely to pass the mutation to their daughters than their sons and more likely to share the MS-susceptibility gene with more distant female family members. If genes alone were involved, mothers would pass the MS-related gene to their sons as often as their daughters, said researcher George C. Ebers, MD, of the University of Oxford. 
Ebers’ research suggests that the ability of environmental factors to alter gene expression — a relatively new field of genetic study known as epigenetics — plays a key role in multiple sclerosis and that this role is gender-specific.\nThe theory is that environmental influences such as diet, smoking, stress, and even exposure to sunlight can change gene expression and this altered gene expression is passed on for a generation or two. “The idea that the environment would change genes was once thought to be ridiculous,” Ebers says. “Now it is looking like this is a much bigger influence on disease than we ever imagined.”\nThe study by Ebers and colleagues included 1,055 families with more than one person with MS. Close to 7,100 genes were tested, including around 2,100 from patients with the disease. The researchers were looking for MS-specific alterations in the major histocompatibility complex (MHC) gene region. They found that women with MS were 1.4 times more likely than men with the disease to carry the gene variant linked to disease risk.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-1", "d_text": "After these antibodies become activated through an interactive process with an unknown organism, they are mass-produced by immune cells and circulate our body in order to form an immunological ‘belt’. 
Vaccinations work on the same principle: weakened components of a disease-causing microbe prepare the body to launch a strong immune response when the problem-causing microbe is encountered in the future.\nWhen it comes to our immune system and its defensive role, we can distinguish two types of reactions – if it attacks our body, it is an autoimmune disorder, and if it attacks a harmless protein coming from our environment, it is an allergy.\nIt all starts in the early stages of life, as a normal course of childhood development in which early exposure to parasites and bacteria allows the immune system to develop regulatory mechanisms and keep everything under control. However, in environments lacking exposure to these organisms, this ‘adjustment’ fails to occur and children prone to allergies begin to develop an inflammatory response against normally harmless proteins from their environment.\nAllergy risk factors\nMoreover, there are some so-called risk factors which make you prone to developing an allergy:\n- If allergies or asthma run in your family for generations, e.g. if one parent is allergic, a child has a 30-50% chance of inheriting the allergy, though not necessarily the parent’s type. But if both parents are allergic, the children have a 60-80% likelihood of inheriting allergies.\n- Childhood is the period in which allergies are most likely to develop.\n- If you already have asthma, there is a high chance of also suffering from an allergy.\n- The environment also plays an important role in this process, as it either protects you from developing an allergy or predisposes you to it. 
Some exposures provoke certain allergies in individuals and their children (such as high consumption of junk food and cigarette smoke), while others are protective (for example exposure to farm animals and high-fiber diets).\n- Children who suffer from viral or bacterial infections of the upper respiratory system (throat, nose and bronchial tubes) during the first 6 months of their life have higher chances of developing allergies or asthma later in life.\nWhat is the hygiene hypothesis?\nOver time experts in the field have developed the hygiene hypothesis for autoimmune and allergic diseases.", "score": 25.099154682862817, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "Women experience higher stress, more chronic disease, more depression, more anxiety and are more likely to be victims of violence. Women earn less than men, and in many countries they don’t have the same human rights as men.\nFor instance, in the U.S. in 2015 female full-time workers made only 80 cents for every dollar earned by men, indicating a 20 percent gender wage gap. Yet, life expectancy for women in the U.S. is 81.2 years compared to 76.4 for males.\nEven in countries with larger wage gaps or extreme gender inequalities, women live longer than men.\nAs a researcher who studies cross-country and gender differences in health, I am always fascinated by how the intersection of these factors influences health. So why do women live longer, despite their lower social rank and worse health?\nIs it basic biology?\nGender refers to social aspects of being a woman or a man such as social stress, opportunity and social expectations.\nSex, on the other hand, refers to biology. Biology can contribute to this difference in life expectancy. 
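The two figures quoted above can be made explicit with a line or two of arithmetic: 80 cents earned per male dollar corresponds to a 20 percent wage gap, and the life-expectancy figures imply a 4.8-year female advantage. A minimal sketch, using the U.S. 2015 numbers from the passage (variable names are illustrative only):

```python
# Illustrative arithmetic only: figures are the U.S. 2015 numbers quoted above.
female_earnings_per_dollar = 0.80  # women's earnings per dollar earned by men

# A wage gap is conventionally expressed as 1 minus the earnings ratio
wage_gap_percent = (1 - female_earnings_per_dollar) * 100
print(f"Gender wage gap: {wage_gap_percent:.0f}%")

# Life-expectancy advantage for U.S. women implied by the same passage
female_life_expectancy = 81.2
male_life_expectancy = 76.4
print(f"Female advantage: {female_life_expectancy - male_life_expectancy:.1f} years")
```

This prints a 20% gap and a 4.8-year advantage, matching the figures in the text.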
Women have biological advantages that let them live longer.\nFor instance, estrogen benefits women because it lowers low-density lipoprotein cholesterol (or LDL, what you may know as “bad” cholesterol) and increases high-density lipoprotein cholesterol (or HDL, the “good” cholesterol), which reduces cardiovascular risk.\nTestosterone, on the other hand, increases blood levels of the bad cholesterol and decreases levels of good cholesterol. This puts men at greater risk of hypertension, heart disease and stroke.\nWhen it comes to chronic diseases, women tend to have more of them. But there is a caveat here. Men and women have different types of chronic disease. Women have more nonfatal, chronic conditions, while men have more fatal conditions.\nFor example, women have more arthritis, which does not kill, even if disabling. In contrast, men are at higher risk of chronic diseases that are leading killers. Heart disease starts 10 years earlier in men than women.\nSo, biological differences play a role in this life expectancy gap, but gender, I argue, plays a bigger role.\nWomen are more health aware\nStudies have shown that, in general, women are more health conscious, and they have higher awareness of their physical and mental symptoms. These all result in healthier lifestyles and better health care use. Women also communicate better about their problems, which helps the process of diagnosis.", "score": 25.000000000000032, "rank": 46}, {"document_id": "doc-::chunk-0", "d_text": "It turns out that testosterone likely helps to explain why far more women develop asthma following puberty as compared to men. 
The incidence of asthma in women is double that of men, and more severe, according to the Australian and French authors of a study published in the Journal of Experimental Medicine.\nThis is in contradiction to the fact that before puberty, asthma is more common in boys than girls.\n“Our research shows that high levels of testosterone in males protect them against the development of allergic asthma,” said Dr. Cyril Seillet of Melbourne’s Walter and Eliza Hall Institute of Medical Research in a press release.\nThe team looked at innate lymphoid cells – or ILC2s – a type of immune cell that has recently been associated with the onset of asthma. It found that testosterone halted the production of those cells.\n“Testosterone directly acts on ILC2s by inhibiting their proliferation,” Seillet said. “So in males, you have less ILC2s in the lungs and this directly correlates with the reduced severity of asthma.”\nThe discovery could lead to ways to treat or even to prevent asthma, according to the researchers.", "score": 24.345461243037445, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "MARS/VENUS VACCINE RESPONSES\nANCHOR LEAD: ADD DIFFERENCES IN RESPONSES TO VACCINES TO THE LONG LIST OF DIFFERENCES BETWEEN MALES AND FEMALES, ELIZABETH TRACEY REPORTS\nMore women than men complain of negative side effects of vaccination and may therefore avoid it in the future. Now a Johns Hopkins literature review led by Sabra Klein has concluded that sex is an often overlooked aspect of vaccine response that could have practical implications, such as explaining this antipathy.\nKLEIN: What we find is that across very diverse vaccines, women mount significantly higher immune responses against vaccines, which is very beneficial when we think about efficacy of vaccines and protection. 
The downside of these heightened immune responses is that we also experience significantly more frequent and more severe adverse reactions, including fever, pain and inflammation at the site of vaccination. :26\nKlein says that a better understanding of female physiology throughout the lifespan would also help predict immune responses, such as during pregnancy or after menopause. At Johns Hopkins, I’m Elizabeth Tracey.", "score": 24.345461243037445, "rank": 48}, {"document_id": "doc-::chunk-7", "d_text": "However, other potential factors such as social status and living habits cannot be ruled out without further study.\nJarrar D, Wang P, Cioffi WG, Bland KI, Chaudry IH. The female reproductive cycle is an important variable in the response to trauma-hemorrhage. Am J Physiol Heart Circ Physiol. 2000;279(3):H1015–21.\nKnoferl MW, Angele MK, Diodato MD, Schwacha MG, Ayala A, Cioffi WG, et al. Female sex hormones regulate macrophage function after trauma-hemorrhage and prevent increased death rate from subsequent sepsis. Ann Surg. 2002;235(1):105–12.\nKnoferl MW, Jarrar D, Angele MK, Ayala A, Schwacha MG, Bland KI, et al. 17 beta-Estradiol normalizes immune responses in ovariectomized females after trauma-hemorrhage. Am J Physiol Cell Physiol. 2001;281(4):C1131–8.\nJarrar D, Wang P, Knoferl MW, Kuebler JF, Cioffi WG, Bland KI, et al. Insight into the mechanism by which estradiol improves organ functions after trauma-hemorrhage. Surgery. 2000;128(2):246–52.\nDiodato MD, Knoferl MW, Schwacha MG, Bland KI, Chaudry IH. Gender differences in the inflammatory response and survival following haemorrhage and subsequent sepsis. Cytokine. 2001;14(3):162–9.\nDeitch EA, Feketeova E, Lu Q, Zaets S, Berezina TL, Machiedo GW, et al. Resistance of the female, as opposed to the male, intestine to I/R-mediated injury is associated with increased resistance to gutinduced distant organ injury. Shock. 2008;29(1):78–83.\nWilder RL. Neuroendocrine-immune system interactions and autoimmunity. 
Ann Rev Immunol. 1995;13:307–38.\nAngele MK, Knoferl MW, Ayala A, Bland KI, Chaudry IH.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-6", "d_text": "Are there differences in the health problems women and men experience and, if so, how might we explain them?\nThe common belief has been that \"women are sicker but men die quicker\": that women are more likely to report health problems whereas men have a shorter life expectancy. Other recent research shows that gender differences in health are less clear than is often assumed. [4, 27, 28] The general measures of health status as well as the specific measures of mental and physical health problems used in the NPHS indicate different patterns with respect to gender: in some cases no differences between women and men; in others, small or inconsistent differences. Yet women are more likely to report short-term disability, distress, depression, migraine, pain, arthritis or rheumatism, and non-food allergies. \nSuch observations in several countries have led to calls for much more attention to gender and the changing nature of gender roles. It has been argued that with changes in gender roles and the recognition of diversity among both men and women, some men may have more in common with some women. But there are some fairly consistent gender differences, and we need greater documentation of these as well as of the ways in which men's health and women's health are similar. To understand such data and to lay the basis for meaningful analysis of gender and health, it is important to chart gender relations over time.\nWe do not know enough about how gender relations have been changing over the past few decades, and this complicates the task of tracing links between health and gender. Women have entered the labour market in greater numbers, though they are typically employed in part-time work and in lower-paid \"women's jobs,\" which often allow workers less autonomy and control in their work. 
While women may now have a greater degree of economic independence than previously, their relation with the labour market is still weaker than that of men. However, men, who at one time could expect almost continuous employment until retirement at 65, now face the prospect of redundancies and long-term unemployment as a result of restructuring and changes in the labour market. Charles has argued that these and other changes in gender relations mean that the \"old ways of being a man are no longer possible.\" The increase in divorce rates has had a profound effect on many women, who are immediately upon divorce faced with a considerable drop in household income and in their command of other resources. As lone parents they are at high risk of living in poverty.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "Autoimmune disease encompasses a diverse group of over 80 chronic disorders. Each of these diseases has distinct clinical manifestations that are due to the differences in the cells and organ systems involved; however, these diseases are universally characterized by a loss of self-tolerance, resulting in autoreactive immune cells, autoantibodies, and elevated levels of inflammatory cytokines. Reviews in this series examine mechanisms underlying autoimmunity, including failure of B cell tolerance checkpoints, the generation of autoantibodies, cytokine dysregulation, aberrant T cell signaling, and the loss of immune suppressive cells and functions. They also explore the influence of genetic background, environment, microRNAs, and sex-specific factors on the loss of immune homeostasis.\nAutoimmune diseases classically present with a complex etiology in which different factors concur in the generation and maintenance of autoreactive immune responses. 
Some mechanisms and pathways that lead to the development of imbalanced immune homeostasis and loss of self-tolerance have been identified as common to multiple autoimmune disorders. This Review series focuses on the general concepts of development and progression to pathogenic autoimmune phenotypes. A mechanistic discussion of the most recent advances in the field, together with related considerations of possible therapies, make this series of particular interest to both the basic and translational science communities.\nAntonio La Cava\nAutoimmune diseases occur when the immune system attacks and destroys the organs and tissues of its own host. Autoimmunity is the third most common type of disease in the United States. Because there is no cure for autoimmunity, it is extremely important to study the mechanisms that trigger these diseases. Most autoimmune diseases predominantly affect females, indicating a strong sex bias. Various factors, including sex hormones, the presence or absence of a second X chromosome, and sex-specific gut microbiota can influence gene expression in a sex-specific way. These changes in gene expression may, in turn, lead to susceptibility or protection from autoimmunity, creating a sex bias for autoimmune diseases. In this Review we discuss recent findings in the field of sex-dependent regulation of gene expression and autoimmunity.\nKira Rubtsova, Philippa Marrack, Anatoly V. Rubtsov\nIn this Review we focus on the initiation of autoantibody production and autoantibody pathogenicity, with a special emphasis on the targeted antigens.", "score": 24.08721104058009, "rank": 51}, {"document_id": "doc-::chunk-1", "d_text": "The National Academy of Sciences and others have reported that 15% of the population suffers from chemical sensitivity, 3% to 5% of whom cannot hold jobs due to their intolerance of substances like perfume, chemicals and carpeting. Spinal fluid analysis by Dr. 
Baraniuk in 2005 has linked chemical sensitivity to Gulf War Syndrome, chronic fatigue syndrome and fibromyalgia.\nMen vs. Women and Chemical Sensitivity\nEnvironmental illnesses (chemical sensitivity, chronic fatigue and fibromyalgia) occur four times more often in women than men. Women are often dismissed because they simply 'look crazy' while in reality, they are toxic. The mental changes are related to: a lack of oxygen to the brain from inflamed vessels due to the poison they have absorbed, high adrenaline and low cortisol, a severely damaged autonomic nervous system and direct toxicity from whatever toxin is the cause (such as mold, mercury or toluene).\nWhen a male-female couple has been exposed to toxigenic mold, pesticides from tenting or spraying, or other chemicals such as formaldehyde in new kitchen cabinets, women often experience symptoms first and often different symptoms than men. They have classic chemical sensitivity symptoms: \"I am exhausted, depressed and sometimes frantic. But my husband does not find perfume, air fresheners and the detergent aisle of the grocery store offensive. Yet he is mean and belligerent, is losing his memory, his balance and has skin problems, but people think he is fine and that I am the fruitcake.\" For this they often separate.\nStudies done by the military may suggest the reason for the inequality among the sexes in the chemical response. In a 1998 study, trichothecene exposure in female rats led to adrenal necrosis or death. Interestingly, giving females or neutered male rats testosterone prevented adrenal damage. This illustrates why men, with ten-fold higher levels of testosterone than women, have less incidence of fatigue, stress intolerance, allergy, inflammation and chemical sensitivity. 
It also points to the possible benefit of hormones, such as testosterone, being given to women with chemical exposure.\nWhy Horse Women are Crazy\nIn a pilot study, I found that horses, because they eat mold-containing hay and grain, have measurable amounts of trichothecene in their urine. When tested, the horses' sweat, as well as the horse dust also have measurable levels of this dangerous poison.", "score": 23.47229479210838, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "Why do women live longer than men?\nBeing the bigger and stronger of the sexes, one might assume that men would normally live longer, healthier lives than women. The opposite is true, however. In the US, for example, women outlive men by an average of 5-7 years and, according to the World Health Organization in 2006, this pattern is true for all countries on earth.\nWhile lifestyle choices certainly play a role in the disparity, some researchers argue that genetics may also be a factor. After all, the tendency for females to outlive males applies not only to humans, but for all mammal species.\nFactors that may explain why women live longer than men\nSeveral biological theories are commonly mentioned:\n- The higher level of testosterone hormone in men leads them to engage in more violent or risky behavior.\n- Until menopause, the higher estrogen levels in women serves a protective function against \"bad\" LDL cholesterol. Thus, women generally tend to avoid the earlier signs of heart disease commonly seen in men.\n- Women tend to have lower iron levels through a significant portion of their life due to menstruation -- and iron, which comes primarily from meats in our diet, plays a strong role in the formation of cell-damaging and cell-aging free-radicals.\n- Because females have 2 X chromosomes (versus XY in males), defects in one chromosome may be offset by the other. 
(In birds, however, the chromosome variation is reversed: female birds have ZW chromosomes whereas males have two of the same (ZZ). And, as a result, males tend to outlive females in most bird species.)\nA recent theory is that, in general, smaller individuals in a species live longer. Perhaps this is because their development involves fewer cell doublings and a reduced demand for ongoing tissue regeneration over a lifetime. Supporting this theory is the fact that age-related diseases generally appear earlier in men than in women.\nThis is probably where the dramatic differences between male and female longevity are determined:\n- Men tend to drink and smoke more. (One US study conducted in Erie County, PA in the early 1970's indicated that, among non-smokers, there is no difference in life expectancy between men and women.)", "score": 23.390472787672042, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "For years women have cried “man flu” when men make a fuss over a few sniffles.\nBut a new study suggests that men may actually suffer more when they are struck down with flu - because high levels of testosterone can weaken their immune response.\nThe study, by Stanford University School of Medicine, examined the reactions of men and women to vaccination against flu.\nIt found women generally had a stronger antibody response to the jab than men, giving them better protection against the virus.\nMen with lower testosterone levels also had a better immune response, more or less equivalent to that of women.\nIt has long been suggested that men might be more susceptible to bacterial, viral, fungal and parasitic infection than women are.\nThe study, published in the Proceedings of the National Academy of Sciences, found women had higher blood levels of signaling proteins that immune cells pass back and forth when the body is under threat.\nPrevious research has found that testosterone has 
anti-inflammatory properties, suggesting a possible interaction between the male sex hormone and immune response.\nProfessor of microbiology and immunology Mark Davis said: “This is the first study to show an explicit correlation between testosterone levels, gene expression and immune responsiveness in humans.”\n“It could be food for thought to all the testosterone-supplement takers out there.”\nScientists said they were left perplexed as to why evolution would design a hormone that enhances classic male sexual characteristics - such as muscle strength, beard growth and risk-taking propensity - yet leaves men with a weaker immune system.\nPrevious studies have found that while women may accuse men of exaggerating when they have flu, it is females who are more likely to admit to having sniffles and sneezes.\nThe research, carried out by the London School of Hygiene and Tropical Medicine last winter, shows that women are 16 per cent more likely to say they are ill.
\"It suggests that genetic variants that may be important predictors of autism risk for girls may not be so important for boys, or vice versa. This means that interpretation of genetic testing in autism could potentially be improved and refined by considering sex. Further in the future, similar implications should be considered for autism treatments - if there are sex differences in the underlying biology, response to specific treatments might also be different by sex.\"\nBiological drivers of sex differences may influence many common diseases, including multiple sclerosis and hypertension, researchers find\nWeiss's team followed up this research with a second paper—published online December 14, 2016 in Genetics and scheduled for print in February, 2017—exploring the role of sex differences on the genetics of nine other diseases, some that strike men more frequently (ankylosing spondylitis and type 1 diabetes), some that are more common in women (multiple sclerosis and rheumatoid arthritis), and others that occur with similar frequency in men and women (bipolar disorder, coronary artery disease, Crohn's disease, hypertension, and type 2 diabetes).", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-14", "d_text": "Estradiol promotes functional responses in inflammatory and steady-state dendritic cells through differential requirement for activation function-1 of estrogen receptor alpha. J Immunol. 2013;190:5459–70. pmid:23626011\n- 46. Xia HJ, Zhang GH, Wang RR, Zheng YT. The influence of age and sex on the cell counts of peripheral blood leukocyte subpopulations in Chinese rhesus macaques. Cell Mol Immunol. 2009;6:433–40. pmid:20003819\n- 47. Bereshchenko O, Bruscoli S, Riccardi C. Glucocorticoids, sex hormones, and immunity. Front Immunol. 2018;9:1–10. pmid:29403488\n- 48. Conti P, Younes A. Coronavirus COV-19/SARS-CoV-2 affects women less than men: clinical Response to viral infection. J Biol Regul Homeost Agents. 2020; 34. 
pmid:32253888\n- 49. Liva SM, Voskuhl RR. Testosterone acts directly on CD4+ T lymphocytes to increase IL-10 production. J Immunol. 2001;167:2060–7. pmid:11489988\n- 50. Fischer M, Baessler A, Schunkert H. Renin angiotensin system and gender differences in the cardiovascular system Cardiovasc Res. 2002;53:672–7. pmid:11861038\n- 51. Regitz-Zagrosek V, Oertelt-Prigione S, Seeland U, Hetzer R. Sex and gender differences in myocardial hypertrophy and heart failure. Circ J. 2010;74:1265–73. pmid:20558892\n- 52. Seeland U, Regitz-Zagrosek V. Sex and gender differences in cardiovascular drug therapy. Handb Exp Pharmacol. 2012;214:3–22. pmid:23027453\n- 53. Vermeulen A, Kaufman JM, Goemaere S, van Pottelberg I. Estradiol in elderly men. Aging Male.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "Men and Women Get Sick in Different Ways, Padua University Hospital Study\n3/25/2013 7:45:50 AM\nRecent research in laboratory medicine has revealed crucial differences between men and women with regard to cardiovascular illness, cancer, liver disease, osteoporosis, and in the area of pharmacology. At the dawn of the third millennium medical researchers still know very little about gender-specific differences in illness, particularly when it comes to disease symptoms, influencing social and psychological factors, and the ramifications of these differences for treatment and prevention. Medical research conducted over the past 40 years has focused almost exclusively on male patients.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-2", "d_text": "It is clear that one of the main differences between the sexes is down to hormones. 
In women, hormone imbalance has serious consequences, increasing the risk of hormonal cancers, strokes and heart disease.\nIn men, oestrogen dominance can also be a factor, leading to an increased risk of prostate cancer and osteoporosis.\nAs men are less likely to look for information on their health, you might want to share the following articles with the men in your life.", "score": 21.749839261653996, "rank": 58}, {"document_id": "doc-::chunk-4", "d_text": "Miguel Cuchacovich when using progesterone treatment in RA patients. Dr. Valentino also describes decreased levels of testosterone in such patients and has shown the positive effects in laboratory and clinical studies when testosterone is used as therapy.\nThe Endocrine System & Autoimmune Diseases\nThe endocrine system is a target for autoimmune diseases. As the body’s hormonal regulator, the endocrine system releases and then slows and/or stops the production of different hormones in response to various internal and external triggers.\nThe tightly controlled network of endocrine organs and glands (which includes the thyroid, pancreas, pituitary, adrenal, ovaries and testes) may be affected in cases of autoimmune disease. In cases of insulin-dependent diabetes, the pancreas comes under attack, while in Graves’ disease, as discussed earlier, the thyroid gland goes into overdrive in response to overproduction of antibodies.\nAutoimmune disorders involving the endocrine system may also arise when a person produces antibodies to a particular hormone. Antibodies against naturally occurring hormones such as estradiol and progesterone can wreak havoc. When women make antibodies to such hormones, they may experience erratic ovulation or insufficient production of the uterine lining. 
These conditions can cause abnormal menstrual periods and even prevent successful implantation and pregnancy.\nWhile autoimmune diseases certainly afflict men, it is virtually impossible to ignore the fact that they are much more prevalent in the opposite sex. In his book Women and Autoimmune Disease, Dr. Robert Lahita writes, “Why is there such a seemingly unfair preponderance of women associated with practically every one of the autoimmune diseases? As it turns out, one of the greatest factors that influence the immune system is gender.”\nThe Role of Estrogen in the Immune System\nMany scientists are focusing their research efforts on trying to understand precisely why it is that autoimmune diseases are more common in women than in men. The authors of The Immune System Cure offer the following explanation: “Scientists believe that the female hormone estrogen may be the reason for this. The hormone estrogen may interplay with certain immune factors that enhance the action of the inflammatory response, increasing antibodies that attack certain tissues in the body. An over abundance of estrogen or estrogen-dominance may be a factor in the prevalence of autoimmune conditions in women.”\nOther studies have shown that during their reproductive years, when estrogen levels are higher, females tend to have a more vigorous immune response.", "score": 21.695954918930884, "rank": 59}, {"document_id": "doc-::chunk-2", "d_text": "To be ill can be described as a more socially acceptable behaviour for women than for men. Men should be strong, and this stoicism may lead men to ignore symptoms of serious disease and may in the end be counterproductive to good health but ironically creates statistically healthier men. Women are viewed as being more sensitive to their symptoms and thus are seen with more tolerance and given greater societal permission to be sick. 
The supposedly greater confidence of women in the health care system is another reason why they seek help more often.

A second explanation concerns the distinction between disease and illness, that is, the difference between a medically established disease diagnosis and a person's own subjective experience of illness. Disease can occur without illness, for example in early forms of cancer or high blood pressure, and the opposite is true as well: pain and exhaustion do not always find a diagnosis. Men contract far more diagnosed diseases, while women suffer from more unspecified illnesses.

Women constitute a majority concerning "undetermined illnesses," such as muscle and joint pain or psychiatric conditions, where a disease diagnosis cannot be established so easily. This failure to establish a diagnosis also limits the possibility of answering with rational and effective medical treatment, and leads to extended illnesses.

A third explanation operates to the contrary of the one just named: labelling with an inaccurate diagnosis can worsen treatment. This especially concerns illnesses that are symptoms of more deep-rooted injuries, where the primary cause goes untreated. We know that incest and psychological violence against children often go undetected. Women and men often handle early traumas in different ways. In adult life men may express different defence/attack reactions, by being abusive or apathetic. This "closing off" of oneself, this internalization, can lead men to earlier deaths, while women more often deny outside factors and blame themselves: a woman's body becomes a seismograph for the subconscious, often reacting with diffuse pain. Women are raised to turn conflicts and rage inwards, with depression as a result, while men are raised to be aggressive.

A fourth explanation concerns what I would call frustration diseases, which have their roots in a feeling of not being adequate.
It is a poor state of health which can be characterized by an inability to handle the contradictory influences of modern society. On the one hand, domestic roles are still emphasized, but on the other hand women are expected to work outside their homes and contribute to the household economy.

THUS, WE CANNOT AUTOMATICALLY ASSUME THAT because of women's higher degree of health care consumption and absence from work because of illness, they are sicker and feel worse than men. It is important to clarify the gender-specific differences between mortality and morbidity rates, the number of official sick leaves, hospital care and drug consumption versus health. The level of illness (which in most countries is recorded through the occurrence of sick leave from work, occupational injuries and early retirement) is very sensitive to market conditions and to changes in government and their policies. The questions we should rather be asking are: how do our behaviour and responses concerning sick leave, hospital care and drug consumption vary between generations, how do we differ through time and between cultures, and what effect does gender have on health?

I maintain that women per se cannot be said to be sicker than men, but rather that they see themselves as sick and feel unwell as well. If you constantly view and treat a person as sick, some will of course be just that. And this relationship doesn't necessarily have anything to do with women's longer life expectancy.

These seven explanations for women's higher degree of illness and sick leave than men's ought to be part of a more general theory concerning ill health and disease. As is well known, there are countless more or less sophisticated, so-called scientific models which claim to explain the occurrence and causes of disease. Nevertheless, it is interesting that in general these models do not take up the differences between the sexes. The seven approaches to women's illness discussed above are largely absent from these models, which tend to deal with genetic/physiological, social, psychological and cultural aspects without making distinctions between the sexes.
Thus, there is a need for models which approach physiological, psychological, structural and cultural positions from a gender perspective.

We need to develop different methodologies which are less masculine and less oriented to the industrialized Western world's way of thinking. They must also include both epistemological and cosmological aspects.

But to be successful in the analysis and building of scientific models, we also need to know more about the different male and female mentalities that thrive beneath the surface.

FOR THE PURPOSE OF ANALYZING WHY WE KNOW SO much less about female diseases and health than we know about male, I have deliberately chosen an expression which isn't a proper word in the English language: "unknowledge".

This will all cause the reaction you see in the patient: swelling of the tongue, lips, and throat, causing shortness of breath and wheezing, and ultimately potentially obstructing his airway (Rebar et al., 2019).

Genetics is the study of heredity, aka how parents pass on traits to their children. This includes a variation of a gene and the trait it controls, called an allele (Rebar et al., 2019). In the same way that a person's genetics can determine blue versus green eye color, the body determines the immune response based on specific alleles, specifically the Human Leukocyte Antigen (HLA) allele (Drug allergies, 2021). This means that, depending on the genetic makeup you got from your parents, your body may react a certain way when exposed to certain medications such as penicillin. The primary role of the HLA molecules is to regulate the immune response (Drug allergies, 2021). HLA molecules are expressed on the surface of most nucleated cells, including monocytes, macrophages, dendritic cells, and T lymphocytes.
This means that these are the cells involved in the immune response responsible for hypersensitivity reactions, including anaphylaxis.

Gender specifically can make someone more or less susceptible to particular processes or diseases. For example, a woman can be more susceptible to autoimmune diseases, in which the immune system cannot distinguish between her healthy cells and potentially harmful antigens and therefore treats them the same (Angum et al., 2020). Some believe that the reason for this higher incidence is hidden in the XX chromosome combination, because the X chromosome is larger and may therefore carry 800-900 more genes (Angum et al., 2020). Since the rules of genetics still apply, I do not think this would change my response.

Angum, F., Khan, T., Kaler, J., Siddiqui, L., & Hussain, A. (2020, May 13). The prevalence of autoimmune disorders in women: A narrative review. Cureus. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7292717/.

Drug allergies. WAO. (n.d.). https://www.worldallergy.org/education-and-programs/education/allergic-disease-resource-center/professionals/drug-allergies.

Patterson, R. A. (2021, July 21).

Slightly more American men than women develop cancer, and more men than women also die from cancer. For obvious reasons, breast cancer is more common in women. But other kinds of cancer, like stomach and liver cancer, are more frequently diagnosed in men. Some researchers believe this is partly due to a difference in smoking and drinking behaviors, but biological factors may also have something to do with it.

Protect yourself: Don't smoke (duh), and drink less. Alcohol consumption has been linked with prostate cancer, breast cancer, stomach cancer and more, and Americans can be a boozy bunch.
We're ranked 25th in the world for per capita alcohol consumption, with the average American drinking the equivalent of 2.3 gallons of pure alcohol a year.

Colds and flu

Women catch more colds than men, but men may experience more symptoms when they do start sneezing. Estrogen (a female sex hormone) may offer women some protection from the flu virus and result in fewer symptoms.

Does this mean that the so-called man flu (defined by Oxford Dictionaries as "a cold or similar minor ailment as experienced by a man who is regarded as exaggerating the severity of the symptoms") is a real thing? Maybe, but researchers suggest that cultural factors play a bigger role than biological ones. Sorry, guys.

Stay healthy: Wash your hands often with soap and water for at least 20 seconds. Make natural immunity boosters like probiotics, garlic and echinacea your BFFs. If all else fails, watch a kitten sneeze on YouTube or giggle over an article like "17 Tweets You'll Appreciate If Your Man Is a Giant Sick Baby"—laughter is a proven immune booster.

On average, women require 20 more minutes of shut-eye than men in order to feel well rested—so it makes sense that men tend to get less sleep. More women than men experience sleep problems like insomnia and restless legs syndrome.

Rest easy: Wake up and head to bed at the same time every day (yep, even on lazy Sundays). Moderate aerobic exercise may be especially helpful for those with insomnia—but try to avoid working out within the three hours before bedtime. Sleep supplements like magnesium or melatonin could also help you fall asleep faster and snooze more soundly.

Let's talk about…

Sex and gender don't actually mean the same thing.
Sex refers to biological factors like reproductive function and hormones.

Preclinical research and drug development studies have also predominantly used male animal models and cells.4, 5, 6 It is not surprising that a 2001 US Government Accountability Office report found that eight of the ten prescription drugs withdrawn from the market between 1997 and 2000 posed greater health risks for women than for men.7 Most funding agencies in Europe and North America have implemented policies to support and mandate researchers to consider sex and gender at all levels of medical research.8 Still, the field of sex-based biology and medicine is often seen as a specialised niche, rather than a central consideration in medical research. Essential for the success of clinical care and translational science is awareness by clinicians and researchers that the diseases they are treating and studying are characterised by differences between women and men in epidemiology, pathophysiology, clinical manifestations, psychological effects, disease progression, and response to treatment. This Review explores the role of sex (biological constructs) and gender (social constructs) as modifiers of the most common causes of death and morbidity, and articulates the genetic, biological, and environmental determinants that underlie these differences. We aim to guide clinicians and researchers to better understand and harness the importance of sex and gender as genetic, biological, and environmental modifiers of chronic disease. Ultimately, this is a necessary and fundamental step towards precision medicine that will benefit women and men.

Sex as a genetic modifier of biology and disease

Sex differences in disease prevalence, manifestation, and response to treatment are rooted in the genetic differences between men and women.
Genetic sex differences begin at conception, when the ovum fuses with a sperm cell carrying an X or a Y chromosome, resulting in an embryo carrying either XY or XX chromosomes. This fundamental difference in chromosome complement (eg, genes beyond the testis-determining gene) creates ubiquitous sex differences in the molecular makeup of male and female cells.9 First, the Y chromosome carries genes that display subtle functional differences from their X-linked homologues, which generates ubiquitous sex differences in the molecular makeup of female and male cells. Second, random inactivation of one X chromosome in female cells creates another level of sex differences in gene expression. Some X-linked genes escape inactivation in females and have higher expression in female than in male individuals.

When it comes to well-being, women and men can be as different as apples and oranges (or figs and eggplants). The differences aren't just biological. You might be surprised to discover how many health conditions can vary hugely based on sex and gender.

Men are twice as likely to have a heart attack. That's probably why two-thirds of heart attack research has focused on men. But assuming that heart disease is just a guy thing means that it's been majorly under-recognized and under-researched in the ladies. In fact, women are more likely to die after a heart attack than men are.

An American Heart Association survey found that only 13 percent of women surveyed felt that heart disease was their biggest health risk. They were more concerned about breast cancer, even though women are six times more likely to be struck down by heart disease.

Take heart: Ladies, don't dismiss potential heart attack symptoms. Women are more likely to experience throat, jaw and neck discomfort. They also often—but not always—experience usual signs like chest pain.
Everyone, call 911 immediately if you suspect a heart attack, and insist on having your heart checked if you think there is a problem.

Women are diagnosed with depression about twice as often as men are. Major hormonal changes in women (during pregnancy or menopause, for example) could account for some of this difference. Researchers note that depression is often under-recognized in men, perhaps because of outdated stereotypes about manly dudes being tough and untouchable.

Take care: Invest in your mental health by making time for self-care and surrounding yourself with supportive friends and family. Seek the assistance of a mental health professional if you suspect you're depressed.

In America, women can expect to live about five years longer than men. Life expectancy for Americans born in 2018 is 81 years for women and 76 years for their male counterparts. This could be because women are more prone to mild but persistent ailments like allergies and headaches, whereas men are more prone to serious conditions like heart disease and the most deadly forms of cancer … or maybe women are just resilient AF?

Go long: Exercise is key if you want to add years to your life. In one study of more than 2,000 senior men, regular exercise was associated with a 30 percent lower risk of mortality.
Walking briskly, jogging, cycling and swimming could all help you live better, longer.

Research conducted by the University of Valencia (UV) and the Jaume I University of Castellón (UJI), among other institutions, has found alterations to the structure of the brain's nonapeptidergic systems, social behavior and the production of pheromones, traits that reveal sexual dimorphism, in male mice lacking the Mecp2 gene.

A new study published on the preprint server medRxiv in August 2020 shows that the risk of infection is high for pre-menopausal women, but the risk of death is much higher for men in the same age group, and throws light on the protective nature of estradiol therapy in post-menopausal women.

Results from an international clinical trial found that men with advanced prostate cancer who have mutated BRCA1/BRCA2 genes can be treated successfully with a targeted therapy known as rucaparib, resulting in recent FDA approval.

Running is one of the most popular forms of exercise, enjoyed by a broad range of age groups and skill levels. More women than men run recreationally; specifically, 54% of runners are female, as indicated by a 2018 National Runner Survey.

New insight on differences in the brains of men and women with autism has been published today in the open-access journal eLife.

Almost from the beginning, scientists have been struck by the disproportionately higher numbers of men and older adults who have developed severe COVID-19 disease compared to younger individuals and women. Prior research has shown that in 37 of the 38 countries for which sex-stratified data were available, males were at a higher risk of death. Also, post-menopausal women are at increased risk of severe COVID-19. However, the biological underpinnings of this have been less visible.

News-Medical speaks to Dr. Robert M.
Sargis about his research into whether medication is exposing us to hormone-disrupting chemicals.

In this interview, Dr. Andrea Dunaif talks to News-Medical about their genetic analysis study, which suggests there are different subtypes of PCOS.

In a new trial, Swedish researchers will investigate if a medicine normally used to treat prostate cancer can also be used to treat COVID-19 in patients.

Men who follow plant-based diets have testosterone levels that are basically the same as the levels in men who eat meat, a study shows.

An allergy occurs when a person's immune system reacts sensitively to a foreign substance, which may be pollen, bee venom or anything else. Our immune system produces antibodies to fight off unwanted substances that are harmful to us. During an allergy, the body produces antibodies to fight off substances that do not cause harm. This immune system reaction can trigger conditions such as inflammation of the skin, nasal passages or the digestive system. This is termed an allergic reaction.

If the allergic reaction becomes severe enough to induce life-threatening symptoms, it is referred to as anaphylaxis. In this condition a person experiences severe allergic reactions which, if not treated in time, may even result in death.

A study has shown that women commonly experience more severe allergic reactions than men, owing to their estrogen hormones.

Studies pertaining to anaphylaxis are difficult to perform because of its life-threatening nature. About 200 people die from anaphylaxis every year in the United States, but these numbers could be larger, because many people die from unrecognized anaphylaxis.
A person may be suffering from heart disease and may die because of a heart attack, but it's also possible that the attack was induced by a severe allergic reaction.

Researchers from the National Institute of Allergy and Infectious Diseases (NIAID) have found that female mice are prone to more severe anaphylactic reactions than their male counterparts. They discovered that estradiol, which is an estrogen, enhances the activity of endothelial nitric oxide synthase (eNOS), which produces some of the common symptoms attributed to anaphylaxis.

When eNOS is activated in a person undergoing anaphylaxis, blood pressure drops, enabling fluid in the blood vessels to leak into the surrounding tissue, which in turn causes swelling. Adding estradiol causes these symptoms to become even more pronounced.

To understand whether estradiol was the real culprit, researchers blocked the estrogen in female mice. The results showed that these mice experienced allergic reactions of similar severity to their male counterparts, thereby further strengthening the finding.

So overall, the research concluded that gender does play an important role in allergies.

"More women are admitted to hospitals for anaphylaxis as compared to men, which indicates that something is going on here. Too often these gender differences are not focused on.

However, this is still speculation and a lot of further research needs to be conducted before a probable cause can be identified.
The most staggering part of the research, however, was the rate of depression associated with asthma: the numbers gathered during the study showed that the depression rate surpassed that of several far more serious conditions, such as diabetes, and even that of cancer survivors.

Young women ignore heart attack signs

A recent study may have found the answer to why women in a particular age group are having far more heart attacks than men. The age group in question is 30-55, and according to the research, which was conducted by Yale researchers, these women are more likely to ignore early warning signs of a heart attack, thinking they may be occurring for some other reason. The study involved interviewing women between 30 and 55 years of age, and the results showed that women frequently ignored signs like dizziness and pain. The research strongly suggested an immediate need to educate women at an early stage in identifying the warning signs, so that they are able to spot them if they occur in the coming years of their lives. The study also strongly suggested the need for a change in attitude towards the problem, in order to avoid unnecessary complications, including heart attacks.

Autoimmunity and exposure to mercury

Autoimmunity is a condition, occurring mostly in women, in which the body starts to produce antibodies against its own tissues, which can cause serious health problems. According to a recent study, a key factor that may increase the risk of the onset of this problem is exposure to mercury. The exposure can happen in a number of different ways, as suggested by the research, e.g. through seafood. It was found that cases of autoimmunity developed in women even at levels of exposure that are considered safe for the body.
What makes this issue difficult to address is the fact that very little factual knowledge is available about how autoimmune disorders develop, and therefore making solid progress is extremely difficult. However, after this research, scientists have stressed that women, especially those of childbearing age, should keep proper track of their seafood consumption.

To find out more about issues and developments related to women's health, you can visit www.womenshealth.gov/news.

The scientific academies did not accept women members.

To gain acceptance, many women researchers today avoid conducting research about women. Those women who "succeed" are generally keen on emphasizing that they are not feminists and that they are never discriminated against. Despite this low profile, there are a few women who have reached influential positions in the male-dominated world of science.

THE SECOND EXPLANATION FOR WHY WE KNOW LESS about women's diseases than men's is that clinical tests on women are consciously avoided because of the inherent risks, not to the woman herself but to the foetus. This relates in particular to clinical drug trials and goes back to the tragedies that came of pregnant women taking thalidomide pills: 10,000 babies were born with serious malformations.

Therefore initial drug testing mainly uses men as subjects: young men, often medical students or men doing their military service. In most Western countries, women who volunteer as subjects for initial drug trials must prove that they don't have "child-bearing potential", which can mean providing a written affidavit that they are both taking contraceptive pills and have an IUD. This is of course unusual.
Only in later phases of the trials are women generally included in the tests.

Can results of tests mainly carried out on men always be considered applicable to women?

No, drugs used for diagnosis or treatment often have a different effect on men and women. Men's and women's bodies metabolize drugs differently. We also know that women's monthly menstrual cycles can influence the effect of a drug.

When the drug eventually moves from the laboratory into the home, we therefore don't know how it will affect women and the elderly.

The actual test period for women is when the drug is used at hospitals or prescribed by a doctor. What was unethical during the first controlled testing phase is no longer an issue, with the release of drugs to the public after tests mainly conducted on men (and on male mice!). In the later phases, when the drug is approved, there is less emphasis on taking extreme caution. Eventual side effects, damages and birth defects are reported and complemented with epidemiological studies. Approval of a new drug is a process that takes up to ten years.

Seasonal allergies are just a part of life, right? Super common, everyone has them, just a fact of life…right? What if I told you that they're not just an annoying part of life, that they're actually relatively new for us. And, along with auto-immune conditions, they're on the rise, and the culprit might be our super clean lives.

It might just be that clean and purified water you're drinking, and/or the flushing toilet you have in the bathroom, that has made your body more susceptible to immune imbalances like allergies and auto-immune conditions.

To find some answers we need to go back a few hundred years, back to the Industrial Revolution. Before this time, most of us lived on farms and had animals everywhere.
It seems our immune system prefers this way of life…and you'll see why in a minute.

During the Industrial Revolution, many people began to flood the cities looking for work. And life for the "average Joe" was even dirtier than in the countryside. It's hard to live in a crowded city without running water and sanitation…just imagine what it was like back then!

But it wasn't dirty for everyone. The elite, or the upper upper classes, lived a much cleaner life. They weren't living with the dirt and grime in the cities, nor with the animals in the countryside. They lived a bit like the way we do now…and they developed a very interesting set of symptoms.

They developed seasonal allergies.

For the first time, their immune system began to act like ours does. Mistaking pollen and dust for an invader, and causing the annoying sniffling, sneezing, runny-nose symptoms that a large percentage of us feel every spring.

The part I find really funny is this: because it was the upper class that started sneezing, it became a sign of wealth. It was posh to have allergies. Today, our immune system has gotten even more confused, and with auto-immune conditions, the immune system is now mistaking our own cells as invaders and is attacking them. Could auto-immune conditions also be due to our much cleaner lifestyle?

For that we need to look at a study that followed two genetically similar groups of people. Half living in Finland, the other half in Russia, and they're only separated by a few hundred miles and a border.

Between 1997 and 2000, eight of the ten drugs withdrawn from the market had more detrimental side effects in women than men.

In 2012, researchers discovered that women metabolize zolpidem, the active ingredient in Ambien and other sleep medications, more slowly than men.
This meant that for 20 years, women, on the advice of their doctors, had been overdosing on a drug known to significantly impair driving. The drug took a particularly noticeable toll in 2010, when women experienced more than two-thirds of all 64,000 zolpidem-related emergencies. In January of 2013, the FDA warned that the recommended dose for women should be cut in half, a change now specified in the drug directions.

"To my knowledge, that's the first sex-specific dosage labeling for a drug," says Dr. Janine Clayton, director of the National Institutes of Health's Office of Research on Women's Health. "We need to do better."

Clayton first became aware of these inequities in 2001 when she discovered yet another shocking statistic: worldwide, nearly twice as many women suffer from visual impairment and blindness as their male counterparts. This left Clayton, an ophthalmologist at the time, reeling. "Surely this can't be the case in the U.S.," she had said to herself. But, she found out, it was.

Clayton was determined to find the cause of these stark differences. One explanation, she suspects, is rooted in a centuries-old bias.

Underrepresented in Research

Across biological and medical research, scientists more often study males than females. Historically, the male-dominated scientific community claimed that female hormonal fluctuations introduced complicating variables into their experiments. Others thought it a moral imperative to protect women, especially pregnant ones, from the risks of research.

Regardless of their motivations, scientists systematically sidestepped sex differences by studying men and then applying the findings to women. Today, we're discovering that this has left yawning gaps in our biomedical knowledge.

This is not to say that male and female biomedicine are always different. Analyses of 163 FDA-approved drugs between 1995 and 2005 uncovered only 11 with largely differing effects on those with two X chromosomes.
And some drugs originally thought to act differently in the sexes have been found indistinguishable upon further investigation.

Consistent evidence has shown that medical illness, and as such anything that could worsen the functioning or quality of life of patients, increases patients' suicide risk.38,39 Allergen-induced asthma, with its connotation of suffocation and the marked restriction it imposes on physical activity,20 may also contribute to the observed connection.

This study indicates that men appear more sensitive to lower concentrations of pollen, while women tend to show a stronger effect, but only when the pollen counts reach a certain level. This is consistent with earlier studies suggesting a stronger association between tree pollen peaks and suicide in women than in men,7 and a stronger spring peak of suicide in women with atopic illness as compared with that in men with this illness.6 Men could be more sensitive to small concentrations as they are more reactive with regard to the patency of the small airways, while for women the effect might be stronger because of greater inflammation in the smaller airways.40 Another explanation could be the sex differentials in the comorbidity of asthma with allergic rhinitis, as women who were IgE positive had more asthma control problems, lower asthma-related quality of life and were more susceptible to asthma triggering than men.41 In addition, a genome-wide analysis showed major sex differences in gene expression in allergen-challenged CD4+ cells, and an underexpression of chemotaxis-related genes in women,42 suggesting that it might take a stronger stimulus in women to induce a cellular infiltrate that could perpetuate and augment allergic inflammation.
This is supported by the observation of a lower concentration of allergen-specific IgE in the nasal fluid in women than in men with seasonal allergic rhinitis.42 Alternatively, the significant sex difference may be relevant to non-specific psychological mechanisms related to sex differentials in reactions to medical illness and activity limitations.38,39 At the level of exposure, men are known to spend more time outdoors, either professionally or recreationally, which may help explain their response to lower levels of pollen.

It is intriguing that we did not note an additional effect of a very high level of air pollen, that is, more than 100 pollen grains/m3 of air, on the suicide rate in the population, regardless of sex, age and history of mood disorders. This is probably because most allergy sufferers begin to experience symptoms when the pollen count reaches the moderate level,43,44 although the threshold, the level of pollen exposure at which allergy symptoms occur, may vary from one person to another.

They also have more of an enzyme that converts testosterone to oestrogen. 'Hormones and chromosomes may be important in thinking about disease and health for women and men.'

In a fascinating study last year, University of Michigan researchers looked at how men and women responded to health messages. When they were shown poster adverts for exercise, men were more motivated by those that mentioned weight loss and health; women were motivated by those focused on wellbeing.

'We know men have more visual brains and respond better to visual messages in adverts. Women respond to detail, so are more likely to absorb the total picture,' says Dr von Lob.

Men and women respond to eating chocolate with different parts of their brains, a Dutch study in 2005 found.
In particular, women had reduced activity in the hypothalamus, which controls feelings of hunger, so they had to eat more to feel the same effect as men.

The researchers concluded their results 'indicate that men and women differ in their response to satiation [feeling full] and suggest that the regulation of food intake by the brain may vary between the sexes'.

Dr David Katz, founding director of Yale University's Prevention Research Centre, agrees. 'There are clear differences between the sexes,' he says. 'Studies show women crave sugar and fat more, while men are more likely to crave meat.'

And you can blame our caveman ancestors. 'All such differences tend to make sense in an evolutionary context,' says Dr Katz. 'Men need a bit more protein to build the muscle that makes them most capable of surviving, succeeding and passing on their genes.

'Women do the harder work of procreation. They need more fat stores to get a baby through gestation and to produce sex hormones such as oestrogen. Those hormones, in turn, seem to affect dietary preferences.' That is why women's desire for chocolate can be affected by the menstrual cycle.

In one analysis of brains, Larry Cahill, professor of neurobiology and behaviour at the University of California, found the male amygdala appears to be more active on the right side, while a woman's is more active on the left. The left side is connected with the area that governs emotions and self-awareness. 'So men under stress want to go for a run, let off steam or have space to themselves,' says Dr von Lob.

In the scientific publication "Influence of Oral and Gut Microbiota in the Health of Menopausal Women" (2017), in volume 8 of Frontiers in Microbiology, authors Vieira, Castelo, Ribeiro and Ferreira explain how estrogen levels affect which microbes grow in the reproductive and gastrointestinal tract.
This has a direct impact on rates of certain diseases commonly found in post-menopausal females. This is a summary and interpretation of their findings.

The beautiful life of a female involves two important changes in estrogen levels: menarche and menopause. Menarche occurs during adolescence and marks the onset of menstruation and potential reproduction. The second change is menopause, when menstruation ends and the ovaries become non-reproductive. Estrogen has a huge impact on a woman's physical wellness, emotions, memory, sensitivity to pain, and even the microbes that grow in her reproductive and gastrointestinal tract.

For years, the medical community has known that there are differences in the rates of autoimmune diseases between men and women, and even between pre- and post-menopausal women. Although we have known about the impact of estrogen, the role of the natural microbes living in the female body is just starting to reveal itself. We now know that estrogen levels affect which microbes grow in the mouth, the gut and the reproductive tract (and possibly other locations as well). This has an impact on periodontitis, autoimmune disease, type 1 diabetes, osteoporosis and even breast cancer.

In the scientific publication "Gut Microbiome, Metabolome, and Allergic Disease," in volume 66 of Allergology International, authors Hirata and Kunisawa explain how they have found a link between the food you eat and the products made by your gut bacteria that affect your overall health. This is a summary and interpretation of their findings.

NUT-FREE, GLUTEN-FREE, DAIRY-FREE…

How many times a week do you encounter a person with some sort of food allergy, or allergies in general? For me, it's daily. Maybe they are lactose intolerant, or gluten-free, or peanut-free. Some people cannot eat raw vegetables without gas and bloating.
My parents say, "It never used to be this way." Back when I was starting kindergarten, we didn't have "nut free" classrooms or hallways.

Medical school curricula need to differentiate between women's health and sex-specific medicine, and emphasize the equal importance of the latter. In a survey conducted by Jenkins and colleagues, published in the journal Biology of Sex Differences in 2016, most medical students nationwide felt that they were not properly educated on sex-specific medicine and that, based on their education, they would not feel prepared to treat their patients using that technique. The literature is available and convincing. Practicing sex-specific medicine is essential for ensuring accurate diagnosis, and it cannot be ignored.

MEN ARE FROM MARS

Males store fat among organs, while females store fat in a ring around the abdomen. That's why you can typically tell gender from an MRI and why liposuction is easier on women.

Female registered nurses make on average 5% less than their male counterparts, even though 90% of nurses are females.

Males are more likely than females to suffer from almost all diseases. Among the few exceptions are osteoporosis, multiple sclerosis and breast cancer.

The skulls of men and women can be distinguished by the male's heavier mandible, rounder and broader brow bone and larger occipital protuberance.

There are many gene differences between men and women in the liver. This is why men and women metabolize drugs like Tylenol or alcohol so differently.

Males tend to have more muscle above the torso, while females tend to have more muscle below the torso.
The difference is about 15%.

ANNIKA CARTER, WAYLAND YEUNG, GALIT DESHE

Male doctors are 2x more likely to be sued.

Men have more standard deviation in IQ than women... in both directions!

On average, women live 7 years longer than men.

Male brains are typically better at spatial cognition.

On average, females have a higher body fat percentage, at 26% compared to males at 13%.

TOP 10 LEADING CAUSES OF DEATH (entries recoverable from the sidebar): cancer (any type); chronic lower respiratory diseases; influenza and pneumonia; nephritis, nephrotic syndrome, or nephrosis.

WOMEN ARE FROM VENUS (WWW.PREMEDMAG.ORG)

What does the term "patient advocate" mean to you?

- Our hearts beat faster, even during sleep.
\"Try as we may, we cannot\ncompletely ignore the blood on the tampon, the inexplicable hunger for a baby,\nthe unsettling aftershocks of birth, the temperature spikes of menopause…. Our\nfemale rhythms no longer constrict the steps we can take and the moves we can\nmake, but they remain the chemical choreography of our lives.\"\nWho are all these\nwomen who try to ignore their menstrual cycle? Hasn’t this been done to death?\nWhen are we going to get beyond the crotch?\nHales says that\nuntil recently this was used against us, but now it isn’t. We’re not sure\nwhy except for that whole evolution thing and a brief reference to the women’s\nmovement. We don’t really know why the menstrual cycle mades us hated and\nashamed (as Hales says, \"stigmatized,\" used to discriminate against us) in\none society, but capable and proud in another.\nHales moves on to\nthe brain. \"A woman’s brain itself seems a model of connectedness. Women\ntypically use more cells in more parts of their brain than men do.", "score": 16.146201559952548, "rank": 77}, {"document_id": "doc-::chunk-7", "d_text": "To me this makes sense evolutionarily, as females are smaller and slower in general than males and are often responsible for children who are even smaller and slower still, both fighting and fleeing are less viable means of survival.\nThe Autoimmune Connection\nSo how does this relate to autoimmunity and its feminine dominance? While appreciating the elegance of Maready’s line of reasoning and his gathering of evidence regarding neurological disorders, there really wasn’t that much information so far that was new or shocking to me. It was when Maready started wading into the waters of autoimmunity that he blew my mind. Maready (with the help of LeAnne’s own research) noted that onset of autoimmune conditions in women often occurred immediately following four main triggers: infection, pregnancy, extreme physical exertion, or heightened stress. What do these four triggers have in common? 
They are all immune activation events, calling white blood cells to sites of stress or injury.

"'I was diagnosed with rheumatoid arthritis after my first pregnancy,' I would hear again and again," says Maready. That sentence gave me chills, because I developed rheumatoid arthritis a few months after my daughter was born, and I wondered at the time if the pregnancy was a trigger, but the best info I could find was that RA often goes into remission during pregnancy because of the supposed down-regulation of the immune system.

So we know that the white blood cells that arrive at these sites are likely to contain at least some aluminum, which we know will not be good for the already-stressed cells, but that alone doesn't explain why the inflammation response keeps going and often increases in intensity to the point where the body is spending a great deal of energy attacking its own tissues. Maready's explanation had me riveted.

If you know anything about Lyme disease, you may know that one of the biggest reasons it is so hard to get rid of is that the bacteria that cause it are spirochetes, which have three different forms. When antibiotics kill the bacteria in its active form, the remaining bacteria revert to forms that are much harder to kill until it's "safe" to come out and proliferate again.

We have dealt with a lot of issues relating to females, but rather than specialising in "female only conditions" we find that so many come in with general health conditions that happen to affect everyone. It's just that women try to look after themselves infinitely more than their "Supermen" cohorts!
However, it should be mentioned that differences like menstrual cycles, pregnancy and hormonal imbalances, as well as a tendency to see a doctor more often (which includes taking more medications like antibiotics and the contraceptive pill), can tip the scales towards certain conditions that men seem less likely to get.

We therefore treat a huge and diverse array of symptoms and conditions covering all three areas of primary health: Mind, Body and Spirit. And like all things, they need to be in balance. It is interesting how much importance women place on recovery, health and wellbeing.

What we find more than anything with female clients in general is how little attention has been paid to understanding and analysing their diets, stress, trauma, recovery and lifestyle as a whole by other healthcare providers. All too often we find that there has been so much emphasis on symptom recovery and management that as soon as these treatments are stopped, the underlying illness returns with the original symptoms.

By paying attention to detail, and by understanding the factors associated with the immune system and the effects that dietary and environmental toxicity has on female health, many of the significant and common causes for visiting an alternative practitioner are eliminated.

Every time I have a patient who tells me they have been diagnosed with breast cancer or some autoimmune condition, the first response that I have is: "So many people seem to develop this condition after some kind of immense stress and trauma history, causing a breakdown in the immune response and therefore potentially leading to the condition." If you want to reduce the chance of such a breakdown, try to improve immune health and offset the stress.

This can be significantly more effective once a correct dietary evaluation has been made.

FACT: 75% of my clientele are women, 20% are children (nearly always accompanied by women), with a mere 5% accounting for a few switched-on men.
Everything from polycystic ovaries, candida, allergies, intolerances, rashes, fatigue and tummy issues to autoimmune conditions can potentially be helped by identifying specific inflammatory and allergic/intolerant responses associated with food and environmental toxicity.

Make the right choice and start with what goes in; after all, we are what we eat.

At least, I don't care as it relates to my specific work with individual clients. I understand that classification matters for further study, etc., but as far as my day-in, day-out work with people goes, the classification is irrelevant.

What I care about most is what causes fibromyalgia and what makes it better or worse. Here's how I share my view on classification. We often hear the old phrase, "If it walks like a duck and quacks like a duck, then it's likely a duck." To me, when it comes to fibromyalgia, I don't focus on whether or not it's a duck. I focus on how it walks and how it quacks!

Fibromyalgia walks and quacks like autoimmune conditions. For that reason, I study them both, find treatments and protocols that work for both, and will continue to look for solutions that positively impact both challenges.

The vital difference between genetics and gene expression

Have you been told that your health challenges are all in your genes? Have any of your family members been diagnosed with fibromyalgia and/or other co-existing autoimmune conditions?

The study of genes, and more importantly gene expression, is a vital component in the gender question regarding fibromyalgia.

We are more than just our genes.

Our body's signals (hormones) have the ability to literally tell our genes when and how to act. Some genes are static while others turn on and off like switches.
(3) It's this switching process (gene expression) which provides the most hope for a healthy future.

The study of epigenetics shines a beacon of hope on an otherwise misleading portrait of a bleak future. Epigenetics studies the outside factors that affect gene expression. Factors such as nutrition, relationships, and beliefs have a profound and lasting impact on our risks of developing disease and our ability to heal from chronic health challenges. Be sure to check out the resources at the end of this article for more details on this topic.

For now, here are three main factors to review that reflect reasons why females are more likely to suffer from fibromyalgia and autoimmune challenges than males.

1) Reactions to stress

Identifying the differences in how men react to stress as opposed to women is both simple and complicated. It's simple to observe how men – in general – seem to disengage from others when under stress, while women are more likely to engage and actually seek out social connections.

Hearing

Men are five and a half times more likely to lose their hearing than women, according to a 2008 study from Johns Hopkins University. But because boys and girls show no differences in ability when they're born, experts speculate that most of these changes are due to lifestyle and environmental factors – like smoking, noise exposure, and cardiovascular risk factors – that affect more men than women.

Other research, however, has found that women of all ages have better hearing at frequencies above 2,000 Hz, but that, as they age, they are less able to hear low frequencies (1,000 to 2,000 Hz) than men.

Smell

Scientists have long known that women tend to outperform men on tests for identifying scents, but only recently have they found a potential biological explanation.
A study published in the November 2014 issue of PLoS ONE found that post-mortem female brains had, on average, 43% more cells and almost 50% more neurons in their olfactory centers (the part dedicated to smelling and odors) than male brains.

The study's authors can't be sure that these extra cells are responsible for greater smelling ability, but they say it's a good guess. From an evolutionary perspective, an enhanced sense of smell may have helped women choose mates for reproductive purposes.

Taste

Considering how closely smell and taste are related, it's not surprising that women also tend to have more sensitive palates than men. In fact, research from Yale University has found that women actually have more taste buds on their tongues. About 35% of women (and only 15% of men) can call themselves "supertasters," which means they identify flavors such as bitter, sweet, and sour more strongly than others.

Also of note: women of childbearing age taste flavors more intensely than younger or older females, and they may also notice increased sensitivity during pregnancy, Pelchat adds.

They may also get a bald spot on the crown of their head. Women have either thinning all over or random bald patches.

Hormones are often to blame for acne. Because women's hormones shift during periods, pregnancy, and throughout menopause, they're more prone to adult acne than men. Treatment can vary based on your sex, too, as doctors tend to prescribe medications that control hormones, like birth control, for women. Creams that you rub onto your skin are more common for men.

Women are more likely to say they're stressed than men, but both sexes feel anger, crankiness, and muscle tension at nearly the same rates from stress. Women more often say it causes a headache, upset stomach, or makes them feel like they need to cry. Men are less likely than women to feel physical symptoms during times of stress.

More women live with chronic pain (pain that lasts longer than 6 months and doesn't seem to respond to treatment) than men. Their pain also tends to last longer and be more intense.
Doctors are still trying to figure out why, but they think differences in hormones between the sexes may be to blame.

Because women are more likely to get osteoporosis, it's often overlooked in men. The condition is 4 times as likely in women as in men. Women over the age of 50 are the most likely to develop osteoporosis; our lighter, thinner bones and longer life spans are part of the reason we have a higher risk. Men can get osteoporosis too, it's just less common. But men who have this lack of bone density and break a hip are twice as likely to die as women with osteoporosis who break a hip.

Although women tend to get urinary tract infections more often, men's UTIs are more complicated. They have different causes, too. Women most often get them because of bacteria from sex or faeces (because our urethra is shorter and closer to that area). Men's UTIs are more likely to arise from something that blocks their urine stream, like an enlarged prostate or kidney stones.

Women are less likely to have symptoms with sexually transmitted diseases (STDs) like chlamydia and gonorrhoea. STDs can also lead to chronic pelvic inflammatory disease in women, causing fertility issues. Men seldom have such complications. The human papillomavirus (HPV) is also the main cause of cervical cancer in women, but it doesn't pose a similar risk for men.

We set the space for our beginning in December... and in January we are heeding the Call To Adventure... locating our reference points in the body and mind as we embark on our healing path.

I read this blog post called Could Female Self-Hatred Be The Real Cause Of Autoimmune Disease? It's definitely worth a read. Sarah Wilson, who suffers from autoimmune disease, has tried lots of remedies for her symptoms, and has realized that there's something else... something underneath the flare-ups. Self-hatred. Tension. Anxiety.
Not enough.

I often have clients who exhibit symptoms of autoimmune disease. A few have been diagnosed. And I'd posit that it's not unique to women, and that there's a second piece of the puzzle.

So... women. Yes, approximately 80% of the people diagnosed with autoimmune disease are women. This could be due to the differences in the immune systems of men and women. It could also be that men and women will be found to have different symptoms of autoimmune disease. That remains to be seen. I suspect that men are going to be found to exhibit more neurological symptoms and nerve problems, whereas women have more inflammatory issues. That's just a guess based on a few male clients... so don't quote me on that. Besides, I'm not a doctor.

When I work with clients, we delve into their cellular function. We ask questions of the Whole Body Wisdom about what is happening there. And people with autoimmune disease often register as having a secondary infection... of pathogens that do not actively infect human tissues, but lay dormant, waiting for bacteria or viruses that they CAN infect to come by. These dormant colonies are the ones causing the inflammatory response, and yet they are not registered by the immune system because they are not virulent to humans. So, the conundrum... the body can see the EFFECTS of these things, but cannot locate them directly. Some days are good days when the colonies are very quiet, and other days they start communicating with one another to see if there has been an influx of cells that they can infect... and there's a "flare up".

Now, I can't diagnose, so when I'm with a client I stress that this is metaphorical. There are parallels between the body and the mind and the emotions...
and the environment that shows up in one is likely to show up in another.

Why do Women Live Longer than Men?

Research at Harvard University on people who have lived to 100 has reportedly revealed that menopause is behind the difference in the life spans of males and females. According to Thomas Perls, a geriatrician at Harvard University, there are two reasons for the extension of the female lifespan compared with the male: one is the transfer of genes from one generation to the next over the course of evolution, and the other is the necessity of giving birth. Because a woman has to remain healthy to give birth to as many children as possible before menopause, she has to stay healthy for a long time.

Menopause is a line that separates the life of a woman into two parts. It prevents older women from bearing children, which would be risky for them. Menopause gives the woman strength and life to take care of her children and grandchildren. Men, according to Perls, only carry the genes and transfer them to the next generation.

Surveys in most countries have found that women live longer than men. In the United States, the average life span of women was 79 years while that of men was 72 years. This gender gap is most pronounced among those who live past 100. Females have two X chromosomes as sex chromosomes, while males have one X chromosome and one Y chromosome. Genes present on the male's Y chromosome may undergo alterations that result in illness. Because a man's X-linked genes are expressed whether they are recessive or dominant, having only one allele, men become susceptible to such illnesses earlier than females. It is reported that females become prone to heart attacks 10 years later than males.
The X chromosome makes up for the presence of a recessive illness-causing gene: a recessive allele on one X chromosome is masked by the dominant allele present on the other X chromosome. So women will be less prone to such diseases or illnesses. This might also be a reason for the reduced life span of males relative to females.

"If a woman with osteoarthritis starts HRT and notices that her pain is getting worse, she should consider getting off the drug, taking a lower dose, or switching to another alternative to see if (it is) responsible," he says.

From arthritis to migraines, scientists are also finding sex differences in how men and women respond to the pain of common diseases and disorders:

- Arthritis: Daily logs kept by 71 arthritis patients showed that women experienced significantly more severe pain. According to Keefe, who was principal investigator on the project, women are also more likely to relax, air their emotions, and seek distractions and emotional support to cope. "Men don't show their feelings and don't seek out assistance as readily as women. That may very well be what's going on in this case," Keefe says.

- Cardiac Disease: Premenopausal women have higher rates of false-positive chest pain syndromes, while postmenopausal women have relatively high rates of asymptomatic or silent heart disease, says Debra Judelson, MD, medical director of the Women's Heart Institute in Beverly Hills, Calif., and former president of the American Women's Medical Association. "Women are more likely to have high blood pressure and diabetes as complicating medical problems which can change the way they experience pain," Judelson says. "They also have more abdominal, shoulder, and neck pain, shortness of breath, back discomfort, vomiting, fatigue and nausea as opposed to the chest discomfort seen in men." The bottom line: up to a 40 percent higher mortality rate in women under 50 with heart disease than in men.
"Whatever symptoms they experience are not recognized as a cardio problem in the emergency room, which contributes to delays in seeking help or getting treatment," Judelson adds.

- Migraine Headaches: Boys have more migraines than girls until puberty; from puberty on, when hormone fluctuations kick in, women are three times more likely than men to experience migraines. They seem to strike whenever estrogen, the neurotransmitter serotonin, and beta endorphins are low. Several studies concluded that migraine in women of childbearing age dramatically boosts the risk of ischaemic, not haemorrhagic, stroke. Women who use oral contraceptives, have high blood pressure, or smoke are at greatest risk of ischaemic stroke associated with migraine.

"We need to be better at associating diseases with gender," said Dean Metcalfe, M.D., chief of the NIAID Laboratory of Allergic Diseases and a co-author of the study.

First, we have undergone a revolutionary change in how we fuel our body. For tens of thousands of years, mankind ate unprocessed, truly natural food. In the last 150 years or so, our diets have changed dramatically. Our genes simply cannot adapt that quickly. The human body is not designed to run on 150 pounds per year of processed sugar, nor is it designed to run on baby formula, aspartame, pesticides, hydrogenated oils or meat from animals fed unnatural diets and hormones. Sugar, for example, impairs the intestinal microflora and undermines a strong immune system.

Second, there is the "hygiene hypothesis," which suggests germ-free homes and childhood vaccinations have eliminated challenges to our immune systems, so they don't learn how to defend us properly when we're young. The immune system is sort of like a muscle – use it or lose it.
We eat food inoculated with antibiotics. Our children stay indoors more, play outside less. Studies suggest that children who are raised with pets, have older siblings, play with livestock on the farm, are less likely to develop allergies, probably because they are exposed to more microbes than those living in overly sterile homes. “The data are very strong,” said Erika von Mutius of the Ludwig-Maximilians University in Munich. “If kids have all sorts of exposures on the farm by being in the stables a lot, close to the animals and the grasses, and drinking cow’s milk from their own farm, that seems to confer protection.” Researchers believe the lack of exposure to potential threats early in life leaves the immune system with fewer command-and-control cells known as regulatory T cells, making the system more likely to overreact or run wild.\nThird, environmental pollution and sedentary lifestyles may play more of a role than we understand. One reason that many researchers suspect something about modern living is to blame is that the increases in allergies and auto-immune diseases show up largely in highly developed countries in Europe and North America. The illnesses have only started to rise in other countries as they have become more developed. Donna Jackson Nakawaza, author of The Autoimmune Epidemic, implicates the bioaccumulation of toxins that pervade our home, food, and environment, as a primary cause of autoimmune diseases. What effect are plastics and other persistent organic pollutants having on human health?", "score": 11.600539066098397, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "As winter and cold weather roll in, flu season also comes into play. 
As sickness increases, the term “man flu” also begins to appear more frequently.\nThe term is so well-known that Oxford Dictionary defined it in an entry.\n“A cold or similar minor ailment as experienced by a man who is regarded as exaggerating the severity of the symptoms,” the entry reads.\nWhile men are widely considered to embellish their symptoms in comparison to women, there are numerous studies that find some truth to the increased severity in their symptoms.\nA new Canadian analysis, published in the British Medical Journal (BMJ) on Dec. 11, put the \"man flu\" in the spotlight this week.\nStudy author Dr. Kyle Sue explores \"whether men are wimps or just immunologically inferior.\"\n\"Tired of being accused of over-reacting, I searched the available evidence to determine whether men really experience worse symptoms and whether this could have any evolutionary basis,\" Sue stated in the study.\nSue analyzed studies related to respiratory diseases, the common cold, intensive care, the flu and viral infections. 
He compared the symptoms between men and women.\nSue found that a man's immune system may be naturally weaker than a woman's.\nHe found that with some illnesses, especially respiratory diseases, men are more susceptible to complications and exhibit higher mortality.\nSome studies suggest that women are more responsive to influenza vaccinations than men, according to the analysis.\nSeasonal influenza data from 2004 to 2010 in Hong Kong suggested that when the flu strikes, adult men face a greater risk for being admitted to the hospital.\nAnother American study found that men seem to face a higher risk for dying from the flu than women, according to the analysis.\nHormonal differences between the genders most likely cause these results. The masculine hormone testosterone suppresses the immune system, while the feminine hormone estradiol is immunoprotective.\nFrom his research, Sue concluded that there was a gender \"immunity gap\" but stressed that this is “certainly not definitive.”\nThe new research on the \"man flu\" has been met by mixed reviews on social media.", "score": 11.600539066098397, "rank": 89}, {"document_id": "doc-::chunk-10", "d_text": "He and his team looked at 15 studies involving more than 2,000 men and women with Alzheimer's.\n`Our findings indicated brain functions are more severely and more widely affected in women than men with Alzheimer's. 
For some reason, men are able to resist Alzheimer's for longer.\n`This is still being studied, but one theory is that men have better \"cognitive reserve\" - for the generation developing Alzheimer's now, many of the women would have stayed at home while the men were working, which could have permitted them to keep their brains more active for longer. So, when the disease starts they can hold up better.'\nMen CAN multi-task. The cliche that men can't do two things at once is not, in fact, correct - at least not entirely. `The evidence on multi-tasking is inconclusive,' says clinical psychologist Dr Genevieve von Lob at City Psychology Group in London.\n`Studies tend to show inconsistent results - some find that women show slightly more superiority while others find men show slightly more superiority, depending on the task.'\nWomen multi-task much more often. A study published two years ago in the American Sociological Review looking at 500 families found that both parents spent a lot of time multi-tasking, but women multi-tasked 48 hours a week compared with 39 for the men.\nThe women's multi-tasking mostly involved housework and childcare. `So perhaps women multi-task more, not because they are naturally better at it, but because of the need to juggle work and family life,' says Dr von Lob.\nWomen are more prone to depression. Women are twice as likely to experience major depression as men and are particularly prone during hormonal changes. `The overall evidence suggests the sexes process emotions differently,' says Dr Moir. `There are a few differences in the limbic area or emotional processing area of the brain that make it more likely that women take a more negative view of situations and are more likely to worry about problems. This upsets sleep patterns, and if you don't sleep you get depressed.'\nDr Abel adds that these differences may be due to hormones. 
`Differences in the physical structure of a woman compared to a man's brain is in part caused by genes and in part by the differences in hormones the brain \"sees\",' she says. Women's brains have more receptors for recognising the presence of oestrogen than men.", "score": 11.600539066098397, "rank": 90}, {"document_id": "doc-::chunk-1", "d_text": "(They’re often six minutes short of a full day.) Men are more likely to be night owls. But women function better during periods of sleep deprivation.\n- During exercise, women’s primary fuel is fat. For men, it’s carbohydrates.\n- An average adult female has about 15–70 nanograms per deciliter (ng/dL) of testosterone. An average adult male has about 270–1070 ng/dL. Every year after age 30, men’s testosterone levels drop about one percent. That doesn’t happen for women. But women do see their estrogen levels fall off after menopause.\n- Men have pronounced Adam’s apples. That’s because they have larger voice boxes that make the surrounding cartilage stick out more.\n- Both sexes hit peak bone mass around age 30. At 40, men and women start losing bone. Menopause accelerates bone loss in women. So, women 51-70 need 200 milligrams (mg) of calcium more than men the same age. That’s 1200 mg per day for women and 1000 mg per day for men.\n- The daily calorie requirement for men is higher than women. There are a few reasons for this: higher muscle mass, stature, and basal metabolic rate. Pound for pound, muscles burn more than double the calories fat does.\n- Men and women carry different amounts of body fat. The higher body fat in women—about 10 percent—mostly supports reproductive physiology. One example is when a woman’s body fat gets too low, she stops menstruating.\n- Women typically carry their body fat in their hips and thighs. 
Fat tends to deposit around men's stomachs.\n- The difference between men and women's size, muscle mass, and calorie needs means men typically require diets higher in protein.\n- One study found that men have lower resting heart rates than women. But women have lower peak heart rates. Men's heart rates typically rise faster during exercise and slow quicker afterward.\n- Men normally have more red blood cells (4.7–6.1 million cells per microliter compared to 4.2–5.4 million cells per microliter for women).\n- Women typically have lower blood pressure than men—regardless of race or ethnicity.\n- For most of life, men and women have the same vitamin D requirements.", "score": 9.837610665623476, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "Do little girls stay too clean?\nOne of the most popular of the Just William stories tells how William and his Outlaw friends were joined one day by the cherished and sheltered Violet Elizabeth Bott. Violet Elizabeth had a burning desire to take part in ''boy's games''. As a result she had a wonderful time in the woods but ended up completely unrecognisable by her family.\nWestern society does not expect little girls to play in the dirt and get messy. Ribbons and curls tend to be more in line with what is expected of them, but writing in the US journal Social Science & Medicine, Dr Sharyn Clough, from the Department of Philosophy at Oregon State University puts forward the view that this emphasis on cleanliness may contribute to higher rates of certain diseases in adult women.\nThere is a well-documented link between increased hygiene and sanitation and higher rates of asthma, allergies and autoimmune disorders that is known as the ''hygiene hypothesis''. 
Dr Clough points out that women have higher rates in all of these disorders and she thinks she knows the reason why.\n''Girls,'' she says, ''tend to be dressed more in clothing that is not supposed to get dirty, they tend to play indoors more than boys and their playtime is more often supervised by parents. As a result girls stay cleaner.'' She points out that there is a significant difference in the types and amounts of germs that boys and girls are exposed to, leading her to the view that this might explain some of the health differences between men and women.\nDr Clough proposes that there should be new ways of looking at old studies. The ''hygiene hypothesis'' has linked the recent rise in asthma, allergies and autoimmune disorders such as Crohn's disease and rheumatoid arthritis with particular geographical and environmental locations. Many studies have noted that as countries become industrialised and more urban, rates of these diseases rise. A good example is India, where rates of Crohn's disease are climbing as sanitation improves and industrialisation increases.\nIn the United States it is reported that asthma prevalence is 8.9% in females as opposed to 6.5% in males and autoimmune diseases strike three times as many women as men. The rate for multiple sclerosis for women is double that for men and three women suffer from rheumatoid arthritis to every man. With the disease lupus, nine times as many women are affected as men. And so it goes on.", "score": 8.086131989696522, "rank": 92}, {"document_id": "doc-::chunk-2", "d_text": "New studies take a pointed look at the differences in body function relating to this tendency. These studies mainly focus on the brain.\nOne study in particular, demonstrated the brain function focusing on the amygdala (4) (the part of the brain responsible for the fight, flight, or freeze response). As expected, the engagement and disengagement to stress were measured and documented. 
However, the results of the stress response were surprising.\nThe study subjects were monitored while under intentional stress and their hormone level responses were measured. They were then given a cognitive test regarding facial recognition. The men’s ability to recognize faces (and discern whether they were friendly or threatening) diminished while women’s ability to discern faces was heightened.\nWhile the study didn’t go into details about how long these stress hormones continued to affect the genders differently, I’d surmise that the stress response in women continued longer than for men.\nThis hypothesis seems to be true considering that women suffer from chronic stress more often than men. In fact, the rates of women experiencing long term stress disorders such as generalized anxiety, PTSD, depression, and irritable bowel syndrome are all nearly double that of men. Furthermore, Debra Bangasser from Temple University states, ”Some differences may contribute to disease and some may not. Problems occur when the system is responding when it shouldn’t be or when it’s responding for a really long time in a way that becomes disruptive.” (5)\n2) Hormone levels\nStudies already demonstrate that women have increased hormonal activity under stress in contrast to men. It’s often postulated that fibromyalgia occurs more often in women than men because of “hormones.” But what does that really mean?\nHormones have a direct effect on gene expression; therefore, the types of hormones and the duration of exposure to these hormones can turn on or off various genes related to disease and systemic function. The gene expression difference in men and women may cause women to react to stress differently.\nIn studies using rats, the tendency to groom frequently is an indicator of high stress. When stress factors are introduced to both male and female study subjects, the behaviors that demonstrate stress increase. 
The significant difference, however, is that for female subjects, the grooming behavior becomes compulsive and obsessive in direct proportion to the levels of estrogen in the body. The more estrogen, the more frenetic the subject becomes.", "score": 8.086131989696522, "rank": 93}, {"document_id": "doc-::chunk-1", "d_text": "Saved by Her Meds\nSurprising, isn't it? The smoke of even a hundred unfiltered Camels couldn't compare with the thick, venomous air Iucciolino encountered at Ground Zero. Yet thanks to daily medication to reduce both airway constriction and inflammation—the two hallmarks of the disease—Iucciolino didn't have the kind of respiratory problems hundreds of others on the scene later developed (a condition referred to as \"World Trade Center cough\").\n\"After 3 years of searching for the right medication in the right combination with stress relief and physical activity, such as yoga, to maintain my lung capacity, I was very protected,\" she says. \"My asthma was totally under control. And still is.\"\nNot Child's Play\nMore than 14 million Americans have asthma. Of these, almost 5 million are children. The rest—more than 9 million—are adults, some of whom, like Iucciolino, didn't have the disease as children. In fact, of those diagnosed in adulthood, the majority are women.\n\"Most people don't tend to think of asthma as a women's issue. But it really is,\" says Kathleen A. Sheerin, MD, an asthma specialist in private practice with the Atlanta Allergy and Asthma Clinic, and founder of the nonprofit asthma-education group, Breathe Georgia. In fact, studies are just beginning to reveal real differences in the male and female experience of asthma, despite the fact that asthma, in both the sexes, is caused primarily by inflammation in the airways, a seemingly gender-neutral problem. 
Yet not only do more women than men have asthma, women with asthma tend to experience more discomfort than men with asthma who have the same degree of airway obstruction. And as a result, they report more symptoms, take more drugs, and seek more health care than men with the same disease.\nFor instance, one recent study at the Yale-New Haven Hospital found that high-risk female patients were admitted twice as often as high-risk males and tended to have longer admissions (5 days for women versus 4 days for men), even though the women, in general, didn't appear to be as sick as the men when they arrived at the ER. Scientists aren't sure why there's such a disparity. But studies suggest that, in part, the excess burden of asthma in women may be due to hormones.", "score": 8.086131989696522, "rank": 94}, {"document_id": "doc-::chunk-0", "d_text": "Women tend to communicate more effectively than men, focusing on how to create a solution that works for the group, talking through issues, and utilizing non-verbal cues such as tone, emotion, and empathy whereas men tend to be more task-oriented, less talkative, and more isolated. Men have a more difficult time understanding emotions that are not explicitly verbalized, while women tend to intuit emotions and emotional cues. These differences explain why men and women sometimes have difficulty communicating and why men-to-men friendships look different from friendships among women.\nMen tend to have a “fight or flight” response to stress situations while women seem to approach these situations with a “tend and befriend” strategy. Psychologist Shelley E. Taylor coined the phrase “tend and befriend” after recognizing that during times of stress women take care of themselves and their children (tending) and form strong group bonds (befriending). The reason for these different reactions to stress is rooted in hormones. The hormone oxytocin is released during stress in everyone. 
However, estrogen tends to enhance oxytocin resulting in calming and nurturing feelings whereas testosterone, which men produce in high levels during stress, reduces the effects of oxytocin.\nTwo sections of the brain responsible for language were found to be larger in women than in men, indicating one reason that women typically excel in language-based subjects and in language-associated thinking. Additionally, men typically only process language in their dominant hemisphere, whereas women process language in both hemispheres. This difference offers a bit of protection in case of a stroke. Women may be able to recover more fully from a stroke affecting the language areas in the brain while men may not have this same advantage.\nWomen typically have a larger deep limbic system than men, which allows them to be more in touch with their feelings and better able to express them, which promotes bonding with others. Because of this ability to connect, more women serve as caregivers for children. The down side to this larger deep limbic system is that it also opens women up to depression, especially during times of hormonal shifts such as after childbirth or during a woman's menstrual cycle.\nMen and women perceive pain differently. In studies, women require more morphine than men to reach the same level of pain reduction. Women are also more likely to vocalize their pain and to seek treatment for their pain than are men.", "score": 8.086131989696522, "rank": 95}]} {"qid": 46, "question_text": "How much additional money did Eli Lilly plan to invest in its Research Triangle Park facility?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "This began as a $2.9 million project partially funded through a United States Economic Development Administration (EDA) grant. 
There are over 79,000 square feet of space, with Touchstone Research Laboratories as the major tenant, and there is additional land to be developed in the park.\n- Building I completed 1990\n- Building II completed 1993\n- Building III completed 2001\n- Building IV contracted 2006\n- Expansion for additional production is ongoing.", "score": 43.72908328741524, "rank": 1}, {"document_id": "doc-::chunk-1", "d_text": "Planners originally estimated that the new facility would cost $230 million, but later lowered their projection to $190 million.", "score": 43.58479862064645, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "Lilly to invest $442 million in new biopharma facility in Ireland\nMonday, February 27, 2012\nEli Lilly is investing $442 million in a brand-new biopharmaceutical commercialization and manufacturing facility at its Kinsale campus in Cork, Ireland, according to Richard Bruton TD, minister for jobs, enterprise and innovation.\nThe investment will expand the Kinsale site’s existing biopharmaceutical mission with the establishment of an additional world-class commercialization and manufacturing facility.\nThe new facility, when fully operational, will require up to 200 highly skilled employees. In addition, 300 construction jobs will be created on the site. IDA Ireland worked closely with Eli Lilly to attract this investment to Ireland.\nThe planned 240,000-square-foot facility will enhance the company’s ability to bring treatments for illnesses such as cancer and diabetes to patients worldwide. This is the second large investment Lilly has made at its Kinsale site in recent years. 
In 2006, the company announced a $400 million investment in its first biopharmaceutical manufacturing and new-product commercialization facility at its Kinsale campus, which came on-stream in 2010.\n“The Action Plan for Jobs, which the government published recently, outlined a range of measures which we will take in 2012 to target the high-end manufacturing and health/life sciences sectors for further growth and also to deepen and develop the impact of multinational companies in Ireland,” said Minister Bruton.\nEd Canary, general manager of the Kinsale site, added, “This investment is an endorsement of the Lilly Kinsale site’s success in developing a biopharmaceutical business in recent years and demonstrates our ability to rise to that challenge. This is in no small part due to the site’s excellent performance record, the talent of the workforce, and the support from IDA Ireland.”", "score": 43.47902442632674, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "INDIANAPOLIS - Eli Lilly and Company plans to invest $140 million to expand its insulin manufacturing operations in Indianapolis.\nThe 80,000-square-foot expansion at the Lilly Technology Center will allow the drugmaker to grow its manufacturing of insulin cartridges to meet the growing demand for diabetes care in the country.\n\"Last year, diabetes affected more than 350 million people worldwide. By 2030, that figure is expected to rise to over 550 million people,\" said Enrique Conterno, president of Lilly Diabetes. \"Unfortunately, by 2050, if trends were to continue, it is expected that in the U.S., one out of every three U.S. 
adults will have diabetes.\"\nLilly's expansion is one of the largest economic development investments in both the city and state in 2012, and is the first increase in the company's Indianapolis manufacturing operations in more than two decades.\n\"Lilly is the premier pharmaceutical company in the world, and their commitment to research keeps them ahead of the medical curve,\" Lt. Gov. Becky Skillman said in a news release. \"This expansion is just further proof that Indiana is a fiscally strong state, and we thank Lilly for their confidence in our state and our people.\"\nConstruction will begin immediately. The expansion is expected to be finished by March 2014.", "score": 43.347390242538694, "rank": 4}, {"document_id": "doc-::chunk-1", "d_text": "Initial projections show that the new building — a $50 million investment — will be constructed to hold approximately 780 employees and include underground parking. The site, which is expected to be built in three years, would also allow for U.S. Venture to expand as needed in the future.", "score": 42.928696006811556, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly and Company Expands Its Biotechnology Center In San Diego, California\nAccording to the firm, the expansion, set to be completed in 2016, will feature an additional 175,000 square feet of working space and is expected to generate up to 130 potential new job openings, yielding a 140 percent increase in the center's space and 70 percent increase in its staff.\n\"San Diego has been an important location for Lilly laboratories for more than a decade. The city is a global hub for biomedical research and talent, where collaboration between academic institutions and biotechnology thrives,\" said Thomas F. Bumol, Ph.D., senior vice president, biotechnology and immunology research at Lilly. 
\"We want to build on our success in San Diego through expanded collaborations that ultimately allow us to bring better medicines to people faster than ever before.\"\n“Importantly, the space will allow for closer collaboration among Lilly experts in discovery chemistry and research technologies and biotechnology, which will help accelerate the discovery of new medicines within the company's core therapeutic areas, including immunology,” company officials said. “The expansion also signifies Lilly's investment in obtaining additional top scientific talent. Specifically, Lilly will be recruiting experts in drug discovery in the disciplines of biotechnology, chemistry and immunology and in immunological clinical development.\"\n\"The molecular discovery capabilities at the Lilly Biotechnology Center represent state of the art platforms to enable pharmaceutical innovation across the continuum of small and large molecules,\" said Alan D. Palkowitz, Ph.D., vice president, discovery chemistry research & technologies at Lilly. \"The expanded investment will further Lilly's leadership in these core areas and catalyze future discoveries.\"\nLilly initially entered the San Diego area in 2004 by acquiring Applied Molecular Evolution, Inc., which now operates as a subsidiary of the company. The Lilly Biotechnology Center was officially established in 2009 and is located near the University of California, San Diego, among other prominent biomedical research institutions.
", "score": 42.51280880066646, "rank": 6}, {"document_id": "doc-::chunk-3", "d_text": "• AstraZeneca acquired a high-tech biologics bulk manufacturing facility in Boulder, Colorado, from Amgen in September 2015 that it is refurbishing. The site is expected to be operational in late 2017. The purchase followed announcements that AstraZeneca is investing $285 million in a biologics facility in Sweden and expanding its Frederick, Maryland, site. The new Swedish facility will focus on filling and packaging of protein therapeutics and, from 2018, supplying medicines for the clinical trial programs of AZ and its MedImmune subsidiary.\n• Pfizer is spending $100 million to upgrade its biologics plant in Ireland.\n• Genzyme is investing $80 million at its recently approved facility in Framingham, MA, adding more downstream processing capabilities for Fabrazyme, its treatment for Fabry disease.\n• Eli Lilly is completing a $450 million biologics facility in Ireland. In 2013, the company also announced nearly $1 billion in planned plant expansions for the production of its insulin products, including API and cartridge manufacturing capabilities.\n• In August 2015, Amgen opened a $300 million facility including a syringe filling facility and a cold chain warehouse in Singapore. The facility uses disposable technology, continuous processing and real-time analytics, has a replicable and flexible modular design with a small footprint for reduced energy and water consumption and waste generation. 
It was also constructed in half the time required for a conventional plant, according to the company.\n• Baxter opened its first biologics facility in Asia in 2014. The $370 million facility in Singapore produces ADVATE and will also manufacture treatments for hemophilia B once a second expansion suite opens in 2017.\n• Roche announced in 2013 that it is investing $880 million in biologics manufacturing capabilities, including an ADC manufacturing plant in Switzerland and expansion/upgrades of sites in California and Germany. Its Japan-based subsidiary Chugai Pharma is also investing $310 million in antibody production capacity at a plant in Tokyo. An additional $125M investment in an expansion of a Genentech fill/finish facility in Oregon was announced in March 2015.\nSmaller biotech firms have not been idle, either:\n• Regeneron will be investing an additional $350 million on top of its initial $300 million investment to create a pharmaceutical plant at a former Dell computer manufacturing site in Ireland.", "score": 42.348938582222445, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly and Company (LLY) Research Laboratory Installs Protedyne's BioCube System At Indianapolis, Indiana Facility\n10/19/2005 5:12:50 PM\nWINDSOR, Conn.--(BUSINESS WIRE)--April 14, 2005--Protedyne Corporation, a leading laboratory automation supplier, today announced that it has completed the installation and site acceptance of its BioCube(TM) System LX2000 in Eli Lilly's Indianapolis facility. 
The Lilly research facility will use the BioCube System to perform compound solubilization for drug discovery research.", "score": 40.63928080039642, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "21 Nov 2014 Elanco Announces $100 million Investment in Augusta\nElanco Announces Major Capital Expansion at Augusta Technology Center\nElanco, the animal health division of Eli Lilly and Company, announced today a $100,000,000 investment in infrastructure and manufacturing enhancements at the Augusta Technology Center, creating 100 new jobs over the next three years. Improvements are already underway at the site.\nThe plant is currently operating near maximum capacity, prompting Elanco to invest in the capital expansion necessary to meet customers’ growing demand. More than 250 people are currently employed at Elanco’s manufacturing facility in Augusta, producing animal health products used by farmers around the globe. Elanco anticipates that the number of new jobs ultimately created as a result of the expansion will be contingent on new market approvals for Elanco products.\nThe Augusta plant and its employees play an important role at Elanco and for the company’s customers. VP of Manufacturing at Elanco, Steve Jenison, said, “An investment of this size is a testament to the performance of our operations in Augusta. 
We have every confidence in our employees at the site and the value our products bring to our customers.”\nGeneral Manager at the Augusta Technology Center, Kevin Trivett, said that being located in Augusta is an asset to the company’s operations: “We feel very fortunate to have partnered with a community whose workforce is highly skilled, engaged and committed to our vision of meeting a global demand for more of our animal health products.”\nHenry Ingram, Chairman of the Augusta Economic Development Authority, said, “The investment of $100,000,000 is significant for our community and soon, more of our neighbors will be employed, thanks to Elanco’s investment in Augusta.”\nMayor Deke Copenhaver also commended the company’s decision to expand its presence in Augusta. “With their yearly efforts to help break the cycle of hunger locally as well as in 100 other communities worldwide, Elanco is a fine example of what it means to be a community partner. I would like to personally thank the Elanco team for this investment and for all that the company does to help make our city a great place.”\nAs an OSHA Voluntary Protection Program (VPP) “Star” worksite, the Augusta Technology Center is recognized for its comprehensive safety and health management programs. From a community reinvestment perspective, Elanco provides opportunities for employees to volunteer in the community and supports local sustainability initiatives.", "score": 39.53352072490879, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Pence Announces Public-Private Bioscience Venture\nGovernor Mike Pence says the creation of a new Biosciences Research Institute announced Thursday will create jobs, attract investment and talent from around the globe and help retain the state’s top graduates. 
It’s being called the first industry-led life sciences research institute in the country.\nThe Indiana Biosciences Research Institute is a collaborative effort between private companies such as Eli Lilly, Cook Group and Dow AgroSciences and the state’s research universities – IU, Purdue and Notre Dame. By pooling resources and attracting new researchers to Indiana, the aim of the institute is to make the transition from the lab to the marketplace easier.\nBioCrossroads President David Johnson, who sits on the board of directors for the new institute, says part of its mission is to capture the dollars being spent outside the state by Indiana life sciences companies. He says retaining even five percent of that business will be a huge boon to the state.\n“That’s $250 million to $300 million a year coming into Indiana’s economy,” Johnson said, “and that’s real money and that is business and that is, in fact, exactly what the governor is talking about – putting Hoosiers to work.”\nGov. Mike Pence says the value of the institute goes beyond the economic benefit for Indiana.\n“Health issues that as we all know and we say, with heavy hearts, impacts too many Hoosiers – obesity, diabetes and heart disease…we’re going to come up with the breakthroughs here that are going to benefit Hoosiers and deal with some of the maladies that our own citizens deal with,” Pence said.\nThe General Assembly appropriated $25 million to get the institute up and running initially. It will ultimately cost $360 million to fully create and Johnson says the bulk of that funding will come from private investment and donations.", "score": 39.245337698457625, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "A team of developers led by Longfellow Real Estate Partners from Boston, Duke University and Measurement Incorporated President & CEO Hank Scherich unveiled formal plans Wednesday, October 1, 2014 for the $400 – $500 million Durham Innovation District (Durham.ID). 
The project covers 15 acres just to the west of downtown.\nDurham.ID will eventually include 1 million square feet of office space plus retail space and 300 residential units. To put the size of this project in perspective, the American Tobacco Historic District currently has about 1 million square feet of office space. The new innovation district will function as a “downtown research hub” with an emphasis on life science companies and provide space for Duke University researchers and companies seeking to collaborate with the university. Longfellow developers also discussed their renovation of the Main Street and Carmichael Buildings, which total more than 115,000 square feet of laboratory space costing upwards of $50 million.\nThe Durham Chamber of Commerce has been working with the Durham iD developers, Longfellow Real Estate Partners and Measurement Inc., the City of Durham and Durham County for the past year to address project-related infrastructure. In recent weeks, the Chamber has begun bringing economic development clients to look at the Durham iD Project. Ted Conner, Senior Vice President of Economic Development with the Durham Chamber, noted, “Downtown Durham is the hottest real estate market in the Triangle and Durham iD serves to bring more innovation-oriented space to the market, further enhancing our economic development efforts.”
Conner added, “The Durham iD project does not replace the scientific innovation work taking place in RTP but instead serves as a complement.” Conner ended by saying, “The expansion of Duke University’s scientific research to downtown Durham strengthens the city’s revitalization momentum.”", "score": 38.268979200545125, "rank": 11}, {"document_id": "doc-::chunk-1", "d_text": "The economy of North Carolina, famed for its Research Triangle, got another kick-start in February when ground was broken on a new 311,000-square-foot building for offices and labs located in Kannapolis, just north of Charlotte.\nThe project is funded by a $150 million donation from David Murdock, who owns Castle & Cooke, Inc. and Dole Food Co., Inc.
[John] Lechleiter said the restructuring would have happened anyway, to speed up the pipeline, which has sputtered in recent years.\nI have no desire to kick a major employer when they’re going through issues, but it is a fair question to ask, especially during these tough financial times.", "score": 35.40525819063945, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly & Co. LLY, -1.56% said the pharmaceutical company should return to revenue and margin growth after 2014 and also unveiled a new $5 billion share repurchase program as executives outlined the company's outlook ahead of two major patent expirations.\nLilly expects to hit its financial goals in 2014, though market factors, such as the devaluation of the yen and slower growth in key emerging market countries, have moderated its near-term revenue growth expectations, the executives said ahead of an investment event at its headquarters\nThe company is in the midst of a wave of patent expirations that has pressured sales. The company lost exclusivity for its former No. 1 product, the antipsychotic Zyprexa, in October 2011, and it will lose U.S. patent protection for the blockbuster antidepressant Cymbalta in December, followed by osteoporosis drug Evista next year. The company hopes to jump-start sales by bringing new drugs to market and also has cut costs, earlier this year laying off hundreds of U.S. 
sales representatives.\nLilly, which has a market capitalization of about $56.9 billion, expects to maintain its dividend at least at its current level and will supplement it with its new share buyback program over time.\nIt also said it plans to launch several new medicines beginning next year and noted it currently has 13 potential medicines in Phase 3, the final stage of clinical studies, or in regulatory review.\n"To prepare Lilly for 2014 and beyond, we committed to replenishing and advancing our pipeline, driving revenue in our growth engines and key marketed products, and increasing productivity and reducing our cost structure," Chief Financial Officer Derica Rice said.\nBeyond 2014, the company expects revenue growth in four of its five businesses--diabetes, oncology, emerging markets and animal health. The fifth segment, bio-medicines, will take a hit, losing U.S. marketing exclusivity on Cymbalta and Evista, but Lilly said the business's mid-term revenue should be relatively stable.\nShares were up by 46 cents to $51 premarket.
The stock is up 2.5% since the start of the year.\nWrite to Nathalie Tadena at email@example.com", "score": 32.75870132734025, "rank": 14}, {"document_id": "doc-::chunk-1", "d_text": "Investment in research and development increased 85 percent to $103 million, mainly as the company spent more on its dermatology product portfolio including its plaque psoriasis treatment IDP-118 and inflammatory drug brodalumab.\nWhile total revenue rose 9.3 percent to $2.37 billion, it missed the average analyst estimate of $2.38 billion.\n(Reporting by Amrutha Penumudi and Ankur Banerjee in Bengaluru; Editing by Savio D'Souza and Saumyadeb Chakrabarty)
Its animal health unit has made its most recent purchases, including a $308 million purchase that closed last July.\nEli Lilly hasn’t made a deal worth over $1 billion since its 2008 buy of ImClone Systems, a $7.1 billion deal signed at the height of the global financial crisis and shortly after Lechleiter rose to CEO.\nLechleiter said small add-on deals and partnerships are more likely, but that Eli Lilly believes its current pipeline of drugs offers better prospects than running down its cash pile buying late-stage products from other drug makers. He also questioned whether acquisitions provide returns for shareholders, saying he’d rather maintain Eli Lilly’s dividend instead.\nThough M&A activity on the whole has been rather quiet to start the year, there have been some pharmaceutical deals that have grabbed headlines, including Watson Pharmaceuticals’ $5.9 billion deal to buy Actavis Group and Hologic’s $3.9 billion deal for Gen-Probe. There also have been hostile attempts including Roche Holding’s attempt to buy Illumina and the GlaxoSmithKline pursuit of Human Genome Sciences.\nSo far this year there have been $49.2 billion worth of pharmaceutical deals around the globe, well under half of last year’s total $129.5 billion, according to Dealogic.", "score": 32.1609060643476, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly sold rights to two of its legacy antibiotics as well as its Suzhou, China, manufacturing facility, to China-based specialty pharmaceutical company Eddingpharm in a deal worth $375 million.\nEli Lilly announced on April 22, 2019 that it has entered into an agreement with Eddingpharm, a China-based specialty pharmaceutical company, to sell rights to two of Lilly's antibiotic medicines-Ceclor (cefaclor monohydrate) and Vancocin (vancomycin hydrochloride)-in China as well as a manufacturing facility in Suzhou, China, that produces cefaclor monohydrate in a deal worth $375 million.\nUnder the terms of the
agreement, Lilly will receive a deposit of $75 million, followed by a payment of $300 million upon successful closing of the transaction. As part of the transaction, all employees at the Suzhou manufacturing facility and certain employees from shared functions will be offered the opportunity to remain at the facility and continue to work with Eddingpharm. Lilly reports that it will provide ongoing services to Eddingpharm for a period of time to ensure continuity of product supply and support the transition of the facility.\nThe transaction is expected to close in either late 2019 or early 2020, subject to customary closing conditions and regulatory approval.\nEddingpharm, headquartered in Hong Kong, manufactures clinical nutrition, antibiotics, respiratory system, nephrology, and cardiovascular pharmaceutical products and has nearly 1500 employees in China.\n"Lilly remains committed to improving the health of people in China," said Julio Gay-Ger, president and general manager of Lilly China, in a company press release. "This transaction will enable Lilly China to better focus our resources on the exciting new therapies that we are launching in our core therapeutic areas, so that we can bring more life-changing medicines to patients in China."", "score": 31.574588058404014, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "DPR Completes Solid-Dose Manufacturing Facility for United Therapeutics Corporation\nUnited Therapeutics Corporation’s New Facility in Raleigh-Durham’s Research Triangle Park Uniquely Integrates Manufacturing and Office Environments\nAt first glance, it is clear that United Therapeutics Corporation (UT) envisioned creating something far beyond the “ordinary” when it engaged DPR to construct its new 203,000-sq.-ft.
manufacturing, research and office facility in Raleigh-Durham’s prestigious Research Triangle Park.\nThe project included:\n- Curved metal panels that wrap around the entire structure.\n- Extensive natural daylight that is transported into what are typically enclosed manufacturing spaces through use of interior floor-to-ceiling glass curtainwalls, skylights and a two-story atrium.\n- A glass-encased entry lobby including a saltwater aquarium feature.\nProject: Solid Dose Manufacturing Facility\nClient: United Therapeutics\nArchitect: O’Neal, Inc.\nThe facility also has a purposeful intent behind its soaring design. The goal was “to make sure we did something unique and fundamentally different, not just externally or architecturally, but also internally by integrating the manufacturing with the office environment,” comments David Zaccardelli, Executive Vice President of Pharmaceutical Development and Operations for UT. “Too many times, manufacturing gets done behind solid walls. We wanted to build a place where we could integrate all the activities our employees do and enable them to see what happens from a manufacturing standpoint, as well as create a space that can serve the public and promote education and training.”\nThe new facility is ultimately intended to manufacture the oral form of treprostinil that is now in clinical trials, a new form of treatment for pulmonary hypertension. In addition to serving scientific and manufacturing functions, the facility is also office space for employees who carry out UT’s administrative, business, marketing and management functions.\nUT’s new facility represents a major expansion for the Maryland biotechnology company into Research Triangle Park in Durham County, NC, one of the country’s largest science parks and an area where neither UT nor DPR had a presence when the project got underway approximately two years ago. 
Although the project was DPR’s first in this marketplace, its technical expertise and experience in pharmaceutical facility construction, focus on quality, proven ability to meet aggressive schedule constraints and problem-solving abilities made DPR well suited for the job.", "score": 30.968083048513584, "rank": 18}, {"document_id": "doc-::chunk-0", "d_text": "Does any pharma CEO really need to take the no-megadeals pledge these days? The prevailing strategy is toward smaller, bolt-on buys rather than the big mergers we saw back in 2009. But we still hear the \"not for me, thanks\" assurance on a fairly regular basis.\nThe latest: Eli Lilly ($LLY) chief John Lechleiter. He told The Wall Street Journal that his company won't resort to a large deal to fix its current patent-cliff problems. And those patent-cliff problems are big: Its top-selling Zyprexa went off patent in October.\nEven small deals and partnerships aren't a sure thing, he said. Lilly is betting that its own pipeline will offer a better bang for the buck than buying in a bunch of late-stage products.\nLechleiter also pledged himself to the anti-diversification crowd. Rather than spending its $4 billion or so in cash on ancillary businesses to beef up sales, Lilly will focus on prescription drugs. \"I don't think we can return to the Lilly of the 1970s, when we were buying up medical device companies,\" Lechleiter told the WSJ. \"[O]ur core is pharmaceutical innovation.\"\nAccording to FierceBiotech research, Lilly spent $5 billion on R&D last year, up 3% from 2010. At the end of the year, it had 11 late-stage products, including the risky-but-potentially-huge Alzheimer's candidate solanezumab. Key study results are due later this year.\nIn the meantime, he said, acquisitions don't always deliver returns for shareholders. 
So, Lechleiter said, Lilly's cash might be better spent on keeping up investor dividends.", "score": 30.88114914453505, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly and Co. and Pfizer Inc., which are both suffering through some of the largest patent cliffs in the industry, will split any future costs and profits of an osteoarthritis drug that has stalled in clinical testing.\nIf approved, the drug would be a potent boost to Lilly’s product portfolio. It would also mean a critical new therapy for a cancer that’s proven difficult to treat.\nLilly has set up not one, not two, but five head-to-head trials of its experimental drug dulaglutide against other leading diabetes therapies. So far, dulaglutide’s record is four wins, no losses.\nThe trial of 2,100 patients, called Expedition III, will use new measures of cognitive function, such as the ability to do tasks like cooking or driving, or remembering words after a delay.\nLilly officials said they will push ahead with the first-of-a-kind imaging chemical, despite the mostly negative ruling by Medicare officials.\nLilly’s drug, if approved, may be a significant competitor to Novo Nordisk A/S’s Victoza, which generated $1.64 billion in 2012.\nEli Lilly and Co. is seeking to revoke a patent held by a Johnson & Johnson unit, arguing at a London court it might delay availability of a potential treatment for Alzheimer’s disease.\nEndocyte Inc. saw its shares fall nearly 7 percent Tuesday morning after the drug development firm announced that its application for U.S. approval of a cancer drug could be delayed another 10 months.\nWith Eli Lilly and Co. set to see patents expire on its best-selling drug at year’s end, it is in the company’s interest to say its pipeline is about to produce new drugs.
But the Indianapolis drugmaker may be in a position to submit five new drugs for regulatory approval this year.\nEli Lilly and Co. said it discontinued a last-stage trial of experimental rheumatoid arthritis drug tabalumab for lack of efficacy. Lilly is still evaluating the drug in the two other late-stage studies.\nChina takes eight years longer on average to approve drugs than other major countries, and U.S. drugmakers are looking at ways to help speed things up, Eli Lilly and Co. CEO John Lechleiter said.\nEli Lilly and Co. said dulaglutide lowered blood sugar better than three existing diabetes drugs in three Phase 3 clinical trials.", "score": 30.859706830300077, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "INDIANAPOLIS, Aug. 9, 2012 /PRNewswire/ --\nEli Lilly and Company (NYSE: LLY) has revised certain elements of its 2012 reported financial guidance to reflect additional income the company will recognize as a result of the early payment of financial obligations from Amylin Pharmaceuticals.\nFollowing the completion of its acquisition by Bristol-Myers Squibb, Amylin has paid to Lilly $1.259 billion in satisfaction of its revenue sharing obligation with respect to exenatide. As a result, Lilly will recognize income in the third quarter of 2012 of approximately $790 million (pre-tax), or approximately $.43 per share (after-tax). In addition to income previously deferred pursuant to this arrangement, Lilly also expects to recognize income in 2013 related to this payment of approximately $425 million (pre-tax), or approximately $.25 per share (after-tax), contingent upon transfer of exenatide commercial rights outside the U.S. to Amylin. Currently, Lilly anticipates these rights will be transferred to Amylin over the course of 2013. 
In addition, Amylin has also repaid in full to Lilly a $165 million loan and accrued interest.\n"The early payment of the revenue sharing obligation by Amylin allows Lilly to recognize the obligation's value in the near-term, receive significant income in both 2012 and 2013, and further strengthen our balance sheet," said Derica Rice, Lilly executive vice president, global services and chief financial officer. "With this additional cash, we will continue to advance our pipeline of more than 60 potential new medicines in development, as well as fund capital expenditures, business development activity, our dividend and share repurchases.\n"At the same time, we still expect to meet or exceed our mid-term minimum financial goals, despite not receiving 15 percent of net exenatide sales on an ongoing basis. From now through 2014, on an annual basis we still expect revenue to be at least $20 billion, net income to be at least $3 billion, and operating cash flow to be at least $4 billion."\nIn accordance with generally accepted accounting principles (GAAP), the recognition of the income from the early payment of the revenue sharing obligation has caused Lilly to revise certain elements of its 2012 reported financial guidance. The income has been excluded from the company's 2012 non-GAAP financial guidance.", "score": 30.808705446509002, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly (NYSE: LLY ) offered a range of earnings numbers for its third-quarter results. Reported earnings came in at a loss of $0.43 per share, but adjusted earnings per share were up 14% to $1.04.\nThe discrepancy comes from the nearly $1.5 billion charge it had to take to settle improper marketing allegations for its antipsychotic Zyprexa. Other than that, Eli Lilly had a very fine quarter.\nSales were up 14% year over year.
Sales of antidepressant and fibromyalgia treatment Cymbalta, Eli Lilly's second-biggest seller behind Zyprexa, jumped 40%. Compare that with other drugs that treat depression -- year over year, sales of GlaxoSmithKline's (NYSE: GSK ) Paxil sank 23%, Wyeth's (NYSE: WYE ) Effexor climbed just 3%, and Forest Labs' (NYSE: FRX ) Lexapro were up a bit more than 4% -- and you can see why Eli Lilly should be happy.\nTo continue that success, Eli Lilly needs to win marketing approval for its blood clot preventer prasugrel. In June, the Food and Drug Administration pushed back its decision to the end of September, but there's still no decision from the agency, and management wasn't exactly sure when a decision would be made.\nMore long-term growth will need to come from the pipeline of recent acquisition ImClone Systems (Nasdaq: IMCL ) . I still contend that it was a really risky move to outbid Bristol-Myers Squibb (NYSE: BMY ) for the company, but Eli Lilly's management seems to think the move will pay off if it can get just one additional drug on the market. Then again, with so many of its drugs coming off patent in the next decade, Eli Lilly had to do something.\nI think we can expect more mixed news from Eli Lilly in the future. As it moves drugs through its pipeline, expect substantial growth, followed by patent expirations and slumping sales. Hopefully, the stock price won't be on too much of a roller coaster.", "score": 30.64892935287732, "rank": 22}, {"document_id": "doc-::chunk-1", "d_text": "\"We are excited that lasmiditan will be back at Lilly, where it was originally discovered, for the conclusion of Phase 3 development and potential commercialization,\" said Thomas P. Mathers, CoLucid's chief executive officer. 
\"We are proud of the work that CoLucid has done to develop lasmiditan, and we believe Lilly's expertise in pain and commitment to innovation are a natural fit to potentially bring this medicine to patients.\"\nUnder the terms of the agreement, Lilly will acquire all shares of CoLucid Pharmaceuticals for a purchase price of $46.50 per share or approximately $960 million. The transaction is expected to close by the end of the first quarter of 2017, subject to clearance under the Hart-Scott-Rodino Antitrust Improvements Act and other customary closing conditions.\nWhile the financial charge will not be finalized until after completion of the acquisition, Lilly is expecting to recognize a financial charge of approximately $850 million (no tax benefit), or approximately $0.80 per share, as an acquired in-process research and development charge to earnings in the first quarter of 2017. The company's reported earnings per share guidance in 2017 is expected to be reduced by the amount of the charge. There will be no change to the company's non-GAAP earnings per share guidance as a result of this transaction.\nGoldman, Sachs & Co. is acting as the exclusive financial advisor, and Weil, Gotshal & Manges LLP is acting as legal advisor to Lilly in this transaction. MTS Health Partners is acting as the exclusive financial advisor, and Faegre Baker Daniels LLP is acting as legal advisor to CoLucid.\nAbout Eli Lilly and Company\nLilly is a global healthcare leader that unites caring with discovery to make life better for people around the world. We were founded more than a century ago by a man committed to creating high-quality medicines that meet real needs, and today we remain true to that mission in all our work. 
Across the globe, Lilly employees work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to communities through philanthropy and volunteerism.\nAbout CoLucid Pharmaceuticals, Inc.\nCoLucid was founded in 2005 and is developing lasmiditan oral tablets for the acute treatment of migraine headaches in adults and intravenous lasmiditan for the acute treatment of headache pain associated with migraine in adults in emergency room and other urgent care settings.", "score": 30.392463814066087, "rank": 23}, {"document_id": "doc-::chunk-1", "d_text": "In addition to these, here is the list of next top pharmaceutical companies that invested heavily in the last fiscal year:\n- Abbvie ($4.86 billion)\n- Eli Lilly ($4.56 billion)\n- Amgen ($4.51 billion)\n- Celgene ($4.23 billion)\n- Boehringer Ingelheim ($3.74 billion)\n- Gilead Science ($2.77 billion)\n- Takeda ($3.29 billion)\n- Allergan ($3.06 billion)\n- Biogen($2.30 billion)\n- Novo Nordisk ($2.17 billion)\nEU Scoreboard 2016 (World 2500)\nLuca Dezzani – IgeaHub Pharma Blog", "score": 29.98137211836244, "rank": 24}, {"document_id": "doc-::chunk-1", "d_text": "\"Do you want to pay $18 million to one person (in a lawsuit) or do you want to pay $18 million to build a campus?\" she asked.\nOne of Lingle's concerns is that the $18.2 million is just a down payment on an estimated total of $237 million for the two new campuses.\n\"It's not clear where the funding will come from,\" she said. \"I need to be able to explain to an average person what is the public benefit if we spend this money.\"\nStar-Bulletin reporter B.J. Reyes contributed to this story.", "score": 29.81665719215549, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly & Co.’s valuation multiples\nHeadquartered in Indianapolis, Indiana, Eli Lilly & Co. 
(LLY) has two main businesses: Human Pharmaceuticals and Animal Health.\nFrom an investor’s point of view, the two best valuation multiples used for valuing companies like Eli Lilly are forward PE (price-to-earnings) and EV-to-EBITDA (enterprise value to earnings before interest, tax, depreciation, and amortization) multiples, considering the relatively stable and visible nature of their earnings.\nPE multiples represent what an equity investor pays for each dollar of a company's earnings. On April 26, 2017, the company was trading at a forward PE multiple of ~19.3x, compared to the industry average of ~15.9x.\nOver the last year, Eli Lilly’s (LLY) forward PE has traded in the range of 18.9x–25.5x. Among its competitors, Pfizer (PFE), Johnson & Johnson (JNJ), and Merck & Co. (MRK) have forward PE multiples of 13.0x, 16.9x, and 16.0x, respectively.\nBased on the last five-year multiple range, Eli Lilly’s current valuation is neither high nor low, and its PE multiple has ranged from ~7.6x to ~26.5x.\nOn a capital structure–neutral and excess cash-adjusted basis, Eli Lilly currently trades at a forward EV-to-EBITDA multiple of ~13.9x, which is higher than the industry’s average of ~10.8x. Among its competitors, Pfizer (PFE), Johnson & Johnson (JNJ), and Merck & Co. (MRK) have forward EV-to-EBITDA multiples of 10.1x, 11.5x, and 9.5x, respectively.\nEli Lilly’s stock price has increased ~6.1% over the last 12 months. Analysts estimate that the stock has the potential to return ~10.1% over the next 12 months. Analysts’ recommendations show a 12-month target price of $89.10 per share compared to its price of $80.96 per share on April 27, 2017. The consensus rating for Eli Lilly stock is ~2.0, which shows a moderate buy for long-term investors.", "score": 29.76049536762426, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "A bruised Eli Lilly buys rights to Centrexion's early-stage pain drug for $47.5M upfront\nEli Lilly is working on putting its woes in the rearview mirror.
In recent months, a late-stage failure triggered the withdrawal of its cancer drug Lartruvo, the US drugmaker relegated two mid-stage drugs to the scrap heap, and Japan flagged safety concerns associated with its breast cancer treatment, Verzenio.\nOn Tuesday, the US drugmaker said it was acquiring the rights to an experimental early-stage non-opioid pain drug from Centrexion for $47.5 million upfront. The Boston-based company acquired the chronic pain drug CNTX-0290 — a small molecule somatostatin receptor type 4 (SSTR4) agonist — from Boehringer Ingelheim in 2016.\nAfter kicking off the year with a bang with an $8 billion agreement to buy Loxo, in a deal forged in 10 days, things for Lilly $LLY have only gone south. Last month, the company slashed its 2019 forecast by $3 billion — reflecting the Elanco Animal Health spin-off — and is in desperate need of assets to improve its pipeline and prospects.\nApart from the issues outlined above, Lilly is also struggling on the R&D side. Earlier this year, researchers renewed safety fears about its once-touted blockbuster contender tanezumab and the larger anti-NGF pain drug class; the company abandoned a mid-stage BTK inhibitor, shrugging off a $690 million pact for the immunology drug — in-licensed from Korea’s Hanmi three years ago; partner Incyte $INCY halted all further R&D investments in Olumiant, a JAK inhibitor that barely managed an FDA approval, with a lower dose than Lilly had advocated for after regulators raised serious safety concerns about the drug — and the class; and then there’s the Alzheimer’s drug solanezumab, which has failed three pivotal programs but is still marinating in late-stage development, despite most observers having written it off as collateral damage in the all-but-dead amyloid beta hypothesis.
The company is facing fierce pressure to rein in the prices of its arsenal of diabetes drugs, while its erectile dysfunction treatment Cialis is being eaten up by generic competition.\nIn the deal announced on Tuesday, Centrexion may be eligible for up to $575 million in potential development and regulatory milestones.", "score": 28.17771924666899, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "Research Triangle Park—the grandfather of corporate complexes of its kind—is getting a people-friendly makeover to draw top talent and startups\nThe rolling woods along Interstate 40 in North Carolina between Raleigh and Durham have little in common with a bustling urban streetscape. And that is exactly according to the plan used 50 years ago when Research Triangle Park opened amid the area's three big universities—Duke, North Carolina State, and the University of North Carolina at Chapel Hill. Today the research park's 11 square miles remain an oasis of manicured corporate campuses, wide commuter roads, and landscaped parking lots. It's \"Ward Cleaver's version of the perfect industrial park,\" quips Brent Lane, director of the Center for Competitive Economies at UNC-Chapel Hill. But in a few years, some of this acreage should feel more like a densely packed suburban center. As research parks sprout up inside big cities around the globe, the grandfather of today's research and development complexes is under pressure to make itself livelier. So it's adding facilities tailored to startups as well as shops and housing at the park's edges. The goal, says Rick L. Weddle, CEO of the Research Triangle Foundation of North Carolina, which manages the park, is to make the place \"consistently more attractive to the brightest minds in the world.\" The evolution won't be simple. Because the park is designated as a self-taxing authority and subject to zoning restrictions in two counties, residential housing construction must be limited. 
Covenants with the park's board and its roughly 170 corporate residents—owners such as IBM (IBM) and Cisco (CSCO) buy their land from the foundation—require consensus on land-use changes.\nRetail and Housing\nAs a result, Weddle, who has been in the job for five years, has focused on development around the periphery and is planning a few dense "nodes and niches" within it. Those include a 25-acre retail and housing site near a forthcoming transit hub and potentially a 100-acre site that currently hosts office space, a couple of bank branches, and a vacant shopping center. "Let's not kill the goose that laid the golden egg," Weddle says, referring to the park's model of historically catering to large companies.", "score": 28.14000452778794, "rank": 28}, {"document_id": "doc-::chunk-3", "d_text": "And when I look at Lilly, I see one of the best pipelines in the industry and I think part of that pipeline has been supported by some of the costs that you currently have.\nConover: How do you judge what's the appropriate cost-cutting metric versus having a very powerful pipeline?\nRicks: Well, it's a balance. I think we have to be driven by value in the long term, picking the best ideas and bringing them forward. So, we're not in a given year tightly bound to the range--I think we said 18% to 20% long term. We feel the latitude to vary below or above that, depending on the investment opportunities in front of us right then, but rather long term, we see that as a sustainable range to think about R&D cost.\nOne factor to consider is in this window right now, 2012-14, we are in this YZ period. We really view this as our trough revenue. So, one way the percentage will come down is just through revenue growth as we launch this pipeline, but long term also I think we have to be prudent about where we make those big Phase III bets and hold up a pretty high bar on differentiation and on the quality of the science.
Our ambition is not to buy other companies, but rather grow organically. So we have to make sure the next idea is at least as good as or better than those that are already in Phase III. That's part of the discipline of setting those targets. If what you are really wondering is, in a given year will we stop investing because we go to 20.1%, the answer is no. But through time, we see that as a reasonable range in a company that will be growing.\nConover: Sure. That makes sense.\nRicks: So, the absolute number may in fact grow.\nConover: OK. Then one last question: What do you think is most underappreciated by the investment community for Eli Lilly?\nRicks: There are many things, but I think at the top of the list might be the quality of the Phase III pipeline beyond sola. I have responsibility for the solanezumab Alzheimer's drug, and we're of course excited by what we see in that initial set of data. We'll see what happens next, but there has been a huge, I think too much, focus this year from analysts on that one asset.", "score": 28.082595704028435, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly and Co., under pressure to gain new products after setbacks this week with two diabetes drugs, may try to acquire its partner Amylin Pharmaceuticals Inc. or covet companies with more approved products.\nWith Amylin, Lilly would gain full control of the diabetes drug Byetta and a longer-acting version called Bydureon delayed this week by U.S. regulators, said Seamus Fernandez, a Leerink Swann & Co. analyst. Lilly might try to acquire Cephalon Inc. or Endo Pharmaceuticals Inc. to expand its painkiller business, said Bill Tanner, an analyst at Lazard Capital Markets.\nBy 2013, Lilly loses patents on medicines responsible for almost half its revenue. The Bydureon rejection, which stalled a new revenue source for at least two years, was compounded Wednesday when the company halted tests on a second experimental diabetes medicine because it wasn’t effective. 
Lilly Chief Executive Officer John Lechleiter on Thursday ruled out “large-scale combinations” while expressing interest in smaller deals.\n“An outright acquisition of Amylin certainly could make sense” if Lilly thinks Bydureon will be approved, Fernandez said in a telephone interview from Boston. Amylin, based in San Diego, lost half its market value on Wednesday after the Food and Drug Administration requested a study of Bydureon’s effect on heart rhythm.\nAmylin shares increased 45 cents, or 4.1 percent, to close at $11.48 on Thursday in Nasdaq Stock Market composite trading, after a 46 percent plunge on Wednesday. Indianapolis-based Lilly fell 51 cents, or 1.4 percent to close at $35.50 on the New York Stock Exchange.\nThe average premium paid in the last 12 months for acquisitions of U.S. medical and biotechnology companies was 40 percent, according to data compiled by Bloomberg. That suggests Amylin may have cost $2.3 billion Thursday, excluding debt, compared with $3.4 billion before shares plunged this week. Lilly had $5.16 billion in cash to make deals as of June.\nOther diabetes-drug developers led by Pfizer Inc. and Sanofi-Aventis SA may also pursue Amylin at its bargain price, Fernandez said. 
Ray Kerins, a spokesman for New York-based Pfizer, the world’s largest drugmaker, didn’t return calls for comment.", "score": 28.054924456362077, "rank": 30}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly and Company (NYSE: LLY) announced today that, pursuant to its previously announced cash tender offer for up to $1.6 billion aggregate principal amount (the “tender cap”) of specified series of its outstanding debt, approximately $1.45 billion in aggregate principal amount of the notes listed in the table below were validly tendered and not validly withdrawn on or prior to 5:00 p.m., New York City time, on May 27, 2015, the early tender date for the offer.\nSubject to the terms and conditions of the tender offer, Lilly expects it will accept for purchase all of the notes validly tendered and not validly withdrawn on or prior to the early tender date. The settlement date for the notes accepted by Lilly in connection with the early tender date currently is expected to be on June 5, 2015.\nLilly expects to determine the pricing terms of the tender offer at 11:30 a.m., New York City time, on May 28, 2015. The tender offer is scheduled to expire at 11:59 p.m., New York City time, on June 10, 2015, unless extended or earlier terminated.\nHolders of notes subject to the tender offer who validly tendered and did not validly withdraw their notes on or prior to the early tender date are eligible to receive the total consideration, which includes an early tender premium of $30 per $1,000 principal amount of notes tendered by such holders and accepted for purchase by Lilly. Accrued interest up to, but not including, the settlement date will be paid in cash on all validly tendered notes accepted and purchased by Lilly in the tender offer.\nIn accordance with the terms of the tender offer, the withdrawal date was 5:00 p.m., New York City time, on May 27, 2015.
As a result, tendered notes may no longer be withdrawn, except in certain limited circumstances where additional withdrawal rights are required by law.\nThe tender offer is being conducted upon the terms and subject to the conditions set forth in the Offer to Purchase, dated May 12, 2015, and the related Letter of Transmittal.\nLilly has retained Deutsche Bank Securities Inc., J.P. Morgan Securities LLC and Credit Suisse Securities (USA) LLC to serve as lead dealer managers for the tender offer and has retained D.F. King & Co., Inc.", "score": 27.586710206671224, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "Second-Quarter Significant Items Affecting Net Income\nNet income was affected by significant items totaling $.11 and $.29 for the second quarter of 2008 and the second quarter of 2007, respectively, which are reflected in the company's financial results and are summarized below and in the table that follows:\n-- The company recognized restructuring and other special charges of $88.9 million, primarily associated with previously-announced strategic exit activities related to manufacturing operations, which decreased earnings per share by $.05.\n-- The company recognized asset impairments associated with certain manufacturing operations (included in cost of sales) of $57.1 million, which decreased earnings per share by $.04.\n-- The company incurred in-process research and development (IPR&D) charges associated with the licensing arrangement with TransPharma Medical Ltd. of $35.0 million, which decreased earnings per share by $.02.\n-- The company incurred IPR&D charges associated with the acquisition of Hypnion of $291.1 million and the acquisition of Ivy of $37.0 million, which decreased earnings per share by $.29.\n|Second Quarter||2008||2007||% Growth|\n|E.P.S. (reported)||$.88||$.61||44%|\n|Restructuring charges (included in asset impairments, restructuring and other special charges)||.05||-||-|\n|Asset impairments (included in cost of sales)||.04||-||-|\n|In-process research and development charges associated with in-licensing transaction with TransPharma (2008) and acquisitions of Hypnion and Ivy (2007)||.02||.29||-|\n|SOURCE Eli Lilly and Company|\nCopyright©2008 PR Newswire.\nAll rights reserved", "score": 26.71874921051663, "rank": 32}, {"document_id": "doc-::chunk-2", "d_text": "Except for historical information, this press release may contain forward-looking statements, relating to expectations, plans, or prospects for Transition and/or Lilly. These statements are based upon the current expectations and beliefs of management and are subject to certain risks and uncertainties that could cause actual results to differ materially from those described in the forward-looking statements. These risks and uncertainties include, among others, the completion of clinical trials, the FDA and other foreign review processes and other governmental regulation, the companies' abilities to successfully develop and commercialize drug candidates, competition from other pharmaceutical companies, the ability to effectively market products, and other factors which may be beyond the control of either Transition or Lilly. Please see Transition's filings with the Canadian commissions and Lilly's filings with the U.S. Securities and Exchange Commission for further information about risk factors and other cautionary statements. Neither Transition nor Lilly undertakes a duty to update forward-looking statements.\nCONTACT: Mark E. Taylor of Eli Lilly and Company, +1-317-276-5795, or Elie Farah of Transition Therapeutics Inc., +1-416-260-7770 x.203\nTicker Symbol: (NYSE:LLY),(NASDAQ-NMS:TTHI)\nTerms and conditions of use apply\nCopyright © 2008 PR Newswire Association LLC.
All rights reserved.\nA United Business Media Company\nPosted: March 2008", "score": 26.429373116691238, "rank": 33}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly and Company (NYSE: LLY) announced today the early tender results of its previously announced cash tender offer for specified series of its outstanding debt securities. Lilly also announced that it has removed the previously announced note caps setting forth the maximum principal amounts of its 4.150% Notes due 2059 and its 3.950% Notes due 2049 that Lilly will accept for purchase pursuant to the tender offer. Except as described in this press release, all other terms of the tender offer as described in the Offer to Purchase, dated September 7, 2021 (the “Offer to Purchase”), and the related Letter of Transmittal remain unchanged.\nA total of $2,016,575,000 in aggregate principal amount of the notes listed in the table below were validly tendered and not validly withdrawn on or before 5:00 p.m., New York City time, on September 20, 2021, the early tender date for the tender offer. 
The table below sets forth the aggregate principal amount of each series of notes subject to the tender offer that was validly tendered and not validly withdrawn on or prior to the early tender date.\n|Title of Security||CUSIP No.||Acceptance Priority Level||Principal Amount Outstanding||Principal Amount Tendered||Approximate Percentage of Outstanding Amount Tendered||Anticipated Principal Amount to be Accepted for Purchase|\n|4.150% Notes due 2059||532457 BU1||1(1)||$1,000,000,000||$408,714,000||40.87%||$408,714,000|\n|3.950% Notes due 2049||532457 BT4||2(2)||$1,500,000,000||$541,847,000||36.12%||$541,847,000|\n|7.125% Notes due 2025||532457 AM0||3||$229,692,000||$12,221,000||5.32%||$12,221,000|\n|6.770% Notes due 2036||532457 AP3||4||$174,445,000||$15,880,000||9.10%||$15,880,000|\n|5.950% Notes due 2037||532457 BC1||5||$284,112,000||$17,284,000||6.", "score": 26.38849862555388, "rank": 34}, {"document_id": "doc-::chunk-2", "d_text": "Both tax rates assume the extension of the R&D tax credit for the full year 2012.\nOperating cash flows in 2012 are still expected to be more than sufficient to fund capital expenditures of approximately $800 million, as well as anticipated business development activity, the company's current dividend and stock repurchases.\nLilly, a leading innovation-driven corporation, is developing a growing portfolio of pharmaceutical products by applying the latest research from its own worldwide laboratories and from collaborations with eminent scientific organizations. Headquartered in Indianapolis, Ind., Lilly provides answers – through medicines and information – for some of the world's most urgent medical needs. Additional information about Lilly is available at www.lilly.com; Lilly's clinical trial registry is available at www.lillytrials.com.\nThis press release contains forward-looking statements that are based on management's current expectations, but actual results may differ materially due to various factors. 
There are significant risks and uncertainties in pharmaceutical research and development. There can be no guarantees with respect to pipeline products that the products will receive the necessary clinical and manufacturing regulatory approvals or that they will prove to be commercially successful. Pharmaceutical products can develop unexpected safety or efficacy concerns. The company's results may also be affected by such factors as competitive developments affecting current products, including the impact of generic competition; market uptake of recently-launched products; the timing of anticipated regulatory approvals and launches of new products; regulatory actions regarding currently marketed products; issues with product supply; regulatory changes or other developments; regulatory compliance problems or government investigations; patent disputes; changes in patent law or regulations related to data-package exclusivity; other litigation involving current or future products; the impact of governmental actions regarding pricing, importation, and reimbursement for pharmaceuticals, including U.S. health care reform; changes in tax law; asset impairments and restructuring charges; acquisitions and business development transactions; and the impact of exchange rates and global macroeconomic conditions. For additional information about the factors that affect the company's business, please see the company's latest Form 10-Q and Form 10-K filed with the U.S. Securities and Exchange Commission. 
The company undertakes no duty to update forward-looking statements.\n|SOURCE Eli Lilly and Company|\nCopyright©2012 PR Newswire.\nAll rights reserved", "score": 26.233691935438767, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "Funded status for the pension plans of Eli Lilly & Co., Indianapolis, improved to a cumulative 95% for 2013, from 79% the previous year, according to the company's 10-K filing.\nLilly reported $9.48 billion in global defined benefit pension assets as of Dec. 31, up 14.4% from the end of 2012, on the strength of $1.14 billion in investment returns, according to the 10-K.\nThe company contributed $429 million to its global pension plans last year, down from $470 million in 2012. The filing did not state any expected pension funding for 2014.\nEli Lilly's pension funds increased their combined actual international equity allocation to 26.1% last year, from 23%. Hedge funds remained the largest asset class, at 30.6%, although the allocation was trimmed by two percentage points from 2012. Fixed income was cut to 15% from 17.8%; private equity was trimmed to 11.3% from 12%; real estate was 5.5%, down from 6.1%; U.S. equity was cut to 4.2% from 5.5%; and other investments were increased to 7.3% from 4.8%.\nSusan Ridlen, assistant treasurer, said Lilly does not disclose how many pension plans it offers globally. According to Ms. Ridlen and the 10-K, 80% of Lilly's global pension assets are with U.S.- and Puerto Rico-based plans.", "score": 25.65453875696252, "rank": 36}, {"document_id": "doc-::chunk-7", "d_text": "Although the company remains solidly in the unbranded drug space, its $6.8-billion purchase last year of Cephalon and its CNS portfolio means more exposure to the brand side of the patent-protection wars, as CNS drugs Provigil, Copaxone and Treanda burn through their exclusivity periods. Teva dealt Par the right to sell an authorized generic of Provigil, meaning Teva earns from both ends of the patent spectrum. 
The company says it has 177 product registrations awaiting FDA approval—120 of these are patent challenges for first-to-market generic rights.\n6 Eli Lilly $14.9B up 4.2%\nGlobal revenue: $24.3B (10th); up 5.2%\nR&D spend: $5.0B (9th), up 2.0%; 20.6% of rev.\nTop brands: Cymbalta ($3.2B), Zyprexa ($2.2B), Humalog ($1.4B), Alimta ($995M), Evista ($707M)\nPlanned launches: BI 10773 (diab.), dulaglutide (diab.)\nPromotional spend: $1.4B (2nd); 9.2% of rev.\nPatent expirations: Humalog (2013), Cymbalta (2014), Evista (2014)\nEli Lilly spends the most of the top 20 drugmakers, as a percentage of revenue, on R&D. Unfortunately, R&D output is not necessarily correlated with R&D spending, as shown by a study funded by the Midwest company and cited by Sanford Bernstein analyst Tim Anderson. Anderson predicts a bolt-on acquisition. But Lilly's pipeline is slowly improving. After its GLP-1 partnership with Amylin dissolved late last year, dulaglutide moved to the forefront as its lead GLP-1 candidate, and Lilly is aiming for a 2013 filing. Somewhat of a longshot is Alzheimer's candidate solanezumab, which passed an interim safety analysis in January. Recent launch Tradjenta was the third DPP-IV to market and has diabetes giant Januvia to contend with. Patent expirations loom on two megablockbusters—CNS drug Cymbalta and diabetes med Humalog.", "score": 25.65453875696252, "rank": 37}, {"document_id": "doc-::chunk-2", "d_text": "The program is expected to incur $1.4 billion in one-time restructuring charges, of which $800 million are likely to be cash costs. In addition, the company will invest approximately $500 million in establishing the new centre in Cambridge. Annualized benefits of approximately $190 million are expected by 2016 for the program.\n“I recognize that our plans will have a significant impact on many of our people and our stakeholders at the affected sites.
We are fully committed to treating all our employees with respect and fairness as we navigate this important period of change,” said Pascal Soriot.\nFinal estimates for program costs, benefits and headcount impact in all areas of the business are subject to completion of applicable consultation processes in accordance with local laws.", "score": 25.65453875696252, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly will pay AC Immune CHF 30 million ($30.2 million) in the first milestone payment tied to the companies’ CHF 1.89 billion ($1.9 billion) collaboration launched in December to develop new treatments for Alzheimer’s disease and other neurodegenerative disorders, the Swiss biotech said today.\nAC Immune said it will receive the milestone payment on or before October 7, 2019, in “a recognition of progress in the collaboration between the two companies”—namely the launch in July of a Phase I trial of ACI-3024, the company’s lead tau aggregation inhibitor small molecule candidate.\nThe Phase I trial is designed to assess the safety, tolerability, pharmacokinetics, and pharmacodynamics of ACI-3024 in healthy volunteers, AC Immune said, through a randomized, placebo-controlled, double-blind, sequential single and multiple ascending dose study with open-label food effect and pharmacodynamics assessment arms.\n“The start of the ACI-3024 Phase 1 study represents an important advancement in the broader effort we are making and further expands our robust clinical pipeline to address neurodegenerative diseases, in particular for therapeutics and diagnostics targeting Tau,” AC Immune CEO Prof. Andrea Pfeifer said in a statement.\nACI-3024 is a first-in-class investigational oral small molecule Tau Morphomer candidate in development for treatment of Alzheimer’s disease (AD) and other neurodegenerative disorders.
ACI-3024 is the primary focus of the collaboration with Lilly, launched in December 2018 to research and develop tau aggregation inhibitor small molecules using AC Immune’s Morphomer platform.\nUnder their agreement, AC Immune agreed to conduct the initial Phase I development of the Morphomer Tau aggregation inhibitors, while Lilly agreed to fund and conduct additional research and further clinical development.\nAC Immune has said that ACI-3024 has shown tau aggregation inhibition in preclinical models. At the Jefferies Healthcare Conference in November 2018, Pfeifer presented data showing a significant inverse correlation between cerebrospinal fluid levels of Tau and ACI-3024 exposure in plasma—which “might indicate an increase of Tau clearance from the brain.”\nThe milestone payment is half of the CHF 60 million ($60.4 million) in near-term milestone payments Lilly agreed to pay AC Immune under the collaboration.", "score": 25.65453875696252, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "The new facilities are to be located on adjacent property, and are likely to include hotels and a business complex, among many other attractions. Developers estimate the cost of the expansion to be around $2.1 billion (USD). It appears that there has been mostly positive public support for the project.", "score": 25.322625027734023, "rank": 40}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly and Co. plans to use an implantable drug-delivery system made by Medtronic Inc. to precisely target patients' brains with an experimental drug for Parkinson’s disease. The two companies announced their partnership on the Parkinson’s medication Tuesday morning.\nIndianapolis-based Lilly has not yet begun human trials of its drug, known as a glial cell derived neurotrophic factor, or GDNF. Lilly said in a press release that it has engineered the biotech drug to distribute more broadly than other neurotrophic agents have in previous tests.
Minneapolis-based Medtronic’s system, which uses a pump and catheter, supplies a steady amount of the drug to a specific brain region over time.\nFinancial terms of the partnership were not disclosed.\n“By collaborating with Medtronic from the earliest phase of research, we are maximizing the potential for this therapy's efficient and effective development,” said Michael L. Hutton, chief scientific officer of Lilly’s neurodegeneration team, in a prepared statement.\nThere is no known cure for Parkinson’s, a condition caused by the loss of brain neurons that produce dopamine, a chemical messenger key to the brain’s coordination of movement. Parkinson’s patients suffer from imbalance, tremors and muscle stiffness.\nSome of the most famous victims of Parkinson’s are the former boxer Muhammad Ali and actor Michael J. Fox. They are among more than 7 million estimated Parkinson’s patients worldwide.\nBy injecting neurotrophic factors into the brain, scientists expect that they would strengthen existing neurons, helping them produce more dopamine, said Ros Smith, senior director of regenerative biology at Lilly. Keeping neurons functioning longer could slow progression of Parkinson’s rather than treating its symptoms, as existing therapies do.\nHowever, because neurotrophic factors are large proteins, they don’t easily cross from the bloodstream into the brain, Smith said. But Lilly scientists hope that Medtronic’s delivery system can overcome that obstacle.\n\"One of the most significant challenges in delivering a biologic treatment for neurodegenerative diseases is crossing the blood brain barrier. We have extensive experience in targeted drug delivery and technology that allow delivery of therapeutic agents directly to the brain,” said Dr. 
Steve Oesterle, senior vice president of medicine and technology at Medtronic.", "score": 25.064602343399322, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "IN: AB BioTechnologies to Invest $10+M, Add 33 Jobs & Contract Mfg Services\n11 Jan, 2017\nAB BioTechnologies, a pharmaceutical development company, announced plans to add new contract manufacturing operations in Bloomington, creating up to 33 new high-wage jobs by 2020.\n“AB BioTechnologies showcases the big impact small businesses are making in communities around the state,” Governor Eric Holcomb said. “As AB BioTechnologies expands its business to include pharmaceutical manufacturing, it’s not just helping advance the development of new medicines — it’s showing the important role life science firms play in creating great-paying Hoosier jobs. Looking to the future, Indiana must continue to invest in life-sciences research and entrepreneurship as well as STEM education to prepare Indiana’s future workforce for opportunities in this and other high-demand fields.”\nThe homegrown Hoosier company will invest $10.5 million to construct and equip a 23,000-square-foot facility at 3770 W. Jonathan Drive in Bloomington, where the company will launch its clinical manufacturing services, helping its clients advance their drugs from concept to clinic under one roof. This includes a manufacturing area for formulating, filling, lyophilizing and packaging drugs for early-phase Clinical Trial studies. The company will also be relocating its warehouse and development laboratory to the facility, moving from its current 1,950-square-foot building to the new facility in Bloomington. With construction scheduled to begin this spring, the company plans to open its new facility in November 2017.\nAB BioTechnologies has tripled its profits annually since 2010 by expanding its service offerings and client base as well as significantly increasing the number of candidate drug products it has developed for its clients. 
The company plans to begin hiring early this year for technical and support positions.\n“The support we have received from the Gayle & Bill Cook Center for Entrepreneurship, Ivy Tech Community College, the Bloomington Economic Development Corporation and the state of Indiana has been instrumental in making this expansion a reality,” said J. Jeff Schwegman, Ph.D., founder and chief executive officer of AB BioTechnologies. “After evaluating the east and west coasts for our expansion, it became quite evident that Indiana was the superior choice. Indiana is rich in talent as a result of the increases in life science programs offered by state universities and colleges and the number of life science companies within our borders, which will enable us to build a skilled workforce.", "score": 24.62612130315523, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "UPDATE: This article is also posted on Seeking Alpha. For the first time, my article was accepted to be published on Seeking Alpha. The link to the article on Seeking Alpha can be found here: http://seekingalpha.com/article/3707566-eli-lilly-is-overvalued-too-costly-to-buy.\nOn October 22, Eli Lilly (LLY) reported an increase in third-quarter profit, as sales in its animal health segment and new drug launches offset the effect of unfavorable foreign exchange rates and patent expirations. The Indianapolis-based drug maker posted a net income increase of 60% to $799.7 million, or $0.75 per share, as revenue in its animal health segment increased 33%. In January 2015, Eli Lilly acquired Novartis’s animal health unit for $5.29 billion in an all-cash transaction. The increase in animal-health revenue helped offset sharp revenue decreases in osteoporosis treatment Evista and antidepressant Cymbalta, whose revenue fell 35% and 34% year-over-year, respectively. Eli Lilly lost U.S. patent protection for both drugs last year, causing patent cliffs.
A lower price for Evista reduced sales by about 2%.\nTotal revenue increased 2% to $4.96 billion even as currency headwinds, including a strong U.S. dollar, shaved 8% off the top line. Recently launched diabetes drug Trulicity and bladder-cancer treatment Cyramza helped increase profits, bringing in a total of $270.6 million in the third quarter. Eli Lilly lifted its guidance for full-year 2015: it now expects earnings per share in the range of $2.40 to $2.45, up from prior guidance of $2.20 to $2.30.\nDespite the stronger third-quarter financial results, I believe Eli Lilly is overvalued. Eli Lilly discovers, develops, manufactures, and sells pharmaceutical products for humans and animals worldwide. The drug maker recently stopped development of the cholesterol treatment evacetrapib because the drug wasn’t effective. Eli Lilly deployed a substantial amount of capital to fund evacetrapib, which was in Phase 3 research, until it decided to pull the plug on it.", "score": 24.345461243037445, "rank": 43}, {"document_id": "doc-::chunk-1", "d_text": "Eli Lilly’s galcanezumab is forecast to produce around $484 million in 2022, while Amgen/Novartis’ erenumab will bring in around $475 million.", "score": 24.345461243037445, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "A £35m scheme which will cement a Cambridge research park's reputation as one of the UK's leading hubs of bioscience research innovation has been given the green light.\nUS life science property business BioMed Realty is making the £35m investment in Babraham Research Campus, which will see two new buildings constructed providing up to 100,000 sq ft of new R&D space.\nPlanners at South Cambridgeshire District Council have now approved the scheme, which is due for completion in 2019.\nBabraham Bioscience Technologies (BBT) is responsible for the management and commercial development of the campus, which is already home to around 1,200 workers. The new buildings will be accessed via a new spur road that has just been built off the main access road onto the campus.\nDoug Cuff is senior director of development for BioMed Realty, which also runs the Granta Park research park.\nHe said: “We are honored to be part of this important international partnership with BBT and the Biotechnology and Biological Sciences Research Council to begin the ground-up development of two new buildings to grow bioscience-based companies on the Babraham Research Campus.\n“We are truly excited that BioMed @ Babraham will provide scale-up companies with an additional 100,000 sq ft of state-of-the-art R&D facilities in the heart of the preeminent Cambridge life science community that will support transformational scientific research.\n“But, the most important reason for building BioMed @ Babraham is people, and connecting them with the 60+ scale-up companies on the Babraham Research Campus and the world-renowned researchers and resources of the Babraham Institute.”\nBabraham Hall, which was built in the 1830s, will remain a dominant part of the expanded campus. The new buildings have been designed to make sure they will be in keeping with other facilities already in place on the campus.\nA landscape strategy that was completed as part of the planning process will ensure the retention of trees that are within and alongside the site.
More than 300 additional trees and 2,000 shrubs will also be planted to enhance the campus environment.\nCllr Robert Turner, cabinet member for planning, said: “This is a big economic boost to South Cambridgeshire and further strengthens our reputation as a centre for world class bioscience innovation.", "score": 24.345461243037445, "rank": 45}, {"document_id": "doc-::chunk-0", "d_text": "Deciphera receives $6 million payment as cancer drug advances, sets up corporate offices in Boston; Bargain Depot closing; update on SportQuest\nThere are 6 million pieces of good news on the Lawrence bioscience front today, and one piece of news that may create some worry among local bioscience leaders.\nDeciphera Pharmaceuticals — the Lawrence-based biotech company that has been labeled as a hot prospect for breakout success — has reached a major milestone. The company announced that it has received a $6 million payment from the pharmaceutical giant Eli Lilly after one of Deciphera's cancer treatment inhibitors moved into phase I clinical trials. Deciphera is developing the treatment, which the company hopes will be successful in battling several types of advanced cancer, in partnership with Eli Lilly.\nAt the same time, Deciphera announced that it has a new president/CEO and that the company is establishing corporate offices in Boston. New president and CEO Mike Taylor, a biotech veteran, will office in Boston.\nDeciphera's press release says all research activities for the company will continue to be based in Lawrence. But the press release doesn't make clear whether the company's corporate headquarters will be in Boston. Reports by biotechnology media outlets give that impression. The Web publication FierceBiotech reports Taylor \"is looking for a handful of execs to join him in the big Boston hub.\"\nBoston is one of the power centers for the pharmaceutical industry, and Deciphera clearly is moving into a new phase of its development. 
Taylor previously was CEO of Ensemble Therapeutics, where he brokered research alliances with Roche, Bristol-Myers Squibb, Pfizer and other major pharmaceutical companies.\nDan Flynn, the founder of Deciphera, will remain with the company, but will give up his title of president and CEO. Flynn now will serve as chief scientific officer, and will continue to serve as a member of the company's board of managers. It appears that he'll remain based in Lawrence with the research operations.\nPerhaps in today's mobile world, the idea of corporate headquarters isn't as important as it used to be. Lawrence leaders, after all, will be thrilled if Deciphera grows its research team and the good-paying jobs that come with it in Lawrence. But local leaders — with good reason — long have been concerned about promising companies with KU ties leaving Lawrence just before they become big successes.", "score": 24.345461243037445, "rank": 46}, {"document_id": "doc-::chunk-2", "d_text": "The spending plan also puts $3 million on a future riverfront park at the junction of First Avenue and Gay Street.", "score": 23.312190736019303, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "The city’s economic-development office wants the City Council to approve a $924,676 incentive to keep a Durham pharmaceutical company in Durham.\nArgos Therapeutics, whose home office is in northern Durham, plans an expansion and is being courted by five other North Carolina counties along with locations in Florida and Quebec, according to economic development Director Kevin Dick’s memo to City Council members.\nIf done in Durham, the expansion would involve remodeling a 112,000-square-foot building on T.W. Alexander Drive to accommodate company offices, manufacturing and research.\nThe incentive is on the council’s work session agenda Thursday and could move on for a vote at the Aug. 18 regular council meeting.\nDurham County commissioners approved a $925,000 incentive for Argos last week. 
Argos is also asking the state commerce department for money.\nArgos Therapeutics (bit.ly/URmnsF) has two products undergoing trials, one for a form of kidney cancer and the other for HIV, and expects they will also be effective for a range of cancers and infectious diseases. Its drug therapy is biochemically personalized for each patient, giving Argos, according to a corporate presentation, a market advantage over large competitors such as Novartis, Merck and Pfizer.\nThe company was incorporated in 1998, according to the N.C. Secretary of State records, and made its initial public offering in February, grossing $45 million at $8 a share.\nAccording to Dick’s memo, Argos’ plans include a $41 million capital investment that should lead to 236 new jobs by 2018 with an average wage of $90,725 plus benefits.\nIf approved, the city incentive would be paid in seven installments, anticipated to start in fiscal year 2016, conditional on Argos meeting specific investment and job-creation goals. Argos would be required to show evidence it has made the $41 million investment within three years of the incentive’s approval.\nIncreased tax revenue from the company, estimated at $1.7 million over 10 years beginning in fiscal 2016, would cover the city’s incentive cost and leave a profit of about $775,000.
“By having The Frontier, we can play with the idea of accessibility and affordability,” RTF President and CEO Bob Geolas says. The building’s first floor is open to anyone for free, and there are conference rooms and tables for work. (There’s also a bar with a Kegerator and coffee station.) The Frontier has about 50 private offices that can be leased month-to-month for about $300. The concept is more co-working space than startup incubator. “It’s co-working, but it’s really co-working in an open-innovation concept where everyone has a chance to interact,” Geolas says. The Frontier has meeting spaces for small and large groups and has already booked its first tenants, which include the Army Research Office and the nonprofit Triangle ArtWorks, which promotes regional arts groups. RTP wants to transform itself from a 7,000-acre suburban office park — albeit one of the most famous in the U.S. — into a more attractive site for the hottest tech companies. Plans call for developing a hotel, apartments and 300,000 square feet of retail shops with a goal of adding 100,000 jobs. New roads are slated for as soon as next year. “What we’re doing is creating something that’s going to help North Carolina become a more competitive place.”\nRALEIGH — The state agreed to sell the 307-acre Dorothea Dix Hospital property to the city of Raleigh for $52 million. The city plans to create a park on the site, which includes the former mental-health treatment center that closed in 2012 and several dozen administrative buildings.\nDURHAM — CoLucid Pharmaceuticals raised $37.1 million in a stock offering led by TVM Capital Life Science, a venture-capital firm based in Munich and Montreal. 
Founded in 2005 by Durham-based Pappas Ventures, the drugmaker based here will use the proceeds to fund a study of a migraine treatment it is developing.", "score": 23.030255035772623, "rank": 49}, {"document_id": "doc-::chunk-1", "d_text": "Last quarter, volume increased 7% and the effect of price changes was flat. The company got a 13.8% volume boost from its new drugs, but older drugs losing exclusivity also had higher volume than they have had in recent quarters. During Q3, the company spun off its animal health business through the IPO of Elanco, which also reported today.\nEli Lilly raised EPS guidance for the full year to a range of $5.55 to $5.60, up from $5.40-$5.50 previously.", "score": 23.030255035772623, "rank": 50}, {"document_id": "doc-::chunk-1", "d_text": "Earlier this year, Catalent announced plans to invest in its Somerset, NJ facility to create an additional center of excellence on America’s East Coast.\nThe company also completed a $5.5m expansion program at its 200,000+ square foot facility in Philadelphia, PA in April of this year. The expanded facility provides additional clinical packaging and storage capacity.", "score": 23.030255035772623, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "Lupin To Build 2 New U.S. R&D Centers\nPharmaceutical firm Lupin announced that it is currently building two new research and development centers in the U.S. dedicated to researching inhalation and complex drug formulations.\nAccording to Lupin’s chairman, this move will help the company continue to assert itself as a leader in generics and specialty pharmaceuticals. 
Chairman Desh Bandhu Gupta told The Economic Times, “In keeping with our global strategy of building a highly differentiated generic and speciality business, the company is in the process of setting up two dedicated Centers of Excellence for research in inhalation and complex injectables in Florida and Maryland in the US.”\nThe company revealed that it has invested heavily in R&D in the last few years, increasing its investment from 8.1 percent of its net sales in 2013 to 8.6 percent in the fiscal year 2014.\nThe company asserted that research continues to be its “backbone,” reporting that, following milestones in its various partnerships, it has increased filings based on its process and formulations research group. In particular, Lupin’s Novel Drug Discovery and Development (NDDD) program, targeting new metabolic/endocrine disorders, pain, autoimmune disease, and cancer treatments, among others, as well as its Biotechnology research program have both seen advancements as of late. Gupta added that Lupin’s Advanced Drug Delivery Systems (ADDS) is helping the company build a differentiated pipeline of branded drugs and opportunities for out-licensing. The Biotech group has been redirecting its efforts to develop its biosimilar offerings for advanced markets such as Japan, he said.\nThe fiscal year 2014 saw 45 regulatory approvals in key markets for Lupin. These include supplemental NDAs in the U.S., EU, Australia, Canada, and Japan. Gupta said, “The company also filed 19 ANDAs (of which, 4 are potentially first-to-file) with the US FDA, 4 MAAs with European regulatory authorities, 4 MAAs in Australia and 2 ANDS in Canada. The cumulative ANDA filings with the USFDA stood at 192 with 99 approvals. The company has 30 confirmed first-to-files including 15 exclusive ones.”\nLast month, Lupin announced its commercial launch of Ciprofloxacin for Oral Suspension for the treatment of infections in the U.S. 
following final approval from the FDA.", "score": 23.030255035772623, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "RESEARCH TRIANGLE PARK – Life science giant LabCorp is expanding its presence at Parmer RTP campus. It’s also getting a new neighbor, with Duke Health Technology Solutions moving onto campus as a new tenant.\nParmer Innovation Centers, which owns the 500-acre research and development campus including 20 separate buildings of office and laboratory space, announced both developments in a release today.\n“We are thrilled to welcome Duke Health Technology Solutions to our Parmer RTP campus, and that LabCorp, which is based in Burlington, has decided to expand its presence there,” Bart Olds, director of asset management for Karlin Real Estate, said in a statement. The firm manages leases on campus.\n“Duke is an impressive institution that has been one of the catalysts for downtown Durham’s growth. Its move to Parmer RTP is a major win for the Research Triangle Park as it reinvents itself to meet the needs of today’s employees. We look forward to our partnership with DHTS and hope to see them grow in the park, as LabCorp is now doing.”\nFollowing its initial lease of two buildings in April 2018, LabCorp, which is based in nearby Burlington, N.C., has signed a new long-term lease for an additional 111,000 square feet at 6 Moore Drive, expanding its footprint on the Parmer RTP campus by 50 percent.\nDHTS’s long-term lease is for 120,000 square feet at 12 and 14 Moore Drive, formerly occupied by Credit Suisse. This space is adjacent to the one-acre amenity park, treehouse conference center, and two-story state-of-the-art fitness center that will be completed early this fall.\nSource: WRAL TechWire", "score": 21.695954918930884, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly (NYSE: LLY) is expected to report Q1 earnings on April 24. 
Here's what Wall Street wants to see:\nThe 10-second takeaway\nComparing the upcoming quarter to the prior-year quarter, average analyst estimates predict Eli Lilly's revenues will grow 1.1% and EPS will expand 14.1%.\nThe average estimate for revenue is $5.66 billion. On the bottom line, the average EPS estimate is $1.05.\nLast quarter, Eli Lilly notched revenue of $5.96 billion. GAAP reported sales were 1.5% lower than the prior-year quarter's $6.05 billion.\nSource: S&P Capital IQ. Quarterly periods. Dollar amounts in millions. Non-GAAP figures may vary to maintain comparability with estimates.\nLast quarter, non-GAAP EPS came in at $0.85. GAAP EPS of $0.74 for Q4 were 3.9% lower than the prior-year quarter's $0.77 per share.\nSource: S&P Capital IQ. Quarterly periods. Non-GAAP figures may vary to maintain comparability with estimates.\nFor the preceding quarter, gross margin was 79.0%, 90 basis points better than the prior-year quarter. Operating margin was 21.3%, 90 basis points better than the prior-year quarter. Net margin was 13.9%, 30 basis points worse than the prior-year quarter.\nThe full year's average estimate for revenue is $22.95 billion. The average EPS estimate is $3.90.\nThe stock has a four-star rating (out of five) at Motley Fool CAPS, with 1,243 members out of 1,343 rating the stock outperform, and 100 members rating it underperform. Among 433 CAPS All-Star picks (recommendations by the highest-ranked CAPS members), 412 give Eli Lilly a green thumbs-up, and 21 give it a red thumbs-down.\nOf Wall Street recommendations tracked by S&P Capital IQ, the average opinion on Eli Lilly is hold, with an average price target of $51.97.\nCan your portfolio provide you with enough income to last through retirement? You'll need more than Eli Lilly. 
", "score": 21.695954918930884, "rank": 54}, {"document_id": "doc-::chunk-1", "d_text": "The company said it hopes the new center, which will see an investment of $30 million over the next three years, will help it tap into local expertise and also get closer to South Korean handset makers.", "score": 21.695954918930884, "rank": 55}, {"document_id": "doc-::chunk-0", "d_text": "This week Roche Diagnostics, a Swiss-based pharmaceuticals and diagnostics company, will break ground on a $300 million expansion at its 160-acre North American headquarters located on the northeast side of Indianapolis, Indiana, with plans to create 100 jobs by 2017.\nThe first phase of the project is a new Learning and Development Center. The capital investment at its Indianapolis campus, located at 9115 Hague Road, will include refurbishing of the facility, adding manufacturing equipment for new diabetes care test strips and upgrading of information technology equipment to support the company’s growing diagnostics and diabetes care businesses.\nThe campus is home to the corporation’s North American research and development division, sales and marketing, manufacturing, distribution, information technology and other support divisions.\nWhen the expansion was first announced, Gov. Mitch Daniels said: \"Roche's continued growth has shown that Indiana is the epicenter for the life sciences industry and the premier state for growing an innovative business.\" Today, the company employs nearly 3,000 Hoosiers and more than 4,200 associates nationwide.\n\"As the leader in the in-vitro diagnostics industry, we're proud to be a part of the growing life sciences movement in central Indiana,\" said Jack Phillips, president and chief executive officer of Roche Diagnostics. 
\"We appreciate the partnership from the city and state as we look to increase jobs and enhance our Indianapolis campus facilities. The bottom line is we’re here to stay and committed to being the best place to work in Indiana.\"\nAs an incentive, this summer, the Indiana Economic Development Corporation offered Roche Diagnostics Operations, Inc. up to $2 million in conditional tax credits and up to $300,000 in training grants based on the company’s job creation plans. These tax credits are performance-based. In addition, the city of Indianapolis will consider additional property tax abatement at the request of Develop Indy.", "score": 21.695954918930884, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "REGIONAL REPORT: Triangle\nSize matters in solar\nFounded in 2005, the company developed its technology with a $3 million grant from the U.S. Department of Energy in 2010. Since then, venture-capital firms have invested $40 million, and German electronics giant Siemens AG bought a 16% stake in June 2011. “That was a springboard to take it from a research-and-development operation to a full commercial-production company,” says Russ Kanjorski, Semprius’ vice president of business development. In September, the 65-employee company began production at a new $89.7 million, 50,000-square-foot plant in Henderson. It plans to add 100,000 square feet and increase its workforce to 250 within five years. The private company would not disclose revenue.\nIt supplies panels to Siemens, which announced plans in October to sell its solar business. That, Kanjorski says, doesn’t affect the conglomerate’s stake in Semprius. Still, it’s lining up other major customers. In November, it won a contract to supply solar panels for an electricity-generation demonstration project at Edwards Air Force Base in California. If the project is cost-effective, it could mean expansion to other bases. 
“That’s really a great opening step to build business with the Department of Defense,” Kanjorski says.\nRESEARCH TRIANGLE PARK — Drug developer Biogen Idec and Japan-based pharmaceutical company Eisai will partially combine manufacturing operations here. Weston, Mass.-based Biogen Idec will lease part of Eisai’s plant to package medicine. Fifty of Eisai’s 225 RTP workers will join Biogen Idec’s more than 1,000 in RTP. Financial terms were not disclosed.\nCARY — Mike Capps stepped down as president of Epic Games (“Rapid-fire Growth,” October 2012) after 10 years at the helm of the video-game developer, which makes the Gears of War franchise. He will remain on the board of directors.\nRALEIGH — Highwoods Properties bought EQT Plaza, an office tower in downtown Pittsburgh, for more than $90 million. The real-estate investment trust will spend an additional $8 million on improvements. It’s the company’s second recent office-tower purchase in Steel City.\nSANFORD — Core-Mark Holding Co. will pay $45 million for wholesaler J.T.", "score": 20.327251046010716, "rank": 57}, {"document_id": "doc-::chunk-2", "d_text": "Over the past five years, the dollar index increased 26.75%. Last quarter, its 49.2% of revenue came from foreign countries. Its revenue in the U.S. increased 14% to $2.54 billion, while revenue outside the U.S. decreased 9% to $2.42.\nEli Lilly’s dividend yield of 2.55% or 0.50 cents per share quarterly can be attractive, but it is undesirable. From 1995 through 2009 (expectation of 2003-2004), Eli Lilly raised its dividend. Payouts of $0.26 quarterly in 2000 almost doubled to $0.49 in 2009. Then, the company kept its dividend payment unchanged in 2010, the same year when its net-income, EBITDA and earnings per share (EPS) reached an all-time high. About four years later (December 2014), Eli Lilly increased the dividend to $0.50 quarterly. I still don’t see a reason to buy shares of Eli Lilly. 
The frozen divided before the recent increase was a signal that the management did not see earnings growing. With expected patent expiration of Cymbalta, their top selling drug in 2010, it is no wonder Eli Lilly’s key financials declined and dividends stayed the same. Cymbalta sales were $5.1 billion in 2013, the year its patent expired. In 2014, its sales shrank all the way down to $1.6 billion. Loss of exclusivity for Evista in March 2014 immensely reduced Eli Lilly’s revenue rapidly. Sales decreased to $420 million in 2014, followed by $1.1 billion in 2013. Pharmaceuticals industry continues to lose exclusivities, including Eli Lilly.\nIn December 2015, Eli Lilly will lose a patent exclusivity for antipsychotic drug Zyprexa in Japan and for lung cancer drug Alimta in European countries and Japan. Both of the drugs combined accounted for revenue of $866.4 million in the third-quarter, or 17.5% of the total revenue. They will also lose a patent protection for the erectile dysfunction drug Cialis in 2017, which accounted for $2.29 billion of sales in 2014, or 11.68% of the total revenue.", "score": 20.327251046010716, "rank": 58}, {"document_id": "doc-::chunk-1", "d_text": "This includes the Center for Biotechnology and Life Sciences, and a modern, state-of-the art facility for the College of Pharmacy. Rhode Islanders also recently approved a new Center for Chemical and Forensic Sciences to be located in the same area.\nScience, engineering and technology companies in need of 2,500 to 20,000 square feet of space, and perhaps seeking an affiliation with the university, would make a good fit for this research and technology park, whether they are looking to relocate their operations altogether or establish an additional presence. 
Start-up companies formed through technology-transfer efforts at URI, as well as educational and government institutions looking to partner with the University, also would make strong tenants.\nThis research and technology park will focus on securing companies that do business in Rhode Island and throughout the Northeast corridor. One key selling point, beyond the obvious ties to the URI research community, is that the park would provide cost-effective real estate options compared with New York or the Boston metropolitan region, where laboratory rental rates can reach more than $60 a square foot.\nAt the same time, the URI research and technology park will have proximity to major Northeast cities, highways and a future railway system. What's more, besides potential partnerships with the University, companies residing in this research and technology park will have access to numerous other biotech, health care and pharmaceutical firms that are based -- or have large operations -- in Rhode Island, such as Amgen, CVS, Lifespan Healthcare and Rite Aid.\nURI officials believe there is a great opportunity to develop science, engineering, and technology clusters around this research and technology park. For example, employment in the state's health care and life sciences community is projected to triple by 2014, according to the Milken Institute.
And, engineering has historically been one of URI's premier programs.\nAnother potential attraction to tenants is state funding and tax-credit programs, from sources such as the Industrial-Recreational Building Authority, the Job Creation Guaranty Program, the Rhode Island Economic Development Corp., and the Slater Technology Fund. These are geared to science, engineering, and technology companies that create high-paying jobs.", "score": 20.327251046010716, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "Pharmaceutical company Pfizer has announced two investments in Ireland totalling $130m.\nThe money will be used to upgrade and extend existing Pfizer manufacturing sites at Ringaskiddy in Cork and Grange Castle in Dublin.\n$100m is being invested in the Grange Castle site in Dublin, where up to 250 construction jobs will be created. $30m is being invested in the Ringaskiddy site.\nThe investments, which are supported by IDA Ireland, will enable those sites to produce new drugs that are in development at the moment.\nPfizer has recently seen two multi-billion dollar revenue drugs, Lipitor and Viagra, come off patent. Replacing the revenue from treatments such as those is a key issue for the drug industry and for Ireland.\n\"We are seeing the benefits of the investments we've been making in our innovative core, as evidenced by recent key launches of medicines for stroke prevention, rheumatoid arthritis and cancer, as well as significant progress within our mid-to-late stage product pipeline,'' commented Dr Paul Duffy, Vice President of Pfizer.\n''There is opportunity for Pfizer's Irish sites to attract the development of new medicines, while also continuing to manufacture existing, important medicines. 
Our Irish operations are significant and we have excellent colleagues across our sites, dedicated to the highest standards of manufacturing quality and excellence,'' he added.\n\"Today's announcement that Pfizer is to invest $130m is confirmation that we are taking the right steps to ensure that Ireland is a key location for companies like Pfizer to expand and grow their business,'' said the Minister for Jobs, Enterprise and Innovation Richard Bruton.\nIDA Ireland's chief executive Barry O'Leary noted that since Pfizer first set up in Ireland in the 1970s, it has invested over $7 billion in developing the ''skills, scale and capability necessary to manufacture some of its leading medicines for global export from Ireland''.\nPfizer has an Irish workforce of about 3,200 people at six sites across the country who are involved in manufacturing, shared services, treasury and commercial operations.\nIn 2011, it announced a $200m investment in the Grange Castle site to develop a new facility to expand the manufacturing process for a new vaccine.\nMany of Pfizer's leading medicines are manufactured for global export from Irish sites.", "score": 20.327251046010716, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "BOSTON (MarketWatch) -- Drug stocks wandered into negative territory Tuesday afternoon while shares of Eli Lilly & Co. climbed on a better-than-expected earnings report.\nShares of Lilly LLY, -0.37% rose 2% to $52.46.\nExcluding various items, Eli Lilly would have reported earnings of 90 cents a share, compared with the prior year's 82 cents a share. Quarterly revenue leapt 22%, reaching $5.19 billion.\nA poll of analysts by Thomson Financial pegged the drugmaker earning 89 cents a share, on revenue of $4.8 billion.\nLilly also affirmed its financial forecast for 2008 of adjusted earnings of $3.85 to $4 a share. 
See full story.\nSanofi-Aventis SNY, -0.06% shares were off a point at $41.12.\nA House committee is expected to decide later Tuesday whether to subpoena employees at the Food and Drug Administration regarding the agency's handling of the marketing application for Sanofi's controversial antibiotic Ketek.\nBiogen Idec BIIB, -0.86% shares were up 3% at $60.08.\nLate Monday, activist investor Carl Icahn announced he was nominating three members to Biogen's 12-member board. Icahn said the move was in reaction to Biogen's inability to find a suitable buyer. See full story.", "score": 19.671227781795334, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "Perrigo Company (PRGO) to Invest $40 Million to Expand in Negev Plants\n6/27/2012 7:35:56 AM\nU.S. generic drugmaker Perrigo said on Thursday it will invest $40 million in Israel over the next three years to expand its research and development and production of pharmaceuticals. As part of the investment, Perrigo plans to add another 100 employees to its two plants in Israel, it said. Perrigo had expanded its workforce in Israel by 40 percent over the past three years to about 900. In recent years, Perrigo has invested 280 million shekels ($72 million) plus $30 million a year for R&D.", "score": 18.90404751587654, "rank": 62}, {"document_id": "doc-::chunk-0", "d_text": "BOSTON (CBS.MW) -- Pharmaceutical stocks were headed higher in early trading Wednesday, with shares of Eli Lilly LLY, +0.07% and Schering-Plough SGP, -0.32% among the upside standouts. Shares of Eli Lilly jumped $2.21, or 3.7 percent, to $61.60. The company announced that additional studies will not be needed for U.S. approval of a key new antidepressant, Cymbalta. Separately, Schering-Plough shares gained 36 cents, or 2.4 percent, to $15.60. 
The drugmaker said federal prosecutors had closed an investigation into Schering-Plough's manufacturing operations in Puerto Rico.", "score": 18.90404751587654, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "Demand for end-to-end services and increased growth are driving investments.\nFollowing Alcami Corporation’s announcement earlier this summer that it is moving its headquarters to Research Triangle Park (RTP) in 2018, the contract development and manufacturing organization established a Center of Excellence for API development, scale-up and commercialization at its facility in Germantown, Wisconsin.\nAlcami was formed from the merger of AAIPharma Services and Cambridge Major Laboratories. The company currently has approximately 50 employees in Durham and 450 in Wilmington, North Carolina, with an additional 1,000 people working at sites around the world. The company has executive offices in both North Carolina locations, which will be consolidated in the new headquarters in RTP. \"We are very excited about this relocation, which prominently positions Alcami in a region known for its culture of diverse expertise, cutting-edge innovation and invention,\" said Stephan Kutzer, President and Chief Executive Officer of Alcami. \"Our stronger presence in the Triangle is necessary to meet the evolving needs of our clients, accommodate growth, recruit top talent and attract investors and new customers.\"\n“As we continue to grow and expand into new technology platforms, for example HPAPI [Highly Potent Active Pharmaceutical Ingredients], controlled substances and biologics, we learned that this expansion has to happen in a pharmaceutical hub. 
We need better access to our customers; we need access to a scientific talent pool; we need access to the infrastructure of a larger city, and our customers and investors need easier access to us,” he added.\nThe establishment of the new Center of Excellence at its Wisconsin facility was in response to increasing demand for Alcami’s end-to-end offering. “End-to-end projects have grown to become a significant part of Alcami's project portfolio since the program’s inception in March of 2016, representing 10 percent of its total business. Approximately 62 percent of those projects originate from the Germantown site,” said Catherine Hanley, Alcami’s Sr. Director for Marketing & Corporate Communications.\nNoted Kutzer: “The establishment of a Center of Excellence for API development, scale-up and commercialization in Germantown, coupled with our extensive regulatory expertise greatly strengthens the foundation of our business and allows innovators to execute all parts of API development and manufacturing in one U.S.-based location to support their launch.
Construction is expected to begin this fall and the new facility is expected to open by the end of 2016.\nThe new facility is part of the company's broader strategic plan to modernize its workspace to enable greater collaboration, increase technological capabilities and enhance productivity. The new facility will allow the company to consolidate operations currently located in leased office space at 777 Scudders Mill Road in Plainsboro Township and at Nassau Park Boulevard in West Windsor Township. The Princeton Pike location was purchased by Bristol-Myers Squibb from RCN Corp. in 2001.\n\"We are proud to expand our presence in Lawrenceville, a community that has been home to our company for more than 40 years,\" says chief executive officer Lamberto Andreotti. \"Our new campus will create a dynamic and modern workplace to advance the important work that our employees are doing to discover, develop and deliver innovative medicines for patients with serious diseases.\"\nThe Lawrence Township Planning Board unanimously approved final site plans for the new campus in June. The new campus will be the company's second in Lawrence Township, joining its worldwide headquarters at U.S. Route 206 and Province Line Road, which opened in 1971. The company currently employs more than 2,000 people at the Route 206 campus and more than 6,000 in Central New Jersey. Once completed, as many as 2,500 people will work at the new facility.\n\"I'm excited that Bristol-Myers Squibb has decided to increase their investment in Lawrence Township,\" says Lawrence Township Mayor Cathleen Lewis.", "score": 18.90404751587654, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "Bloomington-based Cook Group Inc. 
plans to spend about $16.5 million to expand its medical research and development facility in Indianapolis in a project that could create 82 jobs over the next five years.\nThe expansion plans are contained in a tax-abatement request that Department of Metropolitan Development staff members are recommending for approval by the Metropolitan Development Commission.\nThe facility at 1102 Indiana Ave. opened in 2012 and is home to Cook’s General BioTechnology and Regentec divisions.\nCook's 7.7-acre property is in the city’s 16 Tech project area—a 60-acre tract of land just north of the Indiana University School of Medicine campus that is expected to include a mix of research labs, corporate offices, business incubators, co-working spaces, apartments, retail businesses and parks.\nThe City-County Council voted unanimously in November to approve $75 million in bonds for infrastructure improvements to get the 16 Tech development off the ground.\n“[Cook Group] is a premier company with deep roots in our state as both a major employer and a proud corporate citizen,” Indianapolis Mayor Joe Hogsett said Tuesday in a written statement. “That’s why we are thrilled at the opportunity to explore how they can continue to flourish as part of a bioscience campus producing research and innovation that will be felt around the world.”\nAccording to its proposal, Cook wants to spend $12.5 million to construct a 7,000-square-foot mezzanine and add new office and laboratory space. The company would create the space by expanding the existing building or by constructing a new, adjacent building at 1200 Indiana Ave. 
Construction could begin by the end of the year.\nCook said it would spend about $4 million on equipment for the expanded facility, including lab equipment and instruments, biological safety cabinets and material processing equipment.\nThe company said the expansion would help it retain 68 employees who make an average of $28.85 per hour (roughly $60,000 per year).\nThe additional 82 employees, who are expected to be hired by the end of 2021, would make an estimated $29.03 per hour, the company said. Almost half of the new employees aren’t expected to be hired until 2021.\nThe DMD is recommending Cook receive a 10-year real property tax abatement on the $12.5 million real estate project that would save the company about $927,000 (70 percent) over the abatement period.", "score": 17.397046218763844, "rank": 66}, {"document_id": "doc-::chunk-0", "d_text": "Bloom and blight\nDespite its poor results, Lilly's future may be rosy\nFOR an industry that prides itself on innovation, firms with promising new drugs in development are surprisingly rare. By the end of this year, Lehman Brothers, an investment bank, expects global drugmakers to have launched 47 new drugs, a third fewer than in 1997. Many firms face yawning gaps in their pipelines of drugs at a late stage of development. Yet Eli Lilly, based in Indianapolis, is poised to launch at least four novel drugs next year, with total potential annual sales of more than $5.5 billion (see chart). This is ample reason to be optimistic about a firm that this week issued lacklustre results and another profit warning.\nMuch of the firm's innovative edge is attributed to Sidney Taurel, a globe-trotting European whose exotic background and urbane manner make him a far cry from the average mid-western executive. Like his fellow pharma boss, Raymond Gilmartin at Merck, Mr Taurel has avoided the industry addiction to mergers since becoming chief executive in 1998. 
He thinks that size can do more harm than good in pharmaceuticals, since giants take more energy to grow than leaner creatures. He says that American medicine would have to alter dramatically through, say, widespread price controls, before a sharp change in Lilly's course made sense.\nAt the moment, Lilly concentrates its R&D budget of more than $2 billion on tackling a few diseases promising potentially huge rewards for successful drugs, such as diabetes, osteoporosis, cancer and depression. Mr Taurel pins some of the firm's research productivity on the way it organises its various stages of research and uses medical specialists early in product development. One of its likelier prospects, Strattera, a new alternative to Ritalin for attention deficit hyperactivity disorder, was a floundering treatment for depression until some of the firm's research psychiatrists had a hunch about its hidden promise.\nLilly has roughly 140 alliances with outside firms, both bringing in promising molecules and farming out its own\nEqually important is the firm's early embrace of biotechnology. Lilly is one of the few large pharma companies with an in-house expertise in protein-based pharmaceuticals that matches its strengths in more conventional drug discovery.", "score": 17.397046218763844, "rank": 67}, {"document_id": "doc-::chunk-2", "d_text": "\"This most recent gift to IU by the Lilly Endowment helps retain the state's best and brightest who are interested in studying business by ensuring that Kelley will remain among the most innovative and important business schools in the world,\" Robel said. 
\"Kelley provides an extraordinary option for Indiana residents interested in studying business, and I have been particularly impressed with the school's emphasis on community engagement and outreach.\"\nOver the past four years, Kelley students have contributed more than 61,000 hours of volunteer service to community nonprofit organizations, an implied value of over $1 million.\nFacts about the building project:\n- Architects: BSA LifeStructures, Indianapolis.\n- Total cost of new construction plus renovation of existing facilities: $60 million.\n- Sources of funding: 100 percent private gift support. No state funds or tuition revenue will be used.\n- Grant management: The IU Foundation, the fundraising and investment management organization serving the university, will manage the grant.\n- Current facilities: The existing building was completed in 1966. As a reference point, the average age of facilities for peer business schools is less than 10 years.\n- Size of new construction: The first phase of the building project will involve a 71,000-square-foot expansion of the Kelley School's original building, which will complement the adjacent Godfrey Graduate and Executive Education Center, which was completed in 2002. Once this is finished, the second phase will begin and will involve a major renovation of the current facility. The project will add more than 20 new classrooms.\n- Space highlights: In addition to increased classroom space, the renovated building will house a behavioral lab for researchers, a stock trading room with state-of-the-art informational resources and a business communications lab. The new building will also feature a variety of nonstructured learning environments -- places where students can meet between classes and work together on projects. 
Some of these collaboration spaces will be fashioned within the traditional classrooms to allow extensive after-hours use of the rooms.\n- There will also be small meeting rooms, a new Student Collaboration Room and a Student Commons Room. This space is strategically located at the new main entry off 10th Street and will provide an opportunity for students to meet with each other, including a collaboration room for study sessions or student meetings.\n- A new 2,000-square-foot multipurpose room on the third floor will be used for large gatherings.", "score": 17.397046218763844, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "Lonza invests $150M in Hyderabad\nSwitzerland's Lonza is making a $150 million investment in Genome Valley, an area outside Hyderabad. Lonza CEO Stefan Borgas said the company was attracted to Genome Valley by the talent pool, scientific culture and the infrastructure facilities.\nLonza's investment will take place in two phases. Phase I of the operation (from 2011 to 2013) will see the development of R&D labs for over 100 Lonza workers. It will include a facility for tissue/cell isolation, cell culture and expression techniques, bio services, labs for biologics and bioinformatics, among other manufacturing capabilities. Phase II of Lonza's project (from 2014 to 2015) will include expanded manufacturing capabilities and additional R&D lab capacity for biologics. Two hundred more workers will be employed at the site.\n\"This is one of the largest investments in recent times in the biopharmaceutical sector in India and marks the emergence of India, in general, and Genome Valley, in particular, as a preferred destination for global biopharmaceutical companies,\" said B. P. 
Acharya, chairman and managing director of Andhra Pradesh Industrial Corporation in a statement.", "score": 17.397046218763844, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "* correction appended\nThe first tower of the East River Science Park is expected to open next winter at 29th Street and the FDR Drive in Manhattan. But so far, the developer still hasn't signed any leases with prospective tenants. WNYC's Matthew Schuerman reports.\nThe city and the state have already committed more than $300 million to the developer that's building the science park -- with a stated goal of making New York a hub for the bio tech industry. But with no tenants yet committed, the City Council voted last month to give millions in tax breaks for companies to move in.\nNow, two sources involved in the project say, the city has almost convinced the drug company Eli Lilly to take the bait. The sources say 175 employees from Lilly's subsidiary, Imclone, may move to the Science Park -- from Lower Manhattan. Other Lilly employees from NJ and elsewhere may follow.\nUpdate: The $300 million breaks down as follows, according to a city Industrial Development Agency analysis:\n$3.5 million from waiver of the mortgage recording tax\n$250 million as an exemption from the city building tax ($10 million for each of 25 years)\n$12.5 million in direct contributions from the city capital budget\n$8.1 million in sales tax exemptions.\nThere is also an additional $27 million from the state of New York for Infrastructure\n* In a previous version of this story, WNYC reported that the developer still hasn’t found a company willing to rent space at the \"Science Park.\" We should have reported the developer still hasn’t signed any leases with prospective tenants, as corrected above.", "score": 15.758340881307905, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "Drugmaker Eli Lilly and Co. 
says it will close its Elanco animal enzyme plant in Terre Haute by early 2016 as part of a consolidation push.\nLilly spokesman Ed Sagebiel tells the Tribune-Star (http://bit.ly/VoBkmk) the Indianapolis-based company is consolidating all of its animal enzyme manufacturing to a site in Great Britain.\nHe says the plant closure will affect 23 plant employees, all of whom will be offered comparable positions at a Lilly plant near Clinton that employs about 500 workers.\nThe Terre Haute plant makes animal feed enzymes that help animals digest food more efficiently, boosting farm productivity.\nLilly purchased the Terre Haute plant in 2012. The property is along the Wabash River in an area targeted for development by an economic development and beautification group called Riverscape.", "score": 15.758340881307905, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "“Our newly strengthened cash position following our recent fundraisers, totaling approximately $14 million, is intended to support this pivotal trial.”", "score": 15.758340881307905, "rank": 72}, {"document_id": "doc-::chunk-2", "d_text": "Inside, the state-of-the-art labs have been stripped of equipment, leaving only the sinks and safety hoods.\nThe operating tables and overhead surgical lights in the animal research area are still in place, though. And the offices and conference rooms remain furnished with new-looking desks, chairs and computers, making it appear as if the researchers who once occupied them have just stepped out for lunch.\nThe only part of the complex U-M doesn’t plan on using is a highly sophisticated, 250,000-square-foot molecular manufacturing plant Pfizer used in making pharmaceuticals for drug trials. 
The university is considering leasing the facility, which Keiser says costs about $4 million a year in utilities to operate.\nInitially, research at the facilities will be clustered around two anchors: high-tech imaging and “biointerfaces,” an interdisciplinary mix of nanotechnology, microfluidics and sensors, cell and tissue engineering, and biomaterials and drug delivery. Other research areas being considered are alternative energy, cancer and cardiovascular treatment, and health services.\nIt’s an opportune time to recruit new faculty, because many leading universities, including California’s entire higher-education system, are struggling financially.\nU-M, which attracts nearly $1 billion a year in research funding and receives generous support from a wealthy alumni base, has fared better than other universities struggling with declining financial support from strapped state governments.\n“We need a critical mass of activity to get things going here,” Keiser said. “We need to move quickly to recruit faculty” before other universities get back on their feet.\nColeman and other university officials say they envision the center conducting groundbreaking basic research. And they see researchers from multiple disciplines working with private sector entrepreneurs to develop new products and therapies that create jobs and improve human health.\n“I don’t know of any other university that has this kind of research space,” said Dr. Pescovitz. “If we’re successful in what we want to accomplish, I think it will be the model.”\nPescovitz, who came to Ann Arbor last year from Indiana University after the news of the Pfizer acquisition, says she would like to see the research complex about 30 percent occupied by this time next year. 
She is working on raising some $200 million from alumni and private donors to hire new faculty and expand research programs.\nThe university’s goal is to double research spending to $2 billion annually in 10 years with the help of the new facilities.\nNone of this was even contemplated three years ago.", "score": 15.758340881307905, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "Mylan to infuse USD 1 billion on capex in India\nMylan Global President and Executive Director Rajiv Malik said the company had been investing close to about USD 400 million or USD 450 million towards Capex every year and half of it in India.\nHYDERABAD: Multinational drug company Mylan has said it will invest USD one billion in the next 5-6 years on Capex in India, given the importance of the country's position in world pharma supply-chain, and pitched for the government incentivising research and development activities. Mylan Global President and Executive Director Rajiv Malik said the company had been investing close to about USD 400 million or USD 450 million towards Capex every year and half of it in India.\n\"Half of that has been invested in India as a rule of thumb. So we have invested about $200 - $250 million in India every year. During the last six or seven years we have invested more than one billion dollars in India to upkeep and expand the capacities, he said. As long as this network is there (in India), we have no other option but continue to invest at the same pace. I would say in the next 5 to 6 years it (investment on capex) would not be less than one billion dollars, he said.\nMylan India's journey started in 2007 after it acquired Matrix laboratories and at that point in time the company was predominantly a manufacturer of active pharmaceutical ingredients (API).\nCurrently, Mylan has 21 facilities and 15,000 employees working in India.\nOut of 44 plants, we have today, India has 21 of those. So India is the backbone of the supply chain. 
We do about Rs 1,000 crore in the Indian commercial market, Malik said.\nHe said the Indian government needed to incentivise research and development activities being undertaken by pharma companies on drug development and new chemical entities, though the country upkeeps its leadership position in APIs and formulations.\nAccording to him, during the initial few years, investments in R&D would not generate revenues for any pharma company and many Chinese companies are now focusing on the activity and competing with large corporations in USA and Europe.\nThat is where India needs to catch up. We will have to incentivise R&D. You incentivise and you create infrastructure for the industry to do more R&D so that it is not burdensome on companies, he opined.", "score": 14.702760990396104, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "Last modified: Wednesday, January 11, 2012\n$33 million Lilly Endowment grant will transform IU Kelley School of Business' undergraduate program\nFOR IMMEDIATE RELEASE\nJan. 11, 2012\nBLOOMINGTON, Ind. -- Indiana University President Michael A. McRobbie announced today, Jan. 11, that Lilly Endowment Inc. 
has awarded the university a major new $33 million grant to help transform undergraduate facilities at IU's Kelley School of Business in Bloomington.\nThis is the largest such grant ever received by the Kelley School in its 92-year history and one of the largest ever received by Indiana University.\nThe new and renovated facilities will enable program innovations that will elevate the role the Kelley School plays in the economic vitality of the state and will further advance its presence among the world's elite business schools.\nCombined with nearly $27 million in private gifts from alumni and strategic partners, all funding for the $60 million renovation and expansion of the 46-year-old, 140,000-square-foot original building is in place.\nPlanning for the project began in 2005, and trustees voted in December to begin construction this spring with the building's expansion, followed by renovation of the existing building -- one floor at a time. Both phases of the project are expected to be completed within three years.\nA focus of the building project will be to create facilities that will enable a technology-mediated experience, allowing Kelley students to interact with companies from across the state and around the world on actual business projects.\nThe Kelley School also needs more classroom space, as current facilities are 100 percent utilized and the school must routinely turn away many high-quality students each year because of capacity constraints.\n\"Lilly Endowment's grant for undergraduate business education is another example of its extraordinarily generous legacy of supporting activities at the university that benefit the people of Indiana and beyond, including its past support for student scholarships, our efforts in genomics and neurosciences, in arts and humanities, in information technology, in philanthropy, in economic development and for the Jacobs School of Music, the Maurer School of Law and our libraries,\" McRobbie said.\n\"With this support, 
the Kelley School will continue as one of the world's leaders in business education and help Indiana to develop and retain the best and brightest minds who will drive our state's economy in the future,\" McRobbie added. \"We are indeed grateful for Lilly Endowment's continued generosity and its commitment to advancing IU and serving the people of Indiana.\"", "score": 13.897358463981183, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "Eli Lilly confirms $8 billion takeover of Loxo Oncology\nBiopharmaceutical giant Eli Lilly & Co. has agreed to buy Loxo Oncology Inc. in a deal valued at $8 billion.\nEli Lilly (NYSE: LLY) said it will pay $235 a share in cash for Stamford, Connecticut-based Loxo (NASDAQ: LOXO). That's a 68 percent premium from Loxo's closing price on Friday (h/t the WSJ).\nLoxo is expected to bolster Eli Lilly's oncology-treatment portfolio. It also aims to treat patients who have cancer from single-gene abnormalities.\n\"We are gratified that Lilly ha...", "score": 13.897358463981183, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "Raleigh investment firm Triangle Capital posted a 53 percent increase in net investment income in the third quarter.\nTriangle Capital reported after the markets closed Wednesday that it generated net investment income of $15.9 million, up from $10.4 million a year ago. Net investment income on a per-share basis was 58 cents, versus 52 cents a year ago, due to an increase in the number of shares outstanding resulting from a secondary offering.\nThe company's earnings benefitted from its constantly expanding investment portfolio. Triangle Capital makes loans to mid-sized, privately held businesses and also takes a minority stake in those businesses. During the third quarter, the company made eight investments that totaled $71.9 million.\nAs of Sept. 
30 it had $60.1 million in cash, which doesn't include the proceeds from a recent $80.5 million bond offering.\nEarlier Wednesday, Triangle Capital shares fell $1.02 to close at $24.63. Its shares have risen 29 percent this year.", "score": 13.897358463981183, "rank": 77}, {"document_id": "doc-::chunk-0", "d_text": "VIGO COUNTY, Ind. (WTWO/WAWV) — A unanimous vote has approved a company’s bid for a project expected to bring hundreds of jobs and a $1 billion-plus investment to Vigo County.\nThe Vigo County Redevelopment Commission approved the bid during a meeting Tuesday afternoon. The $1.5 billion investment will come from an Oregon-based company called ENTEK.\nThe company will reportedly purchase 340 acres of the former Pfizer property in the Vigo County Industrial Park II.\nSteve Witt, the President of the Terre Haute Economic Development Corporation, said he believes it’s a major milestone for the community.\n“It’s very gratifying having acquired that property, the county, back in 2012 or so. Having first Saturn Petcare move into the former Pfizer building, and now, having half the acreage occupied by this new company,” he said. “It’s very gratifying but also very exciting as well. We’re thrilled for the community and what this will bring down the road.”\nENTEK CEO Larry Keith said the facility should create about 640 jobs for area residents. He added that he wants the first production lines to be open in 2025 with the facility fully functional by 2027.\nHe said the company received a $200 million grant from the Department of Energy, and they were in the process of applying for a $900 million loan as well. The company has sites in countries like England, Japan and China– but he said this investment represented the largest investment they had made in a community.\n“It will feel really good once we have the loan and the building starts to go up out there,” he said. “It’s exciting for us. 
We made the announcement well before we knew there was a grant program that was going to be announced, so we already had this in the works.”\nENTEK will offer a wide variety of jobs, according to Keith.\n“A whole array of things. There will be maintenance people, there will be engineering, there will be operations so people who run the production line and supply chain, HR, just the whole gamut of everything it takes to run a big plant,” he said.\nKeith said the company’s main focus is developing battery separators, and this site will help accomplish that. As for why they chose Vigo County, Witt said the industrial park had all the things they needed for a facility of this magnitude.", "score": 13.897358463981183, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "MORRISVILLE, N.C. (WNCN) — A nearly-$38 million investment that will add 550 jobs is being made in Wake County.\nINC Research broke ground on their new global headquarters in Morrisville on Monday.\nThe company will consolidate their two local offices into one at Perimeter Park.\nINC Research conducts clinical trials for pharmaceutical and biotechnology companies around the world.\n“A talented workforce is really quite key to INC Research. 
There are highly qualified clinical research professionals in the Triangle and throughout the state of North Carolina,” Chris Gaenzle, INC Research’s chief administrative officer, told CBS North Carolina.\nConstruction is expected to take about a year-and-a-half.", "score": 11.600539066098397, "rank": 79}, {"document_id": "doc-::chunk-4", "d_text": "“What you don’t see there (on the 2007 scorecard) is the Eastman announcement,” Venable said of Eastman Chemical Co.’s plans to invest $1.3 billion in redevelopment at its Kingsport operation.\nHe said NETWORKS, the city and the Kingsport Economic Development Board worked with Eastman on that project but that including it would have skewed the capital portion of the scorecard.\nEastman in mid-2007 announced it will invest more than $1.3 billion over five years to upgrade technology, infrastructure and production capabilities at the company’s Kingsport manufacturing facility.\nCalled “Project Reinvest,” the plan calls for the company to spend an average of $265 million annually and will potentially lay the groundwork for future capital investments.\nThe project will also initiate a partnership between Eastman and Northeast State Technical Community College to develop curricula and implement training programs for a new generation of mechanics, lab analysts and chemical operators.", "score": 11.600539066098397, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "- $18,500 to Halifax County for the Enfield Industrial Park to complete eligible due diligence activities such as environmental assessments, archaeological analyses, and mapping.\n- $50,000 to Nash County for the Middlesex Corporate Centre to complete eligible due diligence activities such as environmental assessments, archaeological analyses, and mapping.\n- $17,500 to the Foundation for Duplin County Industrial and Business Development for the Duplin County AirPark to complete eligible due diligence activities such as environmental assessments, 
archaeological analyses, and mapping.\n- $632,412 to Alexander County Economic Development Corporation for grading and erosion control for a building pad.\n- $1 million to East Yancey Water and Sewer District to provide roughly 11,000 linear feet of water main extension and grading at a new industrial park. The project will support the location of Little Leaf Farms that will create 100 jobs at a $53,700 annual average wage, $86 million in private investment, and another potential site at this new park.\n- $965,830 to the City of Fayetteville to upgrade a sewer lift station and construct a force main, and some due diligence activities for 172.13 acres located at Fayetteville Regional Airport.\n- $252,720 to the Town of Louisburg for clearing, grubbing, and rough grading at Louisburg Commerce Park, an approximately 50-acre site that recently expanded.\n- $952,000 to McDowell County for clearing and grubbing, and rough grading of approximately 50 percent of the 413-acre Universal Technology Park site.\n- $992,000 to Rockingham County for clearing, grubbing, and rough grading of a 33-acre lot within Reidsville Industrial Park.\nOpen Grants Program\n- $95,000 to McDowell Economic Development Association, Inc. (MEDA) to help extend water and sewer infrastructure to support expansion of an existing company in McDowell County. The company will create 25 new jobs that will pay wages of $42,146, which is above the county average.\n- $40 million to the North Carolina Department of Transportation for public road infrastructure to support the location of Toyota’s battery manufacturing operations. 
This award will ensure that there is sufficient road infrastructure to help support the creation of 1,750 new jobs in Randolph County at the Greensboro-Randolph Megasite.", "score": 11.600539066098397, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "The company expects to create 73 jobs and invest $4 million in the overall project, while 53 jobs and a $577,901 investment are tied to this grant.\nCity of Wilson (Wilson County): A $150,000 grant will support the reuse of a 2,965-square-foot building, a former drug store. North State Consulting LLC will make this location its headquarters office. The company, which specializes in working with technology companies to help bring their products to market, expects to create 18 jobs and invest $270,000 in this project.\nExisting Business Building Category\nCleveland County: A $360,000 grant will support the expansion of a building in Shelby that is occupied by IMC Metals America. The company, a smelter of copper to be used for manufacturing parts, plans to add 30,000 sq. ft. to the existing facility. Overall, the company plans to create 46 jobs.\nCity of Lexington (Davidson County): A $200,000 grant will support the renovation of a 602,559-sq.-ft. building in the Linwood community that is occupied by Halyard North Carolina. The company manufactures surgical and infection prevention products, and this renovation will allow the company to add a new line to produce N95 face-masks. The project is set to create 22 jobs and attract $3,009,000 in private investment.\nLenoir County: A $75,000 grant will support the expansion of a building in Kinston. Additive America, a contract manufacturer that produces 3D-printed prosthetics and personal protective equipment, plans to add 5,000 sq. ft. to the existing facility to help fulfill large client orders. The company expects to add nine jobs and invest $1,245,000 in this project.\nOnslow County: A $230,000 grant will support the renovation of a 191,000-sq.-ft. 
building in Hubert that is occupied by Waterline Systems. The company, which manufactures welded aluminum and steel boats and barges, plans to expand its operations at this location, creating an expected 23 jobs while investing $289,900 in the project.\nRobeson County: A $500,000 grant will support the expansion of a building in Pembroke that is occupied by Steven Roberts Original Desserts. The desserts manufacturer plans to add 29,000 sq. ft. to the existing facility.", "score": 11.600539066098397, "rank": 82}, {"document_id": "doc-::chunk-1", "d_text": "The additional investment will be paid in four equal instalments of US$5 million, each three months apart.", "score": 8.086131989696522, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "Merck plans to invest US$29 million (€28 million) to open two new GMP-grade mRNA drug substance manufacturing sites in Darmstadt and Hamburg, Germany.\nThe new manufacturing facilities will offer a comprehensive solution for every critical facet of mRNA development, production, and market deployment, encompassing not only the creation of mRNA products but also their rigorous testing.\nThis encompasses specialised analytical development and biosafety testing explicitly tailored for mRNA technologies, ensuring the highest quality and safety standards are met.\nThese state-of-the-art facilities play a crucial role in an ongoing investment programme aimed at advancing mRNA technologies. In addition, this will also generate 75 new employment opportunities. This integrated approach simplifies processes and reduces complexities.\nType: New Facility\nBudget: US$29 million (€28 million)", "score": 8.086131989696522, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "Investors helping to develop the Newcastle Helix science and business park in Tyneside are ready to pump a further £60m into the site.\nL&G first pledged £65m to invest in the science park in 2016. 
The 24-acre center is a partnership between Newcastle City Council, Legal & General and Newcastle University, with the partners hoping it will help create thousands of jobs, prime office and research space, and hundreds of new homes.\nSix years after financing the construction of The Lumen and Spark office buildings – at the time the largest real estate deal in the city in decades – the institutional investor is keen to support the further development of the site, starting with a £60m build-to-rent residential complex consisting of two towers. There is also the potential to support additional buildings at Helix, including potential office space and research space, depending on what is needed to bring the Council and University vision to life.\nLegal & General Retirement Institutional (LGRI) CEO Andrew Kail announced the additional investment during a visit to the city with Newcastle-born Aysha Patel, LGRI’s Origination Director, and Ben Rodgers, Head of Regeneration, to meet members of Newcastle City Council and heads of department at the University of Newcastle, and to see first-hand the impact of the Group’s investments to date.\nHe said: “The great thing about this place is that it feels like much more than a collection of buildings. It was really nice to see the buildings and how they have more connections to the west end of town. It’s been great to see it come alive as a community and place – and there’s still a lot more to do. The site still has a long way to go, but being on that journey and checking it out is really what today was all about.\n“Our primary role is to develop the key properties that you see and also those that are being built for rent. 
One of the great things about Legal & General is that we have a wide range of investments and interests that we do for ourselves and our clients.\n“If you think about what’s happening at the Biosphere, that’s exactly the kind of investment we’re excited to make. There are companies there investing in sectors like healthcare technology that are incredibly interesting to us. That is one of the advantages of the partnership. We are on site, we are not just outside capital that came in with subsidies and then disappeared again.", "score": 8.086131989696522, "rank": 85}, {"document_id": "doc-::chunk-1", "d_text": "The Central Campus project includes the construction of a new Learning Resource Center to support more than 3,500 students and critical middle-skill occupational programs.\nNew Charlotte Mecklenburg Main Library — $65 million\nThe new Main Library will be a modern facility with meeting and community spaces, and state of the art technology designed to promote learning and innovation. The facility will be the cornerstone of the North Tryon Vision Plan, a partnership with Bank of America, the Charlotte Housing Authority and the City of Charlotte to revitalize a six-acre, two-block section of the North Tryon Street corridor.\nNew Community Resource Centers, Government District Renovations — $170.8 million\nThe County’s strategy to improve health and human services program delivery includes building six new Community Resource Centers (CRC) in locations where residents need them most.\nThe first Resource Center, located at the Valerie C. Woodard Center on Freedom Drive, is under construction now and will open in 2018. 
The recommended Capital Improvement Plan continues the build-out of the CRC implementation plan for locations in the east and southwest parts of the County by 2023, and funding for land acquisition and design for more CRCs in the west and northeast in the future, Diorio said.\nThe CIP also includes needed renovations to the many facilities in the government district including the County Courts and Office Building, the Johnson Building, and the Charlotte Mecklenburg Government Center.\nPark & Recreation — $277 million\nFunding for Park and Recreation projects continues the County’s commitment to health and wellness, with a focus on greenway expansion and the construction of two new regional recreation centers: the Eastway Regional Recreation Center and the Northern Towns Regional Recreation Center. The plan also provides for renovations for the David B. Waymer Center and land acquisition for future use.\nDuring this CIP cycle, Diorio said, the County will achieve one of its highest priorities: completing the Little Sugar Creek Greenway to the South Carolina state line. Seven greenway projects are included in the plan.\nDiscovery Place Nature Museum — $16 million\nBuilt in 1951, Discovery Place Nature Museum (formerly known as the Charlotte Nature Museum) offers visitors the chance to get close to wildlife and experience nature through hands-on programming, exhibits, and classes. 
The building is in dire need of updating, Diorio said, and the quality of museum exhibits and interactive learning has rendered it obsolete.", "score": 8.086131989696522, "rank": 86}]} {"qid": 47, "question_text": "What happens to my property in California if I have a joint tenant and I die?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "HOW WILL MY PROPERTY BE TRANSFERRED AT MY DEATH?\nTransfer of your property on your death is controlled by the law of the State where you live, except that procedures for transferring real property (land) located in another State will be governed by the law of the State where the land is located. In California there are four different ways that property can pass to beneficiaries or heirs, depending on how your property is “titled”.\n1. JOINT OWNERSHIP WITH SURVIVORSHIP RIGHTS: If property is titled as “as joint tenants,” or as “community property with rights of survivorship,” the property will automatically pass to the joint owner upon your death. No probate will be necessary; instead only a death certificate and perhaps an Affidavit of Death will be required. Any Will you may leave has no effect on property held in joint ownership with survivorship rights. Do not add someone to title to your property as a joint owner as an estate planning device without first discussing it with an attorney. A joint owner’s creditors may be able to reach the asset prior to your death; and joint tenancy ownership may reduce income tax “stepped up basis” benefits that would otherwise be available to your joint owner if they received the property from you via a different avenue. When the second joint owner dies, probate may be required if no further estate planning is completed.\n2. BENEFICIARY DESIGNATION: You are allowed to sign forms designating beneficiaries for certain property, usually retirement accounts, life insurance, and bank accounts. 
There is also a new “Transfer on Death Deed” in California for putting a beneficiary on real property. The property will automatically pass to the designated beneficiary upon your death. No probate is necessary for property having a designated beneficiary; instead, only a death certificate and a completed claim form are required. Any Will you may leave has no effect on property passing to a designated beneficiary. If you have young children, you may want to name a custodian for them in the beneficiary designation: for example, “John Doe, son, but if he is under the age of 18 then to Jane Doe as Custodian for John Doe until he attains age 18 under the California Uniform Transfers To Minors Act.” You may select any age between age 18 and 25 as a Custodianship ending age.", "score": 53.35760186261812, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "The method of titling the ownership of property in California and elsewhere is generally an integral aspect of proper estate planning. By configuring the title to assets in designated ways during the life of the owner(s), estate planning can assure certain outcomes automatically on the death of an owner. The simplest form of ownership is where one individual owns the property in his/her name alone. If the asset is real estate, the single ownership model is called \"fee simple\" ownership, which represents the highest degree of unfettered ownership in real property.\nAssets owned as joint tenants with the right of survivorship represent ownership by the joint owners. For inheritance purposes, a deceased owner's share is automatically passed to the surviving joint tenant by operation of law. Assets titled this way do not pass through the decedent's estate but are automatically owned by the survivor. Assets owned individually in fee simple will pass to the decedent's heirs and must go through the decedent's estate at death.\nOwnership by tenants in common represents separate shares owned by each of the tenants. 
Thus, if there are three tenants in common owners of a parcel of real estate, the death of one tenant will pass that tenant's ownership interest onto that owner's heirs. The share of a deceased tenant in common will have to be reported by decedent's estate and treated as an estate asset.\nWhen an individual or married couple meet with an estate planning attorney to set up an estate plan, the attorney will explain and discuss the impact of the various ways of titling assets. Title to certain assets may be changed to conform to the owners' wishes for the most expedient way of transferring title at death. In some cases, title to an asset or assets will go into a living trust, which generally is a way of passing assets at death without going through the decedent's estate. In California and elsewhere, the issues to evaluate are usually too complex to justify do-it-yourself estate planning, and the matter should therefore be jointly undertaken in cooperation with an experienced estate planning attorney.\nSource: paulsvalleydailydemocrat.com, \"How to title your property\", Dan Barney, Oct. 25, 2017", "score": 50.67512071023068, "rank": 2}, {"document_id": "doc-::chunk-1", "d_text": "However, each spouse has the right to dispose of/leave their ownership right in the property to an heir in their will.\nCommunity Property With Right Of Survivorship\nThis form of vesting title for a house or property owned together by spouses or domestic partners has one additional benefit: the right to survivorship. What that means is that when a husband and wife, for example, hold community property with right of survivorship and one of them dies, their remaining interest in the property does not pass to their descendants but remains with the living spouse. 
Sometimes, spouses or domestic partners vest title as community property with right of survivorship because of the tax advantages it offers.\nAccording to the California Civil Code, joint tenancy is a form of vesting title to two or more persons with equal shares and interests. None of them have to be married or domestic partners, and they’re subject to the right of survivorship in the surviving joint tenant(s). So when one of these two or more joint tenants dies, title to the property automatically and immediately vests in the name of the surviving tenant(s). With joint tenancy, the title must be acquired for all of these parties at the same time and by the same conveyance, with the document expressly declaring their intention to create a joint tenancy estate.\nTenancy In Common\nThis form of vesting title to a property allows for two or more co-owners just like joint tenancy. However, with tenancy in common they are allowed fractional – or unequal – ownership, so each co-owner shares in the income or profit the property generates, as well as its expenses, in that same proportion. There is also no right of survivorship, so if one tenant dies, the title does not automatically go to the remaining tenants, but can be vested in the deceased person’s heirs. Each individual tenant may also sell, lease, or will their share of the property as they wish.\nIn California, you may also hold title to your house or property in a trust. A trust is an arrangement where the legal title to your property is transferred by a grantor to a person called a trustee, who holds and manages it according to the best interests of the beneficiaries. 
For that reason, a trust does not hold title in its own name; instead, title is vested in the trustee, who holds legal title to the property for the benefit of the beneficiaries.", "score": 47.68728894952852, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "A type of joint ownership of property, where each owner is called a “joint tenant” and each owns the whole of the asset, rather than a distinct fractional share. When a joint tenant dies, the asset in question does not pass to his personal representatives as part of his estate. Instead, the asset (usually land, but can be a joint bank account or shares, for example) automatically passes to the surviving joint tenant(s).\nIt is one of two main types of joint ownership of property. The other is called a tenancy in common. It is possible to sever a joint tenancy and create a tenancy in common.\nFor these purposes, the word “tenancy” simply means ownership.\nAn immovable property held in joint tenancy has certain advantages. Where a joint owner dies, no further vesting of title in the other co-owner is required.\nHowever, a joint tenancy, by its very nature, also has some serious disadvantages. For example, a co-owner may, for good reasons, not wish the survivor to take the whole of the property. To achieve that, he has to sever the right of survivorship by severing the joint tenancy. The effect of the severance is to create a tenancy in common under which each co-owner holds a distinct share in the property.", "score": 46.535181210508966, "rank": 4}, {"document_id": "doc-::chunk-0", "d_text": "When two or more people own a home as a joint tenancy, each individual owns a share (or interest) of the entire property. Joint tenants must obtain equal shares of the property with the same deed, at the same time. The terms of a joint tenancy (which differs from a tenancy in common) are laid out in the deed, title, or other legally binding property ownership document. 
Below are answers to the most frequently asked questions about joint tenancy.\nWhat is joint tenancy with rights of survivorship?\nJoint Tenancy ownership is where two or more people hold title to an asset. Joint tenancy with rights of survivorship (JT/WROS) features a right of survivorship. The term \"right of survivorship\" means that upon the death of one joint owner, title passes by \"operation of law\" to the surviving owner, who receives sole ownership of the asset. It is a type of ownership that will not be controlled by either your will or your trust.\nCan joint tenancy property pass to unintended heirs?\nPossibly, yes. Joint tenancy ownership does not provide protection for children when a surviving spouse remarries after the death of the first spouse. If a married couple owns all of their assets in joint tenancy and one spouse dies, all assets pass by operation of law to the surviving spouse.\nThere may be no protection for children if the surviving spouse remarries and places his/her assets in joint tenancy with a new spouse. For example, if the surviving spouse predeceases the new spouse, all property passes to the new spouse, not to the children as their parents may have originally intended.\nDoes joint tenancy ownership avoid probate?\nThe answer is that joint tenancy ownership often only delays the probate process. Upon the death of the first joint tenant, title passes automatically to the surviving joint tenant, thereby avoiding the probate process on the first death. Let's look at the example of a married couple who owns all of their assets in joint tenancy. 
However, on the death of the surviving spouse, there will be a probate unless the surviving spouse creates a new joint tenancy or places his or her assets in a living trust.\nCan joint tenancy create unintended gift and estate taxes?", "score": 46.410433626121616, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Warning! A will may not control the fate of some of your property. If you’re like most people, this is surprising to you. Your will only controls those assets in your individual name. Here’s what we mean:\n1. Joint Tenancy Property: Many married couples own their house as well as their bank and investment accounts as joint tenants. Some single parents put a child’s name on a joint account or the house (not recommended) and some siblings jointly own a vacation home or hunting cabin (not recommended).\nJointly tenancy property has a right of survivorship, meaning that immediately upon the death of one owner, the property transfers to the surviving owner(s). For example, Sam and Suzy are a married couple and they own their house and financial accounts in joint tenancy. Sam dies. Suzy, by operation of law, now owns the house and the financial accounts.\nThis may not be what Sam wanted, especially if Suzy is a second wife and not the mother of his children. Will Sam’s children inherit any of his hard earned money? Not likely.\nWhat if Sam and his brother, Mike, own a family vacation home as joint tenants? Sam dies, Mike inherits the vacation home. Sam’s widow and children must pay any applicable estate taxes on the vacation home, but don’t own it and have no right to use it. This may not be what Sam wanted.\n2. Property in Your Revocable Living Trust: Trust planning is an excellent estate planning tool. When you do trust planning, your trust should be properly funded, meaning that your property is transferred into the name of your trust. 
Instead of being titled in your individual name or joint names with your spouse, your property is titled in the name of your trust. Therefore, the provisions (i.e. instructions) of your trust control the property.\nFor example, Sam’s trust provides that his sister, Mary, inherits the $50,000 investment account. Sam’s will says that brother, Tony, inherits this same investment account. Sam dies. Who inherits the investment account? Mary. The investment account is titled in the name of the trust so trust provisions apply. Mary inherits the $50,000 account.\n3. Life Insurance and Retirement Accounts: Most people name a beneficiary for their life insurance and retirement accounts. These accounts are contracts. The assets go to whoever is named as the beneficiary.", "score": 45.71370946087112, "rank": 6}, {"document_id": "doc-::chunk-0", "d_text": "Joint Tenancy - An undivided interest in the property, taken by two or more joint tenants. The interests must be equal, accruing under the same conveyance, and beginning at the same time. Upon the death of a joint tenant, the interest passes to the surviving joint tenants, rather than to the heirs of the deceased.\nTenancy in Common - A form of ownership whereby each owner holds an undivided interest in the property. The interests need not be equal, and, in the event of the death of one of the owners, no right of survivorship in the other owners exists.", "score": 43.94514310018067, "rank": 7}, {"document_id": "doc-::chunk-4", "d_text": "When more than one person owns a specific piece of property, that is a joint tenancy. Both personal property and real property can be owned in a joint tenancy although you hear about joint tenancies much more frequently regarding real property. You and your spouse can own property as joint tenants or as joint tenants with the right of survivorship. In a joint tenancy, if your spouse were to die then their share of the property would pass to their heirs or to any person named in their will. 
In a joint tenancy with the right of survivorship, if your spouse were to die, their share would go directly to you. These agreements are created in writing.\nIf your spouse had children who were not also your children, then their half of the Community property does not automatically go to you when your spouse passes away unless there is a right of survivorship agreement in place. The law in Texas says that you and your spouse can agree in writing that all or part of your Community property will go to the surviving spouse when one of you dies. This is called a right of survivorship agreement. If you and your spouse had come to an agreement like this you would need to file it with the county court where you all live.\nThe beauty of this type of agreement is that it is a way that you and your spouse can ensure that all Community property contained in the agreement automatically goes to the other spouse without having to first go through probate. Bear in mind that this type of agreement is only important or necessary if you and your spouse have children from outside of the marriage. If you all have no children from outside of your marriage then this type of agreement would not be necessary.\nIn terms of jointly held bank accounts, there are two types. If all the parties to a jointly held bank account are living and it is set up under either your name or another person’s name, then either of you can take money out of the account without getting permission from the other person. However, if the account is set up under both of your names then one person would need the permission of the other to access the money. In most marriages, jointly held bank accounts between you and your spouse would not require the permission of the other to obtain money.\nOne of the questions that estate planning attorneys frequently receive is regarding the benefits of a payable on death account. In terms of owning financial accounts, this is another method for you to do so. 
A payable on death account automatically passes to your spouse or another person upon your death.", "score": 42.305839864865106, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "Joint Tenancy remains a popular form of property ownership as people tend to elect this form of ownership to avoid probate. Many of my clients stated that when they bought a house, they chose to create a joint tenancy without really going any further as to what that entails. Joint tenancy with right of survivorship means that each person owns the entire asset (undivided equal share), not just part of it. When one owner passes away, the person's share immediately passes to the other owners in equal shares, without going through probate. We’ve all been told that joint tenancy can be a simple and inexpensive way to avoid probate, and this is sometimes true.\nBut careful consideration should be made for the potential tax consequences and other problems of joint tenancy ownership. Here are some of the disadvantages of joint tenancy which far outweigh the advantages:\nJoint Tenancy does have its advantages, such as:\nBefore placing your property in a certain type of ownership, I highly recommend becoming familiar with all types of property ownership and taking into consideration your objectives and consequences to your heirs.\nAbout the Author\nChristine Chung, Esq.", "score": 42.08877336689203, "rank": 9}, {"document_id": "doc-::chunk-1", "d_text": "One of the simplest probate-avoidance methods is to hold real property in joint tenancy. The primary element of joint tenancy is that the surviving owner of a property held in joint tenancy automatically inherits the property share held by the deceased owner. However, the putting of property in joint tenancy can have important gift and other tax consequences.\nWhen dealing with very specific assets such as bank accounts or stocks, an individual can use Pay-on-death designations to avoid probate. 
Basically, you name a beneficiary for that asset and your beneficiary quickly receives those assets with no probate. The upside is that these pay-on-death designations can be easily set up and there is usually no additional cost, but you do not want to set these up when it’s possible that the beneficiaries will still be minors at the time that they inherit. In that instance, it’s better to have a revocable trust drafted. That way the funds can be held “in trust” until the minors reach a certain age, usually the age of eighteen.\nEven in the event that an individual does not have a revocable trust or other similar probate-avoidance method, the state of California does have two simplified probate procedures. The first is the spousal or community property petition. By this method, a surviving spouse can more easily obtain the portion of the deceased spouse’s property that was left to him or her. A simplified probate form needs to be filed with the court, but the process is much quicker than a full probate. Also, the advantage of this method is that there is no dollar limit on how much property can be transferred by this method.\nA second method in California allows the transfer of personal property worth less than $100,000 by affidavit. The purpose of this affidavit procedure is to allow beneficiaries of a decedent with a relatively small estate to inherit and obtain the cash and other assets without having to incur the time and expense of a full-blown probate. Basically, the beneficiary or beneficiaries must sign a simple form called the “Affidavit for Collection of Personal Property” and present it to the individual or entity that is in possession of the assets. The people or organizations holding the property must then promptly release the assets to the beneficiaries.\nIn conclusion, it is clear that probate is a process that is best avoided in most instances. 
Fortunately, there are methods to avoid probate.", "score": 40.08200111448727, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "A simple way to keep your home from going through probate is to give much of it away before you die. Making gifts allows you to determine exactly who receives what without the courts’ involvement.\nDepending on your state, it may make more sense to have the home pass directly to heirs or beneficiaries, bypassing the probate process. You should consult or hire a trusts and estates attorney to make sure that you do not make any expensive mistakes in your attempt to avoid probate.\nTitle held as joint tenancy exists when two or more owners hold an undivided interest in the entire property with a right of survivorship. On a joint tenant’s death, the decedent’s share in the property transfers to the surviving joint tenant(s), not to his or her heirs or beneficiaries.\nWhen there is a surviving spouse who was not named on the original deed, the deceased spouse's will determines the distribution of the house. If there is no will, then the rules of intestate succession will determine who is entitled to the property.\nWhen property passes to a joint owner, TOD, or POD, it passes outside your estate. Your estate contains all other property, not jointly owned or carrying a TOD or POD designation.\nCheck with a local title company or real estate attorney to determine whether your state permits TOD deeds. If your state does not allow transfer on death deeds, you can generally title a joint owner for each piece of real estate that you own.\nAs a result, the transfer of property to the intended heirs is often a prolonged process, usually lasting between 6 months and 2 years. 
During that time, the house may not be able to be sold, and if it is sold the heirs may have limited access to the sale proceeds.\nIt is important to note that on the death of the last surviving joint tenant, the property will pass to the heirs and/or devisees of the last surviving joint tenant through the probate process.\nTake title with somebody else so that joint ownership exists. Then, when one of the owners dies, the title simply passes to the other owner, with no probate involved!", "score": 38.35539839219977, "rank": 11}, {"document_id": "doc-::chunk-3", "d_text": "If owned personally a co-owner’s share becomes part of their estate when they die, consequently they may choose who takes their interest by making a valid will.|\n|Can a co-owner dispose of their interest in their will?|\n|No. The survivorship principle overrides a will. If a co-owner decides they no longer want their interest to pass automatically to the others, they need to sever the tenancy and own as tenants in common.||Yes, if owned in their name, to whomever they choose, in their lifetime or by nominating a successor in their will. If no choice is made, it passes according to the laws on intestacy.|\n|When a co-owner dies|\n|The right of survivorship applies. This means that when a co-owner dies the surviving co-owners automatically continue to own the whole property, by the operation of law. If there are only two co-owners or joint tenants, then on the death of one, the surviving one automatically owns the whole property in their name. 
When the survivor dies, the property passes according to the terms of their will, or by the statutory rules of intestacy.||Each owner can dispose of their share independently of the others in their lifetime, including by will, and to whomever they choose. If no will or an invalid will, then their interest is distributed according to the statutory rules of succession on intestacy.|\n|Is probate required?|\n|Changing the title records – generally See your Land Titles Office for information and requirements.|\n|To note the survivor(s) on the register of land titles, Land Titles Offices usually require presentation of a:\n||Land Titles Office requirements typically include copies of the:\n30 November, 2013\nLast updated 9 March 2015.\n© BHS Legal", "score": 36.586135548313756, "rank": 12}, {"document_id": "doc-::chunk-1", "d_text": "For example, if four joint tenants own a house and one of them dies, each of the three remaining joint tenants ends up with a one-third share of the property. This is called the right of survivorship.\nBut tenants in common have no rights of survivorship. Unless the deceased individual's will specifies that his or her interest in the property is to be divided among the surviving owners, a deceased tenant in common's interest belongs to his or her estate.\nConsider meeting with an estate planning lawyer to learn more about the differences between joint tenants and tenants in common with respect to survivorship.", "score": 35.143317721455446, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "What is Joint Tenancy?\nWhen one joint tenant dies, their interest passes outside the Will to the surviving tenants, thus not forming part of the deceased’s estate; this is referred to as “right of survivorship”. A house can be owned with other persons in joint tenancy. 
Each owner is called a joint tenant.\nUses of Joint Tenancy\nAvoid probate fees:\nProbate fees are assessed on the value of a person’s property within BC that passes to the personal representative.\nSuch property must be disclosed in the probate application as passing under the Will, and is usually considered part of the estate\nProbate fees are approximately 1.4% of the value of the probateable assets\nProbate fees can be avoided by planning your estate so that certain properties pass outside your will upon your death\nThe right of survivorship aspect when registering a property under joint tenancy lets you avoid probate fees\nAvoid the Wills Variation Act\nUnder the BC Wills Variation Act (WVA), a person who is not satisfied with his or her gift under the Will of a deceased spouse or parent can ask the court to give them a greater share in the estate.\nUsually only the estate that passes under a will is vulnerable to a WVA claim.\nSince properties registered in joint tenancy pass outside the Will, such properties will not be exposed to the WVA.\nAvoid costs and delays associated with obtaining probate\nThe process associated with obtaining probate of a Will may take between 3 and 6 months, and costs between $1,500 and $3,500, not including probate fees.\nThe transferring of a real property to a beneficiary also requires filing with the Land Title Office and, in some cases, property taxes must be paid.\nTransmitting property in joint tenancy requires only one meeting and a simple filing with the Land Title Office, costs between $200 and $300, and avoids the additional cost of probate fees and property transfer tax.\nProblems with Joint Tenancy\n1. Loss of control\nUnder a property registered in joint tenancy, joint tenants lose the ability to individually act and make certain decisions regarding property.\nE.g. 
loss of the ability to sell or mortgage the property without the agreement of the co-owners.\n2. Exposure to creditors of co-owners\nIf one of the joint tenants experiences financial difficulties, their creditors may try to seize their share of the interest in the property.", "score": 34.17046944499944, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "Property Ownership – Joint Tenancy vs Tenants in Common\nHow do you own your property?\nOne of the important questions I regularly ask clients when they are purchasing their property is how they want to own their property: joint tenants or tenants in common. Then that glazed look comes over their faces and I know instantly they are unfamiliar with those phrases. Usually we have a quick tutorial in Property 101 and ownership types and what it means on death and whether they have children from previous relationships.\nWhat’s all the fuss about then? Well it really does matter how you own your biggest asset – your home. If you hold the property as joint tenants, you cannot will your share and the survivor of the owners gets the property outright. For those with children together of that relationship, they are likely to want joint tenants. On death it’s a simple transmission of the property title to the surviving party – easy!\nNot so easy when the partners or spouses have children from previous relationships. They have different obligations to their own children that the other partner or spouse do not. They are likely to want to ensure their share of the property goes to their children – fair enough. In that situation, we recommend they hold the property as tenants in common. Here’s the catch – they have to update their wills to allow the surviving partner or spouse to live in the property either for a period of time or for their lifetime. Otherwise, the Executors of the deceased partner or spouse can take steps to sell the property to meet the obligations to the beneficiaries of the deceased party. 
It can really complicate matters and create more anxiety for the surviving party.\nTenants in common can also be a good ownership option when you are getting older and have no family trust. If one dies and the surviving party needs or might need residential care, then any asset assessment would only be on the surviving party’s share, not the estate.\nTalk to the team at Law4You about the options when you are next buying your property and take the opportunity to update your wills.", "score": 33.69217425733137, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "What's the Difference Between Joint Tenants with Survivorship and Tenants in Common?\nWhen two or more people own property like a home, either as joint tenants or tenants in common, each individual owns a share (or interest) of the entire property. This means that specific areas of the property are not owned by any one individual, but rather shared as a whole. While joint tenants with survivorship are similar to tenants in common in many ways, particularly the right of possession with respect to the property, there are some important differences with respect to what happens when a co-owner dies.\nThis article covers the basic differences between joint tenants and tenants in common, and how survivorship is treated by each type of tenant classification. See FindLaw's Probate section, including Avoiding the Probate Process, to learn more.\nWhile none of the owners may claim to own a specific part of the property, tenants in common may have different ownership interests. For instance, Tenant A and Tenant B may each own 25 percent of the home, while Tenant C owns 50 percent of the property as a whole. Tenants in common also may be created at different times; so an individual may obtain an interest in the property years after the other individuals have entered into a tenancy in common ownership.\nJoint tenants, on the other hand, must obtain equal shares of the property with the same deed at the same time. 
The terms of either a joint tenancy or tenancy in common are spelled out in the deed, title, or other legally binding property ownership documents. The default ownership characterization for married couples is joint tenancy in some states, and tenancy in common in others (see Top 10 Reasons for Unmarried Partners to Own Property as Joint Tenants).\nA joint tenancy is broken if one of the tenants sells his or her interest to another person, thus changing the ownership arrangement to a tenancy in common for all parties. However, a tenancy in common may end if one or more co-tenants buy out the others; if the property is sold and the proceeds distributed equally among the owners; or if a partition action is filed, which allows an heir inheriting the property to sell his or her stake.\nRight of Survivorship\nOne of the main differences between the two types of shared ownership is what happens to the property when one of the owners dies. When a property is owned by joint tenants with survivorship, the interest of a deceased owner automatically gets transferred to the remaining surviving owners.", "score": 33.629926561627364, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "If you are a married couple without a living trust, the manner in which you take title to your home can be critical. The three common ways for a married couple to title their home are explained below.\nJoint tenancy is a form of ownership that includes a “right of survivorship.” At the first spouse’s death, his or her interest passes to the surviving spouse by operation of law. In other words, the property passes without probate administration, and passes to the surviving spouse irrespective of the terms of the deceased spouse’s will. While this is a convenient and inexpensive method of passing real estate to a surviving spouse, joint tenancies can have negative tax consequences. 
Upon the death of the first spouse, only the deceased spouse’s interest in the joint tenancy receives a “step-up” in basis for capital gains tax purposes. For a married couple that has owned a home in the Bay Area for decades, this can result in increased capital gains taxes if the surviving spouse sells the residence.\nCalifornia utilizes the community property system for property owned by married couples. If an asset is community property, then each spouse owns one-half of that asset. The general presumption is that property acquired during marriage, and the income earned from that property, is community property. Unlike property held in joint tenancy, each spouse has testamentary control over their one-half interest in the community property, which means that the property will pass according to the terms of the deceased spouse’s will. There is no right of survivorship and the property may be subject to probate administration. Holding property as community property is advantageous from a tax standpoint because upon the death of the first spouse the deceased spouse’s one-half interest and the surviving spouse’s one-half interest in the property receive a step-up in basis.\nCommunity property with right of survivorship combines the ease of administration provided by a joint tenancy with the tax advantages of community property. It allows the surviving spouse to avoid probate with respect to jointly-held assets while at the same time obtaining the step-up in basis afforded to community property. Upon the death of the first spouse, the deceased spouse’s property passes to the surviving spouse by right of survivorship, without administration. In addition, both the deceased spouse’s one-half interest and the surviving spouse’s one-half interest in the community property receive a step-up in basis. 
For married couples without a living trust, this is often the preferred manner of taking title.", "score": 33.501966356619995, "rank": 17}, {"document_id": "doc-::chunk-2", "d_text": "They do not understand the significance of joint ownership. The issue is common in the following areas, provided as examples:\n(a) Real Estate: Often, a husband and wife will own real estate as joint tenants with rights of survivorship. If one party dies, the surviving party receives the property regardless of what the Will provides. This is common and generally acceptable. However, if this is not your desire you should change the ownership of the property to tenants in common or other form of ownership. If you own real estate as tenants in common, then you may designate who will receive your share of the property at your death. This issue can be a problem when uninformed persons take title to real estate as joint tenants with rights of survivorship but really intend to leave their share to, for example, children of a prior marriage.\n(b) Bank Accounts/Certificates of Deposit, Stock, Retirement Plans, IRA's and other types of property: The same forms of ownership as for real estate can be used for these investments. In fact, many Banks routinely place Bank accounts and Certificates of Deposit in the joint tenant with right of survivorship form of ownership if more than one person is on the account or CD, without advising you of the consequence of same. In situations where the persons are husband and wife and there is no issue or concern over divorce or children from previous marriages, this may be the best course of action. However, with divorce on the rise, premarital agreements and multiple marriages being common, the parties may be doing something that was not their intent. Another common problematic situation is where a parent has more than one child but only one child resides in the hometown of the parent. 
The parent may place the name of the child who resides there on all accounts, CD's and other investments for convenience reasons and establish a joint tenant with right of survivorship situation without realizing that only that child will be entitled to those assets at the parent's death. Simply put, you should be aware, when you acquire an asset or investment, of exactly how it is titled.", "score": 32.710280637823, "rank": 18}, {"document_id": "doc-::chunk-1", "d_text": "As far as giving away one spouse’s interest in the property, that spouse’s rights to do so will depend on the form of ownership in which the couple owns the property. If the property is owned as joint tenants with rights of survivorship, when one spouse dies, his ownership interest will automatically pass to the surviving spouse, who will then be sole owner, regardless of what the deceased spouse’s will may have said. If the property is owned in tenancy by the entirety, the same is true. If, however, spouses own property as tenants in common, then each spouse could leave their half of the property to someone other than the other spouse. In the case of property for which there is no title or ownership document, generally the spouse who paid for the property is considered the owner. 
The same is true if the property was received by one spouse as a gift.", "score": 32.294348305647866, "rank": 19}, {"document_id": "doc-::chunk-2", "d_text": "This allows an owner to dispose of their interest as they wish, either during their lifetime or through their will. Shares may be equal or unequal; either way they are expressed (in percentages or proportions) and recorded on the title. There is no physical division of the property; all owners have equal use rights, see Possession below.|\n|Each owner is entitled to possession of the entire property at the same time, but no right to exclusive possession of any part.||Even though co-owners separately hold specific shares, they are equally entitled to possess the entire property in common with each other, and at the same time. No one has exclusive possession of any part.|\n|Right of survivorship?|\n|Yes. When a co-owner dies, the surviving co-owner(s) continue to hold the property. This right of survivorship continues among surviving co-owners until there is one left. This person will then own the whole property (see the flowcharts). When they die, the property will pass according to the terms of their will, or the succession rules on intestacy.||No.|\n|Are co-owners’ interests distinct from each other?|\n|No. A co-owner does not hold any particular share solely in their name. Everyone owns the whole property together.||Yes. Each co-owner holds an identified proportion separate to the others.|\n|Can a co-owner deal with their interest independently of the others?|\n|No. All co-owners must act together as a whole to preserve the joint tenancy. To do otherwise may end or “sever” the joint tenancy and the co-ownership would become as tenants in common. A feature of joint tenancy is the close relationship formed between co-owners to create it in the first place. See the four unities above.||Yes. Co-owners may deal with their shares as they wish, independently of each other. 
They may sell, mortgage, transfer, lease or dispose of their share by will without affecting the tenancy of the others.|\n|Planning for succession & will-making – Does an owner have any choice or control in who inherits their interest?|\n|No. On death the right of survivorship rule applies automatically, independently of intentions expressed in a will.||Yes.", "score": 32.08768104753163, "rank": 20}, {"document_id": "doc-::chunk-5", "d_text": "Joint ownership is a popular way to leave property to loved ones; however, it is not always the best way. There are advantages and disadvantages to joint ownership in an estate plan. In general, the primary advantage of owning property jointly is that your property passes automatically to the surviving joint tenant(s) when you die and probate is avoided. One disadvantage to joint ownership is that the decision is irrevocable unless you get the permission of the joint tenant(s). Another is that, in many instances, you have made a taxable gift to that person or those persons.\nAn exception to the rule of joint ownership is when adding a joint tenant to a bank account. Adding a person's name to your account is not an irrevocable decision indicating that you made a gift to that person. It is, however, a decision to take seriously, as the person can withdraw all of the money from your account. It is important to check with the financial institution to determine the rights created when you add a person as a joint tenant to a bank account.\nSee Prepare Your Estate Plan Case Study 3: The Dangers of Making Someone a Co-owner of a Bank Account as Joint Tenants with Right of Survivorship\n4.c. Types of Joint Ownership\nSeveral types of joint ownership are described below:\nCommunity Property: The laws of some states specify that most property acquired by either spouse during a marriage is held equally by husband and wife as community property. 
Laws in a community property state provide that any property purchased or salary earned by a married couple during the course of their marriage is owned equally by each.\nJoint Tenancy with Rights of Survivorship: This type of joint ownership states that, upon death, an owner’s share goes to the other joint owner. Joint tenancy is created when two or more persons purchase or are given property at the same time. Each joint tenant owns an undivided interest in the whole property, and each has the right to possess, occupy, enjoy, use, or rent the property. The right of survivorship means that upon the death of one of the joint tenants, by law, the property automatically belongs to the surviving tenant and does not pass through probate. Therefore, upon the death of a tenant, property held by joint tenancy with rights of survivorship cannot be transferred or given away by a will.", "score": 31.956475663805072, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "Last updated August 5, 1999\nMost California married couples own their homes as “joint tenants,” because they want the surviving spouse to own the entire home, without any formal court proceeding to confirm the transfer.\nUnfortunately, owning property as “joint tenants” can seriously affect the taxation of any subsequent sale of the property after the death of one spouse. This is because the U.S. Internal Revenue Code provides special treatment for property owned by a married couple as “community property,” but not for similar property owned as “joint tenants.”\nSpecial Tax Benefit for “Community Property”\nWhen someone dies, his or her heirs are treated as if they purchased the deceased person’s property for its fair market value on the date of death. 
However, if the deceased person owned only a one-half interest as a “joint tenant,” only that one-half interest receives this treatment (called an “adjusted basis”).\nThus, if a married couple, Richard and Joan, buy a house as “joint tenants” for $400,000, the IRS considers that each paid $200,000 for a one-half interest. If Richard later dies, Joan automatically owns the entire house, and Richard’s one half share of the house is revalued as of the date of his death. If the house was worth $1,500,000 when Richard died, then Joan is treated as if she paid $950,000 for the house — computed by adding her share of the purchase price ($200,000) to the value of Richard’s share when he died ($750,000).\nIn contrast, the IRS treats “community property” as if it were owned completely by the deceased spouse, in applying this special “adjusted basis” rule. (For other purposes, such as computing estate taxes, only one-half of the value of community property is counted.) Therefore, if Richard and Joan bought their house as “community property” for $400,000, and Richard later died, leaving his share to Joan, the entire house would be assigned a new “basis” at current fair market value.\nThe result is that if Joan decides to sell the “joint tenancy” house for $1,500,000 shortly after Richard’s death, she would realize a taxable capital gain of $550,000 (the $1,500,000 sale price minus her $950,000 “adjusted basis,” computed two paragraphs above).", "score": 31.685860984606308, "rank": 22}, {"document_id": "doc-::chunk-1", "d_text": "Individuals often look at joint tenancy as an avenue to avoid probate or to have financial assistance. However, when a parent places a child as a joint tenant on their real estate, stocks or other investments, they are often unaware that they have made a gift of one-half of the value of the property. 
If that value exceeds $10,000 in one year, the gift is a taxable gift and the parent must file a gift tax return.\nAs stated above, property held in joint tenancy passes by operation of law to the surviving joint owner. Although title passes to the surviving joint owner, the value of the owner's interest in the property is included in his/her estate for federal estate tax purposes. Therefore, an owner's family may pay substantial federal estate taxes on property they do not receive.\nSo joint tenancy doesn't provide a step-up in basis?\nBasis is generally defined as what you paid for an asset (cost basis). If you paid $1,000 for 10 shares of stock, your basis in the stock is $1,000. Different rules apply for inherited and gifted property. If you inherit property, your basis is the value of the property as of the date of death of the previous owner. If you inherited 10 shares of stock and the shares were valued at $50,000 as of the date of death of the previous owner, your basis in the shares would be $50,000. When you inherit property, you receive a 100% step-up in basis.\nHowever, if you receive a gift of property, you receive what is called a carry-over basis. In other words, if you received 10 shares of stock as a gift and the basis of the previous owner was $10,000, your basis in the shares would be $10,000. The basis \"carries\" over to you regardless of what the shares are worth at the time the gift is made.\nTherefore, it may be better to leave your heirs appreciated property rather than make an outright gift to them during your lifetime. With inherited property, your heirs will be able to take advantage of the step-up in basis when they proceed to sell the property.\nQuestions About Joint Tenancy? Talk to a Local Attorney\nJoint tenancies and other forms of joint property ownership must be carefully planned. 
Defects can result in unexpected outcomes and the loss of important rights.", "score": 31.292271473829338, "rank": 23}, {"document_id": "doc-::chunk-0", "d_text": "Strategic Estate Planning\nProper estate planning requires that you contemplate various scenarios in determining where and how your assets should be distributed. To that end, it is imperative that you attain proper guidance with respect to estate distribution so that your assets are divided up precisely as you dictate. The Oakland Wills law firm of Melanie Tavare is an experienced estate planning firm that can help you devise the perfect estate plan for your individual circumstances.\nEven if you properly drafted a will that is airtight according to California law and standards, certain assets are non-probate assets. Thus, your will would pass through probate while those assets will automatically be distributed to certain beneficiaries.\nCalifornia law states that assets held in a joint tenancy are not subject to probate. These assets include a bank account held in the name of the testator and a spouse, domestic partner, or anyone else, and will automatically pass to that person upon the passing of the testator. Similarly, a deed on the house that includes the name of the testator and the testator’s spouse, domestic partner, or child will automatically pass to the spouse, domestic partner, or child when the testator dies.\nLife insurance and retirement benefits also pass directly to the named beneficiary without undergoing the probate process.\nTherefore, when a California resident is preparing a will and considering the proper strategy for an estate plan, you should consider the entire picture. Do you own a house? Who else, besides for yourself, is listed as an owner of the property? Do you have a joint bank account with someone? Who are the beneficiaries of your life insurance and retirement plans? 
If you are not happy with those people receiving those benefits or feel that it creates inequality in how you would like your assets distributed, you should discuss it with your lawyer. You may need to take action to remove names from deeds, bank accounts, etc. so that the distribution will be to your liking.\nConsider the Personalities of Your Beneficiaries\nWhile it is impossible to know what the future will hold when your beneficiaries inherit your estate, you should contemplate the likely outcomes of how they will behave. Often, people who are the nicest and finest will act shamefully when inheriting from an estate. Still, how you view them should determine how you allocate assets.\nSometimes, beneficiaries will go to war in Probate Court. These situations often lead to assets being diminished over the course of the litigation.", "score": 31.14129802655199, "rank": 24}, {"document_id": "doc-::chunk-3", "d_text": "An example of this would be owning your home under a right of survivorship where the property would automatically go to the spouse that survives.\nWhat is a transfer on a death deed?\nA relatively new development over the past few years is the allowance of a transfer on death deed to help donors avoid having their family go through probate after their death to transfer property. Specifically, the transfer on death deed would allow your spouse to name a beneficiary who will receive any property described in the deed after your spouse has passed away. Keep in mind that the transfer on death deed must be recorded with the deed records in their home county before your spouse passes away.\nAs long as you are living, you can continue to live in your home after a transfer on death deed is executed. For example, your spouse could continue to live in the home even after the transfer on death deed is executed. 
Your spouse would still be the full owner of the home, which means that you all would still need to pay taxes and maintain the home. Nothing is stopping you from selling the house, either. However, keep in mind that if the home is sold then whoever is listed as the beneficiary in the transfer on death deed would receive nothing at the time of your spouse’s death.\nAn important thing to keep in mind is that the transfer on death deed will trump your will. For example, if your will states that your vacation home goes to your daughter and the transfer on death deed names your nephew as the beneficiary of the lake home, then your lake home will go to your nephew regardless of which one of the documents came into being first. You would then be able to transfer title to the lake home without having to go through probate first. Any property that is classified as real estate, regardless of whether or not it has a mortgage, can be transferred at death when you or your spouse draft and record a properly executed transfer on death deed.\nHow do pre-marital agreements work in this setting?\nIf you are planning on getting married and have prepared a prenuptial agreement that says that certain property will remain your separately owned property even after you get married, then you have created a prenuptial or premarital agreement. Unless the will states that someone else will get that property upon your passing, the property mentioned in your prenuptial agreement will not go to your surviving spouse.\nWhat is a joint tenancy?", "score": 31.01757360936444, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "People often look for so-called “simple solutions.” This extends to the realm of estate planning.\nWhen you hear about some of the pitfalls that go along with probate, you may start to get interested in probate avoidance on a DIY level. 
One way to arrange for the transfer of property outside of probate in a relatively simple manner would be to utilize something called Joint Tenancy with Right of Survivorship.\nWhile this could be looked at as a pretty simple step to take to enable probate avoidance, it may not be much of a solution. Here’s why.\nJoint Tenancy Is Co-Ownership\nJoint Tenancy with Right of Survivorship is a long and rather wordy description of the condition of joint ownership.\nSuppose you own your house outright. You have the right to add a co-owner to the property deed.\nLet’s say that you name your son as the co-owner of your house. You want your son to inherit the home after you die, and you think joint tenancy is a simple way to accomplish this goal.\nYou do not really want your son to have control of a portion of the property while you are living. And, you would like to have the right to sell it if you choose to do this at some point in time. Your intention is to leave it to your son if you wind up remaining in it throughout your life.\nThe minute you make your son a joint tenant, he owns half the property. If he is sued and the court finds in favor of the plaintiff, your son’s portion of this property that you used to own by yourself could be attached.\nA tax lien could be placed on the property if your son owed back taxes, because part of it is his. The value of his share of the property would also be in play during divorce proceedings.\nYour son’s creditors could seek to attach the property as well.\nIf you wanted to sell the property because you needed the money for some reason, your son would have to agree. And, he would be entitled to half of the proceeds if you did agree because after all, you gave him half of the property.\nHere’s another scenario. You name your son as the joint tenant, but you instruct him to sell the home after you die. 
You want him to distribute the proceeds among multiple family members.\nFrom a legal standpoint, he does not have to follow these verbal instructions. The property becomes his after you die, and he can do whatever he wants to do with his property.", "score": 30.686800869804685, "rank": 26}, {"document_id": "doc-::chunk-0", "d_text": "An estate planning attorney will tell you that assets need to be properly titled, so that your estate plan aligns with the assets. In some cases, you can use joint ownership of an asset to have the asset pass directly to an heir at the time of your death. However, it’s not always the right way to do it.\nEvery estate plan is different, because every family’s situation is different. However, any estate plan can be undermined, if assets are not titled properly. This is examined in the article, “Joint-ownership property titling can avoid costly probate process,” from Reflector.com, with a look at five ways of titling assets.\nJoint tenancy. In this situation, two or more persons own equal shares of a property. The owners don’t have to be related or married to each other. When the asset is owned jointly by spouses, the asset is passed onto the surviving spouse at the death of the other spouse. However, when the asset is owned jointly by unmarried people, the entire value of the asset is included in the deceased’s estate and is subject to probate. Therefore, joint tenancy might not be the best property titling method, if you want to share joint ownership with somebody other than your spouse. If one of the owners doesn’t honor his/her financial obligations, the asset can be subject to the pursuit of creditors up to that owner’s respective ownership percentage.\nTenancy in common. This is similar to joint tenancy in many ways. However, the big difference between joint tenancy and tenancy in common, is that the relative ownership percentages of the tenants in common may differ. 
One owner can own 25% of the asset, while the other can own the other 75%. If one tenant in common dies, the percentage of her ownership in the asset is included in her estate and is subject to probate.\nJTWROS. The right of survivorship is what distinguishes Joint Tenancy with Rights of Survivorship (JTWROS) from joint tenancy and tenancy in common. When the owner dies, her share of the ownership is transferred to the surviving joint owner automatically by operation of law without probate. Each joint owner also has the right to transfer or sell his or her interest in the property without the consent of the other joint owner, and thereby destroy the JTWROS status.", "score": 30.28521231018746, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "If one of the joint tenants dies, his interest will go to the other owner\nUnder tenants-in-common, upon death of one of the owners, the interest in the property will devolve according to the will of the deceased\nMy mother, my former wife and I own a property jointly. The property has a loan against it which is in the names of all three of us. However, the loan was sanctioned on the basis of my salary documents as my mother and my former wife were unemployed. After divorce, my former wife wants to get her name removed from the loan account. For this purpose, the bank needs a gift or release deed. My former wife intends to give up her share in the property without any consideration. What shall be more viable—a release or gift deed? We were told that a gift deed will not be possible, and it will have to be a release deed. 
Also is stamp duty to be paid for both gift and release deeds?", "score": 29.957579493003852, "rank": 28}, {"document_id": "doc-::chunk-1", "d_text": "If the same house were owned as “community property,” however, she would recognize no capital gain, because her “adjusted basis” would be the same as the sale price.\nOf course, no tax will be due from the sale of the former “joint tenancy” home if the seller quickly bought another home at the same or higher price. Also, the surviving spouse might avoid or reduce the capital-gains tax even if the house were owned as “joint tenancy,” if she can still use the once-in-a-lifetime $500,000 exclusion.\nDrawbacks of “Community Property”\nThe chief drawback of “community property,” as a form of legal title, is that it does not provide automatic transfer to the survivor at death. Instead, the survivor must petition the court for a “spousal property” order, or initiate a probate proceeding. In California, it is not currently possible to own property as “community property” while also providing for an automatic right of survivorship.\nHowever, to capture the best of both situations, it is possible to transfer property into a “living trust” (thus avoiding any probate court proceedings) while also retaining its character as “community property” (thus obtaining a full “adjusted basis”).\nIt is possible to file a spousal property petition (or initiate probate) and include “joint tenancy” property in the petition, arguing that it was community property all along. However, this uncertain procedure eliminates the benefit of the joint tenancy form of title, which is the automatic transfer of title at death.\nAnother drawback of “community property” ownership is that the entire property becomes liable for the debts of either spouse. 
In addition, “community property” will usually be equally divided in case of divorce, while “joint tenancy” property can be traced to separate-property sources to permit unequal division.

But Beware of Declining-Value Property

The special “adjusted basis” rule usually works so that couples who own property as “community property” are better off than couples who own property as “joint tenants,” because most property increases in value over time.

However, in the recent California real estate market, this general rule hasn’t always been true. If Richard and Joan bought their home in 1989 for $400,000, it is possible that the current fair market value might be only $350,000.

As of press time, these provisions were expected to pass and become law very soon. If they do, and Janet and Sara register as domestic partners, in most cases Sara would inherit all of Janet’s property, including her interest in the home if Janet dies without a will, regardless of whether Sara’s name is on the title. Likewise, if both Janet and Sara are on the title, even if only as “two unmarried individuals,” the presumption would change from “tenants in common” to “joint tenants with rights of survivorship,” and Sara would avoid probate and own the home upon Janet’s death.

Regardless of whether the domestic partnership protections become law, the safest thing for same-sex couples to do if they wish their home to be treated similarly to that of married couples is to title it as “joint tenants with rights of survivorship.” Couples must also be vigilant at the closing to ensure their intentions are carried out in the final paperwork. Take it from one who has been there: don’t sign it until it is right.

There are advantages and disadvantages to both a joint tenancy and a tenancy in common.
Which one is best for you will depend on your individual needs and circumstances.

Many married couples opt for joint tenancies because they don't perceive any advantage in defining separate shares, particularly if they want the property to pass automatically to the surviving spouse if one of them dies.

Tenancy in common may be more suitable for couples where one spouse has children from a prior relationship, unmarried couples, brothers and sisters, parents and children, or business partners, since in these cases one of the title-holders may not want the other owner(s) to inherit their share, at least not in its entirety.

If you opt for a tenancy in common, you must remember to write a will, since if you fail to do so the laws of intestacy will determine who gets what, which may defeat your objective in rejecting a joint tenancy.

For example, under the intestacy rules, your spouse would receive all of your personal chattels (e.g., clothes, jewellery, etc.), a statutory legacy of £250,000, plus a life interest in half of anything that remains. This could swallow up virtually all of your interest in the property.

Meanwhile, your children (or their offspring) would receive the other half of anything that remains of your estate, plus the capital that is left after your spouse's life interest ends. Thus, they could end up with nothing or may have to wait a very long time to receive anything from your estate.

** Additional Information & Advice **

Depending on your circumstances, you may want to speak with a solicitor who specialises in conveyancing and/or will preparation.
You can find a solicitor in your area for free via solicitor matching services, which can also help you to understand the best course of action and whether you are ready to hire a solicitor.

A “tenancy in common” is the most popular form of ownership and the default form of ownership if the parties do not specifically indicate otherwise on the deed. Each individual can own a different and unequal percentage of the property. There is no right of survivorship, so if one owner were to pass away, that individual's share would be given to whomever is designated in their will, or by default to their heirs according to state law. That can get a little messy without a life estate provision in the deed. When a tenant in common owner dies, with or without a will, his or her interest must go through a probate court procedure, which always involves some delay and cost, and can provide an opportunity for relatives to contest a will or argue about succession.

There is no one way to prepare for all of life's unknowns, but a well-thought-out agreement between unmarried parties purchasing property together can reduce uncertainty as well as expenses by clarifying the intent of the parties.

Instead, each co-owner has a separate, undivided share in the property (although not in a physical sense), and may independently deal with it as they wish. If owned in their own name it will form part of their estate, and they may choose who will inherit it through their will.

The graphic outlines how it works.

Combining joint tenancy under a tenancy in common

In a tenancy in common arrangement, two (or more) people may choose to own their particular share as joint tenants.

Changing a joint tenancy to a tenancy in common

The survivor's rights apply as long as the property is owned as a joint tenancy.
To change to a tenancy in common, the joint tenancy needs to be severed in the lifetime of the co-owners. Specific procedures are involved and specialist legal advice should be sought.

The graphic below extends the situation further to show what happens in each tenancy when there are three co-owners and one dies.

A note on terminology: most people associate the words tenant and tenancy with leasing or renting property. Their presence in joint tenancy and tenants in common is a leftover from feudal law concepts in the English law adopted into Australia.

The table below compares aspects of joint tenancy and tenancy in common.

| | Joint Tenants | Tenants in common |
|---|---|---|
| Basis of co-ownership | No separation of ownership. Co-owners hold the property together as a whole; ownership is not separated into identified shares. Important attributes of this tenancy are the right of survivorship (see below) and the requirement for all owners to meet four elements when acquiring the property, known in law as the four unities: time, title, interest and possession. Basically, all owners must have acquired the same type of interest at the same time, in the same transaction, with equal rights to possess the whole property simultaneously. As a result a close legal relationship exists between them. | Ownership is separate. Each owner has a distinct, specified share in the property, separate from the others. The share is undivided, meaning it cannot be divided further or spread among other owners. |

The transfer of a home into joint tenancy with a person who does not live there may mean the loss of “principal residence” status for part of the property. The loss of principal residence status results in tax payable if there is a capital gain when the property is sold. Any transfer of property into the names of joint co-owners may trigger tax consequences.

4. Child inappropriately dealing with property

If a parent has more than one child and is registered jointly with only one of them, the child taking the property might not deal with it in accordance with the wishes of the deceased parent.

5. Child predeceasing parent

If several children are named as joint tenants with a parent and one predeceases the parent, the children of the deceased child will not inherit any share in the property because of the right of survivorship. This may be inconsistent with the wishes of the deceased, and may be avoided by letting the property pass under a Will.

6. Mortgaged property transferred into joint tenancy may void any mortgage insurance

7. The spouse of a co-owner may make a claim against the property

8. The court may set aside a transfer into joint tenancy

If a transfer into joint tenancy is an invalid “testamentary gift” not signed in accordance with the Wills Act, the court may set it aside. In this unusual case, the property would fall back into the estate, probate fees may be payable, and the estate could be subject to a claim under the Wills Variation Act.

Estate Law Questions?
Ask an Estate Lawyer.

Dear JACUSTOMER - If any property that is titled, such as real estate, or is held in a joint account, such as a savings account, has a stipulation that it is a survivorship account, then upon the death of one of the parties the other party will inherit that property by virtue of the survivorship clause, and it will pass outside of any probate. I'm not certain from your facts exactly what type of property you are referring to, but in general a survival or survivorship clause allows the property to pass outside of probate directly to the survivor.

When the property is marital property in a community property state such as CA, it is community property with a survival clause.

Purchasing property is a significant investment and it is becoming increasingly popular (in the current Sydney market it is often necessary!) for two people to purchase a property together. In estate law, joint tenancy is a special form of ownership by two or more persons of the same property. The individuals, who are called joint tenants, share equal ownership of the property and have the equal, undivided right to keep or dispose of the property. If two or more people acquire a property together, it can be either as tenants in common or as joint tenants. If a tenant in common dies, their interest in the property is an asset of their deceased estate. Joint tenancy creates a right of survivorship.
It goes without saying that you should make a Will, or review any previous Will, when jointly purchasing a property or severing a joint tenancy, to ensure that your share of the property is left to your chosen beneficiary and that appropriate housing or financial provision is made for your co-owning partner/s.

The higher the basis, the smaller the difference between it and the sales price. When an inherited asset is sold, the result is always treated as a long-term capital gain or loss, regardless of how long the asset was owned by the original owner or the heir.

For example, take that house inherited by a son from his mother with a date-of-death value of $200,000. If the son promptly sells it for $200,000, no tax will be owed, because he gets a fresh start tax basis of $200,000. But if his tax basis had been the same as his mother's, $75,000, then he would have owed capital gains tax on his gain of $125,000 on the same transaction.

Jointly Owned Property

Tax basis gets a little more complicated when a property is co-owned and one of the owners dies. It's a common situation, of course, because many couples own valuable property together and leave their shares to each other. There are also situations where unmarried family members own property as joint tenants, such as when siblings inherit part of a parent's home, or when a parent gifts part of their residence to a child in order to avoid probate.

Joint tenancy property

When a property is held by two owners in joint tenancy, generally only half of it gets a fresh start tax basis when the first owner dies. For example, say a couple owns a house worth $200,000; they paid $150,000 for it. If one of the owners dies, the survivor gets a fresh start tax basis in the half he or she inherits.
They already owned the other half-interest, so their basis in that half stays the same. That means that the new total basis is $175,000: the basis in the original half-interest is still $75,000, and the basis of the inherited half-interest is $100,000.

In community property states such as California, married couples get a tax advantage. Both halves of community property (owned by the couple together) get a fresh start tax basis when one spouse dies and the other becomes sole owner, because for basis purposes the entire community property, not just the decedent's half, is treated as acquired from the decedent. So in the example above, the surviving spouse would have a new fresh start tax basis of $200,000 after the first spouse dies.

When a gift is made, the recipient's basis generally is what the giftor's basis was prior to the gift.

Set Up Transfer-on-Death Stocks and Bonds

Stocks, bonds and other securities may be set up as transfer-on-death securities. This allows you to control these financial accounts for as long as you live, without worrying about what a joint ownership may mean.

Fill Out Transfer-on-Death Papers for Motor Vehicles

Transfer on death for motor vehicles is permissible in several states, including California. The DMV should have the papers you need to sign to complete a transfer-on-death title. This will allow your designated beneficiary to take immediate possession of the vehicle upon your death.

Enact Pay-On-Death Financial Accounts

For your bank and retirement accounts, you can designate a pay-on-death beneficiary.
This avoids the chance that your beneficiary could spend down everything in the account if it's held in joint ownership.

Heritage Law, LLP provides the knowledge and experience to make the probate process, including conservatorship and guardianship proceedings, as efficient as possible for family members, potential heirs, and executors of wills.

The terms of either a joint tenancy or tenancy in common are outlined in the deed, title, or other legally binding property ownership document. The default ownership for married couples is joint tenancy in some states, and tenancy in common in others (see Top Reasons for Unmarried Partners to Own Property as Joint Tenants).

A joint tenancy can be broken if one of the co-owners transfers or sells his or her interest to another person, thus changing the ownership arrangement to a tenancy in common for all parties. A tenancy in common can be broken if one of the following occurs:

1. One or more co-tenants buys out the others
2. The property is sold and the proceeds distributed amongst the owners
3. A partition action is filed, which allows an heir to sell his or her stake

At this point, former tenants in common can choose to enter into a joint tenancy via written instrument if they so desire. This type of holding title is most common between husbands and wives and among family members in general, since it allows the property to pass to the survivors without going through probate (saving time and money). One of the main differences between the two types of shared ownership is what happens to the property when one of the owners dies.

When a property is owned by joint tenants, the interest of a deceased owner is transferred to the remaining surviving owners. For example, if three joint tenants own a house and one of them dies, the two remaining tenants each obtain a one-half share of the property.
This is called the right of survivorship; tenants in common have no rights of survivorship. Decisions relating to real estate have huge financial outcomes. Before deciding how to share ownership over what is likely the largest investment in your life, you may benefit from some professional advice. Consider meeting with a local real estate attorney before you make such important decisions. A co-tenant can, with the consent of the landlord, transfer their share of the tenancy to another person.

The Tenants' rights manual is produced by the Tenants' Union of NSW especially for tenants and people who work with tenants - tenants' advocates, community legal centre workers, and other community workers - on issues to do with renting. The lessor's disclosure statement is given by the lessor (landlord) to the lessee (tenant). It contains important information about the shop, the lease and the tenant's financial obligations.

Estate Law Articles

8 Dangers of Owning Property in Joint Tenancy

"Joint Tenancy with Right of Survivorship" means that each person has equal access to the property. When one owner dies, that person's share immediately passes to the other owner(s) in equal shares, without going through probate. We've all been told that Joint Tenancy is a simple and inexpensive way to avoid probate, and this is sometimes true. But the tax and legal problems of Joint Tenancy ownership can be mind-boggling. The dangers of Joint Tenancy include the following:

Danger #1: Only Delays Probate. When either joint tenant dies, the survivor, usually a spouse or a child, immediately becomes the owner of the entire property. But when the survivor dies, the property still must go through probate. Joint Tenancy doesn't avoid probate; it simply delays it.

Danger #2: Two Probates When Joint Tenants Die Together.
If both of the joint tenants die at the same time, such as in a car accident, there will be two probate administrations: one for the share of each joint tenant in the Joint Tenancy property, as well as any other property they each may own.

Danger #3: Unintentional Disinheriting. When blended families are involved, with children from previous marriages, here's what could happen: the husband dies and the wife becomes the owner of the property. When the wife dies, the property goes to her children, leaving nothing for the husband's children.

Danger #4: Taxes. When you place a non-spouse on your property as a joint tenant, you are making a disposition of property, and capital gains taxes may be due and owing in the year of the transfer into joint tenancy; you may also be creating future taxes if the new joint owner already has a principal residence.

Danger #5: Right to Sell or Encumber. Joint Tenancy makes it more difficult to sell or mortgage property because it requires the agreement of both parties, which may not be easy to get.

Danger #6: Financial Problems. If either owner of Joint Tenancy property fails to pay income taxes, the Canada Revenue Agency can place a tax lien on the property.
If either owner files for bankruptcy, the trustee in bankruptcy may be able to sell the property.

Danger #7: Court Judgments.

Assets Involved with Probate

Probate isn't involved or necessary for every asset that someone owns when he or she dies, but it is regularly used for the following assets:

- Assets that were held in only the deceased's name
- Half of each asset that was registered with his or her spouse as community property
- That portion of any asset that belonged to the deceased that he or she held as a registered tenant in common with other people
- Any assets, including such things as jewelry, art, furniture or the like, that are not registered

If the total value at the time of death of the deceased's assets is less than $100,000 (not including motor vehicles or certain other assets), probate is not necessary under California law. The assets that are not subject to probate receive a simplified procedure for their transfer.

Assets Excluded from Probate

Not everything is subject to probate.
Although probate may be required for a portion of someone's estate, the following assets can avoid the probate process:

- Assets that are held in joint tenancy
- Assets that are held in a living trust
- Assets where a beneficiary is named, such as IRA benefits or life insurance policies
- Assets held in a bank or credit union where the deceased was named as a trustee for another person
- Assets that were registered in the person's name that are "payable on death" or "transfer on death" to another person
- Assets registered by a married couple as community property with the right of survivorship
- All assets that go to a surviving spouse, including any assets the person who died owned separately in his or her name but were left in the will or by intestate succession to the surviving spouse

In California, there is a simplified legal process called a "spousal confirmation hearing" where a petition is filed with the court. A notice is sent to certain interested parties, and the court assigns the assets to the surviving spouse unless there is an objection. Only a husband and wife can take advantage of this process.

Example: To illustrate this process, if John Smith has $200,000 of stock held separately from his wife, Ann, she can go through this spousal confirmation process if he has a will that leaves everything to her. The advantages are saving the fixed fee required by probate as well as saving time.

Ownership and income tax: legal background: joint ownership - tenants in common

Property is held in the name of A and B. A and B are the legal owners.

In a tenancy in common, A and B are each entitled to a specific share in the property. The shares in which the property is owned may or may not be equal. For example, A is entitled to 50% and B to 50%; or A is entitled to 75% and B to 25%.
When property is held in this way, on the death of one tenant in common the deceased's share does not pass to the surviving owner. It forms part of the deceased's estate, and so passes to their successor under the terms of their will or the rules of intestacy.

Words of separation or 'severance' are likely to be used. These are words that indicate the property is to be held in shares, for example 'equally', 'in equal shares', 'half and half', '50/50', 'one third/two thirds', '60%/40%'.

Tenancy in common is the way that individuals who are not in a personal relationship are likely to own property. It would be unusual to find that they intended the survivorship rule to apply.

If so, it would be preferable to own the property as "joint tenants" to avoid having the survivor's basis in the property reduced to $350,000. (Instead, the basis would only be reduced halfway, to $375,000.)

If you and your partner jointly own property, his or her share will automatically transfer to you upon his or her death. Needless to say, the tax liabilities associated with that joint property transfer as well.

To commence the transfer of the property out of joint names and into your name, you must apply for a Survivorship Application. Borchard and Moore will not only do this on your behalf, but will also inform you of the associated financial and tax implications of the transfer.

Email or call us now on (03) 9546 8155 to get expert legal advice and representation.

It will pass under a Will, if there is one, or under the rules in the Administration of Estates Act if there is no Will.
As Tenants in Common, you would hold the property in whatever shares you had agreed - usually this will be in the proportions in which you each contributed to the purchase price (disregarding the Mortgage). We shall need to know from you how you want to deal with the shares in the property before we can draft the Transfer of the property into your names. The points set out below may help you to consider the matter:

1. For a married couple with children (particularly where you are already selling an existing matrimonial home) it is probable that you would want to opt for the joint tenancy, unless there were Inheritance Tax considerations to be taken into account.

2. For an unmarried couple with no children, a tenancy in common may be more appropriate for two reasons. First, if either of you were to die, you might want your individual shares to go back to your own family. That would not happen, of course, with a joint tenancy, particularly if something happened to both of you at the same time. Secondly, if your relationship broke up whilst you were still unmarried, the property would probably have to be sold, or one party may wish to buy the other out. On a sale, you would probably each want your proportion of the proceeds of sale to represent the proportion of your original contribution to the purchase price. If one of you was to buy the other out, no doubt you would want the purchase price of that share to reflect the contribution made to the original purchase price. We can provide for a tenancy in common whilst you remain unmarried which would convert to a joint tenancy on marriage. Unmarried couples should also consider entering into a Cohabitation Agreement which could deal with other related matters such as ownership of furniture or the proceeds of endowment policies. We could prepare an appropriate Cohabitation Agreement if you wish.

3. If you want to preserve your individual shares in the property, you must opt for a tenancy in common.
However, if you do so, you ought to make a Will - there is no guarantee that the shares will go where you want them to if you do not make a Will. 4.

Upon the demise of one of the joint tenants, the surviving tenant takes sole ownership of the whole property. In a joint tenancy, if one person dies, the other person automatically becomes the owner of the whole property, because under joint tenancy the co-owners together own the whole interest in the property.

Property Co-Ownership and Challenging Wills

Wills are not the only documents that affect the beneficiaries of a deceased person's property: other property transactions can also result in different people being left with - and without - an inheritance. Challenging wills is a last resort, and it is important to note that a will dispute is not the only means of challenging the distribution of someone's estate. If you have reason to believe that a property transfer made during the person's lifetime was made, for example, with undue influence, it is possible to challenge this transaction in addition to challenging the will.

Joint Tenants or Tenants in Common?

Joint tenancy and tenancy in common are the two different legal terms used to describe the types of division of ownership of one property between multiple owners.

Tenancy in common means different people can own different shares of a property (for example, one person can own 75%, the other 25%). Tenants in common can also leave their individual share of the property to someone in their will. Tenants in common do not automatically leave their share to the other tenants if they die.

Joint tenancy takes place when people acquire equal shares of a property at the same time. It is not possible to leave your share of a joint tenancy to someone in a will. Joint tenancy invokes a right of survivorship, meaning that if one tenant dies, their share of the property is automatically left to the other joint tenant or tenants.
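The contrast between survivorship and inheritance through the estate can be illustrated with a toy model. This is a sketch only: the function names and the equal-split convention for survivors are my own, and real outcomes depend on the deed and on local law.

```python
# Toy model of what happens to ownership shares when a co-owner dies,
# contrasting joint tenancy (right of survivorship) with tenancy in common.
# Illustrative only; not legal advice.

def joint_tenancy_on_death(owners, deceased):
    """Right of survivorship: the survivors absorb the deceased's share
    equally, automatically, outside of any will or probate."""
    survivors = [o for o in owners if o != deceased]
    share = 1.0 / len(survivors)
    return {o: share for o in survivors}

def tenancy_in_common_on_death(shares, deceased, heir):
    """No survivorship: the deceased's share passes through the estate to
    whoever the will (or the intestacy rules) designates."""
    new_shares = dict(shares)
    new_shares[heir] = new_shares.get(heir, 0.0) + new_shares.pop(deceased)
    return new_shares

# Three joint tenants; one dies -> the other two each hold a one-half share.
print(joint_tenancy_on_death(["A", "B", "C"], "C"))
# {'A': 0.5, 'B': 0.5}

# Tenants in common 50/25/25; C dies leaving her share to heir D.
print(tenancy_in_common_on_death({"A": 0.5, "B": 0.25, "C": 0.25}, "C", "D"))
# {'A': 0.5, 'B': 0.25, 'D': 0.25}
```

The first call reproduces the three-joint-tenants example given earlier; the second shows why a tenant in common needs a will, since the deceased's share goes to a named heir rather than to the co-owners.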
The right of survivorship is a crucial point for contentious probate matters because a property transaction to become joint tenants can result in someone automatically inheriting a house.

The Facts of Hume v Leavey and Hume

Hume v Leavey and Hume [unreported] is a recent case involving an elderly lady's decision to put her house in her and her son Glen's joint names, leaving the £350,000 property to him, and not to her two other sons. Mrs Hume also wrote a will leaving the entirety of her estate to Glen. Her son John challenged the distribution of his mother's estate, claiming that her decisions were made under the undue influence of his brother.

The court considered evidence of Mrs Hume's close relationship with Glen due to their shared love of hairdressing, his decision to give up his hair salon to look after her, as well as her strong-willed character that rendered her unlikely to be vulnerable to coercive behaviour.

Joint tenancy gives each joint tenant the right to use and enjoy the entire property, and upon the death of one joint tenant, the property vests automatically in the surviving joint tenant, without necessity of probate.

However, joint tenancy is an incorrect, even dangerous, way of holding title to assets if you do not want to give up control of the assets until your death. The drawbacks to joint tenancy are:

- Loss of Control: Since each joint tenant can use or enjoy the joint tenancy property, you run the risk of your joint tenant using up or simply removing assets from joint tenancy accounts.
- Unwanted Partners: You run the risk of becoming partners with your joint tenant's Trustee in Bankruptcy, or your joint tenant's ex-spouse in a divorce, or your joint tenant's creditor.
- Income Tax Disadvantages: If beneficiaries of your estate are your joint tenants, you are depriving them of a full step-up in basis by making them joint tenants.
Married couples in community property states lose a full step-up in basis by holding title as joint tenants instead of as community property.

Property acquired from a decedent gets a basis equal to the fair market value as of the date of death. This means that the $500,000.00 home which you purchased fifty years ago for $20,000.00 will, when it is inherited by your beneficiaries, have a new basis of $500,000.00. Your beneficiaries can then sell the home for $500,000.00 and realize no taxable gain. Joint tenants instead get a basis equal to their percentage of the original $20,000.00 plus their percentage of the $500,000.00 stepped-up basis, resulting in taxable gain upon sale of the home for $500,000.00.

In short, do not hold title as joint tenants if your real intent is to give up control only upon death. A revocable trust will avoid probate without the pitfalls of joint tenancy.

“Holding Title” is the way in which you take ownership of your real estate property. There are many different ways to take and hold title in California, and it is best to be informed on all of them so that you as a buyer can make an informed decision that best fits your needs.

The graphic below outlines the different ways of “Holding Title” in the State of California.

Let's break this all down, shall we?

How you take ownership of your real property determines who has the legal authority to sign documents and obtain rights associated with the property. Some of these rights include: property taxes, income taxes, inheritability/gift taxes, and transferability of title.

Sole Ownership

This means that there is only one person on the deed of the property.
When this person dies, the property will go through probate, a court process that determines who gains ownership of the property after the death of the individual or sole entity.\nOne way to avoid the probate process is the Transfer on Death Deed. This is a way in California to assign ownership of your property to your heirs after your death.\nJoint Tenancy\nThis means that you and your co-owner(s) all have equal ownership of the property: with two owners, you own half and your co-owner owns the other half.\nUnder this form of vesting, an owner is allowed to sell their portion of the property while alive, but is not allowed to will their stake of the property, due to the Right of Survivorship.\nThe Right of Survivorship means that once you die, the remaining co-owner(s) will assume your stake of ownership. This allows them to avoid the dreaded probate process.\nTenancy in Common\nThis occurs when you have multiple co-owners with unequal ownership of the property.\nIn the infographic, one owner may own 75% of the property while the other owns 25% of the real property.\nIn this situation each owner has the ability to transfer, will, or restrict their percentage of ownership in the property.\nYou do not receive the Right of Survivorship with this form of vesting, so each share is subject to probate.\nCommunity Property\nOnly married couples are subject to this form of vesting. When you get married, your spouse owns equal ownership unless otherwise stated in writing and signed by both parties.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-6", "d_text": "Tenancy-In-Common: Ownership of property in which, upon death, each owner’s share goes to his/her heirs or beneficiaries is known as Tenancy-In-Common. It is created when two or more persons own property together but also own separate titles to the property. 
Property owners may or may not own the same percentage of the property. For example, one may own 25% while the other owns 75%. Each owner may do as he or she wishes with his or her interest in the property, such as give it away, sell it, or mortgage it, without the consent or knowledge of the other owner. With tenancy-in-common, upon death, one person’s share passes as provided in his or her will or trust. Probate or other consequences are possible.\nTenancy-By-The-Entirety: This form of joint tenancy between a husband and wife is valid in a few states. Tenancy-by-the-entirety provides extra protections to real property owned by a married couple. A spouse owning property as tenants-by-the-entirety cannot mortgage, transfer, or otherwise deal with the property in any way that would affect the rights of the other spouse without the latter’s consent. When one spouse dies, the other still owns the entire property.\nThere are some important things to consider about joint tenancy:\n5. A Will: Who Needs One?\nA will is an estate-planning tool that serves as your set of instructions regarding who gets your property and resources when you die. At a minimum, everyone needs a simple will. It is the document that most people use for transferring their property, and it is often the choice of young families and of others whose situations involve neither complex tax planning nor resource management for incapacitated family members. After death, the will is settled through the probate process.\nWhy have a will?\n5.a. What Constitutes a Valid Will?\nFactors that must be present in a valid will vary from state to state, so it is wise to check your own state’s requirements. Certain elements are often necessary:\n5.b. 
Types of Wills\nThere are several types of wills:\nAttested wills are the most common type of will.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-0", "d_text": "Assured Shorthold Tenancy\nFixed term tenancies\nThe general law applies. Where joint tenants hold a fixed term, the survivors become the tenants. If a sole tenant dies, the tenancy devolves according to his will or on intestacy – see below under general law.\nPeriodic tenancies – joint tenancies\nIn the case of a periodic tenancy (including a statutory periodic tenancy) held by joint tenants, where one of the joint tenants dies the general law applies and the survivors become the tenants.\nPeriodic tenants – spouse living with the sole tenant\nWhere the tenant was a sole tenant and...", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "Thank you for using JustAnswer. I am researching your issue and will respond shortly.\nWhen you say that she signed the interest over to you, in what way did she do that? Was that in a deed giving you her part of the property? Was the property held with \"right of survivorship\" in the deed?\nShe just had a handwritten list of personal property which identified what each child was to receive, and in that document she gave me her car and her \"half ownership in the house.\" The documents I received when the loan modification was completed were in my name only...after they verified she was no longer living, etc. ...Larry\nAnd in the deed that you're on that you co-own the house, does that say \"joint tenant\" and/or \"right of survivorship\" anywhere?\nI will have to check on that. I know the subordinate deed of trust issued by HUD does not contain that language. 
I extended a mortgage reduction modification, which meant that there is the deed for the house and a subordinate deed of trust in the amount of $48,000 that has to be paid when the house is sold.\nWere these instructions that she left in her own handwriting?\nYes, and signed and dated...Larry\n(by the way, a \"deed of trust\" is not the same thing as a \"deed\"... the deed is the document that gives you legal ownership, whereas the deed of trust is a document that secures the property to repay a loan)\nYou would want to check the actual deed on which the previous seller (or this woman) added you, which gives you the right to own half.\nIf there is a \"right of survivorship\" mentioned in the deed, then the property is yours.\nRight of survivorship means that the property passes automatically to the other individual(s) on the deed.\nJoint tenancy is also something that can give you this.\nNow if it is silent as to that fact, or says \"tenants in common\" etc... then it would continue to be her property until transferred (that is, it would go to the estate).\nAnd in that situation, only if this document were a will would it be something that you could claim the property on.\nA non-will document that expresses a desire or instruction that something happen is not a will.\nAnd legally, if there is no will, the property would pass via the law of intestate succession.", "score": 23.643896259463624, "rank": 53}, {"document_id": "doc-::chunk-2", "d_text": "This arrangement is defined by the following characteristics: the tenancy must be conferred by the same deed or grant.\nJoint tenants in equity\nIf an equitable joint tenancy exists, the beneficial interest of 
any joint tenant (proprietor) will pass on death to the surviving tenant. The last survivor will then hold the land as sole legal and beneficial owner and, as a result, the trust will come to an end. Joint tenancy (or more formally ‘joint tenants with a right of survivorship’) is the most common way for legally married spouses to hold ownership of their house in Ontario. If one joint tenant dies, they cease to be an owner, and the remaining joint tenant continues as the owner. Survivorship rights are automatic in the case of tenants by the entirety, and they're provided for by deed in cases of joint tenancy. A surviving spouse or co-owner immediately becomes the sole owner of the property when the other spouse or co-owner dies.\nSole Ownership and Joint Tenancy\nUnder joint tenancy, both partners jointly own the whole property, while with tenants-in-common each owns a specified share. However, if beneficial ownership has initially been structured in the form of joint tenants, such ownership structure is not fixed for all time. A change from joint tenants to tenants in common can easily be achieved by the act of severance; this simply involves a joint tenant writing to the other joint tenant(s) giving notice that he/she wishes to hold his/her interest as a tenant in common. Joint tenancy with rights of survivorship (JTWROS) is a type of account that is owned by at least two people.", "score": 23.643896259463624, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "This is a legal agreement between joint owners of property setting out how they own a property and in what shares.\nIf you own a property jointly with a partner, friends, family or associates, you can either own it as beneficial joint tenants or tenants in common. 
The terms have nothing to do with tenants in a landlord and tenant sense.\nThis means that you do not own a specific share and that if you die your interest will pass automatically to the other owner or owners (\"right of survivorship\"). Any provision in a Will leaving your share of the property to someone else will be ineffective.\nA beneficial joint tenancy comes to an end if the property is transferred to one owner, if it is sold or if only one of the joint tenants remains alive. The joint tenancy can also be brought to an end voluntarily, for example where a new trust deed is entered into, or involuntarily, for example, where an owner becomes bankrupt, in which case the joint tenancy will become severed and the owners will become tenants in common.\nA joint owner may sever the joint tenancy at any time by giving notice to the other owner/s. The property will then be held as tenants in common.\nDivorce or separation does not result in an automatic severance of the joint tenancy, although the wording of a document filed or served within any proceedings may constitute a notice severing the joint tenancy.\nIf you own a property jointly as tenants in common, then you own a specific share of the equity in the property. On your death your share of the property passes into your estate to the beneficiary entitled under the terms of your Will or the rules of intestacy, if you have not made a Will. In your lifetime you can make a gift, sell or re-mortgage your share.\nIf you own a property as tenants in common, it is always advisable to record details of ownership in a Declaration of Trust. 
This helps to ensure that the joint owners are clear about the specific shares they own, the contributions that have been made to the purchase price, the amount that each party is to pay towards the mortgage (if there is one), how the proceeds should be divided on sale and how to resolve disputes, such as what should happen if one owner wants to sell and another does not.\nIf you own a property as joint tenants, although there is a presumption that you own the property in equal shares, that presumption can be rebutted by either express evidence, such as a trust deed, or by the parties’ conduct.", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-1", "d_text": "How do I handle bank accounts held in joint tenancy? Transferring Assets to Your Trust Funding Instructions.\nCalifornia lawmakers passed Assembly Bill 139 (AB 139) in September 2015, creating the revocable transfer on death (TOD) deed. What if the decedent's real property in California is worth $50,000 or less? A Revocable Transfer on Death (TOD) Deed is a newly sanctioned instrument. Each document presented for recording MUST include or comply with all recording and content requirements, or it may be invalid. 
California now provides a revocable transfer on death deed, which lets a property owner name a beneficiary for real property. When there are multiple designated beneficiaries to receive concurrent interests, a replacement Trustee may be appointed by a unanimous vote of the Qualified Beneficiaries. A properly executed Transfer on Death Deed is effective if it is recorded with the County Clerk in the county where the property is located. If you are an heir or beneficiary you can ask the Court to make an order to clear title. Sample forms for new CA TOD deeds are available in this post on ACLL's The.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-1", "d_text": "Many people asking the question of whether it makes sense to add someone to their home are considering joint tenancy because of its right of survivorship, which avoids probate.\nJohn C. Grier, managing partner at the San Diego law firm of Mathews Grier Damasco, outlined for me in an interview and in a really good article on his firm’s Web site (www.mbglawoffices.com; click on the link for trusts and estate planning) the pitfalls of placing property in joint tenancy. Of course, you should check with your own legal or tax adviser regarding your particular situation, since the laws of each state may differ, Grier said.\nSo, what are the pitfalls of joint tenancy?\nYou could trigger a gift tax for yourself or your estate. 
The federal government assesses taxes (or a reduction in the available estate tax exemption) against any gift over $11,000 made to any one person in a calendar year, Grier said. If you add someone to your property, it may be viewed as a gift of one-half the value of the property.\nYou may unintentionally create a taxable profit for your heir. A transfer of real property on death receives a stepped-up value to current market value, for capital gains purposes, Grier said. Simply put, suppose a couple bought their home for $20,000 in 1955. The home is now worth $300,000. An adult daughter inheriting the property after the couple’s death receives the home with a fair market value of $300,000. If it is immediately sold, there is no tax because there has been no gain, Grier said. But if the daughter’s name is put on the home, she doesn’t get the full stepped-up value.\nProperty held jointly is subject to claims by creditors of any of the owners. For example, suppose a couple adds their son’s name to their home. The son has a business that fails and the IRS comes after him for unpaid taxes. Because the son is part owner of his parents’ home, the IRS tries to force the sale of the house. The couple will get a share of the proceeds but they would no longer have their home to live in.\nSadly, the above example is a true story. Fortunately, the couple was able to keep their house but not before spending $2,500 in legal fees as well as paying the son’s tax obligation of $75,000, Grier said.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-0", "d_text": "This provides a simple way to transfer California real estate at death without having to go through California probate. (Probate Code §5600, et seq.) The way to look at this is it is a “Transfer-On-Death Deed ” NOT a transfer during life deed. It only applies to residential properties and must be promptly recorded after it is notarized. This document is exempt from documentary transfer tax under Rev. 
& Tax. Code §11930. This document is exempt from preliminary change of ownership report under Rev. & Tax. Code §480.3.\nFor the specific requirements and a sample, click here:\n- Residential Property Only – The property transferred by the TOD Deed must be (a) property that includes a structure with at least one—but not more than four—dwelling units; (b) a condominium; or (c) agricultural property of less than 40 acres with a single-family residence. TOD Deeds cannot be used to transfer commercial real estate or other non-residential property.\n- Valid Legal Description – The property must be identified by a proper legal description, not just the common or street address.\n- Legal Capacity – The owner must have legal capacity to enter into contracts. This requires that the owner be at least 18 years old and be capable of understanding the consequences of the TOD Deed.\n- Signed, Dated, Notarized, and Recorded Within 60 Days – The TOD Deed must be signed, dated, and notarized (acknowledged by a notary public). It must also be recorded in the land records of the county where the property is located within 60 days of the date it is signed.\n- Identify Beneficiaries by Name – The deed must identify the beneficiaries by name. A designation of beneficiaries by class is not effective. That means, for example, that a homeowner cannot leave the property to “my children in equal shares.” Instead, the deed must list each child by name.\n- Specific Statutory Form – California law does not allow just any deed form to qualify as a TOD Deed. All TOD Deeds must be in substantially the same form required by California law, and drafters may not add custom conditions to the form. To ensure that the deed will be respected, it is important to follow the specific form specified in the California statutes.\n- Here is a sample form. 
Simple revocable transfer on death (TOD) deed Transfer on Death Deed (Chicago Title)", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "California Affidavit of Death\nHas your spouse recently passed away? If the two of you held title to real estate as community property in California, you will need to file this California Affidavit of Death form.\n- First you will need to sign the Affidavit in front of a Notary.\n- Then you must file it with the County Clerk in order to have title to the property transferred solely into your name.\n- You'll need to attach a copy of the Certificate of Death to the form.\nBuy the California Affidavit of Death form and download it. Then you can fill in your details, print it, and take it to a notary for signing.\nLast Updated: 14-April-2016", "score": 23.030255035772623, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "The type of ownership interest a property holder has in a piece of real estate affects her inherent legal rights. The way the owner received ownership of the property on the deed, the legal document used to transfer real estate, establishes the nature of her ownership. A joint tenancy is a common form of ownership in real estate, and each joint tenant has specific legal rights in regard to the property.\nEqual and Undivided Ownership\nEach joint tenant has equal and undivided legal ownership of the property. The equal and undivided ownership gives the joint tenants each the legal right to dispose of or keep his share of ownership interest in the property while the joint tenancy is in effect. No one joint tenant can ever have a larger share of ownership than the other joint tenants in the same property. 
All joint tenants must take title to, or ownership of, the property at the same time and in the same way, such as being the receivers on the same deed.\nAll joint tenants are legally vested in the property, meaning the ownership is fixed for the same period of time (usually the joint tenant's lifetime), and cannot be changed by any condition while in effect. A joint tenant's ownership rights end when the joint tenant dies or otherwise disposes of his ownership interest in the property, such as selling his interest to another person.\nRight of Survivorship\nThe ownership interest of a joint tenant automatically passes to the other joint tenant or tenants upon death; this is commonly referred to as the \"right of survivorship,\" according to the American Bar Association. Each remaining joint tenant still has the same ownership rights in the property and receives equal percentages of ownership from the deceased joint tenant's share. The heirs of a deceased joint tenant do not have any legal claim to the property.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-3", "d_text": "How Do I Create a Living Trust in California? Understanding the new California Revocable Transfer on Death Deed.\nWhat Is a Spousal Property Petition When There Is a Surviving Spouse? Find free transfer on death legal forms designed for use in Arizona. It complies with Arizona law. 
In any event, if the property is still held in joint tenancy at the owner's death, the right of survivorship prevails and the TOD transfer will fail. Virginia recognizes these types of deeds; see the Real Property Transfer on Death Act from the Uniform Law Commission. Alabama lets you register stocks and bonds in transfer-on-death (TOD) form. The TOD deed has been available in California since the beginning of 2016. 
When the account holder dies, the assets pass directly to the named beneficiary; without such a designation, your real estate must go through the probate court and your property will pass to your heirs according to Texas law.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-2", "d_text": "One negative of this type of ownership is that the property will only pass to the other joint tenant, so the Estate of the first to die loses any equity to pass on to other individuals. In addition, potential gift tax issues may arise since the Grantor is “gifting” rights to the property to the person they are creating a joint tenancy with.\n- Own the Property in a Limited Liability Company. Ask a business organization attorney about property ownership through an LLC. The rights of the members will depend on the structure of the LLC. Creating an LLC requires maintenance of paperwork with the State to keep the LLC active, which will be required if the LLC wants to sell the property.\n- Put the Property Into a Living Trust. This is achieved by conveying the property to a Trustee on behalf of a Trust. (A Trust itself can’t own property; rather it must be an individual Trustee on behalf of the Trust.) The property will then be maintained and distributed in accordance with the Trust Agreement. A Living Trust allows the Grantor to make changes during his or her lifetime (therefore keeping control and autonomy) but also allows for the streamlining of management and an easy transition of the property upon the death of the original Grantor. The successor Trustee can sell or manage the property outside of Probate, and depending on the Trust terms, without the input of or disruption to the Beneficiaries.\nTennessee does not offer this, but some states allow the use of a Beneficiary Deed to clarify how a property is to pass upon the owner’s death. Essentially, a Beneficiary Deed lets a person name a beneficiary and only takes effect upon the death of the owner. 
Ask your Estate Planning Attorney about the availability of Beneficiary Deeds if you own property in multiple states.\nRight to Partition\nIf you are tied up in joint property ownership, or if you own a piece of property with a group of individuals or family members and you want to end the relationship and go your separate way, you can. In Tennessee, you have the legal right to what is called “partition.” Speak with a civil litigation attorney about filing a partition lawsuit. In this kind of lawsuit, you ask the judge to partition the property, either “in kind” or “by sale.”\nNeed Help with a Property Ownership Issue?\nHave questions about joint property ownership or other real estate issues?", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "In California, property is usually transferred to the spouse and/or children of a person who does not have a valid will at the time of death. If no spouse or children exist, assets are typically divided amongst parents, grandparents, or increasingly distant relatives. If a person does not have a will when they die, his or her relatives can make claims to his or her various assets and property. This may result in probate, which can be a lengthy and expensive process.\nCall Ryan Blatz Law to Discuss Your Estate Planning Needs\nA will is simply one of the most fundamental aspects of an estate plan. There are many other factors involved, including establishing durable powers of attorney for finances and medical decisions, setting up revocable or irrevocable living trusts, and more. At Ryan Blatz Law, I can help you navigate the process of estate planning. 
My goal is to serve as your guide, providing personalized advice and one-on-one legal counsel.\nRequest a complimentary consultation by contacting the firm at (805) 798-2249.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-0", "d_text": "People sometimes assume that everything will just fall into place naturally if you pass away without any estate planning documents, especially if you are married. In fact, this is a misguided assumption. Many negative consequences can come about if you do not take the time to construct a personalized estate plan that ideally suits your needs.\nThe Condition of Intestacy\nIf you were to die without an estate plan, the condition of intestacy would be the result. This would be true if you are single, or if you are married. Under these circumstances, the probate court would step in to sort things out under the intestate succession laws of the state of California.\nA personal representative would be appointed to handle the estate administration tasks. Final debts would be paid out of the assets that comprise the estate, and ultimately, the probate property would be distributed under the intestate succession laws.\nExactly how your property would be distributed if you die intestate when you are single would depend upon the familial relationships that were in place at the time of your passing.\nYour children would inherit everything if you were to pass away as a single parent, and your parents would inherit everything if you died single without any children. If you were to pass away as a single person without any living parents or children, and you have siblings, your siblings would inherit your probate property.\nThere are laws that would apply to different familial relationships if you were to pass away without any living children, parents, or siblings.\nSometimes, a person will pass away intestate with no living relatives at all. 
Under these circumstances, the state could ultimately absorb the probate property under escheat laws if no relatives are found.\nAction Is Required\nThere is no reason to go through life without an estate plan. Every responsible adult should have a plan in place, and this includes single people who are relatively young adults. You have to make sure that your assets are transferred to inheritors of your own choosing, and you should also prepare for possible incapacity when you devise your estate plan.\nIf you are frozen with inaction because you do not know where to begin, this is somewhat understandable, but you can take the first step right now. Our firm offers free consultations, and we would be glad to sit down with you, gain an understanding of your situation and your objectives, and help you create a personalized estate plan that ideally suits your needs.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "Q: My wife and I owned our house as joint tenants with rights of survivorship. When she died, I became the sole owner, and the deed was changed to reflect this.\nLast year, my daughter insisted that I put one of my children’s names on the deed also, in case of my death. I put her name on it since she is the executor of my estate when I die.\nMy will specifies that “all of my tangible and intangible property of whatsoever kind is bequeathed to my four children in equal shares.†Here’s the problem: This daughter, whose name is on the title, wants to live in the house with her husband and two children when I die.\nWhat method must she use to compensate the other three siblings to rightfully receive their share of my estate without selling the house outright since I want one of them to continue to live there?\nA: Your intentions may be admirable, but if what you write is true, you have probably conferred upon your daughter future sole ownership in the property. 
She now co-owns the property with you and, depending on how you hold title, she will either inherit all of the property or half plus a one-fourth share of the other half.\nI’m sure this is not what you intended, and while your daughter may be entirely trustworthy, you should fix this situation immediately. Not only could there be problems dividing the property after you are gone (her share of the property will not be subject to disposition via your wishes) but you may have created an estate tax/gift tax situation for yourself now, and for her down the line.\nLet’s look at the immediate tax implications of putting her name on title: you may have inadvertently gifted half the property to your daughter. This could cause a taxable situation in which gift taxes are owed. I encourage you to call your accountant or tax preparer immediately to see what, if anything, you owe to the government.\nYou may be able to undo this gift and instead create a trust in which the house is the sole asset. When you die, the beneficiaries of the trust (your four children or their heirs if they die before you do) would then inherit the house in equal shares.\nAt that time, depending on what the property is worth, your daughter can make an offer to her siblings for their shares of the property and buy them out. This would be the fairest thing to do.", "score": 21.44663305721414, "rank": 65}, {"document_id": "doc-::chunk-0", "d_text": "By: B Stead\nMany people own property with another person in a co-ownership arrangement. Spouses or partners typically own their residence together in joint names; family members or friends may own a property together for investment.\nAn important issue to consider upfront when buying property is the consequences of when a co-owner dies. How the property is owned between people, that is, its tenancy, can give very different outcomes on death. 
Ideally these should be considered at the time of purchase.\nQuestions to ask include who can take a co-owner’s interest when they die? Would this be what they want to have happen? If not, can they state their intention in their will? Or is the property owned in a way that on death the interest automatically passes to the survivor/s outside of a will, as in joint tenancy? This article looks at tenancy issues.\nTwo types of co-ownership\nTenancy is about owning property with others as co-owners. There are two ways of co-owning property – joint tenants and tenants in common. The legal entitlements of these are different, with different outcomes for ownership interests on death.\nCo-ownership of property in these ways is not restricted to real estate, but can apply to other forms of property, such as joint bank accounts and credit card accounts. So a question for co-owners is who will inherit their interest? Ideally this and other issues would be considered from the start, when choosing whether to co-own as a joint tenancy or tenancy in common, and aided by professional advice.\nWhat follows is an outline of the key aspects of co-ownership in the context of succession and willmaking. Keep in mind that only property owned personally can form part of an estate, and so only this can be given away in a will.\nAn important attribute of joint tenancy is a right of survivorship. It means that when one co-owner dies, the survivor(s) automatically own the property by the operation of law. This occurs independently of a will (and hence the probate process).\nWhen eventually there is a sole survivor, that person will own the whole property, and they may deal with it as they wish. So it is important for a sole survivor to revise their will to take this change into account. 
If not, the statutory rules of intestacy apply, which may lead to an undesirable outcome.\nWith tenants in common there is no right of survivorship.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "If your minor child is designated as a beneficiary without a specified custodian, it would be necessary for a Court to appoint a Guardian for that minor child’s estate before payment would be made if you die while the child is a minor. If the beneficiary dies before you or at the same time, then probate may be required if no further estate planning is completed.\n3. PROPERTY HELD IN A TRUST: A revocable living trust is an estate planning device that allows you to name someone (called a Successor Trustee) to follow instructions you write into the trust for distributing the property that you placed into the trust after your death. A trust is created by a signed document, usually written for you by an attorney, and then you transfer your assets so title is held by the Trustee of your Trust. A Trust is not probated, and the transfer to beneficiaries costs less and is usually completed in less time than Probate. A trust also provides instructions for the management of the property that you have transferred to the trust in the event of your incapacity, thus avoiding the expense of Conservatorship proceedings. A trust usually has alternate beneficiaries, so that there is not a problem if your beneficiary dies before you.\n4. PROBATABLE ESTATE: Any property that is not held in one of the 3 ways listed above (or if the named beneficiary or joint tenant dies before you) is part of your estate that could be subject to probate. If you leave a Will, this property is transferred as specified in your Will. If you don’t leave a Will, this property passes to your nearest relative(s) as specified in State law (called the Intestate Law).
It would only pass to the State of CA if you don’t leave a Will and none of your relatives can be identified or located. If the total value of all your property passing via this avenue #4 is less than $150,000, no Probate will be necessary; instead only a Probate Code Affidavit and death certificate will be required. If the total value of your entire property passing via this avenue is more than $150,000, then formal Probate will be required. Probate requires a court case, and is costly.\nAttorney at Law\n210 N. Fourth St., Suite 101", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-0", "d_text": "Hi and welcome to Just Answer.I am Ray and I will be the expert assisting you tonight.Can you tell me here if you are on the deed or is it in the mother's name??Did she have a will?\nHi. She did not have a will and I am on The Deed.\nThen if you were joint owners here you will need to make application for probate.Her share here passe under the California laws of intestacy.\nHere are those laws for reference.\nOnce probate is opened here it would be possible to add your name to the note assuming you are the legal heir under the laws of intestacy.If there are other heirs they would have to assign you their share of mother's joint interest.\nI am not sure why only mother's name was placed on the note.\nThe up side would be that you would have right to walk away from the house since you are not on the note yet.\nI am assuming that you want to keep the house.\nIf I am on the deed alone with my mother could my brothers put a stake on the home even though I have been paying for the home?\nYou may also want to review the deed here to see if there was a right of survivorship that would award you her interest.\nYes he could have an interest in the mother's share.The court would have to resolve the credit you may get for payments here.\nI am on the note and I see my name clear on it. 
The mortgage has jumped from bank to bank since 2006 and it must have fallen off somewhere.\nMother's share, unless there was a right of survivorship, passes under the laws of intestacy since she did not have a will. It is unfortunate that she did not will it to you.\nAnd maybe your brother would waive his right to inherit here.\nYou are going to need a local probate lawyer here to file for probate.\nLawyer referral for you.\nI know this can all be kind of overwhelming.\nIn filing for probate would my brothers need to be notified?\nWhen you make application for probate they do notify the heirs here. So yes they would be notified.\nHere you would have to resolve the mother's interest or the court might order a sale. You of course would get your half and your share of mother's too. Or you can buy out the other siblings' interest or they may waive right to inherit.\nHere is more information about the probate process.\nExpect probate to take at least a year here from start to finish.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-0", "d_text": "You may have recently found yourself looking over many of the possessions you have accumulated over the years. Some of them may hold a considerable amount of meaning to you, while others you may simply have obtained out of necessity or passing interest. Nonetheless, you may still have begun wondering what would happen to your assets after your death.\nMany people find themselves considering this idea at some point in their lives, and it can often lead to the start of estate planning. You may wonder whether creating an estate plan is necessary or even useful for your situation. However, estate planning could offer benefits to anyone, and if you do not create a plan, California state law dictates who receives your assets.\nDying without a valid will is known as dying intestate.
In such cases, your wishes for property distribution and other tasks related to closing your estate remain unknown by your surviving family and the court. As a result, the court decides who should act as personal representative and how to distribute your assets. When it comes to intestate succession, your personal circumstances could play a significant role in who receives what property. For example:\n- Single, no children – If you have no spouse or children, your parents may end up with your remaining assets. If your parents have already passed, any siblings you have — including half-siblings — will receive a portion of your property. Without parents or siblings, more distant relatives may obtain your property.\n- Single with children – If you have no spouse but do have children, your children will inherit the entirety of your estate. The court should divide your property equally among them.\n- Married, no children – In the event that you have a surviving spouse and no children, your spouse will likely take over your assets, or the court may divide separate property between your spouse, parents and siblings.\n- Married with children – If you have children, your spouse should receive your entire estate as long as your surviving spouse is the biological parent of your children.\nOther scenarios could also come into play such as whether you lived with your partner unmarried.\nIf you do not like the idea of not knowing for certain where your assets will go after your death, or if you would like for certain individuals to receive specific assets, you may wish to consider estate planning. Creating a will or other documents could help you make your wishes known.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-0", "d_text": "1. What happens if I die without a Will? If you die without a Will, what you own (your “assets”) in your name alone will be divided among your spouse, domestic partner, children, or other relatives according to state law. 
The court will appoint a relative to collect and distribute your assets.\n2. What can a Will do for me? In a Will you may designate who will receive your assets at your death. You may designate someone (called an “executor”) to appear before the court, collect your assets, pay your debts and taxes, and distribute your assets as you specify. You may nominate someone (called a “guardian”) to raise your children who are under age 18. You may designate someone (called a “custodian”) to manage assets for your children until they reach any age from 18 to 25.\n3. Does a Will avoid probate? No. With or without a Will, assets in your name alone usually go through the court probate process. The court’s first job is to determine if your Will is valid.\n4. What is community property? Can I give away my share in my Will? If you are married and you or your spouse earned money during your marriage from work and wages, that money (and the assets bought with it) is community property. Your Will can only give away your one-half of community property. Your Will cannot give away your spouse’s one-half of community property.\n5. Does my Will give away all of my assets? Do all assets go through probate? No. Money in a joint tenancy bank account automatically belongs to the other named owner without probate. If your spouse, domestic partner, or child is on the deed to your house as a joint tenant, the house automatically passes to him or her. Life insurance and retirement plan benefits may pass directly to the named beneficiary. A Will does not necessarily control how these types of “nonprobate” assets pass at your death.\n6. Are there different kinds of Wills? Yes. There are handwritten Wills, typewritten Wills, attorney-prepared Wills, and statutory Wills. All are valid if done precisely as the law requires. 
You should see a lawyer if you do not want to use this Basic Will or if you do not understand this form.\n7.", "score": 20.327251046010716, "rank": 70}, {"document_id": "doc-::chunk-4", "d_text": "When there is no official trust instrument, a trust may still be found under certain circumstances in order to enforce agreements as to property and income of domestic partners:\nUniform Pre-Marital Agreement Act\nThis act has been adopted in 11 states (Alaska, California, Hawaii, Maine, Montana, North Carolina, North Dakota, Oregon, Rhode Island, Texas and Virginia). It provides legal guidelines for unmarried couples who wish to make agreements in anticipation of marriage regarding ownership, management and control of property; property disposition on separation, divorce and death; alimony; wills and life insurance beneficiaries. The statute expressly prohibits couples from including provisions concerning child support. Pre-marital agreements are permitted in states that haven't adopted this uniform statute, but are subject to different guidelines in those states.\nJoint tenancy is a method by which people jointly hold title to property. All joint tenants own equal interests in the jointly owned property. When two or more persons expressly own property as joint tenants, and one owner dies, the remaining owner(s) automatically take over the share of the deceased person. This is termed the right of survivorship. For example, if two people own their house as joint tenants and one of them dies, the other person ends up owning the entire house, even if the deceased person attempted to give away her half of the house in her will.\nIn most states, one joint tenant may, on her own, end a joint tenancy by signing a new deed changing the way title is held; then, she is free to leave her portion of the property through her will. 
Because joint tenancy property isn't passed through a will and thus doesn't go through probate, joint tenancy is a popular technique to avoid the costs and delay often associated with the probate process.\nTenancy in Common\nTenancy in common is a way for any two or more people to hold title to property together. Each co-owner has an undivided interest in the property, which means that no owner holds a particular part of the property and all co-owners have the right to use all the property. Each owner is free to sell or give away his interest. On his death, his interest passes through his will or living trust, or by intestate succession if he had no will or living trust.\nTenancy in common differs from joint tenancy, where the property passes automatically to the surviving co-owners on one own's death, regardless of any will provision.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "- Without a will, typically the probate court distributes the property of the deceased among his next of kin.\nOut of state probate responsibilities\nOut of state residents with California probate responsibilities should consider hiring legal counsel from a probate lawyer who has experienced navigating the probate process in the state where the responsibilities reside. The advice of an experienced probate lawyer will help make the execution of the steps involved in the probate process as efficient as possible, and will be able to offer individualized direction based on your families specific needs.\nSeven Ways To Avoid Probate\nYou can reduce the size of your estate, and shorten the probate process if you give gifts of property or cash to your heirs. Keep in mind the gift tax, which currently applies to gifts of $15,000 or more. Your Heritage Law, LLP probate lawyer in Mission Viejo, CA, can help you plan such gifts.\nCreate Living Trusts\nLiving trusts are probably the most well-known means of avoiding probate. 
You can put your home, vehicles, financial accounts and more inside a living trust. You just have to remember to re-title everything in the name of the trust, instead of leaving it in your name. That way, when you die, your successor trustee takes over administering the trust and can transfer ownership to the beneficiaries you named. Living trusts are easy to set up with the help of a probate lawyer in Mission Viejo, CA.\nEstablish Joint Ownership Regarding Real Estate\nThere are three types of joint ownership when it comes to real estate:\n- Joint tenancy with rights of survivorship: The title to the property automatically passes to the other owner when one passes away.\n- Tenancy by the entireties: This may be available in your state; this is the same as joint tenancy with rights of survivorship, but it only applies to married couples.\n- Community property: This is only available if you are married and live or own property in one of the states that allows for this.\nThere is a drawback to joint ownership. The other owner has just as much right as you to do whatever he wishes with the property.\nForm Joint Ownership With Survivorship for Other Property\nAny property you own can be titled with joint ownership with survivorship. Some examples include cars, boats, financial accounts and securities.
Your Heritage Law, LLP probate lawyer in Mission Viejo, CA, can help you set up these joint ownership accounts.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-1", "d_text": "However, I am going to assume you didn't give all the facts and that there ARE heirs that will challenge your right to the property since when you separated the property became his separate property, which without a will passes to his heirs, according to CA laws. So you need to talk to a local probate attorney who can advise you, AFTER disclosing all the facts.\nIf the value of the property is not that much because there is a high loan on the property, for example, it may not be worth the trouble or expense in any event.\nThe foregoing is for informational purposes only and may not be relied on as attorney-client advice.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-1", "d_text": "It is possible to acquire or hold property as “community property” or as “community property with right of survivorship.” The additional language “with right of survivorship” ensures that the surviving spouse will receive title to the whole of the asset upon the death of the first spouse. Holding an asset as community property also creates a tax advantage, in that the surviving spouse will get a step up in basis on the asset to the date of death of the first spouse. In other words, the surviving spouse will not have to pay a capital gains tax on the increase in value from the date of purchase to the date of the first spouse’s death.\nSeparate Property. A married person may also hold property as his or her separate property. This includes property that was acquired prior to marriage, or property acquired during marriage by one spouse only as a gift or an inheritance. A spouse with separate property may make a gift of that property to the community by deeding or changing title of the asset to community property.
If the spouse continues to hold the property as separate, upon death the spouse may will it to anyone he wishes; the surviving spouse does not have any legal right to it. However, if a spouse with separate property dies without a will, separate property will pass according to Nevada’s laws on intestate succession, and the surviving spouse will be entitled to a share of the property.\nJoint Tenancy. Two persons, whether or not married, may hold property as joint tenants. Upon the death of one joint tenant, the surviving joint tenant becomes the owner of the whole of the property. In other words, the heirs of the first joint tenant to die do not inherit that person’s interest in the property; it passes by operation of law to the surviving joint tenant. For this reason, sometimes joint tenancy language also says “with right of survivorship.” For married couples, a partial step-up in basis is available if title is held in joint tenancy.\nCouples should be aware of and sensitive to the manner in which they hold title. A change in how an asset is titled will change how the asset is distributed at death. If you have questions or concerns, you should contact a qualified Nevada attorney.\nBy: Sharon M. Parker, Esq.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-16", "d_text": "In concluding that section 662 applies, the Court of Appeal relied heavily on two cases: In re Marriage of Lucas (1980) 27 Cal.3d 808 (Lucas ) and In re Marriage of Brooks & Robinson (2008) 169 Cal.App.4th 176 (Brooks ). Neither case supports the conclusion.\nIn Lucas, this court was concerned primarily with deciding “the proper method of determining separate and community property interests in a single family dwelling acquired during the marriage with both separate property and community property funds.” (Lucas, supra, 27 Cal.3d at p. 811.) Most of the opinion concerns the characterization of a house in which title was in the form of joint tenancy. 
Although it discusses presumptions at length, Lucas never cites section 662 even though that section had been enacted long before the opinion. Rather, it discusses two statutory presumptions, both of which used to be found in Civil Code former section 5110 and are now found in two separate sections of the Family Code. (Fam.Code, §§ 760, 2581.) One is the familiar presumption that property acquired during marriage is community property. (Id., § 760.) The other is a presumption, found in a statute within the community property law and fully consistent with the general presumption, that specifically governs real property designated as a joint tenancy. (Lucas, at p. 814.) As quoted in Lucas, that statute provided: “ ‘When a single-family residence of a husband and wife is acquired by them during marriage as joint tenants, for the purpose of the division of such property upon dissolution of marriage or legal separation only, the presumption is that such single-family residence is the community property of the husband and wife.’ ” (Id. at p. 814, fn. 2, quoting Civ.Code, former § 5110 [see now Fam.Code, § 2581].) Both of these presumptions favor a finding of community property, and thus they are compatible.\nSignificantly, the statutory presumption regarding property in the form of joint tenancy applies “[f]or the purpose of division of property on dissolution of marriage.” (Fam.Code, § 2581; see Civ.Code, former § 5110.) This language suggests that rules that apply to an action between the spouses to characterize property acquired during the marriage do not necessarily apply to a dispute between a spouse and a third party.", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "If you title your home or bank account with another person as joint tenants, the law is clear that the surviving owner automatically takes title upon the first owner’s death without probate. Sounds great and is the right choice in some cases.
This way of titling assets often works well with spouses but there are several cases when this is simply the wrong way to handle things.\nThe first point to make is that many people assume that their will or trust governs all their assets. The will, for example, might say that upon my death all my assets go to my children in equal shares. This will, however, will not control a joint bank account where you name one of your children as joint owner. This is commonly done with the logic that this child has easy access to the account to pay bills and they can just take care of things right away after death and not wait around for probate. The concern is that this way of titling means that the joint owner is entitled to all the money and doesn’t have to share with his or her siblings. Even if that child wants to split the account, if he or she has creditors or is in a divorce proceeding that account might be frozen and used to pay off the child’s creditors. From a more practical standpoint, it’s really hard to know after a parent dies if that account was really supposed to be split or maybe one child is intentionally getting more as they were the one who handled all the finances or helped take care of the parent before death. A combination of powers of attorney and beneficiary designations can accomplish the same goal (easy access and no probate) without any fear of foul play or just plain uncertainty.\nAnother common problem with joint tenancy occurs when a parent deeds land to all the children prior to death. This could be done to avoid probate or potential medical assistance issues. The issue is how that deed is drafted. If the deed conveys title to all the children as joint tenants and one of the children dies before the land is sold, their share simply terminates and the remaining children now own the land. In other words, that child’s spouse or children will not share in the sale proceeds.
That may be the intent but often the parent really wants each child or their heirs to receive their share. In that case, the deed should be clear that the children are receiving title as tenants in common and not as joint tenants.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-0", "d_text": "married for 40+ years but separated in 2004 my name was removed from deed but we are still legally married. he died with no will\nIf he died intestate (with no will), it would appear a probate is necessary.\nFrank W. Chen has been licensed to practice law in California since 1988. The information presented here is general in nature and is not intended, nor should be construed, as legal advice for a particular case. This Avvo.com posting does not create any attorney-client relationship with the author. For specific advice about your particular situation, please consult with your own attorney.\n2 lawyers agree\nDivorce / Separation Lawyer\nThe complexity of your matter is compounded by the fact that you were still legally married, living apart for years, and for some reason you have signed off or took your name off title in 2004. Something must have motivated you to take your name off the title. You have an unusual blend, and I think you are going to have to speak with two attorneys with different expertise, or find an attorney that has dealt with a lot of probate as well as family law issues. You have complex facts, but with a long term marriage there will be enough Court of Appeal and California Supreme Court decisions available to make it certainly possible to obtain a defined answer once the research is completed.\nIf you have found this information helpful, please let the attorney know by marking best answer. Thank you. This participating Attorney does not warrant any information provided, nor are we creating an Attorney-Client relationship by providing said information to you on this site.
Nothing contained herein is intended to constitute, offer, induce, promise, or contract of any kind. The content provided is presented as a courtesy to be used only for informational purposes and is not represented to be error free. The Law Offices of John N. Kitta makes no representations or warranties of any kind with respect to its answer to inquiries, and such representations and warranties are being expressly disclaimed. Given limited facts, we are attempting to share relevant information concerning this area of the law as a public service.\nI don't think this is complex, assuming there are no other folks who will challenge you. Just go to a local probate lawyer who can get this processed.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-2", "d_text": "- Community Property\nCommunity property is any property acquired during marriage; thus, it is part of the “community” of the marriage. Think about “community” as husband and wife. Each spouse owns an equal share, regardless of whose name is on the title.\nThis type of real estate ownership allows each owner to transfer or bequeath their asset to a designated heir upon death, but not while alive and married.\nFor example, if George, a remarried man, wants to bequeath his share of a community estate (upon his death) to his ex-wife and children from his previous marriage, he can do that given that George lives in a state where community estate is a legal form of real estate ownership.\nHowever, if George wanted to transfer his share of the estate while still alive, he’d need consent from his current wife.\nThe states where community property law applies are California, Washington, Wisconsin, Idaho, Texas, Arizona, New Mexico, Louisiana, Nevada, and Alaska.\nChanging your type of real estate ownership\nYou can change your form of real estate ownership from either:\n- Tenants in common to joint tenants: for instance, if you get married and want to add
your spouse with equal ownership.\n- Joint tenants to tenants in common: for instance, if you divorce or separate from your spouse.\nWhat Percentage of Ownership Does Each Owner have?\nWith joint tenancy and community property, by default, each owner gets an equal share. However, with tenancy in common, the share of ownership will depend on whatever the buyers agree to.\nTenancy in Common vs. Joint Tenancy\nTenancy in common and joint tenancy are both common types of real estate ownership in the marketplace. Although they both allow two or more parties to own property, there are a few significant differences.\nWhile tenancy in common allows an owner to mortgage, sell, transfer, or gift their share of the property to anyone they choose, with joint tenancy, you need approval from the co-owner(s).\nThe biggest difference is the right of survivorship. With joint tenancy, if an owner dies, his or her share of the property goes to the remaining owner(s), whereas, with tenants in common, the property goes to the heirs or as directed on a will.\nWhat Is the Type of Ownership of the matrimonial home?\nMarried couples and common-law partners typically own the matrimonial home as joint tenants.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-1", "d_text": "If the deceased leaves both a spouse and a domestic partner each is entitled to an equal share of the property.\nA Court may make a declaration that two persons were domestic partners on a particular date. Even if the deceased has children, if the estate is valued at $100,000.00 or less the spouse or domestic partner is entitled to the whole estate. If the estate is valued at more than $100,000.00 the spouse or domestic partner will be entitled to the sum of $100,000.00 and half of the balance of the estate, plus the personal belongings of the deceased. 
The children of the deceased are entitled to the balance of the estate in equal shares.\nIf the deceased and her/his spouse or domestic partner die within twenty-eight days of each other, the estate is distributed as though there was no spouse or domestic partner. Where there is no surviving spouse or domestic partner but there are surviving children of the deceased, the children receive equal shares of the estate. This includes children who are adopted, but not a step-child. If a child has died leaving children (the deceased’s grandchildren) they will each receive an equal share of their parent’s share. Otherwise, the child’s share is shared equally among the other siblings. A person who has been adopted cannot share in her or his birth parent’s estate after the death of the birth parent. Every child, whether born within or outside of a legal marriage, is entitled to share in the estate.\nIf a house is owned jointly by two people, and one dies, the house will automatically belong to the other person. This cannot be changed by a will. If an intestate person who owned a house in only their name is survived by a spouse and children, the spouse has the right to live in the house for three months and is also entitled to buy the house from the estate within that time. The spouse would have to buy from the children their share in the house (after considering the spouse’s entitlement to $100,000.00 from the estate if necessary) in order to continue to live in it. The 3-month period starts when the letters of administration are granted to the spouse or when proper notice is given to the spouse.
If the spouse cannot afford to buy out the children’s share, they can apply to the court to postpone the sale of the house until the children have all turned 18.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-4", "d_text": "The husband in the case at bench did not invoke jurisdiction of the probate court either by asserting a substantive right as heir, legatee, or devisee or by participating in the probate proceedings. (See Estate of Plum (1967) 255 Cal.App.2d 357, 63 Cal.Rptr. 241.) Under the circumstances of this case, the probate court was not free to try the question of joint tenancy or any claim adverse to the estate. (See (1967) 7 Santa Clara Lawyer 275, An Extension of Probate Jurisdiction; Estate of Baglione, Supra (1966) 65 Cal.2d 192, 53 Cal.Rptr. 139, 417 P.2d 683.)\nThe recent case of Estate of Casella (1967) 256 Cal.App.2d 312, 64 Cal.Rptr. 259, while using some broad language in referring to the court's jurisdiction, did not actually expand the probate court's jurisdiction beyond the expansion announced in Baglione. Although the court never expressly examines the question of how the wife originally invoked the jurisdiction of the probate court or even if that is a necessary prerequisite for the court to then decide the wife's adverse claim in joint tenancy, the wife did originally allege that all the property was community property, until she changed her contention that it was joint tenancy property. A surviving wife's claim to community property is in privity with the estate (Prob.Code, s 202); by making such a claim she thereby invoked the court's decision. 
There is nothing in the Casella case, despite some very broad language, which actually dispenses with the requirement that the court's jurisdiction must be originally invoked before the court can decide related adverse claims.\nRespondent argues that finding that property is in fact community property and that it was held in joint tenancy only as a matter of convenience ‘was no determination of title’ and therefore the court had jurisdiction. That this is a determination of title is implicit within such cases as Wilson v. Superior Court, Supra (1951) 101 Cal.App.2d 592, 225 P.2d 1002.", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-1", "d_text": "Here are the general rules:\n1) If Decedent was married and had no children, Decedent's surviving spouse inherits all Community Property.\n2) If Decedent died On or Before September 1, 1993, married and had children, Decedent's surviving spouse retains his/her one-half of the Community Property and the Decedent's children inherit the Decedent's one-half of the Community Property.\n3) If Decedent died After September 1, 1993, married and had only children of that marriage, Decedent's surviving spouse retains his/her one-half of the Community Property and inherits the Decedent's remaining one-half of the Community Property.\n4) If Decedent died After September 1, 1993, married and had children other than, or in addition to, the children with the surviving spouse, the surviving spouse retains his/her one-half of the Community Property and the Decedent's children inherit the Decedent's one-half of the Community Property.\n5) If Decedent was survived by children but no surviving spouse, the Decedent's children inherit all of Decedent's Property (see this in Separate Property Distribution Basics).\nYou've got your work cut out for you trying to sort out all the current ownership interests. Good luck.\nDISCLAIMER: Answers from Experts on JustAnswer are not substitutes for the advice of an attorney.
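The numbered distribution rules above amount to a small decision procedure, which can be sketched as follows (an illustrative sketch only, not legal advice; the function and parameter names are hypothetical):

```python
from datetime import date

CUTOFF = date(1993, 9, 1)  # the date referenced in the rules above

def spouse_share_of_decedents_half(death_date, has_spouse, has_children,
                                   all_children_of_marriage):
    """Fraction of the decedent's one-half of the community property that
    the surviving spouse inherits under the numbered rules above. The
    surviving spouse always retains his/her own one-half regardless."""
    if not has_spouse:
        return 0.0   # rule 5: the children inherit everything
    if not has_children:
        return 1.0   # rule 1: the spouse inherits all community property
    if death_date > CUTOFF and all_children_of_marriage:
        return 1.0   # rule 3: spouse takes the decedent's half as well
    return 0.0       # rules 2 and 4: the children take the decedent's half

print(spouse_share_of_decedents_half(date(2000, 1, 1), True, True, True))   # 1.0
print(spouse_share_of_decedents_half(date(1990, 1, 1), True, True, True))   # 0.0
```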
", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "Were this to happen, it would convert a JTWROS into a tenancy in common.\nTenancy by the entirety is a JTWROS between spouses. However, neither spouse can transfer or sell their interest without the consent of the other spouse. Tenancy by the entirety offers better protection from the creditors of one spouse than a JTWROS. That is because the property isn’t owned by either the husband or the wife but by the marital entity.\nCommunity property. This is recognized in Alaska, Arizona, California, Idaho, Louisiana, Nevada, New Mexico, Texas, Washington and Wisconsin. Community property provides that married couples own an equal and undivided interest in all properties accumulated while they are married.
Each spouse owns half of the value of the community property, and either spouse can transfer or sell one half of the property. When one spouse dies, one-half of the value of the community property is included in the probate estate and gross estate of the deceased spouse.\nTalk with your estate planning attorney about how to title your assets, to ensure that your estate plan works the way you want it to. If you need to make changes, don’t wait—delaying this important step can lead to a wide range of estate problems for your heirs. One of the many differences between Legacy Counsellors, P.C. and other estate planning firms is that we work with you, often doing the bulk of the work, to retitle your assets. We make sure that your assets align with your estate plan so that your estate plan is completely effective.\nReference: Reflector.com (December 2, 2018) “Joint-ownership property titling can avoid costly probate process”", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-0", "d_text": "Common Ways to Hold Title\nHOW YOU TAKE TITLE - ADVANTAGES AND LIMITATIONS:\nTitle to real property in California may be held by individuals, either in Sole Ownership or in Co-Ownership. Co-Ownership of real property occurs when title is held by two or more persons.
There are several variations as to how title may be held in each type of ownership.\nThe following brief summaries reference seven of the more common examples.\n- A Single Man/Woman:\nA man or woman who is not married.\nExample: John Doe, a single man.\n- An Unmarried Man/Woman:\nA man or woman who, having been married, is legally divorced.\nExample: John Doe, an unmarried man.\n- A Married Man/Woman, as His/Her Sole and Separate Property:\nWhen a married man or woman wishes to acquire title as their sole and separate property, the spouse must consent and relinquish all right, title and interest in the property by deed or other written agreement.\nExample: John Doe, a married man, as his sole and separate property.\n- Community Property:\nProperty acquired by husband and wife, or either during marriage, other than by gift, bequest, devise, descent or as the separate property of either, is presumed community property.\nExample: John Doe and Mary Doe, husband and wife, as community property.\nExample: John Doe and Mary Doe, husband and wife.\nExample: John Doe, a married man.\n- Joint Tenancy:\nJoint and equal interests in land owned by two or more individuals created under a single instrument with right of survivorship.\nExample: John Doe and Mary Doe, husband and wife, as joint tenants.\n- Tenancy in Common:\nUnder tenancy in common, the co-owners own undivided interests; but unlike joint tenancy, these interests need not be equal in quantity and may arise at different times. There is no right of survivorship; each tenant owns an interest, which on his or her death vests in his or her heirs or devisees.\nExample: John Doe, a single man, as to an undivided 3/4 interest, and George Smith, a single man, as to an undivided 1/4 interest, as tenants in common.\nTitle to real property in California may be held in trust.
The trustee of the trust holds title pursuant to the terms of the trust for the benefit of the trustor/beneficiary.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "A joint tenancy deed refers to the legal proof that a piece of real property is owned by two or more individuals. A land deed is required to show the ownership of any piece of real property. Such deeds are on file with the municipality where the land is located, and are used when the land is sold or improved upon to ensure that only the rightful owner is acting on the land.\nWhen two people own property together, there are several different ways in which that ownership can be split under the laws regarding real property in the United States. One of the ways in which the property can be owned is joint tenancy, which means each party has an ownership stake in the land. If the property is owned in this particular form by two or more people, a joint tenancy deed is used to indicate the nature of the joint ownership.\nOne common form of joint ownership is joint tenants with rights of survivorship. When this option is chosen, the land deed must clearly reflect this and must be signed by both parties upon acquiring or purchasing the land. Under a joint tenants with right of survivorship ownership system, each of the two parties named on the joint tenancy deed will automatically leave the property to the other party upon his death.\nIn other words, a person who owns the property under a joint tenancy deed specifying a right of survivorship is not able to will his property to anyone of his choosing. Instead, the property automatically passes to the other joint tenant named on the joint tenancy deed.
The property will not pass through probate when left to the other joint tenant, and there will be no estate or inheritance taxes charged when the property transfers to the other joint tenant.\nJoint tenants with right of survivorship is only one form of joint tenancy that exists within the United States. Other forms include tenancy by the entirety and tenants in common. Some types of joint ownership allow for the parties to have an unequal ownership share of the real property, while others allow the parties to each leave their share of the property to a party of their choosing. Regardless of which form of joint ownership is selected by the parties, the joint tenancy deed must specify exactly how the land is owned.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "We live in “interesting” times as bankruptcy courts feel their way through the collision between California’s state laws on domestic partnerships and married same sex couples and the federal law found (and now ignored) in DOMA.\nLed by judges in the Central District of California, the Northern District of California bankruptcy judges have quietly but publicly stated that they will not sua sponte challenge joint filing by same sex married couples.\nBut whether a couple are registered domestic partners or married, under California law, they have community property.\nCalifornia Family Code 297.5 created registered domestic partnerships.
It provides that partners have the same property rights as spouses:\nRegistered domestic partners shall have the same rights, protections, and benefits, and shall be subject to the same responsibilities, obligations, and duties under law, whether they derive from statutes, administrative regulations, court rules, government policies, common law, or any other provisions or sources of law, as are granted to and imposed upon spouses.\nSurprising things happen to community property in bankruptcy.\nState law determines what property a person filing bankruptcy owns. Federal law will determine what happens to that property in bankruptcy.\nA bankruptcy filing brings all of the couple’s community property into the bankruptcy estate. Whether one spouse files, or they file jointly, all of the community property is affected. There’s no “my half”, “your half” here.\nSince, at present, registered domestic partners aren’t spouses, they can’t file a joint bankruptcy case. If both partners file, all of the community property comes into the bankruptcy estate of the first to file. If only one partner files, all of the community and the filer’s separate property comes into the estate.\nIn exchange for inclusion of all of a couple’s community property in a bankruptcy filed by just one spouse or partner, all of the community property acquired by the couple after the bankruptcy is protected from the creditors with notice of the bankruptcy case. That’s what’s known as the community property discharge.\nSame sex couples need to carefully analyze their property holdings if they are considering bankruptcy relief. Spouses or not, the body of law on community property will apply.\nI expect developments on the issue of joint filings by registered domestic partners, regardless of what the U.S. 
Supreme Court does with the issue of same sex marriage.", "score": 15.758340881307905, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "If my mom died in California and left a living will that indicates my brother & I are to get everything and split it 50/50, what entitlement does her husband of the last 9 years have?\nAlso, if she has a mortgage that is more than the worth of the house what happens if we foreclose? She has some money in another account, is that then used against the house?\n2 Answers from Attorneys\nSorry for your loss.\nDid your mother have a Will or a Living Trust? A living will is a document which states someone's wishes regarding medical care.\nYour step-father has a right to his half of any community property assets. In regard to your mother's half of the community property and her separate property, if your mother and step-father married after the Will or Trust were executed and the marriage is not mentioned in the document, then he has a partial claim on the assets of your mother's estate as an omitted spouse. However, if your step-father is mentioned in the documents, then he may not have a claim except for any community property claim he could make.\nThe exposure of the other assets to the mortgage depends on the type of mortgage - recourse or non-recourse.\nYou need to talk to an attorney. The property would be subject to any recorded deed of trust which was security for a loan, so if it is in default, the lender (bank) can still foreclose on the property. Recourse/nonrecourse has nothing to do with the issue.
If you are an heir, you don't foreclose, you worry about getting foreclosed on.\nIf your mother died with a husband (meaning she was not divorced from him at the time), then he would have rights to his half of any community property, and may be able to make additional claims as a pretermitted heir.\nThere are a lot of issues here that need to be discussed in person.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "Joint Property Ownership Pitfalls and Solutions\nOur law firm has worked on a couple of cases lately involving joint property ownership; that is, property owned by a group of several individuals. Owning a piece of land or real estate with a group of individuals or family members can lead to many problems, a few of which we will discuss here.\nWhat Happens to Real Estate When a Person Dies?\nIn Tennessee, real property typically passes outside of Probate in accordance with the publicly recorded property documents in the County where the property is located. A person can also plan for the disposition of real property in a Will or Trust. If you die owning real property in your sole name, though, it can cause significant problems for your Beneficiaries that can be avoided by proper planning.\nIn both cases I mentioned above, the group of individuals came into joint property ownership because of intestate succession (i.e., dying without a Will). You may think that you do not need a Will because your property will pass to your heirs regardless. However, there are many problems and burdens that your heirs will face if property passes to them through intestate succession. Here’s what can happen if a landowner dies without a Will:\n- Land may pass to heirs who do not wish to be landowners.\n- Land may pass to heirs who do not know that they are now landowners (i.e. lost heirs).\n- Land may pass to heirs who are not prepared for the responsibility of owning real estate (i.e.
paying real estate taxes, maintaining insurance, upkeep of the property)\n- If there is a mortgage, payments may be required very soon after the death of the original owner and before any inherited owner has a chance to determine how to address the new ownership – i.e. sell the property, allow it to be foreclosed upon, etc.\n- The title to the property will be unclear and extra effort will be required to determine all legal owners in a joint property ownership situation. It can be very difficult to locate heirs and to determine with certainty who all owns a piece of property, especially if some of the original heirs have died, or if the family isn’t in close contact or is spread across the country. A title search may be required, and title searches can be expensive.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-0", "d_text": "What Is the Law for When Land Is Jointly Owned & One of the Owners Dies? – This means, for example, that if you and your sibling are tenants.\nWhen two or more people own property as joint tenants with right of survivorship, the remaining owners inherit the ownership.\nWe have a house.\nLives. Then, there is an entirely separate arrangement under which this son pays a sum to each of his siblings, probably in recognition of the fact that they will not.\nEven now, the story continues to haunt the area – literally – with tales of the siblings’ ghosts persisting.\nThree Luxton children – all of whom inherited a third of it – ran the farm.\nI am like eight, 10 years older than my siblings who came after me.\nIf we had been waiting for him to die so that we could inherit his assets, we were wasting our time because he had nothing.\nWhat To Ask Mortgage Lender There are lots of benefits to refinancing a mortgage. But if you aren’t careful it might cost you in the end. If you’re struggling to pay your mortgage, you may be entitled to mortgage relief — however you need to ask for it. Find out.
Should you use a mortgage refinance to pay off student\nBecause the bond that links your real sibling isn’t only one of blood.\nThe fact that we had been some of the first black families to live in New Ngara Flats, which had been ruled by the Goans.\nHaving trouble keeping track of all of the Trumps in the news lately? Here is a guide to President Donald Trump’s extended family.\nI said I was willing to get a money manager and set up a retirement account in each of our names. I’ve always worked, even.\nwho might inherit Nash’s house. There is also a particularly trenchant counter-attack on Eric Ravilious, whose contemporary appeal is diagnosed as nostalgic, his work polite and well-tended.\nEvery Member of the Trump Family You Should Know About – Having trouble keeping track of all the Trumps in the news these days? Here is a guide to President Donald Trump’s extended family.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-1", "d_text": "Also, if Jonathan was driving and a lawsuit was brought over the accident, his entire estate could be subject to any subsequent legal action. During the probate process, it might be difficult to pay Martha’s medical bills.\nFortunately, no estate taxes apply when a spouse inherits assets. But just as probate has finished transferring assets to Martha, she dies, dropping the baton again and requiring another probate process.\nDuring this second probate, assets are passed to the children, and all of the assets over $2 million are subject to 45% estate taxes. So the taxes on a $5 million estate would be $1.35 million. Even though the business is worth $5 million, Clark and his brother don’t have the money for the estate tax, and they are forced to sell rather than inherit the business.
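The tax arithmetic in the example above can be checked with a short sketch (a simplification using only the figures quoted in the example; real estate-tax schedules are more involved):

```python
def estate_tax(estate_value, exemption=2_000_000, rate=0.45):
    """Tax on the portion of an estate above the exemption, using the
    $2 million threshold and 45% rate quoted in the example above."""
    return max(0, estate_value - exemption) * rate

print(estate_tax(5_000_000))  # 1350000.0 -- the $1.35 million in the example
```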
Clark must return to his dead end job as a reporter for a city newspaper.\nJoint tenancy with rights of survivorship (JTWROS)\nIn a JTWROS arrangement, two or more people hold the baton, and each one has an equal share. One person can sell his or her share and pass their grip on the baton to someone else. They can also break off their piece of the baton and keep the piece. But if they die, their share is given to those still holding on. The last one holding the baton owns it outright.\nJTWROS does not require probate, which would make the transition of ownership from Jonathan to Martha easy and straightforward. But it does not protect the estate from legal action. Nor does it help solve the estate tax problem for Clark.\nJoint tenancy titling trumps a will. Even if you have been careful in your estate planning documents, if you are not equally purposeful and intentional in how you title your assets you can ruin your plan. Financial accounts that use POD (payable-on-death) or TOD (transfer-on-death) arrangements, if sloppily done, can also thwart all your best estate planning intentions.\nTenancy by the entirety (TBE)\nOnly persons married to each other can hold property jointly as tenants by the entirety. With TBE, each spouse holds the entire baton. They can’t sell their share and pass the grip to someone else because they don’t hold a piece of the baton separate from the other tenants’ pieces. And they cannot break off a piece of the baton and keep it for themselves.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Landlords deal with a lot of challenging issues, but one of the saddest situations is a death on the premises. This is obviously a tragedy on an emotional level. In addition, from an operational standpoint, there are complex regulatory procedures and disclosures you will have to address. 
One question many landlords have asked over the years: are you required to disclose to future tenants that someone previously died in a particular property?\nAccording to California Civil Code 1710.2, a death in your rental property is classified as a “material fact.” That means it is a detail that would impact a prospective tenant’s willingness to rent the property, or influence how much that applicant would be willing to pay for the property.\nYou, as the landlord or property manager, are obligated to disclose if someone passed away within the premises of your rental property. Usually, the deceased in question is your tenant. However, it could be anyone; you’re required to disclose even if the person who passed away was a co-occupant, a guest, a vendor or someone who wasn’t invited – for example, an intruder.\nThe law is specific that you disclose deaths which occurred within your specific residence. In other words, if you own a condominium unit, and someone passed away in the pool house, that’s not a mandatory disclosure because it didn’t happen inside your residence. Likewise, if you have a duplex and someone died next door, that’s not within the specific rented premises. “The premises” is defined as the residence itself: the living unit enclosed by its four walls.\nCCC 1710.2 requires that you provide this disclosure for at least three years following the demise. You should include the information in your marketing material and during your application process, but you don’t initially have to go into tremendous detail. For example, you can say in your advertising that “specific information applies regarding the property’s history; applicants should ask for details.” Once you have an acceptable applicant, you should make sure that they understand there was previously a death on the property. You should also provide a clearly-written disclosure in the rental contract.
It is important that you have documentation that the tenant has been made aware of the necessary information per CCC 1710.2.\nIf an applicant or tenant asks about the specific circumstances of the death, you are obligated to be thorough and accurate.", "score": 13.897358463981183, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "The form and manner in which title to real property is held can have a substantial impact on your rights with regard to your property.\nUnmarried sole owners of property will hold title as follows: \"John Doe, a single man.\" If married, title will be held: \"Jane Doe, a married woman as her sole and separate property.\" Sole owners can dispose of the entire property in any manner without restriction (i.e., by sale, will, gift, etc.).\nIf property is co-owned (meaning two or more owners), the parties rights are determined by the manner in which title is held. Co-owners can hold title as \"tenants-in-common\", \"joint tenants\", \"community property\", or \"community property with the right of survivorship\".\nCo-owners who hold title as \"tenants-in-common\" each will own undivided interests, which may or may not be equal in quantity or duration. For example, John can own 60% of a parcel of land, while his friend Mike owns the remaining 40%, though each is equally entitled to possession of the entire parcel. Each party is entitled to their share of any income and must bear their proportionate share of the expenses. Each co-owner may unilaterally sell, lease, gift or will his or her interest and the new owner will become a tenant-in-common with the previous owner.\nCo-owners can also be joint tenants, however the joint tenancy must be expressly stated in the deed and the interests must be equal in every regard (e.g., how acquired, quantity, and duration). 
The most notable feature of a joint tenancy is that the co-owners have the right of survivorship, meaning that when one joint tenant dies, title to the property is automatically conveyed to the surviving joint tenant(s). As such, joint tenancy property cannot be disposed of by will or trust. If one joint tenant transfers his interest, the joint tenancy is broken, and the new owner becomes a tenant in common with the other owners (who remain joint tenants as between themselves).\nFor the above methods of holding title, a business entity (i.e., a corporation, partnership, or LLC) or a trust may be the named owner instead of an individual.\n\"As community property\" is a manner of holding title to property by a husband and wife during their marriage. In California, real property conveyed to a married man or woman is presumed to be community property, unless otherwise stated.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-3", "d_text": "See generally Shiba v. Shiba, 2008 UT 33, ¶ 17, 186 P.3d 329 (holding that both parties to a joint tenancy “hold a concurrent ownership in the same property with a right of survivorship, i.e., each [tenant] is afforded the eventuality of a full ownership interest, conditioned upon the tenancy remaining unsevered, and one out-living the other” (citation and internal quotation marks omitted)); see also In re Estate of Ashton, 898 P.2d 824, 826 (Utah Ct.App.1995) (reversing the district court's inclusion of property in the deceased's estate that, at the time of his death, was held in joint tenancy with full right of survivorship). If, on the other hand, Bates's attempt to convey the Property to the Bullocks severed the joint tenancy, he and Harris held the Property as tenants in common at the time of Harris's death. See Utah Code Ann. 
§ 57–1–5(5)(a) (LexisNexis Supp.2012) (“[I]f a joint tenant makes a bona fide conveyance of the joint tenant's interest in property held in joint tenancy to himself or herself or another, the joint tenancy is severed and converted into a tenancy in common.”). When a tenant in common dies, that tenant's interest in the property passes to her heirs, rather than to the other tenants in common. See Webster v. Lehmer, 742 P.2d 1203, 1205 (Utah 1987) (explaining that the deed created a tenancy in common, not a joint tenancy, and that therefore, “when [the tenant in common] died intestate in 1975, her interest, instead of passing solely to [the other tenant in common], passed by the rules of intestate succession to [the deceased tenant's] two daughters”). Thus, if the Writing severed the joint tenancy, it was converted to a tenancy in common and Harris's interest passed to her heirs. See Shiba, 2008 UT 33, ¶ 17.\n¶ 12 When a joint tenant conveys “his interest therein by a valid deed,” he “ ‘severs and terminates the joint tenancy by the creation of a tenancy in common.’ ” Id.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-0", "d_text": "Joint Tenant with a tenant in common\ngreenspun.com : LUSENET : Repossession : One Thread\nMy ex and I own a property as Joint Tenants. If he releases his portion to someone else, they become a Tenant in Common. How does that affect my interest? If I die, does the Tenant in Common own the property solely? What about my children? If the Tenant in Common dies, where do I stand?\n-- Lestine (email@example.com), August 11, 2001\ngo see a solicitor/attorney...
this isn't on the menu at this website.\n-- (firstname.lastname@example.org), August 14, 2001.\nFirstly, he cannot release 'his' half as you are joint and several, and the mortgage will be written with you both as joint tenants. This cannot be changed without changing the terms of the mortgage deed, which will require your consent. Lenders do not like tenants in common arrangements, and it is doubtful, assuming there is a mortgage, that it can go ahead. Yes, as my friend says, see a solicitor.\n-- roger watts (email@example.com), August 15, 2001.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-2", "d_text": "Blanket Trust Deed\n- One lien over more than one parcel.\n- \"Release Clause\" needed to release a portion of the property.\nEncumbrance\n- A right or interest, other than an owner or tenancy interest, that limits a property's use or value.\nCommunity Property with Right of Survivorship\n- 100% of the property passes to the surviving spouse upon the death of a spouse.\n- Probate is not necessary to transfer title to the surviving spouse.\nCommunity Property\n- Husband and wife equally share during their marriage, unless it is considered \"Separate Property\".\n- May or may not include property acquired before marriage.\n- Acquired by either during a marriage by gift or inheritance.\n- No will: 1/3 spouse, 2/3 children (or 1/2 and 1/2 if one child).\nTenancy in Partnership\n- General Partnership: share all profit and loss. Passes to heirs, but not in any particular property.\n- Limited Partnership: limited losses.\n- A limited partner does not share management responsibilities.\nJoint Tenancy\n- Right of Survivorship\n- Cannot be willed\n- Title: Granted by same instrument.\n- Time: At the same time.\n- Interest: Equal interest\n- Possession: Equal right to possess\nWhen a joint tenant dies, the survivors acquire ownership.
Free of any unsecured debts.\nTenancy in Common\n- (Unity of Possession)\n- No survivorship\n- Equal interest in the property\n- Both have the right to occupy\n- Share income and expenses\n- Each may sell or transfer his/her interest separately from the others.\nSeveralty\n- Sole ownership.\n- Individual or Corporation\nEstate at Sufferance (Tenancy at Sufferance)\n- The lessee remains after expiration without the landlord's consent.\n- Notice to terminate.\nEstate at Will\n- Rental that can be terminated by either lessor or lessee at any time.\n- CA needs 3 days' notice.\n- No estate at will in CA\nEstate from Period to Period\n- A renewable agreement. Rental or lease amount is fixed at an agreed-to sum per week, month or year.\n- Notice must be given. (Usually 30 days)\nEstate for Years\n- (Tenancy for Fixed Term)\n- Any term from a few days to 99 years.\n- No notice to terminate is necessary.\n- (Personal Property)\nLess-than-Freehold Estate\n- Both a tenant in a rented apartment and owners in their condos.
The rest of the owners need to clear title that is held in H and W’s names.\nI think the biggest problem here is getting the property transferred from H to W. That is, transferring from W is easy. The total is less than $150,000 so that can easily be accomplished by a probate code 13150 petition to succession of real property. That’s the easy part. Getting the property into W’s name is the hard part. Why was it not held as “joint tenants” or as “community property with the rights of survivorship” I do not know. Maybe the Realtor or title company are at fault. However, it is what it is now.\nThe options I see for getting the property out of H’s name are as follows:\n1) Disclaimer: If this had been done within 9 months of his death a disclaimer would have been a simple and inexpensive way to clear title to H’s 1/2 of the 10%. However, it’s been longer than 9 months so that won’t work here.\n2) Spousal Property Petition: This is usually my go-to option if an asset is not held in joint tenancy or with rights of survivorship after one spouse dies. However, in this case they had a will which poured over to a trust. You can not use a SPP when there is a will pouring to a trust. So that option is off the table.", "score": 11.600539066098397, "rank": 95}, {"document_id": "doc-::chunk-0", "d_text": "How you own your home with your partner or spouse can have major implications for you when they die.\nIn the UK there are two ways in which you can buy a property or a piece of land with your partner,\nspouse or another person(s).\nIt’s important to know the variation between the two as the implications, should one owner die, are significantly different and may not be what you expect or want.\nBeneficial Joint Tenants\nIn this instance you and the other owner(s) own the property jointly. You do not have separate shares in the property.\nThis means you cannot do anything with the property without all the other owners agreeing, i.e. 
you cannot sell, re-mortgage or do anything else without everyone’s agreement.\nYou also CANNOT leave your ownership of the property to anyone in your Will.\nWhen a beneficial joint tenant dies their interest automatically passes to the other owners, irrespective of any wishes left in the deceased persons will.\nTenants in Common\nHere all the owners of the property own shares in it, owning the property jointly. It’s up to you how many shares you own.\nIn this instance you CAN give away, sell or mortgage your share. You can also bequeath your share of the property to whoever you like in your Will. If you die without a Will in place then the rules of intestacy will apply.\nWhen a relationship breaks down\nMany couples own the matrimonial home as Beneficial Joint Tenants, and whilst relations between the couple are fine, then it’s usually the wishes of each party, that should either person die, the property is left in its entirety to the remaining partner.\nHowever when relations break down and either separation or divorce is looming then severing the beneficial joint tenancy may be a preferred course of action.\nThe reason for this is, if either party were to die following the breakdown of the relationship it may not be their wishes that the estranged partner become the sole owner of the shared property.\nIn this instance the Beneficial Joint Tenancy would be replaced with a Tenancy in Common. 
This will ensure that should either owner die the surviving one will NOT automatically inherit their share of the property.\nIf they made a Will then their share will go to whoever they have bequeathed it, If they die intestate then the rules of intestacy will apply.\nIf you would like more information about making or amending your Will, then please book a free consultation with one of our legal experts.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "You may be considering buying Los Angeles real estate and may have luckily have identified a home and gone into escrow. Now you need to decide how you are going to take title, especially if you are taking it jointly. There are various way to take title, including:\n- A man or woman who is not married\nExample: John Doe, a single man.\n- An Unmarried Man/Woman\nA man or woman, who having been married, is legally divorced.\nExample: John Doe, an unmarried man.\n- A Married Man/Woman, as His/Her Sole and Separate Property\nWhen a married man or woman wishes to acquire title as their sole and separate property, the spouse must consent and relinquish all right, title and interest in the property by deed or other written agreement.\nExample: John Doe, a married man, as his sole and separate property.\n- Community Property\nProperty acquired by husband and wife, or either during marriage, other than by gift, bequest, devise, descent or as the separate property of either is presumed community property.\nExample: John Doe and Mary Doe, husband and wife, as community property.\nExample: John Doe and Mary Doe, husband and wife.\nExample: John Doe, a married man\n- Joint Tenancy\nJoint and equal interests in land owned by two or more individuals created under a single instrument with right of survivorship.\nExample: John Doe and Mary Doe, husband and wife, as joint tenants.\n- Tenancy in Common\nUnder tenancy in common, the co-owners own undivided interests; but unlike joint tenancy, these 
interests need not be equal in quantity and may arise at different times. There is no right of survivorship; each tenant owns an interest, which on his or her death vests in his or her heirs or devisee.\nExample: John Doe, a single man, as to an undivided 3/4th interest, and George Smith, a single man as to an undivided 1/4th interest, as tenants in common.\nTitle to real property in California may be held in trust. The trustee of the trust holds title pursuant to the terms of the trust for the benefit of the trustor/beneficiary.\nYou are advised to consult your accountant or attorney when deciding how to take Title as there a significant legal consequences involved in the way you hold Title.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-0", "d_text": "Joint tenancy is a form of co-ownership. If, under a joint tenancy, one owner dies, then his or his interest automatically transfers to the surviving owner(s). This is called the “right of survivorship”.\nColorado Law outlines how to properly document the death of a joint tenant. CRS § 38-31-102 requires the survivor record two documents: 1) a death certificate or verification of death document and 2) a supplementary affidavit.\nTypically, the funeral home involved will help you order the death certificate. If a death certificate was already ordered and now can’t be located, you’ll have to apply for another death certificate through the state’s vital records office. This may take some time. If you’re working under a deadline (say, for example, a real estate transaction) make sure to start this process early.\nIn a supplementary affidavit, an individual who knows the decedent swears that he or she is the same person named in the last vesting deed (under which the joint tenancy was created). Earlier versions of the statute required that the individual signing the supplemental affidavit have no interest in the underlying property. 
This is no longer the case.\nIf you’re unable to locate or order a death certificate, then you can file another affidavit signed by two individuals who can swear to 1) the date and death of the decedent and 2) that the decedent is the same person named in the last vesting deed. Unlike the supplementary affidavit, those signing cannot have an interest in the underlying property.\nIf you’re the surviving owner under a joint tenancy, make sure to properly document your survivorship rights. Contact us here or call 720-588-9830 to talk to a real estate attorney.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-6", "d_text": "So, after one spouse files a petition with the court to initiate the divorce, the spouses are now involved in a “dissolution proceeding.” This means that during these six months, and very often longer, the spouses are seeking assistance from the court. This assistance includes such things as temporary spousal support payments, temporary child support payments, and even requesting that the other spouse pay for attorney fees. All these mini-trials along the way occur before the divorce becomes final.\nSo, the salient question is—“What happens if I die after the divorce proceeding has begun but before the divorce is final?” Ready for this? It’s the same as dying like you are still happily married. 
This truth should be a great motivator for people to confront the reality of death and the increased hardship it can cause during divorce without the proper planning.\nFirst, if you don’t have a will, or you had your will drafted before the divorce proceeding, visiting an attorney’s office to help you draft a new will is an important step to help assure that your property, like your '69 Camaro that you purchased before the marriage, will be given to your brother, not your soon-to-be ex-spouse.\nNext, if you and your soon-to-be ex-spouse own a home together, it is likely that you and your spouse took title to the home as either community property with right of survivorship or as joint tenants. If so, it’s important to know the effects of holding title like this. Generally, and to keep this simple, it’s easy if you picture ownership as each spouse owning his/her own 50% of the house. And, if the spouses hold title in one of the two forms mentioned above, then when one spouse dies, the other spouse will take the other half of the house, thus becoming 100% owner. (Of course, there are a few papers to file with the court, but these filings are a topic for another article.)\nSo, if you die before the divorce is final, generally (without discussing the complexity of bifurcation issues), your soon-to-be ex-spouse will take your 50% of the home. Go figure. Usually not what people expect when seeking a divorce. So, it’s crucial to discuss with your attorney the possibility of changing your 50% interest in the home to tenants in common, which is another way to hold title to a home.", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-0", "d_text": "What happens if the person who wrote the will and the beneficiary of the will both die?\nFor any beneficiary, for any person, to take from a will they must outlive the testator (the decedent, aka the person who made the will). 
When a beneficiary that was named in the will and was supposed to take passes away before the testator (or within 120 hours of the testator under the Uniform Probate Code, which regulates wills in some states) the gift to the beneficiary is removed. The only way the item would not be removed and thus go to the beneficiary’s heirs would be if there was a statute established by the state. If the state cannot determine whether the beneficiary or the decedent perished first each decedent is disposed of as if he had survived the other. For instance, if a husband and wife died in a car accident, but the state could not figure out which died first and both had placed each other in their wills and had no children, then the state would take each as surviving the other. These issues of course are directly related to simultaneous deaths in wills.\nEstate Planning: Different Types of Simultaneous Deaths in Wills.\nIn addition, the testator is usually viewed as always surviving the beneficiary when it comes to deaths that occur at the same time in wills. So, the heir is always assumed predeceased when it comes to an intestate distribution, and in regards to life insurance policies it is usually viewed as though the insured survived and the beneficiary predeceased. Of course all these things only occur if there is a simultaneous death amongst the decedent and the beneficiary. In joint tenancy’s when both owners of the joint tenancy with the right of survivorship (both owners have an equal share to a whole) ½ of the property is viewed as though Tenant A survived and ½ of the property is viewed as though tenant B survived. Such deaths prevent the operation of the right of survivorship. 
For those of you who were not aware, a joint tenancy with a right of survivorship generally means that the last tenant alive gets the property.\nAvoiding Simultaneous Deaths in Wills Issues\nTo avoid most of these simultaneous deaths in wills problems or other issues involved in the probate estate, attorneys can draft clauses into wills that specifically define who will be considered a survivor in the event of a simultaneous death scenario.", "score": 8.086131989696522, "rank": 100}]} {"qid": 48, "question_text": "What information do I need to calculate how long a battery will last?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "In our article discussing Ah (ampere-hours) and Wh (watt-hours), we got a ton of questions about the longevity of batteries. The question “How long does a battery last?” was a predominant one. To help everybody trying to calculate how long a battery will last, we have created a Battery Life Calculator.\nIt’s quite useful knowing when a battery will die on us. Example: If we go camping and depend on batteries for all our power needs, and we have no other means of generating electricity.\nBefore we check out the Battery Life Calculator, let’s note that figuring out how long a battery will last is pretty simple in theory (in practice, it’s actually quite difficult). We use this equation for battery drain time:\nBattery Life (in hours) = Battery Capacity (in Ah) / Load Current (in A)\nWhat does Ah mean on a battery? It just means amp-hours. 1 Ah is a current of 1 amp running for 1 hour.\nExample: How long will a 100 Ah (amp-hour) battery last if we hook it up to an electric device that draws 1 A? Well, battery capacity = 100 Ah, load current = 1 A, thus such a battery will last for 100 Ah / 1 A = 100 hours.\nBasically, a 100 Ah battery means that such a battery can provide 100 A of current for 1 hour. It can also provide 1 A current for 100 hours. 
Or 0.1 A or 100 mA for 1000 hours.\nIt seems quite simple, right?\nIf you have 100 capacity units (100 Ah) and you connect it to a device that requires 1 capacity unit (1 A) every hour, it will drain the battery in precisely 100 hours.\nWhy Calculating The Battery Life Is Not Exactly Easy\nHere’s the deal:\nIn practice, we only need two numbers to calculate when the battery will die on us. These are:\n- Battery capacity (in Ah). This one is pretty easy to get; it’s written right on the battery. A typical AA battery has 2.5 Ah or 2500 mAh (milli-amp-hours) capacity, an AAA battery has 1 Ah capacity, a laptop battery has 2 Ah to 6 Ah, a 100 Ah battery has 100 Ah capacity, and so on. You can read more about battery capacities here.\n- Load Current or Amp Draw (in A).", "score": 52.96554182387294, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "Calculating how long a battery will last at a given rate of discharge is not as simple as \"amp-hours\" - battery capacity decreases as the rate of discharge increases. For this reason, battery manufacturers prefer to rate their batteries at very low rates of discharge, as they last longer and get higher ratings that way. That is fine if you're building a low-power application, but if your contraption really \"sucks juice\", you won't be getting the amp-hours you paid for.\nThe formula for calculating how long a battery will really last has the charming name of \"Peukert's Formula\". It is...\nT = C / Iⁿ (where T is the discharge time in hours, C is the rated capacity, I is the discharge current, and n is the battery's Peukert number)\nYou can see from the graph that a battery with a Peukert's number of 1.3 has half the capacity of a battery with a Peukert's number of 1.1 at a discharge rate of 25 amps, even though they both have the same theoretical capacity (and are probably rated the same by the manufacturers).\nHere is a little calculator to play with.\nBatteries don't last forever - their lifetimes are measured in cycles, or how many times they can be discharged and recharged before they will no longer take a full charge. The depth of discharge (D.O.D.) 
has a major effect on the life expectancy of a battery - discharging only 80% of the total capacity of the battery will typically get you 25% more cycles than total discharges, and discharging to only 20% will make the battery last essentially forever. Car batteries, however, have to be treated differently - they're not designed to discharge even 20%, and will be damaged if they're deeply discharged. A \"deep cycle\" battery, on the other hand, can typically survive 400 full discharges.\nVery often, one battery won't do the trick - or more likely, you don't have the one that will do the trick, so you're stuck with multiple small batteries.\nHooking batteries in parallel will give you the same voltage as a single battery, but with a Ah and current carrying capacity equal to the sum of the capacities of all the batteries. For example, three 12v 20 Ah batteries in parallel will give you 12v 60 Ah. If each battery could put out 200 amps max, three in parallel could put out 600 amps max.", "score": 50.22947333616041, "rank": 2}, {"document_id": "doc-::chunk-0", "d_text": "Power = Watts\nHigher voltage means more power, right? Not quite…\nPower is measured in watts (W), and watts are calculated by multiplying voltage (V) by the current (amps, A). It’s possible to get the same power, for example, from an 18V battery and a 54V battery—but the 54V pack will do it with 2/3 less amps than its little brother.\nRuntime = Watt-hours\nWatt-hours (Wh) simply refers to how many watts a battery can put out for 1 hour. Just like power, watt-hours is calculated by multiplying voltage by the rated amp-hours (Ah) of the pack. 
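That watt-hour arithmetic can be sketched in a few lines of Python. The 18 V and 54 V figures come from the comparison above; the 5.0 Ah rating and the 180 W load are assumed example numbers, not figures from the text:

```python
# Watt-hours = nominal voltage (V) * rated amp-hours (Ah).
# Packs with equal Wh store the same energy; the higher-voltage pack
# simply delivers it at a lower current.

def watt_hours(voltage_v: float, amp_hours: float) -> float:
    """Energy stored in a pack, in watt-hours (Wh)."""
    return voltage_v * amp_hours

def runtime_hours(pack_wh: float, tool_watts: float) -> float:
    """Rough runtime: stored energy divided by the tool's power draw."""
    return pack_wh / tool_watts

pack_18v = watt_hours(18, 5.0)  # 90 Wh
pack_54v = watt_hours(54, 5.0)  # 270 Wh
print(pack_18v, pack_54v, runtime_hours(pack_18v, 180))  # 90.0 270.0 0.5
```

The same-energy point follows directly: an 18 V 5 Ah pack and a 54 V 1.67 Ah pack both hold roughly 90 Wh.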
This will be how you can calculate run-time of your tool.\nWhat is “Nominal”?\nThe battery packs that you use on your tools are made up of many cylindrical cells, similar in shape to the AAs in your TV remote but much bigger and packed with a lithium-ion chemical cocktail to give them much higher energy density instead of the low energy, but very cheap, alkaline type that you buy in the check-out line of your local supermarket.\nBattery cells are chemistry in a can, not perfectly identical production machines. The term nominal is an industry standard used by manufacturers of lithium-ion battery cells to rate the approximate mid-point voltage of their batteries. Nominal voltage ratings are, essentially, the average rating of the cells produced with a certain chemistry. In general, all commonly used lithium-ion battery cells are 3.6V nominal voltage. Beyond voltage, there are many options for cells with different amp-hour ratings as well as different maximum discharge current ratings, depending on how long a manufacturer wants the battery to last on a specific tool and how fast the pack is able to use its energy (i.e. higher power) during operation, respectively. Not all cells are created equal.\nYou are surely more familiar with the battery packs itself than the actual cells that these packs are made up of. Battery packs are made of stacks of cells assembled in tight clusters and connected in different series or parallel configurations to produce some combination of higher voltage or higher amp-hours, depending on the design and application of the tool it is intended to power.\nWhen assembled in series, cell voltage (3.6V nominal) is multiplied by the number of cells in the series.", "score": 46.35095856646839, "rank": 3}, {"document_id": "doc-::chunk-1", "d_text": "This is the tricky one; and the whole reason why calculating battery lifetime is difficult. 
Load current determines how fast the electrical capacity will be drawn from the battery, and depends on the power of the unit attached to it. 1000 W air conditioner, for example, will have a 10 times as big a load current than a 100 W personal evaporative cooler.\nIf you get these two numbers, you just divide battery capacity with load current and get how many hours a battery will last.\nThe problem is that questions about battery life are not posed in this way:\n“I have a 100 Ah battery and want to run a camping light with a load current of 1 Ah with it. How long before the battery runs out?”\nMost of us deal with watts (W). We don’t know what the load current of a 100 W light is. We just know that it’s a 100 W light, right. That’s why most questions about how long batteries last go along these lines:\n“I have a 100 Ah battery and want to run a 100 W camping light with it. How long before the battery runs out?”\nTo adequately calculate the battery lifespan, we need to transform that 100 W into Ah. Here the voltage (V) plays the key role.\nWe want everybody to be able to determine how long will their battery life last. That’s why we feature 3 key sections that will help you out to do just that:\n- How to calculate the load current of any device. We start with knowing wattage (W) and voltage (V), and we’ll be able to calculate how many amps (A) does such a device needs to run. If you can calculate the amp draw (or load current), you can use the Battery Life Calculator.\n- Battery Life Calculator. You just input the battery capacity that’s written on your battery (in Ah) and the calculated amp draw (load current), and the calculator will tell you how many hours the battery will last.\nLet’s start with the basics: How to get from watts to amps?\nHow To Calculate Load Current (Amps) From Wattage?\nImagine a simple enough scenario. You have a big 200 Ah lithium battery and want to run a small 800 W portable air conditioner with it. 
How long can you run such an AC before the battery dies out?\nWell, we already know that we need 2 numbers:\n- Battery capacity.", "score": 45.52075366101481, "rank": 4}, {"document_id": "doc-::chunk-1", "d_text": "Hence, the final version of the battery capacity formula looks like this:\nE = V * Q\nenergy = voltage * battery capacity\n- energy – It’s the energy stored in a battery, expressed in watt-hours (Wh);\n- voltage – It’s the voltage of the battery;\n- battery capacity – It’s measured in amp hours (Ah).\nSo, how do you calculate the battery capacity? Let’s assume you want to find out the capacity of your battery, knowing its voltage and the energy stored in it. For example, we have a standard 12V battery and the amount of energy stored in the battery is 24 Wh.\nThe battery capacity is calculated from this formula:\nenergy = voltage * battery capacity\nbattery capacity = energy / voltage = 24 / 12 = 2 Ah\nThe battery capacity is equal to 2 Ah.\nThis is the current I (in amps) used for either charging or discharging your battery. It is linked to the C-rate with the following equation:\ncurrent = C-rate * battery capacity\n- C-rate – It’s used to describe how fast a battery charges and discharges. For example, a 1C battery needs one hour at 100 A to load 100 Ah. 
A 2C battery would need just 0.5 hours to load 100 Ah, while a 0.5C battery requires 2 hours.\nRuntime to full capacity\nIt is simply the time t needed to fully charge or discharge the battery when using the discharge current, measured in minutes:\nt = 1/C-rate\nWe have learned in this article how to calculate the drone battery capacity, drone battery energy, discharge current and runtime to full capacity.\nThank you for reading.\nIf you like to read more you can check other articles on our website:\nIf you interested in DIY projects with Raspberry PI and Arduino visit Acoptex.com.", "score": 45.4133925346337, "rank": 5}, {"document_id": "doc-::chunk-3", "d_text": "Let’s calculate the new amp draw using the basic power equation:\nAmps Draw (in A) = 800W/ 240V = 3.33 A\nAs we can see, the amp draw is no longer 6.67 A; it’s 3.33 A. When we increase voltage, we need fewer amps to get the same electrical power (wattage). Based on this, we can now calculate how long will a 200 Ah battery be able to power an 800 W 240 V air conditioner:\n200 Ah Battery Life = 200 Ah / 3.33 A = 60 hours\nAs we can see, because the amp draw is halved, the battery life is increased. That’s because an 800 W air conditioner on 240V requires fewer amps than an air conditioner on 120V.\nNow we know how to calculate the amps from watts. We can use this knowledge to calculate the second vital input into the Battery Life Calculator:\nBattery Life Calculator (Insert Battery Capacity And Amp Draw)\nWhen you figured out how big a battery you have (battery capacity in Ah), and how many amps does a device you want to hook on the battery runs on, you can input both numbers in this calculator. 
As a result, you will get how long will a battery last (in hours):\nYou can pretty much calculate the battery life for any kind of battery powering any kind of electric device.", "score": 45.11491181148912, "rank": 6}, {"document_id": "doc-::chunk-1", "d_text": "A 20 hour test is just as it implies, 20 hours long, not including the time it took you to charge the battery and get it to a controlled temp of 75-80F.\n\"RC How do I determine my batteries 20 hour discharge rate?\"\nThis part is easy, you divide your batteries 20 hour Ah capacity by 20, eg: 100Ah ÷ 20 = 5A. If you had a 210Ah battery the math is the same; 210 ÷ 20 = 10.5A. If you want to test for Ah capacity, which is the only test that matters for an Ah counter, then this is the formula for determining the constant-current discharge load you will use.\nThe second thing you will notice is the temperature. Just like the rate of discharge, battery temperature affects your usable capacity. If you do not maintain a battery temp of 75-80F, during testing, you will not arrive at or get the correct 20 hour capacity. When conducting a capacity test, in order to properly program a battery monitor, discharge current and battery temp ideally need to remain constant & stable while the battery is discharged to 10.5V.\nIf you don’t start with a known & confirmed Ah capacity, your Ah counter may never give you reliable information. It simply can’t unless it is programmed well. At a bare minimum, for cooler climates, a once yearly 20 hour capacity test to 10.5V should be conducted. In warmer climates, defined as average battery temps above 80F, bi-yearly is a much better choice.\nHow do I conduct an accurate 20 hour capacity test?", "score": 42.8627317878511, "rank": 7}, {"document_id": "doc-::chunk-2", "d_text": "We have that; it’s 200 Ah.\n- Amp draw. That we don’t have; we have to calculate it.\nTo calculate amp draw (A) from watts (W), we also need to know the voltage (V). 
To calculate amps, we use the basic electric power equation:\nP (in W) = I (in A) * V (in V)\nBasically, electric power P (wattage) is calculated by multiplying electric current I (amps) with voltage V (volts). To calculate amps, you have to express the electric current I (amps) like this:\nI (in A) = P (in W) / V (in V)\nThis basically tells us that we get the amps by dividing watts by volts.\nExample: We have an 800 W AC unit that runs on a 120 V electric circuit. What’s the amp draw here? Easy, we just divide 800 W by 120 V and get 800W/120V = 6.67 A.\nIf you find this confusing a bit, you can use our watts to amps calculator here to help you out with the calculation.\nIn our example above, we have calculated the amp draw of the 800 W AC. It’s 6.67 A. Now we have both numbers; we have a 200 Ah battery and we know the AC has a 6.67 A draw. How long will a 200 Ah battery last if it has to power this AC? Let’s calculate:\n200 Ah Battery Life = 200 Ah / 6.67 A = 30 hours\nIn short, a 200 Ah battery will be able to power an 800 W 120 V air conditioner for about 30 hours.\nNow, it’s important that we feel the effect of different voltages. Let’s say that we have the same 200 Ah battery, the same power input 800 W unit, but it runs on a 240 V electrical circuit instead of a 120 V circuit.\nBecause the voltage is different, the amp draw – the amps required to run such an AC – will also change.", "score": 42.27397426014737, "rank": 8}, {"document_id": "doc-::chunk-7", "d_text": "The more electrode material contained in the cell the greater its capacity. A small cell has less capacity than a larger cell with the same chemistry, although they develop the same open-circuit voltage. Capacity is measured in units such as amp-hour (A·h). The rated capacity of a battery is usually expressed as the product of 20 hours multiplied by the current that a new battery can consistently supply for 20 hours at 68 °F (20 °C), while remaining above a specified terminal voltage per cell. 
For example, a battery rated at 100 A·h can deliver 5 A over a 20-hour period at room temperature. The fraction of the stored charge that a battery can deliver depends on multiple factors, including battery chemistry, the rate at which the charge is delivered (current), the required terminal voltage, the storage period, ambient temperature and other factors.\nThe higher the discharge rate, the lower the capacity. The relationship between current, discharge time and capacity for a lead acid battery is approximated (over a typical range of current values) by Peukert's law:\n- is the capacity when discharged at a rate of 1 amp.\n- is the current drawn from battery (A).\n- is the amount of time (in hours) that a battery can sustain.\n- is a constant around 1.3.\nBatteries that are stored for a long period or that are discharged at a small fraction of the capacity lose capacity due to the presence of generally irreversible side reactions that consume charge carriers without producing current. This phenomenon is known as internal self-discharge. Further, when batteries are recharged, additional side reactions can occur, reducing capacity for subsequent discharges. After enough recharges, in essence all capacity is lost and the battery stops producing power.\nInternal energy losses and limitations on the rate that ions pass through the electrolyte cause battery efficiency to vary. Above a minimum threshold, discharging at a low rate delivers more of the battery's capacity than at a higher rate. Installing batteries with varying A·h ratings does not affect device operation (although it may affect the operation interval) rated for a specific voltage unless load limits are exceeded. High-drain loads such as digital cameras can reduce total capacity, as happens with alkaline batteries. 
For example, a battery rated at 2 A·h for a 10- or 20-hour discharge would not sustain a current of 1 A for a full two hours as its stated capacity implies.", "score": 41.08550963729869, "rank": 9}, {"document_id": "doc-::chunk-6", "d_text": "You can quickly get a DMM from the market, turn the DMM selection switch to DC voltage supply, and connect the COMM terminal of DMM with the negative terminal and other terminals of DMM with the positive terminal.\nBy doing so, DMM will give you the exact output voltage of your battery packs.\nYou can connect DMM in series to the battery pack to determine the load current, and you can also use an Analog and digital ammeter for it.\nYou can then use the formula of P=VI to calculate the power capacity of the battery packs.\nIf you can’t use this method, you should look for the battery manufacturer number on your battery and battery packaging.\nYou can give this number to your battery manufacturer, and he/she will let you know about its battery capacity.\nThe American National Standards Institute (ANSI) procedure is as follows: Step 1 discharge new cells at 0.2 C to 1 volt. Step\n2 Charge cells at 0.1 C for 16 hours. Step 3 rest cells for 1 hour. Step 4 discharge cells at 0.2 C to 1 volt. Battery capacity is determined by the hours of service to 1-volt times the discharge rate (mA x hours = mAh)\n8. 
What are the protections placed in TLH’s Rechargeable battery packs?\nWe greatly emphasize user’s safety; hence our technical staff has taken special consideration for the battery protection.\nWe offer a Battery management system in our battery packs.\nAlong with BMS, we offer a cell balancing system in our rechargeable battery packs.\nOn the off chance that your cells aren’t balanced, it could influence your battery execution, subsequently adjusting the circuit is expected to control the battery on the occasion of unevenness.\nUsing an imbalanced battery will provoke under execution of the battery because the battery will not have the alternative to charge and delivery to its most extraordinary cutoff; like this, it will lessen the constraint of the battery cells.\nBalancing the circuit is critical because it charges and deliveries the cells’ aggregate to their most extraordinary worth; this will grow the strength and decrease the chances of deficiency for your battery and device.\nIn the event that the cells have distinctive voltage levels across the cells, it could bring about ill-advised force appropriation from the phones to your gadget.\nA balancing circuit is essential for your battery packs because it enables proper charging and discharging cycles.", "score": 40.94894574952806, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "Boost the life and performance of your lithium battery by installing the correct charging setup, says Duncan Kent\nBefore planning a total upgrade to lithium batteries, carry out a consumption audit – an in-depth evaluation of your electrical needs.\nModern appliances consume less power than older models, so you might want to consider using some of your repowering budget in the updating of your kit.\nCompressor-driven fridges, for example, can consume less than a third of the budget of thermo-electric cool boxes.\nAs battery capacity is nearly always quoted in Amp-hours (Ah) it’s often easier to do your calculations in Ah, 
rather than Watts and Watt-hours.\nFirstly, decide on a period (commonly 24h) between charges.\nThen, so long as you know what current a device draws, you simply multiply the time it will be in use over that period to get the number of Ah consumed over that period.\nHaving totalled up your proposed consumption, double it if you plan to use lead-acid type batteries, or for good quality Lithium-ion batteries that will allow an 80% discharge, multiply your consumption figure by 1.25.\nThat will provide your required battery capacity for your chosen period but I would always add a further 25% for contingencies.\nPlease note, capacities on deep-cycle batteries are usually quoted in C20 discharge rates, which is a pretty reasonable guide to follow.\nSome ‘leisure’ batteries (compromise between a deep-cycle and a start battery) might also quote a CCA (Cold Cranking Amps) figure, which is a good indication that they aren’t proper deep-cycle ‘traction’ type batteries.\nThey can be used in smaller craft, they’re just not the best for liveaboard cruising yachts.\nCharging lithium batteries\nAC battery chargers\nLithium-based batteries are usually charged at constant current of between 0.5C-1.0C (C = capacity in Ah) until the current drops to 0.03C, at which point charging must cease so as not to overcharge the cells.\nThese figures may differ depending on the type of cell and the instructions.\nSome recommend you stop charging at 0.1C to reduce stress on the cells and help extend their lifespan.\nThe State of Charge (SoC) cannot be determined by battery voltage, as this reaches its peak when the battery is only half-charged.", "score": 38.42771158441491, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "This is a graph of cycle life to delivered capacity for a flooded deep cycle lead acid battery we use in boats quite often. Take a look at how capacity changes with life and cycles. Also remember that in the lab they get 700+ cycles but in the real world most boat batteries are destroyed in well under 200 cycles.\nWhat is Ampere Hour Capacity?\nThe Ah capacity of deep cycle marine batteries is based on the *BCI (Battery Council International) 20 hour discharge test. For a 100Ah battery this means it should deliver 5A at 77F for 20 hours before the loaded terminal voltage falls to 10.5V. This is your ideal factory rated Ah capacity, but you’ll notice two things.\n*WARNING: Many unscrupulous battery manufacturers "calculate" or "extrapolate" the 20 hour rating from other tests using mathematical assumptions, as opposed to actually testing it for a true 20 hour rating. When you buy cheap batteries you often get Ah ratings made up via guess work.\nThe first thing that stands out is the 5A current. In order to meet the 20 hour capacity test figure, the current is held absolutely stable even as the voltage decays/falls during the test. As a battery discharges, the load it is discharged at will change the usable capacity of the bank, at that load. The only way you can get 20 hours of run time, at 77F, is at the 20 hour discharge rate. Any load greater than this and the battery will not deliver its full rated capacity. On the flip side if we draw the battery at less than the 20 hour rate we can get slightly more Ah capacity out of it.\nI prefer to call this the Peukert effect. 
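The rate-dependent capacity effect described above is often approximated with Peukert's equation. A minimal sketch follows; note the exponent `k = 1.25` is an assumed, typical value for a flooded lead-acid battery, not a figure taken from the text:

```python
def runtime_hours(rated_ah, rated_hours, load_amps, k=1.25):
    """Approximate run time at an arbitrary load using Peukert's equation.

    rated_ah / rated_hours define the rating test point (e.g. 100 Ah at
    the 20-hour rate, i.e. 5 A); k is the Peukert exponent, assumed here
    to be ~1.25 for a typical flooded lead-acid battery.
    """
    # t = H * (C / (I * H))^k  -- the standard Peukert form
    return rated_hours * (rated_ah / (load_amps * rated_hours)) ** k

# A 100 Ah (20 h rate) battery delivers its full 20 hours only at 5 A:
print(round(runtime_hours(100, 20, 5), 1))   # 20.0
# At 10 A (twice the 20 h rate) the usable time is far less than half:
print(round(runtime_hours(100, 20, 10), 1))
```

This mirrors the text's point: draw harder than the 20-hour rate and you get less than the rated Ah; draw gentler and you get slightly more.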
I hesitate to call it Peukert’s law because it is not a law, like Ohm's Law is, it’s an effect that modifies usable capacity based on rate of discharge.\nIn order to test the battery at the 20 hour rate, and do so accurately, you ideally need to hold the discharge current steady as the voltage decays to the 10.5V cut off point. This is a tedious and imprecise process for the average DIY. There is test equipment available to conduct a proper 20 hour Ah capacity test, but it begins in the four figure price range, and takes a lot of time to complete.", "score": 34.916229074520714, "rank": 13}, {"document_id": "doc-::chunk-10", "d_text": "With the example above, we would need 30 of them to obtain the required voltage, but would still have only 7 Ah of useful energy at that voltage.\nTo put it another way: with more Ah, we go further and faster. However, more Ah costs more money, and weighs more.\nWhen we read Wh (watt-hours) on battery specifications, we are looking at more or less the same thing we have been explaining about Ah, but in this case the voltage is also considered. For example, a 360 Wh battery is just a 10 Ah battery at 36 volts (36V x 10Ah = 360Wh).\nMaximum current means essentially “what is the maximum discharge rate of a cell?”. Think of it as a bucket of water. The bucket is the cell and the water is electricity (the larger the cell, the greater the amount of water), where the water leaves through a hole in the bucket. Thus, the larger the hole, the faster the water flows out.\nIn terms of the battery, if the discharge path (the hole) is not large enough, then the motor may not be able to get enough energy (water) to operate at maximum performance.\nThe measuring unit is described in terms of the maximum amperage the cell can sustain per unit time. Another way to describe it is in terms of “C”, i.e. the rate at which the battery can be fully discharged in one hour = 1C. 
For example, if the battery can discharge 10 amps for one hour, it has a maximum current of 10 A at 1C.\nProblems in battery construction:\nMost consumer electronics batteries utilize a handful of cells. For example, a 3.6-volt mobile phone battery may have three 1.2 V NiMH cells in a plastic box. Electric bike batteries usually merge no more than 30 cells. Each cell is connected to the next with a small metal connector. Each connector is a potential point of mechanical failure, and a small resistance.\nLead Acid (SLA)\nPros: average energy density, low maintenance, cheap, proven in millions of electric bikes.\nCons: heavy batteries, very short cycle life, no fast-charge option and awkward storage.\nNickel (NiMH and NiCd)\nPros: average energy density, relatively fast charging, average weight.\nCons: suffer from the memory effect, reduced performance at low temperatures, difficult storage.", "score": 34.54849335588844, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "This article will help you to calculate the drone battery capacity, drone battery energy, discharge current and runtime to full capacity.\nDrone battery capacity\nThe drone battery capacity is calculated by this formula:\ncapacity = drone flight time * average amp draw / discharge\nThe average amp draw is calculated as:\naverage amp draw = (total drone weight * power) / voltage\n- capacity – It’s the capacity of your battery, expressed in milliamp hours (mAh) or amp hours (Ah). Please note that 1 Ah = 1000 mAh. You can find this capacity value printed on your LiPo battery. So, the higher the capacity, the more energy is stored in the battery.\n- discharge – It’s the battery discharge that you allow for during the flight. As LiPo batteries can be damaged if fully discharged, it’s common practice never to discharge them by more than 80%. 
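The two formulas above combine into a small calculator. A sketch under assumptions: the 170 W/kg figure and 80% usable discharge come from the article, while the example drone weight (1.2 kg) and pack voltage (14.8 V, a 4S LiPo) are my own illustrative inputs:

```python
def average_amp_draw(total_weight_kg, power_w_per_kg, voltage_v):
    # average amp draw = (total drone weight * power) / voltage
    return total_weight_kg * power_w_per_kg / voltage_v

def required_capacity_ah(flight_time_h, avg_amp_draw_a, usable_fraction=0.8):
    # capacity = flight time * average amp draw / discharge
    return flight_time_h * avg_amp_draw_a / usable_fraction

# Hypothetical example: 1.2 kg drone, 170 W/kg, 14.8 V pack, 10 min flight
draw = average_amp_draw(1.2, 170, 14.8)      # amps drawn on average
cap = required_capacity_ah(10 / 60, draw)    # Ah needed for the flight
print(round(draw, 2), round(cap * 1000), "mAh")
```

The result is the minimum pack capacity that leaves the 20% safety margin the article recommends.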
Basically, you need to leave 20% charge.\n- average amp draw – It’s calculated in amperes.\n- total drone weight – It’s the total weight of the equipment that goes up in the air, including the battery, and is measured in kilograms.\n- power – It’s the power required to lift one kilogram of equipment, expressed in watts per kilogram. You can normally use 170 W/kg. Some more efficient systems can take less, for example, 120 W/kg; if that’s your case, don’t hesitate to adjust its value.\n- voltage – It’s the battery voltage, expressed in volts. You will find this value printed on your battery.\n- current – The power / voltage is the definition of the electric current I (in amps) required to lift one kilogram into the air. It comes from the electric power formula (power = current * voltage), not Ohm’s law.\nDrone battery energy\nThe power of an electrical device P is equal to voltage V multiplied by current I:\nP = V * I\npower = voltage * current\nThe energy E is power P multiplied by time T:\nenergy = power * time\nE = P * T\nIf you join both formulas:\nenergy = voltage * current * time\nE = V * I * T\nThe amp hours are a measure of electric charge Q (the battery capacity).", "score": 34.243765896100456, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "Firstly let me say that this is with a view to using a lead acid battery for backup purposes in case of power outage (mains power, a few lightbulbs but with daily outages lasting a few hours)\nSo let's provide a data sheet for the battery, deep cycle flooded lead acid type:\nFor the given battery, the SPRE 12 225, the open-circuit cell voltage is:\n- 100% charged 2.122V per cell\n- 50% charged 2.017V per cell\n- 20% charged 1.943V per cell (11.66V total)\n- 10% charged 1.918V per cell (11.51V total)\nMeanwhile the capacity of the battery is defined based on discharging (at constant current) in 10, 20, 48, 72, or 100 hours until the cell voltage falls to 1.75V (10.5V total).\nAccording to the manufacturer's data, the total lifetime capacity of 
the battery (number of cycles * depth of discharge) doesn't appear to be significantly different between 20% and 80% DoD.\nSo if we want to know how much real capacity the battery has, it seems we can use 12V * C * 80%. The C10 figure is 179Ah, C20 is 204Ah and C100 is 225 Ah. So at C10, 17.9A, that's 12 x 179 x 0.8 = 1718.4Wh.\nHowever, is it possible that the voltage drop due to the current & internal resistance of the battery is distorting the figures? Or is this insignificant?\nThe manufacturer does not quote below a C10 figure (17.9A, equivalent to around 170W after inverter & distribution losses). Is this the maximum current draw? Or could you go to, say, C5?\nI notice from the manufacturer's datasheet for their renewable energy storage range\nthat the ratio of, say C10 to C100 isn't completely consistent across battery chemistries, however they do quote C5 numbers for some of the AGM & gel batteries. Is there in fact a possibility to use a higher discharge current for gel/AGM vs lead acid?", "score": 34.19449791519859, "rank": 16}, {"document_id": "doc-::chunk-1", "d_text": "|State Of Charge||Sealed Or Flooded Lead Acid battery Voltage||Gel Battery Voltage||AGM Battery Voltage|\nDischarge Depth Of Battery\nThe general rule of thumb: the less you discharge your deep cycle battery before recharging, the longer it will last\nAn example is here:\nRemarkably, a Sonnenschein Solar Bloc 100 AH Gel Battery discharged to a depth of 70 percent, i.e., with just 30 percent or 30 Ah (amp hours) left, would have a lifetime of around 1200 cycles. However, if it only discharges up to 50 percent, the estimated number of cycles will rise to about 1700! That adds over 1.25 years to the life of the battery if a cycle is a day\nIn most deep-cycle batteries, discharge depth, also known as DOD, should not be more than 50 percent to get the best value for money. 
So, consider the cut-off discharge depth to be 50 AH if you have a 100 AH battery.\nA very critical measurement you can make when selecting the size of a deep cycle battery is the depth of discharge.\nAnother example is here:\nCheck the amp rating on the adaptor if you want to power a laptop computer. It is likely to be anywhere between the 3 and 5 amp mark. This probably translates to around 2-4 amps an hour under regular use, as your laptop will not use the maximum amount at all times. So, on the lower end, centered on:\n100 AH battery = 50 AH capacity available/2 amp draw = 25 hours of use.\nThere are four main types of traditional deep cycle batteries, such as sealed lead acid, flooded lead-acid, gel, and AGM, as stated. See our Deep Cycle Battery Guide to find out more about the difference between them. Find out more about the Tesla Powerwall solar battery.\nRecommended Readings (The Baselined)", "score": 33.92147543882458, "rank": 17}, {"document_id": "doc-::chunk-2", "d_text": "However, as batteries lose efficiencies over time, a ‘full charge’ can be at 100% of its claimed capacity at the start, but dwindles to 70% after some years. This is why warrantied energy throughput is more useful for determining how long your battery can last. This number shows the amount of electricity expected to pass through the battery throughout its lifetime.\nWith this knowledge, you should now be well equipped to buy the right battery for your home. If you want to make the most out of your solar and battery investment, consider getting an energy management system like carbonTRACK, which not only shows the data related to electricity usage, input and output, but can also control appliances in the home so that you can maximise your savings from electricity use.", "score": 33.57681837233664, "rank": 18}, {"document_id": "doc-::chunk-1", "d_text": "Look for motor output (in watts) which will give you an idea of total power. 
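The depth-of-discharge arithmetic above (50% usable capacity divided by the draw in amps gives hours of use) reduces to one line; a minimal sketch:

```python
def hours_of_use(battery_ah, draw_amps, max_dod=0.5):
    """Usable run time honouring a depth-of-discharge limit.

    max_dod=0.5 is the 50% rule of thumb quoted for deep-cycle
    batteries; raise or lower it for other chemistries.
    """
    return battery_ah * max_dod / draw_amps

# The laptop example: 100 AH battery, ~2 A draw, 50% DoD limit
print(hours_of_use(100, 2))  # 25.0
```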
But watt hours (Wh) is perhaps a better figure to use—it takes into account battery output and life to give a truer reflection of power.", "score": 32.60493714363189, "rank": 19}, {"document_id": "doc-::chunk-1", "d_text": "If you depend on your device to operate straight away after long periods of inactivity, such as an emergency torch, then it's preferable to use a good alkaline single-use battery.\nLithium and alkaline batteries are functionally about the same in nearly all aspects. However, lithium batteries can be used in more powerful devices without having to worry about the battery draining too quickly, as they have a higher capacity, slightly higher initial voltage and a longer shelf life. The downside is lithium batteries are more expensive.\nBreaking down the jargon\nMilliampere hours (mAh): A measurement of the capacity of a battery. The higher the mAh, the longer the battery will last. If your battery is rechargeable then the mAh rating is how long the battery will last per charge.\nMemory effect: Most often associated with NiCad batteries where the battery appears to fail to charge to its full capacity, instead setting itself to show fully charged at the capacity of the battery when placed in the charger.\nClaimed capacity mAh: High-drain tasks such as digital photography with frequent use of the flash may benefit from a battery with a higher capacity.\nLithium-ion single use: These last the longest in high-drain devices, like digital cameras, and might be a good option for a backup when travelling. Manufacturers claim they have a shelf life of around 10 years. There are a few online sites offering rechargeable Li-Ion AA batteries. 
Most of these batteries have a nominal voltage of 3.7V, which will fry your electronic devices if they take normal AA batteries, so avoid them unless you know exactly what devices you have that can cope with a nominal voltage greater than 1.5V.", "score": 31.9485765730271, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "Sometimes petty things cause big confusion in our minds, and one of those petty things is calculating inverter battery back-up time.\n99% of people who own a home inverter would ask this question to their battery dealer at least once in their lifetime. But most battery shop owners don’t let their customers know the simple formula to calculate the back-up time of an inverter battery. Don’t worry readers, now I will let you know that simple formula!\nFormula to Calculate the Back-up Time of Inverter Battery\nBack up Time of Inverter Battery = Battery Volt x Battery AH rating / Total watts on Load\nIf a person uses 1 ceiling fan + 1 tube light + 2 (15-watt) CFLs simultaneously with a 150 AH battery, then the backup time is calculated as\n1 ceiling Fan = 75 watts\n1 Tube light = 40 watts\n2x 15watts CFL= 30 watts\nTotal = 145 watts\nBack-up time of 150 AH Battery = 12 * 150 / 145\n= 12.41 hours (approximately)\nThis is Not Accurate Why?\nThis calculation gives only an approximate value because there will be some loss of energy when converting 12 V battery power to 220 volts through the inverter.\nMoreover, we cannot figure out the exact power consumption of a ceiling fan, as it has a speed-adjustment dimmer switch. 
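The back-up time formula above is a direct one-liner; this sketch transcribes it and reproduces the fan/tube-light worked example (remember the text's caveat that inverter conversion losses make the real figure lower):

```python
def backup_hours(battery_volts, battery_ah, load_watts):
    # Back up Time = Battery Volt x Battery AH rating / Total watts on Load
    return battery_volts * battery_ah / load_watts

# 1 ceiling fan (75 W) + 1 tube light (40 W) + two 15 W CFLs = 145 W
load = 75 + 40 + 2 * 15
print(round(backup_hours(12, 150, load), 2))  # 12.41
```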
The power consumption will be low when the fan runs at slow speed.\nNote: (power consumption of old fans with the big, older-style regulators remains the same at all speed points)", "score": 31.305721301449694, "rank": 21}, {"document_id": "doc-::chunk-1", "d_text": "Specific gravity measurements in batteries indicate how much sulfate is in the electrolyte, providing information about the SOC, but not capacity or SOH.\n- Impedance testing — An impedance test does not measure the capacity of the battery, but it is an indicator of the SOH of the battery.\n- Discharge testing — Discharge testing is the only form of test that will determine the actual capacity of the string, but not necessarily the SOH.\nIEEE standards (see IEEE Standards At a Glance) recommend that discharge testing be performed at the time of a battery string’s installation and then every two to five years after that, depending on the age and capacity of the string. Specially designed test sets are available to make this process as easy and convenient as possible. For complete discharge testing, the battery must be taken out of service for the duration of the test. This can easily last as long as two days in order to allow a full discharge/charge cycle to be completed. Although it offers the most accurate results possible, this test method is clearly costly, time consuming, and often inconvenient. How many users do you know that would be okay without standby power for two full days?\nOne way of addressing this problem is to carry out limited discharge testing, which involves discharging the batteries by up to 80% without taking them out of service. This yields results almost as accurate as those provided by carrying out a 100% discharge test.\nPartial discharge testing is a very useful way of assessing battery condition, but it is not ideal in every application. For one thing, the test is still time consuming. 
Although the batteries remain in service — should they be called upon to supply power at the point of deepest discharge during the test — they will only have 20% of their full capacity available.\nIt is also recommended that impedance testing be performed quarterly. Although impedance testing does not directly provide information about battery capacity, it is the only method that reveals the SOH of the battery. Impedance testing can be performed without taking the battery out of service.\nAs a battery ages, it may corrode, sulfate, dry out, or deteriorate in many other ways, depending on maintenance, chemistry, and usage. All of these effects cause a chemical change in the battery, which, in turn, causes a change in the battery’s internal impedance/resistance.", "score": 31.008696611250926, "rank": 22}, {"document_id": "doc-::chunk-4", "d_text": "This precharacterization is derived from data provided by the battery manufacturer, derived empirically, or otherwise determined. This relationship may be represented by data stored in a lookup table or by a mathematical function implemented in hardware, firmware or software.\nIn one embodiment, the battery capacity tester includes a constant current sink coupled to the terminals of the battery. The constant current sink is constructed and arranged to draw at least two successive, substantially constant currents from the battery. The battery capacity tester also includes a battery characterizer coupled to the constant current sink and battery terminals. The battery characterizer is constructed and arranged to generate a remaining battery capacity based on a predetermined relationship between the internal battery impedance and capacity for the battery. The internal battery impedance is determined based on a battery voltage measured during each current draw. 
The constant current sink can be implemented as a constant current source coupled to the battery and supplying a constant current to a load having a known resistance for dissipating the currents.\nIn one embodiment, the battery characterizer includes at least one lookup table containing the precharacterized battery capacity data that can be accessed by the measured internal battery impedance. In another embodiment, a voltage measurement device is coupled to the battery terminals, providing voltage measurements obtained during each constant current draw. In one aspect of this embodiment, the relationship between the internal battery impedance and the remaining battery capacity is represented by ratios of voltages and currents for the known load. In another embodiment, the lookup table is arranged by the difference in applied constant currents, and is accessed using the difference between first and second voltage measurements to obtain the internal battery impedance and associated battery capacity. In another embodiment, the battery characterizer includes a computation element executing a source program to determine the remaining battery capacity as a function of the present internal battery impedance.\nIn another aspect of the invention, a method for determining a remaining battery capacity is disclosed. The method includes the steps of: (1) precharacterizing a relationship between battery impedance and battery capacity for the battery; (2) determining a present internal battery impedance; and (3) determining the remaining battery capacity using the internal battery impedance based on the precharacterized data.", "score": 30.94503054277821, "rank": 23}, {"document_id": "doc-::chunk-2", "d_text": "Yes\nBattery Capacity is a measure (typically in Amp-hr) of the charge stored by the battery, and is determined by the mass of active material contained in the battery. 
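The two-current impedance measurement and capacity lookup described in the patent text above can be sketched as follows. The table values are invented placeholders (real precharacterized data would come from the battery manufacturer or empirical measurement), and the function names are my own:

```python
# Precharacterized impedance (ohm) -> remaining capacity (fraction),
# ordered by rising impedance. Values are illustrative placeholders only.
IMPEDANCE_TABLE = [(0.05, 1.00), (0.08, 0.75), (0.12, 0.50), (0.20, 0.25)]

def internal_impedance(v1, i1, v2, i2):
    """Impedance from two successive constant-current draws: dV / dI."""
    return (v1 - v2) / (i2 - i1)

def remaining_capacity(z):
    """Nearest-entry lookup into the precharacterized table."""
    return min(IMPEDANCE_TABLE, key=lambda row: abs(row[0] - z))[1]

# Hypothetical readings: 12.4 V at a 1 A draw, 12.0 V at a 6 A draw
z = internal_impedance(12.4, 1.0, 12.0, 6.0)
print(round(z, 2), remaining_capacity(z))  # 0.08 0.75
```

A real implementation would interpolate between table entries (or use a fitted function, as the text notes) rather than snapping to the nearest row.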
The battery capacity represents the maximum amount of energy that can be extracted from the battery under certain conditions. 3600 mAH, Li-Po, Non-removable", "score": 30.585982246814446, "rank": 24}, {"document_id": "doc-::chunk-1", "d_text": "When you place batteries in series, the capacity, or mAh of the pack (we will get to this later), stays the same and the voltage is multiplied by the number of cells. For instance, in the above example of 3s1p, the pack would have a resting voltage of 11.1V (3 x 3.7V). Using this information, a 2s1p pack would have a resting voltage of 7.4V (2 x 3.7).\nThe next letter in the pack configuration descriptor is p. The p stands for parallel. When placed in parallel, the battery’s voltage is not affected, but the capacity (mAh) is multiplied by the number that comes before the p. For example, if you had a battery that was 1000mAh and had a pack description of 1s3p, your pack would have 3 x 1000mAh cells.\nUsing this information, if you had a pack that was 1000mAh and the pack description of 3s2p, this would mean that your battery pack consists of two sets of 500mAh cells connected in parallel to make the 1000mAh capacity. In the above example, I referred to the capacity of a battery as mAh. mAh stands for milliamp hour. 1000mAh is equal to one amp hour. For example, a battery that is rated at 3000mAh can be discharged at a rate of 3 amps (or 3000mA) for one hour before being completely drained. By this logic, if you drained the same 3000mAh at 6 amps, the pack would be completely drained in one half hour.\nRating the battery\nThe next term that is very important to understand when it comes to batteries is the C rating. Batteries carry C ratings from 1C all the way up to 45C these days. The C rating is basically how fast the energy of the battery can be released from the battery. If a 3000mAh battery has a 1C rating it can be discharged safely at 3 amps. 
If you have a similar 3000mAh battery with a C rating of 10C that battery can safely be discharged at 30 amps. The same battery at 20C can be safely discharged at 60 amps, etc.", "score": 30.401046382066927, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "- Configuration: How many cells the battery has. (1S, 2S, 3S)\n- Capacity : How much electricity the battery holds. (500 mAh, 1000 mAh, 2200 mAh)\n- Constant discharge: How fast the electricity can flow from the battery. (15C, 25C, 35-45C)\n- Burst rate: A peak value that the battery can discharge for a short period of time. (37C ,15sec)\n- Pack size: three dimensions, HxWxL\n- Weight: in these modern times, always expressed in grams.\n- Charge Rate: how high of a current at which the battery can be charged (1C, 5C)\nLots of details here.\nVoltage: A single lipo cell will have a voltage range of 3.0V - 4.2V. The \"nominal\" voltage (how it's typically referred to) is 3.7V. If the cell goes below 3.0V, it won't be able to be recharged. Most people recommend not letting a cell go below 3.2 volts so that you'll have a bit of safety margin.\nSo, a 3S battery will have 3 cells, a nominal voltage of 11.1V, and a voltage range of 9V - 12.6V (the nominal values multiplied by 3.\nYour discharge rate requirements will be defined by your motor and ESC combination. Models with high instantaneous power demands (helis, stunt 3D flying) will have higher discharge requirements. Models with low instantaneous power demands (gliders, trainers) will have lower requirements. 
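The series/parallel and C-rating arithmetic from the pack-configuration discussion above can be captured in a few lines; a minimal sketch:

```python
def pack_specs(cell_v, cell_mah, s, p, c_rating):
    """Voltage, capacity and safe continuous discharge for an s/p pack."""
    voltage = cell_v * s                        # series multiplies voltage
    capacity_mah = cell_mah * p                 # parallel multiplies capacity
    max_amps = capacity_mah / 1000 * c_rating   # C rating x capacity in Ah
    return voltage, capacity_mah, max_amps

# 3s1p pack of 3.7 V / 3000 mAh cells rated 10C:
v, cap, amps = pack_specs(3.7, 3000, 3, 1, 10)
print(round(v, 1), cap, amps)  # 11.1 3000 30.0
```

The 30 A figure matches the worked example in the text: a 3000 mAh pack at 10C can safely be discharged at 30 amps.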
Higher discharge rates usually mean the batteries will be heavier and more expensive.\n- Configuration : 2S 7.4v\n- Capacity : 2150mAh\n- Constant discharge: 25C\n- Burst rate: 37C (15sec)\n- Pack size: 113x33x16mm\n- Weight : 129g\n- Charge rate not specified, so we assume 1C", "score": 30.006052356778355, "rank": 26}, {"document_id": "doc-::chunk-1", "d_text": "In a battery, a chemical reaction occurs and then electrons travel through a wire from one terminal to the other, and the result is Direct Current (DC) electricity. Batteries are particularly useful for standalone systems as they are fairly cheap and power can be stored and used when needed. If you wish to store electricity that you have generated in the battery bank then deep cycle batteries are designed to deliver less current for a longer period of time. Once flat they are designed to be recharged.\nCar batteries can be used but they are not suitable as they are not designed to be fully discharged. The deep cycle batteries are designed to be recharged once flat and will last longer.\nThe amount of energy that a battery can supply is specified in ampere hours (Ah). So, a 12-volt battery that is specified as 100Ah can theoretically deliver 1 amp for 100 hours or 100 amps for 1 hour.\nMultiplying the volts and amps, gives you the power (watts) the battery will produce.\nSo for our 12 volt battery it is possible to have:\n- 12 volts X 1 amp = 12 watts for 100 hours\n- 12 volts X 100 amp = 1200 watts for 1 hour\nBack in the real world, these numbers don’t quite add up as you can’t expect to get more than 80% capacity from your battery, i.e. 80 Ah.\nThe smaller deep cycle batteries are not designed to deliver masses of current so you shouldn’t really drain more than about 10 -15 amps (that’s a device of about 120 – 180 watts). 
So if a pump needs 60 watts to function we divide 60 watts by 12 volts, which gives 5 amps.\nIf you use the pump for 2 hours a day, a fully charged battery therefore lasts about 8 days.\nHow to work out Watts, Amps and Volts\nCommon sense states that a larger solar panel will collect more energy than a smaller solar panel, but what size is correct for your needs?\nMost appliance power consumption is given in Watts. It is a relatively simple calculation to work out your energy use: just multiply the power consumption by the hours of use.", "score": 29.870587810537693, "rank": 27}, {"document_id": "doc-::chunk-1", "d_text": "However, that requires a deep discharge which lead-acid batteries don't like. It also causes you to lose radio settings. The measurement would take many hours, something for which garages don't have time.\nIf you know the current draw from your headlights, you could leave the car with the headlights on and see for how many hours they are still on. For example, if the headlights are 120W = 10A @ 12V, it means that a 50 Ah battery should be able to keep the headlights on for 5 hours. This will however, as I said, lose your radio settings.", "score": 29.794663458417993, "rank": 28}, {"document_id": "doc-::chunk-2", "d_text": "As a result, to ensure reliable defibrillator operations, it is critical that the condition of the battery pack be determined prior to operation. The user may then determine whether the installed or a replacement battery should be used.\nCurrently, three procedures or tests are commonly utilized to determine the present capacity of a battery, referred to herein as \"remaining battery capacity.\" One conventional battery capacity test measures the time to completely discharge a fully charged battery into a known load. One drawback is that this technique requires a known load to be provided over a very long period of time. 
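The pump example (60 W ÷ 12 V = 5 A, 2 h/day, ~80% usable from a 100 Ah bank → about 8 days) follows one pattern; a sketch that generalises it:

```python
def days_between_charges(battery_ah, usable_fraction, load_watts,
                         volts, hours_per_day):
    """Days a battery bank supports a daily load before needing a charge."""
    amps = load_watts / volts           # e.g. 60 W / 12 V = 5 A
    ah_per_day = amps * hours_per_day   # e.g. 5 A x 2 h = 10 Ah/day
    return battery_ah * usable_fraction / ah_per_day

# The pump example: 100 Ah bank, 80% usable, 60 W pump for 2 h/day
print(days_between_charges(100, 0.8, 60, 12, 2))  # 8.0
```

The same arithmetic recovers the headlight case: 120 W at 12 V is 10 A, so a 50 Ah battery (taken at 100% usable, as that example does) runs them for 5 hours.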
Inclusion of such a long load discharge cycle significantly increases the cost and complexity of the procedure. Another deficiency is that this test often requires hours to perform since the battery must be fully charged, and then fully discharged. Furthermore, since the battery must be completely discharged under well-known and controlled conditions, the battery powered device is unavailable for use during this test. In addition, for the battery to be used, it must be fully recharged upon completion of the test, further extending the unavailability of the battery. An additional problem is that this test decreases battery life for certain types of batteries such as NiCd or SLA batteries, reducing the number of remaining available charge cycles. Thus, although relatively accurate, this test is time consuming, inconvenient, and adversely affects the battery life and availability.\nAnother conventional battery capacity test, commonly referred to as an open circuit voltage test, measures the battery voltage without an attached load. This test is utilized only for certain types of batteries, such as SLA batteries, which are characterized by a predictable decrease in terminal voltage as the battery is used. The remaining battery capacity is estimated based on this decrease in voltage. However, many other battery types such as lithium batteries, silver batteries, and mercury batteries do not exhibit such a continual and predictable decrease in voltage during use. As a result, this test is not suitable for such batteries. In addition, this test is also temperature dependent. The ambient temperature will affect the battery voltage and thus the estimation of battery capacity. Also, if the battery is partially depleted, this test will be less accurate because the relationship between voltage and capacity will have changed. 
Thus, although relatively fast and roughly accurate, this test can only be utilized to test a minority of battery types.", "score": 29.685059988822072, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "An AA alkaline battery's voltage starts at 1.5 and drops to 0.8V cutoff.\nThis makes alkaline a poor choice for powering an LED. You want a flatter discharge curve. Li-ion is a much better choice. You could easily power these LEDs with a single 18650 battery.\nThe lifespan is determined by the discharge rate. At 25mA expect 3000 mAh. At 100 mA expect 2500 mAh. See the ENERGIZER E91 Specifications. The mid point would be 2750 mAh.\nThree AA alkaline in series will have a discharge curve from 4.5v to 2.6v. A better value would be 11Ω which would give you 75 mA at the 3.6V mid point between new 4.5V and discharged to 2.77V.\nEach one of these LEDs must/should have a current limiting resistor. You measured 76 mA which is only valid for whatever the battery voltage was at that instant. The voltage is continually decreasing as well as current.\nFirst you must specify the current required for the desired brightness. Then use an online calculator to get the value of the optimum resistor. I like the Hobby Hour LED Series Resistor Calculator.\nAt 75 mA total, each of the six LEDs uses 12.5 mA.\nA 3.6V 18650 Li-ion will discharge to 3.2V, a much flatter curve. At 4.5V an LED with a Vf of 2.77 would use a 140Ω resistor for 12.5mA. For 12.5mA at the minimum 2.77V the optimum resistor would be 10Ω. The resistor to use would be calculated with the mid point voltage of 3.6V which is 66Ω. But the range of brightness would be noticeable. This will give you an average of 45 mW per LED or 270 mW for all 6 LEDs. A battery capacity of ≈2750 mAh would yield a lifespan of 36.66 hours with an average 76% efficiency.\nIf you need a very consistent brightness then a constant current regulator would be required. Something like an On-Semi NSI45060JD would work okay.
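The series-resistor arithmetic above is Ohm's law applied to the voltage left over after the LED's forward drop: R = (Vsupply − Vf) / I. A quick sketch using the text's numbers (3.6 V mid-point supply, Vf = 2.77 V, 12.5 mA per LED):

```python
def led_series_resistor(v_supply: float, v_forward: float, i_amps: float) -> float:
    """Ohm's-law resistor value that drops the excess supply voltage."""
    return (v_supply - v_forward) / i_amps

# Text's example: 3.6 V mid-point battery, 2.77 V forward drop, 12.5 mA per LED
r = led_series_resistor(3.6, 2.77, 0.0125)
print(round(r))  # ≈ 66 ohms, matching the text
```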
For higher efficiency a switching step down regulator may do better.", "score": 29.190111820961285, "rank": 30}, {"document_id": "doc-::chunk-3", "d_text": "For certain situations, such as an off-shore passage or open ocean racing discharging to 70-80% DOD is acceptable provided the batteries receive a proper charge as soon as you get to the destination. Regularly discharging below 50% SOC in a PSOC environment drastically shortens battery life when compared to 50%.\nFirefly & some GEL batteries would be an exception for regularly discharging below 50% SOC.\nHow do I conduct an approximate 20 hour capacity test?\n#1 Allow battery to attain a steady 75-80F temperature\n#2 Fully charge battery and equalize if it's capable\n#3 Let the battery rest for 24 hours\n#4 Apply a DC load for 2 hours that = Ah Capacity ÷ 20 (small light bulbs and/or resistors can work)\n#5 Allow the battery to rest for at least 10 hours at 75-80F (24 hours is significantly more accurate)\n#6 Check specific gravity or resting open circuit voltage and compare to manufacturers SOC tables\n#7 Use basic math to determine the approximate Ah capacity. For example, a 100Ah rated battery has been discharged at 5A for 2 hours. This means you removed 10 Ah of capacity. If the battery was in perfect health, specific gravity readings or open circuit voltage readings should show the battery at 90% SOC. *If SG and OCV only show the battery at 60% SOC then the battery has lost approx 30% of its Ah capacity.\nThis is an approximation only and NOT an accurate Ah capacity test.
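The step-7 arithmetic above can be sketched as follows; note this reproduces the article's rough percentage-point comparison, not a true 20-hour discharge test:

```python
def approx_capacity_loss(rated_ah: float, load_amps: float,
                         hours: float, measured_soc: float) -> float:
    """Article-style approximation: compare the SOC a healthy battery
    should show after the partial discharge with the SOC actually read."""
    removed_ah = load_amps * hours              # e.g. 5 A * 2 h = 10 Ah
    expected_soc = 1.0 - removed_ah / rated_ah  # healthy 100 Ah battery: 90%
    return expected_soc - measured_soc          # shortfall ~ capacity lost

loss = approx_capacity_loss(rated_ah=100, load_amps=5, hours=2, measured_soc=0.60)
print(f"approx. capacity lost: {loss:.0%}")  # 30%, as in the text
```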
Variances can be anywhere from 10-18% off an actual 20 hour capacity test depending upon your particular battery.", "score": 29.052785875740014, "rank": 31}, {"document_id": "doc-::chunk-12", "d_text": "In such embodiments, a microprocessor or general-purpose computer (not shown) is utilized to execute a source code program stored either in RAM or ROM, performing the computation or accessing the look-up table to determine battery capacity. The source program may be implemented in any language suitable for creating a source code file executable by the microprocessor. For instance, C, C++, Basic, Fortran, Assembly, and Pascal programming languages may be used.\nIn one embodiment, the source code program would have as an input the first and second voltage measurements corresponding to the first and second current levels, respectively. Thus, the source code program would provide the internal battery impedance as a direct calculation using the formula (1) shown above. Using the mathematical relationship between the internal battery impedance and remaining battery capacity developed as described above, the internal battery impedance is used to determine the remaining battery capacity.\nA flowchart illustrating one embodiment of a method 600 for determining the internal battery impedance and remaining battery capacity for a battery under test is provided in FIG. 6. In step 602, the internal battery impedance is determined. In step 604, the remaining battery capacity is determined using the internal battery impedance and the precharacterized data.\nFIGS. 7A and 7B are a flowchart of one embodiment of a process for determining the internal battery impedance and remaining battery capacity for a battery under test. In steps 702 through 706 data relating the internal battery impedance to the remaining battery capacity is pre-characterized. For each different battery chemistry being considered, that battery's particular data is appropriately stored.
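Formula (1) referenced above is the two-point impedance estimate Z = Δv / Δi, which is then mapped to remaining capacity through the precharacterized data. A minimal sketch, with an entirely hypothetical lookup table (the patent's real tables are per-chemistry and per-temperature):

```python
def internal_impedance(v1: float, i1: float, v2: float, i2: float) -> float:
    """Formula (1): Z = (change in voltage) / (change in current)."""
    return (v1 - v2) / (i2 - i1)

# Hypothetical precharacterized data: impedance (ohms) -> remaining capacity (%)
CAPACITY_TABLE = {0.05: 100, 0.10: 80, 0.20: 50, 0.40: 20}

def remaining_capacity(z: float) -> int:
    """Look up the table entry whose impedance is closest to z."""
    nearest = min(CAPACITY_TABLE, key=lambda k: abs(k - z))
    return CAPACITY_TABLE[nearest]

z = internal_impedance(v1=12.4, i1=1.0, v2=12.0, i2=5.0)  # 0.4 V / 4 A = 0.1 ohm
print(z, remaining_capacity(z))
```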
In step 708 a first current is selected and the battery discharges the current to a known load, during which time the battery voltage is measured, step 710. In step 712 a second current is selected and the battery discharges the current to the known load, during which time the battery voltage is measured, step 714. The internal battery impedance is determined based upon the voltages measured and/or the currents used, step 716. The remaining battery capacity is then determined in step 718 and is output in step 720.\nIn one embodiment of the present invention, step 704 may be implemented in a look-up table storing the precharacterized data in a memory device that is accessible to the controller. The data may be stored according to a variety of methods including a linked list, table, data structure, or array.", "score": 28.515044444990373, "rank": 32}, {"document_id": "doc-::chunk-1", "d_text": "The amperage is not additive in series.\nIn order to build current, we connect batteries in parallel (positive to positive, and negative to negative). If we want 200 amps per hour and we're using a 50-amp/hour battery, we need four rows of these batteries, connected in parallel. Voltage is not additive in parallel.\nTo determine how many batteries you would need, first calculate your average hourly watt needs, and divide by 120 volts AC to find amperage per hour.\nThe minimum battery bank for off-grid use is typically designed for fourteen hours – the time from sundown to sunup. If you're on the utility grid, you need enough reserve to last through a power failure – usually four hours at most.\nYour system can be 12-, 24-, or 48-volts DC. Of these, 48-volt is most common because of its greater efficiency. The type of system determines the multiple by which you will now find your DC amp hour requirement.
This multiple is 10 for a 12-volt system, 5 for 24-volt, and 2.5 for 48.\nSo, multiply your amps per hour from the last paragraph (let's assume 10 amps/hour) by the multiple for your system (assume 48-volt) to get your DC amp/hours (25).\nSelect the deep-cycle battery you will be using and determine the reserve amps for that battery. Multiply your amps per hour (from above) by 14 hours. This gives your required battery reserve amps.\nDivide that by the battery manufacturer's reserve amps, and you will know how many rows of batteries in parallel you need. The number of batteries in series in each row is determined by the battery voltage. Batteries are most commonly 6 or 12 volts.\nNow divide the system voltage (12, 24, or 48) by the voltage of your batteries (6 or 12). This gives you the number of batteries per row.\nFinally, multiply the number of batteries per row by the number of rows to find the total number of batteries.\nLet's say we're building an off-grid system. Our monthly energy requirement is 576.45 kWh.", "score": 27.80142217036242, "rank": 33}, {"document_id": "doc-::chunk-1", "d_text": "It is recommended to fly using only 80% of the capacity of LiPo batteries to maximize their longevity. Using the example above (where the average draw was 9.11A), the full throttle current draw (you will need to measure this) is 16A; input 2200mAh and 16A into the app and it'll calculate that you should be able to fly (at FULL throttle) for 6 minutes 36 seconds until the battery is 80% depleted or for 8 minutes 15 seconds (also at FULL throttle) until it is 100% depleted.
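The flight-time figures quoted above follow from minutes = capacity × usable fraction ÷ current × 60. A sketch reproducing the 2200 mAh / 16 A example:

```python
def flight_time_min(capacity_mah: float, current_a: float,
                    usable_fraction: float = 1.0) -> float:
    """Full-throttle flight time in minutes for a given usable capacity."""
    capacity_ah = capacity_mah / 1000.0
    return capacity_ah * usable_fraction / current_a * 60.0

def as_min_sec(minutes: float) -> tuple[int, int]:
    """Split fractional minutes into (minutes, seconds)."""
    whole = int(minutes)
    return whole, round((minutes - whole) * 60)

print(as_min_sec(flight_time_min(2200, 16, 0.80)))  # (6, 36) - 80% of the pack
print(as_min_sec(flight_time_min(2200, 16, 1.00)))  # (8, 15) - full pack
```

The "realistic" times quoted for part-throttle flight depend on how average current scales with throttle, so they are not reproduced here.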
(Img #2)\nA \"Realistic\" flight time for a plane is also calculated which works out to be 11 minutes and 47 seconds using 80% capacity and 70% average throttle, very close to the actual flight time in the first example.\nFor helicopters and multicopters the flight time is very dependent on what style of flight, but you can use the calculator to calculate the expected flight time in a hover.\nFuture plans: Add wing loading, cubic wing loading calculator, + suggestions?\nDisclaimer: The formulas used in these calculations come from various internet sources, with some variations added from my own experience. Your results may vary! While a lot of effort was put into ensuring accuracy, this app is not intended to be professionally accurate.\nComments & suggestions very welcome.", "score": 27.522585217527457, "rank": 34}, {"document_id": "doc-::chunk-11", "d_text": "Figure (1) illustrates the capacity drop of 11 Li-polymer batteries that have been cycled at a Cadex laboratory. The 1,500 mAh pouch cells for mobile phones were first charged at a current of 1,500mA (1C) to 4.20V/cell, and then allowed to saturate to 0.05C (75mA) as part of the full charge saturation. The batteries were then discharged at 1,500mA to 3.0V/cell, and the cycle was repeated.
The capacity loss of the Li-ion batteries was uniform over the 250 cycles delivered, and the batteries performed as expected.\nFigure (1): capacity drop during cycling. Eleven new Li-ion packs were tested on a Cadex C7400 battery analyser. All packs started at a capacity of 88-94% and decreased to 73-84% after 250 full discharge cycles.\nCourtesy of Cadex\nAlthough a battery should deliver 100% capacity during the first year of service, it is common to see lower than specified capacities, and shelf life may contribute to this loss. In addition, manufacturers tend to overrate their batteries, knowing that very few users will do spot-checks and complain if low. Not having to match single cells in mobile phones and tablets, as is required in multi-cell packs, opens the floodgates for much broader performance acceptance. Cells with lower capacities may slip through the cracks without the consumer knowing.\nMuch like a mechanical device that wears out faster with heavy use, the battery's cycle count depends on the depth of discharge (DOD). The smaller the discharge (low DOD), the longer the battery will last. If at all possible, avoid full discharges and charge the battery more often between uses. Partial discharge on Li-ion is acceptable. This battery does not require complete discharge cycles as there is no memory effect. The exception may be a periodic calibration of the fuel gauge on a smart battery or intelligent device.\nTable (2) compares the number of discharge/charge cycles Li-ion delivers at various depth of discharge (DOD) levels before the battery capacity drops to 70%. All other variables such as charge voltage, temperature and load currents are set to default settings.
Each DOD level constitutes a full charge followed by a discharge to the indicated percentage level in the table.", "score": 27.34693739281078, "rank": 35}, {"document_id": "doc-::chunk-10", "d_text": "The number of bits of the analog-to-digital converter will depend on the particular application. In one embodiment, a 12 bit converter is utilized.\nIn one embodiment where the relationship between the internal battery impedance and capacity of battery 102 is complex and not easily represented by a mathematical function, the precharacterized relationship is preferably stored as data points in a memory. In one particular embodiment, the data is stored in one or more look-up tables. Alternatively, where the relationship may be easily represented by a mathematical function, an algorithm implementing the function may be utilized rather than one or more look-up tables. FIG. 5 illustrates one embodiment of a look-up table 500 containing the precharacterized data relating the internal battery impedance to the remaining battery capacity. Look-up table 500 can be stored in a memory device or a region accessible to battery capacity calculator 404. In one embodiment, look-up table 500 includes multiple look-up tables, each corresponding to a particular battery chemistry. In the illustrative embodiment shown in FIG. 5, there are three lookup tables 504, 512 and 514 representing precharacterized data for SLA, Lithium, and NiCd batteries, respectively.\nIn an alternative embodiment, the precharacterized data may also be arranged according to the difference in successively applied currents, Δi. In such an embodiment, individual or linked tables, each identified by a particular Δi, are included in table 500. This current can be different for some batteries produced by different manufacturers having different construction methods or having different battery chemistries. Different battery chemistry types have different load and current capabilities.
In addition, each look-up table can also be identified by a temperature or temperature range for which it contains data. Thus, a look-up table for a particular battery can be identified both by the temperature or temperature range and by its chemistry type. In the illustrative embodiment, each lookup table contains 3 columns of data. The first column, 506, represents the difference between the first and second voltage measurements. The second column, 508, represents the value of the internal battery impedance as determined by equation (1). The third column, 510, contains the precharacterized data of the remaining battery capacity associated with the internal battery impedance. The associated battery impedance is the associated Δv divided by the Δi for the particular table or list.", "score": 27.079715136629694, "rank": 36}, {"document_id": "doc-::chunk-1", "d_text": "- Lithium-ion batteries are cheaper to produce, and consequently also cheaper for consumers to purchase. They can potentially store 3 to 4 times more charge compared to other batteries of similar size. They also last longer over a period of time and can be recharged more times. There can however be problems storing the batteries: they do not like high temperatures, which could be a consideration if you are buying a device to take with you on your next trip to Mount Vesuvius. Lithium-ion batteries are cylindrical and resemble batteries in your TV remote. There can be one or multiple of these used in a power bank.\nBoth these batteries have internal capacity, measured in mAh, and nominal voltage measured in V.\nHow long will a power bank last?\nFiguring out how many charges a power bank will give you is not as simple as you might think. First, you need to understand all the various indications.\nWhat is mAh?\nThis tells you the number of hours the power bank can sustain the current. Normal smartphone adapters have a 1000mA output. Thus, an hour of charging will fill your phone with 1000mAh of charge.
But you are mistaken to think that a 5,000 mAh power bank can charge your phone 5 times.\nThe indication of mAh is merely the internal capacity of the battery. It is not what the power bank is able to deliver on output.\nWhat is volt?\nThe voltage of a power bank, or the nominal power, indicates the battery strength. Most power banks are 3.7 volts but the output voltage can be higher. All power banks have an output of 5V or higher.\nIt is important to note that when more than 1 Lithium-ion battery is used, the voltage does not increase. The mAh capacity increases as the batteries are connected in parallel and the capacity is added together. The voltage however stays consistent and does not accumulate.\nCalculating the actual capacity of a power bank\nIt is time to do some math.\nFirstly, the nominal voltage (for example 3.7V) needs to be converted to the output voltage of usually 5V.\nNext, one must also factor in that the power bank will not deliver 100% of its available charge. The circuit will use energy too and over time the capacity depletes. The quality of components in the circuit will determine the rate of energy loss, i.e.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-0", "d_text": "There are several characteristics of every battery that distinguish them from each other such as battery capacity, energy density, voltage stability, self-discharging, size, chemistry, and brand etc. Lithium-ion batteries are available in different sizes, one of which is AA size, which is a standard cell size. In this article, we're gonna be talking about the battery capacity of AA size Lithium ion batteries and how long they last.\nDo You Know How Many mAh a Lithium AA Battery Is?\nBattery capacity is defined as a measure of the charge a battery stores and is determined by the chemistry used to manufacture the battery.
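Stepping back to the power-bank calculation above (convert the 3.7 V nominal rating to the 5 V output, then apply an efficiency factor), a sketch in Python. The 90% efficiency and the 3000 mAh phone are illustrative assumptions, not figures from the text:

```python
def effective_capacity_mah(rated_mah: float, nominal_v: float = 3.7,
                           output_v: float = 5.0, efficiency: float = 0.9) -> float:
    """Charge actually deliverable at the 5 V output of a power bank."""
    return rated_mah * (nominal_v / output_v) * efficiency

usable = effective_capacity_mah(5000)  # 5000 * (3.7/5) * 0.9 = 3330 mAh
charges = usable / 3000                # vs. a hypothetical 3000 mAh phone battery
print(round(usable), round(charges, 2))
```

This is why a "5,000 mAh" power bank does not give five full phone charges.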
In other words, the battery capacity means the maximum amount of energy that can be taken out from the battery under certain specified conditions. However, the real energy storage capacity of the battery may differ significantly from the "nominal" rated capacity, as the battery capacity depends strongly on the past usage history, storage temperature, age, and the discharging or charging regimes of the battery.\nBattery capacity is expressed in mAh (milliamps × hours). For instance, if a battery has the capacity of 250 mAh and provides 2 milliamps average current to a load, the battery will last 125 hours, in theory. However in reality, the actual battery life depends on the way the battery is discharged. Discharging a battery at the manufacturer-specified rate normally helps the battery provide close to its nominal capacity.\nAfter doing some research, we've found that AA size lithium batteries of different brands have different capacities. AA Energizer Ultimate Lithium batteries have the highest capacity of all, at 3000 mAh. They proved unsurpassed in performance against a number of real-world usage situations such as remote controls, digital cameras and portable lights, after strict testing against competitors from around the world, and they now hold a Guinness world record title for the longest-lasting battery. Not only do they provide record-breaking performance, but they are also lightweight, leak resistant and have a 20-year shelf life.\nOn the other hand, there are AA size USB rechargeable Lithium-ion batteries that have a 1200 mAh capacity. They are commercially made for remote controls, toys, fans, flashlights, wireless keyboards and mice, alarm clocks, cameras etc. They have a built-in USB port that can be plugged into any USB port to get them charged.\nHow Much Longer Do Lithium AA Batteries Last?", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-0", "d_text": "Ahm.
Best bet is to go to Battery University, http://www.batteryuniversity.com\nfor more than you wanted to know about batteries.\nIn your case \" Lithium-ion is a low maintenance battery, an advantage that most other chemistries cannot claim. There is no memory and no scheduled cycling is required to prolong the battery's life. In addition, the self-discharge is less than half compared to nickel-cadmium,\nDespite its overall advantages, lithium-ion has its drawbacks.\nAging is a concern with most lithium-ion batteries and many manufacturers remain silent about this issue. Some capacity deterioration is noticeable after one year, whether the battery is in use or not. The battery frequently fails after two or three years. \"\nDon't worry about cycles or memory. Read the site, however, it is outstanding.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-6", "d_text": "Because you must have a battery that is powerful enough to deliver those amps without overheating, shutting down, blowing a fuse, etc.\nGenerally, a 10ah battery will work well with a 20 Amp controller or less. If you know some of the specs but not all of them, you can do the math but once again be sure you have the correct and honest data to begin with. Sometimes some controllers have \" Watt ratings\" while others will show \"Max Amps\" and this can lead to confusion.. When a controller is rated for Watts, you need to know if this is \"Maximum watts\" or \"Continuous watts\" with a higher peak. Your battery needs to safely handle the \"Peak\" or \"Max\" when needed.\nQuality: a battery pack is only as good as it's weakest cell, and as durable as the thing you put it in! Back in the old days (oh god.. Did I just write that ?.. urgh) getting a lasting pack was pure random luck and a rare thing at that. A pack that lasted a few seasons or survived a 6 month vacation on a bench without self-discharging to death was a rare thing. 
Battery packs and cells have improved greatly and today in 2017 we finally have reached a point in evolution that reliability and quality packs are readily available, it's the finer details that matter most, like getting one from a reputable vendor who will back a warranty and who sells enough of them quickly to keep recently built packs in stock.\nBMS and Chargers, to protect and charge your pack. Lithium batteries are light and durable but they must be used within their specified limits. In order to ensure a long life, battery packs should contain a BMS, aka: Battery Management System. It stands between the actual battery and the power wires, monitors all the voltages of cells within and also typically watches how many Amps are flowing. If any limits are reached the BMS should intervene by cutting off the power safely. In a perfect world, the BMS will sit there and do nothing but if you do ride until you use up all the energy available, it will shut you down and prevent any damage to the battery cells.", "score": 26.9697449642274, "rank": 40}, {"document_id": "doc-::chunk-3", "d_text": "Much of what is mistaken for the 'memory effect' is voltage depression,\nwhich is caused by long, continuous overcharging, which causes crystals to grow\ninside the cell. Fortunately both the 'memory effect' and voltage depression\ncan be overcome by subjecting the battery to one or more deep charge/discharge cycles.\nAnother term you will hear is 'cell\nreversal'. This can occur when a battery of cells is discharged below its safe\n1.0 volt per cell. During this discharge, differences between individual cells\ncan lead to one cell becoming depleted before the rest. When this happens, the\ncurrent generated from the remaining active cells will 'charge' the weakest\ncell, but in reverse polarity. This can lead to the release of gas and\npermanent damage to the battery pack.\nNiCads can short circuit due to the build up\nof crystals inside the battery.
The use of a fully-charged electrolytic\ncapacitor placed across the cell can effect a temporary cure. Over-discharging\nof batteries invites short circuiting. Batteries should be stored charged. A\nlifespan of 200 to 800 charges is typical for NiCad batteries.\nNickel metal hydride (NiMH)\nLike NiCads, nickel-metal hydride cells\nprovide 1.2 volts per cell. Battery makers claim that NiMH cells do not suffer\nfrom the 'memory effect' and can be recharged up to 1000 times.\nNiMH cells are not quite as suitable as NiCads for\nextreme current loads, but do offer a greater capacity in the same cell size. A\ntypical AA NiCad may have a 750 mAh, but a NiMH may provide 2400 mAh - three times the capacity.\nIf your style of portable operating involves going out for 3 or 4 hours and running around 5 watts output, NiMH cells are an excellent choice and are lighter than sealed lead acid.\nWhere to get them? 7.2 volt battery packs are often used for models. Two in series gives 14.4 volts, but you'll get over 16 volts immediately after charging. That's above what many commercial rigs are rated so use at your own risk.", "score": 26.9697449642274, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "It’s important to have a basic idea of how long your marine battery will last on a single charge, especially when it’s powering your trolling motor. 
If your battery dies unexpectedly, you could find yourself stranded out on the water.\nBut is there any way of knowing how long your battery may last?\nHow long will a marine battery run a trolling motor on a single charge?\nWhat factors can affect your battery’s run time?\nAnd is there any way to make a battery last longer between charges?\nIn this article, we’ll answer all of these questions and more.\nTable of Contents\n- How Long Does a Marine Battery Last on a Trolling Motor?\n- How Can I Calculate the Battery Run Time on a Trolling Motor?\n- Can I Make the Battery Last Longer?\nHow Long Does a Marine Battery Last on a Trolling Motor?\nThe question seems simple enough: how long can you expect your battery to last? How many hours of power will your trolling motor have per battery charge?\nUnfortunately, the simplest answer to this question is, it depends. Every battery and trolling motor is slightly different, and there are many different variables that may influence how long your battery is going to last.\nBecause of these variables, the same battery may run your trolling motor for a given amount of time on one boating trip and for a completely different amount of time on another trip.\nLet’s take a closer look at these variables.\nWhat Factors Impact the Run Time of a Marine Battery on a Trolling Motor?\nOne of the most important numbers when figuring battery run time is the amp hours, or Ah. This number should be given on all marine batteries. It denotes the amount of amperage the battery could supply over a given time period.\nFor example, let’s consider a marine battery with a 100Ah rating. Fully charged, and under perfect conditions, this battery could supply your trolling motor 100 amps of power for one hour, 50 amps for 2 hours, 25 amps for 4 hours, 10 amps for 10 hours, etc.\nAs you can see from the example above, how long the battery lasts depends on how many amps your trolling motor is consistently drawing from the battery. 
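The amp-hour arithmetic in the 100Ah example above is a single division, run time = amp-hours ÷ steady amp draw; a sketch:

```python
def run_time_hours(battery_ah: float, draw_amps: float) -> float:
    """Hours a fully charged battery can sustain a steady current draw
    (ideal conditions, as the text notes)."""
    return battery_ah / draw_amps

# The article's 100 Ah battery at several steady draws
for draw_amps in (100, 50, 25, 10):
    print(f"{draw_amps:>3} A draw -> {run_time_hours(100, draw_amps):g} h")
```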
The lower the amp draw, the longer the battery will last.\nMarine batteries come with different amp hour ratings. Some have 50Ah or 75Ah ratings, many have 100Ah or 120Ah ratings, and some even have 200Ah ratings.", "score": 25.65453875696252, "rank": 42}, {"document_id": "doc-::chunk-2", "d_text": "Ok, time to crunch some numbers and get down to the nitty gritty of it. All batteries and battery packs will have fine print, listing various things of high importance, put on the bifocals and squint, expect to find things like :\nVolts: Used to describe how fast electrons move, more voltage = more speed !\nAmps: How wide the road is, more lanes, more cars can pass at the same time side by side...\nWatts: The combination of Volts and Amps ( Volts X Amps = Watts )\nAmp Hours: Should always be listed, typically 10 to 20 Amp Hours ( abbreviated \" Ah \" ), a measure of the fixed number of Amps a battery can sustain for 1 hour ( the C rate ). Or, double the amps for half the time.. Or half the amps for two hours.. etc.\nWatt Hours: This is a far more accurate way to know how much usable energy is in a given battery pack ( abbreviated Wh ) when available, this is the number to look for! Also, you can translate it into how many watts, continuous, for 1 hour! A 500wh battery can deliver 500 watts for 1 hour or 1000w for 30 minutes.... or 250w for 2 hours .. etc.. Most ebikes do not use power at an exact level, continuously, so this does not directly translate into ride time, but you can quickly see how a larger battery with more energy (capacity) can deliver lower power levels for longer periods of time, and go further on a charge.\n|Capacity||Load||Run Time (hours)|\n|500 wh||250w||2 hours|\n|500 wh||500w||1 hour|\n|500 wh||1000w||30 minutes|\nA word of caution: some vendors are prone to bending the truth and \"over-promise\" when it comes to range expectations.
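The Wh relationships in the fine-print list above (Wh = V × Ah, run time = Wh ÷ load) can be sketched:

```python
def watt_hours(volts: float, amp_hours: float) -> float:
    """Pack energy: Wh = V x Ah."""
    return volts * amp_hours

def run_time_hours(wh: float, load_watts: float) -> float:
    """Hours a pack can sustain a constant load."""
    return wh / load_watts

# The 500 Wh table from the text
for load_watts in (250, 500, 1000):
    print(f"500 Wh @ {load_watts} W -> {run_time_hours(500, load_watts):g} h")
```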
Be sure to do some research before you buy, ask the right questions and buy from a vendor that provides range estimates in relation to your weight, bike, intended use and intended input.\nAh vs Wh: This can get confusing, but it is very important to understand the difference. Amp Hours (Ah) means nothing unless you factor in the voltage. Watt Hours (Wh) is far more important because it factors in the Voltage and the Amp Hours together and determines how far you might go on a full charge.", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-9", "d_text": "For rechargeables, it can mean either the length of time a device can run on a fully charged battery or the number of charge/discharge cycles possible before the cells fail to operate satisfactorily. For a non-rechargeable these two lives are equal since the cells last for only one cycle by definition. (The term shelf life is used to describe how long a battery will retain its performance between manufacture and use.) Available capacity of all batteries drops with decreasing temperature. In contrast to most of today's batteries, the Zamboni pile, invented in 1812, offers a very long service life without refurbishment or recharge, although it supplies current only in the nanoamp range. The Oxford Electric Bell has been ringing almost continuously since 1840 on its original pair of batteries, thought to be Zamboni piles.\nDisposable batteries typically lose 8 to 20 percent of their original charge per year when stored at room temperature (20–30 °C). This is known as the "self-discharge" rate, and is due to non-current-producing "side" chemical reactions that occur within the cell even when no load is applied.
The rate of side reactions is reduced when batteries are stored at lower temperatures, although some can be damaged by freezing.\nOld rechargeable batteries self-discharge more rapidly than disposable alkaline batteries, especially nickel-based batteries; a freshly charged nickel cadmium (NiCd) battery loses 10% of its charge in the first 24 hours, and thereafter discharges at a rate of about 10% a month. However, newer low self-discharge nickel metal hydride (NiMH) batteries and modern lithium designs display a lower self-discharge rate (but still higher than for primary batteries).\nInternal parts may corrode and fail, or the active materials may be slowly converted to inactive forms.\nPhysical component changes\nThe active material on the battery plates changes chemical composition on each charge and discharge cycle; active material may be lost due to physical changes of volume, further limiting the number of times the battery can be recharged. Most nickel-based batteries are partially discharged when purchased, and must be charged before first use. Newer NiMH batteries are ready to be used when purchased, and have only 15% discharge in a year.\nSome deterioration occurs on each charge–discharge cycle. Degradation usually occurs because electrolyte migrates away from the electrodes or because active material detaches from the electrodes.", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-0", "d_text": "(Last Updated On: March 3, 2021)\nWhat is a Battery?\nA battery is a device for electrical storage. Just as a water tank stores water for potential use, batteries do not produce electricity; they store it. Electrical energy is stored or released as the chemicals in the battery change. This process can be replicated several times in rechargeable batteries. Batteries are not 100 percent efficient – when charging and discharging, some energy is lost as heat and chemical reactions.
If 1,000 watt-hours are drawn from a battery, it can take 1,050 to 1,250 watt-hours or more to recharge it completely.\nVoltage & State Of Charge Of Deep Cycle Battery\nIf you are the owner of a mobile or off-grid solar energy system, calculating how much charge you have left in your deep cycle battery bank is one of the most obsessive pastimes. This is often referred to as the ‘state of charge’. Most individuals see the battery voltage as a measure of this. While not entirely accurate, if your solar regulator or charge controller does not have a voltage reading, the best way to assess this is with a multimeter. The state of charge varies slightly between types of sealed lead acid, flooded, gel, and AGM deep cycle batteries and between brands. The weather can also play a part.\nBattery voltage and state of charge table\nFor each type of battery, the table below shows the voltage and estimated state of charge.\nNote: The figures are based on open-circuit readings. That is when there is no load on the deep cycle battery, and it hasn’t been under load for a few hours. In a battery-powered device that is in continuous use, this scenario cannot occur very often. So, the best time to take a reading is early in the morning before the sun strikes your panels, at night when the sun is setting, or when it’s very overcast. If you take a reading when the battery is being charged, you could read anything up to 14.5 volts.\nTake the reading while the panels are not exposed to light; since power is presumably being drawn at that moment, you can treat the voltage reading as a conservative estimate. Once all the load is removed from a battery, the voltage will dramatically bounce back up.", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-4", "d_text": "Now, Severson and her colleagues report in the journal Nature Energy that machine learning can help to predict battery life by creating computer models.
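A voltage-to-state-of-charge table like the one described can be interpolated in code; the voltage points below are illustrative values for a 12 V lead-acid bank, not the table from the text (real figures vary by battery type and brand):

```python
# Illustrative resting-voltage -> state-of-charge points for a
# 12 V lead-acid bank; real tables vary by battery type and brand.
VOLTAGE_SOC = [(11.8, 0.0), (12.0, 25.0), (12.2, 50.0), (12.4, 75.0), (12.7, 100.0)]

def state_of_charge(volts):
    """Linearly interpolate state of charge (%) from a resting voltage."""
    if volts <= VOLTAGE_SOC[0][0]:
        return 0.0
    if volts >= VOLTAGE_SOC[-1][0]:
        return 100.0
    for (v0, s0), (v1, s1) in zip(VOLTAGE_SOC, VOLTAGE_SOC[1:]):
        if v0 <= volts <= v1:
            return s0 + (s1 - s0) * (volts - v0) / (v1 - v0)
```

As the text notes, this only makes sense on an open-circuit reading; a battery under load or under charge will read artificially low or high.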
The published algorithms use data from early-stage charge and discharge cycles.\nNormally, a figure of merit describes the health of a battery. It quantifies the ability of the battery to store energy relative to its original state. The health status is 100% when the battery is new and decreases with time. This is similar to the state of charge of a battery. Estimating the state of charge of a battery is, in turn, important to ensure safe and correct use. However, there is no consensus in industry or science as to what exactly a battery's health status is or how it should be determined.\nThe state of health of a battery reflects two signs of aging: progressive capacity decline and impedance increase (a measure of electrical resistance). Estimates of the state of health of a battery must therefore take into account both the drop in capacity and the increase in impedance.\nLithium-ion batteries, however, are complex systems in which both capacity fade and impedance increase are caused by multiple interacting processes. Most of these processes cannot be studied independently since they often occur simultaneously. The state of health can therefore not be determined from a single direct measurement. Conventional health assessment methods include examining the interactions between the electrodes of a battery. Since such methods often intervene directly in the battery itself, they render it useless, which is hardly desirable.\nA battery's health status can also be determined in less invasive ways, for example using adaptive models and experimental techniques. Adaptive models learn from recorded battery performance data and adjust themselves. They are useful if system-specific battery information is not available. Such models are suitable for the diagnosis of aging processes.
The main problem, however, is that they must be trained with experimental data before they can be used to determine the current capacity of a battery.\nSeverson and her colleagues have created a comprehensive data set that includes the performance data of 124 commercial lithium-ion batteries during their charge and discharge cycles. The authors used a variety of rapid charging conditions with identical discharge conditions. This method produced a wide variation in battery lifetimes. The data covered a range of 150 to 2,300 cycles.\nThe researchers then used machine learning algorithms to analyze the data, creating models that can reliably predict battery life.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-13", "d_text": "Every 0.10V drop below 4.20V/cell roughly doubles the cycle count but stores less capacity. The higher the voltage above 4.20V/cell, the shorter the longevity.\nGuideline: every 70mV drop in charge voltage reduces the usable capacity by approximately 10%.\nTable (4): discharge cycles as a function of charge voltage limit (voltage steps inferred from the doubling rule above, with 4.20V/cell as the 300 – 500 cycle reference):\n4.30V/cell: 150 – 250 cycles\n4.20V/cell: 300 – 500 cycles\n4.10V/cell: 600 – 1,000 cycles\n4.00V/cell: 1,200 – 2,000 cycles\n3.90V/cell: 2,400 – 4,000 cycles\nFor safety reasons, many lithium-ions cannot exceed 4.20V/cell. While higher voltage boosts capacity, exceeding this voltage shortens service life and compromises safety. Figure (5) demonstrates cycle count as a function of charge voltage. At 4.35V, the cycle count of a regular Li-ion is cut in half.\nFigure 5: Impact on cycle life at high charge voltages. Higher charge voltages increase capacity but decrease cycle life and compromise safety.\nSource: Choi et al. (2002)\nFor a particular application, besides setting the most suitable voltage thresholds, a regular Li-ion should not remain at the high-voltage ceiling of 4.20V/cell for prolonged periods.
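The two guidelines above (each 0.10V drop below 4.20V/cell roughly doubles cycle life; each 70mV drop trims usable capacity by about 10 points) can be expressed directly in code; the 400-cycle baseline is an assumed midpoint of the 300 – 500 range, so treat the outputs as rough estimates:

```python
def cycle_life_estimate(charge_v, base_cycles=400):
    """Rule of thumb: each 0.10 V below 4.20 V/cell roughly doubles
    cycle life (assumed ~400-cycle baseline at 4.20 V/cell)."""
    return base_cycles * 2 ** ((4.20 - charge_v) / 0.10)

def usable_capacity_pct(charge_v):
    """Rule of thumb: each 70 mV below 4.20 V/cell trims usable
    capacity by roughly 10 percentage points."""
    return 100 - 10 * (4.20 - charge_v) / 0.07
```

Charging to 4.10V/cell instead of 4.20V/cell therefore trades roughly 15% of capacity for about twice the cycle count.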
A Li-ion battery should not remain at the high-voltage ceiling of 4.20V/cell for an extended time. Once the battery is full, the charger turns off the charge current and the battery voltage returns to a more natural level.\nTechnical facts to consider when dealing with lithium-based batteries\n- Environmental conditions, not cycling alone, govern the longevity of lithium-ion batteries. Keeping a fully charged battery at high temperatures is the most dangerous situation, which may cause the battery to explode.\n- Battery packs do not die suddenly, but the runtime gradually shortens as the capacity fades.\n- Lower charge voltages prolong battery life.\n- The life of a battery that spends most of its time connected to the AC grid, such as a laptop battery, could be prolonged by lowering the charge voltage. To make this feature user-friendly, the device should offer a long-life mode that keeps the battery at 4.05V/cell and provides a capacity of about 80 percent.", "score": 24.94954237836993, "rank": 47}, {"document_id": "doc-::chunk-0", "d_text": "Life of primary batteries\nEven if never taken out of the original package, disposable (or "primary") batteries can lose 8 to 20 percent of their original charge every year at a temperature of about 20°–30°C. This is known as the "self discharge" rate and is due to non-current-producing "side" chemical reactions, which occur within the cell even if no load is applied to it. The rate of the side reactions is reduced if the batteries are stored at low temperature, although some batteries can be damaged by freezing. High or low temperatures may reduce battery performance. This will affect the initial voltage of the battery. For an AA alkaline battery this initial voltage is approximately normally distributed around 1.6 volts.\nLife of rechargeable batteries\nRechargeable batteries traditionally self-discharge more rapidly than disposable alkaline batteries; up to three percent a day (depending on temperature).
However, modern lithium designs have reduced the self-discharge rate to a relatively low level (but still poorer than for primary batteries). Due to their poor shelf life, rechargeable batteries should not be stored and then relied upon to power flashlights or radios in an emergency. For this reason, it is a good idea to keep alkaline batteries on hand. NiCd batteries are almost always "dead" when purchased, and must be charged before first use.\nAlthough rechargeable batteries may be refreshed by charging, they still suffer degradation through usage. Low-capacity Nickel Metal Hydride (NiMH) batteries (1700-2000 mAh) can be charged for about 1000 cycles, whereas high capacity NiMH batteries (above 2500 mAh) can be charged for about 500 cycles. Nickel Cadmium (NiCd) batteries tend to be rated for 1,000 cycles before their internal resistance increases beyond usable values. Normally a fast charge, rather than a slow overnight charge, will result in a shorter battery lifespan. However, if the overnight charger is not "smart" (i.e. it cannot detect when the battery is fully charged), then overcharging is likely, which will damage the battery. Degradation usually occurs because electrolyte migrates away from the electrodes or because active material falls off the electrodes. NiCd batteries suffer the drawback that they should be fully discharged before recharge.
Without full discharge, crystals may build up on the electrodes, thus decreasing the active surface area and increasing internal resistance.", "score": 24.75021726226365, "rank": 48}, {"document_id": "doc-::chunk-2", "d_text": "Be prepared to discuss:\n- The size of the space available for the battery\n- The wattage you calculated above\n- Voltage options that work for you and your controlling devices (if necessary)\n- The minimum endurance (in hours) that you need the battery to produce.\nThere are thousands of combinations and technologies available, so now that you are armed with the information you need to provide your battery specialist, selecting the right battery should be much easier. If you would like to discuss your application requirements further, please call to speak to one of our Application Engineers at 864-295-4811.", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "RC Electric Calculator App for Android\nRC E-calc Pro - $1.99\nIt's basically an 8-in-1 calculator for electric powered model airplanes, helicopters, multicopters, cars, trucks and buggies, in fact anything that uses rechargeable* batteries. (*As long as the voltage is known then any rechargeable battery including LiPo, LiFe, NiCd and NiMh batteries can be used for some calculations). US imperial units and metric units are catered for!\nOne of the units used for the calculations in the app is the mAh (milliamp-hours). All batteries have a mAh or Ah rating, this is the capacity of the battery or how much energy is stored. After you've recharged the battery your digital charger will show you how many mAh has been put back into the pack i.e. how much was used. This number is used to calculate a number of things in the app, like expected flight time using 80% of the pack or the average current used during the flight or run.\nIf you know the current draw of your system additional calculations can be made.
In the calculator all the calculations are made "on-the-fly" so no buttons need to be tapped, just input your values and the calculations will be made when enough data is available.\nThe images should explain how it all works but feel free to ask any questions.\nUse RC E-Calc Pro to calculate the following:\n- Average Current Used by the aircraft during a flight/run.\n- Average Discharge Rate (C-rate) of the LiPo battery during a flight.\n- Expected Flight Time using 100% capacity of a battery.\n- Expected Flight Time using 80% capacity of a battery.\n- Realistic Flight Time using 80% capacity of a battery and 70% average throttle.\n- Power in Watts from Volts & Amps.\n- Volts from Power & Amps.\n- Current in Amps from Power & Volts.\n- Static thrust of a propeller in pounds or kilograms.\n- Power to Weight Ratio in Watts-per-pound or Watts-per-100gram.\na) Fly (or drive) for 11 minutes and 20 seconds, then recharge your 2200mAh LiPo, when finished your charger states that 1720mAh was returned to the battery.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-7", "d_text": "No battery lasted longer than 1.5 years when used outdoors. However, batteries used indoors lasted much longer.\nStandard lead acid batteries are supposed to have a life of about 1750 to 2000 cycles if they are charged at 70% utilisation. This means if the drain from a 65-ampere hour battery doesn't exceed 350 watts it should give a life of about 1750 cycles. In other words towards the end of the cycle it will hold only about 200 watts of useful power. I tried that and measured capacities and I must conclude that in a hot climate like the UAE it will be too much to expect a life of 1000 cycles.\nHowever, I must observe they last much longer at Toronto and standard auto batteries are guaranteed for 72 months plus.\nI also kept a record of battery life in automobiles.
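Worked example a) above can be reproduced in a few lines; the helper names are hypothetical, but the arithmetic mirrors what the app computes:

```python
def average_current_a(mah_returned, flight_seconds):
    """Average current (A) = charge returned (Ah) / flight time (h)."""
    return (mah_returned / 1000) / (flight_seconds / 3600)

def average_c_rate(current_a, capacity_mah):
    """C-rate = current relative to the pack's one-hour rating."""
    return current_a / (capacity_mah / 1000)

def flight_time_min(capacity_mah, current_a, usable_fraction=0.8):
    """Expected flight time using only part of the pack's capacity."""
    return (capacity_mah * usable_fraction / 1000) / current_a * 60

# Example a): 2200 mAh pack, 11 min 20 s flight, 1720 mAh returned
i_avg = average_current_a(1720, 11 * 60 + 20)   # ~9.1 A
c_avg = average_c_rate(i_avg, 2200)             # ~4.1 C
t_80 = flight_time_min(2200, i_avg)             # ~11.6 min at 80%
```

So the 11:20 flight used almost exactly the recommended 80% of the pack at an average discharge of about 4C.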
Maintenance free batteries (even those that come as OEM on autos like Mercedes and made in Germany) do not last longer than 2 years.\nNepal and Afghanistan are much colder than Orissa and batteries definitely will last longer.\nVery good make auto batteries in Orissa do not last three years but even rebuilt batteries used indoors for emergency lighting last up to 5 years.\nNi-Cd batteries are usually guaranteed for 700 to 1000 cycles but I never got more than 100 cycles in the UAE.\nThere is a misnomer in the Ni-Cd field. Battery prices vary by as much as 1:3. A standard 'AA' battery holds 500 mAh. Nickel and cadmium being costly, some manufacturers make them at capacities as low as 100 mAh and sell for less.\nZinc Manganese Alkaline batteries are reusable (not rechargeable) but need a different charger than the ones that charge Ni-Cd batteries. If they are charged beyond 1.65 volts they are likely to leak and fail.\nIn 1993 I purchased AA alkaline batteries at Bhubaneswar for Rs.18 each so it is no wonder that they are available for Rs. 30 these days.\nAfter Rayovac tried and failed to patent the process the technology became popular. Rayovac claimed 25 recharges from fully dead and lives of about 1750 cycles under certain conditions. However, ordinary alkaline batteries can be successfully reused 10 times over.\nA suggestion: Pure Energy sells rechargeable alkaline batteries and a charger, available at most Walmarts.", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-1", "d_text": "All of the details that we provide with regard to fitment are supplied by the manufacturers, and although we do try to keep this updated, we recommend as a final check comparing your original battery with the one shown. With over twenty thousand different batteries it is difficult to keep our records fully up to date.
We do try but new fitments come on stream every week due to the launch of new products.\nmAh is the rating that estimates the amount of charge that a battery will hold, or how much current it will provide for how long. The higher this rating, the longer the runtime.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "One of the most frequently asked questions today by innovators and entrepreneurs looking to design the perfect consumer comfort product is: can I power my gadget with a battery? This post will help you understand the requirements and challenges of using battery power to run your heated device. It all boils down to the questions: how big, how hot and how long? Since the choices for batteries are virtually limitless, the purpose of this post is to arm you with the primary information you must have in order to consult a battery specialist, who can help you with your selection.\nLet’s review the basics. Heat generation is a function of watt density, ambient conditions and thermal losses (or gains). Watt density is the amount of wattage produced divided by the area producing the wattage, referred to most frequently as watts per square inch.\nA real world example: I’ve developed a mobile warming tray and now I want to sell it to Michigan fans for football games. The tray is 8 inches x 8 inches and uses polymer thick film heater technology. I want the heater to get to about 165°F and be able to run for about two and a half hours. It will be insulated, have a thermostat and needs to run off a battery. What now?\nSTEP 1 – Set a target temperature\nWhen it is all said and done, establishing the maximum operating temperature of the item you are designing is the primary driver in evaluating your options. You don’t need to be dead on with this, but the more variables you consider, the more accurately you will be able to predict the outcome.
Will there be thermal influences such as insulation, air flow, or large thermal masses adding or taking away from the heater’s capabilities?\nFor our example, let’s choose a comfortable operating temperature of 60 degrees.\nSTEP 2 – Estimate the Wattage\nAfter you have determined the temperature you would like to achieve in your device, you can determine the watts per square inch that you will require by conducting some simple tests (refer to our blog post “How to determine the watt density required in my application” for instructions on conducting a simple test for this).\nAnother way to get a very general idea about what wattage you may need is to look at the chart below, select a desired operating temperature and note the corresponding watt density. Note that the chart depicts heat output in open air on aluminum, so consider your environment and adjust accordingly.", "score": 24.345461243037445, "rank": 53}, {"document_id": "doc-::chunk-0", "d_text": "Maintaining and Troubleshooting Your Laptop Battery\nThe actual life of a laptop battery will vary with computer usage habits. For most users, it is not uncommon to experience differences in battery life of anywhere from just under one hour to over two hours in each sitting. If you are experiencing shorter battery life cycles, say 10 to 15 minutes, it may not yet be time to order that new battery. There are several factors to take into consideration when determining if the time has come to replace your battery. This information may also apply to that new battery that you have recently purchased, that has been giving you fits. The two primary things to consider when troubleshooting battery problems are Usage Habits and Battery Memory.
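Once a watt density is chosen, sizing the battery for the warming-tray example reduces to simple arithmetic; the 0.5 W/in² figure, the 20% margin and the 12 V battery below are illustrative assumptions, not values from the chart:

```python
def required_watt_hours(watt_density, area_sq_in, hours, margin=1.2):
    """Battery energy needed: watt density x area x runtime, plus margin."""
    watts = watt_density * area_sq_in
    return watts * hours * margin

def amp_hours_needed(watt_hours, battery_voltage):
    """Convert the energy budget to amp hours at the battery's voltage."""
    return watt_hours / battery_voltage

# Hypothetical 8 x 8 in tray at an assumed 0.5 W/sq-in for 2.5 hours
energy_wh = required_watt_hours(0.5, 8 * 8, 2.5)   # 96 Wh with 20% margin
ah_at_12v = amp_hours_needed(energy_wh, 12)        # 8 Ah at 12 V
```

Numbers like these are exactly what a battery specialist will ask for in the conversation described below.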
We will cover both in their complexities in just a moment, but first, let us take a look at what you should expect from your battery's life cycle.\nNiMH batteries usually last 1.5 to 2.5 hours.\nLiIon batteries usually last 2.0 to 3.0 hours.\nThese are average results and the results will vary greatly depending on your system's conservation settings, the temperature of the room and the climate that you are operating your computer in. As a general rule, your Lithium Ion battery will last much longer than your standard Nickel Metal Hydride battery.\nNow let's take a look at the various usage habits to consider when troubleshooting your laptop's battery. These processes are very similar to the way that your portable stereo uses batteries ... just think how much faster your stereo eats batteries when you are playing the CD or the tape deck, as opposed to when you are just playing the radio.\nThe more you use physical devices --- which require more electricity to operate --- the more of the battery's power you can expect to consume. The devices that create a larger power drain are the hard drive, the floppy drive and the CD-ROM.\nWhen the computer is able to use its physical memory resources to store information, the computer will use less of the battery's power, since the process is mostly electrical in nature. However, when the processes you are using exhaust the physical memory resources available to your system, the system will turn to virtual memory to continue the process at hand.
Virtual Memory is designed to extend system memory resources by building a memory swap file on the hard drive, and then transferring needed information between the hard drive and the physical memory as required.", "score": 23.712301816794184, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "In the measurement of battery technology, there are four common methods (open-circuit voltage measurement, Coulomb counting, impedance measurement, and an integrated look-up table method); usually a combination is used, with one main method supported by the others.\nThe first is open-circuit voltage measurement: the battery voltage is measured at rest to calculate the remaining capacity. Because the relation between a resting lithium-ion battery's voltage and its remaining capacity is non-linear, the measured value is not accurate; nevertheless, the vast majority of mobile phone batteries are estimated using this method.\nThe second is Coulomb counting: the battery's charge and discharge current is measured, and the product of current and time is integrated to obtain the energy put into and drawn out of the battery. The Coulomb method is an accurate way of calculating remaining power.\nThe third is impedance measurement: the battery's internal resistance is measured to derive the remaining capacity.\nThe fourth is a comprehensive look-up table method: by building a table of voltage, current, temperature and other parameters, you can look up the remaining battery capacity.", "score": 23.030255035772623, "rank": 55}, {"document_id":
"doc-::chunk-0", "d_text": "Assuming a typical lead-acid, 12 V car battery (typically at 13 V or so fully charged), and that it takes roughly 500 A over 3 seconds to start an engine, how long will it take to recharge the battery at any given charge rate?\nHere's my attempt from what I remember about physics:\n12.8 V * 500 A = 6400 W\nOver 3 seconds that's 19,200 joules.\nSo, in a perfect world where all the current goes right back into the battery and whatnot, how long does it take to regain all my joules and put them back in my battery?\nGiven a 2A charge rate:\n14 V (output of charger?) * 2 A = 28 watts\nHere's where I'm a little shaky. What's next? Divide the joules by the wattage to get time? Seems like it:\n19,200 joules / 28 watts = 11.4 minutes.\nThat's it? 11.4 minutes at 2 A and all 19,200 joules are back? Seems hard to believe. My charger also has a 10A setting. So that means in about 2.5 minutes, it'll be \"recharged\".\nSo, are my assumptions correct? Do you really just use the charging voltage to calculate this? It seems like you would need to put the charging voltage in relation to the battery's capacity/voltage/whatever.", "score": 23.030255035772623, "rank": 56}
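The poster's arithmetic is essentially right for the lossless case; a small sketch that also adds a charge-efficiency factor (the 70% figure is an illustrative assumption, since lead-acid charging is well below 100% efficient):

```python
def start_energy_joules(volts, amps, seconds):
    """Energy drawn during cranking: V x A x t."""
    return volts * amps * seconds

def recharge_time_min(energy_j, charger_v, charger_a, efficiency=1.0):
    """Minutes to return the energy at the charger's power, with an
    optional charge-efficiency derating."""
    return energy_j / (charger_v * charger_a * efficiency) / 60

e = start_energy_joules(12.8, 500, 3)        # 19,200 J
t_ideal = recharge_time_min(e, 14, 2)        # ~11.4 min, lossless
t_real = recharge_time_min(e, 14, 2, 0.7)    # ~16.3 min at an assumed 70%
```

The surprising part of the question is real: a 3-second crank only removes a few watt-hours, which is a tiny fraction of a car battery's total capacity.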
Of these types, lithium batteries generally last the longest while lead-acid batteries typically need more frequent recharging.\nOf course, it also depends on the quality of the battery. A well-made lead acid battery from an established and trusted brand may perform better than a cheap, low-quality lithium battery.\nRegardless of the type of battery, it will lose its ability to hold a charge over time. Older batteries tend to develop internal resistance that makes them more difficult to charge, and when in use, they may discharge more quickly.\nIf you have an older battery, you can expect that it won’t last as long as it did when it was newer. The problem will only get worse, so it may be a good idea to keep a battery charger on board your boat, just in case the battery dies sooner than you’re expecting.\nSome battery types require more maintenance than others, but all types should be properly cleaned and cared for between uses. Stay up to date on any specific maintenance that needs to be done, such as refilling the fluid chamber on lead-acid batteries.\nProper maintenance will extend the life of your battery, thus improving its ability to hold a charge. Batteries kept in proper working order will not discharge as quickly as batteries that are not taken care of.\nOperating your trolling motor in extreme temperatures, high winds, and choppy waters will force your battery to work harder, thus causing it to discharge faster.\nWhat’s more, rough weather conditions may cause your battery to be rained on or splashed with sea spray, which can cause damage if the water gets beneath the battery’s outer casing.\nBatteries hold a charge for longer in calm waters and mild temperatures.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-1", "d_text": "Figure 2 illustrates the full-discharge and full-charge flags.\nFigure 2: Full-discharge and full-charge flags. Calibration occurs by applying a full charge, discharge and charge. 
This is done in the equipment or with a battery analyzer as part of battery maintenance.\nBattery analyzers serve as a valuable tool to calibrate a smart battery. An analyzer fully charges the battery and then applies a controlled discharge that provides the all-important capacity readings of the chemical battery. This discharge measurement is a truer reading than what coulomb counting provides, since coulomb counting only captures past discharge events of the digital battery.\nHow often should a battery be calibrated? The answer depends on the application. For a battery that is in continued use, a calibration should be done once every 3 months or after 40 partial cycles. If the portable device applies a periodic full deep discharge of its own accord, then no additional calibration should be needed.\nWhat happens if the battery is not calibrated regularly? Can such a battery be used with confidence? Most smart battery chargers obey the dictates of the chemical battery rather than the digital battery, so there are no safety concerns. The battery should function normally, but the digital readout may become unreliable.\nSome smart batteries feature impedance tracking. This is a self-learning algorithm that reduces or eliminates the need to calibrate. If calibration is required, however, several cycles instead of only one may be needed to achieve the same result as with a standard system.\nThe accuracy between the chemical and digital battery is measured by the Max Error. Max Error stands for “maximum error” and is presented as a percentage. A low reading indicates good accuracy, and as the precision diminishes with partial cycles, the Max Error number increases steadily. This supervisory watchdog can be compared to a medical doctor who measures a medical condition by a number.\nSome manufacturers recommend calibration at a Max Error of 8 percent; readings above 12 percent may trigger an alarm, and 16 percent could render the battery unserviceable.
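The Max Error thresholds just mentioned can be sketched as a simple classifier; the cut-offs below are the illustrative ones from the text, and every manufacturer follows its own recommendation:

```python
def max_error_action(max_error_pct,
                     calibrate_at=8, alarm_at=12, unserviceable_at=16):
    """Map a Max Error reading (%) to a maintenance action.

    Thresholds are illustrative examples, not a standard.
    """
    if max_error_pct >= unserviceable_at:
        return "unserviceable"
    if max_error_pct >= alarm_at:
        return "alarm"
    if max_error_pct >= calibrate_at:
        return "calibrate"
    return "ok"
```

A fleet-maintenance tool could run a check like this against each smart battery's reported Max Error to schedule calibrations.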
No unified standard exists to determine what Max Error level requires service or what constitutes an error; every battery manufacturer follows its own recommendation.\nThe SMBus system provides a wealth of information that includes battery manufacturing date, battery model and serial number, capacity, temperature and estimated runtime, as well as voltages down to the cell levels. It is an engineer's delight to have all this data in a table, but the fine print may confuse the user more than provide help.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-0", "d_text": "Do you ever wonder how long a laptop battery lasts? Once fully charged, a laptop typically provides 4 to 5 hours of battery power. This is affected by a number of other factors, such as which programs are being run on the laptop.\nMemory-intensive and highly demanding programs reduce the life of the battery. Simpler tasks like reading email or running a word processor do not eat a lot of battery life.\nThis also makes the battery last longer. Reducing the screen brightness and resolution while using the laptop also has a dramatic effect on the run time of the battery.\nWith age and use, the battery degrades over the years. The charge capacity decreases as the laptop is used for long periods. The life of any battery runs for a year or two before it dies out entirely.\nIn the early stages, a battery that runs 4-5 hours comes down to 1-2 hours and then demands a continuous charge. If the battery is unplugged at the right time, then overcharging will not ruin the life of the battery.\nWhat Makes a Battery Die?\nThe way in which the laptop is used makes a lot of impact on the life of the battery. Some habits that affect the battery negatively are:\n・Running too many programs can lead to draining the battery.
The laptop often runs too many programs, and the problem arises when programs keep running in the background while other programs run in the foreground.\n・Using the laptop at high brightness. High brightness drains the battery quickly and reduces the battery life.\n・Listening to continuous music when using the laptop. It will come as no surprise that many people listen to music while working on the laptop. The speaker drains a lot of power from the system, ultimately reducing the battery life.\n・Leaving apps running even when not using them. How often do we leave Wi-Fi and Bluetooth on when they are not in use? These apps eat a lot of power even when running in the background.\n・Not using power-saving mode when needed. When it is appropriate, use these modes so that the battery can go into a saving mode and not run out of energy.\n・High temperature has a negative effect on the battery. Exposing the battery to extreme heat leads to major problems. Do not leave the laptop charging when it is not in use. Charge the laptop only when the battery is close to 0%.", "score": 23.030255035772623, "rank": 59}, {"document_id":
Results will automatically generate every time a value changes and there is enough information to calcualte results.\nThe C-rate will be used down below at pack level calculations, so make sure you fill this section out.\nPack level information\nEnter information related to your up-and-coming pack to get all kinds of information on the pack.\nThe C-rate, voltage, and capacities from the single-cell step will be used to calculate information in this step. Make sure that you fill out the fields above to get accurate results in this section.\nVirtual battery life estimator\nThis section allows you to get an idea of approximately how long the pack you are building will be able to run.\nIf you plan on running something that consumes 1000W, you can now figure out how long the pack will last while providing that power.\nPack weight and cell cost\nThis section estimates the cost and weight of the pack based on cell count, single cell weight, and cost per cell.\nThe series and parallel information from the above step are used to calculate this information, so make sure you fill out the above step first.", "score": 21.695954918930884, "rank": 60}, {"document_id": "doc-::chunk-2", "d_text": "A busy nurse in a hospital, the policeman on duty and the solider in combat has only one question: “Will the battery last for my mission?” Figure 3 illustrates a screenshot of the data stored in an SMBus battery.\nFigure 3: Uinversal screenshot of SMBus battery. Data is organized in tables to assist analysis, a format that is less suited for the everyday battery user. Access is by a software tool.\nSource: Texas Instrument\nOf special interest in terms of battery state-of-health (SoH) is full charge capacity (FCC), coulomb count that is hidden in the table among tons of other information. FCC can be used with reasonable accuracy to estimate battery SoH without applying a full discharge cycle to measure capacity. 
Best accuracies are achieved if the battery is being cycled with a full charge and an occasional deep discharge. If used sporadically, a deliberate calibration involving a full discharge/charge cycle will be needed from time to time to maintain accuracy.\nLast updated 2016-09-02", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-6", "d_text": "- Disconnect the feeder once the battery is full / at the end of the maintenance period.\nHow Many Volts is a Motorcycle Battery?\nThe standard motorcycle battery is listed as a 12V power bank. However, this is only a nominal rating. The battery itself should read no less than 12.8V when fully charged. Still, discharge and voltage do not correlate in a linear way. For example, if the voltage drops to 12.5V, it means the cell is half-full and needs recharging.\nIf the voltage doesn’t fall below 12.0V, the battery is still operational but should be fed as soon as possible, since further discharging might lead to sulfation and capacity loss. If the voltage is less than 12.0V, cranking power may decrease and the battery may hold its charge worse than before. If it drops below 10.8V, the battery could be revived, but most likely for one cycle only, after which it becomes non-operational.\nHow Long does a Motorcycle Battery Last?
A Quick Overview\nManufacturers state different periods of service life for their batteries. This depends on the cell type:\n- Conventional cells. Last about 3-4 years, 7 if used correctly.\n- AGM. Have a longer life, about 5-7 years.\n- Gel. Can last up to 10 years.\n- Lithium. Can reach as much as 15 years (according to advertising).\nThe life of the actual battery can differ, since many factors affect its operation and working life:\n- Usage. Constant usage prolongs service life.\n- Weather. A mild climate without much heat is preferable.\n- Maintenance. Batteries maintained through idle periods last longer.\nHow to Test a Motorcycle Battery Properly?\nTesting a motorcycle battery depends on its type. The following tests can be performed:\n- Estimation of specific gravity. This can be done for conventional lead-acid batteries with a hydrometer. The power bank should be fully charged before testing. Readings of 1.265-1.280 show a full battery.\n- Voltage estimation. Sealed lead-acid and lithium batteries can be tested for voltage level. For that, various instruments are used (voltmeter, conductance tester, etc.)", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "Lithium-ion batteries have a protection circuit that prevents “abuse” of the battery. This means that, among other things, the battery does not become unusable if it is “overloaded”, but that mechanism does not always work. When the batteries are “asleep”, the chargers apply a small boost charge (pulses of very short duration but very high voltage) to activate that protection circuit, and if one of the cells is detected to begin charging, the normal process starts. Unfortunately, this mechanism does not always work.\n4. How is the state of a battery measured?\nThis concept from the previous section leads us to another important one: how to describe the state and condition of a battery.
There are several ways to present it, but the most widespread is the state of charge (State of Charge, SOC), which is a percentage of the maximum capacity. Another way of expressing it is the depth of discharge (Depth of Discharge, DOD), which is also a percentage of the maximum capacity: it is usually accepted that 80% DOD means having entered a deep discharge phase, and this is dangerous.\nOther measures are the terminal voltage (which varies with the SOC and the charge and discharge current), the open-circuit voltage (the one that exists between the terminals of the battery when no load is applied) and the internal resistance, which varies and depends on the state of charge: as the internal resistance increases, the efficiency of the battery is reduced and the thermal stability worsens because more and more charging energy is converted into heat. Bad business again.\n5. Do not count sheep, count recharge cycles\nAll batteries of this type have a certain lifespan that is measured in recharge cycles. There is no standard that specifies in detail what constitutes a recharge cycle, but it is usually assumed that a full recharge cycle is applied when we recharge the battery (again, it does not have to be fully charged) after it has been discharged to below 20%.\nThe way to count those charge cycles varies by manufacturer, but Apple gives a good example of how they do it: “You could have used half of your laptop’s charge one day and you could recharge it completely.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-10", "d_text": "Smart batteries have internal circuit boards with chips which allow them to communicate with the laptop and monitor battery performance, output voltage and temperature.
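The SOC and DOD definitions and the fractional cycle-counting convention described above (two half-discharges count as one full cycle, in the style of the Apple example) can be sketched as follows; all numbers are illustrative.

```python
# SOC and DOD as percentages of maximum capacity, plus fractional
# cycle counting where partial discharges accumulate into full cycles.

def soc_percent(remaining_mah, max_mah):
    return 100.0 * remaining_mah / max_mah

def dod_percent(remaining_mah, max_mah):
    return 100.0 - soc_percent(remaining_mah, max_mah)

def full_cycles(discharge_fractions):
    # e.g. using half the charge on two different days counts as one cycle
    return sum(discharge_fractions)

print(soc_percent(1100, 2200))   # 50.0 -> half charged
print(dod_percent(440, 2200))    # 80.0 -> at the deep-discharge mark
print(full_cycles([0.5, 0.5]))   # 1.0 -> one full recharge cycle
```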
Smart batteries will generally run 15% longer due to their increased efficiency and also give the computer much more accurate "fuel gauge" capabilities to determine how much battery run time is left before the next recharge is required.\nEven if the battery case looks the same, you cannot just upgrade to another battery technology unless your laptop has been pre-configured by the manufacturer to accept more than one battery type, since the recharging process is different for each of the three types of batteries.\nA battery that is not used for a long time will slowly discharge itself. Even with the best of care, a battery needs to be replaced after 500 to 1000 recharges. But still it is not recommended to run a laptop without the battery while on ac power -- the battery often serves as a big capacitor to protect against voltage peaks from your ac outlet.\nAs the manufacturers change the shapes of their batteries every few months, you might have trouble finding a new battery for your laptop a few years from now. This is a concern only if you anticipate using the same laptop several years from now. If in doubt, buy a spare battery now - before it's out of stock.\nNew batteries come in a discharged condition and must be fully charged before use. It is recommended that you fully charge and discharge the new battery two to four times to allow it to reach its maximum rated capacity. It is generally recommended that you perform an overnight charge (approximately twelve hours) for this. Note: It is normal for a battery to become warm to the touch during charging and discharging. When charging the battery for the first time, the device may indicate that charging is complete after just 10 or 15 minutes. This is normal with rechargeable batteries. New batteries are hard for the device to charge; they have never been fully charged and are not broken in. Sometimes the device's charger will stop charging a new battery before it is fully charged.
If this happens, remove the battery from the device and then reinsert it. The charge cycle should begin again. This may happen several times during the first battery charge. Don't worry; it's perfectly normal. Keep the battery healthy by fully charging and then fully discharging it at least once every two to three weeks.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-1", "d_text": "(2) The life of a rechargeable phone battery, in terms of conversation time, standby time and total service life, will depend on the conditions of use and network configuration. Since batteries are considered expendable supplies, the specifications state that you should obtain optimal performance for your phone during the first six months after purchase and for approximately 200 more recharges.", "score": 21.50629981556874, "rank": 65}, {"document_id": "doc-::chunk-1", "d_text": "This has been translated into a set of basic battery requirements by USABC:\n- 15 years calendar life at 30 °C (recently changed from 35 °C); and\n- 5,000 charge-depleting (CD) and 300,000 charge-sustaining (CS) cycles (e.g., microcycles) (75 MWh total energy throughput, for a nominal PHEV-20).\nTo date, no battery meeting PHEV performance and packaging requirements has been shown to meet the calendar and cycle life requirements, Gross said, although some have come close.\nBattery life. For a given chemistry, capacity loss and impedance growth over calendar time will comply with a linear time relation, a square-root time relation, or in-between, Gross said.
This can only be determined under controlled test conditions; accelerated testing is used to determine this model fitting.\nProbably the most common way [Chrysler] looks at battery calendar life is we do Arrhenius modeling... The Arrhenius models are well done and well understood, it’s very simple to take this and then say, for example based on some experience with this location, offset them and adjust them for particular solar episodes....this way you can actually start looking at your data, processing it, trying to get some ideas as to how long you are really going to be able to operate this vehicle and meet the end of life requirements.—Oliver Gross\nWith the derived acceleration factors, you can begin to make simple predictions on calendar life and the effects of operating temperature on life.\nSome of their initial results show that operating 3 hours per day with a 10 °C operating temperature above ambient can lower battery life by 5%—which, for the battery pack in that evaluation, would take it to an end-of-life calendar life of just over 15 years for some locations.\nChrysler’s battery use model blends UDDS, HWYFET and US06 cycles to represent multiple use scenarios. Putting this all together (merging calendar life, energy throughput and an operational temperature increase of 10 °C) resulted in product life estimates that vary by location and by miles driven.\nBattery packs in use in Los Angeles, Phoenix and Miami had the lowest estimated calendar lives, at 11.8, 11.6, and 11.6 years respectively at 10,000 miles per year.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-11", "d_text": "In the illustrative embodiment, look-up table 500 is selected based on the Δi, the battery chemistry and battery type. The selected look-up table is then accessed using the voltage difference.
Since the difference in the currents is known a priori, the data stored in column 508 of the look-up table contains the internal battery impedance, Z, based upon the predetermined Δi. The data stored in column 510 of the look-up table 514 is the precharacterized data relating the remaining battery capacity to the value of Z.\nIn one embodiment, the determination of the relationship between the internal battery impedance and the remaining battery capacity is based upon remaining battery capacity gathered empirically from batteries while measurements are made of the internal battery impedance. It would be obvious to one of skill in the art that the data must be gathered from a statistically significant number of batteries to produce data having a sufficient confidence interval and accuracy. In addition, it is preferable that the data be gathered from batteries operating in an environment similar to the environment in which the batteries are likely to be used. In particular, the batteries need to be tested and the data gathered in an environment having the same temperature as that in which the batteries will be used. This is because certain batteries are sensitive to temperature variations. In particular, the internal impedance of some batteries is dependent upon temperature-sensitive chemical reactions within the battery. Where batteries may be used at a variety of ambient temperatures, data should be gathered at multiple temperatures spanning the expected temperature range of the battery.\nIn an alternative embodiment, a microprocessor may be used to increase the accuracy of the outputs based upon the data contained within look-up table 500. Such a microprocessor provides for a more accurate determination of both internal battery impedance and remaining battery capacity when there is no data entry corresponding exactly to the voltage or current difference.
The process of interpolation is considered to be known to those skilled in the relevant art.\nIn another embodiment, empirically collected data as described above can be analyzed and a mathematical function derived therefrom. Many different curve-fitting functions or approximation techniques may be used. In one embodiment, a least-squares technique is used to derive polynomial functions approximating the relationship between the remaining battery capacity data and the internal battery impedance. In alternative embodiments, any suitable set of basis functions can be used to derive mathematical functions approximating the relationship between the remaining battery capacity and the internal battery impedance.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-3", "d_text": "There are basically four types of cell phone batteries => Lithium Polymer, Lithium Ion, Nickel Metal Hydride and Nickel Cadmium.\n- Battery Type: Li-Ion (Lithium Ion) / Li-Ion (Lithium Ion)\n- Battery Capacity: 1250 mAh / 2000 mAh\n- Standby Time (the total amount of time that you can leave your phone fully charged, turned on and ready to send and receive calls or data transmissions before completely discharging the battery): Up to 180 Hours / Up to 280 Hours\n- Talk Time (the longest time that a single battery charge will last when you are constantly talking on the phone under perfect conditions; ambient temperature and the cellular network environment, such as the distance to the closest cell tower, strongly affect it): Up to 5 Hours / Up to 9 Hours\n- Music Play", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-3", "d_text": "A third conventional technique for testing battery capacity, commonly referred to as a battery fuel gauge, is a battery monitoring circuit that measures the current output from the battery during use, and the current input to the battery during charging cycles.
The battery monitoring circuit determines the remaining capacity of the battery based on a tally of the cumulative input and output currents. This test is accurate, however, only when proper maintenance has been performed on the battery. For example, NiCd batteries require reconditioning cycles to be performed periodically. These reconditioning cycles return all the NiCd cells within the battery pack to a full charge. Without the proper reconditioning cycles, the NiCd cells may develop a charge memory or a cell imbalance. A cell imbalance occurs when one cell of a battery pack discharges more quickly than the other cells. As the other cells in the battery pack supply current to the load, the discharged cell will be reverse charged. This reverse charge will decrease the NiCd cell life and, therefore, reduce the life of the battery pack. If the NiCd cells within the battery pack develop a charge memory, the NiCd cells will appear to be at full capacity, but in fact will be charged to only a fraction of their total capacity. Thus, the accuracy of this test is dependent upon the ability to consider the maintenance history of the battery between reconditioning procedures.\nWhat is needed, therefore, is a fast and accurate method and apparatus for determining the capacity of a variety of battery types and chemistries under a variety of conditions.\nThe present invention is a fast and accurate remaining battery capacity testing system and associated methodology that overcomes the above and other drawbacks of conventional techniques. The present invention determines the remaining battery capacity based on a measured battery impedance and a predetermined relationship between battery impedance and the remaining battery capacity for the battery under test.\nIn one aspect of the invention a battery capacity tester is disclosed.
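The coulomb-counting fuel gauge described above can be sketched as a running tally of current in and out. The 90% charge-efficiency factor and the capacities are illustrative assumptions; a real gauge would also have to account for the maintenance history the text describes.

```python
# A coulomb-counting "fuel gauge": tally current out during use and
# current in during charging, and report remaining capacity.

class FuelGauge:
    def __init__(self, capacity_mah):
        self.capacity_mah = capacity_mah
        self.remaining_mah = float(capacity_mah)   # assume a full pack

    def discharge(self, current_ma, hours):
        self.remaining_mah = max(0.0, self.remaining_mah - current_ma * hours)

    def charge(self, current_ma, hours, efficiency=0.9):
        # not all charging current ends up as stored charge (assumed 90%)
        stored = current_ma * hours * efficiency
        self.remaining_mah = min(self.capacity_mah, self.remaining_mah + stored)

    def percent(self):
        return 100.0 * self.remaining_mah / self.capacity_mah

gauge = FuelGauge(capacity_mah=2000)
gauge.discharge(current_ma=500, hours=2)   # 1000 mAh drawn
print(gauge.percent())                     # 50.0
gauge.charge(current_ma=1000, hours=1)     # 900 mAh stored at 90% efficiency
print(gauge.percent())                     # 95.0
```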
The battery capacity tester determines remaining battery capacity of a battery coupled to the tester as a function of the present internal battery impedance. In the illustrated embodiments, the tester determines an internal battery impedance by measuring terminal voltages during one or more successive applications of a known load, each load drawing a predetermined constant current from the battery. The calculated internal battery impedance is used to determine the remaining battery capacity based on a precharacterized relationship between internal battery impedance and battery capacity for the battery under test.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-7", "d_text": "A low battery voltage also has a significant impact on the fuel pump, causing low fuel pressure because the pump runs slowly.\nOne disadvantage of the lead-acid battery is that, when it is fully discharged, significant chemical changes occur on the lead plates inside the battery. Symptoms of a weak battery appear when a sulphate layer covers the lead plates and prevents the battery from charging. Therefore, keeping this battery at or near full charge at all times is crucial.\nThe average battery life is approximately four to five years in typical conditions. A battery’s power can drop by up to 50% when the temperature falls to 20 °F; notably, the current needed to start a cold engine at this temperature is almost twice as high. Thus, the impact of varying temperature on batteries is significant.\nWays of checking the battery\nThere are many ways to determine a battery’s condition; the most accurate is a digital voltmeter, which displays battery parameters as digital values.\nBattery voltage alone is not reliable for determining battery status accurately: a fully charged battery in most cars reads approximately 12.65V, yet when discharged to 75% it still reads 12.45V.
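The two readings quoted above (12.65V full, 12.45V at 75%) suggest a voltage-to-state-of-charge lookup; in the sketch below only those two points come from the text, and the lower steps are assumptions, which is precisely why voltage alone is an unreliable indicator.

```python
# Illustrative open-circuit-voltage lookup for a 12 V lead-acid battery.
# Only the 100% and 75% points come from the text; the rest are assumed.

VOLTAGE_TO_SOC = [
    (12.65, 100),
    (12.45, 75),
    (12.24, 50),   # assumed
    (12.06, 25),   # assumed
    (11.89, 0),    # assumed
]

def estimate_soc(voltage):
    for threshold, soc in VOLTAGE_TO_SOC:
        if voltage >= threshold:
            return soc
    return 0

print(estimate_soc(12.65))   # 100
print(estimate_soc(12.50))   # 75 -- barely distinguishable from full
```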
Therefore, the most accurate way to determine battery status is by measuring the battery’s energy.\nAlso, a fully charged battery may not be able to start the engine, or the battery may even be defective: the main concern when starting the engine is whether the battery can deliver a suitable current when a particular load is applied to it.\nDetermine the battery status\nTwo ways to determine battery status:\n- By applying a load test with a tester that calibrates the load on the battery; this requires the battery to be fully charged for accurate test results.\n- By testing with an electronic (conductance) tester, which does not require a fully charged battery for accurate test results.\nHow conductance testers work\nA conductance tester sends a frequency signal through the battery to determine how much active plate area is available to hold and deliver power. As the battery ages, its conductance declines. Bear in mind that shorts, opens and other defects also affect conductance. Therefore, measuring conductance gives an accurate indication of the battery condition.\nGenerally, many electronic battery testers also determine the battery’s Cold Cranking Amp (CCA) capacity, which can be used to estimate the battery’s remaining service life.", "score": 20.327251046010716, "rank": 70}, {"document_id": "doc-::chunk-1", "d_text": "If you need to top a cell up, remove the cap and carefully pour in DISTILLED water until the cell is at the right level. Close the caps tightly afterwards. You do not need to add acid, ever. Gel and VRLA batteries do not need to be topped up, and you should NEVER open the vent caps on these types of battery.\n- When not using a battery, keep it fully charged. Lead acid batteries must always be kept fully charged when not in use, otherwise the lead plates will quickly sulfate and render the battery useless.
Solar panels and wind turbines managed by a charge controller will keep your batteries fully charged without damaging them.\n- When using a solar panel that outputs more than 5 watts, you must use a charge controller. This device prevents your batteries from being overcharged. Batteries that are allowed to overcharge will "boil dry" and soon become useless. Overcharging also causes them to vent hydrogen gas into the air, which is explosive. Always use a charge controller.\n- When using your batteries, you should avoid discharging them more than 50 percent of their total capacity. Batteries that are regularly flattened completely will only last you a fraction of their possible lifetime. Knowing your battery's capacity, and how much current you are drawing from it when using an appliance, will help you figure out how long you can run everything.\nHow much power can a battery store?\nOn all lead acid batteries, you should find a sticker or a plate with details on it. We are looking for a number followed by "Ah", which stands for "Amp hours". Let's say your battery is rated at 50Ah. What this means is that if you were to draw a load of 50 Amps from the battery, it would be completely dead in one hour. Or, you could draw one Amp from the battery, and it would last 50 hours. If you were to draw 10 Amps, it would last for five. However, it is important to remember the rule about only using up to half of the battery's capacity. So, if you've got a 50Ah battery, you'll want to stop using things when 25Ah of its capacity has been taken. If you had two batteries that were the same voltage and had a capacity of 50Ah each, you could wire them together in parallel and the total capacity would now be 100Ah.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-10", "d_text": "Low-capacity NiMH batteries (1,700–2,000 mA·h) can be charged some 1,000 times, whereas high-capacity NiMH batteries (above 2,500 mA·h) last about 500 cycles.
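The amp-hour arithmetic above, including the rule of thumb of drawing only half the rated capacity, can be sketched as:

```python
# Runtime from amp-hour rating and load current, honouring the
# "use only half the capacity" rule of thumb from the text.

def runtime_hours(capacity_ah, load_amps, usable_fraction=0.5):
    return capacity_ah * usable_fraction / load_amps

print(runtime_hours(50, 10))   # 2.5 h from a 50Ah battery at 10 A
print(runtime_hours(100, 1))   # 50.0 h from two 50Ah batteries in parallel
```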
NiCd batteries tend to be rated for 1,000 cycles before their internal resistance permanently increases beyond usable values.\nBattery: Charge/discharge speed\nFast charging increases component changes, shortening battery lifespan.\nIf a charger cannot detect when the battery is fully charged then overcharging is likely, damaging it.\nBattery: Memory effect\nNiCd cells, if used in a particular repetitive manner, may show a decrease in capacity called \"memory effect\". The effect can be avoided with simple practices. NiMH cells, although similar in chemistry, suffer less from memory effect.\nAn analog camcorder [lithium ion] battery\nBattery: Environmental conditions\nAutomotive lead–acid rechargeable batteries must endure stress due to vibration, shock, and temperature range. Because of these stresses and sulfation of their lead plates, few automotive batteries last beyond six years of regular use. Automotive starting (SLI: Starting, Lighting, Ignition) batteries have many thin plates to maximize current. In general, the thicker the plates the longer the life. They are typically discharged only slightly before recharge.\n\"Deep-cycle\" lead–acid batteries such as those used in electric golf carts have much thicker plates to extend longevity. The main benefit of the lead–acid battery is its low cost; its main drawbacks are large size and weight for a given capacity and voltage. Lead–acid batteries should never be discharged to below 20% of their capacity, because internal resistance will cause heat and damage when they are recharged. Deep-cycle lead–acid systems often use a low-charge warning light or a low-charge power cut-off switch to prevent the type of damage that will shorten the battery's life.\nBattery life can be extended by storing the batteries at a low temperature, as in a refrigerator or freezer, which slows the side reactions. 
Such storage can extend the life of alkaline batteries by about 5%; rechargeable batteries can hold their charge much longer, depending upon type. To reach their maximum voltage, batteries must be returned to room temperature; discharging an alkaline battery at 250 mA at 0 °C is only half as efficient as at 20 °C.", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-0", "d_text": "I was trying to calculate the recharge capacity of a lithium-ion polymer 3.7V, 2600mAh battery and was using the calculations in the MintyBoost user manual – choosing batteries page (http://www.ladyada.net/make/mintyboost/power.htm) as a guide. I wonder if there is an error in the output calculation line for the 1200 mAh lipoly battery in the “How many recharges will I get?” section.\nIf the capacity calculation is correct with:\nMintyBoost mWh = 3.7V * 1200mAh * 100% = 4440 mWh input\nthen the calculation for the milliamp-hours the MintyBoost will output at 5V seems to use the battery capacity for two Sanyo AA NiMH 2700 mAh batteries:\nOutput mAh @ 5V = 5184mWh / 5 * 80% = 710 mAh output\nShouldn't the calculation be:\nOutput mAh @ 5V = 4440mWh / 5 * 100% = 888 mAh output?", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-1", "d_text": "If the drain is tens of microamps an hour, it can add up to 1-2 milliamp-hours.
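The arithmetic in the forum post above can be sketched as follows. As a hedged observation, 4440 mWh converted at 5 V with an 80% converter efficiency gives about 710 mAh, matching the manual's output figure; whether that is the intended reading is exactly what the poster is asking.

```python
# Battery energy at the cell voltage, converted to mAh delivered at 5 V
# through a boost converter with an assumed efficiency.

def stored_mwh(cell_voltage, capacity_mah):
    return cell_voltage * capacity_mah

def output_mah(mwh, out_voltage=5.0, efficiency=0.8):
    return mwh / out_voltage * efficiency

lipo = stored_mwh(3.7, 1200)              # ~4440 mWh for the 1200 mAh LiPo
print(output_mah(lipo))                   # ~710 mAh at 5 V and 80% efficiency
print(output_mah(lipo, efficiency=1.0))   # ~888 mAh with no conversion losses
```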
It is really better for this problem, but if the factory power is full, it is very unsafe, so the industry standard is everyone. 30% charged.\nUsed: When the battery pack is discharged and is protected, that is, the battery is basically discharged (in fact, there is only a little power remaining). In the normal use case, when the lithium battery pack is discharged to the protection voltage, the protection board is automatically turned off. At this time, the lithium battery pack is nearly exhausted, but some of the remaining power is about 5%, and it will be charged when it is used up under normal conditions. Sometimes, when I am busy, I forget that after a few days, I have nothing to do. I basically refilled after half a month and a month. The problem is not big. However, if this is used up, if you put it three or four months without charging it, it will almost be scrapped.\nIn response to these problems, as long as you do the following, there is no problem.\n*: If the new battery has never been used, please do not leave it for too long. Especially if the new battery has not been used, don't throw it for four or five months. If it really can't be released in time, please remember to make up the battery!\nSecond: If the goods have already been shipped, they need to be indicated to the customer on the product manual. If the new product does not charge for a long time, the battery will fail. If it is used under normal conditions, when the battery is used up, it should be charged in an emergency.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-1", "d_text": "INSTRUCTIONS FOR PROPER CARE AND USE OF YOUR BATTERY\nPlease follow the instructions included with your battery for proper use, storage and care of your battery to ensure that it will function as needed during a power outage. If you do not store your battery correctly, it may shorten its useful life. Environmental factors such as temperature can shorten your battery’s useful life. 
We recommend that you store your battery above 41°F and below 104°F.\nThese batteries are rechargeable. They will not last forever and should be replaced every 5-8 years, assuming 1-2 full discharges per year. You should periodically test your battery to verify both the operation of the backup battery and its condition.", "score": 18.90404751587654, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "If you have kids, you probably know this already, but lots of stuff needs batteries. Remote control toys, Wii remotes, laser pointers (well, that is for me), flash lights, even Nerf guns. For me, I have found the best place to pick up batteries is at one of these \"dollar\" stores. Sure the batteries are cheaper, but are they any good? Who knows. Let's find out.\nThe first way to look at the quality of a battery is to see how much stored energy is in it. How could you measure this? Well, here is how I did it. I took a battery and connected it to a light bulb and let it run for as long as it could. Like this:\nWith this setup, I can measure both the current ( I ) from the battery and the electric potential ( ΔV ) across the battery. At any given instant in time, the power from the battery will be:\nP = I · ΔV\nPower tells me how fast the energy is changing, but not the total energy in the battery. In order to find the total energy, I can write the power like this:\nP = ΔE / Δt\nIf the current and the change in potential were constant for the whole time interval (Δt), this would be a fairly straightforward calculation. Alas, these are not constant. So what do I do? I cheat. If I instead look at a very short time interval, the current and potential do not really change too much. This means that I can reasonably calculate the energy during this short time. Then I just need to do this a whole bunch of times to get the total energy.\nAdding up a whole bunch of small pieces is called \"an integral\".
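The piece-by-piece energy sum just described can be written as a short numerical routine; the sample readings below are made up for illustration, not actual LoggerPro data.

```python
# Numerically accumulate energy: power (current x voltage) times the
# sample interval, summed over every short time slice.

def total_energy_joules(currents_a, voltages_v, dt_s):
    return sum(i * v * dt_s for i, v in zip(currents_a, voltages_v))

# three one-second samples of a bulb slowly dimming (illustrative)
currents = [0.30, 0.28, 0.25]   # amps
voltages = [1.50, 1.45, 1.40]   # volts
print(total_energy_joules(currents, voltages, dt_s=1.0))   # ~1.206 J
```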
In this case, I won't use calculus to evaluate an integral since I don't know a mathematical function for the power. Instead I will do it numerically with the following formula (by \"I will do\" I really mean \"make a computer do\"):\nE = Σ (I · ΔV · Δt)\nAnd that is it. The total energy that the battery produced.\nVernier makes both a current and a voltage probe for the LabQuest system. Collecting data was fairly simple (even though each battery would take quite some time). Here is the data from LoggerPro (Vernier's software):\nThis software can calculate the power as well as integrate to find the total energy. But I am not going to do this. Why?", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-1", "d_text": "Internal components must be kept separate, because they can react violently with the air to generate heat and produce fire. Manufacturers overcome this with stringent production methods and safer designs.\nThere are other battery chemistry alternatives. Rarely seen are nickel chloride batteries, whose main advantages are robustness and recyclability, but at a higher upfront cost. Other new battery technologies have emerged in recent years, including ‘flow’ batteries and saltwater-based batteries, each with its own advantages.\nTo find the appropriately sized system, you should know the storage capacity of the battery. But be mindful. Batteries have a total capacity (nominal) and a ‘usable’ capacity. This is because batteries can’t discharge their full capacity without damaging themselves. To put this in perspective, smartphones may have 16 gigabytes of storage capacity, but only 12 gigabytes is usable because the rest is for crucial software, without which the phone can’t run.\nThis refers to the rate of electricity the battery can provide (in kW). In essence, a higher value means more electricity can be provided in a given time. There are two parts to output: continuous and peak.
Continuous is the rate of electricity the battery can provide, and peak is the rate the battery can reach for a short period (a matter of seconds) during a large demand spike.\nFor example, your electricity demand will spike if you turn on multiple appliances at the same time. This is important if you are considering going off-grid, because there is no backup power source to meet the demands in your home.\nBattery storage, like many things in life, is not perfect. You will not get the same amount of electricity out, compared to what you put in. For example, a 10kWh battery with 85% efficiency will let you draw a maximum of 8.5 kWh.\nKnowing how long your battery will last is crucial. The duration of a battery’s working life can be measured in charge cycles or warrantied energy throughput, which is arguably more important.\nBatteries will need to be replaced after being charged/discharged a certain number of times. Normally, you will not go through more than one cycle per day if you charge your battery with your solar panel. This means that a battery with a cycle life of 3650 cycles will last 10 years.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-1", "d_text": "It depends on use; an example would be if you use the batteries in an mp3 player consistently day in and day out, charging them once a day or every other day, then you may get a year or two out of them. On the other hand, if they are not used as hard, they could have much more longevity.\n• What should I do with my old batteries?\nRecycle them at your local recycling center. Major retail chains have drop offs that don't charge to recycle used batteries. Check your local Yellow Pages for listings in your area. Some laptop manufacturers in the past have had exchange programs when sending in old laptop batteries for the new one; be sure to check into this as well if you are replacing your laptop battery.
The same applies for cell phones.\nNo, always go with the recommended battery type for your device. Do not deviate from the manufacturer requirements and recommendations.\nNever! Not only will they not recharge, but you take an extreme risk of fire and explosions. Just stay away from this thought altogether.\n• Is fully discharging a battery and recharging bad for a battery?\nWhile Lithium-ion batteries are not affected by this, it is recommended for the NiCd and NiMh types. Fully charging, then discharging will help with their longevity and achieve maximum storage. However, in time they will wear out.\nCare for Existing Batteries:\n• New batteries normally come in a discharged or barely charged condition. Be sure to read the manufacturer's recommendation on charging them for the first time. This is critical to get started off on the right foot with your investment and to get the most life out of them.\n• If you don't plan to use the battery for more than a month, remove the battery from the device or charger, then place it in a dry, cool, and clean environment for storage. Rechargeable batteries lose charge when unused. So, when it is installed back into its device, it may need to be recharged back to capacity.\n• Keep moisture and excessive heat away from batteries. Heat will cause a battery to explode, potentially causing bodily harm, burns, fire, or even in some cases death.\n• Dropping and physically damaging batteries is really bad, for it can expose the inner cells and acids to the surface. Discard any damaged batteries.\nNew rechargeable batteries can be purchased at just about any retail chain.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-4", "d_text": "Even though I kept them charged with a properly designed "battery tender", and watched them carefully, they never lasted more than 12 months.\nExpensive lesson, but well-learned.\nThe next common type is the "Deep Cycle"\ntype.
These batteries are designed for uses that require a lower current draw for extended periods. Sometimes they're called "motive" batteries, and are used for wheelchairs, golf carts, trolling motors, and solar power storage. Every time I've replaced my car battery I've always replaced it with a deep cycle battery, as there are times I'll run my radio for extended periods without running the engine, and I want to make sure I have enough power left to start the car.\nAnd within the battery types are some sub-classifications depending on the type of construction used.\n"Flooded" types have the liquid electrolyte (aka Battery Acid) and removable caps to check the levels. Most of us grew up with this type, and are somewhat familiar with it. I always used distilled water to top off the level, although many sources say that any potable water is OK. Personally, with the cost of distilled water being so low, I could never see using tap water, especially in areas with hard water.\n"Valve Regulated Lead Acid" (VRLA) batteries were first seen in the late 1960's, and were marketed as "Maintenance Free" batteries. They used a slightly different chemistry and construction.\n"AGM" or Absorbed Glass Mat batteries are newer still, and have a different construction that keeps all the electrolyte in a fiberglass mesh.\nAnd as of 2016 some cars are using Nickel Metal Hydride and Lithium Ion batteries for their power source.\nSo, with that said, be aware that using old fashioned, flooded construction SLI batteries would work, but the batteries probably won't last as long as you'd like.\nNow as to "How Many Batteries Will We Need", we need to look at voltages, and we'll use my trusty SB-310 as our example.\nThe SB-310 "requires" 185 Volts for the B+, -85 Volts for the Bias, and 6 Volts for the filaments.\nNominal, fully charged voltage for a "12 Volt" automotive battery varies some with the type.
We'll go with the current VRLA batteries as they're most common in newer cars.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-1", "d_text": "Try to store your phone in a cool or warm place, and purchase a cell phone case with good ventilation to store it in. And while you may hear otherwise, never store your batteries in the refrigerator.\nKnow When Your Battery Has Run Dry\nNo matter how much you care for your batteries, they will run dry. But it will not happen overnight. The amount of time you can leave your laptop or phone uncharged will decrease bit by bit over the months (or years). You won’t notice it at first until you realize one day that it can barely hold a charge at all.\nOn a notebook it is easy to find how much capacity your battery has left. In Windows open the Command Prompt and type:\nThis will save a report on your computer showing how much battery capacity is left.\nIf you have a Mac, then you can find the information in the “Hardware” section of your System Information Window. Note that Apple, like many other electronic companies, tracks battery lifespan by cycle counts. When a battery charges completely, that is one cycle count. The vast majority of MacBooks have a cycle count of 1000, which is when the battery is considered to be consumed. However, MacBooks from 2010 or earlier can have a cycle count of 500 or less.\nFor your smartphone, outside apps can help a lot. The Battery app for Android or BatteryLife for the iPhone or iPad can tell you what your battery’s lifespan is. Remember that you probably don’t need to replace either a laptop or smartphone battery until the capacity has fallen to around twenty percent, though you can do it earlier.\nPlenty More to Do\nThere is more you can do to keep your battery’s lifespan running longer. Darken your screen display, find power saving modes, and don’t rely on your phone’s GPS so much. 
We have already covered plenty of battery saving tips for Windows, Mac OS X, Android, Linux and iOS, so do check them out. But above all else, you should keep your devices’ batteries at around room temperature and not fully charged so that they can continue to last over the long run.\nBut no matter how much work you do to keep your battery alive, they will fade eventually. So while you can keep your three-year-old battery working longer with proper care, check its levels. The last thing you want is to realize out of the blue that your computer or smartphone battery cannot hold a charge at all anymore.", "score": 17.397046218763844, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "This is pretty straight forward, how big are the batteries? Lead acid batteries don't get much smaller than C-cell batteries. Coin cells don't get much larger than a quarter. There are also standard sizes, such as AA and 9V which may be desirable.\nWeight and power density\nThis is a performance issue: higher quality (and more expensive) batteries will have a higher power density. If weight is an important part of your project, you will want to go with a lighter, high-density battery. Often this is expressed in Watts-hours per Kilogram.\nPrice is pretty much proportional to power-density (you pay more for higher density) and proportional to power capacity (you pay more for more capacity). The more power you want in a smaller, lighter package the more you will have to pay.\nThe voltage of a battery cell is determined by the chemistry used inside. For example, all Alkaline cells are 1.5V, all lead-acid's are 2V, and lithiums are 3V. Batteries can be made of multiple cells, so for example, you'll rarely see a 2V lead-acid battery. Usually they are connected together inside to make a 6V, 12V or 24V battery. 
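The series arithmetic behind those pack voltages is simple: pack voltage is the per-cell voltage times the number of cells. A tiny sketch, using the nominal cell voltages the passage itself gives (alkaline 1.5 V, lead-acid 2 V):

```python
# Pack voltage = nominal cell voltage * number of cells in series.
# Cell values are the nominal figures quoted in the passage above.

def pack_voltage(cell_volts, cells_in_series):
    return cell_volts * cells_in_series

# A "12 V" lead-acid battery is six 2 V cells in series.
print(pack_voltage(2.0, 6))  # 12.0
# Three alkaline cells in series give 4.5 V.
print(pack_voltage(1.5, 3))  # 4.5
```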
Likewise, most electronics use multiple alkalines to generate the voltage they need to run.\nDon't forget that voltage is a 'nominal' measurement: a "1.5V" AA battery actually starts out at 1.6V and then quickly drops down to 1.5 and then slowly drifts down to 1.0V, at which point the battery is considered 'dead'.\nSome batteries are rechargeable; usually they can be recharged hundreds of times.\nThis guide was first published on Feb 16, 2013. It was last updated on Feb 16, 2013.\nThis page (How Batteries Are Measured) was last updated on Dec 04, 2021.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-1", "d_text": "The oxygen in the water and the lead sulfate react together on the positive plates to gradually transform them back into lead dioxide, and oxygen bubbles rise from the positive plates when the reaction is nearly complete.\nMany people assume that a battery's internal resistance is high when the battery is fully charged, and this is not the case. If you think about it, you'll recall that the lead sulfate acts as an insulator. The more sulfate on the plates, the higher the battery's internal resistance. The higher resistance of a discharged battery allows it to accept a greater rate of charge without overheating or gassing than it can accept when near a full state of charge. When it is close to full charge, there is little sulfate left to drive the reverse chemical reaction. The charge current level that can be applied without overheating the battery or separating the electrolyte into hydrogen and oxygen is known as the "natural absorption rate" of the battery. Whenever the charge current exceeds this natural absorption rate, there is overcharging. The battery may overheat, and the electrolyte will bubble.
In reality, a portion of the charging current is wasted as heat even at correct charging levels, and this inefficiency means that more amp hours must be put back into a battery than were taken out.\nHow Long Will My Battery Last?\nThere are many things that can cause a battery to fail or drastically shorten its life. One of them is allowing a battery to remain in a partially discharged state. We discussed sulfate forming on the surface of the battery's plates during discharge, and sulfate also forms as a result of self-discharge. Sulfate also forms rapidly if the electrolyte level is allowed to drop to the point that the plates are exposed. If this sulfate is allowed to remain on the plates, the crystals will grow larger and harden until they become difficult to remove through charging. As a result, the amount of surface area available for the chemical reaction will be permanently diminished. This condition is called "sulfation," and it permanently reduces the battery's capacity. A 20 amp hour battery may begin performing like a 16 amp hour (or smaller) battery, losing voltage quickly under load and failing to maintain adequate voltage during cranking to operate the motorcycle's ignition system.", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-1", "d_text": "The reason you need to follow the aforementioned steps is that your battery thrives on consistency. You may also need to consider replacing the battery if you're storing it for many months or more. It is however advised that you get a spare battery and carry it with you if you're going to be spending extended amounts of time on the road. You're going to need no less than a 14-amp battery so as to run your equipment for those hours per day you specified. It's always advised to opt for an original battery. For one, they make batteries with higher capacity.\nBatteries make life much easier.
There is no true way to tell just how long a single battery will last, as even two batteries in identical systems may discharge at different rates based on the settings and applications being run. At this stage, normally, they are discarded. A universal battery is only a bigger version of the conventional laptop battery, and can power almost any laptop. Just as vital as the appropriate rechargeable battery is a suitable battery charger.\nLife After Disposing of Laptop Batteries\nYou are able to charge batteries individually, and that means you're not restricted to only charging them in pairs as with a number of other chargers. To illustrate what happens every time a battery becomes discharged, in the following article, a lead-acid battery like an automobile battery or golf cart battery was used. The battery hasn't seen the kind of advances other devices have. An extremely old and pre-owned battery can take more than 20 ounces of distilled H2O! Now, let your old battery take its rest and search for a replacement.\nBatteries have come a long way through history. The battery is currently ready for charging. Homemade batteries can also be built from different vegetables and fruits. They are commonly made of common items found around the house.\nAlways ensure the battery is recharged whenever possible after it gets fully discharged. The lithium-ion battery is quite popular in a broad selection of household electronics. You won't want to ruin good batteries by using a poor charger.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "Rechargeable Batteries: Which One to Buy, Differences?\nBatteries are used every day in any portable electronic device. So far, there is no other energy source on the market yet to power our energy hungry equipment. They are in devices from calculators to laptops. We depend on them dearly, however as consumers we know very little about them.
Below is a list of the different common types of batteries that are used in consumer electronics.\n• NiCd: Nickel Cadmium is commonly found in household batteries such as AA, AAA, 9V, etc. While this type of battery holds a charge longer, they don't typically have the voltage that is needed for power hungry electronics such as its counterpart, NiMh.\n• NiMh: Nickel Metal Hydride is also found in common household batteries like the NiCd; however, they have a much higher and needed voltage output for electronics, with the trade-off of quicker power drainage. NiMh is the recommended battery of choice for devices such as CD players, mp3 players, wireless devices such as mice, keyboards, etc.\n• Li-Ion: These batteries are used in laptops, cell phones, iPods, iPhones, etc. These batteries have a long-lasting energy supply and don't discharge as quickly while sitting idle. Unfortunately, they also don't come in standard flavors such as the AA & AAA type batteries. So you will likely see these batteries found in proprietary devices such as listed above.\nCommon Battery Questions:\n• Do rechargeable batteries ever lose the ability to hold a charge?\nYes, in time, they will hold less of a charge due to the chemicals inside not reacting as they once did when they were new.\n• My batteries don't seem to last long in my device; is there anything I can do to lengthen the life of the battery?\nAll rechargeable batteries have an mAh rating (milliampere-hour). The higher the rating on the battery, the longer it will last. Be warned however, if your battery charger is only rated to be able to charge up to say 1800mAh, and you buy 2500mAh batteries, your charger will either take a much longer time to charge the batteries or not charge them at all. Be sure to have an adequate battery charger before or while purchasing new rechargeable batteries.\n• How long do rechargeable batteries last?", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-0", "d_text": "The battery is now calibrated.
The very first thing you will want to do is make certain you have batteries in the remote control.\nA faulty battery may cause a severe fire. It's possible for you to buy bigger cell batteries that will provide you with longer battery life.\nIn the event the battery gets hot enough to ignite the electrolyte, you're likely to get a fire. Better still, you can get another battery that's a larger capacity than your initial battery. To begin with, it is possible to always purchase another battery. Refurbished items are usually inexpensive laptop batteries that are sold at a fraction of the price of a new laptop battery.\nAfter the battery has charged to a particular level, it is going to get started charging at a regular rate again. NiCad and NIMH batteries have a normal shelf life of a couple of decades. Put simply, a battery will be specified in amp-hours. Laptop batteries are sometimes not straightforward to discover, but the internet is a wonderful resource. Many laptop batteries begin to fail within a couple of years. In the event the laptop battery may not be recharged, it's going to need to be replaced. Most individuals buy laptops only because they need to be portable.\nWith a new battery and assorted problems fixed, the laptop is prepared to serve. It is dangerous to continue to keep your laptop on all day long because once you do so, you are likely also to keep it plugged in for that duration to stop the battery from draining out. When you have identified the specific reason, switch off the laptop, disconnect the wires and remove the battery, to steer clear of shock. If you intend to fix the laptop yourself, be sure that the replacement parts are genuine and match your component. Before you opt to get a 3D laptop, there are a few essential things you must evaluate.\nLaptops are designed to be portable. Learn what others say about the length of time a laptop should last.
Leaving your laptop in an auto in the summer is risky.\nThe Benefits of How Long Do Laptop Batteries Last\nYou may try an easy and effortless DIY restoration. As stated earlier, laptop screen repair can be carried out even at home; you only need to be aware of the basic techniques of replacement. Most people may not do so, and it's a good idea for them to go to a computer mechanic instead.", "score": 15.758340881307905, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "Do you have a pile of AA rechargeable batteries in your drawer? Some are old, some are new, but which sets would you bring with your camera on your next trip, and which ones are past their useful life? I like using rechargeable batteries, but I'm certain that some of them are not living up to the stated capacity on the label.\nSo how good are those batteries? Simple battery testers measure the voltage, but that's not what we need – we want to find the overall capacity of the battery. How long will a battery last from the time it's fully charged to the time that the "low battery" indicator comes on your device?\nYou can see this in action in a video in the last step of this instructable.\nStep 1: This is a job for a microcontroller\nThat is a quick solution to the problem, but it involves watching a voltmeter for a few hours. That's no fun at all. With a microcontroller, like the good old AVR chip, we can make a rechargeable battery tester that does the work for us. My tester puts AA batteries through a discharge test and reports the capacity in milliamp-hours (mAh) so you can compare battery capacity.\nThe tester can test multiple cells individually, and display the results on an LCD.\nThe tester discharges the battery while monitoring the voltage of the batteries. When the low threshold is reached, that cell is done and the tester disconnects the load from the battery. When all tests are complete a series of beeps alerts the user.
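The accounting that discharge test performs can be sketched as follows. This is only an illustration of the mAh bookkeeping; the constant 200 mA load and the linear voltage decay are invented placeholders, not the project's AVR firmware.

```python
# Simulate the tester's bookkeeping: draw a constant load from the cell,
# accumulate milliamp-hours, and stop ("disconnect") at the low-voltage
# cutoff. The load current and voltage curve below are invented placeholders.

CUTOFF_V = 1.0   # assumed end-of-discharge threshold for one NiMH/NiCd cell
LOAD_MA = 200.0  # assumed constant discharge current, in milliamps
STEP_H = 0.01    # sampling interval in hours (36 seconds)

def run_test(voltage_at):
    """voltage_at: function mapping elapsed hours -> cell voltage.
    Returns the measured capacity in mAh."""
    mah = 0.0
    for step in range(1_000_000):
        t = step * STEP_H
        if voltage_at(t) <= CUTOFF_V:
            break  # threshold reached: disconnect the load
        mah += LOAD_MA * STEP_H  # charge drawn during this slice
    return mah

# Toy cell: starts at 1.35 V and sags linearly by 0.05 V per hour.
capacity = run_test(lambda hours: 1.35 - 0.05 * hours)
print(round(capacity))  # about 1400 mAh for this synthetic curve
```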
The tester identifies the type of battery by its initial voltage allowing both NiCd and NiMh batteries to be tested.\nThe design is based on the ATMega168 microcontroller, which has 6 A/D converters on the chip, so these will be used to read the battery voltages and determine the load current. Since each battery will require two A/D converters per cell, the maximum number of cells is three.\nI built two of the testers, first using an Arduino board as a development system, and then a standalone device that will be more compact, and free up the Arduino for other projects.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-3", "d_text": "So now that you know what factors affect battery life, is there any way to extend your battery's run time per charge? How can you make your battery last longer?\n- Running the motor at low speed: As noted above, there's rarely any need to keep your trolling motor on the highest setting, as it will drain your battery more quickly.\nKeep the motor at a slower speed as much as possible. The amp draw will be lower, so your battery will last longer.\n- Going out in calm waters: Rough weather and water conditions make it necessary to run your motor at higher speeds, which puts an extra strain on your battery. If you want to get the most time out of a charge, stick to calm waters on mild days.\n- Using the battery only for the trolling motor: Again, the more you draw power from the battery, the sooner that power will be used up.\nIf you have other devices on board that require battery power, it may be best to have a separate battery for them.
Use your trolling motor battery only for your trolling motor if you want it to last as long as possible.\nHow Long Will a 12V Battery Last With a Trolling Motor?\nIt will depend on all of the factors listed above, so it's impossible to say exactly.\nTo estimate how long it may last, compare the battery's amp hour rating against the motor's amp draw.\nFor example, the Minn Kota Edge 70 trolling motor has a max amp draw of 42 amps; if you were using the Renogy Hybrid Gel 100Ah battery, you would divide 100 by 42 for an approximate run time of 2.4 hours at top speed.\nWhat is the Longest Lasting Marine Battery Used With a Trolling Motor?\nGenerally speaking, the longest-lasting marine batteries are good-quality lithium phosphate batteries. These batteries discharge more slowly and can be discharged much deeper than other types of batteries.\nAgain though, it largely depends on amp hours vs. amp draw, along with many other factors. Regardless of the type of battery you choose, it's important to select a high-quality one and then take care of it.\nHow long a marine battery powers a trolling motor depends on many factors. There are various battery sizes and capacities, trolling motors with a variety of amp draw needs, and many conditions in the environment that can affect how quickly a battery discharges.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "We've discovered a truth in life. When you have four or more vehicles in your fleet, your biggest problems become empty gas tanks and dead batteries. Gas is an easy one; you can siphon it directly from the lawnmower or the neighbor's boat. But what about the battery? Why does it die, and what kills it? Knowing the answers to these questions will help you understand and hopefully prevent a belly-up battery.\nIn simple terms, a battery consists of a lead-coated electrode called the anode and a lead-oxide coated electrode called the cathode that combine to form a cell.
There are six of these cells in a 12-volt battery, each contributing about 2.1 volts. They are immersed in a solution of sulfuric acid and distilled water called an electrolyte and connected through a system of grids and plates in a series that ends at the positive and negative terminals. As the acid eats away at the metals, the cathode releases positively charged ions into the electrolyte solution. Since it retains the electrons, it becomes negatively charged. Similarly, the anode reacts to the positively charged electrolyte and releases electrons and becomes positively charged. This movement of electrons creates a polar difference between the oppositely charged plates in each cell and creates a difference or voltage between the two terminals. When you hook up your battery cables, it creates a circuit and allows electrical current to flow. Simple enough.\nWhen the French dude Andre Ampere gave us ways to measure this electric current flowing through a wire, the amount of storage capacity in a battery soon became rated in terms of the ampere hour (A.H) using a common scale, such as the 20-hour rate of discharge. For example, a 100 A.H rated battery will discharge below a useable level (10.5 volts) in 20 hours with a load of 5 A (5 A x 20 hours = 100 A.H). Obviously, most batteries are rated below this, but we used an easy number for illustration purposes.
If that 100 A.H battery had a 2.5A draw (like a dome light) it would drop below 10.5 volts in about 40 hours (2.5 A x 40 hours = 100 A.H).\nTo perform diagnostics, you'll need more than a voltmeter.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Help Me Choose: Battery\nWhen choosing a laptop battery, it's important to consider how your laptop is used:\nToday's laptops typically feature two kinds of batteries: user replaceable, which can be easily and instantly switched out with a spare battery as needed, and built-in batteries, which are not user replaceable.\nUser-replaceable batteries are made of Lithium-ion cells and the number of cells inside a battery determines its size, weight and life between charges. A cell is essentially a smaller battery that is packaged with and connected to other cells to form one large battery. Generally, the greater the number of cells, the more power, weight and size a battery will have.\nLaptops with thinner designs, such as the XPS™ and Inspiron™ z series, often include built-in batteries. These batteries are often made of Lithium-ion polymer, but also have the extra benefit of fitting into nontraditional shapes. This shape flexibility is instrumental in optimizing laptop design for thinness.\nFinally, it's important to note that battery life is always subject to demands and usage. For laptops that include a user-replaceable battery which is used for extended periods away from an outlet, you might want to consider purchasing a second battery for your laptop from Dell.com. For laptops with built-in batteries, a travel adapter or a second power adapter that stays in the backpack can help maintain productivity when you are on the go.\nDell typically notes either the number of cells and/or the watt-hour (WHr) of a battery. More cells (e.g.
6-Cell vs 4-Cells) or higher WHrs on the same system under the same operating conditions will generally deliver longer battery run time.\nBattery life is how long your battery will last between charges. Battery pack lifecycle is how often a battery can be recharged before its charge capacity starts to degrade (e.g. when a fully charged battery is reduced to 60% of the original battery charge capacity.) This reduction of charge capacity in turn leads to a significantly reduced battery life. The reduction in charge capacity is normal on all rechargeable batteries.\nExtending battery life\nThe easiest way to get the best battery life out of your Dell laptop is to select one of the pre-configured power plans, such as power saver or balanced by right clicking on the battery symbol in the bottom right corner (or system tray).", "score": 13.897358463981183, "rank": 90}, {"document_id": "doc-::chunk-2", "d_text": "If you plan on braving the elements, it would be a good idea to have an extra battery or a battery charger on board, so you don’t end up stranded.\nA larger, heavier boat will take more power to push through the water than a small, lightweight boat. More draw on the battery will lead to a quicker discharge.\nIf you have a larger boat, it’s a good idea to invest in a battery with a higher amp hour rating. A large boat will be able to support the additional size and weight, and you’ll be able to spend more time out on the water, doing what you love.\nIf you are using a single battery for more than just your trolling motor, it will discharge more quickly because multiple sources will be drawing from it.\nFor example, if you’re running your trolling motor and fishfinder off the same battery, neither device will last as long as it would on its own. 
Both are drawing from the battery simultaneously, so the battery will run out of power sooner than it would power only one device.\nHow Can I Calculate the Battery Run Time on a Trolling Motor?\nThere is no way to consider all of the factors discussed above; in other words, you cannot accurately predict how long your battery will last on a given outing. But you can give yourself a rough estimate with a few simple calculations.\nTo estimate how long your battery would last under ideal conditions, divide the battery’s amp hour rating by the maximum amp draw of the motor.\nFor example, say you have a 100Ah battery and a trolling motor with a max amp draw of 50 amps. 100 divided by 50 equals 2, so the battery would last for about two hours with the trolling motor drawing the maximum number of amps.\nOf course, you won’t necessarily be using the trolling motor at top speed the whole time. Maybe you leave your trolling motor on slow or medium speed and the amp draw is about 20. 100 divided by 20 equals 5, so you could expect your battery to last about 5 hours.\nFor a more in-depth discussion on the topic, check out this helpful article from Newport Vessels.\nAgain, it’s important to remember you won’t always be running at maximum amp draw, and you can’t factor in other conditions such as water and wind conditions or battery age. Dividing amp hours by amp draw only gives you an estimate.\nCan I Make the Battery Last Longer?", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-1", "d_text": "This app will then calculate that the average current draw for the flight was 9.11 amps, the discharge rate was 4.1C and you used 78% of the rated capacity of the battery. (Img #1)\nb) How long can you fly your plane safely? 
It is recommended to fly using only 80% of the capacity of LiPo batteries to maximize their longevity. So, using the 9.11A average from the example above, the full throttle current draw (you will need to measure this) is 16A; input 2200mAh and 16A into the app and it'll calculate that you should be able to fly (at FULL throttle) for 6 minutes 36 seconds until the battery is 80% depleted or for 8 minutes 15 seconds (also at FULL throttle) until it is 100% depleted. (Img #2)\nA \"Realistic\" flight time for a plane is also calculated which works out to be 11 minutes and 47 seconds using 80% capacity and 70% average throttle, very close to the actual flight time in the first example.\nFor helicopters and multicopters the flight time is very dependent on the style of flight, but you can use the calculator to calculate the expected flight time in a hover.\nFuture plans: Add wing loading, cubic wing loading calculator, + suggestions?\nDisclaimer: The formulas used in these calculations come from various internet sources, with some variations added from my own experience. Your results may vary! While a lot of effort was put into ensuring accuracy, this app is not intended to be professionally accurate.\nComments & suggestions very welcome.\nNew version is finally up on the Play Store. (link)\nOne of the new things is working out your likely run/flight time from how much mAh is returned to the battery after a flight.", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-1", "d_text": "These values are the average for all time rates between 1 hour and 8 hours.\" This statement implies that the information is valid for duty cycles up to 8 hours. For discharge durations longer than 8 hours, the user will need to obtain validated, appropriate correction factors from the manufacturer. Note, preliminary data from one vendor indicated that the electrolyte temperature during a 72 hour discharge decreased by a couple of degrees F. 
It appears that at the low discharge rate, the I2R heating was less than the heat needed by the slightly endothermic chemical reaction during the discharge to maintain temperature. The temperature correction factors for duty cycle durations may need to be modified for duty cycle durations significantly longer than 8 hours. IEEE Std 535-2006 8.2 states, “This procedure will age the entire cell to the predominant aging failure mode, which is based on the failure of the positive plates.\"\nThe basis of IEEE Std 535-2006 8.2 was the destructive examination of cell plates that were tested at the 8 hour rate. The documentation of the results of destructive examination is available for viewing at the vendors who have previously qualified cells per IEEE Std 535-2006. The available vendor data available indicates that the ampere hours discharged at the 72 hour rate to 1.75 vpc is in the range of 115% of the ampere-hours discharged at the 8 hour rate to 1.75 vpc. For discharge durations greater than 8 hours where the ampere-hours discharged is greater than the rated 8 hour ampere hours, the cell manufacturer will need to demonstrate that there are no other significant failure modes or reduced life due to higher sensitivity due to known failure modes. IEEE Std 535-2006 8.2.2(h) states \"Life expectancy of batteries is not affected by two deep discharges per year. 
Therefore, the above procedure will qualify the battery for the equivalent of two performance discharge tests per year, average, over the qualified life of the battery.\" In order to meet the requirements of IEEE Std 535-2006, applications with duty cycles durations over 8 hours that discharge more than the rated 8 hour ampere-hours will need to demonstrate that the battery cells can meet the basis statement in 8.2.2(h).\nClause 5 of IEEE Std 535-2006 is reproduced below for reference.\n“5.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-17", "d_text": "- Battery History, Technology, Applications and Development. MPower Solutions Ltd. Retrieved 19 March 2007.\n- Borvon, Gérard (10 September 2012). \"History of the electrical units\". Association S-EAU-S.\n- \"Columbia Dry Cell Battery\". National Historic Chemical Landmarks. American Chemical Society. Retrieved 25 March 2013.\n- Dingrando 665.\n- Saslow 338.\n- Dingrando 666.\n- Knight 943.\n- Knight 976.\n- Terminal Voltage – Tiscali Reference Archived 11 April 2008 at the Wayback Machine.. Originally from Hutchinson Encyclopaedia. Retrieved 7 April 2007.\n- Dingrando 674.\n- Dingrando 677.\n- Dingrando 675.\n- Fink, Ch. 11, Sec. \"Batteries and Fuel Cells.\"\n- Franklin Leonard Pope, Modern Practice of the Electric Telegraph 15th Edition, D. Van Nostrand Company, New York, 1899, pages 7–11. Available on the Internet Archive\n- Duracell: Battery Care. Retrieved 10 August 2008.\n- Alkaline Manganese Dioxide Handbook and Application Manual (PDF). Energizer. Retrieved 25 August 2008.\n- Dynasty VRLA Batteries and Their Application Archived 6 February 2009 at the Wayback Machine.. C&D Technologies, Inc. Retrieved 26 August 2008.\n- USBCELL – Revolutionary rechargeable USB battery that can charge from any USB port. Retrieved 6 November 2007.\n- \"Spotlight on Photovoltaics & Fuel Cells: A Web-based Study & Comparison\" (PDF). pp. 1–2. 
Retrieved 14 March 2007.\n- Battery Knowledge – AA Portable Power Corp. Retrieved 16 April 2007. Archived 23 May 2007 at the Wayback Machine.\n- \"Battery Capacity\". techlib.com.\n- A Guide to Understanding Battery Specifications, MIT Electric Vehicle Team, December 2008\n- Kang, B.; Ceder, G. (2009). \"Battery materials for ultrafast charging and discharging\". Nature. 458 (7235): 190–193. Bibcode:2009Natur.458..190K.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-9", "d_text": "This means that if a component fails, the whole system has to be replaced.\nEach system is patented, so if spare parts are not available will be very expensive adapt the measure from one system to another.\nThe output power and therefore, the performance is limited by the strength of the chain and sprocket. Standard pinions and chains were designed to human transmission through pedaling, which means that the amount of mechanical energy that can be added is relatively small.\nBecause of this, maintenance of the drive chain will increase, including all gears and pinions.\nBatteries are electrochemical components and have their own, say, “personality”. It is almost impossible to find two identical behavior batteries. Although two people the same model are purchasedelectric bike at the same time, they can have completely different experiences with their batteries.\nWe have studied the whole theory on batteries (that can keep one busy for a long, long time), but the most useful information has come from our customers. Often, the information provided by laboratory tests can not replace it emerges from the experience of end users with their batteries.\nThe industry of electric bicycle has been waiting to become an important enough to allow more efficient technologies batteries are offered at reasonable prices sector. This is already beginning to happen.\nA battery is not only a solid piece, is a collection of “cells”. 
The cells are complete electrolytic units, each with an anode and a cathode, and they produce electricity from a chemical reaction.\nEach type of cell has a nominal voltage. For example, NiMH (nickel metal hydride) is about 1.2 volts per cell, so many cells must be combined to obtain the voltage used in an electric motor. Thus, 30 cells would give approximately 36 volts, a useful voltage.\nHow is the capacity of a battery measured?\nUsually when someone asks about the capacity of a battery, they are looking for basically two things: (1) the amount of energy stored in the battery cells (how far can I go?); and (2) the rate at which the cells can be discharged (how much power, and how fast?).\nAmpere-hours are the most common way to describe the amount of electricity stored in the cells. The capacity of a cell in ampere-hours (Ah) is also the full battery capacity in Ah. A 7 Ah cell delivers 7 Ah at 1.2 volts.", "score": 11.600539066098397, "rank": 95}, {"document_id": "doc-::chunk-1", "d_text": "To do this, I will get a package of batteries, store a few in the refrigerator, store a few at room temperature and store a few in my car (these nice Maryland summers should provide me with enough heat to do the job).\nBest rechargeable batteries 2018\nI recommend you go to this website in the following order (some concepts build off prior sections):\n1. The Device\n2. The Experiment\n3. The Results\nThe device I used for this project is a West Mountain Radio Computerized Battery Analyzer (CBA) [available from West Mountain Radio]. The CBA connects to your computer to download the data and control the device. You connect the batteries to the CBA for testing. The CBA then puts a constant load (current draw) on the battery. 
This is similar to using a light bulb connected to a battery to determine how long it lasts, but with the CBA, I can determine how far I want the battery to discharge and have a nice, constant current draw to get more accurate results (and nice graphs as well).\nFor a little background in electronics:\nVolts [V] → The potential for electricity (Ex: A 9V battery has the potential for producing 9V)\nAmps [A] → The amount of electric current flowing (Ex: A 100W household light bulb on a 110V circuit draws about 0.9A)\nWatts [W] → The rate of electrical work, i.e. power (Ex: A light bulb running at 110V and consuming 1A has a power of 110W)\nTo calculate the values:\nVolts x Amps = Watts\nWatts ÷ Volts = Amps\nWatts ÷ Amps = Volts\nAmp Hours [Ah] → How many hours the battery will last with a 1A draw (Ex: a 2Ah battery will last 2 hours with a 1A draw OR a 2Ah battery can last 1 hour with a 2A draw).\nOk, so now that I have explained the device and the math involved, on to how I actually progressed through the experiment.\nI only used AA batteries because I found them the cheapest to work with (don’t forget, I had to finance this myself).\nI set my CBA to assume the batteries all had 1.5V (They were anywhere from 1.55V to 1.65V, but are all labeled 1.5V).", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-0", "d_text": "If you can grasp the basics you will have fewer battery problems and will gain greater battery performance, reliability, and longevity. I suggest you read the entire tutorial, however I have indexed all the information for a quick read and easy reference.\nA battery is like a piggy bank. If you keep taking out and putting nothing back you soon will have nothing.\nPresent day chassis battery power requirements are huge. Look at today's vehicle and all the electrical devices that must be supplied. Electronics require a source of reliable power. Poor battery condition can cause expensive electronic component failure. 
Did you know that the average auto has 11 pounds of wire in the electrical system? Look at RVs and boats with all the electrical gadgets that require power. I can remember when a trailer or motor home had a single 12-volt house battery. Today it is standard to have 2 or more house batteries powering inverters up to 4000 watts.\nAverage battery life has become shorter as energy requirements have increased. Life span depends on usage: 6 months to 48 months, yet only 30% of all batteries actually reach the 48-month mark.\nA Few Basics\nThe Lead Acid battery is made up of plates, lead, and lead oxide (various other elements are used to change density, hardness, porosity, etc.) with a 35% sulfuric acid and 65% water solution. This solution is called electrolyte, which causes a chemical reaction that produces electrons. When you test a battery with a hydrometer you are measuring the amount of sulfuric acid in the electrolyte. If your reading is low, that means the chemistry that makes electrons is lacking. So where did the sulfur go? It is resting on the battery plates, and when you recharge the battery the sulfur returns to the electrolyte.", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-2", "d_text": "If you need more information right now, you may find quick, specific answers to your battery-related questions at the\nBattery University website.\nIn my next column I'll cover classifications, general specifications, and terminology. Until then, please post any questions or comments below.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-0", "d_text": "Subscribe to Blog via Email\n© Mark Biegert and Math Encounters, 2019. Publication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. 
Excerpts and links may be used, provided that full and clear credit is given to Mark Biegert and Math Encounters with appropriate and specific direction to the original content.\nDisclaimer\nAll content provided on the mathscinotes.com blog is for informational purposes only. The owner of this blog makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. The owner of mathscinotes.com will not be liable for any errors or omissions in this information nor for the availability of this information. The owner will not be liable for any losses, injuries, or damages from the display or use of this information.\nTag Archives: battery\nI live in a cold climate -- so cold that under certain circumstances we can freeze our lead-acid batteries (Figure 1). A customer who lives in my region called recently and was wondering if I thought any of his batteries would have frozen over the winter. A number of his Internet service subscribers have vacation homes that are unoccupied over the winter. All of these vacation home owners turn off their AC power for the winter. Since all of our Optical Network Terminals (ONT) are connected to Uninterruptible Power Sources (UPS), they will begin operating off of their battery when the AC power goes away. If the home owner does not disconnect the battery, the ONT will run down the battery. This is important because a discharged battery will freeze -- a charged battery will not freeze. A battery that has been frozen is very likely a dead battery. Continue reading\nDerivation of the Output Power Equation\nThe output power equation (Equation 3) is really a restatement of Newton's law of cooling. 
Equation 3 states the battery's steady-state power dissipation is a linear function of the battery's temperature and the ambient … Continue reading", "score": 8.086131989696522, "rank": 99}, {"document_id": "doc-::chunk-6", "d_text": "2 is a block diagram of one embodiment of the battery capacity tester illustrated in FIG. 1;\nFIG. 3 is a block diagram of one embodiment of the constant current sink illustrated in FIG. 2;\nFIG. 4 is a block diagram of one embodiment of the battery characterizer of the present invention;\nFIG. 5 is a schematic view of the look-up tables used in one embodiment of the present invention;\nFIG. 6 is a flow chart of one embodiment of a method of practicing the present invention; and\nFIGS. 7A and 7B are a flow chart of another embodiment of a method of practicing the present invention.\nThe present invention is a battery capacity testing system and associated method that determines the remaining battery capacity based on a measured internal battery impedance and a predetermined relationship between battery impedance and remaining battery capacity for the battery under test. In the illustrative embodiments described herein, the system measures the battery terminal voltage during one or more successive applications of a known load, where each load draws a predetermined constant current from the battery. The remaining battery capacity is then determined based on a predetermined relationship between the internal battery impedance and the remaining battery capacity of the battery.\nFIG. 1 is a block diagram of a battery capacity tester for determining remaining battery capacity of a battery under test. The battery under test 102 (battery 102) is electrically connected to the battery capacity tester 104 at positive connection 108 and negative connection 110. In one embodiment, a temperature sensor 112 provides a signal indicative of battery temperature 114 to the battery capacity tester 104. 
This sensor can be any temperature sensor now or later developed that senses directly or indirectly the temperature of the battery 102. Battery capacity tester 104 generates a remaining battery capacity 106 in accordance with the present invention. In one embodiment, tester 104 is implemented in a battery powered device. In such an embodiment, battery 102 is typically installed in, and provides internal power to, the battery powered device (not shown). In one exemplary application of the present invention, the battery powered device is a portable defibrillator. As will become apparent from the following description, the portable defibrillator can be used with many different types of rechargeable and non-rechargeable battery chemistries and battery types produced by different manufacturers.\nFIG. 2 is a block diagram of one embodiment of battery capacity tester 104.", "score": 8.086131989696522, "rank": 100}]} {"qid": 49, "question_text": "Can you tell me about the Swamp Rabbit Trail in South Carolina? What's its history and current extent?", "rank": [{"document_id": "doc-::chunk-0", "d_text": "Wild Plants on the Rabbit!\nSouth Carolina supports a rich and diverse collection of plant life, partly because of its high rainfall and the variety of physiographic regions within its borders.\nThe southern Appalachians, considered one of the most biologically diverse regions of the temperate world, include the mountains of our northwest corner. Many species are endemic to this region and found nowhere else in the world. 
The rolling foothills of our Piedmont region were once home to “Piedmont prairies” with a plant palette similar to that of Midwestern prairies.\nHowever, much of this land has been used and abused since the 1700s, the mountains timbered, the rich soil of its gentle foothills eroded and exhausted by cotton, the rivers polluted by the mills and industries which gravitated to their banks.\nThe Swamp Rabbit Trail follows the path of an old shortline railroad that carried freight (often timber) from the mountains above Marietta into Greenville. Some of the Trail also parallels the Reedy River. It’s a delightful glimpse into our communal “backyard” — where the plants have been left to fend for themselves.\nNature is resilient. Over the years, the forests began to grow back. The soil lost downstream can never be recovered, but life stored deep in the seedbank continues to emerge. Our citizenry has worked to clean up the waters.\nWhere the Trail is crisscrossed by utility lines, prairie plants emerge: Indian-hemp, Splitbeard Bluestem, a multitude of Sunflowers, Asters and Goldenrods. Where the river spreads out into floodplains or beavers do their work, wetland plants thrive — Cardinal Flower, Spotted Jewelweed, Allegheny Monkey-flower, even the globally rare, federally protected Bunched Arrowhead!\nBut backyards can also become dumping grounds, and the Swamp Rabbit Trail exposes heretofore hidden (and ignored) infestations of invasive plants.\nWhat is an invasive plant? To answer that, it helps to understand that naturally occurring plants and animals tend to interact with one another to maintain an evolving balance, an intact ecosystem appropriate to a particular landform, elevation, geology, soil type, moisture, etc. There are many different types of natural communities, several of which are encountered along the Trail. 
(Click here to read more.)\nWhen an invasive plant enters that ecosystem, the balance is disrupted.", "score": 53.51427882280883, "rank": 1}, {"document_id": "doc-::chunk-0", "d_text": "If our area’s overabundance of wintertime weather has you down, there’s nothing quite like a long walk with your dog to lift your mood! If you’re looking for a fun place to head out this weekend in Greenville, give the Swamp Rabbit Trail that runs through Falls Park a try. For those of you who haven’t been there in a while, you really need to catch up with what’s new!\nExperience the new Cancer Survivors Park – This beautiful new area opens up right after the trail goes under Church Street as it heads toward Cleveland Park. A fun, multi-level walkway up the side of the hill and a completely redone bridge over the Reedy are worth experiencing!\nDoggie socialization opportunities abound – If your dog enjoys meeting other members of his or her own kind, this is the place for them! Lots of people take their dogs on this section of the trail through Falls Park. From dachshunds to golden doodles, there’s likely to be many chances to stop and say hello for more social pups.\nStop and smell the flowers – Most of the Swamp Rabbit Trail’s winding path through Falls Park is landscaped to the nines! No matter what time of year it is, there’s nearly always something blooming. Just seeing something that’s both beautiful and alive at this time of year can provide some much-needed cheer.\nAlthough it’s unlikely you’ll peel off all 19.9 miles of trail in a day, the Swamp Rabbit Trail in Falls Park is a perfect choice for both a longer distance walk or just a lazy stroll. No matter what you and your dog are game for, this dog-friendly destination downtown has you covered!", "score": 49.32995492587793, "rank": 2}, {"document_id": "doc-::chunk-1", "d_text": "What was a rich medley of plant species supporting a wide assortment of wildlife becomes a botanical desert of one or two species. 
Very few other plants thrive after Privet blankets a floodplain or Kudzu swallows a field.\nIt behooves us to pay attention to the identity of the plants taking up residence in our green spaces….\nThe Native Plant Society is in the process of compiling a list of plants\nthat appear to be growing wild along the Swamp Rabbit Trail.\nThis list currently numbers almost 400 species, many of which are featured in Wild Plants on the Rabbit.\nTo search for a plant\non the Trail by “describing it”,\nYou can pick up a copy of Wild Plants on the Rabbit\nat these outlets:\nCafe @ Williams Hardware\nGreenville County Parks, Recreation & Tourism (office)\nGreenville Visitors Center\nLake Conestee Nature Park (office)\nSwamp Rabbit Cafe & Grocery\nUnited Community Bank (Travelers Rest)\n— more coming! —\nClick here to submit your own Swamp Rabbit Trail pictures\nto the Native Plant Society for identification.\nClick here to download a reduced version of the brochure in PDF format.", "score": 48.91400753910008, "rank": 3}, {"document_id": "doc-::chunk-0", "d_text": "Looking for the perfect gift? 
Shop the Bike Shed for fun Swamp Rabbity gifts!\nCheck out our new property in Travelers Rest, SC\nTake a Tour of the Swamp Rabbit Inn with owner Wendy Lynam\nThe Blessing of the Bikes in Greenville, SC\nMay 6, 2017, Greenville, SC M Judson Booksellers and the Swamp Rabbit Inn present the second annual Blessing of the Bikes will take plac\nHere is a great opportunity to experience what it’s like to go on a Trek Travel bike trip with a free guided ride this Sunday, April 9, 2017 departing from Hotel Domestique i\nThis week we had the pleasure of talking with reporter Rudolph Bell with the Upstate Business Journal and appreciate the mention on their website!\nSwamp Rabbit Inn opening sister\nPlease consider joining us on Thursday, January 26, 2017 for a very special evening with Chef Michael Kramer about to open the much anticipated – Jianna Restaurant –\nFor Immediate Release\nJanuary 16, 2017, Travelers Rest, SC – The Swamp Rabbit Inn and Properties buys the Magnolia Inn in downtown Travelers Rest, SC to\nThis Fall these dudes stayed at the Swamp Rabbit Lodge and rode Paris Mountain #Cranksgiving\nAn Autumn Adventure in the Southeast – Video – Pinkbike", "score": 47.38254417395606, "rank": 4}, {"document_id": "doc-::chunk-2", "d_text": "A Few of My Favorite Trails in North and South Carolina\nAs an outdoor person, I have taken advantage of many of the trails in the area. Whereas the Swamp Rabbit trail has been a boon to walkers, runners, bikers and skaters, I have enjoyed several more rustic trails that provide resources for more rugged hiking and/or horseback riding.\nOne of my favorites is North Mills Recreation Area. This is located in North Carolina, south of Asheville. The North Mills River is an area that provides excellent trout fishing and is a delayed harvest stream (meaning catch and release only in the winter months). This is an area frequented by horseback riders and mountain bikers. There is an extensive network of trails, many following the river. 
When fishing this river, it is not uncommon to encounter bikers and horseback riders, especially on weekends.\nOne of the most interesting, in terms of construction details, is the Rainbow Falls Trail. It is part of the trail system at Jones Gap State Park. The park is home to 60 miles of hiking trails, but the Rainbow Falls Trail is one of the latest additions. Formerly, Rainbow Falls could only be accessed by a dangerous and tortuous descent from Camp Greenville above. The beautiful and difficult trail was built as a spur off of the Jones Gap Trail. The Jones Gap Trail and the Rainbow Falls Trail create an 11.9-mile loop with breathtaking views.\nRainbow Falls drops almost 100 feet and the pool at the base is a pleasant place to view nature in all its splendor. It is more pleasant during warmer seasons, but is well worth the hike at any time. The trail has beautiful steps cut into the stone in places and at one point the trail passes through a gap between two mammoth boulders.\nAnother spur trail, called Rim of the Gap Trail, forks off from the Jones Gap Trail and terminates at Caesar’s Head. It is 4.3 miles of cliffside trail that is extremely difficult. It is often closed in winter months due to the dangerous build-up of ice. The trail is narrow, precipitous and crosses several small streams. It is dangerous and difficult, but the views of the gorge below are phenomenal.\nHistoric Origin of Jones Gap Trail\nThe Jones Gap Trail follows the route of the old Solomon Jones toll road that was the only route over the mountains at the time. It was actively in use from 1840 to 1910.", "score": 46.67897882707522, "rank": 5}, {"document_id": "doc-::chunk-0", "d_text": "Where Can I Register?\nOnline Registration is currently closed.\nYou can register in person on the Thursday before the race at the Crowne Plaza Hotel from 8:00 a.m. to 8:00 p.m. or at Gateway Park on Friday from 11:30 a.m. until 5:30 p.m. 
Registration is $15 and can be paid by cash or card.\nThursday from 8 a.m. to 8 p.m. at the Crowne Plaza Hotel (View Map). ***Packet pickup is also Friday from 11:30 a.m. to 6:15 p.m. AT GATEWAY PARK***.\nNow Hare This\nTo save paper, pre-registration is online only (except day-of registration). The low $6 fee ($15 starting 4/28) includes processing fees, and the site is 100% secure. All online registrants receive a FREE GHS Swamp Rabbit 5K T-shirt!\nElectronic scoring is used for all runners and walkers to capture start and end times. Awards will be given for top finishers in each age group afterward.\nAfter the race, enjoy FREE food, drinks and music. We also feature many children’s activities.\nRace Director is Chad Carlson (firstname.lastname@example.org)\nFrequently Asked Questions\nQ: What is the race policy in the instance of bad weather?\nA: The race will be held rain or shine. In the instance of extremely dangerous weather such as lightning, the GHS Swamp Rabbit race committee, in cooperation with the City of Travelers Rest, will make a decision about the appropriate course of action. Check our website for updates. Race entry fees are non-refundable.\nQ: Can I get a refund if I do not race?\nA: Race entry fees are non-refundable\nQ: Can I register on race day?\nA: Yes! However, we do encourage you to pre-register to minimize the rush at the event. We will also offer registration at all packet pickup locations as well as at the event.\nQ: I am coming in from out of town and do not arrive until race day, how do I get my bib and chip?\nA: You may pick up your packet at the event from 11:30 a.m.-6:15 p.m. 
The location for the packet pick up will be on the basketball courts at Gateway Park.\nQ: Can we pick up our packets before race day?", "score": 45.89516739356719, "rank": 6}, {"document_id": "doc-::chunk-1", "d_text": "posted Nov 2, 2019\nContinuing the conversation from the 2019 International Trails Symposium (ITS) and Training Institute and our TRAILSLead™ Multi-use Trails and Conflict Forum, this webinar will build upon the concepts brought up during the panel discussion.\npublished Apr 1, 2003\nThe purpose of this study was to provide an extensive description of the use of trails in South Carolina.\nposted Apr 3, 2019\nThe Southern Mountain Bike Summit is a not-to-be-missed event. Nowhere else will you see such a passionate group of mountain bikers in the Southeast meet and work together to create better trail opportunities for all. And, you get to ride your bike on the finest trails in the Southeast! On the schedule, Chainsaw Certification, Trail Crew Leader Training, Women’s Trail Maintenance & Construction, Land Manager Day and MTB Advocates Day.\nPage 37 of 49\nFort Worden State Park, Port Townsend, Washington\nRouted and painted wood sign; Arches National Monument, Moab, Utah\nSign helps users find trail beyond point of interest; Arches National Monument, Moab, Utah\nSee more photo results\nSouth Carolina Trails\nFriends of Florida State Forests\nSee more business results", "score": 43.86456831331039, "rank": 7}, {"document_id": "doc-::chunk-0", "d_text": "There are many ways to connect people and places in a community. Few have the added advantage of protecting fresh water supplies, providing natural places to commune with the outdoors, offering environmentally beneficial transport options, and creating economic growth. In the piedmont of North and South Carolina the Carolina Thread Trail will be over 1500 miles long and growing. 
The trail is an ambitious project that offers all of the above.\nThe blue star poised on CTT signs throughout central Carolina is both a familiar and exciting reminder that someday you’ll be able to walk, run, or ride your bike on a trail to most places throughout the region. One day, the Thread will connect more than 2 million citizens in 15 counties from both North and South Carolina. Already, 76 local communities are involved raising money, planning for, and building new miles of path.\nThe forward-thinking project to create an extensive network of trails throughout the region was launched in 2007. The trail stretches from Iredell County in the North to Lancaster County down south and Cleveland County in the west to Anson in the east. All land for new trail construction has been purchased, collected by easements, or already owned by the city or county, none relying on eminent domain, and much of it has been donated by private and public philanthropists. The CTT works under its lead agency, the Catawba Land Conservancy, whose mission it is to “…save land and connect lives to nature”.\nThe Thread adopted much of the current system from existing trails. Under the guidance of the CTT, however, local communities have built 137 miles of new trail. The job is just beginning. Vanessa Gorr, Community Outreach Coordinator, says the goals of the next five years include:\n- Completion of five contiguous miles in each of the 15 counties, giving local residents places for long walks and bike rides.\n- Finish at least half of a 140-mile “North-South spine,” roughly paralleling the Interstate 77 corridor, from South Carolina to counties north of Charlotte. About 30 miles have been completed.\n- Pay special attention to the South Fork Catawba River in Catawba, Lincoln and Gaston counties. 
The South Fork has also been a focus of the Catawba Lands Conservancy’s land protection work.", "score": 42.28732966201438, "rank": 8}, {"document_id": "doc-::chunk-0", "d_text": "Imagine more than 425 miles of hiking and bicycling paths beside lakes, across mountain ridges, through forests and into towns big and small. What better way to explore the natural beauty and local color of South Carolina?\nThis video was made by a guest of the Palmetto Trail.\nConceived in 1994, South Carolina’s Palmetto Trail is the state’s largest bicycle and pedestrian project and will run from the mountains to the sea.\nThis federally designated Millennium Legacy Trail is the signature project of the Palmetto Conservation Foundation. It is one of only 16 cross-state trails in the United States.\nLake Moultrie Passage of the Palmetto Trail – Opened in 1995, the Lake Moultrie Passage is a 33-mile hiking trail that wraps around Lake Moultrie from the trailhead at the Canal Recreation Area off Highway 52 north of Moncks Corner to the Redivision Canal at Cross, passing through some of South Carolina’s most beautiful vistas. When completed, the Palmetto Trail will extend from McClellanville on the coast to the Foothills Trail in the Upstate.\nSwamp Fox Passage of the Palmetto Trail – A 42-mile trail under development, of which the 27-mile Swamp Fox National Recreation Trail is open. The trail spans four distinct ecosystems and is good for hiking, biking, bird watching, nature study, environmental education and photography. The eastern trailhead is located on the left of US Highway 17 just north of Awendaw. The western trailhead is located at the Witherbee Ranger Station in Cordesville. For more information call the Witherbee Ranger Station at (843) 336-3248.\nA Million Dollar View\nDon’t miss the million-dollar view on the Swamp Fox Passage of South Carolina’s Palmetto Trail. 
From the Trail you can get a panoramic view of Santee Cooper’s 70,000-acre Lake Moultrie, with sunsets rivaling those at the beach. If you have time, wait for the sun to set. It’s worth it!\nThe easy three-mile round-trip hike is a great outdoor date. The trail passes through a loblolly pine ecosystem, which is quite diverse. In the southern United States, the word loblolly means a “mud hole or a mire” and has become associated with the pine trees that favor wet bottomlands or swamps.", "score": 40.9449597870781, "rank": 9}, {"document_id": "doc-::chunk-0", "d_text": "Look up the terrain difficulty, potential weather conditions, and places to stop for water and supplies on a longer trip. Get an estimate of how long your trip will take you. Park and National Forest rangers and PCF staff are available to help answer these questions.\nSouth Carolina is an awesome state! We are fortunate to be able to enjoy so many different ecosystems on the Palmetto Trail.\n- Suzette Anderson\nPalmetto Conservation Office Manager\nDOWNHILL HIKERS YIELD to uphill hikers.", "score": 39.678684005710956, "rank": 10}, {"document_id": "doc-::chunk-0", "d_text": "The Palmetto Trail - The Awendaw Passage\nThe Second Leg\nIn some previous hubs, I told of my goal to hike the Palmetto Trail. After completing the first leg, The Capital City Passage, I decided to tackle The Awendaw Passage as the second leg. The Awendaw Passage would be the first leg of your journey if you were going to hike from one end to the other of the Palmetto Trail and you started at the South Carolina coast. It would be the last leg if you started in the mountains of South Carolina.\nChoosing The Awendaw Passage was really nothing more than a choice of convenience. This hike took place on July 21, 2010 while vacationing at Litchfield Beach with my family. I somehow managed to talk seven other family members into going with me. 
This included both of my sisters and their husbands, my youngest child, and my youngest sister's two oldest children. This means that we were hiking with an 11-year-old, a 10-year-old, and a 7-year-old!\nWe left our beach place at 7:15 that morning and it was 82 degrees. We drove from Litchfield to northern Charleston County. With one stop for a fast food breakfast, we were at the start of the trail one hour later. At that point we left one brother-in-law and three kids. They were to start hiking so that shorter legs could take a more leisurely pace and have additional time to rest. Our intent was to drive both cars to the end of the passage, leave one there, and drive the other back to the starting point and catch up with the others.\nThe journey to the ending point was not uneventful. The map seemed to indicate that we could take Whitten Road to its end and then make two turns and end up where we needed to be to drop off the one vehicle and then return to the starting point in the other vehicle. THIS DID NOT WORK! We found ourselves on dirt roads that were not on the map and traveled a great number of miles out of our way on these dirt roads, seeing almost nothing but the road and pine trees. We finally ended up in McClellanville, a considerable distance away from where we needed to be.", "score": 37.880695752128695, "rank": 11}, {"document_id": "doc-::chunk-0", "d_text": "Passage of the Palmetto Trail\nThis easy walking/biking section of the Palmetto Trail connects the Swamp Fox Passage to the ocean and serves as the coastal trailhead for the statewide trail on its way to the mountains. You will follow Awendaw Creek and pass through the surrounding marsh area, which provides a home for many shore birds and other lowland wildlife. A scenic overlook and boardwalk provide an opportunity to stop and enjoy a sweeping view of the area and the Intracoastal Waterway. 
Use extreme caution when you cross US 17.\nRestrooms for trail users are available at Buck Hall Recreation Area. Insect repellent is recommended during warmer months.\nFor additional information from Palmetto\nAll visitors must pay to use Buck Hall Recreation Area. $5 per vehicle per day, payable at the self-pay station, or a $25 annual pass is available at the Wambaw\nReservations may be made by calling 877-444-6777 or using www.reserveusa.com. A $9 reservation fee will be added.\nFrom McClellanville drive south on US 17 approximately 6.5 miles, or from Awendaw drive north approximately 3 miles, and turn at the Buck Hall Recreation Area sign. Drive to the end of the road; the trailhead is at the small parking area on the left. You can also access the Awendaw Passage from the Swamp Fox Passage west of Awendaw.\nDawn to dusk.\nAt Buck Hall Recreation Area.\nInformation: Francis Marion National Forest, Witherbee Ranger District, 2421 Witherbee Road, Cordesville, S.C. 29434. Telephone: 843-336-3248. Also try the Sewee Visitor & Environmental Education Center, 5821 Hwy. 17 N, Awendaw, SC\nConservation Foundation, 1314 Lincoln St., Suite 305, Columbia, SC\nUpdated: September 18, 2008\nSouth Carolina State Trails Program\nSouth Carolina Department of Parks, Recreation and Tourism\n1205 Pendleton Street :: Columbia, SC 29201 :: 803-734-0173", "score": 36.6234342411875, "rank": 12}, {"document_id": "doc-::chunk-5", "d_text": "By the end of 1982 only the 5 miles coming north off Hutchingson Island to the junction with the new connection remained in use. This was re-designated the Hutchingson Island Subdivision and survived into CSX. Eventually, all of the business dried up on the island and in 1996 CSX sold most of the land they owned, including the flat yard; today the land is golf courses, race tracks and hotels. 
CSX did leave the track in place from South Hardeeville to Hutchingson Island, though, and plans for a new container port and industrial complex at Hardeeville along the out-of-service route may soon bring new life to the track, which is overgrown today. North of this area the old roadbed has been converted into a hiking trail called the New River Trail.\nA note about the name "East Carolina Subdivision": This line was never officially named the "East Carolina" Subdivision by the SAL. It was part of the Carolina Division of the SAL and earned the nickname "EC" from the people who worked and maintained it, due to its alignment on the east side of South Carolina. After the SCL merger, for a brief time (2 months), it was renamed the "Charleston Subdivision", and ultimately became part of the New Savannah Division. After the Lobeco-to-Charleston abandonment, the line south of Coosaw was renamed the Coosaw Subdivision.\nLots of evidence remains of the former SAL freight route. Several concrete signal bases are located near Pritchard and a few poles still stand in the swamp along the roadbed. The long trestle at the Broad River still stands, although heavily damaged by fire. Several bridges to the north have been converted into fishing piers, the most recent one at Lobeco. Other traces remain here and there of the once proud freight route of the Seaboard Air Line. The EC may be gone, but it is not forgotten.\nThanks to Eugene Cain for contributing information about this route.", "score": 35.26116543579039, "rank": 13}, {"document_id": "doc-::chunk-0", "d_text": "Hiking is one of our favorite pastimes, but since we've moved to South Carolina we haven't really had a successful hike. 
We tried a couple of trails near Charleston, but they turned out to actually be flat sand paths through really low branches covered in pine straw....not my favorite scene.\nSo we decided to jump in the car and drive a few hours west to the central part of the state and see if the hiking improved with some miles and elevation. I can't say it was quite as fantastic as the northeastern mountains.....but it sure was better than beach trails.\nThe first adventure of our day - DETOUR. We figured the "Road Closed Ahead, Local Traffic Only" signs just meant they had pavement torn up or something...well...or something.\nSO...we couldn't quite jump that gap...we thought about it though.\nThe trailhead for the section of the Palmetto Trail that we hiked was not well marked, so honestly we're not sure what section we ended up on, but we were trying to be close to the Lake Marion Passage.\nThe first half of the hike it rained, so I had my camera wrapped up under the rain guard on my bag...these photos are all on the way back. (It wasn't a loop trail.) Megan did her usual few handstands in the field before the woods.\nAside from the spiders, the hike was actually pretty entertaining. We saw snakes, bugs galore, dead snakes and possums, inch worms and ants, and three-legged dogs. It was quite the nature scene. Also we saw a deer for about 1/30th of a second.\nThe turtle wasn't fazed by our paparazzi, he just chilled.\nThe wildlife was fantastic, and the trail actually had a few "hills" here and there, so it wasn't quite as boring as the coastal trail was. Gotta be honest, our buns were missing the burn of the New England mountains! 
We'll have to roadtrip there in the summer for real climbs.\nThe boardwalk on the way back was much less slippery, but still a bit treacherous...Dave decided bouncing on the semi-rotted boards while we were crossing was a good idea...\nAfter nearly 10 miles of hiking we were definitely ready to make it back to our cars and take our soggy hiking shoes off...but when we got back I had to grab these few photos of the amazing trailhead (once we finally found it).", "score": 33.534690237975326, "rank": 14}, {"document_id": "doc-::chunk-0", "d_text": "The Spanish Moss Trail\nThe Spanish Moss Trail is an emerging rails-to-trails greenway project owned by Beaufort County, South Carolina and is located in the heart of Northern Beaufort County. Today, the Spanish Moss Trail is a 3.3-mile, 12-foot-wide, paved pedestrian Trail open for the enjoyment of bikers, runners, walkers, fishers, and nature enthusiasts of all stages of life. When completed, the 13.6-mile Trail will connect neighborhoods, parks, water and marsh views, nature preserves, cultural features, historic sites and businesses for recreation, transportation and conservation purposes.\nWhere is the Spanish Moss Trail?\nThe Spanish Moss Trail is currently 3.3 miles in length and located in Northern Beaufort County, South Carolina. The first 2 phases of the Trail are open for residents' and visitors' enjoyment between Ribaut Road in Port Royal and Depot Road.\nWhat is the next phase of the Spanish Moss Trail?\nPhase 2 of the Trail opened in November 2013, connecting an additional 2.3 miles from Ribaut Road in the Town of Port Royal to Allison Road in Beaufort. This section of the Trail now connects with the Phase 1 Model Mile (Allison to Depot Roads), bringing the Trail to 3.3 miles. Funding for Phase 3 of the Trail is COMPLETE (Beaufort County with a Federal Grant) and is in the design/engineering phase. 
Implementation of Phase 3 is planned for Spring/Summer 2014.\nBay Street Outfitters guided Fly Fishing and Light Tackle tours\nBeaufort: South Carolina Fly Fishing and Light Tackle At Its Best\nTwenty-five percent of the entire United States east coast marshland water is here in Beaufort County. Located about halfway between Charleston, SC and Savannah, GA, and only a stone's throw from Hilton Head. Shallow water sight fishing at its finest! The flats of the Low Country are teeming with fly casting and light tackle opportunities. Whether it is sight casting to tailing reds, chasing giant cobia or casting into schools of jacks, ladyfish, Spanish mackerel or trout, the fish are here! With literally hundreds of square miles of tide waters and grassy marshes to draw from, fishing possibilities are endless.", "score": 33.336741592949885, "rank": 15}, {"document_id": "doc-::chunk-0", "d_text": "This registration is for the Swamp Fox Passage ONLY\nLooking to get your hands dirty for a good cause…join us for Trail Work Wednesdays!\nWith all of the flooding that has occurred along the Palmetto Trail, we are in need of volunteers to help us get the trail back on its feet again!\nTrail Work Wednesdays will be held every Wednesday, with a specific passage designated for cleanup for each month. For the month of February, we will be working along the Wateree Passage & the Swamp Fox Passage of the Palmetto Trail.\nWe want YOU to help us make a difference to South Carolina’s Palmetto Trail!\n*Trail Work Wednesdays will run from 9am – 12pm (with the option to stay after lunch until 3pm) every Wednesday.\n*Snacks and water will be provided. 
Volunteers will need to provide their own lunch.\nFor Directions and Map, click HERE\nCordesville, SC 29434\nGoogle map and directions", "score": 33.281761839138014, "rank": 16}, {"document_id": "doc-::chunk-0", "d_text": "Our Country's Newest "Blue Trail" — The Congaree River In South Carolina\nAs President of American Rivers, the nation's leading river conservation organization, I get to enjoy our nation's rivers more than most people. After all, it's my job! But I don't come to work every day just because I love rivers and want to protect them so our communities can continue to thrive. I come to work every day because I want everyone to love and appreciate rivers. I want all Americans to have a stake in the future of our rivers, and the best way to do that is to connect people with their local rivers and streams. To engage individuals with rivers and allow people to truly see what they have to offer. To many people, a river is just something to look at as they cross a bridge – if they even notice it at all. In order to change this, we need to give people the opportunity to personally experience a river - to witness its beauty, behold its grace and respect its power. Most people believe they have to go long and far out of their way to enjoy a river, but many of these jewels are right in their own backyards. So, at American Rivers, we make it a priority to not only connect people with their local rivers, but to cultivate the recreational and cultural opportunities rivers provide. That's why American Rivers is working to establish blue trails through our Blue Trails Initiative™ — a great way to connect people to their hometown rivers while boosting tourism, civic pride and a conservation ethic.\nBlue trails, also known as water trails, are the river equivalent of hiking trails. They are corridors developed to facilitate recreation in and along rivers and other water bodies. Blue trails are found in urban settings as well as in remote environments. 
They come in all shapes and sizes and are used by paddlers, anglers, hikers, runners, picnickers, and those just seeking a bit of solitude.\nOne of our country's newest blue trails is South Carolina's Congaree River Blue Trail.\nThis past Friday, American Rivers and its local partners hosted a celebration of the Congaree River Blue Trail and its recent designation as a National Recreation Trail. The celebration included a ceremonial paddle from West Columbia across the river to Columbia, South Carolina.", "score": 32.47943831927611, "rank": 17}, {"document_id": "doc-::chunk-0", "d_text": "Congaree National Park South Carolina\nA trail through the wilderness\nA beautiful forest you can explore by boat\nComprising nearly 11,000 hectares, Congaree National Park is the largest intact expanse of old-growth bottomland hardwood forest in the United States. Located in central South Carolina near the state's capital of Columbia, this National Park is home to an incredible range of biodiversity.\nOne of the most popular activities in Congaree is walking along its nearly 4 kilometers of boardwalks or 40 kilometers of hiking trails. Along the trails, keep your eyes peeled for the animals that find sanctuary here, such as wild boars and box turtles. If you're interested in learning more about the park's rich biodiversity, tag along on a guided ranger walk. Another way to experience this marshy park is to kayak or canoe through its swamps or fish along its rivers.", "score": 32.03971139291442, "rank": 18}, {"document_id": "doc-::chunk-1", "d_text": "West Columbia Mayor Horton, City Council Member-Elect Belinda Gergel, and Congaree National Park Superintendent Tracy Swartout joined us and spoke about the importance of the Congaree River to South Carolina and the region. 
American Rivers also released a new waterproof map and interpretive guide of the Congaree River Blue Trail.\nThe Congaree River Blue Trail is a real asset for local communities, and the trail map and online interpretive guide will make it even easier for people to get out and enjoy the river with their family and friends. As demonstrated by the many partners and civic leaders attending the paddle and celebration, the Congaree River brings all the surrounding communities together and fosters a sense of civic pride.\nAmerican Rivers is working hard to enhance communities' access to and opportunity for recreation and the enjoyment of these valuable assets. We are rekindling the public's appreciation for America's rich river heritage. As more people learn to appreciate the gift of rivers, they will want to protect them. That's what blue trails are all about.", "score": 31.905703908255234, "rank": 19}, {"document_id": "doc-::chunk-0", "d_text": "Congaree Swamp is a 2.3-mile moderately trafficked loop trail located near Hopkins, South Carolina that features a lake and is good for all skill levels. The trail is primarily used for hiking, walking, and canoeing and is accessible from September until May.\nOK, I'm not going to recommend kayaking here by yourself! But as far as a hike around the boardwalk it was awesome! There was a huge amount of wildlife.\nThis trail actually had a mosquito meter and it's well worth it to check it out. This is a swamp so after a good rain it is wet and slippery. Beautiful flora and fauna and loads of wildlife. Feral hogs so keep an eye on your pups. Had one get completely out of her harness to have a look and we had to plough through muck to go get her. There is a boardwalk and a trail, in fact a few well marked trails and some are quite long. No dogs are allowed on the boardwalk. We really enjoy this trail but it is clearly for hiking, not a breezy walk amongst the trees but have a look. 
It's well worth it.\nInteresting walk through the swamp with views of cypress trees and wildlife. Hot and lots of bugs in the summer.\nI've heard this national park referred to as the worst national park in America. I haven't been to all 59, but I have been to a dozen or so, and this is by far and away the worst that I've been to.\nIt's a great hike; very scenic. But beware of the snakes sunning themselves along the trail. Look ahead closely before walking and looking up. And you will look up at the amazingly huge trees along the way.\nI graded this "trail" only on the condition of the surface. There are a few boards starting to come up. Those that are not sure-footed may want to be aware of this to prevent a fall. The boardwalk is made out of wood and has benches scattered throughout for ones needing to rest or watch whatever may be out there. I didn't get to walk the eastern side of the boardwalk because it is closed for storm repairs.\nA very unique place...amazing trees with wonderful wildlife. I saw several beautiful pileated woodpeckers near the boardwalk...and the noise of their pecking accompanies you everywhere.", "score": 31.53528379487898, "rank": 20}, {"document_id": "doc-::chunk-0", "d_text": "Bartram Trail (GA), Feb 26-27, 2011\nSaturday, February 26, 2011\nI had been sick earlier in the week, but I had been planning to hike the first 19 miles of the Bartram Trail before Spring break this year, so I could finish the remaining 90 miles at a more leisurely pace over my break in late March.\nSo my wife dropped me off around 10:30 on this Saturday morning at the Russell Bridge Trailhead. This small parking lot is on Highway 28, just north of the Chattooga River bridge that leads into South Carolina. There, Dewey posed for a picture with one of the many engraved boulders that mark key points along the Bartram Trail.\nThen we crossed the highway heading west, following the general course of the Chattooga River. 
Shortly into the walk, I passed the first of the plastic yellow diamond blazes that mark the Bartram Trail.\nI was surprised at how well the BT is maintained and blazed. The guidebook mentions several places where blazes and cross-trails could be confusing. But the folks who maintain the BT have clearly spent some quality time taking care of this corridor. There was never a confusing point during the entire weekend.\nThe area along the trail had once been a road bed, and reminders of the folks who lived there remained. An old hay baler stood out as evidence this path once catered to vehicles.\nJust after, the foundation and chimney of a vanished home gave witness to the past.\nThe trail was relatively level, and despite some mild weakness from the previous week's illness, I ambled on with relative ease. Occasional glimpses of the Chattooga greeted me as I wandered on. As I approached Warwoman Creek, I was happy to see one of the longer bridges along this section of trail, preventing the need for a ford.\nShortly after, as I passed by the Earls Ford area, I eased around a car campsite and moved along swiftly. On the other side of the camp, I encountered a couple of dayhikers wandering along. They were the first people I had seen in 6 ½ miles of trail. I continued on, planning to make my way to a campsite around mile 9 near Dicks Creek Falls.", "score": 30.963578038758847, "rank": 21}, {"document_id": "doc-::chunk-0", "d_text": "Running along South Carolina’s coast between the metropolitan areas of Charleston and Myrtle Beach is a rural stretch of land that beckons to a time when things moved a little slower. 
The Bulls Bay Historic Passage consists of the northern tip of Charleston County, which contains the towns of McClellanville and Awendaw.\nBounded by over a quarter of a million acres of Francis Marion National Forest to the west and 22 miles of protected coastline in the Cape Romain Wildlife Refuge to the east, as well as the Santee Coastal Reserve and Santee River delta to the north, this area is buffered from urban growth and development.\nResidents choose to live here as a way to get out of the city and back to nature. The history of the area is rich, dating back to the Native American tribe known as the Sewee Indians, from whom many local names such as “Awendaw” derive their roots. The area also witnessed its first European settlers in the 1600s. French Huguenots settled along the Santee River and gave rise to rice plantations. The Gullah-Geechee culture is still preserved by the descendants of former slaves. Hopsewee and Hampton Plantations are both open to the public, providing visitors a glimpse into the history of the area. McClellanville and Awendaw came about as coastal retreats away from the swampy, mosquito-filled rice fields of the plantations. McClellanville’s Historic District holds many homes dating back to the mid to late 1800s.\nMuch of the area’s beauty is drawn from its natural surroundings: the creeks and marsh, the oceans and sunsets, the native palmettos and live oaks draped with Spanish moss. The diverse ecology spreading from the forests, swamps, marshes, and beaches makes the area a prime destination for bird-watching and photography. An abundance of deer, turkey, and ducks also allows for great hunting opportunities.\nOther outdoor activities centering around the waters of the wildlife refuge include boating, world-class fishing, and shelling on the secluded beaches only accessible by boat. 
The wildlife refuge is a Class 1 Wilderness Area, prized for its pristine water and air purity, and is the longest stretch of protected coastline on the eastern shore of the U.S. Kayaking, canoeing and paddleboarding are also great activities to enjoy in the saltwater creeks.", "score": 30.893425083726154, "rank": 22}, {"document_id": "doc-::chunk-0", "d_text": "Catawba Indian Reservation is a 1-mile out-and-back trail located near Rock Hill, South Carolina and is good for all skill levels. The trail is primarily used for hiking and is best used from September until May. Dogs are also able to use this trail.\nA fantastic view of the free-flowing Catawba River!! This particular trail is the Yehasuri trail. It's a scenic trail that is a part of an old wagon trail that dates back to 1810, with several interpretive signs along the way. You will end at one of the most beautiful sections of the Catawba River that exists today. There are only two free-flowing sections remaining and this area is one of them. If you linger long enough by the river's edge you'll most likely see a Great Blue Heron skimming the water. There is a rock shelf that extends across the river, and the sound of the rushing water is worth the hike!!", "score": 30.891791729269983, "rank": 23}, {"document_id": "doc-::chunk-1", "d_text": "The Greenway Program will connect more than 70 miles of multi-use trails in Columbia, Richmond and Aiken counties. The project encompasses Euchee Creek Greenway, Savannah Rapids Pavilion and the Augusta Canal Trail in Columbia County, and the trails will connect to downtown Augusta, south Augusta, Fort Gordon and North Augusta, South Carolina. 
The Greystone Preserve project includes the construction of a new outdoor educational campus on the preserve, which has more than 260 acres of running or hiking nature trails, in North Augusta.\nThrough the neighborhood conservation initiative, the Land Trust is working with Columbia County neighborhoods such as Farmington, Sumter Landing and River Island to preserve greenspace. So far, more than 300 acres have been preserved in and around Columbia County neighborhoods. As part of its farmland conservation efforts, the Land Trust has preserved 2,300-plus acres of farms in and around Columbia County and Clarks Hill Lake.\nIf You Go:\nWhat: Bash on the Banks on the Savannah River\nWhen: 6 p.m. – 9 p.m. Thursday, October 27 for ages 21 and older\nWhere: River Island Clubhouse\nHow Much: $50 per person; cash bar available\nMore Info: csrlt.org", "score": 30.83356282926437, "rank": 24}, {"document_id": "doc-::chunk-1", "d_text": "Our pool is closed for the season, but come spring, after a day of exploring you'll be ready to unwind in our new 4-acre Rec Area. Our 35ft x 65ft swimming pool was constructed by Shotcrete Pools of SC, LLC. There are 12 ft decks surrounding the pool for your relaxation, comfort, and enjoyment. There is an adjacent Recreation Hall that can accommodate 200 people for cookouts and live music or entertainment.\nJoin in horseshoes, volleyball, ping pong, billiards, and other activities.\n425 miles of hiking and bicycling paths form a series of “passages” that make up the Palmetto Trail. The Palmetto Trail is the state’s largest bicycle and pedestrian project that spans from the mountains to the sea. If you're a hiking, biking, or running enthusiast, or are horseback riding the Lake Moultrie Passage of the Palmetto Trail, we're located 100 yards from the Palmetto Trail Marker on the Diversion Canal. A perfect camping location. These are the woods of Francis Marion, "Swamp Fox". 
The day trip Eutaw Springs Passage is home to the famous Battle of Eutaw Springs Revolutionary War Battlefield. Here is a Map of the Palmetto Trail.\nOne of the most fascinating phenomena is the annual migration of spawning fish. Santee Cooper's Anadromous Fish Passage & Restoration - St. Stephen Fish Lift is one of only two of its kind on the East Coast. This incredible facility provides you a window with a view as the fish pass through the fishlift. See American Shad swimming through in the Video of Fish Lift and then Read more about the St. Stephen Fishlift\nLoaded with comforts and amenities, Angel's Landing is not only the finest choice in Santee Cooper Campgrounds because of the waterfront sites, docking, and recreational sports, it is an outstanding value.\nFor your comfort our restrooms with showers have been completely remodeled. We maintain the high standard that Angel's Landing is known for. We also offer laundry and a dump station. Here is a map to our landing.\nSince 1979, Angel's Landing has been the perfect destination or affordable getaway - an hour from Charleston and Columbia - for travelers looking for relaxation and comfort, or travelers looking for outdoor recreational adventure and fun with the family.", "score": 30.719075244638244, "rank": 25}, {"document_id": "doc-::chunk-0", "d_text": "(All Images Photographed by Brad Bates)\nOne thing a Back Road Warrior hopes for when they are out exploring is an adventure. And folks, today this Back Road Warrior was not disappointed.\nI began the day by trekking south off of Alternate Highway 17 in Cottageville SC via Jacksonboro Rd to Charleston Highway (aka Hwy 64), to about a mile shy of Jacksonboro, SC.\nYou see, I live in what is known as the Lowcountry of South Carolina, and we are known for three things: food, beautiful scenery, and history….LOTS of history. 
The Charleston area has been host to everything from the American Revolution to the American Civil War, and beyond.\nEvery building, beach, field, and stream has a story.\nAnd in the case of Charleston, a well-documented story.\nBut some things, though documented, still manage to get lost along the roadside of life and the roadside of history. And several notable and historical American events are chronicled right here in this Podunk little community called Jacksonboro, SC.\nHow do I know? Because I take the back roads, silly!\nOh, and I don't let those little brown signs that say things like "Historical Marker 1/2 Mile" go by without stopping to check out what they say. And more times than not, it leads me to something so spectacular, and so off the beaten path…that had I not stopped to check out that little roadside marker, I might have driven right on by…and like so many others in America, let this little gold nugget of history get further lost by the wayside.\nFirst stop…the Bethel Presbyterian Church Cemetery right there on Highway 64. (coordinates are 32.7974255, -80.4910373, ya know…in case you want to visit). For me this was about a 15 to 20 minute ride from home. From Charleston you're looking at more like a 30 to 45 minute ride. But totally worth it in my opinion.\nThis cemetery (along with the now long gone accompanying church) was founded back in 1728 by the Reverend Archibald Stobo (what a name!). It was also known as Pon Pon Church. Later in the 19th century the church took up residence in the nearby town of Walterboro, SC. Sadly, the original building burned down in 1886.\nI LOVE having history right in my back yard.", "score": 30.565020063444702, "rank": 26}, {"document_id": "doc-::chunk-1", "d_text": "The first European colonists settled in counties along this trail (north to south) as follows:\nConnecting trails. The Charleston-Ft. Charlotte Trail links to other trails at each end. 
The migration pathways connecting in Charleston included:\nThe migration routes connecting in Fort Charlotte included:\nThe newer Charleston-Ft. Charlotte Trail also crossed the much older Occaneechi Path in Aiken County. The Occaneechi Path was overlapped here by the Fall Line Road starting about 1735, and also the Great Valley Road (south fork) starting in the 1740s.\nModern parallels. The modern roads that roughly match the old Charleston-Ft. Charlotte Trail start in Charleston. Follow I-26 north to Orangeburg. Then take the Neeses Highway west to Springfield. Then take Highway 4 west to Aiken. Then follow Highway 19 northwest until it becomes Highway 25. Continue northwest along Highway 25 to where it meets Highway 378 in northern Edgefield County. Turn west onto Highway 378 to reach McCormick. Then go northwest on Highway 28 until Highway 81 forks off to the west. Follow Highway 81 winding westerly to Mt. Carmel. From Mt. Carmel take the Fort Charlotte Road 6.5 miles (10.4 km) southwest to Strom Thurmond Lake. The Old Fort Charlotte site lies under that lake.\nSettlers and Records\nThe first colonists in what became the Fort Charlotte area arrived before the Charleston-Ft. Charlotte Trail existed. They would have arrived by way of the Savannah River, the Middle Creek Trading Path, the Fort Moore-Charleston Trail, the Augusta and Cherokee Trail on the Georgia side of the river, or even the Occaneechi Path and its overlapping Fall Line Road, and Great Valley Road. Only after Fort Charlotte was started in 1765 would travelers have been able to use what became the Charleston-Ft. Charlotte Trail. Even then, they may have used the older Fort Moore-Charleston Trail most of the way to Aiken County before splitting off toward Fort Charlotte.\nUlster-Irish, French Huguenots, and Germans were among the earliest, pre-Fort Charlotte pioneer settlers.\nNo complete list of settlers who used the Charleston-Ft. 
Charlotte Trail is known to exist.", "score": 30.186401774171106, "rank": 27}, {"document_id": "doc-::chunk-0", "d_text": "This weekend’s paddling trip to Lake Marion was nearly perfect. There was fantastic weather, beautiful scenery, excellent food, good company, and a venue with interesting history. Unfortunately, that history has been somewhat tainted and full of controversy.\nNames like “Santee” and “Congaree” give indication that the original inhabitants of the area were Native Americans. Colonists also found the Santee River Basin a fertile ground for plantations and farming. Unfortunately, they also brought smallpox, which wiped out the Congaree tribes by the 1700s. Francis Marion carried out his raids during the Revolutionary War from the dense cypress forests, earning him the name “Swamp Fox.” Lake Marion now bears his name.\nAs for the town of Ferguson itself, the story starts with two Chicago businessmen, Francis Beidler and Benjamin Ferguson. Post-Civil War South Carolina was impoverished, and Beidler and Ferguson were able to purchase huge tracts of forest land at bargain prices. Their holdings included most of the Congaree-Wateree-Santee (Cowasee) Basin. According to an article in the Columbia Star…\nIn 1881, two lumber magnates from Chicago, Francis Beidler and B.F. Ferguson, formed the Santee River Cypress Lumber Company and purchased over 165,000 acres of land along the Congaree, Wateree, and Santee Rivers in South Carolina.\nBeidler and Ferguson, realizing the forests of the Northeast and Midwest had been exhausted, meant to capitalize on the bald cypress trees they discovered in the virgin Santee floodplain. They built a lumber mill on the Santee River and constructed a “town” in which the workers could live. The new town was called Ferguson.\nThe town grew quickly, and was one of the first towns in South Carolina to have indoor plumbing and gas lighting in the streets. 
It was a self-contained community that remained somewhat isolated from the other towns. Logs were sent by rail over to Eutawville and Cross for transfer to other parts of the state, but its residents did not interact much with those villages. Workers were paid in scrip rather than cash, and were forced to purchase from the company business located in the town. Examples of Ferguson scrip coins can still be found on eBay and at antique coin vendors.", "score": 29.98216287966104, "rank": 28}, {"document_id": "doc-::chunk-0", "d_text": "Vereen Memorial Gardens are located in Little River, near the NC state line in Horry County, SC. The Gardens are home to a variety of wildlife and local history. Along the trails, you can find a variety of tree species, view wetlands and the Intracoastal Waterway. You can also find a variety of birds that call the woodlands and wetlands home.\nThe TRACK Trail materials at Vereen Memorial Gardens were designed for use on any of the park’s trails. Starting at the CB Berry Community & Historical Center rear parking lot, the trails at Vereen Memorial Gardens make their way through a variety of woodlands and continue through to the wetlands. Boardwalks and bridges bring you through the wetlands and to the Atlantic Intracoastal Waterway.\nThe trail from the trailhead to the wetlands and back is approximately 3 miles long.\nAdventures for Vereen Memorial Historical Gardens\nBirds of the Coast - Difficulty:\nNature's Hide & Seek - Difficulty:\nVereen Need For Trees - Difficulty:\nFrom Myrtle Beach. Drive north on Hwy 17. Vereen Memorial Gardens is located on the right, about ½ mile before the NC border.\nFrom Wilmington. Drive south on Hwy 17. 
Vereen Memorial Gardens is located on the left, about ½ mile after the SC border.\nThe TRACK Trail program is sponsored by the Blue Ridge Parkway Foundation, the Blue Ridge Parkway, and the Blue Cross and Blue Shield of North Carolina Foundation.\nThe Vereen Memorial Garden TRACK Trail was made possible through a partnership formed with the Horry County Parks and Rec Department.", "score": 29.550073891093728, "rank": 29}, {"document_id": "doc-::chunk-0", "d_text": "WeHuntSC.com noticed considerable growth in site traffic this past week with the welcoming of several rabbit hunters from all over South Carolina. If you are into hunting rabbits and you're in South Carolina, then you need to get in the loop with the rabbit hunters on our site. These guys are from all different corners of the state and are very passionate about rabbit hunting and the dogs they use to hunt them!\nI don't know much about rabbit hunting, but I'm learning slowly over time. Though, I can tell you from going hunting with Hoot and watching the message board that rabbit hunting is just as much about the dogs as it is about rabbits. I would venture to say that you won't find many rabbit hunters who aren't also dog lovers. Now it makes sense to me why Hoot didn't even carry a gun when he took us rabbit hunting last winter.\nIt seems that the dogs must be trained year-round and I will say that these "bunn brothers" talk about the breed of dogs and where they descend from like they just researched someone's family tree and wrote a report on their heritage. They know this stuff inside and out. If you're not on the inside of the rabbit hunting world then the language they speak when it comes to blood lines can be a little hard to follow. I find it neat and interesting that these guys know all this information about their dog's lineage and pedigree and discuss it so frequently. 
I guess it is necessary though if you want to have the best dogs trailing some rabbits!\nWe're happy to have all the new site members aboard and welcome any others that may wish to join in on the fun!\nWe just rounded out our Thermacell Give-Away campaign and now a lot of WeHuntSC.com registered site members will be mosquito free this coming hunting season! A big thanks to Thermacell for partnering with us on this campaign to drive site registration and promote Thermacell in South Carolina.\nSo if you know any of these people, you may want to sit with them during the upcoming hunting season!\nYou may ask why I went to Inman to the True Timber headquarters earlier today… well, it was for a reason… and we're excited to announce that True Timber has gotten on board with WeHuntSC.com to sponsor not 1, but 2 competitions this year! True Timber Camo will be the title sponsor for the "Youth Buck of the Year Competition"
Today, the asphalt pathway, once a dumping ground, serves a real need in the community.\nHours: Dawn to dusk\nDirections:\nTo the western trailhead: From downtown Charleston, drive west on US 17 and cross the Ashley River. Turn right onto Wappoo Road and the trailhead sign is on the right immediately after the turn.\nTo the eastern trailhead: From downtown Charleston, drive west on US 17 and cross the Ashley River. After you cross the bridge, turn right onto SC 61/SC 171. Shortly before SC 61 and SC 171 split, there is a trailhead sign on the left.\nCity of Charleston Recreation Department | 823 Meeting Street, Charleston, SC 29403 | (843) 724-7321\n|Trail Segments (Paths)||Lat: 32.79105312225165", "score": 29.321748797581794, "rank": 31}, {"document_id": "doc-::chunk-0", "d_text": "The South Carolina Wildlife Magazine was started over five decades ago, published by the South Carolina Wildlife and Marine Resources Department. It became famous for its excellent photography and high publication quality and won many awards. These selected articles from past decades document a different, slower and more enjoyable South Carolina: a time when Dad would take his son fishin' at Santee, the family would sit outside on a cold February night and enjoy a bushel of McClellanville oysters, Mom would cook a tasty Beaufort stew, or Grandad would take his grandkids frog giggin' in the spooky dark of night. Sadly, that innocent, slower-paced, high quality and healthy lifestyle has almost completely disappeared in the real estate boom. South Carolina was once renowned as a sportsman's paradise, and still has many attractions, but development and population growth have changed the character of the state. The Old South that many envision is now \"Gone With the Wind.\" Please enjoy this glimpse of the past!\nSpring 1964 Issue\nThis is a complete sample of one of the early issues. 
It contains interesting information about the construction of the striped bass hatchery at Bonneau below Lake Moultrie, along with boat landings around Santee-Cooper; aquatic weed management; and the members of the Wildlife Commission. Note the format of the telephone numbers of the officers - some with as few as four digits! Also see the picture of the 39-pound dead striped bass found at Lake Murray.\nWinter 1965 Issue\nHere are some excerpts from the Winter 1965 issue. Note the nice opinion piece written by the editor - he had the audacity to quote Jesus Christ, although he was too afraid to source the quotes. Political correctness was beginning even then. Also, there's a piece on the threatened alligator, written by famous game warden Mac Flood of Berkeley County. He used to patrol Francis Marion National Forest on horseback. If it weren't for good men like him who cared about the future generations, the American alligator might have gone the way of the passenger pigeon. Also, there are some nice photos of a fox hunt with dogs and horses - a sport only for the elite, of course!\nSpring 1965 Issue\nHere are some excerpts from the Spring 1965 issue. The cover photo is of the Horsepasture Valley before Lake Jocassee was formed.", "score": 28.818240537219697, "rank": 32}, {"document_id": "doc-::chunk-0", "d_text": "Warning: This data is out-of-date. Trails, roads, and landmarks change.\nYou should obtain current information before attempting any hike.\nDate Hiked: 4/14/91\nLength: 7.0 miles\nMain Features: views from overlooks, Governor's Rock, and Table Rock, as well as small cascades and falls\nAnimals: hawks, pileated woodpeckers, chickadees, ruffed grouse, crows, vultures, cardinals, garter snake, vole, snails, millipedes, red eft (newt), gray squirrels\nLocation: Table Rock State Park, South Carolina\nFrom the intersection of highways 178 and 11, go northeast on 11. Drive 4.2 miles and turn left at the sign indicating Table Rock State Park, West Gate. 
Drive 0.4 miles and turn right at the toll booth. Drive 0.8 miles and park in the parking lot on the right across the street from the Carrick Creek Interpretive Center. The trail starts from the back deck of the interpretive center.\nNote: This is an extremely popular trail. I've passed over a hundred hikers on this trail.\nApproximate Altitude Changes\nAscend 1,950 feet to high point about a half mile from end\nDescend 250 feet to the end of the trail\nTotal Ascent: 2,200 feet\n0.0 parking area\npaved walk with waterfalls, stairs, and a bridge\n0.2 bear right at fork\n0.4 bear right at fork\n1.6 rain shelter on left (almost halfway to end)\njust beyond shelter is overlook to southeast, south, and southwest\n1.8 turn right at junction\n2.4 view from Governor's Rock (elevation 2,920 feet)\n(The peak to the west is Pinnacle Mountain, the highest mountain entirely in South Carolina.)\n2.7 sign Table Rock Mountain elevation 3,157 feet, with spring on right\n3.0 overlook on right\n(On a clear day you can see downtown Greenville to the east by southeast.)", "score": 28.336479116418595, "rank": 33}, {"document_id": "doc-::chunk-1", "d_text": "Even locals who have spent years on the Ashley River don’t realize that it doesn’t start right around the Highway 17A area in Dorchester County.”\nBut over 200 years ago, far upriver from the plantations that famously represent the Lowcountry, the community of Ridgeville, SC, and the rough-hewn cabins of Cypress Methodist Campground were strategically built on a ridge alongside Cypress Swamp – the highest point in the Ashley River’s headwaters. Berkeley County’s Wassamassaw Swamp and Dorchester County’s Cypress Swamp were both essential to early life in this region. Settlers here employed inland swamp rice-growing technology, a method largely overlooked in contrast to tidewater rice production. 
Remnants of dikes built to facilitate the grain’s growth are still evident in many areas.\nThrough the centuries, when asked, the river answered and served its people well. In addition to providing wild game and fish as a food source, the Ashley River also fed the fields where rice was cultivated, and provided access for barges and other vessels to carry the crop to world markets. As rice production declined in the years following the Civil War, the needs of the region changed. The Ashley once again answered, offering up the resources of its watershed.\nLarge tracts along the river were consolidated for commercial timber production. Vast deposits of phosphate-laden marl were discovered nearby, and docks began to dot the shoreline. At one time, the land along the Ashley River produced one-half of the world’s mined phosphate. But by 1938, the phosphate boom was over, and in its wake, a dramatic environmental bill was left to be paid. With devastatingly high phosphorus levels left behind, the generous river that had given us our identity was in danger of losing its own.\nBy mid-century, rather than surrender to the residual pollution, those who would see the river restored and revered for its historical, natural, and cultural contributions took note. More importantly, they began to take action. In 1976, under the South Carolina Scenic Rivers Act of 1974, the portion of the Ashley River from Summerville’s Bacon’s Bridge downstream to Bull’s Creek was declared eligible for designation as a State Scenic River.\nThe tide continued to turn in the years just before the new millennium with the establishment of the Ashley Scenic River Council, and implementation of the Ashley River Management Plan.", "score": 27.846752461349592, "rank": 34}, {"document_id": "doc-::chunk-6", "d_text": "As I stood there watching the wading birds in the ponds, a pair of bald eagles circled overhead.\nJust up the road at 40 Acre Park, I was delighted by the bird life and the quiet. 
This small city park has recreational facilities, but the hidden gem is the old ponds that the hatchery abandoned decades ago. They’ve become rectangles of swamp, with worn paths between them. As I walked down the causeway, two sandhill cranes looked up, curious.\nWe tend to pack as much research as we can into our trips. So during our “unscheduled” time, John and I slipped away with Robert and Laura for a ride on the newest portion of the Palatka-Lake Butler Trail. This paved extension from Carraway into the edge of Palatka, ending not far from St. Johns Water Management District headquarters, meant a radical change at Rice Creek. I’d hiked over the beautiful old railroad trestle there more than once, as it was part of the Florida Trail. We found it replaced by a long-span bridge that a truck could drive across, with a new causeway supporting it. Remembering the beauty of the creek crossing, I wasn’t fond of the “progress” to make the bike trail happen. The trail had officially opened the weekend before we arrived.\nAnother trail, heading in the opposite direction from Palatka to St. Augustine, was close to completion and we’d expected to ride it as a group before we all departed the next morning. Nasty weather that morning made our plans change. Mike Adams, who assumes the persona of William Bartram at special events, invited us out to his riverfront plantation instead. In full Bartram regalia, he led us on a hike out to the St. Johns River.\nWe stepped out onto a boardwalk and surveyed the sweeping view. Progress, it seems, has left the St. Johns River alone for much of its length. While cypress logging and the building of towns, farms, and bridges have changed the landscape forever from what the Bartrams saw and documented, there are still long stretches of the river that are wild. 
The Bartram National Recreation Trail lets you experience the river through the eyes of William and John, our earliest explorers who left a trail for us to follow.", "score": 27.72972038949445, "rank": 35}, {"document_id": "doc-::chunk-0", "d_text": "The Congaree River Blue Trail is a 50-mile designated recreational paddling trail, extending from the state capital of Columbia, downstream to Congaree National Park. Paddlers begin with an urban adventure experience, with quick access to the Three Rivers Greenway hiking trails, as well as opportunities to learn about the historic significance of the capital city, including prehistoric Native American sites on the river's tributaries. As the trail continues downstream, paddlers cross the fall line and enter the Coastal Plains region. With ever-increasing meanders and seemingly countless sandbars, boaters encounter high bluffs and extensive floodplain habitats associated with the Congaree National Park. Once in the park, paddlers have the opportunity to take out and explore the park's 20 miles of established hiking trails - including a 2.4-mile boardwalk. In addition to paddling and hiking, many visitors to the park also enjoy backpacking, camping, fishing, birding and nature study.\nThe Congaree River Blue Trail project came together through a collaborative, multi-agency effort with representatives from American Rivers, the National Park Service, The River Alliance, Friends of Congaree Swamp, South Carolina Department of Natural Resources, Congaree Land Trust, Coastal Conservation League and the Richland County Conservation Commission - the goal of which was to improve public access and increase recreational interest along this stretch of the Congaree River. On June 7th, 2008, the Congaree River Blue Trail was designated a \"National Recreation Trail\" by the U.S. 
Department of the Interior in recognition of its local and regional significance.\nClick here to view the official Congaree River Blue Trail Map on screen (Acrobat Reader required). A waterproof, full size version of this map may be obtained at the park's Harry Hampton Visitor Center.\nVisit www.bluetrailsguide.org for more information about establishing a Blue Trail in your area.\nLast updated: April 14, 2015", "score": 26.9697449642274, "rank": 36}, {"document_id": "doc-::chunk-0", "d_text": "Welcome to Congaree National Park’s most popular hiking trail, the Boardwalk Loop Trail. This 2.4-mile hike serves as an excellent introduction to the park’s natural and cultural history. You will see some of the giant trees that put this park on the map, including outstanding specimen loblolly pine, beech and baldcypress, and learn of the efforts to preserve this landscape. The boardwalk passes through a cypress-tupelo flat with hundreds of cypress “knees” emerging from the floodplain soils. At the Weston Lake overlook, see if you can spy turtles, sunfish and gar. In passing through the old-growth forest, evidence abounds of damage from natural disturbances, including hurricanes, ice storms and floods. The last portion of the trail passes through ideal habitat for the rare Carolina bogmint, look for its blooms in mid-summer. Though Congaree National Park has only been a national park since 2003, there is a long-recorded cultural history to learn as well. This land has been inhabited by many peoples, but signs of their presence on the land can be subtle.\nThe boardwalk passes by red-shouldered hawk and barred owl nests, and you have a reasonable chance of seeing these predators waiting in ambush for crayfish and other prey in the park’s wetlands. Listen for other bird species including the loud rattling of pileated woodpecker and red-headed woodpecker, or the chittering of chimney swift overhead. Near the bluff, keep an eye out for box turtle, mud turtle and rat snake. 
The boardwalk itself attracts all manner of skinks, anoles, and caterpillars interesting to children and adults alike.\nThe park is often mislabeled as a swamp; it is better categorized as an old-growth bottomland hardwood forest periodically flooded by groundwater or surface water. Additionally, tree species common to the area possess adaptations that allow for growth and reproduction despite periodic flooding and drought. Bottomland hardwood forests display distinct ecological zones at different elevations, and Congaree National Park is no different—as you walk along the Boardwalk Trail, it is possible to observe a change in soil saturation, groundcover plants and tree species with only slight changes in elevation.\nThe Boardwalk Trail begins at the Harry Hampton Visitor Center just past the breezeway. American holly trees line the edges of the boardwalk near the start of the trail.", "score": 26.9697449642274, "rank": 37}, {"document_id": "doc-::chunk-14", "d_text": "Hampton was a Columbia native, and an avid outdoorsmen and wildlife enthusiast. The Harry Hampton Visitor Center at Congaree National Park is named after him, as is the Harry Hampton Wildlife Fund, dedicated to raising funds for the research, education, and management of game and fish laws to benefit wildlife conservation in South Carolina. His interest in conservation issues and consequent quest to preserve the Beidler Tract originated in his hobby of hunting. After hunting on the Beidler Tract and becoming familiar with the floodplain, Hampton decided that this forest was exceptional in its species diversity and large trees, and began lobbying to set the land aside as a national park.\nHis lobbying was lost amongst those opposed to creation of a park, until the Beidler family prepared to begin logging again in 1969. 
In turn, conservationists took up Hampton’s cause under the umbrella of the Congaree Swamp National Preserve Association, and began an intensive lobbying campaign for federal legislation protecting the land. In 1976, Congaree Swamp National Monument was established. The Beidlers logged approximately 2500 acres of land before it fell under federal protection, and then sold their land to the federal government for tens of millions of dollars.\nOnce estimated to cover 30-50 million acres of the southeastern U.S., old-growth bottomland forests have been greatly diminished. However, largely due to the conservation efforts started by Harry Hampton, Congaree National Park is the largest old-growth bottomland hardwood forest left in America. Congaree has only been a national park since 2003—before that, it was a national monument. With the title of a national park has come many benefits, including greater recognition and enhanced efforts to increase the park’s visibility and outreach.\nSite 19: Freedmen and Slaves\nThis trail marker is surrounded by a variety of undergrowth, trees, and shrubs. Included is a beautiful downed loblolly pine, sweetgum, and holly trees. The twisted tree roots, natural debris, and thick muck made it more difficult for slave owners and slave catchers to traverse the unknown terrain, and the dense vegetation promoted concealment. As such, some runaway slaves established settlements in the river floodplains, and created communities amongst the forest. 
The park was frequented further as an area of temporary refuge for slaves on the run or freedmen avoiding capture.", "score": 26.9697449642274, "rank": 38}, {"document_id": "doc-::chunk-1", "d_text": "It jogs left into Conservation Park through a main gate at the southern boundary of the preserve, and continues north, paralleling the paved bike trail, to a loop through the trailhead area.\nAfter taking the time to learn about the park and its history at the trailhead, turn with your back to the building and walk across the parking lot to the T intersection where the trails begin. Turn right and cross the park road. You come to the first intersection of trails. All four major trails – red, blue, yellow, and green – start here. To begin your hike on the green trail, turn slightly right and go straight ahead, following the pea gravel path into the woods, passing by nicely shaded picnic tables and an outdoor classroom. The trail is rather broad as it meanders through the pines. You can see a cypress dome off to your left, obvious in its cloak of rust, orange, and yellow as the cypress needles react to the cooler temperatures of fall, shedding into the swamp below. Gallberry, grapevines, and saw palmetto make up the understory of the pine plantation.\nPart of the reason for taking the Green Trail is that it ushers you through two cypress domes. You reach the first boardwalk by 0.2 mile. Tannic water flows sluggishly below. Wax myrtles are in flower. The islands in the swamp look like ideal places for pitcher plants to cluster, and perhaps, when the hydrology of the landscape has been restored, they will do so. This region along the Gulf Coast, where wet flatwoods edge right up to Choctawhatchee Bay, is where I've seen white-topped pitcher plants in the acidic soils of the wet flatwoods.\nAt the end of the boardwalk, turn right. The broad wood-chipped path leads the Green Trail around the cypress dome, beneath the lines of slash pine. 
Fortunately, the trail does not go straight down the rows but across them, so you get an illusion of natural forest amid the plantation, bolstered by the cypress domes that break up the landscape. Bark chips are not the easiest surface on which to gain traction. You pass a confirmation blaze and head down a straightaway, then curve to the right to zigzag through scrubby flatwoods towards denser forest in the distance, turning towards the east.", "score": 26.9697449642274, "rank": 39}, {"document_id": "doc-::chunk-1", "d_text": "Brookgreen Gardens, with a nature center and many outdoor sculptures, is a popular tourist spot.\nThe University of South Carolina and Clemson University maintain the Belle W. Baruch research site at Hobcaw Barony on Waccamaw Neck. The islands around the outlet of Winyah Bay are designated as the \"Tom Yawkey Wildlife Center Heritage Preserve\". This area is home to the northernmost naturally occurring hammocks of South Carolina's signature sabal palmetto tree.\n2. The riverfronts have had little recent development. Such properties were once used for rice plantations, using a rice variety brought from Africa. After the Civil War, and the loss of slave labor, the plantations gradually ceased production. Today they are primarily wild areas, accessible only by boat. In some areas, the earthworks, such as dikes and water gates used for rice culture, still exist, as well as a few of the plantation houses. Litchfield Plantation has been redeveloped as a country inn; other properties have been developed as planned residential communities. Great blue herons, alligators, and an occasional bald eagle can be seen along the waterways. Fishing is a popular activity.\nFishing the Pee Dee off the old US 17 bridge near Georgetown\nA tiny community accessible only by boat is on Sandy Island, in the Pee Dee River. Residents are descendants of slaves who worked plantations on the island, and they are trying to keep out development. 
Recently the Federal government began buying land along the rivers for the new Waccamaw Wildlife Refuge, which is intended to protect such wild areas. The headquarters of the refuge will be at Yauhannah in the northern part of the county.\n3. Georgetown is a small historic city founded in colonial times. It is a popular tourist area and a port for shrimp boats. Yachting \"snowbirds\" are often seen at the docks in spring and fall; these people follow the seasons along the Intracoastal waterway.\n4. The inland rural areas are thinly populated. Some upland areas are good for agriculture or forestry. Several Carolina bays are thought to be craters from a meteor shower. These areas are rich in biodiversity. Carvers Bay, the largest, was extensively damaged by use as a practice bombing range by US military forces during World War II. Draining of the bay has further damaged its environment.", "score": 26.680121075442255, "rank": 40}, {"document_id": "doc-::chunk-1", "d_text": "They now are most often seen in the river. As time goes on, the beavers may return. But even so, this area is an important wetland habitat. Birding opportunities abound on all trails, but especially on the trails on the north side of the river. This trail is for hiking, biking, and jogging. At the head of where the beaver ponds once began, there is a short rocky section and the entire trail is under shaded canopy.\nRIDGE LOOP TRAIL\nThis six feet wide .75 mile trail begins in sight of the covered bridge where Beaver Creek dumps into the South Fork River. If you look carefully, to the right of the trail as it begins to climb up the ridge, old wheel ruts from “buggy days” can be seen when a horse and wagon was the mode of transportation. Ferns can now be seen growing in the old ruts. While heavily wooded with big trees providing a shaded canopy, the top of the ridge was once planted in cotton in the late 1800s. 
Much of the park was once in agricultural use whether planted in cotton or fenced for livestock. While this loop is just under a mile, it is also part of the overall trail system that accommodates hikers, mountain bikers, and joggers.\nPresently, there are 12 miles of horse trails in the park. These trails are designed for equestrians. Bicycles are not allowed on these trails. Dogs are also not allowed on horse trails. These trails go through a variety of environments. While almost unnoticeable, there are remains of old home sites from the 1800s. An agricultural society once occupied this area and, as the years have gone by, nature is taking back what had been altered by man. The state park was established in 1970. However, until the state park acquired additional land in the mid-1990s, private companies were managing portions of the properties for timber production. The overall state park management plan is to help provide sound resource preservation and conservation. However, due to the various uses of the land before it was acquired by the state park, there are numerous environments from “natural” forested areas to formerly “timber” managed areas. Along the creeks, large hardwoods with high canopies will be seen.", "score": 26.357536772203648, "rank": 41}, {"document_id": "doc-::chunk-0", "d_text": "[Fig. 32(25), Fig. 35] The Ellicott Rock Wilderness encompasses one of the Southeast's premier river gorges and includes national forests from three different states - North Carolina, South Carolina, and Georgia. It also harbors one of the oldest trails in the East and a wonderful variety of plants and animals.\nThe Ellicott Rock Wilderness straddles the 15,000-acre Chattooga Wild and Scenic River corridor known for its high rocky cliffs, powerful cascades, and luxuriant mixed evergreen-deciduous forests. The area contains old-growth white pine, hemlock, tulip poplar, and other hardwood specimens. 
Thick, impassable tangles of great rhododendron understory form over creeks and on slopes, giving the colloquial term \"rhododendron hell\" true meaning. At times, these shrubs form a tunnel over the well-maintained trails.\nFor those interested in day hiking or primitive camping, the area offers excellent trails free of motorized vehicles, horses, and bicycles, making it as much of a true wilderness experience as possible. Campsites must be located at least 50 feet from water courses or designated trails.\nThe Ellicott Rock Wilderness was expanded from a much smaller scenic area first created in 1966. This 1975 expansion brought the area under the protection of the 1964 Wilderness Act and increased the area to 9,012 acres. Nearly half (3,900 acres) of the tract lies in North Carolina. The Chattooga River was given Wild and Scenic River status in May 1974. Since then, the U.S. Forest Service has proposed acquiring an additional 2,000 acres to the south and east of the present boundaries.\nThe area is reached from North Carolina via the Ellicott Rock Trail or the Bad Creek/Fowler Creek Trail [Fig. 35(3)], the latter joining with the former just before crossing into South Carolina. The trail and wilderness area get their name from a rock on the east bank of the Chattooga River that was inscribed with the letters \"NC\" by a surveyor named Ellicott. Ellicott Rock was marked erroneously as the point at which the three states of North Carolina, South Carolina, and Georgia meet.", "score": 25.887032788908996, "rank": 42}, {"document_id": "doc-::chunk-0", "d_text": "The Carolina Thread Trail is a regional network of greenways, trails and blueways that reaches 15 counties, 2 states and 2.3 million people. There are over 220 miles of trails open to the public – linking people, places, cities, towns and attractions. The Thread Trail preserves our natural areas and is a place for exploration of nature, culture, science and history.
This is a landmark project that provides public and community benefits for everyone, in every community.\nThe interactive map below and a great deal more information can be found on the Carolina Thread Trail Website at www.carolinathreadtrail.org", "score": 25.65453875696252, "rank": 43}, {"document_id": "doc-::chunk-0", "d_text": "|Francis Marion National Forest - Sewee Shell Mound\n|From Charleston, take U.S. Highway 17 north to Doar Road (SC Route 432-S). Turn right and go 2.5 miles to Salt Pond Road. Turn right onto Forest Road 243 and go 1/2 mile to the trailhead (which is on your right).\n|Total Hike Distance:\n|Roundtrip, Loop Hike\n|Forest Road 243\n|Sewee Shell Mound Interpretive Trail\n|Backcountry Water Sources:\n|U.S. Forest Service\n|Sewee Visitor And Environmental Ed Center\n5821 U.S. Highway 17 North\nAwendaw, SC 29429\n|Primary Paved Roads, Secondary Paved Roads, Maintained Gravel or Dirt Roads\n|This is a loop hike that visits two ancient Indian sites that date back 4000 years. Hurricane Hugo devastated this area in 1989, and a wildfire ravaged the downed timber two years later in 1991. You can still see the remnants of the fire. The loop trail starts in the recovering coastal wood, then breaks out into the salt marsh on occasion with views to the Atlantic Intracoastal Waterway. It is a pleasant and interesting hike along the coast. Some other notes:\n- the hike is flat as a pancake - very easy\n- the main trail is a loop with two spurs leading to Indian sites. Backtrack from the spurs to the main loop.\n- at both the Oyster Shell Ring and Clam Shell Mound there were hundreds of tiny crabs roaming the ground (that disappeared into tiny holes when I approached). My dog would have freaked out if I had him with me.
The ground was moving with every step I took.\n- I would not want to hike this area in the summer - too buggy - take insect repellent in warm weather\n- the views out to the Intracoastal Waterway are very pretty: a grassy, flat area with the waterway in the background\n- the Forest Service has placed somewhat worn but very informative trail signs explaining details of the route and the Indian sites", "score": 25.65453875696252, "rank": 44}, {"document_id": "doc-::chunk-2", "d_text": "Nevertheless, local and county histories along that trail may reveal pioneer settlers who arrived after 1765 and who were candidates to have travelled the Charleston-Ft. Charlotte Trail from the Charleston area.\nFor partial lists of early settlers who may have used the Charleston-Ft. Charlotte Trail, see histories like:\nin McCormick County:\n- Bobby F. Edmonds, The Huguenots of New Bordeaux (McCormick, SC: Cedar Hill, 2005) (FHL Book 975.736 F2e) WorldCat entry.\n- Bobby F. Edmonds, The Making of McCormick County [South Carolina] (McCormick, SC: Cedar Hill, 1999) (FHL Book 975.736 H2e) WorldCat entry.\n- [Willie Mae Wood], Old Families of McCormick County, South Carolina and Dorn families of Edgefield, Greenwood and McCormick counties ([S.l. : s.n.], 1982) (FHL Book 975.736 D2w) WorldCat entry.\nin Edgefield County:\nin Aiken County:\nin Orangeburg County:\n- \"The First Families of Orangeburgh District, South Carolina\" in Orangeburgh German-Swiss Genealogy Society at http://www.ogsgs.org/ffam/ff-intro.htm (accessed 23 March 2011).\nin Dorchester County:\n- Fort Charlotte (South Carolina) in Wikipedia\n- Fort Charlotte historical marker in Mt. Carmel at junction of SC Hwy 81 and Road 91.\n- ↑ Handybook for Genealogists: United States of America, 10th ed. (Draper, Utah: Everton Pub., 2002), 848. (FHL Book 973 D27e 2002).
WorldCat entry.\n- ↑ \"McCormick County\" in South Carolina State Library at http://www.statelibrary.sc.gov/mccormick-county (accessed 24 March 2011).\n- ↑ Wikipedia contributors, \"Fort Charlotte (South Carolina),\" Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Fort_Charlotte_(South_Carolina) (accessed 24 March 2011).\n- ↑ South Carolina - The Counties, http://www.carolana.com/SC/Counties/sc_counties_alphabetical_order.html (accessed 22 March 2011).\nNew to the Research Wiki?", "score": 25.65453875696252, "rank": 45}, {"document_id": "doc-::chunk-5", "d_text": "Several other trails in the park can be used to create the 12-mile Naturaland Trust Loop, which is not only among South Carolina's top hikes but also a very strenuous one. An added bonus for hiking the full loop, however, is optional access to two more gorgeous waterfalls on the property of Asbury Hills Camp: Moonshine Falls and Confusion Falls. This hike occurred on Saturday, January 20th, 2018. My plan was to hike the Naturaland Trust Loop clockwise from the Raven Cliff Falls parking area, although I came away with the feeling that hiking the loop counter-clockwise is better.\nMost hikers in the Tallahassee area have heard of or even been to Leon Sinks. How many have heard of Wakulla River Sinks though? Along with adjacent Apalachicola National Forest, the River Sinks Tract - a lesser-known parcel of Wakulla Springs State Park - holds nearly two dozen water-filled sinkholes. The sinks serve as a portal to the mysterious underground water-filled cave system that eventually connects to the Wakulla River to the southeast. An official trail called the Wakulla River Sinks Trail passes by several small sinks, including the photogenic Clearcut Sink. If one extends their hike along the unofficial pathway that splits off and leads to a series of bigger sinks, including Promise Sink and Upper River Sink, they are bound to be astounded!
This hike occurred on Saturday, January 6th, 2018. My plan was to hike the official Wakulla River Sinks Trail clockwise. Along the way, I would make out-and-back side trips to Clearcut Sink and along an unofficial trail to a series of sinks to the south.\nCumberland Trail: Edwards Point Loop via Mushroom Rock, Blue Trail, Orange Trail, and Bee Branch Trail, Signal Mountain, Tennessee\nThe Cumberland Trail, a work in progress that will be a 300-mile trail when completed, has its southern terminus at Signal Point in the Chattanooga metro area. One of the Cumberland Trail's most spectacular sections is its very first one as it traces the rim of the Tennessee River Gorge, visiting outstanding views at Signal Point and Edwards Point as well as a unique geological formation known as Mushroom Rock at the edge of the Suck Creek Gorge. Besides the Cumberland Trail, several lesser-used trails give hikers the option to turn a hike to Edwards Point into a full-day loop with additional sights.", "score": 25.65453875696252, "rank": 46}, {"document_id": "doc-::chunk-12", "d_text": "The blacktop surface made for an extremely smooth ride through fields and swamps with good foliage shade. The surface was clean, and tree debris from the storms was sawed away from the trail and removed. Grassy sides of the pathway were mowed and clean. This is a great path for viewing assorted wildlife including a huge cottonmouth moccasin sunbathing on the asphalt (we didn't stop to visit). Deer, cows, chickens, hawks and occasional eagles abound as well as goats and the usual Florida native wildlife. If you plan to stop at one of the many benches thoughtfully set along the trail you should bring a good mosquito repellent during warm weather due to the swamp nature of much of the locale. Porta-potties are located at each of the three trailheads and midway is a first-class his-and-hers pine-paneled rest facility complete with stainless steel fixtures and diaper-changing station. Quite impressive!
All in all, this is the best longer ride in northeast Florida and similar in many ways to the Silver Comet Trail in Atlanta. Go do it! You'll be glad you did. \"\n\"This is the fourth trail we have reviewed and it is the best so far. We started at the Baldwin Brandy Branch Road trailhead where the trail is mostly flat and well shaded. It is extremely well maintained with grassy banks on both sides that had been recently mowed. Most of the trail is shaded by a canopy of trees. As you get nearer to the Jacksonville end, the trail is less shaded and there are several crossroads. The trail is well traveled by other bikers, joggers and in-line skaters and is apparently used by horses also. The Jacksonville and Baldwin trailheads feature ample parking lots. The Jax parking lot was full by the time we arrived there, attesting to the number of users of this trail. \"\n\"This trail is perfect for everyone regardless of their experience level. Ride, jog, walk or skate -- whatever your pleasure, this trail will far exceed your expectations.\nThe 10-foot-wide trail allows plenty of room for couples and group rides. It's mostly shaded and provides plenty of livestock viewing. Just make sure you don't run over any chickens and you will have a blast. \"", "score": 25.000000000000068, "rank": 47}, {"document_id": "doc-::chunk-2", "d_text": "In many places the Bartram Trail is unmarked and poorly maintained or completely obliterated by modern construction.\nJohn Bartram visited the Southeast in 1765 and 1766 in his role as Botanist Royal in America. Appointed by King George III, the elder Bartram took his son William Bartram on these trips. So enamored was William of the life his father led that he left his Quaker roots in Philadelphia to explore the Southeast when he was offered financial support from a friend in England.\nWilliam Bartram began his journey in March of 1773 at the age of 35. He followed what is now called the Bartram Trail.
This trail, which runs from the northeast corner of Georgia to Augusta and the Savannah River, is only partially maintained by local communities along the way.\nDirections from Franklin, NC:\nView From HWY 106 between Highlands, NC and 106 to GA\nYou can drive to the Bartram Trail by taking Hwy 106 west for 7.3 miles from Highlands, or head down 441S in Franklin to Georgia, take 106 off of 441, and head up Scaly Mtn, NC, until you come to the village of Scaly Mtn, NC. Then take a left onto Hale Ridge Rd for 2 miles until it bears left, while Bald Mtn Rd curves right. Continue on Hale Ridge Road until the road becomes gravel, and in less than half a mile you will see the Bartram Trail sign on the right.\nJoyce Kilmer-Slickrock Wilderness was created in 1975 and covers 17,394 acres (70 km2) in the Nantahala National Forest in western North Carolina and the Cherokee National Forest in eastern Tennessee, in the watersheds of the Slickrock and Little Santeetlah Creeks. It is named after Joyce Kilmer, author of \"Trees.\" The Little Santeetlah and Slickrock watersheds contain 5,926 acres (23.98 km2) of old growth forest, one of the largest tracts in the United States east of the Mississippi River.", "score": 24.644035609354617, "rank": 48}, {"document_id": "doc-::chunk-0", "d_text": "The Carolina Thread Trail, also known as The Thread, is a regional trail network that will ultimately reach 15 counties and more than 2.3 million people.\nMore than a hiking trail, more than a bike path, The Thread preserves our natural areas and is a place for exploration of nature, culture, science and history.\nThe Thread arose from a discovery process started in 2005 when the Foundation For The Carolinas convened more than 40 regional leaders and organizations to determine the region's most pressing
environmental needs and concerns. From that process, open space preservation surfaced as the number one priority. The Carolina Thread Trail was successfully launched in 2007 as a project focused on preserving natural corridors and connecting people to nature through a network of connected trails.\nUnder the leadership of Catawba Lands Conservancy and many local partners, the Carolina Thread Trail strengthens the region and promotes economic development, education, better health and land conservation by connecting people, businesses and communities of diverse backgrounds and interests.\nWhile not every local trail will be part of the Carolina Thread Trail system, The Thread is linking regionally significant trails and many regional attractions. Think of it as a \"green interstate system\" of major trails and conservation lands created through local efforts throughout the region. The Thread will emerge over time as communities work together to plan and build trails reflecting community character, aspirations and priorities. Currently, 113 miles of The Thread are open to the public in North and South Carolina with 14 active corridors under development.\nMission: The Carolina Thread Trail (The Thread) is a regional network of greenways and trails that reaches 15 counties and 2.3 million people. There are 113 miles of The Thread open to the public - linking people, places, cities, towns and attractions.\n|Causes||Arts & Culture Community Environment Health & Medicine Sports & Recreation|", "score": 24.345461243037445, "rank": 49}, {"document_id": "doc-::chunk-0", "d_text": "Trails at Watson Mill Bridge\nThe 2 miles of nature trails run along the south side of the South Fork River and Big Clouds Creek. Portions of the trails wander through the historic sections of the park along the old powerhouse sluiceway, also known as a raceway.
This man-made waterway channel is about 300 yards long and runs from the old dam just below the covered bridge to a second dam just above the foundation of the old powerhouse. The powerhouse generated electricity for a textile mill 10 miles away in Crawford, GA, for nearly half a century, beginning in 1905. The overlook at the head of this trail affords a view of the covered bridge and shoals below. At this site, the original Watson’s Mill, gone by the end of the 1800s, once sat. The rest of the trails beyond the old powerhouse ruin meander through the woods along the banks of the river and creek and pass by the camping areas.\nBIKING & HIKING TRAIL\nThis six-foot-wide, 2.5-mile loop trail runs along the north side of the South Fork River and is one of the most popular trails for hiking, mountain biking, and jogging. The trail meanders through hardwood and mixed forests and provides one of the best views of the lower shoals of the river. About halfway around the loop is an overlook at the edge of what was once a natural beaver pond. Over the years, natural succession has taken place and the former pond area has now become a meadow. It most likely will eventually become part of the forest. Will it ever be a pond again? Only time, Mother Nature and the beavers will tell.\nFor the mountain biker, this trail is not overly technical as it is designed for beginners and intermediates. There are a few fairly steep grades. The entire trail is under shaded canopy. Whether you hike or bike, you will see a lot of biodiversity in the various types of environments the trail covers.\nBEAVER CREEK TRAIL\nThis six-foot-wide, 1.5-mile loop trail runs up Beaver Creek and over a high ridge back through the hardwood forests on its return. The creek was once the site of several beaver ponds.
The beavers left this creek area in the mid-1990s after a period of heavy downpours washed away the beaver dams.", "score": 24.345461243037445, "rank": 50}, {"document_id": "doc-::chunk-0", "d_text": "The South River was originally the Black River back in the 1700s, according to the book by John Oates \"The Story of Fayetteville\". The book described two Black Rivers running only several miles apart, and surveyors had a bad time telling them apart, so they named the southern Black the South Black; it eventually became the South.\nIt begins as the Black River below Angier, NC and flows into Rhodes Mill Pond between Dunn, NC and Fayetteville, NC, leaves this pond as the Black River under I-95 and joins Mingo Swamp and becomes the South River north of Falcon, NC and above Green Path Road. From Green Path Road near Falcon, which is section 1 in \"Paddling Eastern North Carolina with Paul Ferguson\", it is approximately 79.8 miles until it runs into the Black River below Ivanhoe, NC; however, the next take-out point on this river is Beatty's Bridge on the Black River.\nA paddle trip was held Sunday, Jan. 6, 2013 by the Lumber River Canoe Club with some members from Friends of Sampson County Waterways. This was both clubs' first trip of the New Year.\nThe section we paddled was Section 10 of Ferguson's book referenced above. We covered 13.5 miles and averaged about 3.4 miles per hour with the takeout being at the Corbett House, made famous in the movie Rambling Rose starring Robert Duvall some years back.\nThere is no water gauge on the South River and I believe the gauge on the Black River was 5.55 ft with a cfs of 600.\nThere were no portages and plenty of current with only one tree to bump over. The weather was cool and stayed in the 40s with on-and-off light rain and mist all day. Beautiful section but can be braided and swampy prior to running into the Black River. There were 3 canoes and 7 kayaks that made this trip and no wildlife was reported other than bird life.
Development on this river is at a minimum with a few cabins.\nNo accommodations near river but plenty on I-95 and in Clinton, NC and Fayetteville, NC\nNo fees or permits\nThe put-in at Ennis Bridge is off Hwy 210.\nFrom I-95 get off exit 49 and follow Hwy 210. After crossing Hwy 41 about 40 miles down 210, look for Ennis Bridge Rd, landing is on left before crossing the South River\nPaddling Eastern NC with Paul Ferguson", "score": 24.345461243037445, "rank": 51}, {"document_id": "doc-::chunk-8", "d_text": "The park and the USDA cooperate in an ongoing effort to monitor the incidence of disease in the population, to keep the park safe for both humans and other animals. Since 2014, the USDA has tried to curtail the population by trapping and shooting in designated target areas. These measures have been relatively successful, and efforts have been expanded to control hog populations around the periphery of the park to limit crop damage and soil erosion and improve water quality for the park’s neighbors.\nThe Sims Trail crosses directly through the center of the Boardwalk Trail, bifurcating it lengthwise between sites 9 and 18. The trail is named after Booker T. Sims, a park neighbor who assisted the park in its early years. The former road near site 9 and the Weston Lake Loop Trail provided access to the clubhouse of Cedar Creek Hunt Club, of which Harry Hampton (see Site 18) was a member. The trail is accessible via the Visitor Center and a short walk along the Bluff Trail.\nBecause the Sims Trail has a more open canopy layer, one can observe many different species of butterflies along the trail. You can find common butterfly species like the Carolina Satyr and Red-spotted Purple here. There are also many small puddles and ponds surrounding the trail, which attract not only butterflies, but dragonflies, frogs, toads, and insects like the metallic green six-spotted tiger beetle.
Five-lined skinks can also be found here, the juveniles of which are easily distinguishable by their electric blue tails.\nSite 11: Loblolly Pines and Champion Trees\nCongaree National Park is a bottomland hardwood forest, which makes the existence of pine trees in this ecosystem somewhat unusual. Many species of pine trees are unable to grow in wetland areas, due to their inability to tolerate anaerobic conditions produced by periodic flooding. However, the loblolly pine is unique in that it can thrive in wet environments, and the word loblolly fittingly means “muddy puddle.” The loblolly pine is an essential feature of the park—it is home to many bird species, and its seeds provide food for small rodents. Loblollies are also prized sources of lumber, as they can grow up to 2 feet in one year.", "score": 24.345461243037445, "rank": 52}, {"document_id": "doc-::chunk-0", "d_text": "I have been spending some time down along the South Carolina coast photographing several properties for a project that I'm currently involved with. One of my absolute favorite places to visit is the Francis Beidler Forest, which is a very accessible portion of the Four Holes Swamp; the largest remaining tract of virgin bald-cypress/tupelo gum swamp in the world. Some of the trees at the preserve are estimated to be around 1,000 years old, which means that they were standing long before Columbus arrived in the New World. Incredible when you consider how much has changed since some of these old giants were just seedlings. The swamp is a haven for bird life, reptiles, amphibians and some really great plants. During my 4-hour visit I saw and heard several birds including prothonotary warblers and a barred owl. Four Holes Swamp is an amazing place to photograph snakes and I was pleased to have an opportunity to photograph this little red-bellied watersnake.
I also saw several five-lined skinks sunning themselves on uprooted trees.\nOne nice surprise was this fairly large fawn sitting quietly just beyond the trail's edge. You really never know what you'll see there, especially if you arrive early.", "score": 24.344518871769235, "rank": 53}, {"document_id": "doc-::chunk-1", "d_text": "We were looking for anything that moved, particularly a pair of slit-pupiled eyes staring at us just above the water's surface in a place where the unstable, quivering peat earned the Native American name meaning \"Land of the Trembling Earth.\"\nWe came out cold, wet and tired, but with an eagerness to return.\nWe were up against temperatures in the 40s and rain - as if the eight miles over flat, nearly motionless water wasn't enough. But we had driven six hours from Athens the afternoon of Feb. 7 to get here. Despite the weather, we were not stopping.\nKingfisher Landing, off U.S. Highway 1 on the swamp's east side, was our spot to put in. At 10 a.m. Feb. 8 we began unloading the boats from the trailer. We would be paddling on a man-made paddling trail through marshes and lakes within the swamp to a 20-foot-by-20-foot sheltered platform built above the water.\nWe slid four canoes and a single and a double sea kayak into the water. I was paired in the double sea kayak with Stephanie Haas, a horticulture student.\nOnce in the water, there was no turning back. With many miles' worth of dense reeds and mud to the east, west and south, there was no practical way out except the way we came.\nIn search of something wild\nWe figured our chances were pretty good that we'd catch a glimpse of something wild, especially a toothy-smiled alligator. But when we arrived and took the weather into account we realized those odds were a little longer so, with each stroke, each of us had our eyes locked on the banks.\nThe trail ranged in width from about 50 feet to less than 10.
The thin parts were a challenge to navigate due to the brush clawing at your face and the mud on the bottom impeding your momentum. We had to take our first break less than two miles in. It wasn't long after that Mother Nature teased us.\nIt was about 11 a.m. and the sky was beginning to clear as we came upon one of the swamp's many \"wet prairies.\" The trail was still bordered by golden reeds standing about four feet high, but the taller brush cleared away. Out of nowhere came a sound like a car alarm gone haywire in a tunnel.", "score": 23.642463227796483, "rank": 54}, {"document_id": "doc-::chunk-0", "d_text": "New River Linear Trail\nLocated in Bluffton, Beaufort County, SC.\nThis little-known piece of the East Coast Greenway passes through some wonderful scenic and historic areas near Bluffton, SC. The south end of this unpaved rail-trail terminates at the New River. Signs and markers along the main trail describe some features of the area's history. You can hike the trail or use an off-road bicycle.\nFeatures and Facilities\nHours of Operation\nAdmission or Parking Fee\nHow to Get There\nLatitude 32.24165, Longitude -81.00397 [PK502]", "score": 23.030255035772623, "rank": 55}, {"document_id": "doc-::chunk-1", "d_text": "
The white blazes lead towards a copse of slash pines.\nAt the next junction, the trail goes straight ahead, entering an open palmetto prairie.\nPassing a forest road sweeping in from the left, the trail continues straight towards a stand of pines in the distance.\nAmid the prairie are patches of scrub, with short sand live oaks and Chapman oaks of a perfect height for Florida scrub-jays.\nWhile we didn’t see any along our hike, we thought we heard them in the distance.\nEmerging from the pines, you see an orange tipped post off to the left. Farther down the forest road, the trail comes within view of the power lines.\nAt 2.4 miles, this is the top of the loop. A forest road goes off to the left. A bat box sits off in the distance in the open scrub.\nThe trail curves past ancient saw palmettos rising up on their trunks out of the pine savanna.\nGravel covers the footpath at a place where a bayhead swamp drains seasonally into the pine flatwoods.\nBy 2.6 miles the trail reaches the power line. A double blaze points out the right turn, northbound along Powerline Road.\nA bayhead swamp on the left has a large stocky, loblolly bay tree. The trail curves along the powerline and heads into the pine flatwoods.\nPassing a small depression marsh with needlerush, the trail returns to the edge of the powerline and begins to parallel it.\nWalk past another depression marsh on the left in between the trail and the power line, surrounded by cabbage palms, like a little oasis.\nWetlands are on both sides of the trail, just little marshes in depressions.", "score": 23.030255035772623, "rank": 56}, {"document_id": "doc-::chunk-0", "d_text": "By Marilyn Turk\nA place of fear for some, the Great Dismal Swamp was a place of refuge for others.\nCol. 
William Byrd II, an 18th-century planter, is credited with giving the swamp its name on maps during his 1728 expedition to survey the border line between Virginia and North Carolina.\nWhen colonists began invading their land, Native Americans moved into the secluded forests of the swamp. Later, they were joined by runaway slaves and indentured servants seeking freedom.\nTrackers tried to use dogs to find these runaways, but the scent was soon lost in the water of the swamp, and horses couldn’t walk through the muck. Even canoes couldn’t navigate the water, so pursuers weren’t able to catch the fugitives and gave up the chase, considering them lost to the dense, tangled vegetation and ooze, teeming with poisonous snakes, clouds of mosquitoes and other hostile creatures. Surely, the runaways became victims of the treacherous environment into which they’d run.\nIndeed it was because of the swamp’s hostility and its vast size, covering thousands of acres of southern Virginia and North Carolina, that hundreds, if not thousands, of slaves were able to escape to freedom, many through the Underground Railroad, popularized by Harriet Beecher Stowe. But only in recent history has evidence been discovered of communities where inhabitants lived with little help from the outside world.\nEven archaeologists got lost trying to research the swamp’s interior. But in 2004, Dan Sayers, historical archaeologist and chair of the anthropology department at American University in Washington, D.C., was taken to a 20-acre island inside the swamp by one of the biologists who study the area.\nToday the swamp, now a wildlife refuge, has dwindled to 112,000 acres due to drainage and encroaching civilization. Since the end of the Civil War, no communities have existed within its deep interior.\nThe very idea that people could have survived in the midst of such a hostile environment boggles the mind.
But to the swamp’s inhabitants, freedom was worth the price of the hardships of the Great Dismal Swamp.\nAward-winning author Marilyn Turk lives in and writes about the coast – past and present. A multi-published author, she writes a weekly lighthouse blog at http://marilynturk.com. Her latest release, Rebel Light, Book 1 in the Coastal Lights Legacy series, is now available along with A Gilded Curse and Lighthouse Devotions on amazon.com.", "score": 23.030255035772623, "rank": 57}, {"document_id": "doc-::chunk-1", "d_text": "My grandmother’s family settled here in the 1700s, in places with names like Gum Neck and Frying Pan, and I grew up on stories of ancestors hunting black bear and wildcats deep in the swamp, of ghosts, and of people disappearing in a blackwater wilderness. This was a chance to pass on some of those stories, and to see where they actually took place.\nWe brought the boat and the stealthy electric motor, so the three-mile cruise along the canals and “ditches” from the boat ramp into the middle of the swamp was a quiet glide.\nThe boat launch is on the eastern side of Dismal Swamp Canal, which connects the Chesapeake Bay with Albemarle Sound down in North Carolina, separating the easternmost counties of both states from the mainland, making them all essentially a big island. This presents a problem for people who live on the west side of the Canal, because the road is on the east side. We saw one farmer’s solution in action: He had built a small ferry of oil drums and plywood and, with a cable running slack along the bottom from one side to the other, he pulled himself across, hand over hand, to where he kept his car on the other side.\nFrom the Canal, the Feeder Ditch strikes a rhumb line due west for two miles into the heart of the swamp to Lake Drummond. It’s a strangely Euclidean path through a completely chaotic canyon of wilderness, confusing your perception of time and distance.
The experience is more than a little surreal.", "score": 23.030255035772623, "rank": 58}, {"document_id": "doc-::chunk-4", "d_text": "After logging ended in the 1940s, the tram fell into disrepair. The federal government purchased the property in 1979 to create the Lower Suwannee National Wildlife Refuge. The bridges and roads were repaired for public use in 1998 and provide an unforgettable journey through the heart of tidal swamplands. Stop by Salt Creek for a short hike to a pier overlooking the creek and salt marsh. Don’t forget your binoculars and camera. Look for bald eagles and other species soaring overhead. Visit Fishbone Creek’s observation tower to scan the sky for the signature silhouettes of swallow-tailed kites from March until August. Popular Shired Island, at the end of County Road 357, is surrounded by salt marshes and tidal creeks. Visitors will find a short nature trail, a county campground and an improved boat launch.\nFanning Springs is both the name of a town and a significant spring that releases between 40 and 60 million gallons of clear, cool water every day, year-round. Particularly in winter, manatees glide up the short spring run from the Suwannee River to shelter in the relative warmth of the 72-degree water. This sparkling gem has attracted people for thousands of years. First inhabited by Native Americans, it became a stop for steamboats in the 1800s through the early 1900s. It remains a delightful swimming destination on a steamy summer day and was designated as a state park in 1997. The park is a hub of the Suwannee River Wilderness Trail and also offers a short nature trail, shady picnic area, volleyball courts and comfortable rental cabins. The 31-mile paved Nature Coast Trail has been built along a historic rail line that passed through Fanning Springs and other small towns. Equestrians may enjoy a 4.5-mile segment that parallels the paved trail between Old Town and Fanning Springs. 
A historic railroad bridge near the Old Town Trailhead offers a lofty view over the iconic Suwannee River where sturgeon may be seen leaping during the summer and fall months. Additionally, visitors can access the Suwannee River Wilderness Trail. This trail begins at the town of High Springs near the Stephen Foster Folk Culture Center State Park. The river coils through the heart of north central Florida, ending in the Lower Suwannee National Wildlife Refuge on the Gulf of Mexico.\n13.", "score": 22.27027961050575, "rank": 59}, {"document_id": "doc-::chunk-0", "d_text": "This scenic byway connects several US and SC routes to create a 66 mile journey through the historic and picturesque countryside of western York County.\nThe starting point is just 30 miles southwest of Charlotte. The Scenic Byway begins at Kings Mountain National Military Park and runs south through the historic Town of York, to Historic Brattonsville, then west through the Town of McConnells to the Bullock Creek community, where it turns northerly toward the towns of Sharon, Hickory Grove, and Smyrna. The byway features numerous historical buildings, museums, markers, and beautiful rolling countryside.\nWhether you are looking for a quiet lunch in York, a visit to the pre-revolutionary community of Brattonsville, a visit to the historic W.L. 
Hill Store or the Museum of Western York County in Sharon, this is sure to be a memorable ride.\nBohicket Road - Cowpens Battlefield - Edisto Beach - Falling Waters - Fort Johnson Road - Hilton Head - Hilton Head Island - Long Point Road - Mathis Ferry Road - May River - McTeer Bridge - Old Sheldon Church - Plantersville - Riverland Drive - SC 170 - US 21 - Western York\nFor more information about South Carolina Scenic Byways, please contact SCDOT at (803) 737-1952.", "score": 21.735028661026142, "rank": 60}, {"document_id": "doc-::chunk-0", "d_text": "Added by The Outbound Collective\nA scenic trail through forest and along the Saluda River on the Blue Ridge Escarpment.\nStart on the Jones Gap Trail (blue), which will lead you alongside the Saluda River. After 2.5 miles, take the Old Springs Branch Trail (orange) to loop back to the lot.\nFor a challenging, more strenuous hike, with great views of the Gap, check out Rim of the Gap Trail. It is not advised to take children or dogs on this trail.\n- Hiking boots\n- Sun protection\nPlease respect the places you find on The Outbound.\nAlways practice Leave No Trace ethics on your adventures. Be aware of local regulations and don't damage these amazing places for the sake of a photograph. Learn More\nReviewsLeave a Review\nHave you done this adventure? Have something to add? You could be the first to leave a review!\nMore Adventures Nearby\nHike Swamp Rabbit Trail to Reedy River Falls\nSouth Carolina / Mayberry Park Parking\nSwamp Rabbit Trail is a 21 mile-long paved walkway that closely follows the Reedy River in Downtown Greenville.\nHike to Yellow Branch Falls\nSouth Carolina / Yellow Branch Picnic Area\nOnce you have turned into the picnic area off of Highway SC 28, you will see the open and easy parking area. The trail begins at 2 points on the southern end of the parking area.", "score": 21.695954918930884, "rank": 61}, {"document_id": "doc-::chunk-0", "d_text": "|Big Creek Landing to Rockhill-to-Brooklyn Rd. 
(Rt. 335) (2.9 miles): The trail follows Black Creek closely through mature loblolly pines and southern magnolias.|\n|Rockhill-to-Brooklyn Rd. (Rt. 335) to Point Where Trail Leaves FS 319F-I (6.5 miles): Here the trail turns south away from the creek, taking you through loblolly and longleaf pine forests with hardwood bottoms between Rt. 335 and U.S. Hwy 49. You can explore the site of an abandoned Civilian Conservation Corps camp and find old roads and concrete foundations. This is the least wild section of the trail. It goes around the edges of several recently logged areas, and follows a road (FS 319F- 1) for 0.4 mile.|\n|FS 319F-I to FS 319G (7.0 miles): This segment begins with a delightful walk along an entrenched meandering stream lined with impressively large trees (southern magnolia, beech, water oak, etc.) until you reach the edge of Black Creek. East of the next tributary creek crossing, you walk through the lower edge of a recently logged area. Much of this portion of the trail is on higher ground through pines. Overall, it's a pretty segment.|\n|FS 319G to State Hwy 29 (6.8 miles): You start down a grassy lane through pines and descend to a wild section along Black Creek until you reach some recent logging. Then the trail goes through more pines and descends to a flat area with dense understory. A long part of the trail is on split-log \"stepping stones.\" Nonetheless, plan on getting your feet wet, because shallow water may also stand in other portions that lack stepping stones or boardwalks. Next, the longest boardwalk on the trail takes you over a pretty swamp that is not shown on the map.|\nFinally the trail returns to the creek. Near this point, perhaps on the sharp bend of Black Creek, Joseph Mimm had a ferry in the early 1800s for travelers on the long-abandoned Old Federal Road. 
It went from Fort Stoddert on the Tombigbee River in Alabama west to the John Ford house in Marion County and on to Natchez.", "score": 21.695954918930884, "rank": 62}, {"document_id": "doc-::chunk-1", "d_text": "|A view of the 'trail' along the way. This trail doesn't see much use|\nso the markings and trail maintenance are lacking in certain areas.\nKeep a keen eye out for 90 degree turns and disappearing track.\n|There was a lot of sandy gravel jeep trail in the lowcountry.|\nNot particularly exciting but you make great time on it.\nHard-packed and fast.\n|This is Turkey Creek. I filtered water here because it can be a little|\ndifficult to find along the way. This tannin rich black water is pretty\ncommon in the lowcountry and gives filters a fit. I used my\nMSR Hyperflow on this trip with decent luck.\n|A nice bridge shot over the creek. I tested a Brooks on this trip.|\nYeah, I know its moronic to test a new seat on a long ride like this\nand I knew that going in. It worked out fine but I\ndid learn that I prefer my Selle Italia Flite.\n|There was a long run of this built up trail out in the middle of|\nnowhere, SC. I can't muster a guess of why you'd do this but\nit was a welcome change to the hidden pot holes I'd been\ngrinding through for the previous 20 miles.\n|Sand sloppy enough that 2.4\" tires aren't enough.|\nIt probably was only a couple of miles but it felt like 30.\n|There was a lot of overgrown brush in the trail as well.|\nFunny enough I was glad that it still had dew on it and it was\nrelatively stiff brush. It did a fantastic job cleaning the spider\nweb crud off of me and the bike. Its the little things.\n|Taking a break along the way. The elevated walkway was|\nin rough shape and in some cases completely rotted.\nLots & lots & lots of jeep trail.\n|I remember thinking how pretty these clouds were.|\n|Slight change of plans... no more pothole trails. 
I grabbed the map and picked some secondary roads.|\nMiles and miles of secondary roads as it turns out.\nI stumbled on the ruins of an old church and the cemetery along the way. Beautiful & peaceful site.", "score": 21.695954918930884, "rank": 63}, {"document_id": "doc-::chunk-2", "d_text": "If you look at the map of the passage (see capsule), you might think that you would be walking along waterways much of the time but the fact of the matter is that much of the passage is heavily wooded with what I would consider to be some semi-tropical vegetation mixed in. In many areas, unless you knew you were near coastal waterways and marshes, you would swear that you nowhere near any water. The forested areas were shaded, breezeless, and very humid. After several miles of hiking through areas like this, we finally came upon the marsh. It was absolutely beautiful. I had never seen such vast untouched areas areas of coastal marsh. It was a collection of spartina grass, blue sky, black water, and pluff mud. There were also lots of little crabs - LOTS of them. In some places there were so many that you could actually hear them scattering through the grass. We eventually came upon the canoe launch at Awendaw Creek. There were no canoes so we used it as a place to rest. We were hot. We were tired. We had snacks and water but a big meal would have been welcomed by all at that point in time. It was here that I had my near mirage experience. My brother-in-law handed my niece a cattail he had found. Every one had a good laugh when I looked at him and seriously wanted to know where he had gotten that corn dog from.\nThe kids enjoyed finding some old animal bones and we saw the biggest pine cones I have ever seen. These went into the polka dotted man bag my brother-in-law and nephew were carrying. However, after reminding them that you really were not supposed to take anything tangible from the trail, the items were left behind. Generally speaking, the kids were good. 
They all whined at one time or another. They all stated that they would never do such a thing again but I had heard this from mine when we did the first passage and he was there for the second. I am hopeful to see them all hike some more with me. I hope that if nothing else it gave them all a sense of accomplishment.\nThere is still something that I would like an explanation for. At about the 3-4 mile mark of the 7 mile trail map, there is a dotted square with a cross in the middle.", "score": 21.695954918930884, "rank": 64}, {"document_id": "doc-::chunk-0", "d_text": "Located between Osceola National Forest and the Okefenokee National Wildlife Refuge, John Bethea State Forest protects nearly 40,000 acres.\nIt exists to establish a continuous wildlife corridor for many threatened and endangered species in this vast swampy landscape along the Florida-Georgia border.\nIt protects the waterways and aquifers that sustain the Okefenokee Swamp, the Suwannee River and St. Marys River. And it is helping to accelerate the restoration of the great longleaf pine forests of the Southeastern United States.\nMaple Set Recreation Area is in the far northeast corner of the forest, adjoining the headwaters of the St. Marys River. It is home to Maple Set Campground, a primitive tent camping area.\nThe Maple Set Trail is made up of several blazed segments that total 2.5 miles. 
A quarter mile upper loop provides access from the campground to a short boardwalk, picnic area and swimming hole on the river.\nThe 0.9-mile lower loop starts from the same spot, leading through the wet river bottom and its cypress knees before following grassy service roads back to the parking area.\nAnother 1.3 miles of spur trails are accessible from the southern end of the lower loop.\nThe trail along the river is quite beautiful and well-marked, while the return portion of the southern loop can be a bit tricky to follow through the river bottom.\nThe spur south along the river is likewise very nice and well-marked, while the east-west spurs between the river and CR 127 are less so.\nResources for exploring the area\nDisclosure: As authors and affiliates, we receive earnings when you buy these through our links. This helps us provide public information on this website.\nLength: 2.5 miles in loops and spur trails\nTrailhead: 30.524425, -82.231083\nLand manager: Florida Forestry Service\nDay use is allowed 1-1/2 hours before sunrise until 1-1/2 hours after sunset. Overnight parking is not allowed without a campsite reservation.\nLeashed dogs welcome. Bring bug spray, sunscreen and plenty of water.\nPrimitive camping (temporarily closed) is available by reservation.\nHeading west on Interstate 10 from Jacksonville, take exit 333 and follow signs for Glen St. Mary. You will pass through Glen St Mary in 1 mile.", "score": 20.86687458000066, "rank": 65}, {"document_id": "doc-::chunk-1", "d_text": "Steps should be pegged down with rebar hammered down through holes drilled in ends of the steps. Rebar should be hammered flush with step surface to avoid the possibility of injury on the sharp ends of the rebar.\n- In areas where the elevation changes rapidly, terracing and ditching should be considered to retard erosion. If these are used at frequent intervals, the downward progress of water flow is slowed to a point that erosion will be minimized. 
Ditching at the terraces can divert water into areas where it will be absorbed quickly.\n- If the trail is of significant length, a bench at mid-point will create a nice spot to take a ‘breather’.\nTools that may be required: shovel, rake, chain saw, bush axe, hammer, nails, carriage bolts and adjustable wrench.\nMore details on trail building are available in the ‘Complete Guide to Trail Building and Maintenance’ sponsored by the Appalachian Mountain Club. Another book of the same title is authored by Carl Demrow and David Salisbury but is much more costly.\nRails to Trails Projects are Popular Throughout the United States\nIn addition to our own trail project, we also had the opportunity to assist in a community effort to create a “Rails to Trails” project that helped to revitalize the small South Carolina town of Travelers Rest.\nThe Prisma Health Swamp Rabbit Trail follows the path of the Swamp Rabbit railroad that originally ran from Greenville, SC, to Jones Gap State Park. The trail follows the course of the Reedy River for some distance.\nIn the beginning it was a community effort to provide a trail for hikers and bikers to enjoy by removing the rails and crossties and cutting the years of growth after the railroad had ceased to utilize the route.\nThe trail was expanded and now travels several miles north of Travelers Rest and to the south, it runs through the campus of Furman University and continues into the heart of Greenville, passing by the historic Reedy River Falls. It extends to the southeast of the city of Greenville, ending near Greenville Tech.\nAfter a gap of several miles, the trail winds through Lake Conestee Nature Park, and another disconnected section is near the city of Fountain Inn. Future plans are to connect all sections.\nAfter the initial undertaking of the project, funding of $1 million by the Greenville Health System allowed for paving, lighting and other amenities to be added. 
The impact on this part of the state cannot be measured in dollars.", "score": 20.327251046010716, "rank": 66}, {"document_id": "doc-::chunk-1", "d_text": "The correct intersection is at Commissioner's Rock, which lies 10 feet downstream and is inscribed with the markings \"Lat 35 AD 1813 NCSC.\"\nThe Ellicott Rock Trail [Fig. 32(26), Fig. 35(1)] begins just outside the wilderness boundary on Forest Service Road FR 1178 and extends for 3.5 miles to the three-state intersection. After following an old road for more than 2 miles, the trail follows a left fork away from the road and slopes down into the Chattooga River Gorge. After crossing over to the east bank of the river, the trail joins with the Bad Creek/Fowler Creek Trail to form the Chattooga River Trail [Fig. 35(4)]. Ellicott Rock is encountered on the riverbank soon after this junction. This stretch of trail, extending 3.5 miles south from the Bad Creek Trail and ending at the East Fork Trail in South Carolina, is part of a former Cherokee Indian trail, which makes this one of the oldest trails in the region.\nOther trails within the North Carolina segment of Ellicott Rock Wilderness Area include the Slick Rock Trail, an easy, .2-mile trail off of Bull Pen Road, approximately 6 miles from Highlands. This short walk on an unmarked trail leads to massive rock formations affording scenic views of the Chattooga River. Two other trails offer a moderate hike to picturesque views of the Chattooga River with its cascades and pools, sandbars and boulders.\nThe trailhead for the Chattooga River Trail and Chattooga River Loop Trail can be found off Bull Pen Road at the Chattooga River bridge, marked by an information board. After .7 mile, the Chattooga River Trail connects with the Chattooga River Loop Trail, which completes the loop with an additional 1 mile of trail.\nNumerous wildflowers appear along the wilderness area trails. 
Rattlesnake plantain (Goodyera pubescens), a tiny orchid that grows on a 1-foot spike, is spectacular for its evergreen basal (ground level) leaves which show a pattern perhaps reminiscent of a snake skin. Mountain camellia (Stewartia ovata) is a rare shrub with a beautiful, large white flower similar to the southern magnolia (Magnolia grandiflora). It occurs sparsely on river bluffs or wooded stream margins.", "score": 20.327251046010716, "rank": 67}, {"document_id": "doc-::chunk-2", "d_text": "As you are walking, we ask that you stay to the right of the road/trail so faster runner/walkers can pass. If your child is in a stroller, you do not need to register them for the event.\nQ: I am walking the course, can my child walk with me?\nA: Yes, but he or she must register, as well. Anyone who enters and wears a race bib and chip may participate in the event.\nQ: Will you capture chip and gun time?\nA: Yes. We have timing mats at the start line to capture the time in which you crossed the line. We post both the chip and gun times at the end of the event. Please note that gun times will be used for awards.\nQ: What is the timing system used by Set Up Events?\nA: Set Up Events, who is handling the online registration and race timing for the GHS Swamp Rabbit 5K, is an official ChampionChip licensed timing company and uses the ChampionChip timing system. All participants wear a small chip on their bibs during the entire event, which Set Up Events uses to capture times at key locations (i.e., start and finish). 
Participants are given a timing chip at the event, which is disposable, but may be returned after the event for recycling.\nQ: What do I do with the timing chip if I drop out of the event and I don’t cross the finish line?\nA: Disposable timing chips are used in the GHS Swamp Rabbit 5K, and they do not need to be returned.\nQ: Will the course be closed to traffic?\nA: Much of the course is closed to traffic, while other sections are coned off for runners, with City of Travelers Rest Police Officers and course volunteers monitoring those areas. Traffic is controlled at major intersections by City of Travelers Rest Police Officers.", "score": 20.327251046010716, "rank": 68}, {"document_id": "doc-::chunk-4", "d_text": "Wade Batson; catching big bass on Bulls Island; how to remove a fishhook from your hand; and a nice painting of saw-whet owls by Anne Worsham Richardson.\nExcerpts from May-June 1972 Issue\nThis issue had a great article on Indian oyster shell middens, a unique feature found only in the coastal southeast; also a photo and report on the second-largest largemouth bass ever caught, from a lake in Fort Jackson.\nExcerpts from July-August 1972 Issue\nBeautiful cover photo of fishermen in a cypress pond by SCWMRD photographer Ted Borg.\nLois Green MacKay (March-April 1974)\nLois Green MacKay was an artist living in Charlotte who loved to paint watercolors of South Carolina's beautiful coastal scenes. This documents better economic times when one could earn a living as an artist. (Note: the colors on the scanned pages are slightly brighter than the original.)\nExcerpts from May-June 1974 Issue\nThis issue contains a wonderful article and artwork of Daufuskie Island.\nExcerpts from March-April 1975 Issue\nA wonderful piece on John James Audubon and his visits to Charleston, along with an article on farm pond management. Also, there is a tribute to the recently-deceased founder of S.C. 
Wildlife, Eddie Finlay.\nExcerpts from July-August 1975 Issue\nThere is a good article on no-pesticide gardening and a picture of a 15 lb. largemouth bass caught in Hampton County.\nExcerpts from March-April 1980 Issue\nThere is an interesting article about the Haile Gold mine and gold mining in general in SC, along with a look at SC mountain life.\nExcerpts from May-June 1980 Issue\nThis issue contains a detailed account of the world record tiger shark caught from the Cherry Grove pier in 1964, and an informative article on Rev. John Bachman of Charleston, friend of John James Audubon and discoverer of Bachman's Warbler.\nExcerpts from November-December 1980 Issue\nThere is a great article by famous SC meteorologist John Purvis about SC snow, along with a historical overview of Dr. Robert Lunz's pioneering mariculture work on the SC coast.", "score": 20.327251046010716, "rank": 69}, {"document_id": "doc-::chunk-1", "d_text": "This time, WWALS is acknowledging more long-used access points upstream and downstream into Florida, where the Suwannee River Water Management District (SRWMD) has long included the Alapaha River in Florida as part of the Suwannee River Wilderness Trail.\nLakes, Ponds, and Swamps\nPlus the Alapaha River Watershed contains lakes, ponds, and swamps such as Banks Lake in Lanier County, Grand Bay in Lowndes County, and the Carolina Bays in Atkinson County, renowned for their fishing, alligators, turtles, birds, cypresses, and pines, and streams including the Alapahoochee River. 
Existing hiking and biking trails can be linked to the Water Trail to encourage more multi-purpose participation.\nThe Alapaha River is not readily boatable upstream in Tift, Turner, Crisp, and Dooly Counties, nor on the Willacoochee River in Irwin, Ben Hill, and Wilcox Counties.\n202 miles: Alapaha River from its source in Dooly County, Georgia, to its confluence with the Suwannee River in Hamilton County, Florida.\n129 miles: Alapaha River Water Trail from US 82 to the Suwannee River, plus many lakes, ponds, and swamps to the side.\nA dozen public access points, and another boat ramp being built. See the ARWT Access web page.\n- Atkinson County, GA: access at Willacoochee Landing at GA 135 plus Carolina Bays;\n- Berrien County, GA: Sheboggy Boat Ramp at US 82, Berrien Beach Boat Ramp at GA 168, and canoe launches at Nashville Landing at GA 135 and at Rowetown Church Cemetery (this one is private, so ask first);\n- Lanier County, GA: at Lakeland Boat Ramp (GA 122), Pafford’s Landing, Burnt Church Landing, and Hotchkiss Road;\n- Lowndes County, GA: funded and in process Naylor Boat Ramp at US 84;\n- Echols County, GA: a boat launch at Mayday at Howell Road and the GA-DNR Statenville Boat Ramp at GA 94;\n- Hamilton County, FL: Sasser Landing (aka Alapahoochee Launch), Jennings Bluff Launch with its stairs, and Gibson Park Ramp.", "score": 19.41111743792643, "rank": 70}, {"document_id": "doc-::chunk-0", "d_text": "Charleston-Ft. Charlotte Trail\nFrom FamilySearch Wiki\n(move Long Canes)\nRevision as of 16:29, 25 March 2011\nThe Charleston-Ft. Charlotte Trail connected the South Carolina colonial town of Charleston with the British military's colonial Fort Charlotte on the Savannah River in what is now McCormick County, South Carolina. Charleston was the largest European settlement, the capital, on the King's Highway, and the start of several other trails. 
Fort Charlotte was built 1765-1767 to help protect European settlers from Indian raids. Fort Charlotte was near the place where the Middle Creek Trading Path crossed the Savannah River from Georgia into South Carolina. Several other trails also radiated out from this fort. The Charleston-Ft. Charlotte Trail was opened to European settlers about 1765. It began in Charleston County, South Carolina and ended in McCormick County, South Carolina. The length of the trail was about 105 miles (169 km).\nScots-Irish (that is Ulster-Irish), French Huguenots, and German farmers began settling the area in the 1750s. Some of these early colonists near Long Cane Creek were killed by Cherokee Indians in 1760. As a result, the British military constructed Fort Charlotte between 1765 and 1767 to help protect local colonists from hostile Indians. The fort was then turned over to South Carolina. The Charleston-Ft. Charlotte Trail probably followed older Indian trails. Fort Charlotte was built at or became the nexus of several trails along the Savannah River in South Carolina and Georgia.\nAs roads developed in America, settlers were attracted to nearby communities because the roads provided access to markets. They could sell their products at distant markets, and buy products made far away. If an ancestor settled near a road, you may be able to trace back to a place of origin on a connecting highway.\nFort Charlotte played a role in the American Revolution. The South Carolina colonial government used the fort as an arsenal. The first Revolutionary War action in South Carolina occurred when Patriots seized those supplies. They also negotiated at the fort trying in vain to win the Indians to the Patriot cause.", "score": 18.90404751587654, "rank": 71}, {"document_id": "doc-::chunk-1", "d_text": "I thought of Elephant Swamp often over the next few years, but never carefully enough to work out a plan.\nSo, when 2016 hit, I decided I just had to do it. 
I called up a few folks, hoping to get a friend to go. Instead, I ended up with four friends (Tom, Chuck, Skunk, and Pat). And a dog (Bert – for elephant protection, of course). This was gonna be an awesome hike.\nWe stashed an end car at the baseball fields off of Rt 40 in Elmer, then backtracked to the start of the trail at the very, very, very back end of Stewart Park in Elk Township. Having found the trailhead (it’s a little tricky, check out Google Maps ahead of time with the coordinates in our trailheads section up top), we loaded up and set off down the trail.\nRight off the bat, there is some nice swampland to the side of the trail.\nYou’ll quickly notice that this trail is very wide, very level, and very straight (you actually turn for a while… over the course of a mile or so). That’s because this trail was created out of railroad right-of-way. Nothing will change about this aspect of the trail throughout its length. But what the trail lacks in challenges, it will make up for in scenery on either side. Like the farm on the left side of the trail in this first stretch, the first of many that you’ll pass.\nThe trail will parallel Railroad Ave in this section, and the frequent cars passing by made sure that this was not my favorite section of the trail.\nAfter just over a mile, you’ll come to the end of this first section of trail when you hit Elk Road. You’ll cross the road into the main parking area for the trail. If you google maps Elephant Swamp Trail, this is the parking lot that the GPS will take you to.\nThe next stretch of trail is 1.7 miles to Monroeville Road. 1.2 miles into that stretch, you’ll cross from Gloucester County into Salem County. 
There is a stone here marking the border, but we didn’t think to look for it until later.\nThis stretch is also the nature trail portion of the trail, complete with education signs.\nAt the road, you’ll discover a gate, then immediately the Monroeville Fire Company.\nThe trail continues (surprise!)", "score": 18.90404751587654, "rank": 72}, {"document_id": "doc-::chunk-1", "d_text": "At the head of the Little Tennessee River Greenway is Big Bear Park, which is easily accessed via a large parking lot just to the right of McDonalds in Downtown Franklin off of Main Street right before you get downtown. There is a full picnic and play area for adults and children of all ages to enjoy here. In addition, there is a sheltered area which may accommodate large groups wanting to have a cookout or birthday party; please call the contact information provided below for reservations.\nFriends of the Greenway\n357 E. Main Street\nFranklin, NC 28734\nMacon County public schools do not need to pay any fee, but reservations are necessary. All other schools, local and out of county, are expected to comply with the new fee schedule.\nMap of Bartram Trail\nBartram Trail is a multi-state National Recreation Trail in WNC and GA. The trail stretches from the North Carolina-Georgia border southwest over the summit of Rabun Bald (Georgia's second highest peak), turns south-southeast to the Chattooga River and then heads northeast paralleling the river to the GA 28 bridge.\nIn North Georgia, the portion of the trail that winds through the Tallulah Ranger District is well maintained. About 37 miles long, this trail retraces a portion of the naturalist's path. Bartram actually traversed a significant portion of North Georgia from Savannah to Ellicott Rock. The trail begins at the North Carolina-Georgia border and passes over Rabun Bald, the second tallest peak in Georgia.\nJust south of the North Carolina border the Bartram Trail crosses Hale Ridge Road, running southwest. 
It continues generally southwest to just past Raven Knob, where it turns almost due south. The route rises and falls until the base of Rabun Bald. Here, about 3 miles into the trail, the path begins to rise to the bald. This forest has been repeatedly harvested, so it does not look as Bartram describes it. A lookout tower at the top provides a great 360 degree view for hikers to enjoy (please see photo below). Experienced hikers can go to Rabun Bald and return in a day.\nView from Bartram Trail lookout tower\nThe trail approximates the route of 18th century naturalist and explorer William Bartram through North Carolina, South Carolina, Georgia, Tennessee, Alabama, Florida, Mississippi and Louisiana.", "score": 18.90404751587654, "rank": 73}, {"document_id": "doc-::chunk-0", "d_text": "In order to use RunSignup, your browser must accept cookies. Otherwise, you will not be able to register for races or use other functionality of the website. However, your browser doesn't appear to allow cookies by default.\nIf you still see this message after clicking the link, then your browser settings are likely set to not allow cookies. Please try enabling cookies. You can find instructions at https://www.whatismybrowser.com/guides/how-to-enable-cookies/auto. If you still have issues after this, please contact us.\nAdditional race information can be found at http://www.swamprabbitrace.com.\nThe Prisma Health Half Marathon & 5K is a net downhill half marathon and 5K run. The half marathon starts in Travelers Rest, SC and ends on the TD Stage at the Peace Center in downtown Greenville, SC. The 5K starts at the Swamp Rabbit Cafe and follows the Swamp Rabbit Trail to the same finish line on the TD Stage at the Peace Center.\n*Note for 2020 - The race will end once again at the TD Stage at the Peace Center. 
This is subject to change based on construction timelines.*\n4/10/2019: 2020 Registration to Open May 1, 2019 (Race Date is February 29, 2020)\nIf you have any questions about this race, click the button below. Questions?\nMake sure you download the RaceJoy mobile app for live phone tracking at the Prisma Health Half Marathon & 5K.\nCarry your phone and use RaceJoy to add to your race experience with these key features:\nThe Prisma Health Half Marathon & 5K is sponsoring RaceJoy to provide participants and spectators these features for free (normally a 99-cent upgrade fee for both the participant and spectator).\nThe Prisma Health Half Marathon & 5K has course maps available.\nNOTE: Shirt deadline is 3 weeks prior to event. Anyone registering after this date will not be guaranteed a t-shirt. We will do our best to estimate quantity and sizing needs but cannot guarantee correct sizes to anyone registering after this date.", "score": 18.90404751587654, "rank": 74}, {"document_id": "doc-::chunk-0", "d_text": "ABOUT THE TOUR: Guests who visit Okefenokee Swamp Park in Waycross, Georgia can step back in time aboard “The Lady Suwannee,” the Okefenokee Railroad. The train tour will take you on the adventure of a lifetime. Many points of interest will be featured on your journey, including a stop at Pioneer Island, which is rich in history (it will be one of the main attractions of the tour).\nThe 1.5 mile railroad system at the Okefenokee Swamp Park serves as a mode of transportation for the park, circling part of the Great Okefenokee Swamp. The railroad track was completed in January of 1999 and train tours began in the Spring of 1999.\nABOUT THE TRACK: The track was installed by B.R. Moore Construction Company. This company has over 30 years' experience in track construction.\nMATERIAL USED: 13,500 track spikes – L.B. Foster Co., Norcross, GA; 15,000 feet of 30 lb. 
rail; 808 rail joint bars; 3,200 5′ crossties – B&M wood products, Manor, GA; 1,800 tons of ballast stone – Dixie Roadbuilders, Waycross, GA\nABOUT THE TRAIN: The train is a 36-gauge replica steam engine with three coaches built by Cummings Locomotive of New Brunswick, Canada. The train is powered by a Perkins Turbo Diesel (108 HP). Seating capacity is 80 adults or 95 children.", "score": 17.872756473358688, "rank": 75}, {"document_id": "doc-::chunk-0", "d_text": "Many historians consider the Revolutionary War to have been decided in the swamps, fields, woods and mountains of the South, won by the resilience and determination of Continental soldiers and Patriot militia. Although the full story of the Southern Campaigns is not widely known, the events of 1776-1782 in the Carolinas directly led to an American victory in the war. We call this history The Liberty Trail.\nThe Liberty Trail is a unified path of preservation and interpretation across South Carolina, telling this remarkable story. These important battlefields, still largely unspoiled, deserve to be preserved. That’s why the American Battlefield Trust has partnered with the South Carolina Battleground Preservation Trust to accomplish these goals.\nEach stop along the driving tour features unique on-site interpretation that connects visitors to the extraordinary events that came to pass nearly 250 years ago. The Liberty Trail honors the Patriots who decided the Revolution’s outcomes in South Carolina.\nMore than 200 battles and skirmishes occurred in South Carolina during the war. Working with a panel of historians and archaeologists to select the most significant of these actions, we have created the first phase of The Liberty Trail, an innovative driving route designed to connect these battlefields and tell the captivating and inspiring stories of this transformative chapter of American history.\nThe cornerstone of The Liberty Trail is the preservation of hallowed battlegrounds. 
Through this initiative, thousands of acres of land will be permanently protected. Today, most of these sites are blank slates — quiet fields and forests waiting to tell their stories.\nTo date, we have preserved nearly 700 acres at nine battlefield sites in South Carolina: Camden, Eutaw Springs, Fort Fair Lawn and Colleton Castle, Hanging Rock, Hobkirk Hill, Parker’s Ferry, Port Royal Island, Stono Ferry, and Waxhaws.\nThe Liberty Trail program founders Douglas \"Doug\" Bostick and Catherine Noyes talk about the program's formation, their connections to South Carolina, and historic site preservation.", "score": 17.397046218763844, "rank": 76}, {"document_id": "doc-::chunk-3", "d_text": "A bench sits off to the right along the Gum Swamp Trail to take in the scenic view. Turn right to start your journey down the Gum Swamp Trail, and take a moment to step out on the rocky bluff so you can see where the stream rises out of the ground.\nWhile the Gum Swamp Trail offers less in the way of cool geology, it’s here you’ll find botanical beauty, especially during early spring when the wild azaleas are in bloom. The ones I’ve seen here are unlike any others I’ve seen in Florida. They open up into giant puffs, with azalea blossoms forming perfect orbs suspended in air along the stems. Pine cones cover the forest floor, and the footpath is on a nice layer of pine duff. You pass a marker with a lime green arrow.\nAfter you cross an old jeep road, the forest becomes much denser, with loblolly and longleaf pines towering overhead. Gum Swamp is visible in the distance through the trees to the left. But you’ll have your swamp encounters soon. The trail reaches Bear Scratch Swamp and a bench overlooking the view at 2.9 miles. At South Swamp, look for a pine and water oak that have grown intertwined, right near the sign. The trail winds past Shadows Swamp, a good place to stand and listen for birds.\nThe Gum Swamp Trail meets back up with the Crossover Trail at 3.6 miles. 
Turn right to stay on the outer loop heading clockwise. There is one last karst formation to visit – the Gopher Hole. It’s down a side trail on the right and well worth the short ramble. It’s a water-filled cavern that indeed looks like a gigantic gopher hole, and you can stare right into it and watch drops of water fall from the cave ceiling and create ripples across the placid surface. Bring a flashlight if you want to peer further inside.\nReturning up the Gopher Hole side trail, turn right. Within a few minutes, you’re back at the beginning of the Sinkhole Trail. Turn right to exit out to the parking area, completing your hike of 4.4 miles.", "score": 17.397046218763844, "rank": 77}, {"document_id": "doc-::chunk-1", "d_text": "Remain on Glen Ave for 8.8 miles before turning right onto CR 127. After 10 miles, you will reach Florida SR 2. Turn right, then immediately left back onto CR 127. After another 0.5 miles, turn right at John Burnsed Rd and Maple Set Campground. The parking area is just ahead after 0.3 miles.\nFrom the northeast corner of the parking lot, the upper loop trail passes through the gate to the Maple Set Campground, then passes several numbered tent sites on the right.\nJust after you pass the water pump between tent sites #4 and #5, the trail turns right toward the river, which you’ll come to after only a short distance.\nTurn right and follow the yellow-blazed trail at the river’s edge. 
Be careful not to trip over the many saw palmetto trunks lying on and around the trail.\nIn less than a quarter mile, the trail turns right and comes to a picnic area overlooking a lovely swimming hole.\nIt is quite deep in the middle and, though the Florida Forest Service says that jumping into the water from river banks or trees is not allowed, local residents have attached two rope swings.\nAt the end of the picnic area a short boardwalk leads back to the parking area.\nThe lower loop begins at the southwest corner of the parking area and is marked with a small hiking sign, at which you veer left to again follow the trail along the river’s edge.\nThe trail here is beautiful, easy to follow, and well-blazed. At about 0.4 miles there is a double-blaze that marks the bottom of the lower loop, which turns right here.\nThe spur trails, also blazed yellow, continue south along the river.\nTurning right at the double blaze, the trail goes through river bottom filled with cypress and their knees.\nDuring dry season, this area is merely soggy and soft, but judging from the water marks this area can turn into a shin-deep swamp during rainy months.\nThe yellow blazes are faded here and poorly placed so be careful navigating your way.\nAfter just a few hundred feet the trail passes through this low-lying area and comes to a grassy service road.\nTurn right and after 0.3 miles, you will come to a “T” with another grassy service road. Turn right and follow this service road a final 0.1 mile back to the parking area.", "score": 17.397046218763844, "rank": 78}, {"document_id": "doc-::chunk-0", "d_text": "Built alongside the busy Interstate-520 (known as the Palmetto Parkway along this stretch), North Augusta's Palmetto Parkway Bike Path offers a trail experience much more pleasant and scenic than its namesake suggests. 
While the trail does follow the route of the interstate and does require several local road crossings, much of the pathway is sheltered from the sight of the adjoining traffic by rows of trees and sculpted terrain - you won't exactly feel lost in nature, but you won't feel exposed to vehicles either.\nThe trail travels for almost five miles on the eastern outskirts of North Augusta and the community of Belvedere, passing over enough rolling hills to make sure that cyclists get in a workout. The southern end of the trail lies just under two miles from the start of the North Augusta Greeneway Trail, which allows ambitious trail users to create a rough loop throughout the entire area on the South Carolina side of the Savannah River.\nParking for the trail is located at the northern trailhead off of Ascauga Lake Road by the North Augusta DMV, a short distance north from I-520/Palmetto Parkway via Exit 22.", "score": 17.397046218763844, "rank": 79}, {"document_id": "doc-::chunk-0", "d_text": "While the briefest of walks, the trail at John Muir Ecological Park connects you to an important and mostly forgotten chapter of Florida history: our role in John Muir’s “Thousand Mile Walk to the Gulf.”\nMuir reached Cedar Key on October 23, 1867, after crossing Florida from the Atlantic Coast. This is one of the places he walked through, and the only such park in the state that commemorates this accomplishment.\nLength: 1/4 mile round-trip\nRestroom: at the trailhead\nLand Manager: City of Yulee\nThe park is along SR 200 just west of US 17 in Yulee.\nIt was not an easy thing to walk across Florida in 1867. It still isn’t today. At least now there are roads and trails. 
Muir wrote “it was the army of cat-briers that I most dreaded,” and he was no stranger to Florida’s swamp walks by the time he was done.\n“Had the water that I was forced to wade been transparent it would have lost much of its difficulty. But as it was, I constantly expected to plant my feet on an alligator, and therefore proceeded with strained caution.”\nFortunately, the kind folks of Nassau County don’t expect you to wade into this floodplain forest as John Muir did. Starting at the parking area, follow the boardwalks.\nThey zigzag through this forested swamp, connecting a series of picnic shelters together. Songbirds sing from the boughs of sweetgums and red maples. Ferns thickly carpet the forest floor. Mosquitoes hum.\nIt’s a short walk. Trail’s end is at an elevated berm, a railroad embankment. This was the main line of the historic Florida Railroad, one of Florida’s first railroads, which connected Fernandina Beach to Cedar Key.\nAs the eastern terminus of the Florida Railroad, this region is steeped in railroad history. Responsible for bringing the railroad to Fernandina Beach, Senator David Yulee – for whom the town was named – also was one of the people fleeing the city by rail as the Union Blockading Squadron bombed the trestles between there and here.", "score": 16.20284267598363, "rank": 80}, {"document_id": "doc-::chunk-0", "d_text": "AN UNTOLD TREASURE FOR MORE THAN 300 YEARS\nAN ISLAND OF HISTORICAL SIGNIFICANCE\nSpring Island is indeed unique and its story is the same. The historical record tells a passionate story of Lowcountry natives, industrious plantation owners, agriculturalists, quail hunting enthusiasts and most recently, land developers undertaking the risky and courageous preservation of this long-treasured property.\nSpring Island’s History Trail\nThe trail was established in 2011 to tell the story of this 3,000+ acre island. Its creation seems an appropriate way to honor those who lived here before us. 
The path connects the historic house ruins with the wooded space of the Old House Cemetery. Interpretive signs along the trail explore human and natural history and geography, celebrating South Carolina’s deep roots in a variety of cultures.\nPort Royal Sound is the deepest natural harbor in the Southeast. The Lowcountry, of which Spring Island is a part, consists of a jigsaw pattern of islands and tidal rivers fostering mild, long growing seasons.\nIn early history, European settlers and native peoples struggled to control the region. Agriculture then brought fields of indigo, rice and Sea Island cotton. In 1862 the Sound’s strategic importance to the Confederacy resulted in a successful Union blockade. Such events have left their marks on the region’s modern-day landscape and culture.\nThe Tabby Ruins Gazebo\nThe gazebo faces across the Colleton River to the western shore of Port Royal Sound and the view of the Atlantic Ocean. At the trailhead two interpretive signs help us gain a deeper understanding of our geographical position and historical juncture.\nThe timeline offers significant dates from the geological and archeological records, and history. A panoramic “locator” sketch identifies places of interest within Beaufort County and noteworthy commercial and population centers beyond the horizon that have played important roles in Spring Island’s chronology.\nEvery phase of Spring Island’s history connects to water. The Broad River estuary is an especially fertile ecosystem, conducive to oyster growth. 
The super-abundance of edible fish, the ease of intercoastal transportation and deepwater access appealed to Native Americans and to European explorers and settlers. In the 16th and 17th centuries Spring Island lay along critical shipping lanes linking Caribbean, New England, European and African trade hubs.", "score": 15.758340881307905, "rank": 81}, {"document_id": "doc-::chunk-0", "d_text": "Summerton in Clarendon County, South Carolina — The American South (South Atlantic)\nElusive Francis Marion, 1780-1781\nIn November 1780, the British sent Lt Col Tarleton to engage Marion and his Militia. General Marion looked for the British and headed towards Jacks Creek. His spy reported Tarleton at Gen. Richardson’s home. Marion’s Militia attempted to lure them into an ambush at Benbow’s Ferry on the Black River. The British gave up the pursuit at Ox Swamp, and called Marion the old fox.\nThe Swamp Fox Murals Trail Society\ndonated this mural in Summerton in 2016.\nArtist: Terry Smith, Land O’ Lakes, Florida\nwww.clarendonmurals.com, www.swampfoxtrail.com\nErected 2016 by Swamp Fox Murals Trail Society.\nLocation. 33° 36.469′ N, 80° 21.164′ W. Marker is in Summerton, South Carolina, in Clarendon County. Marker is on South Church Street (U.S. 15) near Main Street (U.S. 301), on the right when traveling north. Marker is in this post office area: Summerton SC 29148, United States of America.\nOther nearby markers. At least 8 other markers are within walking distance of this marker. Patriot Departs to Ride with Marion (about 300 feet away, measured in a direct line); Senn's Mill (about 600 feet away); Wagon Travel (about 800 feet away); Siege of Fort Watson (approx. 0.2 miles away); Summerton Presbyterian Church (approx. 0.2 miles away); The Patriot and the Redcoat (approx. 0.2 miles away); Anne Custis Burgess (approx. 0.3 miles away); \"Together Let Us Sweetly Live\" (approx. 0.4 miles away). 
Also see . . .\n1. Swamp Fox Murals Trail. (Submitted on April 11, 2016.)\n2. Swamp Fox Murals Trail Society. (Submitted on April 11, 2016.)", "score": 15.758340881307905, "rank": 82}, {"document_id": "doc-::chunk-3", "d_text": "Fish and Wildlife Service managed to preserve bison, this 1.5-mile scenic loop trail provides visitors the opportunity to observe a diverse sampling of native wildlife whether jogging or snowshoeing.\nCongressman Ralph Regula Towpath Trail – Also known as the Ohio and Erie Canalway Towpath Trail, this 25-mile multi-use trail serves as the western spine of a planned 300-mile trail system throughout Stark County and offers a variety of recreational activities along a pathway rich in State history.\nHeritage Rail Trail County Park – Traversing York County to the Maryland border, this 19-mile multi-use trail provides an integral link in a statewide trails system and epitomizes the concept of a close-to-home trail experience, but has regional, State, and national significance as well.\nSusquehanna River Water Trail – Middle and Lower Sections – Flowing from Sunbury to the Maryland border, this 103-mile segment offers paddlers an exciting array of experiences, from observing great blue herons to learning about the Underground Railroad.\nCongaree River Blue Trail – Starting near Columbia, this 50-mile water trail and greenway offers an urban adventure featuring prehistoric Native American sites, sandbars, high bluffs, and Congaree National Park, home of the largest continuous tract of old growth bottomland hardwood forest in the United States.\nHeritage Trail Loop – Serving as the backbone of the city’s trail system, this 3.1-mile rail-trail and bikeway links area residents to numerous recreational facilities, historical sites, and a local renewable energy demonstration project.\nLions Park Nature Trail – Given its artistic features, hilltop vistas, and recreational facilities, it is easy to see why this 2-mile walking trail is so popular with Temple residents of all ages.\nCanaan Valley Institute Trail System – Located near the town of Davis, this 6.5-mile privately-owned multi-use trail system offers the public a variety of hiking, mountain biking, and equestrian trails, with additional connections planned to link to neighboring State and Federal lands.", "score": 15.758340881307905, "rank": 83}, {"document_id": "doc-::chunk-0", "d_text": "Directions: Take the Natchez Trace north from Ridgeland to the West Florida Boundary parking area (2nd parking area after the Reservoir Overlook). Trailhead starts at this parking area.\nTrail Information: This trail starts at the West Florida Boundary and follows the Natchez Trace to at least the River Bend Camp Ground approximately 14-15 miles north. It covers rolling hills and crosses creeks, wetlands, and inlets from the Reservoir.\nThe trail is open only to hikers/runners and horseback riders. Unfortunately, due to the abundance of clayey soils, there are several areas that are severely pitted due to the horses. Walking is sometimes the safest option in these areas.\nMiscellaneous Information: There is water available at the Highway 43 trailhead (spigot for the horse watering trough) and there are water and restrooms at the River Bend parking area.\nThe trail is well marked with blazes. However, there are a few locations where one must leave the trees, run along the Trace to get over a few inlets to the Reservoir, then leave the Trace to get back onto the Trail. 
Pay attention to the trail signs and blazes during these times.", "score": 15.758340881307905, "rank": 84}, {"document_id": "doc-::chunk-1", "d_text": "But when I called my associates 30 miles south at Pack's Landing, a supply store and boat landing on Lake Marion where I had left my car days before, I found that nobody was prepared at that moment to come get me, and though someone certainly would come and get me, exactly when wasn't entirely certain, nor at that moment easy to predict.\nThe house Bubba had emerged from was built in 1954, and Bubba told me the story of its predecessor, built in 1760, burning on Thanksgiving morning, 1953: \"Daddy said 'Don't grab nothing, just get out,'\" he recalled. \"The whole house was built out of fat lighter\" -- pitch pine, which burns like a candle. \"We just did get out.\" Bubba stopped at St. Mark's Episcopal Church, to which he had the keys, and we explored the cemetery and the church itself, established in 1757 and currently in a building, created of local clay, built a century later.\nAnyhow, between the Richardsons and the Lenoirs and the abandoned railroad I had camped near I got to feeling I had really managed to learn a bit about Sumter County, to say nothing of the route Lawson would have trod. The Lenoir Store only stood where it did because the old Mississippian Indian trail to the Santee Indian Mound ran by there, and that trail had become the dirt road along which I had walked, and that had given rise to the railroad -- since abandoned -- and the asphalt Horatio-Hagood Road I'd finished my trek segment on.\nBubba and I cheerfully conversed until we pulled into Pack's Landing, where talk instantly turned to the kind of good natured foolery that makes places like Pack's Landing -- and the Lenoir Store, and for that matter Sumter County -- the wonderful places they are. \"What's the population of Horatio?\" needled a fellow named Duck, who would have come to pick me up had I been more patient. 
He pronounced it HO-ratio. \"Well, this morning it's about 25, since I'm down here,\" Bubba said, and off they went.", "score": 14.309362980056058, "rank": 85}, {"document_id": "doc-::chunk-0", "d_text": "This is the longest and best-known scenic byway in South Carolina. It is 118 miles long. It is named Cherokee Foothills because it runs through the foothills found at the base of the Blue Ridge Mountains, which were the ancestral home of the Cherokees.\nStarting in Cherokee County, it runs along SC 11 all the way through Spartanburg, Greenville, Pickens and Oconee Counties to the west. The majestic Blue Ridge Mountains become clearly visible to the west of the Town of Chesnee and they remain prominently present until you have driven past the Town of West Union in Oconee County. This is a good scenic alternate to I-85.\nBohicket Road - Cowpens Battlefield - Edisto Beach - Falling Waters - Fort Johnson Road - Hilton Head - Hilton Head Island - Long Point Road - Mathis Ferry Road - May River - McTeer Bridge - Old Sheldon Church - Plantersville - Riverland Drive - SC 170 - US 21 - Western York\nFor more information about South Carolina Scenic Byways, please contact SCDOT at (803) 737-1952.", "score": 13.897358463981183, "rank": 86}, {"document_id": "doc-::chunk-0", "d_text": "Our truck followed a long, gravel road through the Sumter National Forest near the border of Laurens and Newberry counties. Cutover land lined both sides of the road for more than a mile — prime habitat for cottontail rabbits — and after another half-mile, we found a safe place to park. 
The music of beagles chasing through coverts, thickets and pine plantations is as sweet as it comes.\nEric Braselton of Greenville has embraced running his 13-inch beagles after Upstate cottontails.\n“Running my dogs is about as fun as it gets in the outdoors” Braselton said. “I love everything about this sport, from breeding the dogs, to training them as puppies to, of course, hunting the rabbits.\n“Sure, we want to kill the rabbit; it’s a reward to the dogs, and to us. But the race is really what it’s all about.”\nBraselton is a young hunter who embraces tradition, choosing to use a side-by-side, 12-gauge as his shotgun of choice and carrying a family heirloom bullhorn to call his dogs. But don’t be fooled by his age; his knowledge of beagles and how they interact with rabbits translates into a lot of success for those who have the opportunity to tag along.\nSouth Carolina is blessed with a lot of good rabbit hunting ground. Public land is abundant throughout the piedmont and Upstate. Key habitat for rabbits includes planted pine plantations, recent cutovers and those that are between two and 10 years old. As the pines grow taller, they shade out the undercover and the rabbits — needing plenty of cover to hide from everything that’s trying to eat them — leave.\nCottontails have a hard life. They have been referred to as “the shrimp of the land.” From four-legged predators to avian predators, rabbits are on everyone’s menu. To handle this, they are prolific breeders and are able under most circumstances able to handle a lot of predation.\nBraselton prefers areas featuring young pine plantations.", "score": 13.897358463981183, "rank": 87}, {"document_id": "doc-::chunk-2", "d_text": "Roadwalk from Three Lakes to Green Swamp, Green Swamp East & West (includes Bigfoot BSA camp and other loops), Richloam, Croom, Citrus (including loops), Withlacoochee State Trail, Cross Florida Greenway, and Ocala West. 
240 miles.\nOCALA NORTHEAST: Includes the Ocala and Northeast Florida sections, from Clearwater Lake north to Deep Creek trailhead at the north end of Osceola National Forest. 185 miles.\nSUWANNEE BIG BEND: Includes the Suwannee River section, private timberlands, Aucilla, and St. Marks NWR. 190 miles.\nEASTERN PANHANDLE: Includes the Apalachicola National Forest, Altha Trail, Econfina, Pine Log, Nokuse, and connecting roadwalks. 178 miles.\nWESTERN PANHANDLE: Includes Eglin Air Force Base, Yellow River Ravines, and Seashore. 131 miles.\nBLACKWATER: The official connector trail of the Florida Trail to the Alabama border, part of the larger Eastern Continental Trail. 45 miles.", "score": 13.897358463981183, "rank": 88}, {"document_id": "doc-::chunk-0", "d_text": "Coldwater River Nature Trail System\nNorth Outlet Trails\nThe North Outlet Channel Recreation Area provides access to the Coldwater River Nature Trail System. This network of trails encompasses two hiking trails (one trail is 3 miles in length and the other is 5 miles in length) and the Big Oak Nature Trail, which is a self-guided interpretive trail. The area includes pristine bottomland hardwood and pine forests where an abundance of wildlife and native plant species can be found throughout. A self-guided interpretive booklet is available at the trailhead located on the north side of the Outlet Channel, or the booklet is available for download by selecting the link below.\nSouth Outlet Trails – Swinging Bridge Nature Trail\nThe South Outlet Channel Recreation Area offers a unique trail experience. The Swinging Bridge Nature Trail is a self-guided trail that includes an old section of the Coldwater River prior to the construction of the dam. The Coldwater River was once flowing through this very trail area! Please enjoy a walk through this historic trail and learn about the history of the Arkabutla community or how the Corps of Engineers redirected the Coldwater River. 
Maybe you would like to learn how to identify native plants and wildlife, or just enjoy the tranquility of the trail’s azalea garden. Information panels are placed throughout the trail to help guide you along your walk.", "score": 13.897358463981183, "rank": 89}, {"document_id": "doc-::chunk-0", "d_text": "Smoke from the fires was reported as far away as Atlanta and Orlando. The landscape is fluid and vividly detailed, with a dense variety of often caricatured flora and fauna. These tantalizing tidbits are seemingly all that remain of the ride's original concept.\nShe speaks with a heavy burlesque French dialect and tends to be overdramatic. Choo Choo Curtis a. Once a permit is obtained, the trip will be advertised so that others may join. The interior mechanism inside one of the vegetables became stuck, causing its interior coil to become hotter and hotter.\nA natural-born mail carrier duck. The swamp's version of Eeyore, Porkypine is grumpy by nature, and sometimes speaks of his \"annual suicide attempt\".\nAquatic species include otters, minks, and beavers. All dogs must have rabies certificates and health records. Perhaps the least sensible of the major players, Churchy is superstitious to a fault, for example, panicking when he discovers that Friday the 13th falls on a Wednesday that month.\nA bear; a flamboyant impresario and traveling circus operator named after P.\nPig frogs and river frogs are similar in appearance to bullfrogs, which are absent from the swamp. This will help your eyes, and those of your observers, adjust to the darkness.\nSeveral species of large pitcher plants as well as smaller sundews and butterworts, which capture insects with a gluelike surface film on their leaves, are scattered throughout the swamp.\nRabbit has been rescued by the friendly owls, who are circling overhead carrying a white bed sheet that, from below, might give the impression of being a ghost. 
They wear identical black derby hats and perpetual 5 o'clock shadows. The Monster Plantation had to be completed in record time in order to open with the beginning of Six Flags' season in the spring, but the workmen made the deadline.\nAgain his face was covered, this time by his speech balloons as he stood on a soapbox shouting to general uninterest. In fact, the Kroffts' version of Tales of the Okefenokee seems to have been patterned quite heavily after Walt Disney's treatment of the Uncle Remus stories as created for the movie Song of the South and its related storybooks.", "score": 11.976056062528453, "rank": 90}, {"document_id": "doc-::chunk-0", "d_text": "Do you remember the Swamp Fox? If you are around my age and watched The Wonderful World of Disney as a kid, you might. They made an 8-part miniseries about Francis Marion, AKA The Swamp Fox, starring Leslie Nielsen. Marion was a prominent figure in the Revolutionary War here in the USA. He was one of the first to use guerilla fighting against the British. And so, there is a National Forest named for him in South Carolina.\nI’ve been staying here for a couple of days, soaking up the forest peacefulness and exploring the area. I camped at one of the primitive campgrounds called Half Creek. It’s a lovely site in the woods and there was only one other car here so I enjoyed the quiet.\nThe Palmetto Trail goes through the forest, stretching across the entire state from the ocean to the mountains. The section through here is called the Swamp Fox Trail. It begins at Buck Hall Recreation Area, which is on the Intracoastal Waterway that runs along the east coast from Massachusetts to Florida. Apparently, it was the first interstate “highway”, used by Natives and explorers for easy transportation.\nJust north of my camp is the Hampton Plantation, which has been made an historic site. 
There is a walking trail around the property where you can see the locations of the rice fields, and an archeological dig of one of the slave houses. You can also tour the inside of the main house.\nI was planning on going up the coast, but with the weatherman saying there’s going to be another winter storm in New England, I think I’ll go south instead.", "score": 11.600539066098397, "rank": 91}, {"document_id": "doc-::chunk-0", "d_text": "- Well, one would feel justified in thinking that when you check in and get your site, the employee would THINK to mention: Bathrooms/showers completely closed, so you have to walk a half mile from that site. No. Driving, cold rain and high winds for days. No soap in the bathroom that is open. Over one year ago this same bathroom was closed for renovations, and still it is closed. Welcome to South Carolina, folks.\nOutstanding! 5 Stars! - Whoever is taking care of this place needs a raise! I walked through geocaching and for the first time in a very long time I saw absolutely no trash at all! Anywhere!\nAnd nothing was broken or in disrepair.\nAwesome job he/she/they are doing.\nMajor history of South Carolina - My dad used to take us to this park several times a year. It is a location rich in southern and American history. Well worth a visit. Take along a picnic lunch and good walking shoes.\nFavorite National Wildlife Refuge - I never tire of visiting one of the most unique places in eastern S.C. Whether you like to hike, fish, hunt, observe wildlife and wildflowers, plants, etc., SHWR is a wonderful place to visit. I always treasure my visits because I truly feel like I step back in time whenever I enter the refuge.\nPoinsett St Park (equestrian trails) - The equestrian trails have always been a favorite of ours. We were told disturbing news tonight that the trails are no longer to be used by horses. If not horses what??? The message was that the campground is closed and horses are not welcomed. 
Is this true or did the Ranger relaying this info make a mistake? Mistake, I hope.\ngreat trails but no swimming - After searching online, which stated there was swimming, I found that there was no swimming in the lake. Highly disappointed, but I found out there was a creek to go to, so I tried there; not great for swimming either, just wading, but the scenery was great.\n- Great park to make a day of it. We brought a picnic lunch to eat before hitting the trails. There are educational signs along the way that tell about the history of the canal. Bring your cameras and comfortable walking shoes. Really enjoyed it!", "score": 11.600539066098397, "rank": 92}, {"document_id": "doc-::chunk-2", "d_text": "Summerset Trail – Stretching almost 12 miles through rolling hills, river bottom wetlands, and remnant prairies, this rail-trail allows for hiking, biking, or cross-country skiing through some of the best of central Iowa’s natural scenery.\nMusketawa Trail – Providing a handicapped-accessible connection between Marne and Muskegon, Michigan, this 24.7-mile rail-trail and greenway allows a variety of trail users to enjoy a range of landscapes while biking, snowmobiling, horseback riding, or simply taking a stroll.\nFunk Peterson Wildlife Trail – Situated in Funk Waterfowl Production Area, this 3-mile backcountry loop trail is a bird watcher’s paradise, providing habitat for millions of birds, including endangered whooping cranes and least terns that migrate biannually through the area.\nCanyon Trail – Located in Bosque del Apache National Wildlife Refuge, this 2.2-mile interpretive trail offers school groups and visitors year-round the ability to study tracks in the shifting sands, evidence of kangaroo rats, box turtles, and a host of other wildlife that call the refuge home.\nChupadera Wilderness Trail – Traversing the Chupadera Wilderness Area of the Bosque del Apache National Wildlife Refuge, this 9.5-mile backcountry trail is rich in wildlife and wildflowers, and takes hikers
through a range of landscapes culminating in a 360-degree view of several mountain ranges.\nDismal Swamp Canal Trail – Recognized as part of the East Coast Greenway, this 4.5-mile multi-use trail features a variety of historic sites, abundant wildlife, and opportunities for biking, fishing, canoeing, and more.\nLittle Tennessee River Greenway – This 4.5-mile hiking and biking trail parallels the Little Tennessee River and Cartoogechaye Creek and features three different bridges and a variety of recreational facilities for visitors of all ages.\nArrowwood National Wildlife Refuge Leg of the Historic Fort Totten Trail – This 9-mile backcountry trail is undergoing improvements to provide enhanced wildlife-dependent recreational opportunities and allows for a variety of uses, including hiking, mountain biking, and horseback riding.\nScout’s Trail – Situated within Fort Abraham Lincoln State Park, this 4.6-mile multi-use trail offers environmental education and interpretive opportunities on Native American culture amid scenic vistas and native prairie.\nSullys Hill Nature Trail – Located in one of only four units of the U.S.", "score": 11.600539066098397, "rank": 93}, {"document_id": "doc-::chunk-0", "d_text": "Chattooga River - Burrells Ford Bridge & Chattooga River Trail - Sumter NF, SC\nPosted by: crackergals\nN 34° 58.468 W 083° 06.968\n17S E 306824 N 3872256\nQuick Description: Burrells Ford Bridge marks the border between Sumter National Forest in South Carolina and Chattahoochee National Forest in Georgia.\nLocation: South Carolina, United States\nDate Posted: 4/16/2009 6:20:14 PM\nWaymark Code: WM67AE\nThe coordinates are for the S. Carolina side of the bridge where you can also find the Chattooga River Trail Head.
It is currently illegal to float this section of the River, upstream of Hwy 28.\nThe designated section of the Chattooga River flows through three states and the Ellicott Rock Wilderness for a total of 56.9 miles.\nThe Chattooga River Trail is a forty-mile route that borders the states of North Carolina, South Carolina, and Georgia and follows the banks of the Chattooga National Wild and Scenic River from Burrell's Ford to US 76.", "score": 11.600539066098397, "rank": 94}, {"document_id": "doc-::chunk-1", "d_text": "You can see a wide variety of different wildlife using this mode of travel. Raccoons fishing a crayfish from the creek side (yes, they fish too), or a variety of waterfowl and other birds. Squirrels playfully chasing each other around a tree or fearlessly jumping from limb to limb high up in the tree tops. I have even been so lucky as to manage to drift up on two beautiful bucks locked in combat. We stopped on the creek bank and quietly watched them settle their dispute; it was quite surreal. The other thing that has to be watched out for is the reptilian life. Alligators and snakes are quite plentiful, as one would imagine, but they generally leave you alone as long as you give them room. I never had any problems with any of the wild creatures sharing this magnificent wetlands area.\nThe swamp isn’t the only waterway in the area of Islandton. The Big and Little Salkehatchie Rivers join together to form the Combahee River. This river eventually ends up in Beaufort, SC and dumps into the ocean. In its many miles of travel, it switches from fresh to brackish to, as you can guess, salt water. During this time, the variety and types of fish and wildlife change. One of the areas that I always frequented is called Sugar Hill Landing. It is located in Yemassee, SC. The varieties of fish remain pretty consistent to this point with a few additions.
In this area, we start to see Shell Crackers and just a little further towards the coast we pick up Spot Tail Bass and the Occasional Striper. I can remember many times almost losing my rod and reel, and a couple of times actually having them yanked right out of the boat while fishing for Spot Tail Bass. They are a wonderful fish to hook into. Great fighters and also another great tasting variety of fish. One of the things I enjoy about this area is the Bald Eagles. One of my favorite fishing holes is right behind a small island. The trees on this particular island have always housed at least one Eagles nest. While sitting in this area fishing I would usually have the pleasure of watching an Eagle fish. Now you haven’t seen fishing until you see this.", "score": 8.413106992933548, "rank": 95}, {"document_id": "doc-::chunk-2", "d_text": "straight across the road, where it continues to go straight.\nIt’s a mere 1.2 miles to the next crossing at Pinyard Road. This might have been the coolest stretch that we hiked on this trail.\nPinyard Road is the final road crossing for this one! It’s just about a mile exactly to where we parked the cars at the baseball fields in Elmer!\nFurther reading and pictures – As with all awesome places in the great state of South Jersey, Yummygal’s South Jersey History and Adventures beat us here. Well worth checking out her pictures and write up here.\nLarge tracts of beautiful swampland. 
One short section is designated as a nature trail, and has educational signs.\nDirt bikers using the trail.", "score": 8.086131989696522, "rank": 96}, {"document_id": "doc-::chunk-2", "d_text": "Simon Island on Georgia’s southeast coast, is a relaxing ride that provides scenery of...\nThe Georgia Coast Rail-Trail will eventually stretch 68 miles from Kingsland north to Riceboro, a lush corridor of longleaf pine forest, marsh and saw...\nThe Amelia Island Trail, on Florida's northeastern coast, runs from Peters Point Beachfront Park to Amelia Island State Park in the city of Fernandina...\nAlong the northeast coast of Jacksonville, sections of the developing Timucuan Trail have been built in Big Talbot Island State Park and Little Talbot...\nThe S-Line Urban Greenway is a rail-trail that runs just over three miles.\nJacksonville's Northbank Riverwalk offers scenic views of the St. John's River and the city skyline. It's also part of a larger effort called the...\nJacksonville is developing an interconnected 14-mile trail system called the Emerald Necklace. Portions of the route are already on the ground, like...\nJust west of bustling downtown Jacksonville, the Jacksonville-Baldwin Rail-Trail, one of north Florida's oldest, traverses a rural setting of hardwood...\nThe Black Creek Trail parallels U.S. Highway 17, from Orange Park south to Black Creek Park near Lakeside, FL, just south of Jacksonville. Passing...\nJ.F. 
Gregory Park is a 335-acre multi-use recreational area that once encompassed a thriving rice plantation and was subsequently bought up by Henry...\nConstructed in the 1820s and listed on the National Register of Historic Places, the Savannah-Ogeechee Canal was once an important transportation...\nBuilt on a stretch of the Savannah & Atlantic Railroad line, the 6-mile McQueen's Island Trail offers a salt-air excursion for nature lovers and...", "score": 8.086131989696522, "rank": 97}, {"document_id": "doc-::chunk-0", "d_text": "Elephant Swamp Trail – Elmer to Elk Township, Gloucester and Salem Counties, NJ\nDistance – 6 miles\nType – One way (12 miles out-and-back)\nDifficulty: 1 of 10\nTotal score: 8 of 10\nWebsite – Elk Township Website\nOpen – Sunrise to Sunset.\nTerrain – Large open fields and farmland, large sections of swamp.\nFrank Stewart Memorial Park, Elk Township – 39°40’1.82″N, 75° 8’26.58″W\nRt 538 and Railroad Ave, Elk Township – 39°39’9.06″N, 75° 8’55.45″W\nBaseball Fields in Elmer, NJ – 39°35’55.22″N, 75°10’4.66″W\nBeginning of trail – Located at the very back of the Stewart Park on Recreation Drive (Just off of Whig Lane) in Elk Township, NJ\nEnd of trail – located at the back left corner of the baseball fields off of Harding Highway (Rt 40) (near the intersection with Main Street) in Elmer\nParking – Parking located at Frank Stewart Memorial Park in Elk (Recreation Ave) and the baseball fields off Harding Highway in Elmer\nMarkings – None, but impossible to get off of once you start because of its location atop an old railroad bed.\nTrail brochure for nature trail section of trail – Elk Township Website\n“Legend has it that in the 1800’s, while the circus was traveling by train through Elk Township along the rail line from Monroeville, an elephant got loose in the swamp, and was never seen again.” ~ Kevin Callahan, Courier-Post\nWhen this
prestigious blog began oh so many years ago (like, 2 1/2 years ago), one of the first suggestions that I received was to hike the Elephant Swamp Trail. I knew this was a good suggestion, because the fella that suggested it (jbracciante) had a blog solely dedicated to pizza in Glassboro.\nWhen I heard the story about the escaped circus elephant in the swamp, I knew I just had to go. Then I found out it was six miles long in a straight line (12 miles out-and-back). This hike obviously required some planning and forethought, something I’m pretty terrible at.", "score": 8.086131989696522, "rank": 98}, {"document_id": "doc-::chunk-1", "d_text": "But it was also the home of Francis Marion, a hero of the Revolution who earned the nickname “the Swamp Fox” for assembling a rag-tag band of local fighters and launching successful guerrilla raids on British forces. (His legend was the basis for the less-than-historically-accurate film The Patriot, which was, appropriately, filmed largely in South Carolina.) Thomas Sumter, another famous guerrilla fighter, also achieved acclaim for his attacks on the Redcoats, and was eventually dubbed the “Carolina Gamecock” because of his ferocity in battle — a term that lives on as the mascot of the University of South Carolina to this day.\n2) Electing the first African American to the U.S. House of Representatives.\nSouth Carolina has a long, sordid history of race relations, but nestled within this lengthy canon of oppression is a series of less-discussed triumphs for people of color.\nIn December of 1870, during the early years of Reconstruction, Joseph Rainey of Charleston, South Carolina, won a special election to become the first African American House member in U.S. history, representing South Carolina’s 1st District.
He served four terms in that position, during which time he supported the Civil Rights Act of 1875, even as African Americans in his state were repeatedly attacked by the Ku Klux Klan and roving militant gangs known as the Red Shirts. Although he was forced to relocate his family to Connecticut to keep them safe, he maintained his residency in Charleston, defying the racists with his continued presence and support for equality. In addition, Rainey, who was born into slavery, became the first African American to preside over the House of Representatives as Speaker pro tempore in May 1874.\nSeveral South Carolina African Americans have been elected to national office since that time, including James Clyburn (D), who served as the House Majority Whip from 2007 to 2011. And while Republican policy positions are rightly challenged by those on the Left, one of South Carolina’s U.S. Senators and its current governor are both people of color: Tim Scott (R) is the first African-American senator from the state of South Carolina (and the first elected in the South since Reconstruction), and Governor Nikki Haley (R) is both the first female governor in the state’s history and one of two Indian-American governors in the country.\n3) Hootie and the Blowfish/Darius Rucker (yes, seriously).", "score": 8.086131989696522, "rank": 99}]}